Commit Graph

314 Commits

Author SHA1 Message Date
Sam Gross
57549b7e44
Bind functions with out= arguments in VariableType (#4565)
This adds overrides in VariableType for the xxx_out ATen functions and
implements Python bindings. There is no support for automatic
differentiation. If any of the inputs (or outputs) requires grad, then the
function will throw an exception unless it's running in "no-grad" mode.

The bindings for calling torch.xxx functions on Variables are moved to a
different object. Previously, they were static methods on VariableBase.
This change prevents users from accidentally calling static methods as if
they were instance methods.
2018-01-17 18:27:42 -05:00
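A minimal sketch of the behavior this describes, using today's torch.add (the exact error text is version-dependent):

    import torch

    a = torch.randn(3, requires_grad=True)
    b = torch.randn(3)
    out = torch.empty(3)

    # out= overloads have no derivative, so a differentiable input
    # triggers an error instead of silently breaking the graph.
    try:
        torch.add(a, b, out=out)
    except RuntimeError as err:
        print("raised:", err)

    # Under no-grad mode no graph is recorded, so out= is permitted.
    with torch.no_grad():
        torch.add(a, b, out=out)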
Sam Gross
f8a4b1a266
Split off load_derivatives and gen_autograd_functions from gen_variable_type (#4370) 2017-12-27 18:59:41 -05:00
Tongzhou Wang
d8b2e5d091 Add Python-only default init expression; implement stft and hann/hamming/bartlett windows (#4095)
* implement stft

* addressed comments; implemented window functions; added support for Python-only default initialization
2017-12-18 12:28:23 -05:00
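A short usage sketch of the window functions and stft added here, written against present-day signatures (they have shifted since this commit; return_complex, for instance, arrived later):

    import torch

    n_fft = 64
    window = torch.hann_window(n_fft)   # hamming_window / bartlett_window work alike
    signal = torch.randn(1024)

    # Short-time Fourier transform of the signal with a Hann window.
    spec = torch.stft(signal, n_fft=n_fft, hop_length=16,
                      window=window, return_complex=True)
    print(spec.shape)  # (n_fft // 2 + 1, num_frames)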
Sam Gross
c813ce3787 Implement Variable._sparse_mask (#4124)
* Implement Variable._sparse_mask

* Use SparseTensor as the dynamic_type
2017-12-15 17:25:20 -05:00
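A hedged sketch of what the method does, via its public descendant Tensor.sparse_mask (the leading underscore was later dropped):

    import torch

    dense = torch.arange(9.0).reshape(3, 3)
    mask = torch.eye(3).to_sparse()

    # Read values from the dense tensor at the mask's sparse indices,
    # yielding a sparse tensor with the mask's sparsity pattern.
    masked = dense.sparse_mask(mask)
    print(masked.to_dense())  # diagonal of `dense`, zeros elsewhere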
Sam Gross
aeb7a3668d
Implement Variable.new (#4080) 2017-12-11 15:45:43 -05:00
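A sketch of the new() idiom this commit extends to Variables; it is legacy API now, with new_empty/new_zeros/new_tensor as the modern spellings:

    import torch

    x = torch.randn(2, 3, dtype=torch.float64)

    # Legacy: an uninitialized tensor with x's dtype and device.
    y = x.new(4, 5)

    # Modern equivalent with explicit semantics.
    z = x.new_zeros(4, 5)
    assert y.dtype == z.dtype == torch.float64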
Tongzhou Wang
c681b03d37 Add determinant function on variable; Add backward on svd (#3816)
* determinant on variable

* svd bwd
2017-12-01 13:22:46 -05:00
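A quick sketch of differentiating through det and svd as enabled by this commit (torch.svd has since been superseded by torch.linalg.svd):

    import torch

    A = torch.randn(3, 3, requires_grad=True)

    # Backward through the determinant: d(det A)/dA = det(A) * inv(A).T
    torch.det(A).backward()
    expected = (torch.det(A) * torch.inverse(A).t()).detach()
    print(torch.allclose(A.grad, expected))

    # Backward through SVD, here via the singular values only.
    A.grad = None
    U, S, V = torch.svd(A)
    S.sum().backward()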
Adam Paszke
65e0d5bad8 Fix void* wrapping in autograd codegen
Also, add assertions here and there to make sure bad things
never happen again.
2017-11-24 13:33:13 +01:00
Sam Gross
9cb8b43778
Split off in-place NN functions (#3683)
For example, this splits threshold into threshold(), which is now
never in-place, and threshold_(), which is always in-place.

This simplifies the in-place vs. non-in-place logic in
gen_variable_type.py, which was bug-prone.
2017-11-14 12:59:06 -05:00
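A hedged illustration of the split using the current functional API (the commit itself operates at the codegen level, but the naming convention is the same):

    import torch
    import torch.nn.functional as F

    x = torch.randn(5)

    # Out-of-place: returns a new tensor and leaves x untouched.
    y = F.threshold(x, 0.0, 0.0)

    # In-place: the trailing underscore mutates x directly.
    F.threshold_(x, 0.0, 0.0)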
Zach DeVito
5aa5b572e4 Update build so that all of TH* is in libATen 2017-11-02 19:53:36 -04:00
Edward Z. Yang
53fe804322 Make ONNX work with new C++ autograd world.
The general strategy is that there is a new module, torch.onnx.symbolic, which
contains one function per ATen method name, implementing that method's ONNX
translation.
While implementing this, I took the opportunity to expunge all references
of 'g' from the public API; instead, it is managed by a global variable in
torch.onnx which tracks the "current graph".

Other changes:

- If you pass a Tensor to op as an argument, it is now automatically
  converted into a Constant ONNX node.  This removes the need to
  implement ONNX

- Rename value to other wherever there are both Scalar and Tensor overloads.
  This way, keyword dispatch works uniformly in both cases.

- Deleted any autograd Function classes that both had a symbolic method and
  were ported to the new C++ autograd implementation.  There may still be
  some straggling classes that lacked a symbolic method.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-20 15:38:01 -04:00
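A hypothetical sketch of the per-op translation pattern described above. The names here (op and the translation function) are illustrative stand-ins, not the exact historical torch.onnx.symbolic API:

    # Stand-in for the graph-recording helper; per the commit message the
    # current graph lives in a global tracked by torch.onnx, not in a `g`
    # argument that callers must thread through.
    def op(kind, *inputs):
        print(f"emit ONNX node: {kind}({', '.join(inputs)})")
        return f"%{kind.lower()}_out"

    # One translation function per ATen method name, as in torch.onnx.symbolic.
    def add(self, other):
        return op("Add", self, other)

    print(add("%x", "%y"))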
Sam Gross
f1f64c8d07 Generate autograd functions for NN / more refactors (#3136)
Generate autograd functions for NN and implement more derivatives in derivatives.yaml

A big refactor of gen_variable_type.py
2017-10-19 15:03:26 -04:00
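The user-visible effect, sketched with a present-day NN function whose derivative is declared in derivatives.yaml (the yaml entry in the comment is illustrative, not verbatim):

    import torch
    import torch.nn.functional as F

    # derivatives.yaml pairs an ATen signature with gradient formulas,
    # roughly:  - name: elu(Tensor self, Scalar alpha, ...)
    #             self: elu_backward(...)
    x = torch.randn(4, requires_grad=True)
    F.elu(x).sum().backward()
    print(x.grad)  # backed by a generated autograd function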
Sam Gross
f29bcab67e Use Declarations.yaml to generate Python bindings 2017-10-07 00:41:29 -04:00
Sam Gross
558d26a69e Fix argument indices 2017-10-07 00:41:29 -04:00
Sam Gross
dcb8d0f088 Refactor out Python binding generation from gen_variable_type.py
- Also includes some prep work for binding NN functions
2017-10-07 00:41:29 -04:00