Commit Graph

173 Commits

Author SHA1 Message Date
Sam Gross
a8bdce38fe
Replace PowConstant (#4711) 2018-01-17 17:30:56 -05:00
Sam Gross
720c7b1e2c
Move repeat to torch/_utils.py (#4712)
This moves the implementation of repeat to _utils so that the autograd
function can call it directly instead of relying on forward being called
on tensors.

This also removes _range, which was previously necessary because we
shadowed the built-in range() function.
2018-01-17 17:30:43 -05:00
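The behavior of repeat itself is unchanged by the move; a minimal usage sketch, with shapes chosen purely for illustration:

    import torch

    x = torch.arange(3).view(1, 3)
    # repeat tiles the tensor along each dimension: 2 copies along dim 0,
    # 3 copies along dim 1, giving a (2, 9) result.
    y = x.repeat(2, 3)
    print(y.shape)   # torch.Size([2, 9])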
gchanan
e426020c87
Move prod, cumprod backwards to C++ (#4394)
* Add view_as as a native_function.

* Move prod, cumprod backwards to C++.

* Update for review requests.

* Review comments.

* Reorder slice parameters so dim is first.

* Update test_slice.

* Update test_autograd.

* Fix flake8.
2018-01-03 16:27:50 -05:00
Sam Gross
bec0349280 Implement Variable.cuda and Variable.type using ATen (#4139)
* Implement Variable.cuda using ATen

This adds an optional async flag to Tensor::copy_, which attempts to do
a non-blocking copy if one of the tensors is in pinned memory and the
other is a CUDA tensor.

* Perform cross-device copy in CopyBackwards

Also call torch.cuda._lazy_init() from Variable.cuda()

* Implement Variable.type via ATen

* Changes from review:

 - remove copy_out
 - remove unnecessary include
 - fix default device for .cuda()

* Combine if statements in dispatch_type
2017-12-18 01:54:35 -05:00
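A rough sketch of how the optional non-blocking copy described above is requested from Python; the keyword below is the later spelling of the flag rather than the one used at the time of this commit:

    import torch

    if torch.cuda.is_available():
        # Pinned (page-locked) host memory is what makes a non-blocking copy possible.
        x = torch.randn(64, 128).pin_memory()
        # Ask for an asynchronous host-to-device copy; it silently falls back to a
        # synchronous copy when the source is not pinned. (Spelled async=True in
        # the PyTorch of this era, later renamed to non_blocking.)
        y = x.cuda(non_blocking=True)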
Dmytro Dzhulgakov
709fcfda8a Now actually fix padding (the tests are added in onnx-pytorch) (#3893)
* Now actually fix padding (the tests are added in onnx-pytorch)

* fix test
2017-11-27 23:39:48 -05:00
Sam Gross
ed640010ce
Delete unused autograd functions (#3856) 2017-11-24 14:31:11 -05:00
Sam Gross
4518793aa2
Implement indexing in ATen (#3725)
Implements basic and advanced indexing using ATen tensors/variables.
Basic indexing is translated at the Python-binding level
(python_variable_indexing.cpp) to slice/squeeze/unsqueeze/select calls.
Advanced indexing is implemented in ATen in terms of take() and put()
calls.
2017-11-21 13:19:00 -05:00
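A small sketch of the two indexing paths described above, with illustrative values:

    import torch

    x = torch.arange(12).view(3, 4)

    # Basic indexing: translated at the Python-binding level into
    # slice/select/unsqueeze calls, so the result is a view of x.
    basic = x[1:, ::2]

    # Advanced indexing: integer index tensors, implemented in ATen via
    # take()/put()-style gather/scatter, so the result is a copy.
    idx = torch.LongTensor([0, 2])
    advanced = x[idx, idx]   # picks out x[0, 0] and x[2, 2]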
Adam Paszke
cf407213f9 Clean up stochastic function related dead code (#3782) 2017-11-20 12:44:45 -05:00
Fritz Obermeyer
1f64c2ef91 Rename pyro.distributions.Multinomial -> .Categorical (#3766)
* Rename distributions.Multinomial -> distributions.Categorical

* Rename Multinomial -> Categorical

* Update docs

* Update variable.py

* Update distributions.py

* Update variable.py
2017-11-18 16:10:07 -05:00
Sam Gross
2453bc2876
Implement clamp using ATen (#3739) 2017-11-17 13:12:36 -05:00
Sam Gross
fde355f7d4
Allow in-place operations on views (#3384)
Allow in-place operations on views

Adds VariableViewImpl, a subclass of VariableImpl which has a pointer to
the base Variable on which it is a view. In-place operations on views
change the grad_fn of the base.

Note that in-place operations only work on views that are the first output of the function that created them. All C++/ATen implemented functions have this behavior, but it's possible to write Python-implemented autograd functions that do not. In-place operations on these views will raise an exception.

Fixes #3313
2017-11-06 18:19:56 -05:00
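A hedged sketch of the behavior described above, showing an in-place op on a view being reflected in gradients taken through the base (the Variable wrapper matches the API of this era):

    import torch
    from torch.autograd import Variable

    x = Variable(torch.ones(2, 3), requires_grad=True)
    base = x * 1          # non-leaf, so in-place modification is allowed
    view = base[0]        # a view of base (first output of the select/slice op)

    # The in-place op on the view rewrites base's grad_fn, so the change is
    # visible to autograd when we backprop through base.
    view.mul_(2)

    base.sum().backward()
    print(x.grad)         # row 0 receives gradient 2, row 1 receives gradient 1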
Sam Gross
48fe5d4622
Move select and permute to ATen/C++ (#3421)
Move select and permute to ATen/C++
2017-11-02 15:17:36 -04:00
Edward Z. Yang
8fbe003d4e Miscellaneous ONNX fixes and behavior changes.
- Deleted Addmm/Concat Function class, as this is now native ATen operator

- Resurrected ONNX operator for Concat (now called 'cat')

- Add a "fake" Expand ONNX operator, which we now do the optimization on;
  this helps prevent us from emitting a warning that 'expand' is not supported.
  We still fail if any of these Expand operators make it to the final model,
  until we actually formalize Expand in ONNX.  This also simplifies the
  fuseBroadcast code, because single-return ONNX nodes don't get select nodes.

- New error reporting strategy.  If we fail to export an operator for any
  reason, we emit a warning, but otherwise keep going.  At the very end,
  in export.cpp, we now check if there are any ATen operators left over.  If
  there are, we bug out.  This assumes that ATen is lower case and ONNX is upper
  case.  You're now supposed to 'return _unimplemented(msg)' in these cases.

- New toString() method on Graph, for getting the string graph (useful for
  slapping it into error messages).

- Some of the legacy symbolics (still in the Python symbolic method of a Function
  subclass) have been cleaned up for clarity.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-29 23:50:34 -04:00
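The warn-but-keep-going strategy can be sketched roughly as below; only the name _unimplemented comes from the commit message, the helper body and the example symbolic are illustrative:

    import warnings

    def _unimplemented(msg):
        # Sketch: emit a warning and return None so the original ATen node is left
        # in the graph; export.cpp then fails only if such nodes survive to the
        # final exported model.
        warnings.warn("ONNX export failed: " + msg)
        return None

    def some_symbolic(g, input, mode):
        # Hypothetical symbolic: bail out gracefully on an unsupported mode instead
        # of raising, so export of the rest of the graph can continue.
        if mode != "constant":
            return _unimplemented("some_op: unsupported mode " + mode)
        return g.op("Identity", input)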
bddppq
ac8f56656d Adapt ONNX Slice op changes (#3316) 2017-10-28 00:03:29 -04:00
James Reed
869bdeb936 Symbolic implementation of Index supporting tuple of slices. (#3294) 2017-10-27 02:39:38 +05:30
Edward Z. Yang
9989bb1a43 Export index constants as long, not int (onnx-caffe2 needs it.) (#3274)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-25 09:50:33 +02:00
Gregory Chanan
4b1e85d266 Remove split/chunk python autograd. 2017-10-24 19:33:37 -04:00
Lu Fang
5691b0b8d2 Fix the Slice changes in ONNX (#3216) 2017-10-24 14:12:54 -04:00
Edward Z. Yang
53fe804322 Make ONNX work with new C++ autograd world.
The general strategy is there is a new module, torch.onnx.symbolic, which
contains a function for every ATen method name with the ONNX translation.
While implementing this, I took the opportunity to expunge all references
of 'g' from the public API; instead, it is managed by a global variable in
torch.onnx which tracks the "current graph".

Other changes:

- If you pass a Tensor to op as an argument, it will now automatically be
  converted into a Constant ONNX node.  This lets us remove needing to
  implement ONNX

- Rename value to other, wherever there is both a Scalar and Tensor overload.
  This way, keyword dispatch can work uniformly in both cases.

- Deleted any autograd Function classes that both had a symbolic and were ported
  to the new C++ autograd implementation.  There may still be some straggling
  classes that didn't have a symbolic.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-20 15:38:01 -04:00
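A hedged sketch of the shape of torch.onnx.symbolic entries under the scheme described above: one function per ATen method name, building ONNX nodes on the globally tracked "current graph". The _emit helper below is a stand-in stub, not the real internal API:

    def _emit(kind, *args, **attrs):
        # Stand-in for the helper that appends a node of the given kind to the
        # current graph and returns its output value.
        raise NotImplementedError

    def tanh(input):
        return _emit("Tanh", input)

    def add(self, other):
        # Named `other` rather than `value`, per the overload renaming noted above.
        return _emit("Add", self, other)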
Sam Gross
d9b89a352c Replace StochasticFunctions v2 (#3165)
This removes the StochasticFunctions for bernoulli, multinomial, and
normal and replaces them with classes in the torch.distributions
package. Each distribution supports the differentiable log_prob function
that returns the log of the pdf/pmf of the samples.

The current StochasticFunction implementation has a few problems: it can
be painful to use when there are multiple stochastic outputs which need
to be back-propagated through. It also requires that we store grad_fns
on Variables that have requires_grad=False in order to find stochastic
nodes.
2017-10-19 15:05:07 -04:00
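A minimal sketch of the log_prob-based pattern that replaces StochasticFunction, using Categorical (which at the time of this commit was still called Multinomial); the reward value is illustrative:

    import torch
    from torch.autograd import Variable
    from torch.distributions import Categorical

    probs = Variable(torch.Tensor([0.25, 0.25, 0.5]), requires_grad=True)
    dist = Categorical(probs)

    action = dist.sample()                    # sampling itself is not differentiated
    reward = 1.0                              # illustrative scalar reward
    loss = -dist.log_prob(action) * reward    # differentiable surrogate loss

    loss.backward()                           # gradient flows through log_prob
    print(probs.grad)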
SsnL
fce3ed19e5 Change device_id to device in python land (#3133)
* change device_id to device in python land

* cuda/random.py
2017-10-17 00:54:26 +02:00
Lu Fang
3261e1337a Use 0D (1-element) tensor instead of 1D tensor 2017-10-16 17:47:36 -04:00
Lu Fang
93e1749c85 Add ONNX support for AddConstant and SubConstant 2017-10-16 17:47:36 -04:00
Lu Fang
a1deb2d47f Move the exception logic to the helper function 2017-10-16 16:57:16 -04:00
Lu Fang
cad9438bb9 Add unit tests for onnx helper functions 2017-10-16 16:57:16 -04:00
Lu Fang
864bd934b0 Add a helper function to check broadcasting (#3115) 2017-10-13 23:22:16 -04:00
Sam Gross
61bb0d2954 Remove unused parameter 'input' from Tanh 2017-10-13 01:31:48 +02:00
Edward Z. Yang
b9cd45adcf Add note about inplace status in ONNX and JIT.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-13 01:31:22 +02:00
Lu Fang
9ef39a50ee Fix the broadcast in Addmm's symbolic (#3063)
* Fix the broadcast in Addmm's symbolic

* fix the non-matching dimension cases

* Add exception for non-supported case, remove onnx test cases (moved to onnx-pytorch repo)

* remove the test_onnx.py in run_test.sh

* lint the code
2017-10-11 22:23:11 -04:00
Sam Gross
7bc154f8ea Remove unused argument 'input' to Sigmoid_updateGradInput (#3079) 2017-10-11 23:52:50 +02:00
bddppq
bd9b4df6e9 Add support for exporting MulConstant, DivConstant and Softmax to ONNX (#2923)
* Add support for exporting MulConstant and Softmax

* Add support for MulConstant in autograd execution

* Also add support for DivConstant
2017-10-11 13:03:33 -04:00
Lu Fang
8d8a99c244 Add ONNX Pad reflect and edge mode support (#3048) 2017-10-10 17:02:08 -04:00
Lu Fang
c489445c46 Add ONNX support for Mean (#2956) 2017-10-03 18:16:45 -04:00
Edward Z. Yang
6fbdf40284 Translate addmm into Gemm operator / fix alpha-beta mixup / execute in JIT.
The alpha/beta naming in addmm was flipped; this commit fixes that
problem.  It also fixes the ONNX export of alpha/beta parameters.
Finally, it supports executing matmul in the JIT.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-10-03 17:23:43 -04:00
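The semantics being fixed here, as a small numeric check (keyword form of addmm from current PyTorch, values illustrative): addmm computes beta * M + alpha * (A @ B), and the exported Gemm's alpha/beta attributes must line up with that.

    import torch

    M = torch.ones(2, 3)
    A = torch.ones(2, 4)
    B = torch.ones(4, 3)

    # beta scales the added matrix M, alpha scales the matrix product A @ B;
    # swapping the two (the mixup fixed above) would give 2*1 + 0.5*4 = 4 instead.
    out = torch.addmm(M, A, B, beta=0.5, alpha=2.0)
    print(out[0, 0].item())   # 0.5 * 1 + 2.0 * 4 = 8.5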
Junjie Bai
e4701e63f6 Fix exporting Reshape with single torch.Size argument 2017-10-02 23:29:49 +02:00
Alykhan Tejani
621603169c initialize new tensor 2017-10-02 09:53:21 -04:00
Alykhan Tejani
ca644ca204 Add inplace zero to variable (#2212) 2017-10-02 14:02:24 +02:00
Junjie Bai
287f434900 Add support for exporting Addmm with alpha != 1 or beta != 1 2017-09-23 11:17:27 -04:00
IraKorshunova
2b9765ad02 Erf and erfinv (#2799) 2017-09-20 21:23:45 -04:00
Trevor Killeen
ad414908d7 Advanced Indexing with variables for autograd (#2590) 2017-09-20 14:50:07 -04:00
Adam Paszke
b66d90c84f Add a pass to remove all non-standard ONNX nodes before export (#225) 2017-09-19 10:53:32 -04:00
Adam Paszke
aa1a94058b Add AddConstant node to the JIT 2017-09-19 10:53:32 -04:00
Adam Paszke
28828e033f Make certain functions traceable 2017-09-19 10:53:32 -04:00
albanD
2763bfc49e Norm subgradient at 0 (#2775) 2017-09-18 12:26:36 -04:00
Soumith Chintala
462f95ed6d fix bug in autograd type() for non-default GPU input 2017-09-13 15:33:37 -04:00
Zach DeVito
3c61b59fd4 codemod primspec -> symbol, PrimSpec -> Symbolic 2017-09-06 13:45:39 -04:00
Zach DeVito
6d8d5bab4c Codemod Toffee -> ONNX, toffee -> onnx. Change file names to match 2017-09-06 13:45:39 -04:00
Edward Z. Yang
4fc54af010 Code review comments.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Lu Fang
f1e4de9a63 Add primspec for Sub, Index, Chunk, and Embedding 2017-09-05 17:48:55 -04:00
Edward Z. Yang
394ff072eb Update to latest ToffeeIR operator schema.
- Conv no longer supports bias, so we create an explicit broadcasted
  addition afterwards.  There is one minor problem, however, which is that
  ConvTranspose in Caffe2 has mandatory bias.  So there's a hack.
  See Note [Caffe2ConvTranspose] for the details.
- Squeeze: dims -> axes
- Transpose: axes -> perm
- Reshape lost its extra output (yay!)

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00