Commit Graph

169 Commits

Author SHA1 Message Date
Jon Malmaud
c71c84ee04 Tweak 'detach' docstring. (#5292) 2018-02-17 23:35:30 -05:00
gchanan
712a6c6362
Deprecate out-of-place resize and resize_as on Variables. (#4886)
* Deprecate out-of-place resize and resize_as on Variables.

* Use default UserWarning instead of DeprecationWarning for Variable resize.
2018-01-29 18:02:06 -05:00
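A minimal sketch of the new warning behavior, assuming the 0.4-era Variable API (sizes arbitrary):

    import warnings
    import torch
    from torch.autograd import Variable

    v = Variable(torch.randn(2, 3))
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        v.resize(4, 4)  # out-of-place resize on a Variable now emits a warning
        assert any(issubclass(w.category, UserWarning) for w in caught)
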
gchanan
260a246192
Move repeat autograd to C++. (#4885) 2018-01-29 15:09:59 -05:00
gchanan
0844b5b25c
Fix deepcopy with scalars. (#4854) 2018-01-25 15:12:36 -05:00
Sam Gross
57549b7e44
Bind functions with out= arguments in VariableType (#4565)
This adds overrides in VariableType for the xxx_out ATen functions and
implements Python bindings. There is no support for automatic
differentiation. If any of the inputs (or outputs) requires grad, then the
function will throw an exception unless it's running in "no-grad" mode.

The bindings for calling torch.xxx functions on Variables are moved to a
different object. Previously, they were static methods on VariableBase.
This change prevents users from accidentally calling static methods as if
they were instance methods.
2018-01-17 18:27:42 -05:00
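A hedged sketch of the behavior described above, assuming the 0.4-era Variable API and that the error raised is a RuntimeError:

    import torch
    from torch.autograd import Variable

    a = Variable(torch.randn(3))
    b = Variable(torch.randn(3))
    out = Variable(torch.zeros(3))

    torch.add(a, b, out=out)       # fine: nothing involved requires grad

    a.requires_grad = True
    try:
        torch.add(a, b, out=out)   # out= has no autograd support, so this throws
    except RuntimeError:           # assumed exception type
        pass
    with torch.no_grad():
        torch.add(a, b, out=out)   # allowed again in "no-grad" mode
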
Sam Gross
a8bdce38fe
Replace PowConstant (#4711) 2018-01-17 17:30:56 -05:00
Richard Zou
ddb767f214 Add printing support for sparse variables (#4683) 2018-01-16 13:18:10 -05:00
gchanan
eb857ec367
Introduce a (non-public) autograd scalar method and improve printing (#4586)
* Specialize Variable printing and always print device for GPU tensors/Variables.

* Introduce a (non-public) _scalar_sum() method for autograd scalar testing.
2018-01-12 14:26:38 -05:00
gchanan
e426020c87
Move prod, cumprod backwards to C++ (#4394)
* Add view_as as a native_function.

* Move prod, cumprod backwards to C++.

* Update for review requests.

* Review comments.

* Reorder slice parameters so dim is first.

* Update test_slice.

* Update test_autograd.

* Fix flake8.
2018-01-03 16:27:50 -05:00
SsnL
658d4c7ea8 allow optional int tensor 2017-12-24 03:08:28 +08:00
Sam Gross
d605058212
Replace Variable.volatile with torch.no_grad() (#3970)
This removes volatile from Variable. The functionality is mostly
replaced by a global (thread-local) flag, which is controlled by
torch.set_grad_enabled() and the context manager torch.no_grad().

In C++, the flag is exposed through GradMode::is_enabled() and GradMode::set_enabled()

Fixes #3627
2017-12-18 15:46:13 -05:00
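A short sketch of the replacement API named in the message (0.4-era Variable shown for context):

    import torch
    from torch.autograd import Variable

    x = Variable(torch.randn(3), requires_grad=True)

    with torch.no_grad():          # context manager: no graph is recorded inside
        y = x * 2
    assert not y.requires_grad

    torch.set_grad_enabled(False)  # thread-local global switch
    z = x * 2
    assert not z.requires_grad
    torch.set_grad_enabled(True)
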
Tongzhou Wang
d8b2e5d091 Add Python-only default init expression; implement stft, hann/hamming/bartlett window. (#4095)
* implement stft

* addressed comments; implemented window functions; added support for Python-only default initialization
2017-12-18 12:28:23 -05:00
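A minimal sketch of the window helpers this adds (window length arbitrary; the era's exact stft signature is not shown):

    import torch

    hann = torch.hann_window(256)
    hamming = torch.hamming_window(256)
    bartlett = torch.bartlett_window(256)
    # A window like these is what the new torch.stft consumes via its window argument.
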
Sam Gross
bec0349280 Implement Variable.cuda and Variable.type using ATen (#4139)
* Implement Variable.cuda using ATen

This adds an optional async flag to Tensor::copy_, which attempts to do
a non-blocking copy if one of the tensors is in pinned memory and
the other is a CUDA tensor.

* Perform cross-device copy in CopyBackwards

Also call torch.cuda._lazy_init() from Variable.cuda()

* Implement Variable.type via ATen

* Changes from review:

 - remove copy_out
 - remove unnecessary include
 - fix default device for .cuda()

* Combine if statements in dispatch_type
2017-12-18 01:54:35 -05:00
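A hedged sketch of the pinned-memory path described above, assuming a CUDA build and the 0.4-era Variable API:

    import torch
    from torch.autograd import Variable

    if torch.cuda.is_available():
        host = Variable(torch.randn(1024).pin_memory())  # page-locked host memory
        dev = host.cuda()                                # copy to the default CUDA device
        # With a pinned source the copy can be made non-blocking; in this era the
        # flag was spelled `async=True` on .cuda()/copy_ (later renamed non_blocking).
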
Sam Gross
d41b6c7daa Implement remaining random methods through ATen (#4137)
* Implement remaining random methods through ATen

* Change test_bernoulli on Tensor to avoid broadcasting

The new ATen-dispatched bernoulli_ supports broadcasting. The old
Tensor.bernoulli_ bindings instead require the tensors to have the same
number of elements. I haven't changed the old code because it will be
deleted soon.
2017-12-13 15:40:34 -05:00
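A small sketch of the broadcasting behavior the message describes, assuming the new ATen binding (shapes arbitrary):

    import torch

    out = torch.zeros(4, 3)
    probs = torch.Tensor([0.1, 0.5, 0.9])  # one probability per column
    out.bernoulli_(probs)                  # probs broadcasts across the 4 rows
    # The old binding required out and probs to hold the same number of elements.
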
Sam Gross
d0cabbde74
Implement Variable.from_numpy (#4043)
Implements from_numpy using ATen tensors. Variable.from_numpy is a
convenient placeholder for the variant that returns Variables until we
merge Tensor and Variable.

The behavior is slightly changed:

 - from_numpy() on an empty array now returns an empty tensor instead of
   throwing an exception. The shape may not be preserved.
 - CharTensor(ndarray) used to throw an exception. It now copies the
   ndarray. Copying is implemented via ATen toType.
2017-12-06 14:08:56 -05:00
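A sketch of the two behavior changes listed above, assuming NumPy is installed:

    import numpy as np
    import torch

    empty = torch.from_numpy(np.zeros((0, 3)))  # now an empty tensor, no exception
    print(empty.numel())                        # 0

    arr = np.array([1, 2, 3], dtype=np.int8)
    chars = torch.CharTensor(arr)               # previously raised; now copies via ATen toType
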
Sam Gross
535a13dbc2
Move renorm to C++ and expose cumsum (#4013)
Also allow cumprod forward in C++
2017-12-05 11:24:03 -05:00
Sam Gross
7e1fccb8f5
Add is_pinned, is_shared, and share_memory_ to Variable (#4015)
These are copied directly from Tensor. We'll need them before we can
merge Tensor and Variable.
2017-12-04 20:47:10 -05:00
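A small usage sketch of the three methods on a 0.4-era Variable:

    import torch
    from torch.autograd import Variable

    v = Variable(torch.randn(4))
    print(v.is_pinned())   # False unless the storage is page-locked
    print(v.is_shared())   # False until the storage is moved to shared memory
    v.share_memory_()      # move the storage to shared memory (useful for multiprocessing)
    assert v.is_shared()
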
Fritz Obermeyer
165d0897e4 Implement distributions.Gamma (#3841) 2017-12-02 01:10:08 +01:00
gchanan
4c7219b3b0
Implement matmul as a native function; use it for Variable impl (#3943)
* Implement matmul as a native function; use it for Variable impl.

This also includes an (inefficient) version of allclose, which was necessary for testing.
A more efficient version would use some apply logic to fuse the ops and exit early (coming in a future PR).

On small tensors [(2, 5, 5) @ (5,5)], this yields ~2.5x speedup over the Python implementation.

* Make maybeSqueeze static.
2017-11-29 23:13:04 -05:00
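A minimal sketch of the batched case cited above ((2, 5, 5) @ (5, 5)), with arbitrary values:

    import torch
    from torch.autograd import Variable

    a = Variable(torch.randn(2, 5, 5))
    b = Variable(torch.randn(5, 5))
    c = torch.matmul(a, b)                               # native-function dispatch on Variables
    assert c.size() == (2, 5, 5)
    assert torch.allclose(c[0], torch.matmul(a[0], b))   # allclose, added here for testing
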
gchanan
157f949cef
Implement python scalar conversions via ATen; allow localScalar if numel == 1 (#3908)
* Have localScalar work with all 1 element tensors, not just scalars.

Also have toCFloat, etc. call localScalar so 1 element tensors work as well.

* Implement python number conversions.

* Implement __bool__, __nonzero__ as ATen functions.

* Remove merge artifacts.

* Simplify by dispatching to toCDouble.
2017-11-28 12:56:51 -05:00
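A short sketch of the conversions this enables on any one-element tensor (value arbitrary):

    import torch

    t = torch.Tensor([3.5])   # a one-element tensor, not just a zero-dim scalar
    print(float(t))           # 3.5, via localScalar
    print(int(t))             # 3
    print(bool(t))            # True; __bool__/__nonzero__ now dispatch through ATen
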
gchanan
e91b75615e Use ATen version of Variable type_as. (#3840)
* Use ATen version of Variable type_as.

* type_as can't handle Tensors (non-Variables) in the parsing code; handle this in Python.
2017-11-27 19:10:33 -05:00
gchanan
9c498aa523
Implement Variable cpu() as an ATen method. (#3802) 2017-11-22 11:25:52 -05:00
Sam Gross
4518793aa2
Implement indexing in ATen (#3725)
Implements basic and advanced indexing using ATen tensors/variables.
Basic indexing is translated at the Python-binding level
(python_variable_indexing.cpp) to slice/squeeze/unsqueeze/select calls.
Advanced indexing is implemented in ATen in terms of take() and put()
calls.
2017-11-21 13:19:00 -05:00
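A brief sketch of the two indexing paths the message distinguishes, on a 0.4-era Variable:

    import torch
    from torch.autograd import Variable

    v = Variable(torch.arange(0, 12).view(3, 4))
    row = v[1]          # basic indexing: translated to select()
    cols = v[:, 1:3]    # basic indexing: translated to slice()
    picked = v[[0, 2]]  # advanced indexing: implemented with take()/put()
    masked = v[v > 5]   # boolean-mask indexing, also the advanced path
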
gchanan
ee08120b46
Move Variable conversion methods to ATen. (#3762)
* Move Variable conversion methods to ATen.

* Add a test to ensure type conversions work through backwards.

* Fix VariableType copy for type conversions.

* Add comment about needing to handle device movement.

* Move back to opposite order for copy function params -- inplace views depend on it.

* Use is_available() rather than is_available.
2017-11-20 13:28:08 -05:00
Adam Paszke
cf407213f9 Clean up stochastic function related dead code (#3782) 2017-11-20 12:44:45 -05:00
Fritz Obermeyer
1f64c2ef91 Rename pyro.distributions.Multinomial -> .Categorical (#3766)
* Rename distributions.Multinomial -> distributions.Categorical

* Rename Multinomial -> Categorical

* Update docs

* Update variable.py

* Update distributions.py

* Update variable.py
2017-11-18 16:10:07 -05:00
gchanan
067f799e9f
Implement remaining Variable fallthrough methods via ATen (#3744)
* Use aten version of is_signed.

* Define is_cuda native function and use it for variable.

* Use ATen dim for Variable dim/ndimension.

* Get rid of dim, ndimension fallthroughs in variable.py.

* Move size/stride Variable methods to use ATen.

* Implement shape property on Variable via ATen.

* Remove the _getattr__ function from Variable.

* Get rid of dispatch functions and avoid cast.

* Add THPUtils_packInt64Array.

* Throw python errors.

* Use fallthrough and fix fallthrough generation for native functions.

* is_cuda is a property, not a method.
2017-11-17 15:57:56 -05:00
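A quick sketch of the methods and properties that now go through ATen (names as listed above):

    import torch
    from torch.autograd import Variable

    v = Variable(torch.randn(2, 3))
    print(v.dim())        # 2
    print(v.size())       # torch.Size([2, 3])
    print(v.stride())     # (3, 1)
    print(v.shape)        # same as size(), exposed as a property
    print(v.is_cuda)      # a property, not a method
    print(v.is_signed())  # True for floating-point tensors
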
Sam Gross
2453bc2876
Implement clamp using ATen (#3739) 2017-11-17 13:12:36 -05:00
gchanan
b96976fceb
Use ATen equivalents for variable element_size and nelement. (#3724)
* Use aten numel for variable nelement.

* Use ATen elementSizeInBytes for element_size.
2017-11-15 17:54:02 -05:00
Sam Gross
feb0a145c3
Move Variable.var and Variable.std to ATen (#3704) 2017-11-15 14:36:15 -05:00
Sam Gross
1d198c4f8c
Use ATen for Variable.contiguous() (#3701) 2017-11-14 17:13:15 -05:00
Gregory Chanan
a3bf06c0c7 Use ATen implementations for is_contiguous, is_set_to, numel, get_device. 2017-11-14 08:29:55 +01:00
Vladislav Zavadskyy
30d06218cb Solved boolean ambiguity for variables and tensors which contain one value. (#3656)
* Solved boolean ambiguity for variables and tensors which contain one value.

* Update variable.py

* Update tensor.py
2017-11-12 11:07:50 -05:00
Sam Gross
1bf717e17d
Raise exception when Variable.reinforce is called (#3555)
Fixes #3554
2017-11-09 12:30:12 -05:00
Ozan Çağlayan
dd6d04ddf2 doc: Normalize all true/false in docstrings to `True|False` (#3593)
* doc: Normalize all true/false in docstrings to ``True|False``

This makes them more apparent in the documentation.

* doc: fix flake8
2017-11-09 08:12:29 -05:00
Sam Gross
fde355f7d4
Allow in-place operations on views (#3384)
Allow in-place operations on views

Adds VariableViewImpl, a subclass of VariableImpl which has a pointer to
the base Variable on which it is a view. In-place operations on views
change the grad_fn of the base.

Note that in-place operations only work on views that are the first output of the function that created them. All C++/ATen implemented functions have this behavior, but it's possible to write Python-implemented autograd functions that do not. In-place operations on these views will raise an exception.

Fixes #3313
2017-11-06 18:19:56 -05:00
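A hedged sketch of the behavior described above; the base is made a non-leaf so the in-place update is legal:

    import torch
    from torch.autograd import Variable

    x = Variable(torch.randn(4, 4), requires_grad=True)
    base = x * 1           # a non-leaf base Variable
    view = base[0]         # a view that is the first output of the function creating it
    view.mul_(2)           # in-place op on the view updates base's grad_fn
    base.sum().backward()  # gradients w.r.t. x account for the in-place change
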
陈云
c2bdda1224 implement __dir__ for Variable (#3501)
* implement __dir__ for Variable

* Update test_autograd.py
2017-11-06 08:08:15 -05:00
Sam Gross
48fe5d4622
Move select and permute to ATen/C++ (#3421)
Move select and permute to ATen/C++
2017-11-02 15:17:36 -04:00
Filip Binkiewicz
c2e8b7aafe Allow casting Variables onto Python scalars 2017-10-31 08:51:55 -04:00
gchanan
3e6e81da46 Dispatch trivial variable operators to C++ aten functions. (#3372)
Implement __comparison_ops__ by calling the VariableBase methods.
2017-10-30 19:46:05 -04:00
Adam Paszke
fa0f3cf98a Re-enable and fix most JIT tests 2017-10-27 02:40:09 +05:30
Sam Gross
a65db4e956 Use ATen for torch.cat, torch.addmm, and friends on Variables. (#3286)
This includes some changes to the dispatch code for torch.xxx functions:

 - Since Variable.addmm is an instance-method, the self argument has to
   come first. The dispatch code swaps the first two arguments if
   necessary to support the deprecated signatures where 'alpha' or 'beta'
   comes before the 'self' tensor.
 - Delete IMPLEMENT_STATELESS_REVERSED. These functions require output
   arguments to be passed in using the keyword 'out'. They were meant to
   handle torch.gt(out, a, b), but we haven't allowed that for a while.
2017-10-25 14:27:45 -04:00
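A minimal sketch of the now-preferred call pattern, with the self tensor first (shapes arbitrary):

    import torch
    from torch.autograd import Variable

    m = Variable(torch.randn(2, 3))
    a = Variable(torch.randn(2, 4))
    b = Variable(torch.randn(4, 3))

    out = torch.addmm(m, a, b)    # 'self' (m) comes first; alpha/beta follow as keywords
    cat = torch.cat([m, out], 0)  # torch.cat on Variables now dispatches through ATen
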
Gregory Chanan
4b1e85d266 Remove split/chunk python autograd. 2017-10-24 19:33:37 -04:00
Sam Gross
5989b05ecc Enable ATen implementation of some NN functions and Variable methods 2017-10-20 15:38:01 -04:00
Sam Gross
d9b89a352c Replace StochasticFunctions v2 (#3165)
This removes the StochasticFunctions for bernoulli, multinomial, and
normal and replaces them with classes in the torch.distributions
package. Each distribution supports the differentiable log_prob function
that returns the log of the pdf/pmf of the samples.

The current StochasticFunction implementation has a few problems: it can
be painful to use when there are multiple stochastic outputs which need
to be back-propagated through. It also requires that we store grad_fns
on Variables that have requires_grad=False in order to find stochastic
nodes.
2017-10-19 15:05:07 -04:00
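A hedged sketch of the REINFORCE-style pattern this enables, using the Bernoulli distribution added here (values arbitrary):

    import torch
    from torch.autograd import Variable
    from torch.distributions import Bernoulli

    probs = Variable(torch.Tensor([0.3, 0.7]), requires_grad=True)
    dist = Bernoulli(probs)
    action = dist.sample()                            # sampling itself is not differentiable
    reward = Variable(torch.Tensor([1.0, -1.0]))
    loss = -(dist.log_prob(action) * reward).sum()    # surrogate loss via log_prob
    loss.backward()                                   # gradients reach probs through log_prob
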
Alykhan Tejani
ca644ca204 Add inplace zero to variable (#2212) 2017-10-02 14:02:24 +02:00
Taehoon Lee
5d9de014bd Fix typos 2017-10-01 03:09:25 -04:00
SsnL
dcee596a8b change Variable.cuda to be consistent with Tensor.cuda 2017-09-26 23:48:40 -04:00
IraKorshunova
2b9765ad02 Erf and erfinv (#2799) 2017-09-20 21:23:45 -04:00
Edward Z. Yang
bcad604ea6 Move imap to six.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-14 14:33:08 -04:00