Commit Graph

178 Commits

Author SHA1 Message Date
Edward Yang
ba81074c40 Fix B902 lint error: invalid first argument. (#18181)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18181
ghimport-source-id: 9c23551584a1a1b0b7ac246367f3a7ae1c50b315

Stack from [ghstack](https://github.com/ezyang/ghstack):
* #18184 Fix B903 lint: save memory for data classes with slots/namedtuple
* **#18181 Fix B902 lint error: invalid first argument.**
* #18178 Fix B006 lint errors: using mutable structure in default argument.
* #18177 Fix lstrip bug revealed by B005 lint

A variety of sins were committed:
- Some code was dead
- Some code was actually a staticmethod
- Some code just named it the wrong way
- Some code was purposely testing the omitted case

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14530876

fbshipit-source-id: 292a371d9a76ddc7bfcfd38b6f0da9165290a58e
2019-03-21 09:10:28 -07:00
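For reference, a minimal sketch of the B902 pattern this commit cleans up (class and method names here are illustrative, not taken from the PR): flake8-bugbear B902 expects the first argument of an instance method to be `self`, so offenders were renamed, marked `@staticmethod`, or deleted when dead.

```python
class Accumulator:
    # B902: the first argument of an instance method must be named `self`.
    # Before the fix this might have read `def add(x, value): ...`
    def add(self, value):
        self.total = getattr(self, "total", 0) + value
        return self.total

    # Some offenders were really static helpers; marking them as such
    # also satisfies the linter.
    @staticmethod
    def scale(value, factor=2):
        return value * factor
```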
Sam Gross
6b3a4637d6
Make the tensor type torch.Tensor instead of torch.autograd.Variable (#5785)
This changes type(tensor) to return `torch.Tensor` instead of
`torch.autograd.Variable`.

This requires a few implementation changes:

 - torch.Tensor is now a regular Python class instead of a
   pseudo-factory like torch.FloatTensor/torch.DoubleTensor
 - torch.autograd.Variable is just a shell with a __new__ function.
   Since no instances are constructed it doesn't have any methods.
 - Adds torch.get_default_dtype() since torch.Tensor.dtype returns
   <attribute 'dtype' of 'torch._C._TensorBase' objects>
2018-04-03 16:29:25 -04:00
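A small sketch of the behavior described above, using the current public API (the printed dtype depends on the configured default):

```python
import torch

t = torch.ones(2, 3)
print(type(t))                                  # <class 'torch.Tensor'>
print(isinstance(t, torch.autograd.Variable))   # True: Variable remains as a compatibility shell
print(torch.get_default_dtype())                # torch.float32 unless the default was changed
```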
Edward Z. Yang
404b8e9442
Revert "introduce size_as_tensor and resize_from_tensor" (#5818)
* Revert "introduce size_as_tensor and resize_from_tensor (#5792)"

This reverts commit 4fa08535ed.
2018-03-15 15:05:51 -04:00
anderspapitto
4fa08535ed introduce size_as_tensor and resize_from_tensor (#5792)
these two operators use a Tensor to hold the sizes, which allows
symbolic implementations to be attached
2018-03-15 14:47:35 -04:00
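These operators were later reverted (see the commit above this one), but the idea can be sketched with ordinary ops; the snippet below is illustrative only and does not use the removed operators:

```python
import torch

x = torch.randn(2, 3)
sizes = torch.tensor(x.shape)                # the sizes carried as a Tensor
y = torch.empty(0).resize_(*sizes.tolist())  # resize another tensor from them
print(y.shape)                               # torch.Size([2, 3])
```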
Tongzhou Wang
71d73211f4 [ready] torch.* doc update for Variable/Tensor merge, and other improvements (#5443)
* 1. Update doc to reflect changes in Variable/Tensor merge, and new printing style
2. Remove functions in torch/functional.py that are already implemented with native_function
3. Add set_default_tensor_type doc

* fix torch.split

* py2 unicode string fix

* update torch.gels doc

* address @fmassa 's comments

* double-colon
2018-03-08 23:02:38 -05:00
theweiho
c2721ab503 Add per-element unique op for CPU (#5503)
Questions/possible future work:

How to template-ize to extend support beyond LongTensor?
How to check if autograd works (and if not, how to add explicit gradient)?
CUDA support?
Testing command:
DEBUG=1 NO_CUDA=1 MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build && DEBUG=1 NO_CUDA=1 MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py develop && python3 test/test_torch.py

Partially fixes #2031

* Initial commit for unique op

* Working unique with test

* Make inverse indices shape conform to input

* flake8 whitespace removal

* address review comment nits

* Expose fn and add docs. Explicitly declare no gradients

* Trial generic dispatch implementation

* Add tests for generics

* flake8 whitespace

* Add basic CUDA error throwing and templateize set

* Explicit contiguous and AT_DISPATCH_ALL_TYPES return

* Remove extraneous numpy conversion

* Refactor out .data calls

* Refactored to variable return length API with wrapper fn as opposed to returning a 0-length tensor, per off-line reviewer comments

* Remove A

* Don't use hidden torch._unique() in test

* Fix documentation
2018-03-07 18:16:51 -05:00
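The per-element unique op described above is exposed today as `torch.unique`; a short usage sketch of the behavior this commit introduces (values plus inverse indices shaped like the input):

```python
import torch

x = torch.tensor([1, 3, 2, 3, 1])
values, inverse = torch.unique(x, sorted=True, return_inverse=True)
print(values)    # tensor([1, 2, 3])
print(inverse)   # index into `values` for each element of `x`, same shape as `x`
```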
Sam Gross
70ba50c3d4 Remove some uses of torch.is_tensor in favor of isinstance (#5473) 2018-03-02 06:17:38 -05:00
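The preferred spelling after this change, sketched briefly:

```python
import torch

x = torch.zeros(3)
assert isinstance(x, torch.Tensor)   # replaces most uses of torch.is_tensor(x)
```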
Sam Gross
30ec06c140
Merge Variable and Tensor classes (#5225)
This replaces the torch.Tensor constructors with factories that produce
Variables. Similarly, functions on the torch module (e.g. torch.randn)
now return Variables.

To keep the PR to a reasonable size, I've left most of the unused tensor
code. Subsequent PRs will remove the dead code, clean-up calls to
torch.autograd.Variable, and rename Variable to Tensor everywhere.

There are some breaking changes because Variable and Tensors had
slightly different semantics. There's a list of those changes here:

 https://github.com/pytorch/pytorch/wiki/Breaking-Changes-from-Variable-and-Tensor-merge
2018-02-23 18:03:31 -05:00
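A minimal sketch of what the merge means for user code: factory functions now return Tensors that participate in autograd directly, with no Variable wrapper.

```python
import torch

x = torch.randn(3, requires_grad=True)   # a Tensor, no Variable wrapping needed
y = (x * 2).sum()
y.backward()
print(type(x), x.grad)                   # <class 'torch.Tensor'> tensor([2., 2., 2.])
```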
gchanan
853dba8e3b
Improve sparse variable printing. (#5335) 2018-02-21 18:01:58 -05:00
Jon Malmaud
c71c84ee04 Tweak 'detach' docstring. (#5292) 2018-02-17 23:35:30 -05:00
gchanan
712a6c6362
Deprecate out-of-place resize and resize_as on Variables. (#4886)
* Deprecate out-of-place resize and resize_as on Variables.

* Use default UserWarning instead of DeprecationWarning for Variable resize.
2018-01-29 18:02:06 -05:00
gchanan
260a246192
Move repeat autograd to C++. (#4885) 2018-01-29 15:09:59 -05:00
gchanan
0844b5b25c
Fix deepcopy with scalars. (#4854) 2018-01-25 15:12:36 -05:00
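A quick sketch of the case this fixes, using a 0-dimensional (scalar) tensor:

```python
import copy
import torch

s = torch.tensor(3.0)        # 0-dim scalar tensor
s2 = copy.deepcopy(s)
print(s2, s2.dim())          # tensor(3.) 0
```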
Sam Gross
57549b7e44
Bind functions with out= arguments in VariableType (#4565)
This adds overrides in VariableType for the xxx_out ATen functions and
implements Python bindings. There is no support for automatic
differentiation. If any of the inputs (or outputs) requires grad, then the
function will throw an exception unless it's running in "no-grad" mode.

The bindings for calling torch.xxx functions on Variables are moved to a
different object. Previously, they were static method on VariableBase.
This change prevents users from accidentally calling static methods as if
they were instance methods.
2018-01-17 18:27:42 -05:00
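A hedged sketch of the behavior described above: `out=` variants work when no input requires grad, and raise otherwise (the exact error text may differ between releases).

```python
import torch

a, b = torch.randn(3), torch.randn(3)
out = torch.empty(3)
torch.add(a, b, out=out)                # fine: nothing requires grad

ag = torch.randn(3, requires_grad=True)
try:
    torch.add(ag, b, out=out)           # out= has no autograd support
except RuntimeError as err:
    print("rejected:", err)
```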
Sam Gross
a8bdce38fe
Replace PowConstant (#4711) 2018-01-17 17:30:56 -05:00
Richard Zou
ddb767f214 Add printing support for sparse variables (#4683) 2018-01-16 13:18:10 -05:00
gchanan
eb857ec367
Introduce a (non-public) autograd scalar method and improve printing (#4586)
* Specialize Variable printing and always print device for GPU tensors/Variables.

* Introduce a (non-public) _scalar_sum() method for autograd scalar testing.
2018-01-12 14:26:38 -05:00
gchanan
e426020c87
Move prod, cumprod backwards to C++ (#4394)
* Add view_as as a native_function.

* Move prod, cumprod backwards to C++.

* Update for review requests.

* Review comments.

* Reorder slice parameters so dim is first.

* Update test_slice.

* Update test_autograd.

* Fix flake8.
2018-01-03 16:27:50 -05:00
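A short sketch touching the pieces named in the bullets above (`view_as` as a native function and the C++ backward for `prod`):

```python
import torch

x = torch.randn(6, requires_grad=True)
template = torch.empty(2, 3)
y = x.view_as(template)      # view_as, exposed as a native function
y.prod().backward()          # prod's backward now runs in C++
print(x.grad.shape)          # torch.Size([6])
```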
SsnL
658d4c7ea8 allow optional int tensor 2017-12-24 03:08:28 +08:00
Sam Gross
d605058212
Replace Variable.volatile with torch.no_grad() (#3970)
This removes volatile from Variable. The functionality is mostly
replaced by a global (thread-local) flag, which is controlled by
torch.set_grad_enabled() and the context manager torch.no_grad().

In C++, the flag is exposed through GradMode::is_enabled() and GradMode::set_enabled()

Fixes #3627
2017-12-18 15:46:13 -05:00
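A minimal sketch of the replacement API: `volatile=True` on a Variable becomes a thread-local grad-mode switch.

```python
import torch

x = torch.ones(3, requires_grad=True)

with torch.no_grad():                 # context manager form
    y = x * 2
print(y.requires_grad)                # False: no graph was recorded

torch.set_grad_enabled(False)         # the same thread-local flag, set directly
z = x * 2
torch.set_grad_enabled(True)
print(z.requires_grad)                # False
```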
Tongzhou Wang
d8b2e5d091 Add python only default init expression; Implement stft, hann/hamming/bartlett window. (#4095)
* implement stft

* addressed comments; implemented window functions; added support for python only default initialization
2017-12-18 12:28:23 -05:00
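A usage sketch of the window functions and `stft` added here; note that recent releases require `return_complex` to be passed explicitly, which is an assumption about the installed version rather than part of this commit.

```python
import torch

signal = torch.randn(16000)
window = torch.hann_window(400)
spec = torch.stft(signal, n_fft=400, hop_length=160, window=window,
                  return_complex=True)
print(spec.shape)   # (n_fft // 2 + 1, num_frames)
```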
Sam Gross
bec0349280 Implement Variable.cuda and Variable.type using ATen (#4139)
* Implement Variable.cuda using ATen

This adds an optional async flag to Tensor::copy_, which attempts to do
a non-blocking copy if one of the tensors is in pinned memory and
the other is a CUDA tensor.

* Perform cross-device copy in CopyBackwards

Also call torch.cuda._lazy_init() from Variable.cuda()

* Implement Variable.type via ATen

* Changes from review:

 - remove copy_out
 - remove unnecessary include
 - fix default device for .cuda()

* Combine if statements in dispatch_type
2017-12-18 01:54:35 -05:00
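A sketch of the non-blocking copy path described above; in today's API the old `async` flag is spelled `non_blocking`, and pinned host memory is what makes the copy asynchronous.

```python
import torch

if torch.cuda.is_available():
    cpu_t = torch.randn(1024).pin_memory()    # pinned source enables an async copy
    gpu_t = cpu_t.cuda(non_blocking=True)
    back = gpu_t.to("cpu", non_blocking=True)
```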
Sam Gross
d41b6c7daa Implement remaining random methods through ATen (#4137)
* Implement remaining random methods through ATen

* Change test_bernoulli on Tensor to avoid broadcasting

The new ATen-dispatched bernoulli_ supports broadcasting. The old
Tensor.bernoulli_ bindings instead require the tensors to have the same
number of elements. I haven't changed the old code because it will be
deleted soon.
2017-12-13 15:40:34 -05:00
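A small sketch of the broadcasting `bernoulli_` mentioned above (same-shape probabilities shown; a broadcastable shape such as `(4,)` should also be accepted per the commit message):

```python
import torch

probs = torch.full((3, 4), 0.25)
draws = torch.empty(3, 4).bernoulli_(probs)   # elementwise Bernoulli(p) draws
print(draws)
```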
Sam Gross
d0cabbde74
Implement Variable.from_numpy (#4043)
Implements from_numpy using ATen tensors. Variable.from_numpy is a
convenient placeholder for the variant that returns Variables until we
merge Tensor and Variable.

The behavior is slightly changed:

 - from_numpy() on an empty array now returns an empty tensor instead of
   throwing an exception. The shape may not be preserved.
 - CharTensor(ndarray) used to throw an exception. It now copies the
   ndarray. Copying is implemented via ATen toType.
2017-12-06 14:08:56 -05:00
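A usage sketch of `torch.from_numpy`, including the empty-array case called out above:

```python
import numpy as np
import torch

a = np.arange(6, dtype=np.float32).reshape(2, 3)
t = torch.from_numpy(a)        # shares memory with the ndarray
a[0, 0] = 42.0
print(t[0, 0])                 # tensor(42.)

empty = torch.from_numpy(np.empty((0,), dtype=np.int64))
print(empty.shape)             # an empty tensor rather than an exception
```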
Sam Gross
535a13dbc2
Move renorm to C++ and expose cumsum (#4013)
Also allow cumprod forward in C++
2017-12-05 11:24:03 -05:00
Sam Gross
7e1fccb8f5
Add is_pinned, is_shared, and share_memory_ to Variable (#4015)
These are copied directly from Tensor. We'll need them before we can
merge Tensor and Variable.
2017-12-04 20:47:10 -05:00
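A quick sketch of the three methods on today's Tensor type:

```python
import torch

t = torch.zeros(4)
print(t.is_shared())           # False
t.share_memory_()              # move storage to shared memory (for multiprocessing)
print(t.is_shared())           # True

if torch.cuda.is_available():
    print(torch.zeros(4).pin_memory().is_pinned())   # True on a CUDA build
```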
Fritz Obermeyer
165d0897e4 Implement distributions.Gamma (#3841) 2017-12-02 01:10:08 +01:00
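A minimal usage sketch of the distribution added here:

```python
import torch
from torch.distributions import Gamma

g = Gamma(concentration=torch.tensor(2.0), rate=torch.tensor(0.5))
samples = g.sample((3,))
print(samples, g.log_prob(samples))
```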
gchanan
4c7219b3b0
Implement matmul as a native function; use it for Variable impl (#3943)
* Implement matmul as a native function; use it for Variable impl.

This also includes an (inefficient) version of allclose, which was necessary for testing.
A more efficient version would use some apply logic to fuse the ops and exit early (coming in future PR).

On small tensors [(2, 5, 5) @ (5,5)], this yields ~2.5x speedup over the python implementation.

* Make maybeSqueeze static.
2017-11-29 23:13:04 -05:00
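A sketch of the batched case quoted in the commit message ((2, 5, 5) @ (5, 5)), checked with the `allclose` it also introduces:

```python
import torch

a = torch.randn(2, 5, 5)
b = torch.randn(5, 5)
c = torch.matmul(a, b)             # batched matmul, same as a @ b
print(c.shape)                     # torch.Size([2, 5, 5])
print(torch.allclose(c, a @ b))    # True
```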
gchanan
157f949cef
Implement python scalar conversions via ATen; allow localScalar if numel == 1 (#3908)
* Have localScalar work with all 1 element tensors, not just scalars.

Also have toCFloat, etc. call localScalar so 1 element tensors work as well.

* Implement python number conversions.

* Implement __bool__, __nonzero__ as ATen functions.

* Remove merge artifacts.

* Simplify by dispatching to toCDouble.
2017-11-28 12:56:51 -05:00
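A sketch of the conversions this enables: any 1-element tensor converts to a Python number or bool, while larger tensors are rejected as ambiguous.

```python
import torch

t = torch.tensor([3.5])
print(float(t), int(t))            # 3.5 3
print(bool(torch.tensor([0.0])))   # False

try:
    bool(torch.ones(2))            # more than one element is ambiguous
except RuntimeError as err:
    print("rejected:", err)
```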
gchanan
e91b75615e Use ATen version of Variable type_as. (#3840)
* Use ATen version of Variable type_as.

* type_as can't handle Tensors (non-Variables) in the parsing code; handle this in Python.
2017-11-27 19:10:33 -05:00
gchanan
9c498aa523
Implement Variable cpu() as an ATen method. (#3802) 2017-11-22 11:25:52 -05:00
Sam Gross
4518793aa2
Implement indexing in ATen (#3725)
Implements basic and advanced indexing using ATen tensors/variables.
Basic indexing is translated at the Python-binding level
(python_variable_indexing.cpp) to slice/squeeze/unsqueeze/select calls.
Advanced indexing is implemented in ATen in terms of take() and put()
calls.
2017-11-21 13:19:00 -05:00
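A small sketch of the two paths described above: basic indexing produces views, advanced indexing produces copies.

```python
import torch

x = torch.arange(12).reshape(3, 4)

view = x[1:, ::2]                  # basic indexing -> slice/select, a view of x
view[0, 0] = -1
print(x[1, 0])                     # -1: the view aliases x

idx = torch.tensor([0, 2])
copied = x[idx]                    # advanced indexing -> take(), a copy
copied[0, 0] = 99
print(x[0, 0])                     # unchanged
```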
gchanan
ee08120b46
Move Variable conversion methods to ATen. (#3762)
* Move Variable conversion methods to ATen.

* Add a test to ensure type conversions work through backwards.

* Fix VariableType copy for type conversions.

* Add comment about needing to handle device movement.

* Move back to opposite order for copy function params -- inplace views depend on it.

* Use is_available() rather than is_available.
2017-11-20 13:28:08 -05:00
Adam Paszke
cf407213f9 Clean up stochastic function related dead code (#3782) 2017-11-20 12:44:45 -05:00
Fritz Obermeyer
1f64c2ef91 Rename pyro.distributions.Multinomial -> .Categorical (#3766)
* Rename distributions.Multinomial -> distributions.Categorical

* Rename Multinomial -> Categorical

* Update docs

* Update variable.py

* Update distributions.py

* Update variable.py
2017-11-18 16:10:07 -05:00
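Usage of the renamed distribution, sketched with the public torch.distributions API:

```python
import torch
from torch.distributions import Categorical

dist = Categorical(probs=torch.tensor([0.1, 0.6, 0.3]))
idx = dist.sample()                # an index in {0, 1, 2}
print(idx, dist.log_prob(idx))
```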
gchanan
067f799e9f
Implement remaining Variable fallthrough methods via ATen (#3744)
* Use aten version of is_signed.

* Define is_cuda native function and use it for variable.

* Use ATen dim for Variable dim/ndimension.

* Get rid of dim, ndimension fallthroughs in variable.py.

* Move size/stride Variable methods to use ATen.

* Implement shape property on Variable via ATen.

* Remove the _getattr__ function from Variable.

* Get rid of dispatch functions and avoid cast.

* Add THPUtils_packInt64Array.

* Throw python errors.

* Use fallthrough and fix fallthrough generation for native functions.

* is_cuda is a property, not a method.
2017-11-17 15:57:56 -05:00
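A short sketch exercising the fallthrough methods listed above, including `is_cuda` as a property:

```python
import torch

x = torch.zeros(2, 3)
print(x.dim(), x.ndimension())     # 2 2
print(x.size(), x.shape)           # torch.Size([2, 3]) twice
print(x.is_cuda)                   # a property, not a method
```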
Sam Gross
2453bc2876
Implement clamp using ATen (#3739) 2017-11-17 13:12:36 -05:00
gchanan
b96976fceb
Use ATen equivalents for variable element_size and nelement. (#3724)
* Use aten numel for variable nelement.

* Use ATen elementSizeInBytes for element_size.
2017-11-15 17:54:02 -05:00
Sam Gross
feb0a145c3
Move Variable.var and Variable.std to ATen (#3704) 2017-11-15 14:36:15 -05:00
Sam Gross
1d198c4f8c
Use ATen for Variable.contiguous() (#3701) 2017-11-14 17:13:15 -05:00
Gregory Chanan
a3bf06c0c7 Use ATen implementations for is_contiguous, is_set_to, numel, get_device. 2017-11-14 08:29:55 +01:00
Vladislav Zavadskyy
30d06218cb Solved boolean ambiguity for variables and tensors which contain one value. (#3656)
* Solved boolean ambiguity for variables and tensors which contain one value.

* Update variable.py

* Update tensor.py
2017-11-12 11:07:50 -05:00
Sam Gross
1bf717e17d
Raise exception when Variable.reinforce is called (#3555)
Fixes #3554
2017-11-09 12:30:12 -05:00
Ozan Çağlayan
dd6d04ddf2 doc: Normalize all true/false in docstrings to `True|False` (#3593)
* doc: Normalize all true/false in docstrings to ``True|False``

This makes them more apparent in the documentation.

* doc: fix flake8
2017-11-09 08:12:29 -05:00
Sam Gross
fde355f7d4
Allow in-place operations on views (#3384)
Allow in-place operations on views

Adds VariableViewImpl, a subclass of VariableImpl which has a pointer to
the base Variable on which it is a view. In-place operations on views
change the grad_fn of the base.

Note that in-place operations only work on views that are the first output of the function that created them. All C++/ATen implemented functions have this behavior, but it's possible to write Python-implemented autograd functions that do not. In-place operations on these views will raise an exception.

Fixes #3313
2017-11-06 18:19:56 -05:00
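A sketch of an in-place op on a view of a non-leaf tensor, showing that the base's grad_fn is rewired and gradients stay correct:

```python
import torch

x = torch.ones(4, requires_grad=True)
base = x * 2                       # non-leaf tensor
view = base[:2]                    # a view sharing storage with `base`
view.mul_(3)                       # in-place op on the view updates base's grad_fn too
base.sum().backward()
print(x.grad)                      # tensor([6., 6., 2., 2.])
```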
陈云
c2bdda1224 implement __dir__ for Variable (#3501)
* implement __dir__ for Variable

* Update test_autograd.py
2017-11-06 08:08:15 -05:00
Sam Gross
48fe5d4622
Move select and permute to ATen/C++ (#3421)
Move select and permute to ATen/C++
2017-11-02 15:17:36 -04:00
Filip Binkiewicz
c2e8b7aafe Allow casting Variables onto Python scalars 2017-10-31 08:51:55 -04:00
gchanan
3e6e81da46 Dispatch trivial variable operators to C++ aten functions. (#3372)
Implement __comparison_ops__ by calling the VariableBase methods.
2017-10-30 19:46:05 -04:00
Adam Paszke
fa0f3cf98a Re-enable and fix most JIT tests 2017-10-27 02:40:09 +05:30