Commit Graph

182 Commits

Author SHA1 Message Date
Marcin Elantkowski
4d28b65fb8 fix serialization of nn.Parameter with dill (#10296)
Summary:
Should resolve #9981.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10296

Differential Revision: D9196353

Pulled By: soumith

fbshipit-source-id: 109b6da42b7240cdbc7a0586745c735bce5e1279
2018-09-01 23:55:40 -07:00
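
A minimal round-trip sketch of the behavior this fixes (assumes dill is installed; after the fix, a trip through dill should preserve the Parameter subclass):

    import dill
    import torch
    import torch.nn as nn

    p = nn.Parameter(torch.ones(3))
    p2 = dill.loads(dill.dumps(p))       # round-trip through dill rather than pickle
    assert isinstance(p2, nn.Parameter)  # the subclass survives the trip
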
Richard Zou
1709484a40 Restore tensor.type, tensor.type_as docs (#5746) 2018-03-14 17:59:31 -04:00
Richard Zou
439aae7e94 Add tensor.repeat docs. Remove legacy tensor repeat function. (#5666)
* Add tensor.repeat docs. Remove legacy tensor repeat function.

* Fix nit
2018-03-09 23:51:47 -05:00
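
For reference, a quick sketch of the documented behavior (repeat tiles the whole tensor, unlike numpy.repeat):

    import torch

    x = torch.tensor([1, 2, 3])
    x.repeat(2)      # tensor([1, 2, 3, 1, 2, 3])
    x.repeat(2, 2)   # tiles along two dims, giving shape (2, 6)
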
gchanan
6ab33a820c Support type conversion via type(dtype). (#5441)
* Support type conversion via type(dtype).

* Merge overloads.
2018-02-28 13:05:38 -05:00
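
A sketch of the new overload alongside the older forms it merges with:

    import torch

    x = torch.zeros(3)
    x.type(torch.int32)           # new: convert via a torch.dtype
    x.type('torch.DoubleTensor')  # older string form still works
    x.type()                      # no args: returns the type name, 'torch.FloatTensor'
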
Sam Gross
30ec06c140 Merge Variable and Tensor classes (#5225)
This replaces the torch.Tensor constructors with factories that produce
Variables. Similarly, functions on the torch module (e.g. torch.randn)
now return Variables.

To keep the PR to a reasonable size, I've left most of the unused tensor
code. Subsequent PRs will remove the dead code, clean up calls to
torch.autograd.Variable, and rename Variable to Tensor everywhere.

There are some breaking changes because Variables and Tensors had
slightly different semantics. There's a list of those changes here:

 https://github.com/pytorch/pytorch/wiki/Breaking-Changes-from-Variable-and-Tensor-merge
2018-02-23 18:03:31 -05:00
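
A sketch of the post-merge behavior described above: factory functions return history-tracking tensors directly, with no Variable wrapper needed:

    import torch

    x = torch.randn(3, requires_grad=True)  # a plain tensor that records history
    y = (x * 2).sum()
    y.backward()
    print(x.grad)                           # gradients live on the tensor itself
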
Sam Gross
895aebac08 Use Variable instead of Tensor in Function.forward (#4786)
The Tensor and Variable classes are being merged.
autograd.Function.forward is now called on Variables, but with "no-grad"
mode (torch.no_grad()) enabled.

One benefit is that we no longer have to explicitly track shared
storages.
2018-02-06 17:24:27 -05:00
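
A sketch of a custom Function under the new contract, in the staticmethod style: forward runs with grad tracking disabled, so results can be saved without extra shared-storage bookkeeping:

    import torch

    class Exp(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            result = x.exp()              # executes in no-grad mode
            ctx.save_for_backward(result)
            return result

        @staticmethod
        def backward(ctx, grad_output):
            result, = ctx.saved_tensors
            return grad_output * result

    y = Exp.apply(torch.randn(3, requires_grad=True))
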
Peter Goldsborough
86fd5fd524 Replace async with non_blocking for Python 3.7 (#4999)
* Replace async with non_blocking for Python 3.7 upgrade

* Remove trailing whitespace

* Give _cuda and _type kwargs and accept async for compatibility

* Rename async to non_blocking in all C++ code

* Add entries for async in python_variable_methods

* Friendlier backward compatibility for cuda and type
2018-02-02 09:23:51 -05:00
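
The rename in practice (a sketch; async became a reserved word in Python 3.7, and a CUDA device is assumed):

    import torch

    pinned = torch.ones(3).pin_memory()
    gpu = pinned.cuda(non_blocking=True)  # formerly pinned.cuda(async=True)
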
Sam Gross
720c7b1e2c Move repeat to torch/_utils.py (#4712)
This moves the implementation of repeat to _utils so that the autograd
function can call it directly instead of relying on forward being called
on tensors.

This also removes _range, which was previously necessary because we
shadowed the built-in range() function.
2018-01-17 17:30:43 -05:00
Natalia Gimelshein
ea28deee75 use torch.cat in _flatten 2017-11-29 10:54:57 +01:00
Ozan Çağlayan
dd6d04ddf2 doc: Normalize all true/false in docstrings to `True|False` (#3593)
* doc: Normalize all true/false in docstrings to ``True|False``

This makes them more apparent in the documentation.

* doc: fix flake8
2017-11-09 08:12:29 -05:00
SsnL
fa5efab669 Add comments and handle the case where not all tensors are sparse (#3370) 2017-11-01 06:05:17 -04:00
SsnL
01be4d6b20 sparse broadcast_coalesced and reduce_add_coalesced 2017-10-28 18:52:35 -04:00
SsnL
3a0aee71f3 fix sparse tensor .cpu() 2017-10-28 18:52:35 -04:00
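
A sketch of the fixed round trip, using the sparse constructor of that era (assumes a CUDA device):

    import torch

    i = torch.LongTensor([[0, 1], [2, 0]])  # indices, one column per entry
    v = torch.FloatTensor([3., 4.])         # values
    s = torch.sparse.FloatTensor(i, v, torch.Size([2, 3])).cuda()
    s_cpu = s.cpu()                         # previously broke for sparse tensors
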
Leonid Vlasenkov
46a868dab7 [Ready] Limit docs line length (#1900)
* some docs are ready

* docs

* docs

* fix some more

* fix some more
2017-07-10 10:24:54 -04:00
Gregory Chanan
bb3779efe8 Add broadcasting to masked_select. 2017-06-24 09:45:21 -04:00
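
A sketch of the added broadcasting, in today's boolean-mask spelling (the 2017 version used ByteTensor masks):

    import torch

    x = torch.randn(3, 4)
    mask = torch.tensor([True, False, True, False])  # shape (4,), broadcasts over the rows
    x.masked_select(mask)                            # 1-D tensor of the 6 selected elements
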
Adam Paszke
12813b88f6 Add DistributedDataParallel 2017-06-12 22:00:22 -04:00
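
A minimal setup sketch in the modern spelling (assumes one process per replica and the usual MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE environment variables):

    import torch.distributed as dist
    import torch.nn as nn

    dist.init_process_group(backend='gloo')  # join the process group via env://
    model = nn.parallel.DistributedDataParallel(nn.Linear(10, 10))
    # gradients are now averaged across processes during backward()
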
Kai Arulkumaran
ddf6328990 Document type function returns type with no args (#1719) 2017-06-05 11:54:55 -04:00
Edward Z. Yang
743e4894d2 Prefix values/indices/sparse_mask/nnz with underscore (#1457)
As discussed in #1441.

I also added some docs giving clear guidance about how coalescing
works in sparse tensors.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-05-03 11:14:10 -04:00
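
The coalescing guidance in a nutshell (a sketch using today's constructor; the underscore-prefixed accessors may expose uncoalesced duplicates):

    import torch

    i = torch.tensor([[0, 0], [1, 1]])  # the entry (0, 1) appears twice
    v = torch.tensor([2., 3.])
    s = torch.sparse_coo_tensor(i, v, (2, 2))
    s = s.coalesce()                    # sums duplicates; indices become unique
    s._values()                         # tensor([5.])
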
Martin Raison
01d84c5f9d revert sparse cuda index type change 2017-04-18 12:46:54 -07:00
Martin Raison
88b42324e7 spcadd, sparseMask, cadd, csub, cmul + tests 2017-04-18 12:46:54 -07:00
Sam Gross
c4d1318662 Fix map_location in torch.load (#1006) 2017-03-15 16:54:19 -04:00
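
A sketch of the common use this fixes, remapping GPU-saved storages onto the CPU (the checkpoint path is hypothetical):

    import torch

    # map_location is called with (storage, location) and returns the remapped storage
    state = torch.load('checkpoint.pt', map_location=lambda storage, loc: storage)
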
Martin Raison
f17cfe4293 sparse tensor operations (#735) 2017-03-03 18:37:03 +01:00
Sam Gross
6464e69e21 Docs for torch.Storage (#475) 2017-01-18 03:22:30 -05:00
Sam Gross
d951d5b1cd Fix tensor.cuda(0) when on non-zero device. (#472) 2017-01-18 01:08:37 -05:00
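
A sketch of the case this fixes (assumes at least two GPUs):

    import torch

    with torch.cuda.device(1):
        x = torch.randn(3).cuda()  # lands on device 1, the current device
        y = x.cuda(0)              # copying to an explicit index used to misbehave here
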
Adam Paszke
0325e2f646 Major autograd refactor
Improves autograd performance by more than 2x and fixes a couple
of bugs. All core functions have been moved to C.
2016-10-13 17:17:49 -07:00
Sam Gross
2bc9da4f5e Support "device" keyword argument (#79)
Adds the optional "device" keyword argument to Tensor and Storage
constructors and .new methods.
2016-10-01 19:32:55 -04:00
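
A sketch of the keyword as described above (the exact 2016-era signature is assumed here; device takes a GPU index):

    import torch

    x = torch.cuda.FloatTensor(2, 3, device=1)  # allocate directly on GPU 1
    y = x.new(4, 4)                             # .new keeps the type and device
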
Adam Paszke
e034f258e3 Fix ffi utils in Python 2.7 2016-10-01 15:37:05 -07:00
Adam Paszke
11b38a6895 Add more functions to autograd 2016-09-30 16:37:07 -04:00
Sam Gross
cb5d4e836f Lazy load CUDA and THNN modules (#64) 2016-09-28 19:29:53 -04:00
Adam Paszke
3eac7164f4 Add data parallel functions to nn 2016-09-27 15:45:45 -07:00
Adam Paszke
8fdec15a55 Codemod to remove camel case method naming 2016-09-20 08:40:28 -07:00
Adam Paszke
da5bb373e6 Type conversions now use auto gpu 2016-09-15 18:48:27 -07:00