Commit Graph

169 Commits

Author SHA1 Message Date
Edward Z. Yang
d1346c75ec Always use generator version of map for Variable iteration.
In Python 2, the non-generator map always performs the indexing
even when the result is never used.  Using a generator lets us
avoid the indexing when it is not needed.

As an added bonus, it makes the ordering of operations deterministic
between Python 2 and Python 3 in LSTM.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-12 11:03:03 -04:00
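A minimal sketch of the laziness the commit above relies on (illustrative only, not the actual diff):

    data = [10, 20, 30]
    # Python 2: map() eagerly builds the full list, indexing every element up front.
    values = map(lambda i: data[i], range(len(data)))
    # A generator expression is lazy: data[i] is evaluated only for items actually consumed.
    values = (data[i] for i in range(len(data)))
    next(values)  # only data[0] has been indexed so far
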
Alykhan Tejani
d81d71f24c fix docs for variable.backward (#2678) 2017-09-08 20:23:34 -04:00
Zhou Mo
2c07f88ea3 Fix typos. 2017-08-25 14:27:07 -04:00
John Pearson
14d8c03424 adding backward capability for potrf (Cholesky) (#2386) 2017-08-24 17:18:11 -04:00
Gregory Chanan
50c208a50b Revert "Fix typos."
This reverts commit 4622b33952.
2017-08-10 13:57:00 -04:00
Zhou Mo
4622b33952 Fix typos. 2017-08-08 11:05:38 -04:00
Hugh Perkins
1654bc9335 add shape to pass-throughs 2017-08-06 10:54:02 -04:00
Adam Paszke
04f31aa034 Improve Variable.retain_grad 2017-07-27 20:36:14 +05:30
Hugh Perkins
ae59e008cd add retain_grad method to Variable, so the gradient gets stored during backprop on non-user variables 2017-07-27 20:36:14 +05:30
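A minimal usage sketch of retain_grad, assuming the Variable API of this era (values are arbitrary):

    import torch
    from torch.autograd import Variable

    x = Variable(torch.randn(3), requires_grad=True)
    y = x * 2              # non-leaf Variable: its grad is normally discarded
    y.retain_grad()        # ask autograd to keep y.grad during the backward pass
    y.sum().backward()
    print(y.grad)          # populated only because retain_grad() was called
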
Soumith Chintala
09abaa2189 make keepdim backcompat warnings emit in autograd as well (#2157) 2017-07-20 01:48:05 -04:00
Adam Paszke
02e23f4f6b Unify argument names in tensor and Variable methods 2017-07-20 01:45:57 -04:00
Adam Paszke
8946502348 Accept all kinds of arguments in Variable.expand 2017-07-20 01:45:57 -04:00
Luca Antiga
366299f9f3 Wrap unbiased flag in var, std, varall, stdall 2017-07-14 17:29:06 -04:00
Alykhan Tejani
0a9e8a23ef add atan2 function to autograd (#2040) 2017-07-10 16:04:35 -04:00
Leonid Vlasenkov
46a868dab7 [Ready] Limit docs line length (#1900)
* some docs are ready

* docs

* docs

* fix some more

* fix some more
2017-07-10 10:24:54 -04:00
Sam Gross
da0fad8a7a Use torch.matmul in nn.Linear (#1935)
This takes advantage of the broadcasting behavior of torch.matmul to
support inputs with more than two dimensions. The extra dimensions are
treated like part of the batch dimension, much like nn.Bottle in Lua
Torch.

There are a few related small performance changes:

 * Addmm computes the gradient in column-major for inputs in
   column-major format
 * Variable.mm calls Addmm in-place with the desired output buffer
2017-06-30 16:53:26 -04:00
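A sketch of the broadcasting behavior nn.Linear relies on here (shapes are hypothetical):

    import torch
    x = torch.randn(10, 7, 20)      # extra leading dimension is treated as batch
    w = torch.randn(30, 20)         # (out_features, in_features), as stored by nn.Linear
    out = torch.matmul(x, w.t())    # -> shape (10, 7, 30)
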
Sam Gross
0a95613cef Improve error message when accessing attributes that don't exist (#1936)
New:
   >>> torch.autograd.Variable(torch.randn(3, 3)).foobar
   AttributeError: 'Variable' object has no attribute 'foobar'

Old:
   >>> torch.autograd.Variable(torch.randn(3, 3)).foobar
   AttributeError: foobar
2017-06-28 20:13:15 -04:00
Gregory Chanan
bc032be13e Implement negative dimensions and double backwards cumprod. 2017-06-27 18:44:14 -04:00
Gregory Chanan
7da77c4255 Add ScatterAdd autograd function. 2017-06-24 09:45:21 -04:00
Trevor Killeen
a45ad7cfba Advanced Indexing Part 1 -- Purely Integer Array Indexing 2017-06-22 17:21:50 -04:00
Anton Osokin
172a356668 forgotten import in variables.py
Fixing error on line 661: 
warnings.warn("masked_copy_ is deprecated and renamed to masked_scatter_, and will be removed in v0.3")
NameError: name 'warnings' is not defined
2017-06-19 14:23:48 +02:00
Gregory Chanan
7714b5a088 Fix autograd shape tracking for 1-d reduction ops. 2017-06-17 09:38:28 -04:00
Gregory Chanan
5cfb1329b5 Make implementation of Variable.mul_ and Variable.div_ consistent. 2017-06-17 09:38:28 -04:00
Isac Arnekvist
88e4bec8fa resize bug fix 2017-06-17 11:07:22 +02:00
Soumith Chintala
8d33603901 make t() of Variable consistent with Tensor (#1823) 2017-06-16 16:08:53 +02:00
Sam Gross
9c53c6dcb9 Fix errors and warnings when building docs (#1806) 2017-06-14 13:50:14 -04:00
gchanan
4e356528b4 Add torch.matmul function. (#1780)
* Add torch.matmul function.

Includes test_torch, test_autograd and docs changes.

* Add __all__ to functional so imported names are not accidentally re-exported.

* Include unbind in __all__.

* Add matmul case for when one argument is 1-dimensional and the other
at least 3-dimensional.

* Add squeeze_ to Variable.

* Use squeeze_ instead of squeeze for matmul.
2017-06-14 08:14:53 -04:00
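A sketch of the 1-dimensional vs. at-least-3-dimensional case mentioned above (shapes are arbitrary):

    import torch
    a = torch.randn(4)               # 1-dimensional argument
    b = torch.randn(2, 4, 5)         # at least 3-dimensional argument
    torch.matmul(a, b).size()        # (2, 5): a is unsqueezed to (1, 4), batch-multiplied,
                                     # then the added dimension is squeezed away
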
martinarjovsky
7c024e93c6 Implement Cumprod function for autograd (#1439) 2017-06-13 17:48:15 +02:00
Gregory Chanan
e772a440cb Revert "Change keepdim default to False."
This reverts commit e124790cb2.

Note the original commit message is incorrect; this changes keepdim
back to false.
2017-06-11 05:37:58 -04:00
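For reference, a short sketch of what the keepdim flag controls (using the restored default of False):

    import torch
    x = torch.randn(2, 3)
    x.sum(1).size()                   # (2,)   reduced dimension removed (keepdim=False)
    x.sum(1, keepdim=True).size()     # (2, 1) reduced dimension kept with size 1
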
gchanan
e3d5826b92 Add Cumsum double backwards support. (#1758) 2017-06-10 18:27:44 +02:00
Adam Paszke
a53cde09b5 Rename masked_copy_ to masked_scatter_ 2017-06-06 01:06:14 -04:00
Francisco Massa
511cb20e7d Add Gesv to autograd (#1733)
* Add Gesv to autograd

* Add TODO for backprop through LU
2017-06-05 21:38:49 -04:00
陈云
4853cc0194 convert linalg.py to new-style functions (#1638) 2017-06-04 09:27:01 -04:00
gchanan
ac1c674723 Fix a couple of selection reduce function autograd bugs (#1702)
* Fix Median/Mode autograd functions.

* Fix kthvalue autograd function.

* Double backward for selection reduce functions.
2017-06-03 02:12:15 -04:00
Francisco Massa
75e0df271a Add Inverse to autograd (#1670)
* Add Inverse to autograd

* Add SkipTest to autograd tests
2017-06-02 12:00:13 -04:00
Adam Paszke
c573d53939 Bug fixes (#1573)
* Fix clang warnings
* Raise errors when unsupported ConvNd configurations are used
* Properly handle Variable indexing with LongTensors
* Support both tensors and variables in Variable.type_as
2017-05-17 15:28:16 -04:00
Thomas Viehmann
6107d15d14 Twice differentiability of pointwise functions (#1531) 2017-05-15 12:00:59 -06:00
Marvin Cao
0ba20435ce Add high order grad support for some operators (#1507) 2017-05-14 23:02:04 +02:00
Francisco Massa
be843eb26b Add unfold to autograd (#1523) 2017-05-11 17:53:16 +02:00
Adam Paszke
a86adf43a1 Fix comparison functions 2017-05-10 16:43:14 +02:00
Adam Paszke
e7220380bc Add new flags to Variable.backward 2017-05-10 16:43:14 +02:00
Gregory Chanan
e124790cb2 Change keepdim default to False. 2017-05-09 14:49:21 -07:00
Gregory Chanan
ae2b2cbbec Make keepdim work with autograd. 2017-05-09 14:15:59 -07:00
Luca Antiga
e694db0eeb Raise error when Variable is converted to bool. Fixes #1482. (#1491) 2017-05-08 23:14:11 +02:00
t-vi
c5ae79fe4e Make clamp twice differentiable (#1514) 2017-05-08 23:12:42 +02:00
Marvin CAO
e3f41a4962 Add high order gradient support for Sigmoid (#1496) 2017-05-07 13:00:20 +02:00
Ankit Vani
4e18d89791 added twice differentiation for a bunch of ops (#1426) 2017-05-04 06:47:14 -04:00
Adam Paszke
20aa5b066f Convert some of the functions to new format
Also, fix a lot of issues that appeared after the previous commits.
2017-05-01 16:44:56 -04:00
Adam Paszke
2ca787fcf4 Refactor attribute names in autograd 2017-05-01 16:44:56 -04:00
Adam Paszke
6a69f7007b Revert "add keyword out for autograd function Concat to match torch.cat (#1336)" (#1340)
This reverts commit 71b9dea6ec.
2017-04-23 19:19:27 +02:00
陈云
71b9dea6ec add keyword out for autograd function Concat to match torch.cat (#1336) 2017-04-23 15:36:24 +02:00
albanD
f0c7124420 Allow support for negative dimension argument for all functions 2017-04-06 16:37:00 -07:00
Adam Paszke
03f1cab801 Unify argument names in norm and renorm 2017-04-03 10:38:58 -04:00
Adam Paszke
fa2c566353 Add Variable.type_as 2017-04-03 10:38:58 -04:00
陈云
30fd222b80 implement autograd function cross (#1138) 2017-03-31 01:45:51 -04:00
Aalekh Jain
761eef1f19 Minor typo fix in backward function in torch/autograd/variable.py (#1143) 2017-03-30 11:23:28 -04:00
Rudy Bunel
0d908d813b Implements Cumsum function for autograd (#1122) 2017-03-29 17:45:57 +02:00
chenyuntc
ca376d4584 implement autograd function trace 2017-03-23 10:37:52 +01:00
Sam Gross
6336300880 Fix bug where adding a hook could replace an existing hook.
We were keying hooks by the RemovableHandle id. However, we don't hold onto
handles, and ids of dead objects can be reused. This replaces id(handle)
with a global counter.
2017-03-06 12:47:53 -08:00
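A minimal illustration of the id-reuse hazard described above (not the actual fix):

    class Handle(object):
        pass

    a = Handle()
    key = id(a)       # hooks used to be keyed by this id
    del a             # once the handle is garbage-collected...
    b = Handle()
    # ...id(b) may equal key, since CPython can reuse the memory of dead objects,
    # so a dict keyed by id() could silently overwrite the earlier hook.
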
Sam Gross
5073132837 Implement 'pre' and 'post' hooks at the C++ autograd level 2017-03-06 12:47:53 -08:00
Martin Raison
f17cfe4293 sparse tensor operations (#735) 2017-03-03 18:37:03 +01:00
Luke Yeager
61bd5a0643 [Lint] Address F811 2017-02-27 19:33:00 -05:00
Adam Paszke
c0c62d099a Make detach() actually remove the creator 2017-02-17 10:40:08 +05:30
Sam Gross
bd5303010d Refactor autograd package to separate Python dependencies. (#662)
The core autograd Variable, Function, and Engine no longer depend on the
Python API. This lets us implement functions in C++. In the future, we
can also multithread the engine and release the GIL for most of the
non-Python backward passes.
2017-02-13 16:00:16 -08:00
Adam Paszke
e3e7b76310 Rename all normal and log_normal args to std 2017-02-01 21:48:11 +01:00
Adam Paszke
659b2f3154 Add more autograd functions 2017-01-31 00:39:34 +01:00
Luke Yeager
3ed720079e [pep8] Fix most remaining lint manually 2017-01-28 01:15:51 +01:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
4f5a6c366e Make Variables non-comparable 2017-01-24 17:30:50 -05:00
Adam Paszke
7ced682ff5 Add notes 2017-01-16 20:38:14 -05:00
Adam Paszke
f91bb96071 Remove cmin, cmax and cinv 2017-01-16 19:07:37 -05:00
Adam Paszke
95f0fa8a92 Change .grad attribute of Variables to be a Variable 2017-01-16 12:59:47 -05:00
Sam Gross
7e4ddcfe8a Remove names from register_hook calls (#446)
The register hook calls now return an object that can be used to remove
the hook. For example,

   >>> h = module.register_forward_hook(callback)
   >>> h.remove()  # removes hook

Or as a context manager:

   >>> with module.register_forward_hook(callback):
   ...     pass

This makes it easier for libraries to use hooks without worrying about
name collisions.
2017-01-13 15:57:03 -05:00
Adam Paszke
b7f36f93d5 Expand autograd docs and add sections 2017-01-03 18:31:08 -05:00
Adam Paszke
1c6fe58574 Add gather and scatter to autograd 2017-01-02 13:42:59 -05:00
Adam Paszke
9f2111af73 Rename Variable.no_grad to Variable.detach 2017-01-02 13:42:59 -05:00
Adam Paszke
62ac1b4bdd Implement missing cases of __matmul__ 2016-12-31 16:25:39 -05:00
Adam Paszke
b123bace1b Rename torch.autograd.functions to torch.autograd._functions 2016-12-30 23:02:57 +01:00
Adam Paszke
bc6a71b1f5 Add Function docs 2016-12-30 00:15:06 -05:00
Adam Paszke
26f1e2ca9c Add basic autograd docs 2016-12-30 00:15:06 -05:00
Adam Paszke
0d30f77889 Make variables picklable with protocols <2 2016-12-28 18:15:17 +01:00
Adam Paszke
e27bb3e993 Minor fixes 2016-12-28 18:15:17 +01:00
Adam Paszke
cd82b2b869 Implement comparison and logical operators for tensors 2016-12-28 00:04:08 +01:00
Adam Paszke
b140e70b58 Add autograd.backward (#341) 2016-12-26 19:10:35 -05:00
Adam Paszke
3e49a2b4b7 Prevent deepcopy from changing Parameters into Variables 2016-12-19 20:35:08 -05:00
Adam Paszke
26516f667e Fix multinomial bug and decrease precision of normal test (#325) 2016-12-17 21:40:13 +01:00
Adam Paszke
8a70067b92 Add support for stochastic functions in autograd (#294) 2016-12-16 13:14:37 +01:00
Adam Paszke
7914cc119d Fix bmm for Variables 2016-12-15 00:47:55 +01:00
Adam Paszke
8768e64e97 Allow returning changed gradients from the hooks 2016-12-15 00:47:55 +01:00
Adam Paszke
656dca6edb Implement in-place operators for variables 2016-11-25 00:40:36 +01:00
Adam Paszke
830adfd151 Allow passing torch.Size to expand 2016-11-25 00:40:36 +01:00
Adam Paszke
3928f7740a Implement functional interface for Variables (torch.*) 2016-11-08 16:13:25 -05:00
Adam Paszke
be085b8f6c Allow marking non-leaf variables as non-requiring grad 2016-10-31 22:47:09 +01:00
Adam Paszke
fb593d5f28 Fix bugs in variable __setitem__ and improve __getitem__ 2016-10-30 00:16:06 +02:00
Sam Gross
f2d7e94948 Use torch.Size for Tensor sizes and tuple for strides
See issue #20

The torch.Size class is a tuple subclass which distinguishes sizes from
other tuples so that torch.Tensor(size) is interpreted as size instead
of data.
2016-10-28 19:37:09 +02:00
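A short sketch of the distinction torch.Size provides:

    import torch
    s = torch.randn(2, 3).size()
    isinstance(s, tuple)          # True: torch.Size is a tuple subclass
    torch.Tensor(s)               # interpreted as a size -> uninitialized 2x3 tensor
    torch.Tensor((2.0, 3.0))      # a plain tuple of numbers is interpreted as data
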
Adam Lerer
b5d13296c6 addressing comments 2016-10-23 21:11:22 -07:00
Adam Lerer
f88c3e9c12 fix some missing features in pytorch needed for RNNs 2016-10-23 20:23:48 -07:00
Sam Gross
c295f26a00 Support async argument to Variable.cuda (#137) 2016-10-18 23:27:11 +02:00
Sam Gross
94e52e1d17 Fix Variable.cat 2016-10-17 15:36:08 -07:00
Soumith Chintala
9cd68129da fixing typo 2016-10-16 19:07:09 -04:00