陈云
71b9dea6ec
add keyword out for autograd function Concat to match torch.cat (#1336)
2017-04-23 15:36:24 +02:00
albanD
f0c7124420
Allow support for negative dimension argument for all functions
2017-04-06 16:37:00 -07:00
Adam Paszke
03f1cab801
Unify argument names in norm and renorm
2017-04-03 10:38:58 -04:00
Adam Paszke
fa2c566353
Add Variable.type_as
2017-04-03 10:38:58 -04:00
陈云
30fd222b80
implement autograd function cross (#1138)
2017-03-31 01:45:51 -04:00
Aalekh Jain
761eef1f19
Minor typo fix in backward function in torch/autograd/variable.py (#1143)
2017-03-30 11:23:28 -04:00
Rudy Bunel
0d908d813b
Implements Cumsum function for autograd (#1122)
2017-03-29 17:45:57 +02:00
chenyuntc
ca376d4584
implement autograd function trace
2017-03-23 10:37:52 +01:00
Sam Gross
6336300880
Fix bug where adding a hook could replace an existing hook.
...
We were keying hooks by RemovableHandle id. However, we don't hold onto
handles, and ids of dead objects can be reused. This replaces id(handle)
with a global counter.
2017-03-06 12:47:53 -08:00
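For illustration, here is a minimal sketch of the counter-keyed hook storage described in the commit above: hooks are stored under a monotonically increasing integer key rather than under id(handle), so a key is never reused even after an earlier handle has been garbage-collected. HookRegistry and RemovableHandle below are simplified stand-ins, not the actual torch internals.

import itertools
from collections import OrderedDict

class RemovableHandle:
    """Token returned to the caller; remove() deletes the associated hook."""
    def __init__(self, hooks_dict, key):
        self._hooks_dict = hooks_dict
        self._key = key

    def remove(self):
        self._hooks_dict.pop(self._key, None)

class HookRegistry:
    """Stores hooks keyed by a global counter instead of id(handle)."""
    _counter = itertools.count()  # counter values are never reused, unlike ids of dead objects

    def __init__(self):
        self.hooks = OrderedDict()

    def register_hook(self, fn):
        key = next(HookRegistry._counter)
        self.hooks[key] = fn
        return RemovableHandle(self.hooks, key)

# Registering a second hook never overwrites the first, even if the
# first handle was dropped and its id has been reused elsewhere.
reg = HookRegistry()
reg.register_hook(print)
h = reg.register_hook(repr)
assert len(reg.hooks) == 2
h.remove()
assert len(reg.hooks) == 1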
Sam Gross
5073132837
Implement 'pre' and 'post' hooks at the C++ autograd level
2017-03-06 12:47:53 -08:00
Martin Raison
f17cfe4293
sparse tensor operations (#735)
2017-03-03 18:37:03 +01:00
Luke Yeager
61bd5a0643
[Lint] Address F811
2017-02-27 19:33:00 -05:00
Adam Paszke
c0c62d099a
Make detach() actually remove the creator
2017-02-17 10:40:08 +05:30
Sam Gross
bd5303010d
Refactor autograd package to separate Python dependencies. (#662)
...
The core autograd Variable, Function, and Engine no longer depend on the
Python API. This lets us implement functions in C++. In the future, we
can also multithread the engine and release the GIL for most of the
non-Python backwards.
2017-02-13 16:00:16 -08:00
Adam Paszke
e3e7b76310
Rename all normal and log_normal args to std
2017-02-01 21:48:11 +01:00
Adam Paszke
659b2f3154
Add more autograd functions
2017-01-31 00:39:34 +01:00
Luke Yeager
3ed720079e
[pep8] Fix most remaining lint manually
2017-01-28 01:15:51 +01:00
Luke Yeager
e7c1e6a8e3
[pep8] Fix most lint automatically with autopep8
...
Here's the command I used to invoke autopep8 (in parallel!):
git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i
Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.
Also configures flake8 to match pep8's behavior.
Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Adam Paszke
4f5a6c366e
Make Variables non-comparable
2017-01-24 17:30:50 -05:00
Adam Paszke
7ced682ff5
Add notes
2017-01-16 20:38:14 -05:00
Adam Paszke
f91bb96071
Remove cmin, cmax and cinv
2017-01-16 19:07:37 -05:00
Adam Paszke
95f0fa8a92
Change .grad attribute of Variables to be a Variable
2017-01-16 12:59:47 -05:00
Sam Gross
7e4ddcfe8a
Remove names from register_hook calls (#446)
...
The register hook calls now return an object that can be used to remove
the hook. For example,
>>> h = module.register_forward_hook(callback)
>>> h.remove() # removes hook
Or as a context manager:
>>> with module.register_forward_hook(callback):
...     pass
This makes it easier for libraries to use hooks without worrying about
name collisions.
2017-01-13 15:57:03 -05:00
Adam Paszke
b7f36f93d5
Expand autograd docs and add sections
2017-01-03 18:31:08 -05:00
Adam Paszke
1c6fe58574
Add gather and scatter to autograd
2017-01-02 13:42:59 -05:00
Adam Paszke
9f2111af73
Rename Variable.no_grad to Variable.detach
2017-01-02 13:42:59 -05:00
Adam Paszke
62ac1b4bdd
Implement missing cases of __matmul__
2016-12-31 16:25:39 -05:00
Adam Paszke
b123bace1b
Rename torch.autograd.functions to torch.autograd._functions
2016-12-30 23:02:57 +01:00
Adam Paszke
bc6a71b1f5
Add Function docs
2016-12-30 00:15:06 -05:00
Adam Paszke
26f1e2ca9c
Add basic autograd docs
2016-12-30 00:15:06 -05:00
Adam Paszke
0d30f77889
Make variables picklable with protocols <2
2016-12-28 18:15:17 +01:00
Adam Paszke
e27bb3e993
Minor fixes
2016-12-28 18:15:17 +01:00
Adam Paszke
cd82b2b869
Implement comparison and logical operators for tensors
2016-12-28 00:04:08 +01:00
Adam Paszke
b140e70b58
Add autograd.backward (#341)
2016-12-26 19:10:35 -05:00
Adam Paszke
3e49a2b4b7
Prevent deepcopy from changing Parameters into Variables
2016-12-19 20:35:08 -05:00
Adam Paszke
26516f667e
Fix multinomial bug and decrease precision of normal test (#325)
2016-12-17 21:40:13 +01:00
Adam Paszke
8a70067b92
Add support for stochastic functions in autograd (#294)
2016-12-16 13:14:37 +01:00
Adam Paszke
7914cc119d
Fix bmm for Variables
2016-12-15 00:47:55 +01:00
Adam Paszke
8768e64e97
Allow returning changed gradients from the hooks
2016-12-15 00:47:55 +01:00
Adam Paszke
656dca6edb
Implement in-place operators for variables
2016-11-25 00:40:36 +01:00
Adam Paszke
830adfd151
Allow passing torch.Size to expand
2016-11-25 00:40:36 +01:00
Adam Paszke
3928f7740a
Implement functional interface for Variables (torch.*)
2016-11-08 16:13:25 -05:00
Adam Paszke
be085b8f6c
Allow marking non-leaf variables as non-requiring grad
2016-10-31 22:47:09 +01:00
Adam Paszke
fb593d5f28
Fix bugs in variable __setitem__ and improve __getitem__
2016-10-30 00:16:06 +02:00
Sam Gross
f2d7e94948
Use torch.Size for Tensor sizes and tuple for strides
...
See issue #20
The torch.Size class is a tuple subclass which distinguishes sizes from
other tuples so that torch.Tensor(size) is interpreted as a size instead
of as data.
2016-10-28 19:37:09 +02:00
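As a hedged illustration of the distinction described in the commit above (written against current torch, which may differ in minor ways from the 2016 API):

import torch

size = torch.Size([2, 3])   # a tuple subclass describing a shape
assert isinstance(size, tuple)

t = torch.Tensor(size)      # interpreted as a size: an uninitialized 2x3 tensor
assert t.shape == size

d = torch.Tensor([2, 3])    # a plain list is interpreted as data
assert d.tolist() == [2.0, 3.0]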
Adam Lerer
b5d13296c6
addressing comments
2016-10-23 21:11:22 -07:00
Adam Lerer
f88c3e9c12
fix some missing features in pytorch needed for RNNs
2016-10-23 20:23:48 -07:00
Sam Gross
c295f26a00
Support async argument to Variable.cuda (#137)
2016-10-18 23:27:11 +02:00
Sam Gross
94e52e1d17
Fix Variable.cat
2016-10-17 15:36:08 -07:00
Soumith Chintala
9cd68129da
fixing typo
2016-10-16 19:07:09 -04:00
Adam Paszke
0325e2f646
Major autograd refactor
...
Improves autograd performance by more than 2x and fixes a couple
of bugs. All core functions have been moved to C.
2016-10-13 17:17:49 -07:00
Adam Paszke
a22af69335
Add versioning and shared storage handling to autograd (#105)
2016-10-06 17:12:58 -04:00
Adam Paszke
3cbe66ba8c
Change requires_grad default to False
2016-10-05 08:46:34 -07:00
Adam Paszke
1d0afdf9f7
Make requires_grad read only (except for leaves)
2016-10-05 07:55:07 -07:00
Adam Paszke
64dd1419c5
Fix Variable indexing bugs (#96)
2016-10-03 14:49:21 -04:00
Adam Paszke
11b38a6895
Add more functions to autograd
2016-09-30 16:37:07 -04:00
Adam Paszke
f9d9c92560
Fix type conversions in autograd
2016-09-27 15:45:52 -07:00
Adam Paszke
1828e7c42f
Add async CUDA copy
2016-09-27 15:12:48 -07:00
Sam Gross
980300b381
Combine autograd.Leaf and autograd.Variable (#52)
...
Prior to this change, there was a circular reference between Leaf and
Variable. This means that the objects (and referenced Tensors) are not
collected as soon as they go out of scope, which leads to higher memory
usage and out-of-memory errors.
2016-09-25 20:21:14 -04:00
Adam Paszke
8fdec15a55
Codemod to remove camel case method naming
2016-09-20 08:40:28 -07:00
Adam Paszke
7847d77405
Add more functions to autograd
2016-09-16 15:26:24 -07:00
Adam Paszke
4bad029fd4
Add more functions to autograd
2016-09-15 13:01:24 -07:00
Adam Paszke
fb39971464
Add more modules to nn
2016-09-14 11:05:56 -07:00
Sam Gross
b738b09606
Clean up Module forward and __call__ (#14)
...
* _forward is renamed forward since users should override it
* some __call__ overrides are changed to forward
* functions which return a single variable are changed to return that
variable instead of a one-element tuple
2016-09-07 15:41:39 -04:00
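A minimal sketch of the convention described in the commit above, using today's torch.nn API; the Scale module is a made-up example, not part of the library:

import torch
import torch.nn as nn

class Scale(nn.Module):
    """Users override forward(); nn.Module.__call__ dispatches to it."""
    def __init__(self, factor):
        super().__init__()
        self.factor = factor

    def forward(self, x):
        # Return the single result directly, not a one-element tuple.
        return x * self.factor

m = Scale(2.0)
y = m(torch.ones(3))        # __call__ runs registered hooks, then forward()
assert torch.equal(y, torch.full((3,), 2.0))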
Adam Paszke
774a6f1093
Add in-place operations to autograd and nn
2016-08-25 09:34:54 -07:00
Adam Paszke
24476090df
Add volatile variables
2016-08-24 08:43:11 -07:00
Adam Paszke
ea93fb7ac0
Add more nn modules
2016-08-23 19:15:21 -07:00
Adam Paszke
2bf68e72d5
Add hook system to autograd and nn
2016-08-23 13:51:34 -07:00
Adam Paszke
53f00ae429
Add autograd
2016-08-19 14:24:07 -07:00