Commit Graph

194 Commits

Author SHA1 Message Date
Adam Paszke
4afd62db09 Add TracedModule to the JIT (#5409) 2018-02-28 22:50:50 -08:00
Sam Gross
30ec06c140
Merge Variable and Tensor classes (#5225)
This replaces the torch.Tensor constructors with factories that produce
Variables. Similarly, functions on the torch module (e.g. torch.randn)
now return Variables.

To keep the PR to a reasonable size, I've left most of the unused tensor
code. Subsequent PRs will remove the dead code, clean-up calls to
torch.autograd.Variable, and rename Variable to Tensor everywhere.

There are some breaking changes because Variable and Tensors had
slightly different semantics. There's a list of those changes here:

 https://github.com/pytorch/pytorch/wiki/Breaking-Changes-from-Variable-and-Tensor-merge
2018-02-23 18:03:31 -05:00
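
A minimal sketch of the post-merge behavior, run against a current PyTorch. The requires_grad factory keyword used below landed alongside this line of work and is an assumption here, not something this commit itself states; see the wiki page linked above for the exact list of changes:

    import torch

    # Factories now produce autograd-aware tensors, not plain Tensors.
    x = torch.randn(3, 4)

    # Assumed: the requires_grad keyword on factory functions.
    w = torch.randn(4, 2, requires_grad=True)
    loss = (x @ w).sum()
    loss.backward()
    print(w.grad.shape)  # torch.Size([4, 2])
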
Sam Gross
d605058212
Replace Variable.volatile with torch.no_grad() (#3970)
This removes volatile from Variable. The functionality is mostly
replaced by a global (thread-local) flag, which is controlled by
torch.set_grad_enabled() and the context manager torch.no_grad().

In C++, the flag is exposed through GradMode::is_enabled() and GradMode::set_enabled().

Fixes #3627
2017-12-18 15:46:13 -05:00
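
A minimal sketch of the replacement API; torch.no_grad() and torch.set_grad_enabled() are the names given in the message, sketched here against a current PyTorch:

    import torch

    x = torch.randn(3, requires_grad=True)

    # Context manager: computations inside do not track gradients,
    # replacing the old per-Variable volatile=True flag.
    with torch.no_grad():
        y = x * 2
    print(y.requires_grad)  # False

    # Global (thread-local) switch, e.g. for a whole evaluation loop.
    torch.set_grad_enabled(False)
    z = x * 2
    print(z.requires_grad)  # False
    torch.set_grad_enabled(True)
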
Richard Zou
43dd6319db Exclude attrs with invalid python variable names from __dir__ (#4011) 2017-12-18 02:19:55 -05:00
Luca Antiga
4eb8e12765 Introduce scopes during tracing (#3016) 2017-12-04 09:19:06 -08:00
Luca Antiga
af58bfbb1b Make integer parameters and buffers immune to float(), double() and half() (#3820)
* Avoid casting integer params and buffers to float(), double() and half()

* Add test for immune integer buffers

* Fix documentation for float(), double() and half()

* Fix test
2017-11-22 18:34:53 -05:00
rluo
efe4386d24 Fix module load_state_dict error information. 2017-11-10 22:11:30 +01:00
Ozan Çağlayan
cc757acd36 docs: clarify the difference between net() and net.forward() (#3596) 2017-11-09 08:16:01 -05:00
Ozan Çağlayan
dd6d04ddf2 doc: Normalize all true/false in docstrings to `True|False` (#3593)
* doc: Normalize all true/false in docstrings to ``True|False``

This makes them more apparent in the documentation.

* doc: fix flake8
2017-11-09 08:12:29 -05:00
Richard Zou
eac0942f6d Add more nn docs (#3374) 2017-10-30 18:37:36 -04:00
vfdev
acb73c729b Space is missing in __repr__ of conv (#3229)
* Remove spaces in `__repr__` of layers
* Replace `size` by `kernel_size` in `__repr__` of a pooling layer

* Fix flake8 errors
2017-10-30 13:45:37 -04:00
SsnL
de1f4e69dd raw text (#3327) 2017-10-28 01:24:02 +05:30
andreh7
b46ced4aab clarification in docstring of Module.register_forward_hook() (#3279)
* made it explicit in the docstring of Module.register_forward_hook() that the hook(s) will be called AFTER calling forward().

* added "every time" in docstring of Module.register_forward_pre_hook()
2017-10-25 15:36:00 +02:00
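
For reference, the ordering both docstring edits describe, sketched with the standard hook signatures:

    import torch
    import torch.nn as nn

    lin = nn.Linear(4, 2)

    def pre_hook(module, inputs):
        # Runs every time BEFORE forward(); sees only the inputs.
        print("pre-forward")

    def hook(module, inputs, output):
        # Runs every time AFTER forward(); also sees the output.
        print("post-forward, output shape:", tuple(output.shape))

    lin.register_forward_pre_hook(pre_hook)
    lin.register_forward_hook(hook)
    lin(torch.randn(1, 4))  # prints "pre-forward", then "post-forward ..."
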
Sam Gross
8e58135a26 Fix E722 ('do not use bare except') (#3239)
The new version of flake8 includes a check for not using bare except. We
should avoid this since it catches things like KeyboardInterrupt.
2017-10-23 23:03:37 -04:00
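
The hazard behind the rule, in a minimal sketch (risky_operation is a hypothetical stand-in):

    def risky_operation():
        raise ValueError("stand-in for real work")  # hypothetical helper

    # Bad: a bare except (flake8 E722) also swallows KeyboardInterrupt
    # and SystemExit, so Ctrl-C can't stop the program here.
    try:
        risky_operation()
    except:
        pass

    # Better: Exception excludes KeyboardInterrupt and SystemExit.
    try:
        risky_operation()
    except Exception:
        pass
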
Alykhan Tejani
95556f4075 add ignored_keys param to load_state_dict (#3159)
* add ignored_keys param to load_state_dict

* remove ignored_keys in favour of a strict param

* raise KeyError only if strict is enabled
2017-10-18 14:14:19 +02:00
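
A sketch of the strict parameter the PR settled on; the exception type raised under strict=True has varied across versions, so both are caught here:

    import torch.nn as nn

    model = nn.Linear(10, 2)
    state = {"weight": model.weight.data.clone()}  # "bias" deliberately missing

    try:
        model.load_state_dict(state)  # strict=True is the default
    except (KeyError, RuntimeError):  # exact error type varies by version
        print("strict load rejected the incomplete state_dict")

    model.load_state_dict(state, strict=False)  # loads matching keys, skips the rest
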
SsnL
fce3ed19e5 Change device_id to device in python land (#3133)
* change device_id to device in python land

* cuda/random.py
2017-10-17 00:54:26 +02:00
SsnL
828048f578 Add document on how Module.cuda() and optims should work together (#3056) 2017-10-10 22:55:23 -04:00
Mark Neumann
a64daf2c59 support dictionary return types in nn.Module's __call__ (#2037) 2017-10-01 20:33:03 -04:00
Edward Z. Yang
63c835bbe7 Add keep_vars parameter to state_dict.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
Alykhan Tejani
c5a8a59116 raise KeyError if registering buffer/param when attr exists (#2108) 2017-09-01 14:08:49 -04:00
Zhou Mo
2c07f88ea3 Fix typos. 2017-08-25 14:27:07 -04:00
Gregory Chanan
50c208a50b Revert "Fix typos."
This reverts commit 4622b33952.
2017-08-10 13:57:00 -04:00
Luca Antiga
1ac98b1bce Add documentation for apply (#2327) 2017-08-08 21:53:26 -04:00
Zhou Mo
4622b33952 Fix typos. 2017-08-08 11:05:38 -04:00
Kaiyu Shi
4a4d8841e6 Delete unused import 2017-07-23 12:48:11 -04:00
greaber
95ccbf8b0b better error message in load_state_dict when there are inconsistent tensor sizes (#2151) 2017-07-19 15:50:29 -04:00
Tzu-Wei Huang
c011d4f3d6 resolves #1991 (#2073) 2017-07-13 09:57:33 -04:00
Sam Gross
10e23943b3 Fix missing _forward_pre_hooks in serialized modules (#2057) 2017-07-11 18:23:35 -04:00
Sam Gross
2c038f2074 Add weight normalization implementation (#1945)
* Add weight normalization implementation

This adds forward "pre-hooks" which get called before the module's
forward() method. Weight norm is implemented as a hook which calculates
the weight variable from the weight_g and weight_v every iteration.

Based on @rtqichen implementation.

* Specify return type
2017-06-30 15:41:40 -04:00
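
The reparameterization described above ships as torch.nn.utils.weight_norm (module path per current PyTorch docs); a minimal sketch:

    import torch
    import torch.nn as nn
    from torch.nn.utils import weight_norm

    # weight_norm registers a forward pre-hook that recomputes `weight`
    # from the magnitude (weight_g) and direction (weight_v) parameters
    # before every forward() call.
    lin = weight_norm(nn.Linear(4, 3))
    print(hasattr(lin, "weight_g"), hasattr(lin, "weight_v"))  # True True

    out = lin(torch.randn(2, 4))  # the pre-hook rebuilds lin.weight here
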
Adam Paszke
23ab9d481a Add Module._all_buffers 2017-06-12 21:58:38 -04:00
Matt Dering
d1a4467682 fix a bug when calling modules
A module that returns a non-standard data structure currently breaks
due to checks for backward hooks. This refactors the code slightly so
that it only breaks when backward hooks are actually registered.
2017-05-11 23:00:45 +02:00
Adam Paszke
feef54ec34 Don't modify non-volatile grads in zero_grad 2017-05-10 16:43:14 +02:00
Wojciech Jaśkowski
3a7e068439 Remove spurious memo argument in Module.parameters() (#1527) 2017-05-10 13:55:15 +02:00
Adam Paszke
20aa5b066f Convert some of the functions to new format
Also, fix a lot of issues that appeared after the previous commits.
2017-05-01 16:44:56 -04:00
Adam Paszke
2ca787fcf4 Refactor attribute names in autograd 2017-05-01 16:44:56 -04:00
Soumith Chintala
c852883086 add named_parameters that yield name and value of parameters (#1242) 2017-04-12 16:32:36 -07:00
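
A short sketch of the new iterator:

    import torch.nn as nn

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

    # named_parameters() yields (name, parameter) pairs; names encode
    # the attribute path through the module tree.
    for name, param in model.named_parameters():
        print(name, tuple(param.shape))
    # 0.weight (8, 4)
    # 0.bias (8,)
    # 2.weight (2, 8)
    # 2.bias (2,)
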
陈云
d82cad3019 implement nn.Module.__dir__ (#1142) 2017-04-05 22:18:34 -04:00
Adam Paszke
2d1122739c Raise AttributeError in Module.__getattr__ 2017-04-03 10:38:58 -04:00
Alykhan Tejani
3eab8a71e2 Added docstring to add_module (#1116) 2017-03-27 11:09:24 -04:00
Francisco Massa
dd893391d5 Add argument to children to yield the name of the modules (#941) 2017-03-24 20:02:05 +01:00
Adam Paszke
d602b3a834 Allow submodules and parameters to shadow attrs on assignment 2017-03-12 13:31:32 -04:00
Sam Gross
6336300880 Fix bug where adding a hook could replace an existing hook.
We were keying hooks by RemovableHandle id. However, we don't hold onto
handles and ids of dead objects can be reused. This replaces id(handle)
with a global counter.
2017-03-06 12:47:53 -08:00
Sam Gross
5073132837 Implement 'pre' and 'post' hooks at the C++ autograd level 2017-03-06 12:47:53 -08:00
Adam Paszke
c238ee3681 Fix issues with lazy grad initialization (#912) 2017-03-03 14:23:51 -05:00
Sergey Zagoruyko
b5f7592140 boolean mode in module.train 2017-03-02 09:18:05 -05:00
Adam Paszke
85e82e85d8 Fix bug in zero_grad, when some parameters didn't require grad 2017-02-14 21:28:50 +01:00
Sam Gross
bd5303010d Refactor autograd package to separate Python dependencies. (#662)
The core autograd Variable, Function, and Engine no longer depend on the
Python API. This lets us implement functions in C++. In the future, we
can also multithread the engine and release the GIL for most of the
non-Python backward passes.
2017-02-13 16:00:16 -08:00
Francisco Massa
833b8cbc7a Remove unused code from module 2017-02-02 17:20:11 +01:00
Luke Yeager
e7c1e6a8e3 [pep8] Fix most lint automatically with autopep8
Here's the command I used to invoke autopep8 (in parallel!):

    git ls-files | grep '\.py$' | xargs -n1 -P`nproc` autopep8 -i

Several rules are ignored in setup.cfg. The goal is to let autopep8
handle everything which it can handle safely, and to disable any rules
which are tricky or controversial to address. We may want to come back
and re-enable some of these rules later, but I'm trying to make this
patch as safe as possible.

Also configures flake8 to match pep8's behavior.

Also configures TravisCI to check the whole project for lint.
2017-01-28 01:15:51 +01:00
Sam Gross
0f65c9267d Fix typo 2017-01-18 08:46:04 -08:00
Adam Paszke
4cc11066b2 Add torch.utils.data docs and improve notes (#460)
* Add torch.utils.data docs and improve notes
2017-01-17 14:51:05 -05:00
Francisco Massa
8d9f6c2583 Minor fixes to docs 2017-01-17 10:19:14 -05:00
Adam Paszke
d6fa3b3fd5 Deprecate nn.Container in favor of nn.Module 2017-01-16 19:07:37 -05:00
Sam Gross
38967568ca Make load_state_dict() more restrictive (#451)
The load_state_dict() function now raises an error if the argument
state_dict has extra keys or is missing keys.

Previously, load_state_dict() ignored extra and missing keys, which made
it hard to notice when you load an invalid state_dict. This could
happen, for example, if you save the state_dict for a DataParallel, but
load it into a single model.

The state_dict() function now only includes the Tensor data from the
parameters, which reduces checkpoint size by not saving gradients.
2017-01-16 13:06:00 -05:00
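
A sketch of the DataParallel case mentioned above, with the usual remedy of stripping the "module." prefix (a common workaround, not something this commit prescribes):

    import torch.nn as nn

    model = nn.Linear(4, 2)
    saved = nn.DataParallel(model).state_dict()  # keys: "module.weight", ...

    # Loading DataParallel keys into the bare model now fails loudly
    # instead of silently doing nothing; strip the prefix to recover.
    fixed = {k.replace("module.", "", 1): v for k, v in saved.items()}
    model.load_state_dict(fixed)
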
Adam Paszke
95f0fa8a92 Change .grad attribute of Variables to be a Variable 2017-01-16 12:59:47 -05:00
Sam Gross
69d8331195 Use functools.partial 2017-01-13 23:10:45 +01:00
Sam Gross
7e4ddcfe8a Remove names from register_hook calls (#446)
The register hook calls now return an object that can be used to remove
the hook. For example,

   >>> h = module.register_forward_hook(callback)
   >>> h.remove()  # removes hook

Or as a context manager:

   >>> with module.register_forward_hook(callback):
   ...     pass

This makes it easier for libraries to use hooks without worrying about
name collisions.
2017-01-13 15:57:03 -05:00
Adam Paszke
f4870ca5c6 Fix nn docs 2016-12-30 00:15:06 -05:00
Sam Gross
24af02154c Use ForkingPickler for sharing tensor/storages across processes (#344)
This hooks into the (internal) ForkingPickler class in multiprocessing
to reduce tensors, storages, and CUDA events instead of our queue from
joblib. This makes it easier to use the standard multiprocessing classes
in later versions of Python.

This also exposes:

 - Tensor/Storage.share_memory_()
 - Module.share_memory()

These methods move the CPU tensors and storages to shared memory. If
you're using the "fork" method of multiprocessing, these objects can be
directly inherited instead of serialized through a queue.
2016-12-28 20:34:23 -05:00
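
A minimal sketch of the two exposed methods:

    import torch
    import torch.nn as nn

    # Moves the storage into shared memory so a child process started
    # with the "fork" method sees the same data without serialization.
    t = torch.zeros(100)
    t.share_memory_()
    print(t.is_shared())  # True

    # Module.share_memory() applies this to all parameters and buffers.
    model = nn.Linear(10, 10)
    model.share_memory()
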
Sam Gross
ffcc38cf05 Deterministic ordering of parameters and buffers. (#317)
Uses the assignment syntax to get deterministic ordering of parameters.
The ordering of parameters using the constructor syntax is
non-deterministic because kwargs use dict() in Python 3.5 and earlier.
2016-12-16 14:45:56 -05:00
Adam Paszke
12cf96e358 Don't change requires_grad of parameters in train() and eval() 2016-12-15 00:47:55 +01:00
Adam Paszke
8768e64e97 Allow returning changed gradients from the hooks 2016-12-15 00:47:55 +01:00
Adam Paszke
87748ffd4c Add .type() for torch.nn modules 2016-12-01 23:14:41 +01:00
Sam Gross
18a3c62d9b Allow NoneType for parameters in Module.load_state_dict 2016-12-01 20:12:15 +01:00
Adam Paszke
2e24da2a0b Change parameter_dict to state_dict in torch.nn 2016-11-23 18:48:41 +01:00
Soumith Chintala
26d626a47c adding docs for loss functions, container, module and fix typos 2016-11-17 15:11:27 -05:00
Adam Paszke
78c1094d93 Don't override __call__ in modules 2016-11-16 15:32:18 -08:00
Soumith Chintala
28e3f07b63 adding apply function 2016-11-07 16:17:49 -05:00
Adam Paszke
b4f4cca875 Rename training and evaluation methods 2016-10-30 00:16:06 +02:00
Adam Paszke
e2458bce97 Add Parameter class to nn 2016-10-27 22:31:36 +02:00
Adam Paszke
30be715900 Add training and evaluation to torch.nn 2016-10-24 22:29:43 +02:00
Adam Lerer
b5d13296c6 addressing comments 2016-10-23 21:11:22 -07:00
Adam Lerer
f88c3e9c12 fix some missing features in pytorch needed for RNNs 2016-10-23 20:23:48 -07:00
Sam Gross
fee67c2e1a Allow parameters and child modules to be assigned by attribute (#136)
For example:
  self.linear = nn.Linear(10, 20)
  self.weight = torch.autograd.Variable(torch.Tensor(10, 20))
2016-10-18 23:34:20 +02:00
Adam Paszke
a22af69335 Add versioning and shared storage handling to autograd (#105) 2016-10-06 17:12:58 -04:00
Adam Lerer
1213149a2f add bias option to linear; allow modules to return nested lists/tuples of tensors (#106)
* add bias option to linear; allow modules to return nested lists/tuples of tensors
2016-10-06 15:59:12 -04:00
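
A one-line sketch of the new bias keyword:

    import torch
    import torch.nn as nn

    lin = nn.Linear(10, 20, bias=False)  # drop the additive bias term
    print(lin.bias)                      # None
    out = lin(torch.randn(2, 10))        # y = x @ W.T, no bias added
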
Adam Paszke
3cbe66ba8c Change requires_grad default to False 2016-10-05 08:46:34 -07:00
Adam Paszke
6efefac2df Add parameter_dict and load_parameter_dict methods for modules 2016-10-04 14:47:56 -07:00
Sam Gross
f4ebc65a12 Add Module.modules() and Module.children() (#90)
modules(): returns an iterator over all modules in the network
 children(): returns an iterator over immediate children

Also fix __getitem__ in Sequential
2016-10-01 21:18:53 -04:00
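
A short sketch of the distinction:

    import torch.nn as nn

    net = nn.Sequential(nn.Linear(4, 8),
                        nn.Sequential(nn.ReLU(), nn.Linear(8, 2)))

    # children(): immediate submodules only.
    print([type(m).__name__ for m in net.children()])
    # ['Linear', 'Sequential']

    # modules(): the full tree, starting with `net` itself.
    print([type(m).__name__ for m in net.modules()])
    # ['Sequential', 'Linear', 'Sequential', 'ReLU', 'Linear']
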
Adam Paszke
2d8c2972ae Only allow leaf variables as module parameters 2016-09-29 11:31:26 -07:00
Sam Gross
cb5d4e836f Lazy load CUDA and THNN modules (#64) 2016-09-28 19:29:53 -04:00
Adam Paszke
7f4ff0e615 Fix type conversions in nn 2016-09-27 15:45:49 -07:00
Adam Paszke
f9d25e8e72 Refactor nn (require specifying parameters explicitly) 2016-09-27 15:22:26 -07:00
Adam Paszke
4cdeae3283 Return only unique variables from parameters() 2016-09-25 12:23:43 -07:00
Adam Paszke
eefa0c7b40 Require torch.nn.cuda automatically when calling .cuda() 2016-09-23 18:06:26 -07:00
Adam Paszke
8fdec15a55 Codemod to remove camel case method naming 2016-09-20 08:40:28 -07:00
Adam Paszke
fb39971464 Add more modules to nn 2016-09-14 11:05:56 -07:00
Sam Gross
b738b09606 Clean up Module forward and __call__ (#14)
* _forward is renamed forward since users should override it

 * some __call__ overrides are changed to forward

 * functions which return a single variable are changed to return that
   variable instead of a one-element tuple
2016-09-07 15:41:39 -04:00
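
The convention this established, in a minimal sketch:

    import torch
    import torch.nn as nn

    class Doubler(nn.Module):
        def forward(self, x):
            # Subclasses override forward(); Module.__call__ wraps it
            # with hook dispatch, so call the module, not forward().
            return x * 2

    m = Doubler()
    print(m(torch.ones(3)))  # tensor([2., 2., 2.])
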
Adam Paszke
774a6f1093 Add in-place operations to autograd and nn 2016-08-25 09:34:54 -07:00
Adam Paszke
ff785e5f17 Make optimizers accept a closure 2016-08-25 09:23:39 -07:00
Adam Paszke
ea93fb7ac0 Add more nn modules 2016-08-23 19:15:21 -07:00
Adam Paszke
7bcb2a4081 Initial optim version 2016-08-23 19:03:30 -07:00
Adam Paszke
2bf68e72d5 Add hook system to autograd and nn 2016-08-23 13:51:34 -07:00
Adam Paszke
e055ffbdc7 Add nn 2016-08-19 14:56:55 -07:00