Commit Graph

112 Commits

Author SHA1 Message Date
Tongzhou Wang
a77b391de7 [SpectralNorm] don't register original weight as buffer (#8170)
* don't register original weight as buffer; fixes for buffers that require grad

* add test
2018-06-12 14:42:05 -04:00
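A minimal sketch of the behavior this commit fixes: the original weight requires grad, so it must live in the module's parameters rather than its buffers (buffers are assumed not to require grad).

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# After applying spectral norm, the original weight is registered as a
# parameter (it requires grad), not as a buffer.
m = spectral_norm(nn.Linear(4, 4))
assert 'weight_orig' in dict(m.named_parameters())
```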
Tongzhou Wang
c0a419e6ba
Add non_blocking to Tensor/Module.to (#7312)
* Add non_blocking to Tensor/Module.to

* flake8

* Add argparse tests

* cpp parse

* Use C++ parser

* use a common parse function with Tensor.to


* fix test_jit

* use THPObjectPtr

* increase refcount for None, True, and False

* address comments

* address comments
2018-06-04 18:46:52 -04:00
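A short sketch of the argument this commit adds: `non_blocking=True` lets the copy overlap with computation when the source tensor lives in pinned memory.

```python
import torch

# non_blocking only helps for pinned (page-locked) host memory.
x = torch.randn(8, 8).pin_memory()
if torch.cuda.is_available():
    y = x.to('cuda', non_blocking=True)  # asynchronous host-to-device copy
```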
li-roy
d564ecb4a5 Update docs with new tensor repr (#6454)
* Update docs with new tensor repr

* remove cuda in dtype

* remove changes to gloo submodule

* [docs] document tensor.new_* ctor

* [docs] Add docs for tensor.to(), tensor.float(), etc

* [docs] Moar examples for docs.

* [docs] Warning for tensor ctor copy behavior

* Quick fix

* [docs] Document requires_grad_()

* [docs] Add example for requires_grad_()

* update slogdet and *fft

* update tensor rst

* small fixes

* update some docs

* additional doc changes

* update torch and tensor docs

* finish changing tensor docs

* fix flake8

* slogdet with negative det

* Update functional.py tensor ctors

* Fix nll_loss docs

* reorder to move device up

* torch.LongTensor -> torch.tensor or torch.empty in docs

* update tensor constructors in docs

* change tensor constructors

* change constructors

* change more Tensor() to tensor()

* Show requires_grad_ docs

* Fix set_default_dtype docs

* Link to torch.no_grad, etc, from torch doc

* Add dtype aliases to table

* regen docs again

* Tensor attributes stub page

* link to inplace sampling

* Link torch.dtype, device, and layout

* fix dots after nonfinite floats

* better layout docs
2018-04-21 07:35:37 -04:00
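A short sketch of the constructor idioms these docs settle on:

```python
import torch

a = torch.tensor([1.0, 2.0])   # copies data; preferred over torch.Tensor([...])
b = a.new_zeros(3)             # tensor.new_*: matches a's dtype and device
c = torch.empty(2, 3)          # uninitialized; replaces torch.Tensor(2, 3)
a.requires_grad_()             # the in-place toggle documented here
```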
Tongzhou Wang
6a41e2dc47 Add BC mechanism to Module.load_state_dict (#6639)
* Add a version counter to Module; change load_state_dict to use load_local_state_dict, which does class-specific loading

* Clarifies version number in docs

* fix jit tests

* fix state_dict tests

* typo

* fix ddp

* exclude version numbers from state dict entries

* Fix jit test and empty modules

* address comments

* test for "."

* revert the private version change in state_dict

* make IN case a hard error

* fix not reporting error when unexpected submodule

* address comments

* disallow empty string in name and remove trailing dot
2018-04-19 15:36:30 -04:00
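A hedged sketch of the BC mechanism (the hook landed as `_load_from_state_dict`; the key names below are illustrative): a class bumps its `_version`, and its class-specific load hook migrates old checkpoints using the version stored in `local_metadata`.

```python
import torch.nn as nn

class MyModule(nn.Module):
    _version = 2

    def _load_from_state_dict(self, state_dict, prefix, local_metadata,
                              strict, missing_keys, unexpected_keys, error_msgs):
        if local_metadata.get('version', 1) < 2:
            old_key = prefix + 'gamma'   # hypothetical legacy key name
            if old_key in state_dict:
                state_dict[prefix + 'weight'] = state_dict.pop(old_key)
        super(MyModule, self)._load_from_state_dict(
            state_dict, prefix, local_metadata, strict,
            missing_keys, unexpected_keys, error_msgs)
```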
Tongzhou Wang
de9bdf1d31
Module.to doc update and example format update (#6774) 2018-04-19 13:30:40 -04:00
Tongzhou Wang
354dac9769
updates module.to doc for the new tensor.to(requires_grad) (#6733) 2018-04-18 18:42:15 -04:00
Tongzhou Wang
1c01eabd3c
Codemod to update our codebase to 0.4 standard (#6641)
* Codemod to update our codebase to 0.4 standard

* Update some of the test scripts

* remove Variable in test_clip_grad_value

* fix _symbolic_override_wrapper_maker
2018-04-17 22:06:54 -04:00
Tongzhou Wang
0e93a2c334
Add Module.to (#6629) 2018-04-16 17:46:52 -04:00
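A minimal sketch of the new method: `Module.to` casts/moves all parameters and buffers in place and returns the module itself.

```python
import torch
import torch.nn as nn

net = nn.Linear(3, 3)
net.to(torch.float64)                 # cast parameters and buffers
if torch.cuda.is_available():
    net.to(torch.device('cuda'))      # move them to the GPU
```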
Kento NOZAWA
3b58b859b2 Fix typos in docs (#6389) 2018-04-07 12:41:15 -04:00
Kaiyu Shi
605307f8f3 Add support for printing extra information in Module and refactor redundant codes (#5936)
This PR enables users to print extra information of their subclassed nn.Module.
For now, I simply insert the user-defined string at the end of the module name; the exact placement is open for discussion in this PR.

Before this PR, users had to redefine __repr__ and copy-and-paste the source code from Module.

* Add support for extra information on Module

* Rewrite the repr method of Module

* Fix flake8

* Change the __repr__ to get_extra_repr in Linear

* Fix extra new-line for empty line

* Add test for __repr__ method

* Fix bug of block string indent

* Add indent for multi-line repr test.

* Address review comments

* Update tutorial for creating nn.Module

* Fix flake8, add extra_repr of bilinear

* Refactor DropoutNd

* Change to extra_repr in some Modules

* Fix flake8

* Refactor padding modules

* Refactor pooling module

* Fix typo

* Change to extra_repr

* Fix bug for GroupNorm

* Fix bug for LayerNorm
2018-04-02 13:52:33 -04:00
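A sketch of the hook this PR adds: subclasses override `extra_repr()`, and `Module.__repr__` splices the returned string into the printed module tree.

```python
import torch.nn as nn

class Scale(nn.Module):
    def __init__(self, factor):
        super(Scale, self).__init__()
        self.factor = factor

    def extra_repr(self):
        # String returned here is shown inside the parentheses.
        return 'factor={}'.format(self.factor)

print(Scale(2.0))  # Scale(factor=2.0)
```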
Tongzhou Wang
b21e135ab8 Add class-specific error when key mismatch in load_state_dict (#6086) 2018-03-29 12:22:23 +02:00
Tongzhou Wang
261dd6ea83 fix named_modules doc, clarify eval doc (#5691) 2018-03-10 17:35:07 -05:00
Kaiyu Shi
248c93372d Check value type for register_buffer (#5657)
* Check value type when registering buffer

* Fix PEP8

* Use isinstance in favor of is_tensor
2018-03-10 13:02:04 +01:00
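The added check in action: `register_buffer` rejects values that are not tensors (None stays allowed).

```python
import torch
import torch.nn as nn

m = nn.Module()
m.register_buffer('running_mean', torch.zeros(3))  # ok
try:
    m.register_buffer('count', 0)                   # not a Tensor
except TypeError as e:
    print(e)
```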
Tongzhou Wang
57c7d132c9 Fix nn.Module.apply doc formatting (#5623)
* fix nn.Module.apply doc example

* fix other examples' double colons and newlines
2018-03-08 22:26:01 -05:00
anderspapitto
b9cc035654 import torch.jit in torch/__init__.py (#5638)
Previously, it was being implicitly imported via the import of
torch.onnx.

This is no longer the case, and it was a hacky thing to depend on anyway,
so import it explicitly.
2018-03-08 22:17:47 -05:00
Vishwak Srinivasan
32b3841553 [ready] General documentation improvements (#5450)
* Improve documentation
1. Add formula for erf, erfinv
2. Make exp, expm1 similar to log, log1p
3. Symbol change in ge, le, ne, isnan

* Fix minor nit in the docstring

* More doc improvements
1. Added some formulae
2. Complete scanning till "Other Operations" in Tensor docs

* Add more changes
1. Modify all torch.Tensor wherever required

* Fix Conv docs
1. Fix minor nits in the references for LAPACK routines

* Improve Pooling docs
1. Fix lint error

* Improve docs for RNN, Normalization and Padding
1. Fix flake8 error for pooling

* Final fixes for torch.nn.* docs.
1. Improve Loss Function documentation
2. Improve Vision Layers documentation

* Fix lint error

* Improve docstrings in torch.nn.init

* Fix lint error

* Fix minor error in torch.nn.init.sparse

* Fix Activation and Utils Docs
1. Fix Math Errors
2. Add explicit clean to Makefile in docs to prevent running graph generation script
while cleaning
3. Fix utils docs

* Make PYCMD a Makefile argument, clear up prints in the build_activation_images.py

* Fix batch norm doc error
2018-03-08 13:21:12 -05:00
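For reference, the standard definition of erf, the formula added to these docs:

```latex
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2}} \, dt
```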
anderspapitto
28b1c94f0f allow application of @symbolic decorators without circular imports (#5595) 2018-03-08 12:44:16 -05:00
Tongzhou Wang
27265503ad nn.* doc update after Variable/Tensor merge (#5459)
The nn.* counterpart of #5443. Mostly removed the Variable wrapper. Also added docs for nn.RReLU.

Notice that torch.randn(*, requires_grad=True) isn't documented until #5462 is done.
2018-03-01 18:11:39 -05:00
Adam Paszke
4afd62db09 Add TracedModule to the JIT (#5409) 2018-02-28 22:50:50 -08:00
Sam Gross
30ec06c140
Merge Variable and Tensor classes (#5225)
This replaces the torch.Tensor constructors with factories that produce
Variables. Similarly, functions on the torch module (e.g. torch.randn)
now return Variables.

To keep the PR to a reasonable size, I've left most of the unused tensor
code. Subsequent PRs will remove the dead code, clean-up calls to
torch.autograd.Variable, and rename Variable to Tensor everywhere.

There are some breaking changes because Variable and Tensors had
slightly different semantics. There's a list of those changes here:

 https://github.com/pytorch/pytorch/wiki/Breaking-Changes-from-Variable-and-Tensor-merge
2018-02-23 18:03:31 -05:00
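A minimal sketch of what the merge means in practice: factories like `torch.randn` return Tensors that carry autograd state directly, with no Variable wrapper.

```python
import torch

x = torch.randn(2, 2, requires_grad=True)
y = (x * x).sum()
y.backward()
print(x.grad)  # populated without ever touching torch.autograd.Variable
```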
Sam Gross
d605058212
Replace Variable.volatile with torch.no_grad() (#3970)
This removes volatile from Variable. The functionality is mostly
replaced by a global (thread-local) flag, which is controlled by
torch.set_grad_enabled() and the context manager torch.no_grad().

In C++, the flag is exposed through GradMode::is_enabled() and GradMode::set_enabled()

Fixes #3627
2017-12-18 15:46:13 -05:00
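A sketch of the replacement: grad mode is a thread-local flag toggled by the `torch.no_grad()` context manager or `torch.set_grad_enabled()`.

```python
import torch

x = torch.ones(3, requires_grad=True)
with torch.no_grad():
    y = x * 2          # computed with grad mode disabled
print(y.requires_grad)  # False
torch.set_grad_enabled(True)  # the explicit global toggle
```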
Richard Zou
43dd6319db Exclude attrs with invalid python variable names from __dir__ (#4011) 2017-12-18 02:19:55 -05:00
Luca Antiga
4eb8e12765 Introduce scopes during tracing (#3016) 2017-12-04 09:19:06 -08:00
Luca Antiga
af58bfbb1b Make integer parameters and buffers immune to float(), double() and half() (#3820)
* Avoid casting integer params and buffers to float(), double() and half()

* Add test for immune integer buffers

* Fix documentation for float(), double() and half()

* Fix test
2017-11-22 18:34:53 -05:00
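The fixed behavior, sketched: `float()`, `double()`, and `half()` cast only floating-point parameters and buffers, so integer ones keep their dtype.

```python
import torch
import torch.nn as nn

m = nn.Module()
m.register_buffer('num_batches', torch.zeros(1, dtype=torch.int64))
m.float()
print(m.num_batches.dtype)  # torch.int64, untouched by the cast
```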
rluo
efe4386d24 Fix module load_state_dict error information. 2017-11-10 22:11:30 +01:00
Ozan Çağlayan
cc757acd36 docs: clarify the difference between net() and net.forward() (#3596) 2017-11-09 08:16:01 -05:00
Ozan Çağlayan
dd6d04ddf2 doc: Normalize all true/false in docstrings to `True|False` (#3593)
* doc: Normalize all true/false in docstrings to ``True|False``

This makes them more apparent in the documentation.

* doc: fix flake8
2017-11-09 08:12:29 -05:00
Richard Zou
eac0942f6d Add more nn docs (#3374) 2017-10-30 18:37:36 -04:00
vfdev
acb73c729b Space is missing in __repr__ of conv (#3229)
* Remove spaces in `__repr__` of layers

* Replace `size` with `kernel_size` in `__repr__` of a pooling layer

* Fix flake8 errors
2017-10-30 13:45:37 -04:00
SsnL
de1f4e69dd raw text (#3327) 2017-10-28 01:24:02 +05:30
andreh7
b46ced4aab clarification in docstring of Module.register_forward_hook() (#3279)
* made it explicit in the docstring of Module.register_forward_hook() that the hook(s) will be called AFTER calling forward().

* added "every time" in docstring of Module.register_forward_pre_hook()
2017-10-25 15:36:00 +02:00
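The clarified semantics, in a minimal sketch: pre-hooks fire before every call to forward(), and forward hooks fire after forward() has produced its output.

```python
import torch
import torch.nn as nn

m = nn.Linear(2, 2)
m.register_forward_pre_hook(lambda mod, inp: print('before forward'))
m.register_forward_hook(lambda mod, inp, out: print('after forward'))
m(torch.randn(1, 2))
```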
Sam Gross
8e58135a26 Fix E722 ('do not use bare except') (#3239)
The new version of flake8 includes a check for not using bare except. We
should avoid this since it catches things like KeyboardInterrupt.
2017-10-23 23:03:37 -04:00
Alykhan Tejani
95556f4075 add ignored_keys param to load_state_dict (#3159)
* add ignored_keys param to load_state_dict

* remove ignored_keys in favour of a strict param

* raise KeyError only if strict is enabled
2017-10-18 14:14:19 +02:00
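A sketch of the `strict` parameter that replaced the proposed `ignored_keys`: with `strict=False`, missing and unexpected keys are skipped instead of raising an error.

```python
import torch.nn as nn

m = nn.Linear(3, 3)
state = m.state_dict()
state['extra_key'] = state['weight'].clone()   # key the module doesn't have
m.load_state_dict(state, strict=False)         # 'extra_key' is silently ignored
```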
SsnL
fce3ed19e5 Change device_id to device in python land (#3133)
* change device_id to device in python land

* cuda/random.py
2017-10-17 00:54:26 +02:00
SsnL
828048f578 Add document on how Module.cuda() and optims should work together (#3056) 2017-10-10 22:55:23 -04:00
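The documented ordering, sketched: move the module to the GPU before constructing the optimizer, so the optimizer holds references to the CUDA parameters.

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(3, 3)
if torch.cuda.is_available():
    model.cuda()                                  # move first
optimizer = optim.SGD(model.parameters(), lr=0.1)  # then build the optimizer
```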
Mark Neumann
a64daf2c59 support dictionary return types in nn.Module's __call__ (#2037) 2017-10-01 20:33:03 -04:00
Edward Z. Yang
63c835bbe7 Add keep_vars parameter to state_dict.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
2017-09-05 17:48:55 -04:00
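A minimal sketch of the new parameter: `keep_vars=True` returns the live variables (with grad history intact) instead of detached tensor data.

```python
import torch.nn as nn

m = nn.Linear(2, 2)
sd = m.state_dict(keep_vars=True)
print(sd['weight'].requires_grad)  # True; False under the default keep_vars=False
```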
Alykhan Tejani
c5a8a59116 raise KeyError if registering buffer/param when attr exists (#2108) 2017-09-01 14:08:49 -04:00
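The added guard, sketched: registering a buffer or parameter under a name that already exists as an attribute raises KeyError.

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.stats = torch.zeros(2)                    # plain attribute
        self.register_buffer('stats', torch.ones(2))   # name clash -> KeyError

try:
    M()
except KeyError as e:
    print(e)
```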
Zhou Mo
2c07f88ea3 Fix typos. 2017-08-25 14:27:07 -04:00
Gregory Chanan
50c208a50b Revert "Fix typos."
This reverts commit 4622b33952.
2017-08-10 13:57:00 -04:00
Luca Antiga
1ac98b1bce Add documentation for apply (#2327) 2017-08-08 21:53:26 -04:00
Zhou Mo
4622b33952 Fix typos. 2017-08-08 11:05:38 -04:00
Kaiyu Shi
4a4d8841e6 Delete unused import 2017-07-23 12:48:11 -04:00
greaber
95ccbf8b0b better error message in load_state_dict when there are inconsistent tensor sizes (#2151) 2017-07-19 15:50:29 -04:00
Tzu-Wei Huang
c011d4f3d6 resolves #1991 (#2073) 2017-07-13 09:57:33 -04:00
Sam Gross
10e23943b3 Fix missing _forward_pre_hooks in serialized modules (#2057) 2017-07-11 18:23:35 -04:00
Sam Gross
2c038f2074 Add weight normalization implementation (#1945)
* Add weight normalization implementation

This adds forward "pre-hooks" which get called before the module's
forward() method. Weight norm is implemented as a hook which calculates
the weight variable from the weight_g and weight_v every iteration.

Based on @rtqichen implementation.

* Specify return type
2017-06-30 15:41:40 -04:00
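A minimal sketch of the mechanism this PR introduces: a forward pre-hook recomputes `weight` from `weight_g` (magnitude) and `weight_v` (direction) before every forward call.

```python
import torch.nn as nn
from torch.nn.utils import weight_norm

m = weight_norm(nn.Linear(4, 4))
print(hasattr(m, 'weight_g'), hasattr(m, 'weight_v'))  # True True
```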
Adam Paszke
23ab9d481a Add Module._all_buffers 2017-06-12 21:58:38 -04:00
Matt Dering
d1a4467682 fix a bug when calling modules
Calling a module that returns a non-standard data structure currently
breaks due to checks for backward hooks. This refactors the code slightly
so it only breaks when backward hooks are actually present.
2017-05-11 23:00:45 +02:00
Adam Paszke
feef54ec34 Don't modify non-volatile grads in zero_grad 2017-05-10 16:43:14 +02:00