Commit Graph

27 Commits

Author SHA1 Message Date
David Riazati
10c4b98ade Remove weak script (#22212)
Summary:
* Deletes all weak script decorators / associated data structures / methods
   * In order to keep supporting the standard library in script, this enables recursive script on any function defined in `torch.nn`
* Most changes in `torch/nn` are the result of `ag -Q "weak" torch/nn/ -l | xargs sed -i '/weak/d'`; only `rnn.py` needed manual editing to use the `ignore` and `export` decorators to continue supporting the overloaded `forward` methods (see the sketch after this entry)
* `Sequential`/`ModuleList` no longer need to be added to constants since they are compiled on demand

This should also fix https://github.com/pytorch/pytorch/issues/22212
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22212

Differential Revision: D15988346

Pulled By: driazati

fbshipit-source-id: af223e3ad0580be895377312949997a70e988e4f
2019-07-03 17:28:25 -07:00
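Below is a minimal sketch of the `ignore`/`export` pattern the commit above leans on, using the post-1.2 recursive-scripting API (`torch.jit.script` on an `nn.Module` instance); the module and method names are illustrative assumptions, not code from the PR:

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def forward(self, x):
        # compiled recursively by torch.jit.script, no decorators needed
        return x + 1

    @torch.jit.export
    def double(self, x):
        # @torch.jit.export forces compilation even though forward never calls it
        return x * 2

    @torch.jit.ignore
    def debug_only(self, x):
        # @torch.jit.ignore leaves this as a plain Python call, invisible to the compiler
        print(x.shape)

scripted = torch.jit.script(MyModule())
print(scripted.double(torch.ones(2)))  # tensor([2., 2.])
```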
davidriazati
61cc03fb8d Make ScriptModule.training an attribute instead of a parameter (#21078)
Summary:
Redo of #19587
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21078

Pulled By: driazati

Differential Revision: D15560540

fbshipit-source-id: f415775d87c163f93b3bbdd5f87c9ff73f58b049
2019-06-06 12:06:49 -07:00
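A hedged illustration of the behavior this commit lands (my sketch, using the modern recursive-script API for brevity): `training` becomes a plain bool attribute on the scripted module, so `train()`/`eval()` toggle it directly and it can drive control flow inside script.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class M(nn.Module):
    def forward(self, x):
        # self.training is a plain bool attribute after this change,
        # not a parameter or buffer
        if self.training:
            return F.dropout(x, p=0.5)
        return x

m = torch.jit.script(M())
m.eval()           # flips the attribute directly
print(m.training)  # False
```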
David Riazati
fa8c132e24 Revert D15502768: [pytorch][PR] [jit] Make ScriptModule.training an attribute instead of a parameter
Differential Revision: D15502768

Original commit changeset: 3022f2d57ec6

fbshipit-source-id: 5cd08d3c3a75e38e3aa9b75a0c0059a2c6c85a1e
2019-05-29 12:18:18 -07:00
David Riazati
28079c3906 Make ScriptModule.training an attribute instead of a parameter (#19587)
Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#19587 [jit] Make ScriptModule.training an attribute instead of a parameter**

Remove the hack we had previously where `training` was a buffer
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19587

Differential Revision: D15502768

Pulled By: driazati

fbshipit-source-id: 3022f2d57ec6849868f9225d9bc2bfb7828cb318
2019-05-28 16:06:46 -07:00
Zachary DeVito
6cb1b994d8 Trace directly into first-class module form. (#19722)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19722
ghimport-source-id: b024666feccb324f5ba9aae4a6301723e04d9846

Reviewed By: jamesr66a

Differential Revision: D15078535

Pulled By: zdevito

fbshipit-source-id: b866b31c1864a090c545560cbecee81e34ad2d16
2019-04-25 15:53:03 -07:00
Zachary DeVito
87a6974193 Make it possible for self.forward to return a ScriptMethod (#19217)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19217
ghimport-source-id: 6fdd7f5ac041dae950b47ca316f30682ede0b083

Reviewed By: suo

Differential Revision: D14922120

Pulled By: zdevito

fbshipit-source-id: 5e82e5d7ee72df6f401146d2519c80ea336ff40e
2019-04-24 11:14:34 -07:00
Shen Li
344acaa0ca Revert replicate.py to disallow replicating multi-device modules (#19278)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19278

Based on discussion in https://github.com/pytorch/pytorch/pull/19278 and https://github.com/pytorch/pytorch/pull/18687, changes to replicate.py will be reverted to disallow replicating multi-device modules.

Reviewed By: pietern

Differential Revision: D14940018

fbshipit-source-id: 7504c0f4325c2639264c52dcbb499e61c9ad2c26
2019-04-16 10:03:38 -07:00
Shen Li
7ae0263e1b Support replicating multi-GPU modules (#18687)
Summary:
If the input `network` resides on multiple GPUs, `devices` must be a 2D list with `devices[0]` matching `network`'s devices (sketched after this entry). See #18591
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18687

Differential Revision: D14706162

Pulled By: mrshenli

fbshipit-source-id: dca630d3308f2dbcf8b75629c452d7a64092ba42
2019-04-03 14:43:07 -07:00
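A hypothetical sketch of the 2D `devices` layout described above; note that this interface was reverted by commit 344acaa0ca higher in this log, and the four-GPU setup and module layout here are assumptions:

```python
import torch.nn as nn
from torch.nn.parallel import replicate

# assumed: a module whose two halves live on cuda:0 and cuda:1
network = nn.Sequential(
    nn.Linear(8, 8).to("cuda:0"),
    nn.Linear(8, 8).to("cuda:1"),
)

# devices[0] must match the devices the network already occupies;
# each following row gives the target devices for one replica
devices = [[0, 1],
           [2, 3]]
replicas = replicate(network, devices)
```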
David Riazati
a2381fa346 Add module attributes (#17309)
Summary:
Similar to `nn.Parameter`s, this PR lets you store any `IValue` on a module as an attribute on a `ScriptModule` (only from the Python front-end currently). To mark something as an attribute, it should wrapped in `jit.Attribute(value, type)` (ex. `self.table = torch.jit.Attribute(table, Dict[str, torch.Tensor])`)

Followup Work:
* (de)serializing for use in C++
* change `self.training` to be a `bool` attribute instead of a buffer
* mutable attributes
* string frontend support
* documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17309

Differential Revision: D14354316

Pulled By: driazati

fbshipit-source-id: 67e08ab5229366b67fbc837e67b58831a4fb3318
2019-03-07 10:44:10 -08:00
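A minimal, era-appropriate sketch of the `jit.Attribute` interface this commit introduces; the `Vocab` module and its contents are illustrative assumptions:

```python
import torch
from typing import Dict

class Vocab(torch.jit.ScriptModule):
    def __init__(self, table):
        super().__init__()
        # wraps an arbitrary IValue and pins its TorchScript type
        self.table = torch.jit.Attribute(table, Dict[str, int])

    @torch.jit.script_method
    def lookup(self, word: str) -> int:
        return self.table[word]

v = Vocab({"hello": 0, "world": 1})
print(v.lookup("world"))  # 1
```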
Tongzhou Wang
44a607b90c Fix autograd with buffers requiring grad in DataParallel (#13352)
Summary:
This was causing a problem with spectral norm, although spectral norm won't rely on that behavior anymore after #13350.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13352

Differential Revision: D14209562

Pulled By: ezyang

fbshipit-source-id: f5e3183e1e7050ac5a66d203de6f8cf56e775134
2019-02-26 20:53:19 -08:00
Guoqiang Jerry Chen
678a472ee5 Script module data parallel (#16891)
Summary:
Support data parallel for ScriptModule.

See the unit tests for the testing done for this PR. I also tried a traced version of resnet18 from torchvision.

I have yet to try complete end-to-end data parallel training; that will be a next step.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16891

Differential Revision: D14002222

Pulled By: gqchen

fbshipit-source-id: fce3598169113215599815c6978e66d3c3a8c282
2019-02-14 22:52:19 -08:00
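A sketch of the usage the commit message describes, assuming a machine with two visible GPUs; the traced resnet18 mirrors the author's own test:

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18().cuda()
example = torch.randn(2, 3, 224, 224).cuda()
traced = torch.jit.trace(model, example)

# with this change, a ScriptModule can be replicated like an ordinary module
parallel = nn.DataParallel(traced, device_ids=[0, 1])
out = parallel(torch.randn(8, 3, 224, 224).cuda())
print(out.shape)  # torch.Size([8, 1000])
```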
Wei Yang
54107ae8cf convert output_device at data_parallel from torch.device to index (#10189)
Summary:
- fixes #9984
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10189

Differential Revision: D9545390

Pulled By: weiyangfb

fbshipit-source-id: 3a6a705437553ba319e9fd4b7f676ff73857a27e
2018-09-11 20:27:07 -07:00
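A sketch of what the fix enables, assuming two GPUs: `output_device` may now be passed as a `torch.device`, which is converted to an index internally.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 10).cuda()
inputs = torch.randn(4, 10).cuda()

# previously this required an integer index like 0;
# after the fix a torch.device is accepted as well
out = nn.parallel.data_parallel(
    model, inputs,
    device_ids=[0, 1],
    output_device=torch.device("cuda:0"),
)
```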
Jerry Ma
afd7477eaa Add `buffers(), named_buffers()` methods. (#10554)
Summary:
This commit adds the `buffers()` and `named_buffers()` methods as
analogues of `parameters()` and `named_parameters()`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10554

Reviewed By: SsnL

Differential Revision: D9367762

Pulled By: jma127

fbshipit-source-id: f2042e46a7e833dce40cb41681dbd80d7885c74e
2018-08-16 16:26:48 -07:00
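A quick illustration of the new iterators (my example, not from the commit): batch norm's running statistics are registered buffers, so they show up here but not in `parameters()`.

```python
import torch.nn as nn

bn = nn.BatchNorm1d(4)
for name, buf in bn.named_buffers():
    print(name, tuple(buf.shape))
# running_mean (4,)
# running_var (4,)
# num_batches_tracked ()
```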
Ailing
f5aa8d55ad fix detach in place error in DDP (#5829)
* fix detach in DDP

* fix typo

* make lint happy
2018-03-16 09:22:04 -04:00
gchanan
22ec5f37ca Support double backwards with parallel nn autograd functions. (#2508) 2017-08-22 03:57:45 -04:00
Sam Gross
b4414c0dc3 Handle None in modules list.
It's often useful to add None to an nn.ModuleList so that the module
list's indices stay aligned with some other property.
2017-07-03 18:53:21 -04:00
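A small sketch of the pattern the message alludes to (the residual-style layout is my assumption): a `None` slot keeps each shortcut aligned with its layer index.

```python
import torch.nn as nn

layers = nn.ModuleList([nn.Linear(8, 8), nn.Linear(8, 16)])
# shortcuts[i] pairs with layers[i]; None marks "no projection needed"
shortcuts = nn.ModuleList([None, nn.Linear(8, 16)])
```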
Adam Paszke
23ab9d481a Add Module._all_buffers 2017-06-12 21:58:38 -04:00
Sam Gross
65b66264d4 Improve broadcast/reduce performance by coalescing tensors 2017-03-06 12:47:53 -08:00
Adam Paszke
d6fa3b3fd5 Deprecate nn.Container in favor of nn.Module 2017-01-16 19:07:37 -05:00
Adam Paszke
8d60e39fdc Rename torch.nn.functions to torch.nn._functions 2016-12-30 23:02:57 +01:00
Adam Paszke
aa8916e7c6 Don't unpack single element tuples returned by functions 2016-11-23 18:48:41 +01:00
Adam Paszke
80a827d3da Fix data_parallel bugs 2016-11-23 18:48:41 +01:00
Sam Gross
15377ac391 Copy Module._buffers in nn.parallel.replicate (#180) 2016-10-31 12:12:29 -04:00
Sam Gross
f4ebc65a12 Add Module.modules() and Module.children() (#90)
modules(): returns an iterator over all modules in the network
children(): returns an iterator over immediate children

Also fix __getitem__ in Sequential
2016-10-01 21:18:53 -04:00
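A short sketch of the two iterators on an assumed toy network:

```python
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(4, 8),
    nn.Sequential(nn.ReLU(), nn.Linear(8, 2)),
)

print(len(list(net.children())))  # 2: only the immediate submodules
print(len(list(net.modules())))   # 5: net itself plus every nested module
```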
Sam Gross
e8a5f00866 Auto GPU for CUNN (#71) 2016-09-30 14:04:53 -04:00
Soumith Chintala
412019dbe4 fixing CPU builds by making cuda imports optional 2016-09-28 11:56:18 -04:00
Adam Paszke
3eac7164f4 Add data parallel functions to nn 2016-09-27 15:45:45 -07:00