Commit Graph

17 Commits

Guoqiang Jerry Chen
678a472ee5 Script module data parallel (#16891)
Summary:
Support data parallel for ScriptModule.

See the unit tests for the testing done for this PR. I also tried a traced version of resnet18 from torchvision.

I have not yet tried complete end-to-end data parallel training; that is the next step.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16891

Differential Revision: D14002222

Pulled By: gqchen

fbshipit-source-id: fce3598169113215599815c6978e66d3c3a8c282
2019-02-14 22:52:19 -08:00
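A minimal sketch of what this commit enables: wrapping a traced ScriptModule in nn.DataParallel. This is a hedged usage sketch, assuming a machine with at least two CUDA devices; shapes and the model choice are illustrative.

```python
import torch
import torch.nn as nn
import torchvision

# Trace resnet18 into a ScriptModule, then run it data-parallel.
# Assumes at least two CUDA devices are available.
model = torchvision.models.resnet18().cuda().eval()
example = torch.randn(1, 3, 224, 224, device="cuda")
traced = torch.jit.trace(model, example)

parallel = nn.DataParallel(traced, device_ids=[0, 1])
out = parallel(torch.randn(8, 3, 224, 224, device="cuda:0"))
print(out.shape)  # torch.Size([8, 1000])
```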
Wei Yang
54107ae8cf convert output_device at data_parallel from torch.device to index (#10189)
Summary:
- fixes #9984
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10189

Differential Revision: D9545390

Pulled By: weiyangfb

fbshipit-source-id: 3a6a705437553ba319e9fd4b7f676ff73857a27e
2018-09-11 20:27:07 -07:00
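A sketch of the call this fix makes work: passing a torch.device as output_device to the functional data_parallel, where previously an integer index was required. Assumes two CUDA devices.

```python
import torch
import torch.nn as nn
from torch.nn.parallel import data_parallel

module = nn.Linear(10, 5).cuda()
inputs = torch.randn(8, 10, device="cuda:0")

# output_device given as a torch.device is now converted to a device
# index internally; the plain int form (output_device=0) still works.
out = data_parallel(module, inputs,
                    device_ids=[0, 1],
                    output_device=torch.device("cuda:0"))
```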
Jerry Ma
afd7477eaa Add `buffers(), named_buffers()` methods. (#10554)
Summary:
This commit adds the ``buffers()`` and ``named_buffers()`` methods as
analogues of ``parameters()`` and ``named_parameters()``.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10554

Reviewed By: SsnL

Differential Revision: D9367762

Pulled By: jma127

fbshipit-source-id: f2042e46a7e833dce40cb41681dbd80d7885c74e
2018-08-16 16:26:48 -07:00
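A quick sketch of the new API. Buffers are module state that is not a parameter, e.g. BatchNorm's running statistics:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.BatchNorm1d(4))

# named_buffers() mirrors named_parameters(), yielding (name, tensor) pairs.
for name, buf in model.named_buffers():
    print(name, tuple(buf.shape))
# e.g. "1.running_mean (4,)" and "1.running_var (4,)"

# buffers() mirrors parameters(), yielding the tensors alone.
num_buffers = sum(1 for _ in model.buffers())
```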
Ailing
f5aa8d55ad fix detach in place error in DDP (#5829)
* fix detach in DDP

* fix typo

* make lint happy
2018-03-16 09:22:04 -04:00
gchanan
22ec5f37ca Support double backwards with parallel nn autograd functions. (#2508) 2017-08-22 03:57:45 -04:00
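This lets patterns that differentiate through a first backward pass, such as gradient penalties, work under DataParallel. A hedged sketch of the pattern in current API terms (assumes two CUDA devices):

```python
import torch
import torch.nn as nn

model = nn.DataParallel(nn.Linear(10, 1).cuda(), device_ids=[0, 1])
x = torch.randn(8, 10, device="cuda:0", requires_grad=True)

out = model(x).sum()
# First backward, keeping the graph so we can differentiate again.
grad, = torch.autograd.grad(out, x, create_graph=True)
penalty = grad.pow(2).sum()
penalty.backward()  # second (double) backward through broadcast/gather
```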
Sam Gross
b4414c0dc3 Handle None in modules list.
It's often useful to add None to an nn.ModuleList so that the
list's indices match some other property.
2017-07-03 18:53:21 -04:00
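For example (a sketch; None entries are simply skipped when iterating):

```python
import torch.nn as nn

# A None entry keeps indices aligned with, say, network stages that
# have no extra module at that position.
layers = nn.ModuleList([nn.Linear(8, 8), None, nn.Linear(8, 8)])

def apply_layers(x):
    for layer in layers:
        if layer is not None:  # index intentionally left empty
            x = layer(x)
    return x
```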
Adam Paszke
23ab9d481a Add Module._all_buffers 2017-06-12 21:58:38 -04:00
Sam Gross
65b66264d4 Improve broadcast/reduce performance by coalescing tensors 2017-03-06 12:47:53 -08:00
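The coalesced broadcast is exposed as torch.cuda.comm.broadcast_coalesced; a sketch of its use (assumes two CUDA devices):

```python
import torch
from torch.cuda import comm

# Many small tensors are flattened into a few large contiguous buffers
# before the copy, so the transfer is a handful of big memcpys instead
# of one launch per tensor.
tensors = [torch.randn(128, device="cuda:0") for _ in range(100)]

copies = comm.broadcast_coalesced(tensors, devices=[0, 1],
                                  buffer_size=10 * 1024 * 1024)
# copies[1] holds the 100 tensors materialized on cuda:1
```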
Adam Paszke
d6fa3b3fd5 Deprecate nn.Container in favor of nn.Module 2017-01-16 19:07:37 -05:00
Adam Paszke
8d60e39fdc Rename torch.nn.functions to torch.nn._functions 2016-12-30 23:02:57 +01:00
Adam Paszke
aa8916e7c6 Don't unpack single element tuples returned by functions 2016-11-23 18:48:41 +01:00
Adam Paszke
80a827d3da Fix data_parallel bugs 2016-11-23 18:48:41 +01:00
Sam Gross
15377ac391 Copy Module._buffers in nn.parallel.replicate (#180) 2016-10-31 12:12:29 -04:00
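A sketch of the effect: after replicate(), each replica carries its own copy of the buffers as well as the parameters (assumes two CUDA devices).

```python
import torch.nn as nn
from torch.nn.parallel import replicate

bn = nn.BatchNorm1d(4).cuda()
replicas = replicate(bn, devices=[0, 1])

# running_mean / running_var live in Module._buffers and are now
# copied to each device along with the parameters.
print(replicas[1].running_mean.device)  # cuda:1
```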
Sam Gross
f4ebc65a12 Add Module.modules() and Module.children() (#90)
modules(): returns an iterator over all modules in the network
children(): returns an iterator over immediate children

Also fix __getitem__ in Sequential
2016-10-01 21:18:53 -04:00
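The distinction in a short sketch:

```python
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 2),
                    nn.Sequential(nn.ReLU(), nn.Linear(2, 2)))

# children(): immediate submodules only.
print([type(m).__name__ for m in net.children()])
# ['Linear', 'Sequential']

# modules(): the network itself, then every descendant, depth-first.
print([type(m).__name__ for m in net.modules()])
# ['Sequential', 'Linear', 'Sequential', 'ReLU', 'Linear']
```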
Sam Gross
e8a5f00866 Auto GPU for CUNN (#71) 2016-09-30 14:04:53 -04:00
Soumith Chintala
412019dbe4 fixing CPU builds by making CUDA imports optional 2016-09-28 11:56:18 -04:00
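The general guard pattern behind this change, sketched with illustrative names (not the exact code from the commit): attempt the CUDA-dependent import and degrade gracefully on CPU-only builds.

```python
# Illustrative sketch only; the helper name is hypothetical.
try:
    from torch.cuda import comm  # CUDA helpers; may be absent on CPU builds
    _cuda_available = True
except ImportError:
    comm = None
    _cuda_available = False

def broadcast_or_fail(tensor, devices):
    if not _cuda_available:
        raise RuntimeError("this build was compiled without CUDA support")
    return comm.broadcast(tensor, devices)
```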
Adam Paszke
3eac7164f4 Add data parallel functions to nn 2016-09-27 15:45:45 -07:00
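These are the building blocks that the later functional data_parallel and nn.DataParallel compose. A sketch of the pipeline (assumes two CUDA devices):

```python
import torch
import torch.nn as nn
from torch.nn.parallel import replicate, scatter, parallel_apply, gather

module = nn.Linear(10, 5).cuda()
inputs = torch.randn(8, 10, device="cuda:0")
device_ids = [0, 1]

replicas = replicate(module, device_ids)    # one copy of the module per GPU
chunks = scatter(inputs, device_ids)        # split the batch along dim 0
outputs = parallel_apply(replicas, chunks)  # run each replica on its chunk
result = gather(outputs, 0)                 # concatenate outputs on device 0
```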