pytorch/torch/nn
Latest commit: c11412b4a8 by Rohan Varma, 2023-07-29 01:36:25 +00:00
[DDP] Support optim in backward after DDP init (#105995)

This allows in-backward optimizers to be configured after DDP init, in addition to before, as was previously supported.

Differential Revision: D47783347 (https://our.internmc.facebook.com/intern/diff/D47783347/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105995
Approved by: https://github.com/fegin
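The behavior this commit enables can be sketched as follows: wrap the model in DDP first, then register an in-backward optimizer on its parameters. This is a minimal single-process illustration, assuming a "gloo" backend and the internal, underscore-prefixed `torch.distributed.optim._apply_optimizer_in_backward` helper, whose API may change between releases; it is not the PR's own test code.

```python
import os
import torch
import torch.distributed as dist
from torch.distributed.optim import _apply_optimizer_in_backward
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process process group purely for illustration.
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

model = torch.nn.Linear(4, 4)
ddp_model = DDP(model)  # DDP is initialized first ...

# ... and the in-backward optimizer is registered afterwards, which this
# PR makes supported (previously it had to happen before DDP wrapping).
_apply_optimizer_in_backward(
    torch.optim.SGD,
    ddp_model.parameters(),
    optimizer_kwargs={"lr": 0.01},
)

out = ddp_model(torch.randn(2, 4)).sum()
out.backward()  # optimizer steps run inside backward, per parameter
```

With an in-backward optimizer, there is no separate `optimizer.step()` call after `backward()`; each parameter is updated as its gradient becomes ready during the backward pass.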
Name               Latest commit message                                                            Date
backends           DOC: Various typo fixes (#97095)                                                 2023-03-20 20:46:04 +00:00
intrinsic          ao migration: fix broken import, try 2 (#94458)                                  2023-02-09 22:20:01 +00:00
modules            inductor: enable weight prepack for LSTM (#103071)                               2023-07-28 13:54:32 +00:00
parallel           [DDP] Support optim in backward after DDP init (#105995)                         2023-07-29 01:36:25 +00:00
qat
quantizable
quantized
utils              [BE]: Update Ruff to 0.0.280 (#105724)                                           2023-07-22 23:03:34 +00:00
__init__.py        Back out "Make adding buffers more like adding parameters (#104069)" (#105581)   2023-07-20 03:39:53 +00:00
_reduction.py      [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436)         2023-07-21 07:38:46 +00:00
common_types.py
cpp.py             Add non-recursive module.to_empty option (#104197)                               2023-06-26 21:47:22 +00:00
functional.py      Format: fixing multiple string concatenation in single line (#106013)            2023-07-26 18:39:18 +00:00
functional.pyi.in  Better function annotations for nn.functional (#102918)                          2023-06-16 19:48:04 +00:00
grad.py
init.py            [BE] f-stringify torch/ and scripts (#105538)                                    2023-07-21 19:35:24 +00:00
parameter.py       Back out "Make adding buffers more like adding parameters (#104069)" (#105581)   2023-07-20 03:39:53 +00:00
parameter.pyi      Back out "Make adding buffers more like adding parameters (#104069)" (#105581)   2023-07-20 03:39:53 +00:00