pytorch/torch/nn/parallel
Rohan Varma 24e5d61af8 Log usage of optimizer in backward (#110206)
This will allow us to inspect and aggregate jobs that run the optimizer in
the backward pass.

Differential Revision: [D48674740](https://our.internmc.facebook.com/intern/diff/D48674740/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110206
Approved by: https://github.com/awgu
2023-09-29 11:00:07 +00:00
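For context, here is a minimal single-process sketch of the optimizer-in-backward pattern this commit instruments: instead of one global `optimizer.step()` after `backward()`, each parameter gets its own optimizer that steps as soon as that parameter's gradient is accumulated. DDP's real integration lives in distributed.py and goes through a private helper in `torch.distributed.optim`; the toy model, sizes, and SGD settings below are illustrative only.

```python
import torch
import torch.nn as nn

# Toy model; sizes are arbitrary for the sketch.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

# One optimizer per parameter, stepped from a hook that fires right
# after that parameter's gradient has been accumulated during backward.
optimizers = {p: torch.optim.SGD([p], lr=1e-2) for p in model.parameters()}

def make_hook(opt):
    def hook(param):
        opt.step()
        opt.zero_grad()
    return hook

for p in model.parameters():
    p.register_post_accumulate_grad_hook(make_hook(optimizers[p]))

x = torch.randn(4, 8)
loss = model(x).sum()
loss.backward()  # parameters are updated during this call, one by one
# No separate optimizer.step() afterwards: the updates already happened.
```

Fusing the step into backward this way lets each gradient be consumed (and potentially freed) as soon as it is produced, which is the memory-saving behavior the new logging is meant to track across jobs.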
File               Last commit                                                    Date
__init__.py        Merge type stubs torch nn parallel (#102194)                   2023-05-26 20:10:47 +00:00
_functions.py      DDP forward support custom stream accelerated copy. (#98723)  2023-04-14 20:19:56 +00:00
comm.py            [BE] f-stringify torch/ and scripts (#105538)                  2023-07-21 19:35:24 +00:00
data_parallel.py   [BE]: Update ruff to 0.285 (#107519)                           2023-08-22 23:16:38 +00:00
distributed.py     Log usage of optimizer in backward (#110206)                   2023-09-29 11:00:07 +00:00
parallel_apply.py  [BE]: Update ruff to 0.285 (#107519)                           2023-08-22 23:16:38 +00:00
replicate.py       Make DataParallel generic (#102455)                            2023-06-03 00:33:01 +00:00
scatter_gather.py  Merge type stubs torch nn parallel (#102194)                   2023-05-26 20:10:47 +00:00
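For orientation, a minimal sketch of how the files above fit together through the module's best-known entry point, `nn.DataParallel` (defined in data_parallel.py). The toy model and batch shapes are illustrative only; on a single-device or CPU-only machine the wrapper is skipped and the code runs a plain forward.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(32, 4).to(device)

if torch.cuda.device_count() > 1:
    # DataParallel replicates the module onto each visible GPU
    # (replicate.py), scatters the input batch along dim 0
    # (scatter_gather.py), runs the replicas concurrently
    # (parallel_apply.py), and gathers the outputs back onto
    # the first device.
    model = nn.DataParallel(model)

inputs = torch.randn(16, 32, device=device)
outputs = model(inputs)  # shape (16, 4) regardless of device count
```

distributed.py provides the multi-process counterpart, `DistributedDataParallel`, which is where the optimizer-in-backward logging from the latest commit landed.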