pytorch/torch/distributed/optim
Latest commit 5d4e170d58 by Rohan Varma: [Optim in backward] API to retrieve in-backward optimizers (#105991)
API to retrieve in-backward optimizers for checkpointing purposes (a usage sketch follows the commit metadata below).

Differential Revision: [D47782225](https://our.internmc.facebook.com/intern/diff/D47782225/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105991
Approved by: https://github.com/awgu
2023-07-29 01:36:25 +00:00
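The commit above describes an API for retrieving optimizers that were applied in the backward pass, so their state can be checkpointed. Below is a minimal sketch of how that could look, assuming the helpers `_apply_optimizer_in_backward` and `_get_in_backward_optimizers` exported from `torch.distributed.optim` (names and signatures inferred from the commit description and the files listed below, not verified against this exact revision):

```python
import torch
from torch.distributed.optim import (
    _apply_optimizer_in_backward,   # attaches per-parameter optimizers that step during backward
    _get_in_backward_optimizers,    # retrieval API described by #105991 (name/signature assumed)
)

model = torch.nn.Linear(8, 4)

# Register one SGD instance per parameter; each steps as soon as its
# gradient is accumulated during backward().
_apply_optimizer_in_backward(
    torch.optim.SGD,
    model.parameters(),
    optimizer_kwargs={"lr": 0.01},
)

loss = model(torch.randn(2, 8)).sum()
loss.backward()  # optimizer steps happen inside this call

# Retrieve the in-backward optimizers so their state can be checkpointed
# alongside the model parameters.
optims = _get_in_backward_optimizers(model)
checkpoint = {
    "model": model.state_dict(),
    "optim_states": [opt.state_dict() for opt in optims],
}
torch.save(checkpoint, "optim_in_backward_checkpoint.pt")
```

Note that both helpers are underscore-prefixed (private) and may change between releases; treat the sketch as illustrative rather than a stable recipe.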
__init__.py | [Optim in backward] API to retrieve in-backward optimizers (#105991) | 2023-07-29 01:36:25 +00:00
apply_optimizer_in_backward.py | [Optim in backward] API to retrieve in-backward optimizers (#105991) | 2023-07-29 01:36:25 +00:00
functional_adadelta.py
functional_adagrad.py
functional_adam.py | [BE] Enable ruff's UP rules and autoformat distributed/ (#105433) | 2023-07-19 14:27:11 +00:00
functional_adamax.py | [BE] Enable ruff's UP rules and autoformat distributed/ (#105433) | 2023-07-19 14:27:11 +00:00
functional_adamw.py | [BE] Enable ruff's UP rules and autoformat distributed/ (#105433) | 2023-07-19 14:27:11 +00:00
functional_rmsprop.py
functional_rprop.py
functional_sgd.py
named_optimizer.py | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00
optimizer.py | Convert logging f-strings to use % format, part four (#98705) | 2023-04-11 13:17:59 +00:00
post_localSGD_optimizer.py
utils.py
zero_redundancy_optimizer.py | Format: fixing multiple string concatenation in single line (#106013) | 2023-07-26 18:39:18 +00:00
zero_redundancy_optimizer.pyi | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00