pytorch/torch/distributed/optim
Rob Zinkov 2a496e2f80 Adding maximize to Adamax (#77409)
Added the maximize flag (#68052) to the Adamax optimizer and updated the respective tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77409
Approved by: https://github.com/albanD
2022-05-16 17:34:44 +00:00
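For context, maximize=True makes the optimizer ascend the objective (the gradients are negated before the update) rather than descend it, matching the flag rolled out across the other torch.optim optimizers under #68052. A minimal sketch, assuming a PyTorch build that includes this PR; the toy parameter, learning rate, and objective are illustrative only:

```python
import torch

# One scalar parameter and an objective we want to *maximize*.
w = torch.tensor([1.0], requires_grad=True)
opt = torch.optim.Adamax([w], lr=0.1, maximize=True)

for _ in range(10):
    opt.zero_grad()
    objective = (w ** 2).sum()  # grows with |w|
    objective.backward()
    opt.step()  # gradient *ascent* step because maximize=True

print(w)  # |w| has grown from its initial value of 1.0
```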
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | [Opt Overlap] Clean up code in _OptimizerHookState (#71620) | 2022-01-26 19:33:49 +00:00 |
| functional_adadelta.py | Add maximize flag to Adadelta | 2022-04-08 20:32:35 +00:00 |
| functional_adagrad.py | Adding maximize flag to Adagrad | 2022-04-20 08:29:03 +00:00 |
| functional_adam.py | Optim foreach cleanup for Adam (#70295) | 2022-02-15 18:02:08 +00:00 |
| functional_adamax.py | Adding maximize to Adamax (#77409) | 2022-05-16 17:34:44 +00:00 |
| functional_adamw.py | Optim foreach cleanup for AdamW (#70484) | 2022-02-15 18:02:08 +00:00 |
| functional_rmsprop.py | Optim foreach cleanup for Rmsprop (#70482) | 2022-02-15 18:02:08 +00:00 |
| functional_rprop.py | Optim foreach cleanup for Rprop (#70483) | 2022-02-15 18:02:08 +00:00 |
| functional_sgd.py | Optim foreach cleanup for SGD (#70481) | 2022-02-15 18:02:08 +00:00 |
| optimizer.py | [Opt Overlap] Clean up code in _OptimizerHookState (#71620) | 2022-01-26 19:33:49 +00:00 |
| post_localSGD_optimizer.py | fix PostLocalSGDOptimizer and ModelAverager average bug | 2022-04-13 11:41:27 +00:00 |
| utils.py | [Opt Overlap] Clean up code in _OptimizerHookState (#71620) | 2022-01-26 19:33:49 +00:00 |
| zero_redundancy_optimizer.py | [FSDP] Add full optim state dict (#74215) | 2022-03-30 14:15:23 +00:00 |
| zero_redundancy_optimizer.pyi | Remove req to call step() in training loop (#63164) | 2021-08-13 08:22:44 -07:00 |
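These files hold the distributed optimizer wrappers (zero_redundancy_optimizer.py, post_localSGD_optimizer.py) and TorchScript-friendly functional variants of the torch.optim algorithms (the functional_*.py modules). As a usage sketch, here is ZeroRedundancyOptimizer wrapping Adamax as the local optimizer; the gloo backend, toy model, and hyperparameters are assumptions for illustration, and the script presumes a torchrun-style launch that sets the process-group environment variables:

```python
import torch
import torch.distributed as dist
from torch.distributed.optim import ZeroRedundancyOptimizer
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumes launch via torchrun so rank/world-size env vars are set.
dist.init_process_group("gloo")

model = DDP(torch.nn.Linear(8, 8))
# Each rank keeps optimizer state only for its shard of the parameters.
opt = ZeroRedundancyOptimizer(
    model.parameters(),
    optimizer_class=torch.optim.Adamax,  # any torch.optim class works here
    lr=0.01,
)

loss = model(torch.randn(4, 8)).sum()
loss.backward()
opt.step()  # updates the local shard, then the updated params are synced
```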