Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/62611

Enables optimizer overlap with backward in DDP for Adam. Additional optimizers, especially Adagrad, will be done in follow-up diffs.

1. Implement a `step_param` method based on `step` in `_FunctionalAdam` (perf permitting, we can later dedupe `step` to call `step_param`).
2. Modify tests to cover all current functional optimizers.

ghstack-source-id: 135207143

Test Plan: CI

Reviewed By: SciPioneer

Differential Revision: D29891783

fbshipit-source-id: 321915982afd5cb0a9c2e43d27550f433bff00d1
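For context, the idea behind `step_param` is to apply the optimizer update to a single parameter as soon as its gradient becomes available, rather than stepping every parameter after the full backward pass, which is what lets DDP overlap the optimizer with gradient communication. The sketch below is a minimal, self-contained illustration of a per-parameter Adam update in that spirit; the function name `adam_step_param` and its dict-based state handling are assumptions for illustration and do not reproduce the actual `_FunctionalAdam.step_param` implementation.

```python
# Illustrative sketch only: a per-parameter Adam update in the spirit of
# `step_param`. Names (`adam_step_param`, `state`) are hypothetical and do
# not mirror the real `_FunctionalAdam` internals.
import torch


def adam_step_param(param, grad, state, lr=1e-3, betas=(0.9, 0.999),
                    eps=1e-8, weight_decay=0.0):
    """Apply one Adam update to a single parameter, given its gradient."""
    if weight_decay != 0:
        grad = grad.add(param, alpha=weight_decay)

    # Lazily initialize per-parameter state (step count, first/second moments).
    if not state:
        state["step"] = 0
        state["exp_avg"] = torch.zeros_like(param)
        state["exp_avg_sq"] = torch.zeros_like(param)

    state["step"] += 1
    beta1, beta2 = betas
    exp_avg, exp_avg_sq = state["exp_avg"], state["exp_avg_sq"]

    # Update biased first and second moment estimates.
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)

    # Bias corrections.
    bias_correction1 = 1 - beta1 ** state["step"]
    bias_correction2 = 1 - beta2 ** state["step"]

    denom = (exp_avg_sq.sqrt() / (bias_correction2 ** 0.5)).add_(eps)
    step_size = lr / bias_correction1

    # Update the parameter in place; no autograd tracking for the update.
    with torch.no_grad():
        param.addcdiv_(exp_avg, denom, value=-step_size)


# Toy usage: update each parameter individually once its gradient exists,
# instead of looping over all parameters inside a single `optimizer.step()`.
model = torch.nn.Linear(4, 2)
states = {p: {} for p in model.parameters()}
loss = model(torch.randn(8, 4)).sum()
loss.backward()
for p in model.parameters():
    adam_step_param(p, p.grad, states[p])
```

In the actual DDP use case, an update like this would be triggered per bucket/parameter from the backward pass rather than from a loop after `backward()`; the loop above only stands in for that trigger point.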
| File |
|---|
| __init__.py |
| functional_adadelta.py |
| functional_adagrad.py |
| functional_adam.py |
| functional_adamax.py |
| functional_adamw.py |
| functional_rmsprop.py |
| functional_rprop.py |
| functional_sgd.py |
| optimizer.py |
| post_localSGD_optimizer.py |
| zero_redundancy_optimizer.py |
| zero_redundancy_optimizer.pyi |