pytorch/torch/distributed/optim
Latest commit: b45cf9b81b by Natalia Gimelshein, 2021-08-06 22:10:41 -07:00
Revert D30117838: [WIP] Gate DistributedOptimizers on RPC availability

Test Plan: revert-hammer

Differential Revision: D30117838 (3f09485d7e)

Original commit changeset: e6365a910a3d

fbshipit-source-id: f276b2b2bdf5f7bd27df473fca0eebaee9f7aef2
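For context, the reverted change gated these optimizers on RPC availability, presumably because DistributedOptimizer (defined in optimizer.py below) drives remote optimizer instances over the RPC framework. The following is a minimal sketch of that usage pattern; the worker name "worker1" and the toy tensors are placeholders, and it assumes rpc.init_rpc(...) has already been called on every participating process.

    import torch
    import torch.distributed.autograd as dist_autograd
    import torch.distributed.rpc as rpc
    from torch import optim
    from torch.distributed.optim import DistributedOptimizer

    with dist_autograd.context() as context_id:
        # Forward pass: produce tensors held remotely as RRefs.
        rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3))
        rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1))
        loss = rref1.to_here() + rref2.to_here()

        # Backward pass through the distributed autograd engine.
        dist_autograd.backward(context_id, [loss.sum()])

        # DistributedOptimizer wraps a local optimizer class and applies it
        # to the remote parameters referenced by the RRefs.
        dist_optim = DistributedOptimizer(optim.SGD, [rref1, rref2], lr=0.05)
        dist_optim.step(context_id)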
__init__.py Revert D30117838: [WIP] Gate DistributedOptimizers on RPC availability 2021-08-06 22:10:41 -07:00
functional_adadelta.py [dist_optim] fix the bug of none grads on functional optimizers (#62249) 2021-07-27 18:10:51 -07:00
functional_adagrad.py [optim] take kw-only argument for functional optim APIs (#56185) 2021-04-15 20:08:04 -07:00
functional_adam.py Enable step_param for Adam functional optimizer (#62611) 2021-08-06 10:53:55 -07:00
functional_adamax.py [optim] take kw-only argument for functional optim APIs (#56185) 2021-04-15 20:08:04 -07:00
functional_adamw.py [optim] take kw-only argument for functional optim APIs (#56185) 2021-04-15 20:08:04 -07:00
functional_rmsprop.py [dist_optim] fix the bug of none grads on functional optimizers (#62249) 2021-07-27 18:10:51 -07:00
functional_rprop.py [dist_optim] fix the bug of none grads on functional optimizers (#62249) 2021-07-27 18:10:51 -07:00
functional_sgd.py [dist_optim] fix the bug of none grads on functional optimizers (#62249) 2021-07-27 18:10:51 -07:00
optimizer.py Revert D30117838: [WIP] Gate DistributedOptimizers on RPC availability 2021-08-06 22:10:41 -07:00
post_localSGD_optimizer.py [Model Averaging] Update model averager API to avoid the redundant params arg needed by post-localSGD optimizer (#62132) 2021-07-28 18:43:09 -07:00
zero_redundancy_optimizer.py Revert D30117838: [WIP] Gate DistributedOptimizers on RPC availability 2021-08-06 22:10:41 -07:00
zero_redundancy_optimizer.pyi Make _Join, _Joinable, _JoinHook public (#62605) 2021-08-03 12:20:11 -07:00
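As a quick orientation to another entry point listed above, below is a minimal sketch of ZeroRedundancyOptimizer (zero_redundancy_optimizer.py), which shards optimizer state across data-parallel ranks. It assumes the process group has already been initialized; the layer sizes and the single-GPU-per-rank device placement are placeholders for illustration.

    import torch
    import torch.distributed as dist
    from torch.distributed.optim import ZeroRedundancyOptimizer
    from torch.nn.parallel import DistributedDataParallel as DDP

    # Assumes dist.init_process_group(...) has already been called.
    rank = dist.get_rank()

    model = torch.nn.Linear(1000, 1000).to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    # Each rank keeps only its shard of the Adam state, cutting per-rank
    # optimizer memory roughly by the world size.
    optimizer = ZeroRedundancyOptimizer(
        ddp_model.parameters(),
        optimizer_class=torch.optim.Adam,
        lr=1e-3,
    )

    loss = ddp_model(torch.randn(20, 1000).to(rank)).sum()
    loss.backward()
    optimizer.step()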