pytorch/torch/distributed/optim
Aaron Gokaslan edd640a95a [BE][Ez]: Use itertools.chain.from_iterable when possible (#148190)
This often makes the code more readable and more efficient, and it adds support for infinite iterables, since the outer iterable is consumed lazily.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/148190
Approved by: https://github.com/jansel, https://github.com/malfet
2025-03-06 20:37:06 +00:00
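A minimal sketch of the pattern this commit applies; the parameter-group data below is hypothetical and not taken from the PR, but the before/after contrast is the general itertools.chain.from_iterable idiom:

```python
import itertools

# Hypothetical example: flattening per-group parameter lists, as an
# optimizer such as zero_redundancy_optimizer.py might. Data is made up.
param_groups = [["w1", "b1"], ["w2", "b2"], ["w3"]]

# Before: chain(*param_groups) unpacks the outer iterable eagerly,
# so it must be fully materialized up front.
flat_eager = list(itertools.chain(*param_groups))

# After: chain.from_iterable consumes the outer iterable lazily, so it
# also works when the outer iterable is a generator (even an infinite one).
flat_lazy = list(itertools.chain.from_iterable(param_groups))

assert flat_eager == flat_lazy == ["w1", "b1", "w2", "b2", "w3"]
```

Per the listing below, zero_redundancy_optimizer.py is among the files this change touched.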
__init__.py [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) 2025-02-28 07:35:56 +00:00
_deprecation_warning.py
apply_optimizer_in_backward.py [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) 2025-02-28 07:35:56 +00:00
functional_adadelta.py
functional_adagrad.py
functional_adam.py
functional_adamax.py
functional_adamw.py
functional_rmsprop.py
functional_rprop.py
functional_sgd.py
named_optimizer.py [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) 2025-02-28 07:35:56 +00:00
optimizer.py
post_localSGD_optimizer.py
utils.py
zero_redundancy_optimizer.py [BE][Ez]: Use itertools.chain.from_iterable when possible (#148190) 2025-03-06 20:37:06 +00:00
zero_redundancy_optimizer.pyi [BE] Upgrade to mypy 1.14 (#145966) 2025-03-04 20:58:26 +00:00