pytorch/torch/distributed/optim
Latest commit: a60d9e1f6d "Fix flake8 B028 warnings (#166224)" by Yuanyuan Chen, 2025-10-26 06:18:55 +00:00

This PR fixes flake8 B028 warnings by specifying `stacklevel=2` in `warnings.warn` calls, so that warnings are attributed to the caller's line rather than to the `warnings.warn` call inside PyTorch, giving users more contextual information about what triggered the warning. A minimal sketch of the pattern follows below.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166224
Approved by: https://github.com/ezyang
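
For context, here is a minimal sketch of what the B028 fix looks like; `deprecated_helper`, `user_code`, and the warning message are hypothetical illustrations, not code from the PR:

```python
import warnings


def deprecated_helper() -> None:
    """Hypothetical library helper, not an actual PyTorch API."""
    # flake8 B028 flags warnings.warn calls that omit stacklevel. With the
    # default stacklevel=1 the warning is attributed to this line inside
    # the library; stacklevel=2 attributes it to the caller's line instead.
    warnings.warn(
        "deprecated_helper() is deprecated",
        FutureWarning,
        stacklevel=2,
    )


def user_code() -> None:
    deprecated_helper()  # the warning now points at this call site


if __name__ == "__main__":
    user_code()
```

With `stacklevel=2`, Python reports the warning at the line in `user_code` that calls the helper, so users can see which of their own calls triggered a PyTorch warning instead of a line deep inside the library.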
File                             Last commit                                                                 Date
__init__.py
_deprecation_warning.py
apply_optimizer_in_backward.py
functional_adadelta.py
functional_adagrad.py
functional_adam.py
functional_adamax.py
functional_adamw.py
functional_rmsprop.py
functional_rprop.py
functional_sgd.py
named_optimizer.py               Fix flake8 B028 warnings (#166224)                                          2025-10-26 06:18:55 +00:00
optimizer.py                     [2/N] More ruff SIM fixes (#165031)                                         2025-10-14 14:22:54 +00:00
post_localSGD_optimizer.py       Fix flake8 B028 warnings (#166224)                                          2025-10-26 06:18:55 +00:00
utils.py
zero_redundancy_optimizer.py     Use correct pyrefly syntax in suppressions distributed/... (#166241)       2025-10-26 04:16:41 +00:00
zero_redundancy_optimizer.pyi    [3/N] Import Callable from collections.abc in torch/distributed (#164104)  2025-09-30 00:28:53 +00:00