Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 12:21:27 +01:00
Summary: This PR addresses the problem described in the comment https://github.com/pytorch/pytorch/pull/20203#issuecomment-499231276 and fixes previously incorrect behaviour: a warning was raised every time an LR scheduler was initialized.

The code now checks that:
- on the second call of `lr_scheduler.step`, `optimizer.step` has already been called; otherwise a warning is raised (as was done in #20203);
- if the optimizer's `step` method is overridden, a single warning is raised to make the user aware of the new call pattern `opt.step()` -> `lrs.step()`, since the call order cannot be verified in that case.

The tests now check that:
- at initialization (`lrs = StepLR(...)`) there are no warnings;
- if `optimizer.step` is replaced by something else (similarly to the [code of nvidia/apex](https://github.com/NVIDIA/apex/blob/master/apex/amp/_process_optimizer.py#L287)), another warning is raised.

cc ezyang

PS. Honestly, I would say that a lot of overhead is introduced for simple warnings. I hope all these checks can be removed in a future version such as `1.2.0`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/21460
Differential Revision: D15701776
Pulled By: ezyang
fbshipit-source-id: eac5712b9146d9d3392a30f6339cd33d90c497c7
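The check described above can be sketched in plain Python: the scheduler wraps the optimizer's `step` method so it can later tell whether it ran, and warns on the second scheduler step if it never did. This is a minimal illustrative sketch, not the actual `torch.optim.lr_scheduler` code; the `Optimizer` and `Scheduler` classes here are hypothetical stand-ins.

```python
import warnings


class Optimizer:
    """Hypothetical stand-in for torch.optim.Optimizer."""

    def step(self):
        pass  # parameter updates would happen here


class Scheduler:
    """Sketch of the call-order check: warn if scheduler.step() is
    called a second time before optimizer.step() ever ran."""

    def __init__(self, optimizer):
        self.optimizer = optimizer
        # Wrap optimizer.step so we can detect whether it was called.
        original_step = optimizer.step

        def wrapped_step(*args, **kwargs):
            wrapped_step._called = True
            return original_step(*args, **kwargs)

        wrapped_step._called = False
        optimizer.step = wrapped_step
        self._step_count = 0

    def step(self):
        self._step_count += 1
        if self._step_count > 1 and not self.optimizer.step._called:
            warnings.warn(
                "Detected a call to scheduler.step() before "
                "optimizer.step(). Call optimizer.step() first.",
                UserWarning,
            )
        # ...learning-rate adjustment would happen here...


opt = Optimizer()
sched = Scheduler(opt)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    sched.step()  # first call: no warning is raised
    sched.step()  # second call, optimizer.step never ran: warns
```

Note that if the user replaces `optimizer.step` after the scheduler is constructed (as apex does), the wrapper is lost, which is why the real patch falls back to a one-time warning about the `opt.step()` -> `lrs.step()` pattern in that case.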
Files in `torch/optim`:

- `__init__.py`
- `__init__.pyi`
- `adadelta.py`
- `adagrad.py`
- `adam.py`
- `adam.pyi`
- `adamax.py`
- `asgd.py`
- `lbfgs.py`
- `lr_scheduler.py`
- `lr_scheduler.pyi`
- `optimizer.py`
- `optimizer.pyi`
- `rmsprop.py`
- `rprop.py`
- `sgd.py`
- `sgd.pyi`
- `sparse_adam.py`