pytorch/torch/optim
vfn 8ece538a79 Addresses bad behavior with overridden optimizer.step by #20124 (#21460)
Summary:
This PR addresses the problem described in the comment: https://github.com/pytorch/pytorch/pull/20203#issuecomment-499231276
and the previously coded bad behaviour:
- a warning was raised every time LR scheduling was initialized

Now the code checks that:
- on the second call of `lr_scheduler.step`, `optimizer.step` has already been called; otherwise a warning is raised (as was done in #20203)
- if the optimizer's `step` is overridden, another warning is raised once to make the user aware of the new pattern
`opt.step()` -> `lrs.step()`, since we cannot check the call order in that case.

Tests now check that:
- at initialization (`lrs = StepLR(...)`) no warning is raised
- if we replace `optimizer.step` with something else (similarly to the [code of nvidia/apex](https://github.com/NVIDIA/apex/blob/master/apex/amp/_process_optimizer.py#L287)), the other warning is raised.
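The detection idea behind both warnings can be sketched in plain Python. This is a simplified illustration, not PyTorch's actual implementation: the `Optimizer` and `Scheduler` classes and the `_called` / `_with_counter` attributes here are hypothetical stand-ins for the real instrumentation in `lr_scheduler.py`.

```python
import warnings

class Optimizer:
    """Stand-in for torch.optim.Optimizer (hypothetical, for illustration)."""
    def step(self):
        pass

class Scheduler:
    """Simplified sketch of the warning logic: wrap optimizer.step so the
    scheduler can tell whether it has been called, and whether the wrapper
    was later replaced (i.e. step was overridden)."""
    def __init__(self, optimizer):
        self.optimizer = optimizer
        original_step = optimizer.step

        def wrapped_step(*args, **kwargs):
            wrapped_step._called = True
            return original_step(*args, **kwargs)

        wrapped_step._called = False
        wrapped_step._with_counter = True  # marker: step is still instrumented
        optimizer.step = wrapped_step
        self._step_count = 0

    def step(self):
        self._step_count += 1
        if not getattr(self.optimizer.step, "_with_counter", False):
            # step was overridden (e.g. by apex); we can no longer verify order
            warnings.warn("optimizer.step has been overridden; make sure to "
                          "call opt.step() before lrs.step()")
        elif self._step_count >= 2 and not self.optimizer.step._called:
            # second scheduler step without any optimizer step in between
            warnings.warn("lr_scheduler.step called before optimizer.step")

# Recommended call order: optimizer first, then the scheduler.
opt = Optimizer()
sched = Scheduler(opt)
opt.step()    # optimizer first...
sched.step()  # ...then the scheduler: no warning
```

Note that the first scheduler step is deliberately exempt, matching the summary above: no warning is raised at initialization or on the first call, only on the second call without an intervening `optimizer.step`.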

cc ezyang

PS: honestly, I would say this introduces a lot of overhead for simple warnings. I hope all these checks can be removed in `1.2.0` or a later version...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21460

Differential Revision: D15701776

Pulled By: ezyang

fbshipit-source-id: eac5712b9146d9d3392a30f6339cd33d90c497c7
2019-06-06 13:54:42 -07:00
..
__init__.py Turn on F401: Unused import warning. (#18598) 2019-03-30 09:01:17 -07:00
__init__.pyi More type stubs (#18511) 2019-04-01 16:03:58 -07:00
adadelta.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
adagrad.py Revert D14577575: [pytorch][PR] Fix lack of state init for adagrad and add share_memory flag 2019-04-26 15:43:04 -07:00
adam.py fix lint after new flake8 release added new style constraints (#13047) 2018-10-24 09:03:38 -07:00
adam.pyi More type stubs (#18511) 2019-04-01 16:03:58 -07:00
adamax.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
asgd.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
lbfgs.py Make return uniform in lbfgs step (#7586) 2018-05-16 11:16:46 -04:00
lr_scheduler.py Addresses bad behavior with overridden optimizer.step by #20124 (#21460) 2019-06-06 13:54:42 -07:00
lr_scheduler.pyi More type stubs (#18511) 2019-04-01 16:03:58 -07:00
optimizer.py Lightweight at-most-once logging for API usage (#20745) 2019-05-23 23:17:59 -07:00
optimizer.pyi Fix optimizer type hint (#20648) 2019-05-22 11:27:40 -07:00
rmsprop.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
rprop.py Turn on F401: Unused import warning. (#18598) 2019-03-30 09:01:17 -07:00
sgd.py SGD: remove unneeded multiply-add initialization operations (#18114) 2019-03-19 10:34:17 -07:00
sgd.pyi More type stubs (#18511) 2019-04-01 16:03:58 -07:00
sparse_adam.py fix lint after new flake8 release added new style constraints (#13047) 2018-10-24 09:03:38 -07:00