pytorch/torch/optim
Alex Hedges a3c87c4922 Make Optimizer.state_dict() deterministic (#37347)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/36831.

Instead of using `id()`, an arbitrary but consistent order-based index is used. This makes the output deterministic between runs.

I am not the biggest fan of using `nonlocal` (it appears to be used sparingly in the codebase) to carry `start_index` between calls to `pack_group()`, but the alternatives had larger issues (a sketch of the chosen approach follows the list below):
- Using the last value added to `param_mappings` would be ideal, but that only works if `dict` iteration order follows insertion order, which is guaranteed only from Python 3.7 onward, and PyTorch currently supports Python <3.7.
- Using the maximum value added to `param_mappings` would not have that issue, but it would not run in constant time.
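
For illustration, here is a minimal sketch of the packing logic described above (the names `pack_group`, `param_mappings`, and `start_index` come from the description; the exact code lives in `optimizer.py`, and the state re-keying at the end is an assumption about how the index replaces `id()`):

```python
import torch

def state_dict(self):
    # Sketch: replace id()-based keys with consecutive, order-based indices
    # so that the returned dict is deterministic across runs.
    param_mappings = {}  # id(param) -> index assigned in iteration order
    start_index = 0

    def pack_group(group):
        nonlocal start_index  # carries the running index across param groups
        packed = {k: v for k, v in group.items() if k != 'params'}
        param_mappings.update(
            {id(p): i for i, p in enumerate(group['params'], start_index)
             if id(p) not in param_mappings})
        packed['params'] = [param_mappings[id(p)] for p in group['params']]
        start_index += len(packed['params'])
        return packed

    param_groups = [pack_group(g) for g in self.param_groups]
    # Re-key per-parameter state by the stable index instead of id().
    packed_state = {
        (param_mappings[id(k)] if isinstance(k, torch.Tensor) else k): v
        for k, v in self.state.items()}
    return {'state': packed_state, 'param_groups': param_groups}
```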

For testing, I confirmed that `test_optim.py` passes before and after these changes. Randomizing the indices in `param_mappings` causes the tests to fail, which is further evidence that these changes work. I'm not 100% sure these tests are sufficient, but they're a start.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37347

Differential Revision: D21353820

Pulled By: vincentqb

fbshipit-source-id: e549f1f154833a461b1f4df6d07ad509aab34ea1
2020-06-01 15:32:02 -07:00
__init__.py Ignore F401 in all __init__.py without putting noqa (#25823) 2019-10-23 15:28:13 -07:00
__init__.pyi Fix multiple issues with type annotations (#36358) 2020-04-29 11:16:39 -07:00
adadelta.py End of the .data removal in torch/optim (#34211) 2020-03-09 06:40:39 -07:00
adadelta.pyi Add types for the remaining optimizers. (#31130) 2019-12-12 06:36:41 -08:00
adagrad.py End of the .data removal in torch/optim (#34211) 2020-03-09 06:40:39 -07:00
adagrad.pyi Add types for the remaining optimizers. (#31130) 2019-12-12 06:36:41 -08:00
adam.py End of the .data removal in torch/optim (#34211) 2020-03-09 06:40:39 -07:00
adam.pyi
adamax.py End of the .data removal in torch/optim (#34211) 2020-03-09 06:40:39 -07:00
adamax.pyi Add types for the remaining optimizers. (#31130) 2019-12-12 06:36:41 -08:00
adamw.py Fix exception message of torch.optim.AdamW. (#36088) 2020-04-09 08:02:10 -07:00
adamw.pyi Add types for the remaining optimizers. (#31130) 2019-12-12 06:36:41 -08:00
asgd.py End of the .data removal in torch/optim (#34211) 2020-03-09 06:40:39 -07:00
asgd.pyi Add types for the remaining optimizers. (#31130) 2019-12-12 06:36:41 -08:00
lbfgs.py End of the .data removal in torch/optim (#34211) 2020-03-09 06:40:39 -07:00
lbfgs.pyi Add types for the remaining optimizers. (#31130) 2019-12-12 06:36:41 -08:00
lr_scheduler.py Fix typo in documentation (#34581) 2020-03-11 13:57:10 -07:00
lr_scheduler.pyi fix typing bug of LambdaLR.__init__ (#33271) 2020-02-18 09:10:00 -08:00
optimizer.py Make Optimizer.state_dict() deterministic (#37347) 2020-06-01 15:32:02 -07:00
optimizer.pyi Fix minor issue in type stub for Optimizer (#38067) 2020-05-07 20:11:40 -07:00
rmsprop.py End of the .data removal in torch/optim (#34211) 2020-03-09 06:40:39 -07:00
rmsprop.pyi Add types for the remaining optimizers. (#31130) 2019-12-12 06:36:41 -08:00
rprop.py End of the .data removal in torch/optim (#34211) 2020-03-09 06:40:39 -07:00
rprop.pyi Add types for the remaining optimizers. (#31130) 2019-12-12 06:36:41 -08:00
sgd.py End of the .data removal in torch/optim (#34211) 2020-03-09 06:40:39 -07:00
sgd.pyi
sparse_adam.py End of the .data removal in torch/optim (#34211) 2020-03-09 06:40:39 -07:00
sparse_adam.pyi Add types for the remaining optimizers. (#31130) 2019-12-12 06:36:41 -08:00
swa_utils.py Add SWA to PyTorch mainline (#35032) 2020-04-27 07:42:19 -07:00
swa_utils.pyi Add SWA to PyTorch mainline (#35032) 2020-04-27 07:42:19 -07:00