pytorch/test/optim
Jane Xu 59d0dea90f Only make a shallow copy when loading optimizer state_dict (#106082)
The one piece we still deep-copy is the param_groups, which are much lighter weight. This should also save memory when loading from a checkpoint.

The deepcopy was introduced in ecfcf39f30, but module.py used only a shallow copy at that point, so the deepcopy did not actually bring parity.

Incorporates an XLA fix, which is why I'm updating the pin to ca5eab87a7.
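The memory saving can be illustrated with a minimal sketch using plain Python containers as stand-ins for tensors; the dict layout below only mimics the shape of an optimizer state_dict and is not the actual PyTorch implementation:

```python
import copy

# Hypothetical stand-in for a loaded optimizer state_dict: 'state' holds the
# large per-parameter buffers, 'param_groups' holds light hyperparameter metadata.
loaded = {
    "state": {0: {"momentum_buffer": [0.1] * 1000}},  # list stands in for a tensor
    "param_groups": [{"lr": 0.01, "params": [0]}],
}

# Before #106082: the whole dict was deep-copied, duplicating the heavy buffers.
deep = copy.deepcopy(loaded)
assert deep["state"][0]["momentum_buffer"] is not loaded["state"][0]["momentum_buffer"]

# After #106082: the heavy 'state' is only shallow-copied (buffers are shared),
# while the lightweight 'param_groups' is still deep-copied.
shallow_state = copy.copy(loaded["state"])
param_groups = copy.deepcopy(loaded["param_groups"])
assert shallow_state[0]["momentum_buffer"] is loaded["state"][0]["momentum_buffer"]
assert param_groups[0] is not loaded["param_groups"][0]
```

With the shallow copy, the checkpoint's buffers are referenced rather than duplicated, so peak memory during loading drops roughly by the size of the optimizer state.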

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106082
Approved by: https://github.com/albanD, https://github.com/Skylion007
2023-08-01 05:33:31 +00:00
test_lrscheduler.py    [BE] f-stringify torch/ and scripts (#105538)                          2023-07-21 19:35:24 +00:00
test_optim.py          Only make a shallow copy when loading optimizer state_dict (#106082)   2023-08-01 05:33:31 +00:00
test_swa_utils.py      Dont run test files that are already run in test_optim (#103017)       2023-06-06 17:31:21 +00:00