pytorch/torch/optim
Jerry Ma 383d340e88 Small optimization for adam (#12107)
Summary:
Apply weight decay for Adam in-place instead of via copy.

Synced offline with soumith, who mentioned that it should be OK. This is also consistent with other optimizers, e.g. eee01731a5/torch/optim/sgd.py (L93).
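
For illustration only, here is a minimal sketch of what the change amounts to (this is not the actual adam.py code; the function `adam_like_step` and its signature are made up for this example): weight decay is folded into the gradient with an in-place `add_` instead of `grad = grad.add(...)`, which avoids allocating a temporary tensor per parameter per step, mirroring the in-place update in sgd.py.

```python
import torch

def adam_like_step(p, grad, exp_avg, exp_avg_sq, step,
                   lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0):
    """Simplified, illustrative Adam-style update for a single tensor."""
    beta1, beta2 = betas
    if weight_decay != 0:
        # The optimization in question: mutate grad in place rather than
        # building a new tensor with grad.add(...), saving one temporary.
        grad.add_(p, alpha=weight_decay)
    # Exponential moving averages of the gradient and its square.
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    # Bias-corrected parameter update.
    bias_correction1 = 1 - beta1 ** step
    bias_correction2 = 1 - beta2 ** step
    denom = (exp_avg_sq / bias_correction2).sqrt_().add_(eps)
    p.addcdiv_(exp_avg, denom, value=-lr / bias_correction1)

# Usage sketch:
p = torch.zeros(3)
adam_like_step(p, grad=torch.ones(3), exp_avg=torch.zeros(3),
               exp_avg_sq=torch.zeros(3), step=1, weight_decay=0.01)
```

Because the gradient tensor is now mutated in place, the weight-decay term also shows up in `p.grad` after the step; per the summary above, this was discussed offline and considered acceptable, and it matches how sgd.py already applies weight decay.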
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12107

Reviewed By: soumith

Differential Revision: D10071787

Pulled By: jma127

fbshipit-source-id: 5fd7939c79039693b225c44c4c80450923b8d673
2018-09-26 21:43:46 -07:00
__init__.py added AMSgrad optimizer to Adam and SparseAdam (#4034) 2017-12-18 13:24:49 -05:00
adadelta.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
adagrad.py Remove methods that start with an underscore from at::Tensor (#11152) 2018-09-07 11:55:11 -07:00
adam.py Small optimization for adam (#12107) 2018-09-26 21:43:46 -07:00
adamax.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
asgd.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
lbfgs.py Make return uniform in lbfgs step (#7586) 2018-05-16 11:16:46 -04:00
lr_scheduler.py Changed serialization mechanism of LambdaLR scheduler (#9927) 2018-07-31 19:39:06 -07:00
optimizer.py migrating deprecated calls without abc module for containers (#11515) 2018-09-13 15:09:22 -07:00
rmsprop.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
rprop.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
sgd.py fix SGD lr check (#6244) 2018-04-03 21:29:18 -04:00
sparse_adam.py Remove methods that start with an underscore from at::Tensor (#11152) 2018-09-07 11:55:11 -07:00