pytorch/torch/optim
Latest commit 75754beca3 by Soumith Chintala, 2019-04-26 15:43:04 -07:00:
Revert D14577575: [pytorch][PR] Fix lack of state init for adagrad and add share_memory flag
Differential Revision: D14577575
Original commit changeset: 12440079ac96
fbshipit-source-id: 935106385e608471dc280fc61cfedf19d330812d
__init__.py Turn on F401: Unused import warning. (#18598) 2019-03-30 09:01:17 -07:00
__init__.pyi More type stubs (#18511) 2019-04-01 16:03:58 -07:00
adadelta.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
adagrad.py Revert D14577575: [pytorch][PR] Fix lack of state init for adagrad and add share_memory flag 2019-04-26 15:43:04 -07:00
adam.py fix lint after new flake8 release added new style constraints (#13047) 2018-10-24 09:03:38 -07:00
adam.pyi More type stubs (#18511) 2019-04-01 16:03:58 -07:00
adamax.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
asgd.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
lbfgs.py Make return uniform in lbfgs step (#7586) 2018-05-16 11:16:46 -04:00
lr_scheduler.py Add SGDR(Stochastic Gradient Descent with Warm Restarts) scheduler (#17226) 2019-04-25 09:26:31 -07:00
lr_scheduler.pyi More type stubs (#18511) 2019-04-01 16:03:58 -07:00
optimizer.py Fix copied optimizer (#19308) 2019-04-19 10:27:01 -07:00
optimizer.pyi More type stubs (#18511) 2019-04-01 16:03:58 -07:00
rmsprop.py Added parameter range checks for all optimizers (#6000) 2018-03-28 11:22:23 +02:00
rprop.py Turn on F401: Unused import warning. (#18598) 2019-03-30 09:01:17 -07:00
sgd.py SGD: remove unneeded multiply-add initialization operations (#18114) 2019-03-19 10:34:17 -07:00
sgd.pyi More type stubs (#18511) 2019-04-01 16:03:58 -07:00
sparse_adam.py fix lint after new flake8 release added new style constraints (#13047) 2018-10-24 09:03:38 -07:00
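
The files above make up the torch.optim package: optimizer.py holds the shared Optimizer base class, the per-algorithm modules (sgd.py, adam.py, adagrad.py, rmsprop.py, and so on) implement individual update rules, and lr_scheduler.py provides learning-rate schedules, including the SGDR warm-restarts scheduler added in #17226. A minimal usage sketch follows; the model, data, and hyperparameters are illustrative placeholders, not taken from this listing.

    # Minimal usage sketch (assumption: a placeholder nn.Linear model and
    # random batches stand in for a real training setup).
    import torch
    import torch.nn as nn
    import torch.optim as optim

    model = nn.Linear(10, 1)          # hypothetical model, for illustration only
    criterion = nn.MSELoss()

    # optimizer.py defines the Optimizer base class; sgd.py, adam.py,
    # adagrad.py, etc. implement the concrete update rules on top of it.
    optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    # lr_scheduler.py provides learning-rate schedulers; StepLR is shown here.
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

    for epoch in range(2):            # toy loop over fabricated batches
        inputs = torch.randn(4, 10)
        targets = torch.randn(4, 1)
        optimizer.zero_grad()         # clear gradients accumulated by backward()
        loss = criterion(model(inputs), targets)
        loss.backward()               # compute gradients
        optimizer.step()              # apply the SGD update
        scheduler.step()              # advance the learning-rate schedule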