pytorch/torch/optim (directory listing; latest commit 2023-10-06 08:56:18 +00:00)
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| _multi_tensor | | |
| __init__.py | | |
| __init__.pyi | | |
| _functional.py | | |
| adadelta.py | feat(optim): Add adadelta multi_tensor support for complex, with has_complex shortcut (#110631) | 2023-10-06 03:34:41 +00:00 |
| adadelta.pyi | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00 |
| adagrad.py | perf(inductor): use for loop with shortcut in Optimizers to speedup against list comprehensions (e.g. complex conversion) (#110613) | 2023-10-05 23:10:52 +00:00 |
| adagrad.pyi | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00 |
| adam.py | perf(inductor): improve Adam compile times by shortcutting for loops (via has_complex) (#110607) | 2023-10-06 05:08:49 +00:00 |
| adam.pyi | [optim] FusedAdam/W accepts lr: Tensor without h2ds (#106916) | 2023-08-21 23:00:44 +00:00 |
| adamax.py | perf(inductor): use for loop with shortcut in Optimizers to speedup against list comprehensions (e.g. complex conversion) (#110613) | 2023-10-05 23:10:52 +00:00 |
| adamax.pyi | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00 |
| adamw.py | perf(inductor): use for loop with shortcut in Optimizers to speedup against list comprehensions (e.g. complex conversion) (#110613) | 2023-10-05 23:10:52 +00:00 |
| adamw.pyi | [optim] FusedAdam/W accepts lr: Tensor without h2ds (#106916) | 2023-08-21 23:00:44 +00:00 |
| asgd.py | perf(inductor): use for loop with shortcut in Optimizers to speedup against list comprehensions (e.g. complex conversion) (#110613) | 2023-10-05 23:10:52 +00:00 |
| asgd.pyi | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00 |
| lbfgs.py | Correct LBFGS tolerance_grad doc string (#99792) | 2023-04-22 20:19:01 +00:00 |
| lbfgs.pyi | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00 |
| lr_scheduler.py | Simplify the conditionals used for learning rate calculation for ConstantLR learning rate scheduler (#109785) | 2023-09-29 23:11:23 +00:00 |
| lr_scheduler.pyi | Fixed type hints for CosineAnnealingWarmRestarts (#102067) | 2023-05-23 19:06:07 +00:00 |
| nadam.py | feat(optim): Add NAdam support for complex, with has_complex shortcut (#110634) | 2023-10-06 03:31:48 +00:00 |
| nadam.pyi | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00 |
| optimizer.py | Revert "[optim] Make casting to match params a hook (#106725)" | 2023-08-25 13:47:19 +00:00 |
| radam.py | feat(optim): Add RAdam support for complex, with has_complex shortcut (#110635) | 2023-10-06 03:29:26 +00:00 |
| radam.pyi | Implement "RAdamW" optimizer (#107507) | 2023-08-28 20:50:25 +00:00 |
| rmsprop.py | perf(inductor): use for loop with shortcut in Optimizers to speedup against list comprehensions (e.g. complex conversion) (#110613) | 2023-10-05 23:10:52 +00:00 |
| rmsprop.pyi | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00 |
| rprop.py | perf(inductor): use for loop with shortcut in Optimizers to speedup against list comprehensions (e.g. complex conversion) (#110613) | 2023-10-05 23:10:52 +00:00 |
| rprop.pyi | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00 |
| sgd.py | perf(optim/dynamo): shortcut is_sparse iteration in SGD multi_tensor (#110648) | 2023-10-06 08:56:18 +00:00 |
| sgd.pyi | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00 |
| sparse_adam.py | [BE]: Update Ruff to 0.0.280 (#105724) | 2023-07-22 23:03:34 +00:00 |
| sparse_adam.pyi | Merge and improve torch optim optimizer type stubs (#102593) | 2023-07-26 11:56:42 +00:00 |
| swa_utils.py | use reset_running_stats in swa_utils.update_bn (#103801) | 2023-06-23 01:17:13 +00:00 |
| swa_utils.pyi | [torch] add use_buffers to swa_utils interface (#109078) | 2023-09-19 21:30:59 +00:00 |
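Several of the commits above (#110607, #110613, #110631, #110634, #110635) describe the same optimization: instead of rebuilding every tensor list with a list comprehension that converts complex tensors to real views on each step, the optimizer precomputes a `has_complex` flag while grouping the parameters and only runs the conversion loop when it is set. The sketch below illustrates that pattern; the helper names (`_view_complex_as_real_`, `multi_tensor_step_sketch`) are illustrative assumptions, not the exact `torch.optim` internals.

```python
# Hedged sketch of the "has_complex shortcut" pattern referenced by the
# perf(inductor)/feat(optim) commits above. Not the actual torch.optim code.
from typing import List

import torch


def _view_complex_as_real_(tensorlists: List[List[torch.Tensor]]) -> None:
    # Rewrite complex tensors in place as their real views so the math
    # below can operate on plain float tensors.
    for tensors in tensorlists:
        for i, t in enumerate(tensors):
            if torch.is_complex(t):
                tensors[i] = torch.view_as_real(t)


def multi_tensor_step_sketch(params, grads, exp_avgs, has_complex: bool) -> None:
    # Old style: unconditional list comprehensions such as
    #   grads = [torch.view_as_real(g) if torch.is_complex(g) else g for g in grads]
    # rebuild every list on every step even when no tensor is complex.
    # New style: a has_complex flag computed once lets the all-real common
    # case skip the conversion entirely.
    if has_complex:
        _view_complex_as_real_([params, grads, exp_avgs])
    # ... the foreach update math on params/grads/exp_avgs would follow here ...


# Usage: the flag is typically accumulated in a single pass over the params.
params = [torch.randn(2, 2), torch.randn(2, 2, dtype=torch.cfloat)]
grads = [torch.randn_like(p) for p in params]
exp_avgs = [torch.zeros_like(p) for p in params]
has_complex = any(torch.is_complex(p) for p in params)
multi_tensor_step_sketch(params, grads, exp_avgs, has_complex)
```

The sgd.py commit (#110648) applies the same idea to sparse gradients: per its title, the per-tensor `is_sparse` iteration in the multi_tensor path is short-circuited, presumably via a flag analogous to `has_complex`, so the dense common case avoids the extra loop.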