pytorch/torch/optim
Latest commit: 2025-05-15 01:07:36 +00:00
Name | Last commit message | Last commit date
_multi_tensor | Optim package docstring fix (#129086) | 2024-06-21 14:30:53 +00:00
__init__.py | [BE][optim] Make pyright recognize exported symbols (#135043) | 2024-09-04 21:53:46 +00:00
_adafactor.py | Fix incorrect citation of authors in documentation (#145209) | 2025-05-05 17:45:05 +00:00
_functional.py | PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) | 2025-01-21 16:57:27 +00:00
adadelta.py | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00
adagrad.py | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00
adam.py | [MPS] grad scaler (#150255) | 2025-04-06 17:06:55 +00:00
adamax.py | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00
adamw.py | PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) | 2025-01-21 16:57:27 +00:00
asgd.py | Add scripts to check xrefs and urls (#151844) | 2025-04-28 09:30:07 +00:00
lbfgs.py | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00
lr_scheduler.py | Fix doc cosineannealinglr 152081 (#152936) | 2025-05-08 17:25:30 +00:00
nadam.py | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00
optimizer.py | Add load_state_dict hint doc about invoke order work with lr_scheduler (#149942) | 2025-05-15 01:07:36 +00:00
radam.py | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00
rmsprop.py | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00
rprop.py | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00
sgd.py | Document that dampening is skipped in SGD momentum first step (#152833) | 2025-05-05 20:07:23 +00:00
sparse_adam.py | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00
swa_utils.py | Revert "Fix non-bitwise type annotations for Tensor operators (see #145838) (#146845)" | 2025-02-18 19:01:27 +00:00
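
For orientation, below is a minimal usage sketch of how a few of the modules listed above (sgd.py, lr_scheduler.py, optimizer.py) fit together in a training loop. The model, data, hyperparameters, and loop length are illustrative assumptions, not taken from this listing; the comments point back to the files whose recent commits touch the corresponding behavior.

    # Minimal sketch tying together the torch.optim modules listed above.
    # The model, data, and hyperparameters are arbitrary illustrative choices.
    import torch
    from torch import nn
    from torch.optim import SGD
    from torch.optim.lr_scheduler import CosineAnnealingLR

    model = nn.Linear(10, 1)

    # sgd.py: momentum with dampening; per the docs, dampening is skipped on the
    # first momentum update (the buffer is initialized directly from the gradient).
    optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9, dampening=0.5)

    # lr_scheduler.py: cosine-anneal the learning rate over T_max scheduler steps.
    scheduler = CosineAnnealingLR(optimizer, T_max=10)

    for epoch in range(10):
        optimizer.zero_grad()
        loss = model(torch.randn(4, 10)).sum()  # stand-in for a real loss
        loss.backward()
        optimizer.step()   # update parameters first...
        scheduler.step()   # ...then advance the learning-rate schedule

    # optimizer.py / lr_scheduler.py: state_dict round-trip used when
    # checkpointing, where the load_state_dict ordering hint applies.
    checkpoint = {"optim": optimizer.state_dict(), "sched": scheduler.state_dict()}
    optimizer.load_state_dict(checkpoint["optim"])
    scheduler.load_state_dict(checkpoint["sched"])

As a usage note, the "Convert Tensor lr to 0-dim" commits that recur in the listing concern passing the learning rate as a torch.Tensor rather than a Python float, which recent releases of these optimizers also accept.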