pytorch/torch/optim
Jacobgoss30 · 6a8006472e · Fix doc cosineannealinglr 152081 (#152936)
## Summary

This PR updates the docstring for `CosineAnnealingLR` to accurately reflect its recursive learning rate schedule. The previous docstring showed only the closed-form SGDR expression, which does not match the recursive update that the scheduler actually implements.

Changes:

- Added the recursive update formula used in `get_lr()` (sketched below)
- Retained the original closed-form SGDR expression for reference
- Clarified that warm restarts are not implemented in this scheduler

This addresses confusion raised in issue #152081.
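
For reference, a sketch of the two expressions the docstring now distinguishes, using the notation from the scheduler's documentation (here `\eta_{max}` is the initial learning rate, `\eta_{min}` is the `eta_min` argument, `T_{max}` is the `T_max` argument, and `T_{cur}` counts epochs since annealing began):

```latex
% Recursive update applied by get_lr() at each step (no warm restarts):
\eta_t = \eta_{\min}
       + (\eta_{t-1} - \eta_{\min})
         \frac{1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)}
              {1 + \cos\left(\frac{T_{cur}-1}{T_{max}}\pi\right)}

% Closed-form SGDR expression, retained in the docstring for reference:
\eta_t = \eta_{\min}
       + \frac{1}{2}\,(\eta_{\max} - \eta_{\min})
         \left(1 + \cos\left(\frac{T_{cur}}{T_{max}}\pi\right)\right)
```

When the schedule is stepped consecutively from epoch 0 the two agree; the recursive form is what the code applies step by step, which is why the docstring now leads with it.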

## Related issue

[#152081](https://github.com/pytorch/pytorch/issues/152081)

## Testing

Doc-only change. Ran pre-commit to verify formatting.
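
Not part of this PR, but a minimal sanity check along the lines below can confirm the documented recursive update against the scheduler's actual output. The optimizer, hyperparameters, and tolerance here are illustrative assumptions, not anything added by the PR:

```python
import math

import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR

# Illustrative hyperparameters (assumed values, not from the PR).
eta_max, eta_min, T_max = 0.1, 0.001, 10

model = torch.nn.Linear(2, 2)
opt = SGD(model.parameters(), lr=eta_max)
sched = CosineAnnealingLR(opt, T_max=T_max, eta_min=eta_min)

lr = eta_max
for t in range(1, T_max + 1):
    opt.step()
    sched.step()
    # Recursive update from the docstring: eta_t is derived from eta_{t-1},
    # not recomputed from eta_max (no warm restarts).
    lr = eta_min + (lr - eta_min) * (1 + math.cos(math.pi * t / T_max)) / (
        1 + math.cos(math.pi * (t - 1) / T_max)
    )
    assert math.isclose(sched.get_last_lr()[0], lr, rel_tol=0, abs_tol=1e-8)
```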

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152936
Approved by: https://github.com/janeyx99
2025-05-08 17:25:30 +00:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `_multi_tensor` | | |
| `__init__.py` | | |
| `_adafactor.py` | Fix incorrect citation of authors in documentation (#145209) | 2025-05-05 17:45:05 +00:00 |
| `_functional.py` | | |
| `adadelta.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `adagrad.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `adam.py` | [MPS] grad scaler (#150255) | 2025-04-06 17:06:55 +00:00 |
| `adamax.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `adamw.py` | | |
| `asgd.py` | Add scripts to check xrefs and urls (#151844) | 2025-04-28 09:30:07 +00:00 |
| `lbfgs.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `lr_scheduler.py` | Fix doc cosineannealinglr 152081 (#152936) | 2025-05-08 17:25:30 +00:00 |
| `nadam.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `optimizer.py` | Include other accelerators in capturable docstr for optimizers (#149770) | 2025-04-24 20:38:42 +00:00 |
| `radam.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `rmsprop.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `rprop.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `sgd.py` | Document that dampening is skipped in SGD momentum first step (#152833) | 2025-05-05 20:07:23 +00:00 |
| `sparse_adam.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `swa_utils.py` | Revert "Fix non-bitwise type annotations for Tensor operators (see #145838) (#146845)" | 2025-02-18 19:01:27 +00:00 |