Commit Graph

155 Commits

Author SHA1 Message Date
Tony-Y
78715a181f Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674)
Fixes #145461
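For illustration, a minimal sketch of the fixed scenario (a hedged reconstruction, not the PR's test): a 1-element tensor `lr` is now converted to 0-dim internally so the optimizer and scheduler treat it as a scalar.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import ExponentialLR

param = torch.nn.Parameter(torch.zeros(4))
# A 1-element (1-dim) tensor lr; per this fix it is handled as 0-dim internally.
opt = SGD([param], lr=torch.tensor([0.01]))
scheduler = ExponentialLR(opt, gamma=0.9)

param.sum().backward()
opt.step()
scheduler.step()  # decays the tensor lr by gamma without erroring
```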

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145674
Approved by: https://github.com/janeyx99

Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
2025-03-17 23:07:05 +00:00
zeshengzong
fb1b7ec173 Remove deprecated method and attribute in LRScheduler (#147301)
Following [#99270 suggestion](https://github.com/pytorch/pytorch/issues/99270#issuecomment-1511656408), remove the deprecated method `LRScheduler.print_lr`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/147301
Approved by: https://github.com/janeyx99
2025-03-05 05:30:19 +00:00
Aaron Orenstein
0afd335174 PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175)
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145175
Approved by: https://github.com/bobrenjc93
2025-01-21 16:57:27 +00:00
PyTorch MergeBot
5fd881a5b6 Revert "PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175)"
This reverts commit 54a00af2c6.

Reverted https://github.com/pytorch/pytorch/pull/145175 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it seems to break some trunk tests ([comment](https://github.com/pytorch/pytorch/pull/145175#issuecomment-2603418267))
2025-01-21 00:49:55 +00:00
Aaron Orenstein
54a00af2c6 PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175)
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145175
Approved by: https://github.com/bobrenjc93
2025-01-20 22:32:59 +00:00
Tom Ritchford
c0582fd0f8 Remove unused Python variables in torch/[b-z]* (#136963)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136963
Approved by: https://github.com/ezyang
2024-10-19 16:45:22 +00:00
Matt Pitkin
8a5dd7f59b Allow SequentialLR to include ChainedScheduler (#133450)
This fixes #132745 and allows a `SequentialLR` to include schedulers that are compound scheduler types (i.e., a `ChainedScheduler`), which contain a list of schedulers in a `_schedulers` attribute.
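A sketch of the now-supported nesting (scheduler parameters are illustrative):

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import (
    ChainedScheduler, ConstantLR, ExponentialLR, LinearLR, SequentialLR,
)

opt = SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
warmup = LinearLR(opt, start_factor=0.1, total_iters=5)
# The compound scheduler: SequentialLR can now recurse into its _schedulers.
chained = ChainedScheduler([ConstantLR(opt, factor=0.5, total_iters=10),
                            ExponentialLR(opt, gamma=0.95)])
scheduler = SequentialLR(opt, schedulers=[warmup, chained], milestones=[5])
for _ in range(10):
    opt.step()
    scheduler.step()
```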

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133450
Approved by: https://github.com/janeyx99
2024-10-18 02:29:38 +00:00
Jane Xu
f9ed39c989 Autoupdate min_lrs for ReduceLROnPlateau if possible, fixes #104361 (#137637)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137637
Approved by: https://github.com/albanD
2024-10-10 01:23:30 +00:00
Mauricio Villegas
ece8267d2c Add back optim type hints that were lost when *.pyi files were removed (#136185)
When stub files (`*.pyi`) were removed from `optim` (#125556, #125452), some types that existed are no longer available. This pull request adds them back.

Just for reference, these types are used in `pytorch-lightning`'s `LightningCLI`. Command line interfaces are created automatically, and having type hints makes them nicer.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136185
Approved by: https://github.com/janeyx99
2024-09-17 15:45:15 +00:00
Aaron Gokaslan
31715be72a [BE]: Update mypy to 1.11.2 (#133816)
Updates mypy to 1.11.2 to improve type inference

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133816
Approved by: https://github.com/ezyang
2024-09-16 19:44:11 +00:00
PyTorch MergeBot
3117f2cf67 Revert "[BE]: Update mypy to 1.11.2 (#133816)"
This reverts commit 55299cfc22.

Reverted https://github.com/pytorch/pytorch/pull/133816 on behalf of https://github.com/jeanschmidt due to seems to have broken https://github.com/pytorch/pytorch/actions/runs/10865710499/job/30155699792 on main ([comment](https://github.com/pytorch/pytorch/pull/133816#issuecomment-2352377684))
2024-09-16 09:11:16 +00:00
Aaron Gokaslan
55299cfc22 [BE]: Update mypy to 1.11.2 (#133816)
Updates mypy to 1.11.2 to improve type inference

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133816
Approved by: https://github.com/ezyang
2024-09-14 21:40:36 +00:00
Wil Kong
de06345e9b Avoid Host & Device Sync In LR Scheduler (#133663)
Fixes #133662.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133663
Approved by: https://github.com/janeyx99, https://github.com/eqy

Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
2024-08-22 03:52:43 +00:00
Pierre Chapuis
0e4c0ef29f fix type of eta_min parameter in CosineAnnealing (int -> float) (#132482)
This fixes errors with type checkers such as `pyright`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132482
Approved by: https://github.com/janeyx99
2024-08-12 18:22:26 +00:00
Xuehai Pan
30293319a8 [BE][Easy][19/19] enforce style for empty lines in import segments in torch/[o-z]*/ (#129771)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129771
Approved by: https://github.com/justinchuby, https://github.com/janeyx99
2024-08-01 17:07:14 +00:00
Tianyi Tao
3477ee38e4 fix the use of initial learning rate in the OneCycleLR example (#130306)
Fixes #127649
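The gotcha the example fix documents, sketched: `OneCycleLR` ignores the optimizer's `lr` and starts the schedule from `max_lr / div_factor`.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import OneCycleLR

opt = SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.5)  # this 0.5 is ignored
scheduler = OneCycleLR(opt, max_lr=1.0, total_steps=100, div_factor=25.0)
# The schedule starts at max_lr / div_factor = 0.04, not at 0.5.
print(opt.param_groups[0]["lr"])  # 0.04
```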

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130306
Approved by: https://github.com/janeyx99
2024-07-09 18:58:07 +00:00
Michael Lazos
aa7ea6b45c Add wraps back (#129933)
Fixes https://github.com/pytorch/pytorch/issues/129922

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129933
Approved by: https://github.com/eqy, https://github.com/janeyx99
2024-07-02 18:24:02 +00:00
Sahdev Zala
4ccbf711e2 Learning Rate Scheduler docstring fix (#128679)
Fix docstrings in Learning Rate Scheduler.

The fix can be verified by running `pydocstyle <path-to-file> --count`.

Related #112593

**BEFORE the PR:** `pydocstyle torch/optim/lr_scheduler.py --count` reports 92.

**AFTER the PR:** `pydocstyle torch/optim/lr_scheduler.py --count` reports 0.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128679
Approved by: https://github.com/janeyx99
2024-06-15 05:30:35 +00:00
Aaron Orenstein
27f9d3b0a1 Flip default value for mypy disallow_untyped_defs [8/11] (#127845)
See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127845
Approved by: https://github.com/oulgen
ghstack dependencies: #127842, #127843, #127844
2024-06-08 18:49:56 +00:00
joncrall
80fa2778ed Update types for verbose in lr_scheduler (#127943)
I'm currently locked into jsonargparse version 4.19.0, which complains when used with LightningCLI (v2.0.8) because it reads the types declared in Google-style docstrings. It fails when deciding how to cast arguments for constructing an LRScheduler instance, because the docstrings declare the `verbose` parameter as a bool while the default recently changed to the string "deprecated". This means the type should really be `bool | str`.

This PR adds a `| str` to the docstring type in each learning rate scheduler class. This will prevent jsonargparse from complaining.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127943
Approved by: https://github.com/janeyx99
2024-06-06 21:59:22 +00:00
GdoongMathew
3437177e2b Quick Fix on #126854, deepcopy lr and other possible base_parameters (#127190)
* Apply `deepcopy` to every base parameter (`initial_lr`, `max_lr`) when instantiating `LRScheduler`.

Fixes #126854
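A hedged sketch of the aliasing hazard the `deepcopy` presumably guards against when a tensor `lr` is used:

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import ExponentialLR

lr = torch.tensor(0.1)
opt = SGD([torch.nn.Parameter(torch.zeros(1))], lr=lr)
scheduler = ExponentialLR(opt, gamma=0.9)
opt.step()
scheduler.step()
# Without a deepcopy, base_lrs could alias the live lr tensor and decay
# along with it; with the fix, the recorded base value stays at 0.1.
print(scheduler.base_lrs)
```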

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127190
Approved by: https://github.com/janeyx99
2024-06-03 18:06:31 +00:00
David Chiu
9641a8db25 [optim] deprecate LRScheduler.print_lr (#126105)
Fixes #99270

Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126105
Approved by: https://github.com/janeyx99
2024-05-14 14:13:03 +00:00
David Chiu
c520929c83 add typing in torch.optim.lr_scheduler (#125556)
Merge torch/optim/lr_scheduler.pyi into torch/optim/lr_scheduler.py
Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125556
Approved by: https://github.com/janeyx99
2024-05-10 19:28:00 +00:00
Michael Lazos
69eeef0727 Update LRScheduler to handle tensor LR (#123753)
Enables LRScheduler to handle tensor LRs.

Note on test changes:
For the test modifications, I removed `itertools.product` and created two loops. This creates a new set of `optim_inputs` on each iteration, preventing mutations of the tensor LR from carrying over across iterations. Nothing else in those tests was modified.
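The carry-over that test note describes, sketched (assuming the in-place tensor update this PR introduces):

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import StepLR

lr = torch.tensor(0.1)
opt = Adam([torch.nn.Parameter(torch.zeros(1))], lr=lr)
scheduler = StepLR(opt, step_size=1, gamma=0.5)
opt.step()
scheduler.step()
# The original tensor now holds 0.05 -- building fresh optim_inputs each
# loop iteration avoids this state leaking across test iterations.
print(lr)  # tensor(0.0500)
```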

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123753
Approved by: https://github.com/janeyx99
ghstack dependencies: #123751, #123752
2024-05-09 00:52:43 +00:00
Michael Lazos
0ea6ffc613 Swap warning counter to flag in LRScheduler (#123751)
This was previously a counter; it should be a flag indicating whether the optimizer step has been called.
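A hypothetical sketch of the pattern; the names are assumptions, not the actual implementation. The scheduler only needs to know *whether* `optimizer.step()` has run, not how many times:

```python
class _SchedulerSketch:
    def __init__(self, optimizer):
        self.optimizer = optimizer
        optimizer._opt_called = False  # flag instead of a counter
        inner_step = optimizer.step

        def step(*args, **kwargs):
            optimizer._opt_called = True  # just record that it happened
            return inner_step(*args, **kwargs)

        optimizer.step = step

    def step(self):
        if not self.optimizer._opt_called:
            print("warning: scheduler.step() called before optimizer.step()")


class OptStub:
    def step(self):
        pass

sched = _SchedulerSketch(OptStub())
sched.step()  # warns: optimizer.step() has not been called yet
```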

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123751
Approved by: https://github.com/janeyx99
2024-05-09 00:52:43 +00:00
David Chiu
a60fa960e5 refactor: extract get_lr warning (#125545)
Extract the `_get_lr_called_within_step` check from the `get_lr()` of every LRScheduler.
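One plausible shape of the extracted helper (the warning text follows the message PyTorch schedulers emit; treat the signature as an assumption):

```python
import warnings

def _warn_get_lr_called_within_step(scheduler) -> None:
    # Shared check, instead of duplicating it in every get_lr() override.
    if not getattr(scheduler, "_get_lr_called_within_step", False):
        warnings.warn(
            "To get the last learning rate computed by the scheduler, "
            "please use `get_last_lr()`.",
            UserWarning,
        )
```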
Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125545
Approved by: https://github.com/janeyx99
2024-05-07 03:15:58 +00:00
Alex Morehead
9aed5dcfe6 Clarify wording in docstring for CosineAnnealingWarmRestarts within lr_scheduler.py (#125161)
- Clarifies wording in the docstring for `CosineAnnealingWarmRestarts` within `lr_scheduler.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125161
Approved by: https://github.com/janeyx99
2024-04-30 14:01:22 +00:00
GdoongMathew
8b1ad51881 Better Error Message in ChainedScheduler and SequentialLR (#121633)
Fixes #121577

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121633
Approved by: https://github.com/janeyx99
2024-04-19 13:37:41 +00:00
Mikayla Gawarecki
383d2d1f6c Add testing and fix issues for weights_only load for LRScheduler (#123775)
Fixes https://github.com/pytorch/pytorch/issues/98921

There were two issues detected:
- `MultiStepLR`: issue is described in https://github.com/pytorch/pytorch/issues/98921, this is resolved by allowlisting `collections.Counter`
- `OneCycleLR`: `state_dict['anneal_func']` is either `<function OneCycleLR._annealing_cos at 0x7f364186f5b0>` or
`<function OneCycleLR._annealing_linear at 0x7f39aa483640>` depending on the `anneal_func` kwarg.
   This leads to `WeightsUnpickler error: Unsupported class __builtin__.getattr` from the `weights_only` Unpickler.

  Fixed the above in a BC-compatible manner by adding `OneCycleLR._anneal_func_type` as a string attribute and removing `OneCycleLR.anneal_func`.
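The round trip this PR makes work, sketched with `MultiStepLR` (whose state dict holds the `collections.Counter`):

```python
import tempfile

import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import MultiStepLR

opt = SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
scheduler = MultiStepLR(opt, milestones=[2, 4], gamma=0.1)

with tempfile.NamedTemporaryFile(suffix=".pt") as f:
    torch.save(scheduler.state_dict(), f.name)
    # Previously failed under the weights_only unpickler; now succeeds.
    state = torch.load(f.name, weights_only=True)
scheduler.load_state_dict(state)
```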

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123775
Approved by: https://github.com/albanD, https://github.com/malfet
2024-04-16 20:29:27 +00:00
FFFrog
ac74a6783b Part 2: UFMT fix 2 files in torch/optim due to the pr-sanity-checks (#124054)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124054
Approved by: https://github.com/ezyang
ghstack dependencies: #124048, #124053
2024-04-16 03:20:21 +00:00
FFFrog
f30704f5f3 add preparatory work for torch/optim/lr_scheduler.py (#124048)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124048
Approved by: https://github.com/albanD
2024-04-16 03:17:18 +00:00
lancerts
7a065e3b23 improve the constantLR doc (#120852)
Fixes #120716
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120852
Approved by: https://github.com/janeyx99
2024-03-04 21:15:27 +00:00
lancerts
605de946cf Clarify the patience in ReduceLROnPlateau (#119872)
Fixes #119763
@janeyx99
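The clarified semantics as a concrete check: with `patience=2`, two non-improving epochs are tolerated and the reduction lands on the third.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import ReduceLROnPlateau

opt = SGD([torch.nn.Parameter(torch.zeros(1))], lr=1.0)
scheduler = ReduceLROnPlateau(opt, factor=0.5, patience=2)
for loss in [1.0, 1.0, 1.0, 1.0]:  # no improvement after the first epoch
    scheduler.step(loss)
# Two bad epochs were tolerated; the third triggered the cut.
print(opt.param_groups[0]["lr"])  # 0.5
```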

Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119872
Approved by: https://github.com/janeyx99
2024-02-15 19:43:06 +00:00
Alexander Kurakin
c5116d9e44 Fix optim.lr_scheduler examples in doc to use optimizer vs self.opt (#119563)
Fixes #119561

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119563
Approved by: https://github.com/janeyx99
2024-02-10 15:10:43 +00:00
Alexander Kurakin
9a1df7cfd7 ReduceLROnPlateau init _last_lr (#119366) (#119556)
Fixes #119366
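The failure mode being fixed, sketched: before this change, calling `get_last_lr()` before any `step()` raised because `_last_lr` was never set.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import ReduceLROnPlateau

opt = SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
scheduler = ReduceLROnPlateau(opt)
# _last_lr is now initialized in __init__, so this works immediately.
print(scheduler.get_last_lr())  # [0.1]
```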

Pull Request resolved: https://github.com/pytorch/pytorch/pull/119556
Approved by: https://github.com/janeyx99
2024-02-09 19:35:02 +00:00
rockerBOO
d810b10232 Add beta1 support to CyclicLR momentum (#113548)
Fixes #73910
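A hedged sketch of what this enables: `cycle_momentum=True` with a `betas`-based optimizer such as Adam, where beta1 plays the momentum role.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import CyclicLR

opt = Adam([torch.nn.Parameter(torch.zeros(1))], lr=0.1, betas=(0.9, 0.999))
# Previously this required a `momentum` entry in the param group, so it
# failed for Adam; now beta1 is cycled between base_ and max_momentum.
scheduler = CyclicLR(opt, base_lr=0.01, max_lr=0.1,
                     base_momentum=0.8, max_momentum=0.9,
                     cycle_momentum=True)
opt.step()
scheduler.step()
print(opt.param_groups[0]["betas"][0])
```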

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113548
Approved by: https://github.com/janeyx99
2024-01-23 01:16:58 +00:00
ancestor-mithril
2b72543f36 Solving pickle error when saving CyclicLR state_dict (#110931)
## How to reproduce:
```py
import os
import tempfile

import torch
from torch import nn
from torch.optim import SGD
from torch.optim.lr_scheduler import CyclicLR

model = nn.Linear(100, 100)
opt = SGD(model.parameters(), lr=1.)
scheduler = CyclicLR(opt, base_lr=0.1, max_lr=0.2, scale_fn=lambda x: 0.99)

tmp = tempfile.NamedTemporaryFile(delete=False)
try:
    torch.save(scheduler.state_dict(), tmp.name)
    scheduler.load_state_dict(torch.load(tmp.name))
finally:
    tmp.close()
    os.unlink(tmp.name)
```
Error:
```
_pickle.PicklingError: Can't pickle <function <lambda> at 0x000001A51DF67600>: attribute lookup <lambda> on __main__ failed
```
## Fix:
Save `scale_fn` to the state dict only if it is a callable object, not if it is a plain function or lambda.
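A hypothetical sketch of that rule (attribute names are assumptions; the real `CyclicLR` internals differ):

```python
import types

def state_dict_sketch(scheduler):
    state = {k: v for k, v in scheduler.__dict__.items() if k != "optimizer"}
    fn = state.get("scale_fn")
    # Plain functions and lambdas may not pickle; callable objects pickle
    # via their class, so only those stay in the state dict.
    if isinstance(fn, types.FunctionType):
        del state["scale_fn"]
    return state
```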

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110931
Approved by: https://github.com/janeyx99
2023-11-22 11:38:35 +00:00
Thomas J. Fan
a4dc3716c0 Deprecated verbose parameter in LR schedulers (#111302)
Fixes https://github.com/pytorch/pytorch/issues/100847

This PR follows the comment in https://github.com/pytorch/pytorch/issues/100847#issuecomment-1546247239 by deprecating the `verbose` parameter and removing the print statements. Removing the print statements is technically BC breaking, so I would be okay with putting them back in.

To be less annoying, this PR raises a warning only when `verbose` is explicitly passed in.
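The "warn only when explicitly passed" trick, sketched with a sentinel default; per the #127943 entry above, the sentinel PyTorch actually shipped was the string `"deprecated"`.

```python
import warnings

_NOT_PASSED = "deprecated"  # sentinel default

class SchedulerSketch:
    def __init__(self, verbose=_NOT_PASSED):
        if verbose is not _NOT_PASSED:  # only explicit use warns
            warnings.warn("The verbose parameter is deprecated.", UserWarning)
        self.verbose = False if verbose is _NOT_PASSED else verbose
```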
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111302
Approved by: https://github.com/albanD
2023-11-10 23:17:27 +00:00
ancestor-mithril
e0be9ebc18 Simplify the conditionals used for learning rate calculation for ConstantLR learning rate scheduler (#109785)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109785
Approved by: https://github.com/janeyx99, https://github.com/kit1980
2023-09-29 23:11:23 +00:00
PyTorch MergeBot
c2c7c4035f Revert "Simplify the conditionals used for learning rate calculation for ConstantLR learning rate scheduler (#109785)"
This reverts commit 83283b4f0d.

Reverted https://github.com/pytorch/pytorch/pull/109785 on behalf of https://github.com/PaliC due to causing macos errors as per 83283b4f0d ([comment](https://github.com/pytorch/pytorch/pull/109785#issuecomment-1741471142))
2023-09-29 20:49:28 +00:00
ancestor-mithril
d615f0078c Updating documentation for PolynomialLR (#110151)
The docstring mentions that the `power` parameter is an `int`, when it should be a `float`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110151
Approved by: https://github.com/janeyx99
2023-09-29 03:50:11 +00:00
ancestor-mithril
83283b4f0d Simplify the conditionals used for learning rate calculation for ConstantLR learning rate scheduler (#109785)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109785
Approved by: https://github.com/janeyx99, https://github.com/kit1980
2023-09-29 01:19:05 +00:00
Adam J. Stewart
0a8296da7d ReduceLROnPlateau: inherit LRScheduler (#108464)
Fixes #106767
Fixes #104687
Fixes #49369
Fixes #63143
Fixes #50715
Fixes #21981
Fixes #2829

Hoping this is just a simple fix, but we'll see.
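The practical upshot, sketched: `ReduceLROnPlateau` now passes `isinstance` checks against `LRScheduler`, so code written for the base class accepts it.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import LRScheduler, ReduceLROnPlateau

opt = SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
scheduler = ReduceLROnPlateau(opt)
print(isinstance(scheduler, LRScheduler))  # True after this change
```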
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108464
Approved by: https://github.com/ezyang
2023-09-05 13:48:54 +00:00
Aaron Gokaslan
660e8060ad [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, there seem to be no instances of it in our codebase, so I am enabling the rule to keep it that way. :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-22 23:16:38 +00:00
PyTorch MergeBot
d59a6864fb Revert "[BE]: Update ruff to 0.285 (#107519)"
This reverts commit 88ab3e4322.

Reverted https://github.com/pytorch/pytorch/pull/107519 on behalf of https://github.com/ZainRizvi due to Sorry, but this PR breaks internal tests. @ezyang, can you please help them get unblocked? It seems like one of the strings was probably accidentally modified ([comment](https://github.com/pytorch/pytorch/pull/107519#issuecomment-1688833480))
2023-08-22 19:53:32 +00:00
Aaron Gokaslan
88ab3e4322 [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, there seem to be no instances of it in our codebase, so I am enabling the rule to keep it that way. :)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-20 01:36:18 +00:00
Justin Chu
4cc1745b13 [BE] f-stringify torch/ and scripts (#105538)
This PR is a follow-up to the pyupgrade series, converting more strings to f-strings using `flynt`.

- https://docs.python.org/3/reference/lexical_analysis.html#f-strings
- https://pypi.org/project/flynt/

Command used:

```bash
flynt torch/ -ll 120
flynt scripts/ -ll 120
flynt tools/ -ll 120
```

and excluded `collect_env.py`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105538
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-07-21 19:35:24 +00:00
Justin Chu
3721fa5612 [BE] Enable ruff's UP rules and autoformat optim/ (#105426)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105426
Approved by: https://github.com/malfet, https://github.com/albanD, https://github.com/aaronenyeshi, https://github.com/janeyx99
2023-07-18 21:07:43 +00:00
Ali Moezzi
8c3958eddc Fix lr_scheduler serialization contains bound methods issue (#102627)
Fixes #42376
`torch.save` serializes bound methods inside the LR scheduler, resulting in a large serialized file.

Test cases include checking the file size, checking whether `anneal_func` is bound, and checking that the file loads correctly.
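A hedged sketch of the kind of check such tests make (not the PR's actual test code): the state dict should hold plain data, with no bound methods dragging the whole scheduler into the pickle.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import OneCycleLR

opt = SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
scheduler = OneCycleLR(opt, max_lr=1.0, total_steps=10)
state = scheduler.state_dict()
# No entry should be a method bound to the scheduler instance.
assert not any(
    callable(v) and getattr(v, "__self__", None) is scheduler
    for v in state.values()
)
```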
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102627
Approved by: https://github.com/albanD
2023-06-23 03:53:15 +00:00
Ravikiran Parameshwara
8340762211 Update lr_scheduler.py to check the type of eta_min (#97003)
Add a float assertion for the `eta_min` parameter in `CosineAnnealingWarmRestarts`.

Fixes #87757

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97003
Approved by: https://github.com/janeyx99
2023-06-14 02:13:05 +00:00