pytorch/torch/optim
zeshengzong eb69f4e609 Add lr_lambda type check in MultiplicativeLR (#151973)
Fixes #81554
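
The patch validates `lr_lambda` up front, at scheduler construction, instead of letting the failure surface later inside `step()`. Below is a minimal sketch of that kind of check (hypothetical helper name; the actual change lives in `torch/optim/lr_scheduler.py` and may differ in detail):

```python
# Sketch only -- not the verbatim patch. lr_lambda may be a single
# callable or a list/tuple with one callable per param group.
def _check_lr_lambda(lr_lambda):
    lambdas = lr_lambda if isinstance(lr_lambda, (list, tuple)) else [lr_lambda]
    for fn in lambdas:
        if not callable(fn):
            raise TypeError(
                f"lr_lambda should be a function, but got {type(fn).__name__}"
            )
```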

## Test Result

### Before

```python
In [3]: import torch
   ...: class SimpleLinearModel(torch.nn.Module):
   ...:     def __init__(self):
   ...:         super(SimpleLinearModel, self).__init__()
   ...:         self.linear = torch.nn.Linear(10, 1)
   ...:
   ...:     def forward(self, x):
   ...:         return self.linear(x)
   ...:
   ...: net = SimpleLinearModel()
   ...: optimizer = torch.optim.Adam(net.parameters(), lr=0.01)
   ...: scheduler = torch.optim.lr_scheduler.MultiplicativeLR(optimizer, 0.95)
   ...: for i in range(10):
   ...:     print(i, scheduler.get_last_lr())
   ...:     scheduler.step()
TypeError: 'float' object is not callable
```

### After

```python
   ...: scheduler = torch.optim.lr_scheduler.MultiplicativeLR(optimizer, 0.95)
TypeError: lr_lambda should be a function, but got float
```
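
For reference, the intended usage passes a callable (or one callable per param group) that maps the epoch index to a multiplicative factor:

```python
# Correct usage: lr_lambda is a function, not the factor itself.
scheduler = torch.optim.lr_scheduler.MultiplicativeLR(
    optimizer, lr_lambda=lambda epoch: 0.95
)
```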

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151973
Approved by: https://github.com/janeyx99
2025-04-29 08:21:41 +00:00
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| `_multi_tensor` | | |
| `__init__.py` | | |
| `_adafactor.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `_functional.py` | PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) | 2025-01-21 16:57:27 +00:00 |
| `adadelta.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `adagrad.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `adam.py` | [MPS] grad scaler (#150255) | 2025-04-06 17:06:55 +00:00 |
| `adamax.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `adamw.py` | PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175) | 2025-01-21 16:57:27 +00:00 |
| `asgd.py` | Add scripts to check xrefs and urls (#151844) | 2025-04-28 09:30:07 +00:00 |
| `lbfgs.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `lr_scheduler.py` | Add lr_lambda type check in MultiplicativeLR (#151973) | 2025-04-29 08:21:41 +00:00 |
| `nadam.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `optimizer.py` | Include other accelerators in capturable docstr for optimizers (#149770) | 2025-04-24 20:38:42 +00:00 |
| `radam.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `rmsprop.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `rprop.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `sgd.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `sparse_adam.py` | Convert Tensor lr to 0-dim as needed for the optimizer to normally work (#145674) | 2025-03-17 23:07:05 +00:00 |
| `swa_utils.py` | Revert "Fix non-bitwise type annotations for Tensor operators (see #145838) (#146845)" | 2025-02-18 19:01:27 +00:00 |