Fixes #81554

## Test Result

### Before

```python
In [3]: import torch
   ...: class SimpleLinearModel(torch.nn.Module):
   ...:     def __init__(self):
   ...:         super(SimpleLinearModel, self).__init__()
   ...:         self.linear = torch.nn.Linear(10, 1)
   ...:
   ...:     def forward(self, x):
   ...:         return self.linear(x)
   ...:
   ...: net = SimpleLinearModel()
   ...: optimizer = torch.optim.Adam(net.parameters(), lr=0.01)
   ...: scheduler = torch.optim.lr_scheduler.MultiplicativeLR(optimizer, 0.95)
   ...: for i in range(10):
   ...:     print(i, scheduler.get_last_lr())
   ...:     scheduler.step()
TypeError: 'float' object is not callable
```

### After

```python
   ...: scheduler = torch.optim.lr_scheduler.MultiplicativeLR(optimizer, 0.95)
TypeError: lr_lambda should be a function, but got float
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151973
Approved by: https://github.com/janeyx99
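For context, `MultiplicativeLR` expects `lr_lambda` to be a callable (or a list of callables, one per parameter group) that maps the epoch index to a multiplicative factor. A minimal sketch of the intended usage, adapting the repro above by wrapping the constant `0.95` in a lambda (the model and loop here are illustrative, not part of the patch):

```python
import torch

net = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adam(net.parameters(), lr=0.01)

# lr_lambda must be callable: on each scheduler.step(), the current lr of
# every parameter group is multiplied by lr_lambda(epoch).
scheduler = torch.optim.lr_scheduler.MultiplicativeLR(
    optimizer, lr_lambda=lambda epoch: 0.95
)

for epoch in range(10):
    print(epoch, scheduler.get_last_lr())
    optimizer.step()
    scheduler.step()
```

The change is a fail-fast validation: passing a bare float now raises a `TypeError` that names `lr_lambda` at construction time, instead of surfacing later inside `scheduler.step()` as the opaque `'float' object is not callable`.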
Directory listing (`torch/optim`):

- `_multi_tensor/`
- `__init__.py`
- `_adafactor.py`
- `_functional.py`
- `adadelta.py`
- `adagrad.py`
- `adam.py`
- `adamax.py`
- `adamw.py`
- `asgd.py`
- `lbfgs.py`
- `lr_scheduler.py`
- `nadam.py`
- `optimizer.py`
- `radam.py`
- `rmsprop.py`
- `rprop.py`
- `sgd.py`
- `sparse_adam.py`
- `swa_utils.py`