Maggie Moss
d1a6e006e0
Fix syntax for pyrefly errors (#166496)
...
Last one! This ensures all existing suppressions match the expected syntax and each silences only one error code.
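A hedged sketch of the difference (the bracketed error-code form follows pyrefly's suppression conventions; the example line and the `bad-assignment` code are illustrative):
```python
# Blanket form -- hides every pyrefly error kind on the line:
x: int = "oops"  # pyrefly: ignore

# Scoped form -- silences only the named error code:
y: int = "oops"  # pyrefly: ignore[bad-assignment]
```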
pyrefly check
lintrunner
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166496
Approved by: https://github.com/Skylion007, https://github.com/mlazos
2025-10-29 20:00:25 +00:00
Yuanyuan Chen
a60d9e1f6d
Fix flake8 B028 warnings (#166224)
...
This PR fixes flake8 B028 warnings by specifying `stacklevel=2` in `warnings.warn`. The advantage is that users get more contextual information about where PyTorch warnings originate.
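A minimal sketch of the pattern B028 enforces (the function and message are illustrative):
```python
import warnings

def fetch(url: str, verify: bool = True) -> None:
    if not verify:
        # stacklevel=2 attributes the warning to the caller's line
        # rather than to this warnings.warn call inside the library.
        warnings.warn("unverified request", UserWarning, stacklevel=2)
```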
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166224
Approved by: https://github.com/ezyang
2025-10-26 06:18:55 +00:00
Maggie Moss
eb83c3ca23
Clean up unused Pyrefly suppressions (#166178)
...
Cleaning up ignores that are no longer needed in the repo and adding select suppressions so the main branch is clean.
Test plan:
`lintrunner -a`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166178
Approved by: https://github.com/oulgen
2025-10-25 05:32:21 +00:00
mansiag05
f8fccb1e48
[Code Clean] Clean asserts in torch/optim. (#165629)
...
Replaces 50 assert statements across 15 files in torch.optim with explicit if-checks raising AssertionError, so the checks cannot be disabled by Python's -O flag.
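A before/after sketch of the pattern (the learning-rate check itself is illustrative, not a line from the PR):
```python
lr = 0.01  # illustrative value

# Before: stripped entirely when Python runs with -O
assert lr > 0, f"Invalid learning rate: {lr}"

# After: always enforced, even under -O
if not lr > 0:
    raise AssertionError(f"Invalid learning rate: {lr}")
```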
Partially fixes #164878.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165629
Approved by: https://github.com/albanD
2025-10-23 15:56:29 +00:00
Yuanyuan Chen
fbe0d20a17
[2/N] More ruff SIM fixes (#165031)
...
This is a follow-up to #164695, applying ruff SIM rules to more files. Most changes simplify `dict.get(key, None)` to `dict.get(key)`, since `None` is already the default.
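For example (this is ruff's SIM910; the dict here is illustrative):
```python
config = {"momentum": 0.9}

# Flagged: the explicit None default is redundant
momentum = config.get("momentum", None)

# Simplified: dict.get already returns None for missing keys
momentum = config.get("momentum")
```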
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165031
Approved by: https://github.com/mlazos
2025-10-14 14:22:54 +00:00
PyTorch MergeBot
b8be796a57
Revert "[2/N] More ruff SIM fixes ( #165031 )"
...
This reverts commit 38095fbd13.
Reverted https://github.com/pytorch/pytorch/pull/165031 on behalf of https://github.com/albanD due to One of the changed lines started to fail on trunk ([comment](https://github.com/pytorch/pytorch/pull/165031#issuecomment-3390190870))
2025-10-10 13:42:14 +00:00
Yuanyuan Chen
38095fbd13
[2/N] More ruff SIM fixes (#165031)
...
This is a follow-up to #164695, applying ruff SIM rules to more files. Most changes simplify `dict.get(key, None)` to `dict.get(key)`, since `None` is already the default.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165031
Approved by: https://github.com/mlazos
2025-10-10 05:37:46 +00:00
Maggie Moss
b13cd141b3
Add pyrefly suppressions (#164748)
...
Adds suppressions so pyrefly will typecheck clean: https://github.com/pytorch/pytorch/issues/163283
Test plan:
dmypy restart && python3 scripts/lintrunner.py -a
pyrefly check
step 1: delete lines in the pyrefly.toml file from the `project-excludes` field
step 2: run pyrefly check
step 3: add suppressions, clean up unused suppressions
before: https://gist.github.com/maggiemoss/4b3bf2037014e116bc00706a16aef199
after:
0 errors (4,263 ignored)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164748
Approved by: https://github.com/oulgen
2025-10-07 17:31:18 +00:00
zeshengzong
fdc8ccc5bc
Make Adam, AdamW work with nonzero-dim Tensor betas (#149939)
...
Fixes #147921
## Changes
- Convert tensor `betas` using `_to_scalar`
- Change annotation of `betas` param
- Change param type in docs
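A hedged usage sketch of what the fix permits (the model is illustrative): betas passed as single-element tensors are normalized via `_to_scalar` rather than rejected.
```python
import torch

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,
    # single-element tensors of nonzero dim are now accepted as betas
    betas=(torch.tensor([0.9]), torch.tensor([0.999])),
)
```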
## Test Result
```bash
pytest -s test/test_optim.py -k test_tensor_lr -vv
```


Pull Request resolved: https://github.com/pytorch/pytorch/pull/149939
Approved by: https://github.com/janeyx99
Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
2025-10-06 22:03:25 +00:00
Maggie Moss
4ab847bbc7
Pyrefly suppressions 4/n (#164615)
...
Adds suppressions so pyrefly will typecheck clean: https://github.com/pytorch/pytorch/issues/163283
Test plan:
dmypy restart && python3 scripts/lintrunner.py -a
pyrefly check
step 1: uncomment lines in the pyrefly.toml file
step 2: run pyrefly check
step 3: add suppressions, clean up unused suppressions
before: https://gist.github.com/maggiemoss/356645cf8cfe33123d9a27f23b30f7b1
after:
0 errors (2,753 ignored)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164615
Approved by: https://github.com/oulgen
2025-10-06 16:14:36 +00:00
PyTorch MergeBot
5d7360bb03
Revert "Enable all SIM rules except disabled ones ( #164645 )"
...
This reverts commit 321e602692.
Reverted https://github.com/pytorch/pytorch/pull/164645 on behalf of https://github.com/izaitsevfb due to causes lint failures ([comment](https://github.com/pytorch/pytorch/pull/164645#issuecomment-3369274351))
2025-10-05 19:32:21 +00:00
Yuanyuan Chen
321e602692
Enable all SIM rules except disabled ones (#164645)
...
`SIM` rules are useful for simplifying boolean expressions and enhance code readability.
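One representative simplification (SIM103, needless-bool; the function is illustrative):
```python
# Before
def is_positive(x: float) -> bool:
    if x > 0:
        return True
    else:
        return False

# After
def is_positive(x: float) -> bool:
    return x > 0
```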
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164645
Approved by: https://github.com/ezyang
2025-10-05 07:38:25 +00:00
Yuanyuan Chen
a43c4c3972
[5/N] Apply ruff UP035 rule (#164423)
...
Continued code migration to enable ruff `UP035`. Most changes import `Callable` from `collections.abc` instead of `typing`.
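The typical change looks like this:
```python
# Before: deprecated location flagged by UP035
from typing import Callable

# After
from collections.abc import Callable

def apply(fn: Callable[[int], int], x: int) -> int:
    return fn(x)
```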
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164423
Approved by: https://github.com/ezyang
2025-10-02 07:31:11 +00:00
Parshant Sharma
4ad9fbc83a
Unify TypeAlias definitions in optimizer.py (#161493)
...
Fixes #160834
This PR unifies the TypeAlias definitions in [optimizer.py](https://github.com/pytorch/pytorch/blob/main/torch/optim/optimizer.py); a sketch follows the list below.
This ensures the following:
- Consistency and Standardization
- Enhanced IDE support
- Prevents runtime confusion
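As a hedged illustration of the unified style (the alias body is simplified from optimizer.py, not copied verbatim):
```python
from collections.abc import Iterable
from typing import Any, TypeAlias, Union

import torch

# One consistent TypeAlias form, e.g. for the params argument
# accepted by Optimizer.__init__
ParamsT: TypeAlias = Union[Iterable[torch.Tensor], Iterable[dict[str, Any]]]
```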
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161493
Approved by: https://github.com/Skylion007
2025-08-30 00:35:02 +00:00
zeshengzong
f8bd85827d
Optimize zero_grad description (#161239)
...
Optimize [zero_grad doc](https://docs.pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html) format and description.
## Test Result
### Before
![Before](https://github.com/user-attachments/assets/e1db973c-57e8-4525-90e7-0500cde2263d)
### After
![After](https://github.com/user-attachments/assets/5579c4fb-a857-4030-9303-34770083d1a5)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/161239
Approved by: https://github.com/janeyx99
2025-08-22 06:18:25 +00:00
Xuehai Pan
596b418391
[BE][PYFMT] migrate PYFMT for {torch,test}/{nn,optim}/** to ruff format (#144548)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144548
Approved by: https://github.com/ezyang
2025-06-14 11:27:04 +00:00
zeshengzong
82dc3457e0
Add load_state_dict doc hint about invocation order when working with lr_scheduler (#149942)
...
Fixes #119168
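A hedged sketch of the checkpoint-restore ordering the hint concerns (the exact doc wording differs; the recommended order shown here is an assumption):
```python
import torch

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
checkpoint = {"optimizer": optimizer.state_dict(), "scheduler": scheduler.state_dict()}

# Restore the optimizer before the scheduler, so the scheduler resumes
# on top of the restored optimizer state rather than a stale one.
optimizer.load_state_dict(checkpoint["optimizer"])
scheduler.load_state_dict(checkpoint["scheduler"])
```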
## Test Result

Pull Request resolved: https://github.com/pytorch/pytorch/pull/149942
Approved by: https://github.com/janeyx99
Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
2025-05-15 01:07:36 +00:00
Aaron Gokaslan
f05b38aa26
[BE]: Improve decorator typing for Optimizer subclasses (#153374)
...
Improves typing so that the Optimizer subclasses (all of which override step) no longer have their type signatures erased when this decorator is used. Kwarg values and return types now propagate.
This complements @tsunghsienlee's PR #153367, as the type signature of step() was being erased on all the optimizer subclasses by this untyped decorator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153374
Approved by: https://github.com/janeyx99, https://github.com/tsunghsienlee
2025-05-12 22:55:25 +00:00
Tsung-Hsien Lee
ea4b65ab60
Fix the type hint of step() with default value (#153367)
...
Summary: Because the default value of `closure` is `None`, this fixes calling `step()` with no arguments. The previous typing (https://github.com/pytorch/pytorch/pull/102593) only allowed `step(closure=None)` and `step(None)`.
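The corrected hint, in sketch form, matches the runtime signature:
```python
from collections.abc import Callable
from typing import Optional

class Optimizer:
    # With the None default, a bare opt.step() typechecks --
    # previously only step(None) and step(closure=None) did.
    def step(self, closure: Optional[Callable[[], float]] = None) -> Optional[float]:
        ...
```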
Test Plan: contbuild & OSS CI
Differential Revision: D74560785
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153367
Approved by: https://github.com/cyyever, https://github.com/Skylion007, https://github.com/janeyx99
2025-05-12 15:52:59 +00:00
Jane Xu
dccc41581a
Include other accelerators in capturable docstr for optimizers (#149770)
...
Fixes #149722
@ILCSFNO is this better?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149770
Approved by: https://github.com/albanD
2025-04-24 20:38:42 +00:00
Tony-Y
78715a181f
Convert Tensor lr to 0-dim as needed for the optimizer to work normally (#145674)
...
Fixes #145461
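A hedged sketch of the conversion (the squeeze-based reshape is an assumption about the mechanics, not the PR's exact code):
```python
import torch

lr = torch.tensor([0.01])  # 1-dim, single-element lr tensor
if lr.numel() == 1 and lr.dim() != 0:
    lr = lr.squeeze()  # now 0-dim, the shape the optimizer expects
```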
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145674
Approved by: https://github.com/janeyx99
Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
2025-03-17 23:07:05 +00:00
Aaron Gokaslan
292af3cc89
[BE][Ez]: ISC001 Auto concatenate implicit one line strings (#146408)
...
Apply the ruff rule for implicit string concatenation; it autofixes strings that are all of the same type and on the same line. These lines were likely broken up by autoformatters in the past. All fixes are automated using the autofixes in ISC001.
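The autofix in sketch form (the message is illustrative):
```python
# Before: two adjacent literals implicitly concatenated on one line
message = "Gradient overflow " "detected"

# After: a single literal
message = "Gradient overflow detected"
```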
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146408
Approved by: https://github.com/justinchuby, https://github.com/janeyx99
2025-02-04 19:07:04 +00:00
Aaron Orenstein
0afd335174
PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175)
...
See #145101 for details.
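The typical PEP 585 rewrite (illustrative function):
```python
# Before: typing-module aliases
from typing import Dict, List

def bucket(xs: List[int]) -> Dict[str, int]:
    return {"count": len(xs)}

# After: builtin generics, no typing import needed
def bucket(xs: list[int]) -> dict[str, int]:
    return {"count": len(xs)}
```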
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145175
Approved by: https://github.com/bobrenjc93
2025-01-21 16:57:27 +00:00
PyTorch MergeBot
5fd881a5b6
Revert "PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu ( #145175 )"
...
This reverts commit 54a00af2c6.
Reverted https://github.com/pytorch/pytorch/pull/145175 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it seems to break some trunk tests ([comment](https://github.com/pytorch/pytorch/pull/145175#issuecomment-2603418267))
2025-01-21 00:49:55 +00:00
Aaron Orenstein
54a00af2c6
PEP585 update - torch/nn torch/optim torch/package torch/profiler torch/serialization torch/sparse torch/xpu (#145175)
...
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145175
Approved by: https://github.com/bobrenjc93
2025-01-20 22:32:59 +00:00
PyTorch MergeBot
154185dcd0
Revert "Removed unused _RequiredParameter ( #144771 )"
...
This reverts commit 6a5f895e54.
Reverted https://github.com/pytorch/pytorch/pull/144771 on behalf of https://github.com/malfet due to It broke a number of cpuinductor tests ([comment](https://github.com/pytorch/pytorch/pull/144771#issuecomment-2593293542))
2025-01-15 15:51:33 +00:00
Piergiacomo De Marchi
6a5f895e54
Removed unused _RequiredParameter (#144771)
...
As per this [discussion](https://discuss.pytorch.org/t/a-question-about-requiredparameter/137977), I figured that `_RequiredParameter` is no longer used.
The `required` object was initially introduced in this [PR](4db6667923) because the `SGD` optimizer did not offer a default value for the learning rate. However, there isn't a single place in the code base using `_RequiredParameter` or `required`, so I am removing both.
Everything not included in this PR is Not a Contribution.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144771
Approved by: https://github.com/janeyx99
2025-01-15 04:11:17 +00:00
Aaron Orenstein
45ef3309e3
[BE] typing for decorators (#144161)
...
Summary:
Untyped decorators strip annotations from the decorated items; a signature-preserving sketch follows the module list below.
- _compile
- _inductor/fx_passes/post_grad
- _inductor/lowering
- _library/custom_ops
- _meta_registrations
- _ops
- _refs/nn/functional
- ao/quantization/quantizer/xnnpack_quantizer_utils
- distributed/_composable/contract
- fx/experimental/graph_gradual_typechecker
- fx/experimental/migrate_gradual_types/constraint_generator
- optim/optimizer
- signal/windows/windows
- testing/_internal/common_device_type
- torch/_inductor/decomposition
- utils/flop_counter
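A generic sketch of the fix (ParamSpec-based; the decorator here is illustrative, not the PR's code):
```python
from collections.abc import Callable
from typing import ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def timed(fn: Callable[P, R]) -> Callable[P, R]:
    # ParamSpec keeps fn's parameter and return types visible to
    # callers instead of erasing them to an untyped Callable.
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        return fn(*args, **kwargs)
    return wrapper
```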
Test Plan: unit tests
Differential Revision: D62302684
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144161
Approved by: https://github.com/Skylion007, https://github.com/albanD
2025-01-04 16:40:09 +00:00
Xuehai Pan
e1196dfe51
Deprecate torch._utils.is_compiling() (#127690)
...
This PR is split from PR #126898.
- #126898
------
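The supported replacement is `torch.compiler.is_compiling()`:
```python
import torch

# Deprecated: torch._utils.is_compiling()
# Replacement:
if torch.compiler.is_compiling():
    print("running under torch.compile")
```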
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127690
Approved by: https://github.com/Skylion007, https://github.com/malfet
2024-12-08 22:55:36 +00:00
Michael Lazos
1fd4757fdc
Support tensor betas in Adam and AdamW (#134171)
...
Adds support for beta1 and beta2 to be wrapped in tensors for Adam and AdamW.
Fixes https://github.com/pytorch/pytorch/issues/133898
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134171
Approved by: https://github.com/janeyx99
2024-11-15 21:55:55 +00:00
PyTorch MergeBot
1d28b8b6d5
Revert "Deprecate torch._utils.is_compiling() and torch._dynamo.external_utils.is_compiling() ( #127690 )"
...
This reverts commit e84d1121ad.
Reverted https://github.com/pytorch/pytorch/pull/127690 on behalf of https://github.com/ZainRizvi due to Sorry but this is breaking internally. More details in D65483292 ([comment](https://github.com/pytorch/pytorch/pull/127690#issuecomment-2458381056))
2024-11-05 23:10:38 +00:00
Xuehai Pan
e84d1121ad
Deprecate torch._utils.is_compiling() and torch._dynamo.external_utils.is_compiling() (#127690)
...
This PR is split from PR #126898.
- #126898
------
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127690
Approved by: https://github.com/Skylion007, https://github.com/malfet
2024-11-05 10:44:56 +00:00
ErezYosef
197601eeea
Add Support for Tracking Parameter Names (named_parameters) in Optimizer State Dict (#134107)
...
A proposal addressing Issue #1489: **Optimizer should track parameter names and not id.**
(also mentioned here: [[RFC] Introducing FQNs/clarity eyeglasses to optim state_dict](https://dev-discuss.pytorch.org/t/rfc-introducing-fqns-clarity-to-optim-state-dict/1552))
## Summary
This PR introduces a backward-compatible enhancement where optimizers track parameter names instead of just their id.
Optimizers can be initialized with `named_parameters()` as:
```python
optimizer = optim.SGD(model.named_parameters(), lr=0.01, momentum=0.9)
```
This allows for greater clarity and ease when handling optimizers, as the parameters' names are preserved within the optimizer’s `state_dict` as:
```
state_dict = {
    'state': {
        0: {'momentum_buffer': tensor(...), ...},
        1: {'momentum_buffer': tensor(...), ...},
    },
    'param_groups': [
        {
            'lr': 0.01,
            'weight_decay': 0,
            ...
            'params': [0, 1],
            'param_names': ['layer.weight', 'layer.bias']  # optional
        }
    ]
}
```
Loading `state_dict` is not changed (backward-compatible) and the `param_names` key will be ignored.
## Key Features
#### Named Parameters in Optimizer Initialization:
Optimizers can accept the output of `model.named_parameters()` during initialization, allowing them to store parameter names directly.
#### Parameter Names in `state_dict`:
The parameter names are saved as a list in the optimizer’s `state_dict` with key `param_names`, alongside the `params` indices, ensuring seamless tracking of both names and parameters.
## Backward Compatibility
#### No Breaking Changes:
This change is fully backward-compatible. The added `param_names` key in the optimizer's `state_dict` is ignored when loading a state to the optimizer.
#### Customization with Hooks:
For more control, the loaded state_dict can be modified using a custom `register_load_state_dict_pre_hook`, providing flexibility for different design needs.
## Documentation Updates
Please refer to the documentation changes for more details on how this feature is implemented and how it can be used effectively.
## Solution Example:
A suggested solution to the problem mentioned in #1489, for the same parameters but in a different order.
The following `register_load_state_dict_pre_hook` should be added to the optimizer before loading to enable loading the state dict:
```python
def adapt_state_dict_ids(optimizer, state_dict):
    # assuming a single param group.
    current_state_group = optimizer.state_dict()['param_groups'][0]
    loaded_state_group = state_dict['param_groups'][0]
    # same number of params, same names, only different ordering
    current_state_name_to_id_mapping = {}  # mapping -- param_name: id
    for i, name in enumerate(current_state_group['param_names']):
        current_state_name_to_id_mapping[name] = current_state_group['params'][i]
    # changing the ids of the loaded state dict to match the order of the given state dict.
    for i, name in enumerate(current_state_group['param_names']):
        loaded_state_group['params'][i] = current_state_name_to_id_mapping[name]
    return state_dict
```
In this code, the loaded `state_dict` ids are adapted to match the order of the current optimizer `state_dict`.
Both the previous and the current optimizers must be initialized with `named_parameters()` to have the 'param_names' key in the dict.
### Note
This is my first contribution to PyTorch, and I wish to receive feedback or suggestions for improvement.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134107
Approved by: https://github.com/janeyx99
Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
2024-10-14 19:24:44 +00:00
Jane Xu
ddc7b6d0b4
Removes confusing note, addresses #38006 (#137535)
...
Fixes #38006
The note was originally added in https://github.com/pytorch/pytorch/pull/30257, which tried to ensure that the gradient wasn't modified in the optimizer. The note creates more confusion than it resolves, so removing it is better than leaving it in, especially because most uses of closure that I know of _do_ modify the grads.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137535
Approved by: https://github.com/albanD
2024-10-09 04:00:38 +00:00
Jane Xu
b1612569f6
[BE] Clarify defaulting behavior in optimizer (#135384)
...
Fixes #135340
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135384
Approved by: https://github.com/drisspg, https://github.com/jainapurva
2024-09-06 21:52:55 +00:00
Masaki Kozuki
702c810780
move param's device check to _init_group for fused (#131153)
...
There are cases where params are on the meta device when the optimizer's dunder init is called and are only materialized at the first computation. This change allows that situation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131153
Approved by: https://github.com/mlazos, https://github.com/janeyx99
Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99@users.noreply.github.com>
2024-08-17 04:49:47 +00:00
Xuehai Pan
758a0a88a2
[BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200)
...
This PR removes unnecessary `pass` statements. This is semantically safe because the bytecode for the Python code does not change.
Note that if there is a docstring in the function, an empty function does not need a `pass` statement as a placeholder.
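The rule in sketch form:
```python
# Before: PIE790 flags the pass as a redundant placeholder
def not_supported():
    """Stub kept for API compatibility."""
    pass

# After: the docstring alone is a valid body
def not_supported():
    """Stub kept for API compatibility."""
```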
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133200
Approved by: https://github.com/malfet, https://github.com/eqy, https://github.com/kit1980
2024-08-15 15:50:19 +00:00
Jane Xu
14750dd737
Correct return type of grouping helper function in Optimizer (#133360)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133360
Approved by: https://github.com/albanD
2024-08-14 01:56:02 +00:00
PyTorch MergeBot
cbee9c1fd2
Revert "Deprecate torch._utils.is_compiling() and torch._dynamo.external_utils.is_compiling() ( #127690 )"
...
This reverts commit 0e7e61f7ce.
Reverted https://github.com/pytorch/pytorch/pull/127690 on behalf of https://github.com/kit1980 due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/127690#issuecomment-2272370386))
2024-08-07 00:05:20 +00:00
Xuehai Pan
0e7e61f7ce
Deprecate torch._utils.is_compiling() and torch._dynamo.external_utils.is_compiling() (#127690)
...
This PR is split from PR #126898.
- #126898
------
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127690
Approved by: https://github.com/Skylion007, https://github.com/malfet
2024-08-03 09:43:38 +00:00
xinyu-intel
2ee9895304
Support optimizer capturable on hpu and xpu (#132119)
...
as title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132119
Approved by: https://github.com/jgong5, https://github.com/janeyx99
2024-08-02 08:19:52 +00:00
Xuehai Pan
30293319a8
[BE][Easy][19/19] enforce style for empty lines in import segments in torch/[o-z]*/ (#129771)
...
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.
You can review these PRs via:
```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129771
Approved by: https://github.com/justinchuby, https://github.com/janeyx99
2024-08-01 17:07:14 +00:00
Jane Xu
3816f6420a
[BE] remove unnecessary _dispatch_sqrt by using ** 0.5 (#131358)
...
Based on the discussion at https://github.com/pytorch/pytorch/pull/129905#discussion_r1675605075, where `** 0.5` was found to be no slower than `math.sqrt`.
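The replacement in sketch form:
```python
import math

x = 2.0
# Equivalent result without the math import; per the linked
# discussion, ** 0.5 is not slower.
assert math.sqrt(x) == x ** 0.5
```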
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131358
Approved by: https://github.com/albanD
2024-07-30 18:08:17 +00:00
PyTorch MergeBot
e4db5dc1c4
Revert "[BE] remove unnecessary _dispatch_sqrt by using ** 0.5 ( #131358 )"
...
This reverts commit 4c7f22dee2.
Reverted https://github.com/pytorch/pytorch/pull/131358 on behalf of https://github.com/janeyx99 due to Internal uses this private API and landing that has been a pain so we're reverting this first ([comment](https://github.com/pytorch/pytorch/pull/131358#issuecomment-2253190654))
2024-07-26 17:35:27 +00:00
PyTorch MergeBot
c9888c2739
Revert "[BE] typing for decorators - optim/optimizer ( #131583 )"
...
This reverts commit a1dad77dfa.
Reverted https://github.com/pytorch/pytorch/pull/131583 on behalf of https://github.com/atalman due to Breaks CI: [GH job link](https://github.com/pytorch/pytorch/actions/runs/10105959146/job/27947741162) [HUD commit link](a1dad77dfa) ([comment](https://github.com/pytorch/pytorch/pull/131583#issuecomment-2252784280))
2024-07-26 13:41:22 +00:00
Aaron Orenstein
a1dad77dfa
[BE] typing for decorators - optim/optimizer (#131583)
...
See #131429
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131583
Approved by: https://github.com/janeyx99
ghstack dependencies: #131568, #131569, #131570, #131571, #131572, #131573, #131574, #131575, #131576, #131577, #131578, #131579, #131580, #131581, #131582
2024-07-26 05:00:07 +00:00
Jane Xu
4c7f22dee2
[BE] remove unnecessary _dispatch_sqrt by using ** 0.5 (#131358)
...
Based on the discussion at https://github.com/pytorch/pytorch/pull/129905#discussion_r1675605075, where `** 0.5` was found to be no slower than `math.sqrt`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131358
Approved by: https://github.com/albanD
2024-07-24 14:58:57 +00:00
Aaron Orenstein
5a0068cc69
[BE] mypy: disallow untyped decorators (#131428)
...
Untyped decorators strip the types from the functions they decorate, so even if the underlying function is fully typed, callers get no benefit from its annotations.
Step 1 - Enable the error and override in all the offending files.
#131429
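Concretely, the check is turned on globally while each offending file keeps an inline override, a pattern like the following sketch (mirroring PyTorch's existing `# mypy: allow-untyped-defs` headers; the example decorator is illustrative):
```python
# mypy: allow-untyped-decorators
# File-level override: this module still uses untyped decorators,
# so the globally enabled error is suppressed here for now.

def untyped_decorator(fn):  # no annotations -- would trigger the error
    return fn

@untyped_decorator
def step_count() -> int:
    return 0
```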
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131428
Approved by: https://github.com/justinchuby, https://github.com/oulgen
2024-07-23 21:50:55 +00:00
Sahdev Zala
9795dba1e0
Optim package docstring fix (#129086)
...
Fix docstrings in various files in the optim package. This is the last remaining fix for issue #112593.
The fix can be verified by running `pydocstyle path-to-file --count`.
Fixes #112593
Related #128248
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129086
Approved by: https://github.com/janeyx99
2024-06-21 14:30:53 +00:00
PyTorch MergeBot
90bb510ece
Revert "Deprecate torch._utils.is_compiling() and torch._dynamo.external_utils.is_compiling() ( #127690 )"
...
This reverts commit 348b181a97.
Reverted https://github.com/pytorch/pytorch/pull/127690 on behalf of https://github.com/clee2000 due to sorry I think https://github.com/pytorch/pytorch/pull/126898#issuecomment-2142884456 is still relevant, I will reach out to them to see what needs to be done in internal to get this remerged ([comment](https://github.com/pytorch/pytorch/pull/127690#issuecomment-2159248859))
2024-06-10 20:44:42 +00:00