Maggie Moss
84fe848503
Fix pyrefly error syntax (2/n) (#166448)
Ensures each pyrefly ignore silences only one error code.
After this, only ~40 files are left to clean up.
pyrefly check
lintrunner
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166448
Approved by: https://github.com/Skylion007
2025-10-29 00:36:40 +00:00
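For context, a hedged sketch of the suppression style this change enforces; the error code below is illustrative and may not match pyrefly's actual error taxonomy:
```python
# Blanket suppression: hides every pyrefly error on the line.
x: int = "oops"  # pyrefly: ignore

# Scoped suppression: silences only the named error code, so new,
# unrelated errors on the same line still surface.
y: int = "oops"  # pyrefly: ignore[bad-assignment]
```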
Maggie Moss
eb83c3ca23
Clean up unused Pyrefly suppressions (#166178)
Cleaning up ignores that are no longer needed in the repo and adding select suppressions so the main branch is clean.
Test plan:
`lintrunner -a`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166178
Approved by: https://github.com/oulgen
2025-10-25 05:32:21 +00:00
can-gaa-hou
39161e73fc
[Fix] missing lambda in torch._check (#165043)
Fixes more missing lambdas in torch._check calls in the source code. Inspired by #164225.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165043
Approved by: https://github.com/FFFrog, https://github.com/Skylion007
2025-10-10 17:11:55 +00:00
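The bug class, for context — torch._check takes a callable message so formatting is deferred until the check fails; a plain f-string is built eagerly on every call (a sketch, not a call site from the PR):
```python
import torch

def check_positive(x: int) -> None:
    # Broken pattern being fixed: the message is formatted eagerly,
    # even when the check passes.
    #   torch._check(x > 0, f"expected a positive value, got {x}")

    # Fixed: the lambda defers formatting until the check actually fails.
    torch._check(x > 0, lambda: f"expected a positive value, got {x}")

check_positive(3)
```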
Maggie Moss
4ab847bbc7
Pyrefly suppressions 4/n (#164615)
Adds suppressions so pyrefly will typecheck clean: https://github.com/pytorch/pytorch/issues/163283
Test plan:
dmypy restart && python3 scripts/lintrunner.py -a
pyrefly check
step 1: uncomment lines in the pyrefly.toml file
step 2: run pyrefly check
step 3: add suppressions, clean up unused suppressions
before: https://gist.github.com/maggiemoss/356645cf8cfe33123d9a27f23b30f7b1
after:
0 errors (2,753 ignored)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164615
Approved by: https://github.com/oulgen
2025-10-06 16:14:36 +00:00
can-gaa-hou
eb4361a801
[Fix] Adding missing f prefixes to formatted strings [1/N] (#164065)
As stated in the title.
* #164068
* #164067
* #164066
* -> #164065
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164065
Approved by: https://github.com/Skylion007
2025-09-29 04:53:00 +00:00
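The bug class being fixed, for context — without the f prefix the braces are never interpolated (illustrative example, not from the PR):
```python
op = "vector_norm"

broken = "unsupported op: {op}"   # -> "unsupported op: {op}"
fixed = f"unsupported op: {op}"   # -> "unsupported op: vector_norm"
assert fixed == "unsupported op: vector_norm"
```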
kbabiuchx
ea1883dfd3
Fixes #154982: add missing to_result_dtype in vector_norm (#155111)
Fixes #154982
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155111
Approved by: https://github.com/isuruf, https://github.com/eellison
2025-09-04 10:49:08 +00:00
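A minimal sketch of the pattern the fix restores, assuming the usual upcast-compute-downcast convention in the reduction decompositions; the reduction shown and the dtype choice are illustrative:
```python
import torch
from torch._prims_common import _maybe_convert_to_dtype

def norm_like(x: torch.Tensor) -> torch.Tensor:
    # Low-precision inputs are computed in a wider dtype for accuracy...
    computation_dtype = torch.float32 if x.dtype == torch.float16 else x.dtype
    result = x.to(computation_dtype).abs().sum()
    # ...and the missing step was casting back to the expected result dtype.
    return _maybe_convert_to_dtype(result, x.dtype)

print(norm_like(torch.randn(4, dtype=torch.float16)).dtype)  # torch.float16
```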
Laith Sakka
8485f19507
remove gso from vector_norm (#156530)
guard_or_false here does the same thing that guard_size_oblivious did; note that
the size is >= 0 and is size-like by definition, since it is a tensor size.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156530
Approved by: https://github.com/bobrenjc93
2025-06-21 08:42:36 +00:00
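A hedged sketch of the substitution described above (simplified; the real call sites are inside the vector_norm decomposition):
```python
from torch.fx.experimental.symbolic_shapes import guard_or_false

def can_use_fast_path(numel) -> bool:
    # guard_or_false evaluates the condition when it can and otherwise
    # assumes False, instead of raising a data-dependent guard error.
    # For a tensor size (always >= 0 and size-like by definition) this
    # matches what guard_size_oblivious provided here.
    return guard_or_false(numel == 1)

print(can_use_fast_path(1), can_use_fast_path(8))  # True False
```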
Slawomir Siwek
3742b7fb3a
Treat dim=[] same as dim=None (#153570)
Fixes https://github.com/pytorch/pytorch/issues/153568
Pull Request resolved: https://github.com/pytorch/pytorch/pull/153570
Approved by: https://github.com/ngimel
2025-05-20 22:44:29 +00:00
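Illustrative behavior after the fix, assuming the decomposition now matches eager, where an empty dim list means reduce over all dimensions:
```python
import torch

t = torch.randn(3, 4)
# dim=[] now reduces over all dimensions, the same as dim=None.
assert torch.equal(
    torch.linalg.vector_norm(t, dim=[]),
    torch.linalg.vector_norm(t, dim=None),
)
```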
Sheng Qin
18588fe2fc
Fix GuardOnDataDependentSymNode in the normalize operator (#152039)
Test Plan:
Dumped the local net torch.package to local
Ran
```
buck2 run scripts/shengqin:test_model_export -- /tmp/mtia_local_torch_package {\"local\":null}
```
succeeded
Reviewed By: hongyang-zhao
Differential Revision: D73405271
Pull Request resolved: https://github.com/pytorch/pytorch/pull/152039
Approved by: https://github.com/houseroad
2025-05-01 04:34:49 +00:00
Laith Sakka
5471e80fb4
Remove guard_size_oblivious from vector_norm decomposition. (#148809)
This PR removes the usage of guard_size_oblivious in vector_norm by inlining it in the runtime check,
which prevents any data-dependent error from ever appearing at the locations where guard_size_oblivious
used to exist. Before this PR it could potentially break. This is NOT BC-breaking, nor does it change semantics from eager.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148809
Approved by: https://github.com/bobrenjc93
2025-04-10 16:19:00 +00:00
Isuru Fernando
957faaadca
Avoid overflow in vector_norm for scalar input (#144073)
Fixes https://github.com/pytorch/pytorch/issues/143960, where torch.dist gave different results from eager because vector_norm overflowed; eager mode avoids the overflow for single-element reductions by not computing the power and then the root.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144073
Approved by: https://github.com/eellison, https://github.com/laithsakka
2025-04-07 17:10:10 +00:00
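A small repro of the failure class described above (values illustrative): for a single element |x| is exact, while the generic pow-then-root path overflows float32.
```python
import torch

x = torch.tensor([1e30], dtype=torch.float32)

naive = x.abs().pow(2).sum().pow(0.5)       # (1e30)^2 overflows float32 to inf
exact = torch.linalg.vector_norm(x, ord=2)  # single-element path: |x| = 1e30
print(naive.item(), exact.item())
```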
Aaron Orenstein
db4ce78d46
PEP585: More UP006 fixes (#146392)
This should be the final PR before we can enable RUFF UP006.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146392
Approved by: https://github.com/justinchuby, https://github.com/albanD, https://github.com/Skylion007
2025-02-20 06:18:13 +00:00
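The change class, for context — PEP 585 (RUFF rule UP006) replaces typing-module container generics with the builtins:
```python
# Before: typing generics (flagged by UP006).
from typing import Dict, List

def old_style(xs: List[int]) -> Dict[str, int]:
    return {"n": len(xs)}

# After: PEP 585 builtin generics, no typing import needed.
def new_style(xs: list[int]) -> dict[str, int]:
    return {"n": len(xs)}
```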
Aaron Orenstein
5b5766665d
PEP585 update - torch/_C torch/_decomp torch/_lazy torch/_library torch/_numpy torch/_prims torch/_refs torch/_strobelight (#145102)
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145102
Approved by: https://github.com/bobrenjc93
ghstack dependencies: #145105
2025-01-18 20:47:12 +00:00
cyy
b567ca0f51
Remove unused imported names in python files (#134438)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134438
Approved by: https://github.com/zou3519
2024-08-27 20:44:04 +00:00
Xuehai Pan
e7eeee473c
[BE][Easy][14/19] enforce style for empty lines in import segments in torch/_[a-c]*/ and torch/_[e-h]*/ and torch/_[j-z]*/ (#129765)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.
You can review these PRs via:
```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129765
Approved by: https://github.com/ezyang
2024-07-31 10:42:50 +00:00
Aaron Orenstein
4a5a87168e
[BE] typing for decorators - _prims_common/wrappers (#131567)
See #131429
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131567
Approved by: https://github.com/oulgen, https://github.com/zou3519
2024-07-25 14:35:13 +00:00
Aaron Orenstein
5a0068cc69
[BE] mypy: disallow untyped decorators (#131428)
Untyped decorators strip the types from their decorated function, so even if the underlying function is fully typed, callers to it don't get any benefit from its type annotations.
Step 1 - Enable the error and override in all the offending files.
#131429
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131428
Approved by: https://github.com/justinchuby, https://github.com/oulgen
2024-07-23 21:50:55 +00:00
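A minimal sketch of a typed decorator using ParamSpec, which is the standard way to keep the wrapped signature visible to callers (illustrative; not the PR's code):
```python
import functools
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def logged(fn: Callable[P, R]) -> Callable[P, R]:
    # Typed decorator: callers of the wrapped function keep its parameter
    # and return types instead of seeing Any.
    @functools.wraps(fn)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print(f"calling {fn.__name__}")
        return fn(*args, **kwargs)
    return wrapper

@logged
def add(a: int, b: int) -> int:
    return a + b
```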
Aaron Orenstein
afe15d2d2f
Flip default value for mypy disallow_untyped_defs [3/11] (#127840)
See #127836 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127840
Approved by: https://github.com/oulgen
2024-06-08 18:28:01 +00:00
Edward Z. Yang
c73c9457aa
Add guard_size_oblivious to vector_norm (#126772)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126772
Approved by: https://github.com/lezcano, https://github.com/Skylion007
ghstack dependencies: #126771
2024-05-21 19:53:21 +00:00
Aaron Gokaslan
2f3b0befed
[BE]: Apply ruff FURB 118. (#124743)
Replaces various lambdas with operator.itemgetter, which is more efficient (as it's a builtin function). Particularly useful when lambdas are used as 'key' functions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124743
Approved by: https://github.com/albanD, https://github.com/malfet
2024-04-26 14:34:52 +00:00
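The rewrite FURB118 performs, for context:
```python
import operator

pairs = [("b", 2), ("a", 1), ("c", 3)]

# Before: a lambda as the key function.
by_value_lambda = sorted(pairs, key=lambda p: p[1])

# After: operator.itemgetter is a builtin and slightly faster.
by_value = sorted(pairs, key=operator.itemgetter(1))
assert by_value == by_value_lambda
```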
Xu Zhao
7a64eb65e4
Fix Dynamo tests failing with "Failed running call_function <built-in function linalg_norm" (#120993)
When iterating over an array of ord values, we share the same torchdynamo context. This makes dynamo treat the `ord` variable as a dynamic shape, causing problems.
In the `vector_norm` decomposition, casting an int-typed ord to float fixes this problem.
Fixes https://github.com/pytorch/pytorch/issues/119795
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120993
Approved by: https://github.com/lezcano
2024-03-01 20:27:45 +00:00
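A hedged sketch of the fix (the real decomposition is torch._refs.linalg.vector_norm; this only illustrates the cast):
```python
import torch

def vector_norm_sketch(x: torch.Tensor, ord=2) -> torch.Tensor:
    # Cast an int ord to float up front so dynamo does not specialize
    # on each integer value as if it were a dynamic shape.
    if isinstance(ord, int):
        ord = float(ord)
    return x.abs().pow(ord).sum().pow(1.0 / ord)

print(vector_norm_sketch(torch.tensor([3.0, 4.0])))  # tensor(5.)
```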
Andrew M. James
4625ecb858
Add decomp for linalg.cross (#119809)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119809
Approved by: https://github.com/lezcano, https://github.com/peterbell10
2024-02-16 09:58:38 +00:00
lezcano
239fed7e1e
Add reference for linalg.vecdot (#108188)
Was addressing https://github.com/pytorch/pytorch/issues/108127, but
then I realised that vecdot is already CompositeImplicit. Pushing anyway
as a short-and-sweet PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108188
Approved by: https://github.com/peterbell10
2023-08-31 15:30:23 +00:00
Kurt Mohler
ee83c646bb
Replace _prims_common.check with torch._check* (#103240)
This relands most of the changes from #102219, which were backed out by #103128. However, instead of removing `_prims_common.check`, it adds a warning and a comment mentioning that it will be removed in the future and that `torch._check*` should be used instead. As mentioned in https://github.com/pytorch/pytorch/pull/103128#pullrequestreview-1466414415, `_prims_common.check` cannot yet be removed because of some internal usage.
Part of #72948
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103240
Approved by: https://github.com/albanD
2023-06-21 00:46:17 +00:00
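The migration shape, roughly — the torch._check* variants encode the exception type in the function name instead of taking it as an argument (a sketch; exact call sites vary):
```python
import torch

def validate(index: int, size: int) -> None:
    # Old style (being deprecated):
    #   from torch._prims_common import check
    #   check(0 <= index < size, lambda: f"index {index} out of range", IndexError)
    # New style:
    torch._check_index(0 <= index < size, lambda: f"index {index} out of range")
    torch._check(size >= 0, lambda: f"size must be non-negative, got {size}")

validate(2, 5)
```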
BowenBao
724a1ba2de
Tidy __all__ under torch._refs (#103712)
- Added ops that were missing under `__all__`.
- Some misc changes to helper functions to make them private.
- Set correct `fn.__module__` for `fn` created by `_make_alias`, when called in another module.
All modifications largely reference results from a hacked version of `test_public_bindings::test_correct_module_names`.
By default `torch._refs` is not included in the test because it is technically a private package.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103712
Approved by: https://github.com/lezcano
2023-06-20 00:04:58 +00:00
Ivan Zaitsev
821493715c
Back out "Remove check from _prims_common, replace with torch._check* ( #102219 )", Back out "Forwatd fix for D46427687" ( #103128 )
...
Test Plan: revertitparrot
Reviewed By: malfet
Differential Revision: D46506433
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103128
Approved by: https://github.com/malfet
2023-06-07 01:41:41 +00:00
Kurt Mohler
a84bb2709a
Remove check from _prims_common, replace with torch._check* (#102219)
Part of #72948
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102219
Approved by: https://github.com/lezcano, https://github.com/albanD
2023-06-03 02:23:21 +00:00
PyTorch MergeBot
a7efa0ce35
Revert "Remove check from _prims_common, replace with torch._check* ( #102219 )"
...
This reverts commit fb79d43649 .
Reverted https://github.com/pytorch/pytorch/pull/102219 on behalf of https://github.com/malfet due to Broke lint, see https://github.com/pytorch/pytorch/actions/runs/5158949959/jobs/9293466925 ([comment](https://github.com/pytorch/pytorch/pull/102219#issuecomment-1574245414 ))
2023-06-02 20:00:48 +00:00
Kurt Mohler
fb79d43649
Remove check from _prims_common, replace with torch._check* (#102219)
Part of #72948
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102219
Approved by: https://github.com/lezcano, https://github.com/albanD
2023-06-02 19:13:45 +00:00
Khushi Agrawal
301a28bf8c
[primTorch] move diagonal & add linalg.diagonal refs (#95774)
Fixes #85419
Also, add `_refs.linalg.diagonal`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95774
Approved by: https://github.com/lezcano
2023-03-06 17:59:47 +00:00
Fabio Rocha
e116ca93e1
Run test_torchinductor*.py with implicit_fallbacks=False (#94039)
This way it errors out for ops that don't have decomps and
requires you to add explicit fallbacks to lowering.py.
Turns out there are a lot, and this commit adds them as well.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94039
Approved by: https://github.com/lezcano, https://github.com/jansel, https://github.com/ngimel
2023-02-10 18:10:56 +00:00
Nikita Shulga
fd3a7264ae
[MPS] Add group_norm[fwd+backward] and mean_var (take 2) (#91190)
Use Prims to implement group_norm, group_norm_backward and mean_var.
Use `torch._ops.ops` instead of `torch.ops` in numerous subpackages in
order to be able to make them importable from `torch/backend/mps/__init__.py`, as this alias, defined in
15af4b1cee/torch/__init__.py (L1095),
is executed last during the init process.
Add `__all__` to `torch/backends/mps/__init__.py`, as well as alias all imports as private.
Add `TestNNMPS.test_group_norm_backward` that validates no NaNs are generated during the backward pass.
Fixes https://github.com/pytorch/pytorch/issues/88331
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91190
Approved by: https://github.com/albanD
2022-12-22 08:54:37 +00:00
PyTorch MergeBot
645eda0a00
Revert "[MPS] Add group_norm[fwd+backward] and mean_var ( #91190 )"
...
This reverts commit 371716eb36 .
Reverted https://github.com/pytorch/pytorch/pull/91190 on behalf of https://github.com/kit1980 due to Broke test_correct_module_names because of underscore _ops
2022-12-21 19:37:43 +00:00
Nikita Shulga
371716eb36
[MPS] Add group_norm[fwd+backward] and mean_var (#91190)
Use Prims to implement group_norm, group_norm_backward and mean_var.
Use `torch._ops.ops` instead of `torch.ops` in numerous subpackages in
order to be able to make them importable from `torch/backend/mps/__init__.py`, as this alias, defined in
15af4b1cee/torch/__init__.py (L1095),
is executed last during the init process.
Depends on https://github.com/pytorch/pytorch/pull/91203
Fixes https://github.com/pytorch/pytorch/issues/88331
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91190
Approved by: https://github.com/albanD
2022-12-21 17:33:27 +00:00
Khushi
a3f8495b84
[primTorch fix] use _maybe_convert_to_dtype (#85163)
Fixes #84561
- [x] fix lint tests
cc: @Lezcano!!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85163
Approved by: https://github.com/lezcano, https://github.com/mruberry
2022-10-31 17:08:55 +00:00
Edward Z. Yang
d73d4aa7de
Audit for error prone isinstance int/float and add lint (#87345)
We recently fixed a bug on symbolic-shapes branch where
an isinstance(x, int) test failed when passed a SymIntNode.
To prevent this, I've added a lint for all the codepaths
where we may pass SymInt/SymFloat directly to reject
direct isinstance int/float tests, and instead use one of
the aliases. The lint rule explains the options. I then
go and fix all of them.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87345
Approved by: https://github.com/bdhirsh, https://github.com/albanD
2022-10-21 15:55:24 +00:00
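The preferred pattern under the lint, roughly — testing against SymInt/SymFloat-aware aliases rather than raw int/float (IntLike/FloatLike are assumed here to live in torch._prims_common):
```python
from torch._prims_common import FloatLike, IntLike

def is_int_like(x) -> bool:
    # isinstance(x, int) is False for a torch.SymInt, which is how the
    # symbolic-shapes bug slipped in; IntLike covers both cases.
    return isinstance(x, IntLike)

def is_float_like(x) -> bool:
    return isinstance(x, FloatLike)

print(is_int_like(3), is_float_like(2.5))  # True True
```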
lezcano
11fe277b62
[PrimTorch] Add reference for torch.norm (#81765)
This ref does more things than `torch.norm`, and it fixes a few bugs
that `torch.norm` has. This implementation and the `torch.norm`
implementation come to terms in the next PR of this stack.
We put this PR first, as otherwise `test_decomp.py` was failing.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81765
Approved by: https://github.com/ngimel
2022-07-25 19:57:21 +00:00
Huy Do
12cb26509a
Apply ufmt to torch internal (#81643)
This is a big-bang PR; merge conflicts are probably expected and will be addressed at merge.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81643
Approved by: https://github.com/ezyang
2022-07-22 02:19:50 +00:00
lezcano
96dfee4ce7
[PrimTorch] Reference for linalg.norm (#81241)
After all the work done, this one's just a simple composition of the
others :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81241
Approved by: https://github.com/ngimel
2022-07-21 23:07:32 +00:00
lezcano
c5330183ca
[PrimTorch] Reference for linalg.matrix_norm (#81113)
As per title. I corrected a thing or two from my previous implementation
to make for better errors in some weird edge cases and to get a clearer
understanding of when this function supports low-precision types and
when it doesn't.
We also use the optimisation for bfloat16 within `vector_norm` within
this function.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81113
Approved by: https://github.com/ngimel
2022-07-21 23:07:32 +00:00
Horace He
a5fb41e3d3
Revert "Revert "Refactored prim utils into _prims_utils folder ( #81746 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81746
Approved by: https://github.com/anijain2305, https://github.com/Krovatkin
2022-07-20 23:43:57 +00:00
PyTorch MergeBot
e43a02c314
Revert "Refactored prim utils into _prims_utils folder ( #81088 )"
...
This reverts commit 80231d0a72 .
Reverted https://github.com/pytorch/pytorch/pull/81088 on behalf of https://github.com/jeanschmidt due to breaking internal tests
2022-07-19 19:56:41 +00:00
Horace He
80231d0a72
Refactored prim utils into _prims_utils folder (#81088)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81088
Approved by: https://github.com/ngimel
2022-07-19 03:55:51 +00:00
lezcano
24af7948ca
Add prim.svd, refs.linalg.svd, and refs.linalg.svdvals (#78616)
This is the first prim / ref added that has multiple returns.
There is an issue with `out_wrapper_multi` as currently implemented
(left a note). It assumes that the API is `svd(X, U=U, S=S, Vh=Vh)`,
when it's actually `svd(X, out=(U, S, Vh))`.
Even more, if we want to model PyTorch exactly, it should return a
`torch.return_types.svd`, rather than a `Tuple`.
As per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78616
Approved by: https://github.com/mruberry
2022-07-09 19:42:01 +00:00
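For reference, the actual out= convention the note above describes — one tuple keyword, not one keyword per output (shapes shown for full_matrices=False):
```python
import torch

X = torch.randn(4, 3)
U = torch.empty(4, 3)
S = torch.empty(3)
Vh = torch.empty(3, 3)

# Multi-output out= takes a single tuple: svd(X, out=(U, S, Vh)),
# and the call also returns a torch.return_types named tuple.
result = torch.linalg.svd(X, full_matrices=False, out=(U, S, Vh))
print(type(result).__name__, U.shape, S.shape, Vh.shape)
```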
lezcano
e9a9b50f48
Reference for linalg.vector_norm (#78350)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78350
Approved by: https://github.com/mruberry
2022-07-09 19:42:01 +00:00