Commit Graph

30 Commits

Author SHA1 Message Date
Xuehai Pan
1cb4e2df65 [BE][PYFMT] migrate PYFMT for torch._inductor to ruff format (#144550)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144550
Approved by: https://github.com/jansel
2025-02-28 13:33:19 +00:00
Aaron Orenstein
893ca1dfe1 PEP585 update - torch/_inductor/[_-i]* (#145137)
See #145101 for details.
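
For context, a small sketch of what the PEP 585 migration looks like (hypothetical signatures, not code taken from the PR):

```python
# Before: typing generics.
from typing import Dict, List, Tuple

def old_style(sizes: List[Tuple[int, int]]) -> Dict[str, int]:
    return {"count": len(sizes)}

# After: builtin generics per PEP 585 (Python 3.9+).
def new_style(sizes: list[tuple[int, int]]) -> dict[str, int]:
    return {"count": len(sizes)}
```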

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145137
Approved by: https://github.com/bobrenjc93
2025-01-19 01:22:47 +00:00
bobrenjc93
a3ab27b8e0 Migrate from Tuple -> tuple in torch/_inductor (#144264)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144264
Approved by: https://github.com/eellison
2025-01-07 03:27:27 +00:00
Tom Ritchford
da67a6a7bb [inductor] Replace set by OrderedSet (#138466)
Uses the set_linter from https://github.com/pytorch/pytorch/pull/138454
and considerable manual editing
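
A tiny sketch of the substitution; the import path below is what inductor code typically uses, but treat it as an assumption:

```python
# set -> OrderedSet keeps deterministic (insertion-order) iteration, which is
# presumably the motivation; plain set iteration order can vary between runs.
from torch.utils._ordered_set import OrderedSet

seen = OrderedSet()          # previously: seen = set()
for name in ["relu", "add", "relu"]:
    seen.add(name)
print(list(seen))            # ['relu', 'add']
```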

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138466
Approved by: https://github.com/eellison
2024-12-13 16:08:45 +00:00
Tom Ritchford
dc23f1944a Remove unused Python variables in torch/[_-a]* (#133492)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133492
Approved by: https://github.com/albanD
2024-12-12 17:39:14 +00:00
PyTorch MergeBot
5c97ac9721 Revert "Remove unused Python variables in torch/[_-a]* (#133492)"
This reverts commit fda975a7b3.

Reverted https://github.com/pytorch/pytorch/pull/133492 on behalf of https://github.com/clee2000 due to Sorry, I need to revert this in order to revert something else.  The only thing you need to do is rebase and remerge ([comment](https://github.com/pytorch/pytorch/pull/133492#issuecomment-2536635516))
2024-12-11 17:29:12 +00:00
Tom Ritchford
fda975a7b3 Remove unused Python variables in torch/[_-a]* (#133492)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/133492
Approved by: https://github.com/albanD
2024-12-10 21:48:44 +00:00
eellison
34e420519d [Reland] dont decompose baddbmm (#141045)
Previously the decomposition would upcast inputs to fp32. This led to a slowdown compared to eager, which would run in fp16. We also tried keeping the bmm in fp16 and upcasting only for the epilogue, but that led to worse numerics because the bmm in eager does the epilogue entirely in fp32, without a downcast in the bmm accumulator.
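
A minimal sketch of the two paths described above (illustrative only, not the removed decomposition itself); shapes are arbitrary and an fp16-capable device is assumed:

```python
import torch

# fp16 matmul may be unsupported on older CPU builds, so prefer CUDA if present.
device = "cuda" if torch.cuda.is_available() else "cpu"
inp = torch.randn(8, 64, 64, dtype=torch.float16, device=device)
batch1 = torch.randn(8, 64, 32, dtype=torch.float16, device=device)
batch2 = torch.randn(8, 32, 64, dtype=torch.float16, device=device)

# Eager path: the whole op runs in fp16.
out_eager = torch.baddbmm(inp, batch1, batch2)

# Decomposed path with an fp32 upcast, roughly the behaviour being removed:
out_decomposed = (inp.float() + torch.bmm(batch1.float(), batch2.float())).half()
```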

Fix for https://github.com/pytorch/pytorch/issues/137897

Reland of https://github.com/pytorch/pytorch/pull/137904

Pull Request resolved: https://github.com/pytorch/pytorch/pull/141045
Approved by: https://github.com/BoyuanFeng
2024-11-19 21:07:58 +00:00
PyTorch MergeBot
6aef58a249 Revert "Dont decompose aten.baddmm in inductor (#137904)"
This reverts commit c066f4a055.

Reverted https://github.com/pytorch/pytorch/pull/137904 on behalf of https://github.com/huydhn due to Sorry for reverting your change but I think the test is failing in trunk, maybe a landrace? ([comment](https://github.com/pytorch/pytorch/pull/137904#issuecomment-2443158194))
2024-10-29 04:08:11 +00:00
eellison
c066f4a055 Dont decompose aten.baddmm in inductor (#137904)
Previously the decomposition would upcast inputs to fp32. This led to a slowdown compared to eager, which would run in fp16. We also tried keeping the bmm in fp16 and upcasting only for the epilogue, but that led to worse numerics because the bmm in eager does the epilogue entirely in fp32, without a downcast in the bmm accumulator.

Fix for https://github.com/pytorch/pytorch/issues/137897

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137904
Approved by: https://github.com/ngimel
2024-10-29 00:54:29 +00:00
Oguz Ulgen
09f9c256ad Add basic mypy annotations to inductor (#132416)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132416
Approved by: https://github.com/XuehaiPan, https://github.com/jamesjwu
ghstack dependencies: #132415
2024-08-04 18:43:37 +00:00
PyTorch MergeBot
f2ddd5e9e0 Revert "Add basic mypy annotations to inductor (#132416)"
This reverts commit 78927d37f6.

Reverted https://github.com/pytorch/pytorch/pull/132416 on behalf of https://github.com/ZainRizvi due to Sorry, this PR has entered a weird state in the diff train. Trying to revert it to skip it, and then we can try relanding it ([comment](https://github.com/pytorch/pytorch/pull/132415#issuecomment-2267631785))
2024-08-04 18:39:29 +00:00
Oguz Ulgen
78927d37f6 Add basic mypy annotations to inductor (#132416)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132416
Approved by: https://github.com/XuehaiPan, https://github.com/jamesjwu
ghstack dependencies: #132415
2024-08-01 20:14:25 +00:00
Xuehai Pan
b6d477fd56 [BE][Easy][16/19] enforce style for empty lines in import segments in torch/_i*/ (#129768)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by the linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129768
Approved by: https://github.com/jansel
2024-07-20 16:20:58 +00:00
Aaron Orenstein
ea614fb2b1 Flip default value for mypy disallow_untyped_defs [2/11] (#127839)
See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127839
Approved by: https://github.com/oulgen
2024-06-08 18:23:08 +00:00
Edward Z. Yang
5e58227d27 Rebind and refresh unbacked bindings in FakeTensorUpdater (#124314)
Like the previous two PRs, this does the rebinding and binding computation, just in FakeTensorUpdater. FakeTensorUpdater modifies the FX graph in place, so its usage pattern is slightly different, but the change is still pretty short.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124314
Approved by: https://github.com/IvanKobzarev, https://github.com/lezcano
ghstack dependencies: #124310
2024-04-25 02:08:55 +00:00
Isuru Fernando
67ec870234 Fix FakeTensorUpdater logic for updating fake tensors (#116168)
Fixes https://github.com/pytorch/pytorch/issues/114464

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116168
Approved by: https://github.com/peterbell10
2024-03-15 18:22:24 +00:00
Edward Z. Yang
d03173e88c Unify MYPYINDUCTOR and MYPY (#118432)
The original motivation for MYPYINDUCTOR was a faster type checking configuration that only checked a subset of files. With the removal of `follow_imports = ignore`, we are now able to use dmypy to do fast incremental typechecking, eliminating the need for this.

Perhaps erroneously, when I teed up this PR I elected to delete the `follow_imports = skip` designations in mypy-inductor.ini. This led to a number of extra type error suppressions that I manually edited. You will need to review.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118432
Approved by: https://github.com/Skylion007
ghstack dependencies: #118414, #118418
2024-01-27 17:23:20 +00:00
Peter Bell
3ec4f00316 [inductor] Allow reinplacing functionalized scatter ops (#116899)
This expands the reinplacing pass to allow reinplacing view-scatter operations.
For example, if our Python code is:
```python
a = view1(inp)
b = view2(a)
b.copy_(src)
```
this generates a functionalized graph like:
```python
a = view1(inp)
a_updated = view2_scatter(a, src)
inp_updated = view1_scatter(inp, a_updated)
```

First, the `canonicalize_view_scatter_ops` step rewrites the functionalized graph
into the form:
```python
inp_updated = _generalized_scatter(inp, src, [view1, view2])
a_updated = view1(inp_updated)
```

I then register `_generalized_scatter` as a normal inplaceable op which can be
handled by the pre-existing mechanism. Since we've fused the two scatter ops into
one, the reinplacing pass sees only one user of `inp`, which allows the entire
operation to be reinplaced if desired (and I add heuristics that sometimes choose
not to reinplace).

Finally, there is a decomposition step which decomposes `_generalized_scatter`
operations either back into view_scatter operations (when out-of-place) or into
the version with mutations (when in-place). When introducing mutations, the
reinplaced version is equivalent to the original mutating code:
```python
a = view1(inp)
b = view2(a)
b.copy_(src)
```

Or when out-of-place we end up with a minor restructuring of the graph:
```python
a = view1(inp)
tmp = view2_scatter(a, src)
inp_updated = view1_scatter(inp, tmp)
a_updated = view1(inp_updated)
```
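
To make the abstract `view1`/`view2` above concrete, here is a hypothetical instantiation with a slice and a select (illustrative only, not code from the pass), checking in eager mode that the scatter form matches the mutation form:

```python
import torch

inp = torch.zeros(4, 4)
src = torch.ones(4)

# Mutation form.
expected = inp.clone()
a = expected[1:3]   # view1: slice
b = a[0]            # view2: select
b.copy_(src)

# Functionalized scatter form.
a0 = inp[1:3]
a_updated = torch.select_scatter(a0, src, dim=0, index=0)                 # view2_scatter
inp_updated = torch.slice_scatter(inp, a_updated, dim=0, start=1, end=3)  # view1_scatter

assert torch.equal(inp_updated, expected)
```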

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116899
Approved by: https://github.com/lezcano
ghstack dependencies: #116898, #117121
2024-01-23 15:31:28 +00:00
Peter Bell
001585f446 [fx][inductor] Add statically_known_true utility for SymBool (#117359)
This adds a function `statically_known_true` for `SymBool` that works
like inductor's `is_expr_static_and_true`. That is, it tries to simplify the
expression to a constant, and returns `False` if it cannot be simplified.

This is useful in cases that can be optimized if the condition is met;
otherwise it doesn't affect correctness, so we can avoid adding guards.

I also use this new function in inductor for `FakeTensorUpdater` and
`remove_noop_pass`, which both generated unexpected guards previously.
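
A minimal sketch of the behaviour, constructing an unbacked symbolic int directly via `ShapeEnv` (an assumption about how to exercise the API outside of inductor; exact module layout may vary across versions):

```python
from torch.fx.experimental.symbolic_shapes import ShapeEnv, statically_known_true

shape_env = ShapeEnv()
s0 = shape_env.create_unbacked_symint()

# Simplifies to a constant, so it is statically known without any guard.
print(statically_known_true(s0 * 2 == s0 + s0))  # True

# Cannot be decided statically; returns False and, crucially, adds no guard.
print(statically_known_true(s0 == 4))            # False
```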

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117359
Approved by: https://github.com/lezcano
2024-01-15 18:01:10 +00:00
Aaron Gokaslan
ee5d981249 [BE]: Enable RUFF PERF402 and apply fixes (#115505)
* Enable PERF402. Makes code more efficient and succinct by removing element-by-element list copies that can instead be expressed via a list constructor or an `extend` call. All test cases have `noqa` added since performance is not as sensitive in that folder.
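
A tiny hypothetical example of the pattern PERF402 flags and the preferred form:

```python
names = ["conv", "relu", "add"]

# Flagged by PERF402: copying element by element into a fresh list.
copied = []
for name in names:
    copied.append(name)

# Preferred: a list constructor (or `copied.extend(names)`) does the same copy.
copied = list(names)
```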

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115505
Approved by: https://github.com/malfet
2023-12-20 18:01:24 +00:00
Jez Ng
0a9dbbbaad Make _inductor/fx_utils.py, _dynamo/utils.py pass follow_imports typechecking (#113722)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113722
Approved by: https://github.com/lezcano
2023-11-16 05:44:15 +00:00
Peter Bell
66c32d099a Use pytree.arg_tree_leaves everywhere (#112394)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112394
Approved by: https://github.com/lezcano
ghstack dependencies: #112391, #112392, #112393
2023-10-31 15:57:06 +00:00
Peter Bell
bbd5b935e4 Use pytree.tree_leaves everywhere (#112324)
This changes all the instances I could find of `tree_flatten(...)[0]` or
`x, _ = tree_flatten` to use `tree_leaves`.
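
A small sketch of the refactor, using the private `torch.utils._pytree` module where these helpers live (values chosen for illustration):

```python
import torch.utils._pytree as pytree

nested = {"a": [1, 2], "b": (3, {"c": 4})}

leaves, _spec = pytree.tree_flatten(nested)  # old pattern: the spec is discarded
leaves_only = pytree.tree_leaves(nested)     # new pattern

assert leaves == leaves_only == [1, 2, 3, 4]
```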

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112324
Approved by: https://github.com/lezcano
ghstack dependencies: #112327, #112323
2023-10-30 03:39:04 +00:00
chilli
005e8ddcb9 cache the hash construction on Guard (#110464)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110464
Approved by: https://github.com/zou3519, https://github.com/voznesenskym
2023-10-04 04:49:18 +00:00
Jez Ng
5cd8a6d40a Enable typechecking for _inductor/fx_utils.py (#109415)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109415
Approved by: https://github.com/Skylion007
ghstack dependencies: #109269, #109347, #109335
2023-09-18 18:12:23 +00:00
chilli
39130c7433 Add reinplacing pass for scatters + incremental fake tensor updating (#106192)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106192
Approved by: https://github.com/jansel, https://github.com/eellison
2023-08-30 20:41:37 +00:00
BowenBao
60a68477a6 Bump black version to 23.1.0 (#96578)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96578
Approved by: https://github.com/ezyang
2023-03-15 06:27:59 +00:00
blzheng
08035b1eb9 inductor: support more conv+unary fusion (#92518)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92518
Approved by: https://github.com/jgong5, https://github.com/jansel
2023-01-30 07:21:50 +00:00
XiaobingSuper
25ff10caa7 inductor:enable conv+unary fusion for torch unary function (#91609)
This PR enables unary fusion where the unary op is a torch function; it will improve the performance of timm models a lot.
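
A hypothetical example of the pattern this targets: a conv followed by a unary torch function such as `torch.relu` (whether a given build actually hits the fusion path depends on the backend and config):

```python
import torch

class ConvRelu(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 16, kernel_size=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unary op expressed as a torch function rather than an nn.Module.
        return torch.relu(self.conv(x))

model = torch.compile(ConvRelu().eval())
out = model(torch.randn(1, 3, 32, 32))
```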

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91609
Approved by: https://github.com/jgong5, https://github.com/desertfire
2023-01-06 01:23:35 +00:00