Commit Graph

31 Commits

linhaifeng
695cb0d342 [2/N][Fix] Fix typo in test folder (#166374)
Fix typo in test folder.

_typos.toml
```toml
[default.extend-words]
nd = "nd"
arange = "arange"
Nd = "Nd"
GLOBALs = "GLOBALs"
hte = "hte"
iy = "iy"
PN = "PN"
Dout = "Dout"
optin = "optin"
gam = "gam"
PTD = "PTD"
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166374
Approved by: https://github.com/cyyever, https://github.com/ezyang
2025-10-29 03:02:07 +00:00
Anthony Barbier
bf7e290854 Add __main__ guards to jit tests (#154725)
This PR is part of a series attempting to re-submit https://github.com/pytorch/pytorch/pull/134592 as smaller PRs.

In jit tests:

- Add and use a common `raise_on_run_directly` method for when a user directly runs a test file that should not be run that way; print the file the user should have run instead.
- Raise a RuntimeError on tests which have been disabled (not run).
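The guard described above can be sketched as follows. The helper name `raise_on_run_directly` comes from the commit message; its signature and error text are assumptions for illustration:

```python
def raise_on_run_directly(correct_file: str) -> None:
    """Fail fast when a test file that should not be run directly is executed,
    pointing the user at the file they should have run instead."""
    raise RuntimeError(
        "This test file should not be run directly; "
        f"run it via: python {correct_file}"
    )

# intended usage at the bottom of a jit test module:
#   if __name__ == "__main__":
#       raise_on_run_directly("test/test_jit.py")
```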

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154725
Approved by: https://github.com/clee2000
2025-06-16 10:28:45 +00:00
PyTorch MergeBot
20912673a6 Revert "Add __main__ guards to jit tests (#154725)"
This reverts commit 1a55fb0ee8.

Reverted https://github.com/pytorch/pytorch/pull/154725 on behalf of https://github.com/malfet due to This added 2nd copy of raise_on_run to common_utils.py which caused lint failures, see https://github.com/pytorch/pytorch/actions/runs/15445374980/job/43473457466 ([comment](https://github.com/pytorch/pytorch/pull/154725#issuecomment-2940503905))
2025-06-04 15:42:52 +00:00
Anthony Barbier
1a55fb0ee8 Add __main__ guards to jit tests (#154725)
This PR is part of a series attempting to re-submit https://github.com/pytorch/pytorch/pull/134592 as smaller PRs.

In jit tests:

- Add and use a common `raise_on_run_directly` method for when a user directly runs a test file that should not be run that way; print the file the user should have run instead.
- Raise a RuntimeError on tests which have been disabled (not run).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154725
Approved by: https://github.com/Skylion007
2025-06-04 14:44:08 +00:00
Tom Ritchford
d8c8ba2440 Fix unused Python variables in test/[e-z]* (#136964)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/136964
Approved by: https://github.com/justinchuby, https://github.com/albanD
2024-12-18 23:02:30 +00:00
Oguz Ulgen
920f0426ae Add None return type to init -- tests rest (#132376)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132376
Approved by: https://github.com/jamesjwu
ghstack dependencies: #132335, #132351, #132352
2024-08-01 15:44:51 +00:00
Xuehai Pan
6ff1e43a41 [BE][Easy][13/19] enforce style for empty lines in import segments in test/j*/ (#129764)
See https://github.com/pytorch/pytorch/pull/129751#issue-2380881501. Most changes are auto-generated by linter.

You can review these PRs via:

```bash
git diff --ignore-all-space --ignore-blank-lines HEAD~1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129764
Approved by: https://github.com/ezyang
2024-08-01 12:13:42 +00:00
Aaron Gokaslan
c5fafe9f48 [BE]: TRY002 - Ban raising vanilla exceptions (#124570)
Adds a ruff lint rule to ban raising raw exceptions. Most of these should at the very least be runtime errors, value errors, type errors, or some other specific error. There are hundreds of instances of these bad exception types already in the codebase, so I have noqa'd most of them. Hopefully this error code will get committers to rethink what exception type they should raise when they submit a PR.

I also encourage people to gradually fix all the existing noqas that have been added, so they can be removed over time and our exception typing can be improved.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124570
Approved by: https://github.com/ezyang
2024-04-21 22:26:40 +00:00
Yuanhao Ji
604c9c5601 Enable UFMT on all of test/jit (#123623)
Partially addresses #123062

Ran lintrunner on:

- `test/jit`

with command:

```bash
lintrunner -a --take UFMT --all-files
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123623
Approved by: https://github.com/ezyang
2024-04-11 23:45:05 +00:00
Xuehai Pan
046e88a291 [BE] [3/3] Rewrite super() calls in test (#94592)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Some cases that change the semantics should be kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94592
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-12 22:20:53 +00:00
Elias Ellison
47ad6628f1 add optional refining (#69776)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69776

If we have a node output which is an optional type, but both blocks of the if produce a non-optional value, we can try to refine the if output type, which can open up further optimization opportunities.
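Illustratively, the pattern looks like this in TorchScript IR (a sketch, not taken from the PR):

```
%y : Tensor? = prim::If(%cond)
  block0():
    %a : Tensor = aten::relu(%x)
    -> (%a)
  block1():
    %b : Tensor = aten::neg(%x)
    -> (%b)
# both blocks yield a non-optional Tensor, so the output type
# Tensor? can be refined to Tensor
```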

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D33515235

Pulled By: eellison

fbshipit-source-id: 34f6ab94ac4238498f9db36a1b673c5d165e832e
2022-01-11 22:12:34 -08:00
Elias Ellison
2486061c72 [JIT] make x (+ or -) 0 and x (* or /) 1 peepholes type promotion aware (#67688)
Summary:
Some of these "no-ops" are not actually no-ops, because they can change the dtype.
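A minimal plain-Python illustration of the hazard (the analogous promotion rules apply to tensor dtypes):

```python
x = 2            # int
y = x * 1.0      # multiplying by 1.0 promotes the result to float
assert type(x) is int and type(y) is float
# so a peephole that drops "* 1.0" as a no-op would silently change the type;
# the fix makes these peepholes fire only when the dtype is unchanged
```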

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67688

Reviewed By: davidberard98

Differential Revision: D32104601

Pulled By: eellison

fbshipit-source-id: ccb99179a4b30fd20b5a9228374584f2cdc8ec21
2021-11-03 20:11:46 -07:00
Jane Xu
09c7771e9c Set test owners for jit tests (#66808)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66808

Reviewed By: mrshenli

Differential Revision: D31761414

Pulled By: janeyx99

fbshipit-source-id: baf8c49ff9c4bcda7b0ea0f6aafd26380586e72d
2021-10-25 07:51:10 -07:00
Elias Ellison
6e6ede2e70 [JIT] Re-enable alias sensitive peepholes (#65860)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65860

Re-enable peepholes like `x + 0 == x`. These were at one point enabled, then disabled because they did not properly account for aliasing, and then re-enabled while reconstructing the alias db every time, which is slow (O(n^2)). I've added correctness conditions, and I've also made it so that we avoid using stale aliasing properties for either the input or output of nodes we optimize.
Some of the other code we have written to avoid re-instantiating the alias db involves internally mutating it; however, this is tricky to reason about and we probably have to add some extra invariants...

cc navahgar relevant to graph opts and d1jang alias analysis relevant here

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D31352382

Pulled By: eellison

fbshipit-source-id: 441a27f17dc623d6c24538d1d43cba0412c3c482
2021-10-22 09:45:57 -07:00
Elias Ellison
eaba976d49 Add x + 0 optimization (#65574)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65574

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D31797470

Pulled By: eellison

fbshipit-source-id: bf9309fb43f164665335fed0d09697b0e2f67261
2021-10-20 16:13:03 -07:00
Michael Suo
b8d58129bb Revert D31732420: Add x + 0 optimization
Test Plan: revert-hammer

Differential Revision:
D31732420 (66543f88de)

Original commit changeset: 0271e0dc0dda

fbshipit-source-id: c2beea1661e10c2f1a982b5d4a34b1041dcb1204
2021-10-19 20:07:00 -07:00
Elias Ellison
66543f88de Add x + 0 optimization (#65574)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65574

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D31732420

Pulled By: eellison

fbshipit-source-id: 0271e0dc0ddab06220048ed5bf4236fc85f3318c
2021-10-19 16:41:29 -07:00
Max Ren
0eaf081018 [JIT] canonicalize aten::rsub (#65014)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65014

ghstack-source-id: 138656948

Test Plan:
```
(pytorch) [maxren@devvm3115.atn0 ~/pytorch] python3 test/test_jit.py TestPeephole
CUDA not available, skipping tests
monkeytype is not installed. Skipping tests for Profile-Directed Typing
........s......................
----------------------------------------------------------------------
Ran 31 tests in 0.393s

OK (skipped=1)
(pytorch) [maxren@devvm3115.atn0 ~/pytorch] python3 test/test_jit.py TestPeephole.test_normalized_rsub
CUDA not available, skipping tests
monkeytype is not installed. Skipping tests for Profile-Directed Typing
.
----------------------------------------------------------------------
Ran 1 test in 0.015s

OK
```

Reviewed By: eellison

Differential Revision: D30941389

fbshipit-source-id: 03f0416d99090845c9bfb1e5fcf771d5f1d7a050
2021-09-22 17:20:46 -07:00
Mike Iovine
9324181d0a [JIT] Re-land "Add aten::slice optimization" (#65341)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65341

The changes in D30231044 (babd449978) were removed due to a downstream issue in glow. Now that the issue has been fixed by D30849396, we can safely re-introduce the changes.

Test Plan:
`buck test //caffe2/test:jit -- TestPeephole`

Glow test:
* `buck test //glow/fb/torch_glow/tests:unfuse_glow_ops_test`
* qxy11 confirmed that the problematic glow model now loads correctly with these changes

Reviewed By: eellison

Differential Revision: D31056878

fbshipit-source-id: 049903ee04ba88885cc9d1a91427af0f1f44f681
2021-09-21 07:29:51 -07:00
Daya Khudia
65050ec924 Back out "[JIT] Add aten::slice optimization"
Summary:
Original commit changeset: d12ee39f6828
build-break
overriding_review_checks_triggers_an_audit_and_retroactive_review
Oncall Short Name: dskhudia

Test Plan: Local run succeeds

Differential Revision: D30633990

fbshipit-source-id: 91cf7cc0ad7e47d919347c2a1527688e062e0c62
2021-08-30 14:05:04 -07:00
Mike Iovine
babd449978 [JIT] Add aten::slice optimization (#63049)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63049

Given a graph produced from a function like this:
```
def foo():
    li = [1, 2, 3, 4, 5, 6]
    return li[0:2]
```
This pass produces a graph like this:
```
def foo():
    li = [1, 2]
    return li
```

These changes are mostly adapted from https://github.com/pytorch/pytorch/pull/62297/
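As a quick plain-Python sanity check, the folded form the pass produces agrees with the original slice:

```python
li = [1, 2, 3, 4, 5, 6]
# the pass replaces the constant slice li[0:2] with the literal list [1, 2]
assert li[0:2] == [1, 2]
```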

Test Plan: `buck test //caffe2/test:jit -- TestPeephole`

Reviewed By: eellison

Differential Revision: D30231044

fbshipit-source-id: d12ee39f68289a574f533041a5adb38b2f000dd5
2021-08-27 10:12:45 -07:00
Elias Ellison
e2227e86e4 Add a few peepholes (#62910)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/62910

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D30196947

Pulled By: eellison

fbshipit-source-id: d88c92616d4de4f47ff4fcf5c1994e629ca20395
2021-08-17 11:26:38 -07:00
Mike Iovine
000e3a0881 [Static Runtime] Add pass to eliminate __getitem__/DictConstruct calls (#62429)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62429

Introduce a new pass to eliminate calls to `prim::DictConstruct/aten::__getitem__`. Given a graph like this:
```
%2 : Dict = prim::DictConstruct(%key, %value)
%3 : Tensor = aten::__getitem__(%2, %key)
%4 : Tensor = op(%3)
```
This pass produces a graph like this (after dead code elimination):
```
%4 : Tensor = op(%value)
```

This optimization is applied in the static runtime.
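In plain-Python terms, the pass exploits the fact that reading back the key just written is the identity (a sketch of the reasoning, not the pass itself):

```python
key, value = "k", 42
d = {key: value}   # corresponds to prim::DictConstruct
v = d[key]         # corresponds to aten::__getitem__
assert v == value  # so op(d[key]) can be rewritten to op(value)
```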

Test Plan:
`buck test //caffe2/test:jit -- TestPeephole`

**local.forward performance summary**
About 3% runtime benefit. All `DictConstruct` calls optimized out, `__getitem__` calls reduced significantly (~50% of them are cut out)
P438354810

**local_request_only.forward performance summary**
About 14% runtime benefit. Again, all `DictConstruct` calls optimized out, 50% `__getitem__` calls removed.
P438359742

There is some variance with runtime measurements, so take these numbers with a grain of salt. Also note that the benefit does not exist in the shrunk model since there are no `DictConstruct` calls

Reviewed By: hlu1

Differential Revision: D29995087

fbshipit-source-id: f376376a46ff808115afd2d60446e5db8f6f752f
2021-08-13 10:21:16 -07:00
Zhengxu Chen
f0df0207ec [jit] Arithmetic simplification for integers. (#61444)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61444

Add a mini pass to merge arithmetic nodes like (((x - 1) + 2) * 1) - 1.
Issue #60913
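The example chain from the summary folds to the identity; a plain-Python check:

```python
def chained(x: int) -> int:
    # (((x - 1) + 2) * 1) - 1 simplifies to x for all integers
    return (((x - 1) + 2) * 1) - 1

assert all(chained(x) == x for x in range(-10, 11))
```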

Test Plan:
python test/test_jit.py TestPeephole.test_peephole_arith

Imported from OSS

Reviewed By: eellison

Differential Revision: D29630614

fbshipit-source-id: 08ac64cee39070401f9ff9163d309f20ff53c5ac
2021-07-20 11:35:42 -07:00
eellison
f3aa61b9ed Add peephole for len(x.size()) (#59051)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/59051
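The PR has no description, but the rewrite can be sketched in IR terms (illustrative):

```
%sizes : int[] = aten::size(%x)
%n : int = aten::len(%sizes)
# is rewritten to the single op:
%n : int = aten::dim(%x)
```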

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D28727247

Pulled By: eellison

fbshipit-source-id: 6474d39773b640992bdaf261575a3dbd48c6d56c
2021-05-27 17:57:53 -07:00
Elias Ellison
5313bafd31 [JIT] integer value refinement (#56438)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56438

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27924239

Pulled By: eellison

fbshipit-source-id: ace54fcb594853f30c242369ea203b0eb5527ac1
2021-05-21 08:51:01 -07:00
Elias Ellison
0d9f1c1ec6 Add Value * == Value * peephole (#55978)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55978

This is needed for broadcasting two of the same symbolic shape
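Conceptually, when both operands of an equality are the very same `Value*` in the graph, the comparison folds to `True` (an illustrative IR sketch, not taken from the PR):

```
%s : int = aten::size(%x, %d)
%b : bool = aten::eq(%s, %s)   # same Value on both sides
# folds to:
%b : bool = prim::Constant[value=1]()
```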

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27755328

Pulled By: eellison

fbshipit-source-id: d38d9458a9e28d31558f0bc55206516b78131032
2021-05-21 08:50:57 -07:00
Elias Ellison
5cebf29b4e Add list len refinement (#55926)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55926

This is necessary for code like conv2d, where we wish to share a generic convolution shape function with conv2d but for conv2d always infer that the output is dimension 4. I'm also hoping the refinement algorithm here could be refactored out and used to support refining tensor types from user annotations. I have a lengthy comment explaining how this works, and the logic outside of the data structures is pretty small and contained. Additionally, you might check out https://fb.quip.com/X7EVAdQ99Zzm for a very similar description of how to refine values based on comparison operators.

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D27750997

Pulled By: eellison

fbshipit-source-id: d962415af519ac37ebc9de88f2e1ea60a1374f7c
2021-05-21 08:50:54 -07:00
John Clow
698be31262 Adding support for normalization of __is__ op (#57862)
Summary:
Normalize `__is__` to `eq`, and `__isnot__` to `ne`, in the case of bools.
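For bool values the normalization is sound because `True` and `False` are singletons, so identity and equality coincide; a plain-Python check:

```python
# identity (`is`) and equality (`==`) agree on all bool pairs,
# which justifies rewriting __is__ -> eq and __isnot__ -> ne for bools
for a in (True, False):
    for b in (True, False):
        assert (a is b) == (a == b)
        assert (a is not b) == (a != b)
```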

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57862

Test Plan:
```
python test/test_jit.py TestPeephole
```
11 Tests, 1 skipped, no failures
Fixes https://github.com/pytorch/pytorch/issues/57387

Reviewed By: eellison

Differential Revision: D28335646

Pulled By: Gamrix

fbshipit-source-id: c9f885044b32897ba35483091bcf7037759b7517
2021-05-11 12:20:47 -07:00
Elias Ellison
30aeed7c2b Peephole Optimize out conv(x).dim(), which prevents BN fusion (#50221)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50221

Test Plan: Imported from OSS

Reviewed By: tugsbayasgalan

Differential Revision: D25856266

Pulled By: eellison

fbshipit-source-id: ef7054b3d4ebc59a0dd129116d29273be33fe12c
2021-01-12 11:39:09 -08:00
Elias Ellison
a69f008cb7 [JIT] Factor out peephole to own test file (#50220)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50220

Test Plan: Imported from OSS

Reviewed By: tugsbayasgalan

Differential Revision: D25856263

Pulled By: eellison

fbshipit-source-id: f3d918d860e64e788e0bb9b9cb85125660f834c6
2021-01-12 11:39:06 -08:00