Aaron Orenstein
0b2a3687b9
PEP585 update - torch/fx ( #145166 )
...
See #145101 for details.
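For context, a PEP 585 update of this kind generally replaces `typing`-module generics with the builtin generics available since Python 3.9. A minimal illustrative sketch (not the actual diff from this PR; the function is hypothetical):
```python
# Before (pre-PEP 585): from typing import Dict, List
#     def group_nodes(names: List[str]) -> Dict[str, List[str]]: ...
# After (PEP 585): builtin generics, no typing imports needed.
def group_nodes(names: list[str]) -> dict[str, list[str]]:
    groups: dict[str, list[str]] = {}
    for name in names:
        groups.setdefault(name[0], []).append(name)  # bucket by first character
    return groups
```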
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145166
Approved by: https://github.com/bobrenjc93
2025-01-20 18:11:54 +00:00
Huamin Li
65c2086d45
fix the lint from D66795414 ( #142122 )
...
Summary: This diff fixes the lint issues from D66457500 / https://github.com/pytorch/pytorch/pull/142056
Test Plan: OSS CI
Reviewed By: houseroad, FulinHuang
Differential Revision: D66795414
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142122
Approved by: https://github.com/houseroad
2024-12-05 12:05:51 +00:00
Sherlock Huang
0be004ff37
Enable fuse_by_partitions to always return output as tuple ( #142056 )
...
Summary:
aot_compile only accepts a graph with tuple output.
We introduce an option to fuse_by_partitions to always return outputs as a tuple, even if there is only a single entry.
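The underlying idea is to normalize a single graph output into a 1-tuple so that consumers such as aot_compile always see a tuple. A minimal self-contained sketch of that normalization on an FX graph (the helper name is ours, not the fuse_by_partitions API):
```python
import torch
from torch import fx


def ensure_tuple_output(gm: fx.GraphModule) -> fx.GraphModule:
    """Rewrite the graph's output node so it always returns a tuple,
    even when there is only a single output value (illustrative helper)."""
    output_node = next(n for n in gm.graph.nodes if n.op == "output")
    (out_val,) = output_node.args
    if not isinstance(out_val, (tuple, list)):
        output_node.args = ((out_val,),)  # wrap the single value in a 1-tuple
        gm.recompile()
    return gm


class M(torch.nn.Module):
    def forward(self, x):
        return x.relu()


gm = ensure_tuple_output(fx.symbolic_trace(M()))
print(gm(torch.randn(2)))  # a 1-tuple containing the relu'd tensor
```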
Test Plan: OSS CI
Differential Revision: D66457500
Pull Request resolved: https://github.com/pytorch/pytorch/pull/142056
Approved by: https://github.com/angelayi , https://github.com/hl475
2024-12-05 08:07:41 +00:00
Sherlock Huang
f32b9a5145
Fx graph always return tuple in fuse_as_graphmodule ( #139236 )
...
Summary: As title.
Test Plan: Let's see what OSS CI says
Differential Revision: D65147426
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139236
Approved by: https://github.com/ezyang
2024-10-30 23:31:06 +00:00
Xuehai Pan
abbd71d29d
[BE][Easy] enable PYFMT for torch.fx ( #138443 )
...
Reproduce command:
```bash
ghstack checkout https://github.com/pytorch/pytorch/pull/138443
git checkout HEAD~1 torch/
lintrunner -a --take "PYFMT" --all-files
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138443
Approved by: https://github.com/ezyang
2024-10-21 19:15:49 +00:00
Zhou, Lingzhi
35532fc477
[Partitioner] Reuse partition to check whether nodes exist ( #135317 )
...
The time complexity of checking whether a node is in a NodeList is O(n). Reuse the partition instead to speed this up, since partition.nodes is a hash table containing the same elements.
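The gist is swapping an O(n) scan of a Python list for an O(1) hash-table lookup. A small self-contained illustration (the container names are stand-ins, not the partitioner's actual code):
```python
import timeit

node_list = list(range(10_000))        # stand-in for a NodeList (linear scan)
node_table = dict.fromkeys(node_list)  # stand-in for partition.nodes (hash table)

target = node_list[-1]  # worst case for the linear scan
print(timeit.timeit(lambda: target in node_list, number=1_000))   # O(n) per lookup
print(timeit.timeit(lambda: target in node_table, number=1_000))  # O(1) per lookup
```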
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135317
Approved by: https://github.com/ezyang
2024-09-21 23:52:02 +00:00
PyTorch MergeBot
c025f7becc
Revert "[Partitioner] Reuse partition to check whether nodes exist ( #135317 )"
...
This reverts commit e004d539da .
Reverted https://github.com/pytorch/pytorch/pull/135317 on behalf of https://github.com/izaitsevfb due to BC-breaking, breaks executorch and internal meta builds ([comment](https://github.com/pytorch/pytorch/pull/135317#issuecomment-2344730294 ))
2024-09-11 21:27:53 +00:00
Zhou, Lingzhi
e004d539da
[Partitioner] Reuse partition to check whether nodes exist ( #135317 )
...
The time complexity of checking whether a node is in a NodeList is O(n). Reuse the partition instead to speed this up, since partition.nodes is a hash table containing the same elements.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135317
Approved by: https://github.com/ezyang
2024-09-10 17:45:29 +00:00
Zhou, Lingzhi
44c08f4984
[Partitioner] Query whether nodes exist in graph faster ( #135316 )
...
Checking whether a node exists in graph.nodes (a linked list) takes too long. Use graph._find_nodes_lookup_table (a hash table) instead to speed this up.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135316
Approved by: https://github.com/ezyang
2024-09-09 03:34:02 +00:00
Aaron Orenstein
ed86ac2f25
[BE] typing for decorators - fx/_compatibility ( #134054 )
...
Summary: See #131429
Test Plan: unit tests pass
Differential Revision: D61493706
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134054
Approved by: https://github.com/oulgen
2024-08-26 04:00:27 +00:00
PyTorch MergeBot
945bf78894
Revert "[BE] typing for decorators - fx/_compatibility ( #131568 )"
...
This reverts commit 193f62fde9 .
Reverted https://github.com/pytorch/pytorch/pull/131568 on behalf of https://github.com/clee2000 due to same as https://github.com/pytorch/pytorch/pull/131572#issuecomment-2254328359 but I clicked the wrong link by accident. This is where it actually starts ([comment](https://github.com/pytorch/pytorch/pull/131568#issuecomment-2254330781 ))
2024-07-28 03:43:39 +00:00
Aaron Orenstein
193f62fde9
[BE] typing for decorators - fx/_compatibility ( #131568 )
...
See #131429
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131568
Approved by: https://github.com/justinchuby , https://github.com/oulgen , https://github.com/zou3519
2024-07-25 22:24:19 +00:00
Aaron Orenstein
5a0068cc69
[BE] mypy: disallow untyped decorators ( #131428 )
...
Untyped decorators strip the types from their decorated function, so even if the underlying function is fully typed, callers don't get any benefit from its type annotations.
Step 1 - Enable the error and override in all the offending files.
#131429
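For reference, the standard way to keep a decorated function's signature visible to type checkers is `ParamSpec`. A minimal sketch of a typed decorator (not the actual fx/_compatibility code; the decorator here is hypothetical):
```python
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")


def compatibility_stub(is_backward_compatible: bool) -> Callable[[Callable[P, R]], Callable[P, R]]:
    """Annotating with ParamSpec/TypeVar preserves the wrapped function's
    parameter and return types for callers, unlike an untyped decorator."""
    def decorator(fn: Callable[P, R]) -> Callable[P, R]:
        fn._compat = is_backward_compatible  # type: ignore[attr-defined]
        return fn
    return decorator


@compatibility_stub(is_backward_compatible=True)
def add(x: int, y: int) -> int:
    return x + y


result: int = add(1, 2)  # mypy still sees (x: int, y: int) -> int
```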
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131428
Approved by: https://github.com/justinchuby , https://github.com/oulgen
2024-07-23 21:50:55 +00:00
Xuehai Pan
973037be6a
[BE][Easy] apply autofix for ruff rules unnecessary-collection-call (C408): list() / tuple() / dict() ( #130199 )
...
This PR changes the empty collection factory call to Python literals:
- `list()` -> `[]`
- `tuple()` -> `()`
- `dict()` -> `{}`
The Python literals are more performant and safer. For example, the bytecode for building an empty dictionary:
```bash
$ python3 -m dis - <<EOS
import collections
d1 = {}
d2 = dict()
dict = collections.OrderedDict
d3 = dict()
EOS
```
```text
0 0 RESUME 0
1 2 LOAD_CONST 0 (0)
4 LOAD_CONST 1 (None)
6 IMPORT_NAME 0 (collections)
8 STORE_NAME 0 (collections)
3 10 BUILD_MAP 0
12 STORE_NAME 1 (d1)
4 14 PUSH_NULL
16 LOAD_NAME 2 (dict)
18 CALL 0
26 STORE_NAME 3 (d2)
6 28 LOAD_NAME 0 (collections)
30 LOAD_ATTR 8 (OrderedDict)
50 STORE_NAME 2 (dict)
7 52 PUSH_NULL
54 LOAD_NAME 2 (dict)
56 CALL 0
64 STORE_NAME 5 (d3)
66 RETURN_CONST 1 (None)
```
The dict literal `{}` only has one bytecode `BUILD_MAP`, while the factory call `dict()` has three `PUSH_NULL + LOAD_NAME + CALL`. Also, the factory call is not safe if users override the `dict` name in `locals` or `globals` (see the example of replacing with `OrderedDict` above).
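A tiny runtime demonstration of the shadowing point (illustrative only):
```python
import collections

dict = collections.OrderedDict  # shadow the builtin name (don't do this)
d_call = dict()                 # silently becomes an OrderedDict
d_literal = {}                  # the literal ignores the shadowed name
print(type(d_call).__name__, type(d_literal).__name__)  # OrderedDict dict
```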
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130199
Approved by: https://github.com/malfet
2024-07-11 17:30:28 +00:00
Aaron Orenstein
038b927590
Flip default value for mypy disallow_untyped_defs [7/11] ( #127844 )
...
See #127836 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127844
Approved by: https://github.com/oulgen
ghstack dependencies: #127842 , #127843
2024-06-08 18:49:45 +00:00
Hongyang Zhao
62403b57b9
Add prefix option to CapabilityBasedPartitioner ( #126382 )
...
Summary: Add a prefix arg so that users can provide the submodule name to the partitioner.
Test Plan: https://fburl.com/anp/2kue4qp9
Differential Revision: D57416926
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126382
Approved by: https://github.com/SherlockNoMad
2024-05-16 22:38:07 +00:00
Sherlock Huang
a59dc14877
Keep node.meta when fusing subgraph ( #125261 )
...
Summary: When CapabilityBasedPartitioner creates the fused subgraph as a call_module node, it doesn't populate the node.meta["val"] field.
Test Plan: OSS CI
Differential Revision: D56789259
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125261
Approved by: https://github.com/zhxchen17
2024-05-01 01:38:28 +00:00
Aaron Gokaslan
1562dae62c
[BE]: Apply RUF025 dict.fromkeys preview rule ( #118637 )
...
Simplifies and optimizes dict construction using the `fromkeys` classmethod ctor. This also makes it really obvious when all the keys will have the same static value, which could be a bug if unintentional. It is also significantly faster than using a dict comprehension. The rule is in preview, but I am adding a forward fix for when it becomes stable.
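A short illustration of the rewrite and its one caveat (keys and values are made up):
```python
keys = ["conv", "bn", "relu"]

# Before: a comprehension where every key gets the same static value.
flags_slow = {k: False for k in keys}

# After: dict.fromkeys is terser, faster, and makes the shared value obvious.
flags_fast = dict.fromkeys(keys, False)
assert flags_slow == flags_fast

# Caveat: with a mutable default, every key shares one object, which is usually a bug.
buckets = dict.fromkeys(keys, [])
buckets["conv"].append(1)
print(buckets["relu"])  # [1] -- all keys point at the same list
```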
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118637
Approved by: https://github.com/albanD
2024-01-30 20:46:54 +00:00
Wenting Wang
393fe9339a
Back out "Revert D49107540: [pytorch][PR] split by tag" ( #109332 )
...
Summary:
Original commit changeset: 6391a068640b
Original Phabricator Diff: D49107540
Test Plan: same as D49107540
Differential Revision: D49297522
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109332
Approved by: https://github.com/842974287
2023-09-16 05:29:16 +00:00
PyTorch MergeBot
bf5622e965
Revert "split by tag ( #108892 )"
...
This reverts commit 89b6276be9 .
Reverted https://github.com/pytorch/pytorch/pull/108892 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/108892#issuecomment-1720249148 ))
2023-09-14 22:43:03 +00:00
Wenting Wang
89b6276be9
split by tag ( #108892 )
...
Differential Revision: D49107540
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108892
Approved by: https://github.com/842974287
2023-09-14 21:49:11 +00:00
Kazuaki Ishizaki
105ef68f72
Fix typos under torch/fx directory ( #97596 )
...
This PR fixes typos in comments and messages of `.py` files under `torch/fx` directory
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97596
Approved by: https://github.com/dagitses , https://github.com/kit1980
2023-04-10 21:57:36 +00:00
Catherine Lee
4519228f60
Reduce pytest blocklist part 2 ( #96397 )
...
Enable pytest for a few unique files. pytest runs tests in a different order than unittest (but still a consistent ordering with respect to itself) and some tests change global state, causing other tests to fail.
`test_transpose_non_contiguous` in `test_torchinductor.py` gets impacted by some other test, but I'm not sure which one, so my solution is to reset the metrics before the rest of the test is run.
`test_register_patterns` in `test_quantize_fx.py` adds extra keys to global variables, so remove them when the test is done via unittest's `addCleanup`, which also works under pytest.
pytest doesn't really have an equivalent for `load_tests`, so change it to be like `test_jit`, which imports all the classes. I also attempted to dynamically import them, but I failed.
`test_public_api_surface` in `test_fx.py` checks for a backwards compatibility classification. There is a different test in test_fx that results in `fuser_utils` being imported. pytest runs this test before `test_public_api_surface` while unittest runs it after, so pytest sees `fuser_utils` when crawling through the modules.
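For reference, the `addCleanup` pattern mentioned above looks roughly like this (the test and registry names are ours, not the actual test_quantize_fx code):
```python
import unittest

GLOBAL_PATTERNS = {}  # stand-in for the module-level registry the test mutates


class TestRegisterPatterns(unittest.TestCase):
    def test_register_patterns(self):
        GLOBAL_PATTERNS["my_key"] = object()
        # Runs after the test under both unittest and pytest, so the extra key
        # never leaks into later tests regardless of execution order.
        self.addCleanup(GLOBAL_PATTERNS.pop, "my_key", None)
        self.assertIn("my_key", GLOBAL_PATTERNS)


if __name__ == "__main__":
    unittest.main()
```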
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96397
Approved by: https://github.com/huydhn
2023-03-10 19:10:43 +00:00
Wei-Sheng Chin
9227fd741c
Avoid recursion in graph traverse ( #95723 )
...
It's easy to reach the recursion limit in Python when calling `dfs_find_cycle` on big graphs (e.g., searching for attention heads in GPT-2 via SubgraphMatcher). Let's switch to queue-based graph traversal.
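The general pattern is replacing the recursive visit with an explicit stack (or a deque for BFS). A minimal sketch over `fx` nodes (the function is ours, not the PR's implementation):
```python
import torch
from torch import fx


def iter_dfs(start: fx.Node):
    """Depth-first traversal over node.users with an explicit stack instead of
    recursion, so deep graphs cannot hit Python's recursion limit (sketch only)."""
    seen = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        yield node
        stack.extend(node.users)  # node.users maps each downstream user to None
    # Swapping the list for collections.deque + popleft() gives a BFS variant.


class M(torch.nn.Module):
    def forward(self, x):
        return (x + 1).relu().sum()


gm = fx.symbolic_trace(M())
placeholder = next(iter(gm.graph.nodes))  # the node for input x
print([n.op for n in iter_dfs(placeholder)])
```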
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95723
Approved by: https://github.com/SherlockNoMad , https://github.com/Skylion007
2023-03-01 04:35:22 +00:00
Brian Hirsh
b5a925ff2e
propagate .meta info when replacing subgraphs in fx ( #87255 )
...
Fixes https://github.com/pytorch/torchdynamo/issues/1708
Our FX subgraph partitioner works by taking all of the original output nodes from a subgraph and replacing them with a new `call_module` node in the graph.
If the original subgraph outputs had fake tensors and other metadata stored in their `.meta` attribute, then this information was getting lost when we spliced in the subgraph.
Losing metadata on an FX graph also seems like an easy trap to fall into, so I'm wondering if there are any better guardrails that we can add. I ended up fixing it in this PR by adding an optional kwarg to propagate meta info directly in `fx.Node.replace_all_uses_with`, just because propagating metadata seems like a pretty core thing.
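A minimal sketch of how the propagation reads from the caller's side, assuming the kwarg landed as `propagate_meta` (the name is an assumption; check the current fx.Node API):
```python
import torch
from torch import fx


class M(torch.nn.Module):
    def forward(self, x):
        return x.relu()


gm = fx.symbolic_trace(M())
relu = next(n for n in gm.graph.nodes if n.op == "call_method")
relu.meta["val"] = "fake-tensor metadata would live here"

# Swap in an equivalent node and carry .meta across instead of losing it.
with gm.graph.inserting_after(relu):
    new_node = gm.graph.call_function(torch.relu, (relu.args[0],))
relu.replace_all_uses_with(new_node, propagate_meta=True)  # kwarg name assumed
gm.graph.erase_node(relu)
gm.recompile()
print(new_node.meta)  # the original node's metadata is preserved
```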
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87255
Approved by: https://github.com/wconstab , https://github.com/SherlockNoMad
2022-11-02 14:36:46 +00:00
Sherlock Huang
43e7fee764
[Reland] Recursively print graph module and its submodule ( #81639 )
...
ghstack-source-id: fcfc024c440981ee3fe3537a5816089eadf2cc13
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81080
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81639
Approved by: https://github.com/ezyang
2022-07-21 16:58:25 +00:00
PyTorch MergeBot
4035a53cca
Revert "Recursively print graph module and its submodule ( #81080 )"
...
This reverts commit fe7262329c .
Reverted https://github.com/pytorch/pytorch/pull/81080 on behalf of https://github.com/DanilBaibak due to Break internal build
2022-07-18 14:46:26 +00:00
Sherlock Huang
fe7262329c
Recursively print graph module and its submodule ( #81080 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81080
Approved by: https://github.com/ezyang
2022-07-18 01:19:03 +00:00
Sherlock Huang
ac5a94789f
Refactor lift_subgraph_as_module as a fx.passes.util function ( #80292 )
...
lift_subgraph_as_module can be shared between fuser_utils.py and spliter_utils.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80292
Approved by: https://github.com/jjsjann123 , https://github.com/842974287
2022-06-29 22:35:39 +00:00
Sherlock Huang
752c06e0e1
FX graph partitioner and fuser ( #79439 )
...
This PR introduces two components.
CapabilityBasedPartitioner for FX graphs: given a list of supported operators, this partitioner tries to form the largest subgraphs that contain only the supported ops.
Fuser utility: given a list of nodes in an FX graph, it lifts them into a sub-GraphModule within the original graph.
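Today's usage looks roughly like the sketch below (module paths and method names reflect the current API and may differ from what landed in this PR; the operator-support class is made up):
```python
import operator

import torch
from torch import fx
from torch.fx.passes.infra.partitioner import CapabilityBasedPartitioner
from torch.fx.passes.operator_support import OperatorSupport


class AddReluSupport(OperatorSupport):
    # Declare which ops the hypothetical backend supports.
    def is_node_supported(self, submodules, node: fx.Node) -> bool:
        return node.op == "call_function" and node.target in (operator.add, torch.relu)


class M(torch.nn.Module):
    def forward(self, x, y):
        return torch.sigmoid(torch.relu(x + y))  # sigmoid is "unsupported"


gm = fx.symbolic_trace(M())
partitioner = CapabilityBasedPartitioner(gm, AddReluSupport())
partitions = partitioner.propose_partitions()       # groups of supported nodes
fused_gm = partitioner.fuse_partitions(partitions)  # lift each group into a submodule
fused_gm.print_readable()  # add+relu land in a fused call_module; sigmoid stays outside
```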
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79439
Approved by: https://github.com/jjsjann123 , https://github.com/davidberard98
2022-06-24 18:49:37 +00:00