Commit Graph

39 Commits

Author SHA1 Message Date
Aaron Orenstein
6503b4a96e Update to using mypy 1.15 (#154054)
The BC break isn't real - mypy decided to start complaining about the way we were typing that function.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154054
Approved by: https://github.com/Skylion007
2025-05-24 04:30:57 +00:00
Zesheng Zong
580f1183b4 Enable ruff rule S324 (#147665)
Fixes #147627

- Add `S324` in `pyproject.toml`
- Run the check and clean up warnings

```bash
lintrunner --take RUFF --all-files
```
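
For context, `S324` is the flake8-bandit check for insecure hash functions. A minimal illustration of what it flags and a common remediation (illustrative only, not code from this PR; assumes Python 3.9+ for `usedforsecurity`):

```python
import hashlib

# Flagged by S324: a weak hash used without declaring a non-security purpose.
cache_key = hashlib.md5(b"some-config-blob").hexdigest()

# Accepted: explicitly mark the digest as not being used for security purposes.
cache_key = hashlib.md5(b"some-config-blob", usedforsecurity=False).hexdigest()
```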

Pull Request resolved: https://github.com/pytorch/pytorch/pull/147665
Approved by: https://github.com/Skylion007

Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
2025-02-25 18:27:34 +00:00
Aaron Orenstein
0b2a3687b9 PEP585 update - torch/fx (#145166)
See #145101 for details.
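
For context, a minimal illustration of the PEP 585 style of change (illustrative only, not code from this PR):

```python
# Before: PEP 484 typing aliases
from typing import List

def evens(values: List[int]) -> List[int]:
    return [v for v in values if v % 2 == 0]

# After: PEP 585 builtin generics (Python 3.9+)
def evens(values: list[int]) -> list[int]:
    return [v for v in values if v % 2 == 0]
```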

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145166
Approved by: https://github.com/bobrenjc93
2025-01-20 18:11:54 +00:00
Edward Z. Yang
612122af8f Fix type-safety of torch.nn.Module instances (#141240)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/141240
Approved by: https://github.com/Skylion007, https://github.com/malfet
2024-11-22 00:05:05 +00:00
PyTorch MergeBot
6dada2136a Revert "Refactor FxGraphDrawer to use HTML-like labels (#137726)"
This reverts commit 1e73842029.

Reverted https://github.com/pytorch/pytorch/pull/137726 on behalf of https://github.com/huydhn due to Sorry for reverting your change, but it looks like some internal components are failing after this change and need to be updated ([comment](https://github.com/pytorch/pytorch/pull/137726#issuecomment-2455332612))
2024-11-04 17:44:44 +00:00
Gabriel Ferns
1e73842029 Refactor FxGraphDrawer to use HTML-like labels (#137726)
Fixes https://github.com/pytorch/pytorch/issues/137499
Testing: Added a new unit test to make sure that the regression case succeeds.
I'm debating whether to make the borders visible. I'm partial to no borders, but it might make the graph harder for some people to read?
![68a2b0e3-orig_fx_graph_diagram](https://github.com/user-attachments/assets/fbc2fd98-9e76-488e-8ebe-c64fbf206932)
Vs.
![2bfe1c4f-orig_fx_graph_diagram](https://github.com/user-attachments/assets/b6bc88ba-dda2-4cf7-84ac-a615e1e03a74)
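
For readers unfamiliar with the two Graphviz label styles involved, a minimal pydot sketch of "record" versus HTML-like labels (assumes pydot is installed; this is not the PR's code):

```python
import pydot

graph = pydot.Dot("example", graph_type="digraph")

# "record" shape: structure comes from "|" and "{...}" inside the label string.
graph.add_node(pydot.Node("n0", shape="record", label="{linear|in=64|out=128}"))

# HTML-like label: shape "none" plus an HTML table, which avoids the
# "flat edge ... record shape" class of dot errors and allows per-cell styling.
graph.add_node(pydot.Node(
    "n1",
    shape="none",
    label='<<TABLE BORDER="0" CELLBORDER="1" CELLSPACING="0">'
          '<TR><TD>linear</TD><TD>in=64</TD><TD>out=128</TD></TR>'
          '</TABLE>>',
))
```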

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137726
Approved by: https://github.com/eellison, https://github.com/malfet
2024-11-01 23:19:50 +00:00
Xuehai Pan
abbd71d29d [BE][Easy] enable PYFMT for torch.fx (#138443)
Reproduce command:

```bash
ghstack checkout https://github.com/pytorch/pytorch/pull/138443
git checkout HEAD~1 torch/
lintrunner -a --take "PYFMT" --all-files
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138443
Approved by: https://github.com/ezyang
2024-10-21 19:15:49 +00:00
Aaron Orenstein
d95aedf5fd [BE] typing for decorators - fx/_compatibility (part 1) (#134202)
Part of #134054.

This corresponds to the pytorch mypy changes from D61493706. Updating takes so
long and touches so many files that it's impossible to land as a whole without conflicting with some other intermediate change.
So these 'type: ignore' comments are being landed for pytorch in advance of when they are actually needed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134202
Approved by: https://github.com/Skylion007
2024-08-22 17:07:33 +00:00
Oguz Ulgen
72d2dba992 Add None return type to init (#132335)
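An illustration of the change pattern (not taken from the PR):

```python
class Widget:
    # Annotating __init__ with "-> None" lets mypy type-check the body even
    # under settings that skip unannotated functions.
    def __init__(self) -> None:
        self.name = "widget"
```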
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132335
Approved by: https://github.com/albanD
2024-08-01 15:26:45 +00:00
PyTorch MergeBot
945bf78894 Revert "[BE] typing for decorators - fx/_compatibility (#131568)"
This reverts commit 193f62fde9.

Reverted https://github.com/pytorch/pytorch/pull/131568 on behalf of https://github.com/clee2000 due to same as https://github.com/pytorch/pytorch/pull/131572#issuecomment-2254328359 but I clicked the wrong link by accident.  This is where it actually starts ([comment](https://github.com/pytorch/pytorch/pull/131568#issuecomment-2254330781))
2024-07-28 03:43:39 +00:00
Aaron Orenstein
193f62fde9 [BE] typing for decorators - fx/_compatibility (#131568)
See #131429

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131568
Approved by: https://github.com/justinchuby, https://github.com/oulgen, https://github.com/zou3519
2024-07-25 22:24:19 +00:00
Riley Dulin
fb5cb17fbe [torch][fx] Add normalize_args constructor argument to FxGraphDrawer (#130348)
Summary:
When writing out Graphviz files for graphs, sometimes the arguments are all
in a row and it's unclear which is which. For example, with `aten.conv2d`, someone might not
remember the stride, padding, dilation order.

Add an option `normalize_args` (defaults to False) to normalize all args into kwargs.
This should help the readability of a graph.
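
A usage sketch of the new option (assumes pydot/graphviz are available; the module and names below are made up for illustration):

```python
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.graph_drawer import FxGraphDrawer


class Tiny(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 3, 3, 3))

    def forward(self, x):
        # stride/padding passed positionally, which is hard to tell apart in a drawing
        return torch.nn.functional.conv2d(x, self.weight, None, 2, 1)


traced = symbolic_trace(Tiny())

# normalize_args=True renders positional args as named kwargs in the drawn graph.
drawer = FxGraphDrawer(traced, "tiny", normalize_args=True)
dot = drawer.get_dot_graph()
```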

Differential Revision: D59529417

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130348
Approved by: https://github.com/mcremon-meta
2024-07-09 22:16:54 +00:00
Xuehai Pan
d80939e5e9 [BE] enable UFMT for torch/storage.py (#127706)
Part of #123062

- #123062

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127706
Approved by: https://github.com/ezyang
2024-06-27 23:16:24 +00:00
Aaron Orenstein
038b927590 Flip default value for mypy disallow_untyped_defs [7/11] (#127844)
See #127836 for details.
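
A sketch of what flipping the default means for a file (illustrative, not from the PR):

```python
# With disallow_untyped_defs enabled, mypy reports:
#   error: Function is missing a type annotation
def scale(x, factor):
    return x * factor

# Fully annotated version that satisfies the stricter setting:
def scale_typed(x: float, factor: float) -> float:
    return x * factor
```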

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127844
Approved by: https://github.com/oulgen
ghstack dependencies: #127842, #127843
2024-06-08 18:49:45 +00:00
chuanqiw
ff63e8bac8 [CI] fix doctest case by adding requires (#126855)
The triton update introduces a new dependency, `llnl-hatchet`, which in turn depends on `pydot`. As a result, the doctest case `torch/fx/passes/graph_drawer.py::FxGraphDrawer.get_dot_graph:0` will no longer be skipped.
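
A sketch of the kind of guard involved, using xdoctest's `REQUIRES` directive (the surrounding names are illustrative, not the actual patch):

```python
class FxGraphDrawerDocExample:
    def get_dot_graph(self):
        """
        Example:
            >>> # xdoctest: +REQUIRES(module:pydot)
            >>> # With the directive above, xdoctest skips this example unless
            >>> # pydot is importable, instead of failing when it runs.
            >>> g = FxGraphDrawer(symbolic_traced, "resnet18")
            >>> dot = g.get_dot_graph()
        """
```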
Pull Request resolved: https://github.com/pytorch/pytorch/pull/126855
Approved by: https://github.com/etaf, https://github.com/EikanWang, https://github.com/peterbell10
2024-05-27 07:40:27 +00:00
vfdev
8ec3d8e35f Fixed FxGraphDrawer compat constructor (#119767)
Match FxGraphDrawer compat constructor signature to avoid the following failure when `pydot` is not installed:
```
  File "/pytorch/torch/_functorch/partitioners.py", line 933, in draw_graph
    g = graph_drawer.FxGraphDrawer(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
TypeError: __init__() got an unexpected keyword argument 'dot_graph_shape'
```
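A sketch of the compat-stub pattern this fix targets (assumed shape, not the actual code): the no-pydot fallback must accept the same constructor signature as the real drawer.

```python
try:
    import pydot
    HAS_PYDOT = True
except ImportError:
    HAS_PYDOT = False

if not HAS_PYDOT:
    class FxGraphDrawer:  # fallback used when pydot is missing
        def __init__(self, graph_module, name, *args, **kwargs):
            # Accepting *args/**kwargs (e.g. dot_graph_shape) keeps the failure a
            # clear "pydot is required" error rather than an unexpected-keyword
            # TypeError caused by a mismatched signature.
            raise RuntimeError("FxGraphDrawer requires the pydot package")
```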
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119767
Approved by: https://github.com/eellison
2024-02-13 19:36:01 +00:00
chilli
1f51f977ae misc visualization/utility improvements (#114984)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114984
Approved by: https://github.com/weifengpy
ghstack dependencies: #114520
2023-12-02 04:02:39 +00:00
Yang Chen
5c3f03e2dd [inductor] add a config to specify the shape attribute for the generated svg graphs (#114811)
We draw our fx graphs with the "record" shape attribute by default.
Sometimes, when the graph is very complex, we may hit dot errors like:
  "flat edge between adjacent nodes one of which has a record shape -
   replace records with HTML-like labels"
and thus fail to generate a graph. So, let's give the user an option
to specify the shape attribute for the dot graph. For example, passing
INDUCTOR_DOT_GRAPH_SHAPE_SVG="none" lets us generate HTML-like labels
to work around the above failure.
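
A usage sketch (the environment variable name comes from the message above; reading it before compilation starts is an assumption):

```python
import os

# Switch inductor's debug SVG graphs away from the default "record" node shape;
# "none" makes the drawer fall back to HTML-like labels.
os.environ["INDUCTOR_DOT_GRAPH_SHAPE_SVG"] = "none"

# Set the variable before triggering torch.compile so the debug drawer sees it.
```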

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114811
Approved by: https://github.com/weifengpy
2023-11-30 06:10:37 +00:00
Jon Chuang
79212430df feat(inductor): fx graph debug should display device (#110346)
Device mismatch issues are the root cause of https://github.com/pytorch/pytorch/issues/107006, so make device-related scheduling issues easier to diagnose.
Also format single-kwarg graphs to be more concise.

Example rendering:
![image](https://github.com/pytorch/pytorch/assets/9093549/1b59a994-f2df-45c9-8cb7-37eb3ba12654)

CC code owners: @ngimel @jansel @shunting314 @mlazos @peterbell10

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110346
Approved by: https://github.com/eellison
2023-10-11 00:34:55 +00:00
willfengg
772e104dfd [inductor] visualize fused ops in svg graph (#107752)
example usage
* `TORCH_COMPILE_DEBUG=1 INDUCTOR_ORIG_FX_SVG=1 INDUCTOR_POST_FUSION_SVG=1 python trig.py`: shows the original fx node name, file, and code; see snapshot 2, where we have origin_0, 1, 2
* trig.py can be found in P816304818

Implementation
* keep original fx graph in GraphLowering, ```self.orig_gm: torch.fx.GraphModule = gm.__copy__()```
* draw original fx graph with origins ir_post_fusion ```V.debug.draw_orig_fx_graph(self.orig_gm, self.scheduler.nodes)```. node.meta["buff_meta"] tracks buf_name

<img width="350" alt="Screenshot 2023-08-29 at 12 40 24 PM" src="https://github.com/pytorch/pytorch/assets/134637289/c4e197cb-ab3b-4a09-a584-c1356376accb">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107752
Approved by: https://github.com/mlazos
2023-09-21 08:03:05 +00:00
Aaron Gokaslan
6d43c89f37 [BE]: Update Ruff to 0.0.280 (#105724)
Removes unused loop values in Python dictionary iteration. Automated fix from Ruff master.
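
The pattern being cleaned up, in miniature (illustrative):

```python
sizes = {"small": 1, "medium": 2}

# Before: the value is bound but never used in the loop body.
for name, size in sizes.items():
    print(name)

# After (Ruff's automated fix): iterate over the keys directly.
for name in sizes:
    print(name)
```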

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105724
Approved by: https://github.com/ezyang, https://github.com/janeyx99
2023-07-22 23:03:34 +00:00
Wenzhe Xue
b7a3f331f1 Add doc test in graph_drawer.py (#95919)
Add a doc test, extending #95534.

I found I need to put the xdoctest under a class method. Otherwise, if it's right under the class definition, the test cannot be found. @Erotemic Did I miss anything?

The xdoctest has been tested:
```
$ pytest --xdoctest torch/fx/passes/graph_drawer.py::FxGraphDrawer.get_dot_graph:0
=========== test session starts ==================
platform linux -- Python 3.9.15, pytest-7.2.1, pluggy-1.0.0
rootdir: /localdisk/wenzhexu/dev/forked_pytorch, configfile: pytest.ini
plugins: xdoctest-1.1.1
collected 1 item

torch/fx/passes/graph_drawer.py .                                                                                                                                                                               [100%]

============ 1 passed in 1.13s ===================
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95919
Approved by: https://github.com/ezyang
2023-03-05 02:23:18 +00:00
Wenzhe Xue
8dfac7b887 Update fx.pass.graph_drawer usage doc to draw fx graph (#95534)
Previous usage gave this error:
```
f.write(g.get_dot_graph().create_svg())
TypeError: write() argument must be str, not bytes
```

pydot has functions to save to different formats, e.g. `save_svg()`. I updated the usage doc to working code.
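
A sketch of the corrected usage (assumes pydot and Graphviz are installed; `write_svg` is assumed here as one of pydot's generated `write_<format>` helpers, not quoted from the doc):

```python
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.graph_drawer import FxGraphDrawer

traced = symbolic_trace(torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()))
g = FxGraphDrawer(traced, "example")

# create_svg() returns bytes, so a text-mode file raises the TypeError quoted above;
# open the file in binary mode instead.
with open("example.svg", "wb") as f:
    f.write(g.get_dot_graph().create_svg())

# Equivalently, let pydot write the file itself.
g.get_dot_graph().write_svg("example2.svg")
```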

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95534
Approved by: https://github.com/ezyang
2023-02-27 19:27:29 +00:00
Aaron Gokaslan
8fce9a09cd [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
Apply parts of pyupgrade to torch (starting with the safest changes).
This PR only does two things: it removes explicit inheritance from `object` and removes unused `__future__` imports.
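
The two change patterns, in miniature (illustrative):

```python
# Before: Python 2 era patterns
from __future__ import print_function

class Config(object):
    pass

# After pyupgrade: the no-op future import is dropped and classes inherit implicitly
class Config:
    pass
```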

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94308
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-07 21:10:56 +00:00
Sungmin Cho
bf67589915 Escape curly brackets in FxGraphDrawer _typename (#83604)
Summary:
Encountered `Error: bad label format` from dot (i.e. graphviz) when benchmarking models that have a dict-like structure.

The root cause was that curly brackets were not properly escaped, as in this example: P522499127 (unescaped curly brackets in the target= string).

This diff inserts the fix in FxGraphDrawer, since much of this graph-generation code relies on that class.
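
A minimal sketch of the kind of escaping involved (hypothetical helper, not the PR's code):

```python
def escape_record_label(text: str) -> str:
    # dot "record" labels treat { } | as structure, so literal braces in a
    # type name such as "Dict{str: Tensor}" must be escaped to avoid
    # "Error: bad label format".
    return text.replace("{", r"\{").replace("}", r"\}")

print(escape_record_label("Dict{str: Tensor}"))  # Dict\{str: Tensor\}
```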

(Modified summary before exporting to GitHub PR)

Test Plan:
```
CUDA_VISIBLE_DEVICES=7 buck run mode/opt -c python.package_style=inplace //hpc/new/models/feed/benchmark:feed_lower_benchmark -- --model-name={INSERT IFR QE MODEL NAME HERE} --batch-iter 100 --batch-size 768 --num-gpu 1 --lower-presets {INSERT ITS PRESET}
```

Will not encounter dot errors after this diff.

(Modified test plan before exporting to GitHub PR)

Reviewed By: yinghai

Differential Revision: D38758827

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83604
Approved by: https://github.com/yinghai, https://github.com/jianyuh
2022-08-31 15:15:21 +00:00
anjali411
4bf076e964 Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80520
Approved by: https://github.com/rohan-varma
2022-07-08 14:31:24 +00:00
Riley Dulin
d579838eb5 [torch][fx] Add ignore_parameters_and_buffers kwarg to FxGraphDrawer (#79982)
Summary:
Add an `ignore_parameters_and_buffers` parameter which tells the graph drawer
to leave parameter and buffer nodes out of the dot graph.

This is useful for large networks, where we want to view the graph to get an idea of
the topology and the shapes without needing to see every detail. Removing these buffers
de-clutters the graph significantly without losing much information.
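
A usage sketch of the new kwarg (assumes pydot/graphviz are installed; the model here is illustrative):

```python
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.graph_drawer import FxGraphDrawer

model = torch.nn.Sequential(
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
)
traced = symbolic_trace(model)

# Draw only the compute topology; parameter and buffer nodes are left out.
drawer = FxGraphDrawer(traced, "mlp", ignore_parameters_and_buffers=True)
dot = drawer.get_dot_graph()
```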

Reviewed By: jfix71

Differential Revision: D37317917

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79982
Approved by: https://github.com/jfix71
2022-06-29 22:48:43 +00:00
Jordan Fix
e99e3fa580 [fx/graph_drawer] Add skip_node_names_in_args option, default to True (#73815)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73815

Add `skip_node_names_in_args` (default=`True`) which will skip including node names in args/kwargs during graph drawing.
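
A usage sketch of the toggle (assumes pydot/graphviz are installed):

```python
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.graph_drawer import FxGraphDrawer

traced = symbolic_trace(torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()))

# Opt back into spelling out node names inside args/kwargs in the drawing.
drawer = FxGraphDrawer(traced, "example", skip_node_names_in_args=False)
dot = drawer.get_dot_graph()
```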

Test Plan:
Default (`skip_node_names_in_args=True`):

{F707455583}

Vs. `skip_node_names_in_args=False`:

{F707046375}

Reviewed By: wushirong

Differential Revision: D34659144

fbshipit-source-id: 9f0bd7bee98dc1ca8eecdabc960804564d83777b
(cherry picked from commit a0ed64b51f0187115586f4001dc81148c7ed18b9)
2022-03-08 01:50:03 +00:00
Jordan Fix
b196e016a6 [fx/graph_drawer] Add args/kwargs and users (#73464)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73464

- Improve formatting of graph by centering everything
- Add num_users
- Add args/kwargs
  - Don't print more than 10 of any list/tuple by default (this is necessary for very large concats)

Test Plan: tested locally

Reviewed By: khabinov

Differential Revision: D34492256

fbshipit-source-id: 8073992edb3efddcf8bfd72e2d3db49cc242db10
(cherry picked from commit b1b802965c143fdb0d308b70f51aa741f7d90f78)
2022-02-26 11:29:39 +00:00
Tomomas
b2b63209e1 simplify code in get buffers and parameters (#70399)
Summary:
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70399

Reviewed By: george-qi

Differential Revision: D33823535

Pulled By: khabinov

fbshipit-source-id: 8d1e49595da1f5cc14db7634a8c27556b02a5361
(cherry picked from commit 78bbb53614)
2022-01-28 19:15:56 +00:00
Mostafa Elhoushi
3f53365086 define get_dot_graph (#70541)
Summary:
In the [docstring](https://github.com/pytorch/pytorch/blob/master/torch/fx/passes/graph_drawer.py#L54-L60) we mention `get_dot_graph`, but it is not defined, so I defined it here.
Not sure if this is preferred, or whether we should update the docstring to use `get_main_dot_graph` instead.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70541

Test Plan:
```
            g = FxGraphDrawer(symbolic_traced, "resnet18")
            with open("a.svg", "w") as f:
                f.write(g.get_dot_graph().create_svg())
```

Reviewed By: khabinov

Differential Revision: D33378080

Pulled By: mostafaelhoushi

fbshipit-source-id: 7feea2425a12d5628ddca15beff0fe5110f4a111
2022-01-05 20:00:20 -08:00
Jerry Zhang
3d6d4f4322 [fx2trt][quant] Add lowering support for per channel quantization in fx2trt (#64787)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64787

This PR added support for lowering per channel quantization and dequantization operators
in fx2trt. It also extends TensorMeta with extra arguments corresponding to per channel quantized Tensors.
Initially I was thinking of adding a qparam that can capture everything, but currently we still have some lowering support
for fbgemm ops (which have scale and zero_point in the operator interface). I think we can move everything to qparams
after we deprecate lowering support for fbgemm ops in the future.

Test Plan:
Test for per channel weight:
```
python torch/fx/experimental/fx2trt/example/quantized_resnet_test.py
```

change BC compatibility test expect for TensorMeta
```
python test/test_fx.py TestFXAPIBackwardCompatibility.test_class_member_back_compat --accept
```

Imported from OSS

Reviewed By: jfix71, mrshenli, 842974287

Differential Revision: D30879848

fbshipit-source-id: 76c3804bb1d9343183ae53d9f02c1a3bf6c79e1c
2021-09-30 18:54:14 -07:00
James Reed
0559cb37cd [FX] Ensure BC coverage for all of torch.fx.passes (#65081)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65081

Test Plan: Imported from OSS

Reviewed By: jbschlosser, khabinov

Differential Revision: D30967428

Pulled By: jamesr66a

fbshipit-source-id: 2ff83da728dc469f086cf504e71b43396db612d8
2021-09-17 09:32:43 -07:00
James Reed
874f9bd509 [FX] Gate FXGraphDrawer on whether pydot is installed (#65088)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65088

Test Plan: Imported from OSS

Reviewed By: khabinov

Differential Revision: D30967951

Pulled By: jamesr66a

fbshipit-source-id: dba2f13a47889b3d4187de925b4fe74ee90b7f79
2021-09-16 10:04:33 -07:00
Malay Bag
287c0ab170 [FX] Add requires_grad to TensorMetadata (#60972)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60972

For PyTorch model memory requirement calculation, requires_grad is needed. Output tensors with requires_grad are saved in the module context and increase memory during the forward pass.
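
A simplified stand-in for the metadata record (field set trimmed; only an illustration of where `requires_grad` fits, not the real class):

```python
from typing import NamedTuple

import torch


class TensorMetadataSketch(NamedTuple):
    shape: torch.Size
    dtype: torch.dtype
    requires_grad: bool  # lets a memory estimator count activations kept for backward


t = torch.randn(2, 3, requires_grad=True)
meta = TensorMetadataSketch(t.shape, t.dtype, t.requires_grad)
```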

Test Plan: Existing test cases

Reviewed By: jamesr66a

Differential Revision: D29024932

fbshipit-source-id: def990f8c6ff6fa4537bfc377c646b9d44464ebd
2021-06-29 23:07:27 -07:00
Oleg Khabinov
0d7d316dc1 [fx ir] Support lists and dicts in FX IR GraphDrawer (#58775)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58775

Reviewed By: RoshanPAN

Differential Revision: D28613939

fbshipit-source-id: 4164e2dd772b59240ea3907001fe4ebddb003060
2021-06-10 01:55:53 -07:00
Jordan Fix
6d97a80dd2 [fx][graph_drawer] Improve graph drawer coloring and tensor_meta handling (#58699)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58699

Make `call_function`/`call_method` nodes random colors based on their target name. This coloring is stable according to the name of the target. Also handle tensor_meta more elegantly for quantized types, including printing q_scale/q_zero_point if they're used.
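
One way to get name-stable colors, as a hypothetical sketch (not the PR's palette or hashing scheme):

```python
import hashlib

_PALETTE = ["#e6f2ff", "#fff2cc", "#e2f0d9", "#fbe5d6", "#ead1dc"]


def color_for_target(target_name: str) -> str:
    # A content hash (unlike Python's randomized hash()) yields the same color
    # for the same target name on every run.
    digest = hashlib.md5(target_name.encode("utf-8")).hexdigest()
    return _PALETTE[int(digest, 16) % len(_PALETTE)]


print(color_for_target("aten.relu"))
```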

Test Plan: Tested locally

Reviewed By: chenccfb, 842974287

Differential Revision: D28580333

fbshipit-source-id: ad9961e1106a1bfa5a018d009b0ddb8802d2163c
2021-05-20 21:26:04 -07:00
Oleg Khabinov
73f22bcbf9 [fx ir] Handle cases in GraphDrawer when shape, type or stride are not present (#57845)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57845

As title says

Test Plan: N/A

Reviewed By: 842974287

Differential Revision: D28295999

fbshipit-source-id: f2cbf80c468f13685b17bb396c1f48972744ced0
2021-05-07 17:24:48 -07:00
Shiyan Deng
45692fbef0 [fx splitter][fx net_min] Move Splitter, Minimizer and necessary deps to OSS (#56201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56201

Refactor Splitter and Minimizer into the superclasses `_SplitterBase` and `_MinimizerBase` and move them to OSS. This is needed to create an OSS example of GPU lowering with those tools.

Test Plan: CI

Reviewed By: jackm321

Differential Revision: D27629598

fbshipit-source-id: 0d4da02105ca509b31f1a6c4a39b1122c2bc7bf0
2021-04-24 15:19:12 -07:00