Part of #134054.
This corresponds to the pytorch mypy changes from D61493706. Updating takes so
long and touches so many files that it's impossible to land as a whole without conflicting with some other intermediate change,
so these 'type: ignore' comments are landed in advance of their actually being needed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/134202
Approved by: https://github.com/Skylion007
Summary:
When writing out Graphviz files for graphs, the arguments are sometimes all
in a row and it's unclear which is which. For `aten.conv2d`, for example, someone might not
remember the stride, padding, dilation order.
Add an option `normalize_args` (defaults to False) to normalize all args into kwargs.
This should help the readability of a graph.
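A minimal usage sketch, assuming `normalize_args` is exposed as a keyword argument on the `FxGraphDrawer` constructor (the exact plumbing may differ):
```
import torch
import torch.fx
from torch.fx.passes.graph_drawer import FxGraphDrawer

class Conv(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        return self.conv(x)

gm = torch.fx.symbolic_trace(Conv())
# With normalize_args=True, positional args are rendered as kwargs, so the
# drawn node labels stride/padding/dilation by name.
drawer = FxGraphDrawer(gm, "conv", normalize_args=True)
drawer.get_main_dot_graph().write_svg("conv.svg")
```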
Differential Revision: D59529417
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130348
Approved by: https://github.com/mcremon-meta
Match FxGraphDrawer compat constructor signature to avoid the following failure when `pydot` is not installed:
```
File "/pytorch/torch/_functorch/partitioners.py", line 933, in draw_graph
g = graph_drawer.FxGraphDrawer(
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
TypeError: __init__() got an unexpected keyword argument 'dot_graph_shape'
```
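A minimal sketch of the shape of the fix: the fallback class defined when `pydot` is missing should mirror the real constructor's keyword arguments so callers like `draw_graph` don't crash on the signature (parameter names other than `dot_graph_shape` are assumptions here):
```
try:
    import pydot
    HAS_PYDOT = True
except ImportError:
    HAS_PYDOT = False

if not HAS_PYDOT:
    class FxGraphDrawer:  # compat stub: same signature, fails with a clear error
        def __init__(self, graph_module, name, ignore_getattr=False,
                     ignore_parameters_and_buffers=False,
                     skip_node_names_in_args=True,
                     dot_graph_shape=None):
            raise RuntimeError(
                "FxGraphDrawer requires the pydot package to be installed"
            )
```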
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119767
Approved by: https://github.com/eellison
We draw our fx graphs with the "record" shape attribute by default.
Sometimes, when the graph is very complex, we may hit dot errors like below:
"flat edge between adjacent nodes one of which has a record shape -
replace records with HTML-like labels"
and thus fail to generate a graph. So, let's give the user an option
to specify the shape attribute for the dot graph. For example, passing
INDUCTOR_DOT_GRAPH_SHAPE_SVG = "none" would let us generate HTML-like labels
to work around the above failure.
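A sketch of the intended wiring, assuming the env var is read once and forwarded to the drawer (everything beyond the `INDUCTOR_DOT_GRAPH_SHAPE_SVG` name and the `dot_graph_shape` argument is an assumption):
```
import os

# "record" stays the default; "none" asks the drawer to emit HTML-like
# labels instead, avoiding the flat-edge error quoted above.
dot_graph_shape = os.environ.get("INDUCTOR_DOT_GRAPH_SHAPE_SVG", "record")
# ...later, when drawing:
#   FxGraphDrawer(gm, "inductor", dot_graph_shape=dot_graph_shape)
```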
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114811
Approved by: https://github.com/weifengpy
Example usage:
* `TORCH_COMPILE_DEBUG=1 INDUCTOR_ORIG_FX_SVG=1 INDUCTOR_POST_FUSION_SVG=1 python trig.py`: show the original fx node name, file, and code. See snapshot 2, where we have origin_0, 1, 2
* trig.py can be found in P816304818
Implementation
* keep the original fx graph in GraphLowering: ```self.orig_gm: torch.fx.GraphModule = gm.__copy__()```
* draw the original fx graph with origins at ir_post_fusion: ```V.debug.draw_orig_fx_graph(self.orig_gm, self.scheduler.nodes)```; node.meta["buff_meta"] tracks buf_name
<img width="350" alt="Screenshot 2023-08-29 at 12 40 24 PM" src="https://github.com/pytorch/pytorch/assets/134637289/c4e197cb-ab3b-4a09-a584-c1356376accb">
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107752
Approved by: https://github.com/mlazos
Add a doc test, extending #95534.
I found I need to put the xdoctest under a class method; otherwise, if it's directly under the class definition, the test cannot be found. @Erotemic am I missing anything?
The xdoctest has been tested:
```
$ pytest --xdoctest torch/fx/passes/graph_drawer.py::FxGraphDrawer.get_dot_graph:0
=========== test session starts ==================
platform linux -- Python 3.9.15, pytest-7.2.1, pluggy-1.0.0
rootdir: /localdisk/wenzhexu/dev/forked_pytorch, configfile: pytest.ini
plugins: xdoctest-1.1.1
collected 1 item
torch/fx/passes/graph_drawer.py . [100%]
============ 1 passed in 1.13s ===================
```
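For reference, a toy illustration of the placement that xdoctest picks up (not the actual graph_drawer docstring):
```
class Demo:
    """Doctests placed here, directly under the class, were not collected."""

    def method(self):
        """
        Example:
            >>> Demo().method()
            2
        """
        return 1 + 1
```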
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95919
Approved by: https://github.com/ezyang
Previous usage gave this error:
```
f.write(g.get_dot_graph().create_svg())
TypeError: write() argument must be str, not bytes
```
pydot has functions to save to different types, e.g. `write_svg()`. I updated the usage doc with working code.
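The corrected usage, as a self-contained sketch (the module being traced here is illustrative):
```
import torch
import torch.fx
from torch.fx.passes.graph_drawer import FxGraphDrawer

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.lin(x))

traced = torch.fx.symbolic_trace(Net())
g = FxGraphDrawer(traced, "net")
# pydot writes the rendered format straight to disk, sidestepping the
# str-vs-bytes mismatch of f.write(g.get_dot_graph().create_svg()).
g.get_dot_graph().write_svg("net.svg")
```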
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95534
Approved by: https://github.com/ezyang
Summary:
Encountered `Error: bad label format` from dot (i.e. graphviz) when benchmarking models that have a dict-like structure.
The root cause was that curly brackets were not properly escaped, as in example P522499127 (unescaped curly brackets in the target= string).
This diff inserts the fix in FxGraphDrawer, since many of these graph-generation code paths rely on that class.
(Modified summary before exporting to GitHub PR)
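A hypothetical sketch of the escaping idea (the helper name is invented): in dot's "record" labels, `{`, `}`, and `|` are structural, so literal occurrences in a node's target string must be backslash-escaped.
```
def _escape_record_label(text: str) -> str:
    # Backslash-escape characters that dot's record shape treats as structure.
    for ch in ("{", "}", "|", "<", ">"):
        text = text.replace(ch, "\\" + ch)
    return text

print(_escape_record_label("target={'key': 'value'}"))
# target=\{'key': 'value'\}
```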
Test Plan:
```
CUDA_VISIBLE_DEVICES=7 buck run mode/opt -c python.package_style=inplace //hpc/new/models/feed/benchmark:feed_lower_benchmark -- --model-name={INSERT IFR QE MODEL NAME HERE} --batch-iter 100 --batch-size 768 --num-gpu 1 --lower-presets {INSERT ITS PRESET}
```
Will not encounter dot errors after this diff.
(Modified test plan before exporting to GitHub PR)
Reviewed By: yinghai
Differential Revision: D38758827
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83604
Approved by: https://github.com/yinghai, https://github.com/jianyuh
Summary:
Add an `ignore_parameters_and_buffers` parameter which will tell the graph drawer
to leave off adding parameter and buffer nodes in the dot graph.
This is useful for large networks, where we want to view the graph to get an idea of
the topology and the shapes without needing to see every detail. Removing these nodes
de-clutters the graph significantly without discarding much information.
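A minimal usage sketch, assuming the flag is a keyword argument on the drawer's constructor:
```
import torch
import torch.fx
from torch.fx.passes.graph_drawer import FxGraphDrawer

gm = torch.fx.symbolic_trace(
    torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())
)
# Leave parameter and buffer nodes out so only the topology is drawn.
drawer = FxGraphDrawer(gm, "seq", ignore_parameters_and_buffers=True)
drawer.get_main_dot_graph().write_svg("seq_topology.svg")
```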
Reviewed By: jfix71
Differential Revision: D37317917
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79982
Approved by: https://github.com/jfix71
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73815
Add `skip_node_names_in_args` (default=`True`) which will skip including node names in args/kwargs during graph drawing.
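A short sketch of opting back into the verbose labels, assuming the flag sits on the constructor:
```
import torch
import torch.fx
from torch.fx.passes.graph_drawer import FxGraphDrawer

gm = torch.fx.symbolic_trace(torch.nn.Linear(4, 4))
# The default (skip_node_names_in_args=True) drops node names from the
# printed args/kwargs; pass False to keep them.
drawer = FxGraphDrawer(gm, "linear", skip_node_names_in_args=False)
```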
Test Plan:
Default (`skip_node_names_in_args=True`):
{F707455583}
Vs. `skip_node_names_in_args=False`:
{F707046375}
Reviewed By: wushirong
Differential Revision: D34659144
fbshipit-source-id: 9f0bd7bee98dc1ca8eecdabc960804564d83777b
(cherry picked from commit a0ed64b51f0187115586f4001dc81148c7ed18b9)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73464
- Improve formatting of graph by centering everything
- Add num_users
- Add args/kwargs
- Don't print more than 10 of any list/tuple by default (this is necessary for very large concats)
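A hypothetical sketch of the list-truncation rule (the helper name is invented):
```
def _format_seq(seq, limit=10):
    # Print at most `limit` elements and elide the rest, keeping labels
    # readable for very large concats.
    shown = [str(x) for x in seq[:limit]]
    if len(seq) > limit:
        shown.append("...")
    return "[" + ", ".join(shown) + "]"

print(_format_seq(list(range(15))))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...]
```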
Test Plan: tested locally
Reviewed By: khabinov
Differential Revision: D34492256
fbshipit-source-id: 8073992edb3efddcf8bfd72e2d3db49cc242db10
(cherry picked from commit b1b802965c143fdb0d308b70f51aa741f7d90f78)
Summary:
In the [docstring](https://github.com/pytorch/pytorch/blob/master/torch/fx/passes/graph_drawer.py#L54-L60) we mention `get_dot_graph`, but it is not defined, so I defined it here.
Not sure if this is preferred, or whether we should update the docstring to use `get_main_dot_graph` instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70541
Test Plan:
```
g = FxGraphDrawer(symbolic_traced, "resnet18")
with open("a.svg", "w") as f:
f.write(g.get_dot_graph().create_svg())
```
Reviewed By: khabinov
Differential Revision: D33378080
Pulled By: mostafaelhoushi
fbshipit-source-id: 7feea2425a12d5628ddca15beff0fe5110f4a111
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64787
This PR added support for lowering per-channel quantization and dequantization operators
in fx2trt. It also extends TensorMeta with extra arguments corresponding to per-channel quantized Tensors.
Initially I was thinking of adding a qparam that can capture everything, but currently we still have some lowering support
for fbgemm ops (which have scale and zero_point in the operator interface). I think we can move everything to qparams
after we deprecate lowering support for fbgemm ops in the future.
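For context, a small sketch of what per-channel quantization carries, and hence what the extended TensorMeta needs to record: one scale and zero_point per channel along a chosen axis.
```
import torch

w = torch.randn(8, 4)
scales = torch.full((8,), 0.1)                 # one scale per output channel
zero_points = torch.zeros(8, dtype=torch.int64)
qw = torch.quantize_per_channel(w, scales, zero_points, axis=0, dtype=torch.qint8)
print(qw.q_per_channel_scales().shape)         # torch.Size([8])
print(qw.q_per_channel_axis())                 # 0
```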
Test Plan:
Test for per channel weight:
```
python torch/fx/experimental/fx2trt/example/quantized_resnet_test.py
```
change BC compatibility test expect for TensorMeta
```
python test/test_fx.py TestFXAPIBackwardCompatibility.test_class_member_back_compat --accept
```
Imported from OSS
Reviewed By: jfix71, mrshenli, 842974287
Differential Revision: D30879848
fbshipit-source-id: 76c3804bb1d9343183ae53d9f02c1a3bf6c79e1c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60972
For PyTorch model memory-requirement calculation, requires_grad is needed: output tensors with requires_grad are saved in the module context and increase memory during the forward pass.
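A sketch of reading the recorded flag after shape propagation, assuming requires_grad is part of each node's tensor_meta:
```
import torch
import torch.fx
from torch.fx.passes.shape_prop import ShapeProp

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.lin(x))

gm = torch.fx.symbolic_trace(Net())
ShapeProp(gm).propagate(torch.randn(2, 4))
for node in gm.graph.nodes:
    tm = node.meta.get("tensor_meta")
    if tm is not None:
        # Outputs that require grad are saved for backward, so a memory
        # estimator needs to see this flag per node.
        print(node.name, tm.requires_grad)
```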
Test Plan: Existing test cases
Reviewed By: jamesr66a
Differential Revision: D29024932
fbshipit-source-id: def990f8c6ff6fa4537bfc377c646b9d44464ebd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58699
Give `call_function`/`call_method` nodes random colors based on their target name; the coloring is stable with respect to the target's name. Also handle tensor_meta more elegantly for quantized types, including printing q_scale/q_zero_point if they're used.
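A hypothetical sketch of one way to derive a stable pseudo-random color per target (the actual hashing in graph_drawer may differ):
```
import hashlib

def color_for_target(target_name: str) -> str:
    # Hash the target name and use three digest bytes as RGB, so the same
    # target always maps to the same color across runs and processes.
    d = hashlib.md5(target_name.encode("utf-8")).digest()
    return "#{:02x}{:02x}{:02x}".format(d[0], d[1], d[2])

print(color_for_target("torch.relu"))
```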
Test Plan: Tested locally
Reviewed By: chenccfb, 842974287
Differential Revision: D28580333
fbshipit-source-id: ad9961e1106a1bfa5a018d009b0ddb8802d2163c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56201
Refactor Splitter and Minimizer into the superclasses `_SplitterBase` and `_MinimizerBase` and move them to OSS. This is needed to create an OSS example of GPU lowering with those tools.
Test Plan: CI
Reviewed By: jackm321
Differential Revision: D27629598
fbshipit-source-id: 0d4da02105ca509b31f1a6c4a39b1122c2bc7bf0