Commit Graph

152 Commits

Zhengxu Chen
abd759d50d [fx] Add hooks to intercept node replacements. (#117825)
Summary: Add an experimental API to FX GraphModule that installs "hooks" which run whenever we change or replace nodes in a graph, so that we can properly update the new name in the graph signature and potentially other places.
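
A hedged sketch of how such a hook might be used; `_set_replace_hook` and the hook's argument names are assumptions based on this experimental API and may differ in detail:
```python
import torch
from torch import fx

gm = fx.symbolic_trace(lambda x: torch.relu(x))
relu = next(n for n in gm.graph.nodes if n.op == "call_function")

def replace_hook(old, new, user):
    # Hypothetical signature: fires for each use that gets rewritten, e.g.
    # so an exported program can keep its graph signature names in sync.
    print(f"in {user.name}: {old.name} -> {new}")

with gm._set_replace_hook(replace_hook):  # assumed experimental entry point
    relu.replace_all_uses_with(relu.args[0])
```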

Test Plan:
buck test mode/opt  -c fbcode.enable_gpu_sections=true caffe2/test/distributed/_tensor/experimental:tp_transform

buck test mode/opt caffe2/test:test_export -- -r test_replace_hook

Differential Revision: D52896531

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117825
Approved by: https://github.com/avikchaudhuri
2024-01-23 22:28:40 +00:00
Edward Z. Yang
003c900d5e Add _assert_scalar (#117378)
Peeled off from https://github.com/pytorch/pytorch/pull/114148, because that PR is going to take a while to actually land.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/117378
Approved by: https://github.com/jansel
2024-01-14 00:50:36 +00:00
PyTorch MergeBot
1174e82bde Revert "Add _assert_scalar and teach Inductor to codegen it (#114148)"
This reverts commit b6028acfa4.

Reverted https://github.com/pytorch/pytorch/pull/114148 on behalf of https://github.com/osalpekar due to Going to revert this given the broken torchrec PT2 tests internally: [D52648865](https://www.internalfb.com/diff/D52648865). Logs aren't too clear but @dstaay-fb can help debug as well ([comment](https://github.com/pytorch/pytorch/pull/114148#issuecomment-1886100368))
2024-01-11 02:30:22 +00:00
Edward Z. Yang
b6028acfa4 Add _assert_scalar and teach Inductor to codegen it (#114148)
Inductor codegen for `_assert_async` is currently disabled because we don't really understand how to codegen `scalar_to_tensor` on a Sympy expression. I initially tried to see if I could get this to work, but I got into some weird problem involving stride sorting, so I decided to fix it properly by not going through a tensor.

So we introduce an `_assert_scalar` which takes a scalar as an argument, avoiding needing to turn a SymBool into a tensor before asserting on it. I also add `_functional_assert_scalar` for good luck, although this doesn't do anything right now because https://github.com/pytorch/pytorch/pull/104203 still hasn't been landed.

I need to customize the codegen for this operator, so I decided to implement it directly in Inductor rather than trying to treat it as a generic ExternKernel. This leads to the new AssertScalar IR node, which is written carefully so that it doesn't get DCE'd by Inductor.
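
A minimal sketch of calling the new op directly (a plain Python scalar stands in for the SymBool here):
```python
import torch

# _assert_scalar asserts directly on a scalar value plus a message, so no
# scalar_to_tensor round-trip is needed; a failing condition raises.
u = 5  # stands in for an unbacked SymInt under compile/export
torch.ops.aten._assert_scalar(u >= 0, "expected u >= 0")
```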

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114148
Approved by: https://github.com/jansel
2024-01-09 23:21:26 +00:00
Aaron Gokaslan
3fe437b24b [BE]: Update flake8 to v6.1.0 and fix lints (#116591)
Updates flake8 to v6.1.0 and fixes a few lints using sed and some ruff tooling.
- Replace `assert(0)` with `raise AssertionError()`
- Remove extraneous parentheses, e.g.
  - `assert(a == b)` -> `assert a == b`
  - `if(x > y or y < z):` -> `if x > y or y < z:`
  - and `return('...')` -> `return '...'`

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116591
Approved by: https://github.com/albanD, https://github.com/malfet
2024-01-03 06:04:44 +00:00
Tugsbayasgalan Manlaibaatar
76b1d44d57 pre_dispatch aot_export (#115188)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115188
Approved by: https://github.com/bdhirsh
2023-12-25 04:51:21 +00:00
PyTorch MergeBot
0567f71ac6 Revert " pre_dispatch aot_export (#115188)"
This reverts commit a267d67350.

Reverted https://github.com/pytorch/pytorch/pull/115188 on behalf of https://github.com/jeanschmidt due to sadly, it is required to revert this commit in order to revert https://github.com/pytorch/pytorch/pull/115454 ([comment](https://github.com/pytorch/pytorch/pull/115188#issuecomment-1866310014))
2023-12-21 14:03:18 +00:00
Tugsbayasgalan Manlaibaatar
a267d67350 pre_dispatch aot_export (#115188)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115188
Approved by: https://github.com/bdhirsh
2023-12-20 21:36:25 +00:00
Aaron Gokaslan
b7b2178204 [BE]: Remove useless lambdas (#113602)
Applies PLW0108, which removes useless lambdas in Python. The rule is in preview, so it is not yet ready to be enabled by default. These are the autofixes from the rule.
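
For illustration, the kind of pattern PLW0108 rewrites (hypothetical snippet):
```python
# Flagged: the lambda merely forwards its argument to another callable.
apply = lambda x: abs(x)

# Autofix: reference the callable directly.
apply = abs
```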

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113602
Approved by: https://github.com/albanD
2023-11-14 20:06:48 +00:00
Ken Jin
70064ac416 [Dynamo] Match closures by code ID (#109427)
Closes https://github.com/pytorch/pytorch/issues/107866

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109427
Approved by: https://github.com/ezyang, https://github.com/jansel
2023-11-12 08:20:14 +00:00
Zhengxu Chen
64d75f72d4 [fx] Add a faster method for inserting positional argument. (#111974)
Summary:
Traditionally, when users want to update the arguments for an FX node, the only way is to call the setter of the .args property on the node. This can be problematic when we insert a lot of arguments: because of the semantics of the setter method, it has worst-case O(n) complexity.

Adding a new `insert_arg` method provides two benefits:
1. The operation is guaranteed to have O(1) cost.
2. Users can express their intention more directly, instead of writing code like `node.args = (arg,) + node.args`
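
A minimal sketch of the new API, building a small graph by hand:
```python
import torch
from torch.fx import Graph

g = Graph()
x = g.placeholder("x")
y = g.placeholder("y")
node = g.call_function(torch.add, (y,))  # deliberately missing the first arg
node.insert_arg(0, x)  # O(1), instead of node.args = (x,) + node.args
g.output(node)
print(g)  # add's args are now (%x, %y)
```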

Test Plan: caffe2/test:fx -- -r test_insert_arg

Reviewed By: suo

Differential Revision: D50574435

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111974
Approved by: https://github.com/angelayi
2023-10-26 02:30:42 +00:00
PyTorch MergeBot
b0087b4cf7 Revert "record_function: remove legacy internal operators (#72303)"
This reverts commit 0be84bb41e.

Reverted https://github.com/pytorch/pytorch/pull/72303 on behalf of https://github.com/izaitsevfb due to Apparently _record_function_enter is still used internally at Meta in several places and in lots of internal tests. ([comment](https://github.com/pytorch/pytorch/pull/72303#issuecomment-1777942975))
2023-10-24 20:01:14 +00:00
Peter Bell
0be84bb41e record_function: remove legacy internal operators (#72303)
These operators have not been used since #76420 but were preserved for TorchScript backward compatibility

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72303
Approved by: https://github.com/albanD
ghstack dependencies: #104535
2023-10-23 22:55:05 +00:00
Jason Ansel
a1154e673b [Compiled Autograd] Turn accumulate_grad into an op (#111700)
Relands #111271

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111700
Approved by: https://github.com/voznesenskym
2023-10-21 17:31:09 +00:00
PyTorch MergeBot
3eb5cae3af Revert "[Compiled Autograd] Turn accumulate_grad into an op (#111271)"
This reverts commit 04b04c0686.

Reverted https://github.com/pytorch/pytorch/pull/111271 on behalf of https://github.com/jeanschmidt due to Breaking internal CI ([comment](https://github.com/pytorch/pytorch/pull/111271#issuecomment-1768527932))
2023-10-18 14:02:34 +00:00
Jason Ansel
04b04c0686 [Compiled Autograd] Turn accumulate_grad into an op (#111271)
Rather than baking the behavior of `AccumulateGrad` nodes into the generated graph (either as `+=`, or as a return value of the graph), this creates a new `accumulate_grad_` dispatcher op that is included in the generated graph like:
```
def forward(self, inputs, sizes, hooks):
    getitem = inputs[0]
    getitem_1 = inputs[1]
    getitem_2 = inputs[2]
    getitem_3 = inputs[3]
    getitem_4 = inputs[4]
    getitem_5 = inputs[5]
    getitem_6 = inputs[6]
    getitem_7 = inputs[7]
    getitem_8 = inputs[8]
    getitem_9 = inputs[9];  inputs = None
    expand = torch.ops.aten.expand.default(getitem, [2, 4]);  getitem = None
    threshold_backward = torch.ops.aten.threshold_backward.default(expand, getitem_1, 0);  expand = getitem_1 = None
    t = torch.ops.aten.t.default(getitem_3);  getitem_3 = None
    mm = torch.ops.aten.mm.default(threshold_backward, t);  t = None
    t_1 = torch.ops.aten.t.default(threshold_backward)
    mm_1 = torch.ops.aten.mm.default(t_1, getitem_2);  t_1 = getitem_2 = None
    t_2 = torch.ops.aten.t.default(mm_1);  mm_1 = None
    sum_1 = torch.ops.aten.sum.dim_IntList(threshold_backward, [0], True);  threshold_backward = None
    view = torch.ops.aten.view.default(sum_1, [4]);  sum_1 = None
    t_3 = torch.ops.aten.t.default(t_2);  t_2 = None
    accumulate_grad_ = torch.ops.inductor.accumulate_grad_.default(getitem_4, t_3);  getitem_4 = t_3 = None
    threshold_backward_1 = torch.ops.aten.threshold_backward.default(mm, getitem_5, 0);  mm = getitem_5 = None
    t_4 = torch.ops.aten.t.default(threshold_backward_1)
    mm_2 = torch.ops.aten.mm.default(t_4, getitem_6);  t_4 = getitem_6 = None
    t_5 = torch.ops.aten.t.default(mm_2);  mm_2 = None
    sum_2 = torch.ops.aten.sum.dim_IntList(threshold_backward_1, [0], True);  threshold_backward_1 = None
    view_1 = torch.ops.aten.view.default(sum_2, [4]);  sum_2 = None
    t_6 = torch.ops.aten.t.default(t_5);  t_5 = None
    accumulate_grad__1 = torch.ops.inductor.accumulate_grad_.default(getitem_7, t_6);  getitem_7 = t_6 = None
    accumulate_grad__2 = torch.ops.inductor.accumulate_grad_.default(getitem_8, view_1);  getitem_8 = view_1 = None
    accumulate_grad__3 = torch.ops.inductor.accumulate_grad_.default(getitem_9, view);  getitem_9 = view = None
    return []

```

The motivation here is that `AccumulateGrad` nodes are causing trouble in FSDP tracing, since FSDP resizes parameters and parameter storage in place inside hooks. We will model this mutation in dynamo, but not during the initial compiled autograd capture. This allows us to bypass failing shape checks in the initial capture.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111271
Approved by: https://github.com/voznesenskym
2023-10-16 21:16:17 +00:00
Michael Voznesensky
de0b18fad9 Use user directed names for variables where possible (#109092)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109092
Approved by: https://github.com/ezyang
ghstack dependencies: #108846
2023-09-13 07:44:04 +00:00
lezcano
4eac43d046 Trace through Tensor slots (#107159)
Namely
```
__delattr__
__delitem__
__getattribute__
__getitem__
__setattr__
__setitem__
__str__
```

We don't trace through `__init__`.
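
A small sketch of the effect (a hedged example; using the eager backend so it runs without a full Inductor setup):
```python
import torch

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    y = x[0]       # Tensor.__getitem__
    x[1] = y * 2   # Tensor.__setitem__
    return x

f(torch.ones(3, 4))  # no graph break on the slot methods
```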

Fixes https://github.com/pytorch/pytorch/issues/106648

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107159
Approved by: https://github.com/Skylion007
2023-08-19 08:56:25 +00:00
Tugsbayasgalan Manlaibaatar
20c5add133 [export] Refactor constrain_as_value and constrain_as_size (#106591)
Some notable changes:
1. `constrain_as_size` allows the min value to be less than 2, since the compiler unconditionally assumes min >= 2 for its own purposes. Instead, we add an additional check to make sure the max value is always greater than 2.
2. Previously, we would runtime-assert on the unbacked symint's value range, which was always [2, max]. I modified this logic to assert on [0, max] unless the user explicitly specifies a min range.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106591
Approved by: https://github.com/gmagogsfm, https://github.com/ezyang
2023-08-15 05:41:43 +00:00
PyTorch MergeBot
745d29b0cc Revert "[export] Refactor constrain_as_value and constrain_as_size (#106591)"
This reverts commit 18989890bf.

Reverted https://github.com/pytorch/pytorch/pull/106591 on behalf of https://github.com/izaitsevfb due to Breaks inductor test on trunk ([comment](https://github.com/pytorch/pytorch/pull/106591#issuecomment-1675069091))
2023-08-11 16:37:47 +00:00
Tugsbayasgalan Manlaibaatar
18989890bf [export] Refactor constrain_as_value and constrain_as_size (#106591)
Some notable changes:
1. `constrain_as_size` allows the min value to be less than 2, since the compiler unconditionally assumes min >= 2 for its own purposes. Instead, we add an additional check to make sure the max value is always greater than 2.
2. Previously, we would runtime-assert on the unbacked symint's value range, which was always [2, max]. I modified this logic to assert on [0, max] unless the user explicitly specifies a min range.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106591
Approved by: https://github.com/gmagogsfm, https://github.com/ezyang
2023-08-11 05:29:22 +00:00
Aaron Gokaslan
6d43c89f37 [BE]: Update Ruff to 0.0.280 (#105724)
Removes unused loop values in Python dictionary iteration. Automated fix from Ruff master.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105724
Approved by: https://github.com/ezyang, https://github.com/janeyx99
2023-07-22 23:03:34 +00:00
Michael Suo
a475ea4542 [fx] change from #users to num_users in graph printout (#101140)
`#users` means stuff in various chat apps, which makes it annoying to copypasta graphs into them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101140
Approved by: https://github.com/ezyang
2023-06-20 21:24:32 +00:00
xuanqi
b27c3558a4 [RFC]: Create aten native op for constrain_range (#103346)
At a high level, the current implementation of the constraint functions (`constrain_as_*`) will raise an exception for the following code snippet:
```
def f(x):
    a = x.item()
    constrain_as_size(a, 4, 7)
    return torch.empty((a, 4))

inp = torch.tensor([5])
ep = torch._export.export(f, (inp,))
```

The reason is that the current constraint logic:
1) Is purely Python, so it won't survive AOT export (the full node is gone after AOT export, since AOT export only maintains aten-level ops).
2) Relies on a side effect to add range constraints to the traced symbol's shape env ([code](9591e52880/torch/fx/experimental/symbolic_shapes.py (L370-L372))).
3) If runtime assertions are turned on (the default), [`_AddRuntimeAssertionsForConstraintsPass`](9591e52880/torch/_export/passes/add_runtime_assertions_for_constraints_pass.py (L98-L100)) will try to append assertion nodes based on the range constraints extracted from the symbols' shape env during another interpretation round.
4) However, because of 1), the range-constraint logic won't run for symbols generated during the AOT export round, so no range-constraint information is available for the assertion round, which causes the issue.
5) As a result, it fails at `torch.empty((a, 4))` (there is no constraint that `a` must be positive).

The fix is to implement the range-constraint logic as a native aten op (with a no-op CPU implementation) so that it survives AOT export.
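
A minimal sketch of the resulting op; per this PR the eager CPU implementation is effectively a no-op, so this runs as-is:
```python
import torch

x = torch.tensor([5])
a = x.item()
# Survives AOT export as a real aten node carrying the range information.
torch.ops.aten.sym_constrain_range(a, min=4, max=7)
```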

**NOTE:**
[Logic](2d745b95d7/torch/fx/experimental/symbolic_shapes.py (L350-L365C15)) within [`constrain_range`](2d745b95d7/torch/fx/experimental/symbolic_shapes.py (LL313C74-L313C74)) is split out as `constrain_range_int` to handle the case where a non-`SymInt` is passed in, and it is reused in the new `_constrain_range`. The reasoning when a non-`SymInt` is provided:
* If we directly call `sym_constrain_range`, the C++ version will be called, which is a no-op.
* So in this case we call `constrain_range_int` instead, to catch issues such as the user providing an input whose tensor shape is out of range during export, like the following variation of the code example above:
```
...
inp = torch.tensor([10])
ep = torch._export.export(f, (inp,)) # immediately raise error
```

Differential Revision: [D46734204](https://our.internmc.facebook.com/intern/diff/D46734204)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103346
Approved by: https://github.com/tugsbayasgalan
2023-06-16 14:55:40 +00:00
Animesh Jain
58d2c66a70 [activation checkpointing] Higher order functional rng op wrappers (#102934)
Introduces two higher order operators
* run_and_save_rng_state - Saves the current rng state and then runs the op.
* run_with_rng_state - Runs the op with the rng state supplied as an input

Ideally, we would like to use torch.compile for these operators, but the current plan is to introduce them at the partitioner level, obviating the need to support them fully through the torch.compile stack. To ensure good enough debugging with minifiers, we ensure that they work with make_fx. In the future, we can move them to torch.compile.
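
A sketch of the semantics these wrappers encode, written here with plain RNG-state APIs rather than the higher-order ops themselves:
```python
import torch

# run_and_save_rng_state: save the current RNG state, then run the op.
state = torch.get_rng_state()
out = torch.rand(4)

# run_with_rng_state: run the op with the supplied RNG state.
backup = torch.get_rng_state()
torch.set_rng_state(state)
replayed = torch.rand(4)
torch.set_rng_state(backup)

assert torch.equal(out, replayed)  # recomputation sees identical randomness
```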

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102934
Approved by: https://github.com/jansel, https://github.com/zou3519
2023-06-12 22:54:17 +00:00
PyTorch MergeBot
66eef31444 Revert "[fx] change from #users to num_users in graph printout (#101140)"
This reverts commit e568c5a18d.

Reverted https://github.com/pytorch/pytorch/pull/101140 on behalf of https://github.com/jeanschmidt due to There are internal changes to this commit that are preventing landing, so I am reverting to unblock the diff train ([comment](https://github.com/pytorch/pytorch/pull/101140#issuecomment-1547989487))
2023-05-15 14:35:22 +00:00
Michael Suo
e568c5a18d [fx] change from #users to num_users in graph printout (#101140)
`#users` means stuff in various chat apps, which makes it annoying to copypasta graphs into them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101140
Approved by: https://github.com/ezyang
2023-05-12 04:34:01 +00:00
Tugsbayasgalan Manlaibaatar
d4bf76c2a4 Persist torch.assert in aten graph (#100101)
This PR introduces a new operator called aten._assert_async.msg, which allows passing a tensor value and an assertion message as inputs. As part of TorchDynamo, we're replacing the use of torch._assert with this new operator so that make_fx also knows how to handle assertions. This is a subset of https://github.com/pytorch/pytorch/pull/98878; refer there for historic reviews.
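
A minimal sketch of the new overload called directly (assuming eager execution):
```python
import torch

cond = torch.tensor(True)
# The .msg overload takes the condition tensor and an assertion message;
# a falsy condition raises with that message.
torch.ops.aten._assert_async(cond, "condition must hold")
```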

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100101
Approved by: https://github.com/jansel
2023-04-28 07:31:43 +00:00
Shiyan Deng
82a54513ac [fx] Add a function to allow adding more functions to the side effect function set (#97288)
Summary: There are some customized functions that we would also like to keep during the dead code elimination pass. Add a function that lets us do that.
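
A minimal sketch, assuming the helper landed as `torch.fx.node.has_side_effect`:
```python
from torch.fx.node import has_side_effect  # assumed location of the helper

def log_tensor(t):
    print(t.shape)

# Register so dead-code elimination keeps calls whose results are unused.
has_side_effect(log_tensor)
```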

Test Plan: Added a unit test

Differential Revision: D44273630

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97288
Approved by: https://github.com/houseroad
2023-04-22 04:42:24 +00:00
Kazuaki Ishizaki
105ef68f72 Fix typos under torch/fx directory (#97596)
This PR fixes typos in comments and messages of `.py` files under `torch/fx` directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97596
Approved by: https://github.com/dagitses, https://github.com/kit1980
2023-04-10 21:57:36 +00:00
Edward Z. Yang
37faa48844 DCE inference graphs too (#97275)
I added a bunch of asserts to verify that I didn't accidentally kill copy_ in the graph; hopefully this, combined with our existing tests, is good enough.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97275
Approved by: https://github.com/bdhirsh
2023-03-23 07:02:52 +00:00
PyTorch MergeBot
a7856e18a7 Revert "DCE inference graphs too (#97275)"
This reverts commit aa3a57b80d.

Reverted https://github.com/pytorch/pytorch/pull/97275 on behalf of https://github.com/ezyang due to this broke a test
2023-03-22 18:55:52 +00:00
Edward Z. Yang
aa3a57b80d DCE inference graphs too (#97275)
I added a bunch of asserts to verify that I didn't accidentally kill copy_ in the graph; hopefully this, combined with our existing tests, is good enough.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97275
Approved by: https://github.com/bdhirsh
2023-03-22 01:02:21 +00:00
Sherlock Huang
f8692dcc4a Node.stack_trace should have innermost frame last (#95592)
Both fx.Tracer and Dynamo should store node.stack_trace in the "innermost frame last" order.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95592
Approved by: https://github.com/ezyang
2023-02-28 02:14:40 +00:00
min-jean-cho
900e09c872 [Dynamo] Support torch.Tensor.fn as TorchVariable, not UserDefinedObjectVariable, preventing graph break (#93243)
As found in #92709, thanks to @ngimel and @jansel, currently `torch.Tensor.fn` points to `UserDefinedObjectVariable` rather than `TorchVariable`. The root cause is https://github.com/pytorch/pytorch/pull/92709#pullrequestreview-1273357406. To prevent this, build a `TorchVariable` for `torch.Tensor.fn` that points to `torch.ops.aten.fn`.

This issue causes a graph break for `torch.Tensor.fn` with `nopython=True`:
```python
import torch
import torch._dynamo as dynamo

#op = torch.ops.aten.abs_ # no graph break
op = torch.Tensor.abs_ # graph break
args = torch.empty(10)

def foo(args):
    return op(args)

opt_foo = dynamo.optimize("inductor", nopython=True)(foo)
y_ = opt_foo(args)

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93243
Approved by: https://github.com/jansel
2023-02-07 09:26:50 +00:00
albanD
496c0a207b Make segment_reduce properly private. (#93166)
I am attempting not to change the aten function, to reduce the number of BC issues on the TorchScript side.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93166
Approved by: https://github.com/ngimel
2023-02-06 18:32:23 +00:00
Nikita Shulga
fd3a7264ae [MPS] Add group_norm[fwd+backward] and mean_var (take 2) (#91190)
Use Prims to implement group_norm, group_norm_backward and mean_var

Use `torch._ops.ops` instead of `torch.ops` in numerous subpackages in order to make them importable from `torch/backend/mps/__init__.py`, as the `torch.ops` alias, defined in 15af4b1cee/torch/__init__.py (L1095), is executed last during the init process.

Add `__all__` to `torch/backends/mps/__init__.py` as well as alias all imports as private

Add `TestNNMPS.test_group_norm_backward` that validates no NaNs are generated during the backward pass

Fixes https://github.com/pytorch/pytorch/issues/88331
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91190
Approved by: https://github.com/albanD
2022-12-22 08:54:37 +00:00
PyTorch MergeBot
645eda0a00 Revert "[MPS] Add group_norm[fwd+backward] and mean_var (#91190)"
This reverts commit 371716eb36.

Reverted https://github.com/pytorch/pytorch/pull/91190 on behalf of https://github.com/kit1980 due to Broke test_correct_module_names because of underscore _ops
2022-12-21 19:37:43 +00:00
Nikita Shulga
371716eb36 [MPS] Add group_norm[fwd+backward] and mean_var (#91190)
Use Prims to implement group_norm, group_norm_backward and mean_var

Use `torch._ops.ops` instead of `torch.ops` in numerous subpackages in order to make them importable from `torch/backend/mps/__init__.py`, as the `torch.ops` alias, defined in 15af4b1cee/torch/__init__.py (L1095), is executed last during the init process.

Depends on https://github.com/pytorch/pytorch/pull/91203

Fixes https://github.com/pytorch/pytorch/issues/88331
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91190
Approved by: https://github.com/albanD
2022-12-21 17:33:27 +00:00
Yanbo Liang
37e46a5035 [Dynamo] Fix several bugs & code refactor in RangeVariable (#89322)
Fix bug in [7k github models](https://github.com/pytorch/torchdynamo/issues/1884): https://github.com/jansel/pytorch-jit-paritybench/blob/master/generated/test_clovaai_stargan_v2.py
```
E       TypeError: 'list' object cannot be interpreted as an integer
E
E       from user code:
E          File "/scratch/ybliang/work/repos/pytorch-jit-paritybench/generated/test_clovaai_stargan_v2.py", line 335, in forward
E           idx = torch.LongTensor(range(y.size(0)))
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89322
Approved by: https://github.com/jansel
2022-11-23 19:44:48 +00:00
Brian Hirsh
b5a925ff2e propagate .meta info when replacing subgraphs in fx (#87255)
Fixes https://github.com/pytorch/torchdynamo/issues/1708

Our FX subgraph partitioner works by taking all of the original output nodes from a subgraph and replacing them with a new `call_module` node in the graph.

If the original subgraph outputs had fake tensors and other metadata stored in their `.meta` attribute though, then this information was getting lost when we spliced in the subgraph.

Losing metadata on an FX graph also seems like an easy trap to fall into, so I'm wondering if there are any better guardrails we can add. I ended up fixing it in this PR by adding an optional kwarg that propagates meta info directly in `fx.Node.replace_all_uses_with`, just because propagating metadata seems like a pretty core thing.
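
A minimal sketch of the resulting kwarg:
```python
import torch
from torch import fx

gm = fx.symbolic_trace(lambda x: torch.relu(x))
relu = next(n for n in gm.graph.nodes if n.op == "call_function")
with gm.graph.inserting_after(relu):
    gelu = gm.graph.call_function(torch.nn.functional.gelu, relu.args)
# Copy relu's .meta (fake tensors, shapes, ...) onto the replacement node
# while rewriting all uses to point at it.
relu.replace_all_uses_with(gelu, propagate_meta=True)
```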

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87255
Approved by: https://github.com/wconstab, https://github.com/SherlockNoMad
2022-11-02 14:36:46 +00:00
David
693250ac85 Docs: fx.Node docs incorrectly state that the self argument is included in args for module calls (#86685)
It seems like the [torch.fx.Node docs](https://pytorch.org/docs/stable/fx.html#torch.fx.Node) are incorrect regarding the inclusion of the self argument for module call nodes.
While the docs state that self (the module) is included in `args`, it is in fact not, as demonstrated by this code:
```python
import torch
from torch import fx, nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.submod = nn.Linear(10, 10)
    def forward(self, x):
        x = x.flatten()
        return self.submod(x)

graph_module = fx.symbolic_trace(Net())
print(graph_module.graph)  # doesn't show self for the submodule call
submod_node = list(graph_module.graph.nodes)[2]
print(submod_node.op)  # call_module
print(submod_node.args)  # (flatten,) => would need to have len 2 if self was included

flatten_node = list(graph_module.graph.nodes)[1]
print(flatten_node.op)  # call_method
print(flatten_node.args)  # (x,) => here self is included (and docs are correct)
```

Since [torch.fx.Interpreter also uses `args` as if self is not included](2fe5808590/torch/fx/interpreter.py (L288)), I assume the docs are incorrect.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86685
Approved by: https://github.com/soulitzer
2022-10-11 18:05:56 +00:00
PyTorch MergeBot
75aa049a81 Revert "[Reland] Add should_traverse_fn to torch.fx.node.map_aggregate (#81695)"
This reverts commit c09d84d325.

Reverted https://github.com/pytorch/pytorch/pull/81695 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-07-20 23:31:05 +00:00
Pavel Belevich
c09d84d325 [Reland] Add should_traverse_fn to torch.fx.node.map_aggregate (#81695)
Test Plan: CI

Differential Revision: D37956824

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81695
Approved by: https://github.com/jamesr66a
2022-07-20 03:50:09 +00:00
PyTorch MergeBot
fde1107fe8 Revert "Add should_traverse_fn to torch.fx.node.map_aggregate (#81510)"
This reverts commit d52f8c2533.

Reverted https://github.com/pytorch/pytorch/pull/81510 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-07-19 09:51:54 +00:00
Pavel Belevich
d52f8c2533 Add should_traverse_fn to torch.fx.node.map_aggregate (#81510)
Adds an optional callback that checks whether map_aggregate should continue recursive traversal. The main motivation is to avoid traversing torch.Size, which is a tuple subclass.
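
A hedged sketch of the intended usage per this commit (note the change was later reverted, and the keyword name is assumed from the PR):
```python
import torch
from torch.fx.node import map_aggregate

val = ((1, 2), torch.Size([3, 4]))
out = map_aggregate(
    val,
    lambda a: a,
    should_traverse_fn=lambda a: not isinstance(a, torch.Size),  # per this PR
)
```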

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81510
Approved by: https://github.com/SherlockNoMad, https://github.com/jamesr66a
2022-07-15 11:57:07 +00:00
anjali411
4bf076e964 Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80520
Approved by: https://github.com/rohan-varma
2022-07-08 14:31:24 +00:00
Natalia Gimelshein
c0ce4b0de9 make refs executor handle kwargs (#79858)
Mostly fixes #78923.
I had to disable function patching in fx for functions with kwonly args, see https://github.com/pytorch/pytorch/compare/ngimel/make_fx_fix?expand=1#diff-090b22122be0779cd14afd2ebaf20d1e7c0bfe837e9eefa1d84e7521bb1defc6R446, cc @jamesr66a.
But it looks like it was doing weird things anyway: it was patching the signature of the wrapped function with arbitrary local vars from the wrapper. That can't be right, but I don't know what the intent there was.
A lot of functions now fail with the nvfuser executor, and some still fail with aten, although with different errors than before.
Edit: undid the change to _symbolic_script.py; it turns out inspect-unwrapping the function is not needed, and fx never sees kwargs.
cc @IvanYashchuk, @Chillee

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79858
Approved by: https://github.com/IvanYashchuk, https://github.com/mruberry
2022-06-21 18:53:15 +00:00
Peter Bell
9ef5c679ef record_function: add torchbind alternative API (#72301)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72301

First step in resolving #35026.

This adds `PythonRecordFunction`, a `torch::CustomClassHolder` for `at::RecordFunction`, to keep the ATen code free of torch includes. It also adds a new, currently unused internal API function, `_record_function_enter_new`, which returns the torchbind object.

Once the FC period has expired, `torch.profiler.record_function` will be updated to use this new internal API. Then, once the BC period has expired, the cpp_custom_type_hack-based API can be removed.
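
For context, a sketch of the user-facing API these internals back:
```python
import torch

# Public profiler API; the torchbind-based internals replace the old
# cpp_custom_type_hack mechanism underneath it.
with torch.profiler.record_function("my_region"):
    torch.ones(128).sum()
```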

Test Plan: Imported from OSS

Reviewed By: dagitses

Differential Revision: D34586311

Pulled By: robieta

fbshipit-source-id: d3eb9ffad7b348548a2b22c75203a92d1cb5115b
(cherry picked from commit 92d2ca808e5fbd20c9d6645dcabc3f059f9ef2d3)
2022-03-08 03:26:27 +00:00
Jay Banerjee
5332d8705b [FX lowering] Modify replace_all_uses_with to allow filtering of nodes to update; use it to (#73763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73763

The test that is enabled generates a graph as such:

```
linear_25 --> sigmoid_14 --> output_1
         \--> output_2
```
Before this diff, (unpadding) layout_transform nodes would be added as follows:

```
linear_25 --> layout_xform1 --> sigmoid_14 --> layout_xform2--> output_1
                           \--> output_2
```
This causes an assertion to fail for the sigmoid node where the input and output types
don't match due to padding differences.

This diff modifies the replacement algorithm to not affect users of an output's parent node
when the user requires padded inputs. This yields the following graph instead:

```
linear_25 --> sigmoid_14 --> layout_xform2--> output_1
         \--> layout_xform1 --> output_2
```

Test Plan: Manually and CI

Reviewed By: jfix71, dborkovic

Differential Revision: D34623590

fbshipit-source-id: 3834b06c95fc5626eccc282216cbe039ac5a3242
(cherry picked from commit af012372ae1a6bb654b0ed9b765993960d5251e4)
2022-03-04 19:35:41 +00:00
Jordan Fix
b196e016a6 [fx/graph_drawer] Add args/kwargs and users (#73464)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73464

- Improve formatting of graph by centering everything
- Add num_users
- Add args/kwargs
  - Don't print more than 10 of any list/tuple by default (this is necessary for very large concats)

Test Plan: tested locally

Reviewed By: khabinov

Differential Revision: D34492256

fbshipit-source-id: 8073992edb3efddcf8bfd72e2d3db49cc242db10
(cherry picked from commit b1b802965c143fdb0d308b70f51aa741f7d90f78)
2022-02-26 11:29:39 +00:00
Jordan Fix
987f146185 [fx] Improve support for tuple subclasses such as NamedTuple (#73198)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73198

Previously, if an arg to an FX node was a subclass of tuple, it would essentially be sanitized back to that base class. An example is setting an arg to a TensorMetadata object, which is a NamedTuple: it would be set as a plain tuple instead.

- Change `map_aggregate` to repack the tuple as `type(a)` when it's not directly a tuple (try/except for best effort)
- During codegen, call `add_global` for `type(a)` if it's not directly a tuple.
- Add an option for an arg to provide a `_custom_fx_repr_fn` for use when stringifying via `_format_arg`
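
A minimal sketch of the repacking behavior:
```python
from typing import NamedTuple
from torch.fx.node import map_aggregate

class Point(NamedTuple):
    x: int
    y: int

out = map_aggregate(Point(1, 2), lambda a: a * 2)
print(type(out).__name__, out)  # Point, not a plain tuple: Point(x=2, y=4)
```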

Test Plan: Added unit test coverage, where we inline the named tuple into arg/kwarg.

Reviewed By: jamesr66a

Differential Revision: D34381888

fbshipit-source-id: bd672a8542e2bba5aa604b448bec920efc256440
(cherry picked from commit 68f99c12dd)
2022-02-23 11:31:10 +00:00
Brian Muse
8bf3179f6e #71946 Remove Python 3.6 references (#72211)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/71946

This commit removes some bits of code that were hard coded for Python 3.6 support from the `.circleci` and `torch` folders. It should only be merged if https://github.com/pytorch/pytorch/issues/66462 is complete.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72211

Reviewed By: dagitses, seemethere

Differential Revision: D33982604

Pulled By: musebc

fbshipit-source-id: 8f453bf9909df615addd59538adb369c65484044
(cherry picked from commit 944a9970fe)
2022-02-08 03:46:20 +00:00
Shen Li
7c2eda3829 Fix fx docs (#72108)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72108

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D33916855

Pulled By: mrshenli

fbshipit-source-id: 5fff6c87555109e43954eff99164e68a56ff95da
(cherry picked from commit 1611c4c75c)
2022-02-02 03:28:07 +00:00
Vasiliy Kuznetsov
2dd46d3aa9 FX: ensure node stack trace survives copying (#69368)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69368

Before this PR, copying a node would lose the stack trace. This PR
ensures that the stack trace is preserved across copies.

This is useful because quantization passes would like to start
allowing the user to preserve stack traces, and we use the copy
behavior.

Test Plan:
```
python test/test_fx.py TestFX.test_stack_traces
```

Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D32835248

fbshipit-source-id: 91610fd8d05f5683cfa5e11fb6f9f3feacb8e241
2021-12-07 06:18:38 -08:00
Onyiee
ae11264583 Fixed type checking errors in node.py (#68124)
Summary:
Fixes [issue #67](https://github.com/MLH-Fellowship/pyre-check/issues/67).
This PR fixes the type-checking errors in PyTorch's torch/fx/node.py.
The variables at 363:20 and 364:20 were declared with type `List[str]` but were assigned a value of `None`, causing an incompatible-variable-type error. I changed the type from `List[str]` to `Optional[List[str]]`, which fixes the error.

Signed-off-by: Onyemowo  Agbo
onionymous
0xedward

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68124

Reviewed By: gmagogsfm

Differential Revision: D32322414

Pulled By: onionymous

fbshipit-source-id: be11bbbd463715ddf28a5ba78fb4adbf62878c80
2021-12-03 12:03:49 -08:00
Shiyan Deng
4b9464f4b9 [fx]Early return if a node tries prepend self (#67068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67068

Prepending a node to itself results in the node being removed from the graph.

Usually people won't deliberately prepend a node to itself. But they might accidentally append a node that is already next to `self`, which amounts to prepending `self` to `self`.

Test Plan: Added a unit test

Reviewed By: jamesr66a

Differential Revision: D31849030

fbshipit-source-id: b0fdfbb893f785f268595acd823b426d57c15e61
2021-10-27 10:49:45 -07:00
Yinghai Lu
6b0aa2958d [FX] Support torch.layout as arg (#66048)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66048

Previously, `create_arg` would fail if it encountered a non-`None` layout argument. Adding it to the `BaseArgumentTypes` list should be enough to fix that.

Test Plan: Added unittest

Reviewed By: jamesr66a

Differential Revision: D31362662

fbshipit-source-id: 20049971e18c17e9c75e50540500c567266daa55
2021-10-04 19:58:08 -07:00
James Reed
9117eed6ed [FX] Add torch.ops.profiler._record_function_{enter,exit} as stateful ops for DCE (#65180)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65180

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31007115

Pulled By: jamesr66a

fbshipit-source-id: 823b15db712a382a4f2a4fd409983d47bc067150
2021-09-16 21:31:54 -07:00
James Reed
538647fe1f [WIP][FX] BC guarantees for 1.10 (#63888)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63888

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D30523133

Pulled By: jamesr66a

fbshipit-source-id: b04cc0d842a74862f42ecba98b757310cd2ec7b0
2021-08-30 19:56:46 -07:00
Patrick Hu
18cb3fc910 [FX] Validate data type of target on Node Construction (#64050)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64050

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D30585535

Pulled By: yqhu

fbshipit-source-id: 96778a87e75f510b4ef42f0e5cf76b35b7b2f331
2021-08-27 13:40:57 -07:00
=
e6e579ce74 [FX] Add torch.memory_format as a BaseArgumentType (#62593)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/62498

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62593

Reviewed By: H-Huang

Differential Revision: D30104091

Pulled By: cpuhrsch

fbshipit-source-id: 25b7a4b308219860c969db54d7b1867b1aa4180a
2021-08-06 14:03:41 -07:00
James Reed
36adc3f04d [FX] Add APIs to mutate specific args/kwargs (#58571)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58571

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D28543359

Pulled By: jamesr66a

fbshipit-source-id: 44812d04886e653b5439c880dd831ecbc893fe23
2021-05-19 14:54:16 -07:00
Horace He
86b061c80e [FX] Changes in order to move python key out of tree (#57427)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57427

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D28215322

Pulled By: Chillee

fbshipit-source-id: 94439376097c74f2004e6eca214d7940df20865d
2021-05-05 20:55:51 -07:00
Jordan Fix
4ef8205104 [fx][normalize] Allow for args to be left as args (#55995)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55995

Normalization is somewhat broken currently, but making default arguments visible still appears to work and is nice functionality to be able to rely on. This adds an option to `NormalizeArgs`'s `__init__` called `normalize_to_only_use_kwargs`, which defaults to true; if set to false, it will keep the signature as provided but additionally set the kwargs in kwargs.
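
A minimal sketch of the new option (import path assumed to be `torch.fx.experimental.normalize`):
```python
import torch
from torch.fx import symbolic_trace
from torch.fx.experimental.normalize import NormalizeArgs  # assumed path

gm = symbolic_trace(lambda x: torch.add(x, 2))
# Default (True): everything becomes kwargs. With False, positional args
# keep their positions and defaults are surfaced as kwargs.
normalized = NormalizeArgs(gm, normalize_to_only_use_kwargs=False).transform()
```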

Test Plan: Added test to `test_fx_experimental`.

Reviewed By: 842974287

Differential Revision: D27759448

fbshipit-source-id: 620061fcf46d8549ac70b62aede8b6740aee3778
2021-04-24 08:15:17 -07:00
Allen (Congcong) Chen
798dd4665d Add a new API replace_input_with to node.py (#55887)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55887

Reviewed By: jfix71

Differential Revision: D27731389

fbshipit-source-id: 754654e64c4f3a584dfea06322d833bc11bcc3cc
2021-04-23 11:37:41 -07:00
Nikita Shulga
47d2edd597 Fix quick-checks for operator-schemas (#56692)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56692

Reviewed By: heitorschueroff

Differential Revision: D27939830

Pulled By: malfet

fbshipit-source-id: 67a054de5c58832fcd7d0df0dd37faf1ea1406fd
2021-04-22 08:11:29 -07:00
Horace He
0df239e550 [FX] Make arg normalization a method on Node and not a pass (also augment tests to be exhaustive) (#55992)
Summary:
Commandeered from https://github.com/pytorch/pytorch/pull/54563

Primary changes from first PR:
1. Refactored primary `normalize_function` logic into `operator_schemas.py` so that non-FX users can use it.
2. Refactored tests a bit, and added a path to call `normalize_function` directly.
3. Moved check for `boolean_dispatch` so that `torch.lu` also gets properly handled.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55992

Reviewed By: mruberry

Differential Revision: D27774396

Pulled By: Chillee

fbshipit-source-id: 7f65632e1d608e4abd55aec5ccbfdc3f67f52b8e
2021-04-22 03:53:41 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
James Reed
bcb4583170 [FX] Add a metadata dict to Node and switch shapeprop to use that (#54926)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54926

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27417801

Pulled By: jamesr66a

fbshipit-source-id: 68a5155120a235065f58aa64ba1a6a97818dd0c1
2021-03-31 14:36:54 -07:00
Jordan Fix
5b52ff6c8e [fx] Add DCE pass (#52658)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52658

DCE iterates over the graph in reverse, looking for nodes without users, and deletes them. It skips unused placeholders (since removing them would affect the signature of the method) and outputs (which never have users, but we want to keep them :) ).
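
A minimal sketch of the pass in action:
```python
import torch
from torch import fx

def f(x):
    unused = x * 2  # dead: no users
    return x + 1

gm = fx.symbolic_trace(f)
gm.graph.eliminate_dead_code()  # removes the mul, keeps placeholder/output
gm.recompile()
print(gm.code)
```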

Test Plan: Added unit tests

Reviewed By: jamesr66a, khabinov, chenccfb

Differential Revision: D26602212

fbshipit-source-id: f4f196973e40546076636090bb0008c24f33795e
2021-03-08 19:54:56 -08:00
James Reed
8b5b7fa83d [WIP][FX] Optionally record stack traces when symtracing (#53081)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53081

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D26742402

Pulled By: jamesr66a

fbshipit-source-id: 7987f9ddf061f6de3b4a638d98e0fae6d68d90c6
2021-03-03 12:30:43 -08:00
Michael Suo
ecf3ca00d8 [fx] Separate globals assignment from code generation (#51974)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51974

Right now, when an FX `Graph` references an external object, we will emit
code like:

    import foo
    def forward(input: foo.bar.baz):
        ...

This is problematic in a world with `torch.package`, since then name
`foo.bar.baz` may reference a name from any number of packages.

This PR lays the groundwork for FX-package integration by separating the resolution of external references from the generation of the function code.

When generating a Graph's Python source, we keep track of all external
references and assign them unique names. At the end, we have a
dictionary mapping names -> actual objects. This becomes the `globals`
namespace we pass to `exec` when installing the forward function in a
`GraphModule`. This is nice because we can always be sure that `exec` is
seeing the same objects that were referenced from the `Graph`, no import
statements needed.

At serialization time, we use a `ModuleEnv` to resolve the globals dict to a set of import statements that can be run to reproduce the `globals` namespace. This is only used on serialization/deserialization, and those functions are expected to check that the import statements produce the correct results.

Concretely, the code above will now look like:

    from foo.bar import baz as foo_bar_baz
    def forward(input: foo_bar_baz):
        ...

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26340593

Pulled By: suo

fbshipit-source-id: fe247f75205d0a03fd067bdd0f95491e8edf1436
2021-02-23 13:48:03 -08:00
Jason Ansel
0410cba23e [FX] make map_arg require a callable (#51907)
Summary:
This makes something like `map_arg(lambda x: x, [Node(), Node()])` throw an error (before, it would silently return `lambda x: x`).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51907

Reviewed By: jamesr66a

Differential Revision: D26323916

Pulled By: jansel

fbshipit-source-id: f56ebcf9a3af47546d75603567025163f1fb8454
2021-02-09 13:36:27 -08:00
Ansley Ussery
215d9daceb Refactor internal methods into debugging utilities (#51737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51737

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26288613

Pulled By: ansley

fbshipit-source-id: 4504b1af5be7a200c1a6a376d432d7224eb8a796
2021-02-05 21:42:18 -08:00
James Reed
a7e92f120c [FX] Implement wrap() by patching module globals during symtrace (#50182)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50182

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25819730

Pulled By: jamesr66a

fbshipit-source-id: 274f4799ad589887ecf3b94f5c24ecbe1bc14b1b
2021-01-11 11:01:15 -08:00
James Reed
67d0c18241 [FX] Try to make it more clear that _update_args_kwargs should not be called (#49745)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49745

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25682177

Pulled By: jamesr66a

fbshipit-source-id: 4910577541c4d41e1be50a7aa061873f061825b6
2020-12-22 15:20:02 -08:00
James Reed
fb755ad33e [FX] Emit named tuple construction node when NamedTuple appears as an arg (#49553)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49553

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25618577

Pulled By: jamesr66a

fbshipit-source-id: 042f742f9ca02e59bbceda97bfcf47f9bac07873
2020-12-18 14:10:17 -08:00
James Reed
e9d7d37ad0 [FX] Rename Node._uses and refactor Node.all_input_nodes (#49415)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49415

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25565341

Pulled By: jamesr66a

fbshipit-source-id: 2290ab62572632788809ba16319578bf0c0260ee
2020-12-15 17:13:57 -08:00
James Reed
80f7510d92 [FX] Fix create_arg for NamedTuple (#48986)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48986

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25387156

Pulled By: jamesr66a

fbshipit-source-id: 0d38c43e02088fb7afb671683c88b6e463fe7c76
2020-12-10 15:32:04 -08:00
James Reed
c92c8598a3 [FX][2/2] Make docstrings pretty when rendered (#48871)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48871

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D25351588

Pulled By: jamesr66a

fbshipit-source-id: 4c6fd341100594c204a35d6a3aab756e3e22297b
2020-12-08 11:14:43 -08:00
James Reed
ae9f39eb58 [FX][1/2] Make docstrings pretty when rendered (#48738)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48738

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25280867

Pulled By: jamesr66a

fbshipit-source-id: d08641c19a6c69b4042389c800a48e699f0be628
2020-12-05 17:23:40 -08:00
James Reed
998c4cac9a [FX] Add Node.all_input_nodes (#48270)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48270

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D25100241

Pulled By: jamesr66a

fbshipit-source-id: f742f5a13debebb5be37f7c0045c121f6eaff1d5
2020-11-19 19:53:28 -08:00
James Reed
d1351c66a8 [FX] Add a bunch of docstrings (#47719)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47719

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24875400

Pulled By: jamesr66a

fbshipit-source-id: a1dd43d2eee914a441eff43c4f2efe61a399e8a5
2020-11-11 10:59:57 -08:00
Natalia Gimelshein
317b78d56e Revert D24665950: Create prototype for AST rewriter
Test Plan: revert-hammer

Differential Revision:
D24665950 (54feb00bbd)

Original commit changeset: b72110436126

fbshipit-source-id: 961412df006acd33c91a745c809832d5c6494c76
2020-10-31 18:07:10 -07:00
Ansley Ussery
54feb00bbd Create prototype for AST rewriter (#46410)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46410

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D24665950

Pulled By: ansley

fbshipit-source-id: b72110436126a24ddc294b8ee7b3f691281c1f1b
2020-10-31 10:51:17 -07:00
Zachary DeVito
fc1d6bf135 [fx] make sure args/kwargs are immutable (#46325)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46325

Otherwise, mutating them would make the uses/users lists inaccurate.
You can still mutate the node by assigning a new value to .args or .kwargs
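
A minimal sketch of the resulting behavior:
```python
import torch
from torch import fx

gm = fx.symbolic_trace(lambda x: torch.cat([x, x]))
cat = next(n for n in gm.graph.nodes if n.op == "call_function")
try:
    cat.args[0][0] = None  # list args are wrapped as immutable_list
except NotImplementedError:
    pass
cat.kwargs = {"dim": 0}  # reassigning .args/.kwargs is the supported path
```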

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D24308672

Pulled By: zdevito

fbshipit-source-id: a5305e1d82668b36e46876c3bc517f6f1d03dd78
2020-10-14 15:51:43 -07:00
Mike Ruberry
38e64cf949 Revert D24232288: [fx] make sure args/kwargs are immutable
Test Plan: revert-hammer

Differential Revision:
D24232288 (61df99b78e)

Original commit changeset: c95b1a73ae55

fbshipit-source-id: b910a6618f76ef64caead20e8207997317bc2f5e
2020-10-14 01:39:33 -07:00
Zachary DeVito
61df99b78e [fx] make sure args/kwargs are immutable (#46121)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46121

Otherwise, mutating them would make the uses/users lists inaccurate.
You can still mutate the node by assigning a new value to .args or .kwargs

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D24232288

Pulled By: zdevito

fbshipit-source-id: c95b1a73ae55ad9bdb922ca960c8f744ff732100
2020-10-13 21:33:19 -07:00
Zachary DeVito
88dcb95e22 [fx] use a linked list for nodes (#45708)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45708

This makes it possible to define reasonable semantics for what happens when a node in the list is deleted. In particular, iteration over the nodes continues at the node that was after the deleted node _when it was deleted_. If that node has also been deleted, we skip it and continue to the node after it. Eventually we either reach a node still in the list or we reach the end of the list.
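
A small sketch of the resulting deletion semantics:
```python
import torch
from torch import fx

def f(x):
    y = x * 2  # dead: no users
    return x + 1

gm = fx.symbolic_trace(f)
# Erasing while iterating is well-defined: iteration resumes at the node
# that followed the erased node at the time of erasure.
for node in gm.graph.nodes:
    if node.op == "call_function" and not node.users:
        gm.graph.erase_node(node)
```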

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D24089516

Pulled By: zdevito

fbshipit-source-id: d01312d11fe381c8d910a83a08582a2219f47dda
2020-10-12 18:20:14 -07:00
James Reed
00b8ebe60c [FX] Preserve type annotations on generated code in Graph (#45880)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45880

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D24127303

Pulled By: jamesr66a

fbshipit-source-id: 3a042bcfb0bf9f58ac318cc814dfc3cca683c7f8
2020-10-07 21:34:47 -07:00
James Reed
8cdb638c62 [FX] Track use nodes in Node (#45775)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45775

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24091082

Pulled By: jamesr66a

fbshipit-source-id: b09bb6ae78436a7722fb135b8ec71464ef9587cd
2020-10-07 00:15:04 -07:00
James Reed
b04ae953b4 [FX][WIP] Mutable Graph APIs (#45227)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45227

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23880730

Pulled By: jamesr66a

fbshipit-source-id: eb4e8c14d7f6b1deb1ddd6cf38a360413a1705ed
2020-10-05 17:07:08 -07:00
James Reed
53aea60bce [FX] Make output a non-special Node (#45599)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45599

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24027586

Pulled By: jamesr66a

fbshipit-source-id: 747c25e3c7668ca45f03bed0be71fd3c9af67286
2020-10-02 17:08:17 -07:00
James Reed
6bdb871d47 [FX] Lint pass for Graphs (#44973)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44973

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23792631

Pulled By: jamesr66a

fbshipit-source-id: d8faef0c311d8bd611ba0a7e1e2f353e3e5a1068
2020-09-28 23:00:32 -07:00
James Reed
e9c6449b46 [FX][EZ] Allow constructing GraphModule with dict for root (#44679)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44679

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23696766

Pulled By: jamesr66a

fbshipit-source-id: fe18b7b579c1728d00589bd5fd5e54c917cc61fe
2020-09-16 12:43:23 -07:00
Zachary DeVito
1f0cfbaaad [fx] add type annotations (#43083)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43083

This adds type annotations to all classes, arguments, and returns
for fx. This should make it easier to understand the code, and
encourage users of the library to also write typed code.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D23145853

Pulled By: zdevito

fbshipit-source-id: 648d91df3f9620578c1c51408003cd5152e34514
2020-08-23 15:38:33 -07:00
Zachary DeVito
b349f58c21 [fx] enabling typechecking of fx files (#43082)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43082

Fixes all present errors in mypy. Does not try to add annotations everywhere.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23145854

Pulled By: zdevito

fbshipit-source-id: 18e483ed605e89ed8125971e84da1a83128765b7
2020-08-23 15:37:29 -07:00
Zachary DeVito
4011685a8b [fx] split Node into Node/Proxy (#42991)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42991

Having Node be both a record of the operator in the graph and the way we _build_ the graph made it difficult to keep the IR data structure separate from the proxying logic in the builder.

Among other issues this means that typos when using nodes would add
things to the graph:
```
    for node in graph.nodes:
        node.grph # does not error, returns a node.Attribute object!
```

This separates the builder into a Proxy object. Graph/Node no longer
need to understand `delegate` objects since they are now just pure IR.
This separates the `symbolic_trace` (proxy.py/symbolic_trace.py) from
the IR (node.py, graph.py).

This also allows us to add `create_arg` to the delegate object,
allowing the customization of how aggregate arguments are handled
when converting to a graph.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23099786

Pulled By: zdevito

fbshipit-source-id: 6f207a8c237e5eb2f326b63b0d702c3ebcb254e4
2020-08-14 16:45:21 -07:00
James Reed
0134deda0f [FX] Add interface to reject nodes (#42865)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42865

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23056584

Pulled By: jamesr66a

fbshipit-source-id: 02db08165ab41be5f3c4b5ff253cbb444eb9a7b8
2020-08-12 14:30:06 -07:00