Commit Graph

382 Commits

Author SHA1 Message Date
Yukio Siraichi
6e3a7473cf Trace calls with Python Enum values. (#109507)
Fix: #82135
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109507
Approved by: https://github.com/ezyang
2023-09-20 22:18:11 +00:00
William Wen
b904432e82 [dynamo] preserve some FX node metadata of GraphModules (#107067)
Requested by @tugsbayasgalan: we want dynamo to preserve some FX node metadata when we trace `GraphModule`s (`nn_module_stack`, `source_fn`, `stack_trace`). This is helpful for the case when we export an aten-level `GraphModule`, add some (possibly non-torch or non-aten) ops, and we want to transform the graph back into an aten-level graph. Without preserving metadata, future passes that look at metadata (e.g. quantization passes) won't work.

This feature also has the additional benefit of preserving the original line of code when `print_readable`'ing a `GraphModule`. This is helpful when debugging graphs that have passed through dynamo several times.

The added unit test demonstrates the added functionality of this PR.

~This PR is currently a proof-of-concept implementation that shows that preserving node metadata across dynamo is possible.~ This PR preserves node metadata across dynamo by doing the following:
- ~inject a counter variable into the `GraphModule` source code, which is incremented every time a node is run~
- Construct a line number -> node index map in `GraphModule` as the source code is being generated.
- pass a list of node metadata and the line number map to dynamo's bytecode analyzer
- ~dynamo traces the counter as a `ConstantVariable`, so when we create a new proxy, we can determine which original node index this proxy corresponds by looking at the value of the traced counter~
- When we create a new proxy, get the current instruction's line number, and get the node index using the line number map
- index into the original node metadata ~using the counter variable's tracked value.~

~Some things that should be addressed off the top of my head:~
- ~Is this feature even desirable? (Do we really want Dynamo to have special behavior for `GraphModules`? Should we expect users to re-export `GraphModules`?)~
- ~Is there a better approach than to use a counter? We considered using node names, line numbers, and assuming that proxies are created in the same order as the nodes, but each of these 3 have shortcomings. For node names, we only have access to new node names, not the old ones. Using line number is fragile. The third is problematic since not all created nodes go through `create_proxy` (e.g. inputs). We currently generate a line number to node index map when the `GraphModule`'s code is generated.~
- ~What's the best way to send data across the "CPython gap"? That is, it is not obvious how to cleanly pass data from dynamo's `eval_frame.py:_TorchDynamoContext.__call__` to `symbolic_convert.py:InstructionTranslatorBase.__init__`. In this PR, we use a global.~

Differential Revision: [D49257108](https://our.internmc.facebook.com/intern/diff/D49257108)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107067
Approved by: https://github.com/jansel
2023-09-15 23:29:14 +00:00
ydwu4
2bf7a283cb Remove expected test failures for cond (#108709)
Remove the expected failure in `def test_control_flow_tracing(self)` by changing the error message to `Expected pred to be bool or tensor, but got Proxy\(eq\)`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108709
Approved by: https://github.com/ezyang, https://github.com/zou3519
ghstack dependencies: #107662, #107850
2023-09-14 21:34:31 +00:00
PyTorch MergeBot
de76c88d90 Revert "Remove expected test failures for cond (#108709)"
This reverts commit a08e1370ef.

Reverted https://github.com/pytorch/pytorch/pull/108709 on behalf of https://github.com/huydhn due to Sorry for reverting this, but test_export_with_symbool_inputs is failing in trunk a08e1370ef ([comment](https://github.com/pytorch/pytorch/pull/108709#issuecomment-1718669964))
2023-09-14 02:47:28 +00:00
ydwu4
a08e1370ef Remove expected test failures for cond (#108709)
Remove the expected failure in `def test_control_flow_tracing(self)` by changing the error message to `Expected pred to be bool or tensor, but got Proxy\(eq\)`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108709
Approved by: https://github.com/ezyang, https://github.com/zou3519
ghstack dependencies: #107662, #107850
2023-09-14 01:16:29 +00:00
PyTorch MergeBot
c5e7588613 Revert "[dynamo] preserve some FX node metadata of GraphModules (#107067)"
This reverts commit 1d42148fee.

Reverted https://github.com/pytorch/pytorch/pull/107067 on behalf of https://github.com/DanilBaibak due to Break internal build ([comment](https://github.com/pytorch/pytorch/pull/107067#issuecomment-1717321061))
2023-09-13 09:59:33 +00:00
William Wen
1d42148fee [dynamo] preserve some FX node metadata of GraphModules (#107067)
Requested by @tugsbayasgalan: we want dynamo to preserve some FX node metadata when we trace `GraphModule`s (`nn_module_stack`, `source_fn`, `stack_trace`). This is helpful for the case when we export an aten-level `GraphModule`, add some (possibly non-torch or non-aten) ops, and we want to transform the graph back into an aten-level graph. Without preserving metadata, future passes that look at metadata (e.g. quantization passes) won't work.

This feature also has the additional benefit of preserving the original line of code when `print_readable`'ing a `GraphModule`. This is helpful when debugging graphs that have passed through dynamo several times.

The added unit test demonstrates the added functionality of this PR.

~This PR is currently a proof-of-concept implementation that shows that preserving node metadata across dynamo is possible.~ This PR preserves node metadata across dynamo by doing the following:
- ~inject a counter variable into the `GraphModule` source code, which is incremented every time a node is run~
- Construct a line number -> node index map in `GraphModule` as the source code is being generated.
- pass a list of node metadata and the line number map to dynamo's bytecode analyzer
- ~dynamo traces the counter as a `ConstantVariable`, so when we create a new proxy, we can determine which original node index this proxy corresponds by looking at the value of the traced counter~
- When we create a new proxy, get the current instruction's line number, and get the node index using the line number map
- index into the original node metadata ~using the counter variable's tracked value.~

~Some things that should be addressed off the top of my head:~
- ~Is this feature even desirable? (Do we really want Dynamo to have special behavior for `GraphModules`? Should we expect users to re-export `GraphModules`?)~
- ~Is there a better approach than to use a counter? We considered using node names, line numbers, and assuming that proxies are created in the same order as the nodes, but each of these 3 have shortcomings. For node names, we only have access to new node names, not the old ones. Using line number is fragile. The third is problematic since not all created nodes go through `create_proxy` (e.g. inputs). We currently generate a line number to node index map when the `GraphModule`'s code is generated.~
- ~What's the best way to send data across the "CPython gap"? That is, it is not obvious how to cleanly pass data from dynamo's `eval_frame.py:_TorchDynamoContext.__call__` to `symbolic_convert.py:InstructionTranslatorBase.__init__`. In this PR, we use a global.~

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107067
Approved by: https://github.com/jansel
2023-09-11 17:11:51 +00:00
ydwu4
49e964cad6 Automatically turn on dynamo in cond (#108028)
A replacement of https://github.com/pytorch/pytorch/pull/107932.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108028
Approved by: https://github.com/zou3519
ghstack dependencies: #108025, #108026, #108027
2023-08-28 10:16:41 +00:00
Jason Lu
bc88028e8e Back out "Reland "Make adding buffers more like adding parameters (#104069)" (#106224)" (#106743)
Summary:
Original commit changeset: 81319beb97f3

Original Phabricator Diff: D47961182

Test Plan: revert to maintain backward compat with legacy ads_dper3 production package. Read details in: S357822

Reviewed By: atuljangra

Differential Revision: D48131623

@diff-train-skip-merge
(D48131623 landed internally)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106743
Approved by: https://github.com/malfet
2023-08-08 15:27:34 +00:00
Mikayla Gawarecki
d8e5f2aa6d Reland "Make adding buffers more like adding parameters (#104069)" (#106224)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106224
Approved by: https://github.com/atalman, https://github.com/albanD
2023-07-31 17:18:56 +00:00
Aaron Gokaslan
6d43c89f37 [BE]: Update Ruff to 0.0.280 (#105724)
Removes unused loop values in Python dictionary iteration. Automated fix from Ruff master.
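An illustrative sketch (not taken from the PR) of the pattern this fix rewrites:

```python
d = {"a": 1, "b": 2}

# Before: the loop value is bound but never used
for k, v in d.items():
    print(k)

# After the automated fix: iterate over the keys only
for k in d.keys():
    print(k)
```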

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105724
Approved by: https://github.com/ezyang, https://github.com/janeyx99
2023-07-22 23:03:34 +00:00
Andrey Talman
c6653b65d8 Back out "Make adding buffers more like adding parameters (#104069)" (#105581)
Summary:
D47537831 is breaking pyper tests: https://fb.workplace.com/groups/802176577445480/posts/1018902842439518/

with `TypeError: register_buffer() takes 3 positional arguments but 4 were given`

Original commit changeset: d4b4069fbd38

Original Phabricator Diff: D47537831

Test Plan:
```
buck2 run //caffe2/torch/fb/training_toolkit/integration_tests/training_lifecycle/cogwheel_tests/pyper_release_v2:cogwheel_smallworld_inline_cvr_infer_pyper_pyper__canary_offline_training-launcher -- --run-harness-in-tupperware --build-fbpkg ads_dper3 --build-fbpkg training_platform
```

Reviewed By: atalman

Differential Revision: D47600140

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105581
Approved by: https://github.com/mikaylagawarecki
2023-07-20 03:39:53 +00:00
ekamiti
32d422f335 Make adding buffers more like adding parameters (#104069)
Add semantics for creating a buffer object similar to creating a parameter. This is done by introducing a new `Buffer` class that can be used for type disambiguation. The underlying functionality of registering a buffer remains the same, as the `register_buffer` method has not been changed. The `persistent` parameter in the `Buffer` type indicates whether a buffer object should be persistent or not. Other non-test changes have to do with getting the new `Buffer` type recognized by inductor and dynamo. The remaining changes are test changes to make sure that the `Buffer` type can be used as a drop-in replacement for `register_buffer`, as it just leads to `register_buffer` being called. The addition of this new functionality still allows normal tensors to be used as buffers, so these changes are intended to be backwards compatible.
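A minimal usage sketch of the semantics described above, assuming the new class is exposed as `torch.nn.Buffer` (illustrative, not taken from the PR):

```python
import torch
import torch.nn as nn

class Norm(nn.Module):
    def __init__(self):
        super().__init__()
        # Assigning a Buffer is analogous to assigning a Parameter:
        # under the hood it still goes through register_buffer.
        self.running_mean = nn.Buffer(torch.zeros(4), persistent=True)
        # Plain tensors can still be registered the classic way.
        self.register_buffer("running_var", torch.ones(4), persistent=False)

    def forward(self, x):
        return (x - self.running_mean) / self.running_var
```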

Fixes #35735

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104069
Approved by: https://github.com/mikaylagawarecki
2023-07-17 17:59:05 +00:00
Edward Z. Yang
666aeaa313 Preserve original co_filename when FX symbolic_trace (#103885)
Previously, you'd get `<eval_with_key>.0`; now you get `<eval_with_key>.0 from /data/users/ezyang/b/pytorch/test/dynamo/test_misc.py:5683 in forward`

I used to do this with globals, but now I do it with a `co_fields` parameter that's plumbed around, because putting things in globals has implications(TM). Happy to bikeshed on the `co_fields` structure.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103885
Approved by: https://github.com/albanD
2023-07-05 22:00:05 +00:00
Shiyan Deng
3c34a00d1b Preserve all submodules/parameters/buffers when unpickle graph module (#104115)
Summary:
When we pickle/unpickle a graph module in multipy, we would lose modules/attributes that are not referred to in the graph. This is because when unpickling an fx graph module, we use the stored `__dict__` and the fx graph to create a new graph module. In GraphModule init, we drop any attribute that is not referred to in the graph.

This behavior is not ideal because we actually expect a graph module that's exactly the same after unpickling.

Test Plan:
```
buck test mode/opt caffe2/test:fx -- test_preserve_unused_attr_after_unpickle

Tests finished: Pass 1. Fail 0. Fatal 0. Skip 0. Build failure 0
```

Differential Revision: D46976230

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104115
Approved by: https://github.com/houseroad
2023-06-26 06:59:48 +00:00
PyTorch MergeBot
29e3fddb08 Revert "Preserve original co_filename when FX symbolic_trace (#103885)"
This reverts commit b9f81a483a.

Reverted https://github.com/pytorch/pytorch/pull/103885 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/103885#issuecomment-1603612781))
2023-06-23 02:49:04 +00:00
Edward Z. Yang
b9f81a483a Preserve original co_filename when FX symbolic_trace (#103885)
Previously, you'd get `<eval_with_key>.0`; now you get `<eval_with_key>.0 from /data/users/ezyang/b/pytorch/test/dynamo/test_misc.py:5683 in forward`

I used to do this with globals, but now I do it with a `co_fields` parameter that's plumbed around, because putting things in globals has implications(TM). Happy to bikeshed on the `co_fields` structure.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103885
Approved by: https://github.com/albanD
2023-06-21 08:28:50 +00:00
Michael Suo
a475ea4542 [fx] change from #users to num_users in graph printout (#101140)
`#users` has special meaning in various chat apps, which makes it annoying to copy-paste graphs into them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101140
Approved by: https://github.com/ezyang
2023-06-20 21:24:32 +00:00
Shabab Ayub
a896962f0a [fx][2/n] Add metadata to placeholders (#102195)
Summary:
# Context
In TorchRec's train pipeline, we need to fx trace a module to analyze the arguments on the forward call. In order to do this, we need to preserve some sort of meaning with each argument (a key or name of sorts that lets us identify the argument).

The issue is, when you use concrete args, internally fx will unflatten the arg into its constituents (to locate PHs).

Given a function that looks like this:
```
def process(batch: Dict[str, torch.Tensor]):
   ....

symbolic_trace(process, concrete_args={"batch": {"f1": PH, "f2": PH}})

# function will be rewritten to look like:
def process(batch_1, batch_2):  # batch_1 -> "f1", batch_2->"f2"
  ...
```

When you traverse through the nodes of the graph, the names of the argument nodes to the function are batch_1 and batch_2. **This doesn't mean anything to the user who is fx tracing.** There isn't anything indicating that batch_1 corresponds to key "f1" in the batch input.

# Solution

When fx sees a "PH", it creates a proxy node.

The user does not have direct access to proxy creation, but only through the PH structure.

Attach a piece of metadata, `ph_key`, to the PH when you set it in the concrete args; it will get passed into proxy and node creation. So when you traverse the graph, this metadata sticks onto the node as an attribute. This way you have a way of tagging "batch_1" as "f1".

Test Plan: added a unit test

Reviewed By: dstaay-fb

Differential Revision: D44947653

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102195
Approved by: https://github.com/PaliC
2023-05-25 07:04:20 +00:00
Shabab Ayub
8243abc84a [1/n] instanceof instead of singleton for ph check (#102008)
Summary: Change placeholder check from singleton to instanceof PHBase so you can create your own PH class with metadata

Test Plan: added unit test

Reviewed By: joshuadeng

Differential Revision: D46085128

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102008
Approved by: https://github.com/PaliC
2023-05-23 00:07:45 +00:00
PyTorch MergeBot
66eef31444 Revert "[fx] change from #users to num_users in graph printout (#101140)"
This reverts commit e568c5a18d.

Reverted https://github.com/pytorch/pytorch/pull/101140 on behalf of https://github.com/jeanschmidt due to There are internal changes to this commit that are preventing landing, so I am reverting to unblock the diff train ([comment](https://github.com/pytorch/pytorch/pull/101140#issuecomment-1547989487))
2023-05-15 14:35:22 +00:00
Michael Suo
e568c5a18d [fx] change from #users to num_users in graph printout (#101140)
`#users` has special meaning in various chat apps, which makes it annoying to copy-paste graphs into them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101140
Approved by: https://github.com/ezyang
2023-05-12 04:34:01 +00:00
Angela Yi
3c5ec6af14 Partition modules (#98628)
Added helper functions to match nodes in the graph that are decomposed from their source (leaf modules, or functional ops), as a result of dynamo tracing.

`get_source_partitions(graph: torch.fx.Graph, wanted_sources: List[Any]) -> Dict[Any, SourcePartition]`

Args:
* graph: The graph we want to partition
* wanted_sources: List of sources of nodes that were decomposed from this source. This can be a function (ex. torch.nn.functional.linear) or a leaf module type (ex. torch.nn.Linear)

Returns:
* Dictionary mapping sources (ex. torch.nn.modules.linear.Linear) to a list of SourcePartitions that correspond to the list of nodes that were flattened from a module of that type.

```
@dataclass
class SourcePartition():
    # Nodes in a particular partition
    nodes: List[Node]
    # Module type
    module_type: Type
    # Nodes in the graph that are needed as inputs to the partition
    input_nodes: List[Node] = field(default_factory=list)
    # Nodes in the partition that are being used by nodes outside of the partition
    output_nodes: List[Node] = field(default_factory=list)
    # Parameters that are being used
    params: List[str] = field(default_factory=list)
```

Example:

Original:
```
x -> linear -> linear -> relu -> linear
```
Traced graph:
```
.graph():
    %arg0 : [#users=1] = placeholder[target=arg0]
    %_param_constant0 : [#users=1] = get_attr[target=_param_constant0]
    %t_default : [#users=1] = call_function[target=torch.ops.aten.t.default](args = (%_param_constant0,), kwargs = {})
    %_param_constant1 : [#users=1] = get_attr[target=_param_constant1]
    %addmm_default : [#users=1] = call_function[target=torch.ops.aten.addmm.default](args = (%_param_constant1, %arg0, %t_default), kwargs = {})
    %_param_constant0_1 : [#users=1] = get_attr[target=_param_constant0]
    %t_default_1 : [#users=1] = call_function[target=torch.ops.aten.t.default](args = (%_param_constant0_1,), kwargs = {})
    %_param_constant1_1 : [#users=1] = get_attr[target=_param_constant1]
    %addmm_default_1 : [#users=1] = call_function[target=torch.ops.aten.addmm.default](args = (%_param_constant1_1, %addmm_default, %t_default_1), kwargs = {})
    %relu_default : [#users=1] = call_function[target=torch.ops.aten.relu.default](args = (%addmm_default_1,), kwargs = {})
    %_param_constant2 : [#users=1] = get_attr[target=_param_constant2]
    %t_default_2 : [#users=1] = call_function[target=torch.ops.aten.t.default](args = (%_param_constant2,), kwargs = {})
    %_param_constant3 : [#users=1] = get_attr[target=_param_constant3]
    %addmm_default_2 : [#users=1] = call_function[target=torch.ops.aten.addmm.default](args = (%_param_constant3, %relu_default, %t_default_2), kwargs = {})
    return [addmm_default_2]
```
Result of `get_source_partitions`:
```
{<class 'torch.nn.modules.linear.Linear'>: [
    ModulePartition(nodes=[_param_constant0, t_default, _param_constant1, addmm_default], module_type=<class 'torch.nn.modules.linear.Linear'>, input_nodes=[arg0], output_nodes=[addmm_default], params=["_param_constant0", "_param_constant1"]),
    ModulePartition(nodes=[_param_constant0_1, t_default_1, _param_constant1_1, addmm_default_1], module_type=<class 'torch.nn.modules.linear.Linear'>, input_nodes=[addmm_default], output_nodes=[addmm_default_1], params=["_param_constant0_1", "_param_constant1_1"]),
    ModulePartition(nodes=[_param_constant2, t_default_2, _param_constant3, addmm_default_2], module_type=<class 'torch.nn.modules.linear.Linear'>, input_nodes=[relu_default], output_nodes=[addmm_default_2], params=["_param_constant2", "_param_constant3"])],

 <class 'torch.nn.modules.activation.ReLU'>: [
    ModulePartition(nodes=[relu_default], module_type=<class 'torch.nn.modules.activation.ReLU'>, input_nodes=[addmm_default_1], output_nodes=[relu_default], params=[])]}
```

Also added helper function to check if two module partitions are connected:
`check_subgraphs_connected(subgraph1: SourcePartition, subgraph2: SourcePartition) -> bool`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98628
Approved by: https://github.com/cccclai
2023-05-03 23:31:56 +00:00
Angela Yi
004f3d71aa [export] Move verifier over to export from torch/fx (#100019)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100019
Approved by: https://github.com/tugsbayasgalan
2023-04-26 18:26:46 +00:00
Aaron Gokaslan
e2a3817dfd [BE] Enable C419 rule for any all shortcircuiting (#99890)
Apparently https://github.com/pytorch/pytorch/pull/78142 made torch.jit allow simple generator expressions, which allows us to enable rules that replace unnecessary list comprehensions with generators in any/all. This was originally part of #99280, but I split it off into this PR so that it can be easily reverted should anything break.
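For illustration, the kind of rewrite C419 enables:

```python
values = [-1, 0, 3, 7]

# Flagged: the list comprehension materializes a full list before any() runs
assert any([x > 0 for x in values])

# Preferred: a generator expression lets any() short-circuit on the first match
assert any(x > 0 for x in values)
```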

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99890
Approved by: https://github.com/justinchuby, https://github.com/kit1980, https://github.com/malfet
2023-04-25 15:02:13 +00:00
Shiyan Deng
82a54513ac [fx] Add a function to allow adding more functions to the side effect function set (#97288)
Summary: There are some customized functions that we would also like to keep during the eliminate-dead-code pass. Add a function to help us do that.
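A sketch of the intended usage; the helper name `torch.fx.node.has_side_effect` is an assumption here, not taken from the summary:

```python
import torch
import torch.fx

def log_stats(t):
    # Mutates external state (stdout), so dead-code elimination must keep it.
    print(float(t.sum()))

# Assumed helper: register the callable so Graph.eliminate_dead_code()
# treats calls to it as side-effectful and does not drop them.
torch.fx.node.has_side_effect(log_stats)
```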

Test Plan: Added a unit test

Differential Revision: D44273630

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97288
Approved by: https://github.com/houseroad
2023-04-22 04:42:24 +00:00
Lin Yang
cf357adc7e Allow torch.fx to take Modules that return dataclass (#99576)
Summary:
Currently torch.fx supports Modules with namedtuple/dataclass inputs and namedtuple returns, but does not allow Module.forward to return a dataclass. Running `test_trace_return_dataclass` without this change produces the following error:

  NotImplementedError: argument of type: <class 'test_fx.TestFX.test_trace_return_dataclass.<locals>.MyOutput'>
  File "test_trace_return_dataclass
    traced_graph = symbolic_trace(module).graph
  File "test/__fx__/fx#link-tree/torch/fx/_symbolic_trace.py", line 1114, in symbolic_trace
    graph = tracer.trace(root, concrete_args)
  File "test/__fx__/fx#link-tree/torch/fx/_symbolic_trace.py", line 783, in trace
    (self.create_arg(fn(*args)),),
  File "test/__fx__/fx#link-tree/torch/fx/_symbolic_trace.py", line 378, in create_arg
    return super().create_arg(a)
  File "test/__fx__/fx#link-tree/torch/fx/proxy.py", line 269, in create_arg
    raise NotImplementedError(f"argument of type: {type(a)}")

This diff handles the dataclass type.

Test Plan:
buck test @//mode/opt @//mode/inplace //caffe2/test:fx -- test_trace_

  graph():
    %d : torch.Tensor [#users=1] = placeholder[target=d]
    %my_output : [#users=1] = call_function[target=test_fx.MyOutput](args = (), kwargs = {foo: %d, bar: %d})
    return my_output

Differential Revision: D44916519

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99576
Approved by: https://github.com/suo
2023-04-21 23:46:49 +00:00
Bug Hunter Yan
7257de6eac Fix typos in torch/fx/_compatibility.py (#97618)
Fixes #ISSUE_NUMBER
Modify the global variable name in the _compatibility.py file and modify its test file accordingly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97618
Approved by: https://github.com/ezyang
2023-03-29 21:55:13 +00:00
Edward Z. Yang
fa4c77e39b Rename PyOperator to HigherOrderOperator (#97493)
Twice this week I have had people confuse "operator defined with Python
operator registration aka torch.library" and "PyOperator which is used
to define control flow operators and other operators that cannot be
represented in JIT schema."  Renaming PyOperator for clarity.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97493
Approved by: https://github.com/SherlockNoMad
2023-03-24 05:04:02 +00:00
Han Qi
9e3f173636 [1/n] Add verifier for EXIR Aten dialect (#94783)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94783
Approved by: https://github.com/zhxchen17
2023-03-08 04:55:54 +00:00
ydwu4
674ef1f9be Make fx.Transformer.get_attr call tracer to preserve node.meta (#95245)
Currently, the transformer creates proxy objects directly for the get_attr method, and node.meta is lost in this step. In order to keep it, we invoke tracer.create_proxy. Metadata is copied over in tracer.create_proxy and tracer.create_node.
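A small sketch of the behavior this enables (module and meta key are illustrative):

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.w = torch.nn.Parameter(torch.randn(3))

    def forward(self, x):
        return x + self.w

gm = torch.fx.symbolic_trace(M())
for n in gm.graph.nodes:
    if n.op == "get_attr":
        n.meta["origin"] = "annotated before transform"

# Transformer round-trips the module; get_attr proxies now go through
# the tracer, so node.meta survives into the new graph.
new_gm = torch.fx.Transformer(gm).transform()
for n in new_gm.graph.nodes:
    if n.op == "get_attr":
        print(n.meta.get("origin"))
```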

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95245
Approved by: https://github.com/SherlockNoMad, https://github.com/tugsbayasgalan
2023-02-22 22:33:37 +00:00
Xuehai Pan
046e88a291 [BE] [3/3] Rewrite super() calls in test (#94592)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Some cases that change the semantics should be kept unchanged. E.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94592
Approved by: https://github.com/ezyang, https://github.com/seemethere
2023-02-12 22:20:53 +00:00
Aaron Gokaslan
67d9790985 [BE] Apply almost all remaining flake8-comprehension checks (#94676)
Applies the remaining flake8-comprehension fixes and checks. This change replaces all remaining unnecessary generator expressions with list/dict/set comprehensions, which are more succinct, performant, and better supported by our torch.jit compiler. It also removes useless generators such as `set(a for a in b)`, resolving them into just the set call.
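For illustration, the kind of rewrites these checks perform:

```python
b = [1, 2, 2, 3]

# Flagged: unnecessary generator expression inside the set() call
s = set(a for a in b)

# Rewritten: the generator is redundant, so it collapses to the plain call
s = set(b)

# Flagged: generator passed to dict()
d = dict((k, k * 2) for k in b)

# Rewritten as a dict comprehension
d = {k: k * 2 for k in b}
```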

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94676
Approved by: https://github.com/ezyang
2023-02-12 01:01:25 +00:00
Aaron Gokaslan
9171f7d4cd [BE] Modernize PyTorch even more for 3.8 with pyupgrade (#94520)
Applies some more pyupgrade fixits to PyTorch

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94520
Approved by: https://github.com/ezyang
2023-02-10 18:02:50 +00:00
Angela Yi
d990ddadd5 [fx] Fix matching args (#94375)
To match nodes within the graph, the matcher currently flattens the arguments and compares each argument against each other. However, if it believes that a list input contains all literals, it will not flatten the list and will instead compare the list directly against each other. It determines if a list is a literal by checking if the first element is a node. However this doesn't work in some cases (like the test cases I added).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94375
Approved by: https://github.com/SherlockNoMad
2023-02-10 17:37:57 +00:00
PyTorch MergeBot
fe00722539 Revert "feat(fx): make_fx should be aware of functions wrapped with @fx.wrap (#93273)"
This reverts commit 6a4bf3b71b.

Reverted https://github.com/pytorch/pytorch/pull/93273 on behalf of https://github.com/ezyang due to nervous about this before branch cut. lets take our time post branch cut
2023-02-09 03:33:09 +00:00
Aaron Gokaslan
1e2d82b8e4 [BE] Merge isinstance calls together (#94419)
Simplifies and speeds up isinstance calls by checking for multiple types at the same time.
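For example:

```python
x = 3.5

# Before: two separate isinstance calls
if isinstance(x, int) or isinstance(x, float):
    print("number")

# After: a single call with a tuple of types
if isinstance(x, (int, float)):
    print("number")
```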

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94419
Approved by: https://github.com/ezyang
2023-02-09 00:47:26 +00:00
Aaron Gokaslan
8fce9a09cd [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
Apply parts of pyupgrade to torch (starting with the safest changes).
This PR only does two things: removes the need to inherit from object and removes unused future imports.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94308
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-07 21:10:56 +00:00
jon-chuang
6a4bf3b71b feat(fx): make_fx should be aware of functions wrapped with @fx.wrap (#93273)
Fixes https://github.com/pytorch/pytorch/issues/89421

The strategy is to patch the given function wrapped with `@torch.fx.wrap` so that if a tensor tracer is active, we will `proxy_call` the function.

`proxy_call` will also skip certain checks if the function to proxy call is not a torch op (checked with `isinstance(..., OpOverload)`).

@IvanYashchuk @ezyang @Chillee
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93273
Approved by: https://github.com/ezyang
2023-02-02 01:57:52 +00:00
Han Qi
fc4e9931da [fx.GraphModule] Populate memo in deepcopy BEFORE copying children. (#93295)
Summary:
Apparently if not, then at some point we might lose fields if the submodules have circular references.

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93295
Approved by: https://github.com/jerryzh168
2023-01-31 01:45:35 +00:00
Han Qi
8d7f9e2f79 Make __deepcopy__ of GraphModule able to handle circular reference. (#93038)
Summary:
One such place where a circular reference can occur: _load_state_dict_pre_hooks contains a _WrappedHook, and _WrappedHook has a weakref to the same module.

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93038
Approved by: https://github.com/jerryzh168
2023-01-27 01:19:59 +00:00
Nikita Shulga
6c7e6d9689 Make torch.fx compatible with Python-3.11 (#92895)
In 3.11 the bytecode size is not constant, so in order to get from `f_lasti` to an opcode index, one needs to search for the closest offset in the disassembled instructions.

Update `_patch_function` to construct code with all the properties that exist in the 3.11 runtime.
Update `_torchscript_schema_to_signature` to mark the `from` named arg as positional-only, as this is a reserved keyword in Python and as such is checked by the `inspect` package in 3.11.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92895
Approved by: https://github.com/albanD
2023-01-24 22:11:50 +00:00
Driss Guessous
df14650f0b [SDPA] Update SDPA API and make function Public (#92189)
# Summary
In preparation for the PT 2.0 launch, this PR updates SDPA's API and makes the function a public nn.functional function.

## Changes
### API
Previously, the function signature was:
`scaled_dot_product_attention(query, key, value, attn_mask=None, need_attn_weights=False, dropout_p=0.0, is_causal=False) -> (Tensor, Tensor)`
Updated signature:
`scaled_dot_product_attention(query, key, value, attn_mask=None, dropout_p=0.0, is_causal=False) -> Tensor`

This PR removes the need_attn_weights optional boolean variable and updates the return type to a singular tensor.
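A short sketch of calling the updated signature:

```python
import torch
import torch.nn.functional as F

# (batch, num_heads, seq_len, head_dim)
q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)

# The public function now returns a single tensor; attention weights
# are never materialized.
out = F.scaled_dot_product_attention(q, k, v, attn_mask=None,
                                     dropout_p=0.0, is_causal=False)
print(out.shape)  # torch.Size([2, 8, 128, 64])
```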

#### Reasoning:
The main goal of this function is to provide an easy interface for users to call into fused attention kernels e.g.  (FlashAttention). The fused kernels do not currently support arbitrary attn_mask or dropout but there is a PR to mem-efficient attention to enable these. We want to have the API surface ready for when the backing kernels get updated.

The fused kernels save on memory usage by not materializing the weights, and it is unlikely that a fast fused implementation will enable this feature, so we are removing it.

Discussed with folks at FAIR/Xformers, who +1'd this API change.

#### Make function Public
In preparation for the PT 2.0 launch, we make the function public to start generating user feedback.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92189
Approved by: https://github.com/cpuhrsch
2023-01-23 20:50:46 +00:00
Jerry Zhang
1464db08b4 [quant][pt2e] Support setting qconfig by module_type (#92355)
Summary:
This PR supports the following feature for QConfigMapping:
```
qconfig_mapping = QConfigMapping().set_object_type(torch.nn.Conv2d, qconfig)
backend_config = get_qnnpack_pt2e_backend_config()
m = prepare_pt2e(m, qconfig_mapping, example_inputs, backend_config)
```
which means users want all calls to `torch.nn.Conv2d` to use `qconfig`. Note this is only verified for the case when the module is broken down to a single aten op right now, e.g. torch.nn.Conv2d becomes the torch.ops.aten.convolution op when traced through. We will need to support more complicated modules that are broken down into multiple operators later, e.g. MaxPool.

Test Plan:
python test/test_quantization.py TestQuantizePT2E.test_qconfig_module_type

Reviewers:

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92355
Approved by: https://github.com/jcaip
2023-01-20 03:18:21 +00:00
Angela Yi
493a6ced74 [fx] Throw error when symbolically tracing control flow ops (#92313)
Throws a better error when symbolically tracing control flow ops. Right now it throws an error when creating the function arguments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92313
Approved by: https://github.com/zhxchen17
2023-01-20 00:38:21 +00:00
Alex Settle
f8a07ca422 Reland 2nd attempt "Add heirachical module names to torchFX graph.node" (#91721)
Fixes #87659

Reland of PR #87742 and PR #90205

PR #90205 was reverted due to BC issues

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91721
Approved by: https://github.com/jerryzh168
2023-01-18 23:00:36 +00:00
Richard Zou
5d01277fea Deprecate torch.nn.utils.stateless.functional_call (#92280)
This PR:
- Updates the docs to say it is deprecated
- Raises a UserWarning
- Changes most of the callsites inside PyTorch to use
torch.func.functional_call, minus the test_stateless testing.

The motivation behind this is that we can now align behind a single
functional_call API in PyTorch.

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92280
Approved by: https://github.com/albanD
2023-01-18 14:26:25 +00:00
Han Qi
00fe63d1d8 fx Graph should copy meta on deepcopy (#92062)
Summary:
fx Graph should copy meta on deepcopy

Test Plan:
Unit test

Reviewers:

Subscribers:

Tasks:

Tags:

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92062
Approved by: https://github.com/zhxchen17
2023-01-18 02:49:14 +00:00
PyTorch MergeBot
1119d2fa54 Revert "Reland "Add heirachical module names to torchFX graph.node" (#90205)"
This reverts commit 6b7efac3c9.

Reverted https://github.com/pytorch/pytorch/pull/90205 on behalf of https://github.com/seemethere due to Reverting since this caused failures in internal systems, see https://fb.workplace.com/groups/802176577445480/posts/894284641568006 for discussion
2022-12-13 17:47:07 +00:00
Alex Settle
6b7efac3c9 Reland "Add heirachical module names to torchFX graph.node" (#90205)
Fixes #87659

Reland of PR #87742

Resolves errors that caused the changes to be backed out.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90205
Approved by: https://github.com/jerryzh168
2022-12-09 06:20:31 +00:00
Ram Rachum
351d73b97f Fix exception causes all over the codebase (#90271)
This is the continuation to #90134 and hopefully the final PR in this series.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90271
Approved by: https://github.com/kit1980
2022-12-07 04:29:00 +00:00
Jongsoo Park
2bca280a31 Revert D41683102: Multisect successfully blamed D41683102 for test or build failures (#90117)
Summary:
This diff is reverting D41683102
D41683102 has been identified as causing the following test or build failures:
Tests affected:
- https://www.internalfb.com/intern/test/281475051072735/

Here's the Multisect link:
https://www.internalfb.com/intern/testinfra/multisect/1444960
Here are the tasks that are relevant to this breakage:
T124964606: 41 tests started failing for oncall ads_trainer_release in the last 2 weeks
We're generating a revert to back out the changes in this diff, please note the backout may land if someone accepts it.

Test Plan: NA

Reviewed By: jspark1105

Differential Revision: D41710842

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90117
Approved by: https://github.com/soumith
2022-12-03 19:54:04 +00:00
alexmsettle
b703e4b3c2 Add hierarchical module names to torchFX graph.node #87659 (#87742)
Fixes #87659

Pass down the module hierarchy from module.named_modules() to the name field of graph.node.
This makes it so the name of each node contains descriptive information about the network architecture.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87742
Approved by: https://github.com/jerryzh168
2022-12-02 05:58:06 +00:00
Ryan Spring
534ae6ae47 [primTorch] Implement group norm reference (#87054)
Add group norm reference
Split from #81191
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87054
Approved by: https://github.com/mruberry
2022-11-11 01:08:20 +00:00
soulitzer
4c20c0509d Split out forward AD tests from test_ops_gradients and reenable slow gradcheck CI (#88216)
Fixes: https://github.com/pytorch/pytorch/issues/88010

This PR does a couple things to stop slow gradcheck from timing out:
- Splits out test_ops_fwd_gradients from test_ops_gradients, and factors out TestFwdGradients and TestBwdGradients which both inherit from TestGradients, now situated in common_utils (maybe there is a better place?)
- Skips CompositeCompliance (and several other test files) for slow gradcheck CI since they do not use gradcheck
- because test times for test_ops_fwd_gradients and test_ops_gradients are either unknown or wrong, we hardcode them for now to prevent them from being put together. We can undo the hack after we see actual test times are updated. ("def calculate_shards" randomly divides tests with unknown test times in a round-robin fashion.)
- Updates references to test_ops_gradients and TestGradients
- Test files that are skipped for slow gradcheck CI are now centrally located in in run_tests.py, this reduces how fine-grained we can be with the skips, so for some skips (one so far) we still use the old skipping mechanism, e.g. for test_mps

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88216
Approved by: https://github.com/albanD
2022-11-03 00:20:45 +00:00
Philip Meier
bc73affdad prepare removal of deprecated functionality in torch.testing (#87969)
_Redo of #86586 with all BC breaking changes granularly placed into separate commits._

---

Per title. Deprecation happened on Feb 25, 2022 in c6f1bbc0ac, which made it into the 1.12 release. Since it is now 245 days later and the next release will be 1.14, the removals later in the stack comply with the [BC policy](https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#minimizing-the-disruption-of-bc-breaking-changes).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87969
Approved by: https://github.com/mruberry
2022-11-02 14:04:48 +00:00
Peter Bell
bc9caafc78 record_function: update to use custom_class API (#76420)
Re-submit of gh-72302

This still has a small performance hit, but it is much smaller. On my
machine I see `_record_function_exit._RecordFunction` takes 1.05 us
compared to the `Tensor` overload taking 0.79 us.

In an overall comparison, I see a 0.7 us slowdown from 6.0 us to
6.7 us for this timeit benchmark
```python
import torch

def foo():
  with torch.profiler.record_function("foo"):
    return torch.eye(3)

%timeit foo()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76420
Approved by: https://github.com/robieta
2022-11-02 00:39:28 +00:00
lezcano
787028cadb Implement col2im decomposition and fix im2col and add a few preconditions (#85541)
As per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85541
Approved by: https://github.com/jansel
2022-09-30 09:31:53 +00:00
Renfei Chen
4befe45084 [FX] Add one option to maintain the FX graph execution order after splitting_module (#85188)
Summary:

Given the original execution order and the node dependency relationship (note that the same dependencies could produce multiple valid execution orders, i.e. topological orders), after reunion we could find that the execution order of the new GraphModule is different from the original one, which is not what we want.
For example, let's assume that NewLeaf_1 is EmbeddingLookup (calling EmbeddingLookup is awaitable: we keep executing the following nodes rather than waiting for the result until we actually need it), and NewLeaf_4 is the node where we HAVE to get the lookup result to interact with NewLeaf_3. So NewLeaf_1 will launch a lookup kernel and an all2all communication stream to distribute the result to all ranks. In the meantime, we want to keep executing NewLeaf_2 and NewLeaf_3 to avoid meaningless waiting. However, given the new execution order, we have to wait for the lookup kernel and all2all communication to finish, since the next node NewLeaf_4 needs the result, before we can execute NewLeaf_2, etc. It cannot leverage the parallelism of computation and communication streams and will hurt QPS a lot.
So while constructing the GraphModule, we have to change from the topological order back to the original order.

Test Plan:
Unit test

Not sure how to add tests in FX as there's no TARGETS, so I added them in the TorchRec folder.

Differential Revision: D39567314

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85188
Approved by: https://github.com/SherlockNoMad
2022-09-23 23:21:54 +00:00
Kunal Bhalla
b00a4b7cf1 [torch.fx.wrap] Use callable / function.__name__ instead of function.__code__.co_name (#84373)
Ran across this issue while using torch.fx.wrap on a decorated function: it triggered a KeyError: 'wrapper_inside_decorator'. torch.fx.wrap stores function.__code__.co_name, but that isn't set correctly (and doesn't match its name in the global namespace) for decorators; function.__name__ is set correctly.

Also adjusted to checking for callable instead of checking for the existence of __code__, to allow for a broader variety of functions to be passed in. E.g. using functools.cache returns a callable that won't have a __code__ attribute.

I added a unit test (that incidentally fails every test in the suite before the fix commit -- because it affects the global state), and then a fix that addresses it.

```
In [1]: import functools

In [2]: def decorator(f):
   ...:     @functools.wraps(f)
   ...:     def wrapper(*args, **kwargs):
   ...:         return f(*args, **kwargs)
   ...:     return wrapper
   ...:

In [3]: @decorator
   ...: def some_function(x):
   ...:     return x
   ...:

In [4]: some_function.__name__
Out[4]: 'some_function'

In [5]: some_function.__code__.co_name
Out[5]: 'wrapper'
```
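A sketch of the scenario the fix enables, assuming the decorated function lives at module scope when `torch.fx.wrap` is called:

```python
import functools
import torch
import torch.fx

def decorator(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

@decorator
def leaf_op(x):
    return x + 1

# With the fix, wrap() keys off leaf_op.__name__ ("leaf_op") instead of
# leaf_op.__code__.co_name ("wrapper"), so passing the callable works and
# the call is kept as a call_function node rather than traced through.
torch.fx.wrap(leaf_op)

class M(torch.nn.Module):
    def forward(self, x):
        return leaf_op(x)

print(torch.fx.symbolic_trace(M()).graph)
```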
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84373
Approved by: https://github.com/jamesr66a, https://github.com/SherlockNoMad
2022-09-09 05:44:29 +00:00
Vasilis Vryniotis
7e05879b46 Fix fx test for S3D (#84526)
Fixing [failing](https://github.com/pytorch/pytorch/runs/8083404365?check_suite_focus=true) tests by adjusting the input size for S3D. The reason the test is failing is because S3D requires a bigger input size than previously passed.

As noted before, TorchVision already checks that its models are FX traceable and ensures all the tests are updated and work properly prior adding new architectures. The tests here seem to duplicate our efforts and often break because they don't factor in details about each model. It might be worth considering running TorchVision's tests instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84526
Approved by: https://github.com/pbelevich
2022-09-05 13:15:55 +00:00
Vasilis Vryniotis
cff55682d8 Change the input of mvit_v2_s on the FX test (#83242)
Addresses some [breakages](https://github.com/pytorch/pytorch/runs/7782559841?check_suite_focus=true) from #82560

Context: The tests are breaking because a new architecture was added in TorchVision (see https://github.com/pytorch/vision/pull/6373) that requires a different input size. This PR addresses it by using the right size for the `mvit_v2_s` architecture.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83242
Approved by: https://github.com/ezyang
2022-08-11 15:27:38 +00:00
Vasilis Vryniotis
6a09847c42 Fix broken FX tests (#83187)
Resolves [breakages](https://github.com/pytorch/pytorch/runs/7762125339?check_suite_focus=true) observed at #82560

Context:
The current FX tests assume that every public method under `torchvision.models` is a model builder method. To get a list of those methods, they query the `__dict__` attribute of the module. Unfortunately this assumption is not true and the tests already contain some workarounds to filter some methods. A better approach would be to query TorchVision for all of its available models under a specific module. This is exactly what the new Registration API can help us do and that's what we use in this PR.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83187
Approved by: https://github.com/ezyang
2022-08-11 07:38:35 +00:00
Sherlock Huang
6915676448 Preserve node's stack trace during retrace (#83050)
AOTAutograd retraces the graph module produced by torch dynamo; this PR preserves the stack trace of the original fx.Node.

Differential Revision: [D38595638](https://our.internmc.facebook.com/intern/diff/D38595638)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83050
Approved by: https://github.com/ezyang, https://github.com/voznesenskym
2022-08-11 04:18:14 +00:00
David Berard
45e7d0268a [fx] Implement __deepcopy__ for fx.Tracer (#83130)
Copied from @jamesr66a 's example in #83116.

Implements `__deepcopy__` to skip deepcopying the elements of `_autowrap_search`, because it contains modules, which cannot/should not be deepcopied

Differential Revision: [D38560212](https://our.internmc.facebook.com/intern/diff/D38560212)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83130
Approved by: https://github.com/SherlockNoMad
2022-08-11 00:13:21 +00:00
Sherlock Huang
752579a373 Preseve stack trace in nodes during fx.Transform (#82670)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82670
Approved by: https://github.com/ezyang
2022-08-03 20:24:19 +00:00
Nikita Shulga
d80fe49de0 [Reland] Add py-3.10 config (#82329)
This is a re-land of #81372 and #81233 with the exception that it does not force the range-checks on older Python runtime versions and as such should not affect the internal workloads, which were the reason for revert, see https://github.com/pytorch/pytorch/pull/81372#issuecomment-1187516464

- [Py3.10] Allow floats to be imported as Long (#81372)
- [CI] Move CUDA-11.6 to Python-3.10 configuration (#81233)
- Don't do anything about range checks for pre-py3.10
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82329
Approved by: https://github.com/kit1980
2022-07-27 20:22:47 +00:00
PyTorch MergeBot
5df1ce46f0 Revert "[resubmit][FX] Fix PyTree unpacking carrying forward type annotations (#81999)"
This reverts commit ce92c1cfe9.

Reverted https://github.com/pytorch/pytorch/pull/81999 on behalf of https://github.com/ZainRizvi due to test_bce_with_logits_has_correct_forward_grad consistently fails with an error that it takes 2 positional arguments but 3 were given
2022-07-26 03:29:50 +00:00
soulitzer
0fcdf936e7 Skip tests that don't call gradcheck in slow gradcheck CI (#82117)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82117
Approved by: https://github.com/kit1980, https://github.com/albanD
2022-07-25 21:33:52 +00:00
James Reed
ce92c1cfe9 [resubmit][FX] Fix PyTree unpacking carrying forward type annotations (#81999)
Differential Revision: D38077793

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81999
Approved by: https://github.com/pbelevich, https://github.com/osalpekar
2022-07-25 21:00:42 +00:00
PyTorch MergeBot
0d1710ade5 Revert "[FX] Fix PyTree unpacking carrying forward type annotations (#81906)"
This reverts commit e0d83a0bdc.

Reverted https://github.com/pytorch/pytorch/pull/81906 on behalf of https://github.com/jeanschmidt due to breaking internal builds
2022-07-22 11:11:10 +00:00
James Reed
e0d83a0bdc [FX] Fix PyTree unpacking carrying forward type annotations (#81906)
Resolves https://github.com/pytorch/pytorch/issues/81902

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81906
Approved by: https://github.com/Chillee, https://github.com/voznesenskym
2022-07-22 04:25:23 +00:00
Shangdi Yu
c52ee6dc0a CSE Pass and common pass Tests (#81742)
Test cases for CSE Pass and common passes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81742
Approved by: https://github.com/SherlockNoMad
2022-07-22 03:45:09 +00:00
Edward Z. Yang
5b88a2078b Follow GitHub relabeling of oncall: fx for test owners (#81821)
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81821
Approved by: https://github.com/janeyx99
2022-07-21 01:50:06 +00:00
PyTorch MergeBot
75aa049a81 Revert "[Reland] Add should_traverse_fn to torch.fx.node.map_aggregate (#81695)"
This reverts commit c09d84d325.

Reverted https://github.com/pytorch/pytorch/pull/81695 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-07-20 23:31:05 +00:00
Pavel Belevich
c09d84d325 [Reland] Add should_traverse_fn to torch.fx.node.map_aggregate (#81695)
Test Plan: CI

Differential Revision: D37956824

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81695
Approved by: https://github.com/jamesr66a
2022-07-20 03:50:09 +00:00
PyTorch MergeBot
fde1107fe8 Revert "Add should_traverse_fn to torch.fx.node.map_aggregate (#81510)"
This reverts commit d52f8c2533.

Reverted https://github.com/pytorch/pytorch/pull/81510 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-07-19 09:51:54 +00:00
PyTorch MergeBot
c96485804f Revert "[CI] Move CUDA-11.6 to Python-3.10 configuration (#81233)"
This reverts commit 7ccf693cf6.

Reverted https://github.com/pytorch/pytorch/pull/81233 on behalf of https://github.com/janeyx99 due to this should have been reverted along with 81372 for breaking internal builds
2022-07-18 17:15:50 +00:00
Nikita Shulga
7ccf693cf6 [CI] Move CUDA-11.6 to Python-3.10 configuration (#81233)
Second attempt of landing the change after https://github.com/pytorch/pytorch/pull/66530

Skip nan hashes comparison validation in `jit/test_hash.py`, as it behaves differently in 3.10 vs other pythons
Skip tensor_fx assert tests
Skip initializing uint8 tensors from negative values in `TestScript.test_torch_tensor_as_tensor`

Final step in closing https://github.com/pytorch/pytorch/issues/66424

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81233
Approved by: https://github.com/seemethere
2022-07-16 20:41:04 +00:00
Pavel Belevich
d52f8c2533 Add should_traverse_fn to torch.fx.node.map_aggregate (#81510)
Adds an optional callback that checks whether map_aggregate should continue recursive traversal. The main motivation is to not traverse torch.Size, which is a tuple.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81510
Approved by: https://github.com/SherlockNoMad, https://github.com/jamesr66a
2022-07-15 11:57:07 +00:00
Angela Yi
3d0b0b2f9b [fx] PassManager changes (#80531)
PassManager is a class used to run multiple passes on a given graph module.

Class Attributes
* `passes: List[Callable]`: A list of callable passes
* `constraints: List[Callable]`: A list of constraints
* `run_checks_after_each_pass`: Flag for running checks after each pass

Class Methods:
* `__call__(graph_module: DispatchGraphModule)`:
    * Runs the passes based on the list of passes until the graph stops changing, or for at most `steps` iterations.
    * Each time a pass is run, it will check that the graph module still maintains the required invariants by calling `check()` and will lint the graph to check that it’s well formed if the flag `run_checks_after_each_pass` is set.
* `check(graph_module: DispatchGraphModule)`: Runs various checks on the given graph module to make sure that it contains the needed data for passes
* `add_check(check: Callable)`: Adds the `check` function to the given pass manager instance
* `add_constraint(constraint: Callable)`: Adds a constraint to the current list of constraints

We can create a PassManager and run it by doing:
```
PassManager(passes=[pass1, pass2])(graph_module)
```

Differential Revision: [D37523159](https://our.internmc.facebook.com/intern/diff/D37523159)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80531
Approved by: https://github.com/SherlockNoMad
2022-07-15 00:58:43 +00:00
Tim Gates
3a87b47de9 docs: Fix a few typos (#81435)
There are small typos in:
- caffe2/python/recurrent.py
- test/distributed/test_c10d_nccl.py
- test/test_fx.py
- torch/csrc/jit/runtime/autodiff.cpp
- torchgen/gen.py

Fixes:
- Should read `propagation` rather than `propogation`.
- Should read `multiplied` rather than `multuplied`.
- Should read `eliminate` rather than `elminate`.
- Should read `dispatcher` rather than `disaptcher`.

Semi-automated pull request generated by
https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81435
Approved by: https://github.com/ngimel
2022-07-14 04:20:26 +00:00
Jeff Daily
340ae3ca43 [ROCm] unskip test_fx tests (#81125)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81125
Approved by: https://github.com/ngimel
2022-07-14 00:42:16 +00:00
Nikita Shulga
80bf2ea3d9 [CI] Install vision without pep517 (#81074)
If installed with pep517 support, `torchvision` will be built against the released version of PyTorch rather than against the one currently installed on the system.

Also update `torchvision` hash to 8a45147f9d and:
 - Added `maskrcnn_resnet50_fpn_v2`, `maskrcnn_resnet50_fpn_v2`, `retinanet_resnet50_fpn_v2`, `ssd300_vgg16`, `fcos_resnet50_fpn` and `ssdlite320_mobilenet_v3_large` to the list of untraceable models
 - Set default input size to (1, 3, 16, 224, 224) for `mvit_v1_b` model
 - Skipped `test_roi_aligned`,`test_batched_nms`, `test_roi_pooled` and `test_roi_align_aligned`  ONNX test (tracked in https://github.com/pytorch/pytorch/issues/81121 )
 - Skipped TorchVision integration tests in `test_package` (tracked in https://github.com/pytorch/pytorch/issues/81115 )

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81074
Approved by: https://github.com/kit1980
2022-07-08 22:53:44 +00:00
Joel Benjamin Schlosser
2d73c8e6e0 Add Dropout1d module
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79545

Approved by: https://github.com/ngimel, https://github.com/albanD
2022-06-15 14:39:07 +00:00
Brian Hirsh
0161e9eb00 [test] attempt to functionalize ops with mutable positional-only args
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76320

Approved by: https://github.com/ezyang
2022-05-19 18:50:34 +00:00
sijiac
efcbbb177e [Re-submit] Make tracer be able to trace different forward functions
Summary: The root module may have different forward functions. The current implementation assumes only the func `forward` can be traced. In this PR, we add a function-name attribute to the Tracer class to enable users to trace different functions.
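A sketch of the intended usage; the attribute name `traced_func_name` is an assumption here, not stated in the summary:

```python
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

    def other_forward(self, x):
        return x * 2

tracer = torch.fx.Tracer()
# Assumed attribute added by this change: point the tracer at an
# entry point other than the default "forward".
tracer.traced_func_name = "other_forward"
graph = tracer.trace(M())
print(graph)
```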

Test Plan:
python3 test/test_fx.py TestFX.test_trace_multiple_funcs

Reviewers:

Subscribers:

Tasks:

Tags:

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77502

Approved by: https://github.com/jamesr66a
2022-05-17 01:05:33 +00:00
Jon Janzen
fa018ef989 Revert "Make tracer be able to trace different forward functions (#77109)"
This reverts commit bf4b6d0dce.
2022-05-13 13:06:47 -07:00
Sijia Chen
bf4b6d0dce Make tracer be able to trace different forward functions (#77109)
Summary: The root module may have different forward functions. The current implementation assumes only the func `forward` can be traced. In this diff, we add an argument for the forward func name to enable users to trace different forward functions.

Test Plan: N1903198

Differential Revision: D36157032

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77109
Approved by: https://github.com/jamesr66a
2022-05-13 16:11:23 +00:00
Xiaodong Wang
2291960d3f Back out "record_function: update to use custom_class API" (#76253)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76253

We're observing a large QPS regression on the original PR https://github.com/pytorch/pytorch/pull/72302. For the training job we had, it regressed from 720k QPS to 450k QPS (see the test plan in FB internal). We suspect this is because the API was changed from `_record_function_enter` to `_record_function_enter_new`, and we're running experiments to confirm that. Will add more details when the runs in the test plan have finished. For now, it's better to revert the diff to unblock internal use cases, and we can think about how to reland this diff later.

Original commit changeset: dc9939f1fa6d

Original Phabricator Diff: D35257354

Test Plan:
on trunk: f338665947

with this diff: f338502850

Reviewed By: malfet, robieta

Differential Revision: D35853300

fbshipit-source-id: dd38042aeacb848f66756491a4c849c7c652a0e1
2022-04-26 17:49:57 -04:00
Peter Bell
cb37e7a080 Remove F.pad python implementation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73433

Approved by: https://github.com/albanD, https://github.com/jbschlosser
2022-04-23 00:13:20 +00:00
Alban Desmaison
eb69e8a3ed Revert "Revert "record_function: update to use custom_class API""
This reverts commit 3f9f35b9f8.

This should be done via a clean revert as this has been in master for a long time.
Doing a quick fix here to make sure we don't break master.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76172
Approved by: https://github.com/atalman
2022-04-21 14:18:28 +00:00
PyTorch MergeBot
3f9f35b9f8 Revert "record_function: update to use custom_class API"
This reverts commit 5630c5ac75.

Reverted https://github.com/pytorch/pytorch/pull/72302 on behalf of https://github.com/atalman
2022-04-21 13:59:48 +00:00
Peter Bell
5630c5ac75 record_function: update to use custom_class API
Merge after forward-compatibility period is over.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72302

Approved by: https://github.com/albanD
2022-03-30 15:57:28 +00:00
James Reed
a2d2610ec9 [FX] Assert None concrete_args and improve error messages (#74662)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74662

Previously, we would not emit a check that `concrete_args` with value `None` matched that value during runtime. This fixes that and improves some of the warning messages

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D35137362

Pulled By: jamesr66a

fbshipit-source-id: 222a2c8a907748f90290f1c1b4ab8012b46099a0
(cherry picked from commit b960405ad87e57dcf62ca25dd4d4bdfc34c8744c)
2022-03-25 23:36:27 +00:00
Shiyan Deng
3f164e0395 [reland] Process inputs and outputs in fx interpreter (#74637)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74637

Forgot to update the expect file in https://github.com/pytorch/pytorch/pull/74242. Reland to include changes in expect file.

Test Plan: unit test

Reviewed By: yinghai

Differential Revision: D35089989

fbshipit-source-id: 5e3ad9c696cf31cbc691d34fdb77eff26f92e38d
(cherry picked from commit 110ac12f5e2bcca7552d4b4691c7d98fafb21a57)
2022-03-24 18:32:57 +00:00
Michael Suo
bf5e25f3a9 Revert D34898108: Process inputs and outputs in fx interpreter
Test Plan: revert-hammer

Differential Revision:
D34898108 (f65594fc9f)

Original commit changeset: 250bd236f6c8

Original Phabricator Diff: D34898108 (f65594fc9f)

fbshipit-source-id: 5f634bbc0b393ebcacc0298fd86505a26637ea84
(cherry picked from commit 5804247425afd758d6df6e935374f6965a1c0f54)
2022-03-22 19:14:24 +00:00
Shiyan Deng
f65594fc9f Process inputs and outputs in fx interpreter (#74242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74242

The inputs and outputs of the graph module might be different from the graph inputs and outputs if users are using custom codegen. The interpreter runs the graph instead of the generated forward function, so it might not work if the user provides inputs meant for the graph module. To fill the gap, we call `process_inputs` and `process_outputs` inside the interpreter.

Test Plan: unit test: test_interpreter_with_codegen

Reviewed By: jamesr66a, Chillee

Differential Revision: D34898108

fbshipit-source-id: 250bd236f6c8c1268a363cf19a09521a4f64b3a9
(cherry picked from commit b33076fa3b10788d455cecc590bc01c4ad8ef94c)
2022-03-22 17:26:01 +00:00
Jane Xu
6ecd13dfef Add super() calls for Fx TestCases (#74216)
Summary:
The fx test case wasn't disabled properly because it didn't call the parent class' setUp().

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74216

Reviewed By: zou3519

Differential Revision: D34898707

Pulled By: janeyx99

fbshipit-source-id: 83e56f5a1efc50d24646c182160f7cfcb5bc9935
(cherry picked from commit bb8dd72d1640c1ef0201d615c5d405479afdf078)
2022-03-16 22:12:56 +00:00
Shiyan Deng
f98b316f13 Preserve codegen on fx graph in transformer (#74189)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74189

Use the codegen on the original graph module for the new graph module produced by transformer.

Test Plan: Added a unit test: test_custom_codegen_with_transformer

Reviewed By: yinghai

Differential Revision: D34867938

fbshipit-source-id: fcda6600faeccfa7a650ba7226ca125e8440b19c
(cherry picked from commit d098c12081f61ddcf69052db5b8a1f31b0a0b67b)
2022-03-16 16:33:44 +00:00