Commit Graph

22 Commits

Benjamin Ghaemmaghami
424dc238f4 Fix split module interaction with dead code (#104554)
Summary:
This change fixes split_module's interaction with dead code. Previously, if a dead region was split out, split_module would throw an error while attempting to access the outputs of the partition, even though the partition has no outputs.

This change adds a new unit test to cover the dead-code case and changes the output check to allow a partition with no outputs. A split submodule with no outputs now returns None, like a normal Python function.

Unit Test Added:
test_split_module_dead_code

A module with dead code:
```
class ModWithDeadCode(torch.nn.Module):
            def forward(self, x):
                output = x * 2 # we want this
                dead_line = x + 2 # this is dead
                return output
```

Before:
```
torch/fx/passes/split_module.py, line 357, in split_module
base_mod_env[list(partition.outputs)[0]] = output_val
IndexError: list index out of range
```

After:
```
class GraphModule(torch.nn.Module):
    def forward(self, x):
        # No stacktrace found for following nodes
        submod_2 = self.submod_2(x)
        submod_1 = self.submod_1(x);  x = None
        return submod_1

    class GraphModule(torch.nn.Module):
        def forward(self, x):
            # No stacktrace found for following nodes
            add = x + 2;  x = None
            return None

    class GraphModule(torch.nn.Module):
        def forward(self, x):
            # No stacktrace found for following nodes
            mul = x * 2;  x = None
            return mul
```
Submod 2 is correctly extracted.
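For reference, a minimal sketch (not the new unit test itself) of how the dead-code case can be exercised with `split_module`; the partitioning callback below is illustrative:
```
import operator

import torch
from torch.fx import symbolic_trace
from torch.fx.passes.split_module import split_module


class ModWithDeadCode(torch.nn.Module):
    def forward(self, x):
        output = x * 2     # we want this
        dead_line = x + 2  # this is dead
        return output


mod = ModWithDeadCode()
traced = symbolic_trace(mod)

# Illustrative policy: isolate the dead `add` in its own partition, which
# then has no outputs and previously triggered the IndexError above.
def split_callback(node):
    return 1 if node.target is operator.add else 0

split = split_module(traced, mod, split_callback)
print(split.code)  # the dead submodule now simply returns None
```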

Test Plan: Tested with new unit test

Differential Revision: D47196732

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104554
Approved by: https://github.com/yf225
2023-08-03 21:36:35 +00:00
Kazuaki Ishizaki
105ef68f72 Fix typos under torch/fx directory (#97596)
This PR fixes typos in comments and messages of `.py` files under the `torch/fx` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97596
Approved by: https://github.com/dagitses, https://github.com/kit1980
2023-04-10 21:57:36 +00:00
Renfei Chen
c44a733018 Fix split_module bug (#95493)
Summary: As the title says, the mapping currently accumulates many unused keys because the `or` condition always returns True; this does not affect correctness.

Test Plan: N/A

Differential Revision: D43579510

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95493
Approved by: https://github.com/Skylion007
2023-02-27 19:11:49 +00:00
Renfei Chen
0d2e91573e Reorder the Fx execution order to in-time get_attr rather than putting all get_attr ahead (#95014)
Summary:
Basically, today the emitted order is:
[getattr, ..., getattr, call partition1, call partition2]
This change makes each getattr just-in-time:
[getattr, call partition1, getattr, call partition2, ...]
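A hedged illustration (names invented, not from this diff) of what the change means for the emitted top-level `forward`:
```
# Before: all get_attr nodes are hoisted to the front of the forward.
def forward_before(self, x):
    attr_0 = self.attr_0        # get_attr
    attr_1 = self.attr_1        # get_attr
    out_0 = self.submod_0(x, attr_0)
    out_1 = self.submod_1(out_0, attr_1)
    return out_1


# After: each get_attr is emitted just before the partition call that uses it.
def forward_after(self, x):
    attr_0 = self.attr_0        # get_attr
    out_0 = self.submod_0(x, attr_0)
    attr_1 = self.attr_1        # get_attr
    out_1 = self.submod_1(out_0, attr_1)
    return out_1
```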

Test Plan:
CMF and MAI test result:
https://fb.quip.com/K5J9A7G246Ox

Differential Revision: D43376080

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95014
Approved by: https://github.com/angelayi
2023-02-21 20:05:30 +00:00
Tran Le
b769005924 [fx][passes] Implement annotate getitem node FX passes (#90237)
Summary: One common cause of jit unscriptability issues is the loss of node type annotations on local names after one or more FX transforms. One way to improve type coverage is to eagerly annotate `getitem` nodes with the element type taken from their parent sequence node. This diff introduces an FX pass to do that.
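A minimal sketch of the idea (not the actual pass added by this diff), assuming the parent node carries a `Tuple[...]` annotation and the index is a literal int:
```
import operator
import typing

import torch.fx


def annotate_getitem_nodes(graph: torch.fx.Graph) -> None:
    """Copy element types from an annotated parent onto its getitem nodes."""
    for node in graph.nodes:
        if node.op == "call_function" and node.target is operator.getitem:
            parent, index = node.args
            parent_type = getattr(parent, "type", None)
            if (
                node.type is None
                and parent_type is not None
                and typing.get_origin(parent_type) is tuple
                and isinstance(index, int)
            ):
                element_types = typing.get_args(parent_type)
                if 0 <= index < len(element_types):
                    node.type = element_types[index]
```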

Test Plan:
```
buck2 test //caffe2/test:fx_experimental
```

Reviewed By: xush6528

Differential Revision: D41749744

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90237
Approved by: https://github.com/xush6528
2022-12-06 23:18:55 +00:00
Tran Le
0e1fcc8aa8 [FX] Add type annotation to getitem node before split_module (#88510)
Summary: Some nodes lost their type annotations during `split_module`, causing the submodules to be un-scriptable. This is because the compiler always infers Tensor type, which is wrong for non-Tensor values. We attempt to infer a type annotation for each `getitem` node to improve scriptability.

Test Plan:
```
buck2 test //caffe2/test:fx_experimental
```

Differential Revision: D41037819

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88510
Approved by: https://github.com/xush6528
2022-11-18 23:19:14 +00:00
Kazuaki Ishizaki
1cd6ebe095 Fix typos in messages under torch (#89049)
This PR fixes typos in messages in `.py` files under the `torch` directory.
In `torch/onnx/symbolic_opset16.py` only, it also fixes a typo in a comment so that the operator name is correct.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89049
Approved by: https://github.com/lezcano
2022-11-17 04:18:14 +00:00
Renfei Chen
4befe45084 [FX] Add one option to maintain the FX graph execution order after splitting_module (#85188)
Summary:
{F770932209}

Given the original execution order and the node dependency relationships (note that the same dependencies can admit multiple valid execution orders, i.e. topological orders), after reunion we can find that the new GraphModule's execution order differs from the original one, which is not what we want.
For example, assume NewLeaf_1 is an EmbeddingLookup (calling EmbeddingLookup is awaitable: we keep executing the following nodes rather than waiting for the result until we actually need it), and NewLeaf_4 is the node where we HAVE to have the lookup result to interact with NewLeaf_3. NewLeaf_1 launches a lookup kernel and an all2all communication stream to distribute the result to all ranks; in the meantime, we want to keep executing NewLeaf_2 and NewLeaf_3 to avoid pointless waiting. With the new execution order, however, we have to wait for the lookup kernel and the all2all communication to finish because the next node, NewLeaf_4, needs the result; only then can we execute NewLeaf_2, etc. This forfeits the overlap between computation and the communication stream and hurts QPS significantly.
So while constructing the GraphModule, this change adds an option to keep the original execution order instead of the topological order.
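A hedged usage sketch of the option (exposed as `keep_original_order` on `split_module`, default `False`); the module and partitioning policy here are illustrative:
```
import operator

import torch
from torch.fx import symbolic_trace
from torch.fx.passes.split_module import split_module


class TwoStage(torch.nn.Module):
    def forward(self, x):
        a = x * 2
        b = x + 1
        return a + b


mod = TwoStage()
traced = symbolic_trace(mod)

# Illustrative policy: multiplications go to partition 0, everything else to 1.
def split_callback(node):
    return 0 if node.target is operator.mul else 1

split = split_module(
    traced,
    mod,
    split_callback,
    keep_original_order=True,  # emit submodule calls in the original node order
)
print(split.code)
```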

Test Plan:
Unit test

Not sure how to add tests in FX as there's no TARGETS file, so I added them in the TorchRec folder.

Differential Revision: D39567314

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85188
Approved by: https://github.com/SherlockNoMad
2022-09-23 23:21:54 +00:00
anjali411
4bf076e964 Add __all__ to torch.distributed, futures, fx, nn, package, benchmark submodules (#80520)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80520
Approved by: https://github.com/rohan-varma
2022-07-08 14:31:24 +00:00
PyTorch MergeBot
58532256e9 Revert "Add __all__ for torch.distributed and fx modules (#80460)"
This reverts commit 5d40c3d5c8.

Reverted https://github.com/pytorch/pytorch/pull/80460 on behalf of https://github.com/malfet due to Broke MacOS testing, see https://github.com/pytorch/pytorch/runs/7105579664?check_suite_focus=true
2022-06-29 16:20:55 +00:00
anjali411
5d40c3d5c8 Add __all__ for torch.distributed and fx modules (#80460)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80460
Approved by: https://github.com/albanD, https://github.com/rohan-varma
2022-06-29 02:53:56 +00:00
James Reed
214951bc6b [FX] Make split_module preserve proper placeholder names (#74736)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74736

Previously, `split_module` would incorrectly carry over the `name` of placeholders rather than their `target`:

Original GraphModule

```
def forward(self, x, **kwargs):
    _kwargs = kwargs
    getitem = _kwargs['foo'];  _kwargs = None
    add = x + getitem;  x = getitem = None
    return add
```

After splitting:

```
def forward(self, x, _kwargs):
    submod_0 = self.submod_0(_kwargs);  _kwargs = None
    submod_1 = self.submod_1(x, submod_0);  x = submod_0 = None
    return submod_1
```

Notice that `**kwargs` is turned into `_kwargs`, which is incorrect and loses the kwarg-expansion behavior. This patch switches `split_module` to use the placeholder's `target` instead of its `name`, resulting in the correct split code being emitted:

Original GraphModule

```
def forward(self, x, **kwargs):
    _kwargs = kwargs
    getitem = _kwargs['foo'];  _kwargs = None
    add = x + getitem;  x = getitem = None
    return add
```

After splitting:

```
def forward(self, x, **kwargs):
    _kwargs = kwargs
    submod_0 = self.submod_0(_kwargs);  _kwargs = None
    submod_1 = self.submod_1(x, submod_0);  x = submod_0 = None
    return submod_1
```
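A hedged illustration of the `name` vs `target` distinction the fix relies on (exact generated names may differ):
```
import torch
from torch.fx import symbolic_trace


class M(torch.nn.Module):
    def forward(self, x, **kwargs):
        return x + kwargs["foo"]


traced = symbolic_trace(M())
for node in traced.graph.nodes:
    if node.op == "placeholder":
        print(node.name, node.target)
# Roughly: "x x" and "_kwargs **kwargs" -- the target keeps the `**` prefix
# needed to re-emit kwarg expansion, while the name is a plain identifier.
```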

Test Plan: Imported from OSS

Reviewed By: VitalyFedyunin

Differential Revision: D35137361

Pulled By: jamesr66a

fbshipit-source-id: 46d079cfe16093c293fc268404fb8bc86ffcf583
(cherry picked from commit a020066281856184621561a8672eb57f5de31e92)
2022-03-25 23:36:27 +00:00
Ke Wen
d14de3139a [PyTorch FX] Return mapping of qualified names from split_module() (#73564)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73564

While maintaining API backward compatibility, add an optional output parameter to split_module() that returns a mapping from the new qualified names in the modules after the split to the old qualified names in the original module.
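A hedged usage sketch, assuming the output parameter is the `qualname_map` dict in the current `split_module` signature:
```
from typing import Dict

import torch
from torch.fx import symbolic_trace
from torch.fx.passes.split_module import split_module


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))


mod = M()
traced = symbolic_trace(mod)

qualname_map: Dict[str, str] = {}
split = split_module(
    traced,
    mod,
    lambda node: 0,            # single partition, just to exercise the API
    qualname_map=qualname_map,
)
# Expect something like {"submod_0.linear": "linear"}: new qualified name in
# the split module -> old qualified name in the original module.
print(qualname_map)
```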

Test Plan:
1. Added a test (test_split_qualname_mapping) to test_fx_experimental.py to check the returned qualname mapping
```
$ python test_fx_experimental.py
...
Ran 1084 tests in 73.464s
OK (skipped=531, expected failures=4)
```
2. Ask test_fx.py to accept split_module's new signature
```
$ python test_fx.py --accept
```

Reviewed By: jamesr66a

Differential Revision: D34541792

fbshipit-source-id: e8ec7e77ec884e4db7cad0c0593e31861c76e42d
(cherry picked from commit d2e5a95a353ee5fb52cdba065f127489e9df47ae)
2022-03-02 23:32:54 +00:00
Yinghai Lu
e5794974cb [acc_tracer] Do not rewrite the leaf modules (#71790)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71790

If a leaf module is specified, it means we should treat it as a black box and avoid rewriting it as well.
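This is acc_tracer-specific, but the underlying leaf-module idea mirrors torch.fx's generic `Tracer.is_leaf_module` hook; a hedged sketch of that generic mechanism (not acc_tracer itself):
```
import torch
import torch.fx


class MyLeaf(torch.nn.Module):
    def forward(self, x):
        return torch.sigmoid(x) * x


class Outer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.leaf = MyLeaf()

    def forward(self, x):
        return self.leaf(x) + 1


class LeafAwareTracer(torch.fx.Tracer):
    def is_leaf_module(self, m, module_qualified_name):
        # Treat MyLeaf as a black box: keep it as one call_module node
        # instead of tracing (or rewriting) its internals.
        if isinstance(m, MyLeaf):
            return True
        return super().is_leaf_module(m, module_qualified_name)


graph = LeafAwareTracer().trace(Outer())
print(graph)  # contains a single call_module node targeting "leaf"
```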

Test Plan:
```
buck test caffe2/test:test_fx_acc_tracer
```
with a new unit test.

Reviewed By: jfix71, houseroad, wushirong

Differential Revision: D33731903

fbshipit-source-id: 0560d9e8435b40f30d9b99dc3b2f47d1a04eb38b
(cherry picked from commit 747e9e44ee)
2022-01-26 07:32:04 +00:00
James Reed
de902b5d02 [FX] Add a default_value arg to Graph.placeholder and fix split_module (#71016)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71016

I found out that `split_module` doesn't preserve default values for arguments. In trying to fix that, I noticed that `Graph.placeholder` doesn't make it easy to add a default argument when creating a placeholder. This PR addresses both of those issues.
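A hedged sketch of the `Graph.placeholder` side of this change, using the `default_value` argument when building a graph by hand:
```
import torch
from torch.fx import Graph, GraphModule

graph = Graph()
x = graph.placeholder("x")
# Passing a concrete default_value gives the generated forward() a default
# argument for this placeholder.
y = graph.placeholder("y", default_value=3)
out = graph.call_function(torch.add, (x, y))
graph.output(out)

gm = GraphModule(torch.nn.Module(), graph)
print(gm.code)              # roughly: def forward(self, x, y = 3): ...
print(gm(torch.ones(2)))    # uses the default y = 3 -> tensor([4., 4.])
```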

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D33482218

Pulled By: jamesr66a

fbshipit-source-id: 57ebcdab25d267333fb1034994e08fc1bdb128ee
2022-01-12 14:03:17 -08:00
Patrick Spencer
9fb6ba24e7 Update torch.fx.passes.split_module docstring (#65542)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65542

Add docstring for torch.fx.passes.split_module that conforms to Google Python Style conventions.

Changed original example to the example from this diff:
https://www.internalfb.com/diff/D24925283 (9734c042b8)

Test Plan:
Ran buck test //caffe2/test:fx. No errors detected
https://pxl.cl/1QCch

Reviewed By: jamesr66a

Differential Revision: D31145694

fbshipit-source-id: 8e54f3b1be3dca1c4d414fdeeab71b9f2b5d9f3e
2021-10-07 10:37:10 -07:00
Jordan Fix
592481a5cc [fx][const_fold] Refactor to use base split module to simplify, and correctly handle non-single-Tensor outputs (#65933)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65933

We use `split_module` to split the input model that we want to const-fold into const and non-const subgraphs. Previously we were taking the non-const graph and trying to hack it back into the same signature as the input model; however, this was complex and buggy.

Instead, refactor to just keep using the base split module that contains both const and non-const graphs. This means we:
- Inline the non-const graph into the split module
- Remove the const graph from the module and replace it with a getattr that will be run to insert that attr when we `run_folding` (a usage sketch follows below)
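A hedged usage sketch of the resulting flow, assuming the `torch.fx.experimental.const_fold.split_const_subgraphs` entry point and its `run_folding()` method:
```
import torch
from torch.fx.experimental import const_fold


class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.w = torch.nn.Parameter(torch.ones(4))

    def forward(self, x):
        folded = self.w + 1   # depends only on attrs: const-foldable
        return x * folded     # depends on the input: stays in the graph


folded_gm = const_fold.split_const_subgraphs(M())
folded_gm.run_folding()              # materializes the folded attr on the module
print(folded_gm(torch.ones(4)))      # tensor([2., 2., 2., 2.])
```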

Test Plan: Added test coverage to cover newly supported folding, and updated other tests for new strategy.

Reviewed By: yinghai

Differential Revision: D31293307

fbshipit-source-id: 6e283a8c7222cf07b14e30e74dffc8ae5ee8b55f
2021-10-01 10:26:29 -07:00
James Reed
538647fe1f [WIP][FX] BC guarantees for 1.10 (#63888)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63888

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D30523133

Pulled By: jamesr66a

fbshipit-source-id: b04cc0d842a74862f42ecba98b757310cd2ec7b0
2021-08-30 19:56:46 -07:00
Philip Meier
d5988c5eca remove unused type: ignore directives (#60006)
Summary:
During development it is common practice to put `type: ignore` comments on lines that are correct, but that `mypy` doesn't recognize as such. This often stems from the fact that the `mypy` version in use wasn't able to handle the pattern.

With every new release `mypy` gets better at handling complex code. In addition to fixing all the previously accepted but now failing patterns, we should also revisit all `type: ignore` comments to see if they are still needed. Fortunately, we don't need to do this manually: by adding `warn_unused_ignores = True` to the configuration, `mypy` will error out whenever it encounters a `type: ignore` that is no longer needed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60006

Reviewed By: jbschlosser, malfet

Differential Revision: D29133237

Pulled By: albanD

fbshipit-source-id: 41e82edc5cd5affa7ccedad044b59b94dad4425a
2021-06-18 07:23:31 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
Jordan Fix
5eadc243f3 Preserve node meta info in split_module (#56212)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56212

The current design doesn't make it easy to use `node.copy()`. Explicitly copy over the node's meta.
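A hedged check (not the commit's own test, which updates `test_subgraph_creation`) that metadata on the original nodes survives the split:
```
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.split_module import split_module


class M(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x + 1)


mod = M()
traced = symbolic_trace(mod)
for node in traced.graph.nodes:
    node.meta["origin"] = node.name      # stash some metadata on every node

split = split_module(traced, mod, lambda node: 0)
for node in split.submod_0.graph.nodes:
    # After this change the copied computation nodes carry the original meta.
    print(node.name, node.meta.get("origin"))
```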

Test Plan: Updated `test_subgraph_creation` in `test_fx_experimental`

Reviewed By: jamesr66a

Differential Revision: D27808477

fbshipit-source-id: 7fe7b6428c830307dbd1e395f16fa2774936d3b3
2021-04-16 18:02:50 -07:00
James Reed
a1c5eba4bd [FX] Move some heavily used passes out of experimental (#51392)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51392

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D26161172

Pulled By: jamesr66a

fbshipit-source-id: 04bfe606555bdf1988f527231d4de2e0196e6b37
2021-02-01 19:02:26 -08:00