Commit Graph

161 Commits

Author SHA1 Message Date
kshitij12345
ff982ef73d OpInfo: reshape, reshape_as and minor clean-up (#57460)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57460

Reviewed By: nairbv

Differential Revision: D28151675

Pulled By: anjali411

fbshipit-source-id: 2b3bcadab3ff5d1761b2922b63afd70a354e785c
2021-05-12 06:05:21 -07:00
Ilqar Ramazanli
8b816e9010 To implement gradient for Pytorch (#54617)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56129

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54617

Reviewed By: anjali411

Differential Revision: D28057452

Pulled By: iramazanli

fbshipit-source-id: 9bd86679282d34f5e5393e6447121586517eb4f0
2021-05-11 18:52:20 -07:00
kshitij12345
9e6b7e6e6e OpInfo: expand and expand_as (#57606)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57606

Reviewed By: albanD

Differential Revision: D28249191

Pulled By: mruberry

fbshipit-source-id: d985ab4e8a99b116c45953e621092929a9a8028e
2021-05-07 02:50:00 -07:00
kshitij12345
154eca0309 OpInfo: ravel, view, view_as (#56910)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56910

Reviewed By: ngimel

Differential Revision: D28141867

Pulled By: mruberry

fbshipit-source-id: bff49d40d7e3bb36bc83d1405bd77f5529eeffe9
2021-05-02 22:10:36 -07:00
Yukio Siraichi
ce4449918a Port reverse binary ops to OpInfo (#56471)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54296
Tracking Issue https://github.com/pytorch/pytorch/issues/54261

**Summary:**
- `rsub` (aten function) was already ported
- Ported tests for its dunder version: `__rsub__`
- Ported tests for the other dunder functions: `__radd__`, `__rmul__`, `__rdiv__`, `__rpow__`
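
For context, these reflected dunders are what Python invokes when the Tensor is the right-hand operand; a small illustration (not taken from the PR itself):

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0])

# When the left operand is not a Tensor, Python dispatches to the reflected
# ("r") dunder on the right operand, e.g. 10 - t calls t.__rsub__(10).
print(10 - t)   # tensor([9., 8., 7.])    -> __rsub__
print(10 + t)   # tensor([11., 12., 13.]) -> __radd__
print(10 * t)   # tensor([10., 20., 30.]) -> __rmul__
print(2 ** t)   # tensor([2., 4., 8.])    -> __rpow__
```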

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56471

Reviewed By: ngimel

Differential Revision: D28142843

Pulled By: mruberry

fbshipit-source-id: 3d1bd88a4f124774f48d33a7ca7bfc7f796360df
2021-05-02 16:01:12 -07:00
Horace He
786b0a8091 [FX] fix normalization issues with lists of tensors (#57004)
Summary:
Fixes issue with lists of tensors not being normalized correctly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57004

Reviewed By: jamesr66a

Differential Revision: D28034559

Pulled By: Chillee

fbshipit-source-id: f935f0b73a8356acd8a2ae93fcfc0417f0eab224
2021-04-27 20:02:00 -07:00
Heitor Schueroff
57e37080cd Added OpInfo for torch.einsum (#56276)
Summary:
Adds OpInfo testing for torch.einsum.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56276

Reviewed By: mruberry

Differential Revision: D27967095

Pulled By: heitorschueroff

fbshipit-source-id: 60524273d2ca885e7eeb932db3e7fd697ae5ca8e
2021-04-27 07:39:38 -07:00
iramazanli
3e006fc57e Adding hsplit,vsplit and dsplit methods (#53536)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53536

Reviewed By: albanD

Differential Revision: D27938880

Pulled By: iramazanli

fbshipit-source-id: f741119517783ec2bafa296622ee518b587dd127
2021-04-26 09:39:09 -07:00
Jordan Fix
4ef8205104 [fx][normalize] Allow for args to be left as args (#55995)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55995

Normalization is currently somewhat broken, but making default arguments visible still works and remains useful functionality to rely on. This adds an option to `NormalizeArgs`'s `__init__` called `normalize_to_only_use_kwargs`, which defaults to true; when set to false, the pass keeps the signature as provided but still surfaces the defaulted arguments in kwargs.
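
A hedged usage sketch of the option described above (the import path is an assumption and may differ by version):

```python
import torch
import torch.fx as fx
# Import path assumed; in some releases NormalizeArgs lives under
# torch.fx.experimental.normalize.
from torch.fx.experimental.normalize import NormalizeArgs

class M(torch.nn.Module):
    def forward(self, x):
        return torch.add(x, 1)

traced = fx.symbolic_trace(M())

# Default behavior: rewrite calls so arguments are passed as kwargs.
all_kwargs = NormalizeArgs(traced).transform()

# The new option: keep the signature as provided, but still surface
# default arguments as kwargs.
keep_args = NormalizeArgs(traced, normalize_to_only_use_kwargs=False).transform()
```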

Test Plan: Added test to `test_fx_experimental`.

Reviewed By: 842974287

Differential Revision: D27759448

fbshipit-source-id: 620061fcf46d8549ac70b62aede8b6740aee3778
2021-04-24 08:15:17 -07:00
Horace He
0df239e550 [FX] Make arg normalization a method on Node and not a pass (also augment tests to be exhaustive) (#55992)
Summary:
Commandeered from https://github.com/pytorch/pytorch/pull/54563

Primary changes from first PR:
1. Refactored primary `normalize_function` logic into `operator_schemas.py` so that non-FX users can use it.
2. Refactored tests a bit, and added a path to call `normalize_function` directly.
3. Moved check for `boolean_dispatch` so that `torch.lu` also gets properly handled.
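
As a rough illustration of (1), the refactored helper can be called directly, outside of any FX pass (module path assumed from the refactor described above; it may differ by version):

```python
import torch
# Module path is an assumption based on the refactor into operator_schemas.py.
from torch.fx.operator_schemas import normalize_function

# Match the torch.add call against its schemas and get the arguments back
# in normalized (kwarg-only) form.
result = normalize_function(
    torch.add,
    args=(torch.randn(3), torch.randn(3)),
    normalize_to_only_use_kwargs=True,
)
if result is not None:       # None means the call could not be matched unambiguously
    print(result.kwargs)     # e.g. {'input': ..., 'other': ...}
```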

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55992

Reviewed By: mruberry

Differential Revision: D27774396

Pulled By: Chillee

fbshipit-source-id: 7f65632e1d608e4abd55aec5ccbfdc3f67f52b8e
2021-04-22 03:53:41 -07:00
Jordan Fix
5eadc243f3 Preserve node meta info in split_module (#56212)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56212

The current design doesn't make it easy to use `node.copy()`. Explicitly copy over the node's meta.

Test Plan: Updated `test_subgraph_creation` in `test_fx_experimental`

Reviewed By: jamesr66a

Differential Revision: D27808477

fbshipit-source-id: 7fe7b6428c830307dbd1e395f16fa2774936d3b3
2021-04-16 18:02:50 -07:00
James Reed
2236f43da0 [FX] Put tensor metadata into a NamedTuple in ShapeProp (#55930)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55930

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27741730

Pulled By: jamesr66a

fbshipit-source-id: 0a0a1b94beed6c482add9e9551f316f3b4220ab2
2021-04-13 22:21:50 -07:00
Yukio Siraichi
93bf0ae6fc Remove legacy constructor calls from pytorch codebase. (#54142)
Summary:
Follow up from https://github.com/pytorch/pytorch/issues/53889
Related to https://github.com/pytorch/pytorch/issues/47112

Removing every occurrence of the legacy constructor call present in PyTorch at:
- _docs_
- _benchmarks_
- _test_
- _caffe2_
- _CONTRIBUTING.md_
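
For illustration, the kind of change involved (not an actual hunk from this PR):

```python
import torch

# Legacy type-constructor style being removed:
x_old = torch.FloatTensor(3, 2)              # uninitialized 3x2 float tensor
y_old = torch.FloatTensor([1.0, 2.0])        # construct from data

# Preferred factory functions:
x_new = torch.empty(3, 2, dtype=torch.float32)
y_new = torch.tensor([1.0, 2.0], dtype=torch.float32)
```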

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54142

Reviewed By: ngimel

Differential Revision: D27699450

Pulled By: mruberry

fbshipit-source-id: 530aa3f5746cc8bc1407d5d51b2bbd8075e30546
2021-04-11 15:45:17 -07:00
Shiyan Deng
43ede4c2e3 Add Per Tensor Quantization Support to FXIRImporter (#55405)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55405

Pull Request resolved: https://github.com/pytorch/glow/pull/5516

Allows FXIRImport to import quantized model.

This diff doesn't include support for per-channel weights, linear, and conv. They will be addressed in the next diff.

Test Plan: buck test glow/fb/fx/nnpi_importer:test_importer

Reviewed By: jackm321, jfix71

Differential Revision: D27313543

fbshipit-source-id: bf5c96ef5f2ff1835c09db981e0ceefaec56dd5b
2021-04-09 10:49:48 -07:00
James Reed
bcb4583170 [FX] Add a metadata dict to Node and switch shapeprop to use that (#54926)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54926

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27417801

Pulled By: jamesr66a

fbshipit-source-id: 68a5155120a235065f58aa64ba1a6a97818dd0c1
2021-03-31 14:36:54 -07:00
Horace He
24bfcd537e [FX] Added FX prepare_for_inference for Intel CPUs (#53805)
Summary:
Part of https://github.com/pytorch/pytorch/issues/48209

Taken from the docstring:
 Performs a set of optimization passes to optimize a model for the purposes of inference. Specifically, the passes that are run are:
    1. Conv/BN fusion
    2. Dropout removal
    3. MKL layout optimizations

The third optimization takes a function `use_mkl_heuristic` that's used to determine whether a subgraph should be explicitly run in MKL layout.

I implemented 2 heuristics:
1. Uses MKL layout if the subgraph contains more than 2 nodes.
2. Benchmarks each subgraph with and without MKL layout, and keeps the MKL version if it's faster.
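
A hedged sketch of the intended call pattern, with the function name taken from this description (the import path and exact signature are assumptions and may differ from the landed code):

```python
import time
import torch
import torchvision.models as models
# Import path assumed; the pass may live under torch.fx.experimental.optimization
# or have been renamed in later releases.
from torch.fx.experimental.optimization import prepare_for_inference

model = models.resnet18().eval()
inp = torch.randn(10, 3, 224, 224)

with torch.no_grad():
    optimized = prepare_for_inference(model)  # Conv/BN fusion, dropout removal, MKL layout
    # Crude timing comparison, in the spirit of the tables below.
    for m, label in [(model, "eager"), (optimized, "fx")]:
        start = time.time()
        for _ in range(10):
            m(inp)
        print(label, time.time() - start)
```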

### Batch size of 10 and multi-threaded.

Results with the second heuristic are generally as strong as the "jit.freeze" version, except in `densenet` and `vgg`, where it's faster, likely due to the heuristic being better. With the first heuristic, there are some notable gaps, particularly on `inception_v3` and `alexnet`.

```
model         Eager      FX         FX Auto   jit.mkldnn  threads
------------  ---------  ---------  ---------  ---------  -
custom        0.195614   0.14686    0.15929    0.156442   6
resnet18      0.172012   0.114007   0.119678   0.12945    6
resnet50      0.486463   0.294308   0.299518   0.318121   6
densenet161   0.955309   0.893502   0.882798   1.29315    6
inception_v3  0.38454    0.307076   0.239513   0.233083   6
googlenet     0.229388   0.237486   0.170458   0.174106   6
shufflenet    0.0513613  0.0286739  0.0292908  0.0267209  6
alexnet       0.0709602  0.0768137  0.0660831  0.0650399  6
vgg16         1.053993   0.9013264  0.9360212  1.082820   6
mobilenet     0.12264    0.0970935  0.0936568  0.106314   6
mnasnet       0.0989875  0.0412083  0.0424499  0.0472336  6
resnext       0.476811   0.315428   0.314422   0.343156   6
```

For single-threaded (still running...)
```
model             eager         FX    FX auto        mkl    threads
------------  ---------  ---------  ---------  ---------  ---------
custom        0.0401415  0.259863   0.0263152  0.200667           1
resnet18      0.499931   0.382113   0.383711   0.396335           1
resnet50      1.10353    0.911865   0.923645   0.992125           1
densenet161   2.20158    2.39421    2.08204    2.30124            1
inception_v3  0.79161    0.849207   0.703546   0.724492           1
googlenet     0.66896    0.820965   0.515927   0.529414           1
shufflenet    0.0987308  0.0689343  0.0629298  0.0617193          1
alexnet       0.198795   0.19862    0.19325    0.211934           1
vgg16         3.744      3.2499     3.28503    3.31576            1
mobilenet     0.152725   0.14505    0.135555   0.159754           1
mnasnet       0.141983   0.089406   0.089599   0.0956167          1
resnext       1.13778    0.97016    0.955417   0.965376           1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53805

Reviewed By: gmagogsfm

Differential Revision: D27424611

Pulled By: Chillee

fbshipit-source-id: a39137159de962fba7ca15121dfa9e78c1e01223
2021-03-31 10:15:01 -07:00
James Reed
c656a5befa [FX] Normalize Python operators to torch. ops when called with Tensors (#54236)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54236

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D27149411

Pulled By: jamesr66a

fbshipit-source-id: fe9c468f7c84c254dbb1b70163d08b343725861a
2021-03-25 22:27:49 -07:00
James Reed
a27f46bbe3 [FX] Experimental type annotation pass using Python signatures (#53831)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53831

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26982804

Pulled By: jamesr66a

fbshipit-source-id: 17db9f71e729206f29ee231e34723d9616f128b7
2021-03-17 20:43:17 -07:00
Jordan Fix
1053c96693 [GraphModule] Back out changes to module root version of __init__ (#53791)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53791

Reviewed By: houseroad

Differential Revision: D26970869

fbshipit-source-id: 80684516f57fd2d1aca794f17fe488b2fe2b2f64
2021-03-10 23:18:56 -08:00
Jordan Fix
3b0e4a6ed4 [GraphModule] Improve buffer registration during init (#53444)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53444

GraphModule construction has two options when constructing the base nn.Module: a dict of names to attrs to assign to the GraphModule, or another nn.Module to copy attrs from.

- For the dict case, add logic to explicitly register `torch.Tensor`s that are not `nn.Parameter`s as buffers on the GraphModule, else fall back to `__setattr__`.
- For the other `nn.Module` case, update so that it checks whether the attr to copy in is a buffer in the other module and, if so, registers it as such, else falls back to `__setattr__`.
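
A minimal sketch of the dict branch described above (names are illustrative stand-ins, not the actual GraphModule code):

```python
import torch

def assign_attrs(graph_module: torch.nn.Module, attrs: dict) -> None:
    for name, value in attrs.items():
        # Plain tensors (not Parameters) become registered buffers so they show
        # up in state_dict() and named_buffers(); everything else uses __setattr__.
        if isinstance(value, torch.Tensor) and not isinstance(value, torch.nn.Parameter):
            graph_module.register_buffer(name, value)
        else:
            setattr(graph_module, name, value)
```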

Test Plan: Added tests for fetching params and buffers from a GraphModule using both dict and module `__init__`s

Reviewed By: jamesr66a

Differential Revision: D26860055

fbshipit-source-id: 8d9999f91fef20aaa10969558006fc356247591f
2021-03-09 21:05:01 -08:00
Ansley Ussery
85109ce427 Support submodule manipulation in GraphModule (#52358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52358

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26759260

Pulled By: ansley

fbshipit-source-id: 25d2b9124a7d957704f1700a45dca143aaed391d
2021-03-04 14:52:35 -08:00
Michael Suo
ecf3ca00d8 [fx] Separate globals assignment from code generation (#51974)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51974

Right now, when an FX `Graph` references an external object, we will emit
code like:

    import foo
    def forward(input: foo.bar.baz):
        ...

This is problematic in a world with `torch.package`, since then name
`foo.bar.baz` may reference a name from any number of packages.

This PR lays the groundwork for FX-package integration by separating the
resolution of external references from the generation of the function
code.

When generating a Graph's Python source, we keep track of all external
references and assign them unique names. At the end, we have a
dictionary mapping names -> actual objects. This becomes the `globals`
namespace we pass to `exec` when installing the forward function in a
`GraphModule`. This is nice because we can always be sure that `exec` is
seeing the same objects that were referenced from the `Graph`, no import
statements needed.

At serialization time, we use a `ModuleEnv` to resolve the globals dict
to a set of import statements that can be run to reproduce the `globals`
namespace. This is only used on serialization/deserialization, and those
functions are expected to check that the import statements are producing
the correct results.

Concretely, the code above will now look like:

    from foo.bar import baz as foo_bar_baz
    def forward(input: foo_bar_baz):
        ...

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26340593

Pulled By: suo

fbshipit-source-id: fe247f75205d0a03fd067bdd0f95491e8edf1436
2021-02-23 13:48:03 -08:00
James Reed
f7a3634466 [WIP][FX] Normalize torch.nn.functional calls (#51816)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51816

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D26290764

Pulled By: jamesr66a

fbshipit-source-id: 9c05ff1b7c6f0ab8a13516f7cc2fe279980ebe5d
2021-02-17 15:18:03 -08:00
James Reed
a1c5eba4bd [FX] Move some heavily used passes out of experimental (#51392)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51392

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D26161172

Pulled By: jamesr66a

fbshipit-source-id: 04bfe606555bdf1988f527231d4de2e0196e6b37
2021-02-01 19:02:26 -08:00
Garret Catron
0e8e739a9f Move AcceleratedGraphModule out of graph_manipulation. (#51220)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51220

testing with OSS this time...

Reviewed By: jfix71

Differential Revision: D26105140

fbshipit-source-id: b4b7a8f0f4cc8f96f9f8b270277a71061d5e5e84
2021-01-28 02:39:12 -08:00
Nikita Shulga
57484103be Revert D25675618: Move AcceleratedGraphModule out of graph_manipulation.
Test Plan: revert-hammer

Differential Revision:
D25675618 (c8a24ebe54)

Original commit changeset: 55636bb2d3d6

fbshipit-source-id: 7b196f7c32830061eca9c89bbcb346cdd66a211e
2021-01-26 15:31:18 -08:00
Garret Catron
c8a24ebe54 Move AcceleratedGraphModule out of graph_manipulation.
Test Plan:
buck test //caffe2/test:test_fx_experimental
buck test //glow/fb/fx_nnpi_importer:test_importer

Reviewed By: jfix71

Differential Revision: D25675618

fbshipit-source-id: 55636bb2d3d6102b400f2044118a450906954083
2021-01-26 12:39:49 -08:00
Meghan Lele
11cdb910b4 [fx] Add matrix multiplication fusion pass (#50151)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50151

**Summary**
This commit adds a graph transformation pass that merges several matrix
multiplications that use the same RHS operand into one large matrix
multiplication. The LHS operands from all of the smaller matrix multiplications
are concatenated together and used as an input in the large matrix multiply,
and the result is split in order to obtain the same products as the original
set of matrix multiplications.
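
A small numerical illustration of the rewrite (not the pass itself):

```python
import torch

a1, a2, a3 = torch.randn(2, 4), torch.randn(3, 4), torch.randn(5, 4)
rhs = torch.randn(4, 6)

# Original: three matmuls sharing the same RHS.
originals = [a1 @ rhs, a2 @ rhs, a3 @ rhs]

# Fused: one matmul on the concatenated LHS, then split back into the products.
merged = torch.cat([a1, a2, a3], dim=0) @ rhs
recovered = torch.split(merged, [2, 3, 5], dim=0)

for o, r in zip(originals, recovered):
    assert torch.allclose(o, r, atol=1e-6)
```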

**Test Plan**
This commit adds a simple unit test with two matrix multiplications that share
the same RHS operand.

`python test/test_fx_experimental.py -k merge_matmul -v`

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25809409

Pulled By: SplitInfinity

fbshipit-source-id: fb55c044a54dea9f07b71aa60d44b7a8f3966ed0
2021-01-06 21:49:37 -08:00
Natalia Gimelshein
ad7d208ba5 Revert D25239967: [fx] Add matrix multiplication fusion pass
Test Plan: revert-hammer

Differential Revision:
D25239967 (9b7f3fa146)

Original commit changeset: fb99ad25b7d8

fbshipit-source-id: 370167b5ade8bf2b3a6cccdf4290ea07b8347c79
2021-01-05 23:22:26 -08:00
Meghan Lele
9b7f3fa146 [fx] Add matrix multiplication fusion pass (#50120)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50120

This commit adds a graph transformation pass that merges several matrix
multiplications that use the same RHS operand into one large matrix
multiplication. The LHS operands from all of the smaller matrix multiplications
are concatenated together and used as an input in the large matrix multiply,
and the result is split in order to obtain the same products as the original
set of matrix multiplications.

Test Plan:
This commit adds a simple unit test with two matrix multiplications that share
the same RHS operand.

`buck test //caffe2/test:fx_experimental`

Reviewed By: jamesr66a

Differential Revision: D25239967

fbshipit-source-id: fb99ad25b7d83ff876da6d19dc4abd112d13001e
2021-01-05 19:37:08 -08:00
Shiyan Deng
107c31f2f5 Add a pass to fetch attributes of nn.Module to fx.node (#47935)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47935

Fetch the parameters that are needed for lowering from nn.Module to fx.node for leaf_modules.

Test Plan: A test `test_fetch` is added to test_fx_experimental.py.

Reviewed By: jfix71

Differential Revision: D24957142

fbshipit-source-id: a349bb718bbcb7f543a49f235e071a079da638b7
2020-12-08 18:06:37 -08:00
Wang Xu
6000481473 add a unit test for large node error (#48938)
Summary:
Add a unit test for the case where a node is too large to fit on any device.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48938

Reviewed By: zhangguanheng66

Differential Revision: D25402967

Pulled By: scottxu0730

fbshipit-source-id: a2e2a3dc70d139fa678865ef03e67fa57eff4a1d
2020-12-08 14:45:44 -08:00
Wang Xu
799b700ada add a unit test for lack of devices (#48858)
Summary:
Add a unit test for the case where the devices do not have enough memory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48858

Reviewed By: malfet, gcatron

Differential Revision: D25341254

Pulled By: scottxu0730

fbshipit-source-id: c0524c22717b6c8afd67f5b0ad0f1851b973e4b7
2020-12-05 06:09:04 -08:00
Horace He
092e52a4da [fx]added prototype of to_folder (#47544)
Summary:
Given an `FxModule` `foo`, you can call `foo.to_folder('foo_folder', 'Foo')` to dump the current FX module into runnable Python code.

That is
```
foo = <fxModule>
foo.to_folder('bar', 'Foo')
from bar import Foo
foo2 = Foo()

forall x, foo2(x) == foo(x)
```

This has several use cases, largely lifted from jamesr66a's doc here: https://fb.quip.com/U6KHAFaP2cWa (FB-internal).

1. As we apply more heavy-weight function transformations with FX, figuring out what's going on can be quite a difficult experience. In particular, things that can typically be used for debugging (like `print` or `import pdb; pdb.set_trace()`) no longer work. This is particularly necessary if you're using an FX transform like `grad` or `vmap`. With this, you simply open up the dumped file and add `print`/`pdb` statements wherever you'd like.

2. This also provides an immense amount of user control. Some potential use-cases:
- Let's say an existing FX transform has some bug or generates suboptimal code. Instead of modifying that FX transform, writing another FX pass that fixes the suboptimal code, or simply giving up on FX, users can work around it by modifying the resulting code themselves.
- This allows users to check in their FX modules into source control.
- You could even imagine using this as part of some code-gen type workflow, where you write a function, `vmap` it to get the function you actually want, and then simply copy the output of the `vmap` function without needing FX at all in the final code.

An example:
```python
class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        self.W = torch.nn.Parameter(torch.randn(2))
        self.linear = nn.Linear(2, 2)
        self.attr = torch.randn(2)
        self.attr2 = torch.randn(2)

    def forward(self, x):
        return self.linear(self.W + (self.attr + self.attr2) + x)

mod = fx.symbolic_trace(Test())
mod.to_folder('foo', 'Foo')
```
results in
```python
import torch
class Foo(torch.nn.Module):
    def __init__(self):
        super().__init__()
        state_dict = torch.load('foo/state_dict.pt')
        self.linear = torch.load('foo/linear.pt') # Linear(in_features=2, out_features=2, bias=True)
        self.__tensor_constant0 = state_dict['__tensor_constant0']
        self.W = torch.nn.Parameter(state_dict['W'])

    def forward(self, x):
        w = self.W
        tensor_constant0 = self.__tensor_constant0
        add_1 = w + tensor_constant0
        add_2 = add_1 + x
        linear_1 = self.linear(add_2)
        return linear_1
```
Some current issues:
1. How do you actually ... save things like modules or parameters? I don't think FX is in the business of tracking initializations and such. Thus, the only way I see to do it is to dump the parameters/modules as blobs, and then load them in the generated initialization. This is a somewhat subpar user experience, and perhaps prevents it from being used in some cases (i.e., you would need to check the blobs into source control to save the model).

2. Currently, the only "atomic" modules we have are those in `torch.nn`. However, if we want to allow flexibility in this, and for example, allow "atomic" modules that are user-defined, then it's not clear how to allow those to be dumped in a way that we can then load elsewhere.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47544

Reviewed By: jamesr66a

Differential Revision: D25232917

Pulled By: Chillee

fbshipit-source-id: fd2b61a5f40e614fc94256a2957ed1d57fcf5492
2020-12-04 18:33:27 -08:00
Wang Xu
9af627fda1 fix some typos in the fx ir test_fx_experiemntal (#48847)
Summary:
fix some typos in test_fx_experimental.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48847

Reviewed By: malfet, gcatron

Differential Revision: D25339391

Pulled By: scottxu0730

fbshipit-source-id: 388d9da94259d2b306d59f3f4a167e486ac06d60
2020-12-04 12:18:36 -08:00
Wang Xu
7a59a1b574 add aot_based_partition (#48336)
Summary:
This PR adds support for AOT-based partition. Given each node and its corresponding partition ID, it generates the partitions, submodules, and DAG.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48336

Reviewed By: gcatron

Differential Revision: D25226899

Pulled By: scottxu0730

fbshipit-source-id: 8afab234afae67c6fd48e958a42b614f730a61d9
2020-11-30 19:11:02 -08:00
Horace He
0a3db1d460 [FX] Prototype Conv/BN fuser in FX (#47657)
Summary:
Some interesting stuff going on. All benchmarks are run with both my implementation and the current quantized fuser.

For these benchmarks, things like using MKLDNN/FBGEMM make a big difference.

## Manual compilation (everything turned off)
In the small case, things look good
```
non-fused:  1.174886703491211
fused:  0.7494957447052002
```

However, for `torchvision.resnet18`, we see
```
non-fused:  1.2272708415985107
fused:  3.7183213233947754
```

This is because Conv (no bias) -> Batch Norm is actually faster than Conv (bias) if you don't have any libraries...
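
For reference, the arithmetic a Conv/BN fusion pass folds into the conv weights looks roughly like this (a sketch in the spirit of torch.nn.utils.fusion, not the exact code in this PR):

```python
import torch

def fold_conv_bn(conv_w, conv_b, bn_mean, bn_var, bn_eps, bn_gamma, bn_beta):
    # Scale each output channel by gamma / sqrt(var + eps) and fold the
    # batch-norm shift into the conv bias.
    scale = bn_gamma / torch.sqrt(bn_var + bn_eps)
    fused_w = conv_w * scale.reshape(-1, 1, 1, 1)
    if conv_b is None:
        conv_b = torch.zeros_like(bn_mean)
    fused_b = (conv_b - bn_mean) * scale + bn_beta
    return fused_w, fused_b
```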

## Nightly (CPU)
```
Toy
non-fused:  0.45807552337646484
fused:  0.34779977798461914

resnet18
non-fused:  0.14216232299804688
fused:  0.13438796997070312

resnet50
non-fused:  0.2999534606933594
fused:  0.29364800453186035

densenet161
non-fused:  0.6558926105499268
fused:  0.6190280914306641

inception_v3
non-fused:  1.2804391384124756
fused:  1.181272029876709
```
with MKLDNN.

We see a small performance gain across the board, with more significant performance gains for smaller models.

## Nightly (CUDA)

```
M
non-fused:  1.2220964431762695
fused:  1.0833759307861328

resnet18
non-fused:  0.09721899032592773
fused:  0.09089207649230957

resnet50
non-fused:  0.2053072452545166
fused:  0.19138741493225098

densenet161
non-fused:  0.6830024719238281
fused:  0.660109281539917
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47657

Reviewed By: eellison

Differential Revision: D25127546

Pulled By: Chillee

fbshipit-source-id: ecdf682038def046045fcc09faf9aeb6c459b5e3
2020-11-20 18:51:32 -08:00
Wang Xu
4b56aef05d add kl_based_partition (#48197)
Summary:
This is a partition search based on the Kernighan-Lin algorithm. First, the graph is partitioned using size_based_partition; then nodes from different partitions are swapped until the cost reaches a minimum.
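
A simplified, hedged sketch of the swap-based refinement (the partition representation and cost() are hypothetical stand-ins for the real Partitioner internals):

```python
import random

def kl_refine(partitions, cost, iterations=100):
    # partitions: list of sets of nodes; cost: callable returning overall latency
    best = cost(partitions)
    for _ in range(iterations):
        p0, p1 = random.sample(range(len(partitions)), 2)
        if not partitions[p0] or not partitions[p1]:
            continue
        n0 = random.choice(list(partitions[p0]))
        n1 = random.choice(list(partitions[p1]))
        # Tentatively swap a pair of nodes between the two partitions.
        partitions[p0].remove(n0); partitions[p1].add(n0)
        partitions[p1].remove(n1); partitions[p0].add(n1)
        new_cost = cost(partitions)
        if new_cost < best:
            best = new_cost
        else:
            # Revert the swap if it did not reduce the cost.
            partitions[p0].remove(n1); partitions[p1].add(n1)
            partitions[p1].remove(n0); partitions[p0].add(n0)
    return partitions, best
```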

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48197

Reviewed By: gcatron

Differential Revision: D25097065

Pulled By: scottxu0730

fbshipit-source-id: 3a11286bf4e5a712ab2848b92d0b98cd3d6a89be
2020-11-19 17:38:25 -08:00
James Reed
4316bf98f5 [FX] Refactor unique name handling (#48205)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48205

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25068934

Pulled By: jamesr66a

fbshipit-source-id: 04e02bbfd2cc9a8c3b963d9afdf40bac065c319b
2020-11-18 21:56:52 -08:00
Wang Xu
fa0acb73bd fix node manipulation in partition class (#48016)
Summary:
This PR fixes add_node and remove_node in the Partition class and also adds a unit test for node manipulation in a partition.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48016

Reviewed By: gcatron

Differential Revision: D24996368

Pulled By: scottxu0730

fbshipit-source-id: 0ddffd5ed3f95e5285fffcaee8c4b671929b4df3
2020-11-16 15:33:11 -08:00
Vasiliy Kuznetsov
ee995d33bd rename torch.Assert to torch._assert (#47763) (#47972)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47972

Changing the name due to the discussion in
https://github.com/pytorch/pytorch/pull/47399.

Test Plan:
```
python test/test_utils.py TestAssert.test_assert_true
python test/test_fx.py TestFX.test_symbolic_trace_assert
python test/test_fx_experimental.py
```

Reviewed By: supriyar

Differential Revision: D24974298

Pulled By: vkuzo

fbshipit-source-id: 24ded93a7243ec79a0375f4eae8a3db9b787f857
2020-11-16 11:43:27 -08:00
Wang Xu
0dbff184e9 change file name to snake style (#47914)
Summary:
- Change Partitioner.py file name to partitioner.py
- Change GraphManipulation.py file name to graph_manipulation.py
- Move test_replace_target_nodes_with() to test_fx_experimental.py
- Remove the unnecessary argument in size_based_partition() in Partitioner class

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47914

Reviewed By: gcatron

Differential Revision: D24956653

Pulled By: scottxu0730

fbshipit-source-id: 25b65be7dc7d64e90ffdc59cf394446fee83c3e6
2020-11-14 01:29:25 -08:00
Richard Zou
e5da3b6097 Revert D24891767: rename torch.Assert to torch._assert
Test Plan: revert-hammer

Differential Revision:
D24891767 (a8ca042ec0)

Original commit changeset: 01c7a5acd83b

fbshipit-source-id: cd2271467151b578185758723fcd23f69051d3a3
2020-11-13 08:35:05 -08:00
Wang Xu
759a548d6e add dependency check in cost_aware_partition (#47856)
Summary:
In cost_aware_partition, check for circular dependencies in try_combining_partitions. Also fix the calculation of communication time between partitions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47856

Reviewed By: gcatron

Differential Revision: D24926591

Pulled By: scottxu0730

fbshipit-source-id: c634608675ac14b13b2370a727e4fb05e1bb94f0
2020-11-13 02:49:39 -08:00
Vasiliy Kuznetsov
a8ca042ec0 rename torch.Assert to torch._assert (#47763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47763

Changing the name due to the discussion in
https://github.com/pytorch/pytorch/pull/47399.

Test Plan:
```
python test/test_utils.py TestAssert.test_assert_true
python test/test_fx.py TestFX.test_symbolic_trace_assert
python test/test_fx_experimental.py
```

Imported from OSS

Reviewed By: ezyang

Differential Revision: D24891767

fbshipit-source-id: 01c7a5acd83bf9c962751552780930c242134dd2
2020-11-12 23:59:34 -08:00
James Reed
9734c042b8 [FX] Fix submodule naming for subgraph split (#47869)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47869

Test Plan: Imported from OSS

Reviewed By: scottxu0730

Differential Revision: D24925283

Pulled By: jamesr66a

fbshipit-source-id: a33bff20667405a3bbfc81e1e640c2649c0db03b
2020-11-12 15:58:45 -08:00
Garret Catron
21f447ee2c Added serialization of parameters for leaf modules (#47729)
Summary:
This adds the serialization of leaf-node parameters to the JSON serialization.
Specifically, the __constants__ of the leaf module are serialized as parameters in the JSON.
It also adds type/shape information for leaf modules.
```
{
            "shape": "[3, 3, 1, 1]",
            "dtype": "torch.float32",
            "parameters": {
                "name": "Conv2d",
                "stride": [
                    1,
                    1
                ],
                "padding": [
                    0,
                    0
                ],
                "dilation": [
                    1,
                    1
                ],
                "groups": 1,
                "padding_mode": "zeros",
                "output_padding": [
                    0,
                    0
                ],
                "in_channels": 3,
                "out_channels": 3,
                "kernel_size": [
                    2,
                    2
                ]
            },
            "target": "conv",
            "op_code": "call_module",
            "name": "conv",
            "args": [
                {
                    "is_node": true,
                    "name": "c"
                }
            ],
            "kwargs": {}
        },
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47729

Reviewed By: ailzhang

Differential Revision: D24901632

Pulled By: gcatron

fbshipit-source-id: 7f2d923937042b60819c58fd180b426a3733ff5f
2020-11-12 14:28:31 -08:00
Wang Xu
b46787d6d7 add cost_aware_partition (#47673)
Summary:
[WIP] This PR adds a cost_aware_partition method to the Partitioner class. The method partitions the FX graph module based on the latency of the whole graph.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47673

Reviewed By: gcatron

Differential Revision: D24896685

Pulled By: scottxu0730

fbshipit-source-id: 1b1651fe82ce56554f99d68da116e585c74099ed
2020-11-11 19:31:37 -08:00
Garret Catron
497cd2506f Add serialize GraphModule to JSON support (#47612)
Summary:
Re-opening the PR; the previously missed mypy issues are now addressed.
Example:

```
class TestModule(torch.nn.Module):
            def __init__(self):
                super().__init__()
                self.linear = torch.nn.Linear(4, 4)
                self.e = torch.rand(4)

            def forward(self, a, b):
                add_1 = a + b
                linear = self.linear(add_1)
                add_2 = linear + self.e
                return add_2
```
JSON:

```
{
    "modules": {},
    "weights": {
        "linear.weight": {
            "dtype": "torch.float32",
            "is_quantized": false,
            "shape": "[4, 4]"
        },
        "linear.bias": {
            "dtype": "torch.float32",
            "is_quantized": false,
            "shape": "[4]"
        },
        "e": {
            "dtype": "torch.float32",
            "is_quantized": false,
            "shape": "[4]"
        }
    },
    "nodes": [
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "a",
            "op_code": "placeholder",
            "name": "a",
            "args": [],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "b",
            "op_code": "placeholder",
            "name": "b",
            "args": [],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "_operator.add",
            "op_code": "call_function",
            "name": "add_1",
            "args": [
                {
                    "is_node": true,
                    "name": "a"
                },
                {
                    "is_node": true,
                    "name": "b"
                }
            ],
            "kwargs": {}
        },
        {
            "target": "linear",
            "op_code": "call_module",
            "name": "linear_1",
            "args": [
                {
                    "is_node": true,
                    "name": "add_1"
                }
            ],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "e",
            "op_code": "get_attr",
            "name": "e",
            "args": [],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "_operator.add",
            "op_code": "call_function",
            "name": "add_2",
            "args": [
                {
                    "is_node": true,
                    "name": "linear_1"
                },
                {
                    "is_node": true,
                    "name": "e"
                }
            ],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "output",
            "op_code": "output",
            "name": "output",
            "args": [
                {
                    "is_node": true,
                    "name": "add_2"
                }
            ],
            "kwargs": {}
        }
    ]
}
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47612

Reviewed By: scottxu0730

Differential Revision: D24836223

Pulled By: gcatron

fbshipit-source-id: d3da2b5f90d143beba3b7f1f67462fb7430df906
2020-11-10 11:54:02 -08:00
Nikita Shulga
6248e0621c Revert D24801481: [pytorch][PR] Add AcceleratedGraphModule and serialzie GraphModule to JSON
Test Plan: revert-hammer

Differential Revision:
D24801481 (9e0102c10f)

Original commit changeset: 6b3fe69b51f7

fbshipit-source-id: f8287ef88b302e0f08d58090dc61603a4ef5cb3c
2020-11-09 08:28:22 -08:00
Garret Catron
9e0102c10f Add AcceleratedGraphModule and serialzie GraphModule to JSON (#47233)
Summary:
Example:
```
class TestModule(torch.nn.Module):
            def __init__(self):
                super().__init__()
                self.linear = torch.nn.Linear(4, 4)
                self.e = torch.rand(4)

            def forward(self, a, b):
                add_1 = a + b
                linear = self.linear(add_1)
                add_2 = linear + self.e
                return add_2
```
JSON:
```
{
    "modules": {},
    "weights": {
        "linear.weight": {
            "dtype": "torch.float32",
            "is_quantized": false,
            "shape": "[4, 4]"
        },
        "linear.bias": {
            "dtype": "torch.float32",
            "is_quantized": false,
            "shape": "[4]"
        },
        "e": {
            "dtype": "torch.float32",
            "is_quantized": false,
            "shape": "[4]"
        }
    },
    "nodes": [
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "a",
            "op_code": "placeholder",
            "name": "a",
            "args": [],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "b",
            "op_code": "placeholder",
            "name": "b",
            "args": [],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "_operator.add",
            "op_code": "call_function",
            "name": "add_1",
            "args": [
                {
                    "is_node": true,
                    "name": "a"
                },
                {
                    "is_node": true,
                    "name": "b"
                }
            ],
            "kwargs": {}
        },
        {
            "target": "linear",
            "op_code": "call_module",
            "name": "linear_1",
            "args": [
                {
                    "is_node": true,
                    "name": "add_1"
                }
            ],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "e",
            "op_code": "get_attr",
            "name": "e",
            "args": [],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "_operator.add",
            "op_code": "call_function",
            "name": "add_2",
            "args": [
                {
                    "is_node": true,
                    "name": "linear_1"
                },
                {
                    "is_node": true,
                    "name": "e"
                }
            ],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "output",
            "op_code": "output",
            "name": "output",
            "args": [
                {
                    "is_node": true,
                    "name": "add_2"
                }
            ],
            "kwargs": {}
        }
    ]
}
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47233

Reviewed By: jackm321, yinghai

Differential Revision: D24801481

Pulled By: gcatron

fbshipit-source-id: 6b3fe69b51f7ac57f445675acdac36b0e563f73d
2020-11-08 19:26:02 -08:00
Wang Xu
b4b0fa6371 add get_device_to_partitions_mapping (#47361)
Summary:
Add a get_device_to_partitions_mapping function to the Partitioner class to make size_based_partition more modular and organized. This function will also be used by the future cost_aware_partition.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47361

Reviewed By: gcatron

Differential Revision: D24760911

Pulled By: scottxu0730

fbshipit-source-id: 8cdda51b9a1145f9d13ebabbb98b4d9df5ebb6cd
2020-11-05 16:33:02 -08:00
Wang Xu
5107a411cd add partition_by_partition_cost (#47280)
Summary:
This PR adds support for calculating the cost of a partitioned graph partition by partition, based on the node cost. In a partitioned graph, the top partitions (partitions without parents) are collected as starting points, and DFS is then used to find the critical path among all partitions in the graph.
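
A hedged sketch of the critical-path computation described here (a `Partition` with `latency` and `children` attributes is a hypothetical stand-in for the real data structures):

```python
def critical_path_latency(top_partitions):
    # Return the latency of the most expensive root-to-leaf partition path.
    def dfs(partition):
        child_latency = max((dfs(child) for child in partition.children), default=0.0)
        return partition.latency + child_latency

    return max(dfs(p) for p in top_partitions)
```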

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47280

Reviewed By: gcatron

Differential Revision: D24735932

Pulled By: scottxu0730

fbshipit-source-id: 96653a8208554d2c3624e6c8718628f7c13e320b
2020-11-04 18:21:18 -08:00
Ansley Ussery
dec1c36487 Create prototype for AST rewriter (#47216)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47216

Test Plan: Imported from OSS

Reviewed By: glaringlee

Differential Revision: D24687539

Pulled By: ansley

fbshipit-source-id: 421108d066ff93ee18f4312ee67c287ca1cef881
2020-11-03 19:21:58 -08:00
Wang Xu
1fe273d798 add node by node cost function (#47009)
Summary:
This PR adds a node-by-node cost function. Given a partition of nodes, the get_latency_of_one_partition function finds the critical path in the partition and returns its latency. A unit test is also provided, in which a graph module is partitioned into two partitions and the latency of each is tested.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47009

Reviewed By: gcatron

Differential Revision: D24692542

Pulled By: scottxu0730

fbshipit-source-id: 64c20954d842507be0d1afa2516d88f705e11224
2020-11-02 21:15:43 -08:00
Natalia Gimelshein
317b78d56e Revert D24665950: Create prototype for AST rewriter
Test Plan: revert-hammer

Differential Revision:
D24665950 (54feb00bbd)

Original commit changeset: b72110436126

fbshipit-source-id: 961412df006acd33c91a745c809832d5c6494c76
2020-10-31 18:07:10 -07:00
Ansley Ussery
54feb00bbd Create prototype for AST rewriter (#46410)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46410

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D24665950

Pulled By: ansley

fbshipit-source-id: b72110436126a24ddc294b8ee7b3f691281c1f1b
2020-10-31 10:51:17 -07:00
Wang Xu
6c34aa720c add add_node function for partition to fix partition mem size calculation (#47083)
Summary:
Placeholders and constants in the partition are counted twice when combining two partitions. This PR fixes that by adding an add_node function to the Partition class. A unit test is also updated to check that the partition size is correct.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47083

Reviewed By: gcatron

Differential Revision: D24634368

Pulled By: scottxu0730

fbshipit-source-id: ab408f29da4fbf729fd9741dcb3bdb3076dc30c4
2020-10-30 01:59:42 -07:00
Wang Xu
a86b3438eb add support for different memory sizes on size_based_partition (#46919)
Summary:
WIP: add support for different memory sizes in size_based_partition, so that it can handle logical devices with different memory sizes. Compared to the original size_based_partition, the new one also supports partition-to-logical-device mapping; multiple partitions can be mapped to one device if its memory size allows. A unit test, test_different_size_partition, is also added.
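
A hedged sketch of the partition-to-device mapping idea (the data structures are hypothetical stand-ins for the Partitioner internals):

```python
def map_partitions_to_devices(partitions, devices):
    # devices: list of (device_name, available_memory_bytes) tuples
    remaining = {name: mem for name, mem in devices}
    mapping = {}
    # Place large partitions first, each on the device with the most memory left.
    for part in sorted(partitions, key=lambda p: p.used_mem_bytes, reverse=True):
        name = max(remaining, key=remaining.get)
        if remaining[name] < part.used_mem_bytes:
            raise RuntimeError("partition does not fit on any device")
        mapping[part] = name
        remaining[name] -= part.used_mem_bytes
    return mapping
```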

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46919

Reviewed By: gcatron, VitalyFedyunin

Differential Revision: D24603511

Pulled By: scottxu0730

fbshipit-source-id: 1ba37338ae054ad846b425fbb7e631d3b6c500b6
2020-10-28 21:11:41 -07:00
Wang Xu
8640905088 add sparse_nn_partition (#46390)
Summary:
WIP: This PR adds sparse_nn_partition to the Partitioner class. It includes logical device assignment for all DAG nodes. The basic idea is to run size_based_partition separately for embedding nodes and non-embedding nodes. A unit test is also added in test_fx_experimental.py.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46390

Reviewed By: gcatron

Differential Revision: D24555415

Pulled By: scottxu0730

fbshipit-source-id: 8772af946d5226883759a02a1c827cfdfce66097
2020-10-27 00:11:58 -07:00
Wang Xu
62d37b9f26 add size_based_partition final (#46282)
Summary:
Reopen the PR: https://github.com/pytorch/pytorch/pull/45837
This PR adds a new feature to the Partitioner() class called size_based_partition. Given a list of devices with the same memory size, this function can distribute graph nodes across the devices. To implement this feature, several helper functions are created in Partitioner.py and GraphManipulation.py.
A unit test is also added in test/test_fx_experimental.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46282

Reviewed By: gcatron

Differential Revision: D24288470

Pulled By: scottxu0730

fbshipit-source-id: e81b1e0c56e34f61e497d868882126216eba7538
2020-10-14 03:44:05 -07:00