Commit Graph

89 Commits

Author SHA1 Message Date
Ansley Ussery
85109ce427 Support submodule manipulation in GraphModule (#52358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52358

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26759260

Pulled By: ansley

fbshipit-source-id: 25d2b9124a7d957704f1700a45dca143aaed391d
2021-03-04 14:52:35 -08:00
Michael Suo
958d9a8364 [fx/package] make GraphModules packageable (#51976)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51976

FX serializes things by serializing Python code as a string and exec'ing
it on load. This accomplishes one goal (we don't have to pickle the
graph object directly) but breaks the pickle abstraction in ways that
are not composable with `torch.package`.

In particular:
1. `forward` is serialized by saving Python code. On load, it's installed by `exec`ing that code. This `exec` call needs to have the right importer installed, otherwise it will not import modules from the `torch.package` but instead import from the Python environment.
2. Any types/functions used are emitted as `import` statements in the generated Python code. These are effectively dynamic dependencies of the `GraphModule` being saved, and need to be registered as such so that the `PackageImporter` will package them.

To address these, this PR introduces a new protocol for the
importer/exporter: `__reduce_package__`.

A class can implement `__reduce_package__` to customize how it is placed in the importer/exporter. It functions very similarly to `__reduce__`, except:
- `__reduce_package__` takes one argument, the `PackageExporter` instance. Users can use this instance to save things to the package to implement their serialization. `__reduce__` takes no args.
- Only the 2-element tuple version of the return value for `__reduce__` is supported (this could be extended if necessary).
- When the reduction function is called on load, an additional argument is added to the beginning of the args tuple. This is the `PackageImporter` instance doing the loading.

The `__reduce_package__` protocol is defined using `persistent_id` and `persistent_load`, which ensures that we can still use the C implementation of the pickler by default.
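
A minimal sketch of a class opting into the protocol (`save_text`/`load_text` are real `torch.package` APIs; the wiring here is illustrative, not this PR's code):

```python
class Config:
    def __init__(self, text):
        self.text = text

    def __reduce_package__(self, exporter):
        # Use the PackageExporter to stash state inside the package, then
        # return the 2-tuple (reduction_fn, args), as with __reduce__.
        exporter.save_text('configs', 'cfg.txt', self.text)
        return (_load_config, ('configs', 'cfg.txt'))

def _load_config(importer, package, resource):
    # On load, the PackageImporter instance is prepended to the args tuple.
    return Config(importer.load_text(package, resource))
```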

Pull Request resolved: #51971

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D26340591

Pulled By: suo

fbshipit-source-id: 5872a7d22e832056399a7372bae8a57807717882
2021-02-23 22:43:00 -08:00
Michael Suo
ecf3ca00d8 [fx] Separate globals assignment from code generation (#51974)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51974

Right now, when an FX `Graph` references an external object, we will emit
code like:

    import foo
    def forward(input: foo.bar.baz):
        ...

This is problematic in a world with `torch.package`, since the name `foo.bar.baz` may reference a name from any number of packages.

This PR lays the groundwork for FX-package integration by separating the resolution of external references from the generation of the function code.

When generating a Graph's Python source, we keep track of all external
references and assign them unique names. At the end, we have a
dictionary mapping names -> actual objects. This becomes the `globals`
namespace we pass to `exec` when installing the forward function in a
`GraphModule`. This is nice because we can always be sure that `exec` is
seeing the same objects that were referenced from the `Graph`, no import
statements needed.
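
A minimal sketch of the scheme (the identifiers are illustrative, not the actual FX internals):

```python
import torch

# Built up during code generation: every external reference gets a
# unique name, mapped to the actual object.
globals_dict = {'torch': torch}

src = """
def forward(self, x):
    return torch.relu(x)
"""
exec(compile(src, '<eval_with_key_0>', 'exec'), globals_dict)
forward_fn = globals_dict['forward']  # sees exactly the objects we tracked
```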

At serialization time, we use a `ModuleEnv` to resolve the globals dict to a set of import statements that can be run to reproduce the `globals` namespace. This is only used on serialization/deserialization, and those functions are expected to check that the import statements are producing the correct results.

Concretely, the code above will now look like:

    from foo.bar import baz as foo_bar_baz
    def forward(input: foo_bar_baz):
        ...

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26340593

Pulled By: suo

fbshipit-source-id: fe247f75205d0a03fd067bdd0f95491e8edf1436
2021-02-23 13:48:03 -08:00
Ansley Ussery
4cc10563e7 Customize traceback for calls to symbolically-traced code (#51648)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51648

The following code will throw during the call to `traced(5)`:
```python
class M(nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.W = torch.nn.Parameter(torch.randn(5))

    def forward(self, x):
        return torch.dot(self.W, x)

traced = fx.symbolic_trace(M())
traced(5)
```

Traceback before:
```
Traceback (most recent call last):
  File "test/tinytest.py", line 26, in <module>
    traced(5)
  File "/home/ansley/local/pytorch/torch/fx/graph_module.py", line 338, in wrapped_call
    return self._cls_call(self, *args, **kwargs)
  File "/home/ansley/local/pytorch/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "<eval_with_key_0>", line 4, in forward
TypeError: dot(): argument 'tensor' (position 2) must be Tensor, not int
```

Traceback after:
```
Traceback (most recent call last):
  File "/home/ansley/local/pytorch/torch/fx/graph_module.py", line 338, in wrapped_call
    return torch.nn.Module.__call__(self, *args, **kwargs)
  File "/home/ansley/local/pytorch/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "<eval_with_key_1>", line 4, in forward
    dot_1 = torch.dot(w, x);  w = x = None
TypeError: dot(): argument 'tensor' (position 2) must be Tensor, not int

Call using an FX-traced Module, line 4 of the traced Module’s generated forward function:
    w = self.W
    dot_1 = torch.dot(w, x);  w = x = None

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    relu_1 = dot_1.relu();  dot_1 = None

    return relu_1
```

(Note that the same `TypeError` is thrown despite modifying the traceback.)
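
A minimal sketch of the general idea (illustrative, not this PR's implementation): catch the error from the generated `forward` and surface the generated source, which FX exposes as `gm.code`:

```python
import sys
import torch.fx as fx

def call_with_source_context(gm: fx.GraphModule, *args, **kwargs):
    try:
        return gm(*args, **kwargs)
    except Exception:
        # Print the generated forward() source for context, then re-raise
        # the original exception unchanged (as with the TypeError above).
        print(f"Error in FX-generated code:\n{gm.code}", file=sys.stderr)
        raise
```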

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26424005

Pulled By: ansley

fbshipit-source-id: 368f46ba81fb3111bd09654825bb2ac5595207d1
2021-02-12 18:31:23 -08:00
Ansley Ussery
4ac489091a Improve call provenance during GraphModule scripting (#50538)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50538

Test Plan: Imported from OSS

Reviewed By: pbelevich, SplitInfinity

Differential Revision: D25935403

Pulled By: ansley

fbshipit-source-id: 2baf5e0ba0fa3918e645fc713a9e80d10bbc84e5
2021-01-21 12:03:19 -08:00
James Reed
5205cc1c62 [FX] Fix NoneType annotation in generated code (#50777)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50777

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D25966026

Pulled By: jamesr66a

fbshipit-source-id: 8e36521eee03eade7e1b602e801229c085b03488
2021-01-19 23:16:58 -08:00
James Reed
ae9f39eb58 [FX][1/2] Make docstrings pretty when rendered (#48738)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48738

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25280867

Pulled By: jamesr66a

fbshipit-source-id: d08641c19a6c69b4042389c800a48e699f0be628
2020-12-05 17:23:40 -08:00
Horace He
092e52a4da [fx]added prototype of to_folder (#47544)
Summary:
Given an FX module `foo`, you can call `foo.to_folder('foo_folder', 'Foo')` to dump the current FX module into runnable Python code.

That is:
```
foo = ...  # some traced FX GraphModule
foo.to_folder('bar', 'Foo')  # dump runnable source into ./bar
from bar import Foo
foo2 = Foo()

# for all x: foo2(x) == foo(x)
```

This has several use cases, largely lifted from jamesr66a's doc here: https://fb.quip.com/U6KHAFaP2cWa (FB-internal).

1. As we apply more heavy-weight function transformations with FX, figuring out what's going on can be quite a difficult experience. In particular, things that can typically be used for debugging (like `print` or `import pdb; pdb.set_trace()`) no longer work. This is particularly necessary if you're using an FX transform like `grad` or `vmap`. With this, you simply open up the dumped file and add `print`/`pdb` statements wherever you'd like.

2. This also provides an immense amount of user control. Some potential use-cases:
- Let's say an existing FX transform has some bug, or generates suboptimal code. Instead of needing to modify that FX transform, write another FX pass that fixes the suboptimal code, or simply give up on FX, users can work around it by modifying the resulting code themselves.
- This allows users to check in their FX modules into source control.
- You could even imagine using this as part of some code-gen type workflow, where you write a function, `vmap` it to get the function you actually want, and then simply copy the output of the `vmap` function without needing FX at all in the final code.

An example:
```python
class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        self.W = torch.nn.Parameter(torch.randn(2))
        self.linear = nn.Linear(2, 2)
        self.attr = torch.randn(2)
        self.attr2 = torch.randn(2)

    def forward(self, x):
        return self.linear(self.W + (self.attr + self.attr2) + x)

mod = fx.symbolic_trace(Test())
mod.to_folder('foo', 'Foo')
```
results in
```python
import torch
class Foo(torch.nn.Module):
    def __init__(self):
        super().__init__()
        state_dict = torch.load('foo/state_dict.pt')
        self.linear = torch.load('foo/linear.pt') # Linear(in_features=2, out_features=2, bias=True)
        self.__tensor_constant0 = state_dict['__tensor_constant0']
        self.W = torch.nn.Parameter(state_dict['W'])

    def forward(self, x):
        w = self.W
        tensor_constant0 = self.__tensor_constant0
        add_1 = w + tensor_constant0
        add_2 = add_1 + x
        linear_1 = self.linear(add_2)
        return linear_1
```
Some current issues:
1. How do you actually ... save things like modules or parameters? I don't think FX is in the business of tracking initializations and such. Thus, the only way I see to do it is to dump the parameters/modules as blobs, and then load them in the generated initialization. This is a somewhat subpar user experience, and perhaps prevents it from being used in some use cases (i.e., you would need to check the blobs into source control to save the model).

2. Currently, the only "atomic" modules we have are those in `torch.nn`. However, if we want to allow flexibility here (for example, user-defined "atomic" modules), it's not clear how those could be dumped in a way that we can then load elsewhere.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47544

Reviewed By: jamesr66a

Differential Revision: D25232917

Pulled By: Chillee

fbshipit-source-id: fd2b61a5f40e614fc94256a2957ed1d57fcf5492
2020-12-04 18:33:27 -08:00
Mehdi Mirzazadeh
c5834b6a23 Look in named-buffers of module for tensors (#47641)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47641

ghstack-source-id: 116450114

Test Plan: Presubmit tests

Reviewed By: jamesr66a

Differential Revision: D24848318

fbshipit-source-id: f6ede3def9d6f1357c4fd3406f97721dea06b9f1
2020-11-11 19:08:16 -08:00
James Reed
d1351c66a8 [FX] Add a bunch of docstrings (#47719)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47719

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24875400

Pulled By: jamesr66a

fbshipit-source-id: a1dd43d2eee914a441eff43c4f2efe61a399e8a5
2020-11-11 10:59:57 -08:00
Horace He
373246733d [FX] get the correct error message (#47108)
Summary:
Currently, code like
```
class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        self.W = torch.nn.Parameter(torch.randn(5))

    def forward(self, x):
        return torch.dot(self.W, x)

mod = Test()
print(fx.symbolic_trace(Test())(5))
```
gives an error like the below, which does not show the actual code that throws the error.
```
Traceback (most recent call last):
  File "t.py", line 20, in <module>
    print(fx.symbolic_trace(Test())(5))
  File "/home/chilli/fb/pytorch/torch/nn/modules/module.py", line 744, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/chilli/fb/pytorch/torch/fx/graph_module.py", line 191, in debug_forward
    return src_forward(self, *args, **kwargs)
  File "<eval_with_key_0>", line 5, in forward
TypeError: dot(): argument 'tensor' (position 2) must be Tensor, not int
```

This is particularly annoying when your function has already been transformed several times.

So, the really annoying thing is that the error clearly has the requisite information in `exception.__traceback__` - it just isn't printing it.

I think the right way of doing this is simply replacing `sys.excepthook`. This appears to be the standard way to modify exception messages.
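
A minimal sketch of the `sys.excepthook` approach (illustrative, not the code in this PR):

```python
import sys
import traceback

_default_excepthook = sys.excepthook

def fx_excepthook(exc_type, exc_value, tb):
    # Point at the last frame that ran FX-generated code, then defer to
    # the default hook for the normal traceback.
    for frame in reversed(traceback.extract_tb(tb)):
        if frame.filename.startswith('<eval_with_key'):
            print(f"Error in generated forward: {frame.filename}, "
                  f"line {frame.lineno}", file=sys.stderr)
            break
    _default_excepthook(exc_type, exc_value, tb)

sys.excepthook = fx_excepthook
```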

**Scratch the below**

The 2 methods in the PR right now are:
1. Just prepend the final part of the traceback to the beginning of your error message. Looks like
```
Traceback (most recent call last):
  File "t.py", line 20, in <module>
    print(fx.symbolic_trace(Test())(5))
  File "/home/chilli/fb/pytorch/torch/nn/modules/module.py", line 744, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/chilli/fb/pytorch/torch/fx/graph_module.py", line 197, in debug_forward
    raise e
  File "/home/chilli/fb/pytorch/torch/fx/graph_module.py", line 192, in debug_forward
    return src_forward(self, *args, **kwargs)
  File "<eval_with_key_0>", line 5, in forward
TypeError:   File "<eval_with_key_0>", line 5, in forward
    dot_1 = torch.dot(w, x)
dot(): argument 'tensor' (position 2) must be Tensor, not int
```

2. Use the `from exception` feature in Python. Looks like
```
Traceback (most recent call last):
  File "/home/chilli/fb/pytorch/torch/fx/graph_module.py", line 192, in debug_forward
    return src_forward(self, *args, **kwargs)
  File "<eval_with_key_0>", line 5, in forward
TypeError:   File "<eval_with_key_0>", line 5, in forward
    dot_1 = torch.dot(w, x)
dot(): argument 'tensor' (position 2) must be Tensor, not int

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "t.py", line 20, in <module>
    print(fx.symbolic_trace(Test())(5))
  File "/home/chilli/fb/pytorch/torch/nn/modules/module.py", line 744, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/chilli/fb/pytorch/torch/fx/graph_module.py", line 197, in debug_forward
    raise Exception(last_tb) from e
Exception:   File "<eval_with_key_0>", line 5, in forward
    dot_1 = torch.dot(w, x)
```

I think the first one looks better, but it's pretty hacky since we're shoving the traceback in the message.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47108

Reviewed By: jamesr66a

Differential Revision: D24751019

Pulled By: Chillee

fbshipit-source-id: 83e6ed0165f98632a77c73de75504fd6263fff40
2020-11-05 10:59:01 -08:00
James Reed
d0df29ac22 [FX] Put inf and nan in globals instead of with an import string (#47035)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47035

Chillee thought the `from math import inf, nan` string at the top of `.code` was annoying, so here's an alternative way to do it: put those values in `globals` before we `exec`.
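
A minimal sketch of the trick (illustrative):

```python
import math

# Seed exec()'s globals with the values directly instead of emitting
# "from math import inf, nan" at the top of the generated source.
src = """
def saturate(x):
    return min(x, inf)  # 'inf' resolves via the provided globals
"""
globals_dict = {'inf': math.inf, 'nan': math.nan}
exec(compile(src, '<generated>', 'exec'), globals_dict)
print(globals_dict['saturate'](5.0))  # 5.0
```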

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D24611278

Pulled By: jamesr66a

fbshipit-source-id: c25ef89e649bdd3e79fe91aea945a30fa7106961
2020-10-29 00:35:41 -07:00
James Reed
b04ae953b4 [FX][WIP] Mutable Graph APIs (#45227)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45227

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23880730

Pulled By: jamesr66a

fbshipit-source-id: eb4e8c14d7f6b1deb1ddd6cf38a360413a1705ed
2020-10-05 17:07:08 -07:00
Zachary DeVito
26a9012f84 [fx] import used modules for code gen (#45471)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45471

Instead of assuming that 'torch' is the only module used by generated code, use the qualified names of builtin functions to generate import statements for all builtins. This allows user-captured functions to also get code generated correctly.
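
A minimal sketch of the idea (illustrative, not the PR's code): derive the import from a captured function's module, then refer to the function by its qualified name in the generated source:

```python
import math

def import_for(fn) -> str:
    # math.sqrt -> "import math"; generated code then calls math.sqrt(...)
    return f"import {fn.__module__}"

print(import_for(math.sqrt))  # import math
```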

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23978696

Pulled By: zdevito

fbshipit-source-id: ecbff150e3de38532531cdadbfe4965468f29a38
2020-10-05 15:21:44 -07:00
James Reed
2ab74a4839 [FX] Make Tracer.trace() just return a Graph (#45704)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45704

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24067982

Pulled By: jamesr66a

fbshipit-source-id: c82aa6be504d45e110055a3c4db129d0b9ac3ef5
2020-10-03 21:13:48 -07:00
James Reed
53aea60bce [FX] Make output a non-special Node (#45599)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45599

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24027586

Pulled By: jamesr66a

fbshipit-source-id: 747c25e3c7668ca45f03bed0be71fd3c9af67286
2020-10-02 17:08:17 -07:00
Meghan Lele
09b3e16b40 [JIT] Enable @unused syntax for ignoring properties (#45261)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45261

**Summary**
This commit enables `unused` syntax for ignoring properties, which makes ignoring properties more intuitive. `ignore` is not supported because class type properties cannot be executed in Python the way an `ignored` function can (the classes exist only as TorchScript types), and module properties that cannot be scripted are simply not added to the `ScriptModule` wrapper, leaving them to execute in Python.
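
A minimal sketch of the enabled syntax (the decorator ordering here is an assumption based on the description above):

```python
import torch

class M(torch.nn.Module):
    @property
    @torch.jit.unused
    def python_only(self):
        import io  # something TorchScript cannot compile
        return io.BytesIO()

    def forward(self, x):
        return x + 1

scripted = torch.jit.script(M())  # compiles; the unused property is skipped
```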

**Test Plan**
This commit updates the existing unit tests for class type and module
properties to test properties ignored using `unused`.

Test Plan: Imported from OSS

Reviewed By: navahgar, Krovatkin, mannatsingh

Differential Revision: D23971881

Pulled By: SplitInfinity

fbshipit-source-id: 8d3cc1bbede7753d6b6f416619e4660c56311d33
2020-09-29 10:24:25 -07:00
James Reed
7f4a27be3a [resubmit][FX] s/get_param/get_attr/ (#45147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45147

ghstack-source-id: 112605923

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D23845096

fbshipit-source-id: 9ca209aa84cbaddd6e89c52b541e43b11197e2d5
2020-09-22 17:06:18 -07:00
James Reed
79fe794f87 [FX] Make Graphs immutable and make GraphModule recompile after assigning graph (#44830)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44830

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23743850

Pulled By: jamesr66a

fbshipit-source-id: 501b92a89ff636c26abeff13105a75462384554c
2020-09-22 15:02:11 -07:00
James Reed
1fd48a9d1f Revert D23798016: [FX] s/get_param/get_attr/
Test Plan: revert-hammer

Differential Revision:
D23798016 (c941dd3492)

Original commit changeset: 1d2f3db1994a

fbshipit-source-id: 974d930064b37d396c5d66c905a63d45449813e5
2020-09-22 10:32:51 -07:00
James Reed
c941dd3492 [FX] s/get_param/get_attr/ (#45000)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45000

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D23798016

Pulled By: jamesr66a

fbshipit-source-id: 1d2f3db1994a62b95d0ced03bf958e54d30c35dd
2020-09-21 14:09:32 -07:00
James Reed
043466f978 [FX] Pass module's qualname to is_leaf_module (#44966)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44966

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D23790360

Pulled By: jamesr66a

fbshipit-source-id: 7ef569fd93646584b27af7a615fa69c8d8bbdd3b
2020-09-18 17:02:33 -07:00
James Reed
60ae6c9c18 [FX] Fix GraphModule copy methods not regenerating forward (#44806)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44806

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23738732

Pulled By: jamesr66a

fbshipit-source-id: 14e13551c6568c562f3f789b6274b6c86afefd0b
2020-09-17 17:14:38 -07:00
James Reed
e9c6449b46 [FX][EZ] Allow constructing GraphModule with dict for root (#44679)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44679

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23696766

Pulled By: jamesr66a

fbshipit-source-id: fe18b7b579c1728d00589bd5fd5e54c917cc61fe
2020-09-16 12:43:23 -07:00
Zachary DeVito
2c1b215b48 [fx] remove delegate, replace with tracer (#44566)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44566

The Delegate objects were confusing. They were supposed to be a way to configure how tracing works, but in some cases they appeared necessary for constructing graphs, which was not true. This makes the organization clearer by removing Delegate and moving its functionality into a Tracer class, similar to how pickle has a Pickler class.
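
A minimal sketch of the resulting API (using current `torch.fx` names): tracing is configured by subclassing `Tracer`, e.g. to mark a custom module as a leaf:

```python
import torch
import torch.fx

class MyBlock(torch.nn.Module):
    def forward(self, x):
        return x.relu() + 1

class LeafAwareTracer(torch.fx.Tracer):
    def is_leaf_module(self, m, module_qualified_name):
        # Record MyBlock as a single call_module node instead of tracing
        # through its forward().
        if isinstance(m, MyBlock):
            return True
        return super().is_leaf_module(m, module_qualified_name)

graph = LeafAwareTracer().trace(torch.nn.Sequential(MyBlock()))
```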

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23683177

Pulled By: zdevito

fbshipit-source-id: 7605a34e65dfac9a487c0bada39a23ca1327ab00
2020-09-15 16:52:22 -07:00
James Reed
4e0ac120e9 [FX] Only copy over training attr if it's there (#44314)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44314

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D23578189

Pulled By: jamesr66a

fbshipit-source-id: fb7643f28582bd5009a826663a937fbe188c50bc
2020-09-08 11:50:08 -07:00
Zachary DeVito
2ad5a82c43 [fx] get rid of graph_module.root (#44092)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44092

Instead, submodules and weights are installed directly on the graph_module by transferring the original modules. This makes it more likely that scripting will succeed (since we no longer have submodules that are not used in the trace). It also saves layered transforms from having to special-case handling of the `root` module. GraphModules can now be re-traced as part of the input to other transforms.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23504210

Pulled By: zdevito

fbshipit-source-id: f79e5c4cbfc52eb0ffb5d6ed89b37ce35a7dc467
2020-09-04 11:35:32 -07:00
James Reed
af13faf18b [FX] __str__ for GraphModule and Graph (#44166)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44166

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D23520801

Pulled By: jamesr66a

fbshipit-source-id: f77e3466e435127ec01e66291964395f32a18992
2020-09-04 10:46:43 -07:00
James Reed
7a77d1c5c2 [FX] Only copy over forward() from exec (#44006)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44006

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23466542

Pulled By: jamesr66a

fbshipit-source-id: 12a1839ddc65333e3e3d511eeb53206f06546a87
2020-09-02 15:35:49 -07:00
James Reed
a1a23669f2 [FX] Pickle serialization of GraphModule via forward source (#43674)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/43674

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D23362396

Pulled By: jamesr66a

fbshipit-source-id: cb8181edff70643b7bbe548cc6b0957328d4eedd
2020-09-01 13:31:18 -07:00
Michael Suo
89452a67de [fx] GraphModule.src -> GraphModule.code (#43655)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43655

Pure, unadulterated bikeshed. The good stuff.

This makes things more consistent with ScriptModule.

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D23401528

Pulled By: suo

fbshipit-source-id: 7dd8396365f118abcd045434acd9348545314f44
2020-08-31 11:26:05 -07:00
Jerry Zhang
5a1aa0e21e [reland][quant][graphmode][fx] Add e2e test on torchvision (#43587)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43587

Add tests for graph mode quantization on torchvision and make sure it matches
current eager mode quantization

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D23331253

fbshipit-source-id: 0445a44145d99837a2c975684cd0a0b7d965c8f9
2020-08-27 10:12:07 -07:00
Mikhail Zolotukhin
be637fd5f6 Revert D23306683: [quant][graphmode][fx] Testing torchvision
Test Plan: revert-hammer

Differential Revision:
D23306683 (62dcd253e3)

Original commit changeset: 30d27e225d45

fbshipit-source-id: e661334d187d3d6756facd36f2ebdb3ab2cd2e26
2020-08-25 15:24:02 -07:00
Jerry Zhang
62dcd253e3 [quant][graphmode][fx] Testing torchvision (#43526)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43526

Add tests for graph mode quantization on torchvision and make sure it matches
current eager mode quantization

Test Plan: Imported from OSS

Reviewed By: vkuzo

Differential Revision: D23306683

fbshipit-source-id: 30d27e225d4557bfc1d9aa462086e416aa9a9c0e
2020-08-25 13:02:14 -07:00
Zachary DeVito
1f0cfbaaad [fx] add type annotations (#43083)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43083

This adds type annotations to all classes, arguments, and returns
for fx. This should make it easier to understand the code, and
encourage users of the library to also write typed code.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D23145853

Pulled By: zdevito

fbshipit-source-id: 648d91df3f9620578c1c51408003cd5152e34514
2020-08-23 15:38:33 -07:00
Zachary DeVito
b349f58c21 [fx] enabling typechecking of fx files (#43082)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43082

Fixes all present errors in mypy. Does not try to add annotations everywhere.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23145854

Pulled By: zdevito

fbshipit-source-id: 18e483ed605e89ed8125971e84da1a83128765b7
2020-08-23 15:37:29 -07:00
Zachary DeVito
4011685a8b [fx] split Node into Node/Proxy (#42991)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42991

Having Node be both a record of the operator in the graph and the way we _build_ the graph made it difficult to keep the IR data structure separate from the proxying logic used to build it.

Among other issues, this meant that typos when using nodes would add things to the graph:
```
    for node in graph.nodes:
        node.grph # does not error, returns a node.Attribute object!
```

This separates the builder into a Proxy object. Graph/Node no longer
need to understand `delegate` objects since they are now just pure IR.
This separates the `symbolic_trace` (proxy.py/symbolic_trace.py) from
the IR (node.py, graph.py).

This also allows us to add `create_arg` to the delegate object,
allowing the customization of how aggregate arguments are handled
when converting to a graph.
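
A minimal sketch of that hook (in today's API, `create_arg` lives on `Tracer`; the custom container is a made-up example):

```python
import torch.fx

class Pair:
    def __init__(self, a, b):
        self.a, self.b = a, b

class PairTracer(torch.fx.Tracer):
    def create_arg(self, a):
        # Lower our aggregate type into a tuple the graph understands.
        if isinstance(a, Pair):
            return (self.create_arg(a.a), self.create_arg(a.b))
        return super().create_arg(a)
```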

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23099786

Pulled By: zdevito

fbshipit-source-id: 6f207a8c237e5eb2f326b63b0d702c3ebcb254e4
2020-08-14 16:45:21 -07:00
James Reed
0ff0fea42b [FX] fix lint (#42866)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42866

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23056813

Pulled By: jamesr66a

fbshipit-source-id: d30cdffe6f0465223354dec00f15658eb0b08363
2020-08-11 14:01:26 -07:00
James Reed
575e7497f6 Introduce experimental FX library (#42741)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42741

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D23006383

Pulled By: jamesr66a

fbshipit-source-id: 6cb6d921981fcae47a07df581ffcf900fb8a7fe8
2020-08-11 10:01:47 -07:00