Commit Graph

208 Commits

Author SHA1 Message Date
Yidi Wu
4dc579838b Allow fx.Graph.owning_module to be used as attribute. (#86822)
Summary:
The current behavior of the owning_module setter is difficult to understand: it sets owning_module to None if the owners count is nonzero, yet still increments that count. If owning_module is None, the owners count should be 0, since no owner is accessible. Conversely, if the owners count can exceed one, owning_module should be a collection (e.g., a list).

This diff changes owning_module to be a normal attribute. The semantics are that a graph can have **at most one** owning module and can be reassigned to a new module.

The alternative is to use a list to represent the owning_modules of a graph, but that would break backward compatibility, and the exact use cases for multiple owning modules are unclear.
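A minimal sketch of the plain-attribute semantics described above (hypothetical classes, not the actual fx source): a graph has at most one owner, and reassignment simply replaces it.

```python
class Graph:
    """Hypothetical stand-in for fx.Graph with owning_module as a plain attribute."""
    def __init__(self):
        self.owning_module = None  # no owner yet

class ModuleA:
    pass

class ModuleB:
    pass

g = Graph()
g.owning_module = ModuleA()   # assign a first owner
g.owning_module = ModuleB()   # reassignment replaces the owner, no counter involved
assert isinstance(g.owning_module, ModuleB)
```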

Test Plan: Test with CI.

Differential Revision: D40200624

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86822
Approved by: https://github.com/tugsbayasgalan
2022-10-19 00:12:59 +00:00
Sherlock Huang
88b76ae9ea Store type(module) in the module stack (#87149)
- As requested by the quantization team, store type(module) in the module stack.
- Consequently, since the module stack becomes verbose, we skip printing it in gm.print_readable().
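A hypothetical sketch of the shape of such a module stack, keyed by qualified name and storing the module's class (not the actual fx data structure):

```python
import collections

class Net:
    """Hypothetical root module class."""

class Sub:
    """Hypothetical nested child module class."""

# The stack maps qualified names to module *types*, which is the verbose
# form that print_readable() would now skip printing.
module_stack = collections.OrderedDict()
module_stack[""] = Net      # root module
module_stack["sub"] = Sub   # child, keyed by its qualified name

assert list(module_stack.values()) == [Net, Sub]
```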

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87149
Approved by: https://github.com/jerryzh168, https://github.com/jansel
2022-10-18 18:12:37 +00:00
Horace He
2c1bc216b8 Fixed partitioner issue with getitem and made metadata storage more consistent (#87012)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87012
Approved by: https://github.com/ngimel
2022-10-15 17:58:55 +00:00
Horace He
b3b9786fdd Unified symbolic shape variables between AOTAutograd and Inductor (#86659)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86659
Approved by: https://github.com/wconstab
2022-10-14 00:24:43 +00:00
Sherlock Huang
a47f93b6c9 Add type and shape annotation for gm.print_readable() (#86562)
For
```
def f(a, b):
    dim0 = a.shape[0] + b.shape[0]
    dim1 = a.shape[1] + b.shape[1]
    d = a.new_empty(dim0, dim1)
    return d

fx_g = make_fx(f, tracing_mode="symbolic")(torch.randn(5, 3), torch.randn(4, 3))
fx_g.print_readable()
```

Tracing with 'real' and 'fake' modes yields
```
class f(torch.nn.Module):
    def forward(self, a_1: Tensor<f32>[5, 3], b_1: Tensor<f32>[4, 3]):

        # No stacktrace found for following nodes
        new_empty: Tensor<f32>[9, 6] = torch.ops.aten.new_empty.default(a_1, [9, 6], dtype = torch.float32, layout = torch.strided, device = device(type='cpu'), pin_memory = False);  a_1 = None
        return new_empty
```

Tracing with 'symbolic' mode yields
```
    def forward(self, a_1: Tensor<f32>[t0.size(0), t0.size(1)], b_1: Tensor<f32>[t1.size(0), t0.size(1)]):

        # No stacktrace found for following nodes
        sym_size: Symint(t0.size(0)) = torch.ops.aten.sym_size(a_1, 0)
        sym_size_1: Symint(t1.size(0)) = torch.ops.aten.sym_size(b_1, 0)
        add: Symint(t0.size(0) + t1.size(0)) = sym_size + sym_size_1;  sym_size = sym_size_1 = None
        sym_size_2: Symint(t0.size(1)) = torch.ops.aten.sym_size(a_1, 1)
        sym_size_3: Symint(t0.size(1)) = torch.ops.aten.sym_size(b_1, 1);  b_1 = None
        add_1: Symint(2*t0.size(1)) = sym_size_2 + sym_size_3;  sym_size_2 = sym_size_3 = None
        new_empty: Tensor<f32>[t0.size(0) + t1.size(0), 2*t0.size(1)] = torch.ops.aten.new_empty.default(a_1, [add, add_1], dtype = torch.float32, layout = torch.strided, device = device(type='cpu'), pin_memory = False);  a_1 = add = add_1 = None
        return new_empty
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86562
Approved by: https://github.com/Chillee
2022-10-12 05:39:54 +00:00
Jason Ansel
f1fdb6efbd Manual changes for moving dynamo to core (#86621)
This is the subset of the changes in #86461 not auto-generated by `copy_to_core.sh`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86621
Approved by: https://github.com/albanD
2022-10-11 23:01:21 +00:00
anjali411
85073b8ddc Add __all__ to fx, distributed and cuda submodules (#85080)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85080
Approved by: https://github.com/albanD
2022-09-21 18:04:58 +00:00
Sherlock Huang
bf8d5e8328 Pretty print stack trace with gm.print_readable() (#83706)
Precondition: https://github.com/pytorch/torchdynamo/pull/899

Given following function
```
def my_relu(a):
    return a.relu()

def func(a, b):
    d = torch.square(a + b)
    e = my_relu(d)
    f = d.sin()
    s = torch.stack([e, f])
    s = s.sum()
```

Here are the possible results with the various tracing frontends: dynamo, symbolic_trace, make_fx
- joint graph with torchdynamo.optimize("aot_nop")
Notice that it has a special stack entry for the gradient-addition node (for multiple uses of a tensor) in the backward pass.
Notice that "No stacktrace found for following nodes" is shown for nodes without a stacktrace.
```
def forward(self, primals, tangents):
    primals_1, primals_2, tangents_1, = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 41, in func, d = torch.square(a + b)
    add_tensor = torch.ops.aten.add.Tensor(primals_1, primals_2);  primals_1 = primals_2 = None
    pow_tensor_scalar = torch.ops.aten.pow.Tensor_Scalar(add_tensor, 2)

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 38, in my_relu, return a.relu()
    relu_default = torch.ops.aten.relu.default(pow_tensor_scalar)
    detach_default = torch.ops.aten.detach.default(relu_default)

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 43, in func, f = d.sin()
    sin_default = torch.ops.aten.sin.default(pow_tensor_scalar)

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 44, in func, s = torch.stack([e, f])
    stack_default = torch.ops.aten.stack.default([relu_default, sin_default]);  relu_default = sin_default = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 45, in func, s = s.sum()
    sum_default = torch.ops.aten.sum.default(stack_default);  stack_default = None

    # No stacktrace found for following nodes
    is_same_size_default = torch.ops.aten.is_same_size.default(sum_default, tangents_1)

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 45, in func, s = s.sum()
    expand_default = torch.ops.aten.expand.default(tangents_1, [2, 10, 10]);  tangents_1 = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 44, in func, s = torch.stack([e, f])
    unbind_int = torch.ops.aten.unbind.int(expand_default);  expand_default = None
    getitem = unbind_int[0]
    getitem_1 = unbind_int[1];  unbind_int = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 43, in func, f = d.sin()
    cos_default = torch.ops.aten.cos.default(pow_tensor_scalar);  pow_tensor_scalar = None
    mul_tensor = torch.ops.aten.mul.Tensor(getitem_1, cos_default);  getitem_1 = cos_default = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 38, in my_relu, return a.relu()
    detach_default_1 = torch.ops.aten.detach.default(detach_default);  detach_default = None
    threshold_backward_default = torch.ops.aten.threshold_backward.default(getitem, detach_default_1, 0);  getitem = detach_default_1 = None

    # Gradient addition node due to multiple use of tensor around:, File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 38, in my_relu, return a.relu()
    add_tensor_1 = torch.ops.aten.add.Tensor(mul_tensor, threshold_backward_default);  mul_tensor = threshold_backward_default = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 41, in func, d = torch.square(a + b)
    pow_tensor_scalar_1 = torch.ops.aten.pow.Tensor_Scalar(add_tensor, 1.0);  add_tensor = None
    mul_scalar = torch.ops.aten.mul.Scalar(pow_tensor_scalar_1, 2.0);  pow_tensor_scalar_1 = None
    mul_tensor_1 = torch.ops.aten.mul.Tensor(add_tensor_1, mul_scalar);  add_tensor_1 = mul_scalar = None
    sum_sym_int = torch.ops.aten.sum.SymInt(mul_tensor_1, [0], True)
    view_sym_int = torch.ops.aten.view.SymInt(sum_sym_int, [10]);  sum_sym_int = None
    return pytree.tree_unflatten([sum_default, mul_tensor_1, view_sym_int], self._out_spec)
```
- default symbolic_trace
Notice that nodes without a stacktrace are folded under the same region
```
def forward(self, a, b):

    # No stacktrace found for following nodes
    add = a + b;  a = b = None
    square = torch.square(add);  add = None
    relu = square.relu()
    sin = square.sin();  square = None
    stack = torch.stack([relu, sin]);  relu = sin = None
    sum_1 = stack.sum();  stack = None
    return sum_1
```
- symbolic_trace with record_stack_traces=True
```
def forward(self, a, b):

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 41, in func, d = torch.square(a + b)
    add = a + b;  a = b = None
    square = torch.square(add);  add = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 38, in my_relu, return a.relu()
    relu = square.relu()

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 43, in func, f = d.sin()
    sin = square.sin();  square = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 44, in func, s = torch.stack([e, f])
    stack = torch.stack([relu, sin]);  relu = sin = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 45, in func, s = s.sum()
    sum_1 = stack.sum();  stack = None
    return sum_1
```

- make_fx without decomposition
```
def forward(self, a_1, b_1):

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 41, in func, d = torch.square(a + b)
    add_tensor = torch.ops.aten.add.Tensor(a_1, b_1);  a_1 = b_1 = None
    pow_tensor_scalar = torch.ops.aten.pow.Tensor_Scalar(add_tensor, 2);  add_tensor = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 38, in my_relu, return a.relu()
    relu_default = torch.ops.aten.relu.default(pow_tensor_scalar)
    detach_default = torch.ops.aten.detach.default(relu_default)

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 43, in func, f = d.sin()
    sin_default = torch.ops.aten.sin.default(pow_tensor_scalar);  pow_tensor_scalar = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 44, in func, s = torch.stack([e, f])
    stack_default = torch.ops.aten.stack.default([relu_default, sin_default]);  relu_default = sin_default = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 45, in func, s = s.sum()
    sum_default = torch.ops.aten.sum.default(stack_default);  stack_default = None
    return sum_default
```
- make_fx with decomposition to prims
```
def forward(self, a_1, b_1):

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 41, in func, d = torch.square(a + b)
    broadcast_in_dim_default = torch.ops.prims.broadcast_in_dim.default(b_1, [10, 10], [1]);  b_1 = None
    add_default = torch.ops.prims.add.default(a_1, broadcast_in_dim_default);  a_1 = broadcast_in_dim_default = None
    mul_default = torch.ops.prims.mul.default(add_default, add_default);  add_default = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 38, in my_relu, return a.relu()
    le_default = torch.ops.prims.le.default(mul_default, 0.0)
    where_default = torch.ops.prims.where.default(le_default, 0.0, mul_default);  le_default = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 43, in func, f = d.sin()
    sin_default = torch.ops.prims.sin.default(mul_default);  mul_default = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 44, in func, s = torch.stack([e, f])
    cat_default = torch.ops.prims.cat.default([where_default, sin_default], 0);  where_default = sin_default = None
    split_dim_default = torch.ops.prims.split_dim.default(cat_default, 0, 2);  cat_default = None

    # File "/fsx/users/bahuang/repos/pytorch_fsx/test.py", line 45, in func, s = s.sum()
    convert_element_type_default = torch.ops.prims.convert_element_type.default(split_dim_default, torch.float32);  split_dim_default = None
    sum_default = torch.ops.prims.sum.default(convert_element_type_default, [0, 1, 2]);  convert_element_type_default = None
    return sum_default
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83706
Approved by: https://github.com/Chillee, https://github.com/ezyang
2022-08-24 23:00:57 +00:00
PyTorch MergeBot
5df1ce46f0 Revert "[resubmit][FX] Fix PyTree unpacking carrying forward type annotations (#81999)"
This reverts commit ce92c1cfe9.

Reverted https://github.com/pytorch/pytorch/pull/81999 on behalf of https://github.com/ZainRizvi due to test_bce_with_logits_has_correct_forward_grad consistently fails with an error that it takes 2 positional arguments but 3 were given
2022-07-26 03:29:50 +00:00
James Reed
ce92c1cfe9 [resubmit][FX] Fix PyTree unpacking carrying forward type annotations (#81999)
Differential Revision: D38077793

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81999
Approved by: https://github.com/pbelevich, https://github.com/osalpekar
2022-07-25 21:00:42 +00:00
PyTorch MergeBot
0d1710ade5 Revert "[FX] Fix PyTree unpacking carrying forward type annotations (#81906)"
This reverts commit e0d83a0bdc.

Reverted https://github.com/pytorch/pytorch/pull/81906 on behalf of https://github.com/jeanschmidt due to breaking internal builds
2022-07-22 11:11:10 +00:00
James Reed
e0d83a0bdc [FX] Fix PyTree unpacking carrying forward type annotations (#81906)
Resolves https://github.com/pytorch/pytorch/issues/81902

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81906
Approved by: https://github.com/Chillee, https://github.com/voznesenskym
2022-07-22 04:25:23 +00:00
Edward Z. Yang
031ec66311 Add warning about DCE in FX being unsound with mutation (#81818)
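The unsoundness can be illustrated with a toy users-only DCE pass (hypothetical node representation, not the real fx IR): an in-place op such as `add_` has no users of its return value, so naive DCE drops it even though removing it changes program behavior.

```python
# (name, expression, has_users) triples standing in for graph nodes.
nodes = [
    ("relu", "relu(x)", True),
    ("add_", "x.add_(1)", False),  # in-place mutation; result unused
]

# Naive users-only DCE: drops the mutation, which is unsound.
kept = [n for n in nodes if n[2]]
assert [n[0] for n in kept] == ["relu"]

# A sound pass must also keep side-effecting nodes; here we use the
# trailing-underscore naming convention as a crude mutation heuristic.
kept_sound = [n for n in nodes if n[2] or n[0].endswith("_")]
assert [n[0] for n in kept_sound] == ["relu", "add_"]
```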
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81818
Approved by: https://github.com/SherlockNoMad
2022-07-21 21:41:11 +00:00
Edward Z. Yang
1a71b83e18 Increase stack level for get_attr warning (#81041)
See also https://github.com/pytorch/pytorch/issues/60548
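As a reminder of what the stack level controls, here is a self-contained sketch (hypothetical helper, not the fx source): `stacklevel=2` attributes the warning to the caller of the helper rather than to the `warnings.warn` line itself, so the warning points at user code.

```python
import warnings

def get_attr_warning():
    # stacklevel=2 makes the reported location the *caller* of this helper.
    warnings.warn("attempted to get attribute on a non-parameter", stacklevel=2)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    get_attr_warning()

assert len(caught) == 1
assert "attribute" in str(caught[0].message)
```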

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81041
Approved by: https://github.com/SherlockNoMad
2022-07-07 16:59:39 +00:00
James Reed
7311390d35 [WIP] Make constructor calls in experimental MetaTracer serializable
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76789

Approved by: https://github.com/pbelevich
2022-05-11 00:19:47 +00:00
Michael Suo
fb0f285638 [lint] upgrade mypy to latest version
Fixes https://github.com/pytorch/pytorch/issues/75927.

Had to fix some bugs and add some ignores.

To check if clean:
```
lintrunner --paths-cmd='git grep -Il .' --take MYPY,MYPYSTRICT
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76753

Approved by: https://github.com/malfet
2022-05-03 20:51:34 +00:00
PyTorch MergeBot
3d7428d9ac Revert "[lint] upgrade mypy to latest version"
This reverts commit 9bf18aab94.

Reverted https://github.com/pytorch/pytorch/pull/76753 on behalf of https://github.com/suo
2022-05-03 20:01:18 +00:00
Michael Suo
9bf18aab94 [lint] upgrade mypy to latest version
Fixes https://github.com/pytorch/pytorch/issues/75927.

Had to fix some bugs and add some ignores.

To check if clean:
```
lintrunner --paths-cmd='git grep -Il .' --take MYPY,MYPYSTRICT
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76753

Approved by: https://github.com/malfet
2022-05-03 19:43:28 +00:00
Angela Yi
d0af05f931 [FX] Modified __deepcopy__ to also copy _codegen
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75851
Approved by: https://github.com/Chillee
2022-04-19 23:25:12 +00:00
James Reed
6a44efa888 [FX] Fix bare generic type annotations (#74135)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/74135

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D34839339

Pulled By: jamesr66a

fbshipit-source-id: fd026cab684acaae9bf7c2fa4228ed8eb7aeb788
(cherry picked from commit 3acc565324e78bbabde3f796db9f5fcc99394d6b)
2022-03-14 23:30:53 +00:00
Jordan Fix
987f146185 [fx] Improve support for tuple subclasses such as NamedTuple (#73198)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73198

Previously, if an arg to an FX node was a subclass of tuple, it was essentially sanitized back to the plain tuple base class. An example is setting an arg to a TensorMetadata object, which is a NamedTuple: it would be stored as a plain tuple instead.

- Change `map_aggregate` to repack the tuple to `type(a)` when it's not directly a tuple (try/except for best attempt)
- During codegen, call `add_global` for `type(a)` if it's not directly a tuple.
- Add an option for an arg to provide a `_custom_fx_repr_fn` for use inside stringifying via `_format_arg`
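The repacking rule from the first bullet can be sketched as follows (simplified, hypothetical code, not the actual fx source): recurse through the structure, then rebuild tuple subclasses such as NamedTuples with `type(a)` instead of collapsing them to plain tuples.

```python
from typing import NamedTuple

def map_aggregate(a, fn):
    if isinstance(a, tuple):
        mapped = tuple(map_aggregate(x, fn) for x in a)
        if type(a) is tuple:
            return mapped
        try:
            return type(a)(*mapped)   # best-effort repack (e.g. a NamedTuple)
        except TypeError:
            return mapped             # fall back to a plain tuple
    return fn(a)

class TensorMetadata(NamedTuple):
    """Hypothetical stand-in for the real TensorMetadata NamedTuple."""
    shape: tuple
    dtype: str

meta = TensorMetadata((2, 3), "float32")
out = map_aggregate(meta, lambda x: x)
assert type(out) is TensorMetadata   # subclass preserved, not a plain tuple
```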

Test Plan: Added unit test coverage, where we inline the named tuple into arg/kwarg.

Reviewed By: jamesr66a

Differential Revision: D34381888

fbshipit-source-id: bd672a8542e2bba5aa604b448bec920efc256440
(cherry picked from commit 68f99c12dd)
2022-02-23 11:31:10 +00:00
Horace He
d635d0f86e Refactor FX codegen into extensible Codegen object (#72566)
Summary:
The goal of this is to make FX's codegen extensible. I've refactored it into a class with 5 extensibility points on it.

```
class Codegen(object):
    def generate_prologue(self, free_vars: List[str], maybe_return_annotation: str) -> str:
        """
        Given the free variables and a return annotation, generates the beginning of the FX function.
        By default, `generate_prologue(['a', 'b'], '') == 'def forward(a, b):'`
        """
    def generate_output(self, output_args: Argument) -> str:
        """
        Given the output arguments, generates the return statement of the FX function.
        """
    def process_inputs(self, args: Any) -> Any:
        """
        Transforms the inputs so that the graph can take them as arguments, as
        non-default codegen may result in the inputs to the function being
        different from the inputs to the graph.

        If the graph was directly runnable, this invariant should hold true
        `f.process_outputs(f.graph(*f.process_inputs(*inputs))) == f(*inputs)`
        """
    def process_outputs(self, outputs: Any) -> Any:
        """
        Transforms the outputs of the graph to be identical to the codegen.

        See ``process_inputs`` for more details.
        """
    def additional_globals(self) -> List[Tuple[str, Any]]:
        """
        If your codegen uses extra global values, add them here.
        For example, return [('List', typing.List)] if you need ``List`` in the global context.
        """
```

So, for example, the `ListCodeGen` we want for AOTAutograd looks like this
```
        class ListCodeGen(CodeGen):
            def generate_prologue(self, free_vars, maybe_return_annotation):
                lst_unpack = f"""
def forward(self, args_list: List[torch.Tensor]){maybe_return_annotation}:
    {', '.join(free_vars)} = args_list"""
                return lst_unpack

            def additional_globals(self):
                return [('List', typing.List)]

            def process_inputs(self, *inputs):
                assert(len(inputs) == 1)
                return inputs[0]
```
and
```
        def f(a, b):
            return a + b

        nf = fx.symbolic_trace(f)
        nf.graph.set_codegen(ListCodeGen())
        nf.recompile()
        print(nf.code)
```
would result in
```
def forward(self, args_list: List[torch.Tensor]):
    a, b = args_list
    add = a + b;  a = b = None
    return add
```

Backwards compatibility changes - I added `process_outputs` and `process_inputs` to `fx.Graph`, while removing `flatten_inputs` and `flatten_outputs` - those didn't have `backwards_compatibility` on them, so I *think* it's probably fine?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72566

Reviewed By: desertfire

Differential Revision: D34160424

Pulled By: Chillee

fbshipit-source-id: ebf6411312b373e3fbcb13288a34befa449a2375
(cherry picked from commit 13cd12eaa1)
2022-02-11 18:13:29 +00:00
Shen Li
7c2eda3829 Fix fx docs (#72108)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/72108

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D33916855

Pulled By: mrshenli

fbshipit-source-id: 5fff6c87555109e43954eff99164e68a56ff95da
(cherry picked from commit 1611c4c75c)
2022-02-02 03:28:07 +00:00
Jason Ansel
567c2bb8e9 Support printing inplace operators in FX (#71887)
Summary:
Pretty-print in-place operators (`a += b`, etc.) in generated FX code. This is useful because it allows `torch.jit.script()` to parse these operators without error.

I don't believe FX tracing supports inplace ops yet, though I am generating them in torchdynamo and want to be able to lower them with torchscript.
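A sketch of the pretty-printing idea (hypothetical table and helper, not the actual fx source): map in-place operator functions to their statement form so the generated code uses `a += b` rather than a call to `operator.iadd`.

```python
import operator

inplace_symbols = {
    operator.iadd: "+=",
    operator.isub: "-=",
    operator.imul: "*=",
}

def emit_inplace(target, lhs, rhs):
    # Emit `a += b` instead of `a = operator.iadd(a, b)` so that
    # torch.jit.script can parse the generated code.
    return f"{lhs} {inplace_symbols[target]} {rhs}"

assert emit_inplace(operator.iadd, "a", "b") == "a += b"
```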

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71887

Reviewed By: jamesr66a

Differential Revision: D33806248

Pulled By: jansel

fbshipit-source-id: 5eb9f744caab2f745cefc83ea658e12e9e7a817d
(cherry picked from commit eacbd6bb83)
2022-01-27 20:35:22 +00:00
James Reed
de902b5d02 [FX] Add a default_value arg to Graph.placeholder and fix split_module (#71016)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71016

I found out that `split_module` doesn't preserve default values for arguments. In trying to fix that, I noticed that `Graph.placeholder` doesn't make it easy to add a default value when creating a placeholder. This PR addresses both issues.
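The codegen side of this can be sketched with a small hypothetical helper (not the actual Graph.placeholder code): render each placeholder, keeping its default if one is present, using `inspect.Signature.empty` as the "no default" sentinel.

```python
import inspect

def format_placeholder(name, default=inspect.Signature.empty):
    # No default: emit the bare name; otherwise emit `name = repr(default)`.
    if default is inspect.Signature.empty:
        return name
    return f"{name} = {default!r}"

params = ", ".join([format_placeholder("x"), format_placeholder("y", 3)])
header = f"def forward(self, {params}):"
assert header == "def forward(self, x, y = 3):"
```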

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D33482218

Pulled By: jamesr66a

fbshipit-source-id: 57ebcdab25d267333fb1034994e08fc1bdb128ee
2022-01-12 14:03:17 -08:00
Kefei Lu
76e9dbb0f4 [torch.fx] add code-gen customizability and support for setting breakpoint in code-gen'd forward() call (#67139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67139

This diff enables setting a breakpoint in the graph module's generated Python code. See the test plan for usage.

To support this functionality, and similar customizations of the generated code, a code-transformer hook is added to `fx.Graph`. This allows flexible customization of `fx.Graph`'s code-gen behavior in composable and functional ways. See the test plan for its usage.

Test Plan:
### Use of `fx.experimental.debug.set_trace`

```
In [2]: from torch.fx.experimental.debug import set_trace

In [3]: set_trace(ttop)
Out[3]:
top(
  (a): Sub()
)

In [4]: ttop(1)
> /data/users/kefeilu/fbsource33/fbcode/buck-out/dev/gen/caffe2/torch/fb/fx2trt/<eval_with_key>.10(6)forward()
(Pdb) l
  1
  2
  3
  4     def forward(self, x):
  5         import pdb; pdb.set_trace()
  6  ->     a = self.a(x);  x = None
  7         getitem = a[0]
  8         getitem_1 = a[0];  a = None
  9         add = getitem + getitem_1;  getitem = getitem_1 = None
 10         return add
 11
(Pdb)
```

### Use of `on_generate_code`

```
In [1]: def insert_pdb(body):
   ...:     return ['import pdb; pdb.set_trace()\n', *body]
   ...:

In [8]: type(ttop)
Out[8]: torch.fx.graph_module.GraphModule.__new__.<locals>.GraphModuleImpl

In [10]: with ttop.graph.on_generate_code(lambda _: insert_pdb):
    ...:     ttop.recompile()
    ...:     print(f"== _on_generate_code should not be None: { ttop.graph._on_generate_code }")
    ...:     print(ttop.code)
    ...:

== _on_generate_code should not be None: <function insert_pdb at 0x7fc9895ddd30>

def forward(self, x):
    import pdb; pdb.set_trace()
    a = self.a(x);  x = None
    getitem = a[0]
    getitem_1 = a[0];  a = None
    add = getitem + getitem_1;  getitem = getitem_1 = None
    return add

In [11]: ttop.graph._on_generate_code  # restored to None

In [12]: ttop(1) # this should drop into pdb
> /data/users/kefeilu/fbsource33/fbcode/buck-out/dev/gen/caffe2/torch/fb/fx2trt/<eval_with_key>.6(6)forward()
(Pdb) l
  1
  2
  3
  4     def forward(self, x):
  5         import pdb; pdb.set_trace()
  6  ->     a = self.a(x);  x = None
  7         getitem = a[0]
  8         getitem_1 = a[0];  a = None
  9         add = getitem + getitem_1;  getitem = getitem_1 = None
 10         return add
 11
```

Reviewed By: jamesr66a

Differential Revision: D30736160

fbshipit-source-id: 9646867aae0461b5131dfd4ba9ee77a8c2ea9c93
2021-11-16 13:28:11 -08:00
Shiyan Deng
e33a1fa680 [fx] give warning instead of fatal the program when submod not found during adding get_attr (#65225)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65225

Currently, when creating a get_attr node, if the attribute lives in a submodule, we first find that submodule. If the submodule isn't in the owning module, we throw an exception.

However, if the attribute itself can't be found, we give a warning but still allow creating the get_attr node. To align with this behavior, we change the reaction when the submodule is not found from a fatal error to a warning.
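A sketch of the relaxed lookup (hypothetical helper using plain dicts in place of modules, not the actual fx source): warn and return None instead of raising when the submodule is missing.

```python
import warnings

def resolve_submodule(root, qualified_name):
    mod = root
    for atom in qualified_name.split("."):
        if atom not in mod:
            warnings.warn(f"submodule {qualified_name!r} not found; "
                          "creating the get_attr node anyway")
            return None   # warn instead of raising
        mod = mod[atom]
    return mod

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = resolve_submodule({"a": {"b": {}}}, "a.missing")

assert result is None
assert len(caught) == 1
```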

Test Plan: CI

Reviewed By: jamesr66a, jfix71

Differential Revision: D31021535

fbshipit-source-id: 4c0b471448c09cc927d0f47b5bf56594f25a8863
2021-09-20 14:35:52 -07:00
Jason Ansel
487c771593 [FX] Fix tracing of bitwise and/or (#65196)
Summary:
Previously this resulted in `AttributeError: module 'operator' has no attribute 'and'`.

`and`/`or` are Python keywords, so the corresponding functions are named `operator.and_` and `operator.or_`.
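The naming convention can be verified directly; note these are the *bitwise* operators (the short-circuiting boolean `and`/`or` have no function form at all).

```python
import operator

# 'and' is a keyword, so there is no `operator.and` attribute.
assert not hasattr(operator, "and")

# The underscore-suffixed names are the bitwise operators.
assert operator.and_(0b110, 0b011) == 0b010
assert operator.or_(0b110, 0b011) == 0b111
```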

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65196

Reviewed By: Chillee

Differential Revision: D31020336

Pulled By: jansel

fbshipit-source-id: 51d888151fe78c0c1197ecaf161976b219c59694
2021-09-17 14:33:02 -07:00
Horace He
35413a16f7 Add __matmul__ to the magic methods for FX tracing (#64512)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/64483
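The `@` operator dispatches through `__matmul__` / `operator.matmul`, which is what makes it interceptable by a proxy. A tiny recording class illustrates the mechanism (hypothetical class, not the fx Proxy implementation):

```python
import operator

class Rec:
    """Tiny proxy-like class: defining __matmul__ makes `@` recordable."""
    def __init__(self, name):
        self.name = name
    def __matmul__(self, other):
        return Rec(f"matmul({self.name}, {other.name})")

# operator.matmul uses the same dispatch as the infix form Rec("a") @ Rec("b").
out = operator.matmul(Rec("a"), Rec("b"))
assert out.name == "matmul(a, b)"
```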

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64512

Reviewed By: mrshenli

Differential Revision: D30797265

Pulled By: Chillee

fbshipit-source-id: 7630e048a960e0b27c4309d04d85301abe325189
2021-09-08 10:03:48 -07:00
Patrick Hu
c6505cc383 [FX] Fix python code generation for wrapped getattr() with default value (#64271)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64271

Closes #60417

Modified emit_node() in fx/graph.py to generate a getattr() call with the default value when len(node.args) != 2, instead of emitting plain attribute access.
Added test_torch_fx_getattr() in test/test_fx.py.
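The emit rule can be sketched as a small hypothetical helper (simplified, not the actual emit_node code): two args produce plain attribute access, while three args produce a `getattr()` call that preserves the default value.

```python
def emit_getattr(args):
    if len(args) == 2:
        obj, attr = args
        return f"{obj}.{attr}"          # plain attribute access
    obj, attr, default = args
    return f"getattr({obj}, {attr!r}, {default!r})"  # keep the default

assert emit_getattr(("x", "shape")) == "x.shape"
assert emit_getattr(("x", "foo", None)) == "getattr(x, 'foo', None)"
```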

Test Plan:
pytest test/test_fx.py

Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D30671265

fbshipit-source-id: f2db9ea47e0cb247547e200684f715aab006c374
2021-09-01 11:30:27 -07:00
James Reed
538647fe1f [WIP][FX] BC guarantees for 1.10 (#63888)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63888

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D30523133

Pulled By: jamesr66a

fbshipit-source-id: b04cc0d842a74862f42ecba98b757310cd2ec7b0
2021-08-30 19:56:46 -07:00
Patrick Hu
18cb3fc910 [FX] Validate data type of target on Node Construction (#64050)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64050

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D30585535

Pulled By: yqhu

fbshipit-source-id: 96778a87e75f510b4ef42f0e5cf76b35b7b2f331
2021-08-27 13:40:57 -07:00
Bradley Davis
011fdc3b7e [fx] persist tracer_cls on fx.Graph when deep copying (#63353)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63353

The custom deepcopy method copies all nodes but does not copy the tracer_cls attribute.

Reviewed By: houseroad

Differential Revision: D30349424

fbshipit-source-id: 3e98bdac8a8a992eb0b4ec67fe80bb2e5cf3884d
2021-08-17 09:57:48 -07:00
Bradley Davis
7a1ab9f5d7 [fx] store Tracer class on Graph and GraphModule for package deserialization [v2, the re-do] (#63121)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63121

Re-introducing this diff with a small change: skip setting Tracer classes on GraphModules when the Tracer class is not defined at module level (which would prevent pickling).

Previously reverted pull request: https://github.com/pytorch/pytorch/pull/62497

Reviewed By: houseroad

Differential Revision: D30252776

fbshipit-source-id: 42d2bc846e4b32d00563419c38c02b63cd0986e6
2021-08-12 17:28:50 -07:00
Lu Fang
847a7cfa10 Back out "[fx] store Tracer class on Graph and GraphModule for package deserialization" (#63053)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63053

Original commit changeset: eca09424ad30

The original diff - D30019214 (6286d33878) breaks the publish flow in model saving.

Test Plan: ci

Differential Revision: D30236517

fbshipit-source-id: 3e05db02fc1cbbc2ed262c83bf56d555277abb34
2021-08-10 21:58:08 -07:00
Bradley Davis
6286d33878 [fx] store Tracer class on Graph and GraphModule for package deserialization (#62497)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62497

Previously named: add support for custom tracer in __reduce_package__

Stores a Tracer class on a Graph created by a Tracer, and copies the Tracer class into the GraphModule's state so that when a GraphModule is packaged by torch.package, it can be reconstructed with the same Tracer and GraphModule class name.

Reviewed By: suo

Differential Revision: D30019214

fbshipit-source-id: eca09424ad30feb93524d481268b066ea55b892a
2021-08-09 13:07:30 -07:00
Zeina Migeed
07a91f1cfd fix graph deepcopy to propagate output type (#61747)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61747

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D29737565

Pulled By: migeed-z

fbshipit-source-id: 8583f0c87f2db27695e062f59a15de77f3b00fd6
2021-07-21 23:53:03 -07:00
Bradley Davis
1f4bba77b6 [fx] fix subgraph API call_module warning about no owning module (#61463)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61463

Seems like a small oversight(?): the current test fails when warnings are recorded. Discovered this when calling `graph.call_module(existing_call_module_node.target)`, which raised a warning.

Test Plan: `buck test //caffe2/test:fx`

Reviewed By: ansley

Differential Revision: D29637799

fbshipit-source-id: 2305629863230235f76a926fe2e4de480cbf853c
2021-07-09 15:25:44 -07:00
Ansley Ussery
5268b5a29a Add parsing logic for Tuple[()] annotation (#58340)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58340

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D28459502

Pulled By: ansley

fbshipit-source-id: 4bb188448d66269b42b068858b895debac86e9ee
2021-05-25 12:12:43 -07:00
James Reed
7b73fdf597 [FX] Fix retracing wrapped functions (#58061)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58061

Test Plan: Imported from OSS

Reviewed By: yuhc

Differential Revision: D28358801

Pulled By: jamesr66a

fbshipit-source-id: c7c9a8a80e5bfe1eb1f6d2cf858ac7e57153a860
2021-05-17 19:50:16 -07:00
Horace He
8d363d37da [FX] Adds PyTree support to FX through concrete_args (#55888)
Summary:
```
class Foo(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, y, x):
        for k in x:
            for v in x[k]:
                v += y
        return x

example_dict = {'x': {'a': [fx.HOLE], 'z': [fx.HOLE, fx.HOLE]}}
new_f = fx.symbolic_trace(Foo(), concrete_args=example_dict)
print(new_f.code)
new_f(torch.randn(5), {'x': {'a': [torch.randn(5)], 'z': [torch.randn(5), torch.randn(5)]}})

fx.symbolic_trace(new_f, concrete_args=example_dict)
```

prints out
```
def forward(self, y, x):
    y, tree_2, tree_3, tree_4 = pytree.tree_flatten([y, x])[0]
    add = tree_2 + y
    add_1 = tree_3 + y
    add_2 = tree_4 + y;  y = None
    return {'a': [tree_2], 'z': [tree_3, tree_4]}
```

Currently, I store `in_spec` as an extra attribute on `fx.Graph`, and then include it when we do the codegen. I'm not sure if this is the right approach - it introduces a divergence between what's in `fx.Graph` and what's in the python code.

Perhaps the best API is something explicit like `fx.Graph.flatten_args`, but that does make calling things a bit ... more verbose.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55888

Reviewed By: jamesr66a

Differential Revision: D27884694

Pulled By: Chillee

fbshipit-source-id: f9e8a70c63a8df63c9f9bd0a6459255daa5a8df8
2021-05-07 04:48:35 -07:00
Horace He
86b061c80e [FX] Changes in order to move python key out of tree (#57427)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57427

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D28215322

Pulled By: Chillee

fbshipit-source-id: 94439376097c74f2004e6eca214d7940df20865d
2021-05-05 20:55:51 -07:00
Ansley Ussery
233f2cd29f Maintain submodule references during subgraph rewriting (#55463)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55463

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D27621650

Pulled By: ansley

fbshipit-source-id: e3558c64cdc2c1d846355fa58307a18c0714874b
2021-04-30 16:46:44 -07:00
Sam Estep
e3900d2ba5 Add lint for unqualified noqa (#56272)
Summary:
As this diff shows, currently there are a couple hundred instances of raw `noqa` in the codebase, which just ignore all errors on a given line. That isn't great, so this PR changes all existing instances of that antipattern to qualify the `noqa` with respect to a specific error code, and adds a lint to prevent more of this from happening in the future.

Interestingly, some of the examples the `noqa` lint catches are genuine attempts to qualify the `noqa` with a specific error code, such as these two:
```
test/jit/test_misc.py:27:            print(f"{hello + ' ' + test}, I'm a {test}") # noqa E999
test/jit/test_misc.py:28:            print(f"format blank") # noqa F541
```
However, those are still wrong because they are [missing a colon](https://flake8.pycqa.org/en/3.9.1/user/violations.html#in-line-ignoring-errors), which actually causes the error code to be completely ignored:

- If you change them to anything else, the warnings will still be suppressed.
- If you add the necessary colons then it is revealed that `E261` was also being suppressed, unintentionally:
  ```
  test/jit/test_misc.py:27:57: E261 at least two spaces before inline comment
  test/jit/test_misc.py:28:35: E261 at least two spaces before inline comment
  ```
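The colon distinction above can be checked mechanically. Here is a rough sketch of such a check (the regexes are illustrative, not the actual lint's implementation):

```python
import re

# Flag `noqa` comments that lack a `: CODE` qualifier. Per flake8's rules,
# "# noqa E501" (no colon) suppresses *all* errors, while "# noqa: E501"
# suppresses only E501.
QUALIFIED = re.compile(r"#\s*noqa:\s*[A-Z]+[0-9]+", re.IGNORECASE)
ANY_NOQA = re.compile(r"#\s*noqa", re.IGNORECASE)

def unqualified_noqa(line):
    return bool(ANY_NOQA.search(line)) and not bool(QUALIFIED.search(line))

print(unqualified_noqa("x = 1  # noqa"))        # True: suppresses everything
print(unqualified_noqa("x = 1  # noqa E501"))   # True: missing colon, code ignored
print(unqualified_noqa("x = 1  # noqa: E501"))  # False: properly qualified
```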

I did try using [flake8-noqa](https://pypi.org/project/flake8-noqa/) instead of a custom `git grep` lint, but it didn't seem to work. This PR is definitely missing some of the functionality that flake8-noqa is supposed to provide, though, so if someone can figure out how to use it, we should do that instead.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56272

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI run (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2365189927

Reviewed By: janeyx99

Differential Revision: D27830127

Pulled By: samestep

fbshipit-source-id: d6dcf4f945ebd18cd76c46a07f3b408296864fcb
2021-04-19 13:16:18 -07:00
James Reed
ad823888a1 [FX] Speed up _Namespace.create_name (#55580)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55580

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27641156

Pulled By: jamesr66a

fbshipit-source-id: d2443d41c8d84dddb1794a7901e2d09ae3639846
2021-04-08 10:59:42 -07:00
James Reed
bcb4583170 [FX] Add a metadata dict to Node and switch shapeprop to use that (#54926)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54926

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27417801

Pulled By: jamesr66a

fbshipit-source-id: 68a5155120a235065f58aa64ba1a6a97818dd0c1
2021-03-31 14:36:54 -07:00
James Reed
a27f46bbe3 [FX] Experimental type annotation pass using Python signatures (#53831)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53831

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26982804

Pulled By: jamesr66a

fbshipit-source-id: 17db9f71e729206f29ee231e34723d9616f128b7
2021-03-17 20:43:17 -07:00
Jordan Fix
5b52ff6c8e [fx] Add DCE pass (#52658)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52658

DCE will reverse iterate over the graph looking for nodes without users and delete them. It will skip over unused placeholders (since this affects the signature of the method) and outputs (which never have users but we want to keep them :) )
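The strategy above can be sketched on a toy node list (the `Node` class and helpers here are illustrative stand-ins, not the actual FX API):

```python
# Minimal sketch of the DCE strategy: reverse-iterate, delete nodes with no
# users, and skip placeholders (signature) and outputs (always user-less).
class Node:
    def __init__(self, op, name, inputs=()):
        self.op = op          # 'placeholder', 'call_function', or 'output'
        self.name = name
        self.inputs = list(inputs)

    def users(self, graph):
        return [n for n in graph if self in n.inputs]

def dead_code_elimination(graph):
    # Reverse order so removing a node can expose its inputs as dead too.
    for node in reversed(list(graph)):
        if node.op in ('placeholder', 'output'):
            continue  # keep the method signature and outputs intact
        if not node.users(graph):
            graph.remove(node)
    return graph

a = Node('placeholder', 'a')
dead = Node('call_function', 'unused', [a])
out = Node('output', 'output', [a])
g = dead_code_elimination([a, dead, out])
print([n.name for n in g])  # ['a', 'output']
```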

Test Plan: Added unit tests

Reviewed By: jamesr66a, khabinov, chenccfb

Differential Revision: D26602212

fbshipit-source-id: f4f196973e40546076636090bb0008c24f33795e
2021-03-08 19:54:56 -08:00
Sam Estep
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
Ansley Ussery
85109ce427 Support submodule manipulation in GraphModule (#52358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52358

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26759260

Pulled By: ansley

fbshipit-source-id: 25d2b9124a7d957704f1700a45dca143aaed391d
2021-03-04 14:52:35 -08:00
James Reed
51d8543ac7 [FX] Use precompiled regex in graph name processing (#52853)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52853

ghstack-source-id: 122531132

Test Plan: waitforsadcastle

Reviewed By: anjali411

Differential Revision: D26668527

fbshipit-source-id: bd34d860cd3a71d3b29f2430df97a0501d542f5b
2021-02-25 17:21:38 -08:00
Michael Suo
958d9a8364 [fx/package] make GraphModules packageable (#51976)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51976

FX serializes things by serializing Python code as a string and exec'ing
it on load. This accomplishes one goal (we don't have to pickle the
graph object directly) but breaks the pickle abstraction in ways that
are not composable with `torch.package`.

In particular:
1. `forward` is serialized by saving Python code. On load, it's installed by
`exec`ing that code. This `exec` call needs to have the right importer
installed, otherwise it will not import modules from the `torch.package` but
instead import from the Python environment.
2. Any types/functions used are emitted as `import` statement in the
generated Python code. These are effectively dynamic dependencies of the
`GraphModule` being saved, and need to be registered as such so that the
`PackageImporter` will package them.

To address these, this PR introduces a new protocol for the
importer/exporter: `__reduce_package__`.

A class can implement `__reduce_package__` to customize how it is placed
in the importer/exproter. It functions very similarly to `__reduce__`,
except:
- `__reduce_package__` takes one argument, which is the
`PackageExporter`
instance. Users can use this instance to save stuff to the package to
implement their serialization. `__reduce__` takes no args.
- Only the 2-element tuple version of the return value for `__reduce__`
is supported (this could be extended if necessary).
- When the reduction function is called on load, an additional argument
is added to the beginning of the args tuple. This is the
`PackageImporter`
instance doing the loading.

The `__reduce_package__` protocol is defined using `persistent_id` and
`persistent_load`, which ensures that we can still use the cpickle
implementation of the pickler by default.
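The `persistent_id`/`persistent_load` layering can be sketched with the stdlib pickle hooks directly (the `Blob` class and the in-memory `storage` side-channel are illustrative stand-ins, not torch.package's actual implementation):

```python
import io
import pickle

class Blob:
    """A type that wants custom packaging rather than plain pickling."""
    def __init__(self, payload):
        self.payload = payload

class Exporter(pickle.Pickler):
    def __init__(self, file):
        super().__init__(file)
        self.storage = {}          # stands in for files written to the package
    def persistent_id(self, obj):
        if isinstance(obj, Blob):  # custom serialization path
            key = f"blob_{len(self.storage)}"
            self.storage[key] = obj.payload
            return key             # only this key enters the pickle stream
        return None                # everything else pickles normally

class Importer(pickle.Unpickler):
    def __init__(self, file, storage):
        super().__init__(file)
        self.storage = storage
    def persistent_load(self, pid):
        # The importer reconstructs the object from what was packaged.
        return Blob(self.storage[pid])

buf = io.BytesIO()
exporter = Exporter(buf)
exporter.dump({'weights': Blob(b'\x00\x01')})
buf.seek(0)
restored = Importer(buf, exporter.storage).load()
print(restored['weights'].payload)  # b'\x00\x01'
```

Because the hooks only add a side-channel, the default (C-accelerated) pickler keeps working, which is the point made above.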

Pull Request resolved: #51971

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D26340591

Pulled By: suo

fbshipit-source-id: 5872a7d22e832056399a7372bae8a57807717882
2021-02-23 22:43:00 -08:00
Michael Suo
ecf3ca00d8 [fx] Separate globals assignment from code generation (#51974)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51974

Right now, when an FX `Graph` references an external object, we will emit
code like:

    import foo
    def forward(input: foo.bar.baz):
        ...

This is problematic in a world with `torch.package`, since the name
`foo.bar.baz` may reference a name from any number of packages.

This PR lays the groundwork for FX-package integration by separating the
resolution of external references from the generation of the function
code.

When generating a Graph's Python source, we keep track of all external
references and assign them unique names. At the end, we have a
dictionary mapping names -> actual objects. This becomes the `globals`
namespace we pass to `exec` when installing the forward function in a
`GraphModule`. This is nice because we can always be sure that `exec` is
seeing the same objects that were referenced from the `Graph`, no import
statements needed.
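The core of this scheme is small enough to sketch: the generated source refers to a uniquified name, and we supply the actual object via the globals dict passed to `exec` (`foo_bar_baz` is the example name from above, bound here to a stand-in object):

```python
import math

# Generated code references a placeholder name, not an import.
generated_src = """
def forward(x):
    return foo_bar_baz(x)
"""

globals_ns = {'foo_bar_baz': math.sqrt}  # unique name -> actual object
exec(compile(generated_src, '<fx-generated>', 'exec'), globals_ns)
forward = globals_ns['forward']
print(forward(9.0))  # 3.0
```

Since `forward.__globals__` is exactly `globals_ns`, the function is guaranteed to see the same objects the graph referenced.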

At serialization time, we use a `ModuleEnv` to resolve the globals dict
to a set of import statements that can be run to reproduce the `globals`
namespace. This is only used on serialization/deserialization, and those
functions are expected to check that the import statements are producing
the correct results.

Concretely, the code above will now look like:

    from foo.bar import baz as foo_bar_baz
    def forward(input: foo_bar_baz):
        ...

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26340593

Pulled By: suo

fbshipit-source-id: fe247f75205d0a03fd067bdd0f95491e8edf1436
2021-02-23 13:48:03 -08:00
Ansley Ussery
215d9daceb Refactor internal methods into debugging utilities (#51737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51737

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26288613

Pulled By: ansley

fbshipit-source-id: 4504b1af5be7a200c1a6a376d432d7224eb8a796
2021-02-05 21:42:18 -08:00
Ansley Ussery
7494f0233a snake_case FX IR names (#50876)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50876

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D26002640

Pulled By: ansley

fbshipit-source-id: 4de8a63ef227ae3d46fab231f739c8472289ca4d
2021-01-21 22:25:57 -08:00
Ansley Ussery
7f22af13b9 Add alternative prettyprinting method to Graph (#50878)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50878

Test Plan: Imported from OSS

Reviewed By: SplitInfinity, eellison

Differential Revision: D26009183

Pulled By: ansley

fbshipit-source-id: 300913ea634d9a0e5b00deb831154ef126ad4180
2021-01-21 22:15:56 -08:00
James Reed
5205cc1c62 [FX] Fix NoneType annotation in generated code (#50777)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50777

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D25966026

Pulled By: jamesr66a

fbshipit-source-id: 8e36521eee03eade7e1b602e801229c085b03488
2021-01-19 23:16:58 -08:00
James Reed
21542b43a8 [FX] Update docstring code/graph printout (#50396)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50396

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D25874253

Pulled By: jamesr66a

fbshipit-source-id: 6217eadbcbe823db14df25070eef411e184c2273
2021-01-13 15:08:20 -08:00
James Reed
d390e3d8b9 [FX] Make graph target printouts more user-friendly (#50296)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50296

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25855288

Pulled By: jamesr66a

fbshipit-source-id: dd725980fc492526861c2ec234050fbdb814caa8
2021-01-11 11:45:20 -08:00
James Reed
eb8003d8e9 [FX] Remove extraneous newlines at end of code (#50117)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50117

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D25791847

Pulled By: jamesr66a

fbshipit-source-id: 9c0b296e117e6bcf69ed9624ad0b243fa3db0f76
2021-01-06 15:47:37 -08:00
Brandon Lin
c51455a7bb [FX] fix Graph python_code return type annotation (#49931)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49931

This fixes #49932. The `maybe_return_annotation` was not being passed by reference, so it was never getting modified.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D25725582

Pulled By: esqu1

fbshipit-source-id: 4136ff169a269d6b98f0b8e14d95d19e7c7cfa71
2021-01-04 19:55:33 -08:00
James Reed
11598da229 [FX] Fix python code having spurious newlines from placeholders (#49720)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49720

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25675825

Pulled By: jamesr66a

fbshipit-source-id: a9028acad9c8feb877fff5cd09aedabed52a3f4b
2020-12-21 21:41:24 -08:00
James Reed
c9e052130a [FX] Enforce args is tuple and kwargs is dict (#49526)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49526

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D25606115

Pulled By: jamesr66a

fbshipit-source-id: f2a21d02a2cf8c08cbd618efc5a6a28d34806851
2020-12-18 10:21:19 -08:00
James Reed
778006918c [WIP][FX] Add FX page to docs (#48814)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48814

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D25320051

Pulled By: jamesr66a

fbshipit-source-id: b1fdec9615a7a4eb97c557bb3cba7f90b0a4d933
2020-12-15 09:48:29 -08:00
Jordan Fix
38ed398580 [fx] Add constant folding pass (#48443)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48443

Add a constant folding pass in FX:
- Iterate over an input graph and tag which nodes are fully constant, i.e. either `get_attr` nodes, or nodes whose inputs are all either `get_attr` or constant
- Use `model_transform.split_by_tags()` to split the graph into two
- Look for the `output` node in the constant graph to get names of attrs that will be folded
- Iterate over the non-constant graph and replace placeholders that are using the same name as the attrs with a `get_attr` as well as a dummy attr on the module
- Return these two graphs in a new `FoldedGraphModule`, which is a normal GraphModule but also stores the constant graph on the side along with a `run_folding()` method that will run const folding and update the dummy parameters with the actual folded parameters
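The tagging step in the first bullet can be sketched on toy nodes (classes and names are illustrative, not the actual pass):

```python
# A node is constant if it is a get_attr, or if every one of its inputs is
# already known to be constant. Nodes are assumed to be in topological order.
class N:
    def __init__(self, op, name, inputs=()):
        self.op = op
        self.name = name
        self.inputs = list(inputs)

def tag_constants(nodes):
    const = set()
    for n in nodes:
        if n.op == 'get_attr':
            const.add(n.name)
        elif n.op not in ('placeholder', 'output') and n.inputs and all(
                i.name in const for i in n.inputs):
            const.add(n.name)
    return const

w = N('get_attr', 'w')
b = N('get_attr', 'b')
wb = N('call_function', 'w_plus_b', [w, b])  # fully constant subexpression
x = N('placeholder', 'x')
y = N('call_function', 'y', [x, wb])         # depends on a runtime input
print(sorted(tag_constants([w, b, wb, x, y])))  # ['b', 'w', 'w_plus_b']
```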

Test Plan: Added a couple tests

Reviewed By: 842974287

Differential Revision: D25033996

fbshipit-source-id: 589c036751ea91bb8155d9be98af7dbc0552ea19
2020-12-13 18:06:07 -08:00
James Reed
53aa9b8c82 [FX] Move none assignments to same line (#49209)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49209

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D25484975

Pulled By: jamesr66a

fbshipit-source-id: 44207be878f95ec9420e87af79833191d5cc0c7e
2020-12-11 15:45:40 -08:00
James Reed
c92c8598a3 [FX][2/2] Make docstrings pretty when rendered (#48871)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48871

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D25351588

Pulled By: jamesr66a

fbshipit-source-id: 4c6fd341100594c204a35d6a3aab756e3e22297b
2020-12-08 11:14:43 -08:00
James Reed
ae9f39eb58 [FX][1/2] Make docstrings pretty when rendered (#48738)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48738

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25280867

Pulled By: jamesr66a

fbshipit-source-id: d08641c19a6c69b4042389c800a48e699f0be628
2020-12-05 17:23:40 -08:00
James Reed
f7986969af [FX] Delete values after their last use (#48631)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48631

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25235981

Pulled By: jamesr66a

fbshipit-source-id: f79d8873d3ad1ad90b5bd6367fc6119925f116e9
2020-12-01 17:20:49 -08:00
James Reed
4316bf98f5 [FX] Refactor unique name handling (#48205)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48205

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25068934

Pulled By: jamesr66a

fbshipit-source-id: 04e02bbfd2cc9a8c3b963d9afdf40bac065c319b
2020-11-18 21:56:52 -08:00
Ansley Ussery
9443150549 Update Graph docstring to match __init__.py (#48100)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48100

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D25023407

Pulled By: ansley

fbshipit-source-id: e00706059b4c684451d2e1e48ca634b42693c1e1
2020-11-17 10:52:28 -08:00
James Reed
dbfee42a7d [FX] Fix uses not updating when erasing a node (#47720)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47720

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24875880

Pulled By: jamesr66a

fbshipit-source-id: aae9ffd10f8085b599e7923152287c6e6950ff49
2020-11-11 11:02:15 -08:00
James Reed
d1351c66a8 [FX] Add a bunch of docstrings (#47719)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47719

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24875400

Pulled By: jamesr66a

fbshipit-source-id: a1dd43d2eee914a441eff43c4f2efe61a399e8a5
2020-11-11 10:59:57 -08:00
Ansley Ussery
4cb73f5a4c Allow for string literal return during symbolic tracing (#47618)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47618

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D24870422

Pulled By: ansley

fbshipit-source-id: 41c56c2f4f1f7bb360cea0fb346f6e4d495f5c2b
2020-11-11 08:54:39 -08:00
Ansley Ussery
e914a1b976 Support default args in symbolic tracing (#47615)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47615

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D24865060

Pulled By: ansley

fbshipit-source-id: 32ff105a1fa9c4a8f00adc20e8d40d1b6bd7157f
2020-11-10 18:57:00 -08:00
Garret Catron
497cd2506f Add serialize GraphModule to JSON support (#47612)
Summary:
Re-opening this PR; the mypy issues missed earlier are now addressed.
Example:

```
class TestModule(torch.nn.Module):
            def __init__(self):
                super().__init__()
                self.linear = torch.nn.Linear(4, 4)
                self.e = torch.rand(4)

            def forward(self, a, b):
                add_1 = a + b
                linear = self.linear(add_1)
                add_2 = linear + self.e
                return add_2
```
JSON:

```
{
    "modules": {},
    "weights": {
        "linear.weight": {
            "dtype": "torch.float32",
            "is_quantized": false,
            "shape": "[4, 4]"
        },
        "linear.bias": {
            "dtype": "torch.float32",
            "is_quantized": false,
            "shape": "[4]"
        },
        "e": {
            "dtype": "torch.float32",
            "is_quantized": false,
            "shape": "[4]"
        }
    },
    "nodes": [
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "a",
            "op_code": "placeholder",
            "name": "a",
            "args": [],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "b",
            "op_code": "placeholder",
            "name": "b",
            "args": [],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "_operator.add",
            "op_code": "call_function",
            "name": "add_1",
            "args": [
                {
                    "is_node": true,
                    "name": "a"
                },
                {
                    "is_node": true,
                    "name": "b"
                }
            ],
            "kwargs": {}
        },
        {
            "target": "linear",
            "op_code": "call_module",
            "name": "linear_1",
            "args": [
                {
                    "is_node": true,
                    "name": "add_1"
                }
            ],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "e",
            "op_code": "get_attr",
            "name": "e",
            "args": [],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "_operator.add",
            "op_code": "call_function",
            "name": "add_2",
            "args": [
                {
                    "is_node": true,
                    "name": "linear_1"
                },
                {
                    "is_node": true,
                    "name": "e"
                }
            ],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "output",
            "op_code": "output",
            "name": "output",
            "args": [
                {
                    "is_node": true,
                    "name": "add_2"
                }
            ],
            "kwargs": {}
        }
    ]
}
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47612

Reviewed By: scottxu0730

Differential Revision: D24836223

Pulled By: gcatron

fbshipit-source-id: d3da2b5f90d143beba3b7f1f67462fb7430df906
2020-11-10 11:54:02 -08:00
Zachary DeVito
70d34718b8 [fx] add missing modules for type annotations (#47537)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47537

When a module only appears in a type constructor such as `List[torch.Tensor]`,
it previously didn't get added to the list of used modules. This fixes it
by introspecting the type constructor.
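The introspection step can be sketched with the stdlib typing helpers (the helper and the use of `fractions` are illustrative; the real pass inspects annotations like `List[torch.Tensor]`):

```python
import fractions
from typing import List, get_args

def modules_used(ann):
    """Collect the modules referenced anywhere inside a type annotation."""
    mods = set()
    for arg in get_args(ann):  # recurse into List[...], Dict[...], etc.
        mods |= modules_used(arg)
    mod = getattr(ann, '__module__', None)
    if mod not in (None, 'builtins', 'typing'):
        mods.add(mod)
    return mods

print(modules_used(List[fractions.Fraction]))  # {'fractions'}
```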

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D24806317

Pulled By: zdevito

fbshipit-source-id: 263391af71e1f2156cbefaab95b9818c6b9aaae1
2020-11-09 11:36:36 -08:00
Nikita Shulga
6248e0621c Revert D24801481: [pytorch][PR] Add AcceleratedGraphModule and serialize GraphModule to JSON
Test Plan: revert-hammer

Differential Revision:
D24801481 (9e0102c10f)

Original commit changeset: 6b3fe69b51f7

fbshipit-source-id: f8287ef88b302e0f08d58090dc61603a4ef5cb3c
2020-11-09 08:28:22 -08:00
Garret Catron
9e0102c10f Add AcceleratedGraphModule and serialize GraphModule to JSON (#47233)
Summary:
Example:
```
class TestModule(torch.nn.Module):
            def __init__(self):
                super().__init__()
                self.linear = torch.nn.Linear(4, 4)
                self.e = torch.rand(4)

            def forward(self, a, b):
                add_1 = a + b
                linear = self.linear(add_1)
                add_2 = linear + self.e
                return add_2
```
JSON:
```
{
    "modules": {},
    "weights": {
        "linear.weight": {
            "dtype": "torch.float32",
            "is_quantized": false,
            "shape": "[4, 4]"
        },
        "linear.bias": {
            "dtype": "torch.float32",
            "is_quantized": false,
            "shape": "[4]"
        },
        "e": {
            "dtype": "torch.float32",
            "is_quantized": false,
            "shape": "[4]"
        }
    },
    "nodes": [
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "a",
            "op_code": "placeholder",
            "name": "a",
            "args": [],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "b",
            "op_code": "placeholder",
            "name": "b",
            "args": [],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "_operator.add",
            "op_code": "call_function",
            "name": "add_1",
            "args": [
                {
                    "is_node": true,
                    "name": "a"
                },
                {
                    "is_node": true,
                    "name": "b"
                }
            ],
            "kwargs": {}
        },
        {
            "target": "linear",
            "op_code": "call_module",
            "name": "linear_1",
            "args": [
                {
                    "is_node": true,
                    "name": "add_1"
                }
            ],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "e",
            "op_code": "get_attr",
            "name": "e",
            "args": [],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "_operator.add",
            "op_code": "call_function",
            "name": "add_2",
            "args": [
                {
                    "is_node": true,
                    "name": "linear_1"
                },
                {
                    "is_node": true,
                    "name": "e"
                }
            ],
            "kwargs": {}
        },
        {
            "shape": "[4]",
            "dtype": "torch.float32",
            "target": "output",
            "op_code": "output",
            "name": "output",
            "args": [
                {
                    "is_node": true,
                    "name": "add_2"
                }
            ],
            "kwargs": {}
        }
    ]
}
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47233

Reviewed By: jackm321, yinghai

Differential Revision: D24801481

Pulled By: gcatron

fbshipit-source-id: 6b3fe69b51f7ac57f445675acdac36b0e563f73d
2020-11-08 19:26:02 -08:00
James Reed
d0df29ac22 [FX] Put inf and nan in globals instead of with an import string (#47035)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47035

Chillee thought the `from math import inf, nan` string at the top of `.code` was annoying, so here's an alternative: put those values in `globals` before we `exec`.
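A minimal sketch of the approach (the generated source here is a stand-in, not actual FX output):

```python
import math

# Seed the exec globals with inf/nan instead of emitting
# "from math import inf, nan" inside the generated code.
src = "def forward(x):\n    return x + inf"
ns = {'inf': math.inf, 'nan': math.nan}
exec(src, ns)
print(ns['forward'](1.0))  # inf
```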

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D24611278

Pulled By: jamesr66a

fbshipit-source-id: c25ef89e649bdd3e79fe91aea945a30fa7106961
2020-10-29 00:35:41 -07:00
James Reed
069232a574 [FX] Fix corner case in name sanitization (#46958)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46958

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D24580474

Pulled By: jamesr66a

fbshipit-source-id: 2f8d252998c72e1e79d6a5f7766c2d51a271cc83
2020-10-28 10:22:33 -07:00
James Reed
67c1dc65a3 [FX] Fix handling of inf and nan literals (#46894)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46894

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24555136

Pulled By: jamesr66a

fbshipit-source-id: 22765a4d9d373711e9e6d7b1d3898080ecbcf2f5
2020-10-27 17:55:35 -07:00
James Reed
2700932ef2 [FX] Fix recursion depth issue on Graph deepcopy (#46669)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46669

Make `Graph`'s deepcopy behavior iterative rather than recursive. This prevents stack overflow issues with very large `Graph`s.
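The recursive-to-iterative idea can be sketched on a toy linked structure (this is not FX's actual `Graph`; it just shows why a plain loop survives depths that recursion cannot):

```python
import sys

class Cell:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def copy_chain(head):
    # Iterative walk: depth is irrelevant, so arbitrarily long chains copy
    # fine, whereas a recursive copy would hit the recursion limit.
    new_head = None
    tail = None
    cur = head
    while cur is not None:
        node = Cell(cur.value)
        if tail is None:
            new_head = node
        else:
            tail.next = node
        tail = node
        cur = cur.next
    return new_head

# Build a chain twice as long as the default recursion limit.
head = None
for v in range(sys.getrecursionlimit() * 2, 0, -1):
    head = Cell(v, head)
copied = copy_chain(head)
print(copied.value, copied.next.value)  # 1 2
```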

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D24455120

Pulled By: jamesr66a

fbshipit-source-id: 5c37db5acabe313b9a7a464bebe2a82c59e4e2e9
2020-10-22 11:55:23 -07:00
Zachary DeVito
88dcb95e22 [fx] use a linked list for nodes (#45708)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45708

This makes it possible to define reasonable semantics for what happens
when a node in the list is deleted. In particular the iteration over nodes
will continue at the node that was after the deleted node _when it was deleted_.
If the new node is also deleted, we skip it and, continue to the node after it.
Eventually we either reach a node still in the list or we reach the end of the list.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D24089516

Pulled By: zdevito

fbshipit-source-id: d01312d11fe381c8d910a83a08582a2219f47dda
2020-10-12 18:20:14 -07:00
James Reed
c73af6040e [FX] Make graph_copy examine existing values in val_map (#46104)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46104

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24224505

Pulled By: jamesr66a

fbshipit-source-id: ffdf8ea8cb92439f3aacf08b0c0db63ce3a15b8f
2020-10-09 16:37:55 -07:00
James Reed
00b8ebe60c [FX] Preserve type annotations on generated code in Graph (#45880)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45880

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D24127303

Pulled By: jamesr66a

fbshipit-source-id: 3a042bcfb0bf9f58ac318cc814dfc3cca683c7f8
2020-10-07 21:34:47 -07:00
James Reed
8cdb638c62 [FX] Track use nodes in Node (#45775)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45775

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24091082

Pulled By: jamesr66a

fbshipit-source-id: b09bb6ae78436a7722fb135b8ec71464ef9587cd
2020-10-07 00:15:04 -07:00
James Reed
b04ae953b4 [FX][WIP] Mutable Graph APIs (#45227)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45227

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23880730

Pulled By: jamesr66a

fbshipit-source-id: eb4e8c14d7f6b1deb1ddd6cf38a360413a1705ed
2020-10-05 17:07:08 -07:00
Zachary DeVito
26a9012f84 [fx] import used modules for code gen (#45471)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45471

Instead of assuming that 'torch' is the only module used by generated code,
use the qualified names of builtin functions to generate import statements
for all builtins. This allows user-captured functions to also get code generated correctly.
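Deriving an import statement from a callable's qualified name can be sketched as follows (the helper name is illustrative):

```python
import operator

def import_stmt(fn):
    # Use the function's module and top-level qualified name to emit an
    # import that makes the name resolvable in generated code.
    name = fn.__qualname__.split('.')[0]
    return f"from {fn.__module__} import {name}"

print(import_stmt(len))           # from builtins import len
print(import_stmt(operator.add))
```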

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23978696

Pulled By: zdevito

fbshipit-source-id: ecbff150e3de38532531cdadbfe4965468f29a38
2020-10-05 15:21:44 -07:00
James Reed
53aea60bce [FX] Make output a non-special Node (#45599)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45599

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D24027586

Pulled By: jamesr66a

fbshipit-source-id: 747c25e3c7668ca45f03bed0be71fd3c9af67286
2020-10-02 17:08:17 -07:00
James Reed
6bdb871d47 [FX] Lint pass for Graphs (#44973)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44973

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23792631

Pulled By: jamesr66a

fbshipit-source-id: d8faef0c311d8bd611ba0a7e1e2f353e3e5a1068
2020-09-28 23:00:32 -07:00
James Reed
b0bdc82a00 [FX][EZ] Fix bug where copying node made non-unique name (#45311)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45311

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D23917864

Pulled By: jamesr66a

fbshipit-source-id: 10d0a4017ffe160bce4ba0d830e035616bbded74
2020-09-28 22:55:20 -07:00
James Reed
7f4a27be3a [resubmit][FX] s/get_param/get_attr/ (#45147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45147

ghstack-source-id: 112605923

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D23845096

fbshipit-source-id: 9ca209aa84cbaddd6e89c52b541e43b11197e2d5
2020-09-22 17:06:18 -07:00
James Reed
79fe794f87 [FX] Make Graphs immutable and make GraphModule recompile after assigning graph (#44830)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44830

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23743850

Pulled By: jamesr66a

fbshipit-source-id: 501b92a89ff636c26abeff13105a75462384554c
2020-09-22 15:02:11 -07:00
James Reed
1fd48a9d1f Revert D23798016: [FX] s/get_param/get_attr/
Test Plan: revert-hammer

Differential Revision:
D23798016 (c941dd3492)

Original commit changeset: 1d2f3db1994a

fbshipit-source-id: 974d930064b37d396c5d66c905a63d45449813e5
2020-09-22 10:32:51 -07:00
James Reed
c941dd3492 [FX] s/get_param/get_attr/ (#45000)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45000

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D23798016

Pulled By: jamesr66a

fbshipit-source-id: 1d2f3db1994a62b95d0ced03bf958e54d30c35dd
2020-09-21 14:09:32 -07:00
James Reed
29664e6aa3 [FX] Further sanitize generated names (#44808)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44808

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D23739413

Pulled By: jamesr66a

fbshipit-source-id: b759c3ea613dfa717fb23977b72ff4773d9dcc99
2020-09-16 18:47:38 -07:00
Zachary DeVito
2c1b215b48 [fx] remove delegate, replace with tracer (#44566)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44566

The Delegate objects were confusing. They were supposed to be a way to
configure how tracing works, but in some cases they appeared necessary
for constructing graphs, which was not true. This makes the organization
clearer by removing Delegate and moving its functionality into a Tracer class,
similar to how pickle has a Pickler class.
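The pattern described above, putting configuration hooks on a class that users subclass (as pickle does with Pickler), can be sketched minimally. The class and method names below are illustrative only, not FX's actual API:

```python
class Tracer:
    """Sketch of the pattern: tracing behavior lives on a class that
    users subclass to customize, rather than on a separate delegate."""

    def is_leaf(self, name):
        # Hook: subclasses override this to control what gets traced into.
        return False

    def trace(self, calls):
        # Record every call that the hook does not mark as a leaf.
        return [c for c in calls if not self.is_leaf(c)]

class SkipNorm(Tracer):
    def is_leaf(self, name):
        return name.startswith("norm")

print(SkipNorm().trace(["linear", "norm1", "relu"]))  # ['linear', 'relu']
```

The advantage over a delegate object is that the default behavior and its extension points live in one place, and constructing a graph never requires the user to supply one.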

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23683177

Pulled By: zdevito

fbshipit-source-id: 7605a34e65dfac9a487c0bada39a23ca1327ab00
2020-09-15 16:52:22 -07:00
James Reed
1fcccd6a18 [FX] Minor fixups in Graph printout (#44214)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44214

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D23545501

Pulled By: jamesr66a

fbshipit-source-id: dabb3b051ed4da213b2087979ade8a649288bd5d
2020-09-08 14:45:32 -07:00
James Reed
af13faf18b [FX] __str__ for GraphModule and Graph (#44166)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44166

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D23520801

Pulled By: jamesr66a

fbshipit-source-id: f77e3466e435127ec01e66291964395f32a18992
2020-09-04 10:46:43 -07:00
Dmytro Dzhulgakov
633d239409 [torch.fx] Pass placeholders through delegate too (#43432)
Summary:
It's useful if we add additional attributes to nodes in the graph - it's easier to set the attribute on all nodes, even if the value happens to be None.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43432

Reviewed By: jamesr66a

Differential Revision: D23276433

Pulled By: dzhulgakov

fbshipit-source-id: c69e7cb723bbbb4dba3b508a3d6c0e456fe610df
2020-08-28 18:07:52 -07:00
Michael Suo
3830998ac3 [fx] When generating names, avoid shadowing builtins (#43653)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43653

When nodes are created without an explicit name, a name is generated for
them based on the target. In these cases, we need to avoid shadowing
builtin names. Otherwise, code like:
```
a.foo.bar
```
results in pretty-printed code like:
```
getattr = a.foo
getattr_1 = getattr.bar
```

While this is technically allowed in Python, it's probably a bad idea,
and more importantly is not supported by TorchScript (where `getattr` is
hardcoded).

This PR changes the name generation logic to avoid shadowing all
builtins and language keywords. We already do this for PyTorch
built-ins, so just extend that logic. So now the generated code will
look like:

```
getattr_1 = a.foo
getattr_2 = getattr_1.bar
```
Fixes #43522
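The sanitization logic described above can be sketched with the standard-library `keyword` and `builtins` modules (an illustrative sketch, not FX's actual implementation; the helper name is made up):

```python
import builtins
import keyword

def sanitize(candidate, used):
    """Append a numeric suffix when a generated name would shadow a
    Python builtin or keyword, or collide with a name already in use."""
    if (keyword.iskeyword(candidate)
            or hasattr(builtins, candidate)
            or candidate in used):
        i = 1
        while f"{candidate}_{i}" in used:
            i += 1
        candidate = f"{candidate}_{i}"
    used.add(candidate)
    return candidate

used = set()
print(sanitize("getattr", used))  # getattr_1
print(sanitize("getattr", used))  # getattr_2
print(sanitize("my_node", used))  # my_node
```

This reproduces the behavior in the commit message: `getattr` is always suffixed because it shadows a builtin, while ordinary names pass through untouched.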

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23357420

Pulled By: suo

fbshipit-source-id: 91e9974adc22987eca6007a2af4fb4fe67f192a8
2020-08-27 10:43:56 -07:00
Zachary DeVito
1f0cfbaaad [fx] add type annotations (#43083)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43083

This adds type annotations to all classes, arguments, and returns
for fx. This should make it easier to understand the code, and
encourage users of the library to also write typed code.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D23145853

Pulled By: zdevito

fbshipit-source-id: 648d91df3f9620578c1c51408003cd5152e34514
2020-08-23 15:38:33 -07:00
Zachary DeVito
b349f58c21 [fx] enabling typechecking of fx files (#43082)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43082

Fixes all present errors in mypy. Does not try to add annotations everywhere.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23145854

Pulled By: zdevito

fbshipit-source-id: 18e483ed605e89ed8125971e84da1a83128765b7
2020-08-23 15:37:29 -07:00
Zachary DeVito
4011685a8b [fx] split Node into Node/Proxy (#42991)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42991

Having Node be both a record of the operator in the graph and the
way we _build_ the graph made it difficult to keep the IR data structure
separate from the proxying logic in the builder.

Among other issues, this meant that typos when using nodes would add
things to the graph:
```
    for node in graph.nodes:
        node.grph # does not error, returns a node.Attribute object!
```

This separates the builder into a Proxy object. Graph/Node no longer
need to understand `delegate` objects since they are now just pure IR.
This separates the `symbolic_trace` (proxy.py/symbolic_trace.py) from
the IR (node.py, graph.py).

This also allows us to add `create_arg` to the delegate object,
allowing the customization of how aggregate arguments are handled
when converting to a graph.
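The separation described above can be illustrated with a minimal sketch (illustrative classes only, much simpler than FX's real Node/Proxy): the Node is pure data, while the Proxy intercepts attribute access to record nodes, which is exactly why a typo like `node.grph` used to create a node when Node itself played the builder role.

```python
class Node:
    """Pure IR: a record of one operation, with no tracing logic."""
    def __init__(self, op, target):
        self.op, self.target = op, target

class Proxy:
    """Builder: attribute access on a Proxy records a node in the graph."""
    def __init__(self, graph, node):
        # Bypass __getattr__ machinery when storing our own attributes.
        object.__setattr__(self, "graph", graph)
        object.__setattr__(self, "node", node)

    def __getattr__(self, name):
        # Any unknown attribute access becomes a node in the graph.
        new_node = Node("call_attribute", name)
        self.graph.append(new_node)
        return Proxy(self.graph, new_node)

graph = []
p = Proxy(graph, Node("placeholder", "x"))
p.foo.bar  # records two attribute-access nodes
print([n.target for n in graph])  # ['foo', 'bar']
```

With the split, only Proxy objects behave this way during tracing; iterating over `graph.nodes` afterwards yields plain Nodes, so a typo there raises an `AttributeError` instead of silently growing the graph.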

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D23099786

Pulled By: zdevito

fbshipit-source-id: 6f207a8c237e5eb2f326b63b0d702c3ebcb254e4
2020-08-14 16:45:21 -07:00
James Reed
0134deda0f [FX] Add interface to reject nodes (#42865)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42865

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23056584

Pulled By: jamesr66a

fbshipit-source-id: 02db08165ab41be5f3c4b5ff253cbb444eb9a7b8
2020-08-12 14:30:06 -07:00
James Reed
0ff0fea42b [FX] fix lint (#42866)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42866

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D23056813

Pulled By: jamesr66a

fbshipit-source-id: d30cdffe6f0465223354dec00f15658eb0b08363
2020-08-11 14:01:26 -07:00
James Reed
575e7497f6 Introduce experimental FX library (#42741)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/42741

Test Plan: Imported from OSS

Reviewed By: dzhulgakov

Differential Revision: D23006383

Pulled By: jamesr66a

fbshipit-source-id: 6cb6d921981fcae47a07df581ffcf900fb8a7fe8
2020-08-11 10:01:47 -07:00