Commit Graph

247 Commits

Author SHA1 Message Date
James Reed
3eb9443619 [FX] Fix issue where GraphModule.delete_all_unused_submodules deletes submodules from called leaf modules (#66430)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66430

On the whole, I'm not totally satisfied with this approach. I think we should be building a prefix tree data structure during initial iteration over the submodules and querying that when deleting submodules. But I think this approach works and I want to see if we can get it in before 1.10

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D31546137

Pulled By: jamesr66a

fbshipit-source-id: f08b8409a3cf511277017ccccb916097b7c4c4fe
2021-10-11 19:37:51 -07:00
Horace He
300613dc60 make FX symbolic tracing reuse buffers if they're the same (#66211)
Summary:
Currently, if the same tensor constant is reused multiple times, we'll store a separate tensor constant for each use.

For example
```
val = torch.randn(5)
for _ in range(10):
    x = x + val
```
ends up storing 10 tensor constants.
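A minimal sketch (not part of this PR; the module name `Reuse` is illustrative) of how the deduplication can be observed: trace a module whose forward closes over a single free tensor constant and count the `get_attr` targets recorded for it.
```
import torch
import torch.fx

val = torch.randn(5)  # free tensor constant, not registered on the module

class Reuse(torch.nn.Module):
    def forward(self, x):
        for _ in range(10):
            x = x + val
        return x

gm = torch.fx.symbolic_trace(Reuse())
consts = {n.target for n in gm.graph.nodes if n.op == "get_attr"}
print(len(consts))  # expected 1 after this change; previously one _tensor_constant per use
```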

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66211

Reviewed By: jamesr66a

Differential Revision: D31437089

Pulled By: Chillee

fbshipit-source-id: 401169c8d58ce0afb7025ae11060680ef544419f
2021-10-06 18:35:38 -07:00
Yinghai Lu
6b0aa2958d [FX] Support torch.layout as arg (#66048)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66048

Previously, create_arg would fail if it encountered a non-`None` layout argument. Adding `torch.layout` to the `BaseArgumentTypes` list should be enough to fix that.
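A minimal sketch (not part of this PR; `M` is illustrative) of a traced call whose kwargs include a `torch.layout`:
```
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x):
        return torch.empty_like(x, layout=torch.strided)

gm = torch.fx.symbolic_trace(M())
print(gm.code)  # the layout kwarg is recorded on the call_function node
```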

Test Plan: Added unittest

Reviewed By: jamesr66a

Differential Revision: D31362662

fbshipit-source-id: 20049971e18c17e9c75e50540500c567266daa55
2021-10-04 19:58:08 -07:00
Jason Ansel
487c771593 [FX] Fix tracing of bitwise and/or (#65196)
Summary:
Previously resulted in `AttributeError: module 'operator' has no attribute 'and'`

`and`/`or` are Python keywords, so the corresponding operator functions are named `operator.and_` and `operator.or_`
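A minimal sketch (not part of this PR; `M` is illustrative) of what tracing these operators now records:
```
import operator
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x, y):
        return (x & y) | y

gm = torch.fx.symbolic_trace(M())
targets = [n.target for n in gm.graph.nodes if n.op == "call_function"]
assert operator.and_ in targets and operator.or_ in targets
```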

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65196

Reviewed By: Chillee

Differential Revision: D31020336

Pulled By: jansel

fbshipit-source-id: 51d888151fe78c0c1197ecaf161976b219c59694
2021-09-17 14:33:02 -07:00
James Reed
0559cb37cd [FX] Ensure BC coverage for all of torch.fx.passes (#65081)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65081

Test Plan: Imported from OSS

Reviewed By: jbschlosser, khabinov

Differential Revision: D30967428

Pulled By: jamesr66a

fbshipit-source-id: 2ff83da728dc469f086cf504e71b43396db612d8
2021-09-17 09:32:43 -07:00
James Reed
9117eed6ed [FX] Add torch.ops.profiler._record_function_{enter,exit} as stateful ops for DCE (#65180)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65180

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31007115

Pulled By: jamesr66a

fbshipit-source-id: 823b15db712a382a4f2a4fd409983d47bc067150
2021-09-16 21:31:54 -07:00
soulitzer
4bf7959de2 Remove run_functional_checks from test_autograd and create necessary OpInfos (#64993)
Summary:
OpInfo tracker: https://github.com/pytorch/pytorch/issues/54261

 - Eliminate duplicated testing logic in test_autograd
 - Moved tests that rely on this testing logic to use OpInfos
   - `cat` already has OpInfo (no action needed)
   - Created OpInfo for `block_diag` and `broadcast_tensors`

Running into some FX errors. Added op to skip-list and created an issue here: https://github.com/pytorch/pytorch/issues/64997
Both `block_diag` and `broadcast_tensors` are variadic, so skipping `test_variant_consistency_jit` (from comments on other OpInfos, it looks like JIT does not support variadic tensors)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64993

Reviewed By: jbschlosser

Differential Revision: D30961736

Pulled By: soulitzer

fbshipit-source-id: e169305384a683acae1178c4e12e9e214a67226a
2021-09-15 12:45:38 -07:00
Horace He
35413a16f7 Add __matmul__ to the magic methods for FX tracing (#64512)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/64483
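A minimal sketch (not part of this PR; `M` is illustrative) of the newly traceable operator:
```
import torch
import torch.fx

class M(torch.nn.Module):
    def forward(self, x, y):
        return x @ y

gm = torch.fx.symbolic_trace(M())
print(gm.graph)  # contains a call_function node for the matmul
```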

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64512

Reviewed By: mrshenli

Differential Revision: D30797265

Pulled By: Chillee

fbshipit-source-id: 7630e048a960e0b27c4309d04d85301abe325189
2021-09-08 10:03:48 -07:00
kshitij12345
2c351c76e0 [special] Alias igamma, igammac to special.gammainc, special.gammaincc (#61902)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

Also added relevant OpInfo

TODO:
* [x] Check rendered docs gammainc : https://docs-preview.pytorch.org/61902/special.html#torch.special.gammainc
* [x] Check rendered docs gammaincc: https://docs-preview.pytorch.org/61902/special.html#torch.special.gammaincc
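A minimal check (not part of this PR) that the new aliases agree with the existing entry points:
```
import torch

a = torch.rand(4) + 0.5
x = torch.rand(4)
assert torch.allclose(torch.special.gammainc(a, x), torch.igamma(a, x))
assert torch.allclose(torch.special.gammaincc(a, x), torch.igammac(a, x))
```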

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61902

Reviewed By: ngimel

Differential Revision: D30761428

Pulled By: mruberry

fbshipit-source-id: 06a16432873357958d53364f12a4e91c29779d26
2021-09-07 15:31:26 -07:00
James Reed
e1c3e5f830 [resubmit][FX] Prototype for guarding against mutable operations in tracing (#64467)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64467

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D30744870

Pulled By: jamesr66a

fbshipit-source-id: fc652f8b17748f90dbeb83fabf3bd5bb57d6ff1a
2021-09-02 21:13:21 -07:00
Eli Uriegas
32a93c2424 Revert D30675780: [FX] Prototype for guarding against mutable operations in tracing
Test Plan: revert-hammer

Differential Revision:
D30675780 (795387477f)

Original commit changeset: b2116b51dcc8

fbshipit-source-id: d4f1173f4989556ea54974f4c2739ef85a705fae
2021-09-02 16:07:29 -07:00
James Reed
795387477f [FX] Prototype for guarding against mutable operations in tracing (#64295)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64295

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D30675780

Pulled By: jamesr66a

fbshipit-source-id: b2116b51dcc87357f0c84192c4c336680875e27a
2021-09-02 15:17:04 -07:00
Patrick Hu
c6505cc383 [FX] Fix python code generation for wrapped getattr() with default value (#64271)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64271

Closes #60417

Modified emit_node() in fx/graph.py to generate a getattr() call with the default value when len(node.args) != 2, instead of emitting attribute access.
Added test_torch_fx_getattr() in test/test_fx.py.
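A minimal sketch (not part of this PR; the attribute name "foo" is illustrative) of the code path: a three-argument getattr node now regenerates as a real getattr() call instead of attribute access.
```
import torch
import torch.fx

g = torch.fx.Graph()
x = g.placeholder("x")
y = g.call_function(getattr, (x, "foo", 0))
g.output(y)

gm = torch.fx.GraphModule(torch.nn.Module(), g)
print(gm.code)
print(gm(torch.randn(2)))  # tensors have no "foo" attribute, so the default 0 is returned
```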

Test Plan:
pytest test/test_fx.py

Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D30671265

fbshipit-source-id: f2db9ea47e0cb247547e200684f715aab006c374
2021-09-01 11:30:27 -07:00
Jay Leverett
44fcb00a56 Fix redundant class definition in GraphModule singleton constructor (#64274)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/63883

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64274

Reviewed By: jamesr66a

Differential Revision: D30675970

Pulled By: jayleverett

fbshipit-source-id: e74ef2a28013f0fa7c58d14f38e66cfe48d26b74
2021-08-31 17:34:14 -07:00
James Reed
538647fe1f [WIP][FX] BC guarantees for 1.10 (#63888)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63888

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D30523133

Pulled By: jamesr66a

fbshipit-source-id: b04cc0d842a74862f42ecba98b757310cd2ec7b0
2021-08-30 19:56:46 -07:00
Patrick Hu
18cb3fc910 [FX] Validate data type of target on Node Construction (#64050)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64050

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D30585535

Pulled By: yqhu

fbshipit-source-id: 96778a87e75f510b4ef42f0e5cf76b35b7b2f331
2021-08-27 13:40:57 -07:00
James Reed
4e37a015c7 [FX] Fix _replicate_for_data_parallel (#63821)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63821

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D30502115

Pulled By: jamesr66a

fbshipit-source-id: 0f004f95def6e1ba21ccbeab40cb0a739a0ad20c
2021-08-24 13:48:15 -07:00
Philip Meier
99203580a9 Updates internal assert_allclose callsites in favor of assert_close (#61841)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61841

Redo of #60863.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D30408145

Pulled By: mruberry

fbshipit-source-id: 0b34ebc7f23ba38ecd89640b61d8aca59b7eab58
2021-08-19 12:50:41 -07:00
Mostafa Elhoushi
139413078f [FX] make ASTRewriter patch wrapped functions properly (#62987)
Summary:
Reference the same global namespace (instead of copying it) in ASTRewriter so that wrapped functions are patched properly

Fixes #62071

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62987

Test Plan:
To test it you may write this snippet and ensure the results are as shown in the comments:

```
import torch
import torch.fx

@torch.fx.wrap
def to_be_wrapped(x):
    return torch.relu(x)

class Foo(torch.nn.Module):
    def forward(self, x):
        return to_be_wrapped(x)

traced = torch.fx.symbolic_trace(Foo())
print(traced.graph)
"""
graph():
    %x : [#users=1] = placeholder[target=x]
    %to_be_wrapped : [#users=1] = call_function[target=__main__.to_be_wrapped](args = (%x,), kwargs = {})
    return to_be_wrapped
"""

from torch.fx.experimental.rewriter import RewritingTracer

rt = RewritingTracer()
graph = rt.trace(Foo())
print(graph)
"""
### AFTER FIX (CORRECT):
graph():
    %x : [#users=1] = placeholder[target=x]
    %to_be_wrapped : [#users=1] = call_function[target=__main__.to_be_wrapped](args = (%x,), kwargs = {})
    return to_be_wrapped

### BEFORE FIX (WRONG):
graph():
    %x : [#users=1] = placeholder[target=x]
    %relu : [#users=1] = call_function[target=torch.relu](args = (%x,), kwargs = {})
    return relu
"""
```

Reviewed By: ansley

Differential Revision: D30396176

Pulled By: mostafaelhoushi

fbshipit-source-id: f61eddf32e9ef42b5f5c3ce21d559945214ee833
2021-08-18 15:03:57 -07:00
James Reed
d661e646ad [FX] Fix GraphModule deepcopy to use deepcopied graph (#63090)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63090

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D30252471

Pulled By: jamesr66a

fbshipit-source-id: cafd7d7917935a5ea6ffa2a7fe9e9b2a9578b3e3
2021-08-18 13:17:14 -07:00
Bradley Davis
011fdc3b7e [fx] persist tracer_cls on fx.Graph when deep copying (#63353)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63353

Custom deepcopy method copies all nodes but does not copy the tracer_cls attribute

Reviewed By: houseroad

Differential Revision: D30349424

fbshipit-source-id: 3e98bdac8a8a992eb0b4ec67fe80bb2e5cf3884d
2021-08-17 09:57:48 -07:00
Nikita Vedeneev
dbcfd7739f Make torch.lu differentiable for wide/tall inputs + jit (#61564)
Summary:
As per title.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61564

Reviewed By: astaff

Differential Revision: D30338136

Pulled By: mruberry

fbshipit-source-id: f01436fc90980544cdfa270feee16bb3dda21b93
2021-08-16 11:40:57 -07:00
Alexander Soare
219ba6575b add autowrap_functions kwarg to fx.Tracer (#62106)
Summary:
Implements feature request https://github.com/pytorch/pytorch/issues/62021

Test it out with

```python
from torch import fx
from torch import nn

def fx_int(x):
    return int(x)

class MyModule(nn.Module):
    def forward(self, x):
        return fx_int(x.shape[0] / 2)

tracer = fx.Tracer(autowrap_functions=(fx_int,))  # or remove kwarg to demonstrate symbolic trace error
tracer.trace(MyModule())
```

First time contributor, so please advise if I could have done anything to make lives easier for next time.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62106

Reviewed By: SplitInfinity, driazati

Differential Revision: D30080834

Pulled By: jamesr66a

fbshipit-source-id: 68fadf8c881ea7930e7afd62b642874010fe4903
2021-08-12 17:38:25 -07:00
Shen Li
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
Zsolt Dollenstein
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
Bradley Davis
093495d3f0 [fx] prevent implicit submodule inlining when submodule is a GraphModule (#62436)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62436

## Problem

Given two modules and a tracer that indiscriminately marks all modules as a leaf:
```
class InnerModule(torch.nn.Module):

    def forward(self, t):
        return t + t

class MyModule(torch.nn.Module):
    def __init__(self, inner):
        super().__init__()
        self.inner = inner

    def forward(self, t):
        x = self.inner(t)
        y = self.inner(t)
        return x + y

class MyTracer(torch.fx.Tracer):
    def is_leaf_module(self, module, name):
        return True
```

One might generally expect the following behavior (note call_module nodes):
```
print(">> Outer GraphModule (with inner module as nn.Module):")
inner = InnerModule()
m = MyModule(inner)
gm = torch.fx.GraphModule(m, MyTracer().trace(m))
print(gm.graph.print_tabular())

>> Outer GraphModule (with inner module as nn.Module):
opcode         name     target                   args              kwargs
-------------  -------  -----------------------  ----------------  --------
placeholder    t        t                        ()                {}
call_module    inner    inner                    (t,)              {}
call_module    inner_1  inner                    (t,)              {}
call_function  add      <built-in function add>  (inner, inner_1)  {}
output         output   output                   (add,)            {}
None
```

However, when the inner module is first symbolically traced, the symbolic trace of the outer module ignores `is_leaf_module` entirely, and traces through the whole module (note call_function nodes).
```
print(">> Inner module as GraphModule:")
inner = InnerModule()
inner_gm = torch.fx.GraphModule(inner, MyTracer().trace(inner))
print(inner_gm.graph.print_tabular())

print(">> Outer GraphModule (with inner module as GraphModule):")
m = MyModule(inner_gm)
gm = torch.fx.GraphModule(m, MyTracer().trace(m))
print(gm.graph.print_tabular())

>> Inner module as GraphModule:
opcode         name    target                   args    kwargs
-------------  ------  -----------------------  ------  --------
placeholder    t       t                        ()      {}
call_function  add     <built-in function add>  (t, t)  {}
output         output  output                   (add,)  {}
None

>> Outer GraphModule (with inner module as GraphModule):
opcode         name    target                   args          kwargs
-------------  ------  -----------------------  ------------  --------
placeholder    t       t                        ()            {}
call_function  add     <built-in function add>  (t, t)        {}
call_function  add_1   <built-in function add>  (t, t)        {}
call_function  add_2   <built-in function add>  (add, add_1)  {}
output         output  output                   (add_2,)      {}
None
```

This is surprising behavior and at first glance violates the tracer's intent. As I understand it, `torch.fx.symbolic_trace.Tracer.trace` intends to patch `torch.nn.Module.__call__` with a `module_call_wrapper()` that records a `call_module` node if the module is a leaf, and otherwise executes `torch.fx._symbolic_trace._orig_module_call` (which is assigned `torch.nn.Module.__call__` at module load time).

**Every submodule should be a leaf, but no `call_module` nodes are created when that submodule is a `GraphModule`. Why?**

Upon further inspection, I found:

- The constructor for GraphModule includes a path to `GraphModule.recompile()` via the setter for a `fx.Graph`:
```
inner_gm = torch.fx.GraphModule(inner, MyTracer().trace(inner))

File "/torch/fx/graph_module.py", line 252, in __init__
self.graph = graph

File "/torch/nn/modules/module.py", line 1183, in __setattr__
object.__setattr__(self, name, value)

File "/torch/fx/graph_module.py", line 277, in graph
self.recompile()
```
- `recompile()` wraps the `__call__` method by holding a reference to the `__call__` method at the time of recompilation:
```
cls = type(self)
cls_call = cls.__call__
...
def wrapped_call(self, *args, **kwargs):
    try:
        return cls_call(self, *args, **kwargs)
    except Exception as e:
        ...
cls.__call__ = wrapped_call
```
- Recompilation of the inner GraphModule happens on initialization, before creation or tracing of the outer module. Adding some old-fashioned print debug statements gives:
```
Inner Module:
_orig_module_call: <function Module._call_impl at 0x7faaebfee8b0>
recompile: cls.__call__ now wraps _orig_module_call, <function Module._call_impl at 0x7faaebfee8b0>

Outer Module:
_orig_module_call: <function Module._call_impl at 0x7faaebfee8b0>
tracing: patching method <class 'torch.nn.modules.module.Module'>.__call__ <function Module._call_impl at 0x7faaebfee8b0> with <function Module._call_impl at 0x7fa9d42bce50>

outer module MRO before tracing:
(0) <class '__main__.MyModule'>: <function Module._call_impl at 0x7faaebfee8b0>
(1) <class 'torch.nn.modules.module.Module'>: <function Module._call_impl at 0x7faaebfee8b0>
(2) <class 'object'>: <method-wrapper '__call__' of type object at 0x7fac3cd15f00>

outer module MRO during tracing:
(0) <class '__main__.MyModule'>: <function Module._call_impl at 0x7fa9d42bce50>
(1) <class 'torch.nn.modules.module.Module'>: <function Module._call_impl at 0x7fa9d42bce50>
(2) <class 'object'>: <method-wrapper '__call__' of type object at 0x7fac3cd15f00>

inner module MRO before tracing:
(0) <class 'torch.fx.graph_module.GraphModule.__new__.<locals>.GraphModuleImpl'>: <function x.y.z.wrapped_call at 0x7fa9d42a8670>
(1) <class 'torch.fx.graph_module.GraphModule'>: <function Module._call_impl at 0x7faaebfee8b0>
(2) <class 'torch.nn.modules.module.Module'>: <function Module._call_impl at 0x7faaebfee8b0>
(3) <class 'object'>: <method-wrapper '__call__' of type object at 0x7fac3cd15f00>

inner module MRO during tracing:
(0) <class 'torch.fx.graph_module.GraphModule.__new__.<locals>.GraphModuleImpl'>: <function x.y.z.wrapped_call at 0x7fa9d42a8670>
(1) <class 'torch.fx.graph_module.GraphModule'>: <function Module._call_impl at 0x7fa9d42bce50>
(2) <class 'torch.nn.modules.module.Module'>: <function Module._call_impl at 0x7fa9d42bce50>
(3) <class 'object'>: <method-wrapper '__call__' of type object at 0x7fac3cd15f00>
```

- The outer module is patched correctly, but the inner module's first element in its MRO is the `wrapped_call` from `recompile` that still invokes `<function Module._call_impl at 0x7faaebfee8b0>` directly. Therefore, no call_module nodes are created.

## In Practice

In practice, this behavior affects the ability of `torch.package` to package `GraphModules` whose submodules are `GraphModules`. In our case, the `GraphModule` submodules are not passed through a constructor, but created separately and installed on the root `GraphModule` via `setattr`. This means that prior to packaging, there appear to be no issues with the module, since the root's graph was created before any call_module targets were replaced with `GraphModules`.

When unpackaging such a model with `torch.package`, `torch.fx.graph_module._deserialize_graph_module` uses an inline `KeepModules` tracer that sets all submodules to leaves; the unpackaged module is implicitly and surprisingly inlined in the process.

## Potential Solution

This behavior was previously not understood by us, and so the current workaround is a gnarly process of wrapping each submodule in an `nn.Module` with a manually installed forward method.

Changing `wrapped_call` to return `super(type(self), self).__call__(*args, **kwargs)` whenever `__call__` is inherited at least appears to solve the issue. Does this seem like an acceptable approach?

## Other Thoughts
- Repeated calls to `recompile` create nested `wrapped_calls`, all for the purpose of error handling. This seems probably unnecessary ¯\\_(ツ)\_/¯
- If a root module with an overridden `__call__` method is symbolically traced, the override is ignored

Test Plan:
```
buck test:
    ✓ ListingSuccess: caffe2/test:fx - main (12.570)
    ✓ Pass: caffe2/test:fx - test_tracing_graphmodules_as_leaf_submodules (test_fx.TestFX) (11.982)
```

Reviewed By: ansley

Differential Revision: D29997935

fbshipit-source-id: 1988fbb025b14188da26a3e73e94fb789c3c1f74
2021-08-02 13:37:08 -07:00
Jerry Cai
1b147a52f5 Allow FX tracer to trace control flow (if/while) statements when parameter shapes are in the conditionals (#61820)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/61733

Allow FX tracer to trace control flow (if/while) statements when parameter shapes are in the condition.
If the user specifies the new "param_shapes_constant" option when constructing a tracer,  the model's parameter shape attribute will be evaluated and the resulting constant will be emitted into the IR during tracing.
Also added a new test

`
python test/fx/test_fx_param_shape_control_flow.py
`
The test also performs somewhat white-box-style checks on the Python code generated from the IR.
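A minimal sketch (not this PR's test; `M` is illustrative) of the new option; with it, the shape check below is evaluated to a plain Python bool during tracing instead of producing a Proxy:
```
import torch
import torch.fx

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(4, 4))

    def forward(self, x):
        if self.weight.shape[0] == 4:
            return x + 1
        return x - 1

graph = torch.fx.Tracer(param_shapes_constant=True).trace(M())
print(graph)
```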

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61820

Reviewed By: bdhirsh

Differential Revision: D29969299

Pulled By: jerryzhenleicai

fbshipit-source-id: 99aae824bdfec880be69258de7ead5c8cd59eddc
2021-07-28 23:48:44 -07:00
Richard Zou
52d1ffb789 Teach pytrees about namedtuple (#62292)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62292

This PR adds pytree support for namedtuples. The challenge about namedtuple
is that each namedtuple class is actually different. This PR does the
following (a short round-trip sketch follows the list):
- it adds a namedtuple flatten/unflatten. The flatten function returns
a context that is the actual type of the namedtuple subclass. The
unflatten function uses that type to reconstruct the namedtuple
- Special cases all pytree logic to consider all namedtuples the same.
This is done by creating a `_get_node_type(pytree)` helper function that
returns `namedtuple` if `pytree` is any namedtuple subclass. The effect
of this is that all namedtuple subclasses will go through the namedtuple
flatten/unflatten functions
- Adds a `_namedtuple_flatten_spec` function for FX pytrees. This function
flattens the namedtuple based on the spec and is equivalent to the
`_tuple_flatten_spec`.
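A minimal round-trip sketch (not part of this PR; `Point` is illustrative) using the internal pytree utility:
```
from collections import namedtuple

import torch
from torch.utils import _pytree as pytree

Point = namedtuple("Point", ["x", "y"])
p = Point(torch.ones(2), torch.zeros(2))

leaves, spec = pytree.tree_flatten(p)
rebuilt = pytree.tree_unflatten(leaves, spec)
assert isinstance(rebuilt, Point) and torch.equal(rebuilt.x, p.x)
```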

Test Plan
- new tests in test/test_pytree.py and test/test_fx.py

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D29947302

Pulled By: zou3519

fbshipit-source-id: 19c00665b13546642c315df0f243ad99b8e7ff7c
2021-07-28 06:27:44 -07:00
tktrungna
8152433de2 [1/n] Update testing lib*.so path (#61960)
Summary:
### Issue

Build PyTorch wheel packages during build stage for pull requests and install during test stage.

### Fix
Update all tests which call lib*.so (under the `./build` folder); change them to call lib*.so in `{ent}/pytorch/lib/python3.8/site-packages/torch`

### Diff
This diff starts by updating test_fx, test_backend and test_torchbind first to check whether the current CI passes

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61960

Test Plan: check that all CI workflows pass

Reviewed By: malfet, saketh-are

Differential Revision: D29823235

Pulled By: tktrungna

fbshipit-source-id: e7f652def698e303d4843fbaedf4859f5eca2fd9
2021-07-24 05:16:35 -07:00
Bradley Davis
8880f3d450 [fx] introduce __fx_create_arg__ dunder method for controlling custom classes are handled as node args (#61780)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61780

These changes would allow objects to control how they are handled when they are an argument to a torch.fx call_module node from within their source. Previously, we have been using a custom Tracer with an overridden create_arg() method and branching based on class name to handle args that are unusual (data classes, etc).

Reviewed By: suo, houseroad

Differential Revision: D27976120

fbshipit-source-id: 0c5249c5f8398368ca0fbec0ad8a07ccf99b7da4
2021-07-21 11:27:09 -07:00
Kushashwa Ravi Shrimali
7e1f01d4c0 Alias for polygamma (#59691)
Summary:
See https://github.com/pytorch/pytorch/issues/50345

cc: mruberry kshitij12345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59691

Reviewed By: gchanan

Differential Revision: D29707514

Pulled By: mruberry

fbshipit-source-id: 40c15e1fda3d9f7013977b0f36a77b228dda6aa5
2021-07-16 00:06:27 -07:00
Bradley Davis
1f4bba77b6 [fx] fix subgraph API call_module warning about no owning module (#61463)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61463

Seems like a small oversight(?): the current test fails when warnings are recorded. Discovered this when calling `graph.call_module(existing_call_module_node.target)` and it raised a warning

Test Plan: `buck test //caffe2/test:fx`

Reviewed By: ansley

Differential Revision: D29637799

fbshipit-source-id: 2305629863230235f76a926fe2e4de480cbf853c
2021-07-09 15:25:44 -07:00
Akifumi Imanishi
4d9fd8958b Support __rand__, __ror__ and __rxor__ (#59240)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58120.

This PR implements `torch.Tensor.{__rand__/__ror__/__rxor__}` for compatibility with NumPy's interface.
(cc: mruberry, rgommers, emcastillo, kmaehashi)
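A minimal usage sketch (not part of this PR) of the newly supported reflected forms:
```
import torch

t = torch.tensor([1, 2, 3])
print(5 & t)  # Tensor.__rand__
print(5 | t)  # Tensor.__ror__
print(5 ^ t)  # Tensor.__rxor__
```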

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59240

Reviewed By: ngimel

Differential Revision: D29482304

Pulled By: mruberry

fbshipit-source-id: 13789202c1d8dddf8658a45381aeedcc31e2f603
2021-07-07 13:34:14 -07:00
Zeina Migeed
6f1455440b task 3: typecheck (#60805)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/60805

Test Plan: Imported from OSS

Reviewed By: jamesr66a, VitalyFedyunin

Differential Revision: D29522885

Pulled By: migeed-z

fbshipit-source-id: 559a8a495a16e517af77fd5a0785a82e1ebb3bd7
2021-07-06 23:51:49 -07:00
James Reed
7a4ffbd1da [FX] s/IS_SANDCASTLE/IS_FBCODE/ in tests (#61304)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61304

Previously tests were unrunnable on devserver. This fixes that
ghstack-source-id: 133051811

Test Plan: waitforsadcastle

Reviewed By: Chillee

Differential Revision: D29561806

fbshipit-source-id: 6020e5b4ba72d6de1ea2563e70fdb0e604bee1a5
2021-07-06 17:20:53 -07:00
Zeina Migeed
9f3167ebdf task 1: annotate (#60621)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/60621

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D29493619

Pulled By: migeed-z

fbshipit-source-id: 1bd3fb02c90ae5b394869a474b2e6b06af0d4791
2021-07-06 16:48:11 -07:00
kshitij12345
dfd2edc025 [special] add zeta (#59623)
Summary:
Reference https://github.com/pytorch/pytorch/issues/50345

`zeta` was already present in the codebase to support computation of `polygamma`.

However, `zeta` only had a `double(double, double)` signature **for CPU** before this PR (which meant that `polygamma` computations were always upcast to `double` for the zeta part).

With this PR, float computations will take place in float and double in double.

Have also refactored the code and moved the duplicate code from `Math.cuh` to `Math.h`

**Note**: For scipy, `q` is optional, and if it is `None` it defaults to `1`, which corresponds to the Riemann zeta function. However, for `torch.special.zeta` I made it mandatory, because it feels odd otherwise: without `q` this is the Riemann zeta and with `q` it is the general Hurwitz zeta. I think sticking to just the general form made more sense, as passing `1` for `q` is trivial.
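A small usage sketch (not part of this PR): since `q` is mandatory, the Riemann case is obtained by passing `q = 1` explicitly.
```
import torch

x = torch.tensor([2.0, 4.0])
print(torch.special.zeta(x, torch.ones_like(x)))  # ~[1.6449, 1.0823], i.e. pi^2/6 and pi^4/90
```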

Verify:
* [x] Docs https://14234587-65600975-gh.circle-artifacts.com/0/docs/special.html#torch.special.zeta

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59623

Reviewed By: ngimel

Differential Revision: D29348269

Pulled By: mruberry

fbshipit-source-id: a3f9ebe1f7724dbe66de2b391afb9da1cfc3e4bb
2021-06-24 00:00:12 -07:00
Jordan Fix
f65793507d [fx][Transformer] Add override for call_function (#60057)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60057

This ensures that if a function was `wrap`'d before symbolic tracing, and the traced module is then passed into the Transformer, the function will still be wrapped.
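A minimal sketch (not this PR's test; `add_one` and `M` are illustrative) of the behavior being preserved:
```
import torch
import torch.fx

@torch.fx.wrap
def add_one(x):
    return x + 1

class M(torch.nn.Module):
    def forward(self, x):
        return add_one(x)

gm = torch.fx.symbolic_trace(M())
gm2 = torch.fx.Transformer(gm).transform()
print(gm2.code)  # still calls add_one instead of inlining x + 1
```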

Test Plan: Added test to `test_fx.py`

Reviewed By: jamesr66a

Differential Revision: D29151191

fbshipit-source-id: 93560be59505bdcfe8d4f013e21d4719788afd59
2021-06-16 17:25:55 -07:00
kshitij12345
da972afdcd OpInfo: to_sparse (#59445)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59445

Reviewed By: ngimel

Differential Revision: D28920866

Pulled By: mruberry

fbshipit-source-id: ba8d3071d9937096288b69511000eeb007f53434
2021-06-05 19:13:58 -07:00
Akifumi Imanishi
0a5bfa9919 Support __rmod__ (#58476)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58035.

This PR implements `torch.Tensor.__rmod__` and `torch.remainder(scalar, tensor)` for compatibility with NumPy's interface.
(cc: mruberry, rgommers, emcastillo, kmaehashi)

TODO:
  - [x] Update `tensor_binary_op` in test/test_binary_ufuncs.py after https://github.com/pytorch/pytorch/issues/58216 is merged.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58476

Reviewed By: ngimel

Differential Revision: D28776810

Pulled By: mruberry

fbshipit-source-id: 74f8aea80f439ef2cc370333524e39971eeb7bf4
2021-06-05 16:19:24 -07:00
kshitij12345
6620d7d688 OpInfo: norm (#59259)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

EDIT:
~~Test takes a whopping 4 mins to run 😓~~ (Filtered tests also included linalg norm)

Newly added tests take around 2 mins.
```
==================================================== 193 passed, 224 skipped, 27224 deselected, 5 warnings in 138.87s (0:02:18) ====================================================
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59259

Reviewed By: jbschlosser

Differential Revision: D28833962

Pulled By: mruberry

fbshipit-source-id: 40b24d6a8cb8b7d231b2f6b34b87cee4f136c5f9
2021-06-03 08:25:58 -07:00
krshrimali
ef40757de3 OpInfo: zero_ (#58731)
Summary:
See https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58731

Reviewed By: ngimel

Differential Revision: D28784083

Pulled By: mruberry

fbshipit-source-id: f06de8045afd3728b1fedc014c091d8fd1955a9f
2021-05-30 21:49:29 -07:00
kshitij12345
445e838210 OpInfo: resize_, resize_as_ (#59176)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59176

Reviewed By: ngimel

Differential Revision: D28780083

Pulled By: mruberry

fbshipit-source-id: 472584e8faa4cb1031908df097849d2d4167fdf5
2021-05-30 18:53:17 -07:00
kshitij12345
d68df54269 OpInfo: fill_ (#59138)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59138

Reviewed By: ngimel

Differential Revision: D28776451

Pulled By: mruberry

fbshipit-source-id: 2e8e9f1805ec7d900223ea749a4a0b86a1bedb54
2021-05-29 00:35:02 -07:00
kshitij12345
c9af4c2636 OpInfo: where (#58349)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58349

Reviewed By: mrshenli

Differential Revision: D28744220

Pulled By: mruberry

fbshipit-source-id: 893a2fb88a48a60df75c7d6e2f58a42ca949daa7
2021-05-28 18:22:03 -07:00
Ansley Ussery
5268b5a29a Add parsing logic for Tuple[()] annotation (#58340)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58340

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D28459502

Pulled By: ansley

fbshipit-source-id: 4bb188448d66269b42b068858b895debac86e9ee
2021-05-25 12:12:43 -07:00
Adnios
09a8f22bf9 Add mish activation function (#58648)
Summary:
See issue: https://github.com/pytorch/pytorch/issues/58375
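A small usage sketch (not part of this PR) of the functional and module forms of the new activation:
```
import torch
import torch.nn.functional as F

x = torch.randn(4)
print(F.mish(x))           # x * tanh(softplus(x))
print(torch.nn.Mish()(x))
```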

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58648

Reviewed By: gchanan

Differential Revision: D28625390

Pulled By: jbschlosser

fbshipit-source-id: 23ea2eb7d5b3dc89c6809ff6581b90ee742149f4
2021-05-25 10:36:21 -07:00
kshitij12345
f9e8dc005a OpInfo: clone, contiguous (#58390)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58390

Reviewed By: soulitzer

Differential Revision: D28567821

Pulled By: mruberry

fbshipit-source-id: bcf42cb4a9a57d8a15a76819b8a9e2df97cf00be
2021-05-22 18:25:31 -07:00
James Reed
36adc3f04d [FX] Add APIs to mutate specific args/kwargs (#58571)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58571

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D28543359

Pulled By: jamesr66a

fbshipit-source-id: 44812d04886e653b5439c880dd831ecbc893fe23
2021-05-19 14:54:16 -07:00
Akifumi Imanishi
3113a1de4a Fix some tensor operators to return NotImplemented for invalid inputs (#58216)
Summary:
Same as https://github.com/pytorch/pytorch/issues/57934. (cc/ albanD)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58216

Reviewed By: ailzhang

Differential Revision: D28494886

Pulled By: albanD

fbshipit-source-id: 380205867ee1cde90e1c6fcfe2a31749e1243530
2021-05-19 13:09:57 -07:00
James Reed
7b73fdf597 [FX] Fix retracing wrapped functions (#58061)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58061

Test Plan: Imported from OSS

Reviewed By: yuhc

Differential Revision: D28358801

Pulled By: jamesr66a

fbshipit-source-id: c7c9a8a80e5bfe1eb1f6d2cf858ac7e57153a860
2021-05-17 19:50:16 -07:00
James Reed
00156d4845 [FX][WIP] Proxyable classes (#56737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56737

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D27953073

Pulled By: jamesr66a

fbshipit-source-id: fafc681af7bd5200a9ead2fd0720940913885575
2021-05-14 14:07:04 -07:00
Nick Korovaiko
c524448dd1 init hardshrink (#57749)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57749

add to a fx test

Test Plan: Imported from OSS

Reviewed By: huiguoo

Differential Revision: D28425974

fbshipit-source-id: 195c7a1944decb7a2a99c2831cab38485f32be17
2021-05-13 19:38:05 -07:00
Alban Desmaison
5e83c62a9e Revert D28351931: [pytorch][PR] Fix some tensor operators to return NotImplemented for invalid inputs
Test Plan: revert-hammer

Differential Revision:
D28351931 (35521a2629)

Original commit changeset: 985457a44dba

fbshipit-source-id: 10724c219e53648f10a70719e25bcf774c6c7852
2021-05-12 13:58:03 -07:00
Akifumi Imanishi
35521a2629 Fix some tensor operators to return NotImplemented for invalid inputs (#57934)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/57719.

This PR fixes `torch.Tensor.{__rsub__, __rdiv__, __rtruediv__, __pow__, __rmatmul__}` to return `NotImplemented` instead of raising a `TypeError`.
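A minimal sketch (not part of this PR; `Exponent` is illustrative) of why returning `NotImplemented` matters: Python can then fall back to the other operand's reflected method.
```
import torch

class Exponent:
    def __rpow__(self, base):
        return f"handled by Exponent, base is a {type(base).__name__}"

# Tensor.__pow__ does not understand Exponent; returning NotImplemented
# (rather than raising TypeError) lets Exponent.__rpow__ run.
print(torch.ones(2) ** Exponent())
```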

cc/ mruberry: The first commit of this PR is the same as 1d209db1cc except for the commit message.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57934

Reviewed By: mruberry

Differential Revision: D28351931

Pulled By: albanD

fbshipit-source-id: 985457a44dba24d2496794dfb8c1661cbcd4ff8f
2021-05-12 11:03:23 -07:00
kshitij12345
ff982ef73d OpInfo: reshape, reshape_as and minor clean-up (#57460)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57460

Reviewed By: nairbv

Differential Revision: D28151675

Pulled By: anjali411

fbshipit-source-id: 2b3bcadab3ff5d1761b2922b63afd70a354e785c
2021-05-12 06:05:21 -07:00
Ansley Ussery
0d4dc6cb39 Let submodules be collected as args/kwargs (#57840)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57840

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D28294984

Pulled By: ansley

fbshipit-source-id: d64fe109a349516da69d2d17f58e42f98af564fd
2021-05-11 18:17:11 -07:00
James Reed
a13718b69f [FX] Make stack trace testing less strict (#58088)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58088

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D28365398

Pulled By: jamesr66a

fbshipit-source-id: 4d5d173721b4a917893a6f1202e3980aa6e85fcc
2021-05-11 15:34:06 -07:00
Nikita Shulga
b587354e4c Add Python-3.9 CI testing (#50992)
Summary:
Skip a number of tests and adjust typing handling

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50992

Reviewed By: walterddr

Differential Revision: D26170388

Pulled By: malfet

fbshipit-source-id: 47852512aa3d5c25faf6687bcd0b1cbb332b0b20
2021-05-10 10:51:39 -07:00
Horace He
8d363d37da [FX] Adds PyTree support to FX through concrete_args (#55888)
Summary:
```
class Foo(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, y, x):
        for k in x:
            for v in x[k]:
                v += y
        return x

example_dict = {'x': {'a': [fx.HOLE], 'z': [fx.HOLE, fx.HOLE]}}
new_f = fx.symbolic_trace(Foo(), concrete_args=example_dict)
print(new_f.code)
new_f(torch.randn(5), {'x': {'a': [torch.randn(5)], 'z': [torch.randn(5), torch.randn(5)]}})

fx.symbolic_trace(new_f, concrete_args=example_dict)
```

prints out
```
def forward(self, y, x):
    y, tree_2, tree_3, tree_4 = pytree.tree_flatten([y, x])[0]
    add = tree_2 + y
    add_1 = tree_3 + y
    add_2 = tree_4 + y;  y = None
    return {'a': [tree_2], 'z': [tree_3, tree_4]}
```

Currently, I store `in_spec` as an extra attribute on `fx.Graph`, and then include it when we do the codegen. I'm not sure if this is the right approach - it introduces a divergence between what's in `fx.Graph` and what's in the python code.

Perhaps the best API is something explicit like `fx.Graph.flatten_args`, but that does make calling things a bit ... more verbose.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55888

Reviewed By: jamesr66a

Differential Revision: D27884694

Pulled By: Chillee

fbshipit-source-id: f9e8a70c63a8df63c9f9bd0a6459255daa5a8df8
2021-05-07 04:48:35 -07:00
kshitij12345
9e6b7e6e6e OpInfo: expand and expand_as (#57606)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57606

Reviewed By: albanD

Differential Revision: D28249191

Pulled By: mruberry

fbshipit-source-id: d985ab4e8a99b116c45953e621092929a9a8028e
2021-05-07 02:50:00 -07:00
Elias Ellison
7627dd568a hardswish reland (#57652)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57652

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D28226724

Pulled By: eellison

fbshipit-source-id: 585a91ffab7a855b5600e79130a37be25ef9b354
2021-05-05 17:21:43 -07:00
Shen Li
887d0e5657 Revert D28197820: [JIT][NNC] add hardswish symbolic gradient and NNC lowering
Test Plan: revert-hammer

Differential Revision:
D28197820 (0142fd0b57)

Original commit changeset: 05305d85c5bb

fbshipit-source-id: 2e1d9699515982ba2a9be06e83a2ce043ec857ee
2021-05-05 07:53:30 -07:00
eellison
0142fd0b57 [JIT][NNC] add hardswish symbolic gradient and NNC lowering (#57383)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57383

Notes: I picked up an activation from https://github.com/pytorch/pytorch/issues/56969. You can look at the [activations.cpp](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cpu/Activation.cpp#L429) file which has both forward and backward kernel code to help you write the NNC lowering and the symbolic gradient.

I added a test in test_jit_fuser_te for the fusion, and I added an OpInfo and asserted that we expect to see autodiffable nodes to test the symbolic gradient.

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D28197820

Pulled By: eellison

fbshipit-source-id: 05305d85c5bb0847c8f911b95ba47b137dca7e90
2021-05-04 23:39:59 -07:00
kshitij12345
154eca0309 OpInfo: ravel, view, view_as (#56910)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56910

Reviewed By: ngimel

Differential Revision: D28141867

Pulled By: mruberry

fbshipit-source-id: bff49d40d7e3bb36bc83d1405bd77f5529eeffe9
2021-05-02 22:10:36 -07:00
Yukio Siraichi
ce4449918a Port reverse binary ops to OpInfo (#56471)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54296
Tracking Issue https://github.com/pytorch/pytorch/issues/54261

**Summary:**
- `rsub` (aten function) was already ported
- Ported tests for its dunder version: `__rsub__`
- Ported tests for the other dunder functions: `__radd__`, `__rmul__`, `__rdiv__`, `__rpow__`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56471

Reviewed By: ngimel

Differential Revision: D28142843

Pulled By: mruberry

fbshipit-source-id: 3d1bd88a4f124774f48d33a7ca7bfc7f796360df
2021-05-02 16:01:12 -07:00
Heitor Schueroff
57e37080cd Added OpInfo for torch.einsum (#56276)
Summary:
Adds OpInfo testing for torch.einsum.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56276

Reviewed By: mruberry

Differential Revision: D27967095

Pulled By: heitorschueroff

fbshipit-source-id: 60524273d2ca885e7eeb932db3e7fd697ae5ca8e
2021-04-27 07:39:38 -07:00
albanD
10fd7d8be6 Add option to OpInfo to skip gradgrad check and empty cdist OpInfo (#56603)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56603

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D27939204

Pulled By: albanD

fbshipit-source-id: c7c80551ef3c34c822832891a99104440893ea4c
2021-04-23 14:06:33 -07:00
Allen (Congcong) Chen
798dd4665d Add a new API replace_input_with to node.py (#55887)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55887

Reviewed By: jfix71

Differential Revision: D27731389

fbshipit-source-id: 754654e64c4f3a584dfea06322d833bc11bcc3cc
2021-04-23 11:37:41 -07:00
Joel Schlosser
7d2a9f2dc9 Fix instance norm input size validation + test (#56659)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45687

The fix changes the input size check for `InstanceNorm*d` to be more restrictive, correctly rejecting sizes with only a single spatial element, regardless of batch size, to avoid infinite variance.
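A small sketch (not this PR's test) of the tightened check:
```
import torch

norm = torch.nn.InstanceNorm1d(3)
try:
    norm(torch.randn(8, 3, 1))  # any batch size, but only one spatial element
except ValueError as e:
    print(e)
```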

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56659

Reviewed By: pbelevich

Differential Revision: D27948060

Pulled By: jbschlosser

fbshipit-source-id: 21cfea391a609c0774568b89fd241efea72516bb
2021-04-23 10:53:39 -07:00
Suraj Subramanian
78022aa62c Add more model symbolic tracing tests from torchvision (#55744)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/55398

Generates tests that call `symbolic_trace` on torchvision models and verify the parity of outputs from the eager model, `fx.GraphModule`, and `jit.ScriptModule`.

Test errors: GoogleNet and Inception models throw a type mismatch when scripting the traced `fx.GraphModule`.
```
Return value was annotated as having type __torch__.torchvision.models.googlenet.GoogLeNetOutputs but is actually of type Tensor:
    dropout = self.dropout(flatten);  flatten = None
    fc = self.fc(dropout);  dropout = None
    return fc
    ~~~~~~~~~ <--- HERE
```

Relevant type-inconsistency 512ea299d4/torchvision/models/googlenet.py (L200)
```
    @torch.jit.unused
    def eager_outputs(self, x: Tensor, aux2: Tensor, aux1: Optional[Tensor]) -> GoogLeNetOutputs:
        if self.training and self.aux_logits:
            return _GoogLeNetOutputs(x, aux2, aux1)
        else:
            return x   # type: ignore[return-value]
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55744

Reviewed By: albanD

Differential Revision: D27920595

Pulled By: suraj813

fbshipit-source-id: 01f6f2aef7badbde29b5162a7787b5af9398090d
2021-04-22 08:54:06 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
kshitij12345
df8bb5a42b Add OpInfo for polygamma and remove torch_op_tests Infra (#51966)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

* OpInfo entry for Polygamma
* Removes infra of torch_op_tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51966

Reviewed By: bdhirsh

Differential Revision: D27851858

Pulled By: mruberry

fbshipit-source-id: 7f1d0273065e1df56a152f95a14513959af29a1b
2021-04-20 01:03:09 -07:00
James Reed
d02919dd50 [FX] Make shape_prop handle targets with aggregate outputs (#56221)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56221

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D27810693

Pulled By: jamesr66a

fbshipit-source-id: 17c6ad671786b3bacb5026bd88b8f5b7b4b96a1a
2021-04-16 18:58:25 -07:00
Erjia Guan
b96cc9ab20 [FX][testing] Test tracing into all the standard torch.nn.functional (#55550)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55550

Add a test for `symbolic_trace` into `torch.nn.functional`

Test against all `functional`s with `torch.Tensor` argument and `functional`s from `FUNCTIONALS_WITHOUT_ANNOTATION`.
```py
FUNCTIONALS_WITHOUT_ANNOTATION = (
        "adaptive_max_pool1d",
        "adaptive_max_pool2d",
        "adaptive_max_pool3d",
        "fractional_max_pool2d",
        "fractional_max_pool3d",
        "max_pool1d",
        "max_pool2d",
        "max_pool3d",
        "gaussian_nll_loss",
        "upsample",
        "upsample_bilinear",
        "upsample_nearest",
    )
```

`UNTRACEABLE_FUNCTIONALS` lists 110 current untraceable `functional`s with expected `Error`.
- `BUILT_IN_FUNC`: built-in functions or built-in methods can not be traced.
- `PROXY_ITERATED`: Proxy object cannot be iterated. This can be attempted when used in a for loop or as a *args or **kwargs function argument
- `LEN_ERROR`: 'len' is not supported in symbolic tracing by default. If you want this call to be recorded, please call torch.fx.wrap('len') at module scope
- `ARG_TYPE_MISMATCH`: `functional()`: argument <name> (position <n>) must be <type>, not Proxy
- `CONTROL_FLOW`: symbolically traced variables cannot be used as inputs to control flow
- `INTERPOLATE_ARGS_CONFLICT`: When tracing the functional by calling `interpolate(input, size, scale_factor, mode="bilinear", align_corners=True)`, `ValueError("only one of size or scale_factor should be defined")` is raised

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D27659367

Pulled By: ejguan

fbshipit-source-id: d0d05e4d94e0b85f47e6c171a31f0d41b1387373
2021-04-16 06:48:02 -07:00
James Reed
2236f43da0 [FX] Put tensor metadata into a NamedTuple in ShapeProp (#55930)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55930

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27741730

Pulled By: jamesr66a

fbshipit-source-id: 0a0a1b94beed6c482add9e9551f316f3b4220ab2
2021-04-13 22:21:50 -07:00
James Reed
8bdea14cd3 [FX] Add memory_format to shape_prop (#55815)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55815

Test Plan: Imported from OSS

Reviewed By: pbelevich, ansley

Differential Revision: D27716342

Pulled By: jamesr66a

fbshipit-source-id: f7c22dd77a4f48650700fc4c3c44b1c59196282e
2021-04-13 16:37:54 -07:00
Yukio Siraichi
93bf0ae6fc Remove legacy constructor calls from pytorch codebase. (#54142)
Summary:
Follow up from https://github.com/pytorch/pytorch/issues/53889
Related to https://github.com/pytorch/pytorch/issues/47112

Removing every occurrence of the legacy constructor call present in PyTorch at:
- _docs_
- _benchmarks_
- _test_
- _caffe2_
- _CONTRIBUTING.md_

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54142

Reviewed By: ngimel

Differential Revision: D27699450

Pulled By: mruberry

fbshipit-source-id: 530aa3f5746cc8bc1407d5d51b2bbd8075e30546
2021-04-11 15:45:17 -07:00
Nikita Shulga
add49e7e4e Enforce PEP263 for PyTorch python codebase (#55346)
Summary:
All Python files containing non-ASCII characters should be correctly annotated with a `# -*- coding: utf-8 -*-` comment

Delete a number of superfluous UTF-8 characters, most commonly the UTF-8 closing quotation mark U+2019 (’) used instead of the ASCII apostrophe ', for example `Module’s`->`Module's`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55346

Reviewed By: samestep

Differential Revision: D27582044

Pulled By: malfet

fbshipit-source-id: c1cd89655915858ff3a41f675cdfffff795a8e44
2021-04-06 18:31:38 -07:00
James Reed
641d4ff160 [FX] Add stride to shape_prop pass (#55108)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55108

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27482241

Pulled By: jamesr66a

fbshipit-source-id: 7d928015712126e916c86225dc3ab27aba22d431
2021-04-02 19:57:11 -07:00
Horace He
1324b0dd44 [FX] Adds C-level monkeypatching of torch.randn so that we can capture it during tracing. (#54060)
Summary:
```
def foo(x):
    return x + torch.randn(3, 3)

fx.enable_ctracing(True)
print(fx.symbolic_trace(foo).code)
```
results in
```
def forward(self, x):
    randn = torch.randn(3, 3)
    add = x + randn;  x = randn = None
    return add
```

Seems to slow down tracing by 1.5-3x.

DenseNet121: 0.05 -> 0.12 seconds
ResNet18: 0.10 -> 0.15

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54060

Reviewed By: jamesr66a

Differential Revision: D27208978

Pulled By: Chillee

fbshipit-source-id: b9e19a9b1084dadfc0dfaee41a03bc25a45910b1
2021-04-01 07:34:31 -07:00
Heitor Schueroff
5d68b3695c [Relanding] Implemented torch.linalg.multi_dot (#52859)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52859

This reverts commit 92a4ee1cf6.

Added support for bfloat16 on CUDA 11 and removed the fast path for empty input tensors that was affecting the autograd graph.

Test Plan: Imported from OSS

Reviewed By: H-Huang

Differential Revision: D27402390

Pulled By: heitorschueroff

fbshipit-source-id: 73c5ccf54f3da3d29eb63c9ed3601e2fe6951034
2021-04-01 04:49:05 -07:00
James Reed
bcb4583170 [FX] Add a metadata dict to Node and switch shapeprop to use that (#54926)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54926

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27417801

Pulled By: jamesr66a

fbshipit-source-id: 68a5155120a235065f58aa64ba1a6a97818dd0c1
2021-03-31 14:36:54 -07:00
Kurt Mohler
49b07ac5d1 Enable complex autograd for index, add index and index_put OpInfos (#54562)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53605

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54562

Reviewed By: malfet

Differential Revision: D27300086

Pulled By: anjali411

fbshipit-source-id: 23e8335e6e4c8f10888b5c54a040880c5b499215
2021-03-29 14:36:43 -07:00
James Reed
a28c7db9f9 [FX] Garbage collect values in Interpreter (#54726)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54726

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27341449

Pulled By: jamesr66a

fbshipit-source-id: 9dc5f9675ed197dee4a31c8b0e6276248378f1ea
2021-03-25 20:35:32 -07:00
James Reed
4a74b0f2dd Fix logic in TestFX.test_get_torch_func_signature_exhaustive (#54510)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54510

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27264670

Pulled By: jamesr66a

fbshipit-source-id: 0ef6395dacde3eb2a4b9c7eeff760a1be38b6dfe
2021-03-23 16:23:25 -07:00
Mike Ruberry
7b939d934e Simplifes OpInfo test matrix to reduce test time (#53255)
Summary:
This PR:

- Updates the structure of the SampleInput class to require the "input" attribute be a tensor
- Limits unary ufuncs to test only the uint8, long, float16, bfloat16, float and cfloat dtypes by default
- Limits variant testing to the float dtype
- Removes test_variant_consistency from test_unary_ufuncs.py since it's now redundant with variant testing in test_ops.py
- Adds backwards supported testing to clarify failures that were coming from variant testing

This should decrease test e2e time.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53255

Reviewed By: ngimel

Differential Revision: D27043643

Pulled By: mruberry

fbshipit-source-id: 91d6b483ad6e2cd1b9ade939d42082980ae14217
2021-03-22 03:48:27 -07:00
James Reed
255b103c1b [WIP] Function to retrieve inspect.Signature instances for PyTorch ops (#53830)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53830

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26982802

Pulled By: jamesr66a

fbshipit-source-id: 18fddc9f3f34b09e173de59f2fe886f8eedd000e
2021-03-17 20:41:27 -07:00
Jordan Fix
0806126aad [fx][trivial] Add TestConstFold coverage to test_fx (#54072)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54072

att

Test Plan: Adding coverage

Differential Revision: D27085591

fbshipit-source-id: 8c5ea5a52be619249f23a938ddb0a3aed1ada0f7
2021-03-17 10:38:54 -07:00
Ansley Ussery
08f04c0db2 Test forward reference annotations (#53713)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53713

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26946847

Pulled By: ansley

fbshipit-source-id: 2f99247c4b54ee06dcb54b23fdcee3537643cad4
2021-03-15 19:40:26 -07:00
Jordan Fix
3b0e4a6ed4 [GraphModule] Improve buffer registration during init (#53444)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53444

GraphModule construction has two options when constructing the base nn.Module: a dict of names to attrs to assign to the GraphModule, or another nn.Module to copy attrs from.

- For the dict case, add logic to explicitly register `torch.Tensor`s that are not `nn.Parameter` as buffers on the GraphModule, else fall back to `__setattr__` (see the sketch after this list).
- For the other `nn.Module` case, update so that it checks in the other module whether the attr to copy in is a buffer, and register it as such, else fall back to `__setattr__`.
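A minimal sketch (not this diff's test) of the dict form:
```
import torch
import torch.fx

g = torch.fx.Graph()
x = g.placeholder("x")
w = g.get_attr("w")
g.output(g.call_function(torch.add, (x, w)))

# A plain tensor in the dict is registered as a buffer rather than a bare attribute.
gm = torch.fx.GraphModule({"w": torch.ones(3)}, g)
print(dict(gm.named_buffers()))
```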

Test Plan: Added tests for fetching params and buffers from a GraphModule using both dict and module `__init__`s

Reviewed By: jamesr66a

Differential Revision: D26860055

fbshipit-source-id: 8d9999f91fef20aaa10969558006fc356247591f
2021-03-09 21:05:01 -08:00
Jordan Fix
5b52ff6c8e [fx] Add DCE pass (#52658)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52658

DCE will iterate over the graph in reverse, looking for nodes without users, and delete them. It will skip unused placeholders (since removing them affects the signature of the method) and outputs (which never have users, but we want to keep them :) )

Test Plan: Added unit tests

Reviewed By: jamesr66a, khabinov, chenccfb

Differential Revision: D26602212

fbshipit-source-id: f4f196973e40546076636090bb0008c24f33795e
2021-03-08 19:54:56 -08:00
James Reed
1fe6a6507e [WIP][FX] Fix tracing support for torchbind (#52884)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52884

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D26675801

Pulled By: jamesr66a

fbshipit-source-id: 8e5100bcea17589a53163abf6ab991658e11fa3a
2021-03-05 23:40:16 -08:00
Ansley Ussery
85109ce427 Support submodule manipulation in GraphModule (#52358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52358

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26759260

Pulled By: ansley

fbshipit-source-id: 25d2b9124a7d957704f1700a45dca143aaed391d
2021-03-04 14:52:35 -08:00
James Reed
8b5b7fa83d [WIP][FX] Optionally record stack traces when symtracing (#53081)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53081

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D26742402

Pulled By: jamesr66a

fbshipit-source-id: 7987f9ddf061f6de3b4a638d98e0fae6d68d90c6
2021-03-03 12:30:43 -08:00
James Reed
f40c9db622 [FX][EZ] Hoist custom class .so loading into setUp (#52883)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52883

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26675802

Pulled By: jamesr66a

fbshipit-source-id: 7a7bcb1d0a6f8c9b1431bc3e09143ada6e5fbf4d
2021-02-25 18:46:05 -08:00
Michael Suo
958d9a8364 [fx/package] make GraphModules packageable (#51976)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51976

FX serializes things by serializing Python code as a string and exec'ing
it on load. This accomplishes one goal (we don't have to pickle the
graph object directly) but breaks the pickle abstraction in ways that
are not composable with `torch.package`.

In particular:
1. `forward` is serialized by saving Python code. On load, it's installed
by `exec`ing that code. This `exec` call needs to have the right
importer installed, otherwise it will not import modules from the
`torch.package` but instead import from the Python environment.
2. Any types/functions used are emitted as `import` statements in the
generated Python code. These are effectively dynamic dependencies of the
`GraphModule` being saved, and need to be registered as such so that the
`PackageImporter` will package them.

To address these, this PR introduces a new protocol for the
importer/exporter: `__reduce_package__`.

A class can implement `__reduce_package__` to customize how it is placed
in the importer/exproter. It functions very similarly to `__reduce__`,
except:
- `__reduce_package__` takes one argument, which is the
`PackageExporter`
instance. Users can use this instance to save stuff to the package to
implement their serialization. `__reduce__` takes no args.
- Only the 2-element tuple version of the return value for `__reduce__`
is supported (this could be extended if necessary).
- When the reduction function is called on load, an additional argument
is added to the beginning of the args tuple. This is the
`PackageImporter`
instance doing the loading.

The `__reduce_package__` protocol is defined using `persistent_id` and
`persistent_load`, which ensures that we can still use the cpickle
implementation of the pickler by default.

Pull Request resolved: #51971

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D26340591

Pulled By: suo

fbshipit-source-id: 5872a7d22e832056399a7372bae8a57807717882
2021-02-23 22:43:00 -08:00
Shiyan Deng
238b0bbb68 Allow Transformer accept output result that is not Proxy (#52473)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52473

Use `map_aggregate` to create the output for the new graph so that it won't raise an error when we have outputs that are not `Proxy`.

Test Plan: `test_transformer_multi_outputs` in `test_fx.py`

Reviewed By: jamesr66a

Differential Revision: D26502277

fbshipit-source-id: 404d9030a9b84db3f66f8505887a75717a28ad30
2021-02-23 19:28:37 -08:00
Michael Suo
ecf3ca00d8 [fx] Separate globals assignment from code generation (#51974)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51974

Right now, when an FX `Graph` references an external object, we will emit
code like:

    import foo
    def forward(input: foo.bar.baz):
        ...

This is problematic in a world with `torch.package`, since the name
`foo.bar.baz` may reference a name from any number of packages.

This PR lays the groundwork for FX-package integration by separating the
resolution of external references from the generation of the function
code.

When generating a Graph's Python source, we keep track of all external
references and assign them unique names. At the end, we have a
dictionary mapping names -> actual objects. This becomes the `globals`
namespace we pass to `exec` when installing the forward function in a
`GraphModule`. This is nice because we can always be sure that `exec` is
seeing the same objects that were referenced from the `Graph`, no import
statements needed.

At serialization time, we use a `ModuleEnv` to resolve the globals dict
to a set of import statements that can be run to reproduce the `globals`
namespace. This is only used on serialization/deserialization, and those
functions are expected to check that the import statements are producing
the correct results.

Concretely, the code above will now look like:

    from foo.bar import baz as foo_bar_baz
    def forward(input: foo_bar_baz):
        ...

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26340593

Pulled By: suo

fbshipit-source-id: fe247f75205d0a03fd067bdd0f95491e8edf1436
2021-02-23 13:48:03 -08:00
Ansley Ussery
d8bb932245 Support AST rewriting for submodules (#52297)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52297

Before, an `nn.Module` with submodules would fail AST rewriting with `TypeError: 'RewrittenModule' object does not support item assignment`. (Try the `test_ast_rewriter_reassigns_submodules` test case on `master`.) This PR fixes the issue and adds additional test cases

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26483820

Pulled By: ansley

fbshipit-source-id: 757e898dc2b0a67daf2bd039d555b85f4e443322
2021-02-17 09:08:07 -08:00