Commit Graph

212 Commits

Zeina Migeed
9f3167ebdf task 1: annotate (#60621)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/60621

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D29493619

Pulled By: migeed-z

fbshipit-source-id: 1bd3fb02c90ae5b394869a474b2e6b06af0d4791
2021-07-06 16:48:11 -07:00
kshitij12345
dfd2edc025 [special] add zeta (#59623)
Summary:
Reference https://github.com/pytorch/pytorch/issues/50345

`zeta` was already present in the codebase to support computation of `polygamma`.

However, `zeta` only had a `double(double, double)` signature **for CPU** before this PR (which meant that `polygamma` computations were always upcast to `double` for the zeta part).

With this PR, float computations will take place in float and double in double.

I have also refactored the code and moved the duplicated code from `Math.cuh` to `Math.h`.

**Note**: For scipy, `q` is optional, and if it is `None` it defaults to `1`, which corresponds to the Riemann zeta function. However, for `torch.special.zeta`, I made `q` mandatory because it feels odd that without `q` this is the Riemann zeta and with `q` it is the general Hurwitz zeta. Sticking to just the general form made more sense, as passing `1` for `q` is trivial.
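
To make the `q=1` case concrete, a quick sketch (my own illustration, not from the PR):

```python
import math
import torch

# The Hurwitz zeta at q=1 reduces to the Riemann zeta function:
# zeta(2, 1) = pi**2 / 6 ~= 1.6449
x = torch.tensor(2.0)
q = torch.tensor(1.0)
print(torch.special.zeta(x, q))   # tensor(1.6449)
print(math.pi ** 2 / 6)           # 1.6449340668482264
```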

Verify:
* [x] Docs https://14234587-65600975-gh.circle-artifacts.com/0/docs/special.html#torch.special.zeta

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59623

Reviewed By: ngimel

Differential Revision: D29348269

Pulled By: mruberry

fbshipit-source-id: a3f9ebe1f7724dbe66de2b391afb9da1cfc3e4bb
2021-06-24 00:00:12 -07:00
Jordan Fix
f65793507d [fx][Transformer] Add override for call_function (#60057)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60057

This ensures that if a function was `wrap`'d before symbolic tracing and then passed into the Transformer, it will still be wrapped.

Test Plan: Added test to `test_fx.py`

Reviewed By: jamesr66a

Differential Revision: D29151191

fbshipit-source-id: 93560be59505bdcfe8d4f013e21d4719788afd59
2021-06-16 17:25:55 -07:00
kshitij12345
da972afdcd OpInfo: to_sparse (#59445)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59445

Reviewed By: ngimel

Differential Revision: D28920866

Pulled By: mruberry

fbshipit-source-id: ba8d3071d9937096288b69511000eeb007f53434
2021-06-05 19:13:58 -07:00
Akifumi Imanishi
0a5bfa9919 Support __rmod__ (#58476)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58035.

This PR implements `torch.Tensor.__rmod__` and `torch.remainder(scalar, tensor)` for compatibility with NumPy’s interface.
(cc: mruberry, rgommers, emcastillo, kmaehashi)
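
For illustration (my own sketch, not from the PR), the reflected modulo now works with a Python scalar on the left:

```python
import torch

t = torch.tensor([3, 4])
print(5 % t)                  # dispatches to Tensor.__rmod__ -> tensor([2, 1])
print(torch.remainder(5, t))  # scalar-first form -> tensor([2, 1])
```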

TODO:
  - [x] Update `tensor_binary_op` in test/test_binary_ufuncs.py after https://github.com/pytorch/pytorch/issues/58216 is merged.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58476

Reviewed By: ngimel

Differential Revision: D28776810

Pulled By: mruberry

fbshipit-source-id: 74f8aea80f439ef2cc370333524e39971eeb7bf4
2021-06-05 16:19:24 -07:00
kshitij12345
6620d7d688 OpInfo: norm (#59259)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

EDIT:
~~Tests take a whopping 4 mins to run 😓~~ (the filtered tests also included linalg norm)

Newly added tests take around 2 mins.
```
==================================================== 193 passed, 224 skipped, 27224 deselected, 5 warnings in 138.87s (0:02:18) ====================================================
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59259

Reviewed By: jbschlosser

Differential Revision: D28833962

Pulled By: mruberry

fbshipit-source-id: 40b24d6a8cb8b7d231b2f6b34b87cee4f136c5f9
2021-06-03 08:25:58 -07:00
krshrimali
ef40757de3 OpInfo: zero_ (#58731)
Summary:
See https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58731

Reviewed By: ngimel

Differential Revision: D28784083

Pulled By: mruberry

fbshipit-source-id: f06de8045afd3728b1fedc014c091d8fd1955a9f
2021-05-30 21:49:29 -07:00
kshitij12345
445e838210 OpInfo: resize_, resize_as_ (#59176)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59176

Reviewed By: ngimel

Differential Revision: D28780083

Pulled By: mruberry

fbshipit-source-id: 472584e8faa4cb1031908df097849d2d4167fdf5
2021-05-30 18:53:17 -07:00
kshitij12345
d68df54269 OpInfo: fill_ (#59138)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59138

Reviewed By: ngimel

Differential Revision: D28776451

Pulled By: mruberry

fbshipit-source-id: 2e8e9f1805ec7d900223ea749a4a0b86a1bedb54
2021-05-29 00:35:02 -07:00
kshitij12345
c9af4c2636 OpInfo: where (#58349)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58349

Reviewed By: mrshenli

Differential Revision: D28744220

Pulled By: mruberry

fbshipit-source-id: 893a2fb88a48a60df75c7d6e2f58a42ca949daa7
2021-05-28 18:22:03 -07:00
Ansley Ussery
5268b5a29a Add parsing logic for Tuple[()] annotation (#58340)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58340

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D28459502

Pulled By: ansley

fbshipit-source-id: 4bb188448d66269b42b068858b895debac86e9ee
2021-05-25 12:12:43 -07:00
Adnios
09a8f22bf9 Add mish activation function (#58648)
Summary:
See issue: https://github.com/pytorch/pytorch/issues/58375

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58648

Reviewed By: gchanan

Differential Revision: D28625390

Pulled By: jbschlosser

fbshipit-source-id: 23ea2eb7d5b3dc89c6809ff6581b90ee742149f4
2021-05-25 10:36:21 -07:00
kshitij12345
f9e8dc005a OpInfo: clone, contiguous (#58390)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58390

Reviewed By: soulitzer

Differential Revision: D28567821

Pulled By: mruberry

fbshipit-source-id: bcf42cb4a9a57d8a15a76819b8a9e2df97cf00be
2021-05-22 18:25:31 -07:00
James Reed
36adc3f04d [FX] Add APIs to mutate specific args/kwargs (#58571)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58571

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D28543359

Pulled By: jamesr66a

fbshipit-source-id: 44812d04886e653b5439c880dd831ecbc893fe23
2021-05-19 14:54:16 -07:00
Akifumi Imanishi
3113a1de4a Fix some tensor operators to return NotImplemented for invalid inputs (#58216)
Summary:
Same as https://github.com/pytorch/pytorch/issues/57934. (cc/ albanD)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58216

Reviewed By: ailzhang

Differential Revision: D28494886

Pulled By: albanD

fbshipit-source-id: 380205867ee1cde90e1c6fcfe2a31749e1243530
2021-05-19 13:09:57 -07:00
James Reed
7b73fdf597 [FX] Fix retracing wrapped functions (#58061)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58061

Test Plan: Imported from OSS

Reviewed By: yuhc

Differential Revision: D28358801

Pulled By: jamesr66a

fbshipit-source-id: c7c9a8a80e5bfe1eb1f6d2cf858ac7e57153a860
2021-05-17 19:50:16 -07:00
James Reed
00156d4845 [FX][WIP] Proxyable classes (#56737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56737

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D27953073

Pulled By: jamesr66a

fbshipit-source-id: fafc681af7bd5200a9ead2fd0720940913885575
2021-05-14 14:07:04 -07:00
Nick Korovaiko
c524448dd1 init hardshrink (#57749)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57749

Adds hardshrink to an fx test

Test Plan: Imported from OSS

Reviewed By: huiguoo

Differential Revision: D28425974

fbshipit-source-id: 195c7a1944decb7a2a99c2831cab38485f32be17
2021-05-13 19:38:05 -07:00
Alban Desmaison
5e83c62a9e Revert D28351931: [pytorch][PR] Fix some tensor operators to return NotImplemented for invalid inputs
Test Plan: revert-hammer

Differential Revision:
D28351931 (35521a2629)

Original commit changeset: 985457a44dba

fbshipit-source-id: 10724c219e53648f10a70719e25bcf774c6c7852
2021-05-12 13:58:03 -07:00
Akifumi Imanishi
35521a2629 Fix some tensor operators to return NotImplemented for invalid inputs (#57934)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/57719.

This PR fixes `torch.Tensor{__rsub__, __rdiv__, __rtruediv__, __pow__, __rmatmul__}` to return `NotImplemented` instead of raising a `TypeError`.
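
As a sketch of why returning `NotImplemented` matters (my own example, not from the PR): it lets Python fall back to the other operand's reflected method instead of failing outright:

```python
import torch

class Wrapper:
    def __rsub__(self, other):
        return "handled by Wrapper.__rsub__"

# Tensor.__sub__ doesn't know how to handle Wrapper. By returning
# NotImplemented instead of raising, it lets Python try
# Wrapper.__rsub__ next.
print(torch.tensor(1.0) - Wrapper())
```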

cc/ mruberry: The first commit of this PR is the same as 1d209db1cc except for the commit message.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57934

Reviewed By: mruberry

Differential Revision: D28351931

Pulled By: albanD

fbshipit-source-id: 985457a44dba24d2496794dfb8c1661cbcd4ff8f
2021-05-12 11:03:23 -07:00
kshitij12345
ff982ef73d OpInfo: reshape, reshape_as and minor clean-up (#57460)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57460

Reviewed By: nairbv

Differential Revision: D28151675

Pulled By: anjali411

fbshipit-source-id: 2b3bcadab3ff5d1761b2922b63afd70a354e785c
2021-05-12 06:05:21 -07:00
Ansley Ussery
0d4dc6cb39 Let submodules be collected as args/kwargs (#57840)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57840

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D28294984

Pulled By: ansley

fbshipit-source-id: d64fe109a349516da69d2d17f58e42f98af564fd
2021-05-11 18:17:11 -07:00
James Reed
a13718b69f [FX] Make stack trace testing less strict (#58088)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58088

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D28365398

Pulled By: jamesr66a

fbshipit-source-id: 4d5d173721b4a917893a6f1202e3980aa6e85fcc
2021-05-11 15:34:06 -07:00
Nikita Shulga
b587354e4c Add Python-3.9 CI testing (#50992)
Summary:
Skips a number of tests and adjusts typing handling

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50992

Reviewed By: walterddr

Differential Revision: D26170388

Pulled By: malfet

fbshipit-source-id: 47852512aa3d5c25faf6687bcd0b1cbb332b0b20
2021-05-10 10:51:39 -07:00
Horace He
8d363d37da [FX] Adds PyTree support to FX through concrete_args (#55888)
Summary:
```
class Foo(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, y, x):
        for k in x:
            for v in x[k]:
                v += y
        return x

example_dict = {'x': {'a': [fx.HOLE], 'z': [fx.HOLE, fx.HOLE]}}
new_f = fx.symbolic_trace(Foo(), concrete_args=example_dict)
print(new_f.code)
new_f(torch.randn(5), {'x': {'a': [torch.randn(5)], 'z': [torch.randn(5), torch.randn(5)]}})

fx.symbolic_trace(new_f, concrete_args=example_dict)
```

prints out
```
def forward(self, y, x):
    y, tree_2, tree_3, tree_4 = pytree.tree_flatten([y, x])[0]
    add = tree_2 + y
    add_1 = tree_3 + y
    add_2 = tree_4 + y;  y = None
    return {'a': [tree_2], 'z': [tree_3, tree_4]}
```

Currently, I store `in_spec` as an extra attribute on `fx.Graph`, and then include it when we do the codegen. I'm not sure if this is the right approach - it introduces a divergence between what's in `fx.Graph` and what's in the python code.

Perhaps the best API is something explicit like `fx.Graph.flatten_args`, but that does make calling things a bit ... more verbose.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55888

Reviewed By: jamesr66a

Differential Revision: D27884694

Pulled By: Chillee

fbshipit-source-id: f9e8a70c63a8df63c9f9bd0a6459255daa5a8df8
2021-05-07 04:48:35 -07:00
kshitij12345
9e6b7e6e6e OpInfo: expand and expand_as (#57606)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57606

Reviewed By: albanD

Differential Revision: D28249191

Pulled By: mruberry

fbshipit-source-id: d985ab4e8a99b116c45953e621092929a9a8028e
2021-05-07 02:50:00 -07:00
Elias Ellison
7627dd568a hardswish reland (#57652)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57652

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D28226724

Pulled By: eellison

fbshipit-source-id: 585a91ffab7a855b5600e79130a37be25ef9b354
2021-05-05 17:21:43 -07:00
Shen Li
887d0e5657 Revert D28197820: [JIT][NNC] add hardswish symbolic gradient and NNC lowering
Test Plan: revert-hammer

Differential Revision:
D28197820 (0142fd0b57)

Original commit changeset: 05305d85c5bb

fbshipit-source-id: 2e1d9699515982ba2a9be06e83a2ce043ec857ee
2021-05-05 07:53:30 -07:00
eellison
0142fd0b57 [JIT][NNC] add hardswish symbolic gradient and NNC lowering (#57383)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57383

Notes: I picked up an activation from https://github.com/pytorch/pytorch/issues/56969. You can look at the [activations.cpp](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cpu/Activation.cpp#L429) file which has both forward and backward kernel code to help you write the NNC lowering and the symbolic gradient.

I added a test in test_jit_fuser_te for the fusion, and I added an OpInfo and asserted that we expect to see autodiffable nodes to test the symbolic gradient.

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D28197820

Pulled By: eellison

fbshipit-source-id: 05305d85c5bb0847c8f911b95ba47b137dca7e90
2021-05-04 23:39:59 -07:00
kshitij12345
154eca0309 OpInfo: ravel, view, view_as (#56910)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56910

Reviewed By: ngimel

Differential Revision: D28141867

Pulled By: mruberry

fbshipit-source-id: bff49d40d7e3bb36bc83d1405bd77f5529eeffe9
2021-05-02 22:10:36 -07:00
Yukio Siraichi
ce4449918a Port reverse binary ops to OpInfo (#56471)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54296
Tracking Issue https://github.com/pytorch/pytorch/issues/54261

**Summary:**
- `rsub` (aten function) was already ported
- Ported tests for its dunder version: `__rsub__`
- Ported tests for the other dunder functions: `__radd__`, `__rmul__`, `__rdiv__`, `__rpow__`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56471

Reviewed By: ngimel

Differential Revision: D28142843

Pulled By: mruberry

fbshipit-source-id: 3d1bd88a4f124774f48d33a7ca7bfc7f796360df
2021-05-02 16:01:12 -07:00
Heitor Schueroff
57e37080cd Added OpInfo for torch.einsum (#56276)
Summary:
Adds OpInfo testing for torch.einsum.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56276

Reviewed By: mruberry

Differential Revision: D27967095

Pulled By: heitorschueroff

fbshipit-source-id: 60524273d2ca885e7eeb932db3e7fd697ae5ca8e
2021-04-27 07:39:38 -07:00
albanD
10fd7d8be6 Add option to OpInfo to skip gradgrad check and empty cdist OpInfo (#56603)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56603

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D27939204

Pulled By: albanD

fbshipit-source-id: c7c80551ef3c34c822832891a99104440893ea4c
2021-04-23 14:06:33 -07:00
Allen (Congcong) Chen
798dd4665d Add a new API replace_input_with to node.py (#55887)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55887

Reviewed By: jfix71

Differential Revision: D27731389

fbshipit-source-id: 754654e64c4f3a584dfea06322d833bc11bcc3cc
2021-04-23 11:37:41 -07:00
Joel Schlosser
7d2a9f2dc9 Fix instance norm input size validation + test (#56659)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45687

This fix changes the input size check for `InstanceNorm*d` to be more restrictive, correctly rejecting sizes with only a single spatial element (regardless of batch size) to avoid infinite variance.
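
A minimal sketch of the new behavior (my own illustration):

```python
import torch
import torch.nn as nn

m = nn.InstanceNorm1d(3)
m(torch.randn(2, 3, 5))   # OK: 5 spatial elements per channel

# A single spatial element leaves the per-instance variance undefined,
# so this should now be rejected with an error regardless of batch size:
m(torch.randn(2, 3, 1))
```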

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56659

Reviewed By: pbelevich

Differential Revision: D27948060

Pulled By: jbschlosser

fbshipit-source-id: 21cfea391a609c0774568b89fd241efea72516bb
2021-04-23 10:53:39 -07:00
Suraj Subramanian
78022aa62c Add more model symbolic tracing tests from torchvision (#55744)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/55398

Generates tests that call `symbolic_trace` on torchvision models and verify the parity of outputs from the eager model, `fx.GraphModule`, and `jit.ScriptModule`.

Test errors: GoogleNet and Inception models throw a type mismatch when scripting the traced `fx.GraphModule`.
```
Return value was annotated as having type __torch__.torchvision.models.googlenet.GoogLeNetOutputs but is actually of type Tensor:
    dropout = self.dropout(flatten);  flatten = None
    fc = self.fc(dropout);  dropout = None
    return fc
    ~~~~~~~~~ <--- HERE
```

Relevant type inconsistency: 512ea299d4/torchvision/models/googlenet.py (L200)
```
@torch.jit.unused
def eager_outputs(self, x: Tensor, aux2: Tensor, aux1: Optional[Tensor]) -> GoogLeNetOutputs:
    if self.training and self.aux_logits:
        return _GoogLeNetOutputs(x, aux2, aux1)
    else:
        return x   # type: ignore[return-value]
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55744

Reviewed By: albanD

Differential Revision: D27920595

Pulled By: suraj813

fbshipit-source-id: 01f6f2aef7badbde29b5162a7787b5af9398090d
2021-04-22 08:54:06 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
kshitij12345
df8bb5a42b Add OpInfo for polygamma and remove torch_op_tests Infra (#51966)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/42515

* OpInfo entry for Polygamma
* Removes infra of torch_op_tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51966

Reviewed By: bdhirsh

Differential Revision: D27851858

Pulled By: mruberry

fbshipit-source-id: 7f1d0273065e1df56a152f95a14513959af29a1b
2021-04-20 01:03:09 -07:00
James Reed
d02919dd50 [FX] Make shape_prop handle targets with aggregate outputs (#56221)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/56221

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D27810693

Pulled By: jamesr66a

fbshipit-source-id: 17c6ad671786b3bacb5026bd88b8f5b7b4b96a1a
2021-04-16 18:58:25 -07:00
Erjia Guan
b96cc9ab20 [FX][testing] Test tracing into all the standard torch.nn.functional (#55550)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55550

Add a test for `symbolic_trace` into `torch.nn.functional`

Test against all `functional`s with a `torch.Tensor` argument, plus the `functional`s from `FUNCTIONALS_WITHOUT_ANNOTATION`:
```py
FUNCTIONALS_WITHOUT_ANNOTATION = (
        "adaptive_max_pool1d",
        "adaptive_max_pool2d",
        "adaptive_max_pool3d",
        "fractional_max_pool2d",
        "fractional_max_pool3d",
        "max_pool1d",
        "max_pool2d",
        "max_pool3d",
        "gaussian_nll_loss",
        "upsample",
        "upsample_bilinear",
        "upsample_nearest",
    )
```

`UNTRACEABLE_FUNCTIONALS` lists 110 current untraceable `functional`s with expected `Error`.
- `BUILT_IN_FUNC`: built-in functions or built-in methods can not be traced.
- `PROXY_ITERATED`: Proxy object cannot be iterated. This can be attempted when used in a for loop or as a *args or **kwargs function argument
- `LEN_ERROR`: 'len' is not supported in symbolic tracing by default. If you want this call to be recorded, please call torch.fx.wrap('len') at module scope
- `ARG_TYPE_MISMATCH`: `functional()`: argument <name> (position <n>) must be <type>, not Proxy
- `CONTROL_FLOW`: symbolically traced variables cannot be used as inputs to control flow (see the sketch after this list)
- `INTERPOLATE_ARGS_CONFLICT`: When tracing the functional by calling `interpolate(input, size, scale_factor, mode="bilinear", align_corners=True)`, `ValueError("only one of size or scale_factor should be defined")` is raised
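
A sketch of the most common category above, `CONTROL_FLOW` (my own example, not from the test): a data-dependent branch cannot be symbolically traced because the `Proxy` cannot be coerced to `bool`:

```python
import torch
import torch.fx as fx

def relu_like(x):
    if x.sum() > 0:   # data-dependent control flow
        return x
    return torch.zeros_like(x)

try:
    fx.symbolic_trace(relu_like)
except Exception as e:
    # "symbolically traced variables cannot be used as inputs to control flow"
    print(e)
```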

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D27659367

Pulled By: ejguan

fbshipit-source-id: d0d05e4d94e0b85f47e6c171a31f0d41b1387373
2021-04-16 06:48:02 -07:00
James Reed
2236f43da0 [FX] Put tensor metadata into a NamedTuple in ShapeProp (#55930)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55930

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27741730

Pulled By: jamesr66a

fbshipit-source-id: 0a0a1b94beed6c482add9e9551f316f3b4220ab2
2021-04-13 22:21:50 -07:00
James Reed
8bdea14cd3 [FX] Add memory_format to shape_prop (#55815)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55815

Test Plan: Imported from OSS

Reviewed By: pbelevich, ansley

Differential Revision: D27716342

Pulled By: jamesr66a

fbshipit-source-id: f7c22dd77a4f48650700fc4c3c44b1c59196282e
2021-04-13 16:37:54 -07:00
Yukio Siraichi
93bf0ae6fc Remove legacy constructor calls from pytorch codebase. (#54142)
Summary:
Follow up from https://github.com/pytorch/pytorch/issues/53889
Related to https://github.com/pytorch/pytorch/issues/47112

Removing every occurrence of the legacy constructor call present in PyTorch at:
- _docs_
- _benchmarks_
- _test_
- _caffe2_
- _CONTRIBUTING.md_

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54142

Reviewed By: ngimel

Differential Revision: D27699450

Pulled By: mruberry

fbshipit-source-id: 530aa3f5746cc8bc1407d5d51b2bbd8075e30546
2021-04-11 15:45:17 -07:00
Nikita Shulga
add49e7e4e Enforce PEP263 for PyTorch python codebase (#55346)
Summary:
All Python files containing non-ASCII characters should be correctly annotated with a `# -*- coding: utf-8 -*-` comment.

Deletes a number of superfluous UTF-8 characters, most commonly the right single quotation mark U+2019 (’) used instead of the ASCII apostrophe ', for example `Module’s` -> `Module's`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55346

Reviewed By: samestep

Differential Revision: D27582044

Pulled By: malfet

fbshipit-source-id: c1cd89655915858ff3a41f675cdfffff795a8e44
2021-04-06 18:31:38 -07:00
James Reed
641d4ff160 [FX] Add stride to shape_prop pass (#55108)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55108

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27482241

Pulled By: jamesr66a

fbshipit-source-id: 7d928015712126e916c86225dc3ab27aba22d431
2021-04-02 19:57:11 -07:00
Horace He
1324b0dd44 [FX] Adds C-level monkeypatching of torch.randn so that we can capture it during tracing. (#54060)
Summary:
```
def foo(x):
    return x + torch.randn(3, 3)

fx.enable_ctracing(True)
print(fx.symbolic_trace(foo).code)
```
results in
```
def forward(self, x):
    randn = torch.randn(3, 3)
    add = x + randn;  x = randn = None
    return add
```

Seems to slow down tracing by 1.5-3x.

DenseNet121: 0.05 -> 0.12 seconds
ResNet18: 0.10 -> 0.15

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54060

Reviewed By: jamesr66a

Differential Revision: D27208978

Pulled By: Chillee

fbshipit-source-id: b9e19a9b1084dadfc0dfaee41a03bc25a45910b1
2021-04-01 07:34:31 -07:00
Heitor Schueroff
5d68b3695c [Relanding] Implemented torch.linalg.multi_dot (#52859)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52859

This reverts commit 92a4ee1cf6.

Added support for bfloat16 on CUDA 11 and removed the fast path for empty input tensors that was affecting the autograd graph.

Test Plan: Imported from OSS

Reviewed By: H-Huang

Differential Revision: D27402390

Pulled By: heitorschueroff

fbshipit-source-id: 73c5ccf54f3da3d29eb63c9ed3601e2fe6951034
2021-04-01 04:49:05 -07:00
James Reed
bcb4583170 [FX] Add a metadata dict to Node and switch shapeprop to use that (#54926)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54926

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27417801

Pulled By: jamesr66a

fbshipit-source-id: 68a5155120a235065f58aa64ba1a6a97818dd0c1
2021-03-31 14:36:54 -07:00
Kurt Mohler
49b07ac5d1 Enable complex autograd for index, add index and index_put OpInfos (#54562)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53605

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54562

Reviewed By: malfet

Differential Revision: D27300086

Pulled By: anjali411

fbshipit-source-id: 23e8335e6e4c8f10888b5c54a040880c5b499215
2021-03-29 14:36:43 -07:00
James Reed
a28c7db9f9 [FX] Garbage collect values in Interpreter (#54726)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54726

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27341449

Pulled By: jamesr66a

fbshipit-source-id: 9dc5f9675ed197dee4a31c8b0e6276248378f1ea
2021-03-25 20:35:32 -07:00
James Reed
4a74b0f2dd Fix logic in TestFX.test_get_torch_func_signature_exhaustive (#54510)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54510

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27264670

Pulled By: jamesr66a

fbshipit-source-id: 0ef6395dacde3eb2a4b9c7eeff760a1be38b6dfe
2021-03-23 16:23:25 -07:00
Mike Ruberry
7b939d934e Simplifes OpInfo test matrix to reduce test time (#53255)
Summary:
This PR:

- Updates the structure of the SampleInput class to require the "input" attribute be a tensor
- Limits unary ufuncs to test only the uint8, long, float16, bfloat16, float and cfloat dtypes by default
- Limits variant testing to the float dtype
- Removes test_variant_consistency from test_unary_ufuncs.py since it's now redundant with variant testing in test_ops.py
- Adds backwards supported testing to clarify failures that were coming from variant testing

This should decrease end-to-end test time.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53255

Reviewed By: ngimel

Differential Revision: D27043643

Pulled By: mruberry

fbshipit-source-id: 91d6b483ad6e2cd1b9ade939d42082980ae14217
2021-03-22 03:48:27 -07:00
James Reed
255b103c1b [WIP] Function to retrieve inspect.Signature instances for PyTorch ops (#53830)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53830

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26982802

Pulled By: jamesr66a

fbshipit-source-id: 18fddc9f3f34b09e173de59f2fe886f8eedd000e
2021-03-17 20:41:27 -07:00
Jordan Fix
0806126aad [fx][trivial] Add TestConstFold coverage to test_fx (#54072)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54072

As titled.

Test Plan: Adding coverage

Differential Revision: D27085591

fbshipit-source-id: 8c5ea5a52be619249f23a938ddb0a3aed1ada0f7
2021-03-17 10:38:54 -07:00
Ansley Ussery
08f04c0db2 Test forward reference annotations (#53713)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53713

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26946847

Pulled By: ansley

fbshipit-source-id: 2f99247c4b54ee06dcb54b23fdcee3537643cad4
2021-03-15 19:40:26 -07:00
Jordan Fix
3b0e4a6ed4 [GraphModule] Improve buffer registration during init (#53444)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53444

GraphModule construction has two options when constructing the base nn.Module: a dict of names to attrs to assign to the GraphModule, or another nn.Module to copy attrs from.

- For the dict case, add logic to explicitly register `torch.Tensor`s that are not `nn.Parameter`s as buffers on the GraphModule, else fall back to `__setattr__`.
- For the other `nn.Module` case, check whether the attr to copy in is a buffer in the other module and register it as such, else fall back to `__setattr__`.

Test Plan: Added tests for fetching params and buffers from a GraphModule using both dict and module `__init__`s

Reviewed By: jamesr66a

Differential Revision: D26860055

fbshipit-source-id: 8d9999f91fef20aaa10969558006fc356247591f
2021-03-09 21:05:01 -08:00
Jordan Fix
5b52ff6c8e [fx] Add DCE pass (#52658)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52658

DCE will iterate over the graph in reverse, looking for nodes without users, and delete them. It will skip over unused placeholders (since removing them would affect the signature of the method) and outputs (which never have users, but we want to keep them :) )
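
A small sketch of the pass in action (my own example, assuming the pass is exposed as `Graph.eliminate_dead_code`):

```python
import torch
import torch.fx as fx

def f(x):
    dead = x + 1   # has no users, so DCE can delete it
    return x * 2

gm = fx.symbolic_trace(f)
gm.graph.eliminate_dead_code()   # drops the unused add node
gm.recompile()
print(gm.code)   # the generated forward now only contains the mul
```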

Test Plan: Added unit tests

Reviewed By: jamesr66a, khabinov, chenccfb

Differential Revision: D26602212

fbshipit-source-id: f4f196973e40546076636090bb0008c24f33795e
2021-03-08 19:54:56 -08:00
James Reed
1fe6a6507e [WIP][FX] Fix tracing support for torchbind (#52884)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52884

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D26675801

Pulled By: jamesr66a

fbshipit-source-id: 8e5100bcea17589a53163abf6ab991658e11fa3a
2021-03-05 23:40:16 -08:00
Ansley Ussery
85109ce427 Support submodule manipulation in GraphModule (#52358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52358

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26759260

Pulled By: ansley

fbshipit-source-id: 25d2b9124a7d957704f1700a45dca143aaed391d
2021-03-04 14:52:35 -08:00
James Reed
8b5b7fa83d [WIP][FX] Optionally record stack traces when symtracing (#53081)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53081

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D26742402

Pulled By: jamesr66a

fbshipit-source-id: 7987f9ddf061f6de3b4a638d98e0fae6d68d90c6
2021-03-03 12:30:43 -08:00
James Reed
f40c9db622 [FX][EZ] Hoist custom class .so loading into setUp (#52883)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52883

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26675802

Pulled By: jamesr66a

fbshipit-source-id: 7a7bcb1d0a6f8c9b1431bc3e09143ada6e5fbf4d
2021-02-25 18:46:05 -08:00
Michael Suo
958d9a8364 [fx/package] make GraphModules packageable (#51976)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51976

FX serializes things by serializing Python code as a string and exec'ing it on load. This accomplishes one goal (we don't have to pickle the graph object directly) but breaks the pickle abstraction in ways that are not composable with `torch.package`.

In particular:
1. `forward` is serialized by saving Python code. On load, it's installed by `exec`ing that code. This `exec` call needs to have the right importer installed, otherwise it will not import modules from the `torch.package` but instead import from the Python environment.
2. Any types/functions used are emitted as `import` statements in the generated Python code. These are effectively dynamic dependencies of the `GraphModule` being saved, and need to be registered as such so that the `PackageExporter` will package them.

To address these, this PR introduces a new protocol for the importer/exporter: `__reduce_package__`.

A class can implement `__reduce_package__` to customize how it is placed in the importer/exporter. It functions very similarly to `__reduce__`, except:
- `__reduce_package__` takes one argument, which is the `PackageExporter` instance. Users can use this instance to save stuff to the package to implement their serialization. `__reduce__` takes no args.
- Only the 2-element tuple version of the return value for `__reduce__` is supported (this could be extended if necessary).
- When the reduction function is called on load, an additional argument is added to the beginning of the args tuple. This is the `PackageImporter` instance doing the loading.

The `__reduce_package__` protocol is defined using `persistent_id` and `persistent_load`, which ensures that we can still use the cPickle implementation of the pickler by default.
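
A hedged sketch of the protocol described above (my own illustration; the `save_text`/`load_text` calls and all class/function names here are assumptions for the example, not taken from this PR):

```python
from torch.package import PackageExporter, PackageImporter

def _load_config(importer: PackageImporter, pkg: str, resource: str):
    # On load, the PackageImporter instance doing the loading is
    # prepended to the args tuple returned by __reduce_package__.
    return Config(importer.load_text(pkg, resource))

class Config:
    def __init__(self, text: str):
        self.text = text

    def __reduce_package__(self, exporter: PackageExporter):
        # Save whatever this object needs into the package, then return
        # the 2-element (callable, args) tuple, as with __reduce__.
        exporter.save_text("configs", "cfg.txt", self.text)
        return (_load_config, ("configs", "cfg.txt"))
```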

Pull Request resolved: #51971

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D26340591

Pulled By: suo

fbshipit-source-id: 5872a7d22e832056399a7372bae8a57807717882
2021-02-23 22:43:00 -08:00
Shiyan Deng
238b0bbb68 Allow Transformer accept output result that is not Proxy (#52473)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52473

Use `map_aggregate` to create the output for the new graph so that it won't raise an error when we have outputs that are not `Proxy`.

Test Plan: `test_transformer_multi_outputs` in `test_fx.py`

Reviewed By: jamesr66a

Differential Revision: D26502277

fbshipit-source-id: 404d9030a9b84db3f66f8505887a75717a28ad30
2021-02-23 19:28:37 -08:00
Michael Suo
ecf3ca00d8 [fx] Separate globals assignment from code generation (#51974)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51974

Right now, when an FX `Graph` references an external object, we will emit
code like:

    import foo
    def forward(input: foo.bar.baz):
        ...

This is problematic in a world with `torch.package`, since the name `foo.bar.baz` may reference a name from any number of packages.

This PR lays the groundwork for FX-package integration by separating the resolution of external references from the generation of the function code.

When generating a Graph's Python source, we keep track of all external
references and assign them unique names. At the end, we have a
dictionary mapping names -> actual objects. This becomes the `globals`
namespace we pass to `exec` when installing the forward function in a
`GraphModule`. This is nice because we can always be sure that `exec` is
seeing the same objects that were referenced from the `Graph`, no import
statements needed.

At serialization time, we use a `ModuleEnv` to resolve the globals dict to a set of import statements that can be run to reproduce the `globals` namespace. This is only used on serialization/deserialization, and those functions are expected to check that the import statements are producing the correct results.

Concretely, the code above will now look like:

    from foo.bar import baz as foo_bar_baz
    def forward(input: foo_bar_baz):
        ...

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26340593

Pulled By: suo

fbshipit-source-id: fe247f75205d0a03fd067bdd0f95491e8edf1436
2021-02-23 13:48:03 -08:00
Ansley Ussery
d8bb932245 Support AST rewriting for submodules (#52297)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52297

Before, an `nn.Module` with submodules would fail AST rewriting with `TypeError: 'RewrittenModule' object does not support item assignment`. (Try the `test_ast_rewriter_reassigns_submodules` test case on `master`.) This PR fixes the issue and adds additional test cases.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26483820

Pulled By: ansley

fbshipit-source-id: 757e898dc2b0a67daf2bd039d555b85f4e443322
2021-02-17 09:08:07 -08:00
Ansley Ussery
4cc10563e7 Customize traceback for calls to symbolically-traced code (#51648)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51648

The following code will throw during the call to `traced(5)`:
```python
class M(nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.W = torch.nn.Parameter(torch.randn(5))

    def forward(self, x):
        return torch.dot(self.W, x)

traced = fx.symbolic_trace(M())
traced(5)
```

Traceback before:
```
Traceback (most recent call last):
  File "test/tinytest.py", line 26, in <module>
    traced(5)
  File "/home/ansley/local/pytorch/torch/fx/graph_module.py", line 338, in wrapped_call
    return self._cls_call(self, *args, **kwargs)
  File "/home/ansley/local/pytorch/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "<eval_with_key_0>", line 4, in forward
TypeError: dot(): argument 'tensor' (position 2) must be Tensor, not int
```

Traceback after:
```
Traceback (most recent call last):
  File "/home/ansley/local/pytorch/torch/fx/graph_module.py", line 338, in wrapped_call
    return torch.nn.Module.__call__(self, *args, **kwargs)
  File "/home/ansley/local/pytorch/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "<eval_with_key_1>", line 4, in forward
    dot_1 = torch.dot(w, x);  w = x = None
TypeError: dot(): argument 'tensor' (position 2) must be Tensor, not int

Call using an FX-traced Module, line 4 of the traced Module’s generated forward function:
    w = self.W
    dot_1 = torch.dot(w, x);  w = x = None

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
    relu_1 = dot_1.relu();  dot_1 = None

    return relu_1
```

(Note that the same `TypeError` is thrown despite modifying the traceback.)

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26424005

Pulled By: ansley

fbshipit-source-id: 368f46ba81fb3111bd09654825bb2ac5595207d1
2021-02-12 18:31:23 -08:00
James Reed
d23cb94098 [FX] Generalize dict key check in create-arg (#51927)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51927

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26329655

Pulled By: jamesr66a

fbshipit-source-id: a15e7d9564551521af12a8fde1c7524856f0cbc2
2021-02-09 21:52:22 -08:00
James Reed
256f93fb0f [FX][EZ] Fix tuple type annotations (#52010)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52010

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D26355481

Pulled By: jamesr66a

fbshipit-source-id: 27bbc5d8949beb68663f2e1e7963bec9afbef0cc
2021-02-09 20:32:30 -08:00
James Reed
d4e84b0c07 [FX] Fix leaf modules in Transformer (#51998)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51998

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D26352087

Pulled By: jamesr66a

fbshipit-source-id: ad8abc6507d4ea95fd3c99b226d1b40c3e9e64cf
2021-02-09 20:29:17 -08:00
Ansley Ussery
215d9daceb Refactor internal methods into debugging utilities (#51737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51737

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26288613

Pulled By: ansley

fbshipit-source-id: 4504b1af5be7a200c1a6a376d432d7224eb8a796
2021-02-05 21:42:18 -08:00
Horace He
2d305b97e9 [FX] Added partial concrete values for symbolic tracing (#51609)
Summary:
Currently it's passed in as a dict, but it might be worth considering whether we want to support other methods of passing it in (like a list corresponding to the positional args).
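
For reference, a sketch of the dict-based API described above (my own example):

```python
import torch
import torch.fx as fx

class M(torch.nn.Module):
    def forward(self, x, use_bias):
        if use_bias:   # ordinarily untraceable control flow
            return x + 1
        return x

# Fixing `use_bias` to a concrete value lets tracing proceed; the taken
# branch is baked into the traced graph.
traced = fx.symbolic_trace(M(), concrete_args={'use_bias': True})
print(traced.code)
```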

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51609

Reviewed By: zou3519

Differential Revision: D26224464

Pulled By: Chillee

fbshipit-source-id: 305769db1a6e5fdcfb9e7dcacfdf153acd057a5a
2021-02-04 12:06:02 -08:00
James Reed
a1c5eba4bd [FX] Move some heavily used passes out of experimental (#51392)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51392

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D26161172

Pulled By: jamesr66a

fbshipit-source-id: 04bfe606555bdf1988f527231d4de2e0196e6b37
2021-02-01 19:02:26 -08:00
James Reed
a3353d1ec0 [FX] Support ellipsis as arg (#51502)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51502

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D26186578

Pulled By: jamesr66a

fbshipit-source-id: 91943af38412bafc1766398dfaebdf50b64ccd74
2021-02-01 18:54:14 -08:00
James Reed
609f76f27a [WIP][FX] Add Interpreter and Transformer (#50420)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50420

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25880330

Pulled By: jamesr66a

fbshipit-source-id: 27d34888e36e39924821fed891d79f969237a104
2021-02-01 11:40:12 -08:00
Zachary DeVito
33d5180684 [fx] improve args mutation error (#51175)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51175

Gives a suggestion about how to deal with the immutable args/kwargs list (see the sketch below).
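
For context, a sketch of the pattern the error points users toward (my own example): `Node.args` is exposed as an immutable sequence, so build a fresh tuple and assign it instead of mutating in place:

```python
import operator
import torch
import torch.fx as fx

gm = fx.symbolic_trace(lambda x: x + 1)
for node in gm.graph.nodes:
    if node.op == 'call_function' and node.target == operator.add:
        # node.args[1] = 2  would raise: the args list is immutable.
        node.args = (node.args[0], 2)   # assign a new tuple instead
gm.recompile()
print(gm(torch.tensor(1)))   # tensor(3)
```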

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26093478

Pulled By: zdevito

fbshipit-source-id: 832631c125561c3b343539e887c047f185060252
2021-01-28 10:19:38 -08:00
Jason Ansel
a66851a2ad [FX] torch.fx.symbolic_trace patching improvements and math.* support (#50793)
Summary:
This contains some improvements and refactoring to how patching is done in `torch.fx.symbolic_trace`.

1) Functions from `math.*` are now supported without needing to call `torch.fx.wrap()` (see the sketch after this list). `wrap()` actually errors on some of these functions because they are written in C and don't have `__code__`, requiring use of the string version. `math` usage is relatively common; for example, [BERT uses math.sqrt here](6f79061bd1/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/attention/single.py (L16)). Both `math.sqrt()` and `from math import sqrt` (copying to the module namespace) are supported. When modules are called, FX now searches the module's global scope to find methods to patch.

2) [Guarded behind `env FX_PATCH_GETITEM=1`] Fixes a failed trace of [PositionalEmbedding from BERT](6f79061bd1/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/position.py (L24)), which failed to trace with the error `TypeError: slice indices must be integers or None or have an __index__ method` (a Proxy() is getting passed into `Tensor.__getitem__`).  See https://github.com/pytorch/pytorch/issues/50710 for why this is disabled by default.

3) Support for automatically wrapping methods that may have been copied to a different module scope via an import like `from foo import wrapped_function`.  This also isn't exposed in `torch.fx.wrap`, but is used to implement `math.*` support.
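
A minimal sketch of feature (1) (my own example, not from the PR's tests):

```python
import math
import torch
import torch.fx as fx

class ScaledScore(torch.nn.Module):
    def forward(self, scores):
        # math.sqrt on a traced value is now recorded in the graph
        # instead of erroring during symbolic tracing.
        return scores / math.sqrt(scores.size(-1))

print(fx.symbolic_trace(ScaledScore()).code)
```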

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50793

Test Plan: Added unittests to check each feature

Reviewed By: jamesr66a

Differential Revision: D25999788

Pulled By: jansel

fbshipit-source-id: f1ce11a69b7d97f26c9e2741c6acf9c513a84467
2021-01-22 15:05:24 -08:00
Ansley Ussery
7494f0233a snake_case FX IR names (#50876)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50876

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D26002640

Pulled By: ansley

fbshipit-source-id: 4de8a63ef227ae3d46fab231f739c8472289ca4d
2021-01-21 22:25:57 -08:00
Ansley Ussery
4ac489091a Improve call provenance during GraphModule scripting (#50538)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50538

Test Plan: Imported from OSS

Reviewed By: pbelevich, SplitInfinity

Differential Revision: D25935403

Pulled By: ansley

fbshipit-source-id: 2baf5e0ba0fa3918e645fc713a9e80d10bbc84e5
2021-01-21 12:03:19 -08:00
James Reed
5205cc1c62 [FX] Fix NoneType annotation in generated code (#50777)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50777

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D25966026

Pulled By: jamesr66a

fbshipit-source-id: 8e36521eee03eade7e1b602e801229c085b03488
2021-01-19 23:16:58 -08:00
James Reed
38c45bdd2d [FX] Fix tracing a free function with embedded constant (#50639)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50639

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D25934142

Pulled By: jamesr66a

fbshipit-source-id: de9053d4f92a7a2f4f573378837ff5ae78e539b1
2021-01-19 19:20:34 -08:00
Jason Ansel
3344f06130 [FX] Fix using fx.wrap as a decorator (#50677)
Summary:
`torch.fx.wrap()` could not be used as a decorator, as its docstring claimed, because it returned `None`.
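
For reference, the decorator usage this PR enables (a minimal sketch):

```python
import torch
import torch.fx

@torch.fx.wrap   # now returns the function, so decorator form works
def my_special_op(x):
    return len(x) + 1   # opaque to tracing; recorded as a call_function

def f(x):
    return my_special_op(x)

print(torch.fx.symbolic_trace(f).code)
```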

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50677

Test Plan: Added `test_wrapped_via_decorator` which used to fail with `'NoneType' object is not callable` and now passes

Reviewed By: jamesr66a

Differential Revision: D25949313

Pulled By: jansel

fbshipit-source-id: 02d0f9adeed812f58ec94c94dd4adc43578f21ce
2021-01-19 13:42:15 -08:00
James Reed
0291f35b37 [FX] Make len traceable and scriptable with wrap (#50184)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50184

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D25819832

Pulled By: jamesr66a

fbshipit-source-id: ab16138ee26ef2f92f3478c56f0db1873fcc5dd0
2021-01-15 17:46:53 -08:00
Ansley Ussery
4c97ef8d77 Create subgraph rewriter (#49540)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49540

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25869707

Pulled By: ansley

fbshipit-source-id: 93d3889f7ae2ecc5e8cdd7f4fb6b0446dbb3cb31
2021-01-12 16:32:13 -08:00
James Reed
d390e3d8b9 [FX] Make graph target printouts more user-friendly (#50296)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50296

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25855288

Pulled By: jamesr66a

fbshipit-source-id: dd725980fc492526861c2ec234050fbdb814caa8
2021-01-11 11:45:20 -08:00
James Reed
a7e92f120c [FX] Implement wrap() by patching module globals during symtrace (#50182)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50182

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D25819730

Pulled By: jamesr66a

fbshipit-source-id: 274f4799ad589887ecf3b94f5c24ecbe1bc14b1b
2021-01-11 11:01:15 -08:00
James Reed
eb8003d8e9 [FX] Remove extraneous newlines at end of code (#50117)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/50117

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D25791847

Pulled By: jamesr66a

fbshipit-source-id: 9c0b296e117e6bcf69ed9624ad0b243fa3db0f76
2021-01-06 15:47:37 -08:00
Brandon Lin
c51455a7bb [FX] fix Graph python_code return type annotation (#49931)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49931

This fixes #49932. The `maybe_return_annotation` was not being passed by reference, so it was never getting modified.

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D25725582

Pulled By: esqu1

fbshipit-source-id: 4136ff169a269d6b98f0b8e14d95d19e7c7cfa71
2021-01-04 19:55:33 -08:00
James Reed
67d0c18241 [FX] Try to make it more clear that _update_args_kwargs should not be called (#49745)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49745

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25682177

Pulled By: jamesr66a

fbshipit-source-id: 4910577541c4d41e1be50a7aa061873f061825b6
2020-12-22 15:20:02 -08:00
Hui Guo
e2e44bb10a [Issue #46210] added torch.fx.len() to provide support for len(); added a test case for torch.fx.len() (#49532)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49532

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D25608804

Pulled By: huiguoo

fbshipit-source-id: 93ac02ab57db5d200d92443062286c34782ec0ef
2020-12-18 16:43:57 -08:00
James Reed
fb755ad33e [FX] Emit named tuple construction node when NamedTuple appears as an arg (#49553)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49553

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25618577

Pulled By: jamesr66a

fbshipit-source-id: 042f742f9ca02e59bbceda97bfcf47f9bac07873
2020-12-18 14:10:17 -08:00
James Reed
80f7510d92 [FX] Fix create_arg for NamedTuple (#48986)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48986

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D25387156

Pulled By: jamesr66a

fbshipit-source-id: 0d38c43e02088fb7afb671683c88b6e463fe7c76
2020-12-10 15:32:04 -08:00
Lu Fang
212ec07cb7 Support torchbind as attribute in torch.fx symbolic tracing (#48732)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48732

add support for ScriptObject as attributes in symbolic trace.

Test Plan: OSS CI

Reviewed By: jamesr66a

Differential Revision: D25116185

fbshipit-source-id: c61993c84279fcb3c91f1d44fb952a8d80d0e552
2020-12-04 16:21:44 -08:00
James Reed
998c4cac9a [FX] Add Node.all_input_nodes (#48270)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/48270

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D25100241

Pulled By: jamesr66a

fbshipit-source-id: f742f5a13debebb5be37f7c0045c121f6eaff1d5
2020-11-19 19:53:28 -08:00
Vasiliy Kuznetsov
dea2337825 torch.Assert: make it torch.jit.script'able (#47399) (#47973)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47973

Currently torch.Assert is not scriptable, which makes it not very useful for production code. According to jamesr66a, moving this to C++ op land will help with scriptability. This PR implements the change.

Note: with the current code the Assert is scriptable, but it is a no-op after being scripted. Would love suggestions on how to address that (can be in a future PR).
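
A sketch of the intended usage (my own example; per the adjacent rename commits the op is spelled `torch._assert`):

```python
import torch

@torch.jit.script
def checked_relu(x: torch.Tensor) -> torch.Tensor:
    # A scriptable assert; as noted above, it currently becomes a
    # no-op once scripted.
    torch._assert(x.dim() == 1, "expected a 1-D tensor")
    return x.relu()

print(checked_relu(torch.tensor([-1.0, 2.0])))
```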

Test Plan:
```
python test/test_utils.py TestAssert.test_assert_scriptable
python test/test_utils.py TestAssert.test_assert_true
python test/test_fx.py TestFX.test_symbolic_trace_assert
```

Reviewed By: supriyar

Differential Revision: D24974299

Pulled By: vkuzo

fbshipit-source-id: 20d4f4d8ac20d76eee122f2cdcdcdcaf1cda3afe
2020-11-16 11:46:12 -08:00
Vasiliy Kuznetsov
ee995d33bd rename torch.Assert to torch._assert (#47763) (#47972)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47972

Changing the name due to the discussion in
https://github.com/pytorch/pytorch/pull/47399.

Test Plan:
```
python test/test_utils.py TestAssert.test_assert_true
python test/test_fx.py TestFX.test_symbolic_trace_assert
python test/test_fx_experimental.py
```

Reviewed By: supriyar

Differential Revision: D24974298

Pulled By: vkuzo

fbshipit-source-id: 24ded93a7243ec79a0375f4eae8a3db9b787f857
2020-11-16 11:43:27 -08:00
Wang Xu
0dbff184e9 change file name to snake style (#47914)
Summary:
Change Partitioner.py file name to partitioner.py
Change GraphManipulation.py file name to graph_manipulation.py
Move test_replace_target_nodes_with() to test_fx_experimental.py
Remove the unnecessary argument in size_based_partition() in Partitioner class

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47914

Reviewed By: gcatron

Differential Revision: D24956653

Pulled By: scottxu0730

fbshipit-source-id: 25b65be7dc7d64e90ffdc59cf394446fee83c3e6
2020-11-14 01:29:25 -08:00
Richard Zou
e5da3b6097 Revert D24891767: rename torch.Assert to torch._assert
Test Plan: revert-hammer

Differential Revision:
D24891767 (a8ca042ec0)

Original commit changeset: 01c7a5acd83b

fbshipit-source-id: cd2271467151b578185758723fcd23f69051d3a3
2020-11-13 08:35:05 -08:00
Richard Zou
4cec19b56a Revert D24740727: torch.Assert: make it torch.jit.script'able
Test Plan: revert-hammer

Differential Revision:
D24740727 (b787e748f0)

Original commit changeset: c7888e769c92

fbshipit-source-id: 1e097bd9c0f8b04bea0e0346317a126b42a3dc4f
2020-11-13 08:31:40 -08:00
Vasiliy Kuznetsov
b787e748f0 torch.Assert: make it torch.jit.script'able (#47399)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47399

Currently torch.Assert is not scriptable, which makes it not very useful for production code. According to jamesr66a, moving this to C++ op land will help with scriptability. This PR implements the change.

Note: with the current code the Assert is scriptable, but it is a no-op after being scripted. Would love suggestions on how to address that (can be in a future PR).

Test Plan:
```
python test/test_utils.py TestAssert.test_assert_scriptable
python test/test_utils.py TestAssert.test_assert_true
python test/test_fx.py TestFX.test_symbolic_trace_assert
```

Imported from OSS

Reviewed By: eellison

Differential Revision: D24740727

fbshipit-source-id: c7888e769c921408a3020ca8332f4dae33f2bc0e
2020-11-13 00:02:19 -08:00
Vasiliy Kuznetsov
a8ca042ec0 rename torch.Assert to torch._assert (#47763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47763

Changing the name due to the discussion in
https://github.com/pytorch/pytorch/pull/47399.

Test Plan:
```
python test/test_utils.py TestAssert.test_assert_true
python test/test_fx.py TestFX.test_symbolic_trace_assert
python test/test_fx_experimental.py
```

Imported from OSS

Reviewed By: ezyang

Differential Revision: D24891767

fbshipit-source-id: 01c7a5acd83bf9c962751552780930c242134dd2
2020-11-12 23:59:34 -08:00