Commit Graph

180 Commits

Author SHA1 Message Date
James Reed
214951bc6b [FX] Make split_module preserve proper placeholder names (#74736)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74736

Previously, `split_module` would incorrectly carry over the `name` of placeholders rather than their `target`:

Original GraphModule

```
def forward(self, x, **kwargs):
    _kwargs = kwargs
    getitem = _kwargs['foo'];  _kwargs = None
    add = x + getitem;  x = getitem = None
    return add
```

After splitting:

```
def forward(self, x, _kwargs):
    submod_0 = self.submod_0(_kwargs);  _kwargs = None
    submod_1 = self.submod_1(x, submod_0);  x = submod_0 = None
    return submod_1
```

Notice that `**kwargs` is turned into `_kwargs`, which is incorrect and loses the kwarg expansion behavior. This patch fixes the erroneous code in `split_module`, resulting in the correct split code being emitted:

Original GraphModule

```
def forward(self, x, **kwargs):
    _kwargs = kwargs
    getitem = _kwargs['foo'];  _kwargs = None
    add = x + getitem;  x = getitem = None
    return add
```

After splitting:

```
def forward(self, x, **kwargs):
    _kwargs = kwargs
    submod_0 = self.submod_0(_kwargs);  _kwargs = None
    submod_1 = self.submod_1(x, submod_0);  x = submod_0 = None
    return submod_1
```

Test Plan: Imported from OSS

Reviewed By: VitalyFedyunin

Differential Revision: D35137361

Pulled By: jamesr66a

fbshipit-source-id: 46d079cfe16093c293fc268404fb8bc86ffcf583
(cherry picked from commit a020066281856184621561a8672eb57f5de31e92)
2022-03-25 23:36:27 +00:00
David Berard
15c98700ed Add CPU slow test job (#73748)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73748

This adds CPU-only slow test jobs, which previously would never run.

Includes fixes/skips for slow tests that fail; these need to be skipped now because they previously never ran.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D34628803

Pulled By: davidberard98

fbshipit-source-id: c090ab7bf7bda9e24ec5cdefa6fd35c6310dbac0
(cherry picked from commit 06f7a94a57cc7023e9c5442be8298d20cd011144)
2022-03-23 21:17:27 +00:00
James Reed
a8d9fbb021 [FX] Make immutable_list and immutable_dict work with pytrees (#73766)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73766

Test Plan: Imported from OSS

Reviewed By: zou3519, Chillee

Differential Revision: D34630217

Pulled By: jamesr66a

fbshipit-source-id: f23420deaeed7e54d5e6759b486ca4a02243a7b3
(cherry picked from commit 8854c60e60e79b144077f3021d305ea3d06a2a21)
2022-03-04 19:35:41 +00:00
James Reed
dae7ed179f [FX] Make module getattr wrapper proxy buffers (#73612)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/73612

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D34568113

Pulled By: jamesr66a

fbshipit-source-id: 95a7106cf6ce45999c1b3c06b34965e725961771
(cherry picked from commit 54841e028478ea641fb4d7895f726553b8b48353)
2022-03-03 04:32:49 +00:00
Ke Wen
d14de3139a [PyTorch FX] Return mapping of qualified names from split_module() (#73564)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73564

While maintaining backward compatibility of the API, add an optional output parameter to split_module() that returns a mapping from the new qualified names in the split submodules to the old qualified names in the original module.

Test Plan:
1. Added a test (test_split_qualname_mapping) to test_fx_experimental.py to check the returned qualname mapping
```
$ python test_fx_experimental.py
...
Ran 1084 tests in 73.464s
OK (skipped=531, expected failures=4)
```
2. Ask test_fx.py to accept split_module's new signature
```
$ python test_fx.py --accept
```
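
For illustration, a minimal sketch of how the new mapping might be consumed (assuming the optional argument is a dict passed as `qualname_map`, with entries mapping new qualified names to old ones as described above):

```python
import torch
from torch.fx import symbolic_trace
from torch.fx.passes.split_module import split_module

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.linear(x))

m = M()
traced = symbolic_trace(m)

# Partition callback: the linear goes to partition 0, the relu to partition 1.
def split_callback(node):
    return 0 if node.name != "relu" else 1

qualname_map = {}  # filled in with {new qualified name: old qualified name}
split = split_module(traced, m, split_callback, qualname_map)
print(qualname_map)  # e.g. {'submod_0.linear': 'linear'}
```

Passing nothing for the mapping keeps the old behavior, which is how backward compatibility is preserved.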

Reviewed By: jamesr66a

Differential Revision: D34541792

fbshipit-source-id: e8ec7e77ec884e4db7cad0c0593e31861c76e42d
(cherry picked from commit d2e5a95a353ee5fb52cdba065f127489e9df47ae)
2022-03-02 23:32:54 +00:00
Peter Bell
e8d226cd9a Remove some unnecessary python functional wrappers (#61608)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61608

See #61544 for an example of issues created by functional wrappers. In this
case, these are directly wrapping the native function with no added
functionality. One exception was `bilinear` which was just missing the default
argument in C++, but was otherwise the same.

I've kept the symbol `torch.functional.istft` because it looks like public API,
but it could just as easily be moved to `_torch_docs.py`.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D31401361

Pulled By: albanD

fbshipit-source-id: 162b74d0b2d4f2e5c4834687a94541960cefdd52
(cherry picked from commit 700cd73ca1)
2022-02-01 16:59:26 +00:00
Jason Ansel
7d613ab1d6 Fix indentation typo in test_fx_experimental.py (#71885)
Summary:
These tests were not actually running as they were defined in the local scope of another test

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71885

Reviewed By: scottxu0730

Differential Revision: D33806251

Pulled By: jansel

fbshipit-source-id: 48a2d7b472f160759ef55e6fff1f8890511e3345
(cherry picked from commit 9ae14efb25)
2022-01-28 00:41:12 +00:00
XiaobingSuper
b8679ee1fc fix conv+bn folding issue when bn hasn't running states (#71259)
Summary:
When folding conv+bn where the bn module doesn't track running stats, both the JIT and FX paths raise an error:

```
import torch

import torch.nn as nn

import torch.fx.experimental.optimization as optimization

class M(nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.conv = nn.Conv2d(32, 64, 3, stride=2)
        self.bn = nn.BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        return x

x = torch.randn([1, 32, 50, 50])

model = M().eval()

'''
# jit path
with torch.no_grad():
    traced = torch.jit.trace(model, x).eval()
    traced = torch.jit.freeze(traced)
'''

# FX path
fused_model = optimization.fuse(model)
```

Current errors (before this fix):
1. JIT path
```
Traceback (most recent call last):
  File "bn_test.py", line 27, in <module>
    traced = torch.jit.freeze(traced)
  File "/home/xiaobinz/miniconda3/envs/pytorch-master/lib/python3.8/site-packages/torch/jit/_freeze.py", line 119, in freeze
    run_frozen_optimizations(out, optimize_numerics, preserved_methods)
  File "/home/xiaobinz/miniconda3/envs/pytorch-master/lib/python3.8/site-packages/torch/jit/_freeze.py", line 167, in run_frozen_optimizations
    torch._C._jit_pass_optimize_frozen_graph(mod.graph, optimize_numerics)
RuntimeError: Expected Tensor but got None
```
2. FX path
```
Traceback (most recent call last):
  File "bn_test.py", line 31, in <module>
    model = optimization.fuse(model, inplace=True)
  File "/home/xiaobinz/miniconda3/envs/pytorch-master/lib/python3.8/site-packages/torch/fx/experimental/optimization.py", line 71, in fuse
    fused_conv = fuse_conv_bn_eval(conv, bn)
  File "/home/xiaobinz/miniconda3/envs/pytorch-master/lib/python3.8/site-packages/torch/nn/utils/fusion.py", line 11, in fuse_conv_bn_eval
    fuse_conv_bn_weights(fused_conv.weight, fused_conv.bias,
  File "/home/xiaobinz/miniconda3/envs/pytorch-master/lib/python3.8/site-packages/torch/nn/utils/fusion.py", line 23, in fuse_conv_bn_weights
    bn_var_rsqrt = torch.rsqrt(bn_rv + bn_eps)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'float'
```

This PR will fix this issue.
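
A minimal sketch of the kind of guard such a fix needs (hypothetical `maybe_fuse_conv_bn` helper, not the actual patch): when `track_running_stats=False`, `running_mean`/`running_var` are `None`, so the eval-time folding formula cannot be applied and fusion should be skipped.

```python
import torch
import torch.nn as nn
from torch.nn.utils.fusion import fuse_conv_bn_eval

def maybe_fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d):
    # BatchNorm created with track_running_stats=False has no running_mean /
    # running_var, so conv+bn folding cannot be computed; skip fusion instead of crashing.
    if bn.running_mean is None or bn.running_var is None:
        return None  # caller keeps the original conv + bn pair
    return fuse_conv_bn_eval(conv, bn)

conv = nn.Conv2d(32, 64, 3, stride=2).eval()
bn = nn.BatchNorm2d(64, track_running_stats=False).eval()
assert maybe_fuse_conv_bn(conv, bn) is None  # fusion is skipped, no exception raised
```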

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71259

Reviewed By: anjali411

Differential Revision: D33595049

Pulled By: davidberard98

fbshipit-source-id: 0fe56bb2bb25d6d54ebc53789d2ad22458da9012
(cherry picked from commit 5672c08378)
2022-01-18 22:12:41 +00:00
James Reed
de902b5d02 [FX] Add a default_value arg to Graph.placeholder and fix split_module (#71016)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71016

I found out that `split_module` doesn't preserve default values for arguments. In trying to fix that, I noticed that `Graph.placeholder` doesn't make it easy to add a default argument when making a placeholder. This PR addresses both of those issues
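
A small sketch of what this enables, assuming the new argument is named `default_value` as in the title:

```python
import torch
from torch.fx import Graph, GraphModule

graph = Graph()
x = graph.placeholder('x')
# Placeholder whose generated forward signature carries a default value
scale = graph.placeholder('scale', default_value=2.0)
out = graph.call_function(torch.mul, (x, scale))
graph.output(out)

gm = GraphModule(torch.nn.Module(), graph)
print(gm.code)               # generated signature should include a default for `scale`
print(gm(torch.ones(3)))     # default is used: tensor([2., 2., 2.])
print(gm(torch.ones(3), 3.0))
```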

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D33482218

Pulled By: jamesr66a

fbshipit-source-id: 57ebcdab25d267333fb1034994e08fc1bdb128ee
2022-01-12 14:03:17 -08:00
Horace He
df6eb9bbab Fixed to_folder not saving dtype (#69983)
Summary:
As above.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69983

Reviewed By: pbelevich, ngimel

Differential Revision: D33466529

Pulled By: Chillee

fbshipit-source-id: 2d2f0ad5b8e2492aba4c19fa034c8b6c0848a568
2022-01-06 22:15:56 -08:00
anjali411
4a6a5d1630 OpInfos for torch.{flatten, column_stack} (#69237)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69237

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D32988956

Pulled By: anjali411

fbshipit-source-id: b7f5c537ff9731f56232aa5647910f03edf4582a
2021-12-16 17:50:58 -08:00
Richard Zou
620a1fcb55 OpInfos for: normal, bernoulli, multinomial (#66358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66358

Test Plan: - run tests

Reviewed By: mruberry

Differential Revision: D31551695

Pulled By: zou3519

fbshipit-source-id: cf1b43118a0414a1af9ece9ae8c0598b2701aa0a
2021-12-14 06:59:38 -08:00
Kushashwa Ravi Shrimali
2cb385dd6e OpInfo for nn.functional.dropout2d, revise sample inputs for dropout (#67891)
Summary:
Earlier, we were only testing inputs of shape `(5,)` for `nn.functional.dropout`, but since it's used a lot, it's a good idea to test a few more shapes, including scalars. This PR:

1. Revises sample inputs for `nn.functional.dropout`
2. Adds an OpInfo for `nn.functional.dropout2d`.

A note regarding the documentation:

Looks like `nn.functional.dropout2d` also supports inputs of shape `(H, W)` apart from `(N, C, H, W) / (C, H, W)` but the [documentation](https://pytorch.org/docs/stable/generated/torch.nn.Dropout2d.html#torch.nn.Dropout2d) doesn't mention that (`H, W` case). Should that be revised or am I missing anything here? (Filed an issue here: https://github.com/pytorch/pytorch/issues/67892)

```python
# A 2D tensor is a valid input for Dropout2d
In [11]: tensor = torch.randn((3, 4), device='cpu', dtype=torch.float32)
In [12]: dropout2d = torch.nn.Dropout2d(p=0.5)

In [13]: dropout2d(tensor)
Out[13]:
tensor([[-0.1026, -0.0000, -0.0000, -0.0000],
        [-1.5647,  0.0000, -0.0000, -0.5820],
        [-0.0000, -3.2080,  0.1164, -3.6780]])
```
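
A quick sketch of the broader shape coverage, checked directly against the functional API:

```python
import torch
import torch.nn.functional as F

# Dropout should accept scalars (0-d tensors) as well as higher-rank inputs.
for shape in ((), (5,), (3, 4), (2, 3, 4, 5)):
    x = torch.randn(shape, requires_grad=True)
    out = F.dropout(x, p=0.5, training=True)
    assert out.shape == x.shape
```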

Issue Tracker: https://github.com/pytorch/pytorch/issues/54261

cc: mruberry zou3519

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67891

Reviewed By: mrshenli

Differential Revision: D32628527

Pulled By: mruberry

fbshipit-source-id: 4c9b89550f1d49526e294378ce107eba9f29cabb
2021-12-08 08:54:16 -08:00
Nikita Vedeneev
c236247826 OpInfo tests for (svd|pca)_lowrank (#69107)
Summary:
As per title.

While working on this I discovered several issues with these methods related to grad instabilities. I will file them and link here later. It was quite painful to get these to pass all the tests given the discovered issues; sorry for the delay, mruberry!

cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69107

Reviewed By: zou3519

Differential Revision: D32920341

Pulled By: mruberry

fbshipit-source-id: 15b33e2b46acdcbff8a37d8e43e381eb55d1a296
2021-12-07 19:50:12 -08:00
Saketh Are
6a4fa86026 Add OpInfos for misc nn.functional operators (#68922)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68922

Reviewed By: Chillee

Differential Revision: D32842301

Pulled By: saketh-are

fbshipit-source-id: b7166faefb64668fc76cca6c528501b0d360c43b
2021-12-03 17:03:02 -08:00
Saketh Are
a07ffe8d0e Add OpInfos for combinations, cartesian_prod, sum_to_size, ldexp, as_strided (#68853)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68853

Reviewed By: davidberard98

Differential Revision: D32811147

Pulled By: saketh-are

fbshipit-source-id: 941dcf949072f8d10faf4d5a0fa0ef409ac6a7db
2021-12-02 21:22:56 -08:00
Kshiteej K
e5e0c19882 OpInfo : embedding_bag (#67252)
Summary:
Adds OpInfo for `embedding_bag`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67252

Reviewed By: VitalyFedyunin

Differential Revision: D32462157

Pulled By: zou3519

fbshipit-source-id: 70303349a718720c4fa47519fa94ae900e052939
2021-12-01 07:00:50 -08:00
Nikita Shulga
14dc9759f2 Revert D32650384: OpInfos for torch.{flatten, column_stack}
Test Plan: revert-hammer

Differential Revision:
D32650384 (aceb46e4ce)

Original commit changeset: 9ead83b378d0

fbshipit-source-id: 3ef281e536b1f21a6f13c6c51309021cf92b53b2
2021-11-24 14:55:26 -08:00
anjali411
aceb46e4ce OpInfos for torch.{flatten, column_stack} (#67555)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67555

Test Plan: Imported from OSS

Reviewed By: cpuhrsch

Differential Revision: D32650384

Pulled By: anjali411

fbshipit-source-id: 9ead83b378d0ece60569e1a0fc7d8849f89566b3
2021-11-24 10:25:37 -08:00
anjali411
c7d5e0f53f OpInfos for torch.atleast_{1d, 2d, 3d} (#67355)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67355

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D32649416

Pulled By: anjali411

fbshipit-source-id: 1b42e86c7124427880fff52fbe490481059da967
2021-11-24 09:55:39 -08:00
Saketh Are
5d300e761d Add OpInfos for parcel Activation Functions I (#68521)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68521

Reviewed By: jbschlosser

Differential Revision: D32606625

Pulled By: saketh-are

fbshipit-source-id: acf98a07c45bce95b1470bf9856577426265f3d1
2021-11-22 20:01:35 -08:00
Saketh Are
030ee34216 Add OpInfo for torch.nonzero (#67459)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67459

Reviewed By: davidberard98

Differential Revision: D32453687

Pulled By: saketh-are

fbshipit-source-id: e7ed5601686d88407bf67bca0f75304b30fa7ac5
2021-11-16 11:10:43 -08:00
Richard Zou
d4ae789655 OpInfos for new_blah functions and some _like functions (#67357)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67357

This PR adds OpInfos for:
- new_ones, new_zeros, new_full, new_empty
- rand_like, randint_like

I forgot to add the _like functions in a previous PR, so here they are.

Test Plan: - wait for tests

Reviewed By: mruberry

Differential Revision: D31969533

Pulled By: zou3519

fbshipit-source-id: 236d70d66e82f1d6f8e5254b55ca2a37b54c9494
2021-11-11 07:21:23 -08:00
Saketh Are
b24c34426f Add OpInfo for torch.unique and torch.unique_consecutive (#67529)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67529

Reviewed By: pbelevich

Differential Revision: D32045941

Pulled By: saketh-are

fbshipit-source-id: fefea1ddabcd3c4b40e9374b991410626437cdb4
2021-10-30 08:33:41 -07:00
kshitij12345
6c985b57ff OpInfo : nn.functional.embedding (#66997)
Summary:
Adds OpInfo for `nn.functional.embedding`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66997

Reviewed By: mrshenli

Differential Revision: D31859799

Pulled By: zou3519

fbshipit-source-id: bbca860df4fbc243751f5fa81658231866c31d2e
2021-10-25 08:06:32 -07:00
Saketh Are
33790c4e06 Implement histogramdd on CPU (#65318)
Summary:
Implements `torch.histogramdd` analogous to `numpy.histogramdd`.

Builds on https://github.com/pytorch/pytorch/pull/58780, generalizing the existing `torch.histogram` kernel to handle D-dimensional inputs.
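
A brief usage sketch of the new operator (CPU-only at this point):

```python
import torch

# 1000 samples in 3-D, histogrammed into a 4 x 5 x 6 grid.
points = torch.randn(1000, 3)
hist, bin_edges = torch.histogramdd(points, bins=[4, 5, 6])

print(hist.shape)                       # torch.Size([4, 5, 6])
print([e.numel() for e in bin_edges])   # [5, 6, 7] -- one more edge than bins per dim
print(hist.sum())                       # 1000. (every sample falls in some bin)
```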

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65318

Reviewed By: soulitzer

Differential Revision: D31654555

Pulled By: saketh-are

fbshipit-source-id: 14b781fac0fd3698b052dbd6f0fda46e50d4c5f1
2021-10-21 16:09:31 -07:00
Jeffrey Wan
94f4b22df9 Revert D31761594: [pytorch][PR] opinfo : nn.functional.embedding
Test Plan: revert-hammer

Differential Revision:
D31761594 (ed5633d0c5)

Original commit changeset: d24f44728d04

fbshipit-source-id: 72574918300a7982430a0ceb772c9a24de525050
2021-10-20 09:17:16 -07:00
kshitij12345
ed5633d0c5 opinfo : nn.functional.embedding (#66622)
Summary:
Adds opinfo for `nn.functional.embedding`

A few cases where the numerical gradient doesn't match (gradcheck fails):

```python
import torch

try:
    t = torch.randn(2, 1, dtype=torch.float64, requires_grad=True)
    idx = torch.tensor([0, 1])
    torch.autograd.gradcheck(lambda idx, t : torch.nn.functional.embedding(idx, t, padding_idx=1), (idx, t, ))
except Exception as e:
    print("PADDING IDX:", e)

try:
    t = torch.ones(2, 1, dtype=torch.float64, requires_grad=True)
    idx = torch.tensor([0, 1])
    torch.autograd.gradcheck(lambda idx, t : torch.nn.functional.embedding(idx, t, max_norm=1.), (idx, t, ))
except Exception as e:
    print("MAX NORM:", e)

try:
    t = torch.randn(2, 1, dtype=torch.float64, requires_grad=True)
    idx = torch.tensor([0, 1, 1])
    torch.autograd.gradcheck(lambda idx, t : torch.nn.functional.embedding(idx, t, scale_grad_by_freq=True), (idx, t, ))
except Exception as e:
    print("SCALE GRAD BY FREQUENCY:", e)

try:
    t = torch.randn(2, 1, dtype=torch.float64, requires_grad=True)
    idx = torch.tensor([0, 1])
    torch.autograd.gradcheck(lambda idx, t : torch.nn.functional.embedding(idx, t, sparse=True), (idx, t, ))
except Exception as e:
    print("SPARSE", e)

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66622

Reviewed By: gchanan

Differential Revision: D31761594

Pulled By: zou3519

fbshipit-source-id: d24f44728d049e6276d6c3165aa1fba458214959
2021-10-20 06:33:55 -07:00
Jane Xu
9ea3424747 Set test owner for fx (#66807)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66807

Reviewed By: jamesr66a

Differential Revision: D31736722

Pulled By: janeyx99

fbshipit-source-id: 5ffcb02a858137211bff1eabf158001dcb0359a6
2021-10-18 12:25:38 -07:00
Pearu Peterson
472a6f2787 Strided masked reductions: sum, amax. Testing of masked reductions. (#65990)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65990

cc nikitaved pearu cpuhrsch IvanYashchuk

Test Plan: Imported from OSS

Reviewed By: zou3519

Differential Revision: D31729532

Pulled By: albanD

fbshipit-source-id: 855a6bb2a7c6e75c780a64ce23c0f29321f0e511
2021-10-18 11:10:32 -07:00
Gary Miguel
543b7fb942 [JIT] Fix type annotations of pooling modules (#65847)
Summary:
All of the pooling modules except MaxUnpool and LPPool return either a
Tensor or [Tensor, Tensor]. The current type annotations are inaccurate,
and prevent scripting the module if return_indices is set to True on the
module.

There's not a great way to make this agree with mypy because the
overload is dependent on the value of return_indices, an attribute.

I tried changing the annotations from `Tensor` to
`Union[Tensor, Tuple[Tensor, Tensor]]`, but that breaks a bunch of uses
that have return_indices=False.
For example, this breaks:
4e94e84f65/torch/nn/modules/container.py (L139)
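
For context, a minimal sketch of the dual return behavior (eager mode) that makes a single `Tensor` annotation inaccurate; the scripting failure is what this change addresses:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)

pool = nn.MaxPool2d(kernel_size=2)
out = pool(x)                      # a single Tensor

pool_with_idx = nn.MaxPool2d(kernel_size=2, return_indices=True)
out, indices = pool_with_idx(x)    # a (Tensor, Tensor) pair
print(out.shape, indices.shape)    # both torch.Size([1, 3, 4, 4])
```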

Also clean up how test names were being constructed in test_jit, since
otherwise we were getting name collisions when there were two tests on
the same nn.Module.

Fixes https://github.com/pytorch/pytorch/issues/45904

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65847

Reviewed By: ZolotukhinM

Differential Revision: D31462517

Pulled By: eellison

fbshipit-source-id: 6f9e8df1be6c75e5e1e9bae07cf3ad3603ba59bd
2021-10-14 10:59:19 -07:00
Richard Zou
d810e738b9 OpInfo for *_like functions (#65941)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65941

OpInfos for: empty_like, zeros_like, ones_like, full_like, randn_like

Test Plan: - run tests

Reviewed By: dagitses

Differential Revision: D31452625

Pulled By: zou3519

fbshipit-source-id: 5e6c45918694853f9252488d62bb7f4ccfa1f1e4
2021-10-14 09:14:51 -07:00
Richard Zou
5d4452937d OpInfos for some Tensor dtype conversion methods (#64282)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64282

OpInfos for:
- Tensor.bfloat16, Tensor.bool, Tensor.byte, Tensor.char
- Tensor.double, Tensor.float, Tensor.half, Tensor.int
- Tensor.short, Tensor.long

None of these are supported by TorchScript. Also, the OpInfo autograd
test runner assumes that the operation is not allowed to change the
dtype of the argument, so only Tensor.double has
`supports_autograd=True` (in theory Tensor.bfloat16, Tensor.float,
Tensor.half should be differentiable).

Test Plan: - run tests

Reviewed By: dagitses

Differential Revision: D31452627

Pulled By: zou3519

fbshipit-source-id: b7f272e558558412c47aefe947af7f060dfb45c5
2021-10-14 09:13:30 -07:00
lezcano
82a216c45b Add tensor.{adjoint(),H,mT,mH} methods and properties (#64179)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64179

This PR follows the discussion in https://github.com/pytorch/pytorch/issues/45063#issuecomment-904431478

Fixes https://github.com/pytorch/pytorch/issues/45063
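
A small usage sketch of the added method and properties:

```python
import torch

a = torch.randn(2, 3, 4, dtype=torch.complex64)

# .mT / .mH operate on the last two dimensions (batched matrix transpose / conjugate transpose)
assert torch.equal(a.mT, a.transpose(-2, -1))
assert torch.equal(a.mH, a.transpose(-2, -1).conj())
assert torch.equal(a.adjoint(), a.mH)

# .H is the 2-D (matrix) conjugate transpose
m = torch.randn(3, 4, dtype=torch.complex64)
assert torch.equal(m.H, m.t().conj())
```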

cc ezyang anjali411 dylanbespalko mruberry Lezcano nikitaved rgommers pmeier asmeurer leofang AnirudhDagar asi1024 emcastillo kmaehashi heitorschueroff

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D30730483

Pulled By: anjali411

fbshipit-source-id: 821d25083f5f682450f6812bf852dc96a1cdf9f2
2021-10-13 07:44:43 -07:00
kshitij12345
7f6580a868 OpInfo: nn.functional.conv2d (#65233)
Summary:
Reland : https://github.com/pytorch/pytorch/issues/63517
Reference: https://github.com/pytorch/pytorch/issues/54261

Reference: facebookresearch/functorch#78

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65233

Reviewed By: malfet

Differential Revision: D31025538

Pulled By: zou3519

fbshipit-source-id: b1cd38c22f4cb8eedd3f958e02dd7410dcbb8d8d
2021-09-21 09:26:23 -07:00
Michael Suo
ecfc784e67 Revert D30993855: [pytorch][PR] OpInfo: nn.functional.conv2d
Test Plan: revert-hammer

Differential Revision:
D30993855 (873255c6d9)

Original commit changeset: 7402f99addb4

fbshipit-source-id: b0539daa195dc6a3739bce5c264cb2177b7721ff
2021-09-17 10:32:02 -07:00
James Reed
cf7409e184 [FX] Move graph_manipulation and param_fetch out of experimental and into passes (#65183)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65183

ghstack-source-id: 138309655

Test Plan: waitforsadcastle

Reviewed By: protonu

Differential Revision: D31007630

fbshipit-source-id: 77d14b284737aabbe2b9e6394177a0c2e40aafba
2021-09-17 09:32:40 -07:00
kshitij12345
873255c6d9 OpInfo: nn.functional.conv2d (#63517)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Reference: https://github.com/facebookresearch/functorch/issues/78

Mostly inspired from https://github.com/pytorch/pytorch/issues/62882

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63517

Reviewed By: heitorschueroff

Differential Revision: D30993855

Pulled By: zou3519

fbshipit-source-id: 7402f99addb4ef8f19c2ce1a09ed9006e737cc7e
2021-09-16 14:27:36 -07:00
soulitzer
4bf7959de2 Remove run_functional_checks from test_autograd and create necessary OpInfos (#64993)
Summary:
OpInfo tracker: https://github.com/pytorch/pytorch/issues/54261

 - Eliminate duplicated testing logic in test_autograd
 - Moved tests that rely on this testing logic to use OpInfos
   - `cat` already has OpInfo (no action needed)
   - Created OpInfo for `block_diag` and `broadcast_tensors`

Running into some FX errors. Added op to skip-list and created an issue here: https://github.com/pytorch/pytorch/issues/64997
Both `block_diag` and `broadcast_tensors` are variadic, so skipping `test_variant_consistency_jit` (from comments on other OpInfos, it looks like JIT does not support variadic tensors)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64993

Reviewed By: jbschlosser

Differential Revision: D30961736

Pulled By: soulitzer

fbshipit-source-id: e169305384a683acae1178c4e12e9e214a67226a
2021-09-15 12:45:38 -07:00
Philip Meier
32c5da8cd2 add OpInfo for torch.nn.functional.dropout (#62315)
Summary:
Addresses facebookresearch/functorch#78.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62315

Reviewed By: mruberry

Differential Revision: D30932765

Pulled By: zou3519

fbshipit-source-id: 481c67b59a966b4d640973d252b3e392d8db728e
2021-09-15 07:18:04 -07:00
kshitij12345
2c351c76e0 [special] Alias igamma, igammac to special.gammaninc, special.gammaincc (#61902)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/50345

Also added relevant OpInfo

TODO:
* [x] Check rendered docs gammainc : https://docs-preview.pytorch.org/61902/special.html#torch.special.gammainc
* [x] Check rendered docs gammaincc: https://docs-preview.pytorch.org/61902/special.html#torch.special.gammaincc

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61902

Reviewed By: ngimel

Differential Revision: D30761428

Pulled By: mruberry

fbshipit-source-id: 06a16432873357958d53364f12a4e91c29779d26
2021-09-07 15:31:26 -07:00
Anirudh Dagar
1a1fb31cfa Support torch.concat alias, add cat OpInfo & remove OpInfo test_out skips {cat, stack, hstack, vtack, dstack} (#62560)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/61767

## Changes

- [x] Add `torch.concat` alias to `torch.cat`
- [x] Add OpInfo for `cat`/`concat`
- [x] Fix `test_out` skips (Use `at::native::resize_output` or `at::native::resize_output_check`)
  - [x] `cat`/`concat`
  - [x] `stack`
  - [x] `hstack`
  - [x] `dstack`
  - [x] `vstack`/`row_stack`
- [x] Remove redundant tests for `cat`/`stack`

~I've not added `cat`/`concat` to OpInfo `op_db` yet, since cat is a little more tricky than other OpInfos (should have a lot of tests) and currently there are no OpInfos for that. I can try to add that in a subsequent PR or maybe here itself, whatever is suggested.~
**Edit**: cat/concat OpInfo has been added.

**Note**: I've added the named tensor support for `concat` alias as well, maybe that's out of spec in `array-api` but it is still useful for consistency in PyTorch.
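
A quick sketch of the alias in use:

```python
import torch

a, b = torch.randn(2, 3), torch.randn(4, 3)

# torch.concat is an alias for torch.cat
assert torch.equal(torch.concat([a, b]), torch.cat([a, b]))
assert torch.equal(torch.concat([a, b], dim=0), torch.cat((a, b), dim=0))
```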

Thanks to krshrimali for guidance on my first PR :))

cc mruberry rgommers pmeier asmeurer leofang AnirudhDagar asi1024 emcastillo kmaehashi heitorschueroff krshrimali

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62560

Reviewed By: saketh-are

Differential Revision: D30762069

Pulled By: mruberry

fbshipit-source-id: 6985159d1d9756238890488a0ab3ae7699d94337
2021-09-06 23:57:18 -07:00
James Reed
538647fe1f [WIP][FX] BC guarantees for 1.10 (#63888)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63888

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D30523133

Pulled By: jamesr66a

fbshipit-source-id: b04cc0d842a74862f42ecba98b757310cd2ec7b0
2021-08-30 19:56:46 -07:00
Philip Meier
99203580a9 Updates internal assert_allclose callsites in favor of assert_close (#61841)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61841

Redo of #60863.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D30408145

Pulled By: mruberry

fbshipit-source-id: 0b34ebc7f23ba38ecd89640b61d8aca59b7eab58
2021-08-19 12:50:41 -07:00
Shiyan Deng
8e0998ca70 Move fx2trt and oss_acc_tracer to oss (#63101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63101

Move internal fx2trt to torch/fx/experimental/fx2trt and merge the two TRT interpreter we have right now. cc: mortzur as this might affect uru exporting script.

Move oss_acc_tracer to torch/fx/experimental/fx_acc.

Test Plan: CI

Reviewed By: jerryzh168

Differential Revision: D30257909

fbshipit-source-id: 4e374965fbf88d72e91844d9e9b6ff9b98f467d1
2021-08-15 11:53:36 -07:00
Shen Li
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision:
D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
Zsolt Dollenstein
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
Kushashwa Ravi Shrimali
7e1f01d4c0 Alias for polygamma (#59691)
Summary:
See https://github.com/pytorch/pytorch/issues/50345

cc: mruberry kshitij12345

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59691

Reviewed By: gchanan

Differential Revision: D29707514

Pulled By: mruberry

fbshipit-source-id: 40c15e1fda3d9f7013977b0f36a77b228dda6aa5
2021-07-16 00:06:27 -07:00
Akifumi Imanishi
4d9fd8958b Support __rand__, __ror__ and __rxor__ (#59240)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58120.

This PR implements `torch.Tensor.{__rand__/__ror__/__rxor__}` for compatibility with NumPy’s interface.
(cc: mruberry, rgommers, emcastillo, kmaehashi)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59240

Reviewed By: ngimel

Differential Revision: D29482304

Pulled By: mruberry

fbshipit-source-id: 13789202c1d8dddf8658a45381aeedcc31e2f603
2021-07-07 13:34:14 -07:00
kshitij12345
dfd2edc025 [special] add zeta (#59623)
Summary:
Reference https://github.com/pytorch/pytorch/issues/50345

`zeta` was already present in the codebase to support computation of `polygamma`.

However, `zeta` only had a `double(double, double)` signature **for CPU** before this PR (which meant that `polygamma` computations were always upcast to `double` for the zeta part).

With this PR, float computations will take place in float and double in double.

Have also refactored the code and moved the duplicate code from `Math.cuh` to `Math.h`

**Note**: For scipy, `q` is optional; if it is `None`, it defaults to `1`, which corresponds to the Riemann zeta function. However, for `torch.special.zeta` I made it mandatory, because it feels odd that without `q` this is the Riemann zeta and with `q` it is the general Hurwitz zeta. I think sticking to just the general form makes more sense, as passing `1` for `q` is trivial.
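
A short usage sketch of the relationship described above:

```python
import torch

x = torch.tensor([2.0, 3.0, 4.0])

# Hurwitz zeta: q is mandatory in torch.special.zeta
hurwitz = torch.special.zeta(x, torch.tensor(2.0))

# Riemann zeta is the special case q = 1
riemann = torch.special.zeta(x, torch.tensor(1.0))
print(riemann)  # ~[1.6449, 1.2021, 1.0823] == [pi^2/6, Apery's constant, pi^4/90]
```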

Verify:
* [x] Docs https://14234587-65600975-gh.circle-artifacts.com/0/docs/special.html#torch.special.zeta

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59623

Reviewed By: ngimel

Differential Revision: D29348269

Pulled By: mruberry

fbshipit-source-id: a3f9ebe1f7724dbe66de2b391afb9da1cfc3e4bb
2021-06-24 00:00:12 -07:00
Jordan Fix
7ed07e2a7d [NormalizeArgs] Retain node.meta (#60449)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60449

After normalizing args, still retain each node's `meta`

Test Plan: Added unit test.

Reviewed By: gcatron

Differential Revision: D29293179

fbshipit-source-id: 432b409790041fa4d6e759f7b46a8bee363497b0
2021-06-23 03:31:53 -07:00
Hangchen Yu
9fbbab88da [fx-acc] Saturate host by replicating partitions onto idle devices (#60064)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60064

This implements a host saturation optimization to maximize the utilization of the available devices.
It uses a greedy heuristic to replicate all partitions on the used devices to another set of idle devices with enough memory.

The added unittest shows an example as follows:

```
partition_0: 192 bytes; partition_1: 48 bytes
dev_0: 200 bytes, [partition_0]
dev_1: 200 bytes, [partition_1]
dev_2: 100 bytes,
dev_3: 100 bytes,
dev_4: 200 bytes,
dev_5: 100 bytes
```

Before host saturation, `partition_0` is assigned to dev_0 and `partition_1` is assigned to dev_1.
After host saturation, `partition_0` is replicated to dev_4 simply because it's the only device that can hold all partitions on dev_0. `partition_1` is replicated to dev_2 because it has the smallest memory that is still large enough to hold all partitions on dev_1.
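
A simplified sketch of the greedy idea using plain dicts (hypothetical data structures; the real pass operates on the partitioner's device/partition objects):

```python
# Greedy host saturation: for every device that already holds partitions, find an
# idle device with enough free memory and replicate those partitions onto it.
def saturate_host(used, idle, partition_sizes):
    """used: {device: [partition names]}, idle: {device: free bytes},
    partition_sizes: {partition name: bytes}.  Returns a replication plan."""
    plan = {}
    for device, partitions in used.items():
        needed = sum(partition_sizes[p] for p in partitions)
        # choose the idle device with the smallest memory that still fits everything
        candidates = [(mem, d) for d, mem in idle.items() if mem >= needed]
        if not candidates:
            continue
        _, target = min(candidates)
        plan[device] = (target, list(partitions))
        del idle[target]
    return plan

used = {"dev_0": ["partition_0"], "dev_1": ["partition_1"]}
idle = {"dev_2": 100, "dev_3": 100, "dev_4": 200, "dev_5": 100}
sizes = {"partition_0": 192, "partition_1": 48}
print(saturate_host(used, idle, sizes))
# {'dev_0': ('dev_4', ['partition_0']), 'dev_1': ('dev_2', ['partition_1'])}
```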

Test Plan:
```
buck test mode/opt //caffe2/test:test_fx_experimental -- --exact 'caffe2/test:test_fx_experimental - test_saturate_host (test_fx_experimental.TestFXExperimental)'

Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/8444249343103429
    ✓ ListingSuccess: caffe2/test:test_fx_experimental - main (1.322)
    ✓ Pass: caffe2/test:test_fx_experimental - test_saturate_host (test_fx_experimental.TestFXExperimental) (1.322)
Summary
  Pass: 1
  ListingSuccess: 1
```

An e2e test will be added to `test_fx_glow.py` in a followup diff.

Reviewed By: gcatron

Differential Revision: D29039998

fbshipit-source-id: 57518aadf668f7f05abd6ff73224c16b5d2a12ac
2021-06-15 23:04:46 -07:00
Hangchen Yu
95257e8a62 [fx-acc] Fix wrong device assignment in find_single_partition (#60056)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60056

Previously we put the whole graph as a single partition onto a device with maximum memory if possible, but the code assumed that the first logical device always has the maximum memory.

This diff fixes this issue and updates the unittest to reflect such a corner case.

Test Plan:
```
buck test mode/opt //caffe2/test:test_fx_experimental -- --exact 'caffe2/test:test_fx_experimental - test_find_single_partition (test_fx_experimental.TestFXExperimental)'

Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/6473924507772744
    ✓ ListingSuccess: caffe2/test:test_fx_experimental - main (1.357)
    ✓ Pass: caffe2/test:test_fx_experimental - test_find_single_partition (test_fx_experimental.TestFXExperimental) (1.206)
Summary
  Pass: 1
  ListingSuccess: 1

```

Reviewed By: gcatron

Differential Revision: D29118715

fbshipit-source-id: cac6a1f0d2f47717446dcc80093bbcf362663859
2021-06-15 19:36:38 -07:00
Hangchen Yu
f232b052a6 [fx-acc][easy] Format FX experimental partitioner code (#60030)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60030

As titled. Non-functional re-format.

Test Plan: NA

Reviewed By: gcatron

Differential Revision: D29038449

fbshipit-source-id: a7c94eaab86850ef57b51ec66bfe8ea0e68d2dc8
2021-06-15 16:14:33 -07:00
Protonu Basu
a2e56fa0dc Adding users of a node to the serialized JSON. (#59357)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59357

Adding users of a node to the serialized JSON. Illustrated in the example:

JSON:
P419734894

Examples:
    {
      "shape": "[7]",
      "dtype": "torch.float16",
      "stride": "[1]",
      "is_quantized": false,
      "target": "conv.bias",
      "op_code": "get_attr",
      "name": "conv_bias",
      "args": [],
      "kwargs": {},
      "users": [
        {
          "is_node": true,
          "name": "to_dtype"
        }
      ]
    }

    {
      "target": "output",
      "op_code": "output",
      "name": "output",
      "args": [
        {
          "is_node": true,
          "name": "fba_layout_transform_1",
          "shape": "[3, 7, 12, 12]",
          "dtype": "torch.float16",
          "stride": "[1008, 144, 12, 1]",
          "is_quantized": false
        }
      ],
      "kwargs": {},
      "users": []
    }

Test Plan: buck test //caffe2/test:test_fx_experimental

Reviewed By: gcatron, jfix71

Differential Revision: D28857487

fbshipit-source-id: a3bac6bdb21ce10ba4a0d170c809aef13e6174a6
2021-06-06 23:15:32 -07:00
kshitij12345
da972afdcd OpInfo: to_sparse (#59445)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59445

Reviewed By: ngimel

Differential Revision: D28920866

Pulled By: mruberry

fbshipit-source-id: ba8d3071d9937096288b69511000eeb007f53434
2021-06-05 19:13:58 -07:00
Akifumi Imanishi
0a5bfa9919 Support __rmod__ (#58476)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58035.

This PR implements `torch.Tensor.__rmod__` and `torch.remainder(scalar, tensor)` for compatibility with NumPy’s interface.
(cc: mruberry, rgommers, emcastillo, kmaehashi)

TODO:
  - [x] Update `tensor_binary_op` in test/test_binary_ufuncs.py after https://github.com/pytorch/pytorch/issues/58216 is merged.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58476

Reviewed By: ngimel

Differential Revision: D28776810

Pulled By: mruberry

fbshipit-source-id: 74f8aea80f439ef2cc370333524e39971eeb7bf4
2021-06-05 16:19:24 -07:00
krshrimali
ef40757de3 OpInfo: zero_ (#58731)
Summary:
See https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58731

Reviewed By: ngimel

Differential Revision: D28784083

Pulled By: mruberry

fbshipit-source-id: f06de8045afd3728b1fedc014c091d8fd1955a9f
2021-05-30 21:49:29 -07:00
kshitij12345
445e838210 OpInfo: resize_, resize_as_ (#59176)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59176

Reviewed By: ngimel

Differential Revision: D28780083

Pulled By: mruberry

fbshipit-source-id: 472584e8faa4cb1031908df097849d2d4167fdf5
2021-05-30 18:53:17 -07:00
kshitij12345
d68df54269 OpInfo: fill_ (#59138)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59138

Reviewed By: ngimel

Differential Revision: D28776451

Pulled By: mruberry

fbshipit-source-id: 2e8e9f1805ec7d900223ea749a4a0b86a1bedb54
2021-05-29 00:35:02 -07:00
kshitij12345
c9af4c2636 OpInfo: where (#58349)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58349

Reviewed By: mrshenli

Differential Revision: D28744220

Pulled By: mruberry

fbshipit-source-id: 893a2fb88a48a60df75c7d6e2f58a42ca949daa7
2021-05-28 18:22:03 -07:00
kshitij12345
f9e8dc005a OpInfo: clone, contiguous (#58390)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58390

Reviewed By: soulitzer

Differential Revision: D28567821

Pulled By: mruberry

fbshipit-source-id: bcf42cb4a9a57d8a15a76819b8a9e2df97cf00be
2021-05-22 18:25:31 -07:00
Heitor Schueroff
9ac0bd23a2 Fix bug in test_fx_experimental codegen (#58587)
Summary:
This PR fixes a bug in test_fx_experimental where code generated for ops with kwarg-only Tensor parameters would fail to execute because they would be called as positional parameters.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58587

Reviewed By: ailzhang

Differential Revision: D28548365

Pulled By: heitorschueroff

fbshipit-source-id: 8f1746053cbad1b11e817b0099db545d8dd22232
2021-05-20 07:49:08 -07:00
Akifumi Imanishi
3113a1de4a Fix some tensor operators to return NotImplemented for invalid inputs (#58216)
Summary:
Same as https://github.com/pytorch/pytorch/issues/57934. (cc/ albanD)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58216

Reviewed By: ailzhang

Differential Revision: D28494886

Pulled By: albanD

fbshipit-source-id: 380205867ee1cde90e1c6fcfe2a31749e1243530
2021-05-19 13:09:57 -07:00
James Reed
7b73fdf597 [FX] Fix retracing wrapped functions (#58061)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58061

Test Plan: Imported from OSS

Reviewed By: yuhc

Differential Revision: D28358801

Pulled By: jamesr66a

fbshipit-source-id: c7c9a8a80e5bfe1eb1f6d2cf858ac7e57153a860
2021-05-17 19:50:16 -07:00
Shiyan Deng
bcacf91a71 [fx_glow]Add Support for importing quantized linear in FXIRImporter (#57483)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57483

Pull Request resolved: https://github.com/pytorch/glow/pull/5622

Quantized linear has packed parameters. We want to unpack them so that it is easier for graph optimization and the importer to deal with the weight and bias. A customized remapping function is used to unpack quantized linear and map it to acc_op.linear.

Test Plan: `buck test glow/fb/fx/nnpi_importer:test_importer`

Reviewed By: gcatron, jfix71, khabinov

Differential Revision: D27451237

fbshipit-source-id: e46e961734788fd5333e227ca6143fd37c33204e
2021-05-14 18:48:31 -07:00
Horace He
84d8e3b0f6 [FX] Finished prepare_for_inference API for release (#58293)
Summary:
Added an ability to configure which passes to run.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58293

Reviewed By: bdhirsh

Differential Revision: D28435948

Pulled By: Chillee

fbshipit-source-id: dfc7f1ef6b38e6f49c2423a5efe8477a645171d0
2021-05-14 14:10:07 -07:00
Alban Desmaison
5e83c62a9e Revert D28351931: [pytorch][PR] Fix some tensor operators to return NotImplemented for invalid inputs
Test Plan: revert-hammer

Differential Revision:
D28351931 (35521a2629)

Original commit changeset: 985457a44dba

fbshipit-source-id: 10724c219e53648f10a70719e25bcf774c6c7852
2021-05-12 13:58:03 -07:00
Akifumi Imanishi
35521a2629 Fix some tensor operators to return NotImplemented for invalid inputs (#57934)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/57719.

This PR fixes `torch.Tensor{__rsub__, __rdiv__, __rtruediv__, __pow__, __rmatmul__}` to return `NotImplemented` instead of raising a `TypeError`.
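
A small sketch of why returning `NotImplemented` matters: Python can then fall back to the other operand's reflected method instead of failing immediately (the `Scaled` wrapper below is hypothetical, for illustration only):

```python
import torch

class Scaled:
    """Toy wrapper that knows how to turn itself into a power expression."""
    def __init__(self, value):
        self.value = value

    def __rpow__(self, base):
        # Only reachable if base.__pow__(self) returned NotImplemented
        return torch.pow(base, self.value)

t = torch.arange(1.0, 4.0)
print(t ** Scaled(2.0))  # tensor([1., 4., 9.]) -- Tensor.__pow__ defers via NotImplemented
```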

cc/ mruberry: The first commit of this PR is the same as 1d209db1cc except for the commit message.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57934

Reviewed By: mruberry

Differential Revision: D28351931

Pulled By: albanD

fbshipit-source-id: 985457a44dba24d2496794dfb8c1661cbcd4ff8f
2021-05-12 11:03:23 -07:00
kshitij12345
ff982ef73d OpInfo: reshape, reshape_as and minor clean-up (#57460)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57460

Reviewed By: nairbv

Differential Revision: D28151675

Pulled By: anjali411

fbshipit-source-id: 2b3bcadab3ff5d1761b2922b63afd70a354e785c
2021-05-12 06:05:21 -07:00
Ilqar Ramazanli
8b816e9010 To implement gradient for Pytorch (#54617)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/56129
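
A short usage sketch of the resulting `torch.gradient` (numpy.gradient-style finite differences):

```python
import torch

# f(x) = x^2 sampled on a uniform grid; df/dx = 2x
x = torch.arange(0.0, 5.0)
y = x ** 2
(dy_dx,) = torch.gradient(y, spacing=1.0)
print(dy_dx)  # tensor([1., 2., 4., 6., 7.]) -- one-sided at the ends, central in the middle
```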

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54617

Reviewed By: anjali411

Differential Revision: D28057452

Pulled By: iramazanli

fbshipit-source-id: 9bd86679282d34f5e5393e6447121586517eb4f0
2021-05-11 18:52:20 -07:00
kshitij12345
9e6b7e6e6e OpInfo: expand and expand_as (#57606)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57606

Reviewed By: albanD

Differential Revision: D28249191

Pulled By: mruberry

fbshipit-source-id: d985ab4e8a99b116c45953e621092929a9a8028e
2021-05-07 02:50:00 -07:00
kshitij12345
154eca0309 OpInfo: ravel, view, view_as (#56910)
Summary:
Reference: https://github.com/pytorch/pytorch/issues/54261

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56910

Reviewed By: ngimel

Differential Revision: D28141867

Pulled By: mruberry

fbshipit-source-id: bff49d40d7e3bb36bc83d1405bd77f5529eeffe9
2021-05-02 22:10:36 -07:00
Yukio Siraichi
ce4449918a Port reverse binary ops to OpInfo (#56471)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54296
Tracking Issue https://github.com/pytorch/pytorch/issues/54261

**Summary:**
- `rsub` (aten function) was already ported
- Ported tests for its dunder version: `__rsub__`
- Ported tests for the other dunder functions: `__radd__`, `__rmul__`, `__rdiv__`, `__rpow__`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56471

Reviewed By: ngimel

Differential Revision: D28142843

Pulled By: mruberry

fbshipit-source-id: 3d1bd88a4f124774f48d33a7ca7bfc7f796360df
2021-05-02 16:01:12 -07:00
Horace He
786b0a8091 [FX] fix normalization issues with lists of tensors (#57004)
Summary:
Fixes issue with lists of tensors not being normalized correctly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/57004

Reviewed By: jamesr66a

Differential Revision: D28034559

Pulled By: Chillee

fbshipit-source-id: f935f0b73a8356acd8a2ae93fcfc0417f0eab224
2021-04-27 20:02:00 -07:00
Heitor Schueroff
57e37080cd Added OpInfo for torch.einsum (#56276)
Summary:
Adds OpInfo testing for torch.einsum.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56276

Reviewed By: mruberry

Differential Revision: D27967095

Pulled By: heitorschueroff

fbshipit-source-id: 60524273d2ca885e7eeb932db3e7fd697ae5ca8e
2021-04-27 07:39:38 -07:00
iramazanli
3e006fc57e Adding hsplit,vsplit and dsplit methods (#53536)
Summary:
Fixes #{issue number}
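
A quick usage sketch of the added functions and methods:

```python
import torch

t = torch.arange(24.0).reshape(2, 3, 4)

print([p.shape for p in torch.vsplit(t, 2)])  # split along dim 0 -> 2 x [1, 3, 4]
print([p.shape for p in torch.hsplit(t, 3)])  # split along dim 1 -> 3 x [2, 1, 4]
print([p.shape for p in torch.dsplit(t, 2)])  # split along dim 2 -> 2 x [2, 3, 2]

# Also available as tensor methods
print(len(t.dsplit(2)))  # 2
```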

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53536

Reviewed By: albanD

Differential Revision: D27938880

Pulled By: iramazanli

fbshipit-source-id: f741119517783ec2bafa296622ee518b587dd127
2021-04-26 09:39:09 -07:00
Jordan Fix
4ef8205104 [fx][normalize] Allow for args to be left as args (#55995)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55995

Normalization is somewhat broken currently, but making default arguments visible still appears to work and is useful functionality to be able to rely on. Adds an option to `NormalizeArgs`'s `__init__` called `normalize_to_only_use_kwargs`, which defaults to true; if set to false, the pass keeps the same args/kwargs split as provided while still populating the default keyword arguments in kwargs.
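
A minimal sketch of how the option is meant to be used, assuming the pass can normalize this functional call without extra type information:

```python
import torch
import torch.nn.functional as F
from torch.fx import symbolic_trace
from torch.fx.experimental.normalize import NormalizeArgs

class M(torch.nn.Module):
    def forward(self, x):
        return F.leaky_relu(x, 0.1)

traced = symbolic_trace(M())

# Default: everything (including defaults such as inplace=False) becomes a kwarg.
all_kwargs = NormalizeArgs(traced).transform()

# With the new flag, positional args stay positional; defaults are still surfaced as kwargs.
keep_args = NormalizeArgs(traced, normalize_to_only_use_kwargs=False).transform()
print(keep_args.code)
```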

Test Plan: Added test to `test_fx_experimental`.

Reviewed By: 842974287

Differential Revision: D27759448

fbshipit-source-id: 620061fcf46d8549ac70b62aede8b6740aee3778
2021-04-24 08:15:17 -07:00
Horace He
0df239e550 [FX] Make arg normalization a method on Node and not a pass (also augment tests to be exhaustive) (#55992)
Summary:
Commandeered from https://github.com/pytorch/pytorch/pull/54563

Primary changes from first PR:
1. Refactored primary `normalize_function` logic into `operator_schemas.py` so that non-FX users can use it.
2. Refactored tests a bit, and added a path to call `normalize_function` directly.
3. Moved check for `boolean_dispatch` so that `torch.lu` also gets properly handled.
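
For non-FX users, a small sketch of calling the relocated utility directly (assuming it lives in `torch.fx.operator_schemas`, per item 1 above):

```python
import torch
import torch.nn.functional as F
from torch.fx.operator_schemas import normalize_function

# Turn a positional call into an all-kwargs call according to the op's signature.
normalized = normalize_function(
    F.leaky_relu, args=(torch.randn(3), 0.2), kwargs={},
    normalize_to_only_use_kwargs=True,
)
print(normalized.kwargs.keys())  # e.g. dict_keys(['input', 'negative_slope', 'inplace'])
```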

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55992

Reviewed By: mruberry

Differential Revision: D27774396

Pulled By: Chillee

fbshipit-source-id: 7f65632e1d608e4abd55aec5ccbfdc3f67f52b8e
2021-04-22 03:53:41 -07:00
Jordan Fix
5eadc243f3 Preserve node meta info in split_module (#56212)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56212

The current design doesn't make it easy to use `node.copy()`. Explicitly copy over the node's meta.

Test Plan: Updated `test_subgraph_creation` in `test_fx_experimental`

Reviewed By: jamesr66a

Differential Revision: D27808477

fbshipit-source-id: 7fe7b6428c830307dbd1e395f16fa2774936d3b3
2021-04-16 18:02:50 -07:00
James Reed
2236f43da0 [FX] Put tensor metadata into a NamedTuple in ShapeProp (#55930)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/55930

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27741730

Pulled By: jamesr66a

fbshipit-source-id: 0a0a1b94beed6c482add9e9551f316f3b4220ab2
2021-04-13 22:21:50 -07:00
Yukio Siraichi
93bf0ae6fc Remove legacy constructor calls from pytorch codebase. (#54142)
Summary:
Follow up from https://github.com/pytorch/pytorch/issues/53889
Related to https://github.com/pytorch/pytorch/issues/47112

Removing every occurrence of the legacy constructor call present in PyTorch at:
- _docs_
- _benchmarks_
- _test_
- _caffe2_
- _CONTRIBUTING.md_

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54142

Reviewed By: ngimel

Differential Revision: D27699450

Pulled By: mruberry

fbshipit-source-id: 530aa3f5746cc8bc1407d5d51b2bbd8075e30546
2021-04-11 15:45:17 -07:00
Shiyan Deng
43ede4c2e3 Add Per Tensor Quantization Support to FXIRImporter (#55405)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55405

Pull Request resolved: https://github.com/pytorch/glow/pull/5516

Allows FXIRImport to import quantized model.

This diff doesn't include the supports for per-channel weights, linear and conv. Will address them in the next diff.

Test Plan: buck test glow/fb/fx/nnpi_importer:test_importer

Reviewed By: jackm321, jfix71

Differential Revision: D27313543

fbshipit-source-id: bf5c96ef5f2ff1835c09db981e0ceefaec56dd5b
2021-04-09 10:49:48 -07:00
James Reed
bcb4583170 [FX] Add a metadata dict to Node and switch shapeprop to use that (#54926)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54926

Test Plan: Imported from OSS

Reviewed By: ansley

Differential Revision: D27417801

Pulled By: jamesr66a

fbshipit-source-id: 68a5155120a235065f58aa64ba1a6a97818dd0c1
2021-03-31 14:36:54 -07:00
Horace He
24bfcd537e [FX] Added FX prepare_for_inference for Intel CPUs (#53805)
Summary:
Part of https://github.com/pytorch/pytorch/issues/48209

Taken from the docstring:
 Performs a set of optimization passes to optimize a model for the purposes of inference. Specifically, the passes that are run are:
    1. Conv/BN fusion
    2. Dropout removal
    3. MKL layout optimizations

The third optimization takes a function `use_mkl_heuristic` that's used to determine whether a subgraph should be explicitly run in MKL layout.

I implemented 2 heuristics:
1. Uses MKL layout if the subgraph is larger than 2 nodes.
2. Benchmarks each subgraph with MKL layout and without, and keeps the subgraph if it's faster.
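
A sketch of what such heuristic callbacks could look like (hypothetical signatures, not the actual API):

```python
import timeit

# Hypothetical size-based heuristic (first kind): keep a candidate subgraph in
# MKL layout only if it is large enough that the layout conversions pay off.
def size_based_heuristic(subgraph_nodes, min_nodes=2):
    return len(subgraph_nodes) > min_nodes

# Hypothetical benchmark-based heuristic (second kind): time both variants on a
# representative input and keep whichever is faster.
def benchmark_based_heuristic(run_with_mkl, run_without_mkl, example_input, iters=10):
    t_mkl = timeit.timeit(lambda: run_with_mkl(example_input), number=iters)
    t_plain = timeit.timeit(lambda: run_without_mkl(example_input), number=iters)
    return t_mkl < t_plain
```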

### Batch size of 10 and multi-threaded.

Results with the second heuristic are generally as strong as the "jit.freeze" version, except in `densenet` and `vgg`, where it's faster, likely due to the heuristic being better. With the first heuristic, there are some notable gaps, particularly on `inception_v3` and `alexnet`.

```
model         Eager      FX         FX Auto   jit.mkldnn
------------  ---------  ---------  ---------  ---------  -
custom        0.195614   0.14686    0.15929    0.156442   6
resnet18      0.172012   0.114007   0.119678   0.12945    6
resnet50      0.486463   0.294308   0.299518   0.318121   6
densenet161   0.955309   0.893502   0.882798   1.29315    6
inception_v3  0.38454    0.307076   0.239513   0.233083   6
googlenet     0.229388   0.237486   0.170458   0.174106   6
shufflenet    0.0513613  0.0286739  0.0292908  0.0267209  6
alexnet       0.0709602  0.0768137  0.0660831  0.0650399  6
vgg16         1.053993   0.9013264  0.9360212  1.082820   6
mobilenet     0.12264    0.0970935  0.0936568  0.106314   6
mnasnet       0.0989875  0.0412083  0.0424499  0.0472336  6
resnext       0.476811   0.315428   0.314422   0.343156   6
```

For single-threaded (still running...)
```
model             eager         FX    FX auto        mkl    threads
------------  ---------  ---------  ---------  ---------  ---------
custom        0.0401415  0.259863   0.0263152  0.200667           1
resnet18      0.499931   0.382113   0.383711   0.396335           1
resnet50      1.10353    0.911865   0.923645   0.992125           1
densenet161   2.20158    2.39421    2.08204    2.30124            1
inception_v3  0.79161    0.849207   0.703546   0.724492           1
googlenet     0.66896    0.820965   0.515927   0.529414           1
shufflenet    0.0987308  0.0689343  0.0629298  0.0617193          1
alexnet       0.198795   0.19862    0.19325    0.211934           1
vgg16         3.744      3.2499     3.28503    3.31576            1
mobilenet     0.152725   0.14505    0.135555   0.159754           1
mnasnet       0.141983   0.089406   0.089599   0.0956167          1
resnext       1.13778    0.97016    0.955417   0.965376           1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53805

Reviewed By: gmagogsfm

Differential Revision: D27424611

Pulled By: Chillee

fbshipit-source-id: a39137159de962fba7ca15121dfa9e78c1e01223
2021-03-31 10:15:01 -07:00
James Reed
c656a5befa [FX] Normalize Python operators to torch. ops when called with Tensors (#54236)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54236

Test Plan: Imported from OSS

Reviewed By: zdevito

Differential Revision: D27149411

Pulled By: jamesr66a

fbshipit-source-id: fe9c468f7c84c254dbb1b70163d08b343725861a
2021-03-25 22:27:49 -07:00
James Reed
a27f46bbe3 [FX] Experimental type annotation pass using Python signatures (#53831)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53831

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D26982804

Pulled By: jamesr66a

fbshipit-source-id: 17db9f71e729206f29ee231e34723d9616f128b7
2021-03-17 20:43:17 -07:00
Jordan Fix
1053c96693 [GraphModule] Back out changes to module root version of __init__ (#53791)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/53791

Reviewed By: houseroad

Differential Revision: D26970869

fbshipit-source-id: 80684516f57fd2d1aca794f17fe488b2fe2b2f64
2021-03-10 23:18:56 -08:00
Jordan Fix
3b0e4a6ed4 [GraphModule] Improve buffer registration during init (#53444)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53444

GraphModule construction has two options when constructing the base nn.Module: a dict of names to attrs to assign to the GraphModule, or another nn.Module to copy attrs from.

- For the dict case, add logic to explicitly register `nn.Tensors` that are not `nn.Parameter` as buffers on the GraphModule, else fall back to `__setattr__`.
- For the other `nn.Module` case, update so that it checks in the other module whether the attr to copy in is a buffer, and register it as such, else fall back to `__setattr__`.
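
A small sketch of the dict-based construction path this improves; a plain tensor in the root dict should now end up registered as a buffer:

```python
import torch
from torch.fx import Graph, GraphModule

graph = Graph()
x = graph.placeholder('x')
scale = graph.get_attr('scale')    # plain tensor attribute -> buffer
weight = graph.get_attr('weight')  # nn.Parameter -> parameter
out = graph.call_function(torch.add, (graph.call_function(torch.mul, (x, scale)), weight))
graph.output(out)

root = {'scale': torch.tensor(2.0), 'weight': torch.nn.Parameter(torch.zeros(3))}
gm = GraphModule(root, graph)

print(dict(gm.named_buffers()))     # {'scale': tensor(2.)}
print(dict(gm.named_parameters()))  # {'weight': Parameter containing tensor([0., 0., 0.])}
```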

Test Plan: Added tests for fetching params and buffers from a GraphModule using both dict and module `__init__`s

Reviewed By: jamesr66a

Differential Revision: D26860055

fbshipit-source-id: 8d9999f91fef20aaa10969558006fc356247591f
2021-03-09 21:05:01 -08:00
Ansley Ussery
85109ce427 Support submodule manipulation in GraphModule (#52358)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52358

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26759260

Pulled By: ansley

fbshipit-source-id: 25d2b9124a7d957704f1700a45dca143aaed391d
2021-03-04 14:52:35 -08:00
Michael Suo
ecf3ca00d8 [fx] Separate globals assignment from code generation (#51974)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51974

Right now, when an FX `Graph` references an external object, we will emit
code like:

    import foo
    def forward(input: foo.bar.baz):
        ...

This is problematic in a world with `torch.package`, since then name
`foo.bar.baz` may reference a name from any number of packages.

This PR lays the groundwork for FX-package integration by separating the
resolution of external references from the generation of the function
code.

When generating a Graph's Python source, we keep track of all external
references and assign them unique names. At the end, we have a
dictionary mapping names -> actual objects. This becomes the `globals`
namespace we pass to `exec` when installing the forward function in a
`GraphModule`. This is nice because we can always be sure that `exec` is
seeing the same objects that were referenced from the `Graph`, no import
statements needed.

At serialization time, we use a `ModuleEnv` to resolve the globals dict
to a set of import statements that can be run to reproduce the `global`
namespace. This is only used on serialization/deserialization, and those
functions are expected to check that the import statements are producing
the correct results.

Concretely, the code above will now look like:

    from foo.bar import baz as foo_bar_baz
    def forward(input: foo_bar_baz):
        ...

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D26340593

Pulled By: suo

fbshipit-source-id: fe247f75205d0a03fd067bdd0f95491e8edf1436
2021-02-23 13:48:03 -08:00
James Reed
f7a3634466 [WIP][FX] Normalize torch.nn.functional calls (#51816)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51816

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D26290764

Pulled By: jamesr66a

fbshipit-source-id: 9c05ff1b7c6f0ab8a13516f7cc2fe279980ebe5d
2021-02-17 15:18:03 -08:00
James Reed
a1c5eba4bd [FX] Move some heavily used passes out of experimental (#51392)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51392

Test Plan: Imported from OSS

Reviewed By: Chillee

Differential Revision: D26161172

Pulled By: jamesr66a

fbshipit-source-id: 04bfe606555bdf1988f527231d4de2e0196e6b37
2021-02-01 19:02:26 -08:00
Garret Catron
0e8e739a9f Move AcceleratedGraphModule out of graph_manipulation. (#51220)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51220

testing with OS this time...

Reviewed By: jfix71

Differential Revision: D26105140

fbshipit-source-id: b4b7a8f0f4cc8f96f9f8b270277a71061d5e5e84
2021-01-28 02:39:12 -08:00
Nikita Shulga
57484103be Revert D25675618: Move AcceleratedGraphModule out of graph_manipulation.
Test Plan: revert-hammer

Differential Revision:
D25675618 (c8a24ebe54)

Original commit changeset: 55636bb2d3d6

fbshipit-source-id: 7b196f7c32830061eca9c89bbcb346cdd66a211e
2021-01-26 15:31:18 -08:00
Garret Catron
c8a24ebe54 Move AcceleratedGraphModule out of graph_manipulation.
Test Plan:
buck test //caffe2/test:test_fx_experimental
buck test //glow/fb/fx_nnpi_importer:test_importer

Reviewed By: jfix71

Differential Revision: D25675618

fbshipit-source-id: 55636bb2d3d6102b400f2044118a450906954083
2021-01-26 12:39:49 -08:00
Meghan Lele
11cdb910b4 [fx] Add matrix multiplication fusion pass (#50151)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50151

**Summary**
This commit adds a graph transformation pass that merges several matrix
multiplications that use the same RHS operand into one large matrix
multiplication. The LHS operands from all of the smaller matrix multiplications
are concatenated together and used as an input in the large matrix multiply,
and the result is split in order to obtain the same products as the original
set of matrix multiplications.
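
The arithmetic identity behind the pass, as a tiny sketch:

```python
import torch

B = torch.randn(6, 5)                   # shared RHS operand
lhs = [torch.randn(2, 6), torch.randn(3, 6), torch.randn(4, 6)]

# Several small matmuls against the same RHS...
separate = [a @ B for a in lhs]

# ...are equivalent to one large matmul followed by a split.
merged = torch.split(torch.cat(lhs, dim=0) @ B, [a.shape[0] for a in lhs], dim=0)

for s, m in zip(separate, merged):
    assert torch.allclose(s, m)
```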

**Test Plan**
This commit adds a simple unit test with two matrix multiplications that share
the same RHS operand.

`python test/test_fx_experimental.py -k merge_matmul -v`

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25809409

Pulled By: SplitInfinity

fbshipit-source-id: fb55c044a54dea9f07b71aa60d44b7a8f3966ed0
2021-01-06 21:49:37 -08:00
Natalia Gimelshein
ad7d208ba5 Revert D25239967: [fx] Add matrix multiplication fusion pass
Test Plan: revert-hammer

Differential Revision:
D25239967 (9b7f3fa146)

Original commit changeset: fb99ad25b7d8

fbshipit-source-id: 370167b5ade8bf2b3a6cccdf4290ea07b8347c79
2021-01-05 23:22:26 -08:00
Meghan Lele
9b7f3fa146 [fx] Add matrix multiplication fusion pass (#50120)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50120

This commit adds a graph transformation pass that merges several matrix
multiplications that use the same RHS operand into one large matrix
multiplication. The LHS operands from all of the smaller matrix multiplications
are concatenated together and used as an input in the large matrix multiply,
and the result is split in order to obtain the same products as the original
set of matrix multiplications.

Test Plan:
This commit adds a simple unit test with two matrix multiplications that share
the same RHS operand.

`buck test //caffe2/test:fx_experimental`

Reviewed By: jamesr66a

Differential Revision: D25239967

fbshipit-source-id: fb99ad25b7d83ff876da6d19dc4abd112d13001e
2021-01-05 19:37:08 -08:00
Shiyan Deng
107c31f2f5 Add a pass to fetch attributes of nn.Module to fx.node (#47935)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47935

Fetch the parameters that are needed for lowering from nn.Module to fx.node for leaf_modules.

Test Plan: A test `test_fetch` is added to test_fx_experimental.py.

Reviewed By: jfix71

Differential Revision: D24957142

fbshipit-source-id: a349bb718bbcb7f543a49f235e071a079da638b7
2020-12-08 18:06:37 -08:00