Commit Graph

403 Commits

Author SHA1 Message Date
BowenBao
40de6b80ee [ONNX] Add infra for quantized model export and support quantized mobilenet v3 (#72215)
* Add infrastructure and helper functions to enable future work for other quantized operators and models.
* Add export for quantized operators needed by torchvision mobilenet v3 large.
    * ATen namespace: hardsigmoid, flatten, adaptive_avg_pool, quantize_per_tensor, dequantize.
    * Quantized namespace: conv2d, conv2d_relu, hardswish, add, mul.
* Numerous bug fixes, in unpack_quantized_weight.cpp, symbolic functions, and unit test.
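The quantize_per_tensor / dequantize ops exported here follow the usual affine scheme, q = clamp(round(x / scale) + zero_point). A minimal scalar sketch under that assumption (illustrative names, not the exporter's code):

```python
def quantize_per_tensor(x, scale, zero_point, qmin=0, qmax=255):
    # affine quantization: round(x / scale) + zero_point, clamped to [qmin, qmax]
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    # inverse mapping back to float: (q - zero_point) * scale
    return (q - zero_point) * scale
```

For example, with scale 0.25 and zero point 128, the float 0.5 quantizes to 130 and dequantizes back to 0.5.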

Co-authored-by: BowenBao <bowbao@microsoft.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73102
2022-02-23 06:22:58 +00:00
BowenBao
cc2aad2ef2 [ONNX] Add symbolic for torch.addcmul (#72126)
* Add addcmul op

* Remove requires_grad
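The symbolic can decompose addcmul into the elementwise Add and Mul it is defined by; a rough pure-Python model of that decomposition (a hypothetical helper, not the actual symbolic):

```python
def addcmul(input, tensor1, tensor2, value=1.0):
    # out = input + value * tensor1 * tensor2, elementwise
    return [a + value * b * c for a, b, c in zip(input, tensor1, tensor2)]
```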

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73101
2022-02-22 22:48:18 +00:00
shubhambhokare1
671c8a459a [ONNX] Add pixel_unshuffle support in opset 9
Currently we are unable to utilize ONNX's SpaceToDepth operator due to the lack of the mode_s attribute; hence we add an alternative symbolic in opset 9 to support pixel_unshuffle.

- Adds support for pixel_unshuffle in opset9
- Adds support for dynamic input shapes for pixel_shuffle and pixel_unshuffle
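pixel_unshuffle rearranges a (C, H, W) tensor into (C*r*r, H//r, W//r); the opset-9 symbolic expresses this with reshapes and transposes. A pure-Python sketch of the rearrangement (channel ordering assumed to match PyTorch's):

```python
def pixel_unshuffle(x, r):
    # x: nested list of shape (C, H, W); returns shape (C*r*r, H//r, W//r)
    C, H, W = len(x), len(x[0]), len(x[0][0])
    out = []
    for c in range(C):
        for i in range(r):       # row offset within each r x r block
            for j in range(r):   # column offset within each r x r block
                out.append([[x[c][h * r + i][w * r + j] for w in range(W // r)]
                            for h in range(H // r)])
    return out
```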
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72449
2022-02-19 00:15:16 +00:00
BowenBao
5843fea94d [ONNX] Add export support for linalg norm (#66575)
* Add matrix_norm

* Add vector norm

* Fix flake errors

* Nit fixes

* Restructure and add comments
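For reference, the norms being exported reduce to familiar formulas; a minimal stdlib sketch of the vector p-norm and the default (Frobenius) matrix norm, not the exporter code itself:

```python
import math

def vector_norm(v, p=2.0):
    # p-norm of a flat vector: (sum |x|^p)^(1/p)
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def frobenius_norm(m):
    # default matrix norm: sqrt of the sum of squared entries
    return math.sqrt(sum(x * x for row in m for x in row))
```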

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72987
2022-02-18 18:30:16 +00:00
BowenBao
32f6a1e2a2 [ONNX] First version of quantized model export: Support quantized.Linear (#69232)
Co-authored-by: David Fan <jiafa@microsoft.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72986
2022-02-18 18:27:26 +00:00
ganler
3d8b6d3361 fix: onnx PReLU unidirectional broadcasting
Fixes https://github.com/pytorch/pytorch/issues/70570
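ONNX's PRelu requires the slope tensor to be unidirectionally broadcastable to the input, which is the constraint the fix enforces on the exported weight. The op itself is simple; a per-channel pure-Python sketch (illustrative, not the symbolic):

```python
def prelu(x, slope):
    # x: (C, N) nested list; slope holds one learned coefficient per channel
    return [[v if v > 0 else slope[c] * v for v in row]
            for c, row in enumerate(x)]
```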

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70571
2022-02-16 22:28:08 +00:00
Ryan Spring
4f8b986e28 Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds approximate boolean flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enable Tanh Gelu in NvFuser

```
def gelu(x, approximate: str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```
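The snippet above leaves normcdf undefined; it is the standard normal CDF, 0.5 * (1 + erf(x / sqrt(2))). A self-contained comparison of the exact and tanh-approximated forms (stdlib only, hypothetical function names):

```python
import math

def gelu_exact(x):
    # x * Phi(x), with Phi the standard normal CDF
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # tanh approximation; 0.7978845608028654 ~= sqrt(2 / pi)
    return 0.5 * x * (1.0 + math.tanh(0.7978845608028654 * (x + 0.044715 * x ** 3)))
```

Over typical activation ranges the two forms agree to roughly 1e-3.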

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: VitalyFedyunin

Differential Revision: D33894937

Pulled By: jbschlosser

fbshipit-source-id: b65e8fb6ea66168af8f34f45ed50e92737a33851
(cherry picked from commit 6e986f91a9)
2022-02-14 03:40:32 +00:00
BowenBao
7884c2bbe2 [ONNX] Add Concat to Scalar type analysis JIT pass (#69227) (#69548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69548

* Add Concat to Scalar type analysis pass

By using scalar type analysis for Concat, the exported model can perform
automatic type promotion for Concat nodes, for example with mixed fp16
and fp32 inputs.
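The promotion can be thought of as choosing the widest participating scalar type for the Concat output and casting the narrower inputs up to it; a toy model restricted to floating types (illustrative names, the real pass operates on JIT IR):

```python
# simple widening order for floating types; real promotion covers more dtypes
_RANK = {"float16": 0, "float32": 1, "float64": 2}

def promote_concat_dtype(input_dtypes):
    # the Concat output dtype is the widest input dtype; narrower inputs
    # would get an explicit Cast inserted before the Concat node
    return max(input_dtypes, key=lambda d: _RANK[d])
```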

Unit tests based on the original PR https://github.com/pytorch/pytorch/pull/24378/

* Fix UTs

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32994268

Pulled By: malfet

fbshipit-source-id: 0deab88b0bb1e396770690af27730accb64fcf63
(cherry picked from commit a99322cadf)
2022-02-11 22:05:15 +00:00
BowenBao
308de30abc [ONNX] Support embedding_renorm ONNX export
Composed using ONNX operators, following the same logic as 0a07488ed2/aten/src/ATen/native/Embedding.cpp (L153)

Replaced #72560
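The referenced ATen logic renorms, in place, only the embedding rows that are actually indexed, rescaling any row whose p-norm exceeds max_norm. A pure-Python sketch of that logic (plain arithmetic here rather than the composed ONNX ops; the real kernel also handles an epsilon in the norm):

```python
def embedding_renorm_(weight, indices, max_norm, p=2.0):
    # weight: list of embedding rows; only the *referenced* rows are touched
    for i in set(indices):
        norm = sum(abs(x) ** p for x in weight[i]) ** (1.0 / p)
        if norm > max_norm:
            scale = max_norm / norm
            weight[i] = [x * scale for x in weight[i]]
    return weight
```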
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72738
2022-02-11 22:02:22 +00:00
BowenBao
03afd86295 [ONNX] Fix lstm reshape shape inference regression
Fixes #72399
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72532
2022-02-11 19:40:22 +00:00
BowenBao
04c5d978b9 [ONNX] Refactor _run_symbolic_function (#67573) (#68491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68491

* Allows implementing symbolic functions for domains other than `aten`, for example `prim`, in symbolic_opset#.py.
* Allows symbolic functions to access extra context if needed, through `SymbolicFunctionState`.
  * In particular, the `prim::PythonOp` special case can access the node without needing to pass it through inputs. Updates will be made downstream, and in a follow-up PR we will remove the previous workaround in the exporter.
* `prim::Loop`, `prim::If`, etc. are now moved out of `_run_symbolic_function` in utils.py and into symbolic_opset9.py.

Motivation for this change:
- Better maintainability and reduced complexity. It becomes easier to add symbolics for operators, both simple ones and complex ones (that need additional context), without the former needing to know about the existence of the latter.
- The design idea was long outdated. prim ops are no longer rare special cases, and they shouldn't all be handled inside `_run_symbolic_function`; that function had become too clumsy as a result. There were also prim op symbolics added in symbolic_opset#.py with the signature `prim_[opname]`, creating separation and confusion.

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483782

Pulled By: malfet

fbshipit-source-id: f9affc31b1570af30ffa6668da9375da111fd54a

Co-authored-by: BowenBao <bowbao@microsoft.com>
(cherry picked from commit 1e04ffd2fd)
2022-02-11 18:35:35 +00:00
Steven Troxler
730fef25c7 Convert type comments to annotations in caffe2/test/onnx/ (#72632)
Summary:
Convert type comments in caffe2/test/onnx/

Produced by running:
```
python -m  libcst.tool codemod convert_type_comments.ConvertTypeComment caffe2/test/onnx/
```
from the parent directory.

One question is whether we actually want to scrap type comments here. There are some jit tests where we're explicitly aiming to validate py2-style type comments; I don't think this test is one of those cases, but if I'm misreading it I can close the PR.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72632

Reviewed By: msaroufim

Differential Revision: D34112196

Pulled By: stroxler

fbshipit-source-id: a3d18cb36e7eeb4af9be781e98776bf24b96b854
(cherry picked from commit 9301019e51)
2022-02-11 00:37:29 +00:00
Nikita Shulga
74c44ba9d6 Revert D33850228: [pytorch][PR] Implement Tanh Gelu Approximation
Test Plan: revert-hammer

Differential Revision:
D33850228 (23d03025dc)

Original commit changeset: 3cc33fb298e4

Original Phabricator Diff: D33850228 (23d03025dc)

fbshipit-source-id: 9436e7df73c2b2e2011f321674f24973316d3692
(cherry picked from commit c9efb58223)
2022-01-31 17:44:19 +00:00
Ryan Spring
23d03025dc Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds approximate boolean flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enable Tanh Gelu in NvFuser

```
def gelu(x, approximate: str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: cpuhrsch

Differential Revision: D33850228

Pulled By: jbschlosser

fbshipit-source-id: 3cc33fb298e480d7ecc5c67716da019d60c6ab33
(cherry picked from commit 3a53b3e94f)
2022-01-31 17:07:45 +00:00
Joel Schlosser
cb823d9f07 Revert D33744717: [pytorch][PR] Implement Tanh Gelu Approximation
Test Plan: revert-hammer

Differential Revision:
D33744717 (f499ab9cef)

Original commit changeset: d64532a562ed

Original Phabricator Diff: D33744717 (f499ab9cef)

fbshipit-source-id: 396c3f63de5865f894dbc353d0790a01a624be93
(cherry picked from commit e9fb2d1db1)
2022-01-28 18:35:01 +00:00
Ryan Spring
f499ab9cef Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds approximate boolean flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enable Tanh Gelu in NvFuser

```
def gelu(x, approximate: str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: mikaylagawarecki

Differential Revision: D33744717

Pulled By: jbschlosser

fbshipit-source-id: d64532a562ed53247bb4fa52bb16722634d5c187
(cherry picked from commit 4713dd9cca)
2022-01-28 16:59:09 +00:00
BowenBao
840459a269 [ONNX] Relax constant_fold gather with indices rank > 1 (#68140) (#68493)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68493

Fixes #66786.

`index_select` only supports `index` of 1-D tensor. `ONNX::Gather` allows `index` to have rank `q`. Abort constant folding `ONNX::Gather` if `index` rank is larger than 1.
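The distinction matters because `index_select` accepts only a 1-D index, while ONNX Gather generalizes to an index tensor of any rank; for axis 0 the general behavior looks like this (pure-Python sketch):

```python
def gather_axis0(data, indices):
    # output[i][j] = data[indices[i][j]]; with rank-2 indices this has no
    # direct index_select equivalent, so constant folding must bail out
    return [[data[k] for k in row] for row in indices]
```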

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483826

Pulled By: msaroufim

fbshipit-source-id: a8e8389d85287a859d32abf8d8d98852290b0a03

Co-authored-by: BowenBao <bowbao@microsoft.com>
2022-01-10 11:55:02 -08:00
BowenBao
4b47047dae [ONNX] Add support for shrink ops (#66969) (#68492)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68492

* Initial commit

* Fix flake issue

* Add test tags

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483827

Pulled By: msaroufim

fbshipit-source-id: 41c623712524465b877d0fe0e2f4001d475bf2ce
2022-01-10 11:38:31 -08:00
hwangdeyu
c76c6e9bd3 [ONNX] Add BFloat16 type support when export to ONNX (#66788)
Summary:
- PyTorch and ONNX both support BFloat16; add it to unblock some mixed-precision training models.
- Supports the PyTorch TNLG model using BFloat16 tensors for the inputs/outputs of the layers that run on the NPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66788

Reviewed By: jansel

Differential Revision: D32283510

Pulled By: malfet

fbshipit-source-id: 150d69b1465b2b917dd6554505eca58042c1262a
2021-12-14 12:23:32 -08:00
BowenBao
3f02ad09ec [ONNX] shapeValueMap: Represent symbolic shape as value (#68203) (#69545)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69545

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32994272

Pulled By: malfet

fbshipit-source-id: 77cbdd78d01712faf4f9703549a2833340954509

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-12-09 22:00:46 -08:00
Deyu Huang
d32efe8bc2 [ONNX] Remove the argument use_external_data_format of export() method entirely. (#67080) (#67811)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67811

* remove the argument use_external_data_format of export() method entirely

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181302

Pulled By: malfet

fbshipit-source-id: 4bc1448b7487bb9dfdad4e36008ff5b227fd64a3

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-11-15 17:20:04 -08:00
Deyu Huang
48c8de45b0 [ONNX] Remove the argument example_outputs of export() method entirely. (#67082) (#67809)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67809

* remove the argument example_outputs of export() method entirely

[ONNX] Follow-up: Remove the argument example_outputs of export() method entirely. (#67629)

* Resolve CI failure

* remove test after removing example_outputs

[ONNX] Follow-up: Follow-up: Remove the argument example_outputs of export() method entirely (#67719)

Removing unused import, resolving flake error.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181305

Pulled By: malfet

fbshipit-source-id: ba00547b7cb455ace86606b1bda643c02bdcfa1b

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-11-12 17:06:26 -08:00
Gary Miguel
f57c63032e [ONNX] Fix reciprocal when input is not floating point (#67471) (#67808)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67808

torch.reciprocal implicitly casts the inputs to float, and ONNX
Reciprocal requires floating point inputs.

Also separate the reciprocal test from other tests, and test different
input types.
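A minimal model of the fix: cast non-floating inputs to float before taking the reciprocal, mirroring torch.reciprocal's implicit promotion (a plain-Python stand-in, not the symbolic itself):

```python
def reciprocal(values):
    # cast ints to float first, since ONNX Reciprocal only accepts floating point
    return [1.0 / float(v) for v in values]
```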

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181307

Pulled By: malfet

fbshipit-source-id: 3e1109b3c85a49c51dc713656a900b4ee78c8340
2021-11-08 14:37:07 -08:00
Gary Miguel
958d517643 [ONNX] Fix new_full and full_like for Python 3.9 (#67124) (#67806)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67806

Previously new_full would fail with errors like:
`TypeError: only integer tensors of a single element can be converted to an index`

And full_like would trigger warnings like:
`DeprecationWarning: an integer is required (got type float).  Implicit conversion to integers using __int__ is deprecated, and may be removed in a future version of Python.`

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181301

Pulled By: malfet

fbshipit-source-id: 2cf262cfef36c18e7b2423efe1e1d4fa3438f0ba

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-11-08 14:37:03 -08:00
Gary Miguel
37688148ae [ONNX] Support opset 15 (#67121) (#67805)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67805

Also fix Reduce ops on binary_cross_entropy_with_logits

The graph says the output is a scalar, but with `keepdims=1` (the default)
the output would be a tensor of rank 1. We set `keepdims=0` to make it
clear that we want a scalar output.

This previously went unnoticed because ONNX Runtime does not strictly
enforce shape inference mismatches if the model is not using the latest
opset version.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181304

Pulled By: malfet

fbshipit-source-id: 1462d8a313daae782013097ebf6341a4d1632e2c

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-11-08 14:37:00 -08:00
Bowen Bao
ead59b5ff3 [ONNX] Suppress ort warnings in onnx related test (#67054) (#67804)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67804

Improve readability of test logs by suppressing ORT warning logging for ONNX-related tests.

Reducing ONNX CI test log binary size:
linux-xenial-py3.6-clang7-onnx-test1: 12443 KB -> 6958 KB
linux-xenial-py3.6-clang7-onnx-test2: 16884 KB -> 5778 KB

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181308

Pulled By: malfet

fbshipit-source-id: 11cf165dc212d061606590e96c08c6e021135f74

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-11-08 14:35:20 -08:00
Bowen Bao
02e35ce17b [ONNX] Update onnx function export with comments and clean up (#66817) (#67803)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67803

* Addresses comments from #63589

[ONNX] remove torch::onnx::PRODUCER_VERSION (#67107)

Use constants from version.h instead.
This simplifies things since we no longer have to update
PRODUCER_VERSION for each release.

Also add TORCH_VERSION to version.h so that a string is available for
this purpose.

[ONNX] Set `ir_version` based on opset_version. (#67128)

This increases the odds that the exported ONNX model will be usable.
Before this change, we were setting the IR version to a value which may
be higher than what the model consumer supports.

Also some minor clean-up in the test code:
* Fix string replacement.
* Use a temporary file so as to not leave files around in the test
  current working directory.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181306

Pulled By: malfet

fbshipit-source-id: 02f136d34ef8f664ade0bc1985a584f0e8c2b663

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
2021-11-05 10:35:35 -07:00
Bowen Bao
02a78bdba7 [ONNX] Support conv-bn fusion in blocks (#66152) (#67272)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67272

* Support conv-bn fusion in nested blocks

* avoid running script tests twice

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962513

Pulled By: malfet

fbshipit-source-id: 3ee79426542f9049cf62ac7b0c1be9d60ae6d014
2021-10-28 08:02:46 -07:00
Shubham Bhokare
d9a5668983 [ONNX] Add dim argument to all symbolic (#66093) (#67270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67270

* Add dim argument to the `all` symbolic

* The `all` symbolic depends on the `any` symbolic

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962518

Pulled By: malfet

fbshipit-source-id: f7ee05cf4eff5880fc508154267e060952b5b42d
2021-10-27 13:46:31 -07:00
Jane Xu
5347dab851 Set test owners for onnx tests (#66860)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66860

Reviewed By: malfet

Differential Revision: D31964696

Pulled By: janeyx99

fbshipit-source-id: 4e77d1bda92d9107ca0b90a06d24fa4477ceaffa
2021-10-27 12:50:45 -07:00
Nikita Shulga
0bc9928f31 [ONNX] Symbolic: dynamic input for OneHot, bool for Einsum (#65940) (#66147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66147

Symbolic: dynamic input for OneHot, bool for Einsum

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424094

fbshipit-source-id: 76bea22b29c93d1621c597fe7ab59deb3685087f

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-10-22 13:46:24 -07:00
Nikita Shulga
2c0fe338da [ONNX] Modify softplus symbolic to support beta!=1 (#65001) (#66146)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66146

* Modify softplus symbolic to support beta!=1

* Remove parse args
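Softplus with a general beta is softplus(x) = (1/beta) * log(1 + exp(beta * x)), reducing to the familiar log(1 + e^x) when beta == 1; the updated symbolic exports this general form instead of assuming beta == 1. As a formula check (stdlib sketch, not the symbolic):

```python
import math

def softplus(x, beta=1.0):
    # (1/beta) * log(1 + exp(beta * x)); log1p for numerical accuracy near 0
    return (1.0 / beta) * math.log1p(math.exp(beta * x))
```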

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424096

fbshipit-source-id: 971af54a28141737ccb17510ada03b0651be2a63
2021-10-22 13:46:22 -07:00
Nikita Shulga
a0fc14c20f [ONNX] Add diagonal symbolic (#64454) (#66144)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66144

* Add logic and tests

* minor edits

* Eliminate expand ops

* Fix flake and editing

* Modified error message

* Add overrun check

* Add overrun descriptions

* Remove empty line
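torch.diagonal(input, offset) gathers the elements m[i][i + offset]; the overrun checks above guard against offsets that run past the matrix bounds. A 2-D pure-Python sketch of the semantics (not the symbolic itself):

```python
def diagonal(m, offset=0):
    # elements m[i][i + offset] of a 2-D nested list; empty if offset overruns
    rows, cols = len(m), len(m[0])
    if offset >= 0:
        n = min(rows, cols - offset)
        return [m[i][i + offset] for i in range(max(n, 0))]
    n = min(rows + offset, cols)
    return [m[i - offset][i] for i in range(max(n, 0))]
```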

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424095

fbshipit-source-id: 5b8ef6ac21c32d43c3dbc8e51e1ef30bffb19c25
2021-10-22 13:46:18 -07:00
Nikita Shulga
b18c298f24 ONNX: Delete or document skipped ORT tests (#64470) (#66143)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66143

Delete test_list_remove. There's no point in testing conversion of
this model since TorchScript doesn't support it.

Add a link to an issue tracking test_embedding_bag_dynamic_input.

[ONNX] fix docs (#65379)

Mainly fixes the sphinx build by inserting empty lines before
bulleted lists.

Also some minor improvements:
Remove superfluous descriptions of deprecated and ignored args.
The user doesn't need to know anything other than that they are
deprecated and ignored.

Fix custom_opsets description.

Make indentation of Raises section consistent with Args section.

[ONNX] publicize func for discovering unconvertible ops (#65285)

* [ONNX] Provide public function to discover all unconvertible ATen ops

This can be more productive than finding and fixing a single issue at a
time.

* [ONNX] Reorganize test_utility_funs

Move common functionality into a base class that doesn't define any
tests.

Add a new test for opset-independent tests. This lets us avoid running
the tests repeatedly for each opset.

Use simple inheritance rather than the `type()` built-in. It's more
readable.

* [ONNX] Use TestCase assertions rather than `assert`

This provides better error messages.

* [ONNX] Use double quotes consistently.

[ONNX] Fix code block formatting in doc (#65421)

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424093

fbshipit-source-id: 4ced841cc546db8548dede60b54b07df9bb4e36e
2021-10-22 13:46:16 -07:00
Nikita Shulga
136abf5aff [ONNX] Update sum symbolic to handle dtypes (#64289) (#66141)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66141

* Update aten::sum symbolic for dtype

* Remove nesting and modify operator tests

* Fix expect files

[ONNX] Fix expect files added in #64289 (#65356)

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424091

fbshipit-source-id: d4af21e9f0d7e1c68bf6ef2f3e385db84b4c53f3
2021-10-22 13:46:12 -07:00
Gary Miguel
2506baf9c2 [ONNX] move CheckerError from torch.onnx.utils to torch.onnx (#66644)
Summary:
This moves it to where the user would expect it to be based on the
documentation and all the other public classes in the torch.onnx module.

Also rename it from ONNXCheckerError, since the qualified name
torch.onnx.ONNXCheckerError is otherwise redundant.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66644

Reviewed By: malfet

Differential Revision: D31662559

Pulled By: msaroufim

fbshipit-source-id: bc8a57b99c2980490ede3974279d1124228a7406
2021-10-15 10:38:56 -07:00
Gary Miguel
543b7fb942 [JIT] Fix type annotations of pooling modules (#65847)
Summary:
All of the pooling modules except MaxUnpool and LPPool return either a
Tensor or [Tensor, Tensor]. The current type annotations are inaccurate,
and prevent scripting the module if return_indices is set to True.

There's not a great way to make this agree with mypy because the
overload is dependent on the value of return_indices, an attribute.

I tried changing the annotations from `Tensor` to
`Union[Tensor, Tuple[Tensor, Tensor]]`, but that breaks a bunch of uses
that have return_indices=False.
For example, this breaks:
4e94e84f65/torch/nn/modules/container.py (L139)

Also clean up how test names were being constructed in test_jit, since
otherwise we were getting name collisions when there were two tests on
the same nn.Module.

Fixes https://github.com/pytorch/pytorch/issues/45904

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65847

Reviewed By: ZolotukhinM

Differential Revision: D31462517

Pulled By: eellison

fbshipit-source-id: 6f9e8df1be6c75e5e1e9bae07cf3ad3603ba59bd
2021-10-14 10:59:19 -07:00
Eli Uriegas
09b90612c4 .github: Enable onnx tests (#66513)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66513

These were missed in the migration of onnx to github actions.

Adds ort tests with 2 shards for the onnx workflow

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D31599433

Pulled By: seemethere

fbshipit-source-id: 73dce0d3017c4280e64f0c8578e2be7ef6a168d6
2021-10-13 13:14:02 -07:00
Jane Xu
7c2f53b363 [BE] set pretrained=False for onnx tests (#66312)
Summary:
Addresses this network risk mitigation mentioned in https://github.com/pytorch/pytorch/issues/65439#issuecomment-924627239.

I didn't include any mobile app/benchmarking changes because I think the pretrained weights matter there.

I ended up removing the changes in test_utils because those were sensitive to the pretrained variable.

I am saving the quantization test changes for another PR because they are currently disabled.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66312

Reviewed By: ejguan

Differential Revision: D31542992

Pulled By: janeyx99

fbshipit-source-id: 57b4f70247af25cc96c57abd9e689c34641672ff
2021-10-11 08:29:11 -07:00
Edward Yang
11bc435622 Allow registration of custom symbolics for prim namespace (#64460) (#66139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66139

[ONNX] Add prim::PythonOp check back in export.cpp (#64944)

Add prim::PythonOp check back in export.cpp

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D31424102

fbshipit-source-id: 6d2eef767fab846ed79ea509e97b714072bac9f4

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-10-08 07:41:06 -07:00
Edward Yang
9b09a5f7ba [ONNX] Enable scripting tests (#64780) (#66138)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66138

* Scripting tests

* Fixed scripting tests for lower opsets

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D31424099

fbshipit-source-id: 67095b7ac67b9da986961788392aa92c95cf11f2
2021-10-08 07:41:03 -07:00
BowenBao
4af47eb3a7 [ONNX] Update slice process shape to support rank only inference (#65782) (#66149)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66149

The updated logic can infer the rank of the slice output when only the rank of the slice input is known. This enables cases where `ConstantValueMap::HasRank(input)` is `True` while `ConstantValueMap::HasShape(input)` is `False`.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D31423232

Pulled By: ezyang

fbshipit-source-id: 516e3916aa71afda2b10e44620636e42ed837236

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-10-08 07:39:40 -07:00
BowenBao
d39790340d [ONNX] Enable export of __xor_ (#64042) (#64581)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64581

* Enable xor

* Update test_pytorch_onnx_onnxruntime.py

* Update symbolic_opset9.py

* Update symbolic_opset9.py

* Update test_pytorch_onnx_onnxruntime.py

* Update symbolic_opset9.py

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919598

Pulled By: malfet

fbshipit-source-id: 044e55d0697da0050f26a6ceccd1517493d7e8a6
2021-09-30 21:09:01 -07:00
BowenBao
e598ba2ef3 [ONNX] Fix inplace fill_ dtype export mismatch (#64233) (#64580)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64580

Append `type_as` after converting `fill_` to `full_like` without a dtype argument.

Co-authored-by: BowenBao <bowbao@microsoft.com>

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919599

Pulled By: malfet

fbshipit-source-id: f174977ced8f2c991b0615b65ff7c23fecf301c2
2021-09-30 21:08:59 -07:00
BowenBao
89cbe6229d [ONNX] Update doc and error message for indexing export (#64290) (#64579)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64579

Added suggested workarounds to the indexing section of the ONNX export documentation.
Updated the indexing export warning message with a link to the documentation.

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919603

Pulled By: malfet

fbshipit-source-id: 7fe65cb5aa7de4f7d93ff05011ba22f5adb27811

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-09-30 21:08:56 -07:00
BowenBao
d4ff344fae [ONNX] Fix remainder export (#64230) (#64578)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64578

* Fix remainder export for the edge case when the input is negative. The new export relies on the true_divide export.
* Simplified the true_divide export. Cleaned up redundant code that is handled by the scalar type analysis pass. Removed the dependency on `onnx::Where`, thus supporting opsets 7 & 8.

Fixes #60179
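torch.remainder follows the sign of the divisor (as Python's own % does), and computing it via the floor of the true division handles the negative-input edge case; as a sketch:

```python
import math

def remainder(a, b):
    # matches torch.remainder / Python %: a - floor(a / b) * b
    return a - math.floor(a / b) * b
```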

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919601

Pulled By: malfet

fbshipit-source-id: 0f78621c0ac3bdb6bf4225e049ba5f470dc8ab12

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-09-30 21:08:54 -07:00
BowenBao
0f0ef4fe64 Add onnx test for batched_nms (#53175) (#64381)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64381

* Added new ONNX test for batched_nms

* Update test according to PR in torchvision

* Update test/onnx/test_pytorch_onnx_onnxruntime.py

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919602

Pulled By: malfet

fbshipit-source-id: edfb5b9f75077429f7f242fd6ac06d962968dfba

Co-authored-by: Bowen Bao <imbowenbao@outlook.com>
2021-09-30 21:08:52 -07:00
BowenBao
7e15f2ddaa [ONNX] Fix gather squeeze axis in constant folding (#63588) (#64379)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64379

* Fix gather squeeze axis in constant folding

* mypy

* fix indent

* address comments

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919604

Pulled By: malfet

fbshipit-source-id: 90edb054491433a0da2fe82324ac7c12f1ef062b
2021-09-30 21:08:50 -07:00
BowenBao
2d61009f4a [ONNX] Fix input sequence for pad op (#60554) (#64377)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64377

* Fix for input primitive sequence

* Test mypy

* Fix for tracing tuples

* Fix for extra inputs

* flake8

* Rebase

* Fix for tracing tuples

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919606

Pulled By: malfet

fbshipit-source-id: a718c4a12cda77b968cb636acd7aa63d7b5ba326
2021-09-30 21:08:45 -07:00
BowenBao
f17ee368b3 Fix empty size constant creation (#63607) (#64376)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64376

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919608

Pulled By: malfet

fbshipit-source-id: 0e789e8470ce0f130148df764ce77f6d4fd0a274
2021-09-30 21:08:43 -07:00