Commit Graph

209 Commits

BowenBao
40de6b80ee [ONNX] Add infra for quantized model export and support quantized mobilenet v3 (#72215)
* Add infrastructure and helper functions to enable future work for other quantized operators and models.
* Add export for quantized operators needed by torchvision mobilenet v3 large.
    * ATen namespace: hardsigmoid, flatten, adaptive_avg_pool, quantize_per_tensor, dequantize.
    * Quantized namespace: conv2d, conv2d_relu, hardswish, add, mul.
* Numerous bug fixes in unpack_quantized_weight.cpp, symbolic functions, and unit tests. A hedged export example follows.
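
A minimal sketch of the export flow this change enables; the torchvision entry point and opset choice below are assumptions, not taken from the PR:

```python
import torch
import torchvision

# quantize=True returns a model containing quantized::conv2d,
# quantized::hardswish, quantized::add, etc.
model = torchvision.models.quantization.mobilenet_v3_large(
    pretrained=True, quantize=True
)
model.eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "mobilenet_v3_quant.onnx", opset_version=13)
```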

Co-authored-by: BowenBao <bowbao@microsoft.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73102
2022-02-23 06:22:58 +00:00
BowenBao
32f6a1e2a2 [ONNX] First version of quantized model export: Support quantized.Linear (#69232)
Co-authored-by: David Fan <jiafa@microsoft.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72986
2022-02-18 18:27:26 +00:00
BowenBao
04c5d978b9 [ONNX] Refactor _run_symbolic_function (#67573) (#68491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68491

* Allows implementing symbolic functions for domains other than `aten`, for example `prim`, in symbolic_opset#.py.
* Allows symbolic function to access extra context if needed, through `SymbolicFunctionState`.
  * In particular, the `prim::PythonOp` special case can access the node without needing to pass it through inputs. Updates will be made downstream, and a follow-up PR will remove the previous workaround in the exporter.
* `prim::Loop`, `prim::If`, etc. are now moved out of `_run_symbolic_function` in utils.py and into symbolic_opset9.py.

Motivation for this change:
- Better maintainability and reduced complexity. It becomes easier to add symbolics for operators, both simple and complex ones (those needing additional context), without the former needing to know that the latter exist.
- The design idea was long outdated: prim ops are no longer rare special cases, and handling them all inside `_run_symbolic_function` had made that function too clumsy. Some prim op symbolics were also added in symbolic_opset#.py with the signature `prim_[opname]`, creating separation and confusion. A hedged sketch of the convention this refactor supports follows.
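
A hedged sketch of that convention; the op and its lowering are illustrative, not the exact code from this PR:

```python
# Hypothetical: a symbolic for a `prim` op living in a symbolic_opset file
# rather than being special-cased inside _run_symbolic_function.
def prim_min(g, self):
    # lower prim::min over a tensor to the ONNX ReduceMin operator
    return g.op("ReduceMin", self, keepdims_i=0)
```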

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483782

Pulled By: malfet

fbshipit-source-id: f9affc31b1570af30ffa6668da9375da111fd54a

Co-authored-by: BowenBao <bowbao@microsoft.com>
(cherry picked from commit 1e04ffd2fd)
2022-02-11 18:35:35 +00:00
BowenBao
cf70466970 [ONNX] Improve scope inference in function extraction
Cover more cases of scope inference where consecutive nodes don't have valid scope information. These nodes are usually created in passes whose authors forgot to assign them a meaningful scope.
* One rule of `InferScope` is to check if the current node's outputs' users share the same scope. Recursively run `InferScope` on the user nodes if they are missing scope as well. Since the graph is SSA, the depth is finite.
* Fix one pass that missed scope information for a new node.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71897
2022-01-31 23:58:53 +00:00
hwangdeyu
c76c6e9bd3 [ONNX] Add BFloat16 type support when export to ONNX (#66788)
Summary:
- PyTorch and ONNX both support BFloat16; add it to the exporter to unblock some mixed-precision training models.
- Support the PyTorch TNLG model using BFloat16 tensors for the inputs/outputs of the layers that run on the NPU.
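
A minimal hedged example of what this unblocks, assuming a bfloat16 input and opset 13 (where ONNX introduced bfloat16):

```python
import torch

class Scale(torch.nn.Module):
    def forward(self, x):
        return x * 2.0

# Before this change the exporter had no mapping for torch.bfloat16.
x = torch.randn(4, 4).to(torch.bfloat16)
torch.onnx.export(Scale(), (x,), "bf16.onnx", opset_version=13)
```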

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66788

Reviewed By: jansel

Differential Revision: D32283510

Pulled By: malfet

fbshipit-source-id: 150d69b1465b2b917dd6554505eca58042c1262a
2021-12-14 12:23:32 -08:00
Gary Miguel
eb22d06e5e [ONNX] Use human readable enum for dtype scalars (#66822) (#67807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67807

Also make quoting of string literals consistent.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181309

Pulled By: malfet

fbshipit-source-id: e1053701e3589f0310d8b5ef920359c03c6713f0
2021-11-08 14:37:05 -08:00
Gary Miguel
37688148ae [ONNX] Support opset 15 (#67121) (#67805)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67805

Also fix Reduce ops on binary_cross_entropy_with_logits

The graph says the output is a scalar, but with `keepdims=1`
(the default) the output would be a tensor of rank 1. We set
`keepdims=0` to make it clear that we want a scalar output.

This previously went unnoticed because ONNX Runtime does not strictly
enforce shape inference mismatches if the model is not using the latest
opset version.
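
An illustrative note on the attribute semantics, using numpy's equivalent keepdims flag as a stand-in for ONNX ReduceSum:

```python
import numpy as np

x = np.ones((3, 4), dtype=np.float32)
print(x.sum(keepdims=True).shape)   # (1, 1): keepdims=1 keeps reduced axes as size 1
print(x.sum(keepdims=False).shape)  # ():     keepdims=0 yields a true scalar
```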

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181304

Pulled By: malfet

fbshipit-source-id: 1462d8a313daae782013097ebf6341a4d1632e2c

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-11-08 14:37:00 -08:00
BowenBao
d4ff344fae [ONNX] Fix remainder export (#64230) (#64578)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64578

* Fix remainder export for edge case when input is negative. New export relies on true_divide export.
* Simplified true_divide export. Cleaned up redundant code that is handled by the scalar type analysis pass. Removed the dependency on `onnx::Where`, thus supporting opsets 7 & 8.

Fixes #60179
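
For reference, the eager-mode convention the true_divide-based export has to reproduce:

```python
import torch

# remainder takes the sign of the divisor; fmod takes the sign of the dividend
print(torch.remainder(torch.tensor([-3.0]), 2.0))  # tensor([1.])
print(torch.fmod(torch.tensor([-3.0]), 2.0))       # tensor([-1.])
```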

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919601

Pulled By: malfet

fbshipit-source-id: 0f78621c0ac3bdb6bf4225e049ba5f470dc8ab12

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-09-30 21:08:54 -07:00
BowenBao
2aa19f33c6 [ONNX] Fix for batchnorm training op mode (#52758) (#62760)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62760

* Rebase

# Conflicts:
#	torch/csrc/jit/passes/onnx/eval_peephole.cpp

# Conflicts:
#	test/onnx/test_utility_funs.py
#	torch/onnx/symbolic_opset9.py

* Update symbolic_opset12.py

* Update test.sh
# Conflicts:
#	.jenkins/caffe2/test.sh

* Merge

* Fix utility tests

# Conflicts:
#	test/onnx/test_pytorch_onnx_onnxruntime.py
#	test/onnx/test_utility_funs.py

* Fix for comment

* Enable BN tests

* Fix for test

* Update test_pytorch_onnx_onnxruntime.py

* Update test_pytorch_onnx_onnxruntime.py

* Update test_utility_funs.py

* Update test_pytorch_onnx_onnxruntime.py

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349060

Pulled By: msaroufim

fbshipit-source-id: 93312c17607974731c17099ae181acb6e4c1c409
2021-08-18 13:29:07 -07:00
BowenBao
3a7bbf5fb7 [ONNX] Add support for opset14 in PT-ONNX exporter (#59486) (#62758)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62758

* Add initial changes for opset14

* Fixed flake

* Add onnx submodule changes and removed utility func tests

* Add updated batchNorm symbolic

* Add triu/tril symbolics

* Fix lint

* Fixed test failures

* Add reshape with allowzero

* Added tests/refactored opset versioning

* Bump onnxruntime version

* Fix clang/lint failures

* Add reshape shape inference for opset 14

* Changes for allowzero

* Fix lint/clang and test failures

* Updated PR

* Flake fixes

* Fix flake

* Remove new_jit_api tests

* Add opset14 models

* Update allowzero

* Fix test failures

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349063

Pulled By: msaroufim

fbshipit-source-id: 54724246149b01a2f627c43d7396253a7e9c9eb9

Co-authored-by: Shubham Bhokare <sbhokare@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
2021-08-18 13:29:01 -07:00
Peter Lin
8d7786ada6 Simplify hardswish ONNX export graph. (#60080)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58301

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60080

Reviewed By: suo

Differential Revision: D30002939

Pulled By: SplitInfinity

fbshipit-source-id: 8b4ca6f62d51b72e9d86534592e3c82ed6608c9d
2021-08-05 11:15:14 -07:00
BowenBao
34075e2c8b [ONNX] Fix the issue of converting empty list to sequence. (#58651) (#61558)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61558

When we construct an empty list via a Python list comprehension, we need to avoid converting the resulting node, which has no inputs, to onnx::Concat in shape_type_inference.cpp and peephole.cpp, because that would create an invalid Concat node without inputs.

In addition, update the code to avoid passing a Sequence input to an onnx::Cast node, which doesn't accept the Sequence data type as an input.

Add tests for the validation as well.
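
A hedged repro of the pattern in question, a scripted function whose list comprehension produces an empty list:

```python
from typing import List

import torch

# The comprehension yields an empty list; in ONNX this becomes an empty
# Sequence, which must not be rewritten into an input-less onnx::Concat.
@torch.jit.script
def empty_list(x: torch.Tensor) -> List[torch.Tensor]:
    return [x[i] for i in range(0)]
```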

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D29767989

Pulled By: SplitInfinity

fbshipit-source-id: f97f172ff20eebda4c3744c7a934df36716f12a2

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-07-21 15:10:31 -07:00
BowenBao
d9dc94406f [ONNX] Add linspace symbolic (#58854) (#60246)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60246

* Adds support for linspace op
* Modifies the arange symbolic in opset 9 to replicate how dtype is determined (similar to opset 11), as documented at https://pytorch.org/docs/stable/generated/torch.arange.html
* Enables some arange unit tests which were disabled for opset 9; the dtype rule in question is illustrated below
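
The dtype rule being replicated, per the torch.arange docs:

```python
import torch

print(torch.arange(5).dtype)       # torch.int64: all-integer args give int64
print(torch.arange(0.0, 5).dtype)  # torch.float32: any float arg gives the default dtype
```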

Test Plan: Imported from OSS

Reviewed By: zou3519, ZolotukhinM

Differential Revision: D29494911

Pulled By: SplitInfinity

fbshipit-source-id: bddff18a90f8a78121c8ecdd1dafc15c69962d66

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-07-08 16:29:26 -07:00
BowenBao
4ccfa3ffeb [ONNX] Fix sum export with attribute keepdims (#59316) (#60245)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60245

Fix after b9bdb07a0261ab5a0b1038f290fa03af6ce0415f, improving the previous fix in two aspects:
* Check all dimensions for zero when detecting an empty tensor, not only the first.
* Do not assume the tensor is empty when its shape is not accessible.

Test Plan: Imported from OSS

Reviewed By: zou3519, ZolotukhinM

Differential Revision: D29494917

Pulled By: SplitInfinity

fbshipit-source-id: 02587c3c3be0510312c1a1959f28cab12d81812d

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-07-08 16:29:24 -07:00
BowenBao
5d00c374dd [ONNX] Sum empty tensor could not be exported to ONNX successfully. (#58141) (#59537)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59537

PyTorch sum over an empty tensor gives 0, while ONNX produces an error.

torch.sum is translated into the onnx::ReduceSum op. Per the definition of ReduceSum, update the keepdims attribute for this scenario.
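
A one-line illustration of the PyTorch side of the mismatch:

```python
import torch

print(torch.sum(torch.empty(0)))  # tensor(0.): the empty sum is zero
```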

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb, ansley

Differential Revision: D29046604

Pulled By: SplitInfinity

fbshipit-source-id: 6f5f3a66cb8eda8b5114b8474dda6fcdbae73469

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-06-15 12:24:16 -07:00
BowenBao
0a6828a306 [ONNX] use consistent quoting for string literals (#57757) (#58695)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58695

As PEP8 says: "Pick a rule and stick to it." [1]

[1] https://www.python.org/dev/peps/pep-0008/#string-quotes

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714811

Pulled By: SplitInfinity

fbshipit-source-id: c95103aceb1725c17c034dc6fc8216627f189548

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-27 12:06:42 -07:00
Meghan Lele
0d5527de7a Back out "Back out "[ONNX] Process const folding progressively when converts to ONNX (#54569)"" (#58923)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58923

Original commit changeset: c54597b2048e
ghstack-source-id: 129842041

Test Plan: Sandcastle and OSS CI.

Reviewed By: snisarg

Differential Revision: D28432555

fbshipit-source-id: 2a9ec22cc004c7c6979f1cc8f3124b833cdc6634
2021-05-26 13:29:07 -07:00
Meghan Lele
c034bce979 Back out "[ONNX] Process const folding progressively when converts to ONNX (#54569)"
Summary: Original commit changeset: 833dac7c71f2

Test Plan:
```
buck test mode/dev //pytext/fb/assistant/lite/test:test -- --exact
'pytext/fb/assistant/lite/test:test - test_export_bytes_model_to_caffe2
(pytext.fb.assistant.lite.test.test.TestExport)'
```

Reviewed By: jeanm

Differential Revision: D28431840

fbshipit-source-id: 0f1d530034404421a5d51691173e1cc0ee16fdd6
2021-05-14 13:45:49 -07:00
BowenBao
bfe7728f18 [ONNX] Process const folding progressively when converts to ONNX (#54569) (#57601)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57601

This PR automatically solves onnx const attribute issue in PR https://github.com/pytorch/pytorch/pull/53784.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393525

Pulled By: SplitInfinity

fbshipit-source-id: 833dac7c71f24a88af62d5dd2be0a702ed34d053

Co-authored-by: David <jiafa@microsoft.com>
2021-05-13 13:42:51 -07:00
BowenBao
9e56314d2c onnx.symbolic_helper.parse_args: document and clean up (#56956) (#57598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57598

Add a doc string to explain what it does and how to use it.

Remove the hack around a bug in Python 2's functools.wraps();
Python 2 is no longer supported. A usage sketch follows.
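
A hedged usage sketch; the symbolic's name and signature are illustrative, with descriptor meanings as used in torch/onnx/symbolic_helper.py:

```python
from torch.onnx.symbolic_helper import parse_args

# Each descriptor tells the decorator how to unpack the matching argument:
# 'v' passes the TorchScript Value through unchanged, 'i' expects a constant
# int, 'f' a constant float. The body below is a placeholder.
@parse_args("v", "i", "f")
def my_symbolic(g, input, dim, eps):
    ...
```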

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393519

Pulled By: SplitInfinity

fbshipit-source-id: aae8c5e7b49e2ad2d24a0e86f8ba47f1cd080e46

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-13 13:42:46 -07:00
Peter Bell
33eea146ee torch.clamp with tensor min and max (#52695)
Summary:
Fixes gh-2793

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52695

Reviewed By: mruberry

Differential Revision: D27395977

Pulled By: ezyang

fbshipit-source-id: f86aa240feb034d42e4c45447e72218f6a773c24
2021-05-03 12:56:16 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
BowenBao
75995e4bf6 [ONNX] Add support for hann_window operator. (#54587) (#56163)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56163

* [ONNX] Improve index_put symbolic to handle singular Bool updates (#53690)

Adds support for cases where the update to the index_put node is a single Bool value, such as the case shown below

```
mask[indices] = True
```

Fixes #53507

* [ONNX] Support primitive type input/outputs and attributes (#53550)

Support primitive type attributes. Needed for Silero model.

* [ONNX] Fix if output shape mismatch error & Fix graph input directly used as output (#53219)

Fix if output shape mismatch error & Fix graph input directly used as output

* Add support for hann_window operator.

* [ONNX] Replace decomposeLinear pre process pass with a symbolic (#53077)

Replace decomposeLinear pre process pass with a symbolic

* Add a test case for dtype is None.

* Resolve flake8 issue.

* Remove one unused test case.

* Add support for hann_window operator.

* Add a test case for dtype is None.

* Remove one unused test case.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866145

Pulled By: SplitInfinity

fbshipit-source-id: e0b43df9ecd1a95cd7ac297213aba453bbaf2913

Co-authored-by: Shubham Bhokare <32080845+shubhambhokare1@users.noreply.github.com>
Co-authored-by: Negin Raoof <neginmr@utexas.edu>
Co-authored-by: Bowen Bao <bowbao@microsoft.com>
Co-authored-by: Ksenija Stanojevic <KsenijaS@users.noreply.github.com>
2021-04-20 22:59:31 -07:00
BowenBao
a6a811f23a [ONNX] Add repeat_interleave symbolic (#52855) (#53312)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53312

- Add support for aten::repeat_interleave
- NOTE: Also fixes cases of the split op where input tensor sizes are not known but `_outputs` is provided. A small example of the op follows.
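
For reference, the newly supported op:

```python
import torch

print(torch.repeat_interleave(torch.tensor([1, 2, 3]), repeats=2))
# tensor([1, 1, 2, 2, 3, 3])
```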

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922422

Pulled By: SplitInfinity

fbshipit-source-id: 5362d0d8ccfdc14c15e1ae73fd70c4c113f823e6
2021-03-12 02:49:34 -08:00
Sam Estep
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
Shubham Bhokare
49a923c8b5 [ONNX] Update LayerNorm symbolic to handle autocasting (#52199) (#52350)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52350

When ONNX export creates a 0-dim tensor of constant type, it overrides the type promotion logic described in #9515. To prevent this, this PR adds the following behavior:
if the data type is a floating-point type, the constant is converted to a 0-dim double tensor; otherwise, it is converted to a 0-dim tensor of its original type. An illustrative promotion example follows.
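
A hedged illustration of the eager-mode promotion rule the exported graph must preserve:

```python
import torch

# A Python scalar must not widen the tensor operand's dtype:
print((torch.ones(2, dtype=torch.float16) * 2.0).dtype)  # torch.float16
```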

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D26490325

Pulled By: SplitInfinity

fbshipit-source-id: 4c47c69c9b6523d2e45b74c2541d6d8ca7e28fc9
2021-02-19 10:57:15 -08:00
BowenBao
1c7d966432 Update error message that displays when encountering an op unsupported for ONNX export. (#51387) (#51522)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51522

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203121

Pulled By: SplitInfinity

fbshipit-source-id: 5920995b735cecb500b12948b8ad91803e576dcb
2021-02-04 12:44:22 -08:00
BowenBao
9191b639ba [ONNX] Enable remaining failed tests in opset13 (#50806) (#51518)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51518

* enable remaining test in opset13

* add comments for error version test info

* fix comments: opset12 unbind problem

* add ignore[no-redef]

* fix format

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203122

Pulled By: SplitInfinity

fbshipit-source-id: e7d95bd2ce13f79f11965be82f640379cd55ff0f

Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-02-04 12:44:04 -08:00
BowenBao
3f185ac18e [ONNX] Export get/set attribute nodes (#50768) (#51517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51517

Fix get/set attributes when getting/setting a model parameter.
This PR also fixes inplace ops in If blocks.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203116

Pulled By: SplitInfinity

fbshipit-source-id: bed6ee6dd92b5b43febc8c584a6872290f8fe33f
2021-02-04 12:43:59 -08:00
BowenBao
1829268e7f [ONNX] Improve error message for parse_arg in symbolic functions (#50512) (#51516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51516

Previously the error message looked like this:
```
RuntimeError: Unexpected node type: onnx::Gather
```
now it looks like this:
```
RuntimeError: Expected node type 'onnx::Constant' for argument 'groups' of node 'conv1d', got 'onnx::Gather'.
```

Repro example:
```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    @torch.jit.script
    def conv(x, w):
        return F.conv1d(x, w, groups=x.shape[0])

    class Net(nn.Module):
        def forward(self, x, w):
            return conv(x, w)

    model = Net()

    x = torch.randn(8, 8, 512)
    w = torch.randn(8, 1, 3)
    torch.onnx.export(model,
                        (x, w),
                        "file.onnx",
                        opset_version=12)
```

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203118

Pulled By: SplitInfinity

fbshipit-source-id: 607b22f4cba4baa24154f197914b6817449ab9f8
2021-02-04 12:43:54 -08:00
BowenBao
84e9bff85d [ONNX] Replace optional parameters of Resize with placeholder for ops13. (#50574) (#50954)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50954

* Replace optional parameters of Resize with placeholder for ops13.

* Use common methods to handle different versions.

* Correct flake8 issue.

* Update per comments.

* Add something to trigger CI again.

* Trigger another round of CI.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050882

Pulled By: SplitInfinity

fbshipit-source-id: aea6205a1ba4a0621fe1ac9e0c7d94b92b6d8f21
2021-01-27 17:49:07 -08:00
BowenBao
1723ab53c4 [ONNX] Update Reducesum operator for opset 13 (#50532) (#50907)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50907

* update symbolic for squeeze/unsqueeze

* update c++ unsqueeze/squeeze creation

* clang format

* enable tests

* clang format

* remove prints

* remove magic number

* add helper function

* fix build issue

* update opset9 symbolic with helper function

* fix utility test

* fix prim_fallthrough opset skip

* enable reducesum opset 13

* enable embedding_bag, which contains a reducesum op

* add ReduceSum helper

* remove block_listed_operators

* remove local test code

* remove embedding_bag() in opset13 file

* remove unused import

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050888

Pulled By: SplitInfinity

fbshipit-source-id: 88307af6a7880abf94eac126ec1638e962de8c1f

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-01-27 17:48:45 -08:00
BowenBao
7e4c956955 [ONNX] Support opset13 Squeeze and Unsqueeze (#50150) (#50906)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50906

In opset 13, squeeze/unsqueeze are updated to take axes as an input instead of an attribute; a hedged sketch of the difference follows.
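
A sketch of the two symbolic styles; the helper names are illustrative, and the `_i`/`_t` attribute suffixes follow the exporter's `g.op` convention:

```python
import torch

# Hypothetical symbolics: before opset 13 axes is an integer-list attribute;
# from opset 13 it is a tensor input.
def unsqueeze_opset12(g, self, dim):
    return g.op("Unsqueeze", self, axes_i=[dim])

def unsqueeze_opset13(g, self, dim):
    axes = g.op("Constant", value_t=torch.tensor([dim], dtype=torch.long))
    return g.op("Unsqueeze", self, axes)
```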

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050883

Pulled By: SplitInfinity

fbshipit-source-id: 7b5faf0e016d476bc75cbf2bfee6918d77e8aecd
2021-01-27 17:48:40 -08:00
Spandan Tiwari
aeefe2ce31 [ONNX] ONNX dev branch merge 01-06-2021 (#50163)
Summary:
[ONNX] ONNX dev branch merge 01-06-2021
- [ONNX] Support onnx if/loop sequence output in opset 13 - (https://github.com/pytorch/pytorch/issues/49270)
- Symbolic function for torch.square (https://github.com/pytorch/pytorch/issues/49446)
- [ONNX] Add checks in ONNXSetDynamicInputShape (https://github.com/pytorch/pytorch/issues/49783) …
- [ONNX] Enable export of aten::__derive_index (https://github.com/pytorch/pytorch/issues/49514) …
- [ONNX] Update symbolic for unfold (https://github.com/pytorch/pytorch/issues/49378) …
- [ONNX] Update the sequence of initializers in the exported graph so that it is the same as the inputs. (https://github.com/pytorch/pytorch/issues/49798)
- [ONNX] Enable opset 13 ops (https://github.com/pytorch/pytorch/issues/49612) …
- [ONNX] Improve error message for supported model input types in ONNX export API. (https://github.com/pytorch/pytorch/issues/50119)
- [ONNX] Add a post-pass for If folding (https://github.com/pytorch/pytorch/issues/49410)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50163

Reviewed By: pbelevich

Differential Revision: D25821059

Pulled By: SplitInfinity

fbshipit-source-id: 9f511a93d9d5812d0ab0a49d61ed0fa5f8066948
2021-01-13 13:51:21 -08:00
BowenBao
e5a98c5ab0 [ONNX] Remove usage of isCompleteTensor() in symbolic functions (#48162)
Summary:
`isCompleteTensor()` only returns true when both scalar type and shape are present, and all dimensions in the shape are static. This strict requirement is unnecessary for many use cases, such as when only the rank or the scalar type needs to be known.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48162

Reviewed By: malfet

Differential Revision: D25340823

Pulled By: bzinodev

fbshipit-source-id: 1fef61f44918f4339dd6654fb725b18cd58d99cf
2020-12-09 11:37:19 -08:00
Guilherme Leobas
34cc77a811 Torch onnx (#48980)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45215

This is a follow-up PR to https://github.com/pytorch/pytorch/issues/45258 and https://github.com/pytorch/pytorch/issues/48782

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48980

Reviewed By: zhangguanheng66

Differential Revision: D25399823

Pulled By: ezyang

fbshipit-source-id: 798055f4abbbffecdfab0325884193c81addecec
2020-12-08 19:41:44 -08:00
Edward Yang
88ebf6f894 Revert D25304229: [pytorch][PR] Add type annotations to torch.onnx.* modules
Test Plan: revert-hammer

Differential Revision:
D25304229 (8bc6023d7a)

Original commit changeset: b01b21ddbf86

fbshipit-source-id: bc3308176e2c70423f29f694e9db94828213e7d6
2020-12-07 11:58:03 -08:00
Guilherme Leobas
8bc6023d7a Add type annotations to torch.onnx.* modules (#48782)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45215

This is a follow-up PR to https://github.com/pytorch/pytorch/issues/45258

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48782

Reviewed By: heitorschueroff

Differential Revision: D25304229

Pulled By: ezyang

fbshipit-source-id: b01b21ddbf86f908ca08173e68b81fb25851bc81
2020-12-07 08:23:02 -08:00
shubhambhokare1
5fd61de99e [ONNX] Added hardswish symbolic in opset 9 (#48423)
Summary:
Adds support for the torch.nn.Hardswish operator in export

Fixes https://github.com/pytorch/pytorch/issues/43665

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48423

Reviewed By: heitorschueroff

Differential Revision: D25309868

Pulled By: bzinodev

fbshipit-source-id: f5583eb01b1b0e8f0bc95d5054941dd29605d6a5
2020-12-03 23:22:21 -08:00
David
befab0d9d4 [ONNX] Cast Gather index to Long if needed (#47653)
Summary:
The ONNX Gather op requires its index input to be int32 or int64, but our converter did not insert such a Cast.
Therefore it failed the following UT (for opset 11+):
`seq_length.type().scalarType()` is None, so `_arange_cast_helper()` cannot treat the arguments as all integral and casts everything to float. That float value is then used as a Gather index, so ORT throws an error about a float-typed index.
The fix is to cast the Gather index to Long if it is not already int/long; a hedged sketch follows.
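
A hypothetical symbolic-layer sketch of the idea (function name illustrative):

```python
# Cast the index to int64 before Gather; 7 is TensorProto.INT64 in the
# ONNX data-type enum.
def gather_with_cast(g, self, dim, index):
    index = g.op("Cast", index, to_i=7)
    return g.op("Gather", self, index, axis_i=dim)
```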

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47653

Reviewed By: heitorschueroff

Differential Revision: D25298056

Pulled By: mruberry

fbshipit-source-id: 05e3a70ccfd74612233c63ec5bb78e060b211909
2020-12-03 09:34:59 -08:00
Mike Ruberry
6299c870ee Revert D25254920: [pytorch][PR] Add type annotations to torch.onnx.* modules
Test Plan: revert-hammer

Differential Revision:
D25254920 (40a2dd7e1e)

Original commit changeset: dc9dc036da43

fbshipit-source-id: c17cb282ebf90ecbae4023aa63ecbb443a87037d
2020-12-02 02:25:31 -08:00
Guilherme Leobas
40a2dd7e1e Add type annotations to torch.onnx.* modules (#45258)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45215

Still need to resolve a few mypy issues before a review. In particular, there is an error which I don't know how to solve, see:
```python
torch/onnx/utils.py:437: error: Name 'is_originally_training' is not defined  [name-defined]
        if training is None or training == TrainingMode.EVAL or (training == TrainingMode.PRESERVE and not is_originally_training):
```

`is_originally_training` is used but never defined/imported in [`torch/onnx/utils.py`](ab5cc97fb0/torch/onnx/utils.py (L437)).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45258

Reviewed By: zhangguanheng66

Differential Revision: D25254920

Pulled By: ezyang

fbshipit-source-id: dc9dc036da43dd56b23bd6141e3ab92e1a16e3b8
2020-12-01 20:41:39 -08:00
Ksenija Stanojevic
79f8582289 [ONNX] Add export of aten::is_floating_point (#46442)
Summary:
Add export of aten::is_floating_point

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46442

Reviewed By: mrshenli

Differential Revision: D24566156

Pulled By: bzinodev

fbshipit-source-id: 91ea95e2c4d4866e2ef51bffe07461de2e31c110
2020-11-09 18:02:47 -08:00
Bowen Bao
e26c1726cf [ONNX] Fix scripting rand/randn/where (#45793)
Summary:
- rand/randn: the type signature of int[] is different in scripting, thus failing the check.
- where: scripting produces dynamic cases which are supported by the `unbind` export in higher opsets.
- test_list_pass: this test fails when using the new scripting API; should be fixed by https://github.com/pytorch/pytorch/issues/45369

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45793

Reviewed By: mrshenli

Differential Revision: D24566096

Pulled By: bzinodev

fbshipit-source-id: 6fe0925c66dee342106d71c9cbc3c95cabe639f7
2020-11-09 12:39:31 -08:00
BowenBao
3da4cea658 [ONNX] Add dim_param support in export with onnx shape inference (#44920)
Summary:
* Support propagating `dim_param` in ONNX by encoding it as a `ShapeSymbol` in the `SymbolicShape` of outputs. If export is called with `dynamic_axes` provided, shape inference will start with those axes set as dynamic (see the example below).
* Add new test file `test_pytorch_onnx_shape_inference.py`, reusing all test cases from `test_pytorch_onnx_onnxruntime.py` but focusing on validating the shape of every node in the graph. This is not yet enabled in CI, since there are still quite a few existing issues and corner cases to fix. The test defaults to running only at opset 12.
* Bug fixes, such as div, _len, and the peephole.cpp passes for PackPadded and LogSoftmaxCrossEntropy.
* This PR depends on existing PRs such as #44332.
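
A small example of the `dynamic_axes` entry point mentioned above:

```python
import torch

model = torch.nn.Linear(8, 4)
x = torch.randn(2, 8)
# Axes named in dynamic_axes are seeded as symbolic dim_params before
# shape inference runs.
torch.onnx.export(
    model, (x,), "linear.onnx",
    input_names=["x"], output_names=["y"],
    dynamic_axes={"x": {0: "batch"}, "y": {0: "batch"}},
)
```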

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44920

Reviewed By: eellison

Differential Revision: D23958398

Pulled By: bzinodev

fbshipit-source-id: 00479d9bd19c867d526769a15ba97ec16d56e51d
2020-09-30 21:56:24 -07:00
BowenBao
57c18127dc [ONNX] Update div export to perform true divide (#44831)
Summary:
Related: https://github.com/pytorch/pytorch/issues/43787

Now that PyTorch div actually performs true division, update the ONNX export code to stay consistent.
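
For reference, the eager-mode behavior the exported graph now matches:

```python
import torch

print(torch.div(torch.tensor([3]), torch.tensor([2])))  # tensor([1.5000])
```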

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44831

Reviewed By: eellison

Differential Revision: D23880316

Pulled By: bzinodev

fbshipit-source-id: 3bb8db34142ac4fed4039295ad3c4cb79487987f
2020-09-28 13:53:43 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
BowenBao
43406e218a [ONNX] Update ONNX shape inference (#43929)
Summary:
* Support sequence type (de)serialization, enables onnx shape inference on sequence nodes.
* Fix shape inference with block input/output: e.g. Loop and If nodes.
* Fix bugs in symbolic discovered by coverage of onnx shape inference.
* Improve debuggability: added more jit logs. For simplicity, the default log level, when jit log is enabled, will not dump ir graphs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43929

Reviewed By: albanD

Differential Revision: D23674604

Pulled By: bzinodev

fbshipit-source-id: ab6aacb16d0e3b9a4708845bce27c6d65e567ba7
2020-09-14 15:36:19 -07:00
shubhambhokare1
da11d932bc [ONNX] Update arange op to support out argument (#43777)
Summary:
Update arange op to support out argument

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43777

Reviewed By: albanD

Differential Revision: D23674583

Pulled By: bzinodev

fbshipit-source-id: 6fb65e048c6b1a551569d4d2a33223522d2a960c
2020-09-14 14:56:17 -07:00
David Reiss
7d78a6fcdd Update interpolate to use new upsample overloads (#43025)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43025

- Use new overloads that better reflect the arguments to interpolate.
- More uniform interface for upsample ops allows simplifying the Python code.
- Also reorder overloads in native_functions.yaml to give them priority.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/37177

ghstack-source-id: 106938111

Test Plan:
test_nn has pretty good coverage.

Relying on CI for ONNX, etc.

Didn't test FC because this change is *not* forward compatible.

To ensure backwards compatibility, I ran this code before this change

```python
def test_func(arg):
    interp = torch.nn.functional.interpolate
    with_size = interp(arg, size=(16,16))
    with_scale = interp(arg, scale_factor=[2.1, 2.2], recompute_scale_factor=False)
    with_compute = interp(arg, scale_factor=[2.1, 2.2])
    return (with_size, with_scale, with_compute)

traced_func = torch.jit.trace(test_func, torch.randn(1,1,1,1))

sample = torch.randn(1, 3, 7, 7)
output = traced_func(sample)

assert not torch.allclose(output[1], output[2])

torch.jit.save(traced_func, "model.pt")
torch.save((sample, output), "data.pt")
```

then this code after this change

```python
model = torch.jit.load("model.pt")
sample, golden = torch.load("data.pt")
result = model(sample)
for r, g in zip(result, golden):
    assert torch.allclose(r, g)
```

Reviewed By: AshkanAliabadi

Differential Revision: D21209991

fbshipit-source-id: 5b2ebb7c3ed76947361fe532d1dbdd6faa3544c8
2020-09-11 09:59:14 -07:00
BowenBao
08126c9153 [ONNX] Utilize ONNX shape inference for ONNX exporter (#40628)
Summary:
The conversion from a torch operator to an onnx operator often requires the input rank/dtype/shape to be known. Previously, the conversion depended on the tracer to provide this info, leaving a gap in the conversion of scripted modules.

We are extending the export with support from onnx shape inference. If enabled, onnx shape inference will be called whenever an onnx node is created. This is the first PR introducing the initial look of the feature. More and more cases will be supported following this PR.

* Added pass to run onnx shape inference on a given node. The node has to have namespace `onnx`.
* Moved helper functions from `export.cpp` to a common place for re-use.
* This feature is currently experimental, and can be turned on through flag `onnx_shape_inference` in internal api `torch.onnx._export`.
* Currently skipping ONNX Sequence ops, If/Loop and ConstantOfShape due to limitations. Support will be added in the future.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40628

Reviewed By: mrshenli

Differential Revision: D22709746

Pulled By: bzinodev

fbshipit-source-id: b52aeeae00667e66e0b0c1144022f7af9a8b2948
2020-08-30 18:35:46 -07:00
neginraoof
cd0bab8d8d [ONNX] Where op (#41544)
Summary:
Extending where op export

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41544

Reviewed By: malfet

Differential Revision: D23279515

Pulled By: bzinodev

fbshipit-source-id: 4627c95ba18c8a5ac8d06839c343e06e71c46aa7
2020-08-28 18:15:01 -07:00
Deepak Velmurugan
c9f125bf70 Black to Block for various files (#42913)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/41735 #41736 https://github.com/pytorch/pytorch/issues/41737 #41738; all areas where "black" is mentioned are replaced with "block"

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42913

Reviewed By: houseroad

Differential Revision: D23112873

Pulled By: malfet

fbshipit-source-id: a515b56dc2ed20aa75741c577988d95f750b364c
2020-08-25 17:43:31 -07:00
BowenBao
a6c8730045 [ONNX] Add preprocess pass for onnx export (#41832)
Summary:
In `_jit_pass_onnx`, symbolic functions are called for each node for conversion. However, some nodes cannot be converted without additional context. For example, the number of outputs from split (and whether it is static or dynamic) is unknown until the point where a listUnpack node unpacks it. This pass runs a preprocess that prepares the nodes so that the symbolic function receives enough context (see the hedged example below).
* After preprocessing, `_jit_pass_onnx` should have enough context to produce valid ONNX nodes, instead of half-baked nodes that rely on fixes from later post-passes.
* `_jit_pass_onnx_peephole` should be a pass that does ONNX-specific optimizations instead of ONNX-specific fixes.
* Producing more valid ONNX nodes in `_jit_pass_onnx` enables better utilization of the ONNX shape inference in https://github.com/pytorch/pytorch/issues/40628.
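
A hedged example of the split/listUnpack pattern described above:

```python
import torch

# The number of outputs of split is only known once the prim::ListUnpack
# node consumes them; that is the context the preprocess pass surfaces.
@torch.jit.script
def halves(x: torch.Tensor):
    a, b = torch.split(x, x.size(0) // 2)
    return a + b
```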

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41832

Reviewed By: ZolotukhinM

Differential Revision: D22968334

Pulled By: bzinodev

fbshipit-source-id: 8226f03c5b29968e8197d242ca8e620c6e1d42a5
2020-08-06 20:34:12 -07:00
Ksenija Stanojevic
9b0393fcf1 [ONNX]Fix export of flatten (#40418)
Summary:
The shape is passed to _reshape_to_tensor as a Constant, so the input shape cannot be inferred when the model is exported with dynamic axes set. Instead of a Constant, pass the output of a Shape-Slice-Concat subgraph to compute the shape for the Reshape node in the _reshape_to_tensor function.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40418

Reviewed By: hl475

Differential Revision: D22480127

Pulled By: houseroad

fbshipit-source-id: 11853adb6e6914936871db1476916699141de435
2020-07-10 13:06:25 -07:00
Yael Dekel
6e4f501f1a Improve error message for Pad operator (#39651)
Summary:
In issue https://github.com/pytorch/pytorch/issues/36997 the user encountered an unhelpful error message when trying to export the model to ONNX. The Pad operator in opset 9 requires the list of paddings to be constant. This PR improves the error message given to the user when that is not the case; a hedged repro of the limitation follows.
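
A sketch of the kind of model that hits this path, assuming scripted export:

```python
import torch
import torch.nn.functional as F

# The pad widths depend on the input's runtime shape, so they are not a
# constant list; opset 9 Pad takes pads as an attribute and cannot express this.
@torch.jit.script
def pad_dynamic(x: torch.Tensor):
    return F.pad(x, [0, x.size(0)])
```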

Pull Request resolved: https://github.com/pytorch/pytorch/pull/39651

Reviewed By: hl475

Differential Revision: D21992262

Pulled By: houseroad

fbshipit-source-id: b817111c2a40deba85e4c6cdb874c1713312dba1
2020-07-06 20:26:02 -07:00
Yael Dekel
766889b6bf ONNX: fix bug in export of ops involving torch.bool type (#40006)
Summary:
When an op involves creating a tensor of a certain type (such as torch.ones(...)), the tracer creates a `prim::Constant` node with an integer value representing the type. The mapping from the torch type to integers maps:
```
torch.complex32 -> 8
torch.complex64 -> 9
torch.complex128 -> 10
torch.bool -> 11
```
However, when the ONNX exporter maps the integer back to a torch type, 10 is mapped to bool, 9 to complex128, and 8 to complex64; the mapping is off by one.
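
A hedged repro of the class of op affected:

```python
import torch

def make_mask(x):
    return torch.ones(x.shape[0], dtype=torch.bool)

# Tracing records the dtype as an integer (11 for torch.bool) in a
# prim::Constant; the exporter must map that integer back correctly.
traced = torch.jit.trace(make_mask, torch.randn(3))
```
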
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40006

Reviewed By: hl475

Differential Revision: D22158019

Pulled By: houseroad

fbshipit-source-id: 42fbd6b56566017ff03382c4faf10d30ffde3802
2020-06-22 09:57:25 -07:00
Negin Raoof
b7b99ab0c8 [ONNX] Remove Aten ops from ONNX export (#37239)
Summary:
This PR adds a new operator export type to the exporter: ONNX_FALLTHROUGH
This new type allows ops that are not supported to pass through.
This PR also removes all aten ops from the ONNX operator export type mode.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37239

Reviewed By: hl475

Differential Revision: D21440509

Pulled By: houseroad

fbshipit-source-id: 38b826677cf3431ea44868efebefe1ff51c9aa75
2020-05-29 21:20:14 -07:00
Jerry Zhang
70f375becf [quant] ConvPackedParams with TorchBind (#35923)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35923

(Note: this ignores all push blocking failures!)

Test Plan:
tbd

Imported from OSS

Differential Revision: D20957089

fbshipit-source-id: 74d8bd628ccba64e902ea6ebabc2b883924050b0
2020-05-05 20:18:36 -07:00
Lara Haidar
728c7dcea3 ONNX Update training ops and training amenable export API (#35567)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35567

Reviewed By: hl475

Differential Revision: D20715339

Pulled By: houseroad

fbshipit-source-id: ad88097e76b169035ab5814b769dc1bed54c6008
2020-03-29 23:14:25 -07:00
Alban Desmaison
45e1be9762 Revert D19710370: [pytorch][PR] ONNX Update training ops and training amenable export API
Test Plan: revert-hammer

Differential Revision:
D19710370

Original commit changeset: e5e79d385529

fbshipit-source-id: d0114dc561a3415869805d3fbf43b92730bbcf54
2020-03-27 06:51:05 -07:00
Lara Haidar
025a0abe5a ONNX Update training ops and training amenable export API (#32950)
Summary:
- Update Dropout and Batchnorm in opset 12 : https://github.com/onnx/onnx/pull/2568
- Update api logic for exporting to ONNX training amenable models
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32950

Reviewed By: hl475

Differential Revision: D19710370

Pulled By: houseroad

fbshipit-source-id: e5e79d38552936966662c41d39ddf33be1ba3e35
2020-03-27 00:39:39 -07:00
anjali411
c73e97033a Added type promotion logic for complex numbers (#34093)
Summary:
Issue: https://github.com/pytorch/pytorch/issues/33780
After this PR:
1. dtype promotion logic will correctly work for ops involving complex scalars
2. added alias for complex64 (cfloat) and complex128 (cdouble)
3. added an internal function get_complex_default_dtype (consciously not exposed in public API)
   - sets the default complex dtype to be double if default_dtype is set to double, else float https://github.com/pytorch/pytorch/pull/34093#discussion_r392350224
>>> 1j*torch.ones(2)
tensor([(0.0000 + 1.0000j), (0.0000 + 1.0000j)], dtype=torch.complex64)

>>> torch.set_default_dtype(torch.float64)
>>> 1j*torch.ones(2)
tensor([(0.0000 + 1.0000j), (0.0000 + 1.0000j)], dtype=torch.complex128)

>>> 1j + torch.ones(2)
tensor([(1.0000 + 1.0000j), (1.0000 + 1.0000j)], dtype=torch.complex128)

>>> torch.tensor(1j) + torch.ones(2,2)
tensor([[(1.0000 + 1.0000j), (1.0000 + 1.0000j)],
        [(1.0000 + 1.0000j), (1.0000 + 1.0000j)]], dtype=torch.complex128)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34093

Differential Revision: D20537125

Pulled By: anjali411

fbshipit-source-id: 05fb1f81b8ba039d0b698cdd2c0bbf8b0ce0b767
2020-03-25 09:12:21 -07:00
Mike Ruberry
9c4683e8e3 Revert D20312366: [pytorch][PR] Added type promotion logic for complex numbers
Test Plan: revert-hammer

Differential Revision:
D20312366

Original commit changeset: 90f00a1a916d

fbshipit-source-id: 4510739a888b2eec5d8a72e792998ac46da6d82a
2020-03-19 05:55:57 -07:00
anjali411
c8f665dcb6 Added type promotion logic for complex numbers (#34093)
Summary:
Issue: https://github.com/pytorch/pytorch/issues/33780
After this PR:
1. dtype promotion logic will correctly work for ops involving complex scalars
2. torch.ComplexFloatTensor, torch.ComplexDoubleTensor works
3. added alias for complex64 (cfloat) and complex128 (cdouble)
4. added an internal function get_complex_default_dtype (consciously not exposed in public API)

>>> 1j*torch.ones(2)
tensor([(0.0000 + 1.0000j), (0.0000 + 1.0000j)], dtype=torch.complex64)

>>> torch.set_default_dtype(torch.float64)
>>> 1j*torch.ones(2)
tensor([(0.0000 + 1.0000j), (0.0000 + 1.0000j)], dtype=torch.complex128)

>>> 1j + torch.ones(2)
tensor([(1.0000 + 1.0000j), (1.0000 + 1.0000j)], dtype=torch.complex128)

>>> torch.tensor(1j) + torch.ones(2,2)
tensor([[(1.0000 + 1.0000j), (1.0000 + 1.0000j)],
        [(1.0000 + 1.0000j), (1.0000 + 1.0000j)]], dtype=torch.complex128)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34093

Differential Revision: D20312366

Pulled By: anjali411

fbshipit-source-id: 90f00a1a916d9c8eeda101eb6e9d250fce569815
2020-03-18 23:36:13 -07:00
Brian Stark
afa8cbf8c2 Modifed randNLike for scripting (#32830)
Summary:
The randNLike function had required args which were not being used,
so the method signature was modified to give them default values; when
scripting does not provide these unused arguments, no error is thrown.

Additionally, the const checker was modified to handle prim::Constant as
well
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32830

Reviewed By: hl475

Differential Revision: D19731715

Pulled By: houseroad

fbshipit-source-id: a3cacb3977eecb88b122e0ceb654fdbf1c8286c1
2020-02-06 18:19:42 -08:00
Lara
4502d8c391 Interpolate Float [] support in ONNX (#32554)
Summary:
The PR https://github.com/pytorch/pytorch/pull/31791 adds support for float[] constant, which affects some cases of ONNX interpolate support.
This PR adds float[] constants support in ONNX, updates interpolate in ONNX, and re-enable the disabled tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32554

Reviewed By: hl475

Differential Revision: D19566596

Pulled By: houseroad

fbshipit-source-id: 843f62c86126fdf4f9c0117b65965682a776e7e9
2020-02-04 16:14:40 -08:00
neginraoof
e03e4f3a2d [ONNX] Add einsum export (#32716)
Summary:
Adding symbolic for onnx einsum as part of opset 12
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32716

Reviewed By: hl475

Differential Revision: D19626168

Pulled By: houseroad

fbshipit-source-id: d8cc8af5f05f36aca3cd55dead602261ccdfec51
2020-02-03 12:56:50 -08:00
neginraoof
ffc8e255c4 Sort export w/ negative axes (#31971)
Summary:
Fixing export of Sort on negative axes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31971

Reviewed By: hl475

Differential Revision: D19325874

Pulled By: houseroad

fbshipit-source-id: 18ab2bf39221970c8ab65a1355f5759f88faa54f
2020-01-15 15:13:23 -08:00
BowenBao
c4f10e0fe7 Renaming scales parameter for interpolate (#31526)
Summary:
PR separated from https://github.com/pytorch/pytorch/pull/31274.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31526

Reviewed By: zou3519

Differential Revision: D19221931

Pulled By: gchanan

fbshipit-source-id: 81958a9910867ac9d62f2b47abc49384526c4e51
2020-01-02 08:19:30 -08:00
Lara
97c1e90f46 ONNX Interpolate Add Scales Params (#28324)
Summary:
Fix for : https://github.com/pytorch/pytorch/issues/27176
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28324

Reviewed By: hl475

Differential Revision: D18309133

Pulled By: houseroad

fbshipit-source-id: 348bb41393442c6b107d88fc2cd3224e0afa3ccf
2019-12-11 20:09:15 -08:00
Supriya Rao
91c6d2e51c Add support for quantized operator conversion from PT to C2 via ONNX (#29694)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29694

This PR adds preliminary support required to be able to run quantized pytorch models on a C2 backend.
For quantized ops we use a custom domain name 'caffe2' to register the ops if they are in the "quantized" namespace.
The change also adds JIT pass to unpack the quantized weights and insert the unpacked values into the graph.
The actual tensor values are looked up from the params dict.

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2.py TestQuantizedOps

Imported from OSS

Reviewed By: houseroad

Differential Revision: D18467130

fbshipit-source-id: 53ebd8c43935f7d7e74305dad6c231a2247df176
2019-11-18 12:12:40 -08:00
Lara
2acca09e1a Add Support for ONNX scripting Interpolate with missing shape (#29489)
Summary:
- Add support for missing case where interpolate is exported with missing shape information in scripting
- Add warnings
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29489

Reviewed By: hl475

Differential Revision: D18438872

Pulled By: houseroad

fbshipit-source-id: d01f833bec0cc4e881ddc18e7054d22f54e9886b
2019-11-11 21:20:14 -08:00
eellison
e01fc56ecb move type inference for arange into c++ (#27629)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/17662

I'm not sure if `arange` needs to be in python_arg_parser at all, given the schemas in native_functions.yaml. In any case this at least fixes the dtype mismatch.

In follow up PRs I will try to handle some of the other ops that do type inference at the python level, like randint.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27629

Differential Revision: D17885939

Pulled By: eellison

fbshipit-source-id: f97a8bc722b7ab77de1c42a992e49a4a3175ad60
2019-11-11 11:26:21 -08:00
Negin Raoof
ebc216a076 Opset 11 updates (#28225)
Summary:
This PR contains:
1- pad updates for opset11 symbolic
2- Updated avg_pool for opset11
3- TopK updates for opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28225

Reviewed By: hl475

Differential Revision: D18282928

Pulled By: houseroad

fbshipit-source-id: aff2cabca9a155a9b475e35fed69a678544d6669
2019-11-04 12:16:12 -08:00
Sergei Nikolaev
1e2049c566 #26426 fixed (#28715)
Summary:
This is the fix for reverted https://github.com/pytorch/pytorch/issues/26426
houseroad bddppq soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28715

Reviewed By: hl475

Differential Revision: D18146731

Pulled By: houseroad

fbshipit-source-id: 247366451a6334e84df82d00339521f797b33130
2019-11-01 12:53:01 -07:00
Junjie Bai
d37c2d7c8d Revert D17495965: TensorRT 6.0 support and PyTorch->ONNX->TRT6 unit test
Test Plan: revert-hammer

Differential Revision:
D17495965

Original commit changeset: 3e8dbe8943f5

fbshipit-source-id: d47fcbec22b0d61df41d7dbf15cfdde196ac818f
2019-10-25 13:58:16 -07:00
Sergei Nikolaev
4996e3aca2 TensorRT 6.0 support and PyTorch->ONNX->TRT6 unit test (#26426)
Summary:
This PR makes Caffe2 compatible with TensorRT 6. To make sure it works well, a new unit test is added. This test checks the PyTorch->ONNX->TRT6 inference flow for all classification models from the TorchVision Zoo.
Note on CMake changes: it has to be done in order to import onnx-tensorrt project. See https://github.com/pytorch/pytorch/issues/18524 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26426

Reviewed By: hl475

Differential Revision: D17495965

Pulled By: houseroad

fbshipit-source-id: 3e8dbe8943f5a28a51368fd5686c8d6e86e7f693
2019-10-25 13:01:57 -07:00
Lara
d762ad09df Enable Interpolate Tests for ONNX Opset 11 (#28560)
Summary:
- Enable tests for Interpolate in opset 11 for nearest and linear2d modes (linear1d/3d not implemented yet)
- Fix bugs found after enabling tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28560

Reviewed By: hl475

Differential Revision: D18110680

Pulled By: houseroad

fbshipit-source-id: 7f8811e40dc5cedaba6389460dcca52daa048f5f
2019-10-24 14:21:13 -07:00
neginraoof
95922c90b5 Export update for arange and _dim_arange (#26875)
Summary:
Export arange and _dim_arange using onnx::range in opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26875

Reviewed By: hl475

Differential Revision: D17623848

Pulled By: houseroad

fbshipit-source-id: 41f0066ca1c42882ccc051a3ee5448dca25ee5d2
2019-10-17 13:55:45 -07:00
Lara
735463f210 ONNX Export Scripted Interpolate Op (#27566)
Summary:
We currently support exporting traced interpolate ops to ONNX.

Scripting the interpolate op invokes aten::__interpolate in the Torch IR (instead of aten::upsample_[mode][dim]d), which we do not support yet.
This PR implements the ONNX symbolic for __interpolate() to support exporting interpolate in scripting scenarios.

Related open issue: https://github.com/pytorch/pytorch/issues/25807
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27566

Reviewed By: hl475

Differential Revision: D17817731

Pulled By: houseroad

fbshipit-source-id: e091793df503e2497f24821cf2954ff157492c75
2019-10-16 11:22:22 -07:00
Negin Raoof
3d2c90131a opset 11 updates (#27578)
Summary:
Opset 11 updates:
- Enabled ORT tests for updated ops in opset 11
- Updated index_copy and index_fill symbolic for opset 11 to modify onnx::Scatter -> onnx::ScatterElemets
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27578

Reviewed By: hl475

Differential Revision: D17852462

Pulled By: houseroad

fbshipit-source-id: c88747804054d0f3455f2c58fd1d8725e0b2f803
2019-10-11 16:18:40 -07:00
Lara Haidar
2093fac4ee ONNX Export ConstantOfShape with default dtype (#27577)
Summary:
Exporting a scripted module to ONNX with ops like torch.zeros() fails when the dtype is not specified.
This PR adds support for exporting scripted torch.zeros() ops (and similar ops) without specifying the dtype (the dtype will default to float); a hedged example follows.
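
A sketch of the scripted pattern this enables, under the assumption that the size comes from a runtime shape:

```python
import torch

# No dtype given: the exported ConstantOfShape should default to float.
@torch.jit.script
def zeros_from_len(x: torch.Tensor):
    return torch.zeros([x.shape[0]])
```
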
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27577

Reviewed By: hl475

Differential Revision: D17822318

Pulled By: houseroad

fbshipit-source-id: b2d4300b869e782a9b72534fea1263eb83744953
2019-10-09 17:05:35 -07:00
Negin Raoof
c874dd91a7 export remainder (#24410)
Summary:
Added ONNX export support for torch.remainder and torch.fmod
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24410

Reviewed By: hl475

Differential Revision: D17466791

Pulled By: houseroad

fbshipit-source-id: afe6519e5f370824e3b4a45b69036a7260fb72cf
2019-10-03 20:15:20 -07:00
Negin Raoof
d93fc64776 Update export for topk and sort (#25739)
Summary:
updated export for topk and sort as part of opset11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25739

Reviewed By: hl475

Differential Revision: D17467131

Pulled By: houseroad

fbshipit-source-id: 653be138455728ec8e9bb81ae63dd7ce0c4d0793
2019-10-02 12:20:30 -07:00
Lara
d396c7332a Update ONNX Export for Interpolate in Opset 11 (#26778)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26778

- Add support for linear and cubic interpolate in opset 11.
- Add support for 1d and 3d interpolate in nearest mode for opset 7 and 8.
- Add tests for all cases of interpolate in ORT tests (nearest/linear/cubic, 1d/2d/3d, upsample/downsample).
Original PR resolved: https://github.com/pytorch/pytorch/pull/24805

Reviewed By: hl475

Differential Revision: D17564911

Pulled By: houseroad

fbshipit-source-id: 591e1f5b361854ace322eca1590f8f84d29c1a5d
2019-09-25 05:43:20 -07:00
Edward Yang
1bb895e1c1 Revert D17330801: [pytorch][PR] Update ONNX Export for Interpolate in Opset 11
Test Plan: revert-hammer

Differential Revision:
D17330801

Original commit changeset: 1bdefff9e72f

fbshipit-source-id: dff07477403170c27260f736ab6e6010f0deca9f
2019-09-24 18:56:45 -07:00
Lara
de3d4686ca Update ONNX Export for Interpolate in Opset 11 (#24805)
Summary:
- Add support for linear and cubic interpolate in opset 11.
- Add support for 1d and 3d interpolate in nearest mode for opset 7 and 8.
- Add tests for all cases of interpolate in ORT tests (nearest/linear/cubic, 1d/2d/3d, upsample/downsample).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24805

Reviewed By: hl475

Differential Revision: D17330801

Pulled By: houseroad

fbshipit-source-id: 1bdefff9e72f5e70c51f4721e1d7347478b7505b
2019-09-24 16:29:57 -07:00
Lara
c79d116a7d Update ONNX Export for Gather and Scatter for Opset 11
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24790

Reviewed By: hl475

Differential Revision: D17159723

Pulled By: houseroad

fbshipit-source-id: a63bb7c681120de85588dafecd03f04742dde8b7
2019-09-23 17:13:25 -07:00
Zachary DeVito
bdc57d3833 Merge ProfiledTensorType and TensorType (#24284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24284

This PR finishes the unification of all Tensor types into a single object.
ProfiledTensorType is renamed to TensorType and the old TensorType is
deleted.

Notes:
* Fixes bug in merge for VaryingShape by changing its representation to an
 optional list of optional ints.
* Removes ProfiledTensorType::create(type) invocations that can now
  simply be expect calls on tensor type.

Test Plan: Imported from OSS

Differential Revision: D16794034

Pulled By: zdevito

fbshipit-source-id: 10362398d0bb166d0d385d74801e95d9b87d9dfc
2019-08-20 13:01:28 -07:00
Zachary DeVito
0cbd7fa46f remove CompleteTensorType
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24169

Test Plan: Imported from OSS

Reviewed By: jamesr66a

Differential Revision: D16765329

Pulled By: zdevito

fbshipit-source-id: 88560cefba635c3d586a3e4dee67f9b1d901a642
2019-08-15 13:31:34 -07:00
neginraoof
3574d9ff70 updated pixel_shuffle in opset 11 to use depthToSpace
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23739

Differential Revision: D16800355

Pulled By: bddppq

fbshipit-source-id: 1502c5b7ec1495286bad17b6ffa359cf995f78fb
2019-08-15 11:37:44 -07:00
Zachary DeVito
c2549cb8d3 Remove DimensionedTensorType (#24077)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24077

This replaces all uses of DimensionedTensorType with ProfiledTensorType.
For places where we propagate shape information, we still follow the
dimension-only propagation rules, meaning that even if full size information
is known on inputs, the outputs will only have dimension information.

This fixes several bugs in existing implementations that this change uncovered:
* requires_grad was not propagated correctly across loops
* requires_grad on ProfiledTensorType returned false when requires_grad information
  is unknown, even though the conservative result is true
* some equality code on ProfiledTensorType contained bugs.

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D16729581

Pulled By: zdevito

fbshipit-source-id: bd9f823c1c6b1d06a236a1b5b2b2fcdf0245edce
2019-08-13 10:05:47 -07:00
Nikolay Korovaiko
3d15ee1b34 Remove more uses of DimensionedTensorType
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23060

Differential Revision: D16460391

Pulled By: Krovatkin

fbshipit-source-id: b50ee87d22ad18b8cbfff719b199ea876ef172f1
2019-08-01 21:19:28 -07:00
liqunfu
83d6c6be07 ONNX export for index_select (#21866)
Summary:
ONNX export for index_select
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21866
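
An illustrative sketch (assuming `index_select` lowers to the ONNX `Gather` op; the names are made up):
```
import torch

class Select(torch.nn.Module):
    def forward(self, x, idx):
        # index_select along dim 1 corresponds to ONNX Gather with axis=1
        return torch.index_select(x, 1, idx)

x = torch.randn(3, 4)
idx = torch.tensor([0, 2])
torch.onnx.export(Select(), (x, idx), "index_select.onnx")
```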

Reviewed By: zrphercule

Differential Revision: D16471345

Pulled By: houseroad

fbshipit-source-id: 745c23ba8a3223b5ec59b924df7358a36a92518c
2019-07-26 13:56:15 -07:00
BowenBao
a35136dd73 Add support for onnx tensor index export (#21716)
Summary:
Support exporting
* Standard tensor indexing like
```
x = torch.ones(4, 5)
ind = torch.tensor([0, 1])

return x[ind]
```
* [Advanced indexing](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing) like
```
x = torch.ones(4,5,6,7,8)
ind1 = torch.tensor([0, 1])
ind2 = torch.tensor([[3], [2]])
ind3 = torch.tensor([[2, 2], [4, 5]])

return x[2:4, ind1, None, ind2, ind3, :]
```
It would be ideal if ONNX could natively support indexing in future opsets, but for opset <= 10 this kind of workaround will always be needed.

There are still various limitations, such as not supporting advanced indexing with negative indices, or mask indices of rank > 1. My feeling is that these are less common cases that require great effort to support using the current opset, and it's better not to make the index export more cumbersome than it already is.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21716
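
An end-to-end usage sketch for the standard-indexing case above (file name illustrative):
```
import torch

class Index(torch.nn.Module):
    def forward(self, x, ind):
        # plain tensor indexing, lowered by the exporter to
        # gather/slice-based ONNX subgraphs
        return x[ind]

x = torch.ones(4, 5)
ind = torch.tensor([0, 1])
torch.onnx.export(Index(), (x, ind), "index.onnx")
```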

Reviewed By: zrphercule

Differential Revision: D15902199

Pulled By: houseroad

fbshipit-source-id: 5f1cc687fc9f97da18732f6a2c9dfe8f6fdb34a6
2019-07-23 17:11:28 -07:00
BowenBao
b3147bc674 PyTorch export to ONNX Opset 7 and 8 - Cont (#22421)
Summary:
This is an extension to the original PR https://github.com/pytorch/pytorch/pull/21765

1. Increase the coverage of support for different opsets, along with comments and blacklisting.
2. Adding backend tests for both caffe2 and onnxruntime on opset 7 and opset 8.
3. Reusing onnx model tests in caffe2 for onnxruntime.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22421

Reviewed By: zrphercule

Differential Revision: D16225518

Pulled By: houseroad

fbshipit-source-id: 01ae3eed85111a83a0124e9e95512b80109d6aee
2019-07-12 14:52:48 -07:00
Brian Vaughan
97a604ef57 Rereapply optional ScalarType interface changes that were reverted in D16079809 (#22456)
Summary:
re-apply changes reverted in:
https://github.com/pytorch/pytorch/pull/22412

Also change log_softmax to take positional arguments. Long-term we do want the kwarg-only interface, but it currently seems to be incompatible with jit serialization.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22456
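
A sketch of the call pattern this enables (assuming, per the note above, that `dtype` can now be passed positionally):
```
import torch

x = torch.randn(2, 3)
# dim and dtype passed positionally rather than as keywords
y = torch.log_softmax(x, 1, torch.float64)
```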

Differential Revision: D16097159

Pulled By: nairbv

fbshipit-source-id: 8cb73e9ca18fc66b35b873cf4a574b167a578b3d
2019-07-03 20:03:25 -07:00
Lara Haidar
7ca7edc307 ONNX Export LayerNorm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22265
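
A minimal export sketch for this change (the shape and file name are illustrative):
```
import torch

m = torch.nn.LayerNorm(normalized_shape=10)
torch.onnx.export(m, torch.randn(2, 10), "layernorm.onnx")
```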

Reviewed By: zrphercule

Differential Revision: D16076268

Pulled By: houseroad

fbshipit-source-id: 29b4ecab2fa0dc7250c9d1ad6924903181a66ab2
2019-07-02 09:37:07 -07:00
Lu Fang
de84104059 Lint ONNX Related Code (#22423)
Summary:
Lint the code
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22423

Differential Revision: D16086518

Pulled By: houseroad

fbshipit-source-id: c6e5143f42c73a70beeaa2e089df4164f6265c32
2019-07-01 21:44:16 -07:00
Wanchao Liang
dff2c07183 Manual revert of D16012838
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22412

Reviewed By: nairbv, houseroad

Differential Revision: D16079809

fbshipit-source-id: ee0d805ff7a2bc5f98bcc65f90b8199751c840f6
2019-07-01 19:58:21 -07:00
Brian Vaughan
7707dee761 Re apply optional ScalarType changes (#22237)
Summary:
This is (mostly) the re-application of:
https://github.com/pytorch/pytorch/pull/21088

which was reverted due to an issue conflicting with changes in:
https://github.com/pytorch/pytorch/pull/22104
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22237

Differential Revision: D16012838

Pulled By: nairbv

fbshipit-source-id: 35f4a73c97ab68b4e2648aca96b2176f07b5a883
2019-06-26 13:36:25 -07:00
Michael Suo
e016a424ef Revert D15944971: [pytorch][PR] merge interfaces that have an optional scalartype parameter
Differential Revision:
D15944971

Original commit changeset: 53473c370813

fbshipit-source-id: a18158b448cb8993b12e1a3bf2c2a3e0d6df6b10
2019-06-24 09:41:33 -07:00
Brian Vaughan
142361a7e4 merge interfaces that have an optional scalartype parameter (#21088)
Summary:
This change is backwards incompatible in *C++ only* on mean(), sum(), and prod() interfaces that accepted either of:
```
Tensor sum(IntArrayRef dim, bool keepdim=false) const;
Tensor sum(IntArrayRef dim, ScalarType dtype) const;
```
but now specifying both the dim and dtype requires passing the keepdim parameter as well:
```
Tensor sum(IntArrayRef dim, bool keepdim=false, c10::optional<ScalarType> dtype=c10::nullopt) const;
```

[xla ci]
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21088

Reviewed By: ailzhang

Differential Revision: D15944971

Pulled By: nairbv

fbshipit-source-id: 53473c370813d9470b190aa82764d0aea767ed74
2019-06-24 07:17:58 -07:00
Lara
34aee933f9 ONNX Export Interpolate (Resize) for opset version 10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21434

Reviewed By: zrphercule

Differential Revision: D15777197

Pulled By: houseroad

fbshipit-source-id: 517b06a54a234ffdb762401e83f5a732023ed259
2019-06-19 13:40:27 -07:00
Lara
cc85c3dbbc ONNX Export Slice and Flip ops for opset 10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20533
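
For illustration (assumption: the opset 10 `Slice` op adds a `steps` input, which makes negative-step slicing, and hence `torch.flip`, expressible):
```
import torch

class Flip(torch.nn.Module):
    def forward(self, x):
        # flip reduces to a Slice with a negative step in opset 10
        return torch.flip(x, dims=[0])

torch.onnx.export(Flip(), torch.randn(3, 4), "flip.onnx", opset_version=10)
```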

Reviewed By: zrphercule

Differential Revision: D15579713

Pulled By: houseroad

fbshipit-source-id: 91f3ac0cb14ef226f980362b0013b6b92cb8b8da
2019-06-07 10:03:26 -07:00
Iurii Zdebskyi
ff0d00f921 Updated scalar type to onnx mapping (#21095)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21095
ghimport-source-id: 32a79eace02216de9170f163027b1aa93756b821

Differential Revision: D15546175

Pulled By: izdeby

fbshipit-source-id: 4e47c8538aaf30b4af198baac7279133e4d74b36
2019-05-30 17:11:12 -07:00
BowenBao
fa189641b5 Add export for __and__ & __or__ (#17894)
Summary:
In the ONNX spec, the only supported input/output type for `And` and `Or` is `Bool`.
Thus, during export, casts to/from `Bool` are inserted around the inputs/outputs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17894
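
A minimal sketch exercising this path (the byte-tensor inputs and names are illustrative):
```
import torch

class AndModel(torch.nn.Module):
    def forward(self, x, y):
        # the exporter wraps the ONNX And op with casts to/from Bool
        return x & y

x = torch.tensor([1, 0, 1], dtype=torch.uint8)
y = torch.tensor([1, 1, 0], dtype=torch.uint8)
torch.onnx.export(AndModel(), (x, y), "and.onnx")
```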

Reviewed By: zrphercule

Differential Revision: D15103148

Pulled By: houseroad

fbshipit-source-id: 3e1068ea236c743260d42882fb11f0e3a21707e6
2019-05-16 13:52:06 -07:00
Lara Haidar
f4d9bfaa4d Support Exports to Multiple ONNX Opset (#19294)
Summary:
Support exporting multiple ONNX opsets (more specifically opset 10 for now), following the proposal in https://gist.github.com/spandantiwari/99700e60919c43bd167838038d20f353.
Also add support for custom ops (merged with https://github.com/pytorch/pytorch/pull/18297).

This PR will be followed by another PR containing the changes related to testing the ops for different opsets.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19294
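
To illustrate the interface (a sketch; the argument name `opset_version` is assumed from the proposal above), the target opset is selected at export time:
```
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)

# the same model can now be exported against different opsets
torch.onnx.export(model, x, "linear_opset9.onnx", opset_version=9)
torch.onnx.export(model, x, "linear_opset10.onnx", opset_version=10)
```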

Reviewed By: zrphercule

Differential Revision: D15043951

Pulled By: houseroad

fbshipit-source-id: d336fc35b8827145639137bc348ae07e3c14bb1c
2019-05-10 18:37:12 -07:00