Commit Graph

157 Commits

BowenBao
bd4902d81f [ONNX] Add Squeeze/Unsqueeze dynamic dimensions support when opset >= 13 (#71158)
* Add Squeeze/Unsqueeze dynamic axes support when opset >= 13
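
A minimal sketch of the kind of export this enables (model and shapes are illustrative, not from the PR): in opset 13, Squeeze/Unsqueeze take axes as an input tensor rather than an attribute, so export works alongside dynamic axes.

```python
import torch

class SqueezeModel(torch.nn.Module):
    def forward(self, x):
        # unsqueeze, then squeeze a size-1 dim; exportable with dynamic axes at opset >= 13
        return x.unsqueeze(0).squeeze(2)

torch.onnx.export(SqueezeModel(), torch.randn(3, 1, 4), "squeeze.onnx",
                  opset_version=13, input_names=["x"],
                  dynamic_axes={"x": {0: "batch"}})
```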

Co-authored-by: hwangdeyu <dejack953@outlook.com>
Co-authored-by: Gary Miguel <garymm@garymm.org>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73104
2022-02-23 06:41:15 +00:00
BowenBao
40de6b80ee [ONNX] Add infra for quantized model export and support quantized mobilenet v3 (#72215)
* Add infrastructure and helper functions to enable future work for other quantized operators and models.
* Add export for quantized operators needed by torchvision mobilenet v3 large.
    * ATen namespace: hardsigmoid, flatten, adaptive_avg_pool, quantize_per_tensor, dequantize.
    * Quantized namespace: conv2d, conv2d_relu, hardswish, add, mul.
* Numerous bug fixes, in unpack_quantized_weight.cpp, symbolic functions, and unit test.
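
A rough usage sketch (assuming torchvision's quantized mobilenet_v3_large; the model name and flags are illustrative, not taken from the PR):

```python
import torch
import torchvision

# load an eager-mode quantized model and export it at opset 13
model = torchvision.models.quantization.mobilenet_v3_large(
    pretrained=True, quantize=True).eval()
torch.onnx.export(model, torch.randn(1, 3, 224, 224),
                  "mobilenet_v3_quant.onnx", opset_version=13)
```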

Co-authored-by: BowenBao <bowbao@microsoft.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73102
2022-02-23 06:22:58 +00:00
BowenBao
5843fea94d [ONNX] Add export support for linalg norm (#66575)
* Add matrix_norm

* Add vector norm

* Fix flake

* Fix flake

* nit fixes

* Nit fixes

* Restructure and add comments

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72987
2022-02-18 18:30:16 +00:00
BowenBao
308de30abc [ONNX] Support embedding_renorm ONNX export
Composed using ONNX operators, following the same logic as 0a07488ed2/aten/src/ATen/native/Embedding.cpp (L153)
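
A minimal sketch of a model that hits this path (illustrative, not from the PR): `max_norm` on `nn.Embedding` lowers to aten::embedding_renorm_ during export.

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = torch.nn.Embedding(10, 4, max_norm=1.0)  # max_norm triggers embedding_renorm_

    def forward(self, idx):
        return self.emb(idx)

torch.onnx.export(M(), torch.tensor([[1, 2, 3]]), "embedding_renorm.onnx",
                  opset_version=11)
```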

Replaced #72560
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72738
2022-02-11 22:02:22 +00:00
BowenBao
04c5d978b9 [ONNX] Refactor _run_symbolic_function (#67573) (#68491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68491

* Allows implementing symbolic functions for domains other than `aten`, for example `prim`, in symbolic_opset#.py.
* Allows symbolic function to access extra context if needed, through `SymbolicFunctionState`.
  * Particularly, the `prim::PythonOp` special case can access the node without needing to pass it through inputs. Updates will be made downstream, and in a follow-up PR we will remove the previous workaround in the exporter.
* `prim::Loop`, `prim::If`, etc. are now moved out of `_run_symbolic_function` in utils.py and into symbolic_opset9.py.

Motivation for this change:
- Better maintainability and reduced complexity. It is easier to add symbolics for operators, both simple and complex ones (that need additional context), without the former needing to know about the existence of the latter.
- The design idea was long outdated. prim ops are no longer rare special cases, and they shouldn't all be handled inside `_run_symbolic_function`; as a result that function had become too clumsy. There were also prim op symbolics added in symbolic_opset#.py under the name `prim_[opname]`, creating separation and confusion.
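
For context, a hedged sketch of the general symbolic-function convention this refactor builds on, shown via the public registration API (the refactor itself concerns the internal dispatch in utils.py):

```python
import torch
from torch.onnx import register_custom_op_symbolic

# a symbolic function maps a graph node's inputs to ONNX ops via g.op
def asinh_symbolic(g, self):
    return g.op("Asinh", self)

register_custom_op_symbolic("aten::asinh", asinh_symbolic, opset_version=9)
```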

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483782

Pulled By: malfet

fbshipit-source-id: f9affc31b1570af30ffa6668da9375da111fd54a

Co-authored-by: BowenBao <bowbao@microsoft.com>
(cherry picked from commit 1e04ffd2fd)
2022-02-11 18:35:35 +00:00
BowenBao
eb4238fc26 Allow caffe2-specific graph transformations for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460) (#68490)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68490

The use of ATen as a fallback operator during ONNX conversion is important for increasing operator coverage, or even for providing implementations more efficient than some ONNX ops.

Currently this feature is available through `OperatorExportTypes.ONNX_ATEN_FALLBACK`,
but it also performs changes to the graph that are runnable by Caffe2 only.

This PR restricts caffe2-specific graph transformations for the `ONNX_ATEN_FALLBACK`
operator export type to builds where PyTorch has Caffe2 support (i.e., BUILD_CAFFE2=1 at build time).

The first version of this PR introduced a new operator export type, `ONNX_ATEN__STRICT_FALLBACK`,
which is essentially the same as `ONNX_ATEN_FALLBACK` but without the caffe2 transformations.
It was preferred not to introduce a new operator export type, but to refine the existing aten fallback one.

## BC-breaking note
### The global constant `torch.onnx.PYTORCH_ONNX_CAFFE2_BUNDLE` is removed in favor of a less visible `torch.onnx._CAFFE2_ATEN_FALLBACK`.
`PYTORCH_ONNX_CAFFE2_BUNDLE` is effectively dead code: a flag that is always set to False.
One alternative would be fixing it, but #66658 disables the Caffe2 build by default.
Making a Caffe2 feature private seems to make more sense ahead of its future deprecation.

### The method `torch.onnx.export` now defaults to ONNX when `operator_export_type` is not specified.
Previously, `torch.onnx.export`'s `operator_export_type` was intended to default to `ONNX_ATEN_FALLBACK` when `PYTORCH_ONNX_CAFFE2_BUNDLE` was set, but that never happened because `PYTORCH_ONNX_CAFFE2_BUNDLE` is always undefined.
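
A small sketch of opting in to the fallback explicitly (module and file names are illustrative):

```python
import torch

class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y

# plain ONNX is now the default; aten fallback must be requested explicitly
torch.onnx.export(Add(), (torch.randn(2), torch.randn(2)), "add.onnx",
                  operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
```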

Co-authored-by: Nikita Shulga <nshulga@fb.com>

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483781

Pulled By: malfet

fbshipit-source-id: e9b447db9466b369e77d747188685495aec3f124
(cherry picked from commit 5fb1eb1b19)
2022-02-10 03:26:48 +00:00
vfdev-5
3da2e09c9b Added antialias flag to interpolate (CPU only, bilinear) (#65142)
Summary:
Description:
- Added antialias flag to interpolate (CPU only)
  - forward and backward for bilinear mode
  - added tests

### Benchmarks

<details>
<summary>
Forward pass, CPU. PTH interpolation vs PIL
</summary>

Cases:
- PTH RGB 3 Channels, float32 vs PIL RGB uint8 (apples vs pears)
- PTH 1 Channel, float32 vs PIL 1 Channel Float

Code: https://gist.github.com/vfdev-5/b173761a567f2283b3c649c3c0574112

```
# OMP_NUM_THREADS=1 python bench_interp_aa_vs_pillow.py

Torch config: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - CPU capability usage: AVX2
  - CUDA Runtime 11.1
  - NVCC architecture flags: -gencode;arch=compute_75,code=sm_75
  - CuDNN 8.0.5
  - Build settings: BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_PYTORCH_QNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=1, USE_CUDNN=1, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=0, USE_OPENMP=ON,

Num threads: 1
[------------------------ Downsampling: torch.Size([1, 3, 906, 438]) -> (320, 196) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                2.9                |          3.1
      channels_last non-contiguous torch.float32  |                2.6                |          3.6

Times are in milliseconds (ms).

[------------------------ Downsampling: torch.Size([1, 3, 906, 438]) -> (460, 220) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                3.4                |          4.0
      channels_last non-contiguous torch.float32  |                3.4                |          4.8

Times are in milliseconds (ms).

[------------------------ Downsampling: torch.Size([1, 3, 906, 438]) -> (120, 96) -------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                1.6                |          1.8
      channels_last non-contiguous torch.float32  |                1.6                |          1.9

Times are in milliseconds (ms).

[----------------------- Downsampling: torch.Size([1, 3, 906, 438]) -> (1200, 196) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                9.0                |          11.3
      channels_last non-contiguous torch.float32  |                8.9                |          12.5

Times are in milliseconds (ms).

[----------------------- Downsampling: torch.Size([1, 3, 906, 438]) -> (120, 1200) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                2.1                |          1.8
      channels_last non-contiguous torch.float32  |                2.1                |          3.4

Times are in milliseconds (ms).

[--------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (320, 196) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |               1.2               |          1.0

Times are in milliseconds (ms).

[--------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (460, 220) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |               1.4               |          1.3

Times are in milliseconds (ms).

[--------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (120, 96) ---------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |              719.9              |         599.9

Times are in microseconds (us).

[-------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (1200, 196) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |               3.7               |          3.5

Times are in milliseconds (ms).

[-------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (120, 1200) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |              834.4              |         605.7

Times are in microseconds (us).

```

</details>

Code is moved from torchvision: https://github.com/pytorch/vision/pull/4208
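
A minimal usage sketch of the new flag (sizes mirror the benchmark above):

```python
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 906, 438)
# bilinear downsample with antialiasing (CPU-only in this change), closer to PIL's output
y = F.interpolate(x, size=(320, 196), mode="bilinear",
                  align_corners=False, antialias=True)
```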

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65142

Reviewed By: mrshenli

Differential Revision: D32432405

Pulled By: jbschlosser

fbshipit-source-id: b66c548347f257c522c36105868532e8bc1d4c6d
2021-11-17 09:10:15 -08:00
Gary Miguel
eb22d06e5e [ONNX] Use human readable enum for dtype scalars (#66822) (#67807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67807

Also make quoting of string literals consistent.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181309

Pulled By: malfet

fbshipit-source-id: e1053701e3589f0310d8b5ef920359c03c6713f0
2021-11-08 14:37:05 -08:00
BowenBao
d4ff344fae [ONNX] Fix remainder export (#64230) (#64578)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64578

* Fix remainder export for edge case when input is negative. New export relies on true_divide export.
* Simplified true_divide export. Cleaned up redundant code which is handled by the scalar type analysis pass. Removed the dependency on `onnx::Where`, thus supporting opsets 7 & 8.
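
The edge case in question, as a quick illustration: the result of remainder takes the divisor's sign.

```python
import torch

print(torch.remainder(torch.tensor([-7.0, 7.0]), torch.tensor([3.0, -3.0])))
# tensor([ 2., -2.])
```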

Fixes #60179

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919601

Pulled By: malfet

fbshipit-source-id: 0f78621c0ac3bdb6bf4225e049ba5f470dc8ab12

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-09-30 21:08:54 -07:00
BowenBao
2d61009f4a [ONNX] Fix input sequence for pad op (#60554) (#64377)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64377

* Fix for input primitive sequence

* Test mypy

* Fix for tracing tuples

* Fix for extra inputs

* flake8

* Rebase

* Fix for tracing tuples

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919606

Pulled By: malfet

fbshipit-source-id: a718c4a12cda77b968cb636acd7aa63d7b5ba326
2021-09-30 21:08:45 -07:00
Nikita Shulga
340531f2e0 [ONNX] Do not use numpy in ONNX opsets (#65188)
Summary:
Replace `torch.tensor([numpy.arange(a, b, c)])` with `torch.arange(a, b, c).unsqueeze(0)`
Replace `tuple(numpy.add(a, b))` with `tuple(x + y for (x, y) in zip(a, b))`

As `numpy` is an optional dependency, it shouldn't be used in PyTorch core by default
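
The two replacement patterns, written out in plain torch with illustrative values:

```python
import torch

t = torch.arange(0, 10, 2).unsqueeze(0)           # was torch.tensor([numpy.arange(0, 10, 2)])
s = tuple(x + y for x, y in zip((1, 2), (3, 4)))  # was tuple(numpy.add((1, 2), (3, 4)))
```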

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65188

Reviewed By: mruberry

Differential Revision: D31009490

Pulled By: malfet

fbshipit-source-id: 528e48f055bf9ac1de1fd7e94c0be41915df9a0b
2021-09-17 11:28:44 -07:00
BowenBao
6512838fab [ONNX] Enhance shape (two changes merged) (#64585)
Summary:
Enhanced shape inference by introducing typeReliableMap.
[ONNX] exporter changes for torch hub models (https://github.com/pytorch/pytorch/issues/62856)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64585

Reviewed By: ezyang

Differential Revision: D30870418

Pulled By: msaroufim

fbshipit-source-id: 87a294799cb87d649d1d13b6114a5cfbac9be15c

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-09-15 13:02:19 -07:00
BowenBao
db0771b05d [ONNX] Update repeat_interleave for dynamic repeats (#59979) (#62764)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62764

Fixes #58733

- Support dynamic interleave for cases with dynamic repeat values
- Moved the repeat_interleave symbolic from opset 11 to opset 13, since sequence-typed loop outputs are needed for this change
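
A quick illustration of dynamic (per-element) repeats, the case this export change targets:

```python
import torch

x = torch.tensor([1, 2, 3])
repeats = torch.tensor([2, 1, 3])  # repeat counts may be data-dependent at export time
print(torch.repeat_interleave(x, repeats))  # tensor([1, 1, 2, 3, 3, 3])
```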

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30375179

Pulled By: msaroufim

fbshipit-source-id: 787f96bf91d124fd0483761088c5f4ae930d96a9

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-08-20 12:46:54 -07:00
BowenBao
3a7bbf5fb7 [ONNX] Add support for opset14 in PT-ONNX exporter (#59486) (#62758)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62758

* Add initial changes for opset14

* Fixed flake

* Add onnx submodule changes and removed utility func tests

* Add updated batchNorm symbolic

* Add triu/tril symbolics

* Fix lint

* Fixed test failures

* Add reshape with allowzero

* Added tests/refactored opset versioning

* Bump onnxruntime version

* Fix clang/lint failures

* Add reshape shape inference for opset 14

* Changes for allowzero

* Fix lint/clang and test failures

* Updated PR

* Flake fixes

* Fix flake

* Remove new_jit_api tests

* Add opset14 models

* Update allowzero

* Fix test failures

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349063

Pulled By: msaroufim

fbshipit-source-id: 54724246149b01a2f627c43d7396253a7e9c9eb9

Co-authored-by: Shubham Bhokare <sbhokare@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
2021-08-18 13:29:01 -07:00
BowenBao
6f08ddfc28 [ONNX] Enable aten:normal op and add tests for aten:uniform op. (#60441) (#61560)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61560

1. Add a new symbolic function broadcast_tensors() to support exporting the torch.broadcast_tensors() function. This is required for exporting the torch.distribution.normal() function.
2. Add a new symbolic function normal() to support exporting the torch.distribution.normal() function.
3. Add related tests for the normal and uniform ops as well.
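
A minimal sketch of a module exercising the aten::normal path (illustrative, not a test from the PR):

```python
import torch

class Sampler(torch.nn.Module):
    def forward(self, loc, scale):
        return torch.normal(loc, scale)  # samples elementwise from N(loc, scale)

torch.onnx.export(Sampler(), (torch.zeros(4), torch.ones(4)),
                  "normal.onnx", opset_version=11)
```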

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D29767995

Pulled By: SplitInfinity

fbshipit-source-id: acfe5e7801d00c0df8ca46966bbd6015fed0045e

Co-authored-by: Jay Zhang <jiz@microsoft.com>
2021-07-21 15:10:35 -07:00
BowenBao
81f95cce59 [ONNX] Extend chunk for dynamic chunk values (#59644) (#60247)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60247

Related to #42785

Test Plan: Imported from OSS

Reviewed By: zou3519, ZolotukhinM

Differential Revision: D29494914

Pulled By: SplitInfinity

fbshipit-source-id: 51ddb876d00185e59cfe54a8af5a9c8dd073a09f

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-07-08 16:29:28 -07:00
BowenBao
044b519a80 Symbolic for ReLu6 (#58560) (#59538)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59538

Four mealv2 models could be exported in torch 1.8.1, but export fails since torch master introduced relu6 a few months back.
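
A minimal repro-style sketch (illustrative model, not one of the mealv2 models):

```python
import torch

m = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU6())
torch.onnx.export(m, torch.randn(1, 8), "relu6.onnx", opset_version=11)
```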

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb, ansley

Differential Revision: D29046607

Pulled By: SplitInfinity

fbshipit-source-id: d9cf7050e4ac0dad892441305ffebc19ba84e2be

Co-authored-by: David <jiafa@microsoft.com>
2021-06-15 12:24:17 -07:00
BowenBao
0a6828a306 [ONNX] use consistent quoting for string literals (#57757) (#58695)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58695

As PEP8 says: "Pick a rule and stick to it." [1]

[1] https://www.python.org/dev/peps/pep-0008/#string-quotes

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714811

Pulled By: SplitInfinity

fbshipit-source-id: c95103aceb1725c17c034dc6fc8216627f189548

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-27 12:06:42 -07:00
Serhat Yilmaz
4ca4640bae [torch][repeat_interleave] remove stream synchronization if output size is given (#58417)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58417

Same as title.

Test Plan:
Rely on CI signal.

Update unit test to exercise new code path as well.

Reviewed By: ngimel

Differential Revision: D28482927

fbshipit-source-id: 3ec8682810ed5c8547b1e8d3869924480ce63dcd
2021-05-22 20:53:28 -07:00
BowenBao
51cd89ecc6 [ONNX] Handle mixed mask, index input for index_put (#57604)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57604

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393524

Pulled By: SplitInfinity

fbshipit-source-id: 6c0cd9db981a7e4ece2fdd375a814a13449e1ab0

Co-authored-by: David <jiafa@microsoft.com>
2021-05-13 13:42:56 -07:00
Peter Bell
33eea146ee torch.clamp with tensor min and max (#52695)
Summary:
Fixes gh-2793
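
A quick illustration of the new overloads, with broadcastable tensor bounds:

```python
import torch

x = torch.arange(6.0)
lo = torch.full((6,), 1.0)  # per-element lower bound
hi = torch.tensor(4.0)      # 0-dim tensor upper bound, broadcast over x
print(torch.clamp(x, min=lo, max=hi))  # tensor([1., 1., 2., 3., 4., 4.])
```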

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52695

Reviewed By: mruberry

Differential Revision: D27395977

Pulled By: ezyang

fbshipit-source-id: f86aa240feb034d42e4c45447e72218f6a773c24
2021-05-03 12:56:16 -07:00
BowenBao
913f1f75b3 Revert "Revert [ONNX] Redesign inplace conversion" (#56675)
Summary:
Adjust how MutationRemover is used to avoid creating aliasDb multiple times for the same graph.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56675

Reviewed By: pbelevich

Differential Revision: D27945692

Pulled By: SplitInfinity

fbshipit-source-id: a6c548438e88ddee18ef03a6f0461ab9eaaaa829
2021-04-22 22:22:16 -07:00
Nikita Shulga
36828aa0ff Revert D27866138: [ONNX] Redesign inplace conversion (#55033)
Test Plan: revert-hammer

Differential Revision:
D27866138 (24ff92f76d)

Original commit changeset: ab5c9188740c

fbshipit-source-id: b99bf5b12e109089ebd5748c1dc152c6af1cebdb
2021-04-21 21:11:06 -07:00
BowenBao
24ff92f76d [ONNX] Redesign inplace conversion (#55033) (#56173)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56173

* Create `InplaceConverter` and `ValueTracker` to keep track of aliases of values throughout the graph. For a given value, a new alias is created every time there is an inplace operation, a SetAttr, or through nested blocks owned by If/Loop nodes.
* Fix bug where control-flow node output types are not set when the complete node is unable to run ONNX shape inference due to containing a non-ONNX node.
* Add symbolic for `__not__` ~~and `prim_min`~~(update: moved to a separate PR), and update `index_put` opset9 to support case of assignment without providing indices.
* Bump ORT version in CI test.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866138

Pulled By: SplitInfinity

fbshipit-source-id: ab5c9188740c50f783ceba4d54fda43c26e2fde7
2021-04-21 17:59:11 -07:00
BowenBao
f804b65d4e [ONNX] Update repeat_interleave symbolic (#54312) (#56165)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56165

Add implementation for cases where
- interleaving happens along a dim consisting of dynamic axes

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866137

Pulled By: SplitInfinity

fbshipit-source-id: 7fef1b2c614f2e24a677b7ca0886bb37bd0ab479
2021-04-20 23:00:39 -07:00
Kurt Mohler
3fe4718d16 Add padding_idx argument to EmbeddingBag (#49237)
Summary:
This PR adds a `padding_idx` parameter to `nn.EmbeddingBag` and `nn.functional.embedding_bag`. As with `nn.Embedding`'s `padding_idx` argument, if an embedding's index is equal to `padding_idx` it is ignored, so it is not included in the reduction.

This PR does not add support for `padding_idx` for quantized or ONNX `EmbeddingBag` for opset10/11 (opset9 is supported). In those cases, an error is thrown if `padding_idx` is provided.
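
A minimal usage sketch of the new argument:

```python
import torch

bag = torch.nn.EmbeddingBag(10, 3, mode="sum", padding_idx=0)
idx = torch.tensor([[0, 2, 4], [3, 0, 9]])  # each row is one bag
out = bag(idx)  # embeddings at index 0 are skipped in each bag's sum
```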

Fixes https://github.com/pytorch/pytorch/issues/3194

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49237

Reviewed By: walterddr, VitalyFedyunin

Differential Revision: D26948258

Pulled By: jbschlosser

fbshipit-source-id: 3ca672f7e768941f3261ab405fc7597c97ce3dfc
2021-04-14 09:38:01 -07:00
Antonio Cuni
980d6f2589 torch.linalg.det (#53119)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51652.
In particular:
- the main implementation is in `torch.linalg.det` now. `torch.det` is just a deprecated alias to it
- add a new `OpInfo` for `torch.linalg.det`
- remove the old-style tests for `torch.det` (this is similar to what we did for `torch.linalg.slogdet`, see https://github.com/pytorch/pytorch/issues/49194)
- added a `out=` argument to `torch.linalg.det`, but **not** to `torch.det`.
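
A small usage sketch of the new API surface:

```python
import torch

a = torch.randn(3, 3, dtype=torch.float64)
d = torch.linalg.det(a)       # main implementation lives here now
d_alias = torch.det(a)        # deprecated alias, same result
out = torch.empty((), dtype=torch.float64)
torch.linalg.det(a, out=out)  # out= is accepted by torch.linalg.det only
```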

It is worth noting that I had to skip few tests:
- `TestGradientsCuda::test_fn_gradgrad_linalg_det_cuda_float64`. This is not a regression: the functionality is broken also on master, but the test is not executed properly due to https://github.com/pytorch/pytorch/issues/53361.

And the following tests which fails only on ROCm:
- `test_variant_consistency_jit_cuda_{float64,float32}`
- `test_fn_grad_cuda_float64`

I think that the ROCm tests fail because the current linalg.det backward is unstable if the matrix has repeated singular values, see https://github.com/pytorch/pytorch/issues/53364 .

(At the moment of writing some CI jobs are still running but I believe the build will be green, since the only difference wrt the last push is the skip of the ROCm tests)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53119

Reviewed By: H-Huang

Differential Revision: D27441999

Pulled By: mruberry

fbshipit-source-id: 5eab14c4f0a165e0cf9ec626c3f4bb23359f2a9e
2021-04-05 08:45:27 -07:00
Ksenija Stanojevic
7824d8277a [ONNX] Fix export of copy_ operator (#51938) (#54870)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54870

The copy_ operator is decomposed into aten::expand_as and aten::index_put before entering the ONNX exporter.
There is a scenario where the inputs to copy_ are not of the same type; torch's copy_ performs implicit casting that is not currently reflected in the ONNX exporter. This PR adds casting inside the index_put symbolic for the case when the self tensor is not of the same type as values.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27408975

Pulled By: SplitInfinity

fbshipit-source-id: 15022703e76b9c98b02285c06b13d44f3c4a3f00
2021-03-31 21:14:32 -07:00
Shubham Bhokare
ce48b14060 [ONNX] Improve index_put symbolic to handle singular Bool updates (#53690) (#54863)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54863

Adds support for cases where the update to the index_put node is a single Bool value, such as the case shown below:

```
mask[indices] = True
```
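
A scripted sketch of that pattern (hypothetical helper, assuming export at opset >= 11):

```python
import torch

@torch.jit.script
def set_true(mask: torch.Tensor, indices: torch.Tensor) -> torch.Tensor:
    mask[indices] = True  # index_put with a single Bool update value
    return mask

print(set_true(torch.zeros(5, dtype=torch.bool), torch.tensor([1, 3])))
```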

Fixes #53507

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27408977

Pulled By: SplitInfinity

fbshipit-source-id: bcfb55b50ce76b3d4913ffbc16cdef1f98cb7a84
2021-03-31 21:12:53 -07:00
BowenBao
4c1d9e58c2 Fix copy_ export (#53046) (#53310)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53310

Fixes export of torch.copy_

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922424

Pulled By: SplitInfinity

fbshipit-source-id: f509e531f5064d2be7f55e1681813f10f17475d2
2021-03-12 02:49:26 -08:00
BowenBao
57d1df071f [ONNX] Support inplace operations on inplace indexing (#52063) (#53306)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53306

* [ONNX] Fix for sequence of mutations in blocks (#51577)

Fixes consecutive mutations in a tensor inside blocks.
Also, support append and pop in blocks.

* Support inplace operations + indexing

* Clean up old pass for remove mutations

* Add loop test

* Fixes for set attr in loops

* Removing the new jit API flag

* [ONNX] Redesign onnx pass to enable shape type dependent pattern conversion - cont (#51795)

With the introduction of ONNX shape inference, shape and type are inferred on the fly as operators get converted from ATen to ONNX when running the symbolic functions. This resolves the shape/type requirement for the symbolic functions. The pre-onnx passes, however, cannot be supported by shape inference, since at that stage the operators in the graph are still ATen operators.

This PR updates the design of the ONNX pass to enable a mechanism for capturing subgraphs of ATen operators of certain patterns and converting them later, when shape/type information of upstream operators is available.

The new design will require pre-onnx passes that need shape/type to be written in two parts, encapsulation and conversion.

    The encapsulation part will find the nodes of patterns, like how pre-onnx passes were written previously. But instead of converting the nodes, it will encapsulate them into a sub-block of a new placeholder node. This part is called before onnx pass, so it runs before calling symbolic functions.

    The conversion part will be called inside the onnx pass. In onnx pass, run_symbolic_func will be called for each node in topological order. When it reaches the placeholder node, the conversion part will be invoked. It will convert the nodes inside the sub-block based on pattern. By that time, it will have shape/type of upstream operators available. After the conversion is complete, the placeholder node will be removed, and nodes inside its sub-block converted. Run_symbolic_func will be called for these nodes, and they will be converted from ATen operator to ONNX operator.

This PR includes several other fixes, listed below.
* ~~replace helper.cpp with onnx_utils.cpp for holding utility functions.~~
* fix EraseNumberTypes on Bool type; the code was outdated, written back when the Bool type didn't exist.
* ~~enable onnx shape inference in export with parameter/initializer data.~~
* other code clean ups.
* fix insertion of identity nodes for loop opset 13 sequence output.

~~PR depends on #51603~~

* Fix after merge

* clang

* Fix clang

* Fix clang

* Fix warning message.

* Fixes for non-model param attributes

* Fix for caffe2

* Additional test

* clang

* Skip test for lower opsets

* fix clang-tidy

* Update init.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Fix for clang formatting

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922416

Pulled By: SplitInfinity

fbshipit-source-id: e7108620b39b6404c594910786c4d275fee59d84

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-03-12 02:49:11 -08:00
BowenBao
3f9c803fe8 [ONNX] Redesign onnx pass to enable shape type dependent pattern conversion - cont (#51795) (#53304)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53304

With the introduction of ONNX shape inference, shape and type are inferred on the fly as operators get converted from ATen to ONNX when running the symbolic functions. This resolves the shape/type requirement for the symbolic functions. The pre-onnx passes, however, cannot be supported by shape inference, since at that stage the operators in the graph are still ATen operators.

This PR updates the design of the ONNX pass to enable a mechanism for capturing subgraphs of ATen operators of certain patterns and converting them later, when shape/type information of upstream operators is available.

The new design will require pre-onnx passes that need shape/type to be written in two parts, encapsulation and conversion.

    The encapsulation part will find the nodes of patterns, like how pre-onnx passes were written previously. But instead of converting the nodes, it will encapsulate them into a sub-block of a new placeholder node. This part is called before onnx pass, so it runs before calling symbolic functions.

    The conversion part will be called inside the onnx pass. In onnx pass, run_symbolic_func will be called for each node in topological order. When it reaches the placeholder node, the conversion part will be invoked. It will convert the nodes inside the sub-block based on pattern. By that time, it will have shape/type of upstream operators available. After the conversion is complete, the placeholder node will be removed, and nodes inside its sub-block converted. Run_symbolic_func will be called for these nodes, and they will be converted from ATen operator to ONNX operator.

This PR includes several other fixes, listed below.
* ~~replace helper.cpp with onnx_utils.cpp for holding utility functions.~~
* fix EraseNumberTypes on Bool type; the code was outdated, written back when the Bool type didn't exist.
* ~~enable onnx shape inference in export with parameter/initializer data.~~
* other code clean ups.
* fix insertion of identity nodes for loop opset 13 sequence output.

~~PR depends on #51603~~

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D26922417

Pulled By: malfet

fbshipit-source-id: 14ed06158d539e2451c2e5e63ba1b32fb0f75095
2021-03-11 10:30:09 -08:00
BowenBao
25b18bb5d7 [ONNX] Support list remove for onnx export (#51373) (#51526)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51526

* Support aten::Delete
* Refactor prepare_inplace_ops_for_onnx into one pass.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203114

Pulled By: SplitInfinity

fbshipit-source-id: ce940bca54a30c39f4b0810f62b0e7b497508f59
2021-02-04 12:44:40 -08:00
BowenBao
6d47e2cff8 [ONNX] Fix opset 11 ConstantChunk with negative dim (#51396) (#51525)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51525

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203115

Pulled By: SplitInfinity

fbshipit-source-id: d76942f7cc5812c8a1cc16891e4956cc658283d8
2021-02-04 12:44:35 -08:00
BowenBao
84e9bff85d [ONNX] Replace optional parameters of Resize with placeholder for ops13. (#50574) (#50954)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50954

* Replace optional parameters of Resize with placeholder for ops13.

* Use common methods to handle different versions.

* Correct flake8 issue.

* Update per comments.

* Add something to trigger CI again.

* Trigger another round of CI.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050882

Pulled By: SplitInfinity

fbshipit-source-id: aea6205a1ba4a0621fe1ac9e0c7d94b92b6d8f21
2021-01-27 17:49:07 -08:00
BowenBao
1723ab53c4 [ONNX] Update Reducesum operator for opset 13 (#50532) (#50907)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50907

* update symbolic for squeeze/unsqueeze

* update c++ unsqueeze/squeeze creation

* clang format

* enable tests

* clang format

* remove prints

* remove magic number

* add helper function

* fix build issue

* update opset9 symbolic with helper function

* fix utility test

* fix prim_fallthrough opset skip

* enable reducesum opset 13

* enable embedding_bag, which contains the reducesum op

* add ReduceSum helper

* remove block_listed_operators

* remove local test code

* remove embedding_bag() in opset13 file

* remove unused import

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050888

Pulled By: SplitInfinity

fbshipit-source-id: 88307af6a7880abf94eac126ec1638e962de8c1f

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-01-27 17:48:45 -08:00
BowenBao
7e4c956955 [ONNX] Support opset13 Squeeze and Unsqueeze (#50150) (#50906)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50906

In opset 13, squeeze/unsqueeze is updated to take axes as input, instead of attribute.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050883

Pulled By: SplitInfinity

fbshipit-source-id: 7b5faf0e016d476bc75cbf2bfee6918d77e8aecd
2021-01-27 17:48:40 -08:00
Spandan Tiwari
aeefe2ce31 [ONNX] ONNX dev branch merge 01-06-2021 (#50163)
Summary:
[ONNX] ONNX dev branch merge 01-06-2021
- [ONNX] Support onnx if/loop sequence output in opset 13 - (https://github.com/pytorch/pytorch/issues/49270)
- Symbolic function for torch.square (https://github.com/pytorch/pytorch/issues/49446)
- [ONNX] Add checks in ONNXSetDynamicInputShape (https://github.com/pytorch/pytorch/issues/49783) …
- [ONNX] Enable export af aten::__derive_index (https://github.com/pytorch/pytorch/issues/49514) …
- [ONNX] Update symbolic for unfold (https://github.com/pytorch/pytorch/issues/49378) …
- [ONNX] Update the sequence of initializers in exported graph so that it is as same as inputs. (https://github.com/pytorch/pytorch/issues/49798)
- [ONNX] Enable opset 13 ops (https://github.com/pytorch/pytorch/issues/49612) …
- [ONNX] Improve error message for supported model input types in ONNX export API. (https://github.com/pytorch/pytorch/issues/50119)
- [ONNX] Add a post-pass for If folding (https://github.com/pytorch/pytorch/issues/49410)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50163

Reviewed By: pbelevich

Differential Revision: D25821059

Pulled By: SplitInfinity

fbshipit-source-id: 9f511a93d9d5812d0ab0a49d61ed0fa5f8066948
2021-01-13 13:51:21 -08:00
Samuel Marks
e6779d4357 [*.py] Rename "Arguments:" to "Args:" (#49736)
Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.

```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
    printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args:      1095
Arguments: 0336
```

It is easy enough to extend my parsers to support both variants; however, it looks like `Arguments:` is wrong anyway, as per:

  - https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md)

  - https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md)

  - https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst)

Therefore, only `Args:` is valid. This PR replaces them throughout the codebase.

PS: For related PRs, see tensorflow/tensorflow/pull/45420

PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch) organisation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736

Reviewed By: albanD

Differential Revision: D25710534

Pulled By: soumith

fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619
2020-12-28 09:34:47 -08:00
BowenBao
e5a98c5ab0 [ONNX] Remove usage of isCompleteTensor() in symbolic functions (#48162)
Summary:
`isCompleteTensor()` only returns true when both scalar type and shape are present, and all dimensions in the shape must be static. This strict requirement is unnecessary for many use cases, such as when only the rank or scalar type needs to be known.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48162

Reviewed By: malfet

Differential Revision: D25340823

Pulled By: bzinodev

fbshipit-source-id: 1fef61f44918f4339dd6654fb725b18cd58d99cf
2020-12-09 11:37:19 -08:00
David
f065087567 [ONNX] Handle dynamic input axes for prim_ConstantChunk (#48176)
Summary:
When converting a model that uses `torch.chunk`, export does not work with dynamic input axes, because `Split`'s split attribute is static in opset 11. Therefore, we convert it using `Slice` (supported in opset 11+). This PR also handles cases where the input axis cannot be divided evenly by the number of outputs: PyTorch fits the first (n-1) outputs to the same size, with the remainder going to the last one. Added a UT for it.

The existing code for `sequence` `split` cannot be leveraged here, because `start` and `end` of `Slice` are static there, but dynamic here.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48176

Reviewed By: bdhirsh

Differential Revision: D25274862

Pulled By: bzinodev

fbshipit-source-id: 7d213a7605ad128aca133c057d6dd86c65cc6de9
2020-12-03 21:59:26 -08:00
Nikita Shulga
44016e66c4 Revert D25097324: [pytorch][PR] [ONNX] Cast Gather index to Long if needed
Test Plan: revert-hammer

Differential Revision:
D25097324 (55fc0e9e53)

Original commit changeset: 42da1412d1b9

fbshipit-source-id: 491994a35a8aaf207dd5905191847171586aa4b7
2020-12-01 20:59:28 -08:00
David
55fc0e9e53 [ONNX] Cast Gather index to Long if needed (#47653)
Summary:
The ONNX Gather op requires its index to be int32 or int64. However, we don't insert this Cast in our converter, so it fails the following UT (for opset 11+):
`seq_length.type().scalarType()` is None, so `_arange_cast_helper()` cannot treat it as all-integral, and it casts everything to float. That float value is then used as a Gather index, so ORT throws an error about a float-typed index.
The fix is to cast the Gather index to Long if it is not already int/long.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47653

Reviewed By: ejguan

Differential Revision: D25097324

Pulled By: bzinodev

fbshipit-source-id: 42da1412d1b972d4d82c17fb525879c2575820c9
2020-11-30 21:36:17 -08:00
BowenBao
6a4d55f23c [ONNX] Enable onnx shape inference in export by default (#46629)
Summary:
* Enable ONNX shape inference by default.
* ONNX could potentially set the inferred shape on the output instead of in value_infos; check both to be sure.
* Small fix in symbol_map to avoid overlooking dup symbols.
* Fix scalar_type_analysis to be consistent with PyTorch scalar type promotion logic.
* Correctly handle None dim_param from ONNX inferred shape.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46629

Reviewed By: ailzhang

Differential Revision: D24900171

Pulled By: bzinodev

fbshipit-source-id: 83d37fb9daf83a2c5969d8383e4c8aac986c35fb
2020-11-13 15:09:46 -08:00
David
85c43c3da1 [ONNX] Convert _len based on the first dimension length (#47538)
Summary:
This PR is a bug fix.
As the UT shows, for multi-dimensional tensors the current conversion for _len returns the total number of elements in the tensor, but it should return the length of the first dimension, as PyTorch's _len defines.
A `Squeeze` op is needed at the end to ensure it outputs a scalar value.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47538

Reviewed By: malfet

Differential Revision: D24870717

Pulled By: bzinodev

fbshipit-source-id: c53c745baa6d2fb7cc1de55a19bd2eedb2ad5272
2020-11-12 20:25:39 -08:00
shubhambhokare1
1abe6e5ad4 [ONNX] Bool inputs to index_put updated symbolic (#46866)
Summary:
Cases with bool inputs to index_put nodes were handled for tracing purposes. This PR adds support for similar situations in scripting.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46866

Reviewed By: malfet

Differential Revision: D24870818

Pulled By: bzinodev

fbshipit-source-id: 2d75ca6f5f4b79d8c5ace337633c5aed3bdc4be7
2020-11-11 12:45:31 -08:00
BowenBao
129b41226e [ONNX] Support nd mask index in opset >= 11 (#45252)
Summary:
Fixes below pattern for opset >= 11

`return tensor[tensor > 0]`

where rank of `tensor` > 1.
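
A minimal module of this shape (illustrative):

```python
import torch

class MaskSelect(torch.nn.Module):
    def forward(self, x):
        return x[x > 0]  # boolean mask of rank > 1

torch.onnx.export(MaskSelect(), torch.randn(2, 3, 4),
                  "mask_select.onnx", opset_version=11)
```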

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45252

Reviewed By: VitalyFedyunin

Differential Revision: D24116945

Pulled By: bzinodev

fbshipit-source-id: 384026cded1eb831bb5469e31ece4fcfb6ae8f2a
2020-10-29 13:32:59 -07:00
BowenBao
b28b5d3c68 [ONNX] Update squeeze test for opset 9 (#45369)
Summary:
Only under static axes does opset 9 support no-op squeeze when the dim is not 1.
Updated the test case where it was setting dynamic axes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45369

Reviewed By: anjali411

Differential Revision: D24280180

Pulled By: bzinodev

fbshipit-source-id: d7cda88ab338a1c41a68052831dcebe739a3843c
2020-10-14 12:53:13 -07:00
Ksenija Stanojevic
6ca03aeb96 [ONNX] Fix flatten operator (#45632)
Summary:
Even when dim is None, there are cases where flatten can be exported.
Also enable test_densenet in scripting mode.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45632

Reviewed By: VitalyFedyunin

Differential Revision: D24116994

Pulled By: bzinodev

fbshipit-source-id: 76da6c073ddf79bba64397fd56b592de850034c4
2020-10-14 12:44:25 -07:00
BowenBao
3da4cea658 [ONNX] Add dim_param support in export with onnx shape inference (#44920)
Summary:
* Support propagating `dim_param` in ONNX by encoding as `ShapeSymbol` in `SymbolicShape` of outputs. If export is called with `dynamic_axes` provided, shape inference will start with these axes set as dynamic.
* Add new test file `test_pytorch_onnx_shape_inference.py`, reusing all test cases from `test_pytorch_onnx_onnxruntime.py`, but focus on validating shape for all nodes in graph. Currently this is not enabled in the CI, since there are still quite some existing issues and corner cases to fix. The test is default to run only at opset 12.
* Bug fixes, such as div, _len, and peephole.cpp passes for PackPadded, and LogSoftmaxCrossEntropy.
* This PR depends on existing PRs such as #44332.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44920

Reviewed By: eellison

Differential Revision: D23958398

Pulled By: bzinodev

fbshipit-source-id: 00479d9bd19c867d526769a15ba97ec16d56e51d
2020-09-30 21:56:24 -07:00
Negin Raoof
6b42ca2d69 [ONNX] Update embedding_bag export (#44693)
Summary:
Export of embedding bag with dynamic list of offsets.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44693

Reviewed By: malfet

Differential Revision: D23831980

Pulled By: bzinodev

fbshipit-source-id: 3eaff1a0f20d1bcfb8039e518d78c491be381e1a
2020-09-30 13:36:40 -07:00
Negin Raoof
95a97e51b5 [ONNX] Improve scripting inplace indexing ops (#44351)
Summary:
Fix a couple of issues with scripting inplace indexing in the prepare_inplace_ops_for_onnx pass.
1- Tracing index copy (cases like x[1:3] = data) already applies broadcasting on the rhs if needed. The broadcasting node (aten::expand) was missing in scripting cases.

2- Inplace indexing with ellipsis (aten::copy_) is replaced with aten::index_put and then handled with slice+select in this pass.
Support for negative indices for this op was added.

Shape inference is also enabled for scripting tests using new JIT API.
A few more tests are enabled for scripting.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44351

Reviewed By: ezyang

Differential Revision: D23880267

Pulled By: bzinodev

fbshipit-source-id: 78b33444633eb7ae0fbabc7415e3b16001f5207f
2020-09-28 00:32:36 -07:00
neginraoof
4005afe94b [ONNX] Update narrow for dynamic inputs (#44039)
Summary:
Update narrow for dynamic inputs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44039

Reviewed By: mruberry

Differential Revision: D23742215

Pulled By: bzinodev

fbshipit-source-id: 0d58d2fe996f91a124af988a9a21ee433e842d07
2020-09-27 15:52:57 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
BowenBao
43406e218a [ONNX] Update ONNX shape inference (#43929)
Summary:
* Support sequence type (de)serialization, enables onnx shape inference on sequence nodes.
* Fix shape inference with block input/output: e.g. Loop and If nodes.
* Fix bugs in symbolic discovered by coverage of onnx shape inference.
* Improve debuggability: added more jit logs. For simplicity, the default log level (when jit logging is enabled) will not dump IR graphs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43929

Reviewed By: albanD

Differential Revision: D23674604

Pulled By: bzinodev

fbshipit-source-id: ab6aacb16d0e3b9a4708845bce27c6d65e567ba7
2020-09-14 15:36:19 -07:00
Ksenija Stanojevic
f7cfbac89b [ONNX] Update len symbolic (#43824)
Summary:
Update len symbolic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43824

Reviewed By: izdeby

Differential Revision: D23575765

Pulled By: bzinodev

fbshipit-source-id: 0e5c8c8d4a5297f65e2dc43168993350f784c776
2020-09-14 15:00:44 -07:00
shubhambhokare1
da11d932bc [ONNX] Update arange op to support out argument (#43777)
Summary:
Update arange op to support out argument

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43777

Reviewed By: albanD

Differential Revision: D23674583

Pulled By: bzinodev

fbshipit-source-id: 6fb65e048c6b1a551569d4d2a33223522d2a960c
2020-09-14 14:56:17 -07:00
neginraoof
539d029d8c [ONNX] Fix split export using slice (#43670)
Summary:
Fix for exporting split with fixed output shape using slice.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43670

Reviewed By: houseroad

Differential Revision: D23420318

Pulled By: bzinodev

fbshipit-source-id: 09c2b58049fe32dca2f2977d91dd64de6ee9a72f
2020-09-04 10:52:44 -07:00
Spandan Tiwari
1a21c92364 [ONNX] Update in scatter ONNX export when scalar src has different type (#43440)
Summary:
`torch.scatter` allows `src` to be of a different type when `src` is a scalar. This requires an explicit cast op to be inserted in the ONNX graph, because ONNX `ScatterElements` does not allow different types. This PR updates the export of `torch.scatter` with this logic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43440

Reviewed By: hl475

Differential Revision: D23352317

Pulled By: houseroad

fbshipit-source-id: c9eeddeebb67fc3c40ad01def134799ef2b4dea6
2020-08-27 16:45:37 -07:00
BowenBao
8efa898349 [ONNX] Export split_to_sequence as slice when output number is static (#42744)
Summary:
Optimize the exported graph to emit Slice nodes for aten::split when the number of split outputs is fixed. Previously, in some cases these were exported as onnx::SplitToSequence, which is dynamic in tensor output count.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42744

Reviewed By: houseroad

Differential Revision: D23172465

Pulled By: bzinodev

fbshipit-source-id: 11e432b4ac1351f17e48356c16dc46f877fdf7da
2020-08-22 09:11:25 -07:00
BowenBao
da70976e66 [ONNX] Add support for operator add between tensor list (#41888)
Summary:
E.g.
```python
outs = []
outs += [torch.randn(3,4)]
outs = outs + [torch.randn(4,5), torch.randn(5,6)]
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41888

Reviewed By: houseroad

Differential Revision: D23172880

Pulled By: bzinodev

fbshipit-source-id: 93865106e3de5908a993e0cfa82f626ba94dab7e
2020-08-20 22:38:23 -07:00
Yael Dekel
3c5e3966f4 [ONNX] Squeeze operator should give an error when trying to apply to a dimension with shape > 1 (#38476)
Summary:
The ONNX spec for the Squeeze operator:

> Remove single-dimensional entries from the shape of a tensor. Takes a parameter axes with a list of axes to squeeze. If axes is not provided, all the single dimensions will be removed from the shape. If an axis is selected with shape entry not equal to one, an error is raised.

Currently, as explained in issue https://github.com/pytorch/pytorch/issues/36796, it is possible to export such a model to ONNX, and this results in an exception from ONNX runtime.

Fixes https://github.com/pytorch/pytorch/issues/36796.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/38476

Reviewed By: hl475

Differential Revision: D22158024

Pulled By: houseroad

fbshipit-source-id: bed625f3c626eabcbfb2ea83ec2f992963defa19
2020-08-17 17:41:46 -07:00
Spandan Tiwari
d83cc92948 [ONNX] Add support for scalar src in torch.scatter ONNX export. (#42765)
Summary:
`torch.scatter` supports two overloads – one where the `src` input tensor is the same size as the `index` tensor, and a second where `src` is a scalar. Currently, the ONNX exporter only supports the first overload. This PR adds export support for the second overload of `torch.scatter`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42765

Reviewed By: hl475

Differential Revision: D23025189

Pulled By: houseroad

fbshipit-source-id: 5c2a3f3ce3b2d69661a227df8a8e0ed7c1858dbf
2020-08-10 11:45:42 -07:00
BowenBao
a6c8730045 [ONNX] Add preprocess pass for onnx export (#41832)
Summary:
In `_jit_pass_onnx`, symbolic functions are called for each node for conversion. However, there are nodes that cannot be converted without additional context. For example, the number of outputs from split (and whether it is static or dynamic) is unknown until the point where it is unpacked by the listUnpack node. This pass does a preprocess and prepares the nodes so that enough context can be received by the symbolic function.
* After preprocessing, `_jit_pass_onnx` should have enough context to produce valid ONNX nodes, instead of half-baked nodes that rely on fixes from later postpasses.
* `_jit_pass_onnx_peephole` should be a pass that does ONNX-specific optimizations instead of ONNX-specific fixes.
* Producing more valid ONNX nodes in `_jit_pass_onnx` enables better utilization of the ONNX shape inference https://github.com/pytorch/pytorch/issues/40628.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41832

Reviewed By: ZolotukhinM

Differential Revision: D22968334

Pulled By: bzinodev

fbshipit-source-id: 8226f03c5b29968e8197d242ca8e620c6e1d42a5
2020-08-06 20:34:12 -07:00
Ksenija Stanojevic
9b0393fcf1 [ONNX]Fix export of flatten (#40418)
Summary:
The shape is passed to _reshape_to_tensor as a Constant, so the input shape cannot be inferred when the model is exported with dynamic axes set. Instead of a Constant, pass the output of a Shape-Slice-Concat subgraph to compute the shape for the Reshape node in the _reshape_to_tensor function.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40418

Reviewed By: hl475

Differential Revision: D22480127

Pulled By: houseroad

fbshipit-source-id: 11853adb6e6914936871db1476916699141de435
2020-07-10 13:06:25 -07:00
Yael Dekel
3fa0b1e325 ONNX: fix bug in export of cumsum operator (#40044)
Summary:
The "cast" operator is currently added after the cumsum operator, but it should be added before, since torch.cumsum supports more types than ONNX (specifically, bool).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40044

Reviewed By: hl475

Differential Revision: D22158013

Pulled By: houseroad

fbshipit-source-id: e6c706572b9b8de880d4d71eaa132744ef01ad4d
2020-06-22 10:11:35 -07:00
Mike Ruberry
32bf63890b Revert D21992267: [pytorch][PR] [ONNX] Export linspace
Test Plan: revert-hammer

Differential Revision:
D21992267

Original commit changeset: 3a4093703570

fbshipit-source-id: 09c8cddd8cac3bb1cfa2b5f1abf2af3c45d8a3b1
2020-06-11 14:46:02 -07:00
neginraoof
7957d83498 [ONNX] Export linspace (#39403)
Summary:
Adding linspace symbolic for opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39403

Reviewed By: hl475

Differential Revision: D21992267

Pulled By: houseroad

fbshipit-source-id: 3a40937035703754045bb5e22ac5edf721453c25
2020-06-11 12:08:19 -07:00
BowenBao
7be9796cc4 [ONNX] Support clamp_min and clamp_max (#37872)
Summary:
clamp_min is used in `torch.nn.functional.normalize`. Update symbolic_opset11 to support it using the updated Clip in ONNX opset 11.
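
A minimal sketch of a model that reaches clamp_min (illustrative):

```python
import torch

class Normalize(torch.nn.Module):
    def forward(self, x):
        # F.normalize clamps the norm with clamp_min before dividing
        return torch.nn.functional.normalize(x, p=2.0, dim=1)

torch.onnx.export(Normalize(), torch.randn(4, 8), "normalize.onnx",
                  opset_version=11)
```
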
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37872

Reviewed By: hl475

Differential Revision: D21440450

Pulled By: houseroad

fbshipit-source-id: a59cbec3f4d00c3f6654da6a747fbfca59d618f1
2020-05-07 04:39:46 -07:00
neginraoof
e56ba8481e [ONNX] fix size for opset 11 (#35984)
Summary:
Fixing size, as the aten op has been updated to support 0 inputs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35984

Reviewed By: hl475

Differential Revision: D20858214

Pulled By: houseroad

fbshipit-source-id: 8ad0a0174a569455e89da6798eed403c8b162a47
2020-04-05 18:58:54 -07:00
neginraoof
60a3e82c4e [ONNX] Fix for constant folding: Slice, Added ReduceL1 and ReduceL2 (#35280)
Summary:
1- Added support for constant folding onnx::ReduceL1 and onnx::ReduceL2
2- Fixed constant folding for slice as onnx::Slice opset 11 supports negative axes and indices
3- Updated export of select opset 11
4- Separated test environment for test_utility_functions as environment variables could be overwritten by caffe2 quantization tests on CI
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35280

Reviewed By: hl475

Differential Revision: D20626140

Pulled By: houseroad

fbshipit-source-id: 39667c7852eeaa97d9da23f53da52760d3670ecf
2020-04-01 04:47:47 -07:00
julianmack
ad1091f753 Fixes default dtype value for onnx hardtanh export (opset11) (#35467)
Summary:
One-line fix to lara-hdr's PR https://github.com/pytorch/pytorch/pull/30169.

Default `dtype` value should be set when `dtype is None` rather than when `dtype is not None`.

I didn't open an issue for this since it's such a small change, but I have been using it locally in order to export a model with opset 11 (opset 10 still works).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35467

Differential Revision: D20686048

Pulled By: mruberry

fbshipit-source-id: 726a5f9c0711c7a79b171fe98b602cdef27f9b31
2020-03-27 19:15:42 -07:00
Negin Raoof
9823662b43 [ONNX] Export split with list of sizes (#33161)
Summary:
Exporting Split with a dynamic list of split_sizes is not supported.
This PR enables export using onnx SplitToSequence + SequenceAt
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33161

Reviewed By: hl475

Differential Revision: D19860152

Pulled By: houseroad

fbshipit-source-id: 300afedc22b01923efb23acd1a3627aa146bb251
2020-02-14 12:46:33 -08:00
Negin Raoof
6249d7302b [ONNX] Fix export for avg_pool with default stride (#33017)
Summary:
If using nn.functional avg_pool, stride is an optional arg. If not provided, it is set to kernel_size.
This PR fixes the export of avg_pool with default stride.
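In other words (illustrative):
```
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
# stride omitted: it defaults to kernel_size, and the exporter must mirror
# that fallback instead of emitting an empty strides attribute.
y = F.avg_pool2d(x, kernel_size=2)
```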
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33017

Reviewed By: hl475

Differential Revision: D19759604

Pulled By: houseroad

fbshipit-source-id: b0352db6fbaf427f4cff9ba8a942efdeb39b6f02
2020-02-07 22:46:46 -08:00
Brian Stark
55c382e62b Fixed access to element in size tensor for scripting (#32652)
Summary:
When using scripting, there was an error when attempting to access a
specific element of the size tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32652

Reviewed By: hl475

Differential Revision: D19610726

Pulled By: houseroad

fbshipit-source-id: bca49927bbe71dbe7e7d7edf301908fe79e089b5
2020-01-29 18:33:46 -08:00
Junjie Bai
5be8dac329 Remove non-ascii character from torch/onnx/symbolic_opset11.py
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31814

Reviewed By: houseroad

Differential Revision: D19270742

Pulled By: bddppq

fbshipit-source-id: 80800d588e63701d6e1b5838d7ada993f0246a81
2020-01-02 20:54:32 -08:00
BowenBao
c4f10e0fe7 Renaming scales parameter for interpolate (#31526)
Summary:
PR separated from https://github.com/pytorch/pytorch/pull/31274.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31526

Reviewed By: zou3519

Differential Revision: D19221931

Pulled By: gchanan

fbshipit-source-id: 81958a9910867ac9d62f2b47abc49384526c4e51
2020-01-02 08:19:30 -08:00
neginraoof
0b57b383b1 Im2col export (#30972)
Summary:
Added im2col to opset 11.
This symbolic is used to export torch.nn.Unfold
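A minimal export that exercises the new symbolic (file name arbitrary):
```
import torch

model = torch.nn.Unfold(kernel_size=(2, 3))
x = torch.randn(1, 2, 4, 5)
torch.onnx.export(model, x, "unfold.onnx", opset_version=11)
```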
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30972

Reviewed By: hl475

Differential Revision: D18946921

Pulled By: houseroad

fbshipit-source-id: 13dd0cbae899700df32fd74d6dff1f29033a2b4c
2019-12-20 09:45:45 -08:00
Lara
1d5af9599d Update ONNX Flatten to accept negative indices in opset 11 (#30751)
Summary:
Update ONNX Flatten to accept negative indices in opset 11.
With this change, some cases of flatten do not rely on the input rank being available.
Fixes: https://github.com/pytorch/pytorch/issues/30512.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30751

Reviewed By: hl475

Differential Revision: D18946904

Pulled By: houseroad

fbshipit-source-id: a6fa30a9182fff92211e505a19325525c6112f19
2019-12-12 15:27:54 -08:00
BowenBao
6ab2d1b1a4 Partially support tensor lists in loop/concat/stack (#30126)
Summary:
This is a follow-up PR after https://github.com/pytorch/pytorch/pull/29136 ~~and https://github.com/pytorch/pytorch/pull/29171~~

ONNX::Loop does not support Sequence type as loop-carried dependencies. Only tensors are supported.
This PR adds a pass that converts Sequence loop-carried dependencies to scan_outputs.
In opset 11, only the below pattern is supported.
```
PTIR graph:
 ...
 %res.1 : Tensor[] = prim::ListConstruct()
 %res : Tensor[] = prim::Loop(%11, %22, %res.1)
   block0(%i.1 : Tensor, %res.6 : Tensor[]):
     ...
     %res.3 : Tensor[] = aten::append(%res.6, %17)
     -> (%22, %res.3)
 return (%res)

ONNX graph:
 ...
 %res : Tensor = onnx::Loop(%11, %22)
   block0(%i.1 : Tensor):
     ...
     -> (%22, %17)
 %res_seq : Tensor[] = onnx::SplitToSequence[keepdims=0](%res)
 return (%res_seq)
```
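A scripted function of roughly this shape produces the pattern above (illustrative):
```
from typing import List
import torch

@torch.jit.script
def collect(x: torch.Tensor, n: int) -> List[torch.Tensor]:
    res: List[torch.Tensor] = []        # prim::ListConstruct
    for i in range(n):                  # prim::Loop
        res.append(x + i)               # aten::append
    return res
```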
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30126

Reviewed By: hl475

Differential Revision: D18946880

Pulled By: houseroad

fbshipit-source-id: 67ee65700513e8a942344a3d647e2e73c19ee3d2
2019-12-11 21:24:41 -08:00
Lara
97c1e90f46 ONNX Interpolate Add Scales Params (#28324)
Summary:
Fix for : https://github.com/pytorch/pytorch/issues/27176
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28324

Reviewed By: hl475

Differential Revision: D18309133

Pulled By: houseroad

fbshipit-source-id: 348bb41393442c6b107d88fc2cd3224e0afa3ccf
2019-12-11 20:09:15 -08:00
BowenBao
63f1b780ba Support exporting aten::copy_ and aten::index_put to ONNX opset 11 (#26941)
Summary:
- [x] Add more comments and refactor the logic of `ReshapeToAdvancedIndexingFormat`
- [x] Add more description here. Cases that are/aren't supported, and how they are supported.
- [x] Need to merge this PR https://github.com/pytorch/pytorch/issues/27186 to enable testing inplace operators.

We now support exporting aten::copy_ and aten::index_put to ONNX.
Here's a breakdown of the different cases in PyTorch code.

```
# Case 1: Scalar Indices
x[0, 1, 2] = data

# Case 2: Slice Indices
x[1:3, :, ::2] = data

# Case 3: Ellipsis Indices
x[..., 0] = data

# Case 4: Tensor Indices
ind1 = torch.tensor([0, 2])
ind2 = torch.tensor([1, 1])
x[ind1, ind2] = data

# Case 5: Mixing all the above cases
ind1 = torch.tensor([0, 2])
ind2 = torch.tensor([1, 1])
x[1:3, ind1, ind2, ..., 3] = data
```

Limitations:

Tensor indices must be consecutive and 1-d.

```
# Supported
ind1 = torch.tensor([0, 2])
ind2 = torch.tensor([1, 1])
x[ind1, ind2] = data

# Not supported
ind1 = torch.tensor([0, 2])
ind2 = torch.tensor([1, 1])
ind3 = torch.tensor([[0], [1]])
x[ind1, :, ind2] = data
x[ind3] = data
```

Negative indices are not supported.
```
# Not supported
x[-1] = data
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26941

Differential Revision: D17951030

Pulled By: houseroad

fbshipit-source-id: 4357777072f53aa0bc4b297aa1ee53457a7f8dec
2019-12-06 22:48:46 -08:00
Michael Suo
62b10721fb Actually make flake8 do something (#30892)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30892

Fixes all outstanding lints and actually installs a properly configured
flake8

Test Plan: Imported from OSS

Differential Revision: D18862825

Pulled By: suo

fbshipit-source-id: 08e9083338a7309272e17bb803feaa42e348aa85
2019-12-06 17:50:50 -08:00
Brian Wignall
e7fe64f6a6 Fix typos (#30606)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30606

Differential Revision: D18763028

Pulled By: mrshenli

fbshipit-source-id: 896515a2156d062653408852e6c04b429fc5955c
2019-12-02 20:17:42 -08:00
BowenBao
0febff36ac Export dynamic unbind/split and __getitem__ (#29136)
Summary:
In ONNX opset 11, a series of sequence ops were added. Operators that are related to Tensor[] in PyTorch can be exported using these sequence ops.
In this PR, unbind/split (which produce Tensor[]) and __getitem__ (which takes Tensor[] as input) are exported correctly to ONNX opset 11.
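For example, a scripted function like the following (illustrative) now round-trips through the sequence ops:
```
import torch

@torch.jit.script
def first_slice(x: torch.Tensor) -> torch.Tensor:
    parts = torch.unbind(x, 0)  # produces Tensor[]
    return parts[0]             # __getitem__ on Tensor[]
```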
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29136

Reviewed By: hl475

Differential Revision: D18309222

Pulled By: houseroad

fbshipit-source-id: be12c96bf8d0a56900683ef579f1c808c0a1af21
2019-11-26 06:54:06 -08:00
Lara
bbb3c415c9 ONNX Hardtanh Opset 11 Support (#30169)
Summary:
Add support for hardtanh, which was previously blacklisted for opset 11.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30169

Reviewed By: hl475

Differential Revision: D18619552

Pulled By: houseroad

fbshipit-source-id: 0c1bfb0a53d1dd2327c5db7afd03a90482abb9fe
2019-11-20 18:59:00 -08:00
Lara Haidar
45024e7a35 Support Exporting Bitshift to ONNX (#28210)
Summary:
Support exporting left/right bitshifts to ONNX for all opset versions.

ONNX has a BitShift operator in opset 11, but it only supports unsigned integers, so it cannot cover PyTorch's signed types (uint8 is the only unsigned integer type in PyTorch).
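A quick sanity check of the export path (module and file names arbitrary):
```
import torch

class Shift(torch.nn.Module):
    def forward(self, x):
        return x << 2, x >> 1

x = torch.arange(8, dtype=torch.int64)
torch.onnx.export(Shift(), x, "shift.onnx", opset_version=11)
```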
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28210

Reviewed By: hl475

Differential Revision: D18575512

Pulled By: houseroad

fbshipit-source-id: 74161db67f599996a0614981edcc171af6780d21
2019-11-19 09:25:50 -08:00
BowenBao
fbabf72829 Add ONNX support for Logdet (#29767)
Summary:
Exported as combination of ONNX::Log and ONNX::Det.
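Sketch of the composition (the function name is illustrative, not necessarily the exact symbolic):
```
def logdet_symbolic(g, self):
    # logdet(A) == log(det(A)), composed from the two ONNX ops
    return g.op("Log", g.op("Det", self))
```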
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29767

Reviewed By: hl475

Differential Revision: D18499762

Pulled By: houseroad

fbshipit-source-id: e6f2298635a995f01b2913d8958b5e1ca9d04058
2019-11-15 20:27:43 -08:00
Lara
2acca09e1a Add Support for ONNX scripting Interpolate with missing shape (#29489)
Summary:
- Add support for the case where interpolate is exported with missing shape information in scripting
- Add warnings
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29489

Reviewed By: hl475

Differential Revision: D18438872

Pulled By: houseroad

fbshipit-source-id: d01f833bec0cc4e881ddc18e7054d22f54e9886b
2019-11-11 21:20:14 -08:00
Negin Raoof
ebc216a076 Opset 11 updates (#28225)
Summary:
This PR contains:
1- pad updates for the opset 11 symbolic
2- Updated avg_pool for opset11
3- TopK updates for opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28225

Reviewed By: hl475

Differential Revision: D18282928

Pulled By: houseroad

fbshipit-source-id: aff2cabca9a155a9b475e35fed69a678544d6669
2019-11-04 12:16:12 -08:00
Lara
d762ad09df Enable Interpolate Tests for ONNX Opset 11 (#28560)
Summary:
- Enable tests for Interpolate in opset 11 for nearest and linear2d modes (linear1d/3d not implemented yet)
- Fix bugs found after enabling tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28560

Reviewed By: hl475

Differential Revision: D18110680

Pulled By: houseroad

fbshipit-source-id: 7f8811e40dc5cedaba6389460dcca52daa048f5f
2019-10-24 14:21:13 -07:00
Negin Raoof
4f70b5a4de Export det (#26958)
Summary:
Added symbolic to export det in opset 11
Updating ONNX submodule is required for det export
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26958

Reviewed By: hl475

Differential Revision: D17844887

Pulled By: houseroad

fbshipit-source-id: 224ae3ff82939dc7ae8584c5a30a31fe6afa05f6
2019-10-22 13:33:15 -07:00
neginraoof
95922c90b5 Export update for arange and _dim_arange (#26875)
Summary:
Export arange and _dim_arange using onnx::Range in opset 11
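The core of the mapping, sketched for the single-argument arange(end) form (names illustrative):
```
import torch

def arange_symbolic(g, end):
    # aten::arange(end) maps onto onnx::Range(start=0, limit=end, delta=1)
    start = g.op("Constant", value_t=torch.tensor(0, dtype=torch.int64))
    delta = g.op("Constant", value_t=torch.tensor(1, dtype=torch.int64))
    return g.op("Range", start, end, delta)
```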
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26875

Reviewed By: hl475

Differential Revision: D17623848

Pulled By: houseroad

fbshipit-source-id: 41f0066ca1c42882ccc051a3ee5448dca25ee5d2
2019-10-17 13:55:45 -07:00
Lara
735463f210 ONNX Export Scripted Interpolate Op (#27566)
Summary:
We currently support exporting traced interpolate ops to ONNX.

Scripting the interpolate op invokes aten::__interpolate in the Torch IR (instead of aten::upsample_[mode][dim]d), which we did not previously support.
This PR implements the ONNX symbolic for __interpolate() to support exporting interpolate in scripting scenarios.

Related open issue: https://github.com/pytorch/pytorch/issues/25807
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27566

Reviewed By: hl475

Differential Revision: D17817731

Pulled By: houseroad

fbshipit-source-id: e091793df503e2497f24821cf2954ff157492c75
2019-10-16 11:22:22 -07:00
BowenBao
ab50abca5c Export masked_select and masked_scatter in opset 11 (#25949)
Summary:
- masked_select is exported as ONNX::GatherND (see the sketch after this list)
- masked_scatter is exported as ONNX::ScatterND
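A rough composition for the masked_select path, assuming the mask has already been broadcast to self's shape (a sketch, not the exact symbolic):
```
def masked_select_symbolic(g, self, mask):
    # NonZero yields indices of shape (rank, num_true); GatherND expects
    # (num_true, rank), hence the transpose.
    index = g.op("NonZero", mask)
    index = g.op("Transpose", index, perm_i=[1, 0])
    return g.op("GatherND", self, index)
```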
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25949

Reviewed By: hl475

Differential Revision: D17465489

Pulled By: houseroad

fbshipit-source-id: 4c3732617733ca2024a5e306ffa9f6bfcf9725d5
2019-10-15 21:09:37 -07:00
Negin Raoof
3d2c90131a opset 11 updates (#27578)
Summary:
Opset 11 updates:
- Enabled ORT tests for updated ops in opset 11
- Updated index_copy and index_fill symbolic for opset 11 to modify onnx::Scatter -> onnx::ScatterElements
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27578

Reviewed By: hl475

Differential Revision: D17852462

Pulled By: houseroad

fbshipit-source-id: c88747804054d0f3455f2c58fd1d8725e0b2f803
2019-10-11 16:18:40 -07:00
Negin Raoof
d93fc64776 Update export for topk and sort (#25739)
Summary:
Updated export for topk and sort as part of opset 11.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25739

Reviewed By: hl475

Differential Revision: D17467131

Pulled By: houseroad

fbshipit-source-id: 653be138455728ec8e9bb81ae63dd7ce0c4d0793
2019-10-02 12:20:30 -07:00
Lara Haidar
f4f6d8dda5 Fix ONNX Interpolate
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27179

Reviewed By: hl475

Differential Revision: D17698364

Pulled By: houseroad

fbshipit-source-id: 8fddd1c13e7af026962cf2d9c05fd7c957d8526e
2019-10-02 01:17:46 -07:00
Lara
d396c7332a Update ONNX Export for Interpolate in Opset 11 (#26778)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26778

- Add support for linear and cubic interpolate in opset 11.
- Add support for 1d and 3d interpolate in nearest mode for opset 7 and 8.
- Add tests for all cases of interpolate in ORT tests (nearest/linear/cubic, 1d/2d/3d, upsample/downsample).
Original PR resolved: https://github.com/pytorch/pytorch/pull/24805

Reviewed By: hl475

Differential Revision: D17564911

Pulled By: houseroad

fbshipit-source-id: 591e1f5b361854ace322eca1590f8f84d29c1a5d
2019-09-25 05:43:20 -07:00
Edward Yang
1bb895e1c1 Revert D17330801: [pytorch][PR] Update ONNX Export for Interpolate in Opset 11
Test Plan: revert-hammer

Differential Revision:
D17330801

Original commit changeset: 1bdefff9e72f

fbshipit-source-id: dff07477403170c27260f736ab6e6010f0deca9f
2019-09-24 18:56:45 -07:00