Commit Graph

520 Commits

Author SHA1 Message Date
lezcano
46a81c8db7 Deprecate .mT,.T,.mH,.H on 0D tensors (#92143)
As discussed with @ngimel, this is not only not documented,
but also an unnecessary edge case. See https://github.com/pytorch/pytorch/pull/90463#discussion_r1064807197
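A minimal sketch of the now-deprecated usage (illustrative, not from the PR):

```
import torch

x = torch.tensor(3.14)  # 0-D tensor
y = x.T  # after this change, emits a deprecation warning; the result is unchanged
```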
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92143
Approved by: https://github.com/ngimel
2023-01-17 16:54:35 +00:00
Lei Mao
9cf8434776 [ONNX] Raise Unsupported for Grid Sample with volumetric 5D input (#92212)
Fixes #92209
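
A hypothetical repro of the newly rejected case (shapes chosen for illustration):

```
import torch
import torch.nn.functional as F

class M(torch.nn.Module):
    def forward(self, x, grid):
        return F.grid_sample(x, grid, align_corners=False)

x = torch.randn(1, 1, 4, 4, 4)     # volumetric (5-D) input
grid = torch.randn(1, 4, 4, 4, 3)
# torch.onnx.export(M(), (x, grid), "m.onnx")  # now raises an explicit unsupported error
```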

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92212
Approved by: https://github.com/BowenBao
2023-01-16 03:34:05 +00:00
AllenTiTaiWang
e3ed55d483 [ONNX] Add aten::zero support (#91731)
Fixes #90268

When `tensor.zero_()` is applied to a slice in place, the exported graph actually uses `aten::zero` instead.
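A minimal sketch of that pattern (hypothetical module):

```
import torch

class M(torch.nn.Module):
    def forward(self, x):
        x[:, 1:].zero_()  # in-place zero on a slice shows up as aten::zero in the graph
        return x

# torch.onnx.export(M(), (torch.randn(2, 3),), "zero.onnx")
```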
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91731
Approved by: https://github.com/BowenBao
2023-01-07 11:07:54 +00:00
PyTorch MergeBot
08a378a286 Revert "[ONNX] Add aten::zero support (#91731)"
This reverts commit ff23508c0d.

Reverted https://github.com/pytorch/pytorch/pull/91731 on behalf of https://github.com/clee2000 due to failing test_correct_module_names ff23508c0d https://github.com/pytorch/pytorch/actions/runs/3859079162/jobs/6578419644
2023-01-06 23:57:57 +00:00
AllenTiTaiWang
ff23508c0d [ONNX] Add aten::zero support (#91731)
Fixes #90268

When `tensor.zero_()` is applied to a slice in place, the exported graph actually uses `aten::zero` instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91731
Approved by: https://github.com/BowenBao
2023-01-06 22:48:54 +00:00
BowenBao
66745831d7 [ONNX] Support constant 'aten::__contains__' (#91660)
#84624 introduces an update to the `torch.norm` [dispatch logic](eaa43d9f25/torch/functional.py (L1489)), which now depends on `layout`, resulting in regressions when exporting related operators from TorchScript.

This PR resolves the regression by partially supporting a subset of use cases of the `prim::layout` (only `torch.strided`) and `aten::__contains__` (only constants) operators. Properly supporting other layouts, e.g. `torch.sparse_coo`, requires much more effort: extending JIT types and supporting the related family of ops like `aten::to_sparse`. That is out of the scope of this PR.
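
For instance, a scripted `torch.norm` call now hits this path (illustrative sketch):

```
import torch

class Norm(torch.nn.Module):
    def forward(self, x):
        # after #84624 this dispatch consults x.layout, emitting prim::layout
        # and a constant aten::__contains__ in the scripted graph
        return torch.norm(x)

# torch.onnx.export(torch.jit.script(Norm()), (torch.randn(3, 3),), "norm.onnx")
```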

Fixes #83661
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91660
Approved by: https://github.com/justinchuby, https://github.com/kit1980
2023-01-06 01:39:32 +00:00
BowenBao
1b2c59ad24 [ONNX] Introduce ONNX reference evaluator for verification (#89808)
The reference evaluator requires ONNX >= 1.13. Running it in CI is blocked by the inability to bump the onnx submodule version, as in #83201. Local tests pass.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89808
Approved by: https://github.com/justinchuby
2022-12-10 01:29:12 +00:00
AllenTiTaiWang
41bfa49db9 [ONNX] Add src/index dynamic axes support for aten::scatter_add (#90090)
Extends #89787. Per the answer in https://github.com/onnx/onnx/issues/4672, dynamically capturing the shape of `index` lets the converter further support this op.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90090
Approved by: https://github.com/BowenBao
2022-12-06 07:56:20 +00:00
AllenTiTaiWang
b2f340557a [ONNX] Supports scatter_add with different static shape of src and index (#89787)
Prior to this change, the converter didn't support `scatter_add` with different shapes for `src` and `index`, although [PyTorch supports it](https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html#torch.Tensor.scatter_add_) by accommodating `src` to the shape of `index`. This PR adds `onnx::Slice` to adjust the shape of `src` when static, mismatched shapes are found. However, if both shapes (`src` and `index`) are dynamic, ONNX expects them to be the same shape per the spec. More ScatterElements details at https://github.com/onnx/onnx/issues/4672
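
A small example of the mismatched static shapes in question (values are illustrative):

```
import torch

src = torch.ones(3, 5)
index = torch.tensor([[0, 1, 2]])                    # shape (1, 3), smaller than src
out = torch.zeros(3, 5).scatter_add(0, index, src)   # PyTorch trims src to index's shape
```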
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89787
Approved by: https://github.com/BowenBao
2022-12-01 18:25:22 +00:00
lezcano
1d6a188d08 Reland Dispatch torch.norm to linalg.vector_norm and linalg.matrix_norm (#81761) (#84624)
Reland https://github.com/pytorch/pytorch/pull/81761

Differential Revision: [D39332292](https://our.internmc.facebook.com/intern/diff/D39332292)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84624
Approved by: https://github.com/kit1980
2022-11-22 07:53:24 +00:00
Kazuaki Ishizaki
088f2fa567 Fix typos in messages under test (#89121)
This PR fixes typos in messages in `.cpp` and `.py` files under the test directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89121
Approved by: https://github.com/mruberry, https://github.com/kit1980
2022-11-17 01:55:03 +00:00
mindest
9fe36a0214 [ONNX] Extra support for bernoulli export (#88655)
* add opset 15 support for `bernoulli`.
* add extra export options for different `bernoulli` cases: `x.bernoulli(p)` where `p` is a tensor or float.
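
One of the newly supported cases, sketched (hypothetical module):

```
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x.bernoulli(0.3)  # p as a float; a tensor p is handled as well

# torch.onnx.export(M(), (torch.rand(4),), "bernoulli.onnx", opset_version=15)
```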

Fixes #88299

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88655
Approved by: https://github.com/BowenBao
2022-11-16 15:08:41 +00:00
AllenTiTaiWang
b843f4db0a [ONNX] Add test case for onnx::Max scalar type (#88751)
Modeled on the existing minimum test cases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88751
Approved by: https://github.com/wschin, https://github.com/BowenBao
2022-11-11 07:08:56 +00:00
Thiago Crepaldi
a8f40b39ce Update all ONNX symbolics with new JitScalarType API (#87245)
Fixes https://github.com/pytorch/pytorch/issues/84365 and more

This PR addresses not only the issue above, but the entire family of issues related to `torch._C.Value.type()` parsing when `scalarType()` or `dtype()` is not available.

This issue existed before `JitScalarType` was introduced, but the new implementation re-introduced the bug because the new APIs `from_name` and `from_dtype` require parsing `torch._C.Value.type()` to get proper inputs, which is exactly the root cause of this family of bugs.

Therefore `from_name` and `from_dtype` must only be called when the implementor knows the `name` and `dtype` without parsing a `torch._C.Value`. To handle the corner cases hidden within `torch._C.Value`, a new `from_value` API was introduced; it should be used in favor of the former ones in most cases. The new API is safer and doesn't require type parsing from the user, which could trigger JIT asserts in the core of PyTorch.
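
A usage sketch per the description above (the module path, assumed here to be `torch.onnx._type_utils`, and the `.dtype()` accessor may differ):

```
from torch.onnx import _type_utils

def value_dtype(value):  # value: torch._C.Value
    # preferred over hand-parsing value.type().scalarType()
    return _type_utils.JitScalarType.from_value(value).dtype()
```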

Although CI is passing for all tests, please review all of the symbolics/helpers refactoring carefully to make sure the meaning/intention of the old calls is not changed in the new ones.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87245
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-11-03 03:01:33 +00:00
AllenTiTaiWang
3d90788a58 [ONNX] Add 0d-tensor test case in runtime check (#87212)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87212
Approved by: https://github.com/BowenBao
2022-11-02 16:04:21 +00:00
Thiago Crepaldi
fdc419786d Add unit test for torch_geometric library (#85937)
Fixes #65138

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85937
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-11-01 16:43:58 +00:00
AllenTiTaiWang
cb05a4da39 [ONNX] Parametrized Avgpool2D test to have all test combinations (#87893)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87893
Approved by: https://github.com/BowenBao
2022-10-29 11:45:28 +00:00
AllenTiTaiWang
52ac8adc20 [ONNX] Fix pad Circular Mode (#86984)
In https://github.com/pytorch/pytorch/pull/73433, an ONNX test case was missed, and the result is incorrect when converted to ONNX.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86984
Approved by: https://github.com/BowenBao
2022-10-25 19:39:35 +00:00
AllenTiTaiWang
65b4a633bb [ONNX] Support quantized::conv1d_relu (#85997)
According to #38248, quantized::conv1d_relu shares packing parameters with Conv2D (kSpatialDim is also 2) and needs a different unpacking path. Therefore, a new `QuantizedParamsType=Conv1D` is used to differentiate the two, extracting the 1D information from the 2D packed parameters.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85997
Approved by: https://github.com/BowenBao
2022-10-25 18:48:25 +00:00
shubhambhokare1
8d37e51931 [ONNX] Enable test_fill script test (#79555)
For scripting mode, aten::clone requires its input to be a TensorType. Hence, if we encounter an IntType, FloatType or BoolType input, we set the input to the appropriate TensorType.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79555
Approved by: https://github.com/justinchuby, https://github.com/BowenBao, https://github.com/abock
2022-10-24 20:48:29 +00:00
Thiago Crepaldi
1167949b2d [ONNX] Ignore print(Tensor) during tracing (#86223)
Fixes #73619
Fixes https://github.com/microsoft/onnxruntime/issues/11812

This PR adds new symbolics: `aten::_conj`, `aten::conj_physical`, `aten::resolve_conj`, and `aten::resolve_neg`.
While the last two are always no-ops by definition (they do not change nodes), the first two raise an exception, as they are not supported by ONNX yet.
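
A minimal repro sketch of the traced-print case (hypothetical module):

```
import torch

class M(torch.nn.Module):
    def forward(self, x):
        print(x)  # printing a tensor emits resolve_conj/resolve_neg, now no-ops at export
        return x + 1

# torch.onnx.export(M(), (torch.randn(2),), "m.onnx")  # no longer fails on the print
```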
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86223
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-10-17 19:45:33 +00:00
BowenBao
af1dcef79c [ONNX] Fix triu/tril export with diagonal input (#86843)
Investigation with @thiagocrepaldi uncovered this bug in triu/tril export when
`diagonal` is passed in as an input. Previously, the assumption was that `diagonal`
is always provided as a constant value.
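A sketch of the previously broken case, where `diagonal` is a graph input (hypothetical module):

```
import torch

class M(torch.nn.Module):
    def forward(self, x, d: int):
        return x.triu(d)  # `diagonal` arrives as an input, not a constant

# torch.onnx.export(torch.jit.script(M()), (torch.randn(3, 3), 1), "triu.onnx")
```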

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86843
Approved by: https://github.com/thiagocrepaldi, https://github.com/abock
2022-10-13 18:09:37 +00:00
BowenBao
b0d80f4355 [ONNX] Clarify phrasing of skipScriptTest/skipTraceTest decorators (#86216)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86216
Approved by: https://github.com/justinchuby, https://github.com/AllenTiTaiWang, https://github.com/abock
2022-10-13 17:20:35 +00:00
BowenBao
45274c56a4 [ONNX] Partially re-enable RoiAlign and RoiPool unit tests (#86169)
This PR depends on https://github.com/pytorch/vision/pull/6685

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86169
Approved by: https://github.com/justinchuby, https://github.com/AllenTiTaiWang, https://github.com/abock
2022-10-13 14:39:44 +00:00
BowenBao
cc7ea93c2c [ONNX] Support device().type() string comparison with constant (#86168)
Fixes #86168
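
A hypothetical scripted snippet of the pattern this enables:

```
import torch

@torch.jit.script
def maybe_scale(x: torch.Tensor) -> torch.Tensor:
    if x.device.type == "cuda":  # device().type() compared against a string constant
        return x * 2
    return x
```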

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86168
Approved by: https://github.com/justinchuby, https://github.com/AllenTiTaiWang, https://github.com/abock
2022-10-12 17:25:45 +00:00
Justin Chu
2fa8142cf9 [ONNX] Rename constants for clarity (#84645)
Rename constants to make them clearer, and fix their style to upper case.

Removed `onnx_stable_opsets` because it can be computed from `ONNX_MIN_OPSET` and `ONNX_MAX_OPSET`.
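
The removed list is recoverable as a range (a sketch; the constants' location is assumed from the PR description):

```
from torch.onnx import _constants

stable_opsets = list(range(_constants.ONNX_MIN_OPSET, _constants.ONNX_MAX_OPSET + 1))
```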

Fixes #84643

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84645
Approved by: https://github.com/BowenBao
2022-09-09 01:22:14 +00:00
titaiwang
942c0f31df [ONNX] Align Optional Type in block (#83599)
Why:

Previously, we used `replaceAllUsesWith` after adding Optional to the node right before the output. However, this may break the graph by also changing the nodes that need the original node as input. We only need the node to be optional in the output.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83599
Approved by: https://github.com/justinchuby, https://github.com/BowenBao, https://github.com/malfet
2022-09-08 03:13:19 +00:00
PyTorch MergeBot
166dec74b5 Revert "Dispatch torch.norm to linalg.vector_norm and linalg.matrix_norm (#81761)"
This reverts commit 65beff5acb.

Reverted https://github.com/pytorch/pytorch/pull/81761 on behalf of https://github.com/mehtanirav due to Breakages in pytorch/glow
2022-09-06 22:31:14 +00:00
titaiwang
7c4c7dafbd [ONNX] Add onnx::LayerNorm support for version 17 (#84293)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84293
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-09-04 02:20:08 +00:00
Justin Chu
388368b699 [ONNX] Fix type annotations and enable type checking for all apis (#84091)
Enable runtime type checking for all torch.onnx public APIs, symbolic functions and most helpers (minus two that do not have a checkable type: `_.JitType` does not exist) by adding the beartype decorator. Fix type annotations to make unit tests green.

Profile:

export `torchvision.models.alexnet(pretrained=True)`

```
with runtime type checking: 21.314 / 10 passes
without runtime type checking: 20.797 / 10 passes

+ 2.48%
```
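
For context, a generic beartype sketch of the kind of runtime check being enabled (illustrative; not the torch.onnx wiring itself):

```
from beartype import beartype

@beartype  # validates annotated arguments at call time
def register_op(name: str, opset: int) -> str:
    return f"{name}@{opset}"

register_op("aten::relu", 16)      # ok
# register_op("aten::relu", "16")  # raises a beartype type-check error
```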
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84091
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2022-09-03 01:40:18 +00:00
lezcano
65beff5acb Dispatch torch.norm to linalg.vector_norm and linalg.matrix_norm (#81761)
`torch.norm` is very odd. Some notable issues are:

- The default value of `"fro"` in `torch.norm` has an odd behaviour when `dim=None`. This is handled in the new dispatch
- The treatment of the `dtype` argument in `torch.norm` was completely wrong. This should fix it
- Some `out=` variants in the previous implementation were also wrong. This should fix those.
- This new dispatch should make some paths much faster. For example, `torch.norm(x)` where `x` is complex.

I'll try to make the changes in these PRs as incremental as possible as this is a tricky one.
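
An illustrative consistency check for the complex fast path mentioned above:

```
import torch

x = torch.randn(3, 3, dtype=torch.complex64)
# torch.norm(x) now routes through linalg.vector_norm; the two should agree:
assert torch.allclose(torch.norm(x), torch.linalg.vector_norm(x))
```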
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81761
Approved by: https://github.com/ngimel
2022-09-02 19:12:25 +00:00
BowenBao
fd756caa36 [ONNX] Support nn.init.normal (#84149)
* Updated symbolic function for `aten::normal` to support additional generator arguments emitted from 5563248b58/torch/csrc/jit/passes/remove_mutation.cpp (L51)
* Added symbolic function for `aten::is_pinned` and `prim::layout`. Both are unused by ONNX later on.
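
A sketch of the export pattern this fixes (hypothetical module; see also #83647):

```
import torch

class M(torch.nn.Module):
    def forward(self, x):
        # remove_mutation rewrites this in-place call into aten::normal with generator args
        w = torch.nn.init.normal_(torch.empty_like(x))
        return x + w
```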

Fixes #83647

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84149
Approved by: https://github.com/AllenTiTaiWang, https://github.com/abock
2022-09-01 18:29:41 +00:00
titaiwang
5bceaadb70 [ONNX] Add script/trace different flatten and move optional type tests to runtime (#83184)
fix #78119

Why:
In the ONNX test verification code, we used to only consider traced outputs, which ignore the None type. This PR enables runtime tests to keep the None type from torch in script mode.

1. Move Optional Type tests from no runtime to runtime, as it's supported by ONNXRUNTIME.
2. Add ignoreNone flag for output comparison of internal tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83184
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-08-30 18:23:24 +00:00
PyTorch MergeBot
d8cc8368ab Revert "[ONNX] Fix type annotations and enable type checking for all apis (#84091)"
This reverts commit 6446da1730.

Reverted https://github.com/pytorch/pytorch/pull/84091 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-08-28 12:28:58 +00:00
Justin Chu
6446da1730 [ONNX] Fix type annotations and enable type checking for all apis (#84091)
Enable runtime type checking for all torch.onnx public APIs, symbolic functions and most helpers (minus two that do not have a checkable type: `_.JitType` does not exist) by adding the beartype decorator. Fix type annotations to make unit tests green.

Profile:

export `torchvision.models.alexnet(pretrained=True)`

```
with runtime type checking: 21.314 / 10 passes
without runtime type checking: 20.797 / 10 passes

+ 2.48%
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84091
Approved by: https://github.com/BowenBao
2022-08-27 04:40:41 +00:00
zaf
c92e5ac95b [quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: In order to avoid the cluttering of the `torch.nn` namespace
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are just moved to the new location.
However, specific files need to be double-checked:

- Documentation @vkuzo
  - docs/source/conf.py
  - docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
  - torch/csrc/jit/passes/hoist_conv_packed_params.cpp
  - torch/csrc/jit/passes/quantization/helper.h
  - torch/csrc/jit/serialization/import_source.cpp

Differential Revision: [D38926012](https://our.internmc.facebook.com/intern/diff/D38926012/)

Differential Revision: [D38926012](https://our.internmc.facebook.com/intern/diff/D38926012)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-25 16:50:33 +00:00
BowenBao
3f88171240 [ONNX] Remove static None graph output (#82623)
Fixes #82370
* Unify the export behavior regarding static None outputs. These are
dropped for both traced graph and TorchScript graph export.
* `Optional` outputs are not affected.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82623
Approved by: https://github.com/AllenTiTaiWang, https://github.com/abock
2022-08-24 17:11:46 +00:00
Justin Chu
80cfafc385 [ONNX] Add quantization support to more single output ops (#83008)
#80039

- Implement quantization support for single output ops
  - quantized::sigmoid
  - quantized::instance_norm
  - aten::reshape
  - aten::reshape_as
  - aten::sum
  - aten::mean
  - aten::prod
  - aten::t
  - aten::numpy_T
  - aten::expand
  - aten::expand_as
  - aten::embedding
  - aten::embedding_bag
  - aten::view
  - aten::select
  - aten::eq
  - aten::ne
  - aten::gt
  - aten::lt
  - aten::le
  - aten::ge
  - quantized::layer_norm
  - aten::elu
  - aten::selu
  - aten::maximum
  - aten::minimum
  - aten::amax
  - aten::amin
  - aten::hardtanh
  - aten::hardswish
  - quantized::group_norm
  - aten::as_strided
  - quantized::leaky_relu
  - aten::transpose
- Avoid modifying functions in `quantized_args`, and have the wrapper close over `scale` and `zero_point` instead (for purity)
- Remove a magic number and assign it to INT64_MAX
- Implement `_unpack_quantized_tensor` to handle quantized tensor unpacking, separating the logic from tuple unpacking and giving clearer error handling
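
A small sketch exercising one of the listed ops (scale/zero_point are illustrative):

```
import torch

class M(torch.nn.Module):
    def forward(self, x):
        q = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
        return torch.sigmoid(q).dequantize()  # hits the quantized::sigmoid path

# torch.onnx.export(M(), (torch.randn(2, 2),), "qsigmoid.onnx", opset_version=13)
```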
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83008
Approved by: https://github.com/BowenBao
2022-08-23 00:39:24 +00:00
PyTorch MergeBot
6a9c02339d Revert "[quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)"
This reverts commit 432f037498.

Reverted https://github.com/pytorch/pytorch/pull/78713 on behalf of https://github.com/janeyx99 due to Reverting for breaking (trunk-only) ios build
2022-08-22 07:32:37 +00:00
zaf
432f037498 [quant][ao_migration] torch.nn.quantized.modules → torch.ao.nn.quantized.modules (#78713)
Context: In order to avoid the cluttering of the `torch.nn` namespace
the quantized modules namespace is moved to `torch.ao.nn`.

The list of the `nn.quantized` files that are being migrated:

- [ ] `torch.nn.quantized` → `torch.ao.nn.quantized`
    - [X] `torch.nn.quantized.functional` → `torch.ao.nn.quantized.functional`
    - [X] [Current PR] `torch.nn.quantized.modules` → `torch.ao.nn.quantized.modules`
    - [ ] `torch.nn.quantized.dynamic` → `torch.ao.nn.quantized.dynamic`
    - [ ] `torch.nn.quantized._reference` → `torch.ao.nn.quantized._reference`
- [ ] `torch.nn.quantizable` → `torch.ao.nn.quantizable`
- [ ] `torch.nn.qat` → `torch.ao.nn.qat`
    - [ ] `torch.nn.qat.modules` → `torch.ao.nn.qat.modules`
    - [ ] `torch.nn.qat.dynamic` → `torch.ao.nn.qat.dynamic`
- [ ] `torch.nn.intrinsic` → `torch.ao.nn.intrinsic`
    - [ ] `torch.nn.intrinsic.modules` → `torch.ao.nn.intrinsic.modules`
    - [ ] `torch.nn.intrinsic.qat` → `torch.ao.nn.intrinsic.qat`
    - [ ] `torch.nn.intrinsic.quantized` → `torch.ao.nn.intrinsic.quantized`
        - [ ] `torch.nn.intrinsic.quantized.modules` → `torch.ao.nn.intrinsic.quantized.modules`
        - [ ] `torch.nn.intrinsic.quantized.dynamic` → `torch.ao.nn.intrinsic.quantized.dynamic`

The majority of the files are just moved to the new location.
However, specific files need to be double-checked:

- Documentation @vkuzo
  - docs/source/conf.py
  - docs/source/quantization.rst
- [quantize_fx](torch/ao/quantization/quantize_fx.py) @jerryzh168
- [common test routine](test/quantization/ao_migration/common.py) @HDCharles
- JIT stuff @jamesr66a
  - torch/csrc/jit/passes/hoist_conv_packed_params.cpp
  - torch/csrc/jit/passes/quantization/helper.h
  - torch/csrc/jit/serialization/import_source.cpp

Differential Revision: [D36860145](https://our.internmc.facebook.com/intern/diff/D36860145/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78713
Approved by: https://github.com/jerryzh168
2022-08-22 01:38:55 +00:00
Justin Chu
05849eafb9 [ONNX] Create empty opset 17 symbolic file (#83287)
The PR

- Creates an empty symbolic file to house the new ops defined in ONNX 17
- Increments the max version to 17 and fixes the doc for version 16
- Enables tests for opset 17
- Updates the IR version in `export.cpp`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83287
Approved by: https://github.com/thiagocrepaldi, https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-08-19 02:02:46 +00:00
titaiwang
e327cc5e44 [ONNX] Enable test_uninitialized_optional (#83183)
Why:
Now that onnxruntime supports optional tensor execution, we should enable these tests against the runtime.

CI should pass after ONNXRUNTIME bumps to 1.12.0 #81147
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83183
Approved by: https://github.com/BowenBao
2022-08-12 17:50:30 +00:00
Li-Huai (Allan) Lin
d9a7e93aaf [ONNX] Add dtype check in onnx verification (#79263)
Currently we don't have a dtype check when verifying the consistency between PyTorch and ONNX outputs. As a result, some dtype inconsistencies were found and reported: #77842 #77845

This is a POC.
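
The kind of check being added, in spirit (illustrative values):

```
import numpy as np
import torch

torch_out = torch.ones(2)                        # float32
ort_out = np.ones(2, dtype=np.float32)           # what ONNX Runtime returned
assert torch_out.numpy().dtype == ort_out.dtype  # verification now flags any mismatch
```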

Failed workflows:
- [linux-xenial-py3.7-clang7-onnx / test (default, 2, 2, linux.2xlarge)]
  - inconsistent shape
    - TestONNXRuntime_opset10.test_all (#79371)
    - TestONNXRuntime_opset10.test_any (#79371)
    - TestONNXRuntime_opset10.test_argmin_argmax (#79503)
    - TestONNXRuntime_opset10.test_hardshrink (#79695)
    - TestONNXRuntime_opset10.test_linalg_norm (#79506)
    - TestONNXRuntime_opset10.test_linalg_vector_norm (#79506)
    - TestONNXRuntime_opset10.test_prelu_scalar (#79846)
    - TestONNXRuntime_opset10.test_softshrink (#79695)
    - TestONNXRuntime_opset10.test_sum_empty_tensor (skipped)
    - TestONNXRuntime_opset10.test_tolist (skipped)
  - inconsistent dtype
    - test_arithmetic_prim_bool (skipped)
    - test_arithmeticOps_with_low_precision (skipped)
    - test_arithmetic_prim_float (skipped)
    - test_logical_and (#79339)
    - test_logical_or (#79339)
    - test_logical_xor (#79339)
    - test_pow (skipped)
    - test_primitive_input_floating (skipped)
    - test_quantize_per_tensor (#79690)
    - test_quantized_adaptive_avg_pool2d (#79690)
    - test_quantized_arithmetic (#79690)
    - test_quantized_arithmetic_qfunctional (#79690)
    - test_quantized_conv2d (#79690)
    - test_quantized_conv2d_relu (#79690)
    - test_quantized_flatten (#79690)
    - test_quantized_hardsigmoid (#79690)
    - test_quantized_hardswish (#79690)
    - test_quantized_linear (#79690)
    - test_quantized_sigmoid (#79690)
    - test_item (skipped)
    - test_full_like_value (skipped)
    - TestONNXRuntime_opset7.test_div_rounding_mode (skipped)
    - TestONNXRuntime_opset8.test_div_rounding_mode (skipped)
    - TestONNXRuntime_opset9.test_div_rounding_mode (skipped)
    - TestONNXRuntime_opset9_IRv4.test_div_rounding_mode (skipped)
    - test_outer (skipped)
    - test_symbolic_shape_inference_arange_2 (skipped)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79263
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-08-10 07:14:12 +00:00
Li-Huai (Allan) Lin
6bdf89b0c7 [ONNX] Fix argmin and argmax test cases (#79503)
Part of #79263

The `keepdim` argument is theoretically ignored when `dim` is not specified (See [docs](https://pytorch.org/docs/stable/generated/torch.argmin.html)).

Unfortunately, the PyTorch implementation seems to still take it into account, resulting in a non-fully-reduced tensor, which is undefined behavior. Thus, I added the `dim` argument to the tests to make the outputs between PyTorch and ONNX Runtime consistent.
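
Concretely (illustrative shapes):

```
import torch

x = torch.randn(2, 3)
x.argmin(keepdim=True)         # dim=None: keepdim should be ignored, but PyTorch honors it
x.argmin(dim=1, keepdim=True)  # well-defined; the form the fixed tests now use
```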

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79503
Approved by: https://github.com/justinchuby, https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-08-05 18:09:47 +00:00
titaiwang
b18498a636 [ONNX] Add RReLU eval mode behavior (#82678)
### Description

RReLU behaves the same as LeakyReLU when it's in eval (test) mode ([paper](https://arxiv.org/pdf/1505.00853.pdf)), but ONNX export currently only supports the train-mode behavior, which blocks models using RReLU.

This PR adds the test-mode behavior to the RReLU symbolic function, adds a runtime case to validate that the outcome now matches the torch result, and updates the related UT.

1. Extend the RReLU symbolic function with test-mode behavior
2. Add an onnxruntime UT to validate the usage
3. Update the existing RReLU UT
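
The eval-mode equivalence being exported, as a quick check (slopes are illustrative):

```
import torch

m = torch.nn.RReLU(lower=0.1, upper=0.3).eval()
x = torch.randn(4)
# in eval mode RReLU applies the fixed slope (lower + upper) / 2, like LeakyReLU
assert torch.allclose(m(x), torch.nn.functional.leaky_relu(x, negative_slope=0.2))
```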

### Issue
Fix #82031
Also raise a document issue for torch #82677

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82678
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-08-05 01:46:12 +00:00
BowenBao
8324cdda35 [ONNX] Add quantized model tests to CI (#80398)
In parallel to #80039, start tracking torchvision quantized model export in CI.

This PR depends on ~~#80393~~ #79256, bumping the torchvision version in CI, due to PyTorch not being backward compatible with vision (#74028).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80398
Approved by: https://github.com/justinchuby, https://github.com/AllenTiTaiWang, https://github.com/garymm
2022-07-28 21:25:29 +00:00
Wei-Sheng Chin
d30784be31 [ONNX] Fix ONNX aten::mul exporter with boolean inputs (#81671)
Continues the work left over from #72102.

The current exporter always exports `aten::mul` to ONNX `Mul`. However, ONNX `Mul` [doesn't support Boolean](https://github.com/onnx/onnx/blob/main/docs/Operators.md#type-constraints-92), so we need to explicitly use ONNX `And` in this case.
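
A minimal sketch of the Boolean case (hypothetical module):

```
import torch

class M(torch.nn.Module):
    def forward(self, a, b):
        return a * b  # with bool inputs this must become ONNX `And`, since `Mul` rejects bool

a = torch.zeros(2, dtype=torch.bool)
b = torch.ones(2, dtype=torch.bool)
# torch.onnx.export(M(), (a, b), "bool_mul.onnx")
```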

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81671
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2022-07-25 14:47:32 +00:00
Huy Do
88e1c5c1d8 Apply ufmt linter to all py files under test/onnx (#81335)
Same as https://github.com/pytorch/pytorch/pull/81285 but for `test/onnx`. The merge conflict in `lintrunner.toml` is expected; I will resolve it depending on the merge order of the PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81335
Approved by: https://github.com/BowenBao, https://github.com/kit1980
2022-07-15 18:51:38 +00:00
Justin Chu
4c728a7581 [ONNX] Add tests to quantized cat (#81484)
Add more test cases to test the onnx conversion of `quantized::cat`

Tested: Locally tested with onnx runtime 1.12. Added the skip afterward.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81484
Approved by: https://github.com/BowenBao
2022-07-15 17:13:26 +00:00
lezcano
b5b9db9f84 Make kl_div a composite function. (#80334)
Benchmarks: https://github.com/pytorch/pytorch/pull/80334#issuecomment-1167229285

Fixes https://github.com/pytorch/pytorch/issues/80158
Fixes https://github.com/pytorch/pytorch/issues/78867
Fixes https://github.com/pytorch/pytorch/issues/69230

Supersedes https://github.com/pytorch/pytorch/pull/79007
Supersedes https://github.com/pytorch/pytorch/pull/69212
Supersedes https://github.com/pytorch/pytorch/pull/19659
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80334
Approved by: https://github.com/ezyang
2022-07-13 20:07:36 +00:00