Commit Graph

177 Commits

Author SHA1 Message Date
wangxiyuan
f9947830bb [ONNX] Remove the deprecated functions in symbolic_helper (#109681)
These three functions in symbolic_helper are deprecated and should be removed after PyTorch 2.0.

The cleanup will be split into several patches to ensure safety. See: https://github.com/pytorch/pytorch/pull/107208

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109681
Approved by: https://github.com/thiagocrepaldi
2023-09-20 19:31:39 +00:00
PyTorch MergeBot
cd31c170c9 Revert "[ONNX] Remove deprecated functions (#107208)"
This reverts commit 263ca7d69b.

Reverted https://github.com/pytorch/pytorch/pull/107208 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/107208#issuecomment-1726183104))
2023-09-19 17:26:48 +00:00
wangxiyuan
263ca7d69b [ONNX] Remove deprecated functions (#107208)
The usage of some functions is deprecated. This PR drop them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107208
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi
2023-09-14 19:09:56 +00:00
Thiago Crepaldi
4be6b6b673 Add quantization support to reshape and size for the ONNX exporter (#106629)
Fixes https://github.com/microsoft/onnx-converters-private/issues/175

Add quantization support for Reshape-14, Size-9, and Size-11.
For Size operators, we don't requantize outputs because we want the original scalar in the graph.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106629
Approved by: https://github.com/BowenBao
2023-08-05 02:08:52 +00:00
Ilya Sherstyuk
8c0b9a2d69 [ONNX] Export dynamic step size for aten::slice() (#104385)
This commit improves the export of aten::slice() to ONNX in the following ways:

1. The step size can be an input tensor rather than a constant.
2. Fixes a bug where using a 1-D, 1-element torch tensor as an index created a broken ONNX model.

This commit also adds tests for the new functionality.
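For illustration, a minimal sketch of the newly supported pattern (scripted so the step stays a graph input; model, shapes, and file name are assumed):

```python
import torch

class SliceWithDynamicStep(torch.nn.Module):
    def forward(self, x, step: int):
        # `step` is a model input, so aten::slice receives a non-constant step
        return x[::step]

model = torch.jit.script(SliceWithDynamicStep())
torch.onnx.export(model, (torch.randn(16), 2), "slice_dynamic_step.onnx", opset_version=13)
```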

Fixes #104314

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104385
Approved by: https://github.com/thiagocrepaldi
2023-07-06 21:38:59 +00:00
Ilya Sherstyuk
40df6e1647 [ONNX] Simplify repeat_interleave export for scalar-valued 'repeat' (#100575)
This PR simplifies the ONNX export of torch.repeat_interleave when 'repeat' is a scalar value (so each index in the input is repeated the same number of times). (Issue #100438)

Here is a before/after of a simple model export:
```python
# Model + export code
import torch

class RepeatInterleaveModel(torch.nn.Module):
    def forward(self, x):
        return x.repeat_interleave(2, dim=-1)

args = (torch.rand((2, 2, 16)),)
model = RepeatInterleaveModel()
torch.onnx.export(model, args, "repeat_interleave.onnx", opset_version=17)
```

**Before (static shapes)**
![repeat_interleave onnx(1)](https://user-images.githubusercontent.com/46343317/236014996-00726832-1e76-4fb4-950d-4b54cc5cc20c.png)

-----
**Before (dynamic shapes, second graph is Loop body)**
<p float="left">
  <img src="https://user-images.githubusercontent.com/46343317/236029895-20b0ae0a-240f-466d-bb01-e619ec5967ad.png" width="45%" />
  <img src="https://user-images.githubusercontent.com/46343317/236029915-e67b808a-029b-4997-bc05-1ce59eec409a.png" width="47%" />
</p>

-----
**After (for both static and dynamic shapes)**
<img src="https://user-images.githubusercontent.com/46343317/236015235-633811cb-09a2-435d-a293-1b2bcb7dea50.png" width="66%" />

-----

This PR also fixes a bug where the exporter throws an exception when the input has dynamic shapes and the 'dim' parameter is not specified to torch.repeat_interleave. It also adds a new test case to cover this. (Issue #100429)

Fixes #100438 and #100429

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100575
Approved by: https://github.com/BowenBao
2023-05-05 17:00:42 +00:00
Aaron Gokaslan
e2a3817dfd [BE] Enable C419 rule for any all shortcircuiting (#99890)
Apparently https://github.com/pytorch/pytorch/pull/78142 made TorchScript accept simple generator expressions, which allows us to enable rules that replace unnecessary list comprehensions with generators in any()/all(). This was originally part of #99280, but I split it off into this PR so that it can be easily reverted should anything break.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99890
Approved by: https://github.com/justinchuby, https://github.com/kit1980, https://github.com/malfet
2023-04-25 15:02:13 +00:00
Justin Chu
5ed7c701a3 [ONNX] Remove the deprecated monkey patches to torch.Graph (#94747)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94747
Approved by: https://github.com/BowenBao, https://github.com/Skylion007
2023-02-14 00:08:09 +00:00
Xuehai Pan
69e0bda999 [BE] Import Literal, Protocol, and Final from standard library typing as of Python 3.8+ (#94490)
Changes:

1. `typing_extensions -> typing-extensions` in dependencies. Use a dash rather than an underscore to follow the [PEP 503: Normalized Names](https://peps.python.org/pep-0503/#normalized-names) convention.

```python
import re

def normalize(name):
    return re.sub(r"[-_.]+", "-", name).lower()
```

2. Import `Literal`, `Protocol`, and `Final` from the standard library `typing` module as of Python 3.8+ (illustrated below).
3. Replace `Union[Literal[XXX], Literal[YYY]]` with `Literal[XXX, YYY]`.
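A small illustration of items 2 and 3 (the names here are hypothetical):

```python
from typing import Final, Literal, Protocol  # stdlib as of Python 3.8+; no typing_extensions needed

MAX_OPSET: Final = 17

# Before: Union[Literal["nearest"], Literal["linear"]]
InterpolateMode = Literal["nearest", "linear"]

class SupportsOp(Protocol):
    def op(self, opname: str) -> None: ...
```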

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94490
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-09 19:17:49 +00:00
AllenTiTaiWang
4e9539e002 [ONNX] Support ListConstruct in quantized_args (#92009)
Fixes #91303

quantized_args didn't support ListConstruct, leading to an error when a user uses a quantized op with list inputs, e.g. aten::cat. After this PR, the converter can successfully export the model from the issue and pass the ONNX checker. However, ORT doesn't seem to support it, failing with the very same error as https://github.com/microsoft/onnxruntime/issues/12131.
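A minimal sketch of the issued pattern (model, shapes, and file name assumed; the real case uses quantized inputs):

```python
import torch

class CatList(torch.nn.Module):
    def forward(self, x, y):
        # the list literal becomes prim::ListConstruct feeding aten::cat
        return torch.cat([x, y], dim=1)

torch.onnx.export(CatList(), (torch.randn(1, 2), torch.randn(1, 2)), "cat_list.onnx")
```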

Update:
I find that test_quantized_cat_when_concatinating_the_same_tensor is quite similar to the new case here. The only difference is whether the inputs are already quantized. Both ONNX graphs seem to be valid.
[test_quantized_cat_when_concatinating_the_same_tensor.zip](https://github.com/pytorch/pytorch/files/10396798/test_quantized_cat_when_concatinating_the_same_tensor.zip)
[test_quantized_list_of_inputs_with_cat.zip](https://github.com/pytorch/pytorch/files/10396799/test_quantized_list_of_inputs_with_cat.zip)

Issue raised: https://github.com/microsoft/onnxruntime/issues/14245
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92009
Approved by: https://github.com/BowenBao
2023-01-23 20:55:08 +00:00
Peter Bell
6912f7c564 Update references to 1.14 to 2.0 (#91769)
There won't be a 1.14 release, so these should be updated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91769
Approved by: https://github.com/Skylion007, https://github.com/svekars, https://github.com/lezcano
2023-01-10 23:42:07 +00:00
Kazuaki Ishizaki
1cd6ebe095 Fix typos in messages under torch (#89049)
This PR fixes typos in messages in `.py` files under the torch directory.
In `torch/onnx/symbolic_opset16.py` only, it also fixes a typo in a comment to make the operator name correct.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89049
Approved by: https://github.com/lezcano
2022-11-17 04:18:14 +00:00
Thiago Crepaldi
a8f40b39ce Update all ONNX symbolics with new JitScalarType API (#87245)
Fixes https://github.com/pytorch/pytorch/issues/84365 and more

This PR addresses not only the issue above, but the entire family of issues related to `torch._C.Value.type()` parsing when `scalarType()` or `dtype()` is not available.

This issue existed before `JitScalarType` was introduced, but the new implementation reintroduced the bug because the new APIs `from_name` and `from_dtype` require parsing `torch._C.Value.type()` to get proper inputs, which is exactly the root cause of this family of bugs.

Therefore `from_name` and `from_dtype` must be called only when the implementor knows the `name` and `dtype` without parsing a `torch._C.Value`. To handle the corner cases hidden within `torch._C.Value`, a new `from_value` API was introduced; it should be used in favor of the former ones for most cases. The new API is safer and doesn't require type parsing from the user, which could trigger JIT asserts in the core of PyTorch.
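A minimal sketch of the new API, assuming it lives in the internal `torch.onnx._type_utils` module (helper name hypothetical):

```python
from torch.onnx import _type_utils  # internal module; path assumed

def dtype_of(value):  # hypothetical helper
    # from_value accepts a torch._C.Value even when scalarType()/dtype() is unavailable
    scalar_type = _type_utils.JitScalarType.from_value(value)
    return scalar_type.dtype()
```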

Although CI is passing for all tests, please carefully review all symbolics/helpers refactoring to make sure the meaning/intention of the old calls is not changed in the new ones.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87245
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-11-03 03:01:33 +00:00
Thiago Crepaldi
1167949b2d [ONNX] Ignore print(Tensor) during tracing (#86223)
Fixes #73619
Fixes https://github.com/microsoft/onnxruntime/issues/11812

This PR adds new symbolics: `aten::_conj`, `aten::conj_physical`, `aten::resolve_conj`, and `aten::resolve_neg`.
While the last two are always no-ops by definition (they do not change nodes), the first two raise an exception, as they are not supported by ONNX yet.
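For illustration, a model that previously broke tracing-based export (a sketch; model and file names are assumed):

```python
import torch

class PrintsDuringForward(torch.nn.Module):
    def forward(self, x):
        print(x)  # formatting a tensor hits aten::resolve_conj / aten::resolve_neg
        return x * 2

torch.onnx.export(PrintsDuringForward(), torch.randn(3), "prints_ok.onnx")
```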
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86223
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-10-17 19:45:33 +00:00
Justin Chu
85d8441fba [ONNX] Deprecate setter functions for global variables (#85165)
`_set_opset_version` and `_set_operator_export_type` were previously deprecated. This PR decorates them with the deprecation decorator so that warnings are emitted.

- Remove usage of `_set_opset_version` and `_set_operator_export_type` in favor of setting the global vars directly in torch.onnx internals (sketched below)
- Update `GLOBALS.operator_export_type`'s default to not be None to tighten types
- Remove usage of `_set_onnx_shape_inference`
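A sketch of the migration, assuming the internal `GLOBALS` object and attribute name:

```python
from torch.onnx._globals import GLOBALS  # internal; names assumed

# Before (now emits a deprecation warning):
#   symbolic_helper._set_opset_version(15)
GLOBALS.export_onnx_opset_version = 15
```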
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85165
Approved by: https://github.com/BowenBao, https://github.com/AllenTiTaiWang
2022-09-28 22:43:43 +00:00
Justin Chu
5deeb09d4e [ONNX] Annotate all g as GraphContext (#85491)
- Use g.opset to test export opset version
- Annotate all `g` as GraphContext
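A minimal sketch of the annotated pattern (the module path is assumed; it has moved between internal modules over time):

```python
from torch.onnx import symbolic_helper
from torch.onnx._internal import jit_utils  # path assumed

def symbolic_fn(g: jit_utils.GraphContext, self):
    # g.opset replaces global lookups for the export opset version
    if g.opset >= 9:
        return g.op("Relu", self)
    return symbolic_helper._unimplemented("relu", "opset < 9")
```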

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85491
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-09-28 22:39:28 +00:00
Justin Chu
3d2316670f [ONNX] Create GraphContext and load g.op method to the class (#84728)
This PR creates the `GraphContext` class, relays all graph methods to `_C.Graph`, and implements the `g.op` method. The `GraphContext` object is passed into symbolic functions in place of `_C.Graph` for compatibility with existing symbolic functions.

This way (1) we can type-annotate all `g` args because the method is defined, (2) we can use additional context information in symbolic functions, and (3) there is no more monkey patching on `_C.Graph`.

Also

- Fix return type of `_jit_pass_fixup_onnx_controlflow_node`
- Create `torchscript.py` to house torch.Graph related functions
- Change `GraphContext.op` to create nodes in the Block instead of the Graph
- Create `add_op_with_blocks` to handle scenarios where we need to directly manipulate sub-blocks. Update loop and if symbolic functions to use this function.

## Discussion

Should we put all the context inside `SymbolicContext` and make it an attribute in the `GraphContext` class? This way we only define two attributes `GraphContext.graph` and `GraphContext.context`. Currently all context attributes are directly defined in the class.

### Decision

Keep GraphContext flat and note that it will change in the future.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84728
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-09-28 22:21:55 +00:00
Justin Chu
2f50d2f685 [ONNX] Update docs on symbolic registration (#85290)
- Move inline instructions on editing symbolic functions to the README
- Add a line on using the symbolic function registration decorator.
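A sketch of what the decorator usage looks like (decorator and module names are assumed from the surrounding PRs):

```python
from torch.onnx._internal import registration  # internal module; names assumed

@registration.onnx_symbolic("aten::relu", opset=9)
def relu(g, self):
    return g.op("Relu", self)
```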
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85290
Approved by: https://github.com/BowenBao
2022-09-22 13:37:11 +00:00
titaiwang
7c4c7dafbd [ONNX] Add onnx::LayerNorm support for version 17 (#84293)
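Since ONNX opset 17 has a native LayerNormalization op, a model like this now exports directly (a sketch; shapes and file name arbitrary):

```python
import torch

model = torch.nn.LayerNorm(8)
torch.onnx.export(model, torch.randn(2, 4, 8), "layer_norm.onnx", opset_version=17)
```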
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84293
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-09-04 02:20:08 +00:00
Justin Chu
388368b699 [ONNX] Fix type annotations and enable type checking for all apis (#84091)
Enable runtime type checking for all torch.onnx public APIs, symbolic functions, and most helpers (minus two that do not have a checkable type: `_.JitType` does not exist) by adding the beartype decorator. Fix type annotations to make unit tests green.

Profile:

export `torchvision.models.alexnet(pretrained=True)`

```
with runtime type checking: 21.314 / 10 passes
without runtime type checking: 20.797 / 10 passes

+ 2.48%
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84091
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2022-09-03 01:40:18 +00:00
PyTorch MergeBot
d8cc8368ab Revert "[ONNX] Fix type annotations and enable type checking for all apis (#84091)"
This reverts commit 6446da1730.

Reverted https://github.com/pytorch/pytorch/pull/84091 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-08-28 12:28:58 +00:00
Justin Chu
6446da1730 [ONNX] Fix type annotations and enable type checking for all apis (#84091)
Enable runtime type checking for all torch.onnx public APIs, symbolic functions, and most helpers (minus two that do not have a checkable type: `_.JitType` does not exist) by adding the beartype decorator. Fix type annotations to make unit tests green.

Profile:

export `torchvision.models.alexnet(pretrained=True)`

```
with runtime type checking: 21.314 / 10 passes
without runtime type checking: 20.797 / 10 passes

+ 2.48%
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84091
Approved by: https://github.com/BowenBao
2022-08-27 04:40:41 +00:00
Justin Chu
bf25a140f9 [ONNX] Add runtime type checking to export (#83673)
This PR adds an internal wrapper on the [beartype](https://github.com/beartype/beartype) library to perform runtime type checking in `torch.onnx`. It uses beartype when it is found in the environment and is reduced to a no-op when beartype is not found.

Setting the env var `TORCH_ONNX_EXPERIMENTAL_RUNTIME_TYPE_CHECK=ERRORS` will turn on the feature. Setting `TORCH_ONNX_EXPERIMENTAL_RUNTIME_TYPE_CHECK=DISABLED` will disable all checks. When not set and `beartype` is installed, a warning message is emitted.

Now when users call an API with invalid arguments, e.g.

```python
torch.onnx.export(conv, y, path, export_params=True, training=False)

# training should take TrainingMode, not bool
```

they get

```
Traceback (most recent call last):
  File "bisect_m1_error.py", line 63, in <module>
    main()
  File "bisect_m1_error.py", line 59, in main
    reveal_error()
  File "bisect_m1_error.py", line 32, in reveal_error
    torch.onnx.export(conv, y, cpu_model_path, export_params=True, training=False)
  File "<@beartype(torch.onnx.utils.export) at 0x1281f5a60>", line 136, in export
  File "pytorch/venv/lib/python3.9/site-packages/beartype/_decor/_error/errormain.py", line 301, in raise_pep_call_exception
    raise exception_cls(  # type: ignore[misc]
beartype.roar.BeartypeCallHintParamViolation: @beartyped export() parameter training=False violates type hint <class 'torch._C._onnx.TrainingMode'>, as False not instance of <protocol "torch._C._onnx.TrainingMode">.
```

When `TORCH_ONNX_EXPERIMENTAL_RUNTIME_TYPE_CHECK` is not set and `beartype` is installed, a warning message is emitted.

```
>>> torch.onnx.export("foo", "bar", "f")
<stdin>:1: CallHintViolationWarning: Traceback (most recent call last):
  File "/home/justinchu/dev/pytorch/torch/onnx/_internal/_beartype.py", line 54, in _coerce_beartype_exceptions_to_warnings
    return beartyped(*args, **kwargs)
  File "<@beartype(torch.onnx.utils.export) at 0x7f1d4ab35280>", line 39, in export
  File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/site-packages/beartype/_decor/_error/errormain.py", line 301, in raise_pep_call_exception
    raise exception_cls(  # type: ignore[misc]
beartype.roar.BeartypeCallHintParamViolation: @beartyped export() parameter model='foo' violates type hint typing.Union[torch.nn.modules.module.Module, torch.jit._script.ScriptModule, torch.jit.ScriptFunction], as 'foo' not <protocol "torch.jit.ScriptFunction">, <protocol "torch.nn.modules.module.Module">, or <protocol "torch.jit._script.ScriptModule">.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/justinchu/dev/pytorch/torch/onnx/_internal/_beartype.py", line 63, in _coerce_beartype_exceptions_to_warnings
    return func(*args, **kwargs)
  File "/home/justinchu/dev/pytorch/torch/onnx/utils.py", line 482, in export
    _export(
  File "/home/justinchu/dev/pytorch/torch/onnx/utils.py", line 1422, in _export
    with exporter_context(model, training, verbose):
  File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/contextlib.py", line 119, in __enter__
    return next(self.gen)
  File "/home/justinchu/dev/pytorch/torch/onnx/utils.py", line 177, in exporter_context
    with select_model_mode_for_export(
  File "/home/justinchu/anaconda3/envs/pytorch/lib/python3.9/contextlib.py", line 119, in __enter__
    return next(self.gen)
  File "/home/justinchu/dev/pytorch/torch/onnx/utils.py", line 95, in select_model_mode_for_export
    originally_training = model.training
AttributeError: 'str' object has no attribute 'training'
```

We see the error is caught right when the type mismatch happens, improving on what would otherwise surface as `AttributeError: 'str' object has no attribute 'training'`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83673
Approved by: https://github.com/BowenBao
2022-08-25 21:24:37 +00:00
Justin Chu
3dfb8dfcf3 [ONNX] Use errors.SymbolicValueError for more context (#83332)
Replace runtime errors in torch.onnx with `errors.SymbolicValueError` for more context around jit values.

- Extend `_unimplemented`, `_onnx_unsupported`, `_onnx_opset_unsupported`, `_onnx_opset_unsupported_detailed` errors to include JIT value information
- Replace plain RuntimeError with `errors.SymbolicValueError`
- Clean up: Use `_is_bool` to replace string comparison on jit types
- Clean up: Remove the todo `Remove type ignore after #81112`
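A minimal sketch of raising the new error type (import path and condition are assumptions):

```python
from torch.onnx import errors  # path assumed

def reject_complex(value):
    # SymbolicValueError attaches the offending torch._C.Value for context
    if value.type().kind() == "ComplexType":  # hypothetical condition
        raise errors.SymbolicValueError("complex values are not supported", value)
```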

#77316
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83332
Approved by: https://github.com/AllenTiTaiWang, https://github.com/thiagocrepaldi, https://github.com/BowenBao
2022-08-23 05:39:17 +00:00
Justin Chu
80cfafc385 [ONNX] Add quantization support to more single output ops (#83008)
#80039

- Implement quantization support for single output ops
  - quantized::sigmoid
  - quantized::instance_norm
  - aten::reshape
  - aten::reshape_as
  - aten::sum
  - aten::mean
  - aten::prod
  - aten::t
  - aten::numpy_T
  - aten::expand
  - aten::expand_as
  - aten::embedding
  - aten::embedding_bag
  - aten::view
  - aten::select
  - aten::eq
  - aten::ne
  - aten::gt
  - aten::lt
  - aten::le
  - aten::ge
  - quantized::layer_norm
  - aten::elu
  - aten::selu
  - aten::maximum
  - aten::minimum
  - aten::amax
  - aten::amin
  - aten::hardtanh
  - aten::hardswish
  - quantized::group_norm
  - aten::as_strided
  - quantized::leaky_relu
  - aten::transpose
- Avoid modifying functions in `quantized_args` and have the wrapper close over `scale` and `zero_point` instead (for purity)
- Replace a magic number with the named constant INT64_MAX
- Implement `_unpack_quantized_tensor` to handle quantized-tensor unpacking, separating that logic from tuple unpacking for clearer error handling
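For illustration, a minimal eager-mode quantization round trip that exercises a couple of the ops above (a sketch assuming the standard fbgemm config; export details may vary):

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = torch.sigmoid(x)   # quantized sigmoid
        x = x.reshape(1, -1)   # quantized reshape
        return self.dequant(x)

m = M().eval()
m.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
torch.ao.quantization.prepare(m, inplace=True)
m(torch.randn(1, 4))  # calibration
torch.ao.quantization.convert(m, inplace=True)
torch.onnx.export(m, torch.randn(1, 4), "quantized_ops.onnx", opset_version=13)
```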
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83008
Approved by: https://github.com/BowenBao
2022-08-23 00:39:24 +00:00
Thiago Crepaldi
2c089290b6 [ONNX] Fix float point detection for optional tensor (with unknown rank) within a list (#81386)
In some scenarios, such as combining a traced model with a scripted function inside it, a `%74 : Tensor?[] = prim::ListConstruct(%35, %y_int, %x_int)` (i.e., a list of optional tensors) can be generated, which makes `symbolic_helper._is_fp()` fail to read the data type of the specified input.

In such a scenario, something like `type = value.type().scalarType()` raises `RuntimeError: r INTERNAL ASSERT FAILED at "/github/pytorch/aten/src/ATen/core/jit_type_base.h":545, please report a bug to PyTorch.`, which refers to

```
  template <typename T>
  T& expectRef() {
    auto* r = castRaw<T>();
    AT_ASSERT(r);
    return *r;
  }
```

What happens is that in this repro, `input.type()` is a `torch._C.TypeList`, which does not have a `scalarType()` method. Instead, `value.type().getElementType().dtype()` should be used to get the underlying type.

This PR tries to use `value.type().getElementType().dtype()` when `isinstance(value.type(), torch.ListType)`.
A unit test is proposed along with the fix.
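A Python-side sketch of the described fix (the helper name is hypothetical):

```python
import torch

def element_dtype(value):  # hypothetical helper mirroring the fix
    if isinstance(value.type(), torch.ListType):
        # lists have no scalarType(); ask the element type instead
        return value.type().getElementType().dtype()
    return value.type().dtype()
```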

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81386
Approved by: https://github.com/BowenBao
2022-08-12 19:44:50 +00:00
Justin Chu
27108d9434 [ONNX] Update typing and error messages in symbolic_helper (#83007)
### Description

- Clearer error messages with more context
-   Created `SymbolicValueError` which adds context of the value to the error message
- Type annotation

example error message:

```
torch.onnx.errors.SymbolicValueError: ONNX symbolic does not understand the Constant node '%1 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 3  3 [ CPULongType{2} ]]()
' specified with descriptor 'is'.  [Caused by the value '1 defined in (%1 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 3  3 [ CPULongType{2} ]]()
)' (type 'Tensor') in the TorchScript graph. The containing node has kind 'onnx::Constant'.]

    Inputs:
        Empty
    Outputs:
        #0: 1 defined in (%1 : Long(2, strides=[1], device=cpu) = onnx::Constant[value= 3  3 [ CPULongType{2} ]]()
    )  (type 'Tensor')
```

### Issue

- #77316 (Runtime error during symbolic conversion)

### Testing

Unit tested
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83007
Approved by: https://github.com/BowenBao
2022-08-11 23:26:13 +00:00
BowenBao
017ecb782d [ONNX] Update legacy code, initialize onnx_shape_inference=True by default (#82767)
Legacy code has onnx_shape_inference=False by default, which is misleading, as every other export API sets it to True unless otherwise overridden by the caller. Only two tests need updating for this change:
* test_utility_funs.py::test_constant_fold_shape. The resulting number of nodes in the graph increases by 1, because the extra constant node was previously added as an initializer.
* test_utility_funs.py::test_onnx_function_substitution_pass. Enabling ONNX shape inference uncovered a discrepancy between the test input shape and the supplied dynamic axes arguments.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82767
Approved by: https://github.com/justinchuby, https://github.com/abock
2022-08-10 21:50:13 +00:00
Justin Chu
f5701a1f9a [ONNX] Remove unused patching methods (#83006)
### Description
<!-- What did you change and why was it needed? -->

Remove unused patching methods:

- `torch._C.Graph.constant`
- unpatch `torch._C.Node.__getitem__` and move the helper function to `symbolic_helper`

Add typing annotations

### Issue
<!-- Link to Issue ticket or RFP -->

#76254

### Testing
<!-- How did you test your change? -->

Unit tested
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83006
Approved by: https://github.com/BowenBao
2022-08-09 19:24:03 +00:00
Justin Chu
c6cdca5c68 [ONNX] Reland #81953 Type utility for converting among JIT, torch and ONNX data types (#82995)
Re-land #81953

Add `_type_utils` for handling data type conversion among JIT, torch and ONNX.

- Replace dictionary / list indexing with methods in ScalarType
- Breaking: **Remove ScalarType from `symbolic_helper`** and move it to `_type_utils`
- Deprecated: "cast_pytorch_to_onnx", "pytorch_name_to_type", "scalar_name_to_pytorch", "scalar_type_to_onnx", "scalar_type_to_pytorch_type" in `symbolic_helper`
- Deprecate the type mappings and lists. Remove all internal references
- Move `_cast_func_template` to opset 9 and remove references to it elsewhere (cleanup). Added documentation for easy discovery.

Why: List / dictionary indexing and lookup are error-prone and convoluted.
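A short sketch of the ScalarType-style lookups that replace dict indexing (internal API; module path and method names assumed):

```python
import torch
from torch.onnx import _type_utils  # internal module; path assumed

scalar_type = _type_utils.JitScalarType.from_dtype(torch.float32)
onnx_type = scalar_type.onnx_type()  # ONNX TensorProtoDataType for float
torch_dtype = scalar_type.dtype()    # torch.float32
```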
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82995
Approved by: https://github.com/kit1980
2022-08-08 23:43:43 +00:00
PyTorch MergeBot
b170a52a09 Revert "[ONNX] Type utility for converting among JIT, torch and ONNX data types (#81953)"
This reverts commit 6ddf4c6f58.

Reverted https://github.com/pytorch/pytorch/pull/81953 on behalf of https://github.com/kit1980 due to Broke internal builds by removing functions without deprecation
2022-08-07 20:15:28 +00:00
Justin Chu
6ddf4c6f58 [ONNX] Type utility for converting among JIT, torch and ONNX data types (#81953)
Add `_type_utils` for handling data type conversion among JIT, torch and ONNX.

- Replace dictionary / list indexing with methods in ScalarType
- Breaking: **Remove ScalarType from `symbolic_helper`** and move it to `_type_utils`
- Breaking: **Remove "cast_pytorch_to_onnx", "pytorch_name_to_type", "scalar_name_to_pytorch", "scalar_type_to_onnx", "scalar_type_to_pytorch_type"** from `symbolic_helper`
- Deprecate the type mappings and lists. Remove all internal references
- Move `_cast_func_template` to opset 9 and remove references to it elsewhere (cleanup). Added documentation for easy discovery.

Why: List / dictionary indexing and lookup are error-prone and convoluted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81953
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-08-05 22:24:45 +00:00
Li-Huai (Allan) Lin
6bdf89b0c7 [ONNX] Fix argmin and argmax test cases (#79503)
Part of #79263

The `keepdim` argument is theoretically ignored when `dim` is not specified (See [docs](https://pytorch.org/docs/stable/generated/torch.argmin.html)).

Unfortunately, the PyTorch implementation seems to still take it into account, resulting in a non-fully-reduced tensor, which is undefined behavior. Thus, I add a `dim` argument to the tests to make the outputs between PyTorch and ONNX Runtime consistent.
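Concretely (standard API; shapes arbitrary):

```python
import torch

x = torch.randn(2, 3)
torch.argmin(x)                       # dim unspecified: keepdim is documented as ignored
torch.argmin(x, dim=1, keepdim=True)  # shape (2, 1); tests now pin dim for ORT parity
```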

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79503
Approved by: https://github.com/justinchuby, https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-08-05 18:09:47 +00:00
Wei-Sheng Chin
e32691dc7a [ONNX] extend add and sub exporter to cover graph non-tensor inputs (#81736)
The ONNX exporter fails when the 3rd input of `aten::add` or `aten::sub` isn't a tensor. This PR fixes that failure.
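A minimal sketch of the previously failing pattern (model and file names assumed):

```python
import torch

class AddAlpha(torch.nn.Module):
    def forward(self, x, y):
        # `alpha` is the non-tensor 3rd input of aten::add
        return torch.add(x, y, alpha=2)

torch.onnx.export(AddAlpha(), (torch.randn(3), torch.randn(3)), "add_alpha.onnx")
```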
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81736
Approved by: https://github.com/BowenBao
2022-07-25 21:54:52 +00:00
Wei-Sheng Chin
d30784be31 [ONNX] Fix ONNX aten::mul exporter with boolean inputs (#81671)
Continue work left in #72102.

The current exporter always exports `aten::mul` to ONNX `Mul`. However, ONNX `Mul` [doesn't support Boolean](https://github.com/onnx/onnx/blob/main/docs/Operators.md#type-constraints-92), so we need to explicitly emit ONNX `And` in this case.
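A sketch of the Boolean case (model and file names arbitrary):

```python
import torch

class BoolMul(torch.nn.Module):
    def forward(self, x, y):
        return x * y  # with bool inputs this must lower to onnx::And, not onnx::Mul

args = (torch.ones(3, dtype=torch.bool), torch.zeros(3, dtype=torch.bool))
torch.onnx.export(BoolMul(), args, "bool_mul.onnx")
```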

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81671
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2022-07-25 14:47:32 +00:00
Justin Chu
ed1da2a9df [ONNX] Quantization support for quantized::cat (#79826)
- Add support for quantized `cat`
- Add type annotations for helper functions

Now we can export

```python
import torchvision.models.quantization as models
from torchvision import transforms

torch_model = models.inception_v3(pretrained=True, quantize=True)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79826
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-07-12 15:44:23 +00:00
Justin Chu
06710ec1b9 [ONNX] Reland: Add quantization support to _avg_pool opset 9 and clean up (#81267)
- Add quantization support to _avg_pool opset 9
- Clean up reused / unused variables in avgpool helper
- Add types
- Sort `__all__`

Reland of #79793 (adds `argsort` to `__all__`)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81267
Approved by: https://github.com/BowenBao
2022-07-11 23:58:28 +00:00
PyTorch MergeBot
b98b9eaae5 Revert "[ONNX] Add quantization support to _avg_pool opset 9 and clean up (#79793)"
This reverts commit 356341a3ec.

Reverted https://github.com/pytorch/pytorch/pull/79793 on behalf of https://github.com/malfet due to Broke trunk, see 356341a3ec
2022-07-09 02:40:28 +00:00
Justin Chu
356341a3ec [ONNX] Add quantization support to _avg_pool opset 9 and clean up (#79793)
- Add quantization support to _avg_pool opset 9
- Clean up reused / unused variables in avgpool helper
- Add types
- Sort `__all__`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79793
Approved by: https://github.com/BowenBao
2022-07-08 23:14:01 +00:00
Jinkun Lin
86682caf93 [ONNX Export] Use half_pixel instead of pytorch_half_pixel. (#80003)
Fixes #79361
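A sketch of the attribute change inside the Resize symbolic (the surrounding signature and names are assumed):

```python
# inside a symbolic function; `g`, `input`, `roi`, `scales` as usual (sketch only)
def _resize_half_pixel(g, input, roi, scales):
    return g.op(
        "Resize", input, roi, scales,
        coordinate_transformation_mode_s="half_pixel",  # was "pytorch_half_pixel"
        mode_s="linear",
    )
```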

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80003
Approved by: https://github.com/BowenBao
2022-06-24 18:25:29 +00:00
Justin Chu
c8b9b6266b [ONNX] Fix arg type in _set_training_mode (#78583)
When `TrainingMode.PRESERVE` is set for export, the exporter used to change the model's training mode based on some logic. Now we respect the option and do not touch the model's training state.

- Previously, `_set_training_mode`'s behavior didn't match what the global variable expects. This PR removes the deprecated `_set_training_mode` and makes the type correct.
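Concretely (a sketch; model and file names arbitrary):

```python
import torch

model = torch.nn.Dropout(0.5).train()
torch.onnx.export(model, torch.randn(2, 4), "dropout.onnx",
                  training=torch.onnx.TrainingMode.PRESERVE)
assert model.training  # PRESERVE no longer flips the model's training state
```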
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78583
Approved by: https://github.com/BowenBao
2022-06-15 23:47:12 +00:00
Justin Chu
d3ef5c3fa3 [ONNX] Clean up __init__ in torch.onnx (#78446)
- Move definitions in `__init__` to internal classes and expose them by importing to init (prevent circular dependencies): https://github.com/pytorch/pytorch/wiki/torch.onnx-Namespacing
  - Context classes and enums are moved to `_exporter_states.py`
  - Exceptions are moved to `errors.py`
- Define `__all__` for torch.onnx. https://github.com/pytorch/pytorch/wiki/Public-API-definition-and-documentation
- Moved `utils.__IN_ONNX_EXPORT` to `GLOBALS.in_onnx_export`
- Deprecated `torch.onnx._export`

Precedes #78231

Using this as an aid for finding public functions:

```python
list(filter(lambda x: not x.startswith("_"), torch.onnx.utils.__dict__.keys()))
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78446
Approved by: https://github.com/BowenBao
2022-06-14 04:35:06 +00:00
Justin Chu
def778527e [ONNX] Quantization support for five ops (#78103)
- Add quantization support for `interpolate`, `avgpool`, `sigmoid` and `add_relu`
- Return the inputs of `ListUnpack` when the previous node is `ListConstruct`, so that the `ListConstruct`/`ListUnpack` pair is canceled and removed in the JIT passes; ONNX doesn't support these ops.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78103
Approved by: https://github.com/garymm
2022-06-03 20:22:07 +00:00
Justin Chu
c0814bff87 [ONNX] Variable length argument support for quantized_args (#78775)
Add support for decorating functions with variable length arguments in `quantized_args`. This is needed to decorate functions like `symbolic_fn` in `_interpolate_helper` which takes `*args`.

Previously it was not possible to decorate such functions. Now we can do

```python
@quantized_args(True)
def symbolic_fn(g, input, output_size, *args):
    ...
```

and the remaining parameters default to non-quantized.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78775
Approved by: https://github.com/garymm
2022-06-03 01:31:19 +00:00
Justin Chu
299fbbccec [ONNX] Fix check_training_mode in symbolic_helper (#78376)
`check_training_mode` always warned that an op is set to training because it was comparing an int `op_train_mode` with the Enum `GLOBALS.training_mode`. This PR fixes the comparison (sketched below).
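A sketch of the corrected comparison (the exact code and internal names are assumptions):

```python
import warnings
from torch._C import _onnx as _C_onnx
from torch.onnx._globals import GLOBALS  # internal; names assumed

def check_training_mode(op_train_mode: int, op_name: str) -> None:
    # convert the int to the enum first; the old int-vs-Enum compare always mismatched
    op_mode = _C_onnx.TrainingMode.TRAINING if op_train_mode else _C_onnx.TrainingMode.EVAL
    if GLOBALS.training_mode != op_mode:
        warnings.warn(f"ONNX export mode is {GLOBALS.training_mode}, but '{op_name}' runs in {op_mode}.")
```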
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78376
Approved by: https://github.com/garymm
2022-05-27 00:38:16 +00:00
Justin Chu
161e931156 [ONNX] Modernize python syntax (#77935)
Use pyupgrade (https://github.com/asottile/pyupgrade) and flynt to modernize Python syntax:

```sh
pyupgrade --py36-plus --keep-runtime-typing torch/onnx/**/*.py
pyupgrade --py36-plus --keep-runtime-typing test/onnx/**/*.py
flynt torch/onnx/ --line-length 120
```

- Use f-strings for string formatting
- Use the new `super()` syntax for class initialization
- Use dictionary / set comprehension
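The flavor of rewrite these tools apply, for illustration:

```python
name = "onnx"
old_a = "exporting %s" % name        # before
old_b = "exporting {}".format(name)  # before
new = f"exporting {name}"            # after pyupgrade / flynt
```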
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77935
Approved by: https://github.com/BowenBao
2022-05-24 22:52:37 +00:00
Justin Chu
0d76299ff7 [ONNX] Clean up module imports (#77423)
Cleaning up onnx module imports to prepare for updating `__init__`.

- Simplify importing the `_C` and `_C._onnx` name spaces
- Remove alias of the symbolic_helper module in imports
- Remove any module level function imports. Import modules instead
    - Alias `symbolic_opsetX` as `opsetX`
- Fix some docstrings
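A sketch of the simplified imports described above (aliases assumed):

```python
from torch import _C
from torch._C import _onnx as _C_onnx  # alias assumed
from torch.onnx import symbolic_helper
from torch.onnx import symbolic_opset9 as opset9
```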

Requires:
- https://github.com/pytorch/pytorch/pull/77448
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77423
Approved by: https://github.com/BowenBao
2022-05-20 01:56:24 +00:00
Justin Chu
563c2719bf [ONNX] Refactor to remove inline imports - attempt 2 (#77448)
Re-land
- #77142

(diff: https://github.com/pytorch/pytorch/compare/c08b8f0..justinchuby:justinchu/remove-patch2)

Fixed:
- Delay import symbolic_opsets in the registry.

Tested locally with torchvision
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77448
Approved by: https://github.com/garymm
2022-05-16 14:44:24 +00:00
PyTorch MergeBot
6b366dd3c1 Revert "[ONNX] Refactor to remove inline imports (#77142)"
This reverts commit c08b8f0967.

Reverted https://github.com/pytorch/pytorch/pull/77142 on behalf of https://github.com/malfet
2022-05-13 19:44:17 +00:00
Justin Chu
c08b8f0967 [ONNX] Refactor to remove inline imports (#77142)
Reduce circular dependencies

- Lift constants and flags from `symbolic_helper` to `_constants` and `_globals`
    - Standardized constant naming to make it consistent
- Make `utils` strictly dependent on `symbolic_helper`, removing inline imports from symbolic_helper
- Move side effects from `utils` to `_patch_torch`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77142
Approved by: https://github.com/garymm, https://github.com/BowenBao
2022-05-13 03:46:33 +00:00