Commit Graph

89 Commits

Author SHA1 Message Date
AllenTiTaiWang
e8e3afb784 [ONNX] Refactor MaxPool to support dynamic inputs (#113318)
In https://github.com/pytorch/pytorch/pull/106270, the solution addressed the [`ceil_mode` corner issue](https://github.com/onnx/onnx/issues/5711) by using `get_pool_ceil_padding`. However, padding for ceil_mode on the converter side only works when the input shapes are already known, so a regression appeared for users exporting with dynamic inputs.

This PR (1) refactors the code to follow the torchlib implementation, (2) adds dynamic-shape tests, and (3) disables the corner-case tests with comments to re-enable them once the [real fix from ONNX](https://github.com/onnx/onnx/pull/5741) is merged.
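
A minimal sketch of the dynamic-input export this refactor targets (hypothetical file, input, and axis names; not taken from the PR's tests):

```python
import torch

# A MaxPool with ceil_mode exported with dynamic spatial dimensions: this is
# the case that regressed when ceil padding was computed from static shapes.
model = torch.nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True)
x = torch.randn(1, 3, 32, 32)
torch.onnx.export(
    model, x, "maxpool.onnx",
    opset_version=12,
    input_names=["x"], output_names=["y"],
    dynamic_axes={"x": {2: "height", 3: "width"}},
)
```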
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113318
Approved by: https://github.com/thiagocrepaldi
2023-11-10 23:23:49 +00:00
Gustav Larsson
8dcdc74915 torch->onnx export support: quantized::linear_relu (#109755)
- Adds support for quantized::linear_relu
  - Adds a weight-unpacking pattern matcher
  - Adds export support for opsets 10 and 13.
- Adds a QAT test modeled after the conv2d+relu fusion test

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109755
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2023-09-21 23:24:20 +00:00
CYuxian
2f35715f0d [onnx] Fix output shape mismatch issue of max_pool (#106270)
For ONNX MaxPool with ceil_mode=1, sliding windows that start in the right padded region are not ignored, which produces a different output shape than torch.
Therefore, when converting torch max_pool to ONNX MaxPool, we need to add a Pad op beforehand and leave ceil_mode unset, as is done in symbolic_opset9.
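
A small illustration of the ceil_mode behavior described above (example values chosen here, not from the PR):

```python
import torch

# With ceil_mode=True, PyTorch ignores a sliding window that would start
# entirely in the right padded region, so this pool outputs 3x3; a naive ONNX
# MaxPool with ceil_mode=1 keeps that window and produces a larger output.
x = torch.arange(25, dtype=torch.float32).reshape(1, 1, 5, 5)
pool = torch.nn.MaxPool2d(kernel_size=2, stride=2, padding=1, ceil_mode=True)
print(pool(x).shape)  # torch.Size([1, 1, 3, 3])
```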
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106270
Approved by: https://github.com/thiagocrepaldi
2023-07-31 21:03:08 +00:00
AllenTiTaiWang
d5d6eb2d46 [ONNX] Refactor AvgPool to support dynamic shapes (#105683)
In #87892, to pick up the corner cases found in #71549, the PR fell back to the opset 9 implementation of AvgPool. However, that introduced a regression on the dynamic-shape cases found in #101397. This PR refactors the AvgPool op with the same implementation we have in onnxscript: https://github.com/microsoft/onnxscript/pull/754.

However, the corner case with `count_include_pad` remains unsolved in onnxruntime: https://github.com/microsoft/onnxruntime/issues/16203. The calculation of the last value of each dimension differs between ORT and PyTorch. The fix can be verified in https://github.com/microsoft/onnxruntime/pull/16752, which supports AvgPool since opset 19.
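
For reference, a small example (values chosen here) of how `count_include_pad` changes results for windows that overlap padding; it is not the ORT discrepancy itself, just the knob involved:

```python
import torch

# The first window covers one padded zero and one real element, so whether the
# padded zero is counted changes that output value.
x = torch.ones(1, 1, 5)
incl = torch.nn.AvgPool1d(2, stride=2, padding=1, ceil_mode=True,
                          count_include_pad=True)
excl = torch.nn.AvgPool1d(2, stride=2, padding=1, ceil_mode=True,
                          count_include_pad=False)
print(incl(x))  # tensor([[[0.5000, 1.0000, 1.0000]]])
print(excl(x))  # tensor([[[1., 1., 1.]]])
```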
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105683
Approved by: https://github.com/thiagocrepaldi
2023-07-21 20:22:08 +00:00
Ilya Sherstyuk
8c0b9a2d69 [ONNX] Export dynamic step size for aten::slice() (#104385)
This commit improves the export of aten::slice() to ONNX in the following ways:

1. The step size can be an input tensor rather than a constant.
2. Fixes a bug where using a 1-D, 1-element torch tensor as an index created a broken ONNX model.

This commit also adds tests for the new functionality.
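
A minimal sketch (hypothetical module, not from the PR's tests) of the dynamic-step pattern; exporting the scripted module exercises the updated aten::slice symbolic:

```python
import torch

class StepSlice(torch.nn.Module):
    def forward(self, x: torch.Tensor, step: torch.Tensor) -> torch.Tensor:
        # The slice step comes from a tensor input rather than a Python constant.
        return x[::int(step)]

scripted = torch.jit.script(StepSlice())
print(scripted(torch.arange(10), torch.tensor(3)))  # tensor([0, 3, 6, 9])
```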

Fixes #104314

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104385
Approved by: https://github.com/thiagocrepaldi
2023-07-06 21:38:59 +00:00
Gustav Larsson
ebb8aa9c0b Correct output_padding for quantized tconv (torch->onnx) (#104207)
- In #102759, the support for `quantized::conv_transposeNd` was introduced, but it incorrectly set `output_padding` to all zeros. It turns out you can specify output_padding in PyTorch, yet this parameter was not being unpacked correctly and thus did not show up in the Python torch->onnx code (see the illustration below).
- This adds unpacking of output_padding in `unpack_quantized_weights.cpp` when needed. It also adds output_padding as a parameter in the Python functions and uses it (removing the all-zero defaults).
- Another issue with #102759 is that it only added these new ops to opset 10 without adding the ability to specify axis in opset 13. This PR fixes that as well.
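
A small illustration (example sizes chosen here) of why `output_padding` cannot be dropped: it changes the output spatial size of a transposed convolution:

```python
import torch

x = torch.randn(1, 4, 8, 8)
with_pad = torch.nn.ConvTranspose2d(4, 4, kernel_size=3, stride=2, output_padding=1)
without_pad = torch.nn.ConvTranspose2d(4, 4, kernel_size=3, stride=2)
print(with_pad(x).shape)     # torch.Size([1, 4, 18, 18])
print(without_pad(x).shape)  # torch.Size([1, 4, 17, 17])
```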

Fixes #104206

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104207
Approved by: https://github.com/BowenBao
2023-06-29 13:40:48 +00:00
Gustav Larsson
9f11ad6f86 Extend torch->onnx export for quantized convolutional ops (#102759)
- Extend support:
  - quantized::conv1d
  - quantized::conv3d
  - quantized::conv3d_relu
  - quantized::conv_transpose1d
  - quantized::conv_transpose2d
  - quantized::conv_transpose3d
  - Note: quantized::{conv1d_relu,conv2d,conv2d_relu} were already supported.
- To support this, quantization unpacking added for:
  - conv1d
  - conv_transpose1d
  - conv_transpose2d
  - conv_transpose3d
  - Note: conv3d/conv3d_relu already had weight unpacking set up, even though they didn't have torch.onnx support.
- Add tests.
- The 3D tests will fail if run with the qnnpack backend (e.g., on an Apple silicon Mac), so the skipIfQuantizationBackendQNNPack decorator was added.
- Minor fix in `aten/src/ATen/native/quantized/cpu/qconv.cpp` for 3D convolutions (triggered by added tests).
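
A hedged sketch (hypothetical model; fbgemm backend assumed) of the eager-mode static quantization flow whose export these changes enable for Conv1d:

```python
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.conv = torch.nn.Conv1d(2, 4, kernel_size=3)
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.conv(self.quant(x)))

model = M().eval()
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
torch.ao.quantization.prepare(model, inplace=True)
model(torch.randn(1, 2, 16))  # calibration pass
torch.ao.quantization.convert(model, inplace=True)
torch.onnx.export(model, torch.randn(1, 2, 16), "qconv1d.onnx", opset_version=13)
```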

Fixes #102747

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102759
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi, https://github.com/kit1980
2023-06-23 22:50:17 +00:00
Justin Chu
5ed7c701a3 [ONNX] Remove the deprecated monkey patches to torch.Graph (#94747)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94747
Approved by: https://github.com/BowenBao, https://github.com/Skylion007
2023-02-14 00:08:09 +00:00
BowenBao
88d0235b73 [ONNX] Update CI test environment; Add symbolic functions (#94564)
* Update the CI test environment to install onnx and onnx-script.
* Add symbolic functions for `bitwise_or`, `convert_element_type` and `masked_fill_`.
* Update symbolic function for `slice` and `arange`.
* Update .pyi signature for `_jit_pass_onnx_graph_shape_type_inference`.

Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: Ti-Tai Wang <titaiwang@microsoft.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94564
Approved by: https://github.com/abock
2023-02-10 20:44:59 +00:00
Justin Chu
23a6e15321 [ONNX] Remove the INT64_MAX magic numbers (#88341)
Remove the magic numbers in symbolic opsets and use an INT64_MAX global instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88341
Approved by: https://github.com/BowenBao
2022-11-03 20:18:36 +00:00
Thiago Crepaldi
a8f40b39ce Update all ONNX symbolics with new JitScalarType API (#87245)
Fixes https://github.com/pytorch/pytorch/issues/84365 and more

This PR addresses not only the issue above, but the entire family of issues related to `torch._C.Value.type()` parsing when `scalarType()` or `dtype()` is not available.

This issue existed before `JitScalarType` was introduced, but the new implementation reintroduced the bug because the new APIs `from_name` and `from_dtype` require parsing `torch._C.Value.type()` to get proper inputs, which is exactly the root cause of this family of bugs.

Therefore `from_name` and `from_dtype` should only be called when the implementor already knows the `name` or `dtype` without parsing a `torch._C.Value`. To handle the corner cases hidden within `torch._C.Value`, a new `from_value` API was introduced; it should be preferred over the former ones for most cases. The new API is safer and doesn't require type parsing from the user, which could trigger JIT asserts in the core of PyTorch.

Although CI is passing for all tests, please review all symbolics/helpers refactoring carefully to make sure the meaning/intention of the old calls is preserved in the new calls.
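
A hedged sketch (hypothetical symbolic function) of the preferred pattern described above:

```python
from torch.onnx import _type_utils

def example_symbolic(g, self):
    # Derive the scalar type from the torch._C.Value itself; fall back to FLOAT
    # when the value's type cannot be determined.
    scalar_type = _type_utils.JitScalarType.from_value(
        self, _type_utils.JitScalarType.FLOAT
    )
    return g.op("Cast", self, to_i=scalar_type.onnx_type())
```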

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87245
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-11-03 03:01:33 +00:00
AllenTiTaiWang
f2ae459311 [ONNX] Disable ONNX ceil_mode and count_include_pad to align torch ceil_mode results in corner case (#87892)
ONNX and PyTorch have different pooling equations and different strategies for ceil_mode, which leads to discrepancies in corner cases (#71549).
Specifically, PyTorch average pooling does not follow [the equation in its documentation](https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html); instead, it allows the sliding window to go off-bound if it starts within the left padding or the input (see the NOTE section). More details can be found in #57178.

This PR changes avgpool in opsets 10 and 11 back to the opset 9 approach, which stops using ceil_mode and count_include_pad in onnx::AveragePool.

A comprehensive test for all combinations of parameters can be found in the next PR: #87893.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87892
Approved by: https://github.com/BowenBao
2022-10-29 11:35:10 +00:00
Justin Chu
85a79a7f50 [ONNX] Expand _cast_ symbolic functions (#87666)
The `_cast_` family of symbolic functions was created from a template function. Even though that saved some lines, it very much obscured the intent of the code. Since the list doesn't really change and the `_cast_` family is, IIRC, deprecated, it is safe for us to expand the templates and make the code more readable.

This PR also removes any direct calls to `_cast_` functions to maintain a consistent pattern of directly creating `Cast` nodes.
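
A hedged sketch of the resulting pattern (hypothetical helper name):

```python
import torch._C._onnx as _C_onnx

def to_float_sketch(g, x):
    # Emit the Cast node directly instead of calling a templated _cast_* helper.
    return g.op("Cast", x, to_i=_C_onnx.TensorProtoDataType.FLOAT)
```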
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87666
Approved by: https://github.com/BowenBao
2022-10-26 00:39:59 +00:00
AllenTiTaiWang
65b4a633bb [ONNX] Support quantized::conv1d_relu (#85997)
According to #38248, quantized::conv1d_relu shares packing parameters with Conv2D (kSpatialDim is also 2) and needs a different unpacking path. Therefore, a new `QuantizedParamsType=Conv1D` is used to differentiate the two, and the 1D information has to be extracted from the 2D packed parameters.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85997
Approved by: https://github.com/BowenBao
2022-10-25 18:48:25 +00:00
Justin Chu
5deeb09d4e [ONNX] Annotate all g as GraphContext (#85491)
- Use g.opset to test export opset version
- Annotate all `g` as GraphContext
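
A hedged sketch (hypothetical op, assuming `GraphContext` lives in `torch.onnx._internal.jit_utils`) of the annotated signature and the `g.opset` check:

```python
from torch.onnx._internal import jit_utils

def softmax_sketch(g: jit_utils.GraphContext, x):
    # Branch on the export opset via g.opset instead of a global variable.
    if g.opset >= 13:
        return g.op("Softmax", x, axis_i=-1)
    return g.op("Softmax", x, axis_i=1)
```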

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85491
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-09-28 22:39:28 +00:00
Justin Chu
2f50d2f685 [ONNX] Update docs on symbolic registration (#85290)
- Move inline instructions on editing symbolic functions to the README
- Add a line on using the symbolic function registration decorator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85290
Approved by: https://github.com/BowenBao
2022-09-22 13:37:11 +00:00
Justin Chu
76d60778eb [ONNX] Use decorators for symbolic function registration (#84448)
This is the 4th PR in the series of #83787. It enables the use of `@onnx_symbolic` across `torch.onnx`.

- **Backward breaking**: Removed some symbolic functions from `__all__` because of the use of  `@onnx_symbolic` for registering the same function on multiple aten names.
- Decorate all symbolic functions with `@onnx_symbolic`
- Move Quantized and Prim ops out of classes and into functions defined in the modules. This eliminates the need for `isfunction` checks, speeding up the registration process by 60%.
    - Remove the outdated unit test `test_symbolic_opset9.py`
- Symbolic function registration moved from the first call to `_run_symbolic_function` to init time.
- Registration is fast:
  ![image](https://user-images.githubusercontent.com/11205048/189164959-f3fca173-19bc-4682-b150-f13a586387bf.png)
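
A hedged sketch of the decorator-based registration (assuming the `registration.onnx_symbolic` helper; the op is hypothetical):

```python
import functools
from torch.onnx._internal import registration

_onnx_symbolic = functools.partial(registration.onnx_symbolic, opset=9)

# One function can be registered for several aten names at import time.
@_onnx_symbolic("aten::relu")
@_onnx_symbolic("aten::relu_")
def relu_sketch(g, self):
    return g.op("Relu", self)
```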

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84448
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-09-22 06:25:24 +00:00
Justin Chu
388368b699 [ONNX] Fix type annotations and enable type checking for all apis (#84091)
Enable runtime type checking for all torch.onnx public APIs, symbolic functions, and most helpers (minus two that do not have a checkable type: `_.JitType` does not exist) by adding the beartype decorator. Fix type annotations to make unit tests green.

Profile:

export `torchvision.models.alexnet(pretrained=True)`

```
with runtime type checking: 21.314 / 10 passes
without runtime type checking: 20.797 / 10 passes

+ 2.48%
```
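
For illustration, the third-party decorator this change builds on (torch.onnx wraps it behind an internal helper, so this shows only the underlying behavior):

```python
from beartype import beartype

@beartype
def scale(x: float, factor: int) -> float:
    return x * factor

scale(1.5, 2)      # fine
scale(1.5, "two")  # raises a beartype type-violation error at call time
```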
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84091
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2022-09-03 01:40:18 +00:00
PyTorch MergeBot
d8cc8368ab Revert "[ONNX] Fix type annotations and enable type checking for all apis (#84091)"
This reverts commit 6446da1730.

Reverted https://github.com/pytorch/pytorch/pull/84091 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-08-28 12:28:58 +00:00
Justin Chu
6446da1730 [ONNX] Fix type annotations and enable type checking for all apis (#84091)
Enable runtime type checking for all torch.onnx public APIs, symbolic functions, and most helpers (minus two that do not have a checkable type: `_.JitType` does not exist) by adding the beartype decorator. Fix type annotations to make unit tests green.

Profile:

export `torchvision.models.alexnet(pretrained=True)`

```
with runtime type checking: 21.314 / 10 passes
without runtime type checking: 20.797 / 10 passes

+ 2.48%
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84091
Approved by: https://github.com/BowenBao
2022-08-27 04:40:41 +00:00
Justin Chu
3dfb8dfcf3 [ONNX] Use errors.SymbolicValueError for more context (#83332)
Replace runtime errors in torch.onnx with `errors.SymbolicValueError` for more context around jit values.

- Extend `_unimplemented`, `_onnx_unsupported`, `_onnx_opset_unsupported`, `_onnx_opset_unsupported_detailed` errors to include JIT value information
- Replace plain RuntimeError with `errors.SymbolicValueError`
- Clean up: Use `_is_bool` to replace string comparison on jit types
- Clean up: Remove the todo `Remove type ignore after #81112`
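
A hedged sketch (hypothetical symbolic function) of the new error type:

```python
from torch.onnx import errors

def unsupported_op_sketch(g, self):
    # The offending JIT value is attached so the message can include its node
    # kind and source location.
    raise errors.SymbolicValueError(
        "ONNX export of this operator is unsupported for this input.", self
    )
```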

#77316
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83332
Approved by: https://github.com/AllenTiTaiWang, https://github.com/thiagocrepaldi, https://github.com/BowenBao
2022-08-23 05:39:17 +00:00
Justin Chu
80cfafc385 [ONNX] Add quantization support to more single output ops (#83008)
#80039

- Implement quantization support for single output ops
  - quantized::sigmoid
  - quantized::instance_norm
  - aten::reshape
  - aten::reshape_as
  - aten::sum
  - aten::mean
  - aten::prod
  - aten::t
  - aten::numpy_T
  - aten::expand
  - aten::expand_as
  - aten::embedding
  - aten::embedding_bag
  - aten::view
  - aten::select
  - aten::eq
  - aten::ne
  - aten::gt
  - aten::lt
  - aten::le
  - aten::ge
  - quantized::layer_norm
  - aten::elu
  - aten::selu
  - aten::maximum
  - aten::minimum
  - aten::amax
  - aten::amin
  - aten::hardtanh
  - aten::hardswish
  - quantized::group_norm
  - aten::as_strided
  - quantized::leaky_relu
  - aten::transpose
- Avoid modifying functions in `quantized_args`; have the wrapper close over `scale` and `zero_point` instead (for purity; see the sketch after this list)
- Remove the magic number and assign it to INT64_MAX
- Implement `_unpack_quantized_tensor` to handle quantized tensor unpacking, separating the logic from tuple unpacking and allowing clearer error handling
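
A hedged sketch of the single-output pattern mentioned above (hypothetical symbolic function; `quantized_args` dequantizes flagged inputs and requantizes the output):

```python
from torch.onnx import symbolic_helper

@symbolic_helper.quantized_args(True)  # the first argument may be quantized
def sigmoid_sketch(g, self):
    return g.op("Sigmoid", self)
```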
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83008
Approved by: https://github.com/BowenBao
2022-08-23 00:39:24 +00:00
Justin Chu
c6cdca5c68 [ONNX] Reland #81953 Type utility for converting among JIT, torch and ONNX data types (#82995)
Re-land #81953

Add `_type_utils` for handling data type conversion among JIT, torch and ONNX.

- Replace dictionary / list indexing with methods in ScalarType
- Breaking: **Remove ScalarType from `symbolic_helper`** and move it to `_type_utils`
- Deprecated: "cast_pytorch_to_onnx", "pytorch_name_to_type", "scalar_name_to_pytorch", "scalar_type_to_onnx", "scalar_type_to_pytorch_type" in `symbolic_helper`
- Deprecate the type mappings and lists. Remove all internal references
- Move _cast_func_template to opset 9 and remove its reference elsewhere (clean up). Added documentation for easy discovery

Why: List / dictionary indexing and lookup are error-prone and convoluted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82995
Approved by: https://github.com/kit1980
2022-08-08 23:43:43 +00:00
PyTorch MergeBot
b170a52a09 Revert "[ONNX] Type utility for converting among JIT, torch and ONNX data types (#81953)"
This reverts commit 6ddf4c6f58.

Reverted https://github.com/pytorch/pytorch/pull/81953 on behalf of https://github.com/kit1980 due to Broke internal builds by removing functions without deprecation
2022-08-07 20:15:28 +00:00
Justin Chu
6ddf4c6f58 [ONNX] Type utility for converting among JIT, torch and ONNX data types (#81953)
Add `_type_utils` for handling data type conversion among JIT, torch and ONNX.

- Replace dictionary / list indexing with methods in ScalarType
- Breaking: **Remove ScalarType from `symbolic_helper`** and move it to `_type_utils`
- Breaking: **Remove "cast_pytorch_to_onnx", "pytorch_name_to_type", "scalar_name_to_pytorch", "scalar_type_to_onnx", "scalar_type_to_pytorch_type"** from `symbolic_helper`
- Deprecate the type mappings and lists. Remove all internal references
- Move _cast_func_template to opset 9 and remove its reference elsewhere (clean up). Added documentation for easy discovery

Why: List / dictionary indexing and lookup are error-prone and convoluted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81953
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-08-05 22:24:45 +00:00
Huy Do
6ea422dd0b Format torch/onnx with ufmt (#82137)
This is the last batch for the new ufmt (black + usort) linter. After this, the black linter can finally be replaced. The previous PR, formatting the ONNX tests, was #81335.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82137
Approved by: https://github.com/kit1980, https://github.com/AllenTiTaiWang
2022-07-25 22:42:21 +00:00
Justin Chu
ed1da2a9df [ONNX] Quantization support for quantized::cat (#79826)
- Add support for quantized `cat`
- Add type annotations for helper functions

Now we can export

```python
import torchvision.models.quantization as models
from torchvision import transforms

torch_model = models.inception_v3(pretrained=True, quantize=True)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79826
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-07-12 15:44:23 +00:00
Justin Chu
06710ec1b9 [ONNX] Reland: Add quantization support to _avg_pool opset 9 and clean up (#81267)
- Add quantization support to _avg_pool opset 9
- Clean up reused / unused variables in avgpool helper
- Add types
- Sort `__all__`

Reland #79793 (Added `argsort` in `__all__`)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81267
Approved by: https://github.com/BowenBao
2022-07-11 23:58:28 +00:00
PyTorch MergeBot
b98b9eaae5 Revert "[ONNX] Add quantization support to _avg_pool opset 9 and clean up (#79793)"
This reverts commit 356341a3ec.

Reverted https://github.com/pytorch/pytorch/pull/79793 on behalf of https://github.com/malfet due to Broke trunk, see 356341a3ec
2022-07-09 02:40:28 +00:00
Justin Chu
356341a3ec [ONNX] Add quantization support to _avg_pool opset 9 and clean up (#79793)
- Add quantization support to _avg_pool opset 9
- Clean up reused / unused variables in avgpool helper
- Add types
- Sort `__all__`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79793
Approved by: https://github.com/BowenBao
2022-07-08 23:14:01 +00:00
Justin Chu
c8b9b6266b [ONNX] Fix arg type in _set_training_mode (#78583)
When `TrainingMode.PRESERVE` is set for export, the exporter used to change the model's training mode based on some internal logic. Now we respect the option and do not touch the model's training state.

- Previously, `_set_training_mode`'s behavior didn't match what the global variable expects. This PR removes the deprecated `_set_training_mode` and makes the type correct.
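
A minimal sketch (hypothetical model and file name) of the preserved state:

```python
import torch

model = torch.nn.Linear(4, 4)
model.train()
torch.onnx.export(
    model, torch.randn(1, 4), "linear.onnx",
    training=torch.onnx.TrainingMode.PRESERVE,
)
assert model.training  # the exporter no longer flips the training flag
```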
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78583
Approved by: https://github.com/BowenBao
2022-06-15 23:47:12 +00:00
Justin Chu
d3ef5c3fa3 [ONNX] Clean up __init__ in torch.onnx (#78446)
- Move definitions in `__init__` to internal classes and expose them by importing to init (prevent circular dependencies): https://github.com/pytorch/pytorch/wiki/torch.onnx-Namespacing
  - Context classes and enums are moved to `_exporter_states.py`
  - Exceptions are moved to `errors.py`
- Define `__all__` for torch.onnx. https://github.com/pytorch/pytorch/wiki/Public-API-definition-and-documentation
- Moved `utils.__IN_ONNX_EXPORT` to `GLOBALS.in_onnx_export`
- Deprecated `torch.onnx._export`

Precedes #78231

Using this as an aid for finding public functions:

```python
list(filter(lambda x: not x.startswith("_"), torch.onnx.utils.__dict__.keys()))
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78446
Approved by: https://github.com/BowenBao
2022-06-14 04:35:06 +00:00
Justin Chu
3f14ca8f02 [ONNX] Compare types for DeviceObjType instead of strings (#78114)
- Turn all string comparisons on node types into `isinstance` checks
- Update error message in the device op to include the unexpected type's name: `RuntimeError: Unsupported: ONNX export of operator prim::device, output type should be 'DeviceObjType', not '<some unknown type>'. Please feel free to request support or submit a pull request on PyTorch GitHub.`
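
A hedged sketch of the pattern (hypothetical helper name):

```python
from torch import _C

def is_device_value(value: "_C.Value") -> bool:
    # Compare the JIT type with isinstance rather than matching its string form.
    return isinstance(value.type(), _C.DeviceObjType)
```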

Tested:

Unit test in `test/onnx/test_pytorch_onnx_onnxruntime.py::TestONNXRuntime_opset13::test_to_device`

Follow up of #78085
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78114
Approved by: https://github.com/garymm
2022-06-07 16:57:24 +00:00
Justin Chu
def778527e [ONNX] Quantization support for five ops (#78103)
- Add quantization support for `interpolate`, `avgpool`, `sigmoid` and `add_relu`
- Return the inputs to ListUnpack if the previous node is ListConstruct, so that `ListConstruct` and `ListUnpack` cancel out and are removed in the JIT passes. ONNX doesn't support them.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78103
Approved by: https://github.com/garymm
2022-06-03 20:22:07 +00:00
Justin Chu
161e931156 [ONNX] Modernize python syntax (#77935)
Use pyupgrade (https://github.com/asottile/pyupgrade) and flynt to modernize Python syntax

```sh
pyupgrade --py36-plus --keep-runtime-typing torch/onnx/**/*.py
pyupgrade --py36-plus --keep-runtime-typing test/onnx/**/*.py
flynt torch/onnx/ --line-length 120
```

- Use f-strings for string formatting
- Use the new `super()` syntax for class initialization
- Use dictionary / set comprehension
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77935
Approved by: https://github.com/BowenBao
2022-05-24 22:52:37 +00:00
Justin Chu
0d76299ff7 [ONNX] Clean up module imports (#77423)
Cleaning up onnx module imports to prepare for updating `__init__`.

- Simplify importing the `_C` and `_C._onnx` namespaces
- Remove the alias of the symbolic_helper module in imports
- Remove any module-level function imports. Import modules instead
    - Alias `symbolic_opsetX` as `opsetX`
- Fix some docstrings

Requires:
- https://github.com/pytorch/pytorch/pull/77448
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77423
Approved by: https://github.com/BowenBao
2022-05-20 01:56:24 +00:00
Justin Chu
563c2719bf [ONNX] Refactor to remove inline imports - attempt 2 (#77448)
Re-land
- #77142

(diff: https://github.com/pytorch/pytorch/compare/c08b8f0..justinchuby:justinchu/remove-patch2)

Fixed:
- Delay import symbolic_opsets in the registry.

Tested locally with torchvision
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77448
Approved by: https://github.com/garymm
2022-05-16 14:44:24 +00:00
PyTorch MergeBot
6b366dd3c1 Revert "[ONNX] Refactor to remove inline imports (#77142)"
This reverts commit c08b8f0967.

Reverted https://github.com/pytorch/pytorch/pull/77142 on behalf of https://github.com/malfet
2022-05-13 19:44:17 +00:00
Justin Chu
c08b8f0967 [ONNX] Refactor to remove inline imports (#77142)
Reduce circular dependencies

- Lift constants and flags from `symbolic_helper` to `_constants` and `_globals`
    - Standardized constant naming to make it consistent
- Make `utils` strictly dependent on `symbolic_helper`, removing inline imports from symbolic_helper
- Move side effects from `utils` to `_patch_torch`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77142
Approved by: https://github.com/garymm, https://github.com/BowenBao
2022-05-13 03:46:33 +00:00
Justin Chu
1dd7336441 [ONNX] Add quantization support for maxpool (#77393)
Support quantization for maxpool exporting to ONNX.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77393
Approved by: https://github.com/BowenBao
2022-05-13 00:32:25 +00:00
Justin Chu
5dd1c67776 [ONNX] Format ONNX python with black
Format all onnx python code with black and isort with

```sh
isort torch/onnx/ test/onnx
black torch/onnx/ test/onnx
```

Updated lintrunner config to include these paths.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76754
Approved by: https://github.com/suo, https://github.com/BowenBao
2022-05-05 00:19:22 +00:00
BowenBao
8d31706b9e [ONNX] Support restricted quantized range for activation.
PyTorch restricts activations to be in the range (0, 127). In ONNX, the supported
ranges are (0, 255) and (-128, 127), respectively for uint8 and int8. This PR
extends support for the range (0, 127) by adding additional clipping when it is
detected.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76055

Approved by: https://github.com/garymm
2022-04-25 01:17:21 +00:00
BowenBao
cada2cd3ae [ONNX] Support per channel quantization
Extend quantization support to per-channel quantization. Per-channel quantized
tensors carry an extra attribute `axis`, most commonly on the quantized weight
of a Convolution or Linear module. The PR adds support for correctly parsing
the `axis` attribute and mapping it to the ONNX representation in
`QuantizeLinear` and `DequantizeLinear`.
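
A hedged sketch (hypothetical helper; not the PR's actual code path) of the mapping:

```python
def dequantize_per_channel_sketch(g, int_weight, scales, zero_points, axis):
    # Per-channel scales/zero_points plus the parsed axis become the `axis`
    # attribute of DequantizeLinear (opset 13).
    return g.op("DequantizeLinear", int_weight, scales, zero_points, axis_i=axis)
```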

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76002

Approved by: https://github.com/garymm
2022-04-25 01:14:57 +00:00
BowenBao
6305e572ed [ONNX] Support dynamic scale & zero_point for fake_quantize_per_tensor_affine
Dynamic scale & zero_point requires opset 13 `ONNX::QuantizeLinear`
and `ONNX::DequantizeLinear`.
Improved error message when scale is not constant for opset 10 symbolic function.
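
A hedged repro sketch (hypothetical module and values): scale and zero_point arrive as tensors, so the opset 13 ops, which accept them as inputs, are required:

```python
import torch

class FakeQuant(torch.nn.Module):
    def forward(self, x, scale, zero_point):
        return torch.fake_quantize_per_tensor_affine(x, scale, zero_point, 0, 255)

torch.onnx.export(
    FakeQuant(),
    (torch.randn(4, 4), torch.tensor(0.1), torch.tensor(0, dtype=torch.int32)),
    "fake_quant.onnx",
    opset_version=13,
)
```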
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75697
Approved by: https://github.com/garymm
2022-04-13 19:17:58 +00:00
ganler
c1f0e6e763 [ONNX] Make Non-Float Op Exportation Compatible to Avoid Invalid ONNX Models
There are a few ONNX operators that do not support non-float (e.g., integer) inputs in their early versions. For example, Clip supports non-float types only since [opset 12](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#type-constraints-280); older versions like [opset 6](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#type-constraints-107) cannot deal with integer types.

I initially found such a bug in Clip (https://github.com/pytorch/pytorch/pull/70584), but later found more:
1. Clip < 12;
2. Min/Max < 12;
3. ReLU < 14;
4. Pad < 11;

In PyTorch, if we export Max-11 with integer inputs, the export actually succeeds; however, the resulting model fails when imported by other frameworks like ONNX Runtime.

```python
import torch

class Net(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(self, x: torch.Tensor):
        return torch.max(x, x + 1)

net = Net()
onnx_model = 'test.onnx'

torch.onnx.export(net, (torch.zeros((3, 3), dtype=torch.int32),),
                  onnx_model, verbose=True, opset_version=11)
```

This is unexpected behavior, as we want to ensure that every model exported by PyTorch is valid (https://github.com/pytorch/pytorch/pull/70584#issuecomment-1020636579). Theoretically, we could simply forbid such cases (e.g., `Clip<int>` < 12, `ReLU<int>` < 14). But we can instead enhance the compatibility and flexibility of PyTorch by casting the inputs of those operators to float tensors, letting the float version of the operator run, and then casting the result back to the original type.

This PR implements the second approach to achieve better compatibility in PyTorch.
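
A hedged sketch (hypothetical helper; the original integer type is assumed to be INT32) of the cast-and-cast-back idea for an op like Clip below opset 12:

```python
import torch._C._onnx as _C_onnx

def clip_int_sketch(g, x, min_val, max_val):
    x_float = g.op("Cast", x, to_i=_C_onnx.TensorProtoDataType.FLOAT)
    clipped = g.op("Clip", x_float, min_f=min_val, max_f=max_val)
    # Cast back so downstream consumers still see the original integer type.
    return g.op("Cast", clipped, to_i=_C_onnx.TensorProtoDataType.INT32)
```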

@garymm  @thiagocrepaldi

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72401
Approved by: https://github.com/garymm, https://github.com/thiagocrepaldi
2022-04-11 23:26:44 +00:00
BowenBao
8280919fe6 [ONNX] Export bias requantize steps to ONNX
The original `bias` is a float in PyTorch; quantization is applied in the kernel.
To mimic this behavior in ONNX, export the `bias` quantization step,
then append the dequantization step so `bias` is ready for unquantized operators.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73336
2022-03-01 01:01:23 +00:00
BowenBao
80291dff43 [ONNX] Add torch.nan_to_num and torch.maximum/minimum symbolic (#72090)
* Add nan_to_num symbolic

* Restructure if statements

* Add torch.maximum and torch.minimum support

* Squash tests

* Add dependency on input dtype

* Add documentation

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73103
2022-02-23 06:38:11 +00:00
BowenBao
40de6b80ee [ONNX] Add infra for quantized model export and support quantized mobilenet v3 (#72215)
* Add infrastructure and helper functions to enable future work for other quantized operators and models.
* Add export for quantized operators needed by torchvision mobilenet v3 large.
    * ATen namespace: hardsigmoid, flatten, adaptive_avg_pool, quantize_per_tensor, dequantize.
    * Quantized namespace: conv2d, conv2d_relu, hardswish, add, mul.
* Numerous bug fixes in unpack_quantized_weight.cpp, symbolic functions, and unit tests.

Co-authored-by: BowenBao <bowbao@microsoft.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73102
2022-02-23 06:22:58 +00:00
BowenBao
32f6a1e2a2 [ONNX] First version of quantized model export: Support quantized.Linear (#69232)
Co-authored-by: David Fan <jiafa@microsoft.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72986
2022-02-18 18:27:26 +00:00
vfdev-5
3da2e09c9b Added antialias flag to interpolate (CPU only, bilinear) (#65142)
Summary:
Description:
- Added antialias flag to interpolate (CPU only)
  - forward and backward for bilinear mode
  - added tests
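
The new flag in action (example sizes taken from the benchmark below):

```python
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 906, 438)
y = F.interpolate(x, size=(320, 196), mode="bilinear",
                  align_corners=False, antialias=True)
print(y.shape)  # torch.Size([1, 3, 320, 196])
```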

### Benchmarks

<details>
<summary>
Forward pass, CPU. PTH interpolation vs PIL
</summary>

Cases:
- PTH RGB 3 Channels, float32 vs PIL RGB uint8 (apply vs pears)
- PTH 1 Channel, float32 vs PIL 1 Channel Float

Code: https://gist.github.com/vfdev-5/b173761a567f2283b3c649c3c0574112

```
# OMP_NUM_THREADS=1 python bench_interp_aa_vs_pillow.py

Torch config: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - CPU capability usage: AVX2
  - CUDA Runtime 11.1
  - NVCC architecture flags: -gencode;arch=compute_75,code=sm_75
  - CuDNN 8.0.5
  - Build settings: BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_PYTORCH_QNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=1, USE_CUDNN=1, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=0, USE_OPENMP=ON,

Num threads: 1
[------------------------ Downsampling: torch.Size([1, 3, 906, 438]) -> (320, 196) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                2.9                |          3.1
      channels_last non-contiguous torch.float32  |                2.6                |          3.6

Times are in milliseconds (ms).

[------------------------ Downsampling: torch.Size([1, 3, 906, 438]) -> (460, 220) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                3.4                |          4.0
      channels_last non-contiguous torch.float32  |                3.4                |          4.8

Times are in milliseconds (ms).

[------------------------ Downsampling: torch.Size([1, 3, 906, 438]) -> (120, 96) -------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                1.6                |          1.8
      channels_last non-contiguous torch.float32  |                1.6                |          1.9

Times are in milliseconds (ms).

[----------------------- Downsampling: torch.Size([1, 3, 906, 438]) -> (1200, 196) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                9.0                |          11.3
      channels_last non-contiguous torch.float32  |                8.9                |          12.5

Times are in milliseconds (ms).

[----------------------- Downsampling: torch.Size([1, 3, 906, 438]) -> (120, 1200) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                2.1                |          1.8
      channels_last non-contiguous torch.float32  |                2.1                |          3.4

Times are in milliseconds (ms).

[--------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (320, 196) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |               1.2               |          1.0

Times are in milliseconds (ms).

[--------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (460, 220) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |               1.4               |          1.3

Times are in milliseconds (ms).

[--------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (120, 96) ---------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |              719.9              |         599.9

Times are in microseconds (us).

[-------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (1200, 196) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |               3.7               |          3.5

Times are in milliseconds (ms).

[-------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (120, 1200) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |              834.4              |         605.7

Times are in microseconds (us).

```

</details>

Code is moved from torchvision: https://github.com/pytorch/vision/pull/4208

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65142

Reviewed By: mrshenli

Differential Revision: D32432405

Pulled By: jbschlosser

fbshipit-source-id: b66c548347f257c522c36105868532e8bc1d4c6d
2021-11-17 09:10:15 -08:00