Cleaning up onnx module imports to prepare for updating `__init__`.
- Simplify importing the `_C` and `_C._onnx` namespaces
- Remove alias of the symbolic_helper module in imports
- Remove any module level function imports. Import modules instead
- Alias `symbolic_opsetx` as `opsetx`
- Fix some docstrings
Requires:
- https://github.com/pytorch/pytorch/pull/77448
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77423
Approved by: https://github.com/BowenBao
Reduce circular dependencies
- Lift constants and flags from `symbolic_helper` to `_constants` and `_globals`
- Standardize constant naming for consistency
- Make `utils` strictly dependent on `symbolic_helper`, removing inline imports from symbolic_helper
- Move side effects from `utils` to `_patch_torch`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77142
Approved by: https://github.com/garymm, https://github.com/BowenBao
PyTorch restricts activations to the range (0, 127).
In ONNX, the supported ranges are (0, 255) and (-128, 127),
for uint8 and int8 respectively. This PR extends support for the range
(0, 127) by adding additional clipping when that range is detected.
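As an illustrative sketch only (not the exporter's actual implementation), the extra clipping amounts to clamping ONNX uint8 values, which span [0, 255], into PyTorch's (0, 127) activation range; the function name below is hypothetical:

```python
import numpy as np

# Hypothetical helper (not PyTorch exporter code): ONNX uint8 quantized
# values span [0, 255], so an extra clip keeps them inside PyTorch's
# (0, 127) activation range.
def clip_to_pytorch_activation_range(q):
    return np.clip(q, 0, 127)

q = np.array([0, 64, 127, 200, 255], dtype=np.uint8)
print(clip_to_pytorch_activation_range(q).tolist())  # [0, 64, 127, 127, 127]
```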
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76055
Approved by: https://github.com/garymm
Extend quantization support to per-channel quantization.
An extra attribute `axis` can be found for per channel quantized tensors,
most commonly in quantized weight of Convolution or Linear module.
This PR adds support for correctly parsing the `axis` attribute and mapping it to the
ONNX representation in `QuantizeLinear` and `DequantizeLinear`.
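For intuition, here is a hypothetical sketch (not PyTorch exporter code) of what ONNX `DequantizeLinear` computes when `axis` is present: the axis selects the dimension that carries one scale and zero point per channel, and the function name is an assumption for illustration:

```python
import numpy as np

# Hypothetical sketch of per-channel DequantizeLinear: `axis` picks the
# dimension along which scale/zero_point broadcast, one value per channel.
def dequantize_linear_per_channel(q, scale, zero_point, axis=0):
    shape = [1] * q.ndim
    shape[axis] = -1  # broadcast scale/zero_point along `axis`
    s = scale.astype(np.float32).reshape(shape)
    z = zero_point.astype(np.float32).reshape(shape)
    return (q.astype(np.float32) - z) * s

q = np.array([[10, 20], [30, 40]], dtype=np.int8)   # axis 0 = output channels
scale = np.array([0.1, 0.2], dtype=np.float32)      # one scale per channel
zero_point = np.array([0, 10], dtype=np.int8)       # one zero point per channel
print(dequantize_linear_per_channel(q, scale, zero_point))
```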
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76002
Approved by: https://github.com/garymm
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62764
Fixes #58733
- Support dynamic interleave for cases with dynamic repeat values
- Moved the repeat_interleave symbolic from opset 11 to opset 13, since sequence-typed loop outputs are needed for this change
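A usage sketch of the dynamic case this change enables: per-element repeat counts supplied as a tensor rather than a constant.

```python
import torch

# Dynamic repeat values: each element of `x` is repeated according to
# the corresponding entry in `repeats`.
x = torch.tensor([1, 2, 3])
repeats = torch.tensor([1, 2, 3])
y = torch.repeat_interleave(x, repeats)
print(y.tolist())  # [1, 2, 2, 3, 3, 3]
```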
Test Plan: Imported from OSS
Reviewed By: SplitInfinity
Differential Revision: D30375179
Pulled By: msaroufim
fbshipit-source-id: 787f96bf91d124fd0483761088c5f4ae930d96a9
Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59537
PyTorch sum over an empty tensor gives 0, while ONNX produces an error.
torch.sum is translated into the onnx::ReduceSum op. Per the definition of ReduceSum, update the `keepdims` attribute for this scenario.
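A usage sketch of the behavior the exporter must preserve: summing an empty tensor yields 0 in PyTorch rather than an error.

```python
import torch

# Reducing over an empty dimension produces zeros, not an error.
x = torch.zeros(0, 3)
s = torch.sum(x, dim=0)
print(s.tolist())  # [0.0, 0.0, 0.0]

total = torch.sum(torch.tensor([]))
print(total.item())  # 0.0
```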
Test Plan: Imported from OSS
Reviewed By: nikithamalgifb, ansley
Differential Revision: D29046604
Pulled By: SplitInfinity
fbshipit-source-id: 6f5f3a66cb8eda8b5114b8474dda6fcdbae73469
Co-authored-by: fatcat-z <jiz@microsoft.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58695
As PEP8 says: "Pick a rule and stick to it." [1]
[1] https://www.python.org/dev/peps/pep-0008/#string-quotes
Test Plan: Imported from OSS
Reviewed By: driazati
Differential Revision: D28714811
Pulled By: SplitInfinity
fbshipit-source-id: c95103aceb1725c17c034dc6fc8216627f189548
Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53312
- Add support for aten::repeat_interleave
- NOTE: Also fixes cases of the split op where input tensor sizes are not known but `_outputs` is provided
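A usage sketch of the split case the note mentions: the number of outputs is known to the caller even when the input sizes are dynamic.

```python
import torch

# torch.split with a fixed chunk size; the number of outputs is
# determined by the caller, the last chunk may be smaller.
x = torch.arange(5)
parts = torch.split(x, 2)
print([p.tolist() for p in parts])  # [[0, 1], [2, 3], [4]]
```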
Test Plan: Imported from OSS
Reviewed By: pbelevich, malfet
Differential Revision: D26922422
Pulled By: SplitInfinity
fbshipit-source-id: 5362d0d8ccfdc14c15e1ae73fd70c4c113f823e6
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857
These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
- `GLOSSARY.md`
- `aten/src/ATen/core/op_registration/README.md`
- `scripts/README.md`
- `torch/csrc/jit/codegen/fuser/README.md`
The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```
I looked over the auto-generated changes and didn't see anything that looked problematic.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406
Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377
This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348
Reviewed By: walterddr, seemethere
Differential Revision: D26856620
Pulled By: samestep
fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
Summary:
Fixes https://github.com/pytorch/pytorch/issues/39502
This PR adds support for exporting **fake_quantize_per_channel_affine** to a pair of QuantizeLinear and DequantizeLinear. Per-tensor support was added by PR https://github.com/pytorch/pytorch/pull/39738.
The `axis` attribute of QuantizeLinear and DequantizeLinear, which is required for per-channel support, was added in opset 13 by https://github.com/onnx/onnx/pull/2772.
[update 1/20/2021]: opset 13 is now supported on master, and the added function is properly tested. The code has also been rebased onto the new master.
The function is also tested offline with the following code
```python
import torch
from torch import quantization
from torchvision import models
qat_resnet18 = models.resnet18(pretrained=True).eval().cuda()
qat_resnet18.qconfig = quantization.QConfig(
    activation=quantization.default_fake_quant,
    weight=quantization.default_per_channel_weight_fake_quant)
quantization.prepare_qat(qat_resnet18, inplace=True)
qat_resnet18.apply(quantization.enable_observer)
qat_resnet18.apply(quantization.enable_fake_quant)

dummy_input = torch.randn(16, 3, 224, 224).cuda()
_ = qat_resnet18(dummy_input)
for module in qat_resnet18.modules():
    if isinstance(module, quantization.FakeQuantize):
        module.calculate_qparams()
qat_resnet18.apply(quantization.disable_observer)

qat_resnet18.cuda()

input_names = ["actual_input_1"]
output_names = ["output1"]
torch.onnx.export(qat_resnet18, dummy_input, "quant_model.onnx",
                  verbose=True, opset_version=13)
```
It can generate the desired graph.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42835
Reviewed By: houseroad
Differential Revision: D26293823
Pulled By: SplitInfinity
fbshipit-source-id: 300498a2e24b7731b12fa2fbdea4e73dde80e7ea
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51524
* Add unsafe_chunk() support and test in opset 13.
* Use _unsqueeze_helper instead of the Unsqueeze operator
* Cast the splits into long.
* Change the test to a fixed dimension.
* Update test_pytorch_onnx_onnxruntime.py
* Disable test_loop_with_list for opset 13.
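A usage sketch of the op being exported: unsafe_chunk splits a tensor into a fixed number of chunks without the versioning safety checks of chunk().

```python
import torch

# unsafe_chunk splits along dim 0 into the requested number of chunks.
x = torch.arange(6)
a, b, c = torch.unsafe_chunk(x, 3)
print(a.tolist(), b.tolist(), c.tolist())  # [0, 1] [2, 3] [4, 5]
```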
Test Plan: Imported from OSS
Reviewed By: pbelevich
Differential Revision: D26203123
Pulled By: SplitInfinity
fbshipit-source-id: b273aeff8339faa0e8e9f1fcfbf877d1b703209f
Co-authored-by: Negin Raoof <neginmr@utexas.edu>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51518
* Enable remaining tests in opset 13
* Add comments for error version test info
* Fix comments: opset 12 unbind problem
* Add ignore[no-redef]
* Fix format
Test Plan: Imported from OSS
Reviewed By: pbelevich
Differential Revision: D26203122
Pulled By: SplitInfinity
fbshipit-source-id: e7d95bd2ce13f79f11965be82f640379cd55ff0f
Co-authored-by: hwangdeyu <deyhuang@qq.com>