Currently we are unable to use ONNX's SpaceToDepth operator because it lacks the mode attribute (`mode_s`), so we add an alternative symbolic in opset 9 to support pixel_unshuffle
- Adds support for pixel_unshuffle in opset9
- Adds support for dynamic input shapes for pixel_shuffle and pixel_unshuffle
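The alternative symbolic can express pixel_unshuffle with plain Reshape/Transpose ops instead of SpaceToDepth. A minimal eager-mode sketch of that decomposition (illustrative, not the exporter's exact code):

```python
import torch

def pixel_unshuffle_decomposed(x: torch.Tensor, r: int) -> torch.Tensor:
    # Decompose pixel_unshuffle into reshape/permute steps that map onto
    # ONNX Reshape/Transpose, avoiding SpaceToDepth entirely.
    n, c, h, w = x.shape
    x = x.reshape(n, c, h // r, r, w // r, r)
    x = x.permute(0, 1, 3, 5, 2, 4)
    return x.reshape(n, c * r * r, h // r, w // r)

x = torch.arange(16.0).reshape(1, 1, 4, 4)
assert torch.equal(pixel_unshuffle_decomposed(x, 2), torch.pixel_unshuffle(x, 2))
```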
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72449
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69548
* Add Concat to Scalar type analysis pass
By using scalar type analysis for Concat, the exported model gets
automatic type promotion for Concat nodes, for example with mixed fp16
and fp32 inputs.
Unit tests based on the original PR https://github.com/pytorch/pytorch/pull/24378/
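The promotion the pass performs can be sketched in eager mode: inputs with mixed dtypes are cast to their common promoted dtype before concatenation (function name is illustrative, not the pass's internals):

```python
import torch

def promote_and_concat(tensors, dim=0):
    # Find the common promoted dtype across all inputs, then cast and concat,
    # mirroring what scalar type analysis arranges for the exported graph.
    common = tensors[0].dtype
    for t in tensors[1:]:
        common = torch.promote_types(common, t.dtype)
    return torch.cat([t.to(common) for t in tensors], dim=dim)

a = torch.ones(2, dtype=torch.float16)
b = torch.ones(2, dtype=torch.float32)
assert promote_and_concat([a, b]).dtype == torch.float32
```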
* Fix UTs
Test Plan: Imported from OSS
Reviewed By: msaroufim
Differential Revision: D32994268
Pulled By: malfet
fbshipit-source-id: 0deab88b0bb1e396770690af27730accb64fcf63
(cherry picked from commit a99322cadf)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68491
* Allows implementing symbolic functions for domains other than `aten`, for example `prim`, in symbolic_opset#.py.
* Allows symbolic functions to access extra context if needed, through `SymbolicFunctionState`.
* In particular, the `prim::PythonOp` special case can access the node without it being passed through the inputs. Updates will be made downstream, and in a follow-up PR we will remove the previous workaround in the exporter.
* `prim::Loop`, `prim::If`, etc are now moved outside of `_run_symbolic_function` from utils.py, and to symbolic_opset9.py.
Motivation for this change:
- Better maintainability and reduced complexity. It is easier to add symbolics for operators, both simple ones and complex ones (which need additional context), without the former needing to know about the existence of the latter.
- The design idea was long outdated. prim ops are no longer rare special cases and shouldn't all be handled inside `_run_symbolic_function`; as a result that function had become too clumsy. There were also prim op symbolics added in symbolic_opset#.py with the signature `prim_[opname]`, creating separation and confusion.
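The dispatch idea can be sketched in a few lines: symbolics are registered and looked up by a fully qualified `domain::op` name rather than assuming the `aten` domain. This is a hypothetical illustration, not the exporter's actual registry code:

```python
# Hypothetical sketch of domain-aware symbolic dispatch.
_SYMBOLIC_REGISTRY = {}

def register_symbolic(qualified_name, fn):
    # qualified_name is "<domain>::<op>", e.g. "prim::Loop" or "aten::add".
    _SYMBOLIC_REGISTRY[qualified_name] = fn

def run_symbolic_function(domain, op, *args):
    fn = _SYMBOLIC_REGISTRY.get(f"{domain}::{op}")
    if fn is None:
        raise NotImplementedError(f"no symbolic for {domain}::{op}")
    return fn(*args)

register_symbolic("prim::Constant", lambda v: ("Constant", v))
assert run_symbolic_function("prim", "Constant", 1) == ("Constant", 1)
```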
Test Plan: Imported from OSS
Reviewed By: jansel
Differential Revision: D32483782
Pulled By: malfet
fbshipit-source-id: f9affc31b1570af30ffa6668da9375da111fd54a
Co-authored-by: BowenBao <bowbao@microsoft.com>
(cherry picked from commit 1e04ffd2fd)
Summary:
Convert type comments in caffe2/test/onnx/
Produced by running:
```
python -m libcst.tool codemod convert_type_comments.ConvertTypeComment caffe2/test/onnx/
```
from the parent directory.
One question is whether we actually want to scrap type comments here. There are some jit tests where we're explicitly aiming to validate py2-style type comments; I don't think this test is one of those cases, but if I'm misreading it I can close the PR.
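For context, the codemod rewrites py2-style type comments into inline annotations; a before/after sketch (the function here is illustrative):

```python
# Before the codemod (py2-style type comment):
# def f(x):  # type: (int) -> str
#     return str(x)

# After the codemod (inline PEP 484 annotations):
def f(x: int) -> str:
    return str(x)

assert f(3) == "3"
```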
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72632
Reviewed By: msaroufim
Differential Revision: D34112196
Pulled By: stroxler
fbshipit-source-id: a3d18cb36e7eeb4af9be781e98776bf24b96b854
(cherry picked from commit 9301019e51)
Summary:
- PyTorch and ONNX both support BFloat16; add it here to unblock some mixed-precision training models.
- Supports the PyTorch TNLG model using BFloat16 tensors for the inputs/outputs of the layers that run on the NPU.
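A quick eager-mode check of the dtype being plumbed through to export (the ONNX side maps it to the BFLOAT16 element type):

```python
import torch

# BFloat16 is a first-class 16-bit dtype in PyTorch; models using it can
# now be exported instead of failing on an unsupported scalar type.
x = torch.randn(4, dtype=torch.float32)
bf = x.to(torch.bfloat16)
assert bf.dtype == torch.bfloat16
assert bf.element_size() == 2  # 16-bit storage
```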
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66788
Reviewed By: jansel
Differential Revision: D32283510
Pulled By: malfet
fbshipit-source-id: 150d69b1465b2b917dd6554505eca58042c1262a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67808
torch.reciprocal implicitly casts the inputs to float, and ONNX
Reciprocal requires floating point inputs.
Also separate the reciprocal test from other tests, and test different
input types.
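The promotion behavior being matched can be verified in eager mode; the exported graph must cast non-floating inputs before the ONNX Reciprocal node to agree with it:

```python
import torch

# torch.reciprocal implicitly promotes integer inputs to the default float
# dtype, so integer inputs yield float32 outputs.
x = torch.tensor([2, 4], dtype=torch.int64)
y = torch.reciprocal(x)
assert y.dtype == torch.float32
assert torch.allclose(y, torch.tensor([0.5, 0.25]))
```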
Test Plan: Imported from OSS
Reviewed By: msaroufim
Differential Revision: D32181307
Pulled By: malfet
fbshipit-source-id: 3e1109b3c85a49c51dc713656a900b4ee78c8340
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67806
Previously new_full would fail with errors like:
`TypeError: only integer tensors of a single element can be converted to an index`
And full_like would trigger warnings like:
`DeprecationWarning: an integer is required (got type float). Implicit conversion to integers using __int__ is deprecated, and may be removed in a future version of Python.`
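The eager semantics these exports must reproduce, including a float fill value, can be checked directly:

```python
import torch

# new_full creates a tensor of a new shape filled with a scalar, inheriting
# dtype/device from the base tensor; full_like matches the base's shape.
base = torch.zeros(2, 2)
a = base.new_full((2, 3), 1.5)
b = torch.full_like(base, 2.0)
assert a.shape == (2, 3) and float(a[0, 0]) == 1.5
assert b.shape == (2, 2) and float(b[0, 0]) == 2.0
```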
Test Plan: Imported from OSS
Reviewed By: msaroufim
Differential Revision: D32181301
Pulled By: malfet
fbshipit-source-id: 2cf262cfef36c18e7b2423efe1e1d4fa3438f0ba
Co-authored-by: Bowen Bao <bowbao@microsoft.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67805
Also fix Reduce ops on binary_cross_entropy_with_logits
The graph says the output is a scalar, but with `keepdims=1`
(the default) the output would be a tensor of rank 1. We set
`keepdims=0` to make it clear that we want a scalar output.
This previously went unnoticed because ONNX Runtime does not strictly
enforce shape inference mismatches if the model is not using the latest
opset version.
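The rank difference at the heart of the fix is easy to see in eager mode:

```python
import torch

# Reducing a rank-1 tensor with keepdim=True yields rank 1; with
# keepdim=False it yields a rank-0 scalar, matching the declared output.
x = torch.tensor([1.0, 2.0, 3.0])
assert x.sum(dim=0, keepdim=True).shape == (1,)
assert x.sum(dim=0, keepdim=False).shape == ()
```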
Test Plan: Imported from OSS
Reviewed By: msaroufim
Differential Revision: D32181304
Pulled By: malfet
fbshipit-source-id: 1462d8a313daae782013097ebf6341a4d1632e2c
Co-authored-by: Bowen Bao <bowbao@microsoft.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67803
* Addresses comments from #63589
[ONNX] remove torch::onnx::PRODUCER_VERSION (#67107)
Use constants from version.h instead.
This simplifies things since we no longer have to update
PRODUCER_VERSION for each release.
Also add TORCH_VERSION to version.h so that a string is available for
this purpose.
[ONNX] Set `ir_version` based on opset_version. (#67128)
This increases the odds that the exported ONNX model will be usable.
Before this change, we were setting the IR version to a value which may
be higher than what the model consumer supports.
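The selection logic amounts to a lookup from opset version to the minimum IR version that supports it. A hypothetical sketch (the table values here are illustrative, not the authoritative ONNX version table):

```python
# Hypothetical opset -> minimum IR version mapping (values illustrative).
_OPSET_TO_IR = {9: 4, 10: 5, 11: 6, 12: 7, 13: 7, 14: 7, 15: 8}

def ir_version_for_opset(opset: int) -> int:
    # Emit the lowest IR version that can represent the chosen opset,
    # rather than always stamping the newest IR version on the model.
    return _OPSET_TO_IR.get(opset, max(_OPSET_TO_IR.values()))

assert ir_version_for_opset(11) == 6
```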
Also some minor clean-up in the test code:
* Fix string replacement.
* Use a temporary file so as to not leave files around in the test
current working directory.
Test Plan: Imported from OSS
Reviewed By: msaroufim
Differential Revision: D32181306
Pulled By: malfet
fbshipit-source-id: 02f136d34ef8f664ade0bc1985a584f0e8c2b663
Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67270
* Add the dim argument to the `all` symbolic
* The `all` symbolic now depends on the `any` symbolic
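The dependency rests on the usual identity: `all(x, dim)` equals `not any(not x, dim)`, which holds in eager mode:

```python
import torch

# all can be expressed via any: all(x, dim) == ~any(~x, dim).
x = torch.tensor([[True, False], [True, True]])
assert torch.equal(torch.all(x, dim=1), ~torch.any(~x, dim=1))
```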
Test Plan: Imported from OSS
Reviewed By: msaroufim
Differential Revision: D31962518
Pulled By: malfet
fbshipit-source-id: f7ee05cf4eff5880fc508154267e060952b5b42d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66143
Delete test_list_remove. There's no point in testing conversion of
this model since TorchScript doesn't support it.
Add a link to an issue tracking test_embedding_bag_dynamic_input.
[ONNX] fix docs (#65379)
Mainly fix the sphinx build by inserting empty lines before
bulleted lists.
Also some minor improvements:
Remove superfluous descriptions of deprecated and ignored args.
The user doesn't need to know anything other than that they are
deprecated and ignored.
Fix custom_opsets description.
Make indentation of Raises section consistent with Args section.
[ONNX] publicize func for discovering unconvertible ops (#65285)
* [ONNX] Provide public function to discover all unconvertible ATen ops
This can be more productive than finding and fixing a single issue at a
time.
* [ONNX] Reorganize test_utility_funs
Move common functionality into a base class that doesn't define any
tests.
Add a new test for opset-independent tests. This lets us avoid running
the tests repeatedly for each opset.
Use simple inheritance rather than the `type()` built-in. It's more
readable.
* [ONNX] Use TestCase assertions rather than `assert`
This provides better error messages.
* [ONNX] Use double quotes consistently.
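The unconvertible-op discovery above boils down to a set difference between the ops appearing in the traced graph and the ops with registered symbolics. A hypothetical sketch (names illustrative, not the public function's signature):

```python
# Hypothetical sketch: report graph ops that have no registered symbolic.
def unconvertible_ops(graph_ops, registered_symbolics):
    return sorted(set(graph_ops) - set(registered_symbolics))

ops = ["aten::add", "aten::fancy_op", "aten::add"]
assert unconvertible_ops(ops, {"aten::add"}) == ["aten::fancy_op"]
```

Reporting all gaps at once is more productive than export failing on the first missing symbolic.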
[ONNX] Fix code block formatting in doc (#65421)
Test Plan: Imported from OSS
Reviewed By: jansel
Differential Revision: D31424093
fbshipit-source-id: 4ced841cc546db8548dede60b54b07df9bb4e36e
Summary:
This moves it to where the user would expect it to be based on the
documentation and all the other public classes in the torch.onnx module.
Also rename it from ONNXCheckerError, since the qualified name
torch.onnx.ONNXCheckerError is otherwise redundant.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66644
Reviewed By: malfet
Differential Revision: D31662559
Pulled By: msaroufim
fbshipit-source-id: bc8a57b99c2980490ede3974279d1124228a7406
Summary:
All of the pooling modules except MaxUnpool and LPPool return either a
Tensor or [Tensor, Tensor]. The current type annotations are inaccurate,
and prevent scripting the module if return_indices is set to True.
There's not a great way to make this agree with mypy because the
overload is dependent on the value of return_indices, an attribute.
I tried changing the annotations from `Tensor` to
`Union[Tensor, Tuple[Tensor, Tensor]]`, but that breaks a bunch of uses
that have return_indices=False.
For example, this breaks:
4e94e84f65/torch/nn/modules/container.py (L139)
Also clean up how test names were being constructed in test_jit, since
otherwise we were getting name collisions when there were two tests on
the same nn.Module.
Fixes https://github.com/pytorch/pytorch/issues/45904
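The overload problem can be seen directly: the same module class returns either a Tensor or a pair depending on a constructor flag, which a single static annotation cannot capture:

```python
import torch
import torch.nn as nn

# return_indices=True changes the return type to (Tensor, Tensor) ...
pool = nn.MaxPool2d(2, return_indices=True)
out, idx = pool(torch.randn(1, 1, 4, 4))
assert out.shape == (1, 1, 2, 2) and idx.shape == (1, 1, 2, 2)

# ... while the default returns a plain Tensor.
pool2 = nn.MaxPool2d(2)
assert pool2(torch.randn(1, 1, 4, 4)).shape == (1, 1, 2, 2)
```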
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65847
Reviewed By: ZolotukhinM
Differential Revision: D31462517
Pulled By: eellison
fbshipit-source-id: 6f9e8df1be6c75e5e1e9bae07cf3ad3603ba59bd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66513
These were missed in the migration of onnx to github actions.
Adds ort tests with 2 shards for the onnx workflow
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D31599433
Pulled By: seemethere
fbshipit-source-id: 73dce0d3017c4280e64f0c8578e2be7ef6a168d6
Summary:
Addresses this network risk mitigation mentioned in https://github.com/pytorch/pytorch/issues/65439#issuecomment-924627239.
I didn't include any mobile app/benchmarking changes because I think the pretrained weights matter there.
I ended up removing the changes in test_utils because those were sensitive to the pretrained variable.
I am saving the quantization test changes for another PR because they are currently disabled.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66312
Reviewed By: ejguan
Differential Revision: D31542992
Pulled By: janeyx99
fbshipit-source-id: 57b4f70247af25cc96c57abd9e689c34641672ff
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66149
The updated logic can infer the rank of a Slice output when only the rank of the Slice input is known, i.e. cases where `ConstantValueMap::HasRank(input)` is `True` while `ConstantValueMap::HasShape(input)` is `False`.
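The key observation is that Slice preserves rank, so rank-only information propagates even when the concrete shape is unknown. A hypothetical sketch of that rule (names illustrative, not the C++ pass's internals):

```python
# Hypothetical rank-only inference rule for Slice: the output rank equals
# the input rank, whether or not the concrete input shape is known.
def infer_slice_output_rank(input_rank, input_shape=None):
    if input_shape is not None:
        return len(input_shape)  # shape known: rank follows from it
    return input_rank            # shape unknown: rank still propagates

assert infer_slice_output_rank(3) == 3
assert infer_slice_output_rank(3, input_shape=(2, 4, 8)) == 3
```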
Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D31423232
Pulled By: ezyang
fbshipit-source-id: 516e3916aa71afda2b10e44620636e42ed837236
Co-authored-by: BowenBao <bowbao@microsoft.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64578
* Fix remainder export for the edge case where the input is negative. The new export relies on the true_divide export.
* Simplified the true_divide export. Cleaned up redundant code that is handled by the scalar type analysis pass. Removed the dependency on `onnx::Where`, thereby supporting opsets 7 & 8.
Fixes #60179
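The negative-input edge case comes from remainder following the sign of the divisor; a floor-division-based formulation (a sketch of the kind of decomposition an export can use) handles it:

```python
import torch

# remainder follows the divisor's sign (Python semantics); it can be
# computed as x - floor(x / y) * y using true_divide.
x = torch.tensor([-7.0, 7.0])
y = torch.tensor([3.0, -3.0])
expected = x - torch.floor(torch.true_divide(x, y)) * y
assert torch.equal(torch.remainder(x, y), expected)
assert torch.equal(torch.remainder(x, y), torch.tensor([2.0, -2.0]))
```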
Test Plan: Imported from OSS
Reviewed By: jansel
Differential Revision: D30919601
Pulled By: malfet
fbshipit-source-id: 0f78621c0ac3bdb6bf4225e049ba5f470dc8ab12
Co-authored-by: BowenBao <bowbao@microsoft.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64381
* Added new ONNX test for batched_nms
* Update test according to PR in torchvision
* Update test/onnx/test_pytorch_onnx_onnxruntime.py
Test Plan: Imported from OSS
Reviewed By: jansel
Differential Revision: D30919602
Pulled By: malfet
fbshipit-source-id: edfb5b9f75077429f7f242fd6ac06d962968dfba
Co-authored-by: Bowen Bao <imbowenbao@outlook.com>
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64382
* The `use_external_data_format` parameter is used for large models that cannot be exported because of the 2GB protobuf limit.
* When `use_external_data_format` is set to True, the model is exported in the ONNX external data format, in which case some of the model parameters are stored in external binary files rather than in the ONNX model file itself.
* This PR deprecates the parameter and checks the model proto size in code instead of relying on the user: if the size is larger than 2GB, `use_external_data_format = True` is set automatically.
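The automatic decision reduces to a size check against the protobuf limit. A hypothetical sketch (names illustrative, not the exporter's internals):

```python
# Hypothetical sketch: fall back to external data format whenever the
# serialized model proto would exceed protobuf's 2 GB hard limit.
PROTOBUF_LIMIT = 2 * 1024 ** 3  # 2 GB

def needs_external_data(model_proto_size: int) -> bool:
    return model_proto_size > PROTOBUF_LIMIT

assert not needs_external_data(100 * 1024 ** 2)  # 100 MB: inline is fine
assert needs_external_data(3 * 1024 ** 3)        # 3 GB: must externalize
```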
Test Plan: Imported from OSS
Reviewed By: ezyang
Differential Revision: D30905265
Pulled By: malfet
fbshipit-source-id: 82b4e17bfa6a8de2bfd700a5282c12f6835603cb
Co-authored-by: hwangdeyu <dejack953@outlook.com>