Commit Graph

282 Commits

Author SHA1 Message Date
Justin Chu
5dd1c67776 [ONNX] Format ONNX python with black
Format all ONNX Python code with black and isort:

```sh
isort torch/onnx/ test/onnx
black torch/onnx/ test/onnx
```

Updated lintrunner config to include these paths.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76754
Approved by: https://github.com/suo, https://github.com/BowenBao
2022-05-05 00:19:22 +00:00
BowenBao
679fc90cdb [ONNX] Support optional type (#68793) (#73284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73284

Some important ops won't support optional type until opset 16,
so we can't fully test things end-to-end, but I believe this should
be all that's needed. Once ONNX Runtime supports opset 16,
we can do more testing and fix any remaining bugs.
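
A minimal sketch (module name and logic are illustrative, not from the PR) of the kind of Optional-typed model this change targets:

```python
from typing import Optional

import torch

class OptionalAdd(torch.nn.Module):
    def forward(self, x: torch.Tensor, y: Optional[torch.Tensor] = None) -> torch.Tensor:
        if y is not None:
            return x + y
        return x

# Scripting preserves the Optional annotation, which the exporter can now
# map to the ONNX optional type (opset 16).
scripted = torch.jit.script(OptionalAdd())
```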

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D34625646

Pulled By: malfet

fbshipit-source-id: 537fcbc1e9d87686cc61f5bd66a997e99cec287b

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: neginraoof <neginmr@utexas.edu>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
(cherry picked from commit 822e79f31ae54d73407f34f166b654f4ba115ea5)
2022-05-04 20:24:30 +00:00
Justin Chu
2e841e68b2 [ONNX] Documentation and typing annotations in registry
Updating the docstrings and type annotations as I walk through the code.

- Turned some comments into docstrings.
- Added type annotations for some functions in utils and the registry
- Removed direct function imports; importing functions makes namespace collisions more likely and refactoring/code analysis harder (see the sketch after this list): https://google.github.io/styleguide/pyguide.html#22-imports
- Formatted touched files with black
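
A small sketch of the import style the third point refers to (illustrative only; both imports resolve in torch):

```python
# Preferred per the style guide: import the module and use qualified names.
from torch.onnx import utils

# Discouraged: importing the function directly makes namespace collisions
# more likely and refactoring/code analysis harder.
from torch.onnx.utils import export
```
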
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76255
Approved by: https://github.com/BowenBao
2022-04-28 18:24:24 +00:00
Thiago Crepaldi
90d31cb311 Emit ATen ops when symbolics raise + minor fixes
Currently `torch.onnx.export(..., operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK)` only issues ATen ops through explicit requests (e.g. `g.at()` calls) inside each op's symbolic function. This is done based on specific conditions such as `operator_export_type == OperatorExportTypes.ONNX_ATEN_FALLBACK` or `is_caffe2_aten_fallback()`.

This PR extends the ATen fallback mechanism for scenarios when the symbolic function raises `RuntimeError` during export. The idea is that partial implementation of existing ONNX ops can fallback to ATen as a last resort. That is valuable because each operator can have many input combinations and not all are always implemented.

A minor fix was done to make sure the `overload_name` attribute is added to explicit ATen op fallback requests when a symbolic is not registered to a particular op.

P.S.: The behavior for builds with BUILD_CAFFE2=1 is not changed, to ensure BC.
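
A hedged sketch of the export call this fallback applies to (toy model; the real benefit shows up on ops whose symbolic raises RuntimeError):

```python
import torch

model = torch.nn.Linear(4, 4)
x = torch.randn(1, 4)

# With ONNX_ATEN_FALLBACK, an op whose symbolic raises RuntimeError during
# export can now be emitted as an ATen op instead of failing the export.
torch.onnx.export(
    model,
    (x,),
    "model.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
```
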
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74759
Approved by: https://github.com/garymm, https://github.com/msaroufim
2022-04-23 21:24:25 +00:00
BowenBao
2c748b7573 [ONNX] Trace model if quantization is detected
Previously, pre-tracing the model was required for exporting a quantized model,
e.g. calling `traced_m = torch.jit.trace(model, inputs)` and exporting `traced_m`.
The reason is that quantized weights are stored in a unique `PackedParam` structure,
and they need to be handled by tracing to be exportable.
This PR enables the export API to call tracing underneath if it detects quantization
in the model.
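
A before/after sketch, assuming dynamic quantization (whether a given quantized op converts still depends on symbolic support):

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 4))
# Dynamic quantization stores weights in PackedParams structures.
qmodel = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
x = torch.randn(1, 4)

# Previously required: pre-trace, then export the traced module.
traced_m = torch.jit.trace(qmodel, x)
torch.onnx.export(traced_m, (x,), "model.onnx")

# After this change: export() traces underneath when it detects quantization.
torch.onnx.export(qmodel, (x,), "model.onnx")
```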

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75921

Approved by: https://github.com/garymm
2022-04-22 17:27:32 +00:00
Thiago Crepaldi
9bbe1d632e Fix ONNX ATen fallback for non-caffe2 engines
This PR introduces 3 BC changes:

First, this PR propagates `BUILD_CAFFE2` flag to `libtorch` and `libtorch_python`, which is necessary for non-caffe2 ONNX runtimes when using `ONNX_ATEN_FALLBACK` operator export type.

Second, as a complement of https://github.com/pytorch/pytorch/pull/68490, this PR refactors Caffe2's Aten ops symbolics to consider not only the `operator_export_type` (aka `ONNX_ATEN_FALLBACK`) to emit Caffe2 Aten ops, but also whether `BUILD_CAFFE2` (which is called `torch.onnx._CAFFE2_ATEN_FALLBACK` in python binding) is set.

Lastly, it renames `onnx::ATen` to `aten::ATen` for ONNX spec consistency, in a BC fashion.
ONNX doesn't have an `ATen` op in its spec, but the PyTorch ONNX converter emits them. Non-Caffe2 backend engines would be misled by such an operator's name/domain. A non-ideal workaround would be to handle ATen ops based on their name and ignore the (non-compliant) domain. Moreover, users could incorrectly file bugs against either ONNX or ONNX Runtime when they inspect the model and notice the presence of an unspecified ONNX operator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73954
Approved by: https://github.com/BowenBao, https://github.com/malfet, https://github.com/garymm, https://github.com/jiafatom
2022-04-14 23:18:45 +00:00
BowenBao
144b7de9dd [ONNX] Adjust is_train flag for onnx pass deduplicate initializers
The previous logic didn't consider the TrainingMode.PRESERVE case.
A more direct way is to check `model.training`, which is the accurate
training mode set by `exporter_context(model, training)`.
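
A small sketch of an export call where the preserved mode matters (toy model, illustrative):

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Dropout(0.5))
model.eval()  # model.training is now False
x = torch.randn(1, 4)

# Under TrainingMode.PRESERVE the exporter honors model.training as-is,
# which is the flag the deduplication pass now consults.
torch.onnx.export(model, (x,), "model.onnx", training=torch.onnx.TrainingMode.PRESERVE)
```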

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74247
Approved by: https://github.com/garymm
2022-03-22 22:39:20 +00:00
BowenBao
54a6942f8d [ONNX] ONNX Exporter logging (#71342)
Summary:
Add an ONNX exporter logging facility, supporting both the C++ and Python logging APIs. Logging can be turned on/off, and the logging output stream can be set to either `stdout` or `stderr`.

A few other changes:
* When an exception is raised in a pass, the current IR graph being processed is logged.
* When an exception is raised from `_jit_pass_onnx` (the pass that converts nodes from the `ATen` namespace to `ONNX`), both the ATen IR graph and the ONNX IR graph under construction are logged.
* The exception message for ConstantFolding is truncated to avoid being too verbose.
* The final printed IR graph is updated with the node name from the ONNX ModelProto as a node attribute. A Torch IR Node does not have a name; adding this to the printed IR graph helps debugging.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71342

Reviewed By: msaroufim

Differential Revision: D34433473

Pulled By: malfet

fbshipit-source-id: 4b137dfd6a33eb681a5f2612f19aadf5dfe3d84a
(cherry picked from commit 67a8ebed5192c266f604bdcca931df6fe589699f)
2022-03-17 19:40:03 +00:00
BowenBao
9210e8f540 [ONNX] Adds overload_name to Aten op (#69378) (#73280)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73280

This PR adds a new attribute, overload_name, to the Aten node so that third-party applications can implement calls to libtorch without using PyTorch source code.

This is necessary because torch's torch::jit::findOperatorFor(fullname) requires a full name, including the operator and overload names.

The ATen op was originally created for Caffe2, which leveraged the availability of the PyTorch yaml files to create calls to the ATen operators directly, not relying on torch::jit::findOperatorFor.

The first part of the PR refactors all symbolics that create Aten ops so that there is a single helper for this operator. Next, all symbolics are updated to pass in the relevant overload name, or an empty string if not applicable.
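
A small illustration of why overload names matter (aten::add is just an example):

```python
import torch

a, b = torch.randn(2), torch.randn(2)

# "aten::add" alone names an operator but not a schema; the overload name
# ("Tensor") completes the full name "aten::add.Tensor" that
# torch::jit::findOperatorFor requires.
out = torch.ops.aten.add.Tensor(a, b)
```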

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D34625645

Pulled By: malfet

fbshipit-source-id: 37d58cfb5231833768172c122efc42edf7d8609a
(cherry picked from commit e92f09117d3645b38bc3235b30aba4b4c7c71dfa)
2022-03-09 14:26:18 +00:00
BowenBao
b3cfc74f0f [ONNX] Capture annotated attributes for local function
Enables local function export to capture annotated attributes.
For example:
```python
class M(torch.nn.Module):
    num_layers: int

    def __init__(self, num_layers):
        super().__init__()
        self.num_layers = num_layers

    def forward(self, args):
        ...
```
`num_layers` will now be captured as an attribute of the local function `M`.
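
A hypothetical export call (assuming `M` above is given a concrete `forward` and `x` is a matching input; local functions require opset 15+):

```python
m = M(num_layers=3)
torch.onnx.export(
    m,
    (x,),
    "model.onnx",
    opset_version=15,
    export_modules_as_functions={M},
)
```
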
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72883
2022-02-28 18:56:18 +00:00
BowenBao
28bf2f80cf Don't call _jit_pass_onnx_function_extraction if export_modules_as_functions is False (#69742)
* fix clang-format violations

* Don't call _jit_pass_onnx_function_extraction if export_modules_as_functions is False

It's just wasteful.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73100
2022-02-22 22:43:53 +00:00
BowenBao
2791725a84 Integrate full ONNX check into ONNX export API (#71125)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72988
2022-02-18 18:40:09 +00:00
BowenBao
32f6a1e2a2 [ONNX] First version of quantized model export: Support quantized.Linear (#69232)
Co-authored-by: David Fan <jiafa@microsoft.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72986
2022-02-18 18:27:26 +00:00
BowenBao
cc792746d2 [ONNX] De-duplicate initializers (#68202) (#69547)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69547

ScriptModule export introduces duplicated ONNX initializers for shared weights, unnecessarily increasing ONNX model size. This PR de-duplicates ONNX initializers for models exported in eval mode, by checking whether the underlying tensors share the same `data_ptr`, `strides` and `sizes`.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32994271

Pulled By: malfet

fbshipit-source-id: 10ac66638b6255890875272472aa9ed07a5b1d9a

Co-authored-by: BowenBao <bowbao@microsoft.com>
(cherry picked from commit d7cbde940c)
2022-02-11 22:05:15 +00:00
BowenBao
04c5d978b9 [ONNX] Refactor _run_symbolic_function (#67573) (#68491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68491

* Allows implementing symbolic functions for domains other than `aten`, for example `prim`, in symbolic_opset#.py.
* Allows symbolic function to access extra context if needed, through `SymbolicFunctionState`.
  * In particular, the `prim::PythonOp` special case can access the node without needing to pass the node through inputs. Updates will be made downstream, and in a follow-up PR we will remove the previous workaround in the exporter.
* `prim::Loop`, `prim::If`, etc are now moved outside of `_run_symbolic_function` from utils.py, and to symbolic_opset9.py.

Motivation for this change:
- Better maintainability and reducing complexity. Easier to add symbolic for operators, both simple and complex ones (that need additional context), without the former needing to know the existence of the latter.
- The design idea was long outdated: prim ops are no longer rare special cases, and they shouldn't all be handled inside `_run_symbolic_function`; as a result that function had become too clumsy. There were also prim op symbolics added in symbolic_opset#.py with the signature `prim_[opname]`, creating separation and confusion.

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483782

Pulled By: malfet

fbshipit-source-id: f9affc31b1570af30ffa6668da9375da111fd54a

Co-authored-by: BowenBao <bowbao@microsoft.com>
(cherry picked from commit 1e04ffd2fd)
2022-02-11 18:35:35 +00:00
BowenBao
eb4238fc26 Allow caffe2-specific graph transformations for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460) (#68490)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68490

The use of ATen as a fallback operator during ONNX conversion is important for increasing operator coverage, or even for providing more efficient implementations than some ONNX ops.

Currently this feature is available through `OperatorExportTypes.ONNX_ATEN_FALLBACK`,
but it also performs changes to the graph that are runnable by Caffe2 only.

This PR restricts caffe2-specific graph transformations for the `ONNX_ATEN_FALLBACK`
operator export type to builds where PyTorch has Caffe2 support (aka BUILD_CAFFE2=1 during build).

The first version of this PR introduced a new operator export type, `ONNX_ATEN__STRICT_FALLBACK`,
which is essentially the same as `ONNX_ATEN_FALLBACK` but without Caffe2 transformations.
It was preferred not to introduce a new operator export type, but to refine the existing ATen fallback one.

## BC-breaking note
### The global constant `torch.onnx.PYTORCH_ONNX_CAFFE2_BUNDLE` is removed in favor of a less visible `torch.onnx._CAFFE2_ATEN_FALLBACK`.
`PYTORCH_ONNX_CAFFE2_BUNDLE` is really a dead-code flag that is always set to False.
One alternative would be fixing it, but #66658 disables the Caffe2 build by default.
Making a Caffe2 feature private seems to make more sense for future deprecation.

### The method `torch.onnx.export` now defaults to ONNX when `operator_export_type` is not specified.
Previously `torch.onnx.export`'s `operator_export_type` was intended to default to `ONNX_ATEN_FALLBACK` when `PYTORCH_ONNX_CAFFE2_BUNDLE` was set, but that never happened because `PYTORCH_ONNX_CAFFE2_BUNDLE` is always undefined.

 Co-authored-by: Nikita Shulga <nshulga@fb.com>

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483781

Pulled By: malfet

fbshipit-source-id: e9b447db9466b369e77d747188685495aec3f124
(cherry picked from commit 5fb1eb1b19)
2022-02-10 03:26:48 +00:00
BowenBao
cf70466970 [ONNX] Improve scope inference in function extraction
Cover more cases of scope inference where consecutive nodes don't have valid scope information. Usually these nodes are created in some pass whose authors forgot to assign a meaningful scope to them.
* One rule of `InferScope` is to check whether the current node's outputs' users share the same scope. Recursively run `InferScope` on the user nodes if they are missing scope as well; since the graph is SSA, the depth is finite.
* Fix one pass that missed scope information for a new node.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71897
2022-01-31 23:58:53 +00:00
BowenBao
804f13289e [ONNX] Update opset_version restriction for local function
Export should fail if export_modules_as_functions is set and opset_version < 15.
This is because opset_version < 15 implies IR version < 8, which means no local function support.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71619
2022-01-27 00:21:13 +00:00
Emilio Castillo
e2dc2aca93 Export ONNX models with readable input/output names (#68976)
Summary:
For some exported ONNX models, the input/output names sometimes hold a numeric value, which makes it pretty hard to inspect the generated graphs of large models.

The solution in this PR was initially submitted to our internal utilities library by take-cheeze https://github.com/pfnet/pytorch-pfn-extras/pull/102

Now we would like to upstream this change by adding an extra kwarg when exporting the model, to allow replacing these numeric names with actual debuggable ones.

As an example, the following code shows that the module output is `3`

```python
g, p, o = _model_to_graph(module, torch.ones(1, 10))
for n in g.nodes():
    for v in n.outputs():
        print(v.debugName())
```
output
```
3
```

With this PR

```
v3_Gemm
```

This allows identifying this output as a value coming from the associated Gemm layer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68976

Reviewed By: jansel

Differential Revision: D33662246

Pulled By: msaroufim

fbshipit-source-id: 45f56eef2a84d9a318db20c6a6de6c2743b9cd99
(cherry picked from commit 513c1d28f1)
2022-01-21 00:34:56 +00:00
BowenBao
ff78c73286 [ONNX] Remove f arg from export_to_pretty_string (#69045) (#69546)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69546

The arg is not used and was previously deprecated.

Also remove torch.onnx._export_to_pretty_string. It's redundant with the
public version.
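
A minimal sketch of the remaining public API (toy model, illustrative):

```python
import torch

model = torch.nn.Linear(2, 2)
x = torch.randn(1, 2)

# No file argument anymore: the function simply returns a human-readable
# string describing the exported graph.
print(torch.onnx.export_to_pretty_string(model, (x,)))
```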

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D32994270

Pulled By: msaroufim

fbshipit-source-id: f8f3933b371a0d868d9247510bcd73c31a9d6fcc
2022-01-12 21:31:36 -08:00
Deyu Huang
d32efe8bc2 [ONNX] Remove the argument use_external_data_format of export() method entirely. (#67080) (#67811)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67811

* remove the argument use_external_data_format of export() method entirely

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181302

Pulled By: malfet

fbshipit-source-id: 4bc1448b7487bb9dfdad4e36008ff5b227fd64a3

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-11-15 17:20:04 -08:00
Thiago Crepaldi
9d25554d45 [ONNX] Allow registration of custom symbolics for aten namespace (#66481) (#67810)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67810
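
A hedged sketch of what this enables (the aten::asinh mapping is illustrative; Asinh is a standard ONNX op since opset 9):

```python
import torch

def asinh_symbolic(g, input):
    # Map the aten op directly to the corresponding ONNX op.
    return g.op("Asinh", input)

# Registering into the aten namespace is now allowed.
torch.onnx.register_custom_op_symbolic("aten::asinh", asinh_symbolic, 9)
```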

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181303

Pulled By: malfet

fbshipit-source-id: af2a715dc554b958fa3f5a7a8ae96cb3f7d112bb
2021-11-15 17:18:39 -08:00
Deyu Huang
48c8de45b0 [ONNX] Remove the argument example_outputs of export() method entirely. (#67082) (#67809)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67809

* remove the argument example_outputs of the export() method entirely

[ONNX] Follow-up: Remove the argument example_outputs of export() method entirely. (#67629)

* Resolve CI failure

* remove test after removing example_outputs

[ONNX] Follow-up: Follow-up: Remove the argument example_outputs of export() method entirely (#67719)

Removing unused import, resolving flake error.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181305

Pulled By: malfet

fbshipit-source-id: ba00547b7cb455ace86606b1bda643c02bdcfa1b

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-11-12 17:06:26 -08:00
Bowen Bao
02e35ce17b [ONNX] Update onnx function export with comments and clean up (#66817) (#67803)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67803

* Addresses comments from #63589

[ONNX] remove torch::onnx::PRODUCER_VERSION (#67107)

Use constants from version.h instead.
This simplifies things since we no longer have to update
PRODUCER_VERSION for each release.

Also add TORCH_VERSION to version.h so that a string is available for
this purpose.

[ONNX] Set `ir_version` based on opset_version. (#67128)

This increases the odds that the exported ONNX model will be usable.
Before this change, we were setting the IR version to a value which may
be higher than what the model consumer supports.

Also some minor clean-up in the test code:
* Fix string replacement.
* Use a temporary file so as to not leave files around in the test
  current working directory.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181306

Pulled By: malfet

fbshipit-source-id: 02f136d34ef8f664ade0bc1985a584f0e8c2b663

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
2021-11-05 10:35:35 -07:00
Jay Zhang
26241994b2 Remove the argument strip_doc_string of export() method entirely. (#66615) (#67278)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67278

Remove the argument strip_doc_string of export() method entirely.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962512

Pulled By: malfet

fbshipit-source-id: 168ad3f157a80d1edd7a9053783b3f3deb2ecf43

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-10-28 19:25:07 -07:00
Jay Zhang
43d51254bf Deprecate the argument _retain_param_name of export() method entirely. (#66617) (#67277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67277

Remove the argument _retain_param_name of export() method entirely.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962514

Pulled By: malfet

fbshipit-source-id: 8ac5e3a4a7821cc580951a7f167fd20069116350

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-10-28 19:25:05 -07:00
Jay Zhang
40920185ac [ONNX] Remove the argument enable_onnx_checker of export() method entirely. (#66611) (#67276)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67276

[ONNX] Remove the argument enable_onnx_checker from the torch.onnx.export() function.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962520

Pulled By: malfet

fbshipit-source-id: 86ee15f525261c0da74175e47dd74eeb169ac47f

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-10-28 19:25:03 -07:00
Nikita Shulga
b18c298f24 ONNX: Delete or document skipped ORT tests (#64470) (#66143)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66143

Delete test_list_remove. There's no point in testing conversion of
this model since TorchScript doesn't support it.

Add a link to an issue tracking test_embedding_bag_dynamic_input.

[ONNX] fix docs (#65379)

Mainly fix the sphinx build by inserting empty lines before
bulleted lists.

Also some minor improvements:
Remove superfluous descriptions of deprecated and ignored args.
The user doesn't need to know anything other than that they are
deprecated and ignored.

Fix custom_opsets description.

Make indentation of Raises section consistent with Args section.

[ONNX] publicize func for discovering unconvertible ops (#65285)

* [ONNX] Provide public function to discover all unconvertible ATen ops

This can be more productive than finding and fixing a single issue at a
time.

* [ONNX] Reorganize test_utility_funs

Move common functionality into a base class that doesn't define any
tests.

Add a new test for opset-independent tests. This lets us avoid running
the tests repeatedly for each opset.

Use simple inheritance rather than the `type()` built-in. It's more
readable.

* [ONNX] Use TestCase assertions rather than `assert`

This provides better error messages.

* [ONNX] Use double quotes consistently.

[ONNX] Fix code block formatting in doc (#65421)

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424093

fbshipit-source-id: 4ced841cc546db8548dede60b54b07df9bb4e36e
2021-10-22 13:46:16 -07:00
Nikita Shulga
53a163a015 [ONNX] Export nn.Module call as ONNX local function (#63589) (#66140)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66140

* Add a new argument to the export API to let users specify `nn.Module` classes that they wish to be exported as local functions in the ONNX model.
* Refactor `torch/csrc/jit/serialization/export.cpp`, and remove redundant `EncoderBase` class.
* ~~Contains changes from #63268~~
* Depends on #63716 to update onnx submodule.

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424098

fbshipit-source-id: c949d0b01c206c30b4182c2dd1a5b90e32b7a0d3

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-10-22 13:44:56 -07:00
BowenBao
1cf317b85f [ONNX] Support exporting with Apex O2 (#65374) (#66700)
Summary:
Apex O2 hooks state_dict to return fp16 weights as fp32, so the exporter cannot identify them as the same tensors.
Since this hook is only used by the optimizer, it is safe to remove it while exporting.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66700

Reviewed By: zou3519

Differential Revision: D31695132

Pulled By: malfet

fbshipit-source-id: 977bdf57240002498f3ad0f1a8046c352e9860e6
2021-10-18 11:54:09 -07:00
Gary Miguel
2506baf9c2 [ONNX] move CheckerError from torch.onnx.utils to torch.onnx (#66644)
Summary:
This moves it to where the user would expect it to be based on the
documentation and all the other public classes in the torch.onnx module.

Also rename it from ONNXCheckerError, since the qualified name
torch.onnx.ONNXCheckerError is otherwise redundant.
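
A small sketch of the renamed class in use (toy model; a Linear export won't actually fail the checker):

```python
import torch

model = torch.nn.Linear(2, 2)
x = torch.randn(1, 2)

try:
    torch.onnx.export(model, (x,), "model.onnx")
except torch.onnx.CheckerError:  # formerly torch.onnx.utils.ONNXCheckerError
    print("exported model is not valid ONNX")
```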

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66644

Reviewed By: malfet

Differential Revision: D31662559

Pulled By: msaroufim

fbshipit-source-id: bc8a57b99c2980490ede3974279d1124228a7406
2021-10-15 10:38:56 -07:00
Edward Yang
11bc435622 Allow registration of custom symbolics for prim namespace (#64460) (#66139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66139

[ONNX] Add prim::PythonOp check back in export.cpp (#64944)

Add prim::PythonOp check back in export.cpp

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D31424102

fbshipit-source-id: 6d2eef767fab846ed79ea509e97b714072bac9f4

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-10-08 07:41:06 -07:00
Edward Yang
53fefaa916 [ONNX] Fix duplicated output same name case (#64190) (#66137)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66137

* Fix the issue of duplicated output nodes sharing the same output name.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D31424100

fbshipit-source-id: b1b06a92c51744030788b651f3a597d987a8deda

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-10-08 07:41:01 -07:00
BowenBao
2d61009f4a [ONNX] Fix input sequence for pad op (#60554) (#64377)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64377

* Fix for input primitive sequence

* Test mypy

* Fix for tracing tuples

* Fix for extra inputs

* flake8

* Rebase

* Fix for tracing tuples

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919606

Pulled By: malfet

fbshipit-source-id: a718c4a12cda77b968cb636acd7aa63d7b5ba326
2021-09-30 21:08:45 -07:00
BowenBao
20143bf07f [ONNX] Deprecate use_external_data_format param from torch.onnx.export() function. (#62257) (#64382)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64382

* The `use_external_data_format` parameter is used for large models that cannot be exported because of the 2GB protobuf limit.

* When `use_external_data_format` is set to True, the model is exported in the ONNX external data format, in which case some of the model parameters are stored in external binary files and not in the ONNX model file itself.

* This PR marks the parameter as DEPRECATED and checks the model proto size in code rather than relying on the user; if the size is larger than 2GB, `use_external_data_format = True` is applied automatically.
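
A sketch of the resulting behavior (the model here is a stand-in for one whose proto would exceed 2GB):

```python
import torch

model = torch.nn.Linear(4, 4)  # imagine a >2GB model
x = torch.randn(1, 4)

# No use_external_data_format flag: if the serialized proto would exceed the
# 2GB protobuf limit, the exporter now switches to the ONNX external data
# format automatically, storing large parameters in side files.
torch.onnx.export(model, (x,), "model.onnx")
```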

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905265

Pulled By: malfet

fbshipit-source-id: 82b4e17bfa6a8de2bfd700a5282c12f6835603cb

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-09-23 22:20:48 -07:00
BowenBao
478d4cf883 [ONNX] Deprecated the example_outputs param from torch.onnx.export() function. (#62815) (#64380)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64380

* `example_outputs` is used to determine the type and shape of the outputs without tracing the execution of the model, and it had to be provided when exporting a ScriptModule or ScriptFunction via the export() function.

* Since `example_outputs` can be worked out internally instead of being provided by the user, this argument is deprecated in the export() function to improve the user experience of calling it.
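
A minimal sketch of exporting a ScriptModule without example_outputs (illustrative model):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x * 2

scripted = torch.jit.script(M())
x = torch.randn(2, 3)

# Previously this required example_outputs=...; the exporter now works the
# output types and shapes out internally.
torch.onnx.export(scripted, (x,), "model.onnx")
```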

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905266

Pulled By: malfet

fbshipit-source-id: d00b00d7d02b365d165028288ad915678caa51f2

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-09-23 22:20:46 -07:00
BowenBao
9323ea2195 [ONNX] minor doc improvements and cleanup (#62514) (#64373)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64373

* Fix some bad formatting and clarify things in onnx.rst.
* In `export_to_pretty_string`:
    * Add documentation for previously undocumented args.
    * Document that `f` arg is ignored and mark it deprecated.
    * Update tests to stop setting `f`.
    * Warn if `_retain_param_name` is set.
* Use double quotes for string literals in test_operators.py.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905271

Pulled By: malfet

fbshipit-source-id: 3627eeabf40b9516c4a83cfab424ce537b36e4b3
2021-09-23 22:20:44 -07:00
BowenBao
fb71ccf0f1 [ONNX] Remove strip_doc_string param from torch.onnx.export() function. (#61712) (#64371)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64371

As of now, the "strip_doc_string" parameter was described as below:

strip_doc_string (bool, default True): do not include the field
doc_string``` from the exported model. Otherwise the field will mention the source code locations for model``.

This is usually useless to users who want to transform a PyTorch model to ONNX one. Only when the user wants to debug the export process, these source code locations could provide benefits.

To make the export() function more friendly by providing less parameters, we combined "strip_doc_string" into "verbose" parameter. If a user set verbose to True, it means the users need some log information for debugging the export process and this is similar with the purpose of strip_doc_string parameter.

But the usage of these 2 arguments are opposite: setting verbose to True means we want to print log information to help debug, which means strip_doc_string should be False. And this is how we replace strip_doc_string with verbose argument in this PR.

This PR will still keep it in torch.onnx.export() function for backward support while the usage of it has been combined with verbose argument.
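
A minimal sketch of the combined usage (toy model, illustrative):

```python
import torch

model = torch.nn.Linear(2, 2)
x = torch.randn(1, 2)

# verbose=True now also keeps doc_string (source code locations) in the
# exported model, i.e. the old strip_doc_string=False behavior.
torch.onnx.export(model, (x,), "model.onnx", verbose=True)
```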

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905268

Pulled By: malfet

fbshipit-source-id: 2f06eb805c01fe15ff7a1b4f6595c937ba716d60

Co-authored-by: fatcat-z <zhang-ji@outlook.com>
2021-09-23 22:20:40 -07:00
BowenBao
47d1ed60e1 [ONNX] Remove argument _retain_param_name from torch.onnx.export() function. (#61702) (#64370)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64370

As of now, the "_retain_param_name" parameter has no description in PyTorch docs website. According to code, this argument determines if we keep the original parameter names of PyTorch model in the final ONNX graph. If this is False, those original parameter names will be replaced with a series of integers starting from 1.

Since setting numbers as parameter names make no sense to users, we remove this argument from the torch.onnx.export() function to increase user experience of calling this function.

This PR will still keep it in torch.onnx.export() function for backward support while all backend logic has been changed to work as _retain_param_name is set to True.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905270

Pulled By: malfet

fbshipit-source-id: ca60757ca17daaff937e9f08da42596086795f4a

Co-authored-by: fatcat-z <zhang-ji@outlook.com>
2021-09-23 22:18:52 -07:00
BowenBao
3d32dec5ba [ONNX] Deprecate enable_onnx_checker argument in torch.onnx.export() (#61708) (#64369)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64369

As of now, the "enable_onnx_checker" parameter was described as below:

enable_onnx_checker (bool, default True): If True the ONNX model checker will be run to ensure the exported model is a valid ONNX model.

An invalid ONNX graph is useless to users so such checker should be done for each call.

In this PR, we will still write the model to an ONNX file even it is invalid. And the exception will be thrown after the ONNX file has been created. This enables user output an invalid ONNX graph for debug.

This PR will still keep it in torch.onnx.export() function for backward support while all backend logic has been changed to work as enable_onnx_checker is set to True.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905267

Pulled By: malfet

fbshipit-source-id: 3ad3f68e77fcec012cc7ef674cc9a61755eebc9e

Co-authored-by: fatcat-z <zhang-ji@outlook.com>
2021-09-17 14:31:41 -07:00
BowenBao
6512838fab [ONNX] Enhance shape (two changes merged) (#64585)
Summary:
Enhanced shape inference by introducing typeReliableMap.
[ONNX] exporter changes for torch hub models (https://github.com/pytorch/pytorch/issues/62856)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64585

Reviewed By: ezyang

Differential Revision: D30870418

Pulled By: msaroufim

fbshipit-source-id: 87a294799cb87d649d1d13b6114a5cfbac9be15c

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-09-15 13:02:19 -07:00
Corey Levinson
db3fcf0af3 fix typo in torch/onnx/utils.py (#63396)
Summary:
fixes minor typo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63396

Reviewed By: pbelevich

Differential Revision: D30644295

Pulled By: SplitInfinity

fbshipit-source-id: c506f67383909aa2c0c7c533038446b4b3d76a3a
2021-09-10 09:37:44 -07:00
Meghan Lele
95d0b3199b Back out "[ONNX] Fix an issue that optimizations might adjust graph inputs unexpectedly. (#61280)" (#64004)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64004

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63904

Fixes T98808160

Test Plan: T98808160

Reviewed By: msaroufim

Differential Revision: D30527450

fbshipit-source-id: 6262901a78ca929cecda1cf740893139aa26f1b4
2021-08-26 12:49:42 -07:00
BowenBao
8760254911 [ONNX] Fix an issue that optimizations might adjust graph inputs unexpectedly. (#61280) (#62763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62763

This PR fixes the issue that the graph inputs might be updated when we export the model in inference mode.

When a model is exported in inference mode, some optimizations are made. One side effect of these optimizations is that the inputs of the graph might be adjusted. Such optimizations include:

1. Conv and BatchNorm op fusion.
2. Constant folding.

If the user sets export_params=False or keep_initializers_as_inputs=True, it's highly possible that the user wants to provide the corresponding parameters or initializers as the inputs of the graph.
In that situation, no matter whether the model is exported in inference or training mode, the exporter needs to prevent the above optimizations from adjusting the graph inputs. This way, the inputs of the graph match the inputs that users provided.

The changes in this PR add a common judgement of whether the above optimizations need to be done or not: from the values of the export_params and keep_initializers_as_inputs arguments, infer whether the graph inputs are allowed to be adjusted.
If not, these optimizations are skipped, even if other requirements are met.

Besides these code changes, the comments for the parameters below have been updated so that users have more guidance when considering how to leverage them for different purposes (a sketch follows the list):

1. export_params
2. training
3. do_constant_folding
4. keep_initializers_as_inputs
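
A minimal sketch of an export call that now keeps the graph inputs intact (toy model, illustrative):

```python
import torch

model = torch.nn.Linear(2, 2)
x = torch.randn(1, 2)

# With these settings the caller clearly intends to provide parameters/
# initializers as graph inputs, so the exporter skips the optimizations
# (Conv+BatchNorm fusion, constant folding) that could adjust those inputs.
torch.onnx.export(
    model,
    (x,),
    "model.onnx",
    export_params=False,
    keep_initializers_as_inputs=True,
)
```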

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30375183

Pulled By: msaroufim

fbshipit-source-id: 4db8b9695649eb32a3a0fefa950ee2e5651bdba0

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-08-20 12:46:52 -07:00
BowenBao
e182401062 [ONNX] Remove aten parameter (#61652) (#62759)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62759

* remove aten argument in export()

* add export_to_pretty_string default value OperatorExportTypes.ONNX

* add DPYTORCH_ONNX_CAFFE2_BUNDLE description

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349062

Pulled By: msaroufim

fbshipit-source-id: d9738f3aa8b80eac54548d0b9494f9f1e544f20f

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-08-18 13:29:04 -07:00
BowenBao
8726f08e15 [ONNX] Update documentation (#58712) (#60249)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60249

* Add introductory paragraph explaining what ONNX is and what the
  torch.onnx module does.
* In "Tracing vs Scripting" and doc-string for torch.onnx.export(),
  clarify that exporting always happens on ScriptModules and that
  tracing and scripting are the two ways to produce a ScriptModule.
* Remove examples of using Caffe2 to run exported models.
  Caffe2's website says it's deprecated, so it's probably best not to
  encourage people to use it by including it in examples.
* Remove a lot of content that's redundant:
  * The example of how to mix tracing and scripting, and instead
    link to Introduction to TorchScript, which includes very similar
    content.
  * "Type annotations" section. Link to TorchScript docs which explain
    that in more detail.
  * "Using dictionaries to handle Named Arguments as model inputs"
    section. It's redundant with the description of the `args` argument
    to `export()`, which appears on the same page once the HTML
    is generated.
  * Remove the list of supported Tensor indexing patterns. If it's not
    in the list of unsupported patterns, users can assume it's
    supported, so having both is redundant.
  * Remove the list of supported operators and models.
    I think the list of supported operators is not very useful.
    A list of supported model architectures may be useful, but in
    reality it's already very out of date. We should add it back if
    / when we have a system for keeping it up to date.
  * "Operator Export Type" section. It's redundant with the description
    of the `operator_export_type` arg to to `export()`, which appears on
    the same page once the HTML is generated.
  * "Use external data format" section. It's redundant with the
    description of the `use_external_data_format` arg to `export()`.
  * "Training" section.  It's redundant with the
    description of the `training` arg to `export()`.
* Move the content about different operator implementations producing
  different results from the "Limitations" section into the doc for the
  `operator_export_type` arg.
* Document "quantized" -> "caffe2" behavior of
  OperatorExportTypes.ONNX_ATEN_FALLBACK.
* Combining the text about using torch.Tensor.item() and the text about
  using NumPy types into a section titled
  "Avoid NumPy and built-in Python types", since they're both
  fundamentally about the same issue.
* Rename "Write PyTorch model in Torch way" to "Avoiding Pitfalls".
* Lots of minor fixes: spelling, grammar, brevity, fixing links, adding
  links.
* Clarify limitation on input and output types. Phrasing it in terms of
  PyTorch types is much more accessible than in terms of TorchScript
  types. Also clarify what actually happens when dict and str are used
  as inputs and outputs.
* In Supported operators, use torch function and class names and link
  to them. This is more user friendly than using the internal aten
  op names.
* Remove references to VariableType.h, which doesn't appear to contain
  the information that it once did. Instead refer to the generated
  .pyi files.
* Remove the text in the FAQ about appending to lists within loops.
  I think this limitation is no longer present
  (perhaps since https://github.com/pytorch/pytorch/pull/51577).
* Minor fixes to some code I read along the way.
* Explain the current rationale for the weird ::prim_PythonOp op name.

Test Plan: Imported from OSS

Reviewed By: zou3519, ZolotukhinM

Differential Revision: D29494912

Pulled By: SplitInfinity

fbshipit-source-id: 7756c010b2320de0692369289604403d28877719

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-07-08 16:29:32 -07:00
Gary Miguel
4b91355232 [ONNX] remove raw export type (#59160)
Summary:
[ONNX] remove raw export type

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59160

Reviewed By: tugsbayasgalan

Differential Revision: D28937039

Pulled By: SplitInfinity

fbshipit-source-id: 79bf91605526aa32a7304e75f50fe55d872bd4e8
2021-06-11 00:08:06 -07:00
BowenBao
0a6828a306 [ONNX] use consistent quoting for string literals (#57757) (#58695)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58695

As PEP8 says: "Pick a rule and stick to it." [1]

[1] https://www.python.org/dev/peps/pep-0008/#string-quotes

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714811

Pulled By: SplitInfinity

fbshipit-source-id: c95103aceb1725c17c034dc6fc8216627f189548

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-27 12:06:42 -07:00
Meghan Lele
0d5527de7a Back out "Back out "[ONNX] Process const folding progressively when converts to ONNX (#54569)"" (#58923)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58923

Original commit changeset: c54597b2048e
ghstack-source-id: 129842041

Test Plan: Sandcastle and OSS CI.

Reviewed By: snisarg

Differential Revision: D28432555

fbshipit-source-id: 2a9ec22cc004c7c6979f1cc8f3124b833cdc6634
2021-05-26 13:29:07 -07:00
Meghan Lele
c034bce979 Back out "[ONNX] Process const folding progressively when converts to ONNX (#54569)"
Summary: Original commit changeset: 833dac7c71f2

Test Plan:
```
buck test mode/dev //pytext/fb/assistant/lite/test:test -- --exact
'pytext/fb/assistant/lite/test:test - test_export_bytes_model_to_caffe2
(pytext.fb.assistant.lite.test.test.TestExport)'
```

Reviewed By: jeanm

Differential Revision: D28431840

fbshipit-source-id: 0f1d530034404421a5d51691173e1cc0ee16fdd6
2021-05-14 13:45:49 -07:00
BowenBao
bfe7728f18 [ONNX] Process const folding progressively when converts to ONNX (#54569) (#57601)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57601

This PR automatically solves the ONNX constant attribute issue from PR https://github.com/pytorch/pytorch/pull/53784.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393525

Pulled By: SplitInfinity

fbshipit-source-id: 833dac7c71f24a88af62d5dd2be0a702ed34d053

Co-authored-by: David <jiafa@microsoft.com>
2021-05-13 13:42:51 -07:00
BowenBao
346dc88bfa [ONNX] Support registering custom export for prim::PythonOp from torch.autograd.Function (#55630) (#57600)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57600

Demo script:

```python
import torch

class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, scalar_tuple, scalar, scalar_list):
        ctx.save_for_backward(input)
        return input.clamp(min=scalar)
    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear_a = torch.nn.Linear(2, 2)
        self.linear_b = torch.nn.Linear(2, 2)
        self.relu = MyReLU.apply
    def forward(self, x):
        h = self.linear_a(x)
        h = self.relu(h, (5, 3), 2, [1, 2, 3])
        h = self.linear_b(h)
        return h

"""
User defines how to export prim::PythonOp into a custom op.
"""
def symbolic_pythonop(g, n, *args, **kwargs):
    # Print information:
    print('arguments of ', kwargs['name'], ':')
    print('original node: ', n)
    for i, out in enumerate(n.outputs()):
        print('original output {}: {}, requires grad: {}'.format(i, out, out.requiresGrad()))
    import torch.onnx.symbolic_helper as sym_helper
    for i, arg in enumerate(args):
        print('arg {}: {}, requires grad: {}'.format(i, arg, arg.requiresGrad() if sym_helper._is_value(arg) else False))
    for k, v in kwargs.items():
        print('key: ', k, ' v: ', v)

    # TODO: all inputs (tensors and scalars) are in args.
    #       backend can define CustomDomain::PythonOp and how info are stored however it deem fit.
    return g.op("CustomDomain::PythonOp", args[0], name_s=kwargs['name'])

torch.onnx.register_custom_op_symbolic("::prim_PythonOp", symbolic_pythonop, 9)

# Define input.
x = torch.tensor([[0.3971, 0.7544],
                  [0.5695, 0.4388]], requires_grad=True)

model = MyModule()
# Forward.
y = model(x)

torch.onnx.export(model, (x,), 'model.onnx', opset_version=12, verbose=True)
```

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393528

Pulled By: SplitInfinity

fbshipit-source-id: e0d55b7c737c5916fda08a3b26b3306037f970df

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-05-13 13:42:49 -07:00
neginraoof
1de3525ca8 [ONNX] Handle PackedParams inputs for _propagate_and_assign_input_shapes (#56449) (#57079)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57079

Testing onnx 1.9 release, we see that the old bug is triggered for the caffe2 test:
`pytest test/onnx/test_pytorch_onnx_caffe2_quantized.py::TestQuantizedOps::test_small_model`
This is because the graph inputs
```python
graph(%x.1 : Tensor,
      %conv1._packed_params : __torch__.torch.classes.quantized.Conv2dPackedParamsBase,
      %conv2._packed_params : __torch__.torch.classes.quantized.Conv2dPackedParamsBase,
      %fc.bias : Float(10, strides=[1], requires_grad=0, device=cpu),
      %fc.weight : Float(10, 72, strides=[72, 1], requires_grad=0, device=cpu)):
```
contains `Conv2dPackedParamsBase` which is a PackedParams.
When we do flatten, we will flatten to several tensors, then the shape inference for input misaligned.
This PR record how may tensors got flattened in PackeParams, and skip by these number rather than 1, then the UT passed.
Note that tuple case should still follow the original logic.

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D28393949

Pulled By: malfet

fbshipit-source-id: 98d48aad27e5ca03fb10d260f8e625478d996ee2

Co-authored-by: David <jiafa@microsoft.com>
2021-05-12 15:20:26 -07:00
BowenBao
818ce1d0d2 Add standardOps match more input type in ORT (#53813) (#56172)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56172

Enable the standard ops (Add/Sub/Mul/Div/Gemm/Pow/Mod) with low-precision inputs in ORT.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866136

Pulled By: SplitInfinity

fbshipit-source-id: f2cf5649fffefd68c0cc7b6dce94198751636727
2021-04-21 17:58:08 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.
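
A small illustration of what the lint enforces (the functions are made up):

```python
# Before: an unqualified ignore silences every type error on the line.
def f_old() -> int:
    return "oops"  # type: ignore

# After: the lint requires qualifying the ignore with a mypy error code.
def f_new() -> int:
    return "oops"  # type: ignore[return-value]
```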

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
Negin Raoof
cd9dd653e9 [ONNX] Support primitive type input/outputs and attributes (#53550) (#54864)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54864

Support primitive type attributes. Needed for Silero model.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27408982

Pulled By: SplitInfinity

fbshipit-source-id: 16b291eedbe9f9bb31d7664a29a484555df53755
2021-03-31 21:14:20 -07:00
Peiyuan Liao
3519625a34 Fix onnx warning message (#54371)
Summary:
Adding a space between "as" and "Dropout".

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54371

Reviewed By: radkris-git

Differential Revision: D27244053

Pulled By: heitorschueroff

fbshipit-source-id: 500ea719e239ce89e5ac4b54e5b32a36155e8544
2021-03-23 03:05:52 -07:00
BowenBao
7f17058894 [ONNX] Symbolic shape inference (#51481) (#53307)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53307

This PR adds symbolic shape inference, in the ONNX pass _jit_pass_onnx_graph_shape_type_inference.
It creates a singleton ConstantValueMap.
It leverages the constant folding technique and does per-op handling for ConstantValueMap.
As a byproduct, it enables the fold_if pass for dynamic-axes cases, typically for faster-rcnn etc.

The core change is in `torch/csrc/jit/passes/onnx/shape_type_inference.cpp` and `torch/csrc/jit/passes/onnx/constant_map.cpp`.

We usually need to copy a tensor to store it in the ConstantValueMap, otherwise the underlying value may change. This issue shows up in (1) from_blob and (2) getting a value from a Constant node.

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922414

Pulled By: SplitInfinity

fbshipit-source-id: 7654dc13d1de8d9496ad4be89f1454260d7bdeb0
2021-03-12 02:49:14 -08:00
BowenBao
57d1df071f [ONNX] Support inplace operations on inplace indexing (#52063) (#53306)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53306

* [ONNX] Fix for sequence of mutations in blocks (#51577)

Fixes consecutive mutations in a tensor inside blocks.
Also, support append and pop in blocks.

* Support inplace operations + indexing

* Clean up old pass for remove mutations

* Add loop test

* Fixes for set attr in loops

* Removing the new jit API flag

* [ONNX] Redesign onnx pass to enable shape type dependent pattern conversion - cont (#51795)

With the introduction of ONNX shape inference, shape and type are inferred on the fly as operators get converted from ATen to ONNX when running symbolic functions. This resolves the shape/type requirement for the symbolic functions. The pre-ONNX passes, however, cannot be supported by shape inference, since at that stage the operators in the graph are still ATen operators.

This PR is to update the design of ONNX pass, to enable a mechanism of capturing subgraphs of ATen operators of certain patterns, and convert them later, when shape/type information of upstream operators are available.

The new design will require pre-onnx passes that need shape/type to be written in two parts, encapsulation and conversion.

    The encapsulation part will find the nodes of patterns, like how pre-onnx passes were written previously. But instead of converting the nodes, it will encapsulate them into a sub-block of a new placeholder node. This part is called before onnx pass, so it runs before calling symbolic functions.

    The conversion part will be called inside the onnx pass. In onnx pass, run_symbolic_func will be called for each node in topological order. When it reaches the placeholder node, the conversion part will be invoked. It will convert the nodes inside the sub-block based on pattern. By that time, it will have shape/type of upstream operators available. After the conversion is complete, the placeholder node will be removed, and nodes inside its sub-block converted. Run_symbolic_func will be called for these nodes, and they will be converted from ATen operator to ONNX operator.

This PR includes several other fixes, listed below.
* ~~replace helper.cpp with onnx_utils.cpp for holding utility functions.~~
* fix EraseNumberTypes on Bool type; the code was outdated, from a time when the Bool type didn't exist.
* ~~enable onnx shape inference in export with parameter/initializer data.~~
* other code clean ups.
* fix insertion of identity nodes for loop opset 13 sequence output.

~~PR depends on #51603~~

* Fix after merge

* clang

* Fix clang

* Fix clang

* Fix warning message.

* Fixes for non-model param attributes

* Fix for caffe2

* Additional test

* clang

* Skip test for lower opsets

* fix clang-tidy

* Update init.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Fix for clang formatting

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922416

Pulled By: SplitInfinity

fbshipit-source-id: e7108620b39b6404c594910786c4d275fee59d84

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-03-12 02:49:11 -08:00
BowenBao
b9e900ee52 [ONNX] Update inputs/input_names formatting to avoid ValueError with scriptMethods (#53519) (#53548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53548

fixes issue faced in #53506

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D26922415

Pulled By: malfet

fbshipit-source-id: b61842827bb14cef8c7a7089b2426fa53e642c90
2021-03-11 14:26:02 -08:00
BowenBao
3f9c803fe8 [ONNX] Redesign onnx pass to enable shape type dependent pattern conversion - cont (#51795) (#53304)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53304

With the introduction of ONNX shape inference, shape and type are inferred on the fly as operators get converted from ATen to ONNX when running symbolic functions. This resolves the shape/type requirement for the symbolic functions. The pre-ONNX passes, however, cannot be supported by shape inference, since at that stage the operators in the graph are still ATen operators.

This PR is to update the design of ONNX pass, to enable a mechanism of capturing subgraphs of ATen operators of certain patterns, and convert them later, when shape/type information of upstream operators are available.

The new design will require pre-onnx passes that need shape/type to be written in two parts, encapsulation and conversion.

    The encapsulation part will find the nodes of patterns, like how pre-onnx passes were written previously. But instead of converting the nodes, it will encapsulate them into a sub-block of a new placeholder node. This part is called before onnx pass, so it runs before calling symbolic functions.

    The conversion part will be called inside the onnx pass. In onnx pass, run_symbolic_func will be called for each node in topological order. When it reaches the placeholder node, the conversion part will be invoked. It will convert the nodes inside the sub-block based on pattern. By that time, it will have shape/type of upstream operators available. After the conversion is complete, the placeholder node will be removed, and nodes inside its sub-block converted. Run_symbolic_func will be called for these nodes, and they will be converted from ATen operator to ONNX operator.

This PR includes several other fixes, listed below.
* ~~replace helper.cpp with onnx_utils.cpp for holding utility functions.~~
* fix EraseNumberTypes on Bool type; the code was outdated, from a time when the Bool type didn't exist.
* ~~enable onnx shape inference in export with parameter/initializer data.~~
* other code clean ups.
* fix insertion of identity nodes for loop opset 13 sequence output.

~~PR depends on #51603~~

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D26922417

Pulled By: malfet

fbshipit-source-id: 14ed06158d539e2451c2e5e63ba1b32fb0f75095
2021-03-11 10:30:09 -08:00
Chester Liu
58eb23378f Clean up usage of torch._six partially (#49785)
Summary:
See https://github.com/pytorch/pytorch/issues/42919

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49785

Reviewed By: mruberry

Differential Revision: D25963833

Pulled By: bugra

fbshipit-source-id: 11c90d6b8d3f206c9d0a4d8621b773beb10c6ba2
2021-02-08 13:58:34 -08:00
BowenBao
3f185ac18e [ONNX] Export get/set attribute nodes (#50768) (#51517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51517

Fix get/set attributes when getting/setting a model parameter.
This PR also fixes inplace ops in If blocks.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203116

Pulled By: SplitInfinity

fbshipit-source-id: bed6ee6dd92b5b43febc8c584a6872290f8fe33f
2021-02-04 12:43:59 -08:00
BowenBao
70dcfe2991 [ONNX] Enable _jit_pass_onnx_fold_if only when dynamic_axes is None (#50582) (#50910)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50910

Fixing pytorch/vision#3251 (PR #49410 triggered the torchvision test build failure, in three tests: test_faster_rcnn, test_mask_rcnn, test_keypoint_rcnn.)

The offending PR is fine on the PyTorch UT, because the torchvision and PyTorch tests have a gap when we merge them: we use different test APIs on the two sides, causing some discrepancy.

This PR bridges the gap for the above three tests and disables the _jit_pass_onnx_fold_if pass until it gets fixed, allowing _jit_pass_onnx_fold_if only when dynamic_axes is None (see the sketch below).
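
A minimal sketch of an export with dynamic_axes, the case in which the fold_if pass is now skipped (toy model, illustrative):

```python
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(3, 4)

torch.onnx.export(
    model,
    (x,),
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    # With dynamic_axes provided, _jit_pass_onnx_fold_if does not run.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```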

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050886

Pulled By: SplitInfinity

fbshipit-source-id: b765ffe30914261866dcc761f0d0999fd16169e3
2021-01-27 17:48:58 -08:00
BowenBao
1c9347c666 [ONNX] Use parameter values in onnx shape inference (#49706) (#50905)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50905

Adds an additional run of ONNX shape inference after constant folding, since initializers may have changed and affected shape inference.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050881

Pulled By: SplitInfinity

fbshipit-source-id: 9e5d69c52b647133cd3a0781988e2ad1d1a9c09d
2021-01-27 17:45:32 -08:00
neginraoof
137f2a385a [ONNX] Handle sequence output for models (#50599)
Summary:
Duplicate of https://github.com/pytorch/pytorch/issues/46542

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50599

Reviewed By: SplitInfinity

Differential Revision: D25928897

Pulled By: bzinodev

fbshipit-source-id: a898cef7b2d15a287aedd9798ce1423cebf378d4
2021-01-21 15:36:41 -08:00
Brian Vaughan
a9db2f8e7a Revert D24924236: [pytorch][PR] [ONNX] Handle sequence output shape and type inference
Test Plan: revert-hammer

Differential Revision:
D24924236 (adc65e7c8d)

Original commit changeset: 506e70a38cfe

fbshipit-source-id: 78069a33fb3df825af1cb482da06a07f7b26ab48
2021-01-15 05:58:35 -08:00
Negin Raoof
adc65e7c8d [ONNX] Handle sequence output shape and type inference (#46542)
Summary:
Handle sequence output shape and type inference.

This PR fixes the value type of sequence outputs. Prior to this, all sequence-type model outputs were unfolded in exported ONNX models.
It also enables shape inference for sequence outputs to represent the dynamic shape of these values.
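
For context, a minimal example of a model with a sequence (list-of-tensors) output of the kind this PR addresses; the model and shapes are illustrative:

```python
import torch

class SplitModel(torch.nn.Module):
    def forward(self, x):
        # Returns a List[Tensor]: a sequence-type output. Before this PR,
        # such outputs were unfolded into individual tensor outputs in
        # the exported ONNX model.
        return list(torch.split(x, 1, dim=0))

out = SplitModel()(torch.randn(4, 3))  # a list of four (1, 3) tensors
```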

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46542

Reviewed By: ezyang

Differential Revision: D24924236

Pulled By: bzinodev

fbshipit-source-id: 506e70a38cfe31069191d7f40fc6375239c6aafe
2021-01-14 21:12:35 -08:00
Spandan Tiwari
aeefe2ce31 [ONNX] ONNX dev branch merge 01-06-2021 (#50163)
Summary:
[ONNX] ONNX dev branch merge 01-06-2021
- [ONNX] Support onnx if/loop sequence output in opset 13 - (https://github.com/pytorch/pytorch/issues/49270)
- Symbolic function for torch.square (https://github.com/pytorch/pytorch/issues/49446)
- [ONNX] Add checks in ONNXSetDynamicInputShape (https://github.com/pytorch/pytorch/issues/49783) …
- [ONNX] Enable export of aten::__derive_index (https://github.com/pytorch/pytorch/issues/49514) …
- [ONNX] Update symbolic for unfold (https://github.com/pytorch/pytorch/issues/49378) …
- [ONNX] Update the sequence of initializers in the exported graph so that it is the same as the inputs. (https://github.com/pytorch/pytorch/issues/49798)
- [ONNX] Enable opset 13 ops (https://github.com/pytorch/pytorch/issues/49612) …
- [ONNX] Improve error message for supported model input types in ONNX export API. (https://github.com/pytorch/pytorch/issues/50119)
- [ONNX] Add a post-pass for If folding (https://github.com/pytorch/pytorch/issues/49410)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50163

Reviewed By: pbelevich

Differential Revision: D25821059

Pulled By: SplitInfinity

fbshipit-source-id: 9f511a93d9d5812d0ab0a49d61ed0fa5f8066948
2021-01-13 13:51:21 -08:00
Samuel Marks
e6779d4357 [*.py] Rename "Arguments:" to "Args:" (#49736)
Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.

```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
    printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args:      1095
Arguments: 0336
```

It is easy enough to extend my parsers to support both variants; however, it looks like `Arguments:` is wrong anyway, as per:

  - https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md)

  - https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md)

  - https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst)

Therefore, only `Args:` is valid. This PR replaces all occurrences of `Arguments:` with `Args:` throughout the codebase.
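
For reference, a minimal Google-style docstring using the canonical `Args:` section (the function itself is illustrative, not one from the codebase):

```python
def resize(image, size):
    """Resize an image to the given size.

    Args:
        image: Input image tensor.
        size: Target (height, width) pair.

    Returns:
        The resized image tensor.
    """
    ...
```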

PS: For related PRs, see tensorflow/tensorflow/pull/45420

PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch) organisation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736

Reviewed By: albanD

Differential Revision: D25710534

Pulled By: soumith

fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619
2020-12-28 09:34:47 -08:00
shubhambhokare1
e1c1a7e964 [ONNX] Changes to export API to better handle named arguments (#47367)
Summary:
The args parameter of ONNX export is changed to better support optional arguments, such that args is represented as:

args (tuple of arguments or torch.Tensor, optionally with a dictionary of named arguments):
            a dictionary specifying the input for each named parameter:
            - KEY: str, the named parameter
            - VALUE: the corresponding input
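
A hedged sketch of how the updated API is meant to be used, based on the description above (the model and tensor shapes are illustrative, and passing the dictionary as the trailing element of args is my reading of the description):

```python
import torch

class Model(torch.nn.Module):
    def forward(self, x, y=None):
        return x + y if y is not None else x

x = torch.randn(2, 3)
y = torch.randn(2, 3)

# Positional inputs as a tuple, with the optional named arguments
# gathered into a dictionary: KEY is the parameter name, VALUE is the
# corresponding input.
torch.onnx.export(Model(), (x, {"y": y}), "model.onnx")
```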

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47367

Reviewed By: H-Huang

Differential Revision: D25432691

Pulled By: bzinodev

fbshipit-source-id: 9d4cba73cbf7bef256351f181f9ac5434b77eee8
2020-12-10 12:31:00 -08:00
Guilherme Leobas
34cc77a811 Torch onnx (#48980)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45215

This is a follow up PR of https://github.com/pytorch/pytorch/issues/45258 and https://github.com/pytorch/pytorch/issues/48782

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48980

Reviewed By: zhangguanheng66

Differential Revision: D25399823

Pulled By: ezyang

fbshipit-source-id: 798055f4abbbffecdfab0325884193c81addecec
2020-12-08 19:41:44 -08:00
Edward Yang
88ebf6f894 Revert D25304229: [pytorch][PR] Add type annotations to torch.onnx.* modules
Test Plan: revert-hammer

Differential Revision:
D25304229 (8bc6023d7a)

Original commit changeset: b01b21ddbf86

fbshipit-source-id: bc3308176e2c70423f29f694e9db94828213e7d6
2020-12-07 11:58:03 -08:00
Guilherme Leobas
8bc6023d7a Add type annotations to torch.onnx.* modules (#48782)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45215

This is a follow up PR of https://github.com/pytorch/pytorch/issues/45258

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48782

Reviewed By: heitorschueroff

Differential Revision: D25304229

Pulled By: ezyang

fbshipit-source-id: b01b21ddbf86f908ca08173e68b81fb25851bc81
2020-12-07 08:23:02 -08:00
neginraoof
15bc21c280 [ONNX] Track and list model params for scripting (#47348)
Summary:
List model parameters as inputs after freezing the script module.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47348

Reviewed By: heitorschueroff

Differential Revision: D25309756

Pulled By: bzinodev

fbshipit-source-id: cbe679ece934d5e6c418a22f08c1662256914c4c
2020-12-03 23:07:28 -08:00
Mike Ruberry
6299c870ee Revert D25254920: [pytorch][PR] Add type annotations to torch.onnx.* modules
Test Plan: revert-hammer

Differential Revision:
D25254920 (40a2dd7e1e)

Original commit changeset: dc9dc036da43

fbshipit-source-id: c17cb282ebf90ecbae4023aa63ecbb443a87037d
2020-12-02 02:25:31 -08:00
Guilherme Leobas
40a2dd7e1e Add type annotations to torch.onnx.* modules (#45258)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45215

A few mypy issues still need to be resolved before a review. In particular, there is an error which I don't know how to solve; see:
```python
torch/onnx/utils.py:437: error: Name 'is_originally_training' is not defined  [name-defined]
        if training is None or training == TrainingMode.EVAL or (training == TrainingMode.PRESERVE and not is_originally_training):
```

`is_originally_training` is used but never defined/imported in [`torch/onnx/utils.py`](ab5cc97fb0/torch/onnx/utils.py (L437)).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45258

Reviewed By: zhangguanheng66

Differential Revision: D25254920

Pulled By: ezyang

fbshipit-source-id: dc9dc036da43dd56b23bd6141e3ab92e1a16e3b8
2020-12-01 20:41:39 -08:00
BowenBao
6a4d55f23c [ONNX] Enable onnx shape inference in export by default (#46629)
Summary:
* Enable ONNX shape inference by default.
* ONNX could potentially set the inferred shape on the output instead of in value_infos, so check both to be sure.
* Small fix in symbol_map to avoid overlooking duplicate symbols.
* Fix scalar_type_analysis to be consistent with PyTorch's scalar type promotion logic.
* Correctly handle a None dim_param from the ONNX inferred shape.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46629

Reviewed By: ailzhang

Differential Revision: D24900171

Pulled By: bzinodev

fbshipit-source-id: 83d37fb9daf83a2c5969d8383e4c8aac986c35fb
2020-11-13 15:09:46 -08:00
Negin Raoof
da2e2336b6 [ONNX] Export and shape inference for prim uninitialized in If subblock (#46094)
Summary:
Enable export of prim::Uninitialized in If subblock outputs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46094

Reviewed By: houseroad

Differential Revision: D24838537

Pulled By: bzinodev

fbshipit-source-id: d0719b140393595e6df114ef5cc1bb845e919c14
2020-11-11 12:10:49 -08:00
Bowen Bao
e26c1726cf [ONNX] Fix scripting rand/randn/where (#45793)
Summary:
- rand/randn: the type signature of int[] is different in scripting, which fails the check.
- where: scripting produces dynamic cases, which are supported by the `unbind` export in higher opsets.
- test_list_pass: this test fails when using the new scripting API; it should be fixed by https://github.com/pytorch/pytorch/issues/45369

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45793

Reviewed By: mrshenli

Differential Revision: D24566096

Pulled By: bzinodev

fbshipit-source-id: 6fe0925c66dee342106d71c9cbc3c95cabe639f7
2020-11-09 12:39:31 -08:00
neginraoof
5ce31b6f3f [ONNX] Improve error handling for adaptive_pool (#45874)
Summary:
Duplicate of https://github.com/pytorch/pytorch/issues/43032
This update would also improve error handling for interpolate with 'area' mode.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45874

Reviewed By: albanD

Differential Revision: D24141266

Pulled By: bzinodev

fbshipit-source-id: 7559f1d6af4f1ef3507c15a1aee76fe01fa433cd
2020-10-07 09:20:35 -07:00
Ansley Ussery
5072728d88 Fix stride printing/parsing formatting (#45156)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45156

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D24078695

Pulled By: ansley

fbshipit-source-id: dab993277d43b31105c38d12098c37653747b42a
2020-10-06 15:06:46 -07:00
Dmytro Dzhulgakov
5177f8de2b Revert D23398534: [pytorch][PR] [ONNX] Improve error handling for adaptive_pool
Test Plan: revert-hammer

Differential Revision:
D23398534 (45ddeb5ce6)

Original commit changeset: f2d60d40340f

fbshipit-source-id: acc9d6c3d031662c37447fcee027b0c97b8492a7
2020-10-05 15:16:59 -07:00
Negin Raoof
45ddeb5ce6 [ONNX] Improve error handling for adaptive_pool (#43032)
Summary:
This would also improve error handling for interpolate with 'area' mode.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43032

Reviewed By: malfet

Differential Revision: D23398534

Pulled By: bzinodev

fbshipit-source-id: f2d60d40340f46e7c0499ea73c1e39945713418d
2020-10-05 11:53:14 -07:00
BowenBao
3da4cea658 [ONNX] Add dim_param support in export with onnx shape inference (#44920)
Summary:
* Support propagating `dim_param` in ONNX by encoding it as a `ShapeSymbol` in the `SymbolicShape` of outputs. If export is called with `dynamic_axes` provided, shape inference starts with these axes set as dynamic (see the sketch after this list).
* Add new test file `test_pytorch_onnx_shape_inference.py`, reusing all test cases from `test_pytorch_onnx_onnxruntime.py` but focusing on validating shapes for all nodes in the graph. Currently this is not enabled in the CI, since there are still quite a few existing issues and corner cases to fix. The test defaults to running only at opset 12.
* Bug fixes, such as for div, _len, the peephole.cpp passes for PackPadded, and LogSoftmaxCrossEntropy.
* This PR depends on existing PRs such as #44332.
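
For illustration, how `dynamic_axes` is passed at export time so that shape inference starts with those axes symbolic (the model, file name, and the "batch" symbol are arbitrary):

```python
import torch

model = torch.nn.Linear(3, 4)
x = torch.randn(2, 3)

torch.onnx.export(
    model, (x,), "linear.onnx",
    input_names=["input"], output_names=["output"],
    # Axis 0 of both input and output is exported as a symbolic
    # dim_param ("batch") rather than a fixed size.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```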

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44920

Reviewed By: eellison

Differential Revision: D23958398

Pulled By: bzinodev

fbshipit-source-id: 00479d9bd19c867d526769a15ba97ec16d56e51d
2020-09-30 21:56:24 -07:00
Negin Raoof
6b42ca2d69 [ONNX] Update embedding_bag export (#44693)
Summary:
Support export of embedding_bag with a dynamic list of offsets (see the example below).
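
To illustrate what a dynamic list of offsets means here, a small eager-mode example (the sizes are arbitrary):

```python
import torch

emb = torch.nn.EmbeddingBag(num_embeddings=10, embedding_dim=3, mode="sum")
inputs = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])
# offsets marks where each bag starts; its length (the number of bags)
# can vary between calls, which is the dynamic case this PR exports.
offsets = torch.tensor([0, 4])
out = emb(inputs, offsets)  # shape: (2, 3)
```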

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44693

Reviewed By: malfet

Differential Revision: D23831980

Pulled By: bzinodev

fbshipit-source-id: 3eaff1a0f20d1bcfb8039e518d78c491be381e1a
2020-09-30 13:36:40 -07:00
shubhambhokare1
0063512a4b [ONNX] Updates to diagnostic tool to find missing ops (#44124)
Summary:
Moved the description of the tool and changed function names.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44124

Reviewed By: albanD

Differential Revision: D23674618

Pulled By: bzinodev

fbshipit-source-id: 5db0bb14fc106fc96358b1e0590f08e975388c6d
2020-09-18 10:32:30 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
BowenBao
43406e218a [ONNX] Update ONNX shape inference (#43929)
Summary:
* Support sequence type (de)serialization, enabling ONNX shape inference on sequence nodes.
* Fix shape inference with block inputs/outputs, e.g. Loop and If nodes.
* Fix bugs in symbolics discovered by the coverage of ONNX shape inference.
* Improve debuggability: added more JIT logs. For simplicity, the default log level, when JIT logging is enabled, does not dump IR graphs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43929

Reviewed By: albanD

Differential Revision: D23674604

Pulled By: bzinodev

fbshipit-source-id: ab6aacb16d0e3b9a4708845bce27c6d65e567ba7
2020-09-14 15:36:19 -07:00
Elias Ellison
1f0dcf39fc [JIT] dont optimize device dtype on inline (#43363)
Summary:
Follow up to https://github.com/pytorch/pytorch/pull/36404

Adding prim::device and prim::dtype to the list of skipped peepholes when we run inlining. In the long term, a better fix may be to not encode shape/dtype info on the traced graph at all, because it is not guaranteed to be correct. This is currently blocked by ONNX.

Partial fix for https://github.com/pytorch/pytorch/issues/43134

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43363

Reviewed By: glaringlee

Differential Revision: D23383987

Pulled By: eellison

fbshipit-source-id: 2e9c5160d39d690046bd9904be979d58af8d3a20
2020-09-11 17:29:54 -07:00
neginraoof
3d7c22a2ce [ONNX] Enable new scripting passes for functionalization and remove_mutation (#43791)
Summary:
Duplicate of https://github.com/pytorch/pytorch/issues/41413
This PR initiates the process of updating the TorchScript backend interface used by the ONNX exporter.

Replace the jit lower-graph pass with the freeze-module pass.

Enable ScriptModule tests for ONNX operator tests (ORT backend) and model tests by default.

Replace the jit remove_inplace_ops pass with remove_mutation, and consolidate all passes for handling inplace ops.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43791

Reviewed By: houseroad

Differential Revision: D23421872

Pulled By: bzinodev

fbshipit-source-id: a98710c45ee905748ec58385e2a232de2486331b
2020-09-04 15:21:45 -07:00
Akihiro Nitta
f17d7a5556 Fix exception chaining in torch/ (#43836)
Summary:
## Motivation
Fixes https://github.com/pytorch/pytorch/issues/43770.

## Description of the change
This PR fixes exception chaining only in files under `torch/` where appropriate.
To fix exception chaining, I used either:
1. `raise new_exception from old_exception` where `new_exception` by itself does not seem descriptive enough for debugging, or where `old_exception` carries valuable information.
2. `raise new_exception from None` where raising both `new_exception` and `old_exception` seems noisy and redundant.
I subjectively chose which one to use from the above options.
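
For reference, a minimal sketch of the two patterns (the function and messages are illustrative):

```python
import json

def load_config(path):
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError as exc:
        # Option 1: chain, since the original exception adds context.
        raise RuntimeError(f"config file not found: {path}") from exc
    except json.JSONDecodeError:
        # Option 2: suppress the original traceback; the new message
        # is sufficient and chaining would just be noise.
        raise RuntimeError(f"config file is not valid JSON: {path}") from None
```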

## List of lines containing raise in except clause:
I wrote [this simple script](https://gist.github.com/akihironitta/4223c1b32404b36c1b349d70c4c93b4d) using [ast](https://docs.python.org/3.8/library/ast.html#module-ast) to list lines where `raise`ing in `except` clause.

- [x] 000739c31a/torch/jit/annotations.py (L35)
- [x] 000739c31a/torch/jit/annotations.py (L150)
- [x] 000739c31a/torch/jit/annotations.py (L158)
- [x] 000739c31a/torch/jit/annotations.py (L231)
- [x] 000739c31a/torch/jit/_trace.py (L432)
- [x] 000739c31a/torch/nn/utils/prune.py (L192)
- [x] 000739c31a/torch/cuda/nvtx.py (L7)
- [x] 000739c31a/torch/utils/cpp_extension.py (L1537)
- [x] 000739c31a/torch/utils/tensorboard/_pytorch_graph.py (L292)
- [x] 000739c31a/torch/utils/data/dataloader.py (L835)
- [x] 000739c31a/torch/utils/data/dataloader.py (L849)
- [x] 000739c31a/torch/utils/data/dataloader.py (L856)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L186)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L189)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L424)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1279)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1283)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1356)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1388)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1391)
- [ ] 000739c31a/torch/testing/_internal/common_utils.py (L1412)
- [x] 000739c31a/torch/testing/_internal/codegen/random_topo_test.py (L310)
- [x] 000739c31a/torch/testing/_internal/codegen/random_topo_test.py (L329)
- [x] 000739c31a/torch/testing/_internal/codegen/random_topo_test.py (L332)
- [x] 000739c31a/torch/testing/_internal/jit_utils.py (L183)
- [x] 000739c31a/torch/testing/_internal/common_nn.py (L4789)
- [x] 000739c31a/torch/onnx/utils.py (L367)
- [x] 000739c31a/torch/onnx/utils.py (L659)
- [x] 000739c31a/torch/onnx/utils.py (L892)
- [x] 000739c31a/torch/onnx/utils.py (L897)
- [x] 000739c31a/torch/serialization.py (L108)
- [x] 000739c31a/torch/serialization.py (L754)
- [x] 000739c31a/torch/distributed/rpc/_testing/faulty_agent_backend_registry.py (L76)
- [x] 000739c31a/torch/distributed/rpc/backend_registry.py (L260)
- [x] 000739c31a/torch/distributed/distributed_c10d.py (L184)
- [x] 000739c31a/torch/_utils_internal.py (L57)
- [x] 000739c31a/torch/hub.py (L494)
- [x] 000739c31a/torch/contrib/_tensorboard_vis.py (L16)
- [x] 000739c31a/torch/distributions/lowrank_multivariate_normal.py (L100)
- [x] 000739c31a/torch/distributions/constraint_registry.py (L142)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43836

Reviewed By: ailzhang

Differential Revision: D23431212

Pulled By: malfet

fbshipit-source-id: 5f7f41b391164a5ad0efc06e55cd58c23408a921
2020-08-31 20:26:23 -07:00
BowenBao
08126c9153 [ONNX] Utilize ONNX shape inference for ONNX exporter (#40628)
Summary:
It is often the case that the conversion from a torch operator to an ONNX operator requires the input rank/dtype/shape to be known. Previously, the conversion depended on the tracer to provide this info, leaving a gap in the conversion of scripted modules.

We are extending the export with support from ONNX shape inference. If enabled, ONNX shape inference is called whenever an ONNX node is created. This is the first PR, introducing the initial look of the feature. More and more cases will be supported following this PR.

* Added a pass to run ONNX shape inference on a given node. The node has to have the namespace `onnx`.
* Moved helper functions from `export.cpp` to a common place for re-use.
* This feature is currently experimental, and can be turned on through the flag `onnx_shape_inference` in the internal API `torch.onnx._export` (see the sketch after this list).
* Currently ONNX Sequence ops, If/Loop, and ConstantOfShape are skipped due to limitations. Support will be added in the future.
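
A hedged sketch of turning the experimental flag on, per the description above; `torch.onnx._export` is an internal API and this keyword may have changed since, and the model here is arbitrary:

```python
import torch

model = torch.nn.Linear(3, 4)
x = torch.randn(2, 3)

# Internal/experimental API as described in this commit; not a stable
# public interface.
torch.onnx._export(model, (x,), "linear.onnx", onnx_shape_inference=True)
```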

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40628

Reviewed By: mrshenli

Differential Revision: D22709746

Pulled By: bzinodev

fbshipit-source-id: b52aeeae00667e66e0b0c1144022f7af9a8b2948
2020-08-30 18:35:46 -07:00
shubhambhokare1
6aaae3b08b [ONNX] Addition of diagnostic tool API (#43020)
Summary:
Added initial diagnostic tool API

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43020

Reviewed By: malfet

Differential Revision: D23398459

Pulled By: bzinodev

fbshipit-source-id: 7a6d9164a19e3ba51676fbcf645c4d358825eb42
2020-08-28 23:04:59 -07:00
Yael Dekel
3c5e3966f4 [ONNX] Squeeze operator should give an error when trying to apply to a dimension with shape > 1 (#38476)
Summary:
The ONNX spec for the Squeeze operator:

> Remove single-dimensional entries from the shape of a tensor. Takes a parameter axes with a list of axes to squeeze. If axes is not provided, all the single dimensions will be removed from the shape. If an axis is selected with shape entry not equal to one, an error is raised.

Currently, as explained in issue https://github.com/pytorch/pytorch/issues/36796, it is possible to export such a model to ONNX, and running it then results in an exception from ONNX Runtime.
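
An illustrative repro of the pattern in question (shapes are arbitrary): in eager mode, `squeeze` on a non-singleton dim is a no-op, but the exported ONNX Squeeze would violate the spec quoted above.

```python
import torch

class BadSqueeze(torch.nn.Module):
    def forward(self, x):
        # dim 0 has size 2, so eager-mode squeeze is a no-op, but an
        # ONNX Squeeze on axis 0 would be invalid per the spec above.
        return torch.squeeze(x, 0)

x = torch.randn(2, 3)
# Per this PR's description, exporting this pattern should now raise a
# clear error instead of producing an invalid model.
torch.onnx.export(BadSqueeze(), (x,), "squeeze.onnx")
```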

Fixes https://github.com/pytorch/pytorch/issues/36796.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/38476

Reviewed By: hl475

Differential Revision: D22158024

Pulled By: houseroad

fbshipit-source-id: bed625f3c626eabcbfb2ea83ec2f992963defa19
2020-08-17 17:41:46 -07:00
Ksenija Stanojevic
e845b0ab51 [Resending] [ONNX] Add eliminate_unused_items pass (#42743)
Summary:
This PR:

- Adds eliminate_unused_items pass that removes unused inputs and initializers.
- Fixes run_embed_params function so it doesn't export unnecessary parameters.
- Removes test_modifying_params in test_verify since it's no longer needed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42743

Reviewed By: hl475

Differential Revision: D23058954

Pulled By: houseroad

fbshipit-source-id: cd1e81463285a0bf4e60766c8c87fc9a350d9c7e
2020-08-11 20:30:50 -07:00
BowenBao
a6c8730045 [ONNX] Add preprocess pass for onnx export (#41832)
Summary:
In `_jit_pass_onnx`, symbolic functions are called for each node for conversion. However, there are nodes that cannot be converted without additional context. For example, the number of outputs from split (and whether it is static or dynamic) is unknown until the point where the result is unpacked by a listUnpack node. This pass does a preprocessing step and prepares the nodes such that enough context can be received by the symbolic function (see the example below).
* After preprocessing, `_jit_pass_onnx` should have enough context to produce valid ONNX nodes, instead of half-baked nodes that rely on fixes from later post-passes.
* `_jit_pass_onnx_peephole` should be a pass that does ONNX-specific optimizations instead of ONNX-specific fixes.
* Producing more valid ONNX nodes in `_jit_pass_onnx` enables better utilization of ONNX shape inference (https://github.com/pytorch/pytorch/issues/40628).
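
An illustrative case of the missing-context problem: the number of outputs of `split` only becomes pinned down at the downstream `prim::ListUnpack`, which is the kind of node pair the preprocess pass prepares for the symbolic function (the function below is just a demo):

```python
import torch

@torch.jit.script
def f(x):
    # aten::split produces a Tensor[]; how many tensors it holds is
    # only fixed by the unpacking below (a prim::ListUnpack node).
    a, b, c = torch.split(x, 2, dim=0)
    return a + b + c

print(f.graph)  # shows the split / ListUnpack pair described above
```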

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41832

Reviewed By: ZolotukhinM

Differential Revision: D22968334

Pulled By: bzinodev

fbshipit-source-id: 8226f03c5b29968e8197d242ca8e620c6e1d42a5
2020-08-06 20:34:12 -07:00
Mike Ruberry
ae67f4c8b8 Revert D22845258: [pytorch][PR] [ONNX] Enable scripting tests and update jit passes
Test Plan: revert-hammer

Differential Revision:
D22845258 (04e55d69f9)

Original commit changeset: d57fd4086f27

fbshipit-source-id: 15aa5cdae496a5e8ce2d8739a06dd4a7edc2200c
2020-08-03 23:15:06 -07:00
BowenBao
842759591d [ONNX] Refactor ONNX fixup for Loop and If (#40943)
Summary:
* move both under new file `fixup_onnx_controlflow`
* move the fixup to where the ONNX loop/if node is created, as opposed to running the fixup as a post-pass. This will help with enabling ONNX shape inference later.
* move `fuseSequenceSplitConcat` to `Peephole`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40943

Reviewed By: mrshenli

Differential Revision: D22709999

Pulled By: bzinodev

fbshipit-source-id: 51d316991d25dc4bb4047a6bb46ad1e2401d3d2d
2020-08-03 22:33:17 -07:00
Negin Raoof
04e55d69f9 [ONNX] Enable scripting tests and update jit passes (#41413)
Summary:
This PR initiates the process of updating the TorchScript backend interface used by the ONNX exporter.

- Replace the jit lower-graph pass with the freeze-module pass

- Enable ScriptModule tests for ONNX operator tests (ORT backend) and model tests by default.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41413

Reviewed By: VitalyFedyunin

Differential Revision: D22845258

Pulled By: bzinodev

fbshipit-source-id: d57fd4086f27bd0c3bf5f70af7fd0daa39a2814a
2020-08-03 18:51:19 -07:00