Commit Graph

1025 Commits

Ryan Spring
4f8b986e28 Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds an `approximate` string flag to Gelu
3. Enables the Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enables Tanh Gelu in NvFuser

```
import math
import torch

# reference implementation; normcdf is the standard normal CDF
def normcdf(x):
    return 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def gelu(x, approximate: str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```
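
A quick usage sketch for context (assuming a build that includes this PR, where `torch.nn.functional.gelu` accepts the `approximate` kwarg):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4)
exact = F.gelu(x)                        # 'none': exact CDF formulation
approx = F.gelu(x, approximate='tanh')   # the tanh approximation above
# the two agree closely for typical inputs
torch.testing.assert_close(exact, approx, atol=1e-3, rtol=1e-3)
```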

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: VitalyFedyunin

Differential Revision: D33894937

Pulled By: jbschlosser

fbshipit-source-id: b65e8fb6ea66168af8f34f45ed50e92737a33851
(cherry picked from commit 6e986f91a9)
2022-02-14 03:40:32 +00:00
BowenBao
9257de7efa [ONNX] Minor doc update (#69501) (#69550)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69550

Fix the wiki URL.

Also minor reorganization in onnx.rst.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32994269

Pulled By: malfet

fbshipit-source-id: 112acfe8b7c778d7e3c2cef684023fdaf2c6ec9c
(cherry picked from commit f0787fabde)
2022-02-11 22:05:15 +00:00
BowenBao
cc792746d2 [ONNX] De-duplicate initializers (#68202) (#69547)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69547

ScriptModule export introduces duplicated ONNX initializers for shared weights, which unnecessarily increases ONNX model size. This PR de-duplicates ONNX initializers for models exported in eval mode, by checking whether the underlying tensors share the same `data_ptr`, `strides` and `sizes`.
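
A minimal sketch of that duplicate check (a hypothetical helper for illustration, not the exporter's actual code):

```python
import torch

def is_duplicate_initializer(a: torch.Tensor, b: torch.Tensor) -> bool:
    # Two initializers are treated as duplicates when they view the
    # same storage with identical layout and shape.
    return (a.data_ptr() == b.data_ptr()
            and a.stride() == b.stride()
            and a.size() == b.size())
```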

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32994271

Pulled By: malfet

fbshipit-source-id: 10ac66638b6255890875272472aa9ed07a5b1d9a

Co-authored-by: BowenBao <bowbao@microsoft.com>
(cherry picked from commit d7cbde940c)
2022-02-11 22:05:15 +00:00
BowenBao
ce5b155ccb [ONNX] Link to the wiki (#68505) (#72663)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72663

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D34150535

Pulled By: malfet

fbshipit-source-id: 230b786f6235549fff764083eac2c3744c6bff88

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
(cherry picked from commit c848c582d1)
2022-02-11 22:05:15 +00:00
BowenBao
308de30abc [ONNX] Support embedding_renorm ONNX export
Composed using ONNX operators, following the same logic as 0a07488ed2/aten/src/ATen/native/Embedding.cpp (L153)
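
For reference, a rough eager-mode sketch of the renorm semantics being composed (a hypothetical helper; the actual export builds this out of ONNX ops):

```python
import torch

def embedding_renorm_reference(weight, indices, max_norm, norm_type=2.0):
    # Rows of `weight` selected by `indices` whose p-norm exceeds
    # `max_norm` are rescaled down to max_norm, in place.
    with torch.no_grad():
        unique = torch.unique(indices)
        rows = weight[unique]
        norms = rows.norm(norm_type, dim=1, keepdim=True)
        scale = torch.where(norms > max_norm,
                            max_norm / (norms + 1e-7),
                            torch.ones_like(norms))
        weight[unique] = rows * scale
    return weight
```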

Replaced #72560
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72738
2022-02-11 22:02:22 +00:00
BowenBao
04c5d978b9 [ONNX] Refactor _run_symbolic_function (#67573) (#68491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68491

* Allows implementing symbolic functions for domains other than `aten`, for example `prim`, in symbolic_opset#.py.
* Allows symbolic functions to access extra context if needed, through `SymbolicFunctionState`.
  * In particular, the `prim::PythonOp` special case can access the node without passing it through the inputs. Updates will be made downstream, and in a follow-up PR we will remove the previous workaround in the exporter.
* `prim::Loop`, `prim::If`, etc. are now moved out of `_run_symbolic_function` in utils.py and into symbolic_opset9.py.

Motivation for this change:
- Better maintainability and reduced complexity. It becomes easier to add symbolics for operators, both simple ones and complex ones that need additional context, without the former needing to know that the latter exist.
- The design idea is long outdated: prim ops are no longer rare special cases, and they shouldn't all be handled inside `_run_symbolic_function`, which has become clumsy as a result. There were also prim op symbolics added in symbolic_opset#.py under the `prim_[opname]` naming scheme, creating separation and confusion.

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483782

Pulled By: malfet

fbshipit-source-id: f9affc31b1570af30ffa6668da9375da111fd54a

Co-authored-by: BowenBao <bowbao@microsoft.com>
(cherry picked from commit 1e04ffd2fd)
2022-02-11 18:35:35 +00:00
BowenBao
eb4238fc26 Allow caffe2-specific graph transformations for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460) (#68490)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68490

The use of ATen as a fallback during ONNX conversion is important for increasing operator coverage, or even for providing more efficient implementations than some ONNX ops.

Currently this feature is available through `OperatorExportTypes.ONNX_ATEN_FALLBACK`,
but it also performs changes to the graph that are runnable only by Caffe2.

This PR restricts caffe2-specific graph transformations for the `ONNX_ATEN_FALLBACK`
operator export type to builds where PyTorch has Caffe2 support (aka BUILD_CAFFE2=1 during build).

The first version of this PR introduced a new operator export type `ONNX_ATEN__STRICT_FALLBACK`,
which is essentially the same as `ONNX_ATEN_FALLBACK` but without the Caffe2 transformations.
It was preferred not to introduce a new operator export type, but to refine the existing aten fallback one.

## BC-breaking note
### The global constant `torch.onnx.PYTORCH_ONNX_CAFFE2_BUNDLE` is removed in favor of a less visible `torch.onnx._CAFFE2_ATEN_FALLBACK`.
`PYTORCH_ONNX_CAFFE2_BUNDLE` is really a dead-code flag that is always set to False.
One alternative would be fixing it, but #66658 disables the Caffe2 build by default.
Making a Caffe2 feature private seems to make more sense ahead of its future deprecation.

### The method `torch.onnx.export` now defaults to ONNX when `operator_export_type` is not specified.
Previously, `torch.onnx.export`'s `operator_export_type` was intended to default to `ONNX_ATEN_FALLBACK` when `PYTORCH_ONNX_CAFFE2_BUNDLE` was set, but that never happened because `PYTORCH_ONNX_CAFFE2_BUNDLE` is always undefined.

 Co-authored-by: Nikita Shulga <nshulga@fb.com>

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483781

Pulled By: malfet

fbshipit-source-id: e9b447db9466b369e77d747188685495aec3f124
(cherry picked from commit 5fb1eb1b19)
2022-02-10 03:26:48 +00:00
BowenBao
cf70466970 [ONNX] Improve scope inference in function extraction
Cover more cases of scope inference where consecutive nodes don't have valid scope information. Usually these nodes are created in some pass whose author forgot to assign a meaningful scope to them.
* One rule of `InferScope` is to check whether the current node's outputs' users share the same scope. Recursively run `InferScope` on the user nodes if they are missing scope as well; since the graph is SSA, the recursion depth is finite.
* Fix one pass that missed scope information for a new node.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71897
2022-01-31 23:58:53 +00:00
Nikita Shulga
74c44ba9d6 Revert D33850228: [pytorch][PR] Implement Tanh Gelu Approximation
Test Plan: revert-hammer

Differential Revision:
D33850228 (23d03025dc)

Original commit changeset: 3cc33fb298e4

Original Phabricator Diff: D33850228 (23d03025dc)

fbshipit-source-id: 9436e7df73c2b2e2011f321674f24973316d3692
(cherry picked from commit c9efb58223)
2022-01-31 17:44:19 +00:00
Ryan Spring
23d03025dc Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds an `approximate` string flag to Gelu
3. Enables the Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enables Tanh Gelu in NvFuser

```
import math
import torch

# reference implementation; normcdf is the standard normal CDF
def normcdf(x):
    return 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def gelu(x, approximate: str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: cpuhrsch

Differential Revision: D33850228

Pulled By: jbschlosser

fbshipit-source-id: 3cc33fb298e480d7ecc5c67716da019d60c6ab33
(cherry picked from commit 3a53b3e94f)
2022-01-31 17:07:45 +00:00
Joel Schlosser
cb823d9f07 Revert D33744717: [pytorch][PR] Implement Tanh Gelu Approximation
Test Plan: revert-hammer

Differential Revision:
D33744717 (f499ab9cef)

Original commit changeset: d64532a562ed

Original Phabricator Diff: D33744717 (f499ab9cef)

fbshipit-source-id: 396c3f63de5865f894dbc353d0790a01a624be93
(cherry picked from commit e9fb2d1db1)
2022-01-28 18:35:01 +00:00
Ryan Spring
f499ab9cef Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds an `approximate` string flag to Gelu
3. Enables the Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enables Tanh Gelu in NvFuser

```
import math
import torch

# reference implementation; normcdf is the standard normal CDF
def normcdf(x):
    return 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def gelu(x, approximate: str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: mikaylagawarecki

Differential Revision: D33744717

Pulled By: jbschlosser

fbshipit-source-id: d64532a562ed53247bb4fa52bb16722634d5c187
(cherry picked from commit 4713dd9cca)
2022-01-28 16:59:09 +00:00
BowenBao
804f13289e [ONNX] Update opset_version restriction for local function
Export should fail if export_modules_as_functions is set and opset_version < 15.
This is because opset_version < 15 implies IR version < 8, which means no local function support.
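
A minimal usage sketch (the module and inputs here are made up; the point is the opset_version >= 15 requirement):

```python
import torch

class Block(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

model = torch.nn.Sequential(Block(), Block())

# Local functions require opset >= 15 (IR >= 8); lower opsets now fail fast.
torch.onnx.export(
    model,
    (torch.randn(2, 3),),
    "model.onnx",
    opset_version=15,
    export_modules_as_functions={Block},
)
```
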
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71619
2022-01-27 00:21:13 +00:00
Michael Suo
9f0227a0eb Revert "[ONNX] Minor doc update (#69501)" (#71615)
This reverts commit 114c13d020.
2022-01-20 17:35:04 -08:00
Emilio Castillo
e2dc2aca93 Export ONNX models with readable input/output names (#68976)
Summary:
For some exported ONNX models, the input/output names sometimes have a numeric value, which makes it pretty hard to inspect the generated graphs in the case of large models.

The solution in this PR was initially submitted to our internal utilities library by take-cheeze https://github.com/pfnet/pytorch-pfn-extras/pull/102

Now we would like to upstream this change by adding an extra kwarg when exporting the model, to allow replacing these numeric names with actual debuggable ones.

As an example, the following code shows that the module output is named `3`:

```python
g, p, o = _model_to_graph(module, torch.ones(1, 10))
for n in g.nodes():
    for v in n.outputs():
        print(v.debugName())
```
output
```
3
```

With this PR

```
v3_Gemm
```

This makes it possible to identify this output as a value from the associated Gemm layer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/68976

Reviewed By: jansel

Differential Revision: D33662246

Pulled By: msaroufim

fbshipit-source-id: 45f56eef2a84d9a318db20c6a6de6c2743b9cd99
(cherry picked from commit 513c1d28f1)
2022-01-21 00:34:56 +00:00
BowenBao
114c13d020 [ONNX] Minor doc update (#69501)
Fix the wiki URL.

Also minor reorganization in onnx.rst.

[ONNX] restore documentation of public functions (#69623)

The build-docs check requires all public functions to be documented.
These should really not be public, but we'll fix that later.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71609
2022-01-21 00:13:40 +00:00
Tran N.M. Hoang
06838ce8b1 fix: do_constant_folding arg when exporting ONNX (#71348)
Summary:
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/71348

Reviewed By: H-Huang

Differential Revision: D33662228

Pulled By: msaroufim

fbshipit-source-id: a69c72838b7ff41a2305453ef00666c060ade593
(cherry picked from commit 75dd62b406)
2022-01-20 05:42:35 +00:00
BowenBao
81f693d509 [ONNX] minor clarifications of docstrings (#69260) (#69549)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69549

[ONNX] minor clarifications of docstrings

1. Make description of ONNX_ATEN_FALLBACK more accurate (after #67460).
2. Specify minimum and maximum values for opset_version. This is pretty
   important information and we should not make users dig through source
   code to find it.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D32994267

Pulled By: msaroufim

fbshipit-source-id: ba641404107baa23506d337eca742fc1fe9f0772
2022-01-13 18:03:27 -08:00
BowenBao
ff78c73286 [ONNX] Remove f arg from export_to_pretty_string (#69045) (#69546)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69546

The arg is not used and was previously deprecated.

Also remove torch.onnx._export_to_pretty_string. It's redundant with the
public version.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D32994270

Pulled By: msaroufim

fbshipit-source-id: f8f3933b371a0d868d9247510bcd73c31a9d6fcc
2022-01-12 21:31:36 -08:00
Alban Desmaison
3c2ae2b47c Revert D32994274: [ONNX] Link to the wiki (#68505)
Test Plan: revert-hammer

Differential Revision:
D32994274 (a606ea73d6)

Original commit changeset: 34d54f935799

Original Phabricator Diff: D32994274 (a606ea73d6)

fbshipit-source-id: 81fc96c2aff9d14efb5e092fffd0685e507837e6
2022-01-11 07:40:14 -08:00
BowenBao
a606ea73d6 [ONNX] Link to the wiki (#68505) (#69544)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69544

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D32994274

Pulled By: msaroufim

fbshipit-source-id: 34d54f935799fa94516a541a241900ec205c7427

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2022-01-10 15:51:04 -08:00
BowenBao
4b47047dae [ONNX] Add support for shrink ops (#66969) (#68492)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68492

* Initial commit

* Fix flake issue

* Add test tags

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483827

Pulled By: msaroufim

fbshipit-source-id: 41c623712524465b877d0fe0e2f4001d475bf2ce
2022-01-10 11:38:31 -08:00
BowenBao
d26e5ced72 Add missing docstrings for ONNX converter API. Fixes #67393 (#67640) (#68489)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/68489

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483783

Pulled By: msaroufim

fbshipit-source-id: 512e4495040a6a9833d501de2301f1709b0352b9
2022-01-07 16:43:09 -08:00
hwangdeyu
c76c6e9bd3 [ONNX] Add BFloat16 type support when export to ONNX (#66788)
Summary:
- PyTorch and ONNX both support BFloat16; add it to the exporter to unblock some mixed-precision training models (usage sketch below).
- Supports the PyTorch TNLG model using BFloat16 tensors for the inputs/outputs of the layers that run on the NPU.
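
A small sketch of what this unblocks (assuming opset 13, where ONNX first defines bfloat16; the module is made up):

```python
import torch

class M(torch.nn.Module):
    def forward(self, x):
        # run part of the computation in bfloat16, return float32
        return (x.to(torch.bfloat16) * 2).float()

torch.onnx.export(M(), (torch.randn(2, 2),), "bf16.onnx", opset_version=13)
```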

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66788

Reviewed By: jansel

Differential Revision: D32283510

Pulled By: malfet

fbshipit-source-id: 150d69b1465b2b917dd6554505eca58042c1262a
2021-12-14 12:23:32 -08:00
Brian Hirsh
457ba1dd3e Porting index_add to structured kernels, add an out variant (#65993)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65993

This PR attempts to port `index_add` to structured kernels, but does more than that:

* Adds an `out=` variant to `index_add` (illustrated below)
* Revises the `native_functions.yaml` registrations to avoid multiple entries, instead passing a default value for `alpha`.
* Changes the `derivatives.yaml` file for autograd support
* Revises error messages; please see: https://github.com/pytorch/pytorch/pull/65993#issuecomment-945441615
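
To illustrate the consolidated signature with the `alpha` default (a sketch assuming the post-PR functional form; values are just an example):

```python
import torch

x = torch.ones(5)
index = torch.tensor([0, 2])
source = torch.tensor([10.0, 20.0])
# out-of-place functional form; alpha scales `source` before accumulation
y = torch.index_add(x, 0, index, source, alpha=2)
# y is tensor([21., 1., 41., 1., 1.])
```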

Follow-up PRs in the near future will attempt to refactor the OpInfo test, and will take another look at the tests in `test/test_torch.py` for this function (hence the use of ghstack for this).

~This is WIP because there are tests failing for `Dimname` variant on mobile/android builds, and I'm working on fixing them.~

Issue tracker: https://github.com/pytorch/pytorch/issues/55070

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D32646426

fbshipit-source-id: b035ecf843a9a27d4d1e18b202b035adc2a49ab5
2021-12-14 11:57:13 -08:00
vfdev-5
3da2e09c9b Added antialias flag to interpolate (CPU only, bilinear) (#65142)
Summary:
Description:
- Added antialias flag to interpolate (CPU only)
  - forward and backward for bilinear mode
  - added tests
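
A usage sketch of the new flag (CPU, bilinear only at this point, per the description above):

```python
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 906, 438)
y = F.interpolate(x, size=(320, 196), mode="bilinear",
                  align_corners=False, antialias=True)
```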

### Benchmarks

<details>
<summary>
Forward pass, CPU. PTH interpolation vs PIL
</summary>

Cases:
- PTH RGB 3 Channels, float32 vs PIL RGB uint8 (apples vs pears)
- PTH 1 Channel, float32 vs PIL 1 Channel Float

Code: https://gist.github.com/vfdev-5/b173761a567f2283b3c649c3c0574112

```
# OMP_NUM_THREADS=1 python bench_interp_aa_vs_pillow.py

Torch config: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - CPU capability usage: AVX2
  - CUDA Runtime 11.1
  - NVCC architecture flags: -gencode;arch=compute_75,code=sm_75
  - CuDNN 8.0.5
  - Build settings: BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_PYTORCH_QNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=1, USE_CUDNN=1, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=0, USE_OPENMP=ON,

Num threads: 1
[------------------------ Downsampling: torch.Size([1, 3, 906, 438]) -> (320, 196) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                2.9                |          3.1
      channels_last non-contiguous torch.float32  |                2.6                |          3.6

Times are in milliseconds (ms).

[------------------------ Downsampling: torch.Size([1, 3, 906, 438]) -> (460, 220) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                3.4                |          4.0
      channels_last non-contiguous torch.float32  |                3.4                |          4.8

Times are in milliseconds (ms).

[------------------------ Downsampling: torch.Size([1, 3, 906, 438]) -> (120, 96) -------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                1.6                |          1.8
      channels_last non-contiguous torch.float32  |                1.6                |          1.9

Times are in milliseconds (ms).

[----------------------- Downsampling: torch.Size([1, 3, 906, 438]) -> (1200, 196) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                9.0                |          11.3
      channels_last non-contiguous torch.float32  |                8.9                |          12.5

Times are in milliseconds (ms).

[----------------------- Downsampling: torch.Size([1, 3, 906, 438]) -> (120, 1200) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                2.1                |          1.8
      channels_last non-contiguous torch.float32  |                2.1                |          3.4

Times are in milliseconds (ms).

[--------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (320, 196) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |               1.2               |          1.0

Times are in milliseconds (ms).

[--------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (460, 220) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |               1.4               |          1.3

Times are in milliseconds (ms).

[--------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (120, 96) ---------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |              719.9              |         599.9

Times are in microseconds (us).

[-------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (1200, 196) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |               3.7               |          3.5

Times are in milliseconds (ms).

[-------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (120, 1200) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |              834.4              |         605.7

Times are in microseconds (us).

```

</details>

Code is moved from torchvision: https://github.com/pytorch/vision/pull/4208

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65142

Reviewed By: mrshenli

Differential Revision: D32432405

Pulled By: jbschlosser

fbshipit-source-id: b66c548347f257c522c36105868532e8bc1d4c6d
2021-11-17 09:10:15 -08:00
Deyu Huang
d32efe8bc2 [ONNX] Remove the argument use_external_data_format of export() method entirely. (#67080) (#67811)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67811

* remove the argument use_external_data_format of export() method entirely

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181302

Pulled By: malfet

fbshipit-source-id: 4bc1448b7487bb9dfdad4e36008ff5b227fd64a3

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-11-15 17:20:04 -08:00
Thiago Crepaldi
9d25554d45 [ONNX] Allow registration of custom symbolics for aten namespace (#66481) (#67810)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67810
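
A hedged illustration of what this change permits (a hypothetical override; the symbolic body is just an example, not code from the PR):

```python
import torch
from torch.onnx import register_custom_op_symbolic

def asinh_symbolic(g, input):
    # map aten::asinh straight to the ONNX Asinh operator
    return g.op("Asinh", input)

# registering under the aten:: namespace is what this PR newly allows
register_custom_op_symbolic("aten::asinh", asinh_symbolic, opset_version=9)
```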

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181303

Pulled By: malfet

fbshipit-source-id: af2a715dc554b958fa3f5a7a8ae96cb3f7d112bb
2021-11-15 17:18:39 -08:00
Deyu Huang
48c8de45b0 [ONNX] Remove the argument example_outputs of export() method entirely. (#67082) (#67809)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67809

* remove the argument example_outputs of export() method entirely

[ONNX] Follow-up: Remove the argument example_outputs of export() method entirely. (#67629)

* Resolve CI failure

* remove test after removing example_outputs

[ONNX] Follow-up: Follow-up: Remove the argument example_outputs of export() method entirely (#67719)

Removing unused import, resolving flake error.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181305

Pulled By: malfet

fbshipit-source-id: ba00547b7cb455ace86606b1bda643c02bdcfa1b

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-11-12 17:06:26 -08:00
Gary Miguel
f57c63032e [ONNX] Fix reciprocal when input is not floating point (#67471) (#67808)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67808

torch.reciprocal implicitly casts the inputs to float, and ONNX
Reciprocal requires floating point inputs.

Also separate the reciprocal test from other tests, and test different
input types.
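
A rough sketch of the idea, in the style of a symbolic_opset function (illustrative only; the helper `_is_fp` is assumed from torch.onnx.symbolic_helper, and this is not the exact patch):

```python
import torch
from torch.onnx import symbolic_helper

def reciprocal(g, self):
    # ONNX Reciprocal only accepts floating-point inputs, so cast
    # non-float inputs first, matching torch.reciprocal's implicit cast.
    if not symbolic_helper._is_fp(self):
        self = g.op("Cast", self, to_i=torch.onnx.TensorProtoDataType.FLOAT)
    return g.op("Reciprocal", self)
```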

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181307

Pulled By: malfet

fbshipit-source-id: 3e1109b3c85a49c51dc713656a900b4ee78c8340
2021-11-08 14:37:07 -08:00
Gary Miguel
eb22d06e5e [ONNX] Use human readable enum for dtype scalars (#66822) (#67807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67807

Also make quoting of string literals consistent.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181309

Pulled By: malfet

fbshipit-source-id: e1053701e3589f0310d8b5ef920359c03c6713f0
2021-11-08 14:37:05 -08:00
Gary Miguel
958d517643 [ONNX] Fix new_full and full_like for Python 3.9 (#67124) (#67806)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67806

Previously new_full would fail with errors like:
`TypeError: only integer tensors of a single element can be converted to an index`

And full_like would trigger warnings like:
`DeprecationWarning: an integer is required (got type float).  Implicit conversion to integers using __int__ is deprecated, and may be removed in a future version of Python.`

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181301

Pulled By: malfet

fbshipit-source-id: 2cf262cfef36c18e7b2423efe1e1d4fa3438f0ba

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-11-08 14:37:03 -08:00
Gary Miguel
37688148ae [ONNX] Support opset 15 (#67121) (#67805)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67805

Also fix Reduce ops on binary_cross_entropy_with_logits

The graph says the output is a scalar, but with `keepdims=1` (the default)
the output would be a tensor of rank 1. We set `keepdims=0` to make it
clear that we want a scalar output.

This previously went unnoticed because ONNX Runtime does not strictly
enforce shape inference mismatches if the model is not using the latest
opset version.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181304

Pulled By: malfet

fbshipit-source-id: 1462d8a313daae782013097ebf6341a4d1632e2c

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-11-08 14:37:00 -08:00
Bowen Bao
02e35ce17b [ONNX] Update onnx function export with comments and clean up (#66817) (#67803)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67803

* Addresses comments from #63589

[ONNX] remove torch::onnx::PRODUCER_VERSION (#67107)

Use constants from version.h instead.
This simplifies things since we no longer have to update
PRODUCER_VERSION for each release.

Also add TORCH_VERSION to version.h so that a string is available for
this purpose.

[ONNX] Set `ir_version` based on opset_version. (#67128)

This increases the odds that the exported ONNX model will be usable.
Before this change, we were setting the IR version to a value which may
be higher than what the model consumer supports.

Also some minor clean-up in the test code:
* Fix string replacement.
* Use a temporary file so as to not leave files around in the test
  current working directory.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181306

Pulled By: malfet

fbshipit-source-id: 02f136d34ef8f664ade0bc1985a584f0e8c2b663

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
2021-11-05 10:35:35 -07:00
Jay Zhang
26241994b2 Remove the argument strip_doc_string of export() method entirely. (#66615) (#67278)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67278

Remove the argument strip_doc_string of export() method entirely.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962512

Pulled By: malfet

fbshipit-source-id: 168ad3f157a80d1edd7a9053783b3f3deb2ecf43

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-10-28 19:25:07 -07:00
Jay Zhang
43d51254bf Deprecate the argument _retain_param_name of export() method entirely. (#66617) (#67277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67277

Remove the argument _retain_param_name of export() method entirely.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962514

Pulled By: malfet

fbshipit-source-id: 8ac5e3a4a7821cc580951a7f167fd20069116350

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-10-28 19:25:05 -07:00
Jay Zhang
40920185ac [ONNX] Remove the argument enable_onnx_checker of export() method entirely. (#66611) (#67276)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67276

[ONNX] Remove the argument enable_onnx_checker from torch.onnx.export() function.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962520

Pulled By: malfet

fbshipit-source-id: 86ee15f525261c0da74175e47dd74eeb169ac47f

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-10-28 19:25:03 -07:00
Gary Miguel
9deb602726 [ONNX] Use Reciprocal operator instead of Div(1, x). (#65382) (#67271)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67271

* [ONNX] Use Reciprocal operator instead of Div(1, x).

This is a more readable and perhaps more performant way to export
torch.reciprocal.

* Use Reciprocal in the Caffe2 operator that imports ONNX

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962519

Pulled By: malfet

fbshipit-source-id: d926e75b1c8312b9a980c9a1207a1a93ba0c71e0

Co-authored-by: take-cheeze <takechi101010@gmail.com>
2021-10-28 08:01:21 -07:00
Shubham Bhokare
d9a5668983 [ONNX] Add dim argument to all symbolic (#66093) (#67270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67270

* Add dim argument to the `all` symbolic

* The `all` symbolic depends on the `any` symbolic

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962518

Pulled By: malfet

fbshipit-source-id: f7ee05cf4eff5880fc508154267e060952b5b42d
2021-10-27 13:46:31 -07:00
Nikita Shulga
0bc9928f31 [ONNX] Symbolic: dynamic input for OneHot, bool for Einsum (#65940) (#66147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66147

Symbolic: dynamic input for OneHot, bool for Einsum

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424094

fbshipit-source-id: 76bea22b29c93d1621c597fe7ab59deb3685087f

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-10-22 13:46:24 -07:00
Nikita Shulga
2c0fe338da [ONNX] Modify softplus symbolic to support beta!=1 (#65001) (#66146)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66146

* Modify softplus symbolic to support beta != 1 (sketch below)

* Remove parse args
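
One plausible shape of such a symbolic (an illustrative sketch, not the merged code; `_maybe_get_const` is assumed from torch.onnx.symbolic_helper):

```python
from torch.onnx import symbolic_helper

def softplus(g, self, beta, threshold):
    # softplus(x; beta) = (1/beta) * log(1 + exp(beta * x)); ONNX Softplus
    # has no beta attribute, so scale the input and output when beta != 1.
    beta_const = symbolic_helper._maybe_get_const(beta, "f")
    if beta_const == 1:
        return g.op("Softplus", self)
    return g.op("Div", g.op("Softplus", g.op("Mul", self, beta)), beta)
```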

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424096

fbshipit-source-id: 971af54a28141737ccb17510ada03b0651be2a63
2021-10-22 13:46:22 -07:00
Nikita Shulga
a0fc14c20f [ONNX] Add diagonal symbolic (#64454) (#66144)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66144

* Add logic and tests

* minor edits

* Eliminate expand ops

* Fix flake and editing

* Modified error message

* Add overrun check

* Add overrun descriptions

* Remove emptyline

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424095

fbshipit-source-id: 5b8ef6ac21c32d43c3dbc8e51e1ef30bffb19c25
2021-10-22 13:46:18 -07:00
Nikita Shulga
b18c298f24 ONNX: Delete or document skipped ORT tests (#64470) (#66143)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66143

Delete test_list_remove. There's no point in testing conversion of
this model since TorchScript doesn't support it.

Add a link to an issue tracking test_embedding_bag_dynamic_input.

[ONNX] fix docs (#65379)

Mainly fix the sphinx build by inserting empty lines before
bulleted lists.

Also some minor improvements:
Remove superfluous descriptions of deprecated and ignored args.
The user doesn't need to know anything other than that they are
deprecated and ignored.

Fix custom_opsets description.

Make indentation of Raises section consistent with Args section.

[ONNX] publicize func for discovering unconvertible ops (#65285)

* [ONNX] Provide public function to discover all unconvertible ATen ops

This can be more productive than finding and fixing a single issue at a
time.

* [ONNX] Reorganize test_utility_funs

Move common functionality into a base class that doesn't define any
tests.

Add a new test for opset-independent tests. This lets us avoid running
the tests repeatedly for each opset.

Use simple inheritance rather than the `type()` built-in. It's more
readable.

* [ONNX] Use TestCase assertions rather than `assert`

This provides better error messages.

* [ONNX] Use double quotes consistently.

[ONNX] Fix code block formatting in doc (#65421)

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424093

fbshipit-source-id: 4ced841cc546db8548dede60b54b07df9bb4e36e
2021-10-22 13:46:16 -07:00
Nikita Shulga
136abf5aff [ONNX] Update sum symbolic to handle dtypes (#64289) (#66141)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66141

* Update aten::sum symbolic for dtype

* Remove nesting and modify operator tests

* Fix expect files

[ONNX] Fix expect files added in #64289 (#65356)

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424091

fbshipit-source-id: d4af21e9f0d7e1c68bf6ef2f3e385db84b4c53f3
2021-10-22 13:46:12 -07:00
Nikita Shulga
53a163a015 [ONNX] Export nn.Module call as ONNX local function (#63589) (#66140)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66140

* Add a new argument to the export API enabling users to specify `nn.Module` classes that they wish to export as local functions in the ONNX model.
* Refactor `torch/csrc/jit/serialization/export.cpp`, and remove redundant `EncoderBase` class.
* ~~Contains changes from #63268~~
* Depends on #63716 to update onnx submodule.

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424098

fbshipit-source-id: c949d0b01c206c30b4182c2dd1a5b90e32b7a0d3

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-10-22 13:44:56 -07:00
BowenBao
1cf317b85f [ONNX] Support exporting with Apex O2 (#65374) (#66700)
Summary:
Apex O2 hooks state_dict to return fp16 weights as fp32, so the exporter cannot identify them as the same tensors.
Since this hook is only used by the optimizer, it is safe to remove it while exporting.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66700

Reviewed By: zou3519

Differential Revision: D31695132

Pulled By: malfet

fbshipit-source-id: 977bdf57240002498f3ad0f1a8046c352e9860e6
2021-10-18 11:54:09 -07:00
Gary Miguel
2506baf9c2 [ONNX] move CheckerError from torch.onnx.utils to torch.onnx (#66644)
Summary:
This moves it to where the user would expect it to be based on the
documentation and all the other public classes in the torch.onnx module.

Also rename it from ONNXCheckerError, since the qualified name
torch.onnx.ONNXCheckerError is otherwise redundant.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66644

Reviewed By: malfet

Differential Revision: D31662559

Pulled By: msaroufim

fbshipit-source-id: bc8a57b99c2980490ede3974279d1124228a7406
2021-10-15 10:38:56 -07:00
Edward Yang
11bc435622 Allow registration of custom symbolics for prim namespace (#64460) (#66139)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66139

[ONNX] Add prim::PythonOp check back in export.cpp (#64944)

Add prim::PythonOp check back in export.cpp

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D31424102

fbshipit-source-id: 6d2eef767fab846ed79ea509e97b714072bac9f4

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-10-08 07:41:06 -07:00
Edward Yang
9b09a5f7ba [ONNX] Enable scripting tests (#64780) (#66138)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66138

* Scripting tests

* Fixed scripting tests for lower opsets

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D31424099

fbshipit-source-id: 67095b7ac67b9da986961788392aa92c95cf11f2
2021-10-08 07:41:03 -07:00
Edward Yang
53fefaa916 [ONNX] Fix duplicated output same name case (#64190) (#66137)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66137

* Fix the issue of duplicated output nodes carrying the same output name.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D31424100

fbshipit-source-id: b1b06a92c51744030788b651f3a597d987a8deda

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-10-08 07:41:01 -07:00
BowenBao
d39790340d [ONNX] Enable export of __xor_ (#64042) (#64581)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64581

* Enable xor

* Update test_pytorch_onnx_onnxruntime.py

* Update symbolic_opset9.py

* Update symbolic_opset9.py

* Update test_pytorch_onnx_onnxruntime.py

* Update symbolic_opset9.py

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919598

Pulled By: malfet

fbshipit-source-id: 044e55d0697da0050f26a6ceccd1517493d7e8a6
2021-09-30 21:09:01 -07:00
BowenBao
d4ff344fae [ONNX] Fix remainder export (#64230) (#64578)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64578

* Fix remainder export for the edge case when the input is negative. The new export relies on the true_divide export (see the reference below).
* Simplified true_divide export. Cleaned up redundant code that is now handled by the scalar type analysis pass. Removed the dependency on `onnx::Where`, thus supporting opsets 7 & 8.
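
For intuition, the new composition expressed in eager-mode PyTorch (a reference, not the exporter code):

```python
import torch

def remainder_reference(a, b):
    # remainder(a, b) = a - floor(a / b) * b, which follows Python's
    # sign convention (the result takes the sign of b), so negative
    # inputs are handled correctly.
    return a - torch.floor(torch.true_divide(a, b)) * b

assert remainder_reference(torch.tensor(-7.0), torch.tensor(3.0)) == 2.0
```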

Fixes #60179

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919601

Pulled By: malfet

fbshipit-source-id: 0f78621c0ac3bdb6bf4225e049ba5f470dc8ab12

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-09-30 21:08:54 -07:00
BowenBao
2d61009f4a [ONNX] Fix input sequence for pad op (#60554) (#64377)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64377

* Fix for input primitive sequence

* Test mypy

* Fix for tracing tuples

* Fix for extra inputs

* flake8

* Rebase

* Fix for tracing tuples

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919606

Pulled By: malfet

fbshipit-source-id: a718c4a12cda77b968cb636acd7aa63d7b5ba326
2021-09-30 21:08:45 -07:00
BowenBao
f17ee368b3 Fix empty size constant creation (#63607) (#64376)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64376

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919608

Pulled By: malfet

fbshipit-source-id: 0e789e8470ce0f130148df764ce77f6d4fd0a274
2021-09-30 21:08:43 -07:00
BowenBao
84190dafa8 [ONNX] Update instance_norm implementation and support training (#60538) (#64375)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64375

* Update the instance_norm track_running_stats=True implementation and support the training mode
* Reference: 9baf75c86e/aten/src/ATen/native/Normalization.cpp (L532)
* Fix https://github.com/pytorch/pytorch/issues/53887

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919605

Pulled By: malfet

fbshipit-source-id: 306eb2a1122bb5d90dcb7c18260a3a2057a21c34

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-09-30 21:07:26 -07:00
BowenBao
20143bf07f [ONNX] Deprecate use_external_data_format param from torch.onnx.export() function. (#62257) (#64382)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64382

* The `use_external_data_format` parameter is used for large models that cannot be exported because of the 2GB protobuf limit.

* When `use_external_data_format` is set to True, the model is exported in the ONNX external data format, in which case some of the model parameters are stored in external binary files and not in the ONNX model file itself.

* This PR marks the parameter as DEPRECATED and checks the model proto size in code instead of relying on the user: if the size is larger than 2GB, then `use_external_data_format = True` is applied automatically.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905265

Pulled By: malfet

fbshipit-source-id: 82b4e17bfa6a8de2bfd700a5282c12f6835603cb

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-09-23 22:20:48 -07:00
BowenBao
478d4cf883 [ONNX] Deprecated the example_outputs param from torch.onnx.export() function. (#62815) (#64380)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64380

* `example_outputs` is used to determine the type and shape of the outputs without tracing the execution of the model. It had to be provided when exporting a ScriptModule or ScriptFunction via the export() function.

* Since we can work out `example_outputs` internally instead of requiring the user to provide it, we deprecated this argument in the export() function to improve the experience of calling it.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905266

Pulled By: malfet

fbshipit-source-id: d00b00d7d02b365d165028288ad915678caa51f2

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-09-23 22:20:46 -07:00
BowenBao
9323ea2195 [ONNX] minor doc improvements and cleanup (#62514) (#64373)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64373

* Fix some bad formatting and clarify things in onnx.rst.
* In `export_to_pretty_string`:
    * Add documentation for previously undocumented args.
    * Document that `f` arg is ignored and mark it deprecated.
    * Update tests to stop setting `f`.
    * Warn if `_retain_param_name` is set.
* Use double quotes for string literals in test_operators.py.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905271

Pulled By: malfet

fbshipit-source-id: 3627eeabf40b9516c4a83cfab424ce537b36e4b3
2021-09-23 22:20:44 -07:00
BowenBao
9965163751 [ONNX] Add supplementary tests and description for custom_opsets param from torch.onnx.export() function. (#62085) (#64372)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64372

The custom_opsets arg from torch.onnx.export() does not need to be removed.

Add some supplementary description and tests for easier understanding.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905269

Pulled By: malfet

fbshipit-source-id: 489fbee0e2c1d6c5405c9bf7dfd85223ed981a44

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-09-23 22:20:42 -07:00
BowenBao
fb71ccf0f1 [ONNX] Remove strip_doc_string param from torch.onnx.export() function. (#61712) (#64371)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64371

As of now, the "strip_doc_string" parameter is described as follows:

strip_doc_string (bool, default True): do not include the field `doc_string` from the exported model. Otherwise the field will mention the source code locations for the model.

This is usually useless to users who want to transform a PyTorch model into an ONNX one. Only when a user wants to debug the export process do these source code locations provide any benefit.

To make the export() function friendlier by providing fewer parameters, we combined "strip_doc_string" into the "verbose" parameter. If a user sets verbose to True, it means they want log information to debug the export process, which is similar to the purpose of the strip_doc_string parameter.

But the usage of these two arguments is opposite: setting verbose to True means we want to print log information to help debug, which means strip_doc_string should be False. And this is how we replace strip_doc_string with the verbose argument in this PR.

This PR still keeps strip_doc_string in the torch.onnx.export() function for backward compatibility, while its usage has been folded into the verbose argument.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905268

Pulled By: malfet

fbshipit-source-id: 2f06eb805c01fe15ff7a1b4f6595c937ba716d60

Co-authored-by: fatcat-z <zhang-ji@outlook.com>
2021-09-23 22:20:40 -07:00
BowenBao
47d1ed60e1 [ONNX] Remove argument _retain_param_name from torch.onnx.export() function. (#61702) (#64370)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64370

As of now, the "_retain_param_name" parameter has no description on the PyTorch docs website. According to the code, this argument determines whether we keep the original parameter names of the PyTorch model in the final ONNX graph. If it is False, those original parameter names are replaced with a series of integers starting from 1.

Since setting numbers as parameter names makes no sense to users, we remove this argument from the torch.onnx.export() function to improve the experience of calling this function.

This PR still keeps the argument in the torch.onnx.export() function for backward compatibility, while all backend logic has been changed to behave as if _retain_param_name were set to True.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905270

Pulled By: malfet

fbshipit-source-id: ca60757ca17daaff937e9f08da42596086795f4a

Co-authored-by: fatcat-z <zhang-ji@outlook.com>
2021-09-23 22:18:52 -07:00
Keyrat06
27faa7a560 [ONNX] Support torch.isfinite export (#64759)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/issues/64754

1. onnx::IsInf was introduced in opset 10 and onnx::IsNaN in opset 9, so isfinite = not(or(isinf, isnan)) requires opset 10.
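
The composition in eager-mode terms (a reference only):

```python
import torch

def isfinite_reference(x):
    # finite <=> neither infinite nor NaN; IsInf is the opset-10 gate
    return ~(torch.isinf(x) | torch.isnan(x))

x = torch.tensor([1.0, float("inf"), float("nan")])
assert isfinite_reference(x).tolist() == [True, False, False]
```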

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64759

Test Plan: Imported from OSS

Reviewed By: seemethere, bdhirsh

Differential Revision: D31060760

Pulled By: malfet

fbshipit-source-id: 499ecd6cc55ea881b8a57e6a9a4fb38eaaee5242
2021-09-21 15:47:48 -07:00
BowenBao
73c4bfc30a [ONNX] Add log10 symbolic (#63418) (#64374)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64374

Fixes #61332

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D30919609

Pulled By: msaroufim

fbshipit-source-id: f474376bbf7b59677b10565f316384eca59dba43

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-09-21 13:30:59 -07:00
BowenBao
3d32dec5ba [ONNX] Deprecate enable_onnx_checker argument in torch.onnx.export() (#61708) (#64369)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64369

As of now, the "enable_onnx_checker" parameter is described as follows:

enable_onnx_checker (bool, default True): If True the ONNX model checker will be run to ensure the exported model is a valid ONNX model.

An invalid ONNX graph is useless to users, so such a check should be done for each call.

In this PR, we still write the model to an ONNX file even if it is invalid, and the exception is thrown after the ONNX file has been created. This lets the user output an invalid ONNX graph for debugging.

This PR still keeps the argument in the torch.onnx.export() function for backward compatibility, while all backend logic has been changed to behave as if enable_onnx_checker were set to True.

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905267

Pulled By: malfet

fbshipit-source-id: 3ad3f68e77fcec012cc7ef674cc9a61755eebc9e

Co-authored-by: fatcat-z <zhang-ji@outlook.com>
2021-09-17 14:31:41 -07:00
Nikita Shulga
340531f2e0 [ONNX] Do not use numpy in ONNX opsets (#65188)
Summary:
Replace `torch.tensor([numpy.arange(a, b, c)])` with `torch.arange(a, b, c).unsqueeze(0)`
Replace `tuple(numpy.add(a, b))` with `tuple(x + y for (x, y) in zip(a, b))`

As `numpy` is an optional dependency, it shouldn't be used in PyTorch core by default
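
A quick check that the replacements are equivalent (with example values standing in for a, b, c):

```python
import torch

a, b, c = 0, 10, 2
t = torch.arange(a, b, c).unsqueeze(0)       # shape (1, 5), no numpy needed

xs, ys = (1, 2), (10, 20)
sums = tuple(x + y for x, y in zip(xs, ys))  # (11, 22)
```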

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65188

Reviewed By: mruberry

Differential Revision: D31009490

Pulled By: malfet

fbshipit-source-id: 528e48f055bf9ac1de1fd7e94c0be41915df9a0b
2021-09-17 11:28:44 -07:00
Michael Dagitses
aaffcfe9cd implement "xy" indexing for torch.meshgrid (#62724)
Summary:
This is step 4/7 of https://github.com/pytorch/pytorch/issues/50276. This allows the use of `"xy"` indexing but doesn't change any defaults.
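
A short illustration of the two indexing modes (using the API this step adds):

```python
import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5])
gi, gj = torch.meshgrid(x, y, indexing="ij")  # matrix order: shapes (3, 2)
gx, gy = torch.meshgrid(x, y, indexing="xy")  # Cartesian order: shapes (2, 3)
assert gi.shape == (3, 2) and gx.shape == (2, 3)
```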

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62724

Reviewed By: heitorschueroff

Differential Revision: D30995290

Pulled By: dagitses

fbshipit-source-id: 08a6a6144b20bc019f68bc3c52e3bbf967976d8f
2021-09-17 08:31:17 -07:00
Michael Dagitses
2c57bbf521 add support for indexing to meshgrid (#62722)
Summary:
This is step 3/7 of https://github.com/pytorch/pytorch/issues/50276. It only adds support for the argument but doesn't implement new indexing modes yet.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62722

Test Plan:
Verified this is not FC breaking by adding logging to both meshgrid
overloads and then called meshgrid twice:

`meshgrid(*tensors)`
  and
`meshgrid(*tensors, indexing='ij')`

This confirmed that the former signature triggered the original native
function and the latter signature triggered the new native function.

Reviewed By: H-Huang

Differential Revision: D30394313

Pulled By: dagitses

fbshipit-source-id: e265cb114d8caae414ee2305dc463b34fdb57fa6
2021-09-16 09:59:49 -07:00
BowenBao
6512838fab [ONNX] Enhance shape (two changes merged) (#64585)
Summary:
Enhanced shape inference by introducing typeReliableMap.
[ONNX] exporter changes for torch hub models (https://github.com/pytorch/pytorch/issues/62856)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64585

Reviewed By: ezyang

Differential Revision: D30870418

Pulled By: msaroufim

fbshipit-source-id: 87a294799cb87d649d1d13b6114a5cfbac9be15c

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-09-15 13:02:19 -07:00
Corey Levinson
db3fcf0af3 fix typo in torch/onnx/utils.py (#63396)
Summary:
fixes minor typo

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63396

Reviewed By: pbelevich

Differential Revision: D30644295

Pulled By: SplitInfinity

fbshipit-source-id: c506f67383909aa2c0c7c533038446b4b3d76a3a
2021-09-10 09:37:44 -07:00
Thomas J. Fan
d3bcba5f85 ENH Adds label_smoothing to cross entropy loss (#63122)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/7455

Partially resolves pytorch/vision#4281
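
Usage sketch of the new argument:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(8, 10)
target = torch.randint(0, 10, (8,))
# label_smoothing in [0.0, 1.0]; 0.0 recovers the standard loss
loss = F.cross_entropy(logits, target, label_smoothing=0.1)
```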

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63122

Reviewed By: iramazanli

Differential Revision: D30586076

Pulled By: jbschlosser

fbshipit-source-id: 06afc3aa1f8b9edb07fe9ed68c58968ad1926924
2021-08-29 23:33:04 -07:00
Meghan Lele
95d0b3199b Back out "[ONNX] Fix an issue that optimizations might adjust graph inputs unexpectedly. (#61280)" (#64004)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64004

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63904

Fixes T98808160

Test Plan: T98808160

Reviewed By: msaroufim

Differential Revision: D30527450

fbshipit-source-id: 6262901a78ca929cecda1cf740893139aa26f1b4
2021-08-26 12:49:42 -07:00
BowenBao
1dd648f1c4 [ONNX] Support torch.dot and torch.nn.utils.spectral_norm (#62596) (#62765)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62765

Fixes #27723

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30375181

Pulled By: msaroufim

fbshipit-source-id: 715f4745899757ec405877980cd20c826028eb2c

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-08-20 12:46:56 -07:00
BowenBao
db0771b05d [ONNX] Update repeat_interleave for dynamic repeats (#59979) (#62764)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62764

Fixes #58733

- Support dynamic interleave for cases with dynamic repeat values (example below)
- Moved the repeat_interleave symbolic from opset 11 to opset 13, as sequences as output types for loop outputs are needed for this change
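
An eager-mode example of the dynamic-repeats case this enables:

```python
import torch

x = torch.tensor([10, 20, 30])
repeats = torch.tensor([1, 2, 3])  # may be data-dependent at export time
torch.repeat_interleave(x, repeats)
# tensor([10, 20, 20, 30, 30, 30])
```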

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30375179

Pulled By: msaroufim

fbshipit-source-id: 787f96bf91d124fd0483761088c5f4ae930d96a9

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-08-20 12:46:54 -07:00
BowenBao
8760254911 [ONNX] Fix an issue that optimizations might adjust graph inputs unexpectedly. (#61280) (#62763)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62763

This PR fixes the issue that graph inputs might be updated when the model is exported in inference mode.

When a model is exported in inference mode, some optimizations are applied. One side effect of these optimizations is that the graph inputs might be adjusted. Such optimizations include:

1. Conv and BatchNorm op fusion.
2. Constant folding.

If the user sets export_params=False or keep_initializers_as_inputs=True, it is highly likely that the user wants to provide the corresponding parameters or initializers as graph inputs.
In that situation, whether the model is exported in inference or training mode, the exporter needs to prevent the above optimizations from adjusting the graph inputs, so that the graph inputs match the inputs users provided.

The changes in this PR add a common judgement of whether the above optimizations should be applied: from the values of the export_params and keep_initializers_as_inputs arguments, infer whether the graph inputs are allowed to be adjusted.
If not, these optimizations are skipped, even when their other requirements are met.

Besides these code changes, the comments for the parameters below have been updated so that users have better guidance when considering how to leverage them for different purposes:

1. export_params
2. training
3. do_constant_folding
4. keep_initializers_as_inputs

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30375183

Pulled By: msaroufim

fbshipit-source-id: 4db8b9695649eb32a3a0fefa950ee2e5651bdba0

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-08-20 12:46:52 -07:00
BowenBao
2aa19f33c6 [ONNX] Fix for batchnorm training op mode (#52758) (#62760)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62760

* Rebase

# Conflicts:
#	torch/csrc/jit/passes/onnx/eval_peephole.cpp

# Conflicts:
#	test/onnx/test_utility_funs.py
#	torch/onnx/symbolic_opset9.py

* Update symbolic_opset12.py

* Update test.sh
# Conflicts:
#	.jenkins/caffe2/test.sh

* Merge

* Fix utility tests

# Conflicts:
#	test/onnx/test_pytorch_onnx_onnxruntime.py
#	test/onnx/test_utility_funs.py

* Fix for comment

* Enable BN tests

* Fix for test

* Update test_pytorch_onnx_onnxruntime.py

* Update test_pytorch_onnx_onnxruntime.py

* Update test_utility_funs.py

* Update test_pytorch_onnx_onnxruntime.py

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349060

Pulled By: msaroufim

fbshipit-source-id: 93312c17607974731c17099ae181acb6e4c1c409
2021-08-18 13:29:07 -07:00
BowenBao
e182401062 [ONNX] Remove aten parameter (#61652) (#62759)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62759

* remove aten argument in export()

* add export_to_pretty_string default value OperatorExportTypes.ONNX

* add DPYTORCH_ONNX_CAFFE2_BUNDLE description

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349062

Pulled By: msaroufim

fbshipit-source-id: d9738f3aa8b80eac54548d0b9494f9f1e544f20f

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-08-18 13:29:04 -07:00
BowenBao
3a7bbf5fb7 [ONNX] Add support for opset14 in PT-ONNX exporter (#59486) (#62758)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62758

* Add initial changes for opset14

* Fixed flake

* Add onnx submodule changes and removed utility func tests

* Add updated batchNorm symbolic

* Add triu/tril symbolics

* Fix lint

* Fixed test failures

* Add reshape with allowzero

* Added tests/refactored opset versioning

* Bump onnxruntime version

* Fix clang/lint failures

* Add reshape shape inference for opset 14

* Changes for allowzero

* Fix lint/clang and test failures

* Updated PR

* Flake fixes

* Fix flake

* Remove new_jit_api tests

* Add opset14 models

* Update allowzero

* Fix test failures

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349063

Pulled By: msaroufim

fbshipit-source-id: 54724246149b01a2f627c43d7396253a7e9c9eb9

Co-authored-by: Shubham Bhokare <sbhokare@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
2021-08-18 13:29:01 -07:00
BowenBao
99b154b8be [ONNX] Support lstm_cell symbolic (#61476) (#62757)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62757

Support lstm_cell symbolic
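
A minimal export sketch, assuming a small wrapper module around nn.LSTMCell; the shapes, opset, and file name are illustrative:

```python
import torch

class CellWrapper(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.cell = torch.nn.LSTMCell(input_size=8, hidden_size=16)

    def forward(self, x, h, c):
        # aten::lstm_cell now has a symbolic, so this traces and exports.
        return self.cell(x, (h, c))

x, h, c = torch.randn(3, 8), torch.randn(3, 16), torch.randn(3, 16)
torch.onnx.export(CellWrapper(), (x, h, c), "lstm_cell.onnx", opset_version=11)
```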

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349061

Pulled By: msaroufim

fbshipit-source-id: f236177e3e5c62a30b7e4d91a623bcaef21b5eb1

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-08-18 13:27:46 -07:00
Peter Lin
8d7786ada6 Simplify hardswish ONNX export graph. (#60080)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58301

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60080

Reviewed By: suo

Differential Revision: D30002939

Pulled By: SplitInfinity

fbshipit-source-id: 8b4ca6f62d51b72e9d86534592e3c82ed6608c9d
2021-08-05 11:15:14 -07:00
BowenBao
6f08ddfc28 [ONNX] Enable aten:normal op and add tests for aten:uniform op. (#60441) (#61560)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61560

1. Add a new symbolic function broadcast_tensors() to support exporting torch.broadcast_tensors(). This is required for exporting torch.distributions.Normal().
2. Add a new symbolic function normal() to support exporting torch.distributions.Normal().
3. Add related tests for the normal and uniform ops as well.
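
A minimal sketch of the kind of model these symbolics enable; the module, shapes, and opset are illustrative (tracing through random sampling warns but exports):

```python
import torch

class Sampler(torch.nn.Module):
    def forward(self, mean, std):
        # Both torch.broadcast_tensors and torch.normal now have symbolics.
        mean, std = torch.broadcast_tensors(mean, std)
        return torch.normal(mean, std)

torch.onnx.export(Sampler(), (torch.zeros(2, 3), torch.ones(1, 3)),
                  "normal.onnx", opset_version=11)
```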

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D29767995

Pulled By: SplitInfinity

fbshipit-source-id: acfe5e7801d00c0df8ca46966bbd6015fed0045e

Co-authored-by: Jay Zhang <jiz@microsoft.com>
2021-07-21 15:10:35 -07:00
BowenBao
f0054e1a6e [ONNX] Update expand_as for dynamic shape (#61084) (#61559)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61559

Update expand_as for dynamic shape

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D29767990

Pulled By: SplitInfinity

fbshipit-source-id: 3f1e3f68fd17c5ffbd4a50fccff224fd9d6c84fb

Co-authored-by: Negin Raoof <neginmr@utexas.edu>
2021-07-21 15:10:33 -07:00
BowenBao
34075e2c8b [ONNX] Fix the issue of converting empty list to sequence. (#58651) (#61558)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61558

When an empty list is constructed by a Python list comprehension, we need to avoid converting the resulting node (which has no inputs) to onnx::Concat in shape_type_inference.cpp and peephole.cpp, because that would create an invalid Concat node without inputs.

In addition, update the code to avoid passing a Sequence input to an onnx::Cast node, which doesn't accept the Sequence data type as input.

Add tests for the validation as well.
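
A hedged sketch of the pattern being fixed; exporter versions of this era also required example_outputs for ScriptModules, so the exact export call may vary:

```python
import torch
from typing import List

class EmptyList(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> List[torch.Tensor]:
        # A comprehension over an empty range scripts to a list
        # construction with no inputs - the case this fix handles.
        return [x + i for i in range(0)]

scripted = torch.jit.script(EmptyList())
torch.onnx.export(scripted, (torch.randn(2),), "empty_list.onnx",
                  opset_version=11)
```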

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D29767989

Pulled By: SplitInfinity

fbshipit-source-id: f97f172ff20eebda4c3744c7a934df36716f12a2

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-07-21 15:10:31 -07:00
BowenBao
8726f08e15 [ONNX] Update documentation (#58712) (#60249)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60249

* Add introductory paragraph explaining what ONNX is and what the
  torch.onnx module does.
* In "Tracing vs Scripting" and doc-string for torch.onnx.export(),
  clarify that exporting always happens on ScriptModules and that
  tracing and scripting are the two ways to produce a ScriptModule.
* Remove examples of using Caffe2 to run exported models.
  Caffe2's website says it's deprecated, so it's probably best not to
  encourage people to use it by including it in examples.
* Remove a lot of content that's redundant:
  * The example of how to mix tracing and scripting, and instead
    link to Introduction to TorchScript, which includes very similar
    content.
  * "Type annotations" section. Link to TorchScript docs which explain
    that in more detail.
  * "Using dictionaries to handle Named Arguments as model inputs"
    section. It's redundant with the description of the `args` argument
    to `export()`, which appears on the same page once the HTML
    is generated.
  * Remove the list of supported Tensor indexing patterns. If it's not
    in the list of unsupported patterns, users can assume it's
    supported, so having both is redundant.
  * Remove the list of supported operators and models.
    I think the list of supported operators is not very useful.
    A list of supported model architectures may be useful, but in
    reality it's already very out of date. We should add it back if
    / when we have a system for keeping it up to date.
  * "Operator Export Type" section. It's redundant with the description
    of the `operator_export_type` arg to to `export()`, which appears on
    the same page once the HTML is generated.
  * "Use external data format" section. It's redundant with the
    description of the `use_external_data_format` arg to `export()`.
  * "Training" section.  It's redundant with the
    description of the `training` arg to `export()`.
* Move the content about different operator implementations producing
  different results from the "Limitations" section into the doc for the
  `operator_export_type` arg.
* Document "quantized" -> "caffe2" behavior of
  OperatorExportTypes.ONNX_ATEN_FALLBACK.
* Combine the text about using torch.Tensor.item() and the text about
  using NumPy types into a section titled
  "Avoid NumPy and built-in Python types", since they're both
  fundamentally about the same issue.
* Rename "Write PyTorch model in Torch way" to "Avoiding Pitfalls".
* Lots of minor fixes: spelling, grammar, brevity, fixing links, adding
  links.
* Clarify limitation on input and output types. Phrasing it in terms of
  PyTorch types is much more accessible than in terms of TorchScript
  types. Also clarify what actually happens when dict and str are used
  as inputs and outputs.
* In Supported operators, use torch function and class names and link
  to them. This is more user friendly than using the internal aten
  op names.
* Remove references to VariableType.h, which doesn't appear to contain
  the information that it once did. Instead refer to the generated
  .pyi files.
* Remove the text in the FAQ about appending to lists within loops.
  I think this limitation is no longer present
  (perhaps since https://github.com/pytorch/pytorch/pull/51577).
* Minor fixes to some code I read along the way.
* Explain the current rationale for the weird ::prim_PythonOp op name.

Test Plan: Imported from OSS

Reviewed By: zou3519, ZolotukhinM

Differential Revision: D29494912

Pulled By: SplitInfinity

fbshipit-source-id: 7756c010b2320de0692369289604403d28877719

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-07-08 16:29:32 -07:00
BowenBao
81f95cce59 [ONNX] Extend chunk for dynamic chunk values (#59644) (#60247)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60247

Related to #42785

Test Plan: Imported from OSS

Reviewed By: zou3519, ZolotukhinM

Differential Revision: D29494914

Pulled By: SplitInfinity

fbshipit-source-id: 51ddb876d00185e59cfe54a8af5a9c8dd073a09f

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-07-08 16:29:28 -07:00
BowenBao
d9dc94406f [ONNX] Add linspace symbolic (#58854) (#60246)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60246

* Adds support for linspace op
* Modifies the arange symbolic in opset 9 to determine dtype the same way opset 11 does, per https://pytorch.org/docs/stable/generated/torch.arange.html
* Enabled some arange unit tests which were disabled for opset 9

Test Plan: Imported from OSS

Reviewed By: zou3519, ZolotukhinM

Differential Revision: D29494911

Pulled By: SplitInfinity

fbshipit-source-id: bddff18a90f8a78121c8ecdd1dafc15c69962d66

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-07-08 16:29:26 -07:00
BowenBao
4ccfa3ffeb [ONNX] Fix sum export with attribute keepdims (#59316) (#60245)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60245

Fix after b9bdb07a0261ab5a0b1038f290fa03af6ce0415f. Improves the previous fix in two respects:
* Check all dimensions for 0 when detecting an empty tensor, not just the first.
* Do not assume a tensor is empty when its shape is not accessible.

Test Plan: Imported from OSS

Reviewed By: zou3519, ZolotukhinM

Differential Revision: D29494917

Pulled By: SplitInfinity

fbshipit-source-id: 02587c3c3be0510312c1a1959f28cab12d81812d

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-07-08 16:29:24 -07:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
fca931d181 List striding with arbitrary step size (#58537)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58537

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D28531721

Pulled By: tugsbayasgalan

fbshipit-source-id: 8c8ed32ca00366603bfb5086e87dfa62736ff4b2
2021-06-22 11:25:23 -07:00
BowenBao
3995fb1840 Add new_ones symbolic (#59255) (#59539)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59539

Add new_ones symbolic in PT-ONNX exporter

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D29046603

Pulled By: SplitInfinity

fbshipit-source-id: e7420c7b543c33e3640e62461d08ff4d5843eda7

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-06-17 15:49:24 -07:00
BowenBao
044b519a80 Symbolic for ReLu6 (#58560) (#59538)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59538

Four mealv2 models could be exported with torch 1.8.1, but fail on torch master, which introduced relu6 a few months back.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb, ansley

Differential Revision: D29046607

Pulled By: SplitInfinity

fbshipit-source-id: d9cf7050e4ac0dad892441305ffebc19ba84e2be

Co-authored-by: David <jiafa@microsoft.com>
2021-06-15 12:24:17 -07:00
BowenBao
5d00c374dd [ONNX] Sum empty tensor could not be exported to ONNX successfully. (#58141) (#59537)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59537

PyTorch's sum over an empty tensor gives 0, while ONNX produces an error.

torch.sum is translated into the onnx::ReduceSum op. Per the definition of ReduceSum, update the keepdims attribute for this scenario.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb, ansley

Differential Revision: D29046604

Pulled By: SplitInfinity

fbshipit-source-id: 6f5f3a66cb8eda8b5114b8474dda6fcdbae73469

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-06-15 12:24:16 -07:00
BowenBao
83450aa11d [ONNX] Add support for torch.bernoulli() export (#57003) (#59536)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59536

Support exporting the HuggingFace DeBERTa model for training.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb, ansley

Differential Revision: D29046609

Pulled By: SplitInfinity

fbshipit-source-id: df87e0c6ed0f13463297bdeba73967fcf2aa37ca

Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-06-15 12:24:14 -07:00
BowenBao
cd5f142af4 fix error message for type_as (#57948) (#59535)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59535

Improve error message for type_as and add unit test.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb, ansley

Differential Revision: D29046605

Pulled By: SplitInfinity

fbshipit-source-id: 978bceeb62e4d3c68815cd5fdf160909a99d00f2

Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-06-15 12:24:12 -07:00
Gary Miguel
4b91355232 [ONNX] remove raw export type (#59160)
Summary:
[ONNX] remove raw export type

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59160

Reviewed By: tugsbayasgalan

Differential Revision: D28937039

Pulled By: SplitInfinity

fbshipit-source-id: 79bf91605526aa32a7304e75f50fe55d872bd4e8
2021-06-11 00:08:06 -07:00
Joel Schlosser
ef32a29c97 Back out "[pytorch][PR] ENH Adds dtype to nn.functional.one_hot" (#59080)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59080

Original commit changeset: 3686579517cc

Test Plan: None; reverting diff

Reviewed By: albanD

Differential Revision: D28746799

fbshipit-source-id: 75a7885ab0bf3abadde9a42b56d479f71f57c89c
2021-05-27 15:40:52 -07:00
BowenBao
2c17b6a0fe [ONNX] Enable support for roll() op. (#58389) (#58697)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58697

1. Add a symbolic function for aten::roll() op in symbolic_opset9.py.
2. Add a test with multiple scenarios as well.

Test Plan: Imported from OSS

Reviewed By: driazati, bhosmer

Differential Revision: D28714807

Pulled By: SplitInfinity

fbshipit-source-id: eae85f2dcf02737c9256a180f6905a935ca3f57e

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-05-27 12:06:45 -07:00
BowenBao
0a6828a306 [ONNX] use consistent quoting for string literals (#57757) (#58695)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58695

As PEP8 says: "Pick a rule and stick to it." [1]

[1] https://www.python.org/dev/peps/pep-0008/#string-quotes

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714811

Pulled By: SplitInfinity

fbshipit-source-id: c95103aceb1725c17c034dc6fc8216627f189548

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-27 12:06:42 -07:00
BowenBao
b8c96e6b08 Support symbolic for conv_tbc (#58359) (#58692)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58692

This is a fix for exporting fairseq models, see:
```python
model = torch.hub.load(github, 'conv.wmt14.en-fr', tokenizer='moses', bpe='subword_nmt')
model = torch.hub.load(github, 'conv.wmt17.en-de', tokenizer='moses', bpe='subword_nmt')
```
With this fix, and after commenting out the one `GradMultiply` line in the model script, these two models can be exported successfully with performance targets met.

The original PR https://github.com/pytorch/pytorch/pull/57708 has merging issue, use this one instead.

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714809

Pulled By: SplitInfinity

fbshipit-source-id: 71c2de6cec7ee05af68560996acf47d97af46fb2

Co-authored-by: David <jiafa@microsoft.com>
2021-05-27 12:06:37 -07:00
BowenBao
d101816fdd [ONNX] RNN scripting (#57564) (#58691)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58691

Note the first commit in this PR has its own pull request here since it seemed self-contained:
https://github.com/pytorch/pytorch/pull/57082

* [ONNX] simplify batch_first logic in RNN tests

* [ONNX] support GRU with packed input in scripting mode

This required two changes:
* Add as_tensor to symbolic_opset9.py
* Change torch::jit::pushPackingPastRnn to recognize and properly
  replace another use of the batch_sizes output of prim::PackPadded.
  Previously the code assumed that the first use was as input to the
  RNN operator. However in some cases, it is also used to compute
  max_batch_size. For example in this code:
  https://github.com/pytorch/pytorch/blob/febff45/torch/nn/modules/rnn.py#L815-L815

With these changes the GRU tests now pass in scripting mode for opset
version >= 11.
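
A hedged sketch of the scripted packed-input GRU case; shapes, opset, and names are illustrative:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

class PackedGRU(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = torch.nn.GRU(input_size=4, hidden_size=8, batch_first=True)

    def forward(self, x, lengths):
        # prim::PackPadded's batch_sizes output feeds both the GRU and the
        # max_batch_size computation - the second use this change handles.
        packed = pack_padded_sequence(x, lengths, batch_first=True)
        out, h = self.gru(packed)
        return h

m = torch.jit.script(PackedGRU())
x, lengths = torch.randn(2, 5, 4), torch.tensor([5, 3])
torch.onnx.export(m, (x, lengths), "gru.onnx", opset_version=11)
```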

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714805

Pulled By: SplitInfinity

fbshipit-source-id: f19647a04533d9ec76399a8793b3f712ea0337d2

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-27 12:06:35 -07:00
BowenBao
51d14b6859 [ONNX] Update instance_norm2 symbolic to handle track_running_stats=True (#55051) (#58690)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58690

Fixes [#53887](https://github.com/pytorch/pytorch/issues/53887)
Previously, running_mean and running_var were not passed as inputs when track_running_stats=True.

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714812

Pulled By: SplitInfinity

fbshipit-source-id: 3f2f2ec9a7eaf8a1432a552d751cbd5974b20195

Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-05-27 12:05:26 -07:00
Meghan Lele
0d5527de7a Back out "Back out "[ONNX] Process const folding progressively when converts to ONNX (#54569)"" (#58923)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58923

Original commit changeset: c54597b2048e
ghstack-source-id: 129842041

Test Plan: Sandcastle and OSS CI.

Reviewed By: snisarg

Differential Revision: D28432555

fbshipit-source-id: 2a9ec22cc004c7c6979f1cc8f3124b833cdc6634
2021-05-26 13:29:07 -07:00
Adnios
09a8f22bf9 Add mish activation function (#58648)
Summary:
See issue: https://github.com/pytorch/pytorch/issues/58375

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58648

Reviewed By: gchanan

Differential Revision: D28625390

Pulled By: jbschlosser

fbshipit-source-id: 23ea2eb7d5b3dc89c6809ff6581b90ee742149f4
2021-05-25 10:36:21 -07:00
Thomas J. Fan
a7f4f80903 ENH Adds dtype to nn.functional.one_hot (#58090)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/33046
Related to https://github.com/pytorch/pytorch/issues/53785

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58090

Reviewed By: zou3519

Differential Revision: D28640893

Pulled By: jbschlosser

fbshipit-source-id: 3686579517ccc75beaa74f0f6d167f5e40a83fd2
2021-05-24 13:48:25 -07:00
Serhat Yilmaz
4ca4640bae [torch][repeat_interleave] remove stream synchronization if output size is given (#58417)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58417

Same as title.

Test Plan:
Rely on CI signal.

Update unit test to exercise new code path as well.

Reviewed By: ngimel

Differential Revision: D28482927

fbshipit-source-id: 3ec8682810ed5c8547b1e8d3869924480ce63dcd
2021-05-22 20:53:28 -07:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
9db64e6e56 Revert "Striding for lists Part 2 (#49352)" (#58523)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58523

This reverts commit fee7e8b91d.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D28528023

Pulled By: tugsbayasgalan

fbshipit-source-id: 9fa1d86f0c81fcc6fd3798e0d51a712a3c9b3952
2021-05-20 13:20:33 -07:00
BowenBao
3d12ab452e [ONNX] Fix split export in opset13 (#56277) (#57605)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57605

Fix split export in opset13

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393522

Pulled By: SplitInfinity

fbshipit-source-id: 4de83345ec7bc9bafe778fe534d9a8760ce16ab3

Co-authored-by: Ksenija Stanojevic <ksenija.stanojevic@gmail.com>
Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-05-17 14:50:33 -07:00
Meghan Lele
c034bce979 Back out "[ONNX] Process const folding progressively when converts to ONNX (#54569)"
Summary: Original commit changeset: 833dac7c71f2

Test Plan:
```
buck test mode/dev //pytext/fb/assistant/lite/test:test -- --exact
'pytext/fb/assistant/lite/test:test - test_export_bytes_model_to_caffe2
(pytext.fb.assistant.lite.test.test.TestExport)'
```

Reviewed By: jeanm

Differential Revision: D28431840

fbshipit-source-id: 0f1d530034404421a5d51691173e1cc0ee16fdd6
2021-05-14 13:45:49 -07:00
BowenBao
0d11dbf511 [ONNX] Support index_add_ function. (#56867) (#57830)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57830

This PR aims to support the tensor.index_add_() method in a symbolic function. We leverage scatter_add() to implement it, since ONNX doesn't have a corresponding operator. A hedged export sketch follows the notes below.

Notes:

      1.  4 tests have been added covering several scenarios.
      2.  If there are duplicated values in the 'index' parameter, the export still executes successfully but the results are wrong, so a warning is emitted on every call to this symbolic function. And if we detect that the rank of 'index' is greater than the size of the 'dim' dimension, an exception is raised to stop exporting an incorrect ONNX file.
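
A minimal export sketch for this symbolic; the module, shapes, and file name are illustrative:

```python
import torch

class IndexAdd(torch.nn.Module):
    def forward(self, x, index, src):
        # Lowered via scatter_add, since ONNX has no index_add operator.
        return x.index_add_(0, index, src)

x = torch.zeros(5, 3)
index = torch.tensor([0, 2, 4])  # duplicates here would export (with a warning) but give wrong results
src = torch.ones(3, 3)
torch.onnx.export(IndexAdd(), (x, index, src), "index_add.onnx", opset_version=11)
```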

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393518

Pulled By: SplitInfinity

fbshipit-source-id: f487ca2c63fec47c6ab74f1a7783dae7f3b8d1ef

Co-authored-by: Jay Zhang <jiz@microsoft.com>
2021-05-14 09:51:55 -07:00
BowenBao
520f90f359 [ONNX] Handling incorrect format for example_outputs (#55802) (#57829)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57829

Handling incorrect format for example_outputs, fixing exception behavior.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393521

Pulled By: SplitInfinity

fbshipit-source-id: aa518483f94e31194b951198aefa7c897932356e

Co-authored-by: Ksenija Stanojevic <ksenija.stanojevic@gmail.com>
Co-authored-by: Negin Raoof <neginmr@utexas.edu>
2021-05-14 09:50:58 -07:00
BowenBao
51cd89ecc6 [ONNX] Handle mixed mask, index input for index_put (#57604)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57604

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393524

Pulled By: SplitInfinity

fbshipit-source-id: 6c0cd9db981a7e4ece2fdd375a814a13449e1ab0

Co-authored-by: David <jiafa@microsoft.com>
2021-05-13 13:42:56 -07:00
BowenBao
bfe7728f18 [ONNX] Process const folding progressively when converts to ONNX (#54569) (#57601)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57601

This PR automatically solves the ONNX const attribute issue from PR https://github.com/pytorch/pytorch/pull/53784.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393525

Pulled By: SplitInfinity

fbshipit-source-id: 833dac7c71f24a88af62d5dd2be0a702ed34d053

Co-authored-by: David <jiafa@microsoft.com>
2021-05-13 13:42:51 -07:00
BowenBao
346dc88bfa [ONNX] Support registering custom export for prim::PythonOp from torch.autograd.Function (#55630) (#57600)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57600

Demo script:

```python
import torch

class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input, scalar_tuple, scalar, scalar_list):
        ctx.save_for_backward(input)
        return input.clamp(min=scalar)
    @staticmethod
    def backward(ctx, grad_output):
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input

class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear_a = torch.nn.Linear(2, 2)
        self.linear_b = torch.nn.Linear(2, 2)
        self.relu = MyReLU.apply
    def forward(self, x):
        h = self.linear_a(x)
        h = self.relu(h, (5, 3), 2, [1, 2, 3])
        h = self.linear_b(h)
        return h

"""
The user defines how to export prim::PythonOp as a custom op.
"""
def symbolic_pythonop(g, n, *args, **kwargs):
    # Print information:
    print('arguments of ', kwargs['name'], ':')
    print('original node: ', n)
    for i, out in enumerate(n.outputs()):
        print('original output {}: {}, requires grad: {}'.format(i, out, out.requiresGrad()))
    import torch.onnx.symbolic_helper as sym_helper
    for i, arg in enumerate(args):
        print('arg {}: {}, requires grad: {}'.format(i, arg, arg.requiresGrad() if sym_helper._is_value(arg) else False))
    for k, v in kwargs.items():
        print('key: ', k, ' v: ', v)

    # TODO: all inputs (tensors and scalars) are in args.
    #       the backend can define CustomDomain::PythonOp and store the info however it deems fit.
    return g.op("CustomDomain::PythonOp", args[0], name_s=kwargs['name'])

torch.onnx.register_custom_op_symbolic("::prim_PythonOp", symbolic_pythonop, 9)

# Define input.
x = torch.tensor([[0.3971, 0.7544],
                  [0.5695, 0.4388]], requires_grad=True)

model = MyModule()
# Forward.
y = model(x)

torch.onnx.export(model, (x,), 'model.onnx', opset_version=12, verbose=True)
```

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393528

Pulled By: SplitInfinity

fbshipit-source-id: e0d55b7c737c5916fda08a3b26b3306037f970df

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-05-13 13:42:49 -07:00
BowenBao
2b0f481d3f Add support to to(device) op. (#56857) (#57599)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57599

Currently, calling the tensor.to() method with a device as the parameter fails, because the symbolic function for to() didn't handle that case.

So add a check at the beginning of the symbolic function: if this is a pure device cast, return self directly. A test has also been added.
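
A minimal sketch of the newly handled case; the module and file name are illustrative:

```python
import torch

class DeviceCast(torch.nn.Module):
    def forward(self, x):
        # A pure device cast is a no-op for ONNX; the symbolic now
        # returns the input unchanged instead of failing.
        return x.to(torch.device("cpu")) * 2

torch.onnx.export(DeviceCast(), (torch.randn(2, 2),), "device_cast.onnx")
```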

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393523

Pulled By: SplitInfinity

fbshipit-source-id: c41e3c0293932fc90dedb544daadd9c5d4b48792

Co-authored-by: Jay Zhang <jiz@microsoft.com>
2021-05-13 13:42:48 -07:00
BowenBao
9e56314d2c onnx.symbolic_helper.parse_args: document and clean up (#56956) (#57598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57598

Add a doc string to explain what it does and how to use it.

Remove hack around a bug in Python 2's functools.wrap().
Python 2 is no longer supported.
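
A minimal illustration of the decorator in use; the symbolic function itself is hypothetical:

```python
from torch.onnx.symbolic_helper import parse_args

# 'v' marks an argument that arrives as a graph Value (tensor); 'i' marks
# one that must be a constant int, which parse_args extracts and validates.
@parse_args('v', 'i')
def flatten_from(g, input, dim):
    return g.op('Flatten', input, axis_i=dim)
```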

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393519

Pulled By: SplitInfinity

fbshipit-source-id: aae8c5e7b49e2ad2d24a0e86f8ba47f1cd080e46

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-13 13:42:46 -07:00
BowenBao
ac9e79e561 Add a new operator for fill_() function. (#56859) (#57596)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57596

Add the corresponding symbolic function and test for fill_() function.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393520

Pulled By: SplitInfinity

fbshipit-source-id: 3e177f88d3776d0d4a9d5e7ec7df4e6629738799

Co-authored-by: Jay Zhang <jiz@microsoft.com>
2021-05-13 13:42:43 -07:00
BowenBao
3bc8a2264d [ONNX] Support .item() export & NumberType to tensor conversion (#55697) (#57594)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57594

Support .item() export & NumberType to tensor conversion

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393516

Pulled By: SplitInfinity

fbshipit-source-id: 94d0aec0a8fe144ee2567dc3c9c19fcb18ed21fa

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-05-13 13:41:29 -07:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
fee7e8b91d Striding for lists Part 2 (#49352)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49352

In this PR, we change all definitions of slice to accept None parameters for start, end, and step. This simplifies the compiler logic.

Test Plan:
test_jit test cases

Imported from OSS

Reviewed By: jamesr66a, nikithamalgifb

Differential Revision: D25929903

fbshipit-source-id: 5bfc6bad514a8aafbef2dacc706f95f867fe85f1
2021-05-13 00:16:02 -07:00
neginraoof
1de3525ca8 [ONNX] Handle PackedParams inputs for _propagate_and_assign_input_shapes (#56449) (#57079)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57079

While testing the onnx 1.9 release, we see that the old bug is triggered by the caffe2 test:
`pytest test/onnx/test_pytorch_onnx_caffe2_quantized.py::TestQuantizedOps::test_small_model`
This is because the graph inputs
```python
graph(%x.1 : Tensor,
      %conv1._packed_params : __torch__.torch.classes.quantized.Conv2dPackedParamsBase,
      %conv2._packed_params : __torch__.torch.classes.quantized.Conv2dPackedParamsBase,
      %fc.bias : Float(10, strides=[1], requires_grad=0, device=cpu),
      %fc.weight : Float(10, 72, strides=[72, 1], requires_grad=0, device=cpu)):
```
contains `Conv2dPackedParamsBase`, which is a PackedParams object.
When we flatten the inputs, each PackedParams expands into several tensors, so the shape inference for the inputs becomes misaligned.
This PR records how many tensors each PackedParams flattens into and skips by that number rather than 1; the unit test then passes.
Note that the tuple case still follows the original logic.

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D28393949

Pulled By: malfet

fbshipit-source-id: 98d48aad27e5ca03fb10d260f8e625478d996ee2

Co-authored-by: David <jiafa@microsoft.com>
2021-05-12 15:20:26 -07:00
Peter Bell
2043093217 Add correction parameter to std/var (#50903)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50903

First part of #50010. Also fixes #51127.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27911345

Pulled By: mruberry

fbshipit-source-id: 7138fddc935802918ab9ff19f4bc1b9f4d745d41
2021-05-07 14:40:28 -07:00
Peter Bell
33eea146ee torch.clamp with tensor min and max (#52695)
Summary:
Fixes gh-2793

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52695

Reviewed By: mruberry

Differential Revision: D27395977

Pulled By: ezyang

fbshipit-source-id: f86aa240feb034d42e4c45447e72218f6a773c24
2021-05-03 12:56:16 -07:00
BowenBao
913f1f75b3 Revert "Revert [ONNX] Redesign inplace conversion" (#56675)
Summary:
Adjust how MutationRemover is used to avoid creating aliasDb multiple times for the same graph.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56675

Reviewed By: pbelevich

Differential Revision: D27945692

Pulled By: SplitInfinity

fbshipit-source-id: a6c548438e88ddee18ef03a6f0461ab9eaaaa829
2021-04-22 22:22:16 -07:00
Nikita Shulga
36828aa0ff Revert D27866138: [ONNX] Redesign inplace conversion (#55033)
Test Plan: revert-hammer

Differential Revision:
D27866138 (24ff92f76d)

Original commit changeset: ab5c9188740c

fbshipit-source-id: b99bf5b12e109089ebd5748c1dc152c6af1cebdb
2021-04-21 21:11:06 -07:00
BowenBao
24ff92f76d [ONNX] Redesign inplace conversion (#55033) (#56173)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56173

* Create `InplaceConverter` and `ValueTracker` to keep track of aliases of values throughout the graph. For a given value, a new alias is created every time when there is an inplace operation, SetAttr, or through nested blocks owned by If/Loop nodes.
* Fix bug where controlflow node output types are not set, when the complete node is unable to run ONNX shape inference due to containing non-onnx node.
* Add symbolic for `__not__` ~~and `prim_min`~~(update: moved to a separate PR), and update `index_put` opset9 to support case of assignment without providing indices.
* Bump ORT version in CI test.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866138

Pulled By: SplitInfinity

fbshipit-source-id: ab5c9188740c50f783ceba4d54fda43c26e2fde7
2021-04-21 17:59:11 -07:00
BowenBao
818ce1d0d2 Add standardOps match more input type in ORT (#53813) (#56172)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56172

Enable the standard ops, including **Add/Sub/Mul/Div/Gemm/Pow/Mod**, with low-precision inputs in ORT

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866136

Pulled By: SplitInfinity

fbshipit-source-id: f2cf5649fffefd68c0cc7b6dce94198751636727
2021-04-21 17:58:08 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
BowenBao
0b0fca3c59 [ONNX] Export mv op (#55470) (#56169)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56169

Adding matrix-vector multiplication op

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866141

Pulled By: SplitInfinity

fbshipit-source-id: 40e8f65c590bc5354b764b51e0c3cd8386fdc33b
2021-04-20 23:00:46 -07:00
BowenBao
90e63cc41f [ONNX] Add support for prim::min (#55259) (#56168)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56168

Add support for prim::min operator and update full_like

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866144

Pulled By: SplitInfinity

fbshipit-source-id: f4af4b8171ed8bd7980fa3141f5fc9811e2bc367
2021-04-20 23:00:44 -07:00
BowenBao
5a455dc717 [ONNX] Enable tensordot symbolic function. (#55654) (#56166)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56166

Support tensordot in symbolic function of opset 12, and add tests accordingly.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866140

Pulled By: SplitInfinity

fbshipit-source-id: 68e218cfbd630900fb92871fc7c0de3e7e8c8c3d
2021-04-20 23:00:41 -07:00
BowenBao
f804b65d4e [ONNX] Update repeat_interleave symbolic (#54312) (#56165)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56165

Add implementation for cases where
- interleaving happens along a dim that consists of dynamic axes

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866137

Pulled By: SplitInfinity

fbshipit-source-id: 7fef1b2c614f2e24a677b7ca0886bb37bd0ab479
2021-04-20 23:00:39 -07:00
BowenBao
75995e4bf6 [ONNX] Add support for hann_window operator. (#54587) (#56163)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56163

* [ONNX] Improve index_put symbolic to handle singular Bool updates (#53690)

Adds support for cases where the update to the index_put node is a single Bool value, such as the case shown below

```
mask[indices] = True
```

Fixes #53507

* [ONNX] Support primitive type input/outputs and attributes (#53550)

Support primitive type attributes. Needed for Silero model.

* [ONNX] Fix if output shape mismatch error & Fix graph input directly used as output (#53219)

Fix if output shape mismatch error & Fix graph input directly used as output

* Add support for hann_window operator.

* [ONNX] Replace decomposeLinear pre process pass with a symbolic (#53077)

Replace decomposeLinear pre process pass with a symbolic

* Add a test case for dtype is None.

* Resolve flake8 issue.

* Remove one unused test case.

* Add support for hann_window operator.

* Add a test case for dtype is None.

* Remove one unused test case.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866145

Pulled By: SplitInfinity

fbshipit-source-id: e0b43df9ecd1a95cd7ac297213aba453bbaf2913

Co-authored-by: Shubham Bhokare <32080845+shubhambhokare1@users.noreply.github.com>
Co-authored-by: Negin Raoof <neginmr@utexas.edu>
Co-authored-by: Bowen Bao <bowbao@microsoft.com>
Co-authored-by: Ksenija Stanojevic <KsenijaS@users.noreply.github.com>
2021-04-20 22:59:31 -07:00
Kurt Mohler
3fe4718d16 Add padding_idx argument to EmbeddingBag (#49237)
Summary:
This PR adds a `padding_idx` parameter to `nn.EmbeddingBag` and `nn.functional.embedding_bag`. As with `nn.Embedding`'s `padding_idx` argument, if an embedding's index is equal to `padding_idx` it is ignored, so it is not included in the reduction.

This PR does not add support for `padding_idx` for quantized or ONNX `EmbeddingBag` for opset10/11 (opset9 is supported). In these cases, an error is thrown if `padding_idx` is provided.
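
A small usage sketch of the new argument; the values are illustrative:

```python
import torch

bag = torch.nn.EmbeddingBag(num_embeddings=10, embedding_dim=3,
                            mode='sum', padding_idx=0)
inp = torch.tensor([0, 1, 2, 0, 3])  # entries equal to padding_idx are skipped
offsets = torch.tensor([0, 3])       # two bags: [0, 1, 2] and [0, 3]
out = bag(inp, offsets)              # each bag sums only non-padding embeddings
```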

Fixes https://github.com/pytorch/pytorch/issues/3194

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49237

Reviewed By: walterddr, VitalyFedyunin

Differential Revision: D26948258

Pulled By: jbschlosser

fbshipit-source-id: 3ca672f7e768941f3261ab405fc7597c97ce3dfc
2021-04-14 09:38:01 -07:00
whiteking64
e6bfff679d [ONNX] Add hardsigmoid symbolic in opset 9 #49649 (#54193)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49649
Adds support for torch.nn.Hardsigmoid operator in torch.onnx.export

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54193

Reviewed By: anjali411

Differential Revision: D27522969

Pulled By: SplitInfinity

fbshipit-source-id: 33abcec578f4bc3cf5c3ee1c1bed7d94816bee96
2021-04-07 14:28:31 -07:00
Nikita Shulga
add49e7e4e Enforce PEP263 for PyTorch python codebase (#55346)
Summary:
All python files containing non-ASCII characters should be correctly annotated with `# -*- coding: utf-8 -*-` comment

Delete a number of superfluous UTF-8 characters, most commonly the UTF-8 right single quotation mark U+2019 (’) used instead of the ASCII apostrophe ', for example `Module’s`->`Module's`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55346

Reviewed By: samestep

Differential Revision: D27582044

Pulled By: malfet

fbshipit-source-id: c1cd89655915858ff3a41f675cdfffff795a8e44
2021-04-06 18:31:38 -07:00
Peter Bell
2ee02b30b1 Replace rounding_mode="true" with rounding_mode=None (#51988)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51988

* **#51988 Replace rounding_mode="true" with rounding_mode=None**

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27561817

Pulled By: mruberry

fbshipit-source-id: 60d1d9c389570f60d599fc1876518717367fb368
2021-04-05 14:53:43 -07:00
Antonio Cuni
980d6f2589 torch.linalg.det (#53119)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51652.
In particular:
- the main implementation is in `torch.linalg.det` now. `torch.det` is just a deprecated alias to it
- add a new `OpInfo` for `torch.linalg.det`
- remove the old-style tests for `torch.det` (this is similar to what we did for `torch.linalg.slogdet`, see https://github.com/pytorch/pytorch/issues/49194)
- added a `out=` argument to `torch.linalg.det`, but **not** to `torch.det`.

It is worth noting that I had to skip a few tests:
- `TestGradientsCuda::test_fn_gradgrad_linalg_det_cuda_float64`. This is not a regression: the functionality is broken also on master, but the test is not executed properly due to https://github.com/pytorch/pytorch/issues/53361.

And the following tests which fails only on ROCm:
- `test_variant_consistency_jit_cuda_{float64,float32}`
- `test_fn_grad_cuda_float64`

I think that the ROCm tests fail because the current linalg.det backward is unstable if the matrix has repeated singular values, see https://github.com/pytorch/pytorch/issues/53364 .

(At the moment of writing some CI jobs are still running but I believe the build will be green, since the only difference wrt the last push is the skip of the ROCm tests)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53119

Reviewed By: H-Huang

Differential Revision: D27441999

Pulled By: mruberry

fbshipit-source-id: 5eab14c4f0a165e0cf9ec626c3f4bb23359f2a9e
2021-04-05 08:45:27 -07:00
Ksenija Stanojevic
7824d8277a [ONNX] Fix export of copy_ operator (#51938) (#54870)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54870

The copy_ operator is decomposed into aten::expand_as and aten::index_put before reaching the ONNX exporter.
There is a scenario where the inputs to copy_ are not of the same type; the torch copy_ op does implicit casting that is currently not reflected in the ONNX exporter. This PR adds casting inside the index_put symbolic for the case when the self tensor is not of the same type as the values.
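
A hedged sketch of the mixed-dtype pattern this fixes; the module and dtypes are illustrative:

```python
import torch

class CopyCast(torch.nn.Module):
    def forward(self, x, y):
        # Slice assignment traces to copy_, which decomposes into
        # expand_as + index_put; the exporter now inserts a Cast because
        # y's dtype (int64) differs from x's (float32).
        x[0:2] = y[0:2]
        return x

x = torch.zeros(4, dtype=torch.float32)
y = torch.ones(4, dtype=torch.int64)
torch.onnx.export(CopyCast(), (x, y), "copy_cast.onnx", opset_version=11)
```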

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27408975

Pulled By: SplitInfinity

fbshipit-source-id: 15022703e76b9c98b02285c06b13d44f3c4a3f00
2021-03-31 21:14:32 -07:00
DeyuHuang
40869884cd Add outer export to onnx (#53603) (#54869)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54869

Add a symbolic function to support exporting torch.outer to ONNX.
This supports the transfo-xl-wt103 model.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27408978

Pulled By: SplitInfinity

fbshipit-source-id: 70c89a9fc1a5e4a4ddcf674afb1e82e492a7d3b9
2021-03-31 21:14:29 -07:00
Ksenija Stanojevic
cb0cee4a3d [ONNX] Replace decomposeLinear pre process pass with a symbolic (#53077) (#54866)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54866

Replace decomposeLinear pre process pass with a symbolic

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27408981

Pulled By: SplitInfinity

fbshipit-source-id: d2d76cab3383122a60df1f356742a33db56adc71
2021-03-31 21:14:25 -07:00
Negin Raoof
cd9dd653e9 [ONNX] Support primitive type input/outputs and attributes (#53550) (#54864)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54864

Support primitive type attributes. Needed for Silero model.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27408982

Pulled By: SplitInfinity

fbshipit-source-id: 16b291eedbe9f9bb31d7664a29a484555df53755
2021-03-31 21:14:20 -07:00
Shubham Bhokare
ce48b14060 [ONNX] Improve index_put symbolic to handle singular Bool updates (#53690) (#54863)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54863

Adds support for cases where the update to the index_put node is a single Bool value, such as the case shown below

```
mask[indices] = True
```

Fixes #53507

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27408977

Pulled By: SplitInfinity

fbshipit-source-id: bcfb55b50ce76b3d4913ffbc16cdef1f98cb7a84
2021-03-31 21:12:53 -07:00
Jeff Yang
59d1f08b4c docs: fix docstring signature of torch.{onnx,utils} (#54662)
Summary:
fixes https://github.com/pytorch/pytorch/issues/50018
fixes https://github.com/pytorch/pytorch/issues/50017
https://11811000-65600975-gh.circle-artifacts.com/0/docs/onnx.html#functions
https://11811000-65600975-gh.circle-artifacts.com/0/docs/mobile_optimizer.html

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54662

Reviewed By: ailzhang

Differential Revision: D27328485

Pulled By: zou3519

fbshipit-source-id: e658542072ba633b9c309145fc5182edf895d0a6
2021-03-29 10:07:42 -07:00
Bel H
645119eaef Lowering NLLLoss/CrossEntropyLoss to ATen code (#53789)
Summary:
* Lowering NLLLoss/CrossEntropyLoss to ATen dispatch
* This allows the MLC device to override these ops
* Reduce code duplication between the Python and C++ APIs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53789

Reviewed By: ailzhang

Differential Revision: D27345793

Pulled By: albanD

fbshipit-source-id: 99c0d617ed5e7ee8f27f7a495a25ab4158d9aad6
2021-03-26 07:31:08 -07:00
Peiyuan Liao
3519625a34 Fix onnx warning message (#54371)
Summary:
Adding a space between "as" and "Dropout".

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54371

Reviewed By: radkris-git

Differential Revision: D27244053

Pulled By: heitorschueroff

fbshipit-source-id: 500ea719e239ce89e5ac4b54e5b32a36155e8544
2021-03-23 03:05:52 -07:00
Yukio Siraichi
27048c1dfa Remove legacy constructor calls from _torch_ folder. (#53889)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53146
Related to https://github.com/pytorch/pytorch/issues/47112

As mentioned in https://github.com/pytorch/pytorch/issues/47112, the plan is to:

1. Verify that all `torch.Tensor()` scenarios are covered by other functions
2. Scrub internal `torch.Tensor()` uses
3. Update the docs and throw `TORCH_WARN_ONCE` if someone uses `torch.Tensor()`

In this PR, I replaced all occurrences of `torch.Tensor` present in the _torch_ folder.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53889

Reviewed By: walterddr, zou3519

Differential Revision: D27190743

Pulled By: jbschlosser

fbshipit-source-id: 7ecc201d57935b8dbb98ae3718b60d95cb55a010
2021-03-19 15:20:19 -07:00
BowenBao
ad8d1b2aaa [ONNX] Update embedding export wrt padding_idx (#53931)
Summary:
To be in-sync with https://github.com/pytorch/pytorch/issues/53447

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53931

Reviewed By: ngimel

Differential Revision: D27026616

Pulled By: malfet

fbshipit-source-id: 4c50b29fa296c90aeeeb1757bdaada92cbba33d4
2021-03-15 10:03:53 -07:00
Nikita Shulga
5b648ef909 Revert D26922420: [ONNX] fix export of embedding with padding_idx (#53053)
Test Plan: revert-hammer

Differential Revision:
D26922420 (ee4ce8e9d9)

Original commit changeset: b8b867a96a13

fbshipit-source-id: 501392f419f2735658001c96f83d9754acd8e476
2021-03-12 14:51:01 -08:00
BowenBao
ee4ce8e9d9 [ONNX] fix export of embedding with padding_idx (#53053) (#53530)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53530

fix export of embedding with padding_idx

Test Plan: Imported from OSS

Reviewed By: navahgar, jamesr66a, malfet

Differential Revision: D26922420

Pulled By: SplitInfinity

fbshipit-source-id: b8b867a96a13cf810f9c0ae88fcc5c95072bb390
2021-03-12 02:49:46 -08:00
BowenBao
a572f70f2f [ONNX] Support torch.isinf, torch.any and torch.all export to ONNX (#53328) (#53529)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53529

Supported for ONNX export after opset 10.
This is not exportable to opsets < 10 due to
1. onnx::IsInf is introduced in opset 10
2. onnx::Equal does not accept float tensor prior to opset 11

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922418

Pulled By: SplitInfinity

fbshipit-source-id: 69bcba50520fa3d69db4bd4c2b9f88c00146fca7
2021-03-12 02:49:41 -08:00
BowenBao
a6a811f23a [ONNX] Add repeat_interleave symbolic (#52855) (#53312)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53312

- Add support for aten::repeat_interleave
- NOTE: Also adds fix for cases with split op where input tensor sizes are not known but _outputs is provided

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922422

Pulled By: SplitInfinity

fbshipit-source-id: 5362d0d8ccfdc14c15e1ae73fd70c4c113f823e6
2021-03-12 02:49:34 -08:00
BowenBao
4c1d9e58c2 Fix copy_ export (#53046) (#53310)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53310

Fixes export of torch.copy_

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922424

Pulled By: SplitInfinity

fbshipit-source-id: f509e531f5064d2be7f55e1681813f10f17475d2
2021-03-12 02:49:26 -08:00
BowenBao
7f17058894 [ONNX] Symbolic shape inference (#51481) (#53307)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53307

This PR adds symbolic shape inference in the ONNX pass _jit_pass_onnx_graph_shape_type_inference.
It creates a singleton ConstantValueMap.
It leverages constant-folding techniques and does per-op handling for ConstantValueMap.
As a byproduct, it enables the fold_if pass for dynamic-axes cases, typically for faster-rcnn etc.

The core change is in `torch/csrc/jit/passes/onnx/shape_type_inference.cpp` and `torch/csrc/jit/passes/onnx/constant_map.cpp`.

We usually need to copy a tensor before storing it in ConstantValueMap, otherwise the underlying value may change. I see this issue in (1) from_blob and (2) getting the value from a Constant node.

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922414

Pulled By: SplitInfinity

fbshipit-source-id: 7654dc13d1de8d9496ad4be89f1454260d7bdeb0
2021-03-12 02:49:14 -08:00
BowenBao
57d1df071f [ONNX] Support inplace operations on inplace indexing (#52063) (#53306)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53306

* [ONNX] Fix for sequence of mutations in blocks (#51577)

Fixes consecutive mutations in a tensor inside blocks.
Also, support append and pop in blocks.

* Support inplace operations + indexing

* Clean up old pass for remove mutations

* Add loop test

* Fixes for set attr in loops

* Removing the new jit API flag

* [ONNX] Redesign onnx pass to enable shape type dependent pattern conversion - cont (#51795)

With the introduction of ONNX shape inference, shape and type are inferred on the fly as operators get converted from ATen to ONNX when running symbolic function. This resolves the shape/type requirement for the symbolic functions. The pre-onnx passes however, can not be supported by shape inference, since at that stage the operators in the graph are still ATen operators.

This PR is to update the design of ONNX pass, to enable a mechanism of capturing subgraphs of ATen operators of certain patterns, and convert them later, when shape/type information of upstream operators are available.

The new design will require pre-onnx passes that need shape/type to be written in two parts, encapsulation and conversion.

    The encapsulation part will find the nodes of patterns, like how pre-onnx passes were written previously. But instead of converting the nodes, it will encapsulate them into a sub-block of a new placeholder node. This part is called before onnx pass, so it runs before calling symbolic functions.

    The conversion part will be called inside the onnx pass. In onnx pass, run_symbolic_func will be called for each node in topological order. When it reaches the placeholder node, the conversion part will be invoked. It will convert the nodes inside the sub-block based on pattern. By that time, it will have shape/type of upstream operators available. After the conversion is complete, the placeholder node will be removed, and nodes inside its sub-block converted. Run_symbolic_func will be called for these nodes, and they will be converted from ATen operator to ONNX operator.

This PR includes several other fixes, listed below.
* ~~replace helper.cpp with onnx_utils.cpp for holding utility functions.~~
* fix EraseNumberTypes on Bool type; the code was outdated, written back when the Bool type didn't exist.
* ~~enable onnx shape inference in export with parameter/initializer data.~~
* other code clean ups.
* fix insertion of identity nodes for loop opset 13 sequence output.

~~PR depends on #51603~~

* Fix after merge

* clang

* Fix clang

* Fix clang

* Fix warning message.

* Fixes for non-model param attributes

* Fix for caffe2

* Additional test

* clang

* Skip test for lower opsets

* fix clang-tidy

* Update init.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Fix for clang formatting

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922416

Pulled By: SplitInfinity

fbshipit-source-id: e7108620b39b6404c594910786c4d275fee59d84

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-03-12 02:49:11 -08:00
BowenBao
38414d29a1 [ONNX] Remove the last Cast in pow symbolic_opset9 (#52646) (#53305)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53305

Fixes #52436
For onnx Pow in opset 9, if X is int32 and Y is float, we used to cast the result back to int32 to be consistent with X's type.
However, PyTorch's result is still float. The ATen graph sometimes does not bind types to operators;
we are fine with the float type and don't want to cast back.
Even if X and Y are both int32, the difference between float32 and int32 results is immaterial here.
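
A minimal sketch of the mixed-type case; the module and values are illustrative:

```python
import torch

class PowMix(torch.nn.Module):
    def forward(self, x, y):
        return torch.pow(x, y)  # int32 base, float exponent -> float result

torch.onnx.export(PowMix(),
                  (torch.arange(4, dtype=torch.int32), torch.tensor(0.5)),
                  "pow.onnx", opset_version=9)
```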

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922425

Pulled By: SplitInfinity

fbshipit-source-id: f8c09524acee0de615df10a14310ca1dd583831e
2021-03-12 02:47:19 -08:00
BowenBao
b9e900ee52 [ONNX] Update inputs/input_names formatting to avoid ValueError with scriptMethods (#53519) (#53548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53548

fixes issue faced in #53506

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D26922415

Pulled By: malfet

fbshipit-source-id: b61842827bb14cef8c7a7089b2426fa53e642c90
2021-03-11 14:26:02 -08:00
BowenBao
3f9c803fe8 [ONNX] Redesign onnx pass to enable shape type dependent pattern conversion - cont (#51795) (#53304)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53304

With the introduction of ONNX shape inference, shape and type are inferred on the fly as operators get converted from ATen to ONNX when running symbolic function. This resolves the shape/type requirement for the symbolic functions. The pre-onnx passes however, can not be supported by shape inference, since at that stage the operators in the graph are still ATen operators.

This PR is to update the design of ONNX pass, to enable a mechanism of capturing subgraphs of ATen operators of certain patterns, and convert them later, when shape/type information of upstream operators are available.

The new design will require pre-onnx passes that need shape/type to be written in two parts, encapsulation and conversion.

    The encapsulation part will find the nodes of patterns, like how pre-onnx passes were written previously. But instead of converting the nodes, it will encapsulate them into a sub-block of a new placeholder node. This part is called before onnx pass, so it runs before calling symbolic functions.

    The conversion part will be called inside the onnx pass. In onnx pass, run_symbolic_func will be called for each node in topological order. When it reaches the placeholder node, the conversion part will be invoked. It will convert the nodes inside the sub-block based on pattern. By that time, it will have shape/type of upstream operators available. After the conversion is complete, the placeholder node will be removed, and nodes inside its sub-block converted. Run_symbolic_func will be called for these nodes, and they will be converted from ATen operator to ONNX operator.

This PR includes several other fixes, listed below.
* ~~replace helper.cpp with onnx_utils.cpp for holding utility functions.~~
* fix EraseNumberTypes on Bool type; the code was outdated, written back when the Bool type didn't exist.
* ~~enable onnx shape inference in export with parameter/initializer data.~~
* other code clean ups.
* fix insertion of identity nodes for loop opset 13 sequence output.

~~PR depends on #51603~~

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D26922417

Pulled By: malfet

fbshipit-source-id: 14ed06158d539e2451c2e5e63ba1b32fb0f75095
2021-03-11 10:30:09 -08:00
Sam Estep
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
Shubham Bhokare
49a923c8b5 [ONNX] Update LayerNorm symbolic to handle autocasting (#52199) (#52350)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52350

When ONNX export creates a 0-dim tensor of constant type, it overrides the type promotion logic, as described in #9515. To prevent this, this PR adds the following behavior:
if the data type is a floating-point type, it is converted to a 0-dim double tensor; otherwise, it is converted to a 0-dim tensor of its original type.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D26490325

Pulled By: SplitInfinity

fbshipit-source-id: 4c47c69c9b6523d2e45b74c2541d6d8ca7e28fc9
2021-02-19 10:57:15 -08:00
Mike Ruberry
594a66d778 Warn about floor_divide performing incorrect rounding (#50281) (#50281)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50281

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51745

Test Plan: Imported from OSS

Reviewed By: ngimel

Pulled By: mruberry

Differential Revision: D26257855

fbshipit-source-id: e5d497cf07b0c746838ed081c5d0e82fb4cb701b
2021-02-10 03:13:34 -08:00
Chester Liu
58eb23378f Clean up usage of torch._six partially (#49785)
Summary:
See https://github.com/pytorch/pytorch/issues/42919

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49785

Reviewed By: mruberry

Differential Revision: D25963833

Pulled By: bugra

fbshipit-source-id: 11c90d6b8d3f206c9d0a4d8621b773beb10c6ba2
2021-02-08 13:58:34 -08:00
Hao Wu
7363da7c57 onnx export of per channel fake quantize functions (#42835)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/39502

This PR adds support for exporting **fake_quantize_per_channel_affine** to a pair of QuantizeLinear and DequantizeLinear. Per tensor support was added by PR https://github.com/pytorch/pytorch/pull/39738.

The `axis` attribute of QuantizeLinear and DequantizeLinear, which is required for per-channel support, was added in opset 13 by https://github.com/onnx/onnx/pull/2772.

[update 1/20/2021]: opset 13 is now supported on master, so the added function is properly tested. The code has also been rebased onto the new master.

The function is also tested offline with the following code
```python
import torch
from torch import quantization

from torchvision import models
qat_resnet18 = models.resnet18(pretrained=True).eval().cuda()

qat_resnet18.qconfig = quantization.QConfig(
    activation=quantization.default_fake_quant, weight=quantization.default_per_channel_weight_fake_quant)
quantization.prepare_qat(qat_resnet18, inplace=True)
qat_resnet18.apply(quantization.enable_observer)
qat_resnet18.apply(quantization.enable_fake_quant)

dummy_input = torch.randn(16, 3, 224, 224).cuda()
_ = qat_resnet18(dummy_input)
for module in qat_resnet18.modules():
    if isinstance(module, quantization.FakeQuantize):
        module.calculate_qparams()
qat_resnet18.apply(quantization.disable_observer)

qat_resnet18.cuda()

input_names = [ "actual_input_1" ]
output_names = [ "output1" ]

torch.onnx.export(qat_resnet18, dummy_input, "quant_model.onnx", verbose=True, opset_version=13)
```
It can generate the desired graph.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42835

Reviewed By: houseroad

Differential Revision: D26293823

Pulled By: SplitInfinity

fbshipit-source-id: 300498a2e24b7731b12fa2fbdea4e73dde80e7ea
2021-02-08 13:09:50 -08:00
BowenBao
c7f1595b19 fix bug (#51222) (#51527)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51527

Fix bug in scatter_add symbolic

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203119

Pulled By: SplitInfinity

fbshipit-source-id: e61f024e2daa7bc396fb264b8823a72ebf94ccdb
2021-02-04 12:44:44 -08:00
BowenBao
25b18bb5d7 [ONNX] Support list remove for onnx export (#51373) (#51526)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51526

* Support aten::Delete
* Refactor prepare_inplace_ops_for_onnx into one pass.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203114

Pulled By: SplitInfinity

fbshipit-source-id: ce940bca54a30c39f4b0810f62b0e7b497508f59
2021-02-04 12:44:40 -08:00
BowenBao
6d47e2cff8 [ONNX] Fix opset 11 ConstantChunk with negative dim (#51396) (#51525)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51525

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203115

Pulled By: SplitInfinity

fbshipit-source-id: d76942f7cc5812c8a1cc16891e4956cc658283d8
2021-02-04 12:44:35 -08:00
BowenBao
ba824eb2d6 [ONNX] Update unsafe_chunk() method to support new version 13 of Split operator. (#51415) (#51524)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51524

* Add unsafe_chunk() support and tests in opset 13.

* Use _unsqueeze_helper instead of the Unsqueeze operator.

* Cast the splits to long.

* Change the test to a fixed dimension.

* Update test_pytorch_onnx_onnxruntime.py

* Disable test_loop_with_list for opset 13.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203123

Pulled By: SplitInfinity

fbshipit-source-id: b273aeff8339faa0e8e9f1fcfbf877d1b703209f

Co-authored-by: Negin Raoof <neginmr@utexas.edu>
2021-02-04 12:44:31 -08:00
BowenBao
8ae6b0c5f9 [ONNX] Enable Constant Folding for ONNX Opset 13 (#51096) (#51523)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51523

* Enable Constant Folding for ONNX Opset 13

* fix CI clang-diagnostic

* fix integer types

* address review comments: sort axes and support negative numbers

* update squeeze op constant folding

* fix format warning

* fix clang-format issue

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203111

Pulled By: SplitInfinity

fbshipit-source-id: c33637ab39db614207bd442c6ab464bd09339b4a

Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-02-04 12:44:26 -08:00
BowenBao
1c7d966432 Update error message that displays when encountering an op unsupported for ONNX export. (#51387) (#51522)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51522

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203121

Pulled By: SplitInfinity

fbshipit-source-id: 5920995b735cecb500b12948b8ad91803e576dcb
2021-02-04 12:44:22 -08:00
BowenBao
0e7e4d4217 [ONNX] Add silu operator support for onnx (#51193) (#51519)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51519

Adds support for exporting yolov5 compound-scaled object detection models.
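
For reference, SiLU is x * sigmoid(x), so a symbolic along these lines suffices (a hedged sketch of the idea, not necessarily the exact code added):
```python
# minimal opset 9 SiLU symbolic sketch: silu(x) = x * sigmoid(x)
def silu(g, input):
    return g.op("Mul", input, g.op("Sigmoid", input))
```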

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203120

Pulled By: SplitInfinity

fbshipit-source-id: c70bd730ee5d6f8bdebaf8ff764b94ffe7673808
2021-02-04 12:44:08 -08:00
BowenBao
9191b639ba [ONNX] Enable remaining failed tests in opset13 (#50806) (#51518)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51518

* enable the remaining tests in opset 13

* add comments explaining version-specific test errors

* address review comments: opset 12 unbind problem

* add ignore[no-redef]

* fix format

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203122

Pulled By: SplitInfinity

fbshipit-source-id: e7d95bd2ce13f79f11965be82f640379cd55ff0f

Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-02-04 12:44:04 -08:00
BowenBao
3f185ac18e [ONNX] Export get/set attribute nodes (#50768) (#51517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51517

Fix get/set attributes when getting/setting a model parameter.
This PR also fixes inplace ops in If blocks.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203116

Pulled By: SplitInfinity

fbshipit-source-id: bed6ee6dd92b5b43febc8c584a6872290f8fe33f
2021-02-04 12:43:59 -08:00
BowenBao
1829268e7f [ONNX] Improve error message for parse_arg in symbolic functions (#50512) (#51516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51516

Previously the error message looked like this:
```
RuntimeError: Unexpected node type: onnx::Gather
```
and now it reads:
```
RuntimeError: Expected node type 'onnx::Constant' for argument 'groups' of node 'conv1d', got 'onnx::Gather'.
```

Repro example:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.jit.script
def conv(x, w):
    return F.conv1d(x, w, groups=x.shape[0])

class Net(nn.Module):
    def forward(self, x, w):
        return conv(x, w)

model = Net()

x = torch.randn(8, 8, 512)
w = torch.randn(8, 1, 3)
torch.onnx.export(model,
                  (x, w),
                  "file.onnx",
                  opset_version=12)
```

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203118

Pulled By: SplitInfinity

fbshipit-source-id: 607b22f4cba4baa24154f197914b6817449ab9f8
2021-02-04 12:43:54 -08:00
BowenBao
8dd9fefacb [ONNX] Fix bug in unfold symbolic (#50504) (#51515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51515

Fix bug in unfold symbolic

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203113

Pulled By: SplitInfinity

fbshipit-source-id: 3a1b0013624d918de762a88ac6de8c9cafa0f732
2021-02-04 12:43:50 -08:00
BowenBao
84e9bff85d [ONNX] Replace optional parameters of Resize with placeholder for ops13. (#50574) (#50954)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50954

* Replace optional parameters of Resize with a placeholder for opset 13.

* Use common methods to handle different versions.

* Correct flake8 issue.

* Update per comments.

* Add something to trigger CI again.

* Trigger another round of CI.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050882

Pulled By: SplitInfinity

fbshipit-source-id: aea6205a1ba4a0621fe1ac9e0c7d94b92b6d8f21
2021-01-27 17:49:07 -08:00
BowenBao
68034197e8 [ONNX] Support gelu for fp16 export (#50487) (#50911)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50911

Need to replace the dtype of export-created scalars from float to double (in torch's implicit conversion logic, Python numbers are doubles).

The test case is skipped in CI because the current CI job environment does not have CUDA support.
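
The eager-mode semantics being matched are easy to see: Python floats act as doubles but do not upcast half-precision tensors.
```python
import torch

x = torch.randn(4).half()
print((x * 0.5).dtype)  # torch.float16: the Python double 0.5 does not upcast x
```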

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050889

Pulled By: SplitInfinity

fbshipit-source-id: 1fdde23a68d4793e6b9a82840acc213e5c3aa760
2021-01-27 17:49:02 -08:00
BowenBao
70dcfe2991 [ONNX] Enable _jit_pass_onnx_fold_if only when dynamic_axes is None (#50582) (#50910)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50910

Fixes pytorch/vision#3251. (PR #49410 caused the torchvision test build to fail on three tests: test_faster_rcnn, test_mask_rcnn, and test_keypoint_rcnn.)

The offending PR passes the pytorch UTs because there is a gap between the torchvision and pytorch tests when we merge them - the two sides use different test APIs, which causes some discrepancy.

This PR bridges the gap for the above three tests and disables the _jit_pass_onnx_fold_if pass until it gets fixed: _jit_pass_onnx_fold_if is now allowed only when dynamic_axes is None.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050886

Pulled By: SplitInfinity

fbshipit-source-id: b765ffe30914261866dcc761f0d0999fd16169e3
2021-01-27 17:48:58 -08:00
BowenBao
e90a480d40 [ONNX] Add logical_and, logical_or, logical_xor torch op support in pytorch exporter (#50570) (#50909)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50909

Add logical_and, logical_or, logical_xor torch op support to the PyTorch exporter.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050884

Pulled By: SplitInfinity

fbshipit-source-id: 2db564e9726c18a3477f9268a0ff862cd2c40e4d
2021-01-27 17:48:53 -08:00
BowenBao
b308fb78d1 [ONNX] Add binary_cross_entropy_with_logits op to ONNX opset version 12 (#49675) (#50908)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50908

Fixes #47997
Exporting the operator binary_cross_entropy_with_logits to ONNX opset version 12.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050885

Pulled By: SplitInfinity

fbshipit-source-id: e4167895eed804739aa50481679500a4d564b360
2021-01-27 17:48:49 -08:00
BowenBao
1723ab53c4 [ONNX] Update Reducesum operator for opset 13 (#50532) (#50907)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50907

* update symbolic for squeeze/unsqueeze

* update C++ unsqueeze/squeeze creation

* clang format

* enable tests

* clang format

* remove prints

* remove magic number

* add helper function

* fix build issue

* update opset9 symbolic with helper function

* fix utility test

* fix prim_fallthrough opset skip

* enable reducesum opset 13

* enable embedding_bag, which contains a reducesum op

* add ReduceSum helper

* remove block_listed_operators

* remove local test code

* remove embedding_bag() in opset13 file

* remove unused import

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050888

Pulled By: SplitInfinity

fbshipit-source-id: 88307af6a7880abf94eac126ec1638e962de8c1f

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-01-27 17:48:45 -08:00
BowenBao
7e4c956955 [ONNX] Support opset13 Squeeze and Unsqueeze (#50150) (#50906)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50906

In opset 13, squeeze/unsqueeze are updated to take the axes as an input instead of an attribute.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050883

Pulled By: SplitInfinity

fbshipit-source-id: 7b5faf0e016d476bc75cbf2bfee6918d77e8aecd
2021-01-27 17:48:40 -08:00
BowenBao
1c9347c666 [ONNX] Use parameter values in onnx shape inference (#49706) (#50905)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50905

Adds an additional run of ONNX shape inference after constant folding, since initializers may have changed in ways that affect shape inference.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050881

Pulled By: SplitInfinity

fbshipit-source-id: 9e5d69c52b647133cd3a0781988e2ad1d1a9c09d
2021-01-27 17:45:32 -08:00
neginraoof
137f2a385a [ONNX] Handle sequence output for models (#50599)
Summary:
Duplicate of https://github.com/pytorch/pytorch/issues/46542

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50599

Reviewed By: SplitInfinity

Differential Revision: D25928897

Pulled By: bzinodev

fbshipit-source-id: a898cef7b2d15a287aedd9798ce1423cebf378d4
2021-01-21 15:36:41 -08:00
Brian Vaughan
a9db2f8e7a Revert D24924236: [pytorch][PR] [ONNX] Handle sequence output shape and type inference
Test Plan: revert-hammer

Differential Revision:
D24924236 (adc65e7c8d)

Original commit changeset: 506e70a38cfe

fbshipit-source-id: 78069a33fb3df825af1cb482da06a07f7b26ab48
2021-01-15 05:58:35 -08:00
Negin Raoof
adc65e7c8d [ONNX] Handle sequence output shape and type inference (#46542)
Summary:
Handle sequence output shape and type inference.

This PR fixes the value type of sequence outputs. Prior to this, all sequence-typed model outputs were unfolded in exported ONNX models.
This PR also enables shape inference for sequence outputs, to represent the dynamic shape of these values.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46542

Reviewed By: ezyang

Differential Revision: D24924236

Pulled By: bzinodev

fbshipit-source-id: 506e70a38cfe31069191d7f40fc6375239c6aafe
2021-01-14 21:12:35 -08:00
Spandan Tiwari
aeefe2ce31 [ONNX] ONNX dev branch merge 01-06-2021 (#50163)
Summary:
[ONNX] ONNX dev branch merge 01-06-2021
- [ONNX] Support onnx if/loop sequence output in opset 13 - (https://github.com/pytorch/pytorch/issues/49270)
- Symbolic function for torch.square (https://github.com/pytorch/pytorch/issues/49446)
- [ONNX] Add checks in ONNXSetDynamicInputShape (https://github.com/pytorch/pytorch/issues/49783) …
- [ONNX] Enable export of aten::__derive_index (https://github.com/pytorch/pytorch/issues/49514) …
- [ONNX] Update symbolic for unfold (https://github.com/pytorch/pytorch/issues/49378) …
- [ONNX] Update the sequence of initializers in exported graph so that it is as same as inputs. (https://github.com/pytorch/pytorch/issues/49798)
- [ONNX] Enable opset 13 ops (https://github.com/pytorch/pytorch/issues/49612) …
- [ONNX] Improve error message for supported model input types in ONNX export API. (https://github.com/pytorch/pytorch/issues/50119)
- [ONNX] Add a post-pass for If folding (https://github.com/pytorch/pytorch/issues/49410)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50163

Reviewed By: pbelevich

Differential Revision: D25821059

Pulled By: SplitInfinity

fbshipit-source-id: 9f511a93d9d5812d0ab0a49d61ed0fa5f8066948
2021-01-13 13:51:21 -08:00
Erjia Guan
ca5d9617ba Fix remainder type promotion (#48668)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48668

Combine tests for `fmod` and `remainder`.

## BC-breaking Note:
In order to give the `remainder` operator type promotion, we have to introduce a BC-breaking change.
### 1.7.1:
In the case where the second argument is a Python number, the result is cast to the dtype of the first argument.
```python
>>> x = torch.tensor([1, 2, 3, 4, 5], dtype=torch.int32)
>>> torch.remainder(x, 1.2)
tensor([0, 0, 0, 0, 0], dtype=torch.int32)
```
### This PR:
In the case where the second argument is a Python number, the dtype of the result is determined by type promotion of both inputs.
```python
>>> torch.remainder(x, 1.2)
tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000])
```

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D25869136

Pulled By: ejguan

fbshipit-source-id: 8e5e87eec605a15060f715952de140f25644008c
2021-01-12 22:09:30 -08:00
shubhambhokare1
65122173ab [ONNX] Modified var_mean symbolic to support more combinations of dims (#48949)
Summary:
In the existing implementation of var_mean, the values of dim have to be sequential and start with zero. The formats listed below cause scenarios with incompatible dimensions for the Sub node:
-> dim[1, 2]
-> dim[0, 2]
-> dim[2, 0]

The changes in this PR allow such formats to be supported in var_mean.
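
For example, calls of the following shapes should now export (illustrative eager-mode equivalents):
```python
import torch

x = torch.randn(2, 3, 4)
# dim values that are non-sequential or do not start at zero
var12, mean12 = torch.var_mean(x, dim=[1, 2])
var02, mean02 = torch.var_mean(x, dim=[0, 2])
```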

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48949

Reviewed By: houseroad

Differential Revision: D25540272

Pulled By: SplitInfinity

fbshipit-source-id: 59813a77ff076d138655cc8c17953358f62cf137
2021-01-04 18:10:39 -08:00
David
c439a6534d [ONNX] Handle Sub-block index_put in _jit_pass_onnx_remove_inplace_ops_for_onnx (#48734)
Summary:
This code is independent and ready for review; it is covered by the added UT and the existing UTs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48734

Reviewed By: izdeby

Differential Revision: D25502677

Pulled By: bzinodev

fbshipit-source-id: 788b4eaa5e5e8b5df1fb4956fbd25928127bb199
2021-01-04 15:11:10 -08:00
Samuel Marks
e6779d4357 [*.py] Rename "Arguments:" to "Args:" (#49736)
Summary:
I've written custom parsers and emitters for everything from docstrings to classes and functions. However, I recently came across an issue when I was parsing/generating from the TensorFlow codebase: inconsistent use of `Args:` and `Arguments:` in its docstrings.

```sh
(pytorch#c348fae)$ for name in 'Args:' 'Arguments:'; do
    printf '%-10s %04d\n' "$name" "$(rg -IFtpy --count-matches "$name" | paste -s -d+ -- | bc)"; done
Args:      1095
Arguments: 0336
```

It is easy enough to extend my parsers to support both variants, however it looks like `Arguments:` is wrong anyway, as per:

  - https://google.github.io/styleguide/pyguide.html#doc-function-args @ [`ddccc0f`](https://github.com/google/styleguide/blob/ddccc0f/pyguide.md)

  - https://chromium.googlesource.com/chromiumos/docs/+/master/styleguide/python.md#describing-arguments-in-docstrings @ [`9fc0fc0`](https://chromium.googlesource.com/chromiumos/docs/+/9fc0fc0/styleguide/python.md)

  - https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html @ [`c0ae8e3`](https://github.com/sphinx-contrib/napoleon/blob/c0ae8e3/docs/source/example_google.rst)

Therefore, only `Args:` is valid. This PR replaces them throughout the codebase.

PS: For related PRs, see tensorflow/tensorflow/pull/45420

PPS: The trackbacks automatically appearing below are sending the same changes to other repositories in the [PyTorch](https://github.com/pytorch) organisation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49736

Reviewed By: albanD

Differential Revision: D25710534

Pulled By: soumith

fbshipit-source-id: 61e8ff01abb433e9f78185c2d1d0cbd7c22c1619
2020-12-28 09:34:47 -08:00
Igor Gitman
1b6d18aa7c Adding support for CuDNN-based LSTM with projections (#47725)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/46213

I didn't update the documentation yet; I will add those changes soon. There are a few other things that I didn't do, but I want to clarify whether I should (a usage sketch of the resulting API follows the list below).

1. I didn't expose projections in c++ API: torch/csrc/api/src/nn/modules/rnn.cpp. Let me know if this is desirable and I will add those changes.
2. I didn't expose projections in "lstm_cell" function and "_thnn_differentiable_lstm_cell_backward" functions from aten/src/ATen/native/RNN.cpp. As far as I understand, they are not needed for nn.LSTM CPU execution. For lstm_cell, projections don't bring any real benefit, since if cell is used separately, it can be easily added in Python. For "_thnn_differentiable_lstm_cell_backward", I'm actually not sure where exactly that function is used, so I also disabled projections there for now. Please let me know if I should change that.
3. I added check that projections are not supported for quantized LSTMs to quantized_lstm_<data/input> functions. But I didn't add any checks to LSTMCell code. It seems that since I disabled projections in "lstm_cell" function, they should also not be available for quantized models through any other API than quantized_lstm_<data/input>. Please let me know if I'm not correct and I will add checks to other places.
4. Projections are not supported for CuDNN versions < 7.1.2. Should I add the check for CuDNN version and disable projections in that case? If so, what will be the best way to do that?
5. Currently I added projection weight as the last weight, so the layout is "w_ih, w_hh, b_ih, b_hh, w_hr". This breaks the assumption that biases come after weights and thus I had to add additional if-s in various places. Alternative way would be to have "w_ih, w_hh, w_hr, b_ih, b_hh" layout, in which case the assumption will be true. But in that case I will need to split the loop in get_parameters function from aten/src/ATen/native/cudnn/RNN.cpp. And in some cases, I will still need to add an "undefined" tensor in the 3rd position, because we get all 5 weights from CuDNN most of the time. So I'm not sure which way is better. Let me know if you think I should change to the weights-then-biases layout.
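
A usage sketch, assuming the `proj_size` constructor argument as it eventually shipped on `nn.LSTM` (shapes in the comments are what projections imply, not claims about this exact patch):
```python
import torch

lstm = torch.nn.LSTM(input_size=10, hidden_size=20, proj_size=5, batch_first=True)
x = torch.randn(3, 7, 10)
out, (h, c) = lstm(x)
print(out.shape)  # torch.Size([3, 7, 5]): outputs use the projected size
print(h.shape, c.shape)  # h is projected (1, 3, 5); c keeps hidden_size (1, 3, 20)
```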

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47725

Reviewed By: zou3519

Differential Revision: D25449794

Pulled By: ngimel

fbshipit-source-id: fe6ce59e481d1f5fd861a8ff7fa13d1affcedb0c
2020-12-16 11:27:02 -08:00
shubhambhokare1
e1c1a7e964 [ONNX] Changes to export API to better handle named arguments (#47367)
Summary:
The args parameter of ONNX export is changed to better support optional arguments, such that args is represented as:
args (tuple of arguments or torch.Tensor, optionally ending with a dictionary of named arguments):
            a dictionary that specifies the input for each corresponding named parameter:
            - KEY: str, the named parameter
            - VALUE: the corresponding input
(see the usage sketch below)
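
A hedged usage sketch of this format (the module and argument names below are illustrative, not from the patch):
```python
import torch

class Net(torch.nn.Module):
    def forward(self, x, shift=None):
        return x + shift if shift is not None else x

model = Net()
x = torch.randn(2, 3)
# positional inputs first, then an optional trailing dict of named arguments
torch.onnx.export(model, (x, {"shift": torch.randn(2, 3)}), "net.onnx")
```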

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47367

Reviewed By: H-Huang

Differential Revision: D25432691

Pulled By: bzinodev

fbshipit-source-id: 9d4cba73cbf7bef256351f181f9ac5434b77eee8
2020-12-10 12:31:00 -08:00
BowenBao
e5a98c5ab0 [ONNX] Remove usage of isCompleteTensor() in symbolic functions (#48162)
Summary:
`isCompleteTensor()` only returns true when both the scalar type and the shape are present, and all dimensions in the shape must be static. This strict requirement is unnecessary for many use cases, such as when only the rank or the scalar type needs to be known.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48162

Reviewed By: malfet

Differential Revision: D25340823

Pulled By: bzinodev

fbshipit-source-id: 1fef61f44918f4339dd6654fb725b18cd58d99cf
2020-12-09 11:37:19 -08:00
Guilherme Leobas
34cc77a811 Torch onnx (#48980)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45215

This is a follow up PR of https://github.com/pytorch/pytorch/issues/45258 and https://github.com/pytorch/pytorch/issues/48782

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48980

Reviewed By: zhangguanheng66

Differential Revision: D25399823

Pulled By: ezyang

fbshipit-source-id: 798055f4abbbffecdfab0325884193c81addecec
2020-12-08 19:41:44 -08:00
Edward Yang
88ebf6f894 Revert D25304229: [pytorch][PR] Add type annotations to torch.onnx.* modules
Test Plan: revert-hammer

Differential Revision:
D25304229 (8bc6023d7a)

Original commit changeset: b01b21ddbf86

fbshipit-source-id: bc3308176e2c70423f29f694e9db94828213e7d6
2020-12-07 11:58:03 -08:00
Guilherme Leobas
8bc6023d7a Add type annotations to torch.onnx.* modules (#48782)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45215

This is a follow up PR of https://github.com/pytorch/pytorch/issues/45258

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48782

Reviewed By: heitorschueroff

Differential Revision: D25304229

Pulled By: ezyang

fbshipit-source-id: b01b21ddbf86f908ca08173e68b81fb25851bc81
2020-12-07 08:23:02 -08:00
shubhambhokare1
5fd61de99e [ONNX] Added hardswish symbolic in opset 9 (#48423)
Summary:
Adds support for torch.nn.Hardswish operator in Export

Fixes https://github.com/pytorch/pytorch/issues/43665

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48423

Reviewed By: heitorschueroff

Differential Revision: D25309868

Pulled By: bzinodev

fbshipit-source-id: f5583eb01b1b0e8f0bc95d5054941dd29605d6a5
2020-12-03 23:22:21 -08:00
neginraoof
15bc21c280 [ONNX] Track and list model params for scripting (#47348)
Summary:
List model parameters as inputs following freezing script module.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47348

Reviewed By: heitorschueroff

Differential Revision: D25309756

Pulled By: bzinodev

fbshipit-source-id: cbe679ece934d5e6c418a22f08c1662256914c4c
2020-12-03 23:07:28 -08:00
David
f065087567 [ONNX] Handle dynamic input axes for prim_ConstantChunk (#48176)
Summary:
Converting a model that uses `torch.chunk` does not work with dynamic input axes, because the `split` attribute of `Split` is static in opset 11. Therefore, we convert it using `Slice` instead (supported in opset 11+). This PR also handles the case where the input axis length is not divisible by the number of outputs: PyTorch gives the first (n-1) outputs the same size and the remainder to the last one. A UT is added for it.

The existing `sequence` `split` code cannot be leveraged here, because the `start` and `end` of `Slice` are static there but dynamic here.
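
The PyTorch behavior being matched:
```python
import torch

x = torch.randn(7, 3)
parts = torch.chunk(x, 3, dim=0)
print([p.shape[0] for p in parts])  # [3, 3, 1]: the last chunk takes the remainder
```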

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48176

Reviewed By: bdhirsh

Differential Revision: D25274862

Pulled By: bzinodev

fbshipit-source-id: 7d213a7605ad128aca133c057d6dd86c65cc6de9
2020-12-03 21:59:26 -08:00
shubhambhokare1
16fd1c32c5 [ONNX] Update batch_norm symbolic to handle track_running_stats=False (#47903)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45333

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47903

Reviewed By: ejguan

Differential Revision: D25097509

Pulled By: bzinodev

fbshipit-source-id: 5584dac1150b13d4e0a6e0c39ac2f2caf41d3b38
2020-12-03 17:31:03 -08:00
David
befab0d9d4 [ONNX] Cast Gather index to Long if needed (#47653)
Summary:
The index of the ONNX Gather op must be int32 or int64, but we don't emit this Cast in our converter, so the following UT fails (for opset 11+):
`seq_length.type().scalarType()` is None, so `_arange_cast_helper()` cannot treat it as all-integral and casts everything to float. That float value is then used as a Gather index, so ORT throws an error about a float-typed index.
The fix is to cast the Gather index to Long if it is not already int/long.
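
A minimal sketch of the kind of cast involved (assuming the symbolic_helper cast table of that era; the helper name is hypothetical, not the exact patch):
```python
from torch.onnx import symbolic_helper as sym_help

def _index_to_long(g, index):
    # Cast Gather indices to INT64 when the scalar type is unknown
    # or not already int/long, as ONNX Gather requires.
    if index.type().scalarType() not in ("Long", "Int"):
        index = g.op("Cast", index, to_i=sym_help.cast_pytorch_to_onnx["Long"])
    return index
```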

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47653

Reviewed By: heitorschueroff

Differential Revision: D25298056

Pulled By: mruberry

fbshipit-source-id: 05e3a70ccfd74612233c63ec5bb78e060b211909
2020-12-03 09:34:59 -08:00
Mike Ruberry
6299c870ee Revert D25254920: [pytorch][PR] Add type annotations to torch.onnx.* modules
Test Plan: revert-hammer

Differential Revision:
D25254920 (40a2dd7e1e)

Original commit changeset: dc9dc036da43

fbshipit-source-id: c17cb282ebf90ecbae4023aa63ecbb443a87037d
2020-12-02 02:25:31 -08:00
Nikita Shulga
44016e66c4 Revert D25097324: [pytorch][PR] [ONNX] Cast Gather index to Long if needed
Test Plan: revert-hammer

Differential Revision:
D25097324 (55fc0e9e53)

Original commit changeset: 42da1412d1b9

fbshipit-source-id: 491994a35a8aaf207dd5905191847171586aa4b7
2020-12-01 20:59:28 -08:00
Guilherme Leobas
40a2dd7e1e Add type annotations to torch.onnx.* modules (#45258)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45215

Still need to resolve a few mypy issues before a review. In special, there is an error which I don't know how to solve, see:
```python
torch/onnx/utils.py:437: error: Name 'is_originally_training' is not defined  [name-defined]
        if training is None or training == TrainingMode.EVAL or (training == TrainingMode.PRESERVE and not is_originally_training):
```

`is_originally_training` is used but never defined/imported on [`torch/onnx/utils.py`](ab5cc97fb0/torch/onnx/utils.py (L437)),

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45258

Reviewed By: zhangguanheng66

Differential Revision: D25254920

Pulled By: ezyang

fbshipit-source-id: dc9dc036da43dd56b23bd6141e3ab92e1a16e3b8
2020-12-01 20:41:39 -08:00
David
55fc0e9e53 [ONNX] Cast Gather index to Long if needed (#47653)
Summary:
The index of the ONNX Gather op must be int32 or int64, but we don't emit this Cast in our converter, so the following UT fails (for opset 11+):
`seq_length.type().scalarType()` is None, so `_arange_cast_helper()` cannot treat it as all-integral and casts everything to float. That float value is then used as a Gather index, so ORT throws an error about a float-typed index.
The fix is to cast the Gather index to Long if it is not already int/long.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47653

Reviewed By: ejguan

Differential Revision: D25097324

Pulled By: bzinodev

fbshipit-source-id: 42da1412d1b972d4d82c17fb525879c2575820c9
2020-11-30 21:36:17 -08:00
BowenBao
02e58aabe1 [ONNX] Support nonzero(*, as_tuple=True) export (#47421)
Summary:
Support exporting with `as_tuple=True`.

Example:
`torch.nonzero(x, as_tuple=True)`

This is the same as

`torch.unbind(torch.nonzero(x), 1)`
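
The equivalence is easy to check in eager mode:
```python
import torch

x = torch.tensor([[0, 1], [2, 0]])
a = torch.nonzero(x, as_tuple=True)
b = torch.unbind(torch.nonzero(x), dim=1)
assert all(torch.equal(i, j) for i, j in zip(a, b))
```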

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47421

Reviewed By: malfet

Differential Revision: D24870760

Pulled By: bzinodev

fbshipit-source-id: 06ca1e7ecf95fbf7c28eebce800df958c83264c8
2020-11-30 21:27:43 -08:00
Greg Tarr
550973b675 Missing curly bracket. (#47855)
Summary:
Typo fix

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47855

Reviewed By: smessmer

Differential Revision: D24951767

Pulled By: ezyang

fbshipit-source-id: 8884390370d4d71efd6cee10c3e0b8f55d7e5739
2020-11-13 21:17:24 -08:00
BowenBao
6a4d55f23c [ONNX] Enable onnx shape inference in export by default (#46629)
Summary:
* Enable ONNX shape inference by default.
* ONNX could potentially set the inferred shape on the output instead of in value_infos, so we check both to be sure.
* Small fix in symbol_map to avoid overlooking duplicate symbols.
* Fix scalar_type_analysis to be consistent with PyTorch scalar type promotion logic.
* Correctly handle None dim_param from ONNX inferred shape.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46629

Reviewed By: ailzhang

Differential Revision: D24900171

Pulled By: bzinodev

fbshipit-source-id: 83d37fb9daf83a2c5969d8383e4c8aac986c35fb
2020-11-13 15:09:46 -08:00
Ksenija Stanojevic
9fa681c5e0 [ONNX] Add export of prim::dtype, prim::tolist (#46019)
Summary:
Add export of prim::dtype, prim::tolist.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46019

Reviewed By: malfet

Differential Revision: D24870870

Pulled By: bzinodev

fbshipit-source-id: 7f59e2c8f5ac2dbf83c889c73bd61f96587a296e
2020-11-12 20:34:40 -08:00
David
85c43c3da1 [ONNX] Convert _len based on the first dimension length (#47538)
Summary:
This PR is a bug fix.
As the UT shows, for multi-dimensional tensors the current conversion of _len returns the total number of elements, but it should return the length of the first dimension, as PyTorch's len defines.
A `Squeeze` op is needed at the end to ensure the result is a scalar value.
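
The PyTorch semantics in question:
```python
import torch

x = torch.randn(3, 4)
print(len(x))  # 3: the length of the first dimension, not the element count (12)
```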

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47538

Reviewed By: malfet

Differential Revision: D24870717

Pulled By: bzinodev

fbshipit-source-id: c53c745baa6d2fb7cc1de55a19bd2eedb2ad5272
2020-11-12 20:25:39 -08:00
shubhambhokare1
1abe6e5ad4 [ONNX] Bool inputs to index_put updated symbolic (#46866)
Summary:
Cases with bool inputs to index_put nodes were handled for tracing purposes. This PR adds support for similar situations in scripting.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46866

Reviewed By: malfet

Differential Revision: D24870818

Pulled By: bzinodev

fbshipit-source-id: 2d75ca6f5f4b79d8c5ace337633c5aed3bdc4be7
2020-11-11 12:45:31 -08:00
Negin Raoof
da2e2336b6 [ONNX] Export and shape inference for prim uninitialized in If subblock (#46094)
Summary:
Enable export of prim::Uninitialized in If subblock outputs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46094

Reviewed By: houseroad

Differential Revision: D24838537

Pulled By: bzinodev

fbshipit-source-id: d0719b140393595e6df114ef5cc1bb845e919c14
2020-11-11 12:10:49 -08:00
BowenBao
4de40dad5d [ONNX] Improve stability of gemm export (#46570)
Summary:
Export as `onnx::MatMul` when possible, since it has fewer constraints. Resolves an issue with exporting `weight_norm` in scripting, where `onnx::Gemm` in an unreachable `if` subgraph fails ONNX shape inference.

Updates the skipped-tests list.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46570

Reviewed By: ngimel

Differential Revision: D24657480

Pulled By: bzinodev

fbshipit-source-id: 08d47cc9fc01c4a73a9d78c964fef102d12cc21c
2020-11-10 18:32:33 -08:00
Bowen Bao
14f0675903 [ONNX] Fix dtype for log_softmax export (#46627)
Summary:
Previously the dtype was not properly converted from a constant node to a Python number.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46627

Reviewed By: houseroad

Differential Revision: D24657535

Pulled By: bzinodev

fbshipit-source-id: 33b0b9087d969f2cb0a2fa608fcf6e10956c06bf
2020-11-10 14:34:46 -08:00
BowenBao
0fb1356a98 [ONNX] Fix eye export (#47016)
Summary:
Previously we did not consider the case where the optional `m` is not provided in `torch.eye(n, m)`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47016

Reviewed By: ejguan

Differential Revision: D24735916

Pulled By: bzinodev

fbshipit-source-id: ec9b410fc59f27d77d4ae40cb38a67537abb3cd8
2020-11-10 14:24:33 -08:00
Richard Zou
5ce9c70631 Revert D24735802: [pytorch][PR] [ONNX] Update batch_norm symbolic to handle track_running_stats=False
Test Plan: revert-hammer

Differential Revision:
D24735802 (1a55f5b3ea)

Original commit changeset: bbb29d92d46a

fbshipit-source-id: dcd7af6d50e2776e63ee4bfcb9e4baf08a4771b4
2020-11-10 14:04:06 -08:00
shubhambhokare1
1a55f5b3ea [ONNX] Update batch_norm symbolic to handle track_running_stats=False (#47135)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/45333

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47135

Reviewed By: ejguan

Differential Revision: D24735802

Pulled By: bzinodev

fbshipit-source-id: bbb29d92d46a8a74dac0cb01639ddd4ec121a54c
2020-11-10 11:31:33 -08:00
Ksenija Stanojevic
79f8582289 [ONNX] Add export of aten::is_floating point (#46442)
Summary:
Add export of aten::is_floating point

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46442

Reviewed By: mrshenli

Differential Revision: D24566156

Pulled By: bzinodev

fbshipit-source-id: 91ea95e2c4d4866e2ef51bffe07461de2e31c110
2020-11-09 18:02:47 -08:00
Bowen Bao
e26c1726cf [ONNX] Fix scripting rand/randn/where (#45793)
Summary:
- rand/randn: the type signature of int[] is different in scripting, thus failing the check.
- where: scripting produces dynamic cases which are supported by `unbind` export of higher opsets.
- test_list_pass: this test fails when using the new scripting API; should be fixed by https://github.com/pytorch/pytorch/issues/45369

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45793

Reviewed By: mrshenli

Differential Revision: D24566096

Pulled By: bzinodev

fbshipit-source-id: 6fe0925c66dee342106d71c9cbc3c95cabe639f7
2020-11-09 12:39:31 -08:00
David Fan
6e69a24a1d [ONNX] Reimplement _var_mean to ensure non-negative (#47240)
Summary:
The current `_var_mean` implementation cannot guarantee a non-negative variance, because it actually computes `E(X^2) - (E(X))^2`: numerically, when the number of elements is large and X is close to 0, it can produce negative values (as our UT shows). The new implementation computes `E((X - E(X))^2)`, which guarantees non-negativity because the expectation of a square is always non-negative.

The UT passes with the new implementation (but fails with the existing one), so it is good to go.
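
The numerical difference between the two formulas is easy to demonstrate in eager mode (a sketch; exact values vary run to run):
```python
import torch

x = torch.randn(1_000_000) * 1e-3 + 10.0    # tiny variance around a large mean
naive = (x * x).mean() - x.mean() ** 2      # E(X^2) - (E(X))^2: cancellation, may go negative
stable = ((x - x.mean()) ** 2).mean()       # E((X - E(X))^2): non-negative by construction
print(naive.item(), stable.item())          # naive is unreliable at float32 precision here
```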

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47240

Reviewed By: ejguan

Differential Revision: D24735729

Pulled By: bzinodev

fbshipit-source-id: 136f448dd16622b2b46f40cdf6cb2fccf357c48d
2020-11-07 12:27:09 -08:00
Ksenija Stanojevic
878032d387 [ONNX] Add export of prim::data (#45747)
Summary:
Add export of prim::data

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45747

Reviewed By: bdhirsh

Differential Revision: D24280334

Pulled By: bzinodev

fbshipit-source-id: d21eda84eaba9e690852a72c0e63cbb40eae89bc
2020-11-04 18:15:28 -08:00
shubhambhokare1
1ea14e30f5 [ONNX] Enable NoneType inputs to export API (#45792)
Summary:
Enables the use of NoneType arguments in the inputs tuple of the export API.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45792

Reviewed By: heitorschueroff

Differential Revision: D24312784

Pulled By: bzinodev

fbshipit-source-id: 1717e856b56062add371af7dc09cdd9c7b5646da
2020-10-29 13:56:52 -07:00
BowenBao
129b41226e [ONNX] Support nd mask index in opset >= 11 (#45252)
Summary:
Fixes the below pattern for opset >= 11:

`return tensor[tensor > 0]`

where rank of `tensor` > 1.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45252

Reviewed By: VitalyFedyunin

Differential Revision: D24116945

Pulled By: bzinodev

fbshipit-source-id: 384026cded1eb831bb5469e31ece4fcfb6ae8f2a
2020-10-29 13:32:59 -07:00
Negin Raoof
96bc7faa50 [ONNX] Export var, var_mean and std_mean ops (#45678)
Summary:
Adding export for var, var_mean and std_mean ops

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45678

Reviewed By: houseroad

Differential Revision: D24398811

Pulled By: bzinodev

fbshipit-source-id: bf51422a9e035d521156c0fa6e77898aac83a380
2020-10-21 11:23:54 -07:00
BowenBao
b28b5d3c68 [ONNX] Update squeeze test for opset 9 (#45369)
Summary:
Only under static axes does opset 9 support no-op squeeze when the dim is not 1.
Updating the test case that was setting dynamic axes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45369

Reviewed By: anjali411

Differential Revision: D24280180

Pulled By: bzinodev

fbshipit-source-id: d7cda88ab338a1c41a68052831dcebe739a3843c
2020-10-14 12:53:13 -07:00
Ksenija Stanojevic
6ca03aeb96 [ONNX] Fix flatten operator (#45632)
Summary:
Even when dim is None, there are cases where flatten can be exported.
Also enables test_densenet in scripting mode.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45632

Reviewed By: VitalyFedyunin

Differential Revision: D24116994

Pulled By: bzinodev

fbshipit-source-id: 76da6c073ddf79bba64397fd56b592de850034c4
2020-10-14 12:44:25 -07:00
neginraoof
5ce31b6f3f [ONNX] Improve error handling for adaptive_pool (#45874)
Summary:
Duplicate of https://github.com/pytorch/pytorch/issues/43032
This update would also improve error handling for interpolate with 'area' mode.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45874

Reviewed By: albanD

Differential Revision: D24141266

Pulled By: bzinodev

fbshipit-source-id: 7559f1d6af4f1ef3507c15a1aee76fe01fa433cd
2020-10-07 09:20:35 -07:00
Ansley Ussery
5072728d88 Fix stride printing/parsing formatting (#45156)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/45156

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D24078695

Pulled By: ansley

fbshipit-source-id: dab993277d43b31105c38d12098c37653747b42a
2020-10-06 15:06:46 -07:00
Dmytro Dzhulgakov
5177f8de2b Revert D23398534: [pytorch][PR] [ONNX] Improve error handling for adaptive_pool
Test Plan: revert-hammer

Differential Revision:
D23398534 (45ddeb5ce6)

Original commit changeset: f2d60d40340f

fbshipit-source-id: acc9d6c3d031662c37447fcee027b0c97b8492a7
2020-10-05 15:16:59 -07:00
Negin Raoof
45ddeb5ce6 [ONNX] Improve error handling for adaptive_pool (#43032)
Summary:
This would also improve error handling for interpolate with 'area' mode.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43032

Reviewed By: malfet

Differential Revision: D23398534

Pulled By: bzinodev

fbshipit-source-id: f2d60d40340f46e7c0499ea73c1e39945713418d
2020-10-05 11:53:14 -07:00
BowenBao
3da4cea658 [ONNX] Add dim_param support in export with onnx shape inference (#44920)
Summary:
* Support propagating `dim_param` in ONNX by encoding it as `ShapeSymbol` in the `SymbolicShape` of outputs. If export is called with `dynamic_axes` provided, shape inference will start with these axes set as dynamic (see the usage sketch after this list).
* Add new test file `test_pytorch_onnx_shape_inference.py`, reusing all test cases from `test_pytorch_onnx_onnxruntime.py`, but focus on validating shape for all nodes in graph. Currently this is not enabled in the CI, since there are still quite some existing issues and corner cases to fix. The test is default to run only at opset 12.
* Bug fixes, e.g. for div, _len, the peephole.cpp passes for PackPadded, and LogSoftmaxCrossEntropy.
* This PR depends on existing PRs such as #44332.
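
A usage sketch of exporting with `dynamic_axes` (the model and axis names below are illustrative):
```python
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(3, 4)
torch.onnx.export(
    model, x, "linear.onnx",
    input_names=["inp"], output_names=["out"],
    # axis 0 is exported as a named dim_param instead of a fixed size
    dynamic_axes={"inp": {0: "batch"}, "out": {0: "batch"}},
)
```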

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44920

Reviewed By: eellison

Differential Revision: D23958398

Pulled By: bzinodev

fbshipit-source-id: 00479d9bd19c867d526769a15ba97ec16d56e51d
2020-09-30 21:56:24 -07:00
Negin Raoof
6b42ca2d69 [ONNX] Update embedding_bag export (#44693)
Summary:
Export of embedding bag with dynamic list of offsets.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44693

Reviewed By: malfet

Differential Revision: D23831980

Pulled By: bzinodev

fbshipit-source-id: 3eaff1a0f20d1bcfb8039e518d78c491be381e1a
2020-09-30 13:36:40 -07:00
Negin Raoof
a77d633db1 [ONNX] Fix view for dynamic input shape (#43558)
Summary:
Export of the view op with a dynamic input shape is broken when using tensors with a 0 dimension.
This fix removes the symbolic's use of the static input size to resolve the issue.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43558

Reviewed By: ailzhang

Differential Revision: D23965090

Pulled By: bzinodev

fbshipit-source-id: 628e9d7ee5d53375f25052340ca6feabf7ba7c53
2020-09-28 14:46:51 -07:00
BowenBao
57c18127dc [ONNX] Update div export to perform true divide (#44831)
Summary:
Related: https://github.com/pytorch/pytorch/issues/43787

Now that PyTorch div actually performs true division, update the ONNX export code to stay consistent.
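
The eager-mode behavior for reference:
```python
import torch

a = torch.tensor([3])
b = torch.tensor([2])
print(torch.div(a, b))  # tensor([1.5000]): true division, even for integer inputs
```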

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44831

Reviewed By: eellison

Differential Revision: D23880316

Pulled By: bzinodev

fbshipit-source-id: 3bb8db34142ac4fed4039295ad3c4cb79487987f
2020-09-28 13:53:43 -07:00
Negin Raoof
95a97e51b5 [ONNX] Improve scripting inplace indexing ops (#44351)
Summary:
Fix a couple of issues with scripting inplace indexing in prepare_inplace_ops_for_onnx pass.
1- Tracing index copy (cases like x[1:3] = data) already applies broadcasting on the rhs if needed. The broadcasting node (aten::expand) was missing in scripting cases.

2- Inplace indexing with ellipsis (aten::copy_) is replaced with aten::index_put and then handled with slice+select in this pass.
Support for negative indices for this op is added.

Shape inference is also enabled for scripting tests using new JIT API.
A few more tests are enabled for scripting.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44351

Reviewed By: ezyang

Differential Revision: D23880267

Pulled By: bzinodev

fbshipit-source-id: 78b33444633eb7ae0fbabc7415e3b16001f5207f
2020-09-28 00:32:36 -07:00
liqunfu
c3bf402cbb handle onnx nll with default ignore index (#44816)
Summary:
In the ONNX NegativeLogLikelihoodLoss specification, ignore_index is optional and has no default value.
Therefore, when converting the nll op to ONNX, we need to set the ignore_index attribute even if it is not specified (e.g. ignore_index=-100).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44816

Reviewed By: ezyang

Differential Revision: D23880354

Pulled By: bzinodev

fbshipit-source-id: d0bdd58d0a4507ed9ce37133e68533fe6d1bdf2b
2020-09-27 23:26:19 -07:00
neginraoof
4005afe94b [ONNX] Update narrow for dynamic inputs (#44039)
Summary:
Update narrow for dynamic inputs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44039

Reviewed By: mruberry

Differential Revision: D23742215

Pulled By: bzinodev

fbshipit-source-id: 0d58d2fe996f91a124af988a9a21ee433e842d07
2020-09-27 15:52:57 -07:00
Shinichiro Hamaji
8b00c4c794 [ONNX] Correct a minor typo in warning (#45187)
Summary:
The warning for batch_norm was mentioning dropout.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45187

Reviewed By: glaringlee

Differential Revision: D23873215

Pulled By: ezyang

fbshipit-source-id: 1dcc82ad16522215f49b4cd0fc0e357b2094e4f2
2020-09-25 09:26:51 -07:00
Ksenija Stanojevic
0dda65ac77 [ONNX] add jit pass for lists (#43820)
Summary:
Add jit preprocessing pass for adding int lists.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43820

Reviewed By: albanD

Differential Revision: D23674598

Pulled By: bzinodev

fbshipit-source-id: 35766403a073e202563bba5251c07efb7cc5cfb1
2020-09-21 22:05:25 -07:00
shubhambhokare1
0063512a4b [ONNX] Updates to diagnostic tool to find missing ops (#44124)
Summary:
Moved the description of the tool and changed the function name.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44124

Reviewed By: albanD

Differential Revision: D23674618

Pulled By: bzinodev

fbshipit-source-id: 5db0bb14fc106fc96358b1e0590f08e975388c6d
2020-09-18 10:32:30 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
BowenBao
43406e218a [ONNX] Update ONNX shape inference (#43929)
Summary:
* Support sequence type (de)serialization, which enables ONNX shape inference on sequence nodes.
* Fix shape inference with block inputs/outputs: e.g. Loop and If nodes.
* Fix bugs in symbolics discovered by the coverage of ONNX shape inference.
* Improve debuggability: added more jit logs. For simplicity, the default log level, when jit logging is enabled, will not dump IR graphs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43929

Reviewed By: albanD

Differential Revision: D23674604

Pulled By: bzinodev

fbshipit-source-id: ab6aacb16d0e3b9a4708845bce27c6d65e567ba7
2020-09-14 15:36:19 -07:00
Ksenija Stanojevic
f7cfbac89b [ONNX] Update len symbolic (#43824)
Summary:
Update len symbolic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43824

Reviewed By: izdeby

Differential Revision: D23575765

Pulled By: bzinodev

fbshipit-source-id: 0e5c8c8d4a5297f65e2dc43168993350f784c776
2020-09-14 15:00:44 -07:00
shubhambhokare1
da11d932bc [ONNX] Update arange op to support out argument (#43777)
Summary:
Update arange op to support out argument

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43777

Reviewed By: albanD

Differential Revision: D23674583

Pulled By: bzinodev

fbshipit-source-id: 6fb65e048c6b1a551569d4d2a33223522d2a960c
2020-09-14 14:56:17 -07:00
neginraoof
62ebad4ff9 [ONNX] Export new_empty and new_zeros (#43506)
Summary:
Adding symbolic to export new_empty and new_zeros

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43506

Reviewed By: houseroad

Differential Revision: D23674574

Pulled By: bzinodev

fbshipit-source-id: ecfcdbd4845fd3a3c6618a060129fbeee4df5dd7
2020-09-14 14:48:34 -07:00
Elias Ellison
1f0dcf39fc [JIT] dont optimize device dtype on inline (#43363)
Summary:
Follow up to https://github.com/pytorch/pytorch/pull/36404

Adding prim::device and prim::dtype to the list of skipped peepholes when we run inlining. In the long term, another fix may be to not encode shape/dtype info on the traced graph at all, because it is not guaranteed to be correct; this is currently blocked by ONNX.

Partial fix for https://github.com/pytorch/pytorch/issues/43134

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43363

Reviewed By: glaringlee

Differential Revision: D23383987

Pulled By: eellison

fbshipit-source-id: 2e9c5160d39d690046bd9904be979d58af8d3a20
2020-09-11 17:29:54 -07:00
David Reiss
7d78a6fcdd Update interpolate to use new upsample overloads (#43025)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43025

- Use new overloads that better reflect the arguments to interpolate.
- More uniform interface for upsample ops allows simplifying the Python code.
- Also reorder overloads in native_functions.yaml to give them priority.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/37177

ghstack-source-id: 106938111

Test Plan:
test_nn has pretty good coverage.

Relying on CI for ONNX, etc.

Didn't test FC because this change is *not* forward compatible.

To ensure backwards compatibility, I ran this code before this change

```python
def test_func(arg):
    interp = torch.nn.functional.interpolate
    with_size = interp(arg, size=(16,16))
    with_scale = interp(arg, scale_factor=[2.1, 2.2], recompute_scale_factor=False)
    with_compute = interp(arg, scale_factor=[2.1, 2.2])
    return (with_size, with_scale, with_compute)

traced_func = torch.jit.trace(test_func, torch.randn(1,1,1,1))

sample = torch.randn(1, 3, 7, 7)
output = traced_func(sample)

assert not torch.allclose(output[1], output[2])

torch.jit.save(traced_func, "model.pt")
torch.save((sample, output), "data.pt")
```

then this code after this change

```python
model = torch.jit.load("model.pt")
sample, golden = torch.load("data.pt")
result = model(sample)
for r, g in zip(result, golden):
    assert torch.allclose(r, g)
```

Reviewed By: AshkanAliabadi

Differential Revision: D21209991

fbshipit-source-id: 5b2ebb7c3ed76947361fe532d1dbdd6faa3544c8
2020-09-11 09:59:14 -07:00
shubhambhokare1
f3bf6a41ca [ONNX] Update repeat op (#43430)
Summary:
Update the repeat op so that the inputs to the sizes argument can be a mixture of dynamic and constant inputs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43430

Reviewed By: houseroad

Differential Revision: D23494257

Pulled By: bzinodev

fbshipit-source-id: 90c5e90e4f73e98f3a9d5c8772850e72cecdf0d4
2020-09-04 18:53:31 -07:00
neginraoof
3d7c22a2ce [ONNX] Enable new scripting passes for functionalization and remove_mutation (#43791)
Summary:
Duplicate of https://github.com/pytorch/pytorch/issues/41413
This PR initiates the process of updating the TorchScript backend interface used by the ONNX exporter.

Replace the jit lower-graph pass with the freeze-module pass.

Enable ScriptModule tests for ONNX operator tests (ORT backend) and model tests by default.

Replace the jit remove_inplace_ops pass with remove_mutation, and consolidate all passes for handling inplace ops.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43791

Reviewed By: houseroad

Differential Revision: D23421872

Pulled By: bzinodev

fbshipit-source-id: a98710c45ee905748ec58385e2a232de2486331b
2020-09-04 15:21:45 -07:00
neginraoof
539d029d8c [ONNX] Fix split export using slice (#43670)
Summary:
Fix for exporting split with fixed output shape using slice.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43670

Reviewed By: houseroad

Differential Revision: D23420318

Pulled By: bzinodev

fbshipit-source-id: 09c2b58049fe32dca2f2977d91dd64de6ee9a72f
2020-09-04 10:52:44 -07:00
neginraoof
f6f9d22228 [ONNX] Export KLDivLoss (#41858)
Summary:
Enable export for KLDivLoss

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41858

Reviewed By: mrshenli

Differential Revision: D22918004

Pulled By: bzinodev

fbshipit-source-id: e3debf77a4cf0eae0df6ed5a72ee91c43e482b62
2020-09-02 11:45:13 -07:00
Gao, Xiang
5e97f251a8 Enable TF32 support for cuDNN (#40737)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40737

Reviewed By: mruberry

Differential Revision: D22801525

Pulled By: ngimel

fbshipit-source-id: ac7f7e728b4b3e01925337e8c9996f26a6433fd2
2020-09-01 15:34:24 -07:00
Ksenija Stanojevic
820c4b05a9 [ONNX] Update slice symbolic function (#42935)
Summary:
During scripting, the combination of shape (or size()) and slice (e.g. x.shape[2:]) produces the following error:
 slice() missing 1 required positional argument: 'step'
This happens because aten::slice has 2 signatures:

- aten::slice(Tensor self, int dim, int start, int end, int step) -> Tensor
- aten::slice(t[] l, int start, int end, int step) -> t[]

When a list is passed instead of a tensor, the second of the two slice signatures is called, and since it takes 4 instead of 5 arguments it produces the above exception.
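
A minimal repro of the pattern (a hedged sketch of the shape-slicing construct described above, not the test added by the PR):
```python
import torch

@torch.jit.script
def tail_shape(x: torch.Tensor):
    # x.shape is an int list, so x.shape[2:] dispatches to the t[] slice overload
    return x.shape[2:]

print(tail_shape(torch.randn(2, 3, 4, 5)))  # [4, 5]
```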

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42935

Reviewed By: houseroad

Differential Revision: D23398435

Pulled By: bzinodev

fbshipit-source-id: 4151a8f878c520cea199b265973fb476b17801fe
2020-09-01 02:08:48 -07:00
Akihiro Nitta
f17d7a5556 Fix exception chaining in torch/ (#43836)
Summary:
## Motivation
Fixes https://github.com/pytorch/pytorch/issues/43770.

## Description of the change
This PR fixes exception chaining only in files under `torch/` where appropriate.
To fix exception chaining, I used either:
1. `raise new_exception from old_exception` where `new_exception` alone does not seem descriptive enough for debugging, or where `old_exception` delivers valuable information.
2. `raise new_exception from None` where raising both of `new_exception` and `old_exception` seems a bit noisy and redundant.
I subjectively chose which one to use from the above options.
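
The two options look like this in practice:
```python
def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        # option 1: keep the original exception as context for debugging
        raise RuntimeError(f"failed to load config from {path}") from e

def parse_flag(value):
    try:
        return int(value)
    except ValueError:
        # option 2: suppress the noisy/redundant original traceback
        raise ValueError(f"invalid flag value: {value!r}") from None
```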

## List of lines containing raise in except clause:
I wrote [this simple script](https://gist.github.com/akihironitta/4223c1b32404b36c1b349d70c4c93b4d) using [ast](https://docs.python.org/3.8/library/ast.html#module-ast) to list the lines that `raise` inside an `except` clause.

- [x] 000739c31a/torch/jit/annotations.py (L35)
- [x] 000739c31a/torch/jit/annotations.py (L150)
- [x] 000739c31a/torch/jit/annotations.py (L158)
- [x] 000739c31a/torch/jit/annotations.py (L231)
- [x] 000739c31a/torch/jit/_trace.py (L432)
- [x] 000739c31a/torch/nn/utils/prune.py (L192)
- [x] 000739c31a/torch/cuda/nvtx.py (L7)
- [x] 000739c31a/torch/utils/cpp_extension.py (L1537)
- [x] 000739c31a/torch/utils/tensorboard/_pytorch_graph.py (L292)
- [x] 000739c31a/torch/utils/data/dataloader.py (L835)
- [x] 000739c31a/torch/utils/data/dataloader.py (L849)
- [x] 000739c31a/torch/utils/data/dataloader.py (L856)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L186)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L189)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L424)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1279)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1283)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1356)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1388)
- [x] 000739c31a/torch/testing/_internal/common_utils.py (L1391)
- [ ] 000739c31a/torch/testing/_internal/common_utils.py (L1412)
- [x] 000739c31a/torch/testing/_internal/codegen/random_topo_test.py (L310)
- [x] 000739c31a/torch/testing/_internal/codegen/random_topo_test.py (L329)
- [x] 000739c31a/torch/testing/_internal/codegen/random_topo_test.py (L332)
- [x] 000739c31a/torch/testing/_internal/jit_utils.py (L183)
- [x] 000739c31a/torch/testing/_internal/common_nn.py (L4789)
- [x] 000739c31a/torch/onnx/utils.py (L367)
- [x] 000739c31a/torch/onnx/utils.py (L659)
- [x] 000739c31a/torch/onnx/utils.py (L892)
- [x] 000739c31a/torch/onnx/utils.py (L897)
- [x] 000739c31a/torch/serialization.py (L108)
- [x] 000739c31a/torch/serialization.py (L754)
- [x] 000739c31a/torch/distributed/rpc/_testing/faulty_agent_backend_registry.py (L76)
- [x] 000739c31a/torch/distributed/rpc/backend_registry.py (L260)
- [x] 000739c31a/torch/distributed/distributed_c10d.py (L184)
- [x] 000739c31a/torch/_utils_internal.py (L57)
- [x] 000739c31a/torch/hub.py (L494)
- [x] 000739c31a/torch/contrib/_tensorboard_vis.py (L16)
- [x] 000739c31a/torch/distributions/lowrank_multivariate_normal.py (L100)
- [x] 000739c31a/torch/distributions/constraint_registry.py (L142)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43836

Reviewed By: ailzhang

Differential Revision: D23431212

Pulled By: malfet

fbshipit-source-id: 5f7f41b391164a5ad0efc06e55cd58c23408a921
2020-08-31 20:26:23 -07:00