Commit Graph

454 Commits

Justin Chu
f5701a1f9a [ONNX] Remove unused patching methods (#83006)
### Description

Remove unused patching methods:

- `torch._C.Graph.constant`
- unpatch `torch._C.Node.__getitem__` and move the helper function to `symbolic_helper`

Add typing annotations

### Issue

#76254

### Testing

Unit tested
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83006
Approved by: https://github.com/BowenBao
2022-08-09 19:24:03 +00:00
Justin Chu
c6cdca5c68 [ONNX] Reland #81953 Type utility for converting among JIT, torch and ONNX data types (#82995)
Re-land #81953

Add `_type_utils` for handling data type conversion among JIT, torch and ONNX.

- Replace dictionary / list indexing with methods in ScalarType
- Breaking: **Remove ScalarType from `symbolic_helper`** and move it to `_type_utils`
- Deprecated: "cast_pytorch_to_onnx", "pytorch_name_to_type", "scalar_name_to_pytorch", "scalar_type_to_onnx", "scalar_type_to_pytorch_type" in `symbolic_helper`
- Deprecate the type mappings and lists. Remove all internal references
- Move _cast_func_template to opset 9 and remove its reference elsewhere (clean up). Added documentation for easy discovery

Why: List / dictionary indexing and lookup are error-prone and convoluted.
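
A minimal sketch of the enum-based pattern (illustrative names and a small subset of types, not the actual `_type_utils` API):

```python
import enum

class JitScalarType(enum.Enum):
    """Illustrative subset; the real table covers every JIT scalar type."""
    FLOAT = "Float"
    DOUBLE = "Double"
    BOOL = "Bool"

    @classmethod
    def from_name(cls, name: str) -> "JitScalarType":
        # Explicit, validated lookup instead of indexing parallel lists.
        try:
            return cls(name)
        except ValueError:
            raise ValueError(f"Unknown JIT scalar type: {name!r}") from None

    def onnx_type(self) -> int:
        # onnx.TensorProto enum values: FLOAT=1, BOOL=9, DOUBLE=11.
        return _ONNX_TYPE[self]

_ONNX_TYPE = {
    JitScalarType.FLOAT: 1,
    JitScalarType.BOOL: 9,
    JitScalarType.DOUBLE: 11,
}

assert JitScalarType.from_name("Float").onnx_type() == 1
```
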
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82995
Approved by: https://github.com/kit1980
2022-08-08 23:43:43 +00:00
PyTorch MergeBot
b170a52a09 Revert "[ONNX] Type utility for converting among JIT, torch and ONNX data types (#81953)"
This reverts commit 6ddf4c6f58.

Reverted https://github.com/pytorch/pytorch/pull/81953 on behalf of https://github.com/kit1980 due to Broke internal builds by removing functions without deprecation
2022-08-07 20:15:28 +00:00
Justin Chu
6ddf4c6f58 [ONNX] Type utility for converting among JIT, torch and ONNX data types (#81953)
Add `_type_utils` for handling data type conversion among JIT, torch and ONNX.

- Replace dictionary / list indexing with methods in ScalarType
- Breaking: **Remove ScalarType from `symbolic_helper`** and move it to `_type_utils`
- Breaking: **Remove "cast_pytorch_to_onnx", "pytorch_name_to_type", "scalar_name_to_pytorch", "scalar_type_to_onnx", "scalar_type_to_pytorch_type"** from `symbolic_helper`
- Deprecate the type mappings and lists. Remove all internal references
- Move _cast_func_template to opset 9 and remove its reference elsewhere (clean up). Added documentation for easy discovery

Why: List / dictionary indexing and lookup are error-prone and convoluted.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81953
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-08-05 22:24:45 +00:00
Li-Huai (Allan) Lin
6bdf89b0c7 [ONNX] Fix argmin and argmax test cases (#79503)
Part of #79263

The `keepdim` argument is theoretically ignored when `dim` is not specified (see the [docs](https://pytorch.org/docs/stable/generated/torch.argmin.html)).

Unfortunately, the PyTorch implementation seems to still take it into account, resulting in a non-fully-reduced tensor, which is undefined behavior. Thus, I add a `dim` argument to the tests to make the outputs of PyTorch and ONNX Runtime consistent.
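
For reference, the behavior the updated tests rely on: once `dim` is passed, PyTorch's output shape is well defined and matches ONNX Runtime's.

```python
import torch

x = torch.rand(3, 4)
print(torch.argmin(x).shape)                       # torch.Size([]): full reduction
print(torch.argmin(x, dim=1).shape)                # torch.Size([3])
print(torch.argmin(x, dim=1, keepdim=True).shape)  # torch.Size([3, 1])
```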

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79503
Approved by: https://github.com/justinchuby, https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-08-05 18:09:47 +00:00
titaiwang
b18498a636 [ONNX] Add RReLU eval mode behavior (#82678)
### Description

RReLU behaves the same as LeakyReLU when it is in eval (test) mode ([paper](https://arxiv.org/pdf/1505.00853.pdf)), but the ONNX exporter currently only supports the train-mode behavior, which blocks models that use RReLU.

This PR adds the eval-mode behavior to the RReLU symbolic function, adds a runtime case to validate that the outcome matches the torch result, and updates the related unit tests.

1. Extend RReLU symbolic function with test mode behavior
2. Add onnxruntime UT to validate the usage
3. Update the existing RReLU UT
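
The eval-mode equivalence the new symbolic relies on can be checked directly in PyTorch: with the negative slope fixed at `(lower + upper) / 2`, RReLU matches LeakyReLU.

```python
import torch

x = torch.randn(8)
rrelu = torch.nn.RReLU(lower=1 / 8, upper=1 / 3)
rrelu.eval()  # eval/test mode: negative slope is fixed at (lower + upper) / 2
leaky = torch.nn.LeakyReLU(negative_slope=(1 / 8 + 1 / 3) / 2)
assert torch.allclose(rrelu(x), leaky(x))
```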

### Issue
Fix #82031
Also raise a document issue for torch #82677

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82678
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-08-05 01:46:12 +00:00
Wei-Sheng Chin
4a34cbc7cd [ONNX] exporter native_layer_norm (#81754)
PyTorch has two similar layer-normalization symbols, `aten::layer_norm` and `aten::native_layer_norm`. This PR reuses the `aten::layer_norm` exporter for exporting `aten::native_layer_norm`, with a small refinement. A test is also included. This PR is needed because JIT graphs generated from TorchDynamo and LazyTensor (with the TS backend) may contain `native_layer_norm` instead of `layer_norm`.
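
The reuse is possible because the first output of `aten::native_layer_norm` is exactly the `aten::layer_norm` result (the extra outputs are the saved mean and rstd); a quick check, using the aten binding exposed in the torch namespace:

```python
import torch

x = torch.randn(2, 3, 4)
w, b = torch.ones(4), torch.zeros(4)
out, mean, rstd = torch.native_layer_norm(x, [4], w, b, 1e-5)
assert torch.allclose(out, torch.nn.functional.layer_norm(x, [4], w, b, 1e-5))
```
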
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81754
Approved by: https://github.com/BowenBao, https://github.com/justinchuby
2022-07-27 00:30:19 +00:00
Huy Do
6ea422dd0b Format torch/onnx with ufmt (#82137)
This is the last batch for the new ufmt (black + usort) linter. After this, black linter can finally be replaced. The previous PR to format ONNX tests was #81335
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82137
Approved by: https://github.com/kit1980, https://github.com/AllenTiTaiWang
2022-07-25 22:42:21 +00:00
Wei-Sheng Chin
6691ed7884 [ONNX] Export aten::_log_softmax (#81804)
As title. We are seeing more aten symbols when exporting JIT graphs from TorchDynamo and LazyTensor.

Fixes #81939
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81804
Approved by: https://github.com/BowenBao
2022-07-25 22:02:56 +00:00
Wei-Sheng Chin
e32691dc7a [ONNX] extend add and sub exporter to cover graph non-tensor inputs (#81736)
The ONNX exporter fails when the 3rd input of `aten::add` or `aten::sub` isn't a tensor. This PR fixes that failure.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81736
Approved by: https://github.com/BowenBao
2022-07-25 21:54:52 +00:00
Wei-Sheng Chin
1a9317ca64 [ONNX] Export aten::convolution (#81815)
As title. We encountered this symbol when running MNIST CNN from LazyTensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81815
Approved by: https://github.com/BowenBao
2022-07-25 17:08:01 +00:00
Wei-Sheng Chin
d30784be31 [ONNX] Fix ONNX aten::mul exporter with boolean inputs (#81671)
Continue work left in #72102.

The current exporter always export `aten::mul` to ONNX `Mul`. However, ONNX `Mul` [doesn't support Boolean](https://github.com/onnx/onnx/blob/main/docs/Operators.md#type-constraints-92) so we need to explicitly use ONNX `And` in this case.
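
Why `And` is the right mapping: for Bool tensors, PyTorch's multiply is exactly logical AND.

```python
import torch

a = torch.tensor([True, False, True])
b = torch.tensor([True, True, False])
print(a * b)                    # tensor([ True, False, False])
print(torch.logical_and(a, b))  # identical, which is what ONNX `And` computes
```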

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81671
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi
2022-07-25 14:47:32 +00:00
Justin Chu
ed1da2a9df [ONNX] Quantization support for quantized::cat (#79826)
- Add support for quantized `cat`
- Add type annotations for helper functions

Now we can export

```python
import torchvision.models.quantization as models
from torchvision import transforms

torch_model = models.inception_v3(pretrained=True, quantize=True)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79826
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-07-12 15:44:23 +00:00
Justin Chu
06710ec1b9 [ONNX] Reland: Add quantization support to _avg_pool opset 9 and clean up (#81267)
- Add quantization support to _avg_pool opset 9
- Clean up reused / unused variables in avgpool helper
- Add types
- Sort `__all__`

Reland #79793 (Added `argsort` in `__all__`)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81267
Approved by: https://github.com/BowenBao
2022-07-11 23:58:28 +00:00
PyTorch MergeBot
b98b9eaae5 Revert "[ONNX] Add quantization support to _avg_pool opset 9 and clean up (#79793)"
This reverts commit 356341a3ec.

Reverted https://github.com/pytorch/pytorch/pull/79793 on behalf of https://github.com/malfet due to Broke trunk, see 356341a3ec
2022-07-09 02:40:28 +00:00
Justin Chu
356341a3ec [ONNX] Add quantization support to _avg_pool opset 9 and clean up (#79793)
- Add quantization support to _avg_pool opset 9
- Clean up reused / unused variables in avgpool helper
- Add types
- Sort `__all__`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79793
Approved by: https://github.com/BowenBao
2022-07-08 23:14:01 +00:00
Simon Boehm
b00e4e8017 [ONNX] Convert aten numpy_T to ONNX transpose (#79269)
This PR adds support for exporting `aten::numpy_T` to ONNX

Fixes #51183

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79269
Approved by: https://github.com/justinchuby, https://github.com/garymm
2022-07-02 07:01:48 +00:00
qqaatw
3dec9fd09f [ONNX] Fix hardshrink and softshrink output's shape (#79695)
Part of #79263

Before: when the input is a scalar (shape `[]`), the reduced output of the two functions has shape `[1]`.
After: the output now has shape `[]`, matching PyTorch's behavior.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79695
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-06-28 20:00:10 +00:00
qqaatw
7a8d6c9b1d [ONNX] Fix onnx logical functions' dtype (#79339)
Part of #79263

Before: the output bool was cast back to the input's dtype.
After: it is no longer cast back.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79339
Approved by: https://github.com/BowenBao
2022-06-27 20:50:41 +00:00
qqaatw
efbee797b8 [ONNX] Fix prelu output's shape (#79846)
Part of #79263

Before: The output has `[1]` shape when the input is a scalar.
After: The output has `[]` shape, matching PyTorch's behavior.

The original comment in the code states `torch allows scalar self, and ONNX is ambiguous about whether this is allowed`. In fact, ONNX never clearly indicates whether scalar inputs are allowed for its operators in general; at least in this case, a scalar input seems to be allowed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79846
Approved by: https://github.com/BowenBao
2022-06-22 02:20:27 +00:00
qqaatw
4b52babcd9 [ONNX] Fix any and all outputs' shape (#79371)
Part of #79263

Before: when `dim == None` and `keepdim == 0` (`False`), the reduced output has shape `[1]`.
After: squeeze the output so that the shape is `[]`, matching PyTorch's behavior.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79371
Approved by: https://github.com/AllenTiTaiWang, https://github.com/BowenBao
2022-06-22 02:18:38 +00:00
qqaatw
c47f78d25f [ONNX] Fix linalg norm output's shapes and dtypes (#79506)
Part of #79263

This PR fixes the following matters:

1. Before this fix, the reduced output had shape `[1]` when `dim = None` and `keepdim = False`. Now the output is reduced to shape `[]`, which matches PyTorch's behavior.
2. Before this fix, the output was always cast to `Long`. Now the output is cast to the input's dtype.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79506
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-06-22 02:12:34 +00:00
Justin Chu
c8b9b6266b [ONNX] Fix arg type in _set_training_mode (#78583)
When `TrainingMode.PRESERVE` is set for export, the exporter used to change the model's training mode based on some internal logic. Now we respect the option and do not touch the model's training state.

- Previously, `_set_training_mode`'s behavior did not match what the global variable expects. This PR removes the deprecated `_set_training_mode` and makes the type correct.
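
With the fix, the option behaves as documented; for example:

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Dropout(0.5))
model.train()
torch.onnx.export(
    model,
    (torch.randn(1, 4),),
    "model.onnx",
    training=torch.onnx.TrainingMode.PRESERVE,
    do_constant_folding=False,  # recommended for training-amenable export
)
assert model.training  # the exporter left the training state untouched
```
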
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78583
Approved by: https://github.com/BowenBao
2022-06-15 23:47:12 +00:00
Justin Chu
d3ef5c3fa3 [ONNX] Clean up __init__ in torch.onnx (#78446)
- Move definitions in `__init__` to internal classes and expose them by importing to init (prevent circular dependencies): https://github.com/pytorch/pytorch/wiki/torch.onnx-Namespacing
  - Context classes and enums are moved to `_exporter_states.py`
  - Exceptions are moved to `errors.py`
- Define `__all__` for torch.onnx. https://github.com/pytorch/pytorch/wiki/Public-API-definition-and-documentation
- Moved `utils.__IN_ONNX_EXPORT` to `GLOBALS.in_onnx_export`
- Deprecated `torch.onnx._export`

Precedes #78231

Using this as an aid for finding public functions:

```python
list(filter(lambda x: not x.startswith("_"), torch.onnx.utils.__dict__.keys()))
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78446
Approved by: https://github.com/BowenBao
2022-06-14 04:35:06 +00:00
qqaatw
1a845579b6 [ONNX] Fix inconsistent rand dtype (#79193)
ONNX export of `torch.rand` produced a different data type than PyTorch. This PR makes the exported dtype correct.

Fixes #77845

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79193
Approved by: https://github.com/justinchuby, https://github.com/garymm
2022-06-14 03:06:39 +00:00
qqaatw
2bafb42a0a Add onnx support for movedim and moveaxis (#78931)
Fixes #68918

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78931
Approved by: https://github.com/BowenBao
2022-06-09 19:41:09 +00:00
Li-Huai (Allan) Lin
b45d303dac Add onnx support for torch.lerp (#78891)
Fixes #68384

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78891
Approved by: https://github.com/BowenBao
2022-06-08 17:16:45 +00:00
Justin Chu
3f14ca8f02 [ONNX] Compare types for DeviceObjType instead of strings (#78114)
- Turn all string comparison on node types into `isinstance` checks
- Update error message in the device op to include the unexpected type's name: `RuntimeError: Unsupported: ONNX export of operator prim::device, output type should be 'DeviceObjType', not '<some unknown type>'. Please feel free to request support or submit a pull request on PyTorch GitHub.`
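
Sketched, the change means checking JIT types against the C bindings rather than comparing `kind()` strings; a small runnable illustration:

```python
import torch

def f(x: torch.Tensor):
    return x.device == torch.device("cpu")

graph = torch.jit.script(f).graph
for node in graph.nodes():
    for out in node.outputs():
        # isinstance against the binding replaces `kind() == "DeviceObjType"`.
        if isinstance(out.type(), torch._C.DeviceObjType):
            print(node.kind(), "-> DeviceObjType")
```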

Tested:

Unit test in `test/onnx/test_pytorch_onnx_onnxruntime.py::TestONNXRuntime_opset13::test_to_device`

Follow up of #78085
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78114
Approved by: https://github.com/garymm
2022-06-07 16:57:24 +00:00
Justin Chu
def778527e [ONNX] Quantization support for five ops (#78103)
- Add quantization support for `interpolate`, `avgpool`, `sigmoid` and `add_relu`
- Return the inputs to ListUnpack if the previous node is ListConstruct, so that `ListConstruct` and `ListUnpack` cancel out and are removed in the JIT passes. ONNX doesn't support these ops.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78103
Approved by: https://github.com/garymm
2022-06-03 20:22:07 +00:00
Jinkun Lin
0f05e39870 [ONNX] Fix shape inconsistency when exporting scalar log2 (#78701)
This is a simple fix addressing export when the input to `torch.log2` is a scalar. `log2(x)` is exported as `log(x) / log(2)`, which creates a `log` node followed by a `div` node that divides by a constant. The constant was constructed not as a scalar but as a tensor of shape `[1]`, so a scalar input gets broadcast, producing an output tensor of shape `[1]`, while the torch model's output is a scalar.

```python
import torch
import onnx
import numpy as np

class Model(torch.nn.Module):
    def forward(self, x):
        return torch.log2(x)

x = torch.tensor(1.)  # scalar
model = Model()
torch.onnx.export(model, (x, ), "output.onnx", opset_version=14,
                  output_names=['o0'], input_names=['i0'])
y_trh = model(x).numpy()

model = onnx.load("output.onnx")
print(model.graph.output[0])

import onnxruntime as ort
sess = ort.InferenceSession(
    "output.onnx", providers=['CPUExecutionProvider'])
y_ort = sess.run(['o0'], {'i0': x.numpy()})[0]
assert y_ort.shape == y_trh.shape, 'shape mismatch, ORT is `{}` but PyTorch is `{}`'.format(
    y_ort.shape, y_trh.shape)

```
The resulting ONNX model has an output of shape `[1]` and causes shape mismatch between ORT and PyTorch. The output:
```
name: "o0"
type {
  tensor_type {
    elem_type: 1
    shape {
      dim {
        dim_value: 1
      }
    }
  }
}

Traceback (most recent call last):
  File "test.py", line 501, in <module>
    y_ort.shape, y_trh.shape)
AssertionError: shape mismatch, ORT is `(1,)` but PyTorch is `()`
```
After the fix, the output becomes:
```
name: "o0"
type {
  tensor_type {
    elem_type: 1
    shape {
    }
  }
}
```
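
The issue in miniature: a shape-`[1]` constant broadcasts a scalar operand up to rank 1, while a rank-0 constant preserves it (illustrated with plain tensors; the actual fix changes how the `log(2)` constant is built in the symbolic):

```python
import math
import torch

ln2_rank1 = torch.tensor([math.log(2.0)])  # shape [1]: broadcasts scalar inputs
ln2_rank0 = torch.tensor(math.log(2.0))    # shape []: preserves scalar shape

x = torch.tensor(1.0)  # scalar
print((torch.log(x) / ln2_rank1).shape)  # torch.Size([1])
print((torch.log(x) / ln2_rank0).shape)  # torch.Size([])
```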

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78701
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-06-03 18:41:17 +00:00
Gary Miguel
fb7a761ffd [ONNX] reduce log spam when exporting dropout in training mode (#78309)
The default for `torch.onnx.export` is `TrainingMode.EVAL`:
0d76299ff7/torch/onnx/__init__.py (L63)

That means that this warning is only printed when the caller overrides
that and explicitly specifies that they want training ops like Dropout.
We should assume the user knows what they're doing and not warn.

Also set `do_constant_folding=False` in the dropout related training tests. Without this, warnings are printed like:
```
UserWarning: It is recommended that constant folding be turned off ('do_constant_folding=False') when exporting the model in training-amenable mode
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78309
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2022-06-03 18:41:07 +00:00
Justin Chu
161e931156 [ONNX] Modernize python syntax (#77935)
Use pyupgrade (https://github.com/asottile/pyupgrade) and flynt to modernize Python syntax:

```sh
pyupgrade --py36-plus --keep-runtime-typing torch/onnx/**/*.py
pyupgrade --py36-plus --keep-runtime-typing test/onnx/**/*.py
flynt torch/onnx/ --line-length 120
```

- Use f-strings for string formatting
- Use the new `super()` syntax for class initialization
- Use dictionary / set comprehension
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77935
Approved by: https://github.com/BowenBao
2022-05-24 22:52:37 +00:00
Justin Chu
652ecc9ad9 [ONNX] Fix typo when comparing DeviceObjType (#78085)
#77423 Introduced a typo in

1db9be70a7/torch/onnx/symbolic_opset9.py (L5012-L5017)

where the string `DeviceObjType` was replaced with `_C.DeviceObjType`. This PR reverts the changes to the strings.

**Tested:**

With torchvision,

```
pytest test/test_onnx.py::TestONNXExporter::test_mask_rcnn
pytest -n auto test/test_onnx.py::TestONNXExporter
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78085
Approved by: https://github.com/datumbox, https://github.com/BowenBao, https://github.com/ezyang
2022-05-23 17:29:36 +00:00
Justin Chu
0d76299ff7 [ONNX] Clean up module imports (#77423)
Cleaning up onnx module imports to prepare for updating `__init__`.

- Simplify importing the `_C` and `_C._onnx` name spaces
- Remove alias of the symbolic_helper module in imports
- Remove any module level function imports. Import modules instead
    - Alias `symbolic_opsetx` as `opsetx`
- Fix some docstrings

Requires:
- https://github.com/pytorch/pytorch/pull/77448
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77423
Approved by: https://github.com/BowenBao
2022-05-20 01:56:24 +00:00
Justin Chu
563c2719bf [ONNX] Refactor to remove inline imports - attempt 2 (#77448)
Re-land
- #77142

(diff: https://github.com/pytorch/pytorch/compare/c08b8f0..justinchuby:justinchu/remove-patch2)

Fixed:
- Delay import symbolic_opsets in the registry.

Tested locally with torchvision
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77448
Approved by: https://github.com/garymm
2022-05-16 14:44:24 +00:00
Gary Miguel
bdacc0856c [ONNX] handle equality checks on devices (#77203)
Previously the newly added `test_device_eq` would fail since the inputs
to `Equal` were invalid. Handle this by replacing its inputs with a
fixed tensor `Constant`. This is OK since ONNX doesn't have the concept
of different devices.

Discovered during investigation of
https://github.com/microsoft/onnx-converters-private/issues/9
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77203
Approved by: https://github.com/BowenBao
2022-05-13 21:41:31 +00:00
PyTorch MergeBot
6b366dd3c1 Revert "[ONNX] Refactor to remove inline imports (#77142)"
This reverts commit c08b8f0967.

Reverted https://github.com/pytorch/pytorch/pull/77142 on behalf of https://github.com/malfet
2022-05-13 19:44:17 +00:00
Justin Chu
c08b8f0967 [ONNX] Refactor to remove inline imports (#77142)
Reduce circular dependencies

- Lift constants and flags from `symbolic_helper` to `_constants` and `_globals`
    - Standardized constant naming to make it consistent
- Make `utils` strictly dependent on `symbolic_helper`, removing inline imports from symbolic_helper
- Move side effects from `utils` to `_patch_torch`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77142
Approved by: https://github.com/garymm, https://github.com/BowenBao
2022-05-13 03:46:33 +00:00
Brian Hirsh
47dd092bae add a new at::lift operator, fix torch.tensor for functionalization
This reverts commit 85bd65a880.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77285

Approved by: https://github.com/albanD, https://github.com/ezyang
2022-05-12 13:31:19 +00:00
Justin Chu
5dd1c67776 [ONNX] Format ONNX python with black
Format all onnx python code with black and isort with

```sh
isort torch/onnx/ test/onnx
black torch/onnx/ test/onnx
```

Updated lintrunner config to include these paths.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76754
Approved by: https://github.com/suo, https://github.com/BowenBao
2022-05-05 00:19:22 +00:00
BowenBao
679fc90cdb [ONNX] Support optional type (#68793) (#73284)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73284

Some important ops won't support optional type until opset 16,
so we can't fully test things end-to-end, but I believe this should
be all that's needed. Once ONNX Runtime supports opset 16,
we can do more testing and fix any remaining bugs.

Test Plan: Imported from OSS

Reviewed By: albanD

Differential Revision: D34625646

Pulled By: malfet

fbshipit-source-id: 537fcbc1e9d87686cc61f5bd66a997e99cec287b

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: neginraoof <neginmr@utexas.edu>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
(cherry picked from commit 822e79f31ae54d73407f34f166b654f4ba115ea5)
2022-05-04 20:24:30 +00:00
Thiago Crepaldi
7afe4afd86 Export aten::to("cpu") and aten::to(device="cpu")
Fixes https://github.com/facebookresearch/detectron2/pull/4120 after https://github.com/facebookresearch/detectron2/pull/4132 was merged

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76498
Approved by: https://github.com/garymm
2022-04-28 21:40:06 +00:00
BowenBao
cada2cd3ae [ONNX] Support per channel quantization
Extending the support for quantization with per channel quantization.
An extra attribute `axis` can be found for per channel quantized tensors,
most commonly in quantized weight of Convolution or Linear module.
The PR adds support to correctly parse the `axis` attribute, and map to
ONNX representation in `QuantizeLinear` and `DequantizeLinear`.
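
The `axis` being mapped is the per-channel dimension of the quantized tensor; in PyTorch it looks like this, and it carries over to the `axis` attribute of ONNX `QuantizeLinear`/`DequantizeLinear`:

```python
import torch

w = torch.randn(8, 4)  # e.g. a Linear weight, 8 output channels
scales = torch.rand(8) + 0.1
zero_points = torch.zeros(8, dtype=torch.long)
qw = torch.quantize_per_channel(w, scales, zero_points, axis=0, dtype=torch.qint8)
print(qw.q_per_channel_axis())  # 0 -- the value exported as ONNX `axis`
```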

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76002

Approved by: https://github.com/garymm
2022-04-25 01:14:57 +00:00
Peter Bell
cb37e7a080 Remove F.pad python implementation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73433

Approved by: https://github.com/albanD, https://github.com/jbschlosser
2022-04-23 00:13:20 +00:00
Justin Chu
2f2158ae45 [ONNX] Add typing annotations to onnx symbolic gelu
Add typing annotations to onnx symbolic gelu
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76192
Approved by: https://github.com/BowenBao
2022-04-22 20:48:57 +00:00
shubhambhokare1
e0c1786587 [onnx] Add support for torch.cross and torch.cdist
Add support for following operators:
- torch.cross
- torch.linalg.cross
- torch.cdist
- torch.nn.pairwisedistance
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75278
Approved by: https://github.com/BowenBao
2022-04-21 20:35:25 +00:00
Thiago Crepaldi
690bc1c54d [ONNX] Raise exception for unimplemented ops for non-caffe2 builds
Currently, when an operator symbolic hits an unimplemented scenario, the symbolic may print a warning and return, allowing a non-ONNX operator to be emitted into the graph.

This PR maintains that behavior for 1) Caffe2 builds or 2) non-Caffe2 builds with `operator_export_type != ONNX`. If neither condition is met, the converter raises a `RuntimeError` exception. This is needed so that the exporter can detect unsupported ONNX operators when ATen fallback is used (for non-Caffe2 scenarios).
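
A condensed sketch of the policy (names simplified and illustrative; the real check lives in the symbolic dispatch):

```python
import warnings

from torch.onnx import OperatorExportTypes

def on_unimplemented(op: str, msg: str, operator_export_type, caffe2_build: bool):
    # Warn-and-fall-through only for Caffe2 builds or non-ONNX export types;
    # otherwise fail loudly so unsupported operators are caught at export time.
    if caffe2_build or operator_export_type != OperatorExportTypes.ONNX:
        warnings.warn(f"ONNX export of {op} not supported: {msg}")
    else:
        raise RuntimeError(f"Unsupported: ONNX export of {op}. {msg}")
```
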
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75468
Approved by: https://github.com/BowenBao
2022-04-20 17:53:34 +00:00
Thiago Crepaldi
eab3f42883 Update symbolics policy to emit aten::ATen for Caffe2 build only
Currently ONNX exporter symbolics can emit ATen operators when `operator_export_type == ONNX_ATEN_FALLBACK`. However, this behavior is specific to Caffe2 builds, as the intended use of `ONNX_ATEN_FALLBACK` is to emit ATen operators only when there is no ONNX equivalent.

The reason Caffe2 chooses to emit ATen operators even when an ONNX counterpart exists is performance on its particular engine implementation, which might not hold for other implementations; e.g., ONNX Runtime can optimize the generated ONNX graph into something more efficient.

This PR must be merged only after https://github.com/pytorch/pytorch/pull/73954
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74680
Approved by: https://github.com/garymm, https://github.com/malfet
2022-04-19 15:57:54 +00:00
Thiago Crepaldi
9bbe1d632e Fix ONNX ATen fallback for non-caffe2 engines
This PR introduces 3 BC changes:

First, this PR propagates `BUILD_CAFFE2` flag to `libtorch` and `libtorch_python`, which is necessary for non-caffe2 ONNX runtimes when using `ONNX_ATEN_FALLBACK` operator export type.

Second, as a complement of https://github.com/pytorch/pytorch/pull/68490, this PR refactors Caffe2's Aten ops symbolics to consider not only the `operator_export_type` (aka `ONNX_ATEN_FALLBACK`) to emit Caffe2 Aten ops, but also whether `BUILD_CAFFE2` (which is called `torch.onnx._CAFFE2_ATEN_FALLBACK` in python binding) is set.

Lastly, it renames `onnx::ATen` to `aten::ATen` for ONNX spec consistency in a BC fashion.
ONNX doesn't have an `ATen` op in its spec, but the PyTorch ONNX converter emits them. Non-Caffe2 backend engines would be misled by such an operator's name/domain. A non-ideal workaround would be to handle ATen ops based on their name and ignore the (non-compliant) domain. Moreover, users could incorrectly file bugs against either ONNX or ONNX Runtime when they inspect the model and notice the presence of an unspecified ONNX operator.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73954
Approved by: https://github.com/BowenBao, https://github.com/malfet, https://github.com/garymm, https://github.com/jiafatom
2022-04-14 23:18:45 +00:00
BowenBao
f78e0fc956 [ONNX] Support aminmax
Support exporting `torch.aminmax`.
One of the use cases is exporting fake-quantized models. The observer calls `torch.aminmax` at 1601a4dc9f/torch/ao/quantization/observer.py (L447).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75714
Approved by: https://github.com/garymm
2022-04-13 19:18:46 +00:00
ganler
c1f0e6e763 [ONNX] Make Non-Float Op Exportation Compatible to Avoid Invalid ONNX Models
There are a few ONNX operators that do not support non-float (e.g., integer) inputs in earlier opset versions. For example, Clip only supports non-float types starting from [opset 12](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#type-constraints-280); older versions like [opset 6](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#type-constraints-107) cannot deal with integer types.

I initially found such a bug in Clip (https://github.com/pytorch/pytorch/pull/70584), but later found more:
1. Clip < 12;
2. Min/Max < 12;
3. ReLU < 14;
4. Pad < 11;

In PyTorch, if we export Max-11 with integer inputs, the export will actually succeed; however, the model fails when imported by other frameworks like ONNX Runtime.

```python
import torch

class Net(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()

    def forward(self, x: torch.Tensor):
        return torch.max(x, x + 1)

net = Net()
onnx_model = 'test.onnx'

torch.onnx.export(net, (torch.zeros((3, 3), dtype=torch.int32),),
                  onnx_model, verbose=True, opset_version=11)
```

This is unexpected behavior, as we want to ensure that every model exported by PyTorch is valid (https://github.com/pytorch/pytorch/pull/70584#issuecomment-1020636579). Theoretically, we could simply forbid such cases (e.g., `Clip<int>` < 12, `ReLU<int>` < 14). But we can instead enhance the compatibility and flexibility of PyTorch by casting the inputs of those operators to float tensors, applying the float version of the operator, and then casting the result back to the original type.

This PR implements the second approach to achieve better compatibility in PyTorch.
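
A sketch of the cast-through-float idea as a plain tensor computation (the actual change emits ONNX `Cast` nodes inside the symbolics):

```python
import torch

def clip_int_via_float(x: torch.Tensor, lo: float, hi: float) -> torch.Tensor:
    # Cast to float, apply the float-only op, cast back to the original dtype.
    return torch.clamp(x.float(), lo, hi).to(x.dtype)

x = torch.arange(-3, 4, dtype=torch.int32)
print(clip_int_via_float(x, -1.0, 2.0))
# tensor([-1, -1, -1,  0,  1,  2,  2], dtype=torch.int32)
```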

@garymm  @thiagocrepaldi

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72401
Approved by: https://github.com/garymm, https://github.com/thiagocrepaldi
2022-04-11 23:26:44 +00:00
Eugene Lyapustin
d88a116015 Fix exporting models to ONNX without allow_tf32 in _convolution call
Fixes #75098
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75099
Approved by: https://github.com/BowenBao
2022-04-08 17:23:36 +00:00
Thiago Crepaldi
cbabd8f9f8 [ONNX] Raise exception for mixed precision input for BatchNormalization
Fixes #72494

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74875
Approved by: https://github.com/garymm
2022-04-08 13:47:09 +00:00
BowenBao
50b6959c0f [ONNX] Support torch.amax and torch.amin
Fixes #75167

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75268
Approved by: https://github.com/garymm
2022-04-07 00:16:26 +00:00
shubhambhokare1
ef41201d4a [ONNX] Add bucketize symbolic
Add support for torch.bucketize
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74856
Approved by: https://github.com/garymm
2022-04-06 20:13:45 +00:00
BowenBao
97ae431e3e [ONNX] Add symbolic support for torch.nn.cosinesimilarity (#72128) (#73283)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73283

* Add support for torch.nn.cosine_similarity

* Remove fallback logic

* Fix onnx test failures

* Fix opset version

* Modify rtol

* Add aten fallback mode

* fix mypy

* gate with caffe2 fallback

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D34625650

Pulled By: malfet

fbshipit-source-id: bf15d32b1d7055d0ca166d9941ba90b5c8e81cc2
(cherry picked from commit 7086031c52e1bea9bead6966d44e2635060194db)
2022-03-09 14:26:18 +00:00
BowenBao
9210e8f540 [ONNX] Adds overload_name to Aten op (#69378) (#73280)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73280

This PR adds a new attribute, `overload_name`, to the ATen node so that third-party applications can implement calls to libtorch without using PyTorch source code.

This is necessary because torch's torch::jit::findOperatorFor(fullname) requires a full name, including operator and overload names.

The ATen op was originally created for Caffe2, which leveraged the availability of the PyTorch YAML files to create calls to the ATen operators directly, not relying on torch::jit::findOperatorFor.

The first part of the PR refactors all symbolics that create ATen ops so that there is a single helper for this operator. Next, all symbolics are updated to pass in the relevant overload name, or an empty string if not applicable.
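
A sketch of such a helper (`g` is the exporter's graph context; the exact name and signature in the PR may differ):

```python
def _aten_op(g, operator: str, *args, overload_name: str = "", **kwargs):
    # Emit an ATen fallback node carrying both the operator name and its
    # overload name, so consumers can resolve it via torch::jit::findOperatorFor.
    return g.op(
        "ATen", *args, operator_s=operator, overload_name_s=overload_name, **kwargs
    )
```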

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D34625645

Pulled By: malfet

fbshipit-source-id: 37d58cfb5231833768172c122efc42edf7d8609a
(cherry picked from commit e92f09117d3645b38bc3235b30aba4b4c7c71dfa)
2022-03-09 14:26:18 +00:00
BowenBao
4a74285e97 [ONNX] Rewrite linspace symbolic
The original symbolic relies on ONNX Range with float inputs, and the results are unstable due to precision issues.
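
The shape of the rewrite, as a reference computation: generate the index sequence with an exact integer range and apply a single affine transform, rather than stepping a float `Range` (illustrative sketch, assuming `steps > 1`; not the symbolic itself):

```python
import torch

def linspace_ref(start: float, end: float, steps: int) -> torch.Tensor:
    idx = torch.arange(steps, dtype=torch.float64)  # exact integer sequence
    return (start + idx * ((end - start) / (steps - 1))).to(torch.float32)

print(torch.allclose(linspace_ref(0.0, 1.0, 5), torch.linspace(0.0, 1.0, 5)))  # True
```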

Fixes #73559

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73610
Approved by: https://github.com/fatcat-z, https://github.com/garymm
2022-03-04 21:45:52 +00:00
BowenBao
80291dff43 [ONNX] Add torch.nan_to_num and torch.maximum/minimum symbolic (#72090)
* Add nan_to_num symbolic

* Restructure if statements

* Add torch.maximum and torch.minimum support

* Squash tests

* Add dependency on input dtype

* Add documentation

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73103
2022-02-23 06:38:11 +00:00
BowenBao
40de6b80ee [ONNX] Add infra for quantized model export and support quantized mobilenet v3 (#72215)
* Add infrastructure and helper functions to enable future work for other quantized operators and models.
* Add export for quantized operators needed by torchvision mobilenet v3 large.
    * ATen namespace: hardsigmoid, flatten, adaptive_avg_pool, quantize_per_tensor, dequantize.
    * Quantized namespace: conv2d, conv2d_relu, hardswish, add, mul.
* Numerous bug fixes, in unpack_quantized_weight.cpp, symbolic functions, and unit test.

Co-authored-by: BowenBao <bowbao@microsoft.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73102
2022-02-23 06:22:58 +00:00
BowenBao
cc2aad2ef2 [ONNX] Add symbolic for torch.addcmul (#72126)
* Add addcmul op

* Remove required_grad

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73101
2022-02-22 22:48:18 +00:00
shubhambhokare1
671c8a459a [ONNX] Add pixel_unshuffle support in opset 9
Currently we are unable to utilize ONNX's SpaceToDepth operator due to the lack of a `mode_s` attribute, hence we add an alternative symbolic in opset 9 to support pixel_unshuffle (see the sketch after the list below).

- Adds support for pixel_unshuffle in opset9
- Adds support for dynamic input shapes for pixel_shuffle and pixel_unshuffle
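
The opset-9 alternative boils down to the standard reshape/permute decomposition; a reference version in plain PyTorch:

```python
import torch

def pixel_unshuffle_ref(x: torch.Tensor, r: int) -> torch.Tensor:
    # (N, C, H*r, W*r) -> (N, C*r*r, H, W) via reshape -> permute -> reshape.
    n, c, hh, ww = x.shape
    h, w = hh // r, ww // r
    return x.view(n, c, h, r, w, r).permute(0, 1, 3, 5, 2, 4).reshape(n, c * r * r, h, w)

x = torch.randn(1, 2, 4, 6)
assert torch.equal(pixel_unshuffle_ref(x, 2), torch.nn.functional.pixel_unshuffle(x, 2))
```
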
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72449
2022-02-19 00:15:16 +00:00
BowenBao
5843fea94d [ONNX] Add export support for linalg norm (#66575)
* Add matrix_norm

* Add vector norm

* Fix flake

* Fix flake

* nit fixes

* Nit fixes

* Restructure and add comments

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72987
2022-02-18 18:30:16 +00:00
BowenBao
32f6a1e2a2 [ONNX] First version of quantized model export: Support quantized.Linear (#69232)
Co-authored-by: David Fan <jiafa@microsoft.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72986
2022-02-18 18:27:26 +00:00
ganler
3d8b6d3361 fix: onnx PReLU unidirectional broadcasting
Fixes https://github.com/pytorch/pytorch/issues/70570

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70571
2022-02-16 22:28:08 +00:00
Ryan Spring
4f8b986e28 Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds approximate boolean flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enable Tanh Gelu in NvFuser

```
def gelu(x, approximate : str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: VitalyFedyunin

Differential Revision: D33894937

Pulled By: jbschlosser

fbshipit-source-id: b65e8fb6ea66168af8f34f45ed50e92737a33851
(cherry picked from commit 6e986f91a9)
2022-02-14 03:40:32 +00:00
BowenBao
04c5d978b9 [ONNX] Refactor _run_symbolic_function (#67573) (#68491)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68491

* Allows implementing symbolic functions for domains other than `aten`, for example `prim`, in symbolic_opset#.py.
* Allows symbolic function to access extra context if needed, through `SymbolicFunctionState`.
  * Particularly, the `prim::PythonOp` special case can access node without the need of passing node through inputs. Updates will be made downstreams, and in a follow-up PR we will remove the previous workaround in exporter.
* `prim::Loop`, `prim::If`, etc are now moved outside of `_run_symbolic_function` from utils.py, and to symbolic_opset9.py.

Motivation for this change:
- Better maintainability and reduced complexity. It is easier to add symbolics for operators, both simple and complex ones (that need additional context), without the former needing to know about the existence of the latter.
- The design idea was long outdated: prim ops are no longer rare special cases, and they shouldn't all be handled inside `_run_symbolic_function`; as a result that function had become too clumsy. There were also prim op symbolics added in symbolic_opset#.py with the signature `prim_[opname]`, creating separation and confusion.

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483782

Pulled By: malfet

fbshipit-source-id: f9affc31b1570af30ffa6668da9375da111fd54a

Co-authored-by: BowenBao <bowbao@microsoft.com>
(cherry picked from commit 1e04ffd2fd)
2022-02-11 18:35:35 +00:00
BowenBao
eb4238fc26 Allow caffe2-specific graph transformations for OperatorExportTypes.ONNX_ATEN_FALLBACK when BUILD_CAFFE2 is ON (#67460) (#68490)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68490

The use of ATEN as a fallback operator during ONNX conversion is important for increasing operator coverage or even provide more efficient implementations over some ONNX ops.

Currently this feature is available through `OperatorExportTypes.ONNX_ATEN_FALLBACK`,
but it also performs changes to the graph that are runnable by Caffe2, only.

This PR restricts Caffe2-specific graph transformations to the `ONNX_ATEN_FALLBACK` operator export type, and only when PyTorch is built with Caffe2 support (i.e., BUILD_CAFFE2=1 during build).

The first version of this PR introduced a new operator export type `ONNX_ATEN__STRICT_FALLBACK`,
which essentially is the same as `ONNX_ATEN_FALLBACK` but without caffe2 transformations.
It was preferred not to introduce a new operator export type, but to refine the existing ATen fallback one.

## BC-breaking note
### The global constant `torch.onnx.PYTORCH_ONNX_CAFFE2_BUNDLE` is removed in favor of a less visible `torch.onnx._CAFFE2_ATEN_FALLBACK`.
`PYTORCH_ONNX_CAFFE2_BUNDLE` is really a dead code flag always set to False.
One alternative would be fixing it, but #66658 disables Caffe2 build by default.
Making a Caffe2 feature a private one seems to make more sense for future deprecation.

### The method `torch.onnx.export` now defaults to ONNX when `operator_export_type` is not specified.
Previously, `torch.onnx.export`'s `operator_export_type` was intended to default to `ONNX_ATEN_FALLBACK` when `PYTORCH_ONNX_CAFFE2_BUNDLE` was set, but that could never happen, as `PYTORCH_ONNX_CAFFE2_BUNDLE` is always undefined.

 Co-authored-by: Nikita Shulga <nshulga@fb.com>

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483781

Pulled By: malfet

fbshipit-source-id: e9b447db9466b369e77d747188685495aec3f124
(cherry picked from commit 5fb1eb1b19)
2022-02-10 03:26:48 +00:00
Nikita Shulga
74c44ba9d6 Revert D33850228: [pytorch][PR] Implement Tanh Gelu Approximation
Test Plan: revert-hammer

Differential Revision:
D33850228 (23d03025dc)

Original commit changeset: 3cc33fb298e4

Original Phabricator Diff: D33850228 (23d03025dc)

fbshipit-source-id: 9436e7df73c2b2e2011f321674f24973316d3692
(cherry picked from commit c9efb58223)
2022-01-31 17:44:19 +00:00
Ryan Spring
23d03025dc Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds approximate boolean flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enable Tanh Gelu in NvFuser

```
def gelu(x, approximate : str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: cpuhrsch

Differential Revision: D33850228

Pulled By: jbschlosser

fbshipit-source-id: 3cc33fb298e480d7ecc5c67716da019d60c6ab33
(cherry picked from commit 3a53b3e94f)
2022-01-31 17:07:45 +00:00
Joel Schlosser
cb823d9f07 Revert D33744717: [pytorch][PR] Implement Tanh Gelu Approximation
Test Plan: revert-hammer

Differential Revision:
D33744717 (f499ab9cef)

Original commit changeset: d64532a562ed

Original Phabricator Diff: D33744717 (f499ab9cef)

fbshipit-source-id: 396c3f63de5865f894dbc353d0790a01a624be93
(cherry picked from commit e9fb2d1db1)
2022-01-28 18:35:01 +00:00
Ryan Spring
f499ab9cef Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds approximate boolean flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enable Tanh Gelu in NvFuser

```
def gelu(x, approximate : str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        return x * normcdf(x)
```

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: mikaylagawarecki

Differential Revision: D33744717

Pulled By: jbschlosser

fbshipit-source-id: d64532a562ed53247bb4fa52bb16722634d5c187
(cherry picked from commit 4713dd9cca)
2022-01-28 16:59:09 +00:00
BowenBao
4b47047dae [ONNX] Add support for shrink ops (#66969) (#68492)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68492

* Initial commit

* Fix flake issue

* Add test tags

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D32483827

Pulled By: msaroufim

fbshipit-source-id: 41c623712524465b877d0fe0e2f4001d475bf2ce
2022-01-10 11:38:31 -08:00
hwangdeyu
c76c6e9bd3 [ONNX] Add BFloat16 type support when export to ONNX (#66788)
Summary:
- PyTorch and ONNX both support BFloat16; add it to unblock some mixed-precision training models.
- Supports the PyTorch TNLG model using BFloat16 tensors for the inputs/outputs of the layers that run on the NPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66788

Reviewed By: jansel

Differential Revision: D32283510

Pulled By: malfet

fbshipit-source-id: 150d69b1465b2b917dd6554505eca58042c1262a
2021-12-14 12:23:32 -08:00
Brian Hirsh
457ba1dd3e Porting index_add to structured kernels, add an out variant (#65993)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65993

This PR attempts to port `index_add` to structured kernels, but does more than that:

* Adds an `out=` variant to `index_add`
* Revises `native_functions.yaml` registrations, to not have multiple entries and instead pass default value to `alpha`.
* Changes in `derivatives.yaml` file for autograd functioning
* Revises error messages, please see: https://github.com/pytorch/pytorch/pull/65993#issuecomment-945441615

Follow-up PRs in near future will attempt to refactor the OpInfo test, and will give another look at tests in `test/test_torch.py` for this function. (hence the use of ghstack for this)

~This is WIP because there are tests failing for `Dimname` variant on mobile/android builds, and I'm working on fixing them.~

Issue tracker: https://github.com/pytorch/pytorch/issues/55070

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D32646426

fbshipit-source-id: b035ecf843a9a27d4d1e18b202b035adc2a49ab5
2021-12-14 11:57:13 -08:00
vfdev-5
3da2e09c9b Added antialias flag to interpolate (CPU only, bilinear) (#65142)
Summary:
Description:
- Added antialias flag to interpolate (CPU only)
  - forward and backward for bilinear mode
  - added tests

### Benchmarks

<details>
<summary>
Forward pass, CPU. PTH interpolation vs PIL
</summary>

Cases:
- PTH RGB 3 Channels, float32 vs PIL RGB uint8 (apples vs pears)
- PTH 1 Channel, float32 vs PIL 1 Channel Float

Code: https://gist.github.com/vfdev-5/b173761a567f2283b3c649c3c0574112

```
# OMP_NUM_THREADS=1 python bench_interp_aa_vs_pillow.py

Torch config: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - CPU capability usage: AVX2
  - CUDA Runtime 11.1
  - NVCC architecture flags: -gencode;arch=compute_75,code=sm_75
  - CuDNN 8.0.5
  - Build settings: BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_PYTORCH_QNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=1, USE_CUDNN=1, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=0, USE_OPENMP=ON,

Num threads: 1
[------------------------ Downsampling: torch.Size([1, 3, 906, 438]) -> (320, 196) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                2.9                |          3.1
      channels_last non-contiguous torch.float32  |                2.6                |          3.6

Times are in milliseconds (ms).

[------------------------ Downsampling: torch.Size([1, 3, 906, 438]) -> (460, 220) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                3.4                |          4.0
      channels_last non-contiguous torch.float32  |                3.4                |          4.8

Times are in milliseconds (ms).

[------------------------ Downsampling: torch.Size([1, 3, 906, 438]) -> (120, 96) -------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                1.6                |          1.8
      channels_last non-contiguous torch.float32  |                1.6                |          1.9

Times are in milliseconds (ms).

[----------------------- Downsampling: torch.Size([1, 3, 906, 438]) -> (1200, 196) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                9.0                |          11.3
      channels_last non-contiguous torch.float32  |                8.9                |          12.5

Times are in milliseconds (ms).

[----------------------- Downsampling: torch.Size([1, 3, 906, 438]) -> (120, 1200) ------------------------]
                                                  |  Reference, PIL 8.3.2, mode: RGB  |  1.10.0a0+git1e87d91
1 threads: -------------------------------------------------------------------------------------------------
      channels_first contiguous torch.float32     |                2.1                |          1.8
      channels_last non-contiguous torch.float32  |                2.1                |          3.4

Times are in milliseconds (ms).

[--------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (320, 196) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |               1.2               |          1.0

Times are in milliseconds (ms).

[--------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (460, 220) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |               1.4               |          1.3

Times are in milliseconds (ms).

[--------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (120, 96) ---------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |              719.9              |         599.9

Times are in microseconds (us).

[-------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (1200, 196) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |               3.7               |          3.5

Times are in milliseconds (ms).

[-------------- Downsampling: torch.Size([1, 1, 906, 438]) -> (120, 1200) --------------]
                                 |  Reference, PIL 8.3.2, mode: F  |  1.10.0a0+git1e87d91
1 threads: ------------------------------------------------------------------------------
       contiguous torch.float32  |              834.4              |         605.7

Times are in microseconds (us).

```

</details>

Code is moved from torchvision: https://github.com/pytorch/vision/pull/4208

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65142

Reviewed By: mrshenli

Differential Revision: D32432405

Pulled By: jbschlosser

fbshipit-source-id: b66c548347f257c522c36105868532e8bc1d4c6d
2021-11-17 09:10:15 -08:00
Gary Miguel
f57c63032e [ONNX] Fix reciprocal when input is not floating point (#67471) (#67808)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67808

torch.reciprocal implicitly casts the inputs to float, and ONNX
Reciprocal requires floating point inputs.

Also separate the reciprocal test from other tests, and test different
input types.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181307

Pulled By: malfet

fbshipit-source-id: 3e1109b3c85a49c51dc713656a900b4ee78c8340
2021-11-08 14:37:07 -08:00
Gary Miguel
eb22d06e5e [ONNX] Use human readable enum for dtype scalars (#66822) (#67807)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67807

Also make quoting of string literals consistent.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181309

Pulled By: malfet

fbshipit-source-id: e1053701e3589f0310d8b5ef920359c03c6713f0
2021-11-08 14:37:05 -08:00
Gary Miguel
958d517643 [ONNX] Fix new_full and full_like for Python 3.9 (#67124) (#67806)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67806

Previously new_full would fail with errors like:
`TypeError: only integer tensors of a single element can be converted to an index`

And full_like would trigger warnings like:
`DeprecationWarning: an integer is required (got type float).  Implicit conversion to integers using __int__ is deprecated, and may be removed in a future version of Python.`

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181301

Pulled By: malfet

fbshipit-source-id: 2cf262cfef36c18e7b2423efe1e1d4fa3438f0ba

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-11-08 14:37:03 -08:00
Gary Miguel
9deb602726 [ONNX] Use Reciprocal operator instead of Div(1, x). (#65382) (#67271)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67271

* [ONNX] Use Reciprocal operator instead of Div(1, x).

This is a more readable and perhaps more performant way to export
torch.reciprocal.

* Use Reciprocal in caffe to operator to import onnx

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962519

Pulled By: malfet

fbshipit-source-id: d926e75b1c8312b9a980c9a1207a1a93ba0c71e0

Co-authored-by: take-cheeze <takechi101010@gmail.com>
2021-10-28 08:01:21 -07:00
Shubham Bhokare
d9a5668983 [ONNX] Add dim argument to all symbolic (#66093) (#67270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67270

* Add dim argument to all symbolic

* All symbolic depends on any symbolic

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D31962518

Pulled By: malfet

fbshipit-source-id: f7ee05cf4eff5880fc508154267e060952b5b42d
2021-10-27 13:46:31 -07:00
Nikita Shulga
0bc9928f31 [ONNX] Symbolic: dynamic input for OneHot, bool for Einsum (#65940) (#66147)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66147

Symbolic: dynamic input for OneHot, bool for Einsum

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424094

fbshipit-source-id: 76bea22b29c93d1621c597fe7ab59deb3685087f

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-10-22 13:46:24 -07:00
Nikita Shulga
2c0fe338da [ONNX] Modify softplus symbolic to support beta!=1 (#65001) (#66146)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66146

* Modify softplus symbolic to support beta!=1

* Remove parse args

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424096

fbshipit-source-id: 971af54a28141737ccb17510ada03b0651be2a63
2021-10-22 13:46:22 -07:00
Nikita Shulga
136abf5aff [ONNX] Update sum symbolic to handle dtypes (#64289) (#66141)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66141

* Update aten::sum symbolic for dtype

* Remove nesting and modify opeartor tests

* Fix expect files

[ONNX] Fix expect files added in #64289 (#65356)

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424091

fbshipit-source-id: d4af21e9f0d7e1c68bf6ef2f3e385db84b4c53f3
2021-10-22 13:46:12 -07:00
Edward Yang
9b09a5f7ba [ONNX] Enable scripting tests (#64780) (#66138)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66138

* Scripting tests

* Fixed scripting tests for lower opsets

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D31424099

fbshipit-source-id: 67095b7ac67b9da986961788392aa92c95cf11f2
2021-10-08 07:41:03 -07:00
BowenBao
d39790340d [ONNX] Enable export of __xor_ (#64042) (#64581)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64581

* Enable xor

* Update test_pytorch_onnx_onnxruntime.py

* Update symbolic_opset9.py

* Update symbolic_opset9.py

* Update test_pytorch_onnx_onnxruntime.py

* Update symbolic_opset9.py

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919598

Pulled By: malfet

fbshipit-source-id: 044e55d0697da0050f26a6ceccd1517493d7e8a6
2021-09-30 21:09:01 -07:00
BowenBao
d4ff344fae [ONNX] Fix remainder export (#64230) (#64578)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64578

* Fix remainder export for the edge case where the input is negative. The new export relies on the true_divide export.
* Simplified the true_divide export. Cleaned up redundant code that is handled by the scalar type analysis pass. Removed the dependency on `onnx::Where`, thus supporting opsets 7 & 8.

Fixes #60179
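
The true_divide-based export computes Python-style modulo, which handles negative inputs correctly; as a reference:

```python
import torch

def remainder_ref(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # a - floor(a / b) * b: the result takes the sign of the divisor.
    return a - torch.floor(torch.true_divide(a, b)) * b

a = torch.tensor([-7.0, 7.0, -7.0])
b = torch.tensor([3.0, -3.0, -3.0])
assert torch.allclose(remainder_ref(a, b), torch.remainder(a, b))
print(remainder_ref(a, b))  # tensor([ 2., -2., -1.])
```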

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919601

Pulled By: malfet

fbshipit-source-id: 0f78621c0ac3bdb6bf4225e049ba5f470dc8ab12

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-09-30 21:08:54 -07:00
BowenBao
f17ee368b3 Fix empty size constant creation (#63607) (#64376)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64376

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919608

Pulled By: malfet

fbshipit-source-id: 0e789e8470ce0f130148df764ce77f6d4fd0a274
2021-09-30 21:08:43 -07:00
BowenBao
84190dafa8 [ONNX] Update instance_norm implementation and support training (#60538) (#64375)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64375

* Update the instance_norm track_running_stats=True implementation and support the training mode
* Reference: 9baf75c86e/aten/src/ATen/native/Normalization.cpp (L532)
* Fix https://github.com/pytorch/pytorch/issues/53887

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D30919605

Pulled By: malfet

fbshipit-source-id: 306eb2a1122bb5d90dcb7c18260a3a2057a21c34

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-09-30 21:07:26 -07:00
BowenBao
73c4bfc30a [ONNX] Add log10 symbolic (#63418) (#64374)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64374

Fixes #61332

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D30919609

Pulled By: msaroufim

fbshipit-source-id: f474376bbf7b59677b10565f316384eca59dba43

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-09-21 13:30:59 -07:00
Nikita Shulga
340531f2e0 [ONNX] Do not use numpy in ONNX opsets (#65188)
Summary:
Replace `torch.tensor([numpy.arange(a, b, c)])` with `torch.arange(a, b, c).unsqueeze(0)`
Replace `tuple(numpy.add(a, b))` with `tuple(x + y for (x, y) in zip(a, b))`

As `numpy` is an optional dependency, it shouldn't be used in PyTorch core by default
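
A side-by-side sketch of the replacements (variable names are illustrative):

```python
import torch

a, b, c = 0, 10, 2
# numpy-free replacement for torch.tensor([numpy.arange(a, b, c)]):
row = torch.arange(a, b, c).unsqueeze(0)   # shape (1, 5)

u, v = (1, 2, 3), (10, 20, 30)
# numpy-free replacement for tuple(numpy.add(u, v)):
s = tuple(x + y for x, y in zip(u, v))     # (11, 22, 33)
```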

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65188

Reviewed By: mruberry

Differential Revision: D31009490

Pulled By: malfet

fbshipit-source-id: 528e48f055bf9ac1de1fd7e94c0be41915df9a0b
2021-09-17 11:28:44 -07:00
Michael Dagitses
aaffcfe9cd implement "xy" indexing for torch.meshgrid (#62724)
Summary:
This is step 4/7 of https://github.com/pytorch/pytorch/issues/50276. This allows the use of `"xy"` indexing but doesn't change any defaults.
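
A short sketch of the two indexing modes:

```python
import torch

x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5])

gi, gj = torch.meshgrid(x, y, indexing="ij")  # matrix indexing, shapes (3, 2)
gx, gy = torch.meshgrid(x, y, indexing="xy")  # Cartesian indexing, shapes (2, 3)

# For two 1-D inputs, the "xy" outputs are transposes of the "ij" outputs
assert torch.equal(gx, gi.T) and torch.equal(gy, gj.T)
```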

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62724

Reviewed By: heitorschueroff

Differential Revision: D30995290

Pulled By: dagitses

fbshipit-source-id: 08a6a6144b20bc019f68bc3c52e3bbf967976d8f
2021-09-17 08:31:17 -07:00
Michael Dagitses
2c57bbf521 add support for indexing to meshgrid (#62722)
Summary:
This is step 3/7 of https://github.com/pytorch/pytorch/issues/50276. It only adds support for the argument but doesn't implement new indexing modes yet.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62722

Test Plan:
Verified this is not FC breaking by adding logging to both meshgrid
overloads and then calling meshgrid twice:

`meshgrid(*tensors)`
  and
`meshgrid(*tensors, indexing='ij')`

This confirmed that the former signature triggered the original native
function and the latter signature triggered the new native function.

Reviewed By: H-Huang

Differential Revision: D30394313

Pulled By: dagitses

fbshipit-source-id: e265cb114d8caae414ee2305dc463b34fdb57fa6
2021-09-16 09:59:49 -07:00
BowenBao
1dd648f1c4 [ONNX] Support torch.dot and torch.nn.utils.spectral_norm (#62596) (#62765)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62765

Fixes #27723

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30375181

Pulled By: msaroufim

fbshipit-source-id: 715f4745899757ec405877980cd20c826028eb2c

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-08-20 12:46:56 -07:00
BowenBao
db0771b05d [ONNX] Update repeat_interleave for dynamic repeats (#59979) (#62764)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62764

Fixes #58733

- Support dynamic interleave for cases with dynamic repeat values
- Moved the repeat_interleave symbolic from opset 11 to opset 13, since Sequence output types for Loop outputs are needed for this change; see the sketch below
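
A sketch of the newly supported pattern (the export call and filename are illustrative):

```python
import torch

class Interleave(torch.nn.Module):
    def forward(self, x, repeats):
        # `repeats` is only known at runtime -> dynamic interleave
        return torch.repeat_interleave(x, repeats, dim=0)

x = torch.tensor([[1, 2], [3, 4]])
r = torch.tensor([1, 3])            # row 0 once, row 1 three times
torch.onnx.export(Interleave(), (x, r), "interleave.onnx", opset_version=13)
```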

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30375179

Pulled By: msaroufim

fbshipit-source-id: 787f96bf91d124fd0483761088c5f4ae930d96a9

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-08-20 12:46:54 -07:00
BowenBao
2aa19f33c6 [ONNX] Fix for batchnorm training op mode (#52758) (#62760)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62760

* Rebase

# Conflicts:
#	torch/csrc/jit/passes/onnx/eval_peephole.cpp

# Conflicts:
#	test/onnx/test_utility_funs.py
#	torch/onnx/symbolic_opset9.py

* Update symbolic_opset12.py

* Update test.sh
# Conflicts:
#	.jenkins/caffe2/test.sh

* Merge

* Fix utility tests

# Conflicts:
#	test/onnx/test_pytorch_onnx_onnxruntime.py
#	test/onnx/test_utility_funs.py

* Fix for comment

* Enable BN tests

* Fix for test

* Update test_pytorch_onnx_onnxruntime.py

* Update test_pytorch_onnx_onnxruntime.py

* Update test_utility_funs.py

* Update test_pytorch_onnx_onnxruntime.py

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349060

Pulled By: msaroufim

fbshipit-source-id: 93312c17607974731c17099ae181acb6e4c1c409
2021-08-18 13:29:07 -07:00
BowenBao
3a7bbf5fb7 [ONNX] Add support for opset14 in PT-ONNX exporter (#59486) (#62758)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62758

* Add initial changes for opset14

* Fixed flake

* Add onnx submodule changes and removed utility func tests

* Add updated batchNorm symbolic

* Add triu/tril symbolics

* Fix lint

* Fixed test failures

* Add reshape with allowzero

* Added tests/refactored opset versioning

* Bump onnxruntime version

* Fix clang/lint failures

* Add reshape shape inference for opset 14

* Changes for allowzero

* Fix lint/clang and test failures

* Updated PR

* Flake fixes

* Fix flake

* Remove new_jit_api tests

* Add opset14 models

* Update allowzero

* Fix test failures

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349063

Pulled By: msaroufim

fbshipit-source-id: 54724246149b01a2f627c43d7396253a7e9c9eb9

Co-authored-by: Shubham Bhokare <sbhokare@OrtTrainingDev3.af05slrtruoetgaxwwjv5nsq5e.px.internal.cloudapp.net>
2021-08-18 13:29:01 -07:00
BowenBao
99b154b8be [ONNX] Support lstm_cell symbolic (#61476) (#62757)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62757

Support lstm_cell symbolic

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349061

Pulled By: msaroufim

fbshipit-source-id: f236177e3e5c62a30b7e4d91a623bcaef21b5eb1

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-08-18 13:27:46 -07:00
Peter Lin
8d7786ada6 Simplify hardswish ONNX export graph. (#60080)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/58301

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60080

Reviewed By: suo

Differential Revision: D30002939

Pulled By: SplitInfinity

fbshipit-source-id: 8b4ca6f62d51b72e9d86534592e3c82ed6608c9d
2021-08-05 11:15:14 -07:00
BowenBao
6f08ddfc28 [ONNX] Enable aten:normal op and add tests for aten:uniform op. (#60441) (#61560)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61560

1. Add a new symbolic function, broadcast_tensors(), to support exporting the torch.broadcast_tensors() function. This is required for exporting the torch.distribution.normal() function.
2. Add a new symbolic function, normal(), to support exporting the torch.distribution.normal() function.
3. Add related tests for the normal and uniform ops as well.
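
A hypothetical export sketch of the pattern these symbolics unlock (torch.distributions broadcasts loc and scale internally, which is where broadcast_tensors comes in; the module, shapes, and filename are illustrative):

```python
import torch

class Sampler(torch.nn.Module):
    def forward(self, loc, scale):
        return torch.distributions.Normal(loc, scale).sample()

loc, scale = torch.zeros(3, 1), torch.ones(1, 4)  # broadcast to (3, 4)
torch.onnx.export(Sampler(), (loc, scale), "normal.onnx", opset_version=11)
```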

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D29767995

Pulled By: SplitInfinity

fbshipit-source-id: acfe5e7801d00c0df8ca46966bbd6015fed0045e

Co-authored-by: Jay Zhang <jiz@microsoft.com>
2021-07-21 15:10:35 -07:00
BowenBao
f0054e1a6e [ONNX] Update expand_as for dynamic shape (#61084) (#61559)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61559

Update expand_as for dynamic shape

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D29767990

Pulled By: SplitInfinity

fbshipit-source-id: 3f1e3f68fd17c5ffbd4a50fccff224fd9d6c84fb

Co-authored-by: Negin Raoof <neginmr@utexas.edu>
2021-07-21 15:10:33 -07:00
BowenBao
34075e2c8b [ONNX] Fix the issue of converting empty list to sequence. (#58651) (#61558)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61558

When we construct an empty list via a Python list comprehension, we need to avoid converting the resulting node, which has no inputs, to onnx::Concat in shape_type_inference.cpp and peephole.cpp, because that would create an invalid Concat node without inputs.

In addition, update the code to avoid passing a Sequence input to an onnx::Cast node, which doesn't accept the Sequence data type as an input.

Add tests for the validation as well.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D29767989

Pulled By: SplitInfinity

fbshipit-source-id: f97f172ff20eebda4c3744c7a934df36716f12a2

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-07-21 15:10:31 -07:00
BowenBao
d9dc94406f [ONNX] Add linspace symbolic (#58854) (#60246)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60246

* Adds support for linspace op
* Modifies the arange symbolic in opset 9 to replicate how dtype is determined (similar to opset 11), as documented at https://pytorch.org/docs/stable/generated/torch.arange.html
* Enabled some arange unit tests which were disabled for opset 9
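
A minimal sketch of a traced export exercising the new op (the constant endpoints and filename are illustrative):

```python
import torch

class Lin(torch.nn.Module):
    def forward(self, x):
        # five evenly spaced values over [0, 1]; dtype inference mirrors arange
        return x + torch.linspace(0.0, 1.0, steps=5)

torch.onnx.export(Lin(), (torch.zeros(5),), "linspace.onnx", opset_version=9)
```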

Test Plan: Imported from OSS

Reviewed By: zou3519, ZolotukhinM

Differential Revision: D29494911

Pulled By: SplitInfinity

fbshipit-source-id: bddff18a90f8a78121c8ecdd1dafc15c69962d66

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-07-08 16:29:26 -07:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
fca931d181 List striding with arbitrary step size (#58537)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58537

Test Plan: Imported from OSS

Reviewed By: ejguan

Differential Revision: D28531721

Pulled By: tugsbayasgalan

fbshipit-source-id: 8c8ed32ca00366603bfb5086e87dfa62736ff4b2
2021-06-22 11:25:23 -07:00
BowenBao
3995fb1840 Add new_ones symbolic (#59255) (#59539)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59539

Add new_ones symbolic in PT-ONNX exporter

Test Plan: Imported from OSS

Reviewed By: navahgar

Differential Revision: D29046603

Pulled By: SplitInfinity

fbshipit-source-id: e7420c7b543c33e3640e62461d08ff4d5843eda7

Co-authored-by: Shubham Bhokare <shubhambhokare@gmail.com>
2021-06-17 15:49:24 -07:00
BowenBao
044b519a80 Symbolic for ReLu6 (#58560) (#59538)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59538

Four mealv2 models export correctly in torch 1.8.1, but fail on torch master, which introduced relu6 a few months back.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb, ansley

Differential Revision: D29046607

Pulled By: SplitInfinity

fbshipit-source-id: d9cf7050e4ac0dad892441305ffebc19ba84e2be

Co-authored-by: David <jiafa@microsoft.com>
2021-06-15 12:24:17 -07:00
BowenBao
5d00c374dd [ONNX] Sum empty tensor could not be exported to ONNX successfully. (#58141) (#59537)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59537

Summing over an empty tensor gives 0 in PyTorch, while ONNX produces an error.

torch.sum is translated into the onnx::ReduceSum op. Per the definition of ReduceSum, this change updates the keepdims attribute for this scenario, as illustrated below.
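
A minimal reproduction of the mismatch:

```python
import torch

print(torch.sum(torch.empty(0)))  # tensor(0.) in PyTorch eager

# Before this change, running the exported onnx::ReduceSum on the same
# empty input raised an error in ONNX Runtime; adjusting keepdims for
# this scenario makes the exported graph return 0 as well.
```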

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb, ansley

Differential Revision: D29046604

Pulled By: SplitInfinity

fbshipit-source-id: 6f5f3a66cb8eda8b5114b8474dda6fcdbae73469

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-06-15 12:24:16 -07:00
BowenBao
83450aa11d [ONNX] Add support for torch.bernoulli() export (#57003) (#59536)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59536

Support exporting the HuggingFace DeBERTa training model.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb, ansley

Differential Revision: D29046609

Pulled By: SplitInfinity

fbshipit-source-id: df87e0c6ed0f13463297bdeba73967fcf2aa37ca

Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-06-15 12:24:14 -07:00
BowenBao
cd5f142af4 fix error message for type_as (#57948) (#59535)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59535

Improve error message for type_as and add unit test.

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb, ansley

Differential Revision: D29046605

Pulled By: SplitInfinity

fbshipit-source-id: 978bceeb62e4d3c68815cd5fdf160909a99d00f2

Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-06-15 12:24:12 -07:00
Joel Schlosser
ef32a29c97 Back out "[pytorch][PR] ENH Adds dtype to nn.functional.one_hot" (#59080)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59080

Original commit changeset: 3686579517cc

Test Plan: None; reverting diff

Reviewed By: albanD

Differential Revision: D28746799

fbshipit-source-id: 75a7885ab0bf3abadde9a42b56d479f71f57c89c
2021-05-27 15:40:52 -07:00
BowenBao
2c17b6a0fe [ONNX] Enable support for roll() op. (#58389) (#58697)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58697

1. Add a symbolic function for aten::roll() op in symbolic_opset9.py.
2. Add a test with multiple scenarios as well.
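
A usage sketch of the kind of case now exportable (filename illustrative):

```python
import torch

class Roll(torch.nn.Module):
    def forward(self, x):
        # shift by 1 along dim 0 and by -2 along dim 1
        return torch.roll(x, shifts=(1, -2), dims=(0, 1))

x = torch.arange(12).reshape(3, 4)
torch.onnx.export(Roll(), (x,), "roll.onnx", opset_version=9)
```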

Test Plan: Imported from OSS

Reviewed By: driazati, bhosmer

Differential Revision: D28714807

Pulled By: SplitInfinity

fbshipit-source-id: eae85f2dcf02737c9256a180f6905a935ca3f57e

Co-authored-by: fatcat-z <jiz@microsoft.com>
2021-05-27 12:06:45 -07:00
BowenBao
0a6828a306 [ONNX] use consistent quoting for string literals (#57757) (#58695)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58695

As PEP8 says: "Pick a rule and stick to it." [1]

[1] https://www.python.org/dev/peps/pep-0008/#string-quotes

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714811

Pulled By: SplitInfinity

fbshipit-source-id: c95103aceb1725c17c034dc6fc8216627f189548

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-27 12:06:42 -07:00
BowenBao
b8c96e6b08 Support symbolic for conv_tbc (#58359) (#58692)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58692

This is a fix for exporting fairseq models, see:
```python
model = torch.hub.load(github, 'conv.wmt14.en-fr', tokenizer='moses', bpe='subword_nmt')
model = torch.hub.load(github, 'conv.wmt17.en-de', tokenizer='moses', bpe='subword_nmt')
```
With this fix, and with the single `GradMultiply` line in the model script commented out, these two models can be exported successfully with perf targets met.

The original PR https://github.com/pytorch/pytorch/pull/57708 has merging issue, use this one instead.

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714809

Pulled By: SplitInfinity

fbshipit-source-id: 71c2de6cec7ee05af68560996acf47d97af46fb2

Co-authored-by: David <jiafa@microsoft.com>
2021-05-27 12:06:37 -07:00
BowenBao
d101816fdd [ONNX] RNN scripting (#57564) (#58691)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58691

Note the first commit in this PR has its own pull request here since it seemed self-contained:
https://github.com/pytorch/pytorch/pull/57082

* [ONNX] simplify batch_first logic in RNN tests

* [ONNX] support GRU with packed input in scripting mode

This required two changes:
* Add as_tensor to symbolic_opset9.py
* Change torch::jit::pushPackingPastRnn to recognize and properly
  replace another use of the batch_sizes output of prim::PackPadded.
  Previously the code assumed that the first use was as input to the
  RNN operator. However in some cases, it is also used to compute
  max_batch_size. For example in this code:
  https://github.com/pytorch/pytorch/blob/febff45/torch/nn/modules/rnn.py#L815-L815

With these changes the GRU tests now pass in scripting mode for opset
version >= 11.
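
A sketch of the now-working pattern (sizes are illustrative; lengths must be sorted descending unless enforce_sorted=False is passed):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

class PackedGRU(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = torch.nn.GRU(8, 16, batch_first=True)

    def forward(self, x, lengths):
        packed = pack_padded_sequence(x, lengths, batch_first=True)
        _, h = self.gru(packed)
        return h

m = torch.jit.script(PackedGRU())
x, lengths = torch.randn(2, 5, 8), torch.tensor([5, 3])
# (older exporter versions may also require example_outputs here)
torch.onnx.export(m, (x, lengths), "gru.onnx", opset_version=11)
```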

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714805

Pulled By: SplitInfinity

fbshipit-source-id: f19647a04533d9ec76399a8793b3f712ea0337d2

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-27 12:06:35 -07:00
BowenBao
51d14b6859 [ONNX] Update instance_norm2 symbolic to handle track_running_stats=True (#55051) (#58690)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58690

Fixes [#53887](https://github.com/pytorch/pytorch/issues/53887)
Use running_mean and running_var instead of recomputing statistics from the input when track_running_stats=True.

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714812

Pulled By: SplitInfinity

fbshipit-source-id: 3f2f2ec9a7eaf8a1432a552d751cbd5974b20195

Co-authored-by: hwangdeyu <deyhuang@qq.com>
2021-05-27 12:05:26 -07:00
Meghan Lele
0d5527de7a Back out "Back out "[ONNX] Process const folding progressively when converts to ONNX (#54569)"" (#58923)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58923

Original commit changeset: c54597b2048e
ghstack-source-id: 129842041

Test Plan: Sandcastle and OSS CI.

Reviewed By: snisarg

Differential Revision: D28432555

fbshipit-source-id: 2a9ec22cc004c7c6979f1cc8f3124b833cdc6634
2021-05-26 13:29:07 -07:00
Adnios
09a8f22bf9 Add mish activation function (#58648)
Summary:
See issue: https://github.com/pytorch/pytorch/issues/58375

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58648

Reviewed By: gchanan

Differential Revision: D28625390

Pulled By: jbschlosser

fbshipit-source-id: 23ea2eb7d5b3dc89c6809ff6581b90ee742149f4
2021-05-25 10:36:21 -07:00
Thomas J. Fan
a7f4f80903 ENH Adds dtype to nn.functional.one_hot (#58090)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/33046
Related to https://github.com/pytorch/pytorch/issues/53785

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58090

Reviewed By: zou3519

Differential Revision: D28640893

Pulled By: jbschlosser

fbshipit-source-id: 3686579517ccc75beaa74f0f6d167f5e40a83fd2
2021-05-24 13:48:25 -07:00
Serhat Yilmaz
4ca4640bae [torch][repeat_interleave] remove stream syncronization if output size is given (#58417)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58417

Same as title.

Test Plan:
Rely on CI signal.

Update unit test to exercise new code path as well.

Reviewed By: ngimel

Differential Revision: D28482927

fbshipit-source-id: 3ec8682810ed5c8547b1e8d3869924480ce63dcd
2021-05-22 20:53:28 -07:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
9db64e6e56 Revert "Striding for lists Part 2 (#49352)" (#58523)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58523

This reverts commit fee7e8b91d.

Test Plan: Imported from OSS

Reviewed By: gmagogsfm

Differential Revision: D28528023

Pulled By: tugsbayasgalan

fbshipit-source-id: 9fa1d86f0c81fcc6fd3798e0d51a712a3c9b3952
2021-05-20 13:20:33 -07:00
Meghan Lele
c034bce979 Back out "[ONNX] Process const folding progressively when converts to ONNX (#54569)"
Summary: Original commit changeset: 833dac7c71f2

Test Plan:
```
buck test mode/dev //pytext/fb/assistant/lite/test:test -- --exact
'pytext/fb/assistant/lite/test:test - test_export_bytes_model_to_caffe2
(pytext.fb.assistant.lite.test.test.TestExport)'
```

Reviewed By: jeanm

Differential Revision: D28431840

fbshipit-source-id: 0f1d530034404421a5d51691173e1cc0ee16fdd6
2021-05-14 13:45:49 -07:00
BowenBao
0d11dbf511 [ONNX] Support index_add_ function. (#56867) (#57830)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57830

This PR aims to support the tensor.index_add_() method in a symbolic function. We leverage scatter_add() to implement it, since ONNX doesn't have a corresponding operator.

Notes:

      1.  Four tests have been added covering several scenarios.
      2.  If there are duplicated values in the 'index' parameter, the export still executes successfully but the results are wrong, so a warning is emitted on every call to this symbolic function. Additionally, if we detect that the rank of 'index' is greater than the size of the 'dim' dimension, an exception is raised to stop exporting an incorrect ONNX file. See the demonstration below.
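
A small demonstration of the semantics and the duplicate-index caveat (values verified in eager mode):

```python
import torch

x = torch.zeros(5, 3)
src = torch.ones(2, 3)
index = torch.tensor([0, 4])   # unique indices -> export is correct

x.index_add_(0, index, src)    # rows 0 and 4 each receive src's rows

# With duplicated indices (e.g. torch.tensor([0, 0])) the scatter_add-based
# export silently diverges from eager PyTorch, hence the exporter warning.
```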

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393518

Pulled By: SplitInfinity

fbshipit-source-id: f487ca2c63fec47c6ab74f1a7783dae7f3b8d1ef

Co-authored-by: Jay Zhang <jiz@microsoft.com>
2021-05-14 09:51:55 -07:00
BowenBao
bfe7728f18 [ONNX] Process const folding progressively when converts to ONNX (#54569) (#57601)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57601

This PR automatically solves onnx const attribute issue in PR https://github.com/pytorch/pytorch/pull/53784.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393525

Pulled By: SplitInfinity

fbshipit-source-id: 833dac7c71f24a88af62d5dd2be0a702ed34d053

Co-authored-by: David <jiafa@microsoft.com>
2021-05-13 13:42:51 -07:00
BowenBao
2b0f481d3f Add support to to(device) op. (#56857) (#57599)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57599

Currently, calling the tensor.to() method with a device as the parameter fails, because the symbolic function of to() didn't handle that case.

So add a check at the beginning of the symbolic function: if this is a pure device cast, return self directly. A test has also been added, and a sketch follows below.
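
A minimal sketch of the now-handled pattern (filename illustrative):

```python
import torch

class DeviceCast(torch.nn.Module):
    def forward(self, x):
        # pure device cast: the symbolic now returns the input unchanged
        return x.to(torch.device("cpu")) + 1

torch.onnx.export(DeviceCast(), (torch.randn(2),), "to_device.onnx")
```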

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393523

Pulled By: SplitInfinity

fbshipit-source-id: c41e3c0293932fc90dedb544daadd9c5d4b48792

Co-authored-by: Jay Zhang <jiz@microsoft.com>
2021-05-13 13:42:48 -07:00
BowenBao
ac9e79e561 Add a new operator for fill_() function. (#56859) (#57596)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57596

Add the corresponding symbolic function and test for fill_() function.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393520

Pulled By: SplitInfinity

fbshipit-source-id: 3e177f88d3776d0d4a9d5e7ec7df4e6629738799

Co-authored-by: Jay Zhang <jiz@microsoft.com>
2021-05-13 13:42:43 -07:00
BowenBao
3bc8a2264d [ONNX] Support .item() export & NumberType to tensor conversion (#55697) (#57594)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57594

Support .item() export & NumberType to tensor conversion

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D28393516

Pulled By: SplitInfinity

fbshipit-source-id: 94d0aec0a8fe144ee2567dc3c9c19fcb18ed21fa

Co-authored-by: BowenBao <bowbao@microsoft.com>
2021-05-13 13:41:29 -07:00
Tugsbayasgalan (Tugsuu) Manlaibaatar
fee7e8b91d Striding for lists Part 2 (#49352)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49352

In this PR, we replace all definitions of slice to take None parameters for the start, end, and step, which simplifies the compiler logic.

Test Plan:
test_jit test cases

Imported from OSS

Reviewed By: jamesr66a, nikithamalgifb

Differential Revision: D25929903

fbshipit-source-id: 5bfc6bad514a8aafbef2dacc706f95f867fe85f1
2021-05-13 00:16:02 -07:00
Peter Bell
2043093217 Add correction parameter to std/var (#50903)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50903

First part of #50010. Also fixes #51127.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27911345

Pulled By: mruberry

fbshipit-source-id: 7138fddc935802918ab9ff19f4bc1b9f4d745d41
2021-05-07 14:40:28 -07:00
Peter Bell
33eea146ee torch.clamp with tensor min and max (#52695)
Summary:
Fixes gh-2793

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52695

Reviewed By: mruberry

Differential Revision: D27395977

Pulled By: ezyang

fbshipit-source-id: f86aa240feb034d42e4c45447e72218f6a773c24
2021-05-03 12:56:16 -07:00
BowenBao
913f1f75b3 Revert "Revert [ONNX] Redesign inplace conversion" (#56675)
Summary:
Adjust how MutationRemover is used to avoid creating aliasDb multiple times for the same graph.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56675

Reviewed By: pbelevich

Differential Revision: D27945692

Pulled By: SplitInfinity

fbshipit-source-id: a6c548438e88ddee18ef03a6f0461ab9eaaaa829
2021-04-22 22:22:16 -07:00
Nikita Shulga
36828aa0ff Revert D27866138: [ONNX] Redesign inplace conversion (#55033)
Test Plan: revert-hammer

Differential Revision:
D27866138 (24ff92f76d)

Original commit changeset: ab5c9188740c

fbshipit-source-id: b99bf5b12e109089ebd5748c1dc152c6af1cebdb
2021-04-21 21:11:06 -07:00
BowenBao
24ff92f76d [ONNX] Redesign inplace conversion (#55033) (#56173)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56173

* Create `InplaceConverter` and `ValueTracker` to keep track of aliases of values throughout the graph. For a given value, a new alias is created whenever there is an inplace operation or SetAttr, or through nested blocks owned by If/Loop nodes.
* Fix bug where controlflow node output types are not set, when the complete node is unable to run ONNX shape inference due to containing non-onnx node.
* Add symbolic for `__not__` ~~and `prim_min`~~(update: moved to a separate PR), and update `index_put` opset9 to support case of assignment without providing indices.
* Bump ORT version in CI test.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866138

Pulled By: SplitInfinity

fbshipit-source-id: ab5c9188740c50f783ceba4d54fda43c26e2fde7
2021-04-21 17:59:11 -07:00
Sam Estep
75024e228c Add lint for unqualified type: ignore (#56290)
Summary:
The other half of https://github.com/pytorch/pytorch/issues/56272.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/56290

Test Plan:
CI should pass on the tip of this PR, and we know that the lint works because the following CI runs (before this PR was finished) failed:

- https://github.com/pytorch/pytorch/runs/2384511062
- https://github.com/pytorch/pytorch/actions/runs/765036024

Reviewed By: seemethere

Differential Revision: D27867219

Pulled By: samestep

fbshipit-source-id: e648f07b6822867e70833e23ddafe7fb7eaca235
2021-04-21 08:07:23 -07:00
BowenBao
0b0fca3c59 [ONNX] Export mv op (#55470) (#56169)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56169

Adding matrix-vector multiplication op

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866141

Pulled By: SplitInfinity

fbshipit-source-id: 40e8f65c590bc5354b764b51e0c3cd8386fdc33b
2021-04-20 23:00:46 -07:00
BowenBao
90e63cc41f [ONNX] Add support for prim::min (#55259) (#56168)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56168

Add support for prim::min operator and update full_like

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866144

Pulled By: SplitInfinity

fbshipit-source-id: f4af4b8171ed8bd7980fa3141f5fc9811e2bc367
2021-04-20 23:00:44 -07:00
BowenBao
5a455dc717 [ONNX] Enable tensordot symbolic function. (#55654) (#56166)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56166

Support tensordot in symbolic function of opset 12, and add tests accordingly.
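
A usage sketch of the op now covered (shapes and filename are illustrative):

```python
import torch

class TensorDot(torch.nn.Module):
    def forward(self, a, b):
        # contract the last two dims of `a` with the first two of `b` -> (3, 6)
        return torch.tensordot(a, b, dims=2)

a, b = torch.randn(3, 4, 5), torch.randn(4, 5, 6)
torch.onnx.export(TensorDot(), (a, b), "tensordot.onnx", opset_version=12)
```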

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866140

Pulled By: SplitInfinity

fbshipit-source-id: 68e218cfbd630900fb92871fc7c0de3e7e8c8c3d
2021-04-20 23:00:41 -07:00
BowenBao
f804b65d4e [ONNX] Update repeat_interleave symbolic (#54312) (#56165)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56165

Add implementation for cases where
- interleaving happens along a dim which consists of dynamic axes

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866137

Pulled By: SplitInfinity

fbshipit-source-id: 7fef1b2c614f2e24a677b7ca0886bb37bd0ab479
2021-04-20 23:00:39 -07:00
BowenBao
75995e4bf6 [ONNX] Add support for hann_window operator. (#54587) (#56163)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56163

* [ONNX] Improve index_put symbolic to handle singular Bool updates (#53690)

Adds support for cases where the update to the index_put node is a single Bool value, such as the case shown below

```
mask[indices] = True
```

Fixes #53507

* [ONNX] Support primitive type input/outputs and attributes (#53550)

Support primitive type attributes. Needed for Silero model.

* [ONNX] Fix if output shape mismatch error & Fix graph input directly used as output (#53219)

Fix if output shape mismatch error & Fix graph input directly used as output

* Add support for hann_window operator.

* [ONNX] Replace decomposeLinear pre process pass with a symbolic (#53077)

Replace decomposeLinear pre process pass with a symbolic

* Add a test case for dtype is None.

* Resolve flake8 issue.

* Remove one unused test case.

* Add support for hann_window operator.

* Add a test case for dtype is None.

* Remove one unused test case.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D27866145

Pulled By: SplitInfinity

fbshipit-source-id: e0b43df9ecd1a95cd7ac297213aba453bbaf2913

Co-authored-by: Shubham Bhokare <32080845+shubhambhokare1@users.noreply.github.com>
Co-authored-by: Negin Raoof <neginmr@utexas.edu>
Co-authored-by: Bowen Bao <bowbao@microsoft.com>
Co-authored-by: Ksenija Stanojevic <KsenijaS@users.noreply.github.com>
2021-04-20 22:59:31 -07:00
Kurt Mohler
3fe4718d16 Add padding_idx argument to EmbeddingBag (#49237)
Summary:
This PR adds a `padding_idx` parameter to `nn.EmbeddingBag` and `nn.functional.embedding_bag`. As with `nn.Embedding`'s `padding_idx` argument, if an embedding's index is equal to `padding_idx` it is ignored, so it is not included in the reduction.

This PR does not add support for `padding_idx` for quantized or ONNX `EmbeddingBag` for opset10/11 (opset9 is supported). In these cases, an error is thrown if `padding_idx` is provided.
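
A small sketch of the eager semantics (sizes illustrative):

```python
import torch

bag = torch.nn.EmbeddingBag(num_embeddings=10, embedding_dim=3,
                            mode="sum", padding_idx=0)
inp = torch.tensor([0, 2, 0, 5])   # index 0 is the padding index
offsets = torch.tensor([0, 2])     # two bags: [0, 2] and [0, 5]
out = bag(inp, offsets)            # padding entries add nothing to either sum
```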

Fixes https://github.com/pytorch/pytorch/issues/3194

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49237

Reviewed By: walterddr, VitalyFedyunin

Differential Revision: D26948258

Pulled By: jbschlosser

fbshipit-source-id: 3ca672f7e768941f3261ab405fc7597c97ce3dfc
2021-04-14 09:38:01 -07:00
whiteking64
e6bfff679d [ONNX] Add hardsigmoid symbolic in opset 9 #49649 (#54193)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/49649
Adds support for torch.nn.Hardsigmoid operator in torch.onnx.export
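
A usage sketch (filename illustrative); the reference definition the symbolic must reproduce is clamp(x / 6 + 1 / 2, 0, 1):

```python
import torch

m = torch.nn.Hardsigmoid()
x = torch.randn(4)
torch.onnx.export(m, (x,), "hardsigmoid.onnx", opset_version=9)

# Sanity check against the definition
assert torch.allclose(m(x), torch.clamp(x / 6 + 0.5, 0, 1))
```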

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54193

Reviewed By: anjali411

Differential Revision: D27522969

Pulled By: SplitInfinity

fbshipit-source-id: 33abcec578f4bc3cf5c3ee1c1bed7d94816bee96
2021-04-07 14:28:31 -07:00
Peter Bell
2ee02b30b1 Replace rounding_mode="true" with rounding_mode=None (#51988)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51988

* **#51988 Replace rounding_mode="true" with rounding_mode=None**

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D27561817

Pulled By: mruberry

fbshipit-source-id: 60d1d9c389570f60d599fc1876518717367fb368
2021-04-05 14:53:43 -07:00
Ksenija Stanojevic
cb0cee4a3d [ONNX] Replace decomposeLinear pre process pass with a symbolic (#53077) (#54866)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54866

Replace decomposeLinear pre process pass with a symbolic

Test Plan: Imported from OSS

Reviewed By: nikithamalgifb

Differential Revision: D27408981

Pulled By: SplitInfinity

fbshipit-source-id: d2d76cab3383122a60df1f356742a33db56adc71
2021-03-31 21:14:25 -07:00
Yukio Siraichi
27048c1dfa Remove legacy constructor calls from _torch_ folder. (#53889)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/53146
Related to https://github.com/pytorch/pytorch/issues/47112

As mentioned in https://github.com/pytorch/pytorch/issues/47112, the plan is to:

1. Verify that all `torch.Tensor()` scenarios are covered by other functions
2. Scrub internal `torch.Tensor()` uses
3. Update the docs and throw `TORCH_WARN_ONCE` if someone uses `torch.Tensor()`

In this PR, I replaced all occurrences of `torch.Tensor` present in the _torch_ folder.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53889

Reviewed By: walterddr, zou3519

Differential Revision: D27190743

Pulled By: jbschlosser

fbshipit-source-id: 7ecc201d57935b8dbb98ae3718b60d95cb55a010
2021-03-19 15:20:19 -07:00
BowenBao
ad8d1b2aaa [ONNX] Update embedding export wrt padding_idx (#53931)
Summary:
To be in-sync with https://github.com/pytorch/pytorch/issues/53447

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53931

Reviewed By: ngimel

Differential Revision: D27026616

Pulled By: malfet

fbshipit-source-id: 4c50b29fa296c90aeeeb1757bdaada92cbba33d4
2021-03-15 10:03:53 -07:00
Nikita Shulga
5b648ef909 Revert D26922420: [ONNX] fix export of embedding with padding_idx (#53053)
Test Plan: revert-hammer

Differential Revision:
D26922420 (ee4ce8e9d9)

Original commit changeset: b8b867a96a13

fbshipit-source-id: 501392f419f2735658001c96f83d9754acd8e476
2021-03-12 14:51:01 -08:00
BowenBao
ee4ce8e9d9 [ONNX] fix export of embedding with padding_idx (#53053) (#53530)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53530

fix export of embedding with padding_idx

Test Plan: Imported from OSS

Reviewed By: navahgar, jamesr66a, malfet

Differential Revision: D26922420

Pulled By: SplitInfinity

fbshipit-source-id: b8b867a96a13cf810f9c0ae88fcc5c95072bb390
2021-03-12 02:49:46 -08:00
BowenBao
a572f70f2f [ONNX] Support torch.isinf, torch.any and torch.all export to ONNX (#53328) (#53529)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53529

Supported for ONNX export from opset 10 onward.
This is not exportable to opsets < 10 because:
1. onnx::IsInf was introduced in opset 10
2. onnx::Equal does not accept float tensors prior to opset 11

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922418

Pulled By: SplitInfinity

fbshipit-source-id: 69bcba50520fa3d69db4bd4c2b9f88c00146fca7
2021-03-12 02:49:41 -08:00
BowenBao
a6a811f23a [ONNX] Add repeat_interleave symbolic (#52855) (#53312)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53312

- Add support for aten::repeat_interleave
- NOTE: also adds a fix for cases with the split op where input tensor sizes are not known but `_outputs` is provided

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922422

Pulled By: SplitInfinity

fbshipit-source-id: 5362d0d8ccfdc14c15e1ae73fd70c4c113f823e6
2021-03-12 02:49:34 -08:00
BowenBao
38414d29a1 [ONNX] Remove the last Cast in pow symbolic_opset9 (#52646) (#53305)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53305

Fixes #52436
For opset 9 of ONNX Pow, if X is int32 and Y is float, we used to cast the result back to int32 to be consistent with X's type.
However, PyTorch's result is still float. The ATen graph sometimes does not bind types to operators;
we are fine with the float type and don't want to cast back.
Even if X and Y are both int32, returning float32 instead of int32 makes no practical difference.
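
A minimal illustration of the promotion behavior being matched:

```python
import torch

x = torch.tensor([2, 3], dtype=torch.int32)
y = torch.tensor(0.5)

print(torch.pow(x, y).dtype)  # torch.float32 in PyTorch

# The exported graph previously appended Cast(float -> int32) after Pow,
# truncating the result; dropping that Cast keeps the output float32.
```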

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922425

Pulled By: SplitInfinity

fbshipit-source-id: f8c09524acee0de615df10a14310ca1dd583831e
2021-03-12 02:47:19 -08:00
Shubham Bhokare
49a923c8b5 [ONNX] Update LayerNorm symbolic to handle autocasting (#52199) (#52350)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52350

When ONNX export creates a 0-dim tensor of constant type, this overrides the type promotion logic described in #9515. To prevent this, this PR adds the following behavior:
if the data type is a floating-point type, the constant is converted to a 0-dim double tensor; otherwise, it is converted to a 0-dim tensor of its original type.
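
A hypothetical helper sketching the described rule (the function name is invented for illustration):

```python
import torch

def promote_const_scalar(value):
    # Floating-point scalars become 0-dim double tensors so ONNX type
    # promotion can later downcast to the other operand's dtype; all
    # other scalars keep a 0-dim tensor of their original type.
    t = torch.tensor(value)
    return t.to(torch.double) if t.is_floating_point() else t
```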

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D26490325

Pulled By: SplitInfinity

fbshipit-source-id: 4c47c69c9b6523d2e45b74c2541d6d8ca7e28fc9
2021-02-19 10:57:15 -08:00