Commit Graph

18 Commits

Author SHA1 Message Date
Spandan Tiwari
7583519b87 Provide argument in ONNX export to exclude initializers from graph inputs. (#23284)
Summary:
Starting with ONNX IR version 4, the initializers in the ONNX graph no longer have to be inputs of the graph; this constraint existed in IR version 3 and earlier and was relaxed in IR version 4. This PR provides an API-level argument to allow ONNX export with the relaxed constraint of IR version 4, i.e. the option to not include initializers as graph inputs. This allows backends/runtimes to perform certain optimizations, such as constant folding, more effectively.

*Edit*: After discussion with houseroad, the behavior is as follows. For any OperatorExportType except OperatorExportTypes.ONNX, this PR keeps the current export behavior by default, but the user can override it by setting the `keep_initializers_as_inputs` argument of the export API. When exporting to ONNX proper, i.e. the OperatorExportType is OperatorExportTypes.ONNX, the default changes: the initializers are NOT part of the graph inputs. Again, this default can be overridden by setting the `keep_initializers_as_inputs` argument.
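For illustration, a minimal sketch of exporting with the new argument might look like this (the model and file name are made up for the example):

```
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
dummy_input = torch.randn(1, 4)

# With keep_initializers_as_inputs=False, the exported graph carries the
# weights only as initializers (the relaxed ONNX IR version 4 behavior),
# not as graph inputs.
torch.onnx.export(model, dummy_input, "linear.onnx",
                  keep_initializers_as_inputs=False)
```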
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23284

Differential Revision: D16459961

Pulled By: bddppq

fbshipit-source-id: b8f0270dfaba47cdb8e04bd4cc2d6294f1cb39cf
2019-08-12 14:17:25 -07:00
BowenBao
02023d7dba canonicalize_ops pass bugfix: copy metadata for new output (#23809)
Summary:
Without metadata (datatype) for the new output, the exporter cannot perform implicit scalar datatype casting. This PR addresses a large portion of a common issue seen in many exported models, e.g. https://github.com/pytorch/pytorch/issues/23724
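As a hedged illustration, the kind of model that relies on implicit scalar datatype casting during export might look like this (the module and file name are hypothetical):

```
import torch
import torch.nn as nn

class ScalarCast(nn.Module):
    def forward(self, x):
        # Integer scalars combined with a float tensor rely on the exporter
        # implicitly casting the scalars to the tensor's datatype.
        return x * 2 + 1

torch.onnx.export(ScalarCast(), torch.randn(3), "scalar_cast.onnx")
```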
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23809

Reviewed By: ezyang

Differential Revision: D16707640

Pulled By: bddppq

fbshipit-source-id: 3de985c6b580b9c9ebaec08085c7443bd8d9c7f8
2019-08-09 08:27:13 -07:00
neginraoof
f278aee731 Std opset export (#22310)
Summary:
Added export for the std (standard deviation) op, plus onnxruntime, caffe2, and expect tests.
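For reference, a minimal sketch of exporting a model that uses the std op (the module and file name are illustrative):

```
import torch
import torch.nn as nn

class StdModel(nn.Module):
    def forward(self, x):
        # Standard deviation along dim 1, keeping the reduced dimension.
        return torch.std(x, dim=1, keepdim=True)

torch.onnx.export(StdModel(), torch.randn(2, 3, 4), "std.onnx")
```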
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22310

Differential Revision: D16109889

Pulled By: bddppq

fbshipit-source-id: 067b2d385d463877bb99f673a18da4e5ea823426
2019-08-05 15:55:42 -07:00
neginraoof
dfd8a08f51 frobenius_norm onnx export added
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23536

Differential Revision: D16566154

Pulled By: bddppq

fbshipit-source-id: 6d076274d1d780e7d39d17ddb35ceabe55b394a3
2019-08-05 10:13:00 -07:00
neginraoof
4e6e11c139 added opset10 ORT tests (#22993)
Summary:
Added a number of opset10 tests from Caffe2 to ORT
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22993

Differential Revision: D16467954

Pulled By: bddppq

fbshipit-source-id: 0b92694c7c0213bdf8e77e6f8e07e6bc8a85170a
2019-08-02 17:34:48 -07:00
Bowen Bao
638d0b3705 Support ONNX export Multinomial (#23581)
Summary:
cc bddppq spandantiwari
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23581

Differential Revision: D16584853

Pulled By: bddppq

fbshipit-source-id: 01c066e86a0ad071361cd67b8c3925bfb6b84a4a
2019-08-02 11:06:21 -07:00
liqunfu
83d6c6be07 ONNX export for index_select (#21866)
Summary:
ONNX export for index_select
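A minimal sketch of what exporting index_select could look like (the module and file name are made up):

```
import torch
import torch.nn as nn

class IndexSelect(nn.Module):
    def forward(self, x, indices):
        # Pick rows 0 and 2 along dim 0; expected to lower to an ONNX Gather.
        return torch.index_select(x, 0, indices)

torch.onnx.export(IndexSelect(),
                  (torch.randn(3, 4), torch.tensor([0, 2])),
                  "index_select.onnx")
```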
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21866

Reviewed By: zrphercule

Differential Revision: D16471345

Pulled By: houseroad

fbshipit-source-id: 745c23ba8a3223b5ec59b924df7358a36a92518c
2019-07-26 13:56:15 -07:00
liqunfu
7a0ae0079f export sort to onnx
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21913

Differential Revision: D15982801

Pulled By: houseroad

fbshipit-source-id: 96dbd738c557478fffd48000db7263ae1f9754f5
2019-07-26 00:02:20 -07:00
BowenBao
a35136dd73 Add support for onnx tensor index export (#21716)
Summary:
Support exporting
* Standard tensor indexing like
```
x = torch.ones(4, 5)
ind = torch.tensor([0, 1])

return x[ind]
```
* [Advanced indexing](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing) like
```
x = torch.ones(4,5,6,7,8)
ind1 = torch.tensor([0, 1])
ind2 = torch.tensor([[3], [2]])
ind3 = torch.tensor([[2, 2], [4, 5]])

return x[2:4, ind1, None, ind2, ind3, :]
```
It would be ideal if ONNX could natively support indexing in future opsets, but for opset <= 10 this kind of workaround will always be needed.

There are still various limitations, such as no support for advanced indexing with negative indices or for mask indices of rank > 1. My feeling is that these are less common cases that would require great effort to support with the current opset, and it's better not to make the index export more cumbersome than it already is.
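As a hedged end-to-end sketch, exporting a module that mixes slicing and advanced indexing could look like this (shapes and file name are illustrative):

```
import torch
import torch.nn as nn

class AdvancedIndex(nn.Module):
    def forward(self, x, ind1, ind2):
        # Basic slicing combined with integer-array (advanced) indexing.
        return x[2:4, ind1, ind2]

inputs = (torch.ones(4, 5, 6), torch.tensor([0, 1]), torch.tensor([[3], [2]]))
torch.onnx.export(AdvancedIndex(), inputs, "index.onnx", opset_version=10)
```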
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21716

Reviewed By: zrphercule

Differential Revision: D15902199

Pulled By: houseroad

fbshipit-source-id: 5f1cc687fc9f97da18732f6a2c9dfe8f6fdb34a6
2019-07-23 17:11:28 -07:00
BowenBao
eb5137a5d1 Export torch.arange to ONNX (#22601)
Summary:
There is some overlap with https://github.com/pytorch/pytorch/pull/21716 regarding caffe2 nonzero; whichever gets merged first, the other will be rebased accordingly.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22601

Reviewed By: zrphercule

Differential Revision: D16224660

Pulled By: houseroad

fbshipit-source-id: dbfd1b8776cb626601e0bf83b3fcca291806e653
2019-07-22 20:30:39 -07:00
BowenBao
52de340629 Export torch.masked_fill with onnx::where
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22521

Reviewed By: zrphercule

Differential Revision: D16155168

Pulled By: houseroad

fbshipit-source-id: 5d419f08213324d474b839ba1ae13c799aeee92a
2019-07-16 10:55:30 -07:00
BowenBao
b3147bc674 PyTorch export to ONNX Opset 7 and 8 - Cont (#22421)
Summary:
This is an extension to the original PR https://github.com/pytorch/pytorch/pull/21765

1. Increase coverage of support for different opsets, along with comments and blacklisting of unsupported ops (a usage sketch targeting an older opset follows this list).
2. Add backend tests for both caffe2 and onnxruntime on opset 7 and opset 8.
3. Reuse the caffe2 ONNX model tests for onnxruntime.
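
As a hedged illustration of targeting one of these older opsets, an export call might look like this (the model and file name are made up):

```
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
dummy = torch.randn(1, 4)

# Explicitly request opset 8; ops without an opset 8 symbolic should be
# reported as unsupported.
torch.onnx.export(model, dummy, "model_opset8.onnx", opset_version=8)
```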
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22421

Reviewed By: zrphercule

Differential Revision: D16225518

Pulled By: houseroad

fbshipit-source-id: 01ae3eed85111a83a0124e9e95512b80109d6aee
2019-07-12 14:52:48 -07:00
Lara
42c6ea5faa ONNX Export Topk with Dynamic k (+ add test cases)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21104

Differential Revision: D16061592

Pulled By: houseroad

fbshipit-source-id: 855b310a138fdde9c25869ffe9f127189dc2eaf5
2019-07-05 23:46:36 -07:00
Lara Haidar
7ca7edc307 ONNX Export LayerNorm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22265

Reviewed By: zrphercule

Differential Revision: D16076268

Pulled By: houseroad

fbshipit-source-id: 29b4ecab2fa0dc7250c9d1ad6924903181a66ab2
2019-07-02 09:37:07 -07:00
Chris Seymour
d8de69d621 Adds symbolic op for logsumexp
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22306

Differential Revision: D16046027

Pulled By: houseroad

fbshipit-source-id: 7319fd58321220941250c5b8eff024914798e392
2019-06-29 00:09:06 -07:00
Lara
45c6fa0007 Refactor Tests for Multiple ONNX Opsets (#20036)
Summary:
Refactor tests for https://github.com/pytorch/pytorch/pull/19294.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20036

Reviewed By: zrphercule

Differential Revision: D16016593

Pulled By: houseroad

fbshipit-source-id: eaae324e347679acf3d0ac1c14be03919f54496e
2019-06-26 17:06:57 -07:00
Lara
7b1ffba3bf ArgumentStash for Scalar arguments (#21931)
Summary:
Scalars were being traced as constants; this PR fixes that issue.

The ONNX graph for Test_Full_op() before and after this change:

```
import torch
import torch.nn as nn

def Test_Full_op():
    class Test_Full(nn.Module):
        def forward(self, x):
            # The scalar x is used as the fill value of torch.full.
            return torch.full((3, 4), x, dtype=torch.long)
    model = Test_Full()
    x = torch.tensor(12)
    output = model(x)
```

Before this change:

```
graph(%input1 : Long()):
  %output1 : Float(3, 4) = onnx::Constant[value=<Tensor>]
  return (%output1)
```

After this change:

```
graph(%input1 : Long()):
  %1 : int[] = onnx::Constant[value= 3 4 [ Variable[CPULongType]{2} ]]
  %2 : Tensor = onnx::ConstantOfShape[value={0}]
  %output1 : Float(3, 4) = onnx::Add(%2, %input1)
  return (%output1)
```

Similar PR : https://github.com/pytorch/pytorch/pull/12939
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21931

Reviewed By: zrphercule

Differential Revision: D15950066

Pulled By: houseroad

fbshipit-source-id: 3470665d88fa34faa600940ef16b069a06002cd5
2019-06-25 15:22:08 -07:00
Lu Fang
c1744a6c39 Add ONNX py3 CI cases (#21715)
Summary:
So far we only have py2 CI for ONNX. I think py3 support is important, and we plan to add onnxruntime backend tests, which require py3.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21715

Reviewed By: bddppq

Differential Revision: D15796885

Pulled By: houseroad

fbshipit-source-id: 8554dbb75d13c57b67ca054446a13a016983326c
2019-06-14 10:20:14 -07:00