Commit Graph

94 Commits

Negin Raoof
d93fc64776 Update export for topk and sort (#25739)
Summary:
Updated the export for topk and sort as part of opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25739

Reviewed By: hl475

Differential Revision: D17467131

Pulled By: houseroad

fbshipit-source-id: 653be138455728ec8e9bb81ae63dd7ce0c4d0793
2019-10-02 12:20:30 -07:00
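A minimal sketch of the kind of export the commit above targets, assuming a toy module and illustrative file name (not taken from the PR's tests):

```
import torch

class TopKSort(torch.nn.Module):
    def forward(self, x):
        values, indices = torch.topk(x, k=3)
        sorted_x, _ = torch.sort(x, dim=-1)
        return values, indices, sorted_x

x = torch.randn(4, 10)
# opset_version=11 picks up the updated topk/sort symbolics.
torch.onnx.export(TopKSort(), x, "topk_sort.onnx", opset_version=11)
```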
Negin Raoof
6b9bcd0606 export baddbmm (#26901)
Summary:
Adding symbolic for baddbmm export
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26901

Reviewed By: hl475

Differential Revision: D17620967

Pulled By: houseroad

fbshipit-source-id: 3931dff5a4afdcb4a45d967fb0efaf84029c16e5
2019-09-26 22:53:21 -07:00
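For intuition about the baddbmm export above, this is a hedged sketch of the identity an exported decomposition has to reproduce (not the symbolic implementation itself): baddbmm is beta * input + alpha * bmm(batch1, batch2), which maps onto ONNX MatMul/Mul/Add primitives.

```
import torch

input = torch.randn(2, 3, 5)
batch1 = torch.randn(2, 3, 4)
batch2 = torch.randn(2, 4, 5)
beta, alpha = 0.5, 2.0

# baddbmm(input, batch1, batch2) == beta * input + alpha * (batch1 @ batch2)
reference = torch.baddbmm(input, batch1, batch2, beta=beta, alpha=alpha)
decomposed = beta * input + alpha * torch.bmm(batch1, batch2)
assert torch.allclose(reference, decomposed, atol=1e-6)
```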
Lara Haidar
614edfce81 Add Support to Dicts and Strings in ONNX for Inputs and Outputs (#25889)
Summary:
ONNX does not support dictionaries for inputs and outputs, because the argument flattening and unflattening logic does not handle dictionary types.
This PR adds flattening/unflattening support for dictionaries and strings.
However, this feature should be handled with caution for input dictionaries: users need to verify their dict inputs carefully and keep in mind that dynamic lookups are not available.

This PR will allow exporting cases where models have dictionary outputs (detection and segmentation models in torchvision), and where dictionary inputs are used for model configurations (MultiScaleRoiAlign in torchvision).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25889

Reviewed By: hl475

Differential Revision: D17613605

Pulled By: houseroad

fbshipit-source-id: c62da4f35e5dc2aa23a85dfd5e2e11f63e9174db
2019-09-26 22:31:09 -07:00
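A minimal sketch of the case the commit above enables, assuming a toy module with a dictionary output (module and output names are illustrative); the exporter flattens the dict values into ordinary positional ONNX outputs.

```
import torch

class DetectionHead(torch.nn.Module):
    def forward(self, x):
        # Dictionary output, as in torchvision detection/segmentation models.
        return {"boxes": x * 2, "scores": x.sum(dim=1)}

x = torch.randn(3, 4)
# The dict output is flattened into positional ONNX outputs during export.
torch.onnx.export(DetectionHead(), x, "detection_head.onnx")
```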
Lu Fang
b6a1d618b2 Revert D17565828: [pytorch][PR] [ONNX] Export baddbmm
Test Plan: revert-hammer

Differential Revision:
D17565828

Original commit changeset: 85f605a7b3fa

fbshipit-source-id: 7705325087d83362f71a717be880a13e9f575b37
2019-09-25 14:24:18 -07:00
Negin Raoof
63fd10549a Export baddbmm (#25738)
Summary:
Added ONNX export for baddbmm in opset9
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25738

Reviewed By: hl475

Differential Revision: D17565828

Pulled By: houseroad

fbshipit-source-id: 85f605a7b3fa4783ef4f6ced86223133c85062d5
2019-09-25 12:28:06 -07:00
Spandan Tiwari
af3b15b74c Setting automatic default selection for ONNX IR v4 semantics in ONNX export API (#26146)
Summary:
This is a follow-up PR for https://github.com/pytorch/pytorch/pull/23284. In that PR we had removed the change to the default behavior of the `keep_initializers_as_inputs` argument to the export API. With this PR we enable that change: if `keep_initializers_as_inputs` is not specified, the value/behavior for this argument is chosen automatically depending on whether the export type is ONNX or not.

This was part of the earlier PR but was removed for further review. The test points have also been updated.

This change may fail some internal tests, which may require explicitly setting `keep_initializers_as_inputs=True` to preserve the old behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26146

Reviewed By: hl475

Differential Revision: D17369677

Pulled By: houseroad

fbshipit-source-id: 2aec2cff50d215714ee8769505ef24d2b7865a11
2019-09-24 10:02:31 -07:00
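A hedged sketch of the override described in the commit above (model and file names are illustrative): if the automatic default is not wanted, the behavior can be pinned explicitly.

```
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)

# Keep the ONNX IR v3 style: weight/bias initializers also appear as graph inputs.
torch.onnx.export(model, x, "linear_v3_style.onnx",
                  keep_initializers_as_inputs=True)

# Or rely on the automatic default, which for plain ONNX export omits them.
torch.onnx.export(model, x, "linear_v4_style.onnx")
```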
Lara
c79d116a7d Update ONNX Export for Gather and Scatter for Opset 11
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24790

Reviewed By: hl475

Differential Revision: D17159723

Pulled By: houseroad

fbshipit-source-id: a63bb7c681120de85588dafecd03f04742dde8b7
2019-09-23 17:13:25 -07:00
Negin Raoof
293d73fc92 Export gelu (#24475)
Summary:
Added support for gelu in symbolic opset9 + op and ORT tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24475

Reviewed By: hl475

Differential Revision: D17088708

Pulled By: houseroad

fbshipit-source-id: 9d2f9d7d91481c57829708793d88f786d6c3956f
2019-09-18 21:18:07 -07:00
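For reference, the erf-based identity that an opset 9 decomposition of gelu can rely on (ONNX has Erf since opset 9); this is a sanity-check sketch, not the symbolic itself:

```
import math
import torch

x = torch.randn(8)
# gelu(x) = x * 0.5 * (1 + erf(x / sqrt(2)))
expected = torch.nn.functional.gelu(x)
via_erf = x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))
assert torch.allclose(expected, via_erf, atol=1e-6)
```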
neginraoof
fcb100a3e0 Export round (#26126)
Summary:
Added round export in opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26126

Reviewed By: hl475

Differential Revision: D17403589

Pulled By: houseroad

fbshipit-source-id: f9ac3f7602c50019b9feadda8d5d944a058c5455
2019-09-16 16:40:10 -07:00
Lara Haidar
8ca93ec351 Fix torch.arange traced as constant (#25363)
Summary:
torch.arange is always traced as a constant, which makes it impossible to correctly trace TestModel() from the example below.

class TestModel(torch.nn.Module):
  def forward(self, input):
    return torch.arange(input.shape[0])
input = torch.randn(5,3,2)
print(torch.jit.trace(TestModel(), input).graph)

Currently the trace of TestModel() looks like:

graph(%self : ClassType<TestModel>,
      %input : Float(5, 3, 2)):
  %11 : int = prim::Constant[value=5]()
  %12 : int = prim::Constant[value=4]()
  %13 : int = prim::Constant[value=0]()
  %14 : Device = prim::Constant[value="cpu"]()
  %15 : bool = prim::Constant[value=0]()
  %16 : Long(5) = aten::arange(%11, %12, %13, %14, %15)
  return (%16)

This PR will allow the trace to have a variable value for %11.
The trace of TestModel() with this PR's modifications looks like:

graph(%self : ClassType<TestModel>,
      %input : Float(5, 3, 2)):
  %2 : int = prim::Constant[value=0]()
  %3 : int = aten::size(%input, %2)
  %4 : Long() = prim::NumToTensor(%3)
  %11 : Scalar = prim::ImplicitTensorToNum(%4)
  %12 : int = prim::Constant[value=4]()
  %13 : int = prim::Constant[value=0]()
  %14 : Device = prim::Constant[value="cpu"]()
  %15 : bool = prim::Constant[value=0]()
  %16 : Long(5) = aten::arange(%11, %12, %13, %14, %15)
  return (%16)

More info : https://github.com/pytorch/pytorch/issues/20075
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25363

Reviewed By: zrphercule

Differential Revision: D17301934

Pulled By: houseroad

fbshipit-source-id: d9907763742cb51d8c761bf63fc2e4918f7b9941
2019-09-11 13:39:54 -07:00
Lara Haidar
387d5a4459 Add ONNX Export Support to rsqrt
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24153

Reviewed By: zrphercule

Differential Revision: D17231150

Pulled By: houseroad

fbshipit-source-id: 621fa9069238a74101bb2a7f4792a6feb1f89606
2019-09-10 14:33:54 -07:00
neginraoof
d291935377 Export Unique
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25050

Differential Revision: D17085391

Pulled By: dzhulgakov

fbshipit-source-id: a17d54cf634650d3874d02c2bfacd906572ccf5f
2019-08-29 23:27:29 -07:00
Negin Raoof
bf978e7890 cumsum (#24476)
Summary:
Added support for cumsum in symbolic opset 11 + op and ORT tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24476

Differential Revision: D16896780

Pulled By: bddppq

fbshipit-source-id: b52355796ee9f37004c9258f710688ad4b1ae8a2
2019-08-19 16:57:04 -07:00
Diego Estrada
50161f3b3c Add ONNX Export Support to empty and empty_like (#24166)
Summary:
Empty and empty_like return uninitialized tensors with specific sizes.
The values in the tensors cannot be predicted, which is why tests in test_pytorch_onnx_onnxruntime.py and test_pytorch_onnx_caffe2.py are not added.
The tests in test_operators.py verify the onnx graph and output shape.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24166

Differential Revision: D16831571

Pulled By: bddppq

fbshipit-source-id: b2500f36ced4735da9a8418d87a39e145b74f63a
2019-08-16 10:40:18 -07:00
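Since the values are uninitialized, only the metadata is deterministic; a small illustrative sketch of the kind of check that shape-only tests can make (not the actual test code):

```
import torch

x = torch.randn(2, 3, dtype=torch.float32)
y = torch.empty_like(x)

# Only shape and dtype are predictable; the contents are arbitrary.
assert y.shape == x.shape and y.dtype == x.dtype
```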
neginraoof
3574d9ff70 updated pixel_shuffle in opset 11 to use depthToSpace
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23739

Differential Revision: D16800355

Pulled By: bddppq

fbshipit-source-id: 1502c5b7ec1495286bad17b6ffa359cf995f78fb
2019-08-15 11:37:44 -07:00
Spandan Tiwari
7583519b87 Provide argument in ONNX export to exclude initializers from graph inputs. (#23284)
Summary:
Starting with ONNX IR version 4, the initializers in the ONNX graph no longer have to be inputs of the graph. This constraint, which existed in IR version 3 and earlier, was relaxed in IR version 4. This PR provides an API-level argument to allow ONNX export with the relaxed constraint of IR version 4, i.e. it provides the option to not include initializers as inputs. This allows backends/runtimes to perform certain optimizations, such as constant folding, better.

*Edit*: After discussion with houseroad we have the following behavior. For any OperatorExportType except OperatorExportTypes.ONNX, the current export behavior is maintained by default in this PR; the user can override it by setting the `keep_initializers_as_inputs` argument to the export API. But when exporting to ONNX, i.e. when the OperatorExportType is OperatorExportTypes.ONNX, the behavior changes so that by default the initializers are NOT part of the input. Again, the default can be overridden by setting the `keep_initializers_as_inputs` argument.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23284

Differential Revision: D16459961

Pulled By: bddppq

fbshipit-source-id: b8f0270dfaba47cdb8e04bd4cc2d6294f1cb39cf
2019-08-12 14:17:25 -07:00
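One hedged way to observe the effect described above is to load the exported file with the onnx package and compare the graph's inputs against its initializers (model and file names are illustrative):

```
import onnx
import torch

model = torch.nn.Linear(4, 2)
torch.onnx.export(model, torch.randn(1, 4), "linear.onnx",
                  keep_initializers_as_inputs=False)

m = onnx.load("linear.onnx")
input_names = {i.name for i in m.graph.input}
initializer_names = {t.name for t in m.graph.initializer}
# With IR v4 semantics, the weight/bias initializers are no longer graph inputs.
print(initializer_names - input_names)
```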
neginraoof
f278aee731 Std opset export (#22310)
Summary:
Added export for std (standard deviation) op, plus onnxruntime, caffe2 and expect tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22310

Differential Revision: D16109889

Pulled By: bddppq

fbshipit-source-id: 067b2d385d463877bb99f673a18da4e5ea823426
2019-08-05 15:55:42 -07:00
neginraoof
dfd8a08f51 frobenius_norm onnx export added
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23536

Differential Revision: D16566154

Pulled By: bddppq

fbshipit-source-id: 6d076274d1d780e7d39d17ddb35ceabe55b394a3
2019-08-05 10:13:00 -07:00
Brian Vaughan
97a604ef57 Rereapply optional ScalarType interface changes that were reverted in D16079809 (#22456)
Summary:
re-apply changes reverted in:
https://github.com/pytorch/pytorch/pull/22412

Also change log_softmax to take positional arguments. Long-term we do want the kwarg-only interface, but it currently seems to be incompatible with JIT serialization.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22456

Differential Revision: D16097159

Pulled By: nairbv

fbshipit-source-id: 8cb73e9ca18fc66b35b873cf4a574b167a578b3d
2019-07-03 20:03:25 -07:00
Lara Haidar
7ca7edc307 ONNX Export LayerNorm
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22265

Reviewed By: zrphercule

Differential Revision: D16076268

Pulled By: houseroad

fbshipit-source-id: 29b4ecab2fa0dc7250c9d1ad6924903181a66ab2
2019-07-02 09:37:07 -07:00
Lu Fang
de84104059 Lint ONNX Related Code (#22423)
Summary:
Lint the code
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22423

Differential Revision: D16086518

Pulled By: houseroad

fbshipit-source-id: c6e5143f42c73a70beeaa2e089df4164f6265c32
2019-07-01 21:44:16 -07:00
Wanchao Liang
dff2c07183 Manual revert of D16012838
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22412

Reviewed By: nairbv, houseroad

Differential Revision: D16079809

fbshipit-source-id: ee0d805ff7a2bc5f98bcc65f90b8199751c840f6
2019-07-01 19:58:21 -07:00
Spandan Tiwari
83768f0756 Add ONNX export support for multidim torch.sum. (#22240)
Summary:
This change fixes the issue reported in https://github.com/pytorch/pytorch/issues/22066.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22240

Reviewed By: zrphercule

Differential Revision: D15996934

Pulled By: houseroad

fbshipit-source-id: 3a842ba26f54aa710233fbe87d727fc1f2568d9c
2019-06-27 15:02:33 -07:00
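A minimal sketch of the case the commit above enables, assuming a toy module that reduces over multiple dimensions in a single call (names are illustrative):

```
import torch

class MultiDimSum(torch.nn.Module):
    def forward(self, x):
        # Sum over more than one dimension at once.
        return torch.sum(x, dim=(1, 2))

x = torch.randn(2, 3, 4)
torch.onnx.export(MultiDimSum(), x, "multidim_sum.onnx")
```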
Brian Vaughan
7707dee761 Re apply optional ScalarType changes (#22237)
Summary:
This is (mostly) the re-application of:
https://github.com/pytorch/pytorch/pull/21088

which was reverted due to an issue conflicting with changes in:
https://github.com/pytorch/pytorch/pull/22104
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22237

Differential Revision: D16012838

Pulled By: nairbv

fbshipit-source-id: 35f4a73c97ab68b4e2648aca96b2176f07b5a883
2019-06-26 13:36:25 -07:00
Hong Xu
299ea84a70 Use latest stable flake8-bugbear in CI and fix B011 flake8 error. (#21944)
Summary:
- PyCQA/flake8-bugbear#53 has been fixed (but not yet closed on their side) and a new version of flake8-bugbear has been released on Mar 28, 2019. Switch CI to use the latest stable version.
- Fix the new B011 errors that flake8-bugbear catches in the current codebase.

 ---

B011: Do not call `assert False` since `python -O` removes these calls. Instead, callers should raise `AssertionError()`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21944

Differential Revision: D15974842

Pulled By: soumith

fbshipit-source-id: de5c2c07015f7f1c50cb3904c651914b8c83bf5c
2019-06-24 20:48:15 -07:00
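The B011 rule in concrete terms (a small illustrative snippet, not taken from the codebase): `assert False` is silently stripped when Python runs with -O, so an explicit exception should be raised instead.

```
def handle(kind):
    if kind == "a":
        return 1
    # Before (flagged by B011): stripped under `python -O`.
    # assert False, "unreachable kind"

    # After: always raises, regardless of optimization flags.
    raise AssertionError("unreachable kind: {}".format(kind))
```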
Michael Suo
e016a424ef Revert D15944971: [pytorch][PR] merge interfaces that have an optional scalartype parameter
Differential Revision:
D15944971

Original commit changeset: 53473c370813

fbshipit-source-id: a18158b448cb8993b12e1a3bf2c2a3e0d6df6b10
2019-06-24 09:41:33 -07:00
Brian Vaughan
142361a7e4 merge interfaces that have an optional scalartype parameter (#21088)
Summary:
This change is backwards incompatible in *C++ only* on mean(), sum(), and prod() interfaces that accepted either of:
```
Tensor sum(IntArrayRef dim, bool keepdim=false) const;
Tensor sum(IntArrayRef dim, ScalarType dtype) const;
```
but now specifying both the dim and dtype also requires passing the keepdim parameter:
```
Tensor sum(IntArrayRef dim, bool keepdim=false, c10::optional<ScalarType> dtype=c10::nullopt) const;
```

[xla ci]
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21088

Reviewed By: ailzhang

Differential Revision: D15944971

Pulled By: nairbv

fbshipit-source-id: 53473c370813d9470b190aa82764d0aea767ed74
2019-06-24 07:17:58 -07:00
Lara
34aee933f9 ONNX Export Interpolate (Resize) for opset version 10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/21434

Reviewed By: zrphercule

Differential Revision: D15777197

Pulled By: houseroad

fbshipit-source-id: 517b06a54a234ffdb762401e83f5a732023ed259
2019-06-19 13:40:27 -07:00
daquexian
76e01542ed Fix the shape of PReLU weight (#21330)
Summary:
Fix issue https://github.com/pytorch/pytorch/issues/21271
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21330

Reviewed By: zrphercule

Differential Revision: D15776459

Pulled By: houseroad

fbshipit-source-id: 4e0aef88e9c91c79faa3da6fa66f7466dee52018
2019-06-12 11:03:40 -07:00
BowenBao
63a55d4932 Support gather export with OneHot + Mul (#21235)
Summary:
This could serve as an alternative solution for exporting `torch.gather` before something similar goes into the ONNX spec. The exported model is verified to be correct against the onnxruntime backend. We weren't able to test against the Caffe2 backend because it doesn't seem to support OneHot in opset 9.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21235

Differential Revision: D15613039

Pulled By: houseroad

fbshipit-source-id: 7fc097f85235c071474730233ede7d83074c347f
2019-06-06 08:35:28 -07:00
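A hedged sketch of the equivalence the export above relies on, written in plain PyTorch for a 2-D gather along dim 1: a one-hot expansion of the indices, an elementwise multiply, and a reduction reproduce torch.gather.

```
import torch
import torch.nn.functional as F

x = torch.randn(3, 5)               # data
index = torch.randint(0, 5, (3, 2)) # gather indices along dim 1

# gather(x, 1, index)[n, m] == x[n, index[n, m]]
gathered = torch.gather(x, 1, index)

# OneHot + Mul + ReduceSum formulation of the same selection.
one_hot = F.one_hot(index, num_classes=x.size(1)).to(x.dtype)  # [3, 2, 5]
via_onehot = (one_hot * x.unsqueeze(1)).sum(dim=-1)            # [3, 2]
assert torch.allclose(gathered, via_onehot)
```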
Peyman Manikashani
98e3aaeb78 Adding support for exporting models with variable length input/output to ONNX (#20034)
Summary:
Proposal: https://gist.github.com/pk-g/cc45ff8c5891b5699bffd883a87f13ae?fbclid=IwAR17bRA7Fks4APoZRYiNa93UkLdoFCpRDuIYEx0lNVyPTyaDAShbEnytiQo
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20034

Reviewed By: zrphercule

Differential Revision: D15606731

Pulled By: houseroad

fbshipit-source-id: 247251e07b4893cb3f7a1287948b1f57aadb7851
2019-06-05 12:02:23 -07:00
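A hedged sketch of the resulting API surface for the variable-length export above (model, file, and axis names are illustrative): dynamic_axes marks which dimensions of which inputs/outputs are variable-length instead of baked in from the example tensor.

```
import torch

model = torch.nn.Linear(10, 2)
x = torch.randn(4, 10)

torch.onnx.export(
    model, x, "variable_batch.onnx",
    input_names=["input"], output_names=["output"],
    # Mark dimension 0 of both tensors as a symbolic "batch" axis instead of 4.
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```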
Spandan Tiwari
22865d4ce1 Add ONNX export support for torch.rand. (#20559)
Summary:
This PR adds support for torch.rand export in the PyTorch ONNX exporter. There are other generator ops that need to be supported for export, and they will be added in subsequent PRs. This op is needed with priority for a model on our end.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20559

Differential Revision: D15379653

Pulled By: houseroad

fbshipit-source-id: d590db04a4cbb256c966f4010a9361ab8eb3ade3
2019-06-03 16:09:01 -07:00
Peyman Manikashani
93d5503f34 bug fix 19374 - fix for upsample export
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20116

Differential Revision: D15256899

Pulled By: houseroad

fbshipit-source-id: cf0dfd679d528fbb77f483e23071f4a96fb27091
2019-05-23 14:48:23 -07:00
daquexian
35e0015c70 Export sign onnx operator (#20470)
Summary:
A trivial commit that adds support for exporting the sign operator
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20470

Differential Revision: D15393446

Pulled By: ezyang

fbshipit-source-id: 12fb1c147d016205abf814907d667f7d8b074ae1
2019-05-17 08:57:22 -07:00
Yanghan Wang
373e6a78bf make box plus one a legacy argument in detection ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20550

Reviewed By: newstzpz

Differential Revision: D15348610

fbshipit-source-id: 12b1e119e9bc9191ba9f2aa6d695ef215780c349
2019-05-16 18:17:12 -07:00
Kanghwan Jang
6f7a315a71 Allow onnx export for maxpool with dilations (#18721)
Summary:
The ONNX MaxPool operator now supports the 'dilations' attribute, as of this commit:
b22041c3f1
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18721

Reviewed By: zrphercule

Differential Revision: D15152400

Pulled By: houseroad

fbshipit-source-id: e8f5ab35c5c2c3a540a22f7cf7bb453d892d0400
2019-05-02 11:26:57 -07:00
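A minimal sketch of the export the commit above allows, assuming opset 10 or later (where ONNX MaxPool carries a dilations attribute); the file name is illustrative.

```
import torch

pool = torch.nn.MaxPool2d(kernel_size=3, dilation=2)
x = torch.randn(1, 3, 32, 32)

# Dilated max pooling needs an opset whose MaxPool supports 'dilations'.
torch.onnx.export(pool, x, "dilated_maxpool.onnx", opset_version=10)
```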
Spandan Tiwari
df05c7fbac Fix momentum setting in BatchNorm forward pass. (#18764)
Summary:
This is a fix for issue https://github.com/pytorch/pytorch/issues/18525. The issue is not limited to ONNX export; it can also manifest in other scenarios.
An existing test point in test/onnx/test_operators.py has been updated to cover this scenario as well.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18764

Reviewed By: zrphercule

Differential Revision: D14735166

Pulled By: houseroad

fbshipit-source-id: 5a737c648f64355929ff31eb12bd4869e744768d
2019-04-08 16:30:00 -07:00
Lu Fang
443a58e03d Export C10 operator in PyTorch Model (#18210)
Summary:
Almost there, feel free to review.

These c10 operators are exported to the _caffe2 domain.

TODO:

- [x] let the onnx checker pass
- [x] test tensor list as argument
- [x] test caffe2 backend and converter
- [x] check the c10 schema can be exported to onnx
- [x] refactor the test case to share some code
- [x] fix the problem in ONNX_ATEN_FALLBACK
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18210

Reviewed By: zrphercule

Differential Revision: D14600916

Pulled By: houseroad

fbshipit-source-id: 2592a75f21098fb6ceb38c5d00ee40e9e01cd144
2019-04-08 16:06:00 -07:00
Edward Yang
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import as unused; flake8-2 will not.  For now, I just
noqa'd all these sites.

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
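For reference, the noqa convention mentioned above, applied to a deliberately re-exported import (module and class names are illustrative):

```
# Re-exported for the public API; silence the F401 "imported but unused" lint.
from mypackage.core import Engine  # noqa: F401
```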
Lu Fang
18b31b73fb Retain the parameter names in ONNX exporter (#17551)
Summary:
So, we will keep the names of the ONNX initializers the same as the names in the PyTorch state dict.

Later, we will make this the default behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17551

Reviewed By: dzhulgakov

Differential Revision: D14491920

Pulled By: houseroad

fbshipit-source-id: f355c02e1b90d7ebbebf4be7c0fb6ae208ec795f
2019-03-20 12:11:23 -07:00
Lara Haidar-Ahmad
001cffed9d ONNX Export IsNan op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17698

Reviewed By: zrphercule

Differential Revision: D14470646

Pulled By: houseroad

fbshipit-source-id: d3e6adc83c4f9fa288c5fe0ae4c6af71fdd47905
2019-03-15 12:19:03 -07:00
Lu Fang
1043ff6d68 Set the default ONNX opset to the latest stable opset (i.e., 9) (#17736)
Summary:
1) The changes in the new opset won't affect the internal pipeline.
2) The CI won't be affected by the ONNX changes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17736

Reviewed By: zrphercule

Differential Revision: D14358710

Pulled By: houseroad

fbshipit-source-id: 4ef15d2246b50f6875ee215ce37ecf92d555ca6a
2019-03-07 10:56:06 -08:00
Lara Haidar-Ahmad
3f94fc4862 ONNX Export for Max and Average Pooling in CEIL_MODE
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16769

Differential Revision: D14362175

Pulled By: houseroad

fbshipit-source-id: 65cfb1dfba6a43d39cc85374add368fe8e4e5645
2019-03-07 10:10:21 -08:00
Lara Haidar
3dba1285ab ONNX Export Narrow op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17550

Differential Revision: D14350401

Pulled By: houseroad

fbshipit-source-id: 4d88079bb7a8bbd270b0272009826eb3b202cc33
2019-03-06 22:37:58 -08:00
Lara Haidar-Ahmad
073634612f ONNX Export Argmin and Argmax ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17382

Differential Revision: D14338811

Pulled By: houseroad

fbshipit-source-id: be07548d8063d1aa94f1801c18137738365b85fb
2019-03-06 12:11:47 -08:00
Spandan Tiwari
c658d9b21b Temporarily disable Upsample operator tests in pytorch-onnx tests (#17696)
Summary:
As discussed with houseroad: the Upsample op is being updated in ONNX (https://github.com/onnx/onnx/pull/1773) and these tests are blocking that change. These tests will be updated once the ONNX PR goes in.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17696

Differential Revision: D14338845

Pulled By: houseroad

fbshipit-source-id: cfaf8cf1ab578ae69dd3bf21b1c0681b572b9b6f
2019-03-06 11:25:34 -08:00
Lu Fang
b0c18570ca add the support for stable ONNX opsets in exporter (#16068)
Summary:
Still WIP; needs more tests and correct handling for opset 8 in the symbolics.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16068

Reviewed By: zrphercule

Differential Revision: D14185855

Pulled By: houseroad

fbshipit-source-id: 55200be810c88317c6e80a46bdbeb22e0b6e5f9e
2019-02-22 12:05:17 -08:00
Lara Haidar
b8d1f4a423 ONNX Export Maxpool Indices
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16455

Differential Revision: D14140375

Pulled By: houseroad

fbshipit-source-id: 12d02c447e7fe0fae49969d1daf40a87660ed416
2019-02-19 21:10:14 -08:00
BowenBao
19addc7eb0 Support nonzero onnx export
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17036

Differential Revision: D14079676

Pulled By: houseroad

fbshipit-source-id: 562b538dd9ab330c26f15fdb34c98dc7a23571a1
2019-02-13 23:52:42 -08:00
Edward Yang
d7e6f9b5a7 Revert D14020906: [pytorch][PR] Extend support for exporting reshape to onnx.
Differential Revision:
D14020906

Original commit changeset: 168616873044

fbshipit-source-id: 2730bb6990d41f3a9cef6625ea919c219733433d
2019-02-11 06:08:55 -08:00