Commit Graph

32 Commits

Author SHA1 Message Date
BowenBao
586c2e8d62 [ONNX] Fix graph sequence output from loop node (#51305) (#51521)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51521

* Add loop and if nodes to the list of nodes that can produce sequence-type output (see the sketch after this commit).
* Switch from `[]` to `at()` to avoid segfaults from out-of-range access.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203112

Pulled By: SplitInfinity

fbshipit-source-id: e990eeed933124b195be0be159271e33fb485063
2021-02-04 12:44:17 -08:00
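The sequence-output case targeted by the fix above can be illustrated with a minimal, hedged sketch (module name, shapes, and file name are invented for illustration): a scripted model whose output is a `List[Tensor]` built inside a loop, so the exported graph contains a Loop node that produces a sequence value.

```python
# Hedged sketch: a TorchScript loop that builds a List[Tensor] output,
# exercising the loop-node-produces-sequence path described above.
from typing import List

import torch


class LoopToSequence(torch.nn.Module):  # illustrative name
    def forward(self, x: torch.Tensor) -> List[torch.Tensor]:
        outs: List[torch.Tensor] = []
        for i in range(x.size(0)):
            outs.append(x[i] * (i + 1))
        return outs


scripted = torch.jit.script(LoopToSequence())
x = torch.randn(4, 3)
# Sequence ops (SequenceConstruct/SequenceInsert) require opset >= 11.
torch.onnx.export(scripted, (x,), "loop_sequence.onnx", opset_version=11)
```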
BowenBao
3f185ac18e [ONNX] Export get/set attribute nodes (#50768) (#51517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51517

Fix get/set attribute nodes when getting/setting a model parameter (see the sketch after this commit).
This PR also fixes inplace ops in If blocks.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26203116

Pulled By: SplitInfinity

fbshipit-source-id: bed6ee6dd92b5b43febc8c584a6872290f8fe33f
2021-02-04 12:43:59 -08:00
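As a hedged illustration of the get/set-attribute pattern (the attribute name and shapes are invented, not taken from the PR), a scripted module that reads and writes one of its own tensor attributes in `forward()` produces `prim::GetAttr`/`prim::SetAttr` nodes, which is what the export fix above has to handle.

```python
# Hedged sketch: attribute get/set inside a scripted forward().
import torch


class Counter(torch.nn.Module):  # illustrative
    def __init__(self):
        super().__init__()
        self.register_buffer("count", torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reading self.count emits prim::GetAttr; assigning to it emits prim::SetAttr.
        self.count = self.count + 1
        return x + self.count


scripted = torch.jit.script(Counter())
print(scripted.graph)  # inspect the GetAttr/SetAttr nodes the exporter must handle
```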
BowenBao
68034197e8 [ONNX] Support gelu for fp16 export (#50487) (#50911)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50911

Need to replace the dtype of export-created scalars from float to double. (In torch's implicit conversion logic, Python numbers are doubles.) A minimal fp16 export sketch follows this commit.

The test case is skipped in CI because the current CI job environment does not have CUDA support.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D26050889

Pulled By: SplitInfinity

fbshipit-source-id: 1fdde23a68d4793e6b9a82840acc213e5c3aa760
2021-01-27 17:49:02 -08:00
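A minimal, hedged sketch of the fp16 export scenario (module, shapes, and file name are illustrative; CUDA is required, which is why the CI test was skipped):

```python
# Hedged sketch: exporting a half-precision GELU model to ONNX.
import torch


class GeluModel(torch.nn.Module):  # illustrative
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.gelu(x)


if torch.cuda.is_available():  # fp16 gelu export needs a CUDA build/runtime
    model = GeluModel().half().cuda()
    x = torch.randn(2, 4, dtype=torch.float16, device="cuda")
    torch.onnx.export(model, (x,), "gelu_fp16.onnx", opset_version=12)
```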
neginraoof
137f2a385a [ONNX] Handle sequence output for models (#50599)
Summary:
Duplicate of https://github.com/pytorch/pytorch/issues/46542

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50599

Reviewed By: SplitInfinity

Differential Revision: D25928897

Pulled By: bzinodev

fbshipit-source-id: a898cef7b2d15a287aedd9798ce1423cebf378d4
2021-01-21 15:36:41 -08:00
Brian Vaughan
a9db2f8e7a Revert D24924236: [pytorch][PR] [ONNX] Handle sequence output shape and type inference
Test Plan: revert-hammer

Differential Revision:
D24924236 (adc65e7c8d)

Original commit changeset: 506e70a38cfe

fbshipit-source-id: 78069a33fb3df825af1cb482da06a07f7b26ab48
2021-01-15 05:58:35 -08:00
Negin Raoof
adc65e7c8d [ONNX] Handle sequence output shape and type inference (#46542)
Summary:
Handle sequence output shape and type inference.

This PR fixes the value type of sequence outputs. Prior to this, all sequence-type model outputs were unfolded in exported ONNX models.
This PR also enables shape inference for sequence outputs to represent the dynamic shape of these values.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46542

Reviewed By: ezyang

Differential Revision: D24924236

Pulled By: bzinodev

fbshipit-source-id: 506e70a38cfe31069191d7f40fc6375239c6aafe
2021-01-14 21:12:35 -08:00
Spandan Tiwari
aeefe2ce31 [ONNX] ONNX dev branch merge 01-06-2021 (#50163)
Summary:
[ONNX] ONNX dev branch merge 01-06-2021
- [ONNX] Support onnx if/loop sequence output in opset 13 - (https://github.com/pytorch/pytorch/issues/49270)
- Symbolic function for torch.square (https://github.com/pytorch/pytorch/issues/49446)
- [ONNX] Add checks in ONNXSetDynamicInputShape (https://github.com/pytorch/pytorch/issues/49783) …
- [ONNX] Enable export of aten::__derive_index (https://github.com/pytorch/pytorch/issues/49514) …
- [ONNX] Update symbolic for unfold (https://github.com/pytorch/pytorch/issues/49378) …
- [ONNX] Update the sequence of initializers in the exported graph so that it matches the order of the inputs. (https://github.com/pytorch/pytorch/issues/49798)
- [ONNX] Enable opset 13 ops (https://github.com/pytorch/pytorch/issues/49612) …
- [ONNX] Improve error message for supported model input types in ONNX export API. (https://github.com/pytorch/pytorch/issues/50119)
- [ONNX] Add a post-pass for If folding (https://github.com/pytorch/pytorch/issues/49410)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/50163

Reviewed By: pbelevich

Differential Revision: D25821059

Pulled By: SplitInfinity

fbshipit-source-id: 9f511a93d9d5812d0ab0a49d61ed0fa5f8066948
2021-01-13 13:51:21 -08:00
Jane Xu
52fe73a39e Enable Python code coverage for onnx runs (#47387)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/44120

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47387

Reviewed By: heitorschueroff

Differential Revision: D24737378

Pulled By: janeyx99

fbshipit-source-id: 79e3d0b62f7da0617330f312fb1ed548c6be2a3b
2020-11-09 20:52:14 -08:00
Ksenija Stanojevic
7a599870b0 [ONNX] Update peephole pass for prim::ListUnpack (#46264)
Summary:
Update the pass that handles prim::ListUnpack in the peephole file so that it also covers the case where the input to the node is of ListType.

Fixes https://github.com/pytorch/pytorch/issues/45816

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46264

Reviewed By: mrshenli

Differential Revision: D24566070

Pulled By: bzinodev

fbshipit-source-id: 32555487054f6a7fe02cc17c66bcbe81ddf9623e
2020-11-05 09:42:24 -08:00
Jane Xu
4189c3ca76 Fix onnx test-reports path in CI (#47315)
Summary:
Currently, no test reports are uploaded to CI because the paths for the `onnx` runs are incorrect. This PR attempts to change that.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47315

Reviewed By: malfet

Differential Revision: D24727607

Pulled By: janeyx99

fbshipit-source-id: f6d91698fdb15a39e01ef812032d4cd30621f864
2020-11-04 10:30:52 -08:00
BowenBao
3da4cea658 [ONNX] Add dim_param support in export with onnx shape inference (#44920)
Summary:
* Support propagating `dim_param` in ONNX by encoding it as a `ShapeSymbol` in the `SymbolicShape` of outputs. If export is called with `dynamic_axes` provided, shape inference starts with those axes set as dynamic (see the export sketch after this commit).
* Add new test file `test_pytorch_onnx_shape_inference.py`, reusing all test cases from `test_pytorch_onnx_onnxruntime.py` but focusing on validating shapes for all nodes in the graph. Currently this is not enabled in CI, since there are still quite a few existing issues and corner cases to fix. The test defaults to running only at opset 12.
* Bug fixes, such as for div, _len, and the peephole.cpp passes for PackPadded and LogSoftmaxCrossEntropy.
* This PR depends on existing PRs such as #44332.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44920

Reviewed By: eellison

Differential Revision: D23958398

Pulled By: bzinodev

fbshipit-source-id: 00479d9bd19c867d526769a15ba97ec16d56e51d
2020-09-30 21:56:24 -07:00
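For reference, a hedged sketch of the `dynamic_axes` usage that drives `dim_param` propagation during export (the model, axis name, and file name are illustrative):

```python
# Hedged sketch: exporting with dynamic_axes so the batch dimension becomes
# a named dim_param in the ONNX graph, which shape inference treats as dynamic.
import torch

model = torch.nn.Linear(8, 4)
x = torch.randn(3, 8)
torch.onnx.export(
    model,
    (x,),
    "linear_dynamic.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=12,
)
```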
Rong Rong
105132b891 Move ONNX circle ci build to torch and remove all caffe2 CI job/workflows (#44595)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44595

Reviewed By: seemethere

Differential Revision: D23670280

Pulled By: walterddr

fbshipit-source-id: b32633912f6c8b4606be36b90f901e636567b355
2020-09-14 09:50:13 -07:00
Negin Raoof
7f1c9886cd [ONNX] Enable models tests (#38791)
Summary:
PR to enable model tests that have been fixed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38791

Reviewed By: hl475

Differential Revision: D21732498

Pulled By: houseroad

fbshipit-source-id: f417f9d4124ef5a663dc666d5c2ed6ba013b26a4
2020-05-27 09:09:59 -07:00
Lara Haidar
728c7dcea3 ONNX Update training ops and training amenable export API (#35567)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35567

Reviewed By: hl475

Differential Revision: D20715339

Pulled By: houseroad

fbshipit-source-id: ad88097e76b169035ab5814b769dc1bed54c6008
2020-03-29 23:14:25 -07:00
Alban Desmaison
45e1be9762 Revert D19710370: [pytorch][PR] ONNX Update training ops and training amenable export API
Test Plan: revert-hammer

Differential Revision:
D19710370

Original commit changeset: e5e79d385529

fbshipit-source-id: d0114dc561a3415869805d3fbf43b92730bbcf54
2020-03-27 06:51:05 -07:00
Lara Haidar
025a0abe5a ONNX Update training ops and training amenable export API (#32950)
Summary:
- Update Dropout and BatchNorm in opset 12: https://github.com/onnx/onnx/pull/2568
- Update API logic for exporting training-amenable ONNX models
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32950

Reviewed By: hl475

Differential Revision: D19710370

Pulled By: houseroad

fbshipit-source-id: e5e79d38552936966662c41d39ddf33be1ba3e35
2020-03-27 00:39:39 -07:00
neginraoof
beb4309406 [ONNX] Reduce ONNX test time on CI (#33242)
Summary:
Among all ONNX tests, the ONNX Runtime tests take the most time on CI (almost 60%).
This is because we test larger models (mainly torchvision RCNNs) across multiple ONNX opsets.
I decided to divide the tests between two jobs for older/newer opsets, which reduces the test time from 2h to around 1h10min.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33242

Reviewed By: hl475

Differential Revision: D19866498

Pulled By: houseroad

fbshipit-source-id: 446c1fe659e85f5aef30efc5c4549144fcb5778c
2020-03-05 14:38:34 -08:00
Junjie Bai
b44c0f328e Skip same tests in ONNX Python3 CI as in Python2 (#31827)
Summary:
resolve https://github.com/pytorch/pytorch/issues/31103

VGG models were not tested in Python 2 but are enabled in Python 3.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31827

Reviewed By: houseroad

Differential Revision: D19274123

Pulled By: bddppq

fbshipit-source-id: c48beb574e8b03b2adbd6c9d8ca3f600bee93024
2020-01-03 12:42:42 -08:00
Lu Fang
b96610bf5a fix the CI job for onnx (#22946)
Summary:
ONNX uses virtualenv and PyTorch doesn't, so the --user flag is causing problems in the ONNX CI.

Fix it by moving the flag to the PyTorch-only scripts; ninja will be installed separately in the ONNX CI.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22946

Reviewed By: bddppq

Differential Revision: D16297781

Pulled By: houseroad

fbshipit-source-id: 52991abac61beaf3cfbcc99af5bb1cd27b790485
2019-07-17 09:50:06 -07:00
BowenBao
b3147bc674 PyTorch export to ONNX Opset 7 and 8 - Cont (#22421)
Summary:
This is an extension to the original PR https://github.com/pytorch/pytorch/pull/21765

1. Increase coverage of support for different opsets, along with comments and blacklisting.
2. Add backend tests for both Caffe2 and ONNX Runtime on opset 7 and opset 8.
3. Reuse the ONNX model tests in Caffe2 for ONNX Runtime.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22421

Reviewed By: zrphercule

Differential Revision: D16225518

Pulled By: houseroad

fbshipit-source-id: 01ae3eed85111a83a0124e9e95512b80109d6aee
2019-07-12 14:52:48 -07:00
BowenBao
319ef3bcbb Fix onnx custom op export & add initial test case (#21321)
Summary:
- Fix typo in ```torch/onnx/utils.py``` when looking up registered custom ops.
- Add a simple test case (see the registration sketch after this commit):
    1. Register the custom op with ```TorchScript``` using ```cpp_extension.load_inline```.
    2. Register the custom op with ```torch.onnx.symbolic``` using ```register_custom_op_symbolic```.
    3. Export a model with the custom op, and verify with the Caffe2 backend.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21321

Differential Revision: D16101097

Pulled By: houseroad

fbshipit-source-id: 084f8b55e230e1cb6e9bd7bd52d7946cefda8e33
2019-07-03 16:59:12 -07:00
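A hedged sketch of the symbolic-registration half of the test (the op name `mylib::my_relu` and the symbolic body are illustrative; the cpp_extension.load_inline step that defines the op is omitted):

```python
# Hedged sketch: mapping a custom TorchScript op onto ONNX via
# register_custom_op_symbolic; the custom op itself is assumed to be
# registered already (e.g. through cpp_extension.load_inline).
import torch
from torch.onnx import register_custom_op_symbolic


def my_relu_symbolic(g, input):
    # Lower the (hypothetical) custom op onto the standard ONNX Relu node.
    return g.op("Relu", input)


register_custom_op_symbolic("mylib::my_relu", my_relu_symbolic, opset_version=9)
```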
Lu Fang
c1744a6c39 Add ONNX py3 CI cases (#21715)
Summary:
So far, we only have py2 CI for ONNX. I think py3 support is important. And we plan to add onnxruntime backend tests, which only support py3.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21715

Reviewed By: bddppq

Differential Revision: D15796885

Pulled By: houseroad

fbshipit-source-id: 8554dbb75d13c57b67ca054446a13a016983326c
2019-06-14 10:20:14 -07:00
Edward Yang
d6af6588c2 Super resolution export to Caffe2 is broken, skip it. (#21479)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21479
ghimport-source-id: 60fa97fb2dfb37a758c0e8b9c2bc0fb2819fd2f7

Differential Revision: D15713609

Pulled By: ezyang

fbshipit-source-id: a3d9c49e2db985f4373508cd44e94d43ae6e24da
2019-06-07 05:46:26 -07:00
Junjie Bai
bd53c8eb93 Move torchvision install out of onnx test script
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20890

Differential Revision: D15486657

Pulled By: bddppq

fbshipit-source-id: 3acd7386d1f070cad9bd43d6e74244b706c0dc16
2019-05-23 18:02:48 -07:00
Junjie Bai
90182a7332 Install torchvision from master
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20836

Differential Revision: D15464705

Pulled By: bddppq

fbshipit-source-id: abe2ac2de2bf4c8d07334e6b2565c738c40428ae
2019-05-23 02:16:57 -07:00
Lu Fang
7db4c8ed76 fix the onnx ci
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19048

Reviewed By: yinghai

Differential Revision: D14844917

Pulled By: houseroad

fbshipit-source-id: 30719e05a443981284dedf34a9e51213271aa934
2019-04-08 23:07:31 -07:00
Jesse Hellemn
8964a2e6e6 Split Caffe2 CI into cmake-only and python builds (#15917)
Summary:
bypass-lint

- Change all Caffe2 builds to use setup.py instead of cmake
- Add a -cmake- Caffe2 build configuration that uses cmake and only builds cpp
- Move skipIfCI logic from onnx test scripts to the rest of CI logic
- Removal of old PYTHONPATH/LD_LIBRARY_PATH/etc. env management
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15917

Reviewed By: orionr

Differential Revision: D13637583

Pulled By: pjh5

fbshipit-source-id: c5c5639db0251ba12b6e4b51b2ac3b26a8953153
2019-01-14 15:20:44 -08:00
Yinghai Lu
1756daaa75 Use FULL_CAFFE2 to build caffe2 and python in one shot (#10427)
Summary:
Building caffe2 and pytorch separately ends up with duplicated symbols, as they now share some basic libs, and that is especially bad for the registry. This PR fixes our CI and builds them in one shot with shared symbols.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10427

Reviewed By: bddppq

Differential Revision: D9282372

Pulled By: yinghai

fbshipit-source-id: 0514931ea88277029a68fa5368ff4336472f132e
2018-08-12 15:39:12 -07:00
Junjie Bai
3be8e4db51 Do not run ONNX integration tests in parallel
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/9861

Differential Revision: D9011458

Pulled By: bddppq

fbshipit-source-id: 7ab1b1763d56f1290ade7a99682ad461c97f807b
2018-07-25 21:54:29 -07:00
bddppq
3af3d13599 Run onnx integration tests in caffe2 CI (#7565)
* Run onnx integration tests in caffe2 CI

* verbose log

* turn off onnx verbose installation log

* cannot install ninja

* Do not use all cores to build pytorch

* install tests require

* pip install to user dir

* use determined path to improve (s)ccache hit

* Do not change path in test.sh

* Add the compile cache hit trick to conda install as well

* cover jenkins in CI environment detection
2018-05-15 13:25:24 -07:00
bddppq
cf9913d569 Install torchvision before running integration tests (#7552)
2018-05-14 11:49:10 -07:00
bddppq
141d81d095 Move ONNX integration tests from onnx-fb-universe to PyTorch repo (#7397)
* Move ONNX integration tests from onnx-fb-universe to PyTorch repo

* Switch to use torchvision

* Delete single rnn operator tests, they have been covered in e2e tests in test_caffe2.py

* Mirror the fix in onnx-fb-universe to bypass cuda check (667326d84b)
2018-05-11 15:05:18 -07:00