Commit Graph

134 Commits

Author SHA1 Message Date
Yan Li
03ab65023a backout D33469839 (#71443)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71443

cogwheel test inline_cvr_infer_canary_pyper_model_publish is timing out.

The convert_fx call takes > 20 mins for the local and local_ro submodules, which used to take ~ 2 mins.

Test Plan:
Fblearn flow run
* the following cmd took 1113 seconds before the diff and 5002 seconds after.
    flow-cli clone-locally 320014219  --run-as-secure-group pytorch_at_scale  --operators pyper_model_publish_workflow.pyper_model_publish_workflow.process_torch_package_model_files.process_non_sparse_parameters[0]

Cogwheel test
* Cogwheel test with packages in B3588 (the last good run) took 4694.48s
* Cogwheel test with packages in B3590 (the first timeout) took 13975.83s
* Cogwheel test with the following packages took 4535.04s
  * all packages in B3588 except the model publish
  * the model publish built with D33469839 (043e84b3d2) reversed (created D33633570)

Reviewed By: albanD, jerryzh168

Differential Revision: D33633570

fbshipit-source-id: dc5e777c48a90c551641a3f79126461f6a60449e
2022-01-18 15:49:02 -08:00
BowenBao
ff78c73286 [ONNX] Remove f arg from export_to_pretty_string (#69045) (#69546)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69546

The arg is not used and was previously deprecated.

Also remove torch.onnx._export_to_pretty_string. It's redundant with the
public version.

Test Plan: Imported from OSS

Reviewed By: malfet

Differential Revision: D32994270

Pulled By: msaroufim

fbshipit-source-id: f8f3933b371a0d868d9247510bcd73c31a9d6fcc
2022-01-12 21:31:36 -08:00
anjali411
043e84b3d2 Per-overload torch.ops API (#67254)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67254

Fixes https://github.com/pytorch/pytorch/issues/65997

BC breaking:
`output = torch.ops._test.leaky_relu(self=torch.tensor(-1.0))` now fails with the error `TypeError: __call__() got multiple values for argument 'self'` since we call into `OpOverloadBundle`'s `__call__` method that has `self` bound to it as its first argument.
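
For illustration only (not part of the original commit), a sketch of how a caller can avoid the collision, assuming the `_test.leaky_relu` op from the example above is registered; the `.default` overload name used below is also an assumption:

    import torch

    x = torch.tensor(-1.0)
    # Passing the tensor positionally avoids clashing with the bound `self`
    # of OpOverloadBundle.__call__.
    out = torch.ops._test.leaky_relu(x)
    # Per-overload access introduced by this change (assumed spelling):
    out = torch.ops._test.leaky_relu.default(x)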

Follow up work:
1. disallow `default` as an overload name for aten operators.
2. Add a method to obtain a list of all overloads (excluding the ones registered by JIT)
3. Add methods/properties to `OpOverload` to access more schema information (types of input and output args etc)

cc ezyang gchanan

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D33469839

Pulled By: anjali411

fbshipit-source-id: c3fc43460f1c7c9651c64b4d46337be21c400621
2022-01-10 17:29:06 -08:00
BowenBao
3f02ad09ec [ONNX] shapeValueMap: Represent symbolic shape as value (#68203) (#69545)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69545

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32994272

Pulled By: malfet

fbshipit-source-id: 77cbdd78d01712faf4f9703549a2833340954509

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-12-09 22:00:46 -08:00
Bowen Bao
02e35ce17b [ONNX] Update onnx function export with comments and clean up (#66817) (#67803)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67803

* Addresses comments from #63589

[ONNX] remove torch::onnx::PRODUCER_VERSION (#67107)

Use constants from version.h instead.
This simplifies things since we no longer have to update
PRODUCER_VERSION for each release.

Also add TORCH_VERSION to version.h so that a string is available for
this purpose.

[ONNX] Set `ir_version` based on opset_version. (#67128)

This increases the odds that the exported ONNX model will be usable.
Before this change, we were setting the IR version to a value which may
be higher than what the model consumer supports.
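
A hedged way to check the effect, assuming the `onnx` Python package is available (file name made up):

    import onnx
    import torch

    model = torch.nn.Linear(2, 2)
    torch.onnx.export(model, (torch.randn(1, 2),), "m.onnx", opset_version=9)
    m = onnx.load("m.onnx")
    # After this change the IR version is derived from the opset, so older
    # consumers that only support that opset can still read the model.
    print(m.ir_version, m.opset_import[0].version)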

Also some minor clean-up in the test code:
* Fix string replacement.
* Use a temporary file so as to not leave files around in the test
  current working directory.

Test Plan: Imported from OSS

Reviewed By: msaroufim

Differential Revision: D32181306

Pulled By: malfet

fbshipit-source-id: 02f136d34ef8f664ade0bc1985a584f0e8c2b663

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
Co-authored-by: Nikita Shulga <nshulga@fb.com>
2021-11-05 10:35:35 -07:00
Jane Xu
5347dab851 Set test owners for onnx tests (#66860)
Summary:
Action following https://github.com/pytorch/pytorch/issues/66232

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66860

Reviewed By: malfet

Differential Revision: D31964696

Pulled By: janeyx99

fbshipit-source-id: 4e77d1bda92d9107ca0b90a06d24fa4477ceaffa
2021-10-27 12:50:45 -07:00
Nikita Shulga
136abf5aff [ONNX] Update sum symbolic to handle dtypes (#64289) (#66141)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66141

* Update aten::sum symbolic for dtype

* Remove nesting and modify operator tests

* Fix expect files

[ONNX] Fix expect files added in #64289 (#65356)
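
For illustration, a minimal sketch (not part of the original commit) of the kind of call the updated symbolic covers:

    import torch

    class SumWithDtype(torch.nn.Module):
        def forward(self, x):
            # sum with an explicit dtype, now handled by the aten::sum symbolic
            return torch.sum(x, dim=1, dtype=torch.float64)

    torch.onnx.export(SumWithDtype(), (torch.randn(2, 3),), "sum_dtype.onnx")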

Test Plan: Imported from OSS

Reviewed By: jansel

Differential Revision: D31424091

fbshipit-source-id: d4af21e9f0d7e1c68bf6ef2f3e385db84b4c53f3
2021-10-22 13:46:12 -07:00
BowenBao
478d4cf883 [ONNX] Deprecated the example_outputs param from torch.onnx.export() function. (#62815) (#64380)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64380

* `example_outputs` is used to determine the type and shape of the outputs without tracing the execution of the model, and it had to be provided when exporting a ScriptModule or ScriptFunction with the export() function.

* Since `example_outputs` can be worked out internally instead of being provided by the user, this argument is deprecated in the export() function to improve the experience of calling it (see the sketch below).
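
A hedged before/after sketch (module and file names are made up):

    import torch

    class M(torch.nn.Module):
        def forward(self, x):
            return x * 2

    script_module = torch.jit.script(M())
    x = torch.randn(2, 3)

    # Before: example_outputs had to be supplied when exporting a ScriptModule.
    # torch.onnx.export(script_module, (x,), "m.onnx", example_outputs=(script_module(x),))

    # After this change: the exporter works out the outputs internally.
    torch.onnx.export(script_module, (x,), "m.onnx")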

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905266

Pulled By: malfet

fbshipit-source-id: d00b00d7d02b365d165028288ad915678caa51f2

Co-authored-by: hwangdeyu <dejack953@outlook.com>
2021-09-23 22:20:46 -07:00
BowenBao
47d1ed60e1 [ONNX] Remove argument _retain_param_name from torch.onnx.export() function. (#61702) (#64370)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64370

As of now, the "_retain_param_name" parameter has no description on the PyTorch docs website. According to the code, this argument determines whether the original parameter names of the PyTorch model are kept in the final ONNX graph. If it is False, those original parameter names are replaced with a series of integers starting from 1.

Since using numbers as parameter names makes no sense to users, we remove this argument from the torch.onnx.export() function to improve the experience of calling it.

This PR still keeps the argument in torch.onnx.export() for backward compatibility, while all backend logic has been changed to behave as if _retain_param_name were set to True.
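
An illustrative sketch (module and file names are made up); after this change the exported parameter names simply match the PyTorch names:

    import torch

    model = torch.nn.Linear(4, 2)
    x = torch.randn(1, 4)

    # Parameter names such as "weight" and "bias" are kept in the ONNX graph,
    # as if _retain_param_name=True; there is no need to pass the argument.
    torch.onnx.export(model, (x,), "linear.onnx")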

Test Plan: Imported from OSS

Reviewed By: ezyang

Differential Revision: D30905270

Pulled By: malfet

fbshipit-source-id: ca60757ca17daaff937e9f08da42596086795f4a

Co-authored-by: fatcat-z <zhang-ji@outlook.com>
2021-09-23 22:18:52 -07:00
BowenBao
6512838fab [ONNX] Enhance shape (two changes merged) (#64585)
Summary:
Enhanced shape inference by introducing typeReliableMap.
[ONNX] exporter changes for torch hub models (https://github.com/pytorch/pytorch/issues/62856)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64585

Reviewed By: ezyang

Differential Revision: D30870418

Pulled By: msaroufim

fbshipit-source-id: 87a294799cb87d649d1d13b6114a5cfbac9be15c

Co-authored-by: jiafatom <jiafa@microsoft.com>
2021-09-15 13:02:19 -07:00
BowenBao
2aa19f33c6 [ONNX] Fix for batchnorm training op mode (#52758) (#62760)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/62760

* Rebase

# Conflicts:
#	torch/csrc/jit/passes/onnx/eval_peephole.cpp

# Conflicts:
#	test/onnx/test_utility_funs.py
#	torch/onnx/symbolic_opset9.py

* Update symbolic_opset12.py

* Update test.sh
# Conflicts:
#	.jenkins/caffe2/test.sh

* Merge

* Fix utility tests

# Conflicts:
#	test/onnx/test_pytorch_onnx_onnxruntime.py
#	test/onnx/test_utility_funs.py

* Fix for comment

* Enable BN tests

* Fix for test

* Update test_pytorch_onnx_onnxruntime.py

* Update test_pytorch_onnx_onnxruntime.py

* Update test_utility_funs.py

* Update test_pytorch_onnx_onnxruntime.py

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D30349060

Pulled By: msaroufim

fbshipit-source-id: 93312c17607974731c17099ae181acb6e4c1c409
2021-08-18 13:29:07 -07:00
BowenBao
0a6828a306 [ONNX] use consistent quoting for string literals (#57757) (#58695)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58695

As PEP8 says: "Pick a rule and stick to it." [1]

[1] https://www.python.org/dev/peps/pep-0008/#string-quotes

Test Plan: Imported from OSS

Reviewed By: driazati

Differential Revision: D28714811

Pulled By: SplitInfinity

fbshipit-source-id: c95103aceb1725c17c034dc6fc8216627f189548

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
2021-05-27 12:06:42 -07:00
Antonio Cuni
980d6f2589 torch.linalg.det (#53119)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/51652.
In particular:
- the main implementation is in `torch.linalg.det` now. `torch.det` is just a deprecated alias to it
- add a new `OpInfo` for `torch.linalg.det`
- remove the old-style tests for `torch.det` (this is similar to what we did for `torch.linalg.slogdet`, see https://github.com/pytorch/pytorch/issues/49194)
- added an `out=` argument to `torch.linalg.det`, but **not** to `torch.det` (see the usage sketch below).
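
A brief usage sketch (illustrative, not from the PR):

    import torch

    A = torch.randn(3, 3)
    d = torch.linalg.det(A)          # new main implementation
    out = torch.empty(())
    torch.linalg.det(A, out=out)     # out= is supported here ...
    d_old = torch.det(A)             # ... while torch.det is just a deprecated alias, no out=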

It is worth noting that I had to skip a few tests:
- `TestGradientsCuda::test_fn_gradgrad_linalg_det_cuda_float64`. This is not a regression: the functionality is broken also on master, but the test is not executed properly due to https://github.com/pytorch/pytorch/issues/53361.

And the following tests which fails only on ROCm:
- `test_variant_consistency_jit_cuda_{float64,float32}`
- `test_fn_grad_cuda_float64`

I think that the ROCm tests fail because the current linalg.det backward is unstable if the matrix has repeated singular values, see https://github.com/pytorch/pytorch/issues/53364 .

(At the moment of writing some CI jobs are still running but I believe the build will be green, since the only difference wrt the last push is the skip of the ROCm tests)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53119

Reviewed By: H-Huang

Differential Revision: D27441999

Pulled By: mruberry

fbshipit-source-id: 5eab14c4f0a165e0cf9ec626c3f4bb23359f2a9e
2021-04-05 08:45:27 -07:00
shubhambhokare1
e1c1a7e964 [ONNX] Changes to export API to better handle named arguments (#47367)
Summary:
The args parameter of ONNX export is changed to better support optional arguments, such that args is represented as:
args (tuple of arguments or torch.Tensor, optionally ending with a dictionary of named arguments):
            the optional trailing dictionary specifies the input for each named parameter:
            - KEY: str, name of the parameter
            - VALUE: corresponding input
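
A hedged example of the new calling convention (module and names are hypothetical):

    import torch

    class M(torch.nn.Module):
        def forward(self, x, y=None):
            return x if y is None else x + y

    m = M()
    x = torch.randn(2)
    y = torch.randn(2)

    # The last element of args may be a dict mapping named parameters to inputs.
    torch.onnx.export(m, (x, {"y": y}), "m.onnx")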

Pull Request resolved: https://github.com/pytorch/pytorch/pull/47367

Reviewed By: H-Huang

Differential Revision: D25432691

Pulled By: bzinodev

fbshipit-source-id: 9d4cba73cbf7bef256351f181f9ac5434b77eee8
2020-12-10 12:31:00 -08:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
Mike Ruberry
cb26661fe4 Throws runtime error when torch.full would infer a float dtype from a bool or integral fill value (#40364)
Summary:
BC-breaking NOTE:

In PyTorch 1.6, bool and integral fill values given to torch.full must set the dtype or out keyword arguments. In prior versions of PyTorch these fill values would return float tensors by default, but in PyTorch 1.7 they will return a bool or long tensor, respectively. The documentation for torch.full has been updated to reflect this.

PR NOTE:

This PR causes torch.full to throw a runtime error when it would have inferred a float dtype by being given a boolean or integer value. A versioned symbol for torch.full is added to preserve the behavior of already serialized Torchscript programs. Existing tests for this behavior being deprecated have been updated to reflect it now being unsupported, and a couple new tests have been added to validate the versioned symbol behavior. The documentation of torch.full has also been updated to reflect this change.
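
A short illustrative sketch of the behavior described above:

    import torch

    # PyTorch 1.6 (this PR): inferring a float dtype from an integral fill value
    # raises a RuntimeError, so the dtype must be given explicitly.
    t = torch.full((2, 3), 7, dtype=torch.float)   # ok: explicit dtype
    t = torch.full((2, 3), 7, dtype=torch.long)    # ok: explicit dtype
    # torch.full((2, 3), 7)                        # error in 1.6; returns a long tensor in 1.7+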
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40364

Differential Revision: D22176640

Pulled By: mruberry

fbshipit-source-id: b20158ebbcb4f6bf269d05a688bcf4f6c853a965
2020-06-23 23:27:22 -07:00
Ksenija Stanojevic
2f7f47eba1 [ONNX]Enable tests in test_operators.py (#39431)
Summary:
Enable Dropout and SoftmaxCrossEntropy tests in test_operators.py
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39431

Reviewed By: hl475

Differential Revision: D21877501

Pulled By: houseroad

fbshipit-source-id: 1e9b1e5cf80dc1843bdbde2662f3339e357c6654
2020-06-03 21:49:19 -07:00
Negin Raoof
b7b99ab0c8 [ONNX] Remove Aten ops from ONNX export (#37239)
Summary:
This PR adds a new operator export type to exporter: ONNX_FALLTHROUGH
This new type allows ops that are not supported to pass through.
This PR also removes all aten ops in ONNX operator export type mode.
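
A hedged sketch of selecting the new export type (the model here is a stand-in):

    import torch

    model = torch.nn.ReLU()
    x = torch.randn(2, 3)
    # Unsupported ops are passed through to the ONNX graph instead of failing the export.
    torch.onnx.export(
        model, (x,), "fallthrough.onnx",
        operator_export_type=torch.onnx.OperatorExportTypes.ONNX_FALLTHROUGH,
    )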
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37239

Reviewed By: hl475

Differential Revision: D21440509

Pulled By: houseroad

fbshipit-source-id: 38b826677cf3431ea44868efebefe1ff51c9aa75
2020-05-29 21:20:14 -07:00
Negin Raoof
7f1c9886cd [ONNX] Enable models tests (#38791)
Summary:
PR to enable model tests which have been fixed.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38791

Reviewed By: hl475

Differential Revision: D21732498

Pulled By: houseroad

fbshipit-source-id: f417f9d4124ef5a663dc666d5c2ed6ba013b26a4
2020-05-27 09:09:59 -07:00
Ksenija Stanojevic
5b12c29b17 [ONNX]Update Dropout Export (#37641)
Summary:
The Dropout operator in ONNX has an additional input: training_mode.
Update the Dropout export to match the changes made in ONNX.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37641

Reviewed By: hl475

Differential Revision: D21613782

Pulled By: houseroad

fbshipit-source-id: f34d1a1f8116200c6609b4b43489d5610f6d0ec4
2020-05-18 13:10:44 -07:00
Zeeshan Siddiqui
6c0f447b51 Remove ONNX BatchNorm(12) test and converter. (#37309)
Summary:
Pursuant to https://github.com/onnx/onnx/pull/2750 we must remove PyTorch ONNX exporter related changes to BatchNorm(12) that were introduced as part of https://github.com/pytorch/pytorch/pull/35567. This change is also needed to unblock ONNX [BUILD CI failures](https://circleci.com/gh/onnx/onnx/4629?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link) caused by PyTorch/Caffe2 tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37309

Reviewed By: hl475

Differential Revision: D21288914

Pulled By: houseroad

fbshipit-source-id: 15b076a2af55918dcd57f4e2fc77accd3d1510bd
2020-04-28 17:45:01 -07:00
Ksenija Stanojevic
92e91cee8d ONNX Export Support for CrossEntropyLoss (#34830)
Summary:
Add ONNX export support for torch.nn.CrossEntropyLoss.

This PR makes the following changes:
1. Updates nll_loss export
2. Makes a post pass for SoftmaxCrossEntropy
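
A hedged usage sketch (not from the PR); opset 12 is assumed here, since that is where ONNX gained a softmax-cross-entropy loss op:

    import torch

    class Net(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.loss = torch.nn.CrossEntropyLoss()

        def forward(self, logits, target):
            return self.loss(logits, target)

    logits = torch.randn(3, 5)
    target = torch.tensor([1, 0, 4])
    torch.onnx.export(Net(), (logits, target), "ce_loss.onnx", opset_version=12)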
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34830

Reviewed By: hl475

Differential Revision: D21230712

Pulled By: houseroad

fbshipit-source-id: c81911a41968e23813ba10274340ce4d8ba1ed78
2020-04-25 17:56:53 -07:00
Lara Haidar
728c7dcea3 ONNX Update training ops and training amenable export API (#35567)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35567

Reviewed By: hl475

Differential Revision: D20715339

Pulled By: houseroad

fbshipit-source-id: ad88097e76b169035ab5814b769dc1bed54c6008
2020-03-29 23:14:25 -07:00
Alban Desmaison
45e1be9762 Revert D19710370: [pytorch][PR] ONNX Update training ops and training amenable export API
Test Plan: revert-hammer

Differential Revision:
D19710370

Original commit changeset: e5e79d385529

fbshipit-source-id: d0114dc561a3415869805d3fbf43b92730bbcf54
2020-03-27 06:51:05 -07:00
Lara Haidar
025a0abe5a ONNX Update training ops and training amenable export API (#32950)
Summary:
- Update Dropout and Batchnorm in opset 12 : https://github.com/onnx/onnx/pull/2568
- Update api logic for exporting to ONNX training amenable models
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32950

Reviewed By: hl475

Differential Revision: D19710370

Pulled By: houseroad

fbshipit-source-id: e5e79d38552936966662c41d39ddf33be1ba3e35
2020-03-27 00:39:39 -07:00
Edward Yang
d5f8c8f3ba Revert D20121169: [pytorch][PR] ONNX Export Support for CrossEntropyLoss
Test Plan: revert-hammer

Differential Revision:
D20121169

Original commit changeset: 7b56617e8c60

fbshipit-source-id: d7f302d1e54f3c978c3be0a0ad1ee600790a5b27
2020-03-12 20:30:54 -07:00
Ksenija Stanojevic
944ea4c334 ONNX Export Support for CrossEntropyLoss (#33767)
Summary:
Add ONNX export support for torch.nn.CrossEntropyLoss.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33767

Reviewed By: hl475

Differential Revision: D20121169

Pulled By: houseroad

fbshipit-source-id: 7b56617e8c60617b922949fc8b4ecc626eedf7ed
2020-03-12 11:46:58 -07:00
Pritam Damania
f050b16dd9 Move pytorch distributed tests to separate folder for contbuild. (#30445)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30445

Create distributed and rpc directories under caffe/test for better management
of unit tests.

Differential Revision: D18702786

fbshipit-source-id: e9daeed0cfb846ef68806f6decfcb57c0e0e3606
2020-01-22 21:16:59 -08:00
Brian Stark
a472f0201f Added support for Dim operation in ONNX export (#31928)
Summary:
While ONNX does not currently support the Dim operation on a tensor directly, we can provide the same functionality with two ONNX operations. This allows us to support Dim for all opsets. It may be advantageous to add support for Dim to a future ONNX opset and use that for more efficient code.
While testing the dim op, we found an issue with empty blocks within if statements. Graph generation was modified to prevent the generation of empty if blocks.
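
An illustrative sketch (module name made up) of a model whose forward queries the tensor rank, which this change lets the exporter express with a pair of ONNX ops:

    import torch

    class UsesDim(torch.nn.Module):
        def forward(self, x):
            # query the tensor rank inside forward
            return x + x.dim()

    torch.onnx.export(UsesDim(), (torch.randn(2, 3),), "uses_dim.onnx")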

Fixes https://github.com/pytorch/pytorch/issues/27569
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31928

Reviewed By: hl475

Differential Revision: D19376602

Pulled By: houseroad

fbshipit-source-id: 111682b058a5341f5cca6c1a950c83ae412a4c6c
2020-01-13 19:42:43 -08:00
BowenBao
c4f10e0fe7 Renaming scales parameter for interpolate (#31526)
Summary:
PR separated from https://github.com/pytorch/pytorch/pull/31274.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31526

Reviewed By: zou3519

Differential Revision: D19221931

Pulled By: gchanan

fbshipit-source-id: 81958a9910867ac9d62f2b47abc49384526c4e51
2020-01-02 08:19:30 -08:00
Lara
97c1e90f46 ONNX Interpolate Add Scales Params (#28324)
Summary:
Fix for : https://github.com/pytorch/pytorch/issues/27176
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28324

Reviewed By: hl475

Differential Revision: D18309133

Pulled By: houseroad

fbshipit-source-id: 348bb41393442c6b107d88fc2cd3224e0afa3ccf
2019-12-11 20:09:15 -08:00
neginraoof
5205556782 Export custom ops (#29752)
Summary:
Updates to the export API:
When calling this API, a dict containing the custom opsets (domain and version) used to export the model can be provided.
We allow registering one custom opset (domain, version) pair per ONNX opset. So, when exporting an operator from a custom domain, users need to pass this pair. The default custom opset version is 1.
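
A hedged sketch of passing the custom opset pair at export time (the domain name and version are made up; a real model would call an op registered under that domain):

    import torch

    class M(torch.nn.Module):
        def forward(self, x):
            # In a real use case this forward would call an op registered
            # under the custom "my_domain" domain.
            return x * 2

    torch.onnx.export(
        M(),
        (torch.randn(2, 3),),
        "model_with_custom_op.onnx",
        custom_opsets={"my_domain": 2},  # version defaults to 1 when not given
    )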
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29752

Reviewed By: hl475

Differential Revision: D18703662

Pulled By: houseroad

fbshipit-source-id: 84d22557d132b526169051193d730761798fce60
2019-12-09 18:48:50 -08:00
Lara Haidar
45024e7a35 Support Exporting Bitshift to ONNX (#28210)
Summary:
Support exporting left/right bitshifts to ONNX for all opset versions.

ONNX has a bitshift operator in opset 11, but it only supports unsigned ints, so it can't be used in PyTorch (since uint8 is the only unsigned integer type).
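
A small illustrative sketch (not from the PR) of the kind of graph this covers:

    import torch

    class Shifts(torch.nn.Module):
        def forward(self, x):
            # left and right bitshift on an integer tensor
            return (x << 2) + (x >> 1)

    x = torch.arange(8, dtype=torch.int64)
    torch.onnx.export(Shifts(), (x,), "shifts.onnx", opset_version=9)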
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28210

Reviewed By: hl475

Differential Revision: D18575512

Pulled By: houseroad

fbshipit-source-id: 74161db67f599996a0614981edcc171af6780d21
2019-11-19 09:25:50 -08:00
Spandan Tiwari
509d9630ca Disabling ONNX IR v4 sematics for opset 8 or lower. (#28990)
Summary:
Currently, the `keep_initializers_as_input` argument in the `torch.onnx.export` API can be used to choose whether to export an ONNX model with IR v3 or v4 semantics. The implementation currently does not check which opset is being used for export. This is an issue because ONNX IR v4 is valid only for opset 9 and above (as listed [here](https://github.com/onnx/onnx/releases/tag/v1.4.0)), and exporting opset 8 or lower with `keep_initializers_as_input=False` will create an illegal ONNX graph.

This change fixes this by introducing a check on opset version when deciding whether to export ONNX IR v3 or v4.
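
An illustrative sketch of the interaction described above; the exact keyword spelling (`keep_initializers_as_inputs` in the current torch.onnx.export API) is an assumption here:

    import torch

    model = torch.nn.Linear(4, 2)
    x = torch.randn(1, 4)

    # Opset 9 and above: IR v4 semantics are legal, so initializers can be
    # lifted out of the graph inputs.
    torch.onnx.export(model, (x,), "m_opset9.onnx", opset_version=9,
                      keep_initializers_as_inputs=False)

    # Opset 8 or lower: IR v4 is not valid, so keep initializers as inputs.
    torch.onnx.export(model, (x,), "m_opset8.onnx", opset_version=8,
                      keep_initializers_as_inputs=True)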
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28990

Reviewed By: hl475

Differential Revision: D18352523

Pulled By: houseroad

fbshipit-source-id: 7e9055d405c3faf52b80a8de0d04186d4c350c15
2019-11-06 21:57:21 -08:00
Negin Raoof
60d606094c Export Meshgrid (#26037)
Summary:
Exporting meshgrid op in opset 9 symbolics
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26037

Reviewed By: hl475

Differential Revision: D17452325

Pulled By: houseroad

fbshipit-source-id: d556b78e46594a232cdefd8c257cccd8b98221d6
2019-10-25 16:59:22 -07:00
neginraoof
d2eb08d17b Fix tracing slice/select with dynamic inputs (#26549)
Summary:
Fix Slice/Select trace arguments. This PR stashes arguments to functions in order to avoid tracing them as constants.
This PR depends on a fix for select op in PR:
https://github.com/pytorch/pytorch/pull/25273
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26549

Reviewed By: hl475

Differential Revision: D17623851

Pulled By: houseroad

fbshipit-source-id: ae314004266688d2c25c5bada2dcedbfc4f39c5b
2019-10-22 17:09:40 -07:00
Negin Raoof
4f70b5a4de Export det (#26958)
Summary:
Added symbolic to export det in opset 11
Updating ONNX submodule is required for det export
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26958

Reviewed By: hl475

Differential Revision: D17844887

Pulled By: houseroad

fbshipit-source-id: 224ae3ff82939dc7ae8584c5a30a31fe6afa05f6
2019-10-22 13:33:15 -07:00
neginraoof
95922c90b5 Export update for arange and _dim_arange (#26875)
Summary:
Export arange and _dim_arange using onnx::range in opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26875

Reviewed By: hl475

Differential Revision: D17623848

Pulled By: houseroad

fbshipit-source-id: 41f0066ca1c42882ccc051a3ee5448dca25ee5d2
2019-10-17 13:55:45 -07:00
Negin Raoof
a24291a554 Unfold export (#24970)
Summary:
ONNX export for Unfold in symbolic opset9 + op and ORT tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24970

Reviewed By: hl475

Differential Revision: D17495106

Pulled By: houseroad

fbshipit-source-id: fcd179a1213c0f219628f25c09e66fcfe4c5df50
2019-10-07 13:06:37 -07:00
Negin Raoof
c874dd91a7 export remainder (#24410)
Summary:
Added ONNX export support for torch.remainder and torch.fmod
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24410

Reviewed By: hl475

Differential Revision: D17466791

Pulled By: houseroad

fbshipit-source-id: afe6519e5f370824e3b4a45b69036a7260fb72cf
2019-10-03 20:15:20 -07:00
Negin Raoof
d93fc64776 Update export for topk and sort (#25739)
Summary:
Updated export for topk and sort as part of opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25739

Reviewed By: hl475

Differential Revision: D17467131

Pulled By: houseroad

fbshipit-source-id: 653be138455728ec8e9bb81ae63dd7ce0c4d0793
2019-10-02 12:20:30 -07:00
Negin Raoof
6b9bcd0606 export baddbmm (#26901)
Summary:
Adding symbolic for baddbmm export
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26901

Reviewed By: hl475

Differential Revision: D17620967

Pulled By: houseroad

fbshipit-source-id: 3931dff5a4afdcb4a45d967fb0efaf84029c16e5
2019-09-26 22:53:21 -07:00
Lara Haidar
614edfce81 Add Support to Dicts and Strings in ONNX for Inputs and Outputs (#25889)
Summary:
ONNX does not support dictionaries for inputs and outputs. The reason is that the arg flattening and unflattening does not handle dictionary types.
This PR adds flattening/unflattening support for dictionaries and strings.
However, this feature should be handled with caution for input dictionaries: users need to verify their dict inputs carefully and keep in mind that dynamic lookups are not available.

This PR will allow exporting cases where models have dictionary outputs (detection and segmentation models in torchvision), and where dictionary inputs are used for model configurations (MultiScaleRoiAlign in torchvision).
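
A hedged sketch of a dictionary-output model being exported (module and key names are made up); the exporter flattens the dict, so the key names themselves are not preserved in the ONNX graph:

    import torch

    class Detector(torch.nn.Module):
        def forward(self, x):
            # dictionary output, flattened by the exporter into ordered outputs
            return {"boxes": x * 2, "scores": x.sum(dim=-1)}

    x = torch.randn(3, 4)
    torch.onnx.export(Detector(), (x,), "detector.onnx")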
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25889

Reviewed By: hl475

Differential Revision: D17613605

Pulled By: houseroad

fbshipit-source-id: c62da4f35e5dc2aa23a85dfd5e2e11f63e9174db
2019-09-26 22:31:09 -07:00
Lu Fang
b6a1d618b2 Revert D17565828: [pytorch][PR] [ONNX] Export baddbmm
Test Plan: revert-hammer

Differential Revision:
D17565828

Original commit changeset: 85f605a7b3fa

fbshipit-source-id: 7705325087d83362f71a717be880a13e9f575b37
2019-09-25 14:24:18 -07:00
Negin Raoof
63fd10549a Export baddbmm (#25738)
Summary:
Added ONNX export for baddbmm in opset9
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25738

Reviewed By: hl475

Differential Revision: D17565828

Pulled By: houseroad

fbshipit-source-id: 85f605a7b3fa4783ef4f6ced86223133c85062d5
2019-09-25 12:28:06 -07:00
Spandan Tiwari
af3b15b74c Setting automatic default selection for ONNX IR v4 semantics in ONNX export API (#26146)
Summary:
This is a follow-up PR for https://github.com/pytorch/pytorch/pull/23284. In that PR we had removed the change to the default behavior of the `keep_initializers_as_input` argument to the export API. With this PR we enable that change: if `keep_initializers_as_input` is not specified, the value/behavior for this argument is chosen automatically depending on whether the export type is ONNX or not.

This was part of the earlier PR but was removed for further review. The test points have also been updated.

This change may fail some internal tests, which may require explicitly setting `keep_initializers_as_input=True` to preserve the old behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26146

Reviewed By: hl475

Differential Revision: D17369677

Pulled By: houseroad

fbshipit-source-id: 2aec2cff50d215714ee8769505ef24d2b7865a11
2019-09-24 10:02:31 -07:00
Lara
c79d116a7d Update ONNX Export for Gather and Scatter for Opset 11
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24790

Reviewed By: hl475

Differential Revision: D17159723

Pulled By: houseroad

fbshipit-source-id: a63bb7c681120de85588dafecd03f04742dde8b7
2019-09-23 17:13:25 -07:00
Negin Raoof
293d73fc92 Export gelu (#24475)
Summary:
Added support for gelu in symbolic opset9 + op and ORT tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24475

Reviewed By: hl475

Differential Revision: D17088708

Pulled By: houseroad

fbshipit-source-id: 9d2f9d7d91481c57829708793d88f786d6c3956f
2019-09-18 21:18:07 -07:00
neginraoof
fcb100a3e0 Export round (#26126)
Summary:
Added round export in opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26126

Reviewed By: hl475

Differential Revision: D17403589

Pulled By: houseroad

fbshipit-source-id: f9ac3f7602c50019b9feadda8d5d944a058c5455
2019-09-16 16:40:10 -07:00
Lara Haidar
8ca93ec351 Fix torch.arange traced as constant (#25363)
Summary:
torch.arange is always traced as a constant, which makes it impossible to correctly trace TestModel() from the example below.

class TestModel(torch.nn.Module):
  def forward(self, input):
    return torch.arange(input.shape[0])
input = torch.randn(5,3,2)
print(torch.jit.trace(TestModel(), input).graph)

Currently the trace of TestModel() looks like:

graph(%self : ClassType<TestModel>,
      %input : Float(5, 3, 2)):
  %11 : int = prim::Constant[value=5]()
  %12 : int = prim::Constant[value=4]()
  %13 : int = prim::Constant[value=0]()
  %14 : Device = prim::Constant[value="cpu"]()
  %15 : bool = prim::Constant[value=0]()
  %16 : Long(5) = aten::arange(%11, %12, %13, %14, %15)
  return (%16)

This PR will allow the trace to have a variable value for %11.
The trace of TestModel() with this PR's modifs looks like:

graph(%self : ClassType<TestModel>,
      %input : Float(5, 3, 2)):
  %2 : int = prim::Constant[value=0]()
  %3 : int = aten::size(%input, %2)
  %4 : Long() = prim::NumToTensor(%3)
  %11 : Scalar = prim::ImplicitTensorToNum(%4)
  %12 : int = prim::Constant[value=4]()
  %13 : int = prim::Constant[value=0]()
  %14 : Device = prim::Constant[value="cpu"]()
  %15 : bool = prim::Constant[value=0]()
  %16 : Long(5) = aten::arange(%11, %12, %13, %14, %15)
  return (%16)

More info : https://github.com/pytorch/pytorch/issues/20075
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25363

Reviewed By: zrphercule

Differential Revision: D17301934

Pulled By: houseroad

fbshipit-source-id: d9907763742cb51d8c761bf63fc2e4918f7b9941
2019-09-11 13:39:54 -07:00