Commit Graph

606 Commits

Author SHA1 Message Date
Negin Raoof
f99a28f515 [ONNX] Adding a pass to replace interpolate function with aten::__interpolate (#35744)
Summary:
Since aten::__interpolate was removed in https://github.com/pytorch/pytorch/pull/34514, we need a pass that replaces the interpolate function with aten::__interpolate for ONNX export.
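
A hedged illustration of the export this pass supports (the module and shapes here are invented for the sketch, not taken from the PR):

```
import io
import torch
import torch.nn.functional as F

class Upsample(torch.nn.Module):
    def forward(self, x):
        # F.interpolate is a Python-level function; the new pass recovers
        # it as a single aten::__interpolate node for the ONNX exporter
        return F.interpolate(x, scale_factor=2.0, mode="nearest")

torch.onnx.export(Upsample(), torch.randn(1, 3, 8, 8), io.BytesIO(),
                  opset_version=11)
```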
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35744

Reviewed By: hl475

Differential Revision: D20907041

Pulled By: houseroad

fbshipit-source-id: f2d2cdfec47389245c50f538267124eedf682adf
2020-04-14 23:16:22 -07:00
Wanchao Liang
999d7f6ab2 [jit] tracer flag to guard risky behaviors (#36277)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36277

This PR introduces a flag to the tracer that guards risky behaviors
like adding a list/dict as the output of the tracer. Currently, to avoid
breaking backward compatibility for users, we throw a warning if the
tracer output is a list, and will throw an error when the tracer output
is a dict to enforce using this flag (next PR)
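
For reference, this guard surfaced in later PyTorch releases as the `strict` flag on `torch.jit.trace` (an assumption based on current PyTorch, not stated in this commit); a minimal sketch:

```
import torch

def f(x):
    # container outputs are risky to trace: their structure is frozen
    # into the graph at trace time
    return {"doubled": x * 2}

# strict=False opts in to the guarded behavior; the default raises instead
traced = torch.jit.trace(f, torch.randn(3), strict=False)
print(traced(torch.ones(3)))
```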

Test Plan: Imported from OSS

Differential Revision: D20998157

Pulled By: wanchaol

fbshipit-source-id: 0d2c55f1a263a48b1b92dd6ad54407815e0a6f72
2020-04-13 22:35:03 -07:00
Negin Raoof
681ca45717 [ONNX] Export torch.inverse op (#35318)
Summary:
Added support for torch.inverse export as part of opset 12
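
A minimal export sketch (module and input are illustrative, and whether the export succeeds depends on the opset-12 support this PR adds):

```
import io
import torch

class Inv(torch.nn.Module):
    def forward(self, x):
        return torch.inverse(x)

# opset_version=12 selects the opset where this PR adds Inverse support
torch.onnx.export(Inv(), torch.randn(4, 4), io.BytesIO(), opset_version=12)
```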
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35318

Reviewed By: hl475

Differential Revision: D20865900

Pulled By: houseroad

fbshipit-source-id: be12b5194d21408cae24eec16e9e12377e8546ad
2020-04-07 10:48:33 -07:00
neginraoof
e56ba8481e [ONNX] fix size for opset 11 (#35984)
Summary:
Fixing size, as the aten op has been updated to support 0 inputs
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35984

Reviewed By: hl475

Differential Revision: D20858214

Pulled By: houseroad

fbshipit-source-id: 8ad0a0174a569455e89da6798eed403c8b162a47
2020-04-05 18:58:54 -07:00
neginraoof
60a3e82c4e [ONNX] Fix for constant folding: Slice, Added ReduceL1 and ReduceL2 (#35280)
Summary:
1- Added support for constant folding onnx::ReduceL1 and onnx::ReduceL2
2- Fixed constant folding for slice, as onnx::Slice opset 11 supports negative axes and indices (see the sketch after this list)
3- Updated export of select opset 11
4- Separated test environment for test_utility_functions as environment variables could be overwritten by caffe2 quantization tests on CI
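
A hedged sketch of the negative-index slicing case from item 2 (module and shapes invented for illustration):

```
import io
import torch

class NegSlice(torch.nn.Module):
    def forward(self, x):
        # negative start/end lower to onnx::Slice in opset 11, which the
        # constant-folding pass can now evaluate at export time
        return x[:, -3:-1]

torch.onnx.export(NegSlice(), torch.randn(2, 5), io.BytesIO(),
                  opset_version=11, do_constant_folding=True)
```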
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35280

Reviewed By: hl475

Differential Revision: D20626140

Pulled By: houseroad

fbshipit-source-id: 39667c7852eeaa97d9da23f53da52760d3670ecf
2020-04-01 04:47:47 -07:00
Shihao Xu
cae6bdf199 [JIT] Mark aten::wait as having side effect, since it can represent RPC message received (#35695)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35695

aten::wait was being optimized out, causing RPC futures to not be waited on.
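
A minimal fork/wait repro (invented here, not from the diff) where the wait must survive optimization:

```
import torch

@torch.jit.script
def work(x):
    return x * 2

@torch.jit.script
def run(x):
    fut = torch.jit.fork(work, x)
    # aten::wait must be marked as having a side effect so the optimizer
    # cannot drop it, e.g. when the future wraps an RPC message
    return torch.jit.wait(fut)

print(run(torch.ones(2)))
```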

Test Plan:
```
buck test mode/dev-nosan //caffe2/torch/fb/distributed/model_parallel/tests:test_dist_optim
```

```
buck test mode/dev-nosan //caffe2/test/distributed/rpc/jit:rpc_fork

buck build mode/dev-nosan //caffe2/test/distributed/rpc/jit:rpc_fork && \
buck-out/gen/caffe2/test/distributed/rpc/jit/rpc_fork\#binary.par \
-r test_python_future_with_jit
```

```
buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r test_trace_fork_wait_inline
```

```
buck build mode/dev-nosan //caffe2/test:jit && \
buck-out/gen/caffe2/test/jit\#binary.par -r test_trace_fork_wait_inline_onnx
```

Differential Revision: D9562716

fbshipit-source-id: 35b2c971efa42949ffdf0910bd75a927eee8d965
2020-03-31 22:17:25 -07:00
Lara Haidar
728c7dcea3 ONNX Update training ops and training amenable export API (#35567)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/35567

Reviewed By: hl475

Differential Revision: D20715339

Pulled By: houseroad

fbshipit-source-id: ad88097e76b169035ab5814b769dc1bed54c6008
2020-03-29 23:14:25 -07:00
julianmack
ad1091f753 Fixes default dtype value for onnx hardtanh export (opset11) (#35467)
Summary:
One-line fix to lara-hdr's PR https://github.com/pytorch/pytorch/pull/30169.

Default `dtype` value should be set when `dtype is None` rather than when `dtype is not None`.

I didn't open an issue for this since it's such a small change, but I have been using this fix locally in order to export a model with opset 11 (opset 10 still works).
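
A toy illustration of the inverted condition (names are hypothetical, simplified from the symbolic):

```
def pick_dtype(dtype=None):
    # the buggy version used `if dtype is not None:` here, so a missing
    # dtype stayed None and an explicit one was clobbered with the default
    if dtype is None:
        dtype = "float32"
    return dtype

assert pick_dtype() == "float32"       # default applied when dtype missing
assert pick_dtype("int64") == "int64"  # explicit dtype preserved
```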
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35467

Differential Revision: D20686048

Pulled By: mruberry

fbshipit-source-id: 726a5f9c0711c7a79b171fe98b602cdef27f9b31
2020-03-27 19:15:42 -07:00
Alban Desmaison
45e1be9762 Revert D19710370: [pytorch][PR] ONNX Update training ops and training amenable export API
Test Plan: revert-hammer

Differential Revision:
D19710370

Original commit changeset: e5e79d385529

fbshipit-source-id: d0114dc561a3415869805d3fbf43b92730bbcf54
2020-03-27 06:51:05 -07:00
Lara Haidar
025a0abe5a ONNX Update training ops and training amenable export API (#32950)
Summary:
- Update Dropout and Batchnorm in opset 12 : https://github.com/onnx/onnx/pull/2568
- Update api logic for exporting to ONNX training amenable models
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32950

Reviewed By: hl475

Differential Revision: D19710370

Pulled By: houseroad

fbshipit-source-id: e5e79d38552936966662c41d39ddf33be1ba3e35
2020-03-27 00:39:39 -07:00
Elias Ellison
e68afe3ab9 [JIT] remove prim::shape op (#34286)
Summary:
Desugar prim::shape to aten::size so that passes don't need to reason about both ops. Serialized models still resolve to `prim::shape` so this doesn't break BC.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34286

Differential Revision: D20316818

Pulled By: eellison

fbshipit-source-id: d1585687212843f51e9396e07c108f5c08017818
2020-03-26 19:29:25 -07:00
Lara Haidar
7e327e1210 Enable Constant Folding for ONNX Opset 12 (#34823)
Summary:
Currently constant folding is only enabled for ONNX opset versions 9 to 11. This PR enables it for the new ONNX opset 12.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34823

Reviewed By: hl475

Differential Revision: D20627629

Pulled By: houseroad

fbshipit-source-id: 7501d8ab8295751c0e9a02752d8908a35d8a0454
2020-03-25 11:06:39 -07:00
anjali411
c73e97033a Added type promotion logic for complex numbers (#34093)
Summary:
Issue: https://github.com/pytorch/pytorch/issues/33780
After this PR:
1. dtype promotion logic will correctly work for ops involving complex scalars
2. added alias for complex64 (cfloat) and complex128 (cdouble)
3. added an internal function get_complex_default_dtype (consciously not exposed in public API)
   - sets the default complex dtype to be double if default_dtype is set to double, else float https://github.com/pytorch/pytorch/pull/34093#discussion_r392350224
>>> 1j*torch.ones(2)
tensor([(0.0000 + 1.0000j), (0.0000 + 1.0000j)], dtype=torch.complex64)

>>> torch.set_default_dtype(torch.float64)
>>> 1j*torch.ones(2)
tensor([(0.0000 + 1.0000j), (0.0000 + 1.0000j)], dtype=torch.complex128)

>>> 1j + torch.ones(2)
tensor([(1.0000 + 1.0000j), (1.0000 + 1.0000j)], dtype=torch.complex128)

>>> torch.tensor(1j) + torch.ones(2,2)
tensor([[(1.0000 + 1.0000j), (1.0000 + 1.0000j)],
        [(1.0000 + 1.0000j), (1.0000 + 1.0000j)]], dtype=torch.complex128)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34093

Differential Revision: D20537125

Pulled By: anjali411

fbshipit-source-id: 05fb1f81b8ba039d0b698cdd2c0bbf8b0ce0b767
2020-03-25 09:12:21 -07:00
Mike Ruberry
9c4683e8e3 Revert D20312366: [pytorch][PR] Added type promotion logic for complex numbers
Test Plan: revert-hammer

Differential Revision:
D20312366

Original commit changeset: 90f00a1a916d

fbshipit-source-id: 4510739a888b2eec5d8a72e792998ac46da6d82a
2020-03-19 05:55:57 -07:00
anjali411
c8f665dcb6 Added type promotion logic for complex numbers (#34093)
Summary:
Issue: https://github.com/pytorch/pytorch/issues/33780
After this PR:
1. dtype promotion logic will correctly work for ops involving complex scalars
2. torch.ComplexFloatTensor and torch.ComplexDoubleTensor work
3. added alias for complex64 (cfloat) and complex128 (cdouble)
4. added an internal function get_complex_default_dtype (consciously not exposed in public API)

>>> 1j*torch.ones(2)
tensor([(0.0000 + 1.0000j), (0.0000 + 1.0000j)], dtype=torch.complex64)

>>> torch.set_default_dtype(torch.float64)
>>> 1j*torch.ones(2)
tensor([(0.0000 + 1.0000j), (0.0000 + 1.0000j)], dtype=torch.complex128)

>>> 1j + torch.ones(2)
tensor([(1.0000 + 1.0000j), (1.0000 + 1.0000j)], dtype=torch.complex128)

>>> torch.tensor(1j) + torch.ones(2,2)
tensor([[(1.0000 + 1.0000j), (1.0000 + 1.0000j)],
        [(1.0000 + 1.0000j), (1.0000 + 1.0000j)]], dtype=torch.complex128)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34093

Differential Revision: D20312366

Pulled By: anjali411

fbshipit-source-id: 90f00a1a916d9c8eeda101eb6e9d250fce569815
2020-03-18 23:36:13 -07:00
Mike Ruberry
1afc584188 Deprecates current torch.full integral type inference, adds torch.full complex type inference (#34709)
Summary:
Per title.

Currently torch.full will always (attempt to) produce a float tensor. This is inconsistent with NumPy in (at least) two cases:

- When integral fill values (including bool) are given
- When complex fill values are given

For example:

```
np.full((1, 2), 1).dtype
: dtype('int64')

np.full((1, 2), (1 + 1j)).dtype
: dtype('complex128')
```

Whereas in PyTorch

```
torch.full((1, 2), 1).dtype
: torch.float32

torch.full((1, 2), (1 + 1j)).dtype
: RuntimeError: value cannot be converted to type float without overflow: (1,1)
```

This PR begins the process of deprecating our current behavior of returning float tensors (by default) when given integer fill values by warning the user that integer fill values will require explicitly specifying the dtype or out kwargs in 1.6, and in 1.7 the behavior will change to return a LongTensor by default (BoolTensor for bool values). The intermediate 1.6 release is to prevent changing the behavior silently and unexpectedly.

The PR also implements inference for complex types. So that with it:

```
torch.full((1, 2), (1 + 1j)).dtype
: torch.complex64
```

The complex type inference returns a ComplexFloat tensor when given a complex fill value (and no dtype or out kwarg is specified), unless the default dtype is Double, in which case a ComplexDouble tensor is returned.

A test for these behaviors is added to test_torch.py.

Implementation note:

This PR required customizing full's dispatch because currently in eager codegen the TensorOptions object passed to functions improperly sets has_dtype() to true, even if the user did not explicitly provide a dtype. torch.arange already worked around this issue with its own custom implementation. The JIT, however, does pass a properly constructed TensorOptions object.

Future Work:

This PR does not extend torch.full's complex type inference to ONNX. This seems unlikely to come up and will be a clear error if it does. When integer type inference is added to torch.full, however, then porting the behavior to ONNX may be warranted. torch.arange ported its complex type promotion logic to ONNX, for example.

Additionally, this PR mostly leaves existing call sites in PyTorch that would trigger this warning intact. This is to be more minimal (since the PR is BC breaking). I will submit a separate PR fixing PyTorch's call sites.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34709

Differential Revision: D20509387

Pulled By: mruberry

fbshipit-source-id: 129593ba06a1662032bbbf8056975eaa59baf933
2020-03-18 12:19:31 -07:00
neginraoof
480d1849b0 [ONNX] Fix for expand -1 dim value (#34069)
Summary:
PyTorch expand allows sizes with a -1 dim value. A -1 dim value means to infer that dimension from the input tensor. This can be exported to ONNX Expand with a dim value of 1, since ONNX Expand supports two-way broadcasting.
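
A minimal sketch of the pattern (module and shapes are illustrative):

```
import io
import torch

class Expand(torch.nn.Module):
    def forward(self, x):
        # -1 keeps the existing size of that dim; ONNX Expand has no -1,
        # but a size of 1 broadcasts identically under two-way broadcast
        return x.expand(-1, 4)

torch.onnx.export(Expand(), torch.randn(3, 1), io.BytesIO(),
                  opset_version=11)
```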
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34069

Reviewed By: hl475

Differential Revision: D20195532

Pulled By: houseroad

fbshipit-source-id: c90e7d51b9d7422c09c5ed6e135ca8263105b8c9
2020-03-16 15:30:20 -07:00
Supriya Rao
8a395882ce [quant][onnx] Support conversion of quantized sigmoid operator from pytorch to caffe2 (#34629)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34629

Add support for sigmoid in the conversion flow through onnx

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps.test_quantized_sigmoid
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps.test_small_model

Imported from OSS

Differential Revision: D20433680

fbshipit-source-id: 95943e14637d294122e4d102c5c19c06d27064c6
2020-03-13 22:42:06 -07:00
Supriya Rao
af28915164 [quant][onnx] Add support to convert max_pool2d quantized pytorch op to C2 (#33945)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33945

Add mapping for this operator in symbolics

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps.test_max_pool2d

Imported from OSS

Differential Revision: D20433681

fbshipit-source-id: 88f02ade698262a6f8824671830bc1f7d40bbfa6
2020-03-13 22:40:49 -07:00
ettiee
2cf576e9ea small typos (#34589)
Summary:
Spotted a couple of small typos 🙏
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34589

Differential Revision: D20387653

Pulled By: ngimel

fbshipit-source-id: 3089fe606ccb8c8ee57cf7a900aba714fd0ce567
2020-03-11 11:01:31 -07:00
Colin Jermain
4f62cbe7de [ONNX] Support one_hot (#34454)
Summary:
This PR resolves https://github.com/pytorch/pytorch/issues/22534 by adding a converter for the `torch.nn.functional.one_hot` function, and covering it with a test.
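
A minimal export sketch (the opset version chosen here is an assumption, not stated in the PR):

```
import io
import torch
import torch.nn.functional as F

class OneHot(torch.nn.Module):
    def forward(self, x):
        return F.one_hot(x, num_classes=5)

torch.onnx.export(OneHot(), torch.arange(4), io.BytesIO(), opset_version=11)
```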

Are there other places this should be tested?
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34454

Reviewed By: hl475

Differential Revision: D20354255

Pulled By: houseroad

fbshipit-source-id: 84224c1610b2cc7986c91441c65647ddc090750d
2020-03-09 22:26:36 -07:00
Mike Ruberry
3671036ef3 Adds true_divide function, analogous to Python 's, JAX's, NumPy's (true) division (#34236)
Summary:
See NumPy's division documentation here: https://numpy.org/doc/1.18/reference/generated/numpy.divide.html#numpy.divide.

True division is the same as PyTorch's default division except when both inputs are integer or bool tensors. In the latter case the inputs are (conceptually) cast to the default floating type before the division is performed.

The function is implemented for dense and sparse tensors and supports exporting to ONNX from PyTorch's eager mode or JIT traces. The function is inherently incompatible with exporting to ONNX via JIT script, and is another datapoint suggesting we should deprecate exporting scripted graphs to ONNX.
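
A short example of the promotion behavior described above:

```
import torch

a = torch.tensor([3, 4, 5])
b = torch.tensor([2, 2, 2])

# integer inputs are (conceptually) cast to the default float dtype first
print(torch.true_divide(a, b))        # tensor([1.5000, 2.0000, 2.5000])
print(torch.true_divide(a, b).dtype)  # torch.float32
```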

Tests are added for the type promotion, named tensor, and ONNX export behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34236

Reviewed By: houseroad

Differential Revision: D20334087

Pulled By: mruberry

fbshipit-source-id: 83d00d886f46f713215d7d9e02ffd043164c57f1
2020-03-09 21:06:33 -07:00
neginraoof
bcfd348858 [ONNX] Export new_zeros (#34077)
Summary:
ONNX export for new_zeros op added.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34077

Reviewed By: hl475

Differential Revision: D20332074

Pulled By: houseroad

fbshipit-source-id: 4235c4f2c279c37aa8dde6d13c1b26f621967768
2020-03-09 10:38:22 -07:00
neginraoof
d2b5eb2a45 [ONNX] Fix for random generators export (#33789)
Summary:
Export random generator with dynamic input size
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33789

Reviewed By: hl475

Differential Revision: D20121175

Pulled By: houseroad

fbshipit-source-id: c16d11eb07678166d125759d97aadfcd7c80ef14
2020-03-05 17:58:54 -08:00
Lara Haidar
8216d9ae64 ONNX Export Support for NLLLoss (#33509)
Summary:
Adding ONNX export support for torch.nn.NLLLoss().
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33509

Reviewed By: hl475

Differential Revision: D20052212

Pulled By: houseroad

fbshipit-source-id: 62efcff4efa1e0e97c65ad1b670c2fc1da08d28f
2020-03-05 11:13:21 -08:00
Michael Suo
bd7e9c490a [jit] stop printing crap in test_jit (#33917)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/33917

Test Plan: Imported from OSS

Differential Revision: D20150750

Pulled By: suo

fbshipit-source-id: 9a35298a8856d423fb6b9043174853cccf968706
2020-02-27 19:06:43 -08:00
Brian Vaughan
910acafc79 Revert D20124224: [jit] stop printing crap in test_jit
Test Plan: revert-hammer

Differential Revision:
D20124224

Original commit changeset: 9241d21fdf94

fbshipit-source-id: 0680f9db922f9a33a4e859eedd142b87a51bbede
2020-02-27 13:40:34 -08:00
Michael Suo
150e025be8 [jit] stop printing crap in test_jit (#33779)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33779

This should eliminate random warnings and print spew from test_jit.

It also fixes a bug where we weren't properly comparing captured outputs
(!)

Test Plan: Imported from OSS

Differential Revision: D20124224

Pulled By: suo

fbshipit-source-id: 9241d21fdf9470531b0437427b28e325cdf08d3a
2020-02-26 18:46:03 -08:00
Lara Haidar
a0e90e1b45 ONNX Error Message on Missing Op (#33593)
Summary:
Print a complete and comprehensive error message with a description of the issue when an op is missing during ONNX export (previously an ambiguous "key not in registry" error was thrown which was not helpful for the user to understand the failure).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33593

Reviewed By: hl475

Differential Revision: D20052213

Pulled By: houseroad

fbshipit-source-id: ae3010a97efdab26effad5e4a418e9cc41f5b04e
2020-02-26 15:18:16 -08:00
Hong Xu
a6a72ac68f Fix all occurrences of C416. (#33429)
Summary:
C416: Unnecessary (list/set) comprehension - rewrite using list/set().

See https://pypi.org/project/flake8-comprehensions/
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33429

Differential Revision: D19972858

Pulled By: ezyang

fbshipit-source-id: faac042a94c59d737bd5ae983121a0a029346e23
2020-02-21 08:32:22 -08:00
Spandan Tiwari
bf0951d937 Updating ONNX checker logic. (#33522)
Summary:
We want to run the ONNX checker only when the selected operator export type is ONNX, and nowhere else. This PR updates the logic in the exporter.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33522

Reviewed By: hl475

Differential Revision: D19983954

Pulled By: houseroad

fbshipit-source-id: 15db726321637a96fa110051cc54e9833e201133
2020-02-19 19:30:29 -08:00
Spandan Tiwari
96989a2a11 [ONNX] Adding ONNX large model export support in exporter (#33062)
Summary:
There are large models such as GPT2-large which cannot be exported with the current exporter because of the 2GB protobuf limit (e.g. see https://github.com/pytorch/pytorch/issues/19277). ONNX spec specifies a special format for large (> 2GB)  models. This PR adds support for exporting large models in ONNX large model format in the PyTorch-ONNX exporter.

This is the first PR for this feature that enables the end-to-end execution. Tests for large model export have been added. We may need follow-up PRs to refine this workflow based on user feedback.
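
The knob name below (`use_external_data_format`) is how this surfaced in later `torch.onnx.export` releases; treat it as an assumption rather than something stated in this commit:

```
import torch

model = torch.nn.Linear(8, 8)  # stand-in; the flag matters for >2GB models
x = torch.randn(1, 8)

# initializers are written to side files next to model.onnx, keeping the
# protobuf itself under the 2GB limit
torch.onnx.export(model, x, "model.onnx", use_external_data_format=True)
```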
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33062

Reviewed By: hl475

Differential Revision: D19782292

Pulled By: houseroad

fbshipit-source-id: e972fcb066065cae6336aa91c03023d9c41c88bd
2020-02-18 20:51:43 -08:00
Negin Raoof
9823662b43 [ONNX] Export split with list of sizes (#33161)
Summary:
Exporting Split with a dynamic list of split_sizes is not supported.
This PR enables export using onnx SplitToSequence + SequenceAt
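
A hedged sketch of a dynamic split (whether the dynamic path is taken depends on how the sizes are traced; module and shapes invented):

```
import io
import torch

class DynSplit(torch.nn.Module):
    def forward(self, x):
        # split sizes depend on the runtime shape, so they cannot be baked
        # in as constants; export falls back to SplitToSequence + SequenceAt
        n = x.size(0)
        return torch.split(x, [n - 2, 2])

torch.onnx.export(DynSplit(), torch.randn(6, 3), io.BytesIO(),
                  opset_version=11)
```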
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33161

Reviewed By: hl475

Differential Revision: D19860152

Pulled By: houseroad

fbshipit-source-id: 300afedc22b01923efb23acd1a3627aa146bb251
2020-02-14 12:46:33 -08:00
Brian Stark
f9ad5528e0 Fix for rand_like as well. (#33095)
Summary:
This is a follow-up PR to https://github.com/pytorch/pytorch/issues/32830. It solves the same issue for RandLike that we saw in RandNLike.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33095

Reviewed By: hl475

Differential Revision: D19848625

Pulled By: houseroad

fbshipit-source-id: 147921becf79490027a93606d52c5bc41d9eaf7f
2020-02-12 14:54:39 -08:00
Negin Raoof
6249d7302b [ONNX] Fix export for avg_pool with default stride (#33017)
Summary:
If using nn.functional avg_pool, stride is an optional arg. If not provided, it is set to kernel_size.
This PR fixes the export of avg_pool with default stride.
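
A minimal repro of the default-stride case (module and shapes invented):

```
import io
import torch
import torch.nn.functional as F

class Pool(torch.nn.Module):
    def forward(self, x):
        # stride omitted: it defaults to kernel_size, the case this PR fixes
        return F.avg_pool2d(x, kernel_size=3)

torch.onnx.export(Pool(), torch.randn(1, 1, 8, 8), io.BytesIO(),
                  opset_version=11)
```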
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33017

Reviewed By: hl475

Differential Revision: D19759604

Pulled By: houseroad

fbshipit-source-id: b0352db6fbaf427f4cff9ba8a942efdeb39b6f02
2020-02-07 22:46:46 -08:00
Lara
868db903ae ONNX support for torch.take (#33061)
Summary:
Adding ONNX export support for torch.take()
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33061

Reviewed By: hl475

Differential Revision: D19782651

Pulled By: houseroad

fbshipit-source-id: 0168fb941e166acda4ca607165248b8e0b260ace
2020-02-07 13:41:26 -08:00
Negin Raoof
d678093907 [ONNX] Extend op registration to next opsets (#32943)
Summary:
Currently, custom ops are registered for a specific opset version.
For example, all torchvision custom ops are registered for opset 11, and cannot be exported into higher opset versions. This PR extends op registration to higher opset versions.
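
A hedged sketch of registering a custom symbolic (the domain, op name, and body are invented for illustration):

```
import torch.onnx

def my_relu_symbolic(g, input):
    # trivially map the custom op onto a standard ONNX node
    return g.op("Relu", input)

# registered once for opset 11; with this PR the registration is also
# found when exporting with opset versions above 11
torch.onnx.register_custom_op_symbolic("mydomain::my_relu",
                                       my_relu_symbolic, 11)
```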
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32943

Reviewed By: hl475

Differential Revision: D19739406

Pulled By: houseroad

fbshipit-source-id: dd8b616de3a69a529d135fdd02608a17a8e421bc
2020-02-07 10:37:50 -08:00
Brian Stark
afa8cbf8c2 Modifed randNLike for scripting (#32830)
Summary:
the rand N like function had required args which were not being used.
As such modified the method signature to give default values so when
scripting does not provide these arguments which are not even being
used, no error is thrown.

Additionally modified the const checker for handling prim::Constant as
well
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32830

Reviewed By: hl475

Differential Revision: D19731715

Pulled By: houseroad

fbshipit-source-id: a3cacb3977eecb88b122e0ceb654fdbf1c8286c1
2020-02-06 18:19:42 -08:00
Lara
4502d8c391 Interpolate Float [] support in ONNX (#32554)
Summary:
The PR https://github.com/pytorch/pytorch/pull/31791 adds support for float[] constant, which affects some cases of ONNX interpolate support.
This PR adds float[] constants support in ONNX, updates interpolate in ONNX, and re-enable the disabled tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32554

Reviewed By: hl475

Differential Revision: D19566596

Pulled By: houseroad

fbshipit-source-id: 843f62c86126fdf4f9c0117b65965682a776e7e9
2020-02-04 16:14:40 -08:00
Ehsan Azar
58e8d5588a [ONNX] Export bitwise_not for bool (logical_not) (#28439)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/25805 (for bool tensors as in the issue)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28439

Differential Revision: D19700156

Pulled By: ezyang

fbshipit-source-id: 0706ada6a8d259dce381ba2d009f226e14c3c14f
2020-02-04 14:45:58 -08:00
BowenBao
1c42b9466b [ONNX] Update support of exporting bool type index mask (#32445)
Summary:
e.g. `tensor[torch.tensor([0, 1, 0], dtype=torch.bool)]`
Previously the mask is of type uint8. Both uint8 and bool should be supported for export.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32445

Reviewed By: hl475

Differential Revision: D19610713

Pulled By: houseroad

fbshipit-source-id: 8df636e0c3cb0b82919a689242a962c79220209c
2020-02-03 13:01:14 -08:00
neginraoof
e03e4f3a2d [ONNX] Add einsum export (#32716)
Summary:
Adding symbolic for onnx einsum as part of opset 12
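
A minimal export sketch (module and shapes invented; onnx::Einsum exists starting at opset 12):

```
import io
import torch

class MatMul(torch.nn.Module):
    def forward(self, a, b):
        return torch.einsum("ij,jk->ik", a, b)

torch.onnx.export(MatMul(), (torch.randn(2, 3), torch.randn(3, 4)),
                  io.BytesIO(), opset_version=12)
```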
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32716

Reviewed By: hl475

Differential Revision: D19626168

Pulled By: houseroad

fbshipit-source-id: d8cc8af5f05f36aca3cd55dead602261ccdfec51
2020-02-03 12:56:50 -08:00
Brian Stark
55c382e62b Fixed access to element in size tensor for scripting (#32652)
Summary:
When using scripting, there was an error when attempting to access a
specific element of the size tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32652

Reviewed By: hl475

Differential Revision: D19610726

Pulled By: houseroad

fbshipit-source-id: bca49927bbe71dbe7e7d7edf301908fe79e089b5
2020-01-29 18:33:46 -08:00
Brian Stark
43d31ae4c3 Added ONNX model checker to ONNX export (#32298)
Summary:
Included the ONNX model checker code in the ONNX export;
this forces the ONNX checker to run for all models that get exported,
which should help with validating exported models.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32298

Reviewed By: hl475

Differential Revision: D19538251

Pulled By: houseroad

fbshipit-source-id: eb20b124fe59200048f862ddaf20f6c59a0174d5
2020-01-28 16:28:54 -08:00
Jianyu Huang
3ada2e0d64 [pytorch][embeddingbag] Parallelize the EmbeddingBag operator (#4049)
Summary:
Pull Request resolved: https://github.com/pytorch/glow/pull/4049

Pull Request resolved: https://github.com/pytorch/pytorch/pull/27477

We would like to add the intra-op parallelization support for the EmbeddingBag operator.

This should bring speedup for the DLRM benchmark:
https://github.com/pytorch/pytorch/pull/24385

Benchmark code:
```
from __future__ import absolute_import, division, print_function, unicode_literals

import torch
import time

eb = torch.nn.EmbeddingBag(1000000, 64, mode='sum')

input = torch.LongTensor(1500).random_(0, 1000000)
offsets = torch.zeros(64, dtype=torch.int64)

niter = 10000
s = time.time()
for _ in range(niter):
    out = eb(input, offsets)
time_per_iter = (time.time() - s) / niter
print('time_per_iter', time_per_iter)
print('GB/s', (input.numel() * 64 * 4 + out.numel() * 4) / time_per_iter / 1e9)
```

The following results are single core on Skylake T6:
- Before our change (with the original caffe2::EmbeddingLookup)
time_per_iter 6.313693523406982e-05
GB/s 6.341517821789133

- After our change using the EmbeddingLookupIdx API which takes the offsets instead of lengths.
time_per_iter 5.7627105712890626e-05
GB/s 6.947841559053659

- With Intel's PR: https://github.com/pytorch/pytorch/pull/24385
time_per_iter 7.393271923065185e-05
GB/s 5.415518381664018

For multi-core performance, because Clang doesn't work with OMP, I can only see the single-core performance on SKL T6.
ghstack-source-id: 97124557

Test Plan:
With D16990830:
```
buck run mode/dev //caffe2/caffe2/perfkernels:embedding_bench
```

With D17750961:
```
buck run mode/opt //experimental/jianyuhuang/embeddingbag:eb
buck run mode/opt-lto //experimental/jianyuhuang/embeddingbag:eb
```

OSS test
```
python run_test.py -i nn -- TestNNDeviceTypeCPU.test_EmbeddingBag_per_sample_weights_and_new_offsets_cpu
```

Buck test
```
buck test mode/dev-nosan //caffe2/test:nn -- "test_EmbeddingBag_per_sample_weights_and_new_offsets_cpu"

OMP_NUM_THREADS=3 buck test mode/opt -c pytorch.parallel_backend=tbb //caffe2/test:nn -- "test_EmbeddingBag_per_sample_weights_and_new_offsets"  --print-passing-details
```

Generate the AVX2 code for embedding_lookup_idx_avx2.cc:
```
python hp_emblookup_codegen.py --use-offsets
```

Differential Revision: D17768404

fbshipit-source-id: 8dcd15a62d75b737fa97e0eff17f347052675700
2020-01-23 21:29:44 -08:00
Brian Wignall
f326045b37 Fix typos, via a Levenshtein-type corrector (#31523)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos, with https://github.com/bwignall/typochecker to help automate the checking.

Uses an updated version of the tool used in https://github.com/pytorch/pytorch/pull/30606 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31523

Differential Revision: D19216749

Pulled By: mrshenli

fbshipit-source-id: 7fd489cb9a77cd7e4950c1046f925d57524960ea
2020-01-17 16:03:19 -08:00
neginraoof
ffc8e255c4 Sort export w/ negative axes (#31971)
Summary:
Fixing export of Sort on negative axes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31971

Reviewed By: hl475

Differential Revision: D19325874

Pulled By: houseroad

fbshipit-source-id: 18ab2bf39221970c8ab65a1355f5759f88faa54f
2020-01-15 15:13:23 -08:00
Negin Raoof
4460a86cd6 Support op registration if name starts with underscore (_) (#32017)
Summary:
This is required for registering the torchvision::_new_empty_tensor op
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32017

Reviewed By: hl475

Differential Revision: D19399606

Pulled By: houseroad

fbshipit-source-id: 43e1f2d78d2a0310af347b42f7e9b54cd503a20d
2020-01-15 14:57:57 -08:00
Chetan Kandpal
3363ca20a7 example_outputs Doc Edit (#31826)
Summary:
The torch.onnx.export docs contained two descriptions for the 'example_outputs' arg,
so the information was combined into a single description alongside the parameters.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31826

Differential Revision: D19274928

Pulled By: zou3519

fbshipit-source-id: cbcce0a79c51784c1d7aa8981aab8aac118ca9b4
2020-01-15 12:34:34 -08:00
Brian Stark
a472f0201f Added support for Dim operation in ONNX export (#31928)
Summary:
While ONNX does not currently directly support the Dim operation on a
tensor, we can provide the same functionality with two ONNX operations.
This allows us to support Dim for all opsets. It may be advantageous to
add support for Dim to a future ONNX opset, and use that for more
efficient code.
While testing the dim op, we found an issue with empty blocks
within if statements, so graph generation was modified to prevent
generation of empty if blocks.
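
A hedged sketch of the two-op idea (not the literal implementation): Shape(x) is a 1-D tensor whose own shape is the rank of x.

```
def dim_symbolic(g, self):
    # e.g. Shape of a (2, 3, 4) input is the 1-D tensor [2, 3, 4] ...
    shape = g.op("Shape", self)
    # ... and Shape of that is [3], i.e. the rank as a 1-element tensor
    return g.op("Shape", shape)
```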

Fixes https://github.com/pytorch/pytorch/issues/27569
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31928

Reviewed By: hl475

Differential Revision: D19376602

Pulled By: houseroad

fbshipit-source-id: 111682b058a5341f5cca6c1a950c83ae412a4c6c
2020-01-13 19:42:43 -08:00
Junjie Bai
5be8dac329 Remove non-ascii character from torch/onnx/symbolic_opset11.py
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31814

Reviewed By: houseroad

Differential Revision: D19270742

Pulled By: bddppq

fbshipit-source-id: 80800d588e63701d6e1b5838d7ada993f0246a81
2020-01-02 20:54:32 -08:00
BowenBao
c4f10e0fe7 Renaming scales parameter for interpolate (#31526)
Summary:
PR separated from https://github.com/pytorch/pytorch/pull/31274.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31526

Reviewed By: zou3519

Differential Revision: D19221931

Pulled By: gchanan

fbshipit-source-id: 81958a9910867ac9d62f2b47abc49384526c4e51
2020-01-02 08:19:30 -08:00
neginraoof
0b57b383b1 Im2col export (#30972)
Summary:
Added im2col to opset 11.
This symbolic is used to export torch.nn.Unfold
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30972

Reviewed By: hl475

Differential Revision: D18946921

Pulled By: houseroad

fbshipit-source-id: 13dd0cbae899700df32fd74d6dff1f29033a2b4c
2019-12-20 09:45:45 -08:00
BowenBao
0e548a76eb Upgrade exported ONNX IR version to 6 (#31025)
Summary:
Upgrade IR version from 4 to 6; below is the change doc from ONNX. The upgrade should be backward compatible.

```
  // IR VERSION 5 published on March 18, 2019
  // - Add message TensorAnnotation.
  // - Add quantization annotation in GraphProto to map tensor with its scale and zero point quantization parameters.
  IR_VERSION_2019_3_18 = 0x0000000000000005;

  // IR VERSION 6 published on Sep 19, 2019
  // - Add support for sparse tensor constants stored in model.
  //   - Add message SparseTensorProto
  //   - Add sparse initializers
  IR_VERSION = 0x0000000000000006;
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31025

Reviewed By: hl475

Differential Revision: D18935444

Pulled By: houseroad

fbshipit-source-id: 9ba47f9657fa1a668db291cf04af07d5e8d73c21
2019-12-16 23:18:22 -08:00
Lara
1d5af9599d Update ONNX Flatten to accept negative indices in opset 11 (#30751)
Summary:
Update ONNX Flatten to accept negative indices in opset 11.
With this change, some cases of flatten do not rely on the input rank being available.
Fixes : https://github.com/pytorch/pytorch/issues/30512 .
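
A minimal example of a flatten with a negative dim (module and shapes invented):

```
import io
import torch

class Flat(torch.nn.Module):
    def forward(self, x):
        # end_dim=-1 counts from the end; opset 11 Flatten accepts negative
        # axes directly, so the input rank need not be known here
        return torch.flatten(x, start_dim=1, end_dim=-1)

torch.onnx.export(Flat(), torch.randn(2, 3, 4), io.BytesIO(),
                  opset_version=11)
```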
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30751

Reviewed By: hl475

Differential Revision: D18946904

Pulled By: houseroad

fbshipit-source-id: a6fa30a9182fff92211e505a19325525c6112f19
2019-12-12 15:27:54 -08:00
BowenBao
6ab2d1b1a4 Partially support tensor lists in loop/concat/stack (#30126)
Summary:
This is a follow-up PR after https://github.com/pytorch/pytorch/pull/29136 ~~and https://github.com/pytorch/pytorch/pull/29171~~

ONNX::Loop does not support Sequence type as loop-carried dependencies. Only tensors are supported.
This PR adds a pass that converts Sequence loop-carried dependencies to scan_outputs.
In opset 11, only the below pattern is supported.
```
PTIR graph:
 ...
 %res.1 : Tensor[] = prim::ListConstruct()
 %res : Tensor[] = prim::Loop(%11, %22, %res.1)
   block0(%i.1 : Tensor, %res.6 : Tensor[]):
     ...
     %res.3 : Tensor[] = aten::append(%res.6, %17)
     -> (%22, %res.3)
 return (%res.3)

ONNX graph:
 ...
 %res : Tensor = onnx::Loop(%11, %22)
   block0(%i.1 : Tensor):
     ...
     -> (%22, %17)
 %res_seq : Tensor[] = onnx::SplitToSequence[keepdims=0](%res)
 return (%res_seq)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30126

Reviewed By: hl475

Differential Revision: D18946880

Pulled By: houseroad

fbshipit-source-id: 67ee65700513e8a942344a3d647e2e73c19ee3d2
2019-12-11 21:24:41 -08:00
Lara
97c1e90f46 ONNX Interpolate Add Scales Params (#28324)
Summary:
Fix for : https://github.com/pytorch/pytorch/issues/27176
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28324

Reviewed By: hl475

Differential Revision: D18309133

Pulled By: houseroad

fbshipit-source-id: 348bb41393442c6b107d88fc2cd3224e0afa3ccf
2019-12-11 20:09:15 -08:00
Lara
79c27ba4ef Add ONNX Export Support to floor_divide (#31081)
Summary:
Adding support for the new ATen op floor_divide which was introduced in https://github.com/pytorch/pytorch/pull/30493/files.

This operation is used in Torchvision/FasterRCNN-MaskRCNN, which are now failing after the new op was introduced.
This PR fixes the failure.

cc: neginraoof
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31081

Reviewed By: houseroad

Differential Revision: D18945316

Pulled By: eellison

fbshipit-source-id: 09919c237d618ce7db293c7770f48f7304949dcf
2019-12-11 19:39:11 -08:00
BowenBao
8013ffd400 Fix weight_norm export for dim=0 (#31015)
Summary:
The exported weight_norm incorrectly reduces over axis 0 as well when dim is set to 0.
The previous test case only covered a weight with size(0) == 1, which yields the same result whether it is reduced over or not.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31015

Reviewed By: hl475

Differential Revision: D18900894

Pulled By: houseroad

fbshipit-source-id: 19004f51933b37f848dbe4138e617a7a8e35a9ec
2019-12-10 23:43:56 -08:00
Supriya Rao
e42af97349 Add quantized concat conversion (#30887)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30887

Support to convert quantized concat from pytorch to caffe2

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps.test_cat

Imported from OSS

Differential Revision: D18855676

fbshipit-source-id: 5d0cf3f03c61819e168b080afa368b1255d0419c
2019-12-10 15:46:16 -08:00
neginraoof
5205556782 Export custom ops (#29752)
Summary:
Updates to the export API:
When calling this API, a dict containing the custom opsets (domain and version) used to export the model can be provided.
We allow registering one custom opset (domain, version) per ONNX opset, so when exporting an operator from a custom domain, users need to pass this pair. The default custom opset version is 1.
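
A minimal call sketch (the model here is a stand-in; the argument only matters when the graph contains ops from a custom domain):

```
import io
import torch

model = torch.nn.ReLU()  # stand-in for a model using custom-domain ops

# custom_opsets maps a custom domain name to the opset version to record;
# domains not listed default to version 1
torch.onnx.export(model, torch.randn(2), io.BytesIO(),
                  opset_version=11, custom_opsets={"mydomain": 2})
```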
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29752

Reviewed By: hl475

Differential Revision: D18703662

Pulled By: houseroad

fbshipit-source-id: 84d22557d132b526169051193d730761798fce60
2019-12-09 18:48:50 -08:00
BowenBao
63f1b780ba Support exporting aten::copy_ and aten::index_put to ONNX opset 11 (#26941)
Summary:
- [x] Add more comments and refactor the logic of `ReshapeToAdvancedIndexingFormat`
- [x] Add more description here. Cases that are/aren't supported, and how they are supported.
- [x] Need to merge this PR https://github.com/pytorch/pytorch/issues/27186 to enable testing inplace operators.

We are now supporting exporting aten::copy_ and aten::index_put to ONNX.
Here's a breakdown of the different cases in PyTorch code.

```
# Case 1: Scalar Indices
x[0, 1, 2] = data

# Case 2: Slice Indices
x[1:3, :, ::2] = data

# Case 3: Ellipsis Indices
x[..., 0] = data

# Case 4: Tensor Indices
ind1 = torch.tensor([0, 2])
ind2 = torch.tensor([1, 1])
x[ind1, ind2] = data

# Case 5: Mixing all the above cases
ind1 = torch.tensor([0, 2])
ind2 = torch.tensor([1, 1])
x[1:3, ind1, ind2, ..., 3] = data
```

Limitations:

Tensor indices must be consecutive, and 1-d tensors.

```
# Supported
ind1 = torch.tensor([0, 2])
ind2 = torch.tensor([1, 1])
x[ind1, ind2] = data

# Not supported
ind1 = torch.tensor([0, 2])
ind2 = torch.tensor([1, 1])
ind3 = torch.tensor([[0], [1]])
x[ind1, :, ind2] = data
x[ind3] = data
```

Negative indices are not supported.
```
# Not supported
x[-1] = data
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26941

Differential Revision: D17951030

Pulled By: houseroad

fbshipit-source-id: 4357777072f53aa0bc4b297aa1ee53457a7f8dec
2019-12-06 22:48:46 -08:00
Michael Suo
62b10721fb Actually make flake8 do something (#30892)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30892

Fixes all outstanding lints and actually installs a properly configured
flake8

Test Plan: Imported from OSS

Differential Revision: D18862825

Pulled By: suo

fbshipit-source-id: 08e9083338a7309272e17bb803feaa42e348aa85
2019-12-06 17:50:50 -08:00
Supriya Rao
a51c5f5cbf Add JIT pass to insert permutes for conv ops (#30679)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30679

Caffe2 expects quantized ops to be in NHWC format while PyTorch inputs are in NCHW.
Add a JIT pass to insert an nchw2nhwc permute before each conv op and an nhwc2nchw permute after the conv op.
The graph rewriter is used to find consecutive redundant permutes and remove them from the graph.

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps

Imported from OSS

Differential Revision: D18790518

fbshipit-source-id: 4dd39cf0b31b21f5586c0edfdce2260d4e245112
2019-12-05 18:51:16 -08:00
Supriya Rao
980aead1f8 Add support for quantized slice conversion (#30498)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30498

Updated Int8SliceOp to accept dim, start and end index similar to Pytorch.

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps.test_slice

Imported from OSS

Differential Revision: D18740519

fbshipit-source-id: 2313f37a4936edb150ce04911b241e591e191801
2019-12-03 14:37:59 -08:00
Lara
4d30415f12 Add ONNX Scripting Conv Support (#30618)
Summary:
Convolution nodes are traced as aten:_convolution and are currently supported in ONNX.
Scripting convolution uses aten:conv<1,2,3>d which are currently not supported in ONNX.
This PR adds the symbolics for aten:conv<1,2,3>d and aten:conv_transpose<1,2,3>d
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30618

Reviewed By: hl475

Differential Revision: D18778145

Pulled By: houseroad

fbshipit-source-id: 4af0379f29974a1ce8443024d1d87b3eb8d2dd36
2019-12-03 10:28:38 -08:00
Brian Wignall
e7fe64f6a6 Fix typos (#30606)
Summary:
Should be non-semantic.

Uses https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings/For_machines to find likely typos.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30606

Differential Revision: D18763028

Pulled By: mrshenli

fbshipit-source-id: 896515a2156d062653408852e6c04b429fc5955c
2019-12-02 20:17:42 -08:00
Supriya Rao
968c0d4a46 Add support for converting quantized AvgPool2d and Reshape operations (#30490)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30490

Add symbolic mapping to Int8AvgPool2d and Int8Reshape op in C2

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps

Imported from OSS

Differential Revision: D18740520

fbshipit-source-id: 1606125500c4b549fbc984e7929b7fd5204396a0
2019-12-02 18:15:01 -08:00
Bowen Bao
1e8ed021c6 Support logsoftmax with dim != -1 (#30433)
Summary:
PyTorch dim and ONNX axis have different meanings.
ONNX only supports log_softmax with dim = -1. Transpose must be added before and after log_softmax to support other cases.
This requires input rank to be known at export time.
Fixes https://github.com/pytorch/pytorch/issues/17918
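
A minimal repro of the non-last-dim case (module and shapes invented):

```
import io
import torch
import torch.nn.functional as F

class LS(torch.nn.Module):
    def forward(self, x):
        # dim=1 is not the last dim, so the exporter wraps ONNX LogSoftmax
        # in Transpose ops; this needs the input rank at export time
        return F.log_softmax(x, dim=1)

torch.onnx.export(LS(), torch.randn(2, 3, 4), io.BytesIO(),
                  opset_version=11)
```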
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30433

Reviewed By: hl475

Differential Revision: D18723520

Pulled By: houseroad

fbshipit-source-id: d0ed3b3f051d08d46495a7abfa854edd120dca3a
2019-11-27 08:34:38 -08:00
neginraoof
512c2a2df5 Enable constant folding (#29834)
Summary:
Set default do_constant_folding = True
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29834

Reviewed By: hl475

Differential Revision: D18588037

Pulled By: houseroad

fbshipit-source-id: b35c06161321629c886e177ea666eff31cebf06a
2019-11-27 08:34:20 -08:00
BowenBao
0febff36ac Export dynamic unbind/split and __getitem__ (#29136)
Summary:
In ONNX opset 11, a series of sequence ops were added. Operators that are related to Tensor[] in PyTorch can be exported using these sequence ops.
In this PR, unbind/split that produces Tensor[], and __getitem__ that takes Tensor[] as input, are exported correctly to ONNX opset 11.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29136

Reviewed By: hl475

Differential Revision: D18309222

Pulled By: houseroad

fbshipit-source-id: be12c96bf8d0a56900683ef579f1c808c0a1af21
2019-11-26 06:54:06 -08:00
Supriya Rao
2599b9b551 Add output_size argument to caffe2 Int8ResizeNearest (#30202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30202

Pytorch Upsample operator has output_size as an argument.
For quantized tensor inputs we cannot get the input_size to calculate the width and height scale factor.
Instead we pass the output_size directly to caffe2 to calculate the scale factors.

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2_quantized.py TestQuantizedOps.test_upsample

Imported from OSS

Differential Revision: D18631478

fbshipit-source-id: 38a39129bc863f4ecf2293acc068e40ab7edc825
2019-11-26 06:54:02 -08:00
Spandan Tiwari
06db5ad707 Provide names for operator nodes in ONNX exported graph. (#27342)
Summary:
The PyTorch exporter does not add any name to the ONNX operators in the exported graph. A common request is to add names to op nodes by default. This helps the readability of the graph in visualization tools such a Netron, or when the ONNX graph is printed as a string. Also, it helps with the debuggability of the ONNX graph.

Therefore this PR adds names to operators in the exporter. The names follow a simple format, <op_type>_<index>. Expect files for tests in `test/onnx/test_operators.py` have been updated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27342

Reviewed By: hl475

Differential Revision: D17790979

Pulled By: houseroad

fbshipit-source-id: 1eaae88b5f51f152735a2ff96e22827837e34d9d
2019-11-26 06:53:53 -08:00
BowenBao
584be86c3f Try exporting ONNX with force_outplace=False (#29466)
Summary:
This should resolve https://github.com/pytorch/pytorch/issues/29008. This flag has two effects on the tracer.
- Remove the trailing underscore for inplace operators, e.g. index_put_ ==> index_put. This is handled separately in utils.py as well.
- Add out as an input for backward computation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29466

Reviewed By: hl475

Differential Revision: D18422815

Pulled By: houseroad

fbshipit-source-id: 317b6a3c8a5751fe6fe49d7543e429d281ed0d6d
2019-11-26 06:53:49 -08:00
neginraoof
a822a1d2a8 Avoid overwriting output type in onnx graph (#25906)
Summary:
When creating the ONNX graph, we overwrite the output type with the output type of the PT graph.
In some special cases, when using scripting, the PT graph does not have type information. We want to avoid overwriting the output type in these cases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25906

Reviewed By: hl475

Differential Revision: D18645903

Pulled By: houseroad

fbshipit-source-id: 56acc43e0c15c74ac8ebd689e04f7371054e362e
2019-11-21 21:30:12 -08:00
Lara
bbb3c415c9 ONNX Hardtanh Opset 11 Support (#30169)
Summary:
Add support for hardtanh that was blacklisted in opset 11.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30169

Reviewed By: hl475

Differential Revision: D18619552

Pulled By: houseroad

fbshipit-source-id: 0c1bfb0a53d1dd2327c5db7afd03a90482abb9fe
2019-11-20 18:59:00 -08:00
Lara Haidar
45024e7a35 Support Exporting Bitshift to ONNX (#28210)
Summary:
Support exporting left/right bitshifts to ONNX for all opset versions.

ONNX has a bitshift operator in opset 11, but it only supports unsigned ints, so it can't be used for PyTorch exports (since uint8 is the only uint type in PyTorch).
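
A minimal sketch of the exports this enables (shapes and values invented):

```
import io
import torch

class Shift(torch.nn.Module):
    def forward(self, x):
        return x << 2, x >> 1

torch.onnx.export(Shift(), torch.arange(8), io.BytesIO(), opset_version=11)
```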
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28210

Reviewed By: hl475

Differential Revision: D18575512

Pulled By: houseroad

fbshipit-source-id: 74161db67f599996a0614981edcc171af6780d21
2019-11-19 09:25:50 -08:00
neginraoof
267fd4a06c Fix for batch norm 2D with affine=False (#29458)
Summary:
This is a fix for batch norm 2D with affine=False.
Repro: https://github.com/pytorch/pytorch/issues/29271
The error occurs because the output of the unsqueeze op does not have scalar type information, so I moved the references to the scalar type after the unsqueeze line.
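
A minimal repro matching the described fix (shapes invented, based on the linked issue):

```
import io
import torch

# affine=False means no learnable weight/bias, which previously broke the
# symbolic; eval() avoids tracing the training-mode batch-stats path
model = torch.nn.BatchNorm2d(3, affine=False).eval()
torch.onnx.export(model, torch.randn(1, 3, 8, 8), io.BytesIO(),
                  opset_version=11)
```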
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29458

Reviewed By: hl475

Differential Revision: D18400975

Pulled By: houseroad

fbshipit-source-id: f5c5633857c584edcef3b9e9946861dcfccccd75
2019-11-18 21:52:11 -08:00
Supriya Rao
91c6d2e51c Add support for quantized operator conversion from PT to C2 via ONNX (#29694)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29694

This PR adds preliminary support required to be able to run quantized pytorch models on a C2 backend.
For quantized ops we use a custom domain name 'caffe2' to register the ops if they are in the "quantized" namespace.
The change also adds JIT pass to unpack the quantized weights and insert the unpacked values into the graph.
The actual tensor values are looked up from the params dict.

Test Plan:
python test/onnx/test_pytorch_onnx_caffe2.py TestQuantizedOps

Imported from OSS

Reviewed By: houseroad

Differential Revision: D18467130

fbshipit-source-id: 53ebd8c43935f7d7e74305dad6c231a2247df176
2019-11-18 12:12:40 -08:00
BowenBao
fbabf72829 Add ONNX support for Logdet (#29767)
Summary:
Exported as a combination of ONNX::Log and ONNX::Det.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29767

Reviewed By: hl475

Differential Revision: D18499762

Pulled By: houseroad

fbshipit-source-id: e6f2298635a995f01b2913d8958b5e1ca9d04058
2019-11-15 20:27:43 -08:00
Lara
2acca09e1a Add Support for ONNX scripting Interpolate with missing shape (#29489)
Summary:
- Add support for missing case where interpolate is exported with missing shape information in scripting
- Add warnings
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29489

Reviewed By: hl475

Differential Revision: D18438872

Pulled By: houseroad

fbshipit-source-id: d01f833bec0cc4e881ddc18e7054d22f54e9886b
2019-11-11 21:20:14 -08:00
Lara
8b53515b8a Add ONNX Export Support for torch.scalar_tensor (#28713)
Summary:
Support exporting torch.scalar_tensor() to ONNX.
This will allow making operations on dynamic scalars (like x.size(dim) where x is a tensor of dynamic shape) and exporting them to ONNX.

This is a dummy example of operations that could not be exported dynamically before this PR:

```
size_x = x.size(0)
size_y = y.size(0)
size_x_y_static = torch.tensor([size_x , size_y])  # size_x_y_static is traced as constant

size_x = torch.scalar_tensor(size_x).unsqueeze(0)
size_y = torch.scalar_tensor(size_y).unsqueeze(0)
size_x_y_dynamic = torch.cat((size_x , size_y))  # size_x_y_dynamic is dynamic and depends on x and y's size
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28713

Reviewed By: hl475

Differential Revision: D18438880

Pulled By: houseroad

fbshipit-source-id: c1651e480a41602c7c7452ffc4acba40a2b3827c
2019-11-11 18:27:49 -08:00
eellison
e01fc56ecb move type inference for arange into c++ (#27629)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/17662

I'm not sure if `arange` needs to be in python_arg_parser at all, given the schemas in native_functions.yaml. In any case this at least fixes the dtype mismatch.

In follow-up PRs I will try to handle some of the other ops that do type inference at the Python level, like randint.
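
A short example of the inference rules in question:

```
import torch

# integer arguments infer an integer dtype; any float argument promotes
# the result to the default floating dtype
print(torch.arange(5).dtype)           # torch.int64
print(torch.arange(5.0).dtype)         # torch.float32
print(torch.arange(0, 1, 0.1).dtype)   # torch.float32
```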
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27629

Differential Revision: D17885939

Pulled By: eellison

fbshipit-source-id: f97a8bc722b7ab77de1c42a992e49a4a3175ad60
2019-11-11 11:26:21 -08:00
neginraoof
64a66e8320 fixed random generation export (#29354)
Summary:
Fixed random generator symbolics, and added rand_like.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29354

Reviewed By: hl475

Differential Revision: D18400995

Pulled By: houseroad

fbshipit-source-id: 4a891e91b6c87ebce57c35b2bfa11e32ab93a149
2019-11-08 11:43:30 -08:00
Negin Raoof
7da11f4967 Export weight_norm (#28618)
Summary:
Export _weight_norm
Caffe2 tests are in place

Looks like there is conflicting behavior in torch.nn.utils.weight_norm regarding dim=None:
dim may be negative for backwards axes, but when dim = None, it is overwritten to -1
0c48092b22/torch/nn/utils/weight_norm.py (L10)

For now, this symbolic matches the current behavior, but this might need to be changed in the torch module.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28618

Reviewed By: hl475

Differential Revision: D18354270

Pulled By: houseroad

fbshipit-source-id: 0d64ee9ee1156bb96d36ed0a25b2e8cc5058ce90
2019-11-07 16:55:56 -08:00
Spandan Tiwari
509d9630ca Disabling ONNX IR v4 semantics for opset 8 or lower. (#28990)
Summary:
Currently, the `keep_initializers_as_input` argument in the `torch.onnx.export` API can be used to choose whether to export an ONNX model with IR v3 or v4 semantics. The implementation does not check which opset is being used for export. This is an issue because ONNX IR v4 is valid only for opset 9 and above (as listed [here](https://github.com/onnx/onnx/releases/tag/v1.4.0)), and an opset 8 or lower export with `keep_initializers_as_input=False` will create an illegal ONNX graph.

This change fixes this by introducing a check on opset version when deciding whether to export ONNX IR v3 or v4.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28990

Reviewed By: hl475

Differential Revision: D18352523

Pulled By: houseroad

fbshipit-source-id: 7e9055d405c3faf52b80a8de0d04186d4c350c15
2019-11-06 21:57:21 -08:00
Your Name
fff4f16e45 Clean up file opening for serialization (#29221)
Summary:
Stacked PRs
 * https://github.com/pytorch/pytorch/issues/29232 - Add zipfile serialization
 * https://github.com/pytorch/pytorch/issues/29228 - Expose miniz to Python
 * **https://github.com/pytorch/pytorch/issues/29221 - Clean up file opening for serialization**

This is a small refactor to get things started for zipfile-based serialization
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29221

Differential Revision: D18330932

Pulled By: driazati

fbshipit-source-id: ce91542faf987ae5aa6dfd322e633a0c7335e678
2019-11-06 18:41:40 -08:00
Lu Fang
6c4fd602ff Revert D18350224: Fixed export for random
Test Plan: revert-hammer

Differential Revision:
D18350224

Original commit changeset: 540a07f7def3

fbshipit-source-id: c5755c819191b858f0de4aab8196cf5a46b8f750
2019-11-06 16:01:49 -08:00
neginraoof
364e525f55 Fixed export for random (#28470)
Summary:
[ONNX] Fixed export for random generator ops
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28470

Reviewed By: hl475

Differential Revision: D18350224

Pulled By: houseroad

fbshipit-source-id: 540a07f7def335f66808af8c360b72261d15635b
2019-11-06 14:42:20 -08:00
Spandan Tiwari
bc91e19861 Enable ONNX constant folding for opset 11. (#29011)
Summary:
Currently ONNX constant folding (the `do_constant_folding=True` arg in the `torch.onnx.export` API) supports only opsets 9 and 10 of ONNX. Opset 11 support was recently introduced in the ONNX exporter, but for opset 11 constant folding is currently a no-op. This change enables ONNX constant folding for opset 11. Specifically, the main changes are:
1) Turn on the constant folding ONNX pass for opset 11.
2) Enable constant folding tests in `test/onnx/test_utility_funs.py` and `test/onnx/test_pytorch_onnx_onnxruntime.py` for opset 11.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29011

Reviewed By: hl475

Differential Revision: D18306998

Pulled By: houseroad

fbshipit-source-id: afeed21ca29e01c278612e51dacd93397dd6e2d8
2019-11-05 23:22:39 -08:00
James Reed
6e38c3b89e Make get_trace_graph private
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29149

Test Plan: Imported from OSS

Differential Revision: D18307559

Pulled By: jamesr66a

fbshipit-source-id: 0b6aec2a1d10810d4e7f6b30b256cca79fc4e854
2019-11-05 17:04:36 -08:00
Negin Raoof
ebc216a076 Opset 11 updates (#28225)
Summary:
This PR contains:
1- pad updates for opset11 symbolic
2- Updated avg_pool for opset11
3- TopK updates for opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28225

Reviewed By: hl475

Differential Revision: D18282928

Pulled By: houseroad

fbshipit-source-id: aff2cabca9a155a9b475e35fed69a678544d6669
2019-11-04 12:16:12 -08:00
Sergei Nikolaev
1e2049c566 #26426 fixed (#28715)
Summary:
This is the fix for reverted https://github.com/pytorch/pytorch/issues/26426
houseroad bddppq soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28715

Reviewed By: hl475

Differential Revision: D18146731

Pulled By: houseroad

fbshipit-source-id: 247366451a6334e84df82d00339521f797b33130
2019-11-01 12:53:01 -07:00
Vitaly Fedyunin
4bfe2f0900 Fix jit outplace tracing and reapply changes to *_like operators. (#28839)
Summary:
Reapply reverted and fix files `gen_variable_type.py` `test_jit.py`

https://github.com/pytorch/pytorch/issues/27891 Cleanup testing of _like operators
https://github.com/pytorch/pytorch/issues/27890 Add memory format support to randn_like operator
https://github.com/pytorch/pytorch/issues/27889 Add memory format support to randint_like operator
https://github.com/pytorch/pytorch/issues/27562 Add memory format support to zeros_like operator
https://github.com/pytorch/pytorch/issues/27561 Add memory format support to rand_like operator
https://github.com/pytorch/pytorch/issues/27270 Add memory format support to ones_like operator
https://github.com/pytorch/pytorch/issues/27262 Add memory format support to full_like operator
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28839

Test Plan:
Imported from GitHub, without a `Test Plan:` line.

buck test mode/dev //language_technology/neural_mt/os/pytorch_translate/test:test_onnx -- 'test_forced_decoder_export_vocab_reduction \(language_technology\.neural_mt\.os\.pytorch_translate\.test\.test_onnx\.TestONNX\)'

Differential Revision: D18203397

Pulled By: VitalyFedyunin

fbshipit-source-id: eea41cbd4c232cf5a54172b1e1b16b173798f298
2019-10-31 13:23:08 -07:00
Vitaly Fedyunin
87c98acf5d Back out "Add memory format support to full_like operator" (#28803)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28803

Original commit changeset: 1761a9939aa7
ghstack-source-id: 92748946

Test Plan: buck test language_technology/neural_mt/os/pytorch_translate/test:test_onnx

Reviewed By: ifedan

Differential Revision: D18175282

fbshipit-source-id: d3f537bbed50a4524797edd96b210b8455ef1bcc
2019-10-28 12:44:53 -07:00
Vitaly Fedyunin
d828fef8ac Back out "Add memory format support to ones_like operator" (#28802)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28802

Original commit changeset: 5da9530f6b23
ghstack-source-id: 92748794

Test Plan: buck test language_technology/neural_mt/os/pytorch_translate/test:test_onnx

Reviewed By: ifedan

Differential Revision: D18175303

fbshipit-source-id: ac36c7d345cba901bc2b64dc22661b8d0f179f13
2019-10-28 12:44:49 -07:00
James Reed
f782500ee0 Abstract tracer::enter and tracer::exit into a function
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/28473

Test Plan: Imported from OSS

Differential Revision: D18121007

Pulled By: jamesr66a

fbshipit-source-id: 4c4a4344ad9bcc4630b945d2a645a0b05928933c
2019-10-26 18:41:14 -07:00
Vitaly Fedyunin
7ff272c6da Back out D17980308-D17980313 (#28748)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28748

Found D17980313 to break unit tests; backed out its descendants too to avoid conflicts.

Test Plan:
Failed on master:

   buck test mode/dev-nosan language_technology/neural_mt/fb/pytorch_translate/test:test_onnx

Passes with this diff.

Differential Revision: D18157588

fbshipit-source-id: e2b56eac8c5bfccf3ce9a3a2993f6332ab1471e7
2019-10-26 13:08:49 -07:00
Negin Raoof
60d606094c Export Meshgrid (#26037)
Summary:
Exporting meshgrid op in opset 9 symbolics
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26037

Reviewed By: hl475

Differential Revision: D17452325

Pulled By: houseroad

fbshipit-source-id: d556b78e46594a232cdefd8c257cccd8b98221d6
2019-10-25 16:59:22 -07:00
Junjie Bai
d37c2d7c8d Revert D17495965: TensorRT 6.0 support and PyTorch->ONNX->TRT6 unit test
Test Plan: revert-hammer

Differential Revision:
D17495965

Original commit changeset: 3e8dbe8943f5

fbshipit-source-id: d47fcbec22b0d61df41d7dbf15cfdde196ac818f
2019-10-25 13:58:16 -07:00
Sergei Nikolaev
4996e3aca2 TensorRT 6.0 support and PyTorch->ONNX->TRT6 unit test (#26426)
Summary:
This PR makes Caffe2 compatible with TensorRT 6. To make sure it works well, a new unit test is added. This test checks the PyTorch->ONNX->TRT6 inference flow for all classification models from the TorchVision Zoo.
Note on CMake changes: they have to be done in order to import the onnx-tensorrt project. See https://github.com/pytorch/pytorch/issues/18524 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26426

Reviewed By: hl475

Differential Revision: D17495965

Pulled By: houseroad

fbshipit-source-id: 3e8dbe8943f5a28a51368fd5686c8d6e86e7f693
2019-10-25 13:01:57 -07:00
Vitaly Fedyunin
c258cd039a Add memory format support to zeros_like operator (#27562)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27562

Adds memory_format keyword argument (positional for cpp).

'Preserve' behavior now follows these rules:
1) If the tensor is non-overlapping and dense, the output tensor will have the same strides as the input tensor.
2) If not (1) and the tensor is stored in the channels last format, the output tensor is going to have the channels last format.
3) The output tensor is going to be contiguous in all other cases.

 ---
A dense tensor is a tensor that stores values in a contiguous block of memory.
A non-overlapping tensor is a tensor in which elements occupy individual, non-repeating memory locations.
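
As a quick illustration, a minimal sketch of the rules above, assuming current memory_format semantics (shapes are arbitrary):
```
import torch

x = torch.randn(2, 3, 4, 5).contiguous(memory_format=torch.channels_last)
y = torch.zeros_like(x, memory_format=torch.preserve_format)
# x is non-overlapping, dense, and channels-last, so rule (1) keeps its
# channels-last strides.
assert y.is_contiguous(memory_format=torch.channels_last)

z = torch.zeros_like(x, memory_format=torch.contiguous_format)
# An explicitly requested format overrides preservation.
assert z.is_contiguous()
```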

Test Plan: Imported from OSS

Differential Revision: D17980313

Pulled By: VitalyFedyunin

fbshipit-source-id: 9ca8453dc1a554ceea93c6949e01263cc576384b
2019-10-25 07:29:16 -07:00
Vitaly Fedyunin
2c339a24ec Add memory format support to ones_like operator (#27270)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27270

Adds memory_format keyword argument (positional for cpp).

'Preserve' behavior now follows these rules:
1) If the tensor is non-overlapping and dense, the output tensor will have the same strides as the input tensor.
2) If not (1) and the tensor is stored in the channels last format, the output tensor is going to have the channels last format.
3) The output tensor is going to be contiguous in all other cases.

 ---
A dense tensor is a tensor that stores values in a contiguous block of memory.
A non-overlapping tensor is a tensor in which elements occupy individual, non-repeating memory locations.

Test Plan: Imported from OSS

Differential Revision: D17980312

Pulled By: VitalyFedyunin

fbshipit-source-id: 5da9530f6b239306dbb66d1dfeefe88237f13bbd
2019-10-25 07:29:08 -07:00
Vitaly Fedyunin
85d5aee863 Add memory format support to full_like operator (#27262)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27262

Adds memory_format keyword argument (positional for cpp).

'Preserve' behavior now follows these rules:
1) If the tensor is non-overlapping and dense, the output tensor will have the same strides as the input tensor.
2) If not (1) and the tensor is stored in the channels last format, the output tensor is going to have the channels last format.
3) The output tensor is going to be contiguous in all other cases.

 ---
A dense tensor is a tensor that stores values in a contiguous block of memory.
A non-overlapping tensor is a tensor in which elements occupy individual, non-repeating memory locations.

Test Plan: Imported from OSS

Differential Revision: D17980309

Pulled By: VitalyFedyunin

fbshipit-source-id: 1761a9939aa7c5ab23e927b897e25e225089a8e7
2019-10-25 07:29:04 -07:00
Lara
d762ad09df Enable Interpolate Tests for ONNX Opset 11 (#28560)
Summary:
- Enable tests for Interpolate in opset 11 for nearest and linear2d modes (linear1d/3d not implemented yet)
- Fix bugs found after enabling tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28560

Reviewed By: hl475

Differential Revision: D18110680

Pulled By: houseroad

fbshipit-source-id: 7f8811e40dc5cedaba6389460dcca52daa048f5f
2019-10-24 14:21:13 -07:00
neginraoof
76d262d4b7 export group_norm (#27071)
Summary:
Updated group_norm symbolic
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27071

Reviewed By: hl475

Differential Revision: D17792249

Pulled By: houseroad

fbshipit-source-id: 08be6071952ca2c256d2c6a0a6bbc19a8442f1fe
2019-10-23 15:14:31 -07:00
neginraoof
d2eb08d17b Fix tracing slice/select with dynamic inputs (#26549)
Summary:
Fixes Slice/Select trace arguments. This PR stashes arguments to functions in order to avoid tracing them as constants.
It depends on a fix for the select op in PR:
https://github.com/pytorch/pytorch/pull/25273
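
An illustrative repro of the tracing pattern being fixed (the function and shapes are made up for this sketch):
```
import torch

def f(x):
    return x[1 : x.size(0) - 1]

traced = torch.jit.trace(f, torch.randn(4, 3))
# The slice bound stays a function of the input size rather than a constant
# baked in from the example input, so a larger first dimension still works.
print(traced(torch.randn(6, 3)).shape)  # torch.Size([4, 3])
```
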
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26549

Reviewed By: hl475

Differential Revision: D17623851

Pulled By: houseroad

fbshipit-source-id: ae314004266688d2c25c5bada2dcedbfc4f39c5b
2019-10-22 17:09:40 -07:00
Negin Raoof
4f70b5a4de Export det (#26958)
Summary:
Added a symbolic to export det in opset 11.
Updating the ONNX submodule is required for det export.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26958

Reviewed By: hl475

Differential Revision: D17844887

Pulled By: houseroad

fbshipit-source-id: 224ae3ff82939dc7ae8584c5a30a31fe6afa05f6
2019-10-22 13:33:15 -07:00
Mikhail Zolotukhin
0aa694ebe5 Move Method::lowered_graph to a separate pass out of the Method class. (#28242)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28242

There is no reason to have it in the general API of Module/Method; it's
just another graph pass. It was there because, some time ago, modules were
not first class and all graphs were lowered. After that changed, this
API was added for an easier transition, but now we don't need it anymore.

Test Plan: Imported from OSS

Differential Revision: D17986724

Pulled By: ZolotukhinM

fbshipit-source-id: 279a1ec450cd8fac8164ee581515b09f1d755630
2019-10-18 12:48:40 -07:00
neginraoof
95922c90b5 Export update for arange and _dim_arange (#26875)
Summary:
Export arange and _dim_arange using onnx::range in opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26875

Reviewed By: hl475

Differential Revision: D17623848

Pulled By: houseroad

fbshipit-source-id: 41f0066ca1c42882ccc051a3ee5448dca25ee5d2
2019-10-17 13:55:45 -07:00
Lara
735463f210 ONNX Export Scripted Interpolate Op (#27566)
Summary:
We currently support exporting traced interpolate ops to ONNX.

Scripting the interpolate op invokes aten::__interpolate in the Torch IR (instead of aten::upsample_[mode][dim]d), which we do not yet support.
This PR implements the ONNX symbolic for __interpolate() to support exporting interpolate in scripting scenarios.

Related open issue: https://github.com/pytorch/pytorch/issues/25807
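
A hedged sketch of the scripting scenario this enables (module name is illustrative; shown against a recent PyTorch, where older releases also required example_outputs when exporting scripted modules):
```
import io
import torch
import torch.nn.functional as F

class Upsample(torch.nn.Module):
    def forward(self, x):
        # Under torch.jit.script this goes through the interpolate symbolic
        # rather than the traced aten::upsample_* path.
        return F.interpolate(x, scale_factor=2.0, mode="nearest")

buf = io.BytesIO()
torch.onnx.export(torch.jit.script(Upsample()), (torch.randn(1, 3, 8, 8),), buf)
```
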
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27566

Reviewed By: hl475

Differential Revision: D17817731

Pulled By: houseroad

fbshipit-source-id: e091793df503e2497f24821cf2954ff157492c75
2019-10-16 11:22:22 -07:00
BowenBao
ab50abca5c Export masked_select and masked_scatter in opset 11 (#25949)
Summary:
- masked_select is exported as ONNX::GatherND
- masked_scatter is exported as ONNX::ScatterND
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25949

Reviewed By: hl475

Differential Revision: D17465489

Pulled By: houseroad

fbshipit-source-id: 4c3732617733ca2024a5e306ffa9f6bfcf9725d5
2019-10-15 21:09:37 -07:00
Vitaly Fedyunin
d39ab0312a Add memory_format support to `to` and type operators (#27107)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27107

Adds memory_format keyword argument (positional for cpp).

'Preserve' behavior now follows these rules:
1) If the tensor is non-overlapping and dense, the output tensor will have the same strides as the input tensor.
2) If not (1) and the tensor is stored in the channels last format, the output tensor is going to have the channels last format.
3) The output tensor is going to be contiguous in all other cases.

 ---
A dense tensor is a tensor that stores values in a contiguous block of memory.
A non-overlapping tensor is a tensor in which elements occupy individual, non-repeating memory locations.

Test Plan: Imported from OSS

Differential Revision: D17931062

Pulled By: VitalyFedyunin

fbshipit-source-id: 2c5dd3dd05bf58a9a29f25562cd45190b009c3f9
2019-10-15 12:55:56 -07:00
zou3519
e5d6b75319 Bag of documentation fixes; fix more sphinx warnings (#27850)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27850

Many of these are real problems in the documentation (e.g., a link or
bullet point that doesn't display correctly).

Test Plan: - built and viewed the documentation for each change locally.

Differential Revision: D17908123

Pulled By: zou3519

fbshipit-source-id: 65c92a352c89b90fb6b508c388b0874233a3817a
2019-10-15 07:31:14 -07:00
Negin Raoof
3d2c90131a opset 11 updates (#27578)
Summary:
Opset 11 updates:
- Enabled ORT tests for updated ops in opset 11
- Updated the index_copy and index_fill symbolics for opset 11, changing onnx::Scatter -> onnx::ScatterElements
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27578

Reviewed By: hl475

Differential Revision: D17852462

Pulled By: houseroad

fbshipit-source-id: c88747804054d0f3455f2c58fd1d8725e0b2f803
2019-10-11 16:18:40 -07:00
BowenBao
ba792335fc Export traced aten::unbind (#27247)
Summary:
This PR enables exporting aten::unbind created by the tracer. The traced IR will always have the pattern ```aten::unbind -> prim::ListUnpack```.
Another PR supporting scripted aten::unbind will be submitted separately.
```
// Unbind is being converted to ONNX as Split + Squeeze.
// Example IR
// graph(%0 : Float(3, 4, 5)):
//   %7 : Long() = prim::Constant[value={0}]()
//   %3 : Tensor[] = aten::unbind(%0, %7)
//   %4 : Float(4, 5), %5 : Float(4, 5), %6 : Float(4, 5) = prim::ListUnpack(%3)
//   return (%4, %5, %6)
//
// Translates to ONNX:
// graph(%0 : Float(3, 4, 5)):
//   %1 : Tensor, %2 : Tensor, %3 : Tensor = onnx::Split[axis=0](%0)
//   %4 : Float(4, 5) = onnx::Squeeze[axes=[0]](%3)
//   %5 : Float(4, 5) = onnx::Squeeze[axes=[0]](%2)
//   %6 : Float(4, 5) = onnx::Squeeze[axes=[0]](%1)
//   return (%6, %5, %4)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27247

Reviewed By: hl475

Differential Revision: D17791095

Pulled By: houseroad

fbshipit-source-id: 83b724275124dd1dedb272583a2fefbdf7035d4c
2019-10-09 18:20:03 -07:00
Lara Haidar
2093fac4ee ONNX Export ConstantOfShape with default dtype (#27577)
Summary:
Exporting a scripted module to ONNX with ops like torch.zeros() fails when the dtype is not specified.
This PR adds support for exporting scripted torch.zeros() ops (and similar ops) without specifying the dtype (the dtype will default to float).
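
A minimal sketch of the fixed case, assuming a scripted module of this shape (names are illustrative):
```
import io
import torch

class Zeros(torch.nn.Module):
    def forward(self, x):
        # No dtype given; the exported ConstantOfShape defaults to float.
        return x + torch.zeros(x.shape)

buf = io.BytesIO()
torch.onnx.export(torch.jit.script(Zeros()), (torch.randn(3, 4),), buf)
```
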
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27577

Reviewed By: hl475

Differential Revision: D17822318

Pulled By: houseroad

fbshipit-source-id: b2d4300b869e782a9b72534fea1263eb83744953
2019-10-09 17:05:35 -07:00
Lu Fang
34662f77c6 Revert D17159707: [pytorch][PR] [ONNX] Fixed Select symbolic to export slice when index = negative one
Test Plan: revert-hammer

Differential Revision:
D17159707

Original commit changeset: 2c3b27542108

fbshipit-source-id: accce910abdbe13270d0f592810a48b1dabe4b01
2019-10-08 01:59:10 -07:00
Negin Raoof
16454095e0 Fixed Select symbolic to export slice when index = negative one (#25273)
Summary:
Exporting torch.select when the index is negative one (x[:,-1]) was broken. This PR has the fix in the symbolic function for select.
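
A minimal repro of the pattern in question (module name is illustrative):
```
import io
import torch

class LastColumn(torch.nn.Module):
    def forward(self, x):
        return x[:, -1]  # aten::select with index = -1

buf = io.BytesIO()
torch.onnx.export(LastColumn(), (torch.randn(3, 4),), buf)
```
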
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25273

Reviewed By: hl475

Differential Revision: D17159707

Pulled By: houseroad

fbshipit-source-id: 2c3b275421082758f1b63c1c9b6e578f03ca9f76
2019-10-07 14:24:34 -07:00
Negin Raoof
a24291a554 Unfold export (#24970)
Summary:
ONNX export for Unfold in the opset 9 symbolic, plus op and ORT tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24970

Reviewed By: hl475

Differential Revision: D17495106

Pulled By: houseroad

fbshipit-source-id: fcd179a1213c0f219628f25c09e66fcfe4c5df50
2019-10-07 13:06:37 -07:00
Negin Raoof
c874dd91a7 export remainder (#24410)
Summary:
Added ONNX export support for torch.remainder and torch.fmod
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24410

Reviewed By: hl475

Differential Revision: D17466791

Pulled By: houseroad

fbshipit-source-id: afe6519e5f370824e3b4a45b69036a7260fb72cf
2019-10-03 20:15:20 -07:00
Vitaly Fedyunin
7b2e8c323c Add memory format argument to the clone operator (#27106)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27106

Adds memory_format option to the `clone` operator.

Introduces new `clone` behavior when used with `input_t.clone(memory_format=torch.preserve_format)`:
1) If the tensor is non-overlapping and dense, the output tensor will have the same strides as the input tensor.
2) If not (1) and the tensor is stored in the channels last format, the output tensor is going to have the channels last format.
3) The output tensor is going to be contiguous in all other cases.

 ---
A dense tensor is a tensor that stores values in a contiguous block of memory.
A non-overlapping tensor is a tensor in which elements occupy individual, non-repeating memory locations.
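
A short sketch of the preserve behavior on clone, assuming the same rules as the *_like operators above:
```
import torch

x = torch.randn(2, 3, 8, 8).contiguous(memory_format=torch.channels_last)
y = x.clone(memory_format=torch.preserve_format)
assert y.is_contiguous(memory_format=torch.channels_last)  # strides carried over
```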

Test Plan: Imported from OSS

Differential Revision: D17699357

Pulled By: VitalyFedyunin

fbshipit-source-id: 5ae1537c2aca1abf0bf1eec4416846129c156f66
2019-10-03 12:08:47 -07:00
albanD
17b1faa2bf Rename jit Function to ScriptFunction
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27219

Test Plan: Imported from OSS

Differential Revision: D17715306

Pulled By: albanD

fbshipit-source-id: d11a7634dbee6a885c7177b240958e5aed2544f3
2019-10-03 08:28:32 -07:00
Negin Raoof
d93fc64776 Update export for topk and sort (#25739)
Summary:
Updated export for topk and sort as part of opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25739

Reviewed By: hl475

Differential Revision: D17467131

Pulled By: houseroad

fbshipit-source-id: 653be138455728ec8e9bb81ae63dd7ce0c4d0793
2019-10-02 12:20:30 -07:00
Lara Haidar
f4f6d8dda5 Fix ONNX Interpolate
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/27179

Reviewed By: hl475

Differential Revision: D17698364

Pulled By: houseroad

fbshipit-source-id: 8fddd1c13e7af026962cf2d9c05fd7c957d8526e
2019-10-02 01:17:46 -07:00
Lu Fang
a173bea425 Resubmit [pytorch][PR] [ONNX] Updating producer_version in exported O… (#27004)
Summary:
Fix more expect files.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27004

Reviewed By: hl475

Differential Revision: D17641034

Pulled By: houseroad

fbshipit-source-id: 8130397d1af28d33b98ad146146d3b3fa16c15e3
2019-09-27 23:23:31 -07:00
Karl Ostmo
55a358546f Revert D17631902: [pytorch][PR] [ONNX] Updating producer_version in exported ONNX models to PyTorch 1.3.
Test Plan: revert-hammer

Differential Revision:
D17631902

Original commit changeset: 6d5896465740

fbshipit-source-id: ebf9e5e1c582027dbba2db68328ea4136a974c6b
2019-09-27 15:49:36 -07:00
Spandan Tiwari
6b3c0c1f22 Updating producer_version in exported ONNX models to PyTorch 1.3. (#26976)
Summary:
Bumping up the `producer_version` in exported ONNX models in view of the next release. Updating tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26976

Reviewed By: hl475

Differential Revision: D17631902

Pulled By: houseroad

fbshipit-source-id: 6d58964657402ac23963c49c07fcc813386aabf0
2019-09-27 13:50:24 -07:00
Negin Raoof
6b9bcd0606 export baddbmm (#26901)
Summary:
Adding symbolic for baddbmm export
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26901

Reviewed By: hl475

Differential Revision: D17620967

Pulled By: houseroad

fbshipit-source-id: 3931dff5a4afdcb4a45d967fb0efaf84029c16e5
2019-09-26 22:53:21 -07:00
Lara Haidar
614edfce81 Add Support to Dicts and Strings in ONNX for Inputs and Outputs (#25889)
Summary:
ONNX does not support dictionaries for inputs and outputs. The reason is that the arg flattening and unflattening does not handle dictionary types.
This PR adds flattening/unflattening support for dictionaries and strings.
However, this feature should be handled with caution for input dictionaries: users need to verify their dict inputs carefully and keep in mind that dynamic lookups are not available.

This PR will allow exporting cases where models have dictionary outputs (detection and segmentation models in torchvision), and where dictionary inputs are used for model configurations (MultiScaleRoiAlign in torchvision).
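
A hedged sketch of a dictionary output being flattened at export time (module and key names are illustrative; the exporter warns that the dict structure itself is not preserved):
```
import io
import torch

class DictOut(torch.nn.Module):
    def forward(self, x):
        return {"boxes": x * 2.0, "scores": x.sum(dim=1)}

buf = io.BytesIO()
# The dict values become positional ONNX outputs; keys are not part of the graph.
torch.onnx.export(DictOut(), (torch.randn(2, 3),), buf)
```
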
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25889

Reviewed By: hl475

Differential Revision: D17613605

Pulled By: houseroad

fbshipit-source-id: c62da4f35e5dc2aa23a85dfd5e2e11f63e9174db
2019-09-26 22:31:09 -07:00
BowenBao
638c4375de Export index_fill and index_copy, fix caffe2 scatter
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23052

Reviewed By: hl475

Differential Revision: D16428486

Pulled By: houseroad

fbshipit-source-id: 8c5905052763fd70197c67aba5f28eeff0790721
2019-09-26 16:23:32 -07:00
Lu Fang
b6a1d618b2 Revert D17565828: [pytorch][PR] [ONNX] Export baddbmm
Test Plan: revert-hammer

Differential Revision:
D17565828

Original commit changeset: 85f605a7b3fa

fbshipit-source-id: 7705325087d83362f71a717be880a13e9f575b37
2019-09-25 14:24:18 -07:00
Negin Raoof
63fd10549a Export baddbmm (#25738)
Summary:
Added ONNX export for baddbmm in opset 9
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25738

Reviewed By: hl475

Differential Revision: D17565828

Pulled By: houseroad

fbshipit-source-id: 85f605a7b3fa4783ef4f6ced86223133c85062d5
2019-09-25 12:28:06 -07:00
Lara
5001ec4252 Support Negative Axis in Size in ONNX (#26436)
Summary:
Currently, we export invalid ONNX models when size() is used with a negative dim.
This PR fixes the issue and allows exporting these models to ONNX (e.g., input.size(-1)).
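
A minimal repro of size() with a negative dim (module name is illustrative):
```
import io
import torch

class Flatten(torch.nn.Module):
    def forward(self, x):
        return x.view(-1, x.size(-1))  # size() with a negative dim

buf = io.BytesIO()
torch.onnx.export(Flatten(), (torch.randn(2, 3, 4),), buf)
```
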
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26436

Reviewed By: hl475

Differential Revision: D17565905

Pulled By: houseroad

fbshipit-source-id: 036bc384b25de77506ef9fbe24ceec0f7e3cff8b
2019-09-25 06:08:16 -07:00
Lara
d396c7332a Update ONNX Export for Interpolate in Opset 11 (#26778)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26778

- Add support for linear and cubic interpolate in opset 11 (see the sketch below).
- Add support for 1d and 3d interpolate in nearest mode for opset 7 and 8.
- Add tests for all cases of interpolate in ORT tests (nearest/linear/cubic, 1d/2d/3d, upsample/downsample).
Original PR resolved: https://github.com/pytorch/pytorch/pull/24805
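
A short sketch exercising one of the newly supported modes, cubic interpolation exported at opset 11 (shapes are illustrative):
```
import io
import torch
import torch.nn.functional as F

class Resize(torch.nn.Module):
    def forward(self, x):
        return F.interpolate(x, scale_factor=2.0, mode="bicubic", align_corners=False)

buf = io.BytesIO()
torch.onnx.export(Resize(), (torch.randn(1, 3, 8, 8),), buf, opset_version=11)
```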

Reviewed By: hl475

Differential Revision: D17564911

Pulled By: houseroad

fbshipit-source-id: 591e1f5b361854ace322eca1590f8f84d29c1a5d
2019-09-25 05:43:20 -07:00
Edward Yang
1bb895e1c1 Revert D17330801: [pytorch][PR] Update ONNX Export for Interpolate in Opset 11
Test Plan: revert-hammer

Differential Revision:
D17330801

Original commit changeset: 1bdefff9e72f

fbshipit-source-id: dff07477403170c27260f736ab6e6010f0deca9f
2019-09-24 18:56:45 -07:00
Lu Fang
e95f3125fd Make ONNX_ATEN_FALLBACK also works for _export (#26738)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26738

Someone may use torch._export directly, so here we change onnx_export_type's default value to None;
if it's the PyTorch-ONNX-Caffe2 bundle, we set it to ONNX_ATEN_FALLBACK, and otherwise to ONNX.

Test Plan: ci

Reviewed By: hl475

Differential Revision: D17546452

fbshipit-source-id: 38e53926e2b101484bbbce7b58ebcd6af8c42438
2019-09-24 16:30:09 -07:00
Lara
de3d4686ca Update ONNX Export for Interpolate in Opset 11 (#24805)
Summary:
- Add support for linear and cubic interpolate in opset 11.
- Add support for 1d and 3d interpolate in nearest mode for opset 7 and 8.
- Add tests for all cases of interpolate in ORT tests (nearest/linear/cubic, 1d/2d/3d, upsample/downsample).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24805

Reviewed By: hl475

Differential Revision: D17330801

Pulled By: houseroad

fbshipit-source-id: 1bdefff9e72f5e70c51f4721e1d7347478b7505b
2019-09-24 16:29:57 -07:00
Spandan Tiwari
af3b15b74c Setting automatic default selection for ONNX IR v4 semantics in ONNX export API (#26146)
Summary:
This is a follow-up PR for https://github.com/pytorch/pytorch/pull/23284. In that PR we had removed the change to the default behavior of the `keep_initializers_as_input` argument to the export API. With this PR we are enabling that change: if `keep_initializers_as_input` is not specified, the value/behavior for this argument is chosen automatically, depending on whether the export type is ONNX or not.

This was part of the earlier PR and was removed for further review. The test points have also been updated.

This change may fail some internal tests which may require explicitly setting `keep_initializers_as_input=True` to preserve old behavior.
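
A hedged sketch of pinning the old behavior explicitly (model is illustrative; note the parameter is spelled keep_initializers_as_inputs in torch.onnx.export):
```
import io
import torch

model = torch.nn.Linear(4, 2)
buf = io.BytesIO()
# True keeps weights/biases listed as graph inputs (the pre-IR-v4 style);
# leaving the argument unset lets the exporter choose based on the export type.
torch.onnx.export(model, (torch.randn(1, 4),), buf, keep_initializers_as_inputs=True)
```
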
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26146

Reviewed By: hl475

Differential Revision: D17369677

Pulled By: houseroad

fbshipit-source-id: 2aec2cff50d215714ee8769505ef24d2b7865a11
2019-09-24 10:02:31 -07:00
Mikhail Zolotukhin
76e2ffc877 Remove 'recurse' parameter from Inline. (#26487)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26487

The way it is currently implemented is bad because, while we're inlining
into a graph G, we are also mutating all the graphs that are being
inlined. The problem is that the graphs we're inlining are usually the
original graphs of functions, so we're silently changing them behind the
scenes, and we don't have a way to recover the 'unoptimized' graphs
afterwards.

Test Plan: Imported from OSS

Differential Revision: D17485748

Pulled By: ZolotukhinM

fbshipit-source-id: 6094ef56077240e9379d4c53680867df1b6e79ef
2019-09-24 00:22:18 -07:00
Lara
c79d116a7d Update ONNX Export for Gather and Scatter for Opset 11
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24790

Reviewed By: hl475

Differential Revision: D17159723

Pulled By: houseroad

fbshipit-source-id: a63bb7c681120de85588dafecd03f04742dde8b7
2019-09-23 17:13:25 -07:00
Lara
3569a1c6dd Fix Exporting RNN/LSTM's Initial State (h0/c0) to ONNX
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22813

Reviewed By: hl475

Differential Revision: D16275791

Pulled By: houseroad

fbshipit-source-id: 6e2259e84e1f5a674daabcbe0df99b1360ed2b35
2019-09-23 17:08:24 -07:00
Negin Raoof
293d73fc92 Export gelu (#24475)
Summary:
Added support for gelu in the opset 9 symbolic, plus op and ORT tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24475

Reviewed By: hl475

Differential Revision: D17088708

Pulled By: houseroad

fbshipit-source-id: 9d2f9d7d91481c57829708793d88f786d6c3956f
2019-09-18 21:18:07 -07:00
BowenBao
595c1dfa74 Export clamp for opset 11 (#25797)
Summary:
- Export clamp for opset 11, which enables dynamic min/max inputs (see the sketch below).
- Bump ONNX Runtime version in CI to enable opset 11 onnx::clip tests.
~~- Re-enable some disabled tests, now that backend impl & fixes are in.~~
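
A sketch of dynamic bounds mapping onto the opset 11 onnx::Clip signature, where min/max are optional inputs rather than attributes (tensor-valued bounds assume a recent PyTorch; names are illustrative):
```
import io
import torch

class DynClamp(torch.nn.Module):
    def forward(self, x, lo, hi):
        # Tensor-valued bounds stay dynamic instead of becoming constants.
        return torch.clamp(x, lo, hi)

buf = io.BytesIO()
inputs = (torch.randn(3, 4), torch.tensor(-0.5), torch.tensor(0.5))
torch.onnx.export(DynClamp(), inputs, buf, opset_version=11)
```
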
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25797

Reviewed By: hl475

Differential Revision: D17399112

Pulled By: houseroad

fbshipit-source-id: 9b8bfa86b2bddfb5e15d6812f04b31db6e701d26
2019-09-18 20:40:23 -07:00
BowenBao
d02369dac2 add pass for onnx scalar type conversion (#24378)
Summary:
This pass tries to resolve scalar type mismatch issues between input tensors, introduced by implicit type conversions on scalars.

e.g. https://github.com/pytorch/pytorch/issues/23724
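
An illustrative instance of this class of mismatch (the repro below is an assumption, not taken from the linked issue):
```
import io
import torch

class AddScalar(torch.nn.Module):
    def forward(self, x):
        return x + 2  # the Python scalar is implicitly cast to x's dtype

buf = io.BytesIO()
torch.onnx.export(AddScalar(), (torch.randn(2, 2, dtype=torch.float64),), buf)
```
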
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24378

Reviewed By: hl475

Differential Revision: D17088682

Pulled By: houseroad

fbshipit-source-id: 3de710f70c3b70b9f76fd36a7c4c76e168dbc756
2019-09-18 15:55:54 -07:00
neginraoof
fcb100a3e0 Export round (#26126)
Summary:
Added round export in opset 11
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26126

Reviewed By: hl475

Differential Revision: D17403589

Pulled By: houseroad

fbshipit-source-id: f9ac3f7602c50019b9feadda8d5d944a058c5455
2019-09-16 16:40:10 -07:00
Spandan Tiwari
fc93d1ae6b Add ONNX export support for torch.log1p. (#25808)
Summary:
The `torch.log1p` operator is not supported by the ONNX exporter. This PR adds that support.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25808

Reviewed By: zrphercule

Differential Revision: D17298092

Pulled By: houseroad

fbshipit-source-id: 65a919a07797722d7d4df8caf284bd89acd0bb02
2019-09-10 18:17:08 -07:00
Lara Haidar
387d5a4459 Add ONNX Export Support to rsqrt
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/24153

Reviewed By: zrphercule

Differential Revision: D17231150

Pulled By: houseroad

fbshipit-source-id: 621fa9069238a74101bb2a7f4792a6feb1f89606
2019-09-10 14:33:54 -07:00
neginraoof
d291935377 Export Unique
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25050

Differential Revision: D17085391

Pulled By: dzhulgakov

fbshipit-source-id: a17d54cf634650d3874d02c2bfacd906572ccf5f
2019-08-29 23:27:29 -07:00
Michael Suo
fa902c58ee fix inliner bug (#25052)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25052

Previously we would not inline nested functions; now we do.

Test Plan: Imported from OSS

Differential Revision: D16973848

Pulled By: suo

fbshipit-source-id: 94aa0b6f84a2577a663f4e219f930d2c6396d585
2019-08-28 19:45:47 -07:00