Commit Graph

479 Commits

Alexander Grund
93719440b8 Replace map(lambda constructs (#46462)
Summary:
Follow-up of https://github.com/pytorch/pytorch/issues/46461 with a similar goal

Makes them more readable and possibly faster. Care has to be taken when rewriting: a list comprehension `[f(x) for x in xs]` builds the whole list immediately, while a generator expression `(f(x) for x in xs)` (like Python 3's `map`) is evaluated lazily. The lazy form is a benefit where the list of values never needs to exist in memory, e.g. when the result is passed straight to `tuple`, `extend`, or `join`.
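
A minimal sketch (mine, not taken from the diff) of the kind of rewrite this commit performs:

```python
words = ["foo", "bar", "baz"]

# Before: map with a lambda
joined_old = ",".join(map(lambda w: w.upper(), words))

# After: a generator expression; no intermediate list is materialized
joined_new = ",".join(w.upper() for w in words)

# When the list itself is needed, the eager form is a list comprehension
upper_list = [w.upper() for w in words]

assert joined_old == joined_new == "FOO,BAR,BAZ"
```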

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46462

Reviewed By: zou3519

Differential Revision: D24422343

Pulled By: ezyang

fbshipit-source-id: 252e33499c92ac0b15238f2df32681dbbda2b237
2020-10-22 09:50:22 -07:00
Negin Raoof
96bc7faa50 [ONNX] Export var, var_mean and std_mean ops (#45678)
Summary:
Adding export for var, var_mean and std_mean ops

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45678

Reviewed By: houseroad

Differential Revision: D24398811

Pulled By: bzinodev

fbshipit-source-id: bf51422a9e035d521156c0fa6e77898aac83a380
2020-10-21 11:23:54 -07:00
Alexander Grund
5b0f400488 Replace list(map(...)) constructs by list comprehensions (#46461)
Summary:
As discussed in https://github.com/pytorch/pytorch/issues/46392 this makes the code more readable and possibly more performant.

It also fixes a bug uncovered by this change, where the argument order of `map` was swapped: 030a24906e (diff-5bb26bd3a23ee3bb540aeadcc0385df2a4e48de39f87ed9ea76b21990738fe98L1537-R1537)
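
For context, a hypothetical sketch of the pitfall (the actual bug lives in the linked diff): `map` takes the function first and the iterable second, which is easy to get backwards, while a comprehension has no argument order to confuse.

```python
nums = [1, 2, 3]

squares = list(map(lambda n: n * n, nums))   # correct argument order
# list(map(nums, lambda n: n * n))           # swapped arguments raise TypeError
squares_comp = [n * n for n in nums]         # comprehension: nothing to swap

assert squares == squares_comp == [1, 4, 9]
```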

Fixes https://github.com/pytorch/pytorch/issues/46392

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46461

Reviewed By: ailzhang

Differential Revision: D24367015

Pulled By: ezyang

fbshipit-source-id: d55a67933cc22346b00544c9671f09982ad920e7
2020-10-19 18:42:49 -07:00
Ksenija Stanojevic
7f8b02f5b7 [ONNX] Add test for Batchnorm (#45633)
Summary:
Add test for Batchnorm in training mode

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45633

Reviewed By: VitalyFedyunin

Differential Revision: D24117026

Pulled By: bzinodev

fbshipit-source-id: 2728d8732e856390a2a00c3e8425b9c312c00650
2020-10-19 13:07:40 -07:00
BowenBao
b28b5d3c68 [ONNX] Update squeeze test for opset 9 (#45369)
Summary:
Opset 9 supports the no-op squeeze (when dim is not 1) only under static axes.
Update the test case that was setting dynamic axes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45369

Reviewed By: anjali411

Differential Revision: D24280180

Pulled By: bzinodev

fbshipit-source-id: d7cda88ab338a1c41a68052831dcebe739a3843c
2020-10-14 12:53:13 -07:00
Ksenija Stanojevic
6ca03aeb96 [ONNX] Fix flatten operator (#45632)
Summary:
Even when dim is None, there are cases where flatten can be exported.
Also enable test_densenet in scripting mode.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45632

Reviewed By: VitalyFedyunin

Differential Revision: D24116994

Pulled By: bzinodev

fbshipit-source-id: 76da6c073ddf79bba64397fd56b592de850034c4
2020-10-14 12:44:25 -07:00
shubhambhokare1
9d389b1dcc [ONNX] Preprocess index_put with bool inputs to masked_scatter/masked_fill (#45584)
Summary:
When the input to an indexing operation is a boolean, for example array[True] = value,
the resulting index_put node needs to be converted to a masked_scatter or masked_fill node, depending on the type of the value being assigned. If that value is a single scalar, we use masked_fill; if it is a tensor of the appropriate size, we use masked_scatter.
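
A small sketch (mine, not from the PR) of the two kinds of boolean-indexed assignment this pass distinguishes:

```python
import torch

x = torch.zeros(2, 3)
mask = torch.tensor([[True, False, True],
                     [False, True, False]])

# Scalar value: exported through masked_fill
x[mask] = 1.0                       # same effect as x.masked_fill_(mask, 1.0)

# Tensor value of matching size: exported through masked_scatter
src = torch.tensor([10.0, 20.0, 30.0])
x[mask] = src                       # same effect as x.masked_scatter_(mask, src)
```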

Fixes https://github.com/pytorch/pytorch/issues/34054

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45584

Reviewed By: VitalyFedyunin

Differential Revision: D24116921

Pulled By: bzinodev

fbshipit-source-id: ebd66e06d62e15f0d49c8191d9997f55edfa520e
2020-10-14 10:58:55 -07:00
neginraoof
5ce31b6f3f [ONNX] Improve error handling for adaptive_pool (#45874)
Summary:
Duplicate of https://github.com/pytorch/pytorch/issues/43032
This update would also improve error handling for interpolate with 'area' mode.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45874

Reviewed By: albanD

Differential Revision: D24141266

Pulled By: bzinodev

fbshipit-source-id: 7559f1d6af4f1ef3507c15a1aee76fe01fa433cd
2020-10-07 09:20:35 -07:00
Dmytro Dzhulgakov
5177f8de2b Revert D23398534: [pytorch][PR] [ONNX] Improve error handling for adaptive_pool
Test Plan: revert-hammer

Differential Revision:
D23398534 (45ddeb5ce6)

Original commit changeset: f2d60d40340f

fbshipit-source-id: acc9d6c3d031662c37447fcee027b0c97b8492a7
2020-10-05 15:16:59 -07:00
Negin Raoof
45ddeb5ce6 [ONNX] Improve error handling for adaptive_pool (#43032)
Summary:
This would also improve error handling for interpolate with 'area' mode.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43032

Reviewed By: malfet

Differential Revision: D23398534

Pulled By: bzinodev

fbshipit-source-id: f2d60d40340f46e7c0499ea73c1e39945713418d
2020-10-05 11:53:14 -07:00
BowenBao
3da4cea658 [ONNX] Add dim_param support in export with onnx shape inference (#44920)
Summary:
* Support propagating `dim_param` in ONNX by encoding it as a `ShapeSymbol` in the `SymbolicShape` of outputs. If export is called with `dynamic_axes` provided, shape inference starts with these axes set as dynamic (see the sketch after this list).
* Add new test file `test_pytorch_onnx_shape_inference.py`, reusing all test cases from `test_pytorch_onnx_onnxruntime.py` but focusing on validating shapes for all nodes in the graph. This is not yet enabled in the CI, since there are still quite a few existing issues and corner cases to fix. The test defaults to running only at opset 12.
* Bug fixes, e.g. for div, _len, and the peephole.cpp passes for PackPadded and LogSoftmaxCrossEntropy.
* This PR depends on existing PRs such as #44332.
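
A minimal usage sketch (not from the PR) of exporting with `dynamic_axes`, which is what seeds the dynamic `dim_param`s for shape inference; the model and axis names are illustrative only:

```python
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return x.relu()

dummy = torch.randn(2, 3, 224, 224)

# "batch" becomes a dim_param on the input/output shapes of the exported graph
torch.onnx.export(
    TinyModel(), dummy, "tiny.onnx",
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```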

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44920

Reviewed By: eellison

Differential Revision: D23958398

Pulled By: bzinodev

fbshipit-source-id: 00479d9bd19c867d526769a15ba97ec16d56e51d
2020-09-30 21:56:24 -07:00
Negin Raoof
6b42ca2d69 [ONNX] Update embedding_bag export (#44693)
Summary:
Export of embedding bag with dynamic list of offsets.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44693

Reviewed By: malfet

Differential Revision: D23831980

Pulled By: bzinodev

fbshipit-source-id: 3eaff1a0f20d1bcfb8039e518d78c491be381e1a
2020-09-30 13:36:40 -07:00
Mike Ruberry
6d37126a10 Makes rdiv consistent with div (#45407)
Summary:
In addition to making rdiv consistent with div, this PR significantly expands division testing, accounting for floor_divide actually performing truncation division, too.
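
A tiny illustration (mine) of the consistency in question: the reflected form `scalar / tensor` performs true division just like `tensor / scalar`:

```python
import torch

t = torch.tensor([2, 3])
print(3 / t)   # tensor([1.5000, 1.0000])  (the reflected __rtruediv__ path)
print(t / 2)   # tensor([1.0000, 1.5000])
```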

Pull Request resolved: https://github.com/pytorch/pytorch/pull/45407

Reviewed By: ngimel

Differential Revision: D23974967

Pulled By: mruberry

fbshipit-source-id: 82b46b07615603f161ab7cd1d3afaa6d886bfe95
2020-09-29 08:34:01 -07:00
Negin Raoof
a77d633db1 [ONNX] Fix view for dynamic input shape (#43558)
Summary:
Export of the view op with dynamic input shape is broken when using tensors with a 0-dim.
This fix removes the symbolic's use of the static input size.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43558

Reviewed By: ailzhang

Differential Revision: D23965090

Pulled By: bzinodev

fbshipit-source-id: 628e9d7ee5d53375f25052340ca6feabf7ba7c53
2020-09-28 14:46:51 -07:00
BowenBao
57c18127dc [ONNX] Update div export to perform true divide (#44831)
Summary:
Related: https://github.com/pytorch/pytorch/issues/43787

Now that PyTorch div actually performs true division, update the ONNX export code to stay consistent.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44831

Reviewed By: eellison

Differential Revision: D23880316

Pulled By: bzinodev

fbshipit-source-id: 3bb8db34142ac4fed4039295ad3c4cb79487987f
2020-09-28 13:53:43 -07:00
Negin Raoof
95a97e51b5 [ONNX] Improve scripting inplace indexing ops (#44351)
Summary:
Fix a couple of issues with scripting inplace indexing in the prepare_inplace_ops_for_onnx pass.
1- Tracing index copy (such as cases like x[1:3] = data) already applies broadcasting on the rhs if needed. The broadcasting node (aten::expand) is missing in scripting cases (see the sketch below).

2- Inplace indexing with ellipsis (aten::copy_) is replaced with aten::index_put and then handled with slice+select in this pass.
Support for negative indices has been added for this op.

Shape inference is also enabled for scripting tests using the new JIT API.
A few more tests are enabled for scripting.
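
A short sketch (mine, not from the PR) of the two indexing patterns described above, as they appear in a scripted function:

```python
import torch

@torch.jit.script
def inplace_indexing(x: torch.Tensor, data: torch.Tensor) -> torch.Tensor:
    # 1- slice assignment; the rhs may need broadcasting (aten::expand) under scripting
    x[1:3] = data          # e.g. x is (5, 4), data is (4,) and broadcasts to (2, 4)
    # 2- ellipsis assignment with a negative index, lowered via aten::copy_ -> index_put
    x[..., -1] = 0.0
    return x

print(inplace_indexing(torch.zeros(5, 4), torch.ones(4)))
```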

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44351

Reviewed By: ezyang

Differential Revision: D23880267

Pulled By: bzinodev

fbshipit-source-id: 78b33444633eb7ae0fbabc7415e3b16001f5207f
2020-09-28 00:32:36 -07:00
liqunfu
c3bf402cbb handle onnx nll with default ignore index (#44816)
Summary:
In the ONNX NegativeLogLikelihoodLoss specification, ignore_index is optional and has no default value.
Therefore, when converting the nll op to ONNX, we need to set the ignore_index attribute explicitly even when it is not specified on the PyTorch side (where it defaults to ignore_index=-100).
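
For reference, a small sketch (mine) of the PyTorch-side default that the exporter now has to materialize as an explicit ONNX attribute:

```python
import torch
import torch.nn.functional as F

log_probs = F.log_softmax(torch.randn(3, 5), dim=1)
target = torch.tensor([1, -100, 4])   # entries equal to -100 are ignored

# Equivalent calls: ignore_index defaults to -100 in PyTorch, but ONNX
# NegativeLogLikelihoodLoss has no default, so the attribute must be emitted.
loss_default = F.nll_loss(log_probs, target)
loss_explicit = F.nll_loss(log_probs, target, ignore_index=-100)
assert torch.allclose(loss_default, loss_explicit)
```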

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44816

Reviewed By: ezyang

Differential Revision: D23880354

Pulled By: bzinodev

fbshipit-source-id: d0bdd58d0a4507ed9ce37133e68533fe6d1bdf2b
2020-09-27 23:26:19 -07:00
neginraoof
4005afe94b [ONNX] Update narrow for dynamic inputs (#44039)
Summary:
Update narrow for dynamic inputs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44039

Reviewed By: mruberry

Differential Revision: D23742215

Pulled By: bzinodev

fbshipit-source-id: 0d58d2fe996f91a124af988a9a21ee433e842d07
2020-09-27 15:52:57 -07:00
Ksenija Stanojevic
0dda65ac77 [ONNX] add jit pass for lists (#43820)
Summary:
Add jit preprocessing pass for adding int lists.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43820

Reviewed By: albanD

Differential Revision: D23674598

Pulled By: bzinodev

fbshipit-source-id: 35766403a073e202563bba5251c07efb7cc5cfb1
2020-09-21 22:05:25 -07:00
shubhambhokare1
0063512a4b [ONNX] Updates to diagnostic tool to find missing ops (#44124)
Summary:
Moved the description of the tool and changed a function name.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44124

Reviewed By: albanD

Differential Revision: D23674618

Pulled By: bzinodev

fbshipit-source-id: 5db0bb14fc106fc96358b1e0590f08e975388c6d
2020-09-18 10:32:30 -07:00
BowenBao
e535fb3f7d [ONNX] Enable true_divide scripting export with ONNX shape inference (#43991)
Summary:
Fixes the `true_divide` symbolic to cast tensors correctly.
The logic depends on knowing input types at export time, which is a known gap for exporting scripted modules. On that front, we are improving the exporter by enabling ONNX shape inference (https://github.com/pytorch/pytorch/issues/40628) and starting to increase coverage for scripting support.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43991

Reviewed By: mruberry

Differential Revision: D23674614

Pulled By: bzinodev

fbshipit-source-id: 1b1b85340eef641f664a14c4888781389c886a8b
2020-09-17 14:38:24 -07:00
Xiang Gao
20ac736200 Remove py2 compatible future imports (#44735)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44735

Reviewed By: mruberry

Differential Revision: D23731306

Pulled By: ezyang

fbshipit-source-id: 0ba009a99e475ddbe22981be8ac636f8a1c8b02f
2020-09-16 12:55:57 -07:00
Mike Ruberry
686e281bcf Updates div to perform true division (#42907)
Summary:
This PR:

- updates div to perform true division
- makes torch.true_divide an alias of torch.div

This follows on work in previous PyTorch releases that first deprecated div performing "integer" or "floor" division, then prevented it by throwing a runtime error.
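
A minimal check (mine) of the new behavior:

```python
import torch

a = torch.tensor([3, 5])
b = torch.tensor([2, 2])

print(torch.div(a, b))           # tensor([1.5000, 2.5000]); true division even for int inputs
print(torch.true_divide(a, b))   # same result: true_divide is now an alias of div
```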

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42907

Reviewed By: ngimel

Differential Revision: D23622114

Pulled By: mruberry

fbshipit-source-id: 414c7e3c1a662a6c3c731ad99cc942507d843927
2020-09-14 15:50:38 -07:00
BowenBao
43406e218a [ONNX] Update ONNX shape inference (#43929)
Summary:
* Support sequence type (de)serialization, enables onnx shape inference on sequence nodes.
* Fix shape inference with block input/output: e.g. Loop and If nodes.
* Fix bugs in symbolic discovered by coverage of onnx shape inference.
* Improve debuggability: added more jit logs. For simplicity, the default log level, when jit log is enabled, will not dump ir graphs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43929

Reviewed By: albanD

Differential Revision: D23674604

Pulled By: bzinodev

fbshipit-source-id: ab6aacb16d0e3b9a4708845bce27c6d65e567ba7
2020-09-14 15:36:19 -07:00
Ksenija Stanojevic
f7cfbac89b [ONNX] Update len symbolic (#43824)
Summary:
Update len symbolic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43824

Reviewed By: izdeby

Differential Revision: D23575765

Pulled By: bzinodev

fbshipit-source-id: 0e5c8c8d4a5297f65e2dc43168993350f784c776
2020-09-14 15:00:44 -07:00
shubhambhokare1
da11d932bc [ONNX] Update arange op to support out argument (#43777)
Summary:
Update arange op to support out argument

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43777

Reviewed By: albanD

Differential Revision: D23674583

Pulled By: bzinodev

fbshipit-source-id: 6fb65e048c6b1a551569d4d2a33223522d2a960c
2020-09-14 14:56:17 -07:00
neginraoof
62ebad4ff9 [ONNX] Export new_empty and new_zeros (#43506)
Summary:
Adding symbolic to export new_empty and new_zeros

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43506

Reviewed By: houseroad

Differential Revision: D23674574

Pulled By: bzinodev

fbshipit-source-id: ecfcdbd4845fd3a3c6618a060129fbeee4df5dd7
2020-09-14 14:48:34 -07:00
Akihiro Nitta
84949672bf Fix exception chaining in test/ (#44193)
Summary:
## Motivation
This PR fixes https://github.com/pytorch/pytorch/issues/43770 and is the continuation of https://github.com/pytorch/pytorch/issues/43836.

## Description of the change
This PR fixes exception chaining only in files under `test/` where appropriate.
To fix exception chaining, I used either of the following (see the sketch after this list):
1. `raise new_exception from old_exception`, where `new_exception` alone is not descriptive enough to debug or `old_exception` carries valuable information.
2. `raise new_exception from None`, where raising both `new_exception` and `old_exception` would be noisy and redundant.
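
A generic illustration (not from the diff) of the two forms:

```python
import json

def load_config(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError as e:
        # Form 1: chain explicitly; the original error adds useful context.
        raise RuntimeError(f"config missing at {path}") from e
    except json.JSONDecodeError:
        # Form 2: suppress the chain; the original error is only noise here.
        raise RuntimeError(f"config at {path} is not valid JSON") from None
```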

## List of lines containing `raise` in an `except` clause:
I wrote [this simple script](https://gist.github.com/akihironitta/4223c1b32404b36c1b349d70c4c93b4d) using [ast](https://docs.python.org/3.8/library/ast.html#module-ast) to list the lines that `raise` inside an `except` clause.

- [x] f8f35fddd4/test/test_cpp_extensions_aot.py (L16)
- [x] f8f35fddd4/test/test_jit.py (L2503)
- [x] f8f35fddd4/test/onnx/model_defs/word_language_model.py (L22)
- [x] f8f35fddd4/test/onnx/verify.py (L73)
- [x] f8f35fddd4/test/onnx/verify.py (L110)
- [x] f8f35fddd4/test/onnx/test_verify.py (L31)
- [x] f8f35fddd4/test/distributed/test_c10d.py (L255)
- [x] f8f35fddd4/test/distributed/test_c10d.py (L2992)
- [x] f8f35fddd4/test/distributed/test_c10d.py (L3025)
- [x] f8f35fddd4/test/distributed/test_c10d.py (L3712)
- [x] f8f35fddd4/test/distributed/test_distributed.py (L3180)
- [x] f8f35fddd4/test/distributed/test_distributed.py (L3198)
- [x] f8f35fddd4/test/distributed/test_data_parallel.py (L752)
- [x] f8f35fddd4/test/distributed/test_data_parallel.py (L776)
- [x] f8f35fddd4/test/test_type_hints.py (L151)
- [x] f8f35fddd4/test/test_jit_fuser.py (L771)
- [x] f8f35fddd4/test/test_jit_fuser.py (L773)
- [x] f8f35fddd4/test/test_dispatch.py (L105)
- [x] f8f35fddd4/test/test_distributions.py (L4738)
- [x] f8f35fddd4/test/test_nn.py (L9824)
- [x] f8f35fddd4/test/test_namedtensor.py (L843)
- [x] f8f35fddd4/test/test_jit_fuser_te.py (L875)
- [x] f8f35fddd4/test/test_jit_fuser_te.py (L877)
- [x] f8f35fddd4/test/test_dataloader.py (L31)
- [x] f8f35fddd4/test/test_dataloader.py (L43)
- [x] f8f35fddd4/test/test_dataloader.py (L365)
- [x] f8f35fddd4/test/test_dataloader.py (L391)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44193

Reviewed By: albanD

Differential Revision: D23681529

Pulled By: malfet

fbshipit-source-id: 7c2256ff17334625081137b35baeb816c1e53e0b
2020-09-14 14:20:16 -07:00
Rong Rong
105132b891 Move ONNX circle ci build to torch and remove all caffe2 CI job/workflows (#44595)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/44595

Reviewed By: seemethere

Differential Revision: D23670280

Pulled By: walterddr

fbshipit-source-id: b32633912f6c8b4606be36b90f901e636567b355
2020-09-14 09:50:13 -07:00
David Reiss
7d78a6fcdd Update interpolate to use new upsample overloads (#43025)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43025

- Use new overloads that better reflect the arguments to interpolate.
- More uniform interface for upsample ops allows simplifying the Python code.
- Also reorder overloads in native_functions.yaml to give them priority.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/37177

ghstack-source-id: 106938111

Test Plan:
test_nn has pretty good coverage.

Relying on CI for ONNX, etc.

Didn't test FC because this change is *not* forward compatible.

To ensure backwards compatibility, I ran this code before this change

```python
def test_func(arg):
    interp = torch.nn.functional.interpolate
    with_size = interp(arg, size=(16,16))
    with_scale = interp(arg, scale_factor=[2.1, 2.2], recompute_scale_factor=False)
    with_compute = interp(arg, scale_factor=[2.1, 2.2])
    return (with_size, with_scale, with_compute)

traced_func = torch.jit.trace(test_func, torch.randn(1,1,1,1))

sample = torch.randn(1, 3, 7, 7)
output = traced_func(sample)

assert not torch.allclose(output[1], output[2])

torch.jit.save(traced_func, "model.pt")
torch.save((sample, output), "data.pt")
```

then this code after this change

```python
model = torch.jit.load("model.pt")
sample, golden = torch.load("data.pt")
result = model(sample)
for r, g in zip(result, golden):
    assert torch.allclose(r, g)
```

Reviewed By: AshkanAliabadi

Differential Revision: D21209991

fbshipit-source-id: 5b2ebb7c3ed76947361fe532d1dbdd6faa3544c8
2020-09-11 09:59:14 -07:00
shubhambhokare1
f3bf6a41ca [ONNX] Update repeat op (#43430)
Summary:
Update the repeat op so that the inputs to the sizes argument can be a mixture of dynamic and constant values.
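
A small sketch (mine) of the mixed case, where one size comes from a tensor at run time and another is a constant:

```python
import torch

class RepeatModel(torch.nn.Module):
    def forward(self, x, y):
        # first repeat count is dynamic (taken from y), second is a constant
        return x.repeat(y.size(0), 2)

m = RepeatModel()
print(m(torch.randn(3, 4), torch.randn(5)).shape)   # torch.Size([15, 8])
```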

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43430

Reviewed By: houseroad

Differential Revision: D23494257

Pulled By: bzinodev

fbshipit-source-id: 90c5e90e4f73e98f3a9d5c8772850e72cecdf0d4
2020-09-04 18:53:31 -07:00
neginraoof
3d7c22a2ce [ONNX] Enable new scripting passes for functionalization and remove_mutation (#43791)
Summary:
Duplicate of https://github.com/pytorch/pytorch/issues/41413
This PR initiates the process of updating the TorchScript backend interface used by the ONNX exporter.

Replace jit lower graph pass by freeze module pass

Enable ScriptModule tests for ONNX operator tests (ORT backend) and model tests by default.

Replace the jit remove_inplace_ops pass with remove_mutation, and consolidate all passes for handling inplace ops.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43791

Reviewed By: houseroad

Differential Revision: D23421872

Pulled By: bzinodev

fbshipit-source-id: a98710c45ee905748ec58385e2a232de2486331b
2020-09-04 15:21:45 -07:00
neginraoof
539d029d8c [ONNX] Fix split export using slice (#43670)
Summary:
Fix for exporting split with fixed output shape using slice.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43670

Reviewed By: houseroad

Differential Revision: D23420318

Pulled By: bzinodev

fbshipit-source-id: 09c2b58049fe32dca2f2977d91dd64de6ee9a72f
2020-09-04 10:52:44 -07:00
Ksenija Stanojevic
32e0cedc53 [ONNX] Move tests to test_pytorch_onnx_onnxruntime (#42684)
Summary:
Move tests to test_pytorch_onnx_onnxruntime from test_utility_fun

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42684

Reviewed By: smessmer

Differential Revision: D23480360

Pulled By: bzinodev

fbshipit-source-id: 8876ba0a0c3e1d7104511de7a5cca5262b32f574
2020-09-02 21:47:38 -07:00
neginraoof
f6f9d22228 [ONNX] Export KLDivLoss (#41858)
Summary:
Enable export for KLDivLoss

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41858

Reviewed By: mrshenli

Differential Revision: D22918004

Pulled By: bzinodev

fbshipit-source-id: e3debf77a4cf0eae0df6ed5a72ee91c43e482b62
2020-09-02 11:45:13 -07:00
Ksenija Stanojevic
820c4b05a9 [ONNX] Update slice symbolic function (#42935)
Summary:
During scripting, a combination of shape (or size()) and slice (e.g. x.shape[2:]) produces the following error:
 slice() missing 1 required positional argument: 'step'
This happens because aten::slice has 2 signatures:

- aten::slice(Tensor self, int dim, int start, int end, int step) -> Tensor
- aten::slice(t[] l, int start, int end, int step) -> t[]

and when a list is passed instead of a tensor, the second of the two slice signatures is called; since it takes 4 instead of 5 arguments, it produces the above exception.
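
A minimal reproducer sketch (mine) of the pattern that dispatches to the list overload during scripting:

```python
import torch

@torch.jit.script
def tail_dims(x: torch.Tensor):
    # x.shape is a List[int] here, so slicing it hits
    # aten::slice(t[] l, int start, int end, int step) -> t[]
    return x.shape[2:]

print(tail_dims(torch.randn(2, 3, 4, 5)))   # [4, 5]
```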

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42935

Reviewed By: houseroad

Differential Revision: D23398435

Pulled By: bzinodev

fbshipit-source-id: 4151a8f878c520cea199b265973fb476b17801fe
2020-09-01 02:08:48 -07:00
Ksenija Stanojevic
ee53a335c0 [ONNX] Floordiv (#43022)
Summary:
Add export of floordiv op

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43022

Reviewed By: houseroad

Differential Revision: D23398493

Pulled By: bzinodev

fbshipit-source-id: f929a88b3bc0c3867e8fbc4e50afdf0c0c71553d
2020-08-31 17:54:40 -07:00
BowenBao
08126c9153 [ONNX] Utilize ONNX shape inference for ONNX exporter (#40628)
Summary:
The conversion from a torch operator to an ONNX operator often requires the input rank/dtype/shape to be known. Previously, the conversion depended on the tracer to provide this info, leaving a gap in the conversion of scripted modules.

We are extending the export with support from ONNX shape inference. If enabled, ONNX shape inference will be called whenever an ONNX node is created. This is the first PR, introducing the initial look of the feature; more and more cases will be supported in follow-ups.

* Added pass to run onnx shape inference on a given node. The node has to have namespace `onnx`.
* Moved helper functions from `export.cpp` to a common place for re-use.
* This feature is currently experimental, and can be turned on through the flag `onnx_shape_inference` in the internal API `torch.onnx._export` (see the sketch after this list).
* Currently skipping ONNX Sequence ops, If/Loop and ConstantOfShape due to limitations. Support will be added in the future.
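
A hedged usage sketch: assuming the internal `torch.onnx._export` accepts `onnx_shape_inference` as a keyword argument, as described above (internal APIs like this can change without notice), the experimental feature might be enabled like so:

```python
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return x + 1

# Internal, experimental entry point; flag name taken from the description above.
torch.onnx._export(
    TinyModel(), torch.randn(1, 3), "tiny.onnx",
    onnx_shape_inference=True,
)
```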

Pull Request resolved: https://github.com/pytorch/pytorch/pull/40628

Reviewed By: mrshenli

Differential Revision: D22709746

Pulled By: bzinodev

fbshipit-source-id: b52aeeae00667e66e0b0c1144022f7af9a8b2948
2020-08-30 18:35:46 -07:00
shubhambhokare1
6aaae3b08b [ONNX] Addition of diagnostic tool API (#43020)
Summary:
Added initial diagnostic tool API

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43020

Reviewed By: malfet

Differential Revision: D23398459

Pulled By: bzinodev

fbshipit-source-id: 7a6d9164a19e3ba51676fbcf645c4d358825eb42
2020-08-28 23:04:59 -07:00
neginraoof
cd0bab8d8d [ONNX] Where op (#41544)
Summary:
Extending where op export

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41544

Reviewed By: malfet

Differential Revision: D23279515

Pulled By: bzinodev

fbshipit-source-id: 4627c95ba18c8a5ac8d06839c343e06e71c46aa7
2020-08-28 18:15:01 -07:00
Spandan Tiwari
1a21c92364 [ONNX] Update in scatter ONNX export when scalar src has different type (#43440)
Summary:
`torch.scatter` allows `src` to be of a different type when `src` is a scalar. This requires an explicit cast op to be inserted in the ONNX graph because ONNX `ScatterElements` does not allow mixed types. This PR updates the export of `torch.scatter` with this logic.
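
A small example (mine) of the mixed-type case: a scalar `src` whose type differs from `self`, which PyTorch accepts but which needs an explicit Cast for ONNX `ScatterElements`:

```python
import torch

self_t = torch.zeros(2, 4)                 # float32
index = torch.tensor([[1, 3], [0, 2]])
out = torch.scatter(self_t, 1, index, 7)   # scalar src is an int, self is float
print(out)
```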

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43440

Reviewed By: hl475

Differential Revision: D23352317

Pulled By: houseroad

fbshipit-source-id: c9eeddeebb67fc3c40ad01def134799ef2b4dea6
2020-08-27 16:45:37 -07:00
shubhambhokare1
9ca338a9d4 [ONNX] Modified slice node in inplace ops pass (#43275)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/42292

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43275

Reviewed By: hl475

Differential Revision: D23352540

Pulled By: houseroad

fbshipit-source-id: 7fce3087c333efe3db4b03e9b678d0bee418e93a
2020-08-26 16:51:20 -07:00
Ralf Gommers
573940f8d7 Fix type annotation errors in torch.functional (#43446)
Summary:
Closes gh-42968

Pull Request resolved: https://github.com/pytorch/pytorch/pull/43446

Reviewed By: albanD

Differential Revision: D23280962

Pulled By: malfet

fbshipit-source-id: de5386a95a20ecc814c39cbec3e4252112340b3a
2020-08-26 08:27:59 -07:00
Kimish Patel
b52e6d00f9 Change quantizer to account for input tensor's memory format. (#42178)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42178

This otherwise introduces unnecessary calls to contiguous in the rest of
the network, where certain ops want channels last format.

Test Plan:
Quantization tests.

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D22796479

fbshipit-source-id: f1ada1c2eeed84991b9b195120699b943ef6e421
2020-08-22 16:48:50 -07:00
BowenBao
8efa898349 [ONNX] Export split_to_sequence as slice when output number is static (#42744)
Summary:
Optimize the exported graph to emit slice nodes for aten::split when the number of split outputs is fixed. Previously, in some cases these were exported as onnx::SplitToSequence, which has a dynamic number of tensor outputs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42744

Reviewed By: houseroad

Differential Revision: D23172465

Pulled By: bzinodev

fbshipit-source-id: 11e432b4ac1351f17e48356c16dc46f877fdf7da
2020-08-22 09:11:25 -07:00
BowenBao
da70976e66 [ONNX] Add support for operator add between tensor list (#41888)
Summary:
E.g.
```python
outs = []
outs += [torch.randn(3,4)]
outs = outs + [torch.randn(4,5), torch.randn(5,6)]
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41888

Reviewed By: houseroad

Differential Revision: D23172880

Pulled By: bzinodev

fbshipit-source-id: 93865106e3de5908a993e0cfa82f626ba94dab7e
2020-08-20 22:38:23 -07:00
Yael Dekel
3c5e3966f4 [ONNX] Squeeze operator should give an error when trying to apply to a dimension with shape > 1 (#38476)
Summary:
The ONNX spec for the Squeeze operator:

> Remove single-dimensional entries from the shape of a tensor. Takes a parameter axes with a list of axes to squeeze. If axes is not provided, all the single dimensions will be removed from the shape. If an axis is selected with shape entry not equal to one, an error is raised.

Currently, as explained in issue https://github.com/pytorch/pytorch/issues/36796, it is possible to export such a model to ONNX, and this results in an exception from ONNX Runtime.
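
For illustration (not from the PR): in PyTorch, `squeeze` on a dimension whose size is not 1 is a silent no-op, whereas ONNX requires an error:

```python
import torch

x = torch.randn(2, 3)
y = x.squeeze(0)     # dim 0 has size 2, so this is a no-op in PyTorch
print(y.shape)       # torch.Size([2, 3])
# ONNX Squeeze must raise if a selected axis is not 1, so the exporter should
# error out rather than emit a model that fails inside ONNX Runtime.
```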

Fixes https://github.com/pytorch/pytorch/issues/36796.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/38476

Reviewed By: hl475

Differential Revision: D22158024

Pulled By: houseroad

fbshipit-source-id: bed625f3c626eabcbfb2ea83ec2f992963defa19
2020-08-17 17:41:46 -07:00
Ksenija Stanojevic
e845b0ab51 [Resending] [ONNX] Add eliminate_unused_items pass (#42743)
Summary:
This PR:

- Adds eliminate_unused_items pass that removes unused inputs and initializers.
- Fixes run_embed_params function so it doesn't export unnecessary parameters.
- Removes test_modifying_params in test_verify since it's no longer needed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42743

Reviewed By: hl475

Differential Revision: D23058954

Pulled By: houseroad

fbshipit-source-id: cd1e81463285a0bf4e60766c8c87fc9a350d9c7e
2020-08-11 20:30:50 -07:00
Spandan Tiwari
d83cc92948 [ONNX] Add support for scalar src in torch.scatter ONNX export. (#42765)
Summary:
`torch.scatter` supports two overloads – one where the `src` input tensor is the same size as the `index` tensor, and a second where `src` is a scalar. Currently, the ONNX exporter only supports the first overload. This PR adds export support for the second overload of `torch.scatter`.
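
A short sketch (mine) of the two overloads, tensor `src` versus scalar `src`:

```python
import torch

self_t = torch.zeros(3, 5)
index = torch.tensor([[0, 1, 2], [0, 1, 4], [0, 2, 3]])

# Overload 1: src is a tensor the same size as index (already exportable)
out_tensor_src = torch.scatter(self_t, 1, index, torch.ones(3, 3))

# Overload 2: src is a scalar (export support added by this PR)
out_scalar_src = torch.scatter(self_t, 1, index, 2.5)
```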

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42765

Reviewed By: hl475

Differential Revision: D23025189

Pulled By: houseroad

fbshipit-source-id: 5c2a3f3ce3b2d69661a227df8a8e0ed7c1858dbf
2020-08-10 11:45:42 -07:00
BowenBao
55ac240589 [ONNX] Fix scalar type cast for comparison ops (#37787)
Summary:
Always promote type casts for comparison operators, regardless of whether the input is a tensor or a scalar. This is unlike arithmetic operators, where scalars are implicitly cast to the same type as the tensors.
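
An illustration (mine) of the behavior being matched: in a comparison, an integer tensor and a float scalar are compared at the promoted type, so the exporter must insert a Cast before the ONNX comparison op:

```python
import torch

t = torch.tensor([1, 2, 3])   # int64
print(t > 1.5)                # tensor([False,  True,  True]); the scalar is not truncated to 1
# In the exported graph, t is cast to float before onnx::Greater so the result matches.
```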

Pull Request resolved: https://github.com/pytorch/pytorch/pull/37787

Reviewed By: hl475

Differential Revision: D21440585

Pulled By: houseroad

fbshipit-source-id: fb5c78933760f1d1388b921e14d73a2cb982b92f
2020-08-09 23:00:57 -07:00