Commit Graph

276 Commits

Spandan Tiwari
dafee117e8 Removing unused arg f from _model_to_graph(). (#19647)
Summary:
The input argument `f` in the `_model_to_graph()` method in `torch/onnx/utils.py` is unused. This PR removes it. If there's a reason to keep it around, please let me know.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19647

Reviewed By: dzhulgakov

Differential Revision: D15071720

Pulled By: houseroad

fbshipit-source-id: 59e0dd7a4d5ebd64d0e30f274b3892a4d218c496
2019-04-26 09:40:52 -07:00
Spandan Tiwari
9ef8eb4cbc Fix case for activations attribute in nn.RNN ONNX export. (#19368)
Summary:
This PR addresses the https://github.com/pytorch/pytorch/issues/19366 issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19368

Reviewed By: zrphercule

Differential Revision: D15043949

Pulled By: houseroad

fbshipit-source-id: 9b90410307d31bc5f2fd14aa0cdd33b22572ed7c
2019-04-25 16:31:25 -07:00
Zachary DeVito
31524bda1f @torch.jit.script(fn) now is a torch.jit.Function (#19721)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19721
ghimport-source-id: b4f5024adc845a82dc5197d19aab1496bf85089f

Reviewed By: jamesr66a

Differential Revision: D15078534

Pulled By: zdevito

fbshipit-source-id: 408d3a871302c5ac5d6426dc5de567f2188ebf4c
2019-04-25 15:53:00 -07:00
BowenBao
960513006f Support exporting squeeze & unsqueeze with negative dim attribute
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19297
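The negative-dim handling the title describes can be sketched as follows (hypothetical helper name; the real pass operates on JIT graph attributes). ONNX Squeeze/Unsqueeze axes must be non-negative at these opsets, so a negative PyTorch dim is shifted by the tensor rank before export:

```python
def normalize_dim(dim, rank):
    # Map a possibly-negative PyTorch dim to the non-negative axis that
    # ONNX Squeeze/Unsqueeze attributes expect.
    return dim + rank if dim < 0 else dim
```

For example, `normalize_dim(-1, 4)` yields `3`.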

Reviewed By: zrphercule

Differential Revision: D14953525

Pulled By: houseroad

fbshipit-source-id: 8d7eecd2804b8e27d3ee4ad6e763352818d02d0c
2019-04-24 12:45:59 -07:00
Wanchao Liang
e9c8f372c4 dispatch max_pools with no indices, expose max_pools to torch namespace (#19449)
Summary:
In the functional interfaces we do boolean dispatch, but both branches dispatch to max_pool*d_with_indices. This changes them to emit the max_pool*d op instead when indices are not requested, so we don't have to expose the with_indices ops to different backends (for the JIT).

It also binds max_pool*d to the torch namespace, matching the behavior of avg_pool*d.
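The boolean dispatch described above can be sketched in plain Python (a minimal stand-in on lists, not the actual ATen implementation):

```python
def max_pool1d(values, kernel_size, return_indices=False):
    # Boolean dispatch: one entry point that behaves as either the plain
    # or the with-indices variant, mirroring F.max_pool1d vs
    # F.max_pool1d_with_indices (a 1-D list stands in for a tensor).
    pooled, indices = [], []
    for start in range(0, len(values) - kernel_size + 1, kernel_size):
        window = values[start:start + kernel_size]
        best = max(window)
        pooled.append(best)
        indices.append(start + window.index(best))
    return (pooled, indices) if return_indices else pooled
```

`max_pool1d([1, 3, 2, 5], 2)` returns `[3, 5]`; with `return_indices=True` it additionally returns the positions `[1, 3]`.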
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19449

Differential Revision: D15016839

Pulled By: wanchaol

fbshipit-source-id: f77cd5f0bcd6d8534c1296d89b061023a8288a2c
2019-04-23 11:20:05 -07:00
Vitaly Fedyunin
8be6d5ffd8 Add onnx support for _unique2 operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19582

Reviewed By: ezyang, jamesr66a

Differential Revision: D15037375

fbshipit-source-id: 6060476925bf02fa07f852054e06d2107f046e38
2019-04-22 17:52:47 -07:00
Lara Haidar-Ahmad
9983c24cfc Strip doc_string from exported ONNX models (#18882)
Summary:
Strip the doc_string by default from exported ONNX models (this string contains the stack trace and information about local repos and folders, which can be confidential).

Users can still generate the doc_string by specifying add_doc_string=True in torch.onnx.export().
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18882

Differential Revision: D14889684

Pulled By: houseroad

fbshipit-source-id: 26d2c23c8dc3f484544aa854b507ada429adb9b8
2019-04-18 22:30:00 -07:00
Spandan Tiwari
a64cce326f Add constant folding to ONNX graph during export (Resubmission) (#18698)
Summary:
Rewritten version of https://github.com/pytorch/pytorch/pull/17771 using graph C++ APIs.

This PR adds the ability to do constant folding on ONNX graphs during PT->ONNX export. This is done mainly to optimize the graph and make it leaner. The two attached snapshots show a multiple-node LSTM model before and after constant folding.
A couple of notes:
1. Constant folding is by default turned off for now. The goal is to turn it on by default once we have validated it through all the tests.
2. Support for folding in nested blocks is not in place, but will be added in the future, if needed.
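The idea can be illustrated with a toy pass in pure Python (this is only a sketch; the actual pass is implemented with the graph C++ APIs): nodes whose inputs are all known constants are evaluated at export time and replaced by constants.

```python
# Toy op table for the sketch; the real pass evaluates ONNX node kinds.
OPS = {"Add": lambda a, b: a + b, "Mul": lambda a, b: a * b}

def fold_constants(nodes, constants):
    # nodes: list of (op, input_names, output_name); constants: name -> value.
    remaining = []
    for op, inputs, output in nodes:
        if all(name in constants for name in inputs):
            # All inputs known: evaluate now and record the folded result.
            constants[output] = OPS[op](*(constants[n] for n in inputs))
        else:
            remaining.append((op, inputs, output))
    return remaining, constants
```

Folding `Add(a, b)` with constant `a` and `b` removes that node and leaves only the ops that depend on runtime inputs.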

**Original Model:**
![multiple_lstm_original](https://user-images.githubusercontent.com/23646532/53987630-6ac53980-40d6-11e9-9702-1ccfee124a83.JPG)
**Constant-folded model:**
![multiple_lstm_constant_folded](https://user-images.githubusercontent.com/23646532/53987632-6c8efd00-40d6-11e9-81c5-362c16f68861.JPG)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18698

Differential Revision: D14889768

Pulled By: houseroad

fbshipit-source-id: b6616b1011de9668f7c4317c880cb8ad4c7b631a
2019-04-18 00:10:04 -07:00
Junjie Bai
33443d083e Fix python lint (#19331)
Summary:
VitalyFedyunin jerryzh168
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19331

Differential Revision: D14969435

Pulled By: bddppq

fbshipit-source-id: c1555c52064758ecbe668f92b837f2d7524f6118
2019-04-16 21:47:30 -07:00
Vitaly Fedyunin
1c5073fb4b Adding pin_memory kwarg to zeros, ones, empty, ... tensor constructors (#18952)
Summary:
Make it possible to construct a pinned-memory tensor without creating a storage first and without calling the pin_memory() function. It is also faster, as the copy operation is unnecessary.

Supported functions:
```python
torch.rand_like(t, pin_memory=True)
torch.randn_like(t, pin_memory=True)
torch.empty_like(t, pin_memory=True)
torch.full_like(t, 4, pin_memory=True)
torch.zeros_like(t, pin_memory=True)
torch.ones_like(t, pin_memory=True)
torch.tensor([10,11], pin_memory=True)
torch.randn(3, 5, pin_memory=True)
torch.rand(3, pin_memory=True)
torch.zeros(3, pin_memory=True)
torch.randperm(3, pin_memory=True)
torch.empty(6, pin_memory=True)
torch.ones(6, pin_memory=True)
torch.eye(6, pin_memory=True)
torch.arange(3, 5, pin_memory=True)
```

Part of the bigger: `Remove Storage` plan.

Now compatible with both TorchScript forms:
```python
_1 = torch.zeros([10], dtype=6, layout=0, device=torch.device("cpu"), pin_memory=False)
```
and
```python
_1 = torch.zeros([10], dtype=6, layout=0, device=torch.device("cpu"))
```

The same was checked for all similar functions: `rand_like`, `empty_like`, and others.

It is fixed version of #18455
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18952

Differential Revision: D14801792

Pulled By: VitalyFedyunin

fbshipit-source-id: 8dbc61078ff7a637d0ecdb95d4e98f704d5450ba
2019-04-16 11:06:15 -07:00
Zachary DeVito
dcb5fd3613 get propagate_shape logic out of module.h (#19137)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19137
ghimport-source-id: 2394765f2d401e68ffdfa4c985bfab4cca2517f8

Reviewed By: jamesr66a

Differential Revision: D14885946

Pulled By: zdevito

fbshipit-source-id: daa2894ed9761107e9d273bb172840dc23ace072
2019-04-13 08:42:17 -07:00
Lu Fang
bd55abb463 Fix onnx ints (#19102)
Summary:
If JIT constant propagation doesn't work, we have to handle the ListConstruct node in the symbolic.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19102

Reviewed By: zrphercule

Differential Revision: D14875588

Pulled By: houseroad

fbshipit-source-id: d25c847d224d2d32db50aae1751100080e115022
2019-04-12 12:01:14 -07:00
Lu Fang
33a950924a Skip Slice if it's no op (#19155)
Summary:
If it's an identity op, just skip the Slice and return the input.
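The identity check can be sketched as follows (hypothetical helper; the real symbolic inspects the slice bounds recorded in the JIT graph, where an open-ended slice is encoded with the INT64 maximum):

```python
INT64_MAX = 2**63 - 1  # sentinel PyTorch uses for an open-ended slice

def is_noop_slice(start, end, step, dim_size):
    # A slice that keeps the whole dimension with step 1 is the identity,
    # so the exporter can return the input instead of emitting a Slice op.
    return step == 1 and start == 0 and end >= dim_size
```

For example, `x[0:]` on a dimension of size 10 arrives as `start=0, end=INT64_MAX, step=1` and is skipped.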
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19155

Reviewed By: zrphercule

Differential Revision: D14890238

Pulled By: houseroad

fbshipit-source-id: f87b93df2cca0cb0e8ae2a1d95ba148044eafd4a
2019-04-11 12:26:32 -07:00
Richard Zou
2a2007e5ac EmbeddingBag CPU forward with per_sample_weights. (#18735)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18735
ghimport-source-id: d81bef54dafd7167d2451250d7be478d3c013920

Reviewed By: cpuhrsch

Differential Revision: D14851415

Pulled By: zou3519

fbshipit-source-id: cea6039e760ad571b90f0a536e420498f34be325
2019-04-09 18:12:55 -07:00
Lu Fang
ba77eadbca add a utility function to check whether we are in the middle of ONNX export
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19050

Reviewed By: yinghai

Differential Revision: D14849878

Pulled By: houseroad

fbshipit-source-id: a0a4a57f5f9f315ba1334edfccc9284a8099d17f
2019-04-09 10:07:08 -07:00
Lu Fang
443a58e03d Export C10 operator in PyTorch Model (#18210)
Summary:
Almost there, feel free to review.

These c10 operators are exported to the `_caffe2` domain.

TODO:

- [x] let the onnx checker pass
- [x] test tensor list as argument
- [x] test caffe2 backend and converter
- [x] check the c10 schema can be exported to onnx
- [x] refactor the test case to share some code
- [x] fix the problem in ONNX_ATEN_FALLBACK
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18210

Reviewed By: zrphercule

Differential Revision: D14600916

Pulled By: houseroad

fbshipit-source-id: 2592a75f21098fb6ceb38c5d00ee40e9e01cd144
2019-04-08 16:06:00 -07:00
Vitaly Fedyunin
b7c830b916 Revert "Adding pin_memory kwarg to zeros, ones, empty,... (#18854)
Summary:
This reverts commit c484cf43a0.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18854

Differential Revision: D14778393

Pulled By: VitalyFedyunin

fbshipit-source-id: 4b5a1f5b1c091bbc4a8e75614734cc011d26b452
2019-04-05 06:25:33 -07:00
Lara
1ec1db477d ONNX Export All Cases of Softmax
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18482

Reviewed By: zrphercule

Differential Revision: D14630697

Pulled By: houseroad

fbshipit-source-id: c06f1e3bead10a265c5f4ac3723d49f4caf46801
2019-04-04 13:24:04 -07:00
Lu Fang
65dfe1203f add an assertion to check the param num (#18145)
Summary:
Introduce this check to see whether it will break any existing workflow
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18145

Reviewed By: dzhulgakov

Differential Revision: D14511711

Pulled By: houseroad

fbshipit-source-id: a7bb6ac84c9133fe94d3fe2f1a8566faed14a136
2019-04-03 12:47:23 -07:00
Igor Fedan
3079d95b6c Fix flake8 issues
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18762

Reviewed By: houseroad

Differential Revision: D14734152

Pulled By: ifedan

fbshipit-source-id: 5adf123f88273895ad34ee9041896358d686de08
2019-04-02 21:18:01 -07:00
Vitaly Fedyunin
c484cf43a0 Adding pin_memory kwarg to zeros, ones, empty, ... tensor constructors. (#18455)
Summary:
Make it possible to construct a pinned-memory tensor without creating a storage first and without calling the pin_memory() function. It is also faster, as the copy operation is unnecessary.

Supported functions:
```python
torch.rand_like(t, pin_memory=True)
torch.randn_like(t, pin_memory=True)
torch.empty_like(t, pin_memory=True)
torch.full_like(t, 4, pin_memory=True)
torch.zeros_like(t, pin_memory=True)
torch.ones_like(t, pin_memory=True)
torch.tensor([10,11], pin_memory=True)
torch.randn(3, 5, pin_memory=True)
torch.rand(3, pin_memory=True)
torch.zeros(3, pin_memory=True)
torch.randperm(3, pin_memory=True)
torch.empty(6, pin_memory=True)
torch.ones(6, pin_memory=True)
torch.eye(6, pin_memory=True)
torch.arange(3, 5, pin_memory=True)
```

Part of the bigger: `Remove Storage` plan.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18455

Reviewed By: ezyang

Differential Revision: D14672084

Pulled By: VitalyFedyunin

fbshipit-source-id: 9d0997ec00f59500ee018f8b851934d334012124
2019-04-02 08:48:19 -07:00
Edward Yang
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import unused; flake8-2 will not.  For now, I just
noqa'd all these sites.

All the changes were done by hand.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
Spandan Tiwari
1240327c5c Refactoring serialization of ONNX initializers to be name-based (Resubmission) (#17830)
Summary:
houseroad - this is the resubmission of https://github.com/pytorch/pytorch/pull/17420, as suggested.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17830

Reviewed By: zrphercule

Differential Revision: D14398714

Pulled By: houseroad

fbshipit-source-id: bda475f1ae8a5273ebdb0f6883fc66036c29d326
2019-03-29 15:23:29 -07:00
bddppq
1989716ae5 Resubmit PR-18512: Improved onnx export for 3 onnx ops (#18571)
Summary:
Fix ROCm CI failure
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18571

Differential Revision: D14669323

Pulled By: bddppq

fbshipit-source-id: 022afe5c20e680295c9cfdfe1ec14650305955a8
2019-03-28 18:12:49 -07:00
Junjie Bai
77280b11e3 Revert D14635130: Improved onnx export for 3 onnx ops.
Differential Revision:
D14635130

Original commit changeset: d54a2b6e2950

fbshipit-source-id: f624e2befdde245cb88435a95508b2a8e6b12e61
2019-03-28 10:26:34 -07:00
Benoit Steiner
eee760dbd3 Improved onnx export for 3 onnx ops. (#18512)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18512

Ceil and Floor have been supported since version 6 of ONNX: export them using the native ONNX ops instead of an ATen op.
Similarly, support for the Where op was added in version 9, so we don't need to wrap that op in an ATen op either.
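The resulting symbolics are essentially one-liners. Here is a sketch with a minimal stand-in for the graph handle (the real functions live in the torch.onnx symbolic modules and receive the JIT graph object):

```python
class Graph:
    # Minimal stand-in for the graph object passed to ONNX symbolic functions.
    def __init__(self):
        self.nodes = []

    def op(self, kind, *inputs):
        # Record the emitted ONNX node and return a placeholder value name.
        self.nodes.append((kind, inputs))
        return "%v{}".format(len(self.nodes))

def ceil(g, self_):
    return g.op("Ceil", self_)  # native ONNX Ceil (opset >= 6)

def where(g, condition, self_, other):
    return g.op("Where", condition, self_, other)  # native Where (opset >= 9)
```

Calling `ceil(g, "%x")` on a `Graph` records a single native `Ceil` node instead of an ATen fallback node.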

Reviewed By: houseroad

Differential Revision: D14635130

fbshipit-source-id: d54a2b6e295074a6214b5939b21051a6735c9958
2019-03-28 08:55:21 -07:00
eellison
e4f1681c82 Rename isTensor api -> isCompleteTensor (#18437)
Summary:
`isTensor` has been brought up as misleading a couple of times; rename it to `isCompleteTensor` for clarity.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18437

Differential Revision: D14605223

Pulled By: eellison

fbshipit-source-id: 189f67f12cbecd76516a04e67d8145c260c79036
2019-03-27 14:46:06 -07:00
Soumith Chintala
66628f78b7 Revert D14605905: [pytorch][PR] Add return_counts to torch.unique
Differential Revision:
D14605905

Original commit changeset: 555f5a12a8e2

fbshipit-source-id: c7874f5987893e956c022180a37763d88bba38db
2019-03-26 17:18:01 -07:00
BowenBao
654e59fcac Minor fix for onnx ConstantOfShape export (#18199)
Summary:
Set the value as a tensor of 1 element instead of a scalar, per the ONNX spec.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18199

Reviewed By: dzhulgakov

Differential Revision: D14542588

Pulled By: houseroad

fbshipit-source-id: 70dc978d870ebe6ef37c519ba4a20061c3f07372
2019-03-26 13:23:16 -07:00
Xiang Gao
bf2a30cb22 Support dim=None for argmax and argmin (#18264)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/18263
cc: houseroad
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18264

Reviewed By: ezyang

Differential Revision: D14559234

Pulled By: houseroad

fbshipit-source-id: c5b8623752d6c6af41c6d715fd9585a65294868d
2019-03-25 20:43:34 -07:00
Xiang Gao
e2730ddb21 Add return_counts to torch.unique (#18391)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/12598

This PR was originally authored by ptrblck in https://github.com/pytorch/pytorch/pull/15495, but since there was no update for months after the requested changes, I cloned that branch and resolved the code reviews here. Hope everything is good now. In particular, the implementation of counts was changed from ptrblck's original algorithm to the one ngimel suggested, i.e. using `unique_by_key` and `adjacent_difference`.

The current implementation of `_unique_dim` is VERY slow for computing the inverse index and counts; see https://github.com/pytorch/pytorch/issues/18405. I will refactor `_unique_dim` in a later PR. For this PR, please allow me to keep the implementation as is.
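The sort-based counting idea can be sketched in pure Python (the CUDA version uses thrust's `unique_by_key`/`adjacent_difference`; this only illustrates the underlying logic):

```python
def unique_with_counts(values):
    # Sort, then count runs of equal values: each boundary where the
    # sorted sequence changes starts a new unique value.
    uniques, counts = [], []
    for v in sorted(values):
        if uniques and v == uniques[-1]:
            counts[-1] += 1
        else:
            uniques.append(v)
            counts.append(1)
    return uniques, counts
```

`unique_with_counts([1, 2, 2, 3, 1])` returns `([1, 2, 3], [2, 2, 1])`.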

cc: ptrblck ezyang ngimel colesbury
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18391

Reviewed By: soumith

Differential Revision: D14605905

Pulled By: VitalyFedyunin

fbshipit-source-id: 555f5a12a8e28c38b10dfccf1b6bb16c030bfdce
2019-03-25 20:38:17 -07:00
Lu Fang
2016daaf51 Fix ONNX symbolic for argmin and argmax (#18261)
Summary:
Fix the problem introduced in https://github.com/pytorch/pytorch/pull/17103
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18261

Reviewed By: bddppq

Differential Revision: D14558781

Pulled By: houseroad

fbshipit-source-id: 7bb50072e77d1d7b2a93f4011fa1362f26e9df1c
2019-03-20 22:51:13 -07:00
Lu Fang
18b31b73fb Retain the parameter names in ONNX exporter (#17551)
Summary:
So we will keep the names of the ONNX initializers the same as the names in the PyTorch state dict.

Later, we will make this the default behavior.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17551

Reviewed By: dzhulgakov

Differential Revision: D14491920

Pulled By: houseroad

fbshipit-source-id: f355c02e1b90d7ebbebf4be7c0fb6ae208ec795f
2019-03-20 12:11:23 -07:00
Alexandr Morev
abc171bd53 Fix typo in docstring
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18216

Differential Revision: D14539824

Pulled By: ezyang

fbshipit-source-id: 490b72951a75f3f8b949a2d692d660a3693ee98a
2019-03-20 11:16:36 -07:00
Lara Haidar-Ahmad
001cffed9d ONNX Export IsNan op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17698

Reviewed By: zrphercule

Differential Revision: D14470646

Pulled By: houseroad

fbshipit-source-id: d3e6adc83c4f9fa288c5fe0ae4c6af71fdd47905
2019-03-15 12:19:03 -07:00
BowenBao
8f07a9da30 Update nonzero onnx export (#18047)
Summary:
The output format of NonZero in ONNX (which follows numpy: https://docs.scipy.org/doc/numpy/reference/generated/numpy.nonzero.html) differs from that in PyTorch:
in ONNX it is `[rank_of_input, num_of_nonzeros]`, whereas in PyTorch it is `[num_of_nonzeros, rank_of_input]`.
To resolve the difference, the exporter adds a Transpose op after the nonzero output.
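The layout difference, illustrated with plain lists (a sketch of the semantics only, not the exporter code):

```python
def nonzero_pytorch(matrix):
    # PyTorch layout: one row per nonzero element -> [num_nonzeros, rank].
    return [[r, c] for r, row in enumerate(matrix)
            for c, v in enumerate(row) if v != 0]

def nonzero_onnx(matrix):
    # ONNX/numpy layout: one row per dimension -> [rank, num_nonzeros].
    # This is the transpose of the PyTorch result, which is why the
    # exporter appends a Transpose op.
    return [list(dim) for dim in zip(*nonzero_pytorch(matrix))]
```

For `[[0, 1, 1], [2, 0, 0]]`, the PyTorch layout is `[[0, 1], [0, 2], [1, 0]]` while the ONNX layout is `[[0, 0, 1], [1, 2, 0]]`.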
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18047

Differential Revision: D14475081

Pulled By: ezyang

fbshipit-source-id: 7a3e4899f3419766b6145d3e9261e92859e81dc4
2019-03-14 22:19:20 -07:00
Lu Fang
cc07f968f8 Revert D14361993: [pytorch][PR] [Onnx] - refactoring serialization of ONNX initializers to be name-based
Differential Revision:
D14361993

Original commit changeset: da93e945d557

fbshipit-source-id: 15eea001fbcd059ac13903405aeb9ea182c6ee8b
2019-03-08 16:31:14 -08:00
Lu Fang
1043ff6d68 Set the default ONNX opset to the latest stable opset (i.e., 9) (#17736)
Summary:
1) The changes in the new opset won't affect internal pipeline.
2) The CI won't be affected by the ONNX changes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17736

Reviewed By: zrphercule

Differential Revision: D14358710

Pulled By: houseroad

fbshipit-source-id: 4ef15d2246b50f6875ee215ce37ecf92d555ca6a
2019-03-07 10:56:06 -08:00
David Riazati
a2381fa346 Add module attributes (#17309)
Summary:
Similar to `nn.Parameter`s, this PR lets you store any `IValue` on a `ScriptModule` as an attribute (only from the Python front-end currently). To mark something as an attribute, it should be wrapped in `jit.Attribute(value, type)` (e.g. `self.table = torch.jit.Attribute(table, Dict[str, torch.Tensor])`).

Followup Work:
* (de)serializing for use in C++
* change `self.training` to be a `bool` attribute instead of a buffer
* mutable attributes
* string frontend support
* documentation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17309

Differential Revision: D14354316

Pulled By: driazati

fbshipit-source-id: 67e08ab5229366b67fbc837e67b58831a4fb3318
2019-03-07 10:44:10 -08:00
Spandan Tiwari
e4c9d75008 - refactoring serialization of ONNX initializers to be name-based (#17420)
Summary:
Currently, serialization of model parameters in ONNX export depends on the order in which they are stored in a container (`list` on the Python side and `std::vector` on the C++ side). This has worked fine till now, but if we need to run any pass on the graph that mutates the parameter list, then strictly order-based serialization may not work.

This PR is the first in a set to bring in more passes (such as constant folding) related to ONNX export. This PR lays the groundwork by moving the serialization in ONNX export from order-based to name based approach, which is more amenable to some of the passes.

houseroad - As discussed this change uses a map for export, and removes the code from `export.cpp` that relies on the order to compute initializer names.
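The shift can be illustrated with a toy serializer (hypothetical names; the real change is in the C++ export code):

```python
def export_initializers(graph_input_names, params_by_name):
    # Name-based serialization: look each graph input up in the parameter
    # map by name, instead of zipping two parallel, order-sensitive lists.
    # Passes that reorder or drop parameters can no longer mis-pair them.
    return [(name, params_by_name[name])
            for name in graph_input_names if name in params_by_name]
```

Note that the output pairing is correct regardless of the order in which the parameters were stored.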
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17420

Differential Revision: D14361993

Pulled By: houseroad

fbshipit-source-id: da93e945d55755c126de06641f35df87d1648cc4
2019-03-07 10:25:00 -08:00
Lara Haidar-Ahmad
3f94fc4862 ONNX Export for Max and Average Pooling in CEIL_MODE
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16769

Differential Revision: D14362175

Pulled By: houseroad

fbshipit-source-id: 65cfb1dfba6a43d39cc85374add368fe8e4e5645
2019-03-07 10:10:21 -08:00
Lara Haidar
3dba1285ab ONNX Export Narrow op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17550

Differential Revision: D14350401

Pulled By: houseroad

fbshipit-source-id: 4d88079bb7a8bbd270b0272009826eb3b202cc33
2019-03-06 22:37:58 -08:00
Lu Fang
4db3f8f806 Improve ONNX symbolic for logsoftmax and softmax (#17672)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17672

support dtype in the onnx symbolic

Reviewed By: zrphercule

Differential Revision: D14313987

fbshipit-source-id: e9364621b3f795191d880599711dfbcb220d0e31
2019-03-06 15:02:08 -08:00
Lara Haidar-Ahmad
073634612f ONNX Export Argmin and Argmax ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17382

Differential Revision: D14338811

Pulled By: houseroad

fbshipit-source-id: be07548d8063d1aa94f1801c18137738365b85fb
2019-03-06 12:11:47 -08:00
Wanchao Liang
ab95b5c6cc Rename prim::Undefined to prim::AutogradZero (#17611)
Summary:
supersedes #17245
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17611

Differential Revision: D14283581

Pulled By: wanchaol

fbshipit-source-id: 8022d02b8a021ea2fee9a18a2c8920eb123200c5
2019-03-01 15:13:18 -08:00
Elias Ellison
221edddd18 disallow shape analysis with resize ops (#17518)
Summary:
resize_ and resize_as_ resize the input tensor. Because our shape analysis
is flow-invariant, we don't do shape analysis on any op that relies on a Tensor that can alias a resized Tensor.

E.g. in the following graph, x may have been resized by the time `x += 10` runs:
```
@torch.jit.script
def test(x, y):
    for i in range(10):
        x += 10
        x.resize_as_(y)
    return x
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17518

Differential Revision: D14249835

Pulled By: eellison

fbshipit-source-id: f281b468ccb8c29eeb0f68ca5458cc7246a166d9
2019-02-27 19:02:09 -08:00
Lara Haidar
5f06dcc4d7 ONNX Export Adaptive Pooling
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17412

Differential Revision: D14247923

Pulled By: houseroad

fbshipit-source-id: 5530cea8f80da7368bff1e29cf89c45ad53accee
2019-02-27 14:57:54 -08:00
Lu Fang
63519df07a Bump up the ONNX default opset version to 10 (#17419)
Summary:
Align with ONNX master.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17419

Reviewed By: zrphercule

Differential Revision: D14197985

Pulled By: houseroad

fbshipit-source-id: 13fc1f7786aadbbf5fe83bddf488fee3dedf58ce
2019-02-26 12:37:27 -08:00
Lu Fang
b0c18570ca add the support for stable ONNX opsets in exporter (#16068)
Summary:
Still WIP; needs more tests and correct handling for opset 8 in the symbolics.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16068

Reviewed By: zrphercule

Differential Revision: D14185855

Pulled By: houseroad

fbshipit-source-id: 55200be810c88317c6e80a46bdbeb22e0b6e5f9e
2019-02-22 12:05:17 -08:00
Lara Haidar
b8d1f4a423 ONNX Export Maxpool Indices
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16455

Differential Revision: D14140375

Pulled By: houseroad

fbshipit-source-id: 12d02c447e7fe0fae49969d1daf40a87660ed416
2019-02-19 21:10:14 -08:00