Commit Graph

46 Commits

Author SHA1 Message Date
BowenBao
960513006f Support exporting squeeze & unsqueeze with negative dim attribute
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19297
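
As illustration only (module and filename are hypothetical, not from the PR), a minimal sketch of what this enables: exporting ops whose `dim` attribute is negative.

```python
import torch

class NegDim(torch.nn.Module):
    def forward(self, x):
        # dim=-1 is now normalized to a non-negative axis before the
        # exporter emits onnx::Unsqueeze / onnx::Squeeze
        return x.unsqueeze(-1).squeeze(-1)

torch.onnx.export(NegDim(), torch.randn(2, 3), "neg_dim.onnx")
```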

Reviewed By: zrphercule

Differential Revision: D14953525

Pulled By: houseroad

fbshipit-source-id: 8d7eecd2804b8e27d3ee4ad6e763352818d02d0c
2019-04-24 12:45:59 -07:00
Spandan Tiwari
a64cce326f Add constant folding to ONNX graph during export (Resubmission) (#18698)
Summary:
Rewritten version of https://github.com/pytorch/pytorch/pull/17771 using graph C++ APIs.

This PR adds the ability to do constant folding on ONNX graphs during PT->ONNX export. This is done mainly to optimize the graph and make it leaner. The two attached snapshots show a multiple-node LSTM model before and after constant folding.
A couple of notes:
1. Constant folding is by default turned off for now. The goal is to turn it on by default once we have validated it through all the tests.
2. Support for folding in nested blocks is not in place, but will be added in the future, if needed.

**Original Model:**
![multiple_lstm_original](https://user-images.githubusercontent.com/23646532/53987630-6ac53980-40d6-11e9-9702-1ccfee124a83.JPG)
**Constant-folded model:**
![multiple_lstm_constant_folded](https://user-images.githubusercontent.com/23646532/53987632-6c8efd00-40d6-11e9-81c5-362c16f68861.JPG)
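
A hedged usage sketch (model and filename are illustrative; this assumes the exporter flag is named `do_constant_folding`, which per note 1 defaults to off):

```python
import torch

model = torch.nn.LSTM(10, 20, num_layers=2)
x = torch.randn(5, 3, 10)

# With folding enabled, constant subexpressions in the exported graph
# are pre-evaluated, yielding the leaner model shown above.
torch.onnx.export(model, (x,), "lstm.onnx", do_constant_folding=True)
```
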
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18698

Differential Revision: D14889768

Pulled By: houseroad

fbshipit-source-id: b6616b1011de9668f7c4317c880cb8ad4c7b631a
2019-04-18 00:10:04 -07:00
Lu Fang
bd55abb463 Fix onnx ints (#19102)
Summary:
If JIT constant propagation doesn't work, we have to handle the prim::ListConstruct node in the symbolic.
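
A minimal sketch of the situation (module and filename are illustrative): the `int[]` argument is built from runtime values, so constant propagation cannot collapse it and the symbolic sees a `prim::ListConstruct` input.

```python
import torch

class ViewAsFlat(torch.nn.Module):
    def forward(self, x):
        # the int[] passed to view() depends on x.size(0) at trace time,
        # so it stays a prim::ListConstruct rather than a constant
        return x.view(x.size(0), -1)

torch.onnx.export(ViewAsFlat(), torch.randn(2, 3, 4), "view_flat.onnx")
```
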
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19102

Reviewed By: zrphercule

Differential Revision: D14875588

Pulled By: houseroad

fbshipit-source-id: d25c847d224d2d32db50aae1751100080e115022
2019-04-12 12:01:14 -07:00
Lu Fang
443a58e03d Export C10 operator in PyTorch Model (#18210)
Summary:
Almost there, feel free to review.

These C10 operators are exported to the _caffe2 domain (see the sketch after the TODO list).

TODO:

- [x] let the onnx checker pass
- [x] test tensor list as argument
- [x] test caffe2 backend and converter
- [x] check the c10 schema can be exported to onnx
- [x] refactor the test case to share some code
- [x] fix the problem in ONNX_ATEN_FALLBACK
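
A hedged export sketch (the module is illustrative; the enum is the existing `torch.onnx.OperatorExportTypes`): with the ATen fallback, operators lacking a native ONNX mapping are kept as custom-domain nodes instead of failing the export.

```python
import torch
from torch.onnx import OperatorExportTypes

class M(torch.nn.Module):
    def forward(self, x):
        return x.relu()

torch.onnx.export(M(), torch.randn(2, 3), "m.onnx",
                  operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK)
```
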
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18210

Reviewed By: zrphercule

Differential Revision: D14600916

Pulled By: houseroad

fbshipit-source-id: 2592a75f21098fb6ceb38c5d00ee40e9e01cd144
2019-04-08 16:06:00 -07:00
Lara
1ec1db477d ONNX Export All Cases of Softmax
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18482
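
A hedged illustration of one of the "all cases" in the title: softmax over a non-final dimension (module and filename are illustrative).

```python
import torch

m = torch.nn.Softmax(dim=0)  # not the last dimension
torch.onnx.export(m, torch.randn(3, 4), "softmax_dim0.onnx")
```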

Reviewed By: zrphercule

Differential Revision: D14630697

Pulled By: houseroad

fbshipit-source-id: c06f1e3bead10a265c5f4ac3723d49f4caf46801
2019-04-04 13:24:04 -07:00
Edward Yang
173f224570 Turn on F401: Unused import warning. (#18598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18598
ghimport-source-id: c74597e5e7437e94a43c163cee0639b20d0d0c6a

Stack from [ghstack](https://github.com/ezyang/ghstack):
* **#18598 Turn on F401: Unused import warning.**

This was requested by someone at Facebook; this lint is turned
on for Facebook by default.  "Sure, why not."

I had to noqa a number of imports in __init__.  Hypothetically
we're supposed to use __all__ in this case, but I was too lazy
to fix it.  Left for future work.

Be careful!  flake8-2 and flake8-3 behave differently with
respect to import resolution for # type: comments.  flake8-3 will
report an import unused; flake8-2 will not.  For now, I just
noqa'd all these sites.

All the changes were done by hand.
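
For example, a noqa'd import that is only used inside a `# type:` comment:

```python
from typing import List  # noqa: F401 -- used only in the type comment below

def total(xs):
    # type: (List[int]) -> int
    # flake8-3 reports this import as unused; flake8-2 does not
    return sum(xs)
```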

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Differential Revision: D14687478

fbshipit-source-id: 30d532381e914091aadfa0d2a5a89404819663e3
2019-03-30 09:01:17 -07:00
Spandan Tiwari
1240327c5c Refactoring serialization of ONNX initializers to be name-based (Resubmission) (#17830)
Summary:
houseroad - this is the resubmission of https://github.com/pytorch/pytorch/pull/17420, as suggested.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17830

Reviewed By: zrphercule

Differential Revision: D14398714

Pulled By: houseroad

fbshipit-source-id: bda475f1ae8a5273ebdb0f6883fc66036c29d326
2019-03-29 15:23:29 -07:00
Xiang Gao
bf2a30cb22 Support dim=None for argmax and argmin (#18264)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/18263
cc: houseroad
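
The resulting behavior, for illustration:

```python
import torch

x = torch.tensor([[1, 5], [7, 3]])
torch.argmax(x)         # dim=None: tensor(2), index into the flattened tensor
torch.argmax(x, dim=1)  # tensor([1, 0]), per-row indices
```
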
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18264

Reviewed By: ezyang

Differential Revision: D14559234

Pulled By: houseroad

fbshipit-source-id: c5b8623752d6c6af41c6d715fd9585a65294868d
2019-03-25 20:43:34 -07:00
Lara Haidar-Ahmad
001cffed9d ONNX Export IsNan op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17698
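
A minimal export sketch (module and filename are illustrative); `torch.isnan` should lower to the ONNX `IsNaN` op.

```python
import torch

class HasNaN(torch.nn.Module):
    def forward(self, x):
        return torch.isnan(x)

torch.onnx.export(HasNaN(), torch.tensor([0.0, float("nan")]), "isnan.onnx")
```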

Reviewed By: zrphercule

Differential Revision: D14470646

Pulled By: houseroad

fbshipit-source-id: d3e6adc83c4f9fa288c5fe0ae4c6af71fdd47905
2019-03-15 12:19:03 -07:00
Lara Haidar-Ahmad
3f94fc4862 ONNX Export for Max and Average Pooling in CEIL_MODE
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16769
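
An illustrative export of the newly supported case (module and filename are hypothetical):

```python
import torch

# ceil_mode=True rounds output spatial sizes up, which needs explicit
# handling when lowering to the ONNX pooling ops
m = torch.nn.MaxPool2d(kernel_size=3, stride=2, ceil_mode=True)
torch.onnx.export(m, torch.randn(1, 1, 7, 7), "pool_ceil.onnx")
```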

Differential Revision: D14362175

Pulled By: houseroad

fbshipit-source-id: 65cfb1dfba6a43d39cc85374add368fe8e4e5645
2019-03-07 10:10:21 -08:00
Lara Haidar
3dba1285ab ONNX Export Narrow op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17550

Differential Revision: D14350401

Pulled By: houseroad

fbshipit-source-id: 4d88079bb7a8bbd270b0272009826eb3b202cc33
2019-03-06 22:37:58 -08:00
Lara Haidar-Ahmad
073634612f ONNX Export Argmin and Argmax ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17382

Differential Revision: D14338811

Pulled By: houseroad

fbshipit-source-id: be07548d8063d1aa94f1801c18137738365b85fb
2019-03-06 12:11:47 -08:00
Spandan Tiwari
c658d9b21b Temporarily disable Upsample operator tests in pytorch-onnx tests (#17696)
Summary:
In discussion with houseroad: the Upsample op is being updated in ONNX (https://github.com/onnx/onnx/pull/1773) and these tests are blocking that change. The tests will be updated once the ONNX PR goes in.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17696

Differential Revision: D14338845

Pulled By: houseroad

fbshipit-source-id: cfaf8cf1ab578ae69dd3bf21b1c0681b572b9b6f
2019-03-06 11:25:34 -08:00
Lara Haidar
5f06dcc4d7 ONNX Export Adaptive Pooling
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17412
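
An illustrative export (module and filename are hypothetical): with a fixed input shape, adaptive pooling can be lowered to ordinary pooling whose kernel and stride are derived from the target output size.

```python
import torch

m = torch.nn.AdaptiveAvgPool2d((1, 1))
torch.onnx.export(m, torch.randn(1, 3, 8, 8), "adaptive_pool.onnx")
```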

Differential Revision: D14247923

Pulled By: houseroad

fbshipit-source-id: 5530cea8f80da7368bff1e29cf89c45ad53accee
2019-02-27 14:57:54 -08:00
BowenBao
2634e306e4 Extend support for exporting reshape to onnx. (#16971)
Summary:
Resolve issue with reshape_as test case.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16971

Differential Revision: D14098871

Pulled By: houseroad

fbshipit-source-id: ed6b966821462d374313256abbbe27f96ce11b2c
2019-02-15 00:17:05 -08:00
Dwarak Rajagopal
65d6f1014a Add support of count_include_pad and test end to end test for AveragePool (#17034)
Summary:
Adds support for count_include_pad and an end-to-end test for AveragePool.

We can export AveragePool from PyTorch with the count_include_pad attribute; however, we don't directly support it in Caffe2's ONNX backend.
We also want to check that the end-to-end test (PyTorch => ONNX => Caffe2) passes for the average pool operator with the count_include_pad attribute.
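
A sketch of the exercised configuration (module and filename are illustrative):

```python
import torch

# count_include_pad controls whether zero padding contributes to the
# average; the attribute must survive PyTorch => ONNX => Caffe2
m = torch.nn.AvgPool2d(3, stride=1, padding=1, count_include_pad=True)
torch.onnx.export(m, torch.randn(1, 1, 5, 5), "avgpool.onnx")
```
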
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17034

Reviewed By: houseroad

Differential Revision: D14060186

Pulled By: dwarakrajagopal

fbshipit-source-id: 10dae532611c71f8c8cfc3fa701cc7c1c1c02695
2019-02-14 11:48:42 -08:00
Lara Haidar-Ahmad
dff8165d04 ONNX Export Flatten operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16240

Reviewed By: bddppq

Differential Revision: D13800025

Pulled By: houseroad

fbshipit-source-id: ae4c5e42026477b28ffd44eda2438d93936ea510
2019-01-30 11:05:00 -08:00
bddppq
1a09a2a27f Export PyTorch erf to ONNX Erf and add Caffe2 Erf operator
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16106

Differential Revision: D13709490

Pulled By: bddppq

fbshipit-source-id: 1b5b32261f06543371f7bd7ac9b11957a5eb4ad0
2019-01-17 09:18:08 -08:00
zrphercule
3d44eeec0a Fix different types in rsub caused bug (#15707)
Summary:
Before this PR, rsub did not convert its two operands to the same dtype, so "1 - x" could be exported to an ONNX model in which the two operands of rsub have different dtypes.
This symbolic patch fixes the bug.
Related test cases have also been created.
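
An illustrative repro (module and filename are hypothetical):

```python
import torch

class Rsub(torch.nn.Module):
    def forward(self, x):
        # dispatches to rsub; the scalar 1 must be cast to x's dtype
        return 1 - x

# with a non-default dtype, the exported Sub used to receive operands
# of different types
torch.onnx.export(Rsub(), torch.randn(3, dtype=torch.double), "rsub.onnx")
```
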
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15707

Differential Revision: D13583042

Pulled By: zrphercule

fbshipit-source-id: 3a2de47a1a8d1ded1a0adfb911adbe6ac729cdef
2019-01-04 16:14:13 -08:00
Lu Fang
d63740bc3f Export group norm as ATen and add test (#15569)
Summary:
Short-term solution: export group norm as an ATen op to unblock users.
Long term, we will add GroupNorm to ONNX.

Adds an end-to-end test for this one.
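
An illustrative export (module and filename are hypothetical); as of this commit the graph carries a single ATen group_norm node rather than a decomposition.

```python
import torch

m = torch.nn.GroupNorm(num_groups=2, num_channels=4)
torch.onnx.export(m, torch.randn(1, 4, 8, 8), "group_norm.onnx")
```
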
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15569

Differential Revision: D13554293

Pulled By: houseroad

fbshipit-source-id: b4974c9ea2a1b81338ca1e5c6747efe2715d7932
2018-12-27 14:44:29 -08:00
Lu Fang
6fccca4278 improve ONNX tests on torch.Linear
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/14821

Reviewed By: zrphercule

Differential Revision: D13348773

Pulled By: houseroad

fbshipit-source-id: 611ca6e28f715e5518649c8c16f702ac3433308c
2018-12-05 17:07:10 -08:00
zrphercule
02d3787a19 Support new upsample in symbolic, caffe2 backend & caffe2 frontend (#13272)
Summary:
We updated the description of upsample_op in onnx: https://github.com/onnx/onnx/pull/1467
Therefore, we need to support the new upsample_op in the caffe2-onnx backend as well.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13272

Reviewed By: houseroad

Differential Revision: D12833656

Pulled By: zrphercule

fbshipit-source-id: 21af5282abaae12d2d044e4018a2b152aff79917
2018-11-05 19:13:57 -08:00
Wanchao Liang
af4a228426 Fix erase_number_type pass, negative indices in c2 and some onnx symbolics (#12888)
Summary:
The PR did two things:

1. fix the bug in erase_number_type on node inputs
2. handle negative indices for dim-reduce in caffe2
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12888

Reviewed By: houseroad

Differential Revision: D12833486

Pulled By: wanchaol

fbshipit-source-id: c3ceb400d91f0173b73ad95e392b010c3c14db7d
2018-11-05 19:13:49 -08:00
Wanchao Liang
f74fa91b8e Fix EraseListConstruct pass during ONNX export (#13195)
Summary:
There should really be a single place to erase or specially handle prim::ListConstruct during ONNX export; that makes the treatment consistent across different calls. For example, it gives a correct output graph in the following case:
```python
class Test(torch.nn.Module):
    def forward(self, input):
        return torch.cat([input, torch.zeros(input.size(0), 1).type_as(input)], dim=1)
```
Before this PR, we have the onnx graph as:

```
graph(%0 : Byte(2, 3)) {
  %1 : Long() = onnx::Constant[value={0}](), scope: Test
  %2 : Dynamic = onnx::Shape(%0), scope: Test
  %3 : Long() = onnx::Gather[axis=0](%2, %1), scope: Test
  %4 : Long() = onnx::Constant[value={1}](), scope: Test
  %5 : Dynamic = onnx::Unsqueeze[axes=[0]](%3)
  %6 : Dynamic = onnx::Unsqueeze[axes=[0]](%4)
  %7 : int[] = onnx::Concat[axis=0](%5, %6)
  %8 : Float(2, 1) = onnx::ConstantFill[dtype=1, input_as_shape=1, value=0](%7), scope: Test
  %9 : Byte(2, 1) = onnx::Cast[to=2](%8), scope: Test
  %10 : Byte(2, 4) = onnx::Concat[axis=1](%0, %9), scope: Test
  return (%10);
}

```
This is wrong, since ONNX does not have a concept of `int[]`. Here is the ONNX graph after this PR:
```
graph(%0 : Byte(2, 3)) {
  %1 : Long() = onnx::Constant[value={0}](), scope: Test
  %2 : Dynamic = onnx::Shape(%0), scope: Test
  %3 : Long() = onnx::Gather[axis=0](%2, %1), scope: Test
  %4 : Long() = onnx::Constant[value={1}](), scope: Test
  %5 : Dynamic = onnx::Unsqueeze[axes=[0]](%3)
  %6 : Dynamic = onnx::Unsqueeze[axes=[0]](%4)
  %7 : Dynamic = onnx::Concat[axis=0](%5, %6)
  %8 : Float(2, 1) = onnx::ConstantFill[dtype=1, input_as_shape=1, value=0](%7), scope: Test
  %9 : Byte(2, 1) = onnx::Cast[to=2](%8), scope: Test
  %10 : Byte(2, 4) = onnx::Concat[axis=1](%0, %9), scope: Test
  return (%10);
}
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13195

Differential Revision: D12812541

Pulled By: wanchaol

fbshipit-source-id: db6be8bf0cdc85c426d5cbe09a28c5e5d860eb3e
2018-11-02 15:09:06 -07:00
zrphercule
c6defa0847 Add randn in onnx symbolic (#12880)
Summary:
This PR adds the randn operator to the ONNX symbolics. Related tests are also added.
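
An illustrative export (module and filename are hypothetical); `torch.randn` with static sizes presumably maps to the ONNX `RandomNormal` op.

```python
import torch

class WithNoise(torch.nn.Module):
    def forward(self, x):
        return x + torch.randn(2, 3)

torch.onnx.export(WithNoise(), torch.zeros(2, 3), "randn.onnx")
```
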
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12880

Reviewed By: houseroad

Differential Revision: D10501788

Pulled By: zrphercule

fbshipit-source-id: ba8bb00ca848c4b95decabf638a1bc13fe11d03e
2018-10-25 14:11:23 -07:00
Lu Fang
058c1284be Fix the symbolic for pixel shuffle (#12192)
Summary:
Uses Transpose + Reshape rather than DepthToSpace, since the latter is not available in Caffe2 yet.
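
For reference, the Transpose + Reshape lowering mirrors the standard pixel-shuffle decomposition (a runnable sketch, not the exporter code itself):

```python
import torch

def pixel_shuffle_ref(x, r):
    n, c, h, w = x.shape
    x = x.reshape(n, c // (r * r), r, r, h, w)
    x = x.permute(0, 1, 4, 2, 5, 3)                  # the Transpose step
    return x.reshape(n, c // (r * r), h * r, w * r)  # the Reshape step

x = torch.randn(1, 8, 2, 3)
assert torch.allclose(pixel_shuffle_ref(x, 2), torch.pixel_shuffle(x, 2))
```
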
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12192

Reviewed By: BIT-silence

Differential Revision: D10129913

Pulled By: houseroad

fbshipit-source-id: b60ee6d53b8ee95fd22f12e628709b951a83fab6
2018-10-15 19:53:35 -07:00
ArmenAg
d5eae90537 update onnx tests (#12619)
Summary:
Fixes #12586
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12619

Reviewed By: ezyang

Differential Revision: D10377548

Pulled By: houseroad

fbshipit-source-id: 1166e40aa8b98f1fe015fb1bdb2e90acfad3c356
2018-10-15 11:59:19 -07:00
James Reed
1f94ce1f97 Fix aten::to export in ONNX
Summary: D10356994 broke ONNX export for casting; this fixes it.

Reviewed By: wanchaol

Differential Revision: D10366103

Pulled By: jamesr66a

fbshipit-source-id: 039454cce571a1186265708e7ddcb946814cc8b0
2018-10-12 21:20:01 -07:00
Lu Fang
c1d0784dcb enable onnx integration tests
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/12592

Reviewed By: BIT-silence, zrphercule

Differential Revision: D10363056

Pulled By: houseroad

fbshipit-source-id: 4d1dc0302a8cbe3d6ff1594f0d038330ba4efc81
2018-10-12 11:34:16 -07:00
James Reed
c3987a0fc3 Fix issues with ATenOp handling methods where self is not the first arg (#12353)
Summary:
ATenOp was handling `torch.where` incorrectly. Whereas the `torch.where` overload (and `aten::` function) had arguments in the order `Tensor condition, Tensor self, Tensor other`, ATenOp was emitting code that assumed that `self` was the 0th argument, and thus was trying to interpret the wrong value as the condition.
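
For reference, the actual argument order:

```python
import torch

cond = torch.tensor([True, False])
a = torch.tensor([1.0, 2.0])
b = torch.tensor([10.0, 20.0])

# signature is (condition, self, other): `self` is the second argument,
# which is exactly what ATenOp's 0th-argument assumption got wrong
torch.where(cond, a, b)  # tensor([ 1., 20.])
```
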
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12353

Differential Revision: D10218435

Pulled By: jamesr66a

fbshipit-source-id: afe31c5d4f941e5fa500e6b0ef941346659c8d95
2018-10-08 15:25:39 -07:00
Wanchao Liang
3db9738b30 add torch factory methods (zeros/ones) to onnx symbolic
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11477

Differential Revision: D9761637

Pulled By: wanchaol

fbshipit-source-id: 401f8d43a831685a444e88509bace94ce5b94e52
2018-10-03 13:55:54 -07:00
James Reed
0f1ca569ce End-to-end dynamic slicing with ONNX DynamicSlice experimental operator (#11255)
Summary:
Requires https://github.com/onnx/onnx/pull/1377

This PR makes it so that slices with dynamic boundary values can be exported from PyTorch and run in Caffe2 via ONNX.
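
A sketch of the kind of slice this covers (function name is illustrative): the boundary is a runtime value rather than a trace-time constant.

```python
import torch

@torch.jit.script
def dyn_slice(x, n: int):
    # n is only known at runtime, so the exported slice must take its
    # boundary dynamically
    return x[:n]
```
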
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11255

Differential Revision: D9790216

Pulled By: jamesr66a

fbshipit-source-id: 6adfcddc5788df4d34d7ca98341077140402a3e2
2018-09-13 12:39:52 -07:00
Adam Paszke
3e665cc29b Improve support for tracing sizes, add more tracer warnings (#11288)
Summary:
Many constructors like `torch.zeros` or `torch.randn` didn't support
size tracing correctly, which is fixed by this pass. The same issue has
been fixed in the legacy tensor constructors.

Additionally, new tensor constructors, which do not participate in
tracing (most notably `torch.tensor`, `torch.as_tensor` and
`torch.from_numpy`) raise a warning when they are used.
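
For illustration (function name is hypothetical):

```python
import torch

def f(x):
    # torch.tensor does not participate in tracing, so its value is
    # baked into the graph as a constant; the tracer now warns here
    return x + torch.tensor([1.0])

traced = torch.jit.trace(f, torch.zeros(1))  # emits a TracerWarning
```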

Finally, entering a traceable operation disables the tracing in its body.
This is needed because

zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11288

Reviewed By: ezyang

Differential Revision: D9751183

Pulled By: apaszke

fbshipit-source-id: 51444a39d76a3e164adc396c432fd5ee3c8d5f7f
2018-09-10 15:22:48 -07:00
Lu Fang
f866574afc Fix the batchnorm onnx exporting when affine=False
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/11249
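
An illustrative repro (module and filename are hypothetical): with `affine=False` the module has no weight/bias, yet ONNX `BatchNormalization` still expects scale and shift inputs, so the exporter must supply them.

```python
import torch

m = torch.nn.BatchNorm2d(3, affine=False).eval()
torch.onnx.export(m, torch.randn(1, 3, 4, 4), "bn_no_affine.onnx")
```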

Reviewed By: Ac2zoom

Differential Revision: D9652526

Pulled By: houseroad

fbshipit-source-id: 12a9038beddd227a2f9e2178edf4e8d623488c3e
2018-09-05 11:10:25 -07:00
Adam Paszke
f3c3127c67 Don't flatten output lists in the JIT IR (#10949)
Summary:
Operators like aten::chunk used to return a number of tensors, but
now return a list. To make it easier to do shape prop through
aten::chunk and fuse it, I've also introduced prim::ConstantChunk,
which behaves like the previous implementation (has a variable length
output list).

The downside of this PR is that the introduction of more lists to the IR causes the LSTM and MiLSTM graphs to be considered non-differentiable by the graph executor. I verified that they are still optimized correctly, and my next patch (which changes how specialization/differentiation works) will restore those.
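
A small example of the op in question (function name is illustrative):

```python
import torch

@torch.jit.script
def halves(x):
    # chunk now yields a list in the IR; with a static chunk count it is
    # lowered to prim::ConstantChunk, which has a fixed-length output
    a, b = torch.chunk(x, 2, dim=0)
    return a + b
```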

zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10949

Reviewed By: zdevito

Differential Revision: D9556823

Pulled By: apaszke

fbshipit-source-id: 33e63b17fc7247cac6cfc05eb7eb9bf069b499ee
2018-08-30 19:54:39 -07:00
Lu Fang
562fc7631f Add test cases for ONNX unsqueeze (#10924)
Summary:
PyTorch exporting tests and end-to-end cases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10924

Reviewed By: Ac2zoom

Differential Revision: D9548210

Pulled By: houseroad

fbshipit-source-id: 2381d1ad92a4e07f97060eb65c9fd09f60ad3de6
2018-08-29 11:10:21 -07:00
James Reed
db0abe1890 Fix bugs in handling of negative slice + gather indices (#10973)
Summary:
This fixes multiple bugs in the handling of negative indices in both slicing and gather operations. These were uncovered by Elias Ellison's diff D9493614, which made it so that we actually emit negative indices when we see them in PyTorch code.
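
The affected patterns, for illustration:

```python
import torch

x = torch.arange(5)
x[-2:]   # tensor([3, 4]) -- slice with a negative start
x[-1]    # tensor(4)      -- select/gather with a negative index
```
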
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10973

Reviewed By: jhcross

Differential Revision: D9546183

Pulled By: jamesr66a

fbshipit-source-id: 6cb0e84e8ad399e47e24a96c44025f644c17b375
2018-08-28 23:40:40 -07:00
James Reed
ddf187c198 Dont assume serialized integral types were widened to int32 in raw_data (#10718)
Summary:
zdevito et al. came to the conclusion that the ONNX spec does not mandate the widening conversion of integral types when serializing tensor data into raw_data, as opposed to serializing the data into int32_data. PyTorch recently made this change in its export code, which caused import in Caffe2 to break because the semantics did not match. This fixes that.
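
A sketch of the two serialization layouts being contrasted, using the onnx Python helpers (tensor names are illustrative):

```python
import numpy as np
from onnx import TensorProto, helper, numpy_helper

# int32_data path: small integral values are stored widened in an
# int32 field
t1 = helper.make_tensor("a", TensorProto.INT8, dims=[2], vals=[1, 2])

# raw_data path: bytes are stored at native width (1 byte per int8),
# which importers must not assume was widened
t2 = numpy_helper.from_array(np.array([1, 2], dtype=np.int8), name="b")
assert len(t2.raw_data) == 2
```
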
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10718

Differential Revision: D9423712

Pulled By: jamesr66a

fbshipit-source-id: 479fbae67b028bf4f9c1ca1812c2c7b0c6cccd12
2018-08-21 18:41:31 -07:00
Jason Gauci
b4684db698 Add support for Log()
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/10694

Reviewed By: houseroad

Differential Revision: D9405612

Pulled By: MisterTea

fbshipit-source-id: 6d83d3c2db933a3822076c7faf578ac0e92e60c6
2018-08-20 13:25:21 -07:00
Xiang Gao
83066e9b30 Add trigonometry functions for ONNX export (#7540)
Summary:
Trigonometric functions were recently added to ONNX in https://github.com/onnx/onnx/pull/869

This PR makes PyTorch support exporting graphs with trigonometric functions.

This PR might need to wait until we are ready to change
```python
_onnx_opset_version = 6
```
to
```python
_onnx_opset_version = 7
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/7540

Differential Revision: D9395041

Pulled By: bddppq

fbshipit-source-id: bdf3e9d212b911c8c4eacf5a0753bb092e4748d2
2018-08-19 23:01:28 -07:00
James Reed
0f05f5fb07 ATen layer norm symbolic (#10513)
Summary:
We can't rely on the ATen fallback pathway here because we need to parse out the constant attributes explicitly
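
An illustrative export (module and filename are hypothetical); `normalized_shape` and `eps` arrive as constants that the dedicated symbolic reads directly rather than via the generic ATen fallback.

```python
import torch

m = torch.nn.LayerNorm(4)
torch.onnx.export(m, torch.randn(2, 4), "layer_norm.onnx")
```
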
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10513

Reviewed By: dzhulgakov

Differential Revision: D9322133

Pulled By: jamesr66a

fbshipit-source-id: 52af947e6c44532ef220cb4b94838ca838b5df06
2018-08-15 08:28:52 -07:00
Junjie Bai
ba5d33bede Re-Enable ATen in C2 in integration builds to test ONNX ATen conversions
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/10060

Differential Revision: D9081387

Pulled By: bddppq

fbshipit-source-id: 13cbff63df5241e013d4ebacfcd6da082e7196f6
2018-07-31 15:27:05 -07:00
Junjie Bai
cba03e2ebe Handle dynamic repeats in onnx symbolic (#10052)
Summary:
ONNX Tile can take `repeats` as a dynamic input
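
For reference, `repeats` is a second *input* (a 1-D INT64 tensor) rather than an attribute, so it may be computed at runtime (a minimal sketch with the onnx helpers):

```python
from onnx import helper

node = helper.make_node("Tile", inputs=["x", "repeats"], outputs=["y"])
```
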
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10052

Differential Revision: D9076841

Pulled By: bddppq

fbshipit-source-id: ddd692c5f5846c8fdba019baa9fad83ef9638da4
2018-07-31 10:39:50 -07:00
Gregory Chanan
6fb9acfc16 Revert empty n-dim and ATen in C2 integration builds
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/10064

Differential Revision: D9082082

Pulled By: gchanan

fbshipit-source-id: ae49470f5b4c89b13beb55fd825de1ba05b6a4fa
2018-07-31 07:25:56 -07:00
Junjie Bai
57750bd638 Enable ATen in C2 in integration builds to test ONNX ATen conversions (#10014)
Summary:
zrphercule
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10014

Reviewed By: houseroad

Differential Revision: D9061842

Pulled By: bddppq

fbshipit-source-id: 1e1c2aeae62dd2cc5c6a8d5e1d395ea5cf882734
2018-07-30 15:01:13 -07:00
Junjie Bai
4a192bcc3d Rename onnx integration tests file to avoid confusion
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/9913

Differential Revision: D9026787

Pulled By: bddppq

fbshipit-source-id: a3e7e79973abc4f5fe163f3e86b24382a1efd082
2018-07-26 23:40:41 -07:00