Commit Graph

2352 Commits

Xiaomeng Yang
54b33503ec Optimize channel_stats_op (#16243)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16243

Optimize channel_stats_op and add NHWC impl

Reviewed By: takatosp1

Differential Revision: D13775515

fbshipit-source-id: decb889e646f5316d4afefdf9f9b6bc6343613cd
2019-03-12 12:08:00 -07:00
Hector Yuen
5bf9e41938 move half<->float conversions to oss operators (#17548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17548

expose half float operators to OSS

common/math/Float16.h is the original implementation; it is substituted by caffe2/c10/util/Half.h.

From the comments, it seems like neither implementation handles denormals.
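
For reference, a minimal round-trip sketch through the exposed conversions; the `FloatToHalf`/`HalfToFloat` operator names are assumed from the caffe2 OSS operator registry:

```python
# Hedged sketch: round-trip a blob through fp16 using the OSS operators.
import numpy as np
from caffe2.python import core, workspace

workspace.FeedBlob('x', np.random.randn(4).astype(np.float32))
workspace.RunOperatorOnce(core.CreateOperator('FloatToHalf', ['x'], ['x_h']))
workspace.RunOperatorOnce(core.CreateOperator('HalfToFloat', ['x_h'], ['x_f']))
print(workspace.FetchBlob('x_f'))  # original values, with fp16 rounding
```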

Reviewed By: jspark1105

Differential Revision: D14244200

fbshipit-source-id: f90ba28c5bf6a2b451b429cc4925b8cc376ac651
2019-03-07 13:00:13 -08:00
Ahmed Aly
f8778aef78 Implement a Caffe2 standalone LSTM operator (#17726)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17726

Pull Request resolved: https://github.com/pytorch/pytorch/pull/17725

Pull Request resolved: https://github.com/pytorch/pytorch/pull/17461

Implementing a standalone LSTM operator in Caffe2, adopted from the ATen implementation in diffusion/FBS/browse/master/fbcode/caffe2/aten/src/ATen/native/RNN.cpp. The trickiest part of this exercise was that caffe2::Tensor has no copy constructor, which made it necessary to implement a custom templated copy constructor for the different tensor containers used in the code. Also, there was no easy way to use off-the-shelf C2 operators in this code, so I had to copy some code that does basic matmul, cat, split, transpose, and linear as utility functions.

Two things missing:

- Profiling this implementation against the current ONNXified LSTM op
- Make this operator available to use in PyTorch

Reviewed By: dzhulgakov

Differential Revision: D14351575

fbshipit-source-id: 3b99b53212cf593c7a49e45580b5a07b90809e64
2019-03-07 01:08:49 -08:00
youkaichao
b87abdfc12 typo fix
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17653

Differential Revision: D14302003

Pulled By: ezyang

fbshipit-source-id: 8ad90985a392b07127c7e315d4e74ce77962b573
2019-03-06 11:36:44 -08:00
Deepali Chourasia
e3516d0a95 omit group conv NHWC test for GPU (#17715)
Summary:
The test `TestGroupConvolution.test_group_convolution` was observed to fail with the following error:

```
Falsifying example: test_group_convolution(self=<caffe2.python.operator_test.group_conv_test.TestGroupConvolution testMethod=test_group_convolution>, stride=3, pad=0, kernel=5, size=8, group=4, input_channels_per_group=7, output_channels_per_group=8, batch_size=2, order='NHWC', engine='', use_bias=False, gc=, dc=[, device_type: 1])

You can reproduce this example by temporarily adding reproduce_failure('3.59.1', b'AAAA') as a decorator on your test case
```
The example generated by hypothesis has `group=2`, `order='NHWC'`, and a `dc` that includes `device_type: 1` (GPU).
I think this example should be skipped.

I have mimicked the change corresponding to [PR#13554](https://github.com/pytorch/pytorch/pull/13554) to skip this example.
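
A hedged sketch of the kind of guard being mimicked (the helper name is hypothetical; the real change lives inside the hypothesis test):

```python
# Hypothetical predicate mirroring the skip: drop examples that combine
# grouped convolution, NHWC order, and a GPU device option.
from caffe2.proto import caffe2_pb2

def should_skip(group, order, device_options):
    on_gpu = any(d.device_type == caffe2_pb2.CUDA for d in device_options)
    return group > 1 and order == 'NHWC' and on_gpu
```
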
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17715

Differential Revision: D14346642

Pulled By: ezyang

fbshipit-source-id: b1f1fef09f625fdb43d31c7213854e61a96381ba
2019-03-06 11:32:35 -08:00
Edward Yang
1e6acc676f Replace caffe2::DeviceGuard with c10::cuda::CUDAGuard (#17623)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17623

Despite its generic-sounding name, caffe2::DeviceGuard actually
only worked on CUDA devices. Rename it to something that more
clearly spells out its applicability.

I'm not sure if it's the right call, but in this patch I added
`using CUDAGuard = c10::cuda::CUDAGuard`, as this seems to be more
in line with how the Caffe2 codebase is currently written. More
idiomatic c10 namespace style would be to say cuda::CUDAGuard.
Willing to change this if people shout.

This is a respin of D13156470 (#14284)

Reviewed By: dzhulgakov

Differential Revision: D14285504

fbshipit-source-id: 93b8ab938b064572b3b010c307e1261fde0fff3d
2019-03-06 10:48:15 -08:00
Soumith Chintala
507c93bad2 Revert D14160172: Implement a Caffe2 standalone LSTM operator
Differential Revision:
D14160172

Original commit changeset: c33e3f9e8aea

fbshipit-source-id: cffe35d93f0ac75ca93aa98a3b82af3d372f2fc1
2019-03-06 08:44:25 -08:00
Ahmed Aly
bfe7a58f69 Implement a Caffe2 standalone LSTM operator (#17461)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17461

Implementing a standalone LSTM operator in Caffe2, adopted from the ATen implementation in diffusion/FBS/browse/master/fbcode/caffe2/aten/src/ATen/native/RNN.cpp. The trickiest part of this exercise was that caffe2::Tensor has no copy constructor, which made it necessary to implement a custom templated copy constructor for the different tensor containers used in the code. Also, there was no easy way to use off-the-shelf C2 operators in this code, so I had to copy some code that does basic matmul, cat, split, transpose, and linear as utility functions.

Two things missing:

- Profiling this implementation against the current ONNXified LSTM op
- Make this operator available to use in PyTorch

Reviewed By: dzhulgakov

Differential Revision: D14160172

fbshipit-source-id: c33e3f9e8aeae578b64d97593cb031a251216029
2019-03-05 17:34:44 -08:00
Sebastian Messmer
910519e45b Expose cuda kernel for caffe2::GenerateProposals
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17066

Reviewed By: ezyang, wat3rBro

Differential Revision: D14071130

fbshipit-source-id: 6fe26503f6069c36ec31d6c09b549b932d5db242
2019-03-04 14:59:08 -08:00
Dmytro Dzhulgakov
dec116e96f PyTorch/Caffe2 tensor interop in Python (#17190)
Summary:
Because of two separate Python extensions with different pybind
instances, I have to go through a void* conversion. Since it's hidden
from the user, it's fine.

New APIs added on C2 side:
- workspace.FetchTorch('blob')
- workspace.Workspace.current.blobs['blob'].to_torch()
- workspace.FeedBlob('blob', pytorch_tensor)

Works on CPU and GPU.

The only glitches are with resizing, because of the variable/tensor split, but data sharing works properly.
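
A short usage sketch of the APIs listed above (behavior as described in this summary):

```python
# Sketch: move data between Caffe2 blobs and PyTorch tensors.
import numpy as np
import torch
from caffe2.python import workspace

workspace.FeedBlob('x', np.ones((2, 3), dtype=np.float32))
t1 = workspace.FetchTorch('x')                          # blob -> torch.Tensor
t2 = workspace.Workspace.current.blobs['x'].to_torch()  # same, via blob handle
workspace.FeedBlob('y', torch.zeros(4))                 # torch.Tensor -> blob
```
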
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17190

Reviewed By: ezyang

Differential Revision: D14163882

Pulled By: dzhulgakov

fbshipit-source-id: d18e5b8fcae026f393c842a1149e972515732de2
2019-03-04 11:34:01 -08:00
Martin Schatz
5b835682e3 Remove GPU dependency from ProfileObserver (#17592)
Summary:
Remove GPU dependency and register ProfileObserver.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17592

Reviewed By: ezyang

Differential Revision: D14265801

Pulled By: mdschatz

fbshipit-source-id: f98c0c32653c64a8b087c58ece4f864dfbe1d4b8
2019-03-04 10:00:46 -08:00
peter
698f947463 Revert #17191 and #17215 that no longer apply on Windows (#17567)
Summary:
They were previously merged to resolve #17051. However, since that issue was resolved upstream, and the changes were causing issues like https://github.com/abjer/tsds/issues/8, I think it's time to revert them.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17567

Differential Revision: D14265241

Pulled By: kostmo

fbshipit-source-id: 7fa2b7dd4ebc5148681acb439cf82d983898694e
2019-03-01 10:37:27 -08:00
Huan Gui
d3fcd0d798 add dropout during eval (#17549)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17549

Currently dropout is only enabled during training; this adds the option of applying dropout during eval as well.

This is to follow [1]. This functionality would be used for uncertainty estimation in the exploration project.

[1] Gal, Yarin, and Zoubin Ghahramani. "Dropout as a bayesian approximation: Representing model uncertainty in deep learning." international conference on machine learning. 2016.
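
For context, an illustrative PyTorch sketch of the eval-time dropout ("MC dropout") pattern from [1]; this is not the diff's Caffe2 change, just the technique it enables:

```python
# Keep dropout stochastic at eval time and average multiple passes to
# estimate predictive uncertainty (Gal & Ghahramani style).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(32, 1))
model.eval()
for m in model.modules():
    if isinstance(m, nn.Dropout):
        m.train()  # re-enable dropout layers only

x = torch.randn(8, 16)
samples = torch.stack([model(x) for _ in range(20)])  # 20 stochastic passes
mean, var = samples.mean(dim=0), samples.var(dim=0)   # prediction and uncertainty
```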

Reviewed By: Wakeupbuddy

Differential Revision: D14216216

fbshipit-source-id: 87c8c9cc522a82df467b685805f0775c86923d8b
2019-02-28 23:21:29 -08:00
rohithkrn
8c72217817 Enable boolean_mask, adadelta, adagrad fp16 on ROCm (#17235)
Summary:
- Fix bugs and indentation in the adadelta and adagrad tests to enable fp16
- Enable boolean_mask fp16 on ROCm
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17235

Differential Revision: D14240828

Pulled By: bddppq

fbshipit-source-id: ab6e8f38aa7afb83b4b879f2f4cf2277c643198f
2019-02-27 10:07:36 -08:00
Lu Fang
29c27d7b99 Automatic update of fbcode/onnx to e18bb41d255a23daf368ffd62a2645db55db4c72 (#17460)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17460

Previous import was 4c091e048ca42682d63ccd3c1811560bc12b732d

Included changes:
- **[e18bb41](https://github.com/onnx/onnx/commit/e18bb41)**: Infer shape of the second output of Dropout op (#1822) <Shinichiro Hamaji>
- **[cb544d0](https://github.com/onnx/onnx/commit/cb544d0)**: Clarify dtype of Dropout's mask output (#1826) <Shinichiro Hamaji>
- **[b60f693](https://github.com/onnx/onnx/commit/b60f693)**: Fix shape inference when auto_pad  is notset (#1824) <Li-Wen Chang>
- **[80346bd](https://github.com/onnx/onnx/commit/80346bd)**: update test datat (#1825) <Rui Zhu>
- **[b37fc6d](https://github.com/onnx/onnx/commit/b37fc6d)**: Add stringnormalizer operator to ONNX (#1745) <Dmitri Smirnov>

Reviewed By: zrphercule

Differential Revision: D14206264

fbshipit-source-id: 0575fa3374ff2b93b2ecee9989cfa4793c599117
2019-02-25 11:09:08 -08:00
Tongliang Liao
65ecef1509 Export ElementwiseLinear to ONNX (Mul + Add). (#17411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17411

Reshape-based approach to support dynamic shapes.
The first Reshape flattens the inner dimensions and the second one recovers the actual shape.
No Shape/Reshape will be generated unless necessary.

![image](https://user-images.githubusercontent.com/5203025/52215001-114ace80-28ce-11e9-815f-28ad190d3189.png)
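
A numpy illustration of the reshape sandwich (axis 1 and the shapes are assumed for the example):

```python
# ElementwiseLinear(X, w, b) along axis 1, expressed as
# Reshape (flatten inner) -> Mul -> Add -> Reshape (recover shape).
import numpy as np

X = np.random.randn(2, 3, 4, 5)
w, b = np.random.randn(3), np.random.randn(3)

X2 = X.reshape(X.shape[0], X.shape[1], -1)     # flatten inner dimensions
Y2 = X2 * w[None, :, None] + b[None, :, None]  # Mul + Add with broadcasting
Y = Y2.reshape(X.shape)                        # recover the actual shape
```
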
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16716

Reviewed By: zrphercule

Differential Revision: D14094532

Pulled By: houseroad

fbshipit-source-id: bad6a1fbf5963ef3dd034ef4bf440f5a5d6980bc
2019-02-25 08:11:13 -08:00
Junjie Bai
d92ddcf7ca Skip convnets benchmark in rocm CI (#17331)
Summary:
Random coredumps.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17331

Differential Revision: D14162018

Pulled By: bddppq

fbshipit-source-id: 3ed15a79b7bca2498c50f6af80cbd6be7229dea8
2019-02-20 21:12:24 -08:00
Cheng,Penghui
376bb40379 Implementation convolutionTranspose operator for mkl-dnn (#12866)
Summary:
The speed-up of a single operation is up to 2-3X on BDW.
This PR depends on https://github.com/pytorch/pytorch/pull/14308
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12866

Differential Revision: D13936110

Pulled By: ezyang

fbshipit-source-id: 34e3c2ca982a41e8bf556e2aa0477c999fc939d3
2019-02-20 17:26:10 -08:00
Cheng,Penghui
c02e2ff0b0 Support multi-device configuration for MKL-DNN (#12856)
Summary:
MKL-DNN supports multi-node mode but not multi-device mode; this commit adds multi-device support for MKL-DNN. This commit depends on https://github.com/pytorch/pytorch/pull/11330
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12856

Differential Revision: D13735075

Pulled By: ezyang

fbshipit-source-id: b63f92b7c792051f5cb22e3dda948013676e109b
2019-02-20 16:57:43 -08:00
Peizhao Zhang
54e4c4d7de Removed obsolete argument correct_transform_coords in bbox_transform op. (#16723)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16723

Removed obsolete argument correct_transform_coords in bbox_transform op.
* It was only for backward compatibility. We should not have models using it now.

Differential Revision: D13937430

fbshipit-source-id: 504bb066137ce408c12dc9dcc2e0a513bad9b7ee
2019-02-20 13:22:33 -08:00
Hector Yuen
075c7b1fef make the threshold for accuracy more precise (#17194)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17194

We found that there is a per-row absolute error due to int8 quantization,
and a table-wide relative error when fp16 is used.

Reviewed By: csummersea

Differential Revision: D14113353

fbshipit-source-id: c7065aa9d15c453c2e5609f421ad0155145af889
2019-02-20 13:14:11 -08:00
peter
972fc5f191 Fix dll loading issue for Caffe2 and Windows
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17215

Reviewed By: orionr

Differential Revision: D14138445

Pulled By: kostmo

fbshipit-source-id: 0bb4f2f1ed5bda7416ba7e4c6b0618414b328934
2019-02-19 17:04:06 -08:00
Junjie Bai
bf16a6bc3c Skip onnx logsoftmax tests in rocm (#17170)
Summary:
Similar to softmax, there are issues with randomly getting NaNs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17170

Differential Revision: D14110515

Pulled By: bddppq

fbshipit-source-id: 5c97661184d45a02122fd69d35a839fdf4520c8c
2019-02-16 18:06:04 -08:00
Hector Yuen
cde7204636 change the epsilon for fp32/fp16 to uint8 to be the same (#17062)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17062

From jiyan's training jobs, it seems we found a quantization bug:

- fp32
- fp32 -> rowwise int8: fine
- fp16: fine
- fp16 -> rowwise int8: not fine

We are preconverting everything to fp32 and using the existing code, so there is no need to change the epsilon in the case of fp16, since at the time of converting everything is a float.
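
A hedged numpy sketch of the shape of this logic; the epsilon guard and scaling are illustrative, not the exact kernels:

```python
# Rowwise uint8 quantization with fp16 inputs preconverted to fp32,
# so a single float epsilon suffices.
import numpy as np

def rowwise_uint8_quantize(w, eps=1e-8):
    w = w.astype(np.float32)                 # fp16 comes in preconverted
    lo = w.min(axis=1, keepdims=True)
    scale = (w.max(axis=1, keepdims=True) - lo) / 255.0
    scale = np.maximum(scale, eps)           # guard constant rows
    return np.round((w - lo) / scale).astype(np.uint8), scale, lo

q, scale, lo = rowwise_uint8_quantize(np.random.randn(3, 8).astype(np.float16))
```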

Reviewed By: jspark1105

Differential Revision: D14063271

fbshipit-source-id: 747297d64ed8c6fdf4be5bb10ac584e1d21a85e6
2019-02-15 18:33:37 -08:00
Yinghai Lu
70ee257ad4 Fix batch insert (#17158)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17158

Because of the Reshape op, the batch size can change. This diff addresses the first-order issue raised by a multiple-batch-size system: we need to export a different real_batch_size for each max_batch_size input and attach it to the right output.

It also fixes a false exception.

Reviewed By: ipiszy

Differential Revision: D14099541

fbshipit-source-id: 0fa9e86826f417a11d2b5dd2ee60dff64a7ce8c4
2019-02-15 12:28:23 -08:00
Yinghai Lu
58648a19df Create BackendTransformerBase to host common functions used for backend lowering (#17074)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17074

There is some common functionality in backend lowering. This diff creates a base class that hosts it.

Reviewed By: ipiszy

Differential Revision: D14073192

fbshipit-source-id: 9617603d0e73db6f7fcc5572756b9dbab506dae5
2019-02-14 17:57:03 -08:00
Yinghai Lu
b515ebc6f1 Remove fake inference for shape info in ONNXIFI transform (#17046)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17046

As we are moving to use bound shape inference, we can remove the awkward fake inference run path and make the code cleaner.

Reviewed By: ipiszy

Differential Revision: D14061501

fbshipit-source-id: b3ace98b3dabef3c3359086a0bb1410518cefa26
2019-02-14 15:12:20 -08:00
Michael Liu
92a516b9ff Apply modernize-use-override - 2/2
Summary:
Use C++11’s override and remove virtual where applicable.
Changes are automatically generated.

Reviewed By: Orvid

Differential Revision: D14054721

fbshipit-source-id: 15d266fa1779b1e3ea6270f00841d7fb1e4d44ee
2019-02-13 21:01:28 -08:00
Tongliang Liao
a670824fee Support FC (Caffe2) -> Gemm (ONNX) with variable input shape. (#16184)
Summary:
For >2D input, the code previously used a static shape captured during tracing and reshaped before/after `Gemm`.
Now we add `-1` to the first `Reshape`, and use `Shape(X) => Slice(outer) => Concat(with -1 for inner) => Reshape` for the second.
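
A numpy sketch of the dynamic-shape pattern (Caffe2's (M, K) weight layout is assumed):

```python
# Flatten with -1 for Gemm, then rebuild the output shape from Shape(X)'s
# outer dims concatenated with -1, instead of a statically captured shape.
import numpy as np

X = np.random.randn(2, 3, 4)                 # >2D input, K = 4
W, b = np.random.randn(5, 4), np.random.randn(5)

X2 = X.reshape(-1, X.shape[-1])              # first Reshape: [-1, K]
Y2 = X2 @ W.T + b                            # Gemm
outer = X.shape[:-1]                         # Shape(X) -> Slice(outer)
Y = Y2.reshape(*outer, -1)                   # Concat(outer, -1) -> Reshape
print(Y.shape)                               # (2, 3, 5)
```
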
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16184

Differential Revision: D14070754

Pulled By: ezyang

fbshipit-source-id: 86c69e9b254945b3406c07e122e57a00dfeba3df
2019-02-13 17:12:34 -08:00
Junjie Bai
2ad5dcbbe4 Make timeout in resnet50_trainer configurable (#17058)
Summary:
xw285cornell petrex dagamayank
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17058

Differential Revision: D14068458

Pulled By: bddppq

fbshipit-source-id: 15df4007859067a22df4c6c407df4121e19aaf97
2019-02-13 17:03:48 -08:00
Tongliang Liao
491f2d4cb8 Support conversion from Caffe2 MergeDim to ONNX Reshape + Squeeze. (#16189)
Summary:
`MergeDim` can be done by `Reshape([1, -1, 0, 0, ...]) + Squeeze`.
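
A numpy illustration of the equivalence; ONNX `Reshape` uses `0` to copy an existing dim, which numpy lacks, so it is spelled out here:

```python
import numpy as np

x = np.zeros((2, 3, 4, 5))                # MergeDim -> shape (6, 4, 5)
y = x.reshape(1, -1, *x.shape[2:])        # Reshape([1, -1, 0, 0]) -> (1, 6, 4, 5)
z = np.squeeze(y, axis=0)                 # Squeeze -> (6, 4, 5)
assert z.shape == (6, 4, 5)
```
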
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16189

Differential Revision: D14070676

Pulled By: ezyang

fbshipit-source-id: 28d7e9b35cc2c1dcbd4afb3fbdf7383e219b1777
2019-02-13 15:53:38 -08:00
Sebastian Messmer
9696fee635 Register CUDA kernels for caffe2 operators (#16691)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16691

Previous diffs already introduced a macro that registers caffe2 CPU kernels with c10.
This now also registers the CUDA kernels with it.

Reviewed By: bwasti

Differential Revision: D13901619

fbshipit-source-id: c15e5b7081ff10e5219af460779b88d6e091a6a6
2019-02-12 17:24:01 -08:00
Yinghai Lu
f435fb8290 Allow customization of blob node in net_drawer (#16915)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16915

TSIA

Reviewed By: ipiszy

Differential Revision: D14018010

fbshipit-source-id: df5ccc06fa37f08e7a02a8acc466c4ad47afe04e
2019-02-12 15:02:50 -08:00
Tongliang Liao
0eee56fff7 Export ReduceMean/ReduceFrontMean/ReduceBackMean (Caffe2) to ReduceMean (ONNX). (#16727)
Summary:
The second input (`lengths`) is not supported.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16727

Differential Revision: D14054105

Pulled By: houseroad

fbshipit-source-id: 36b8d00460f9623696439e1bd2a6bc60b7bb263c
2019-02-12 13:35:32 -08:00
Kimish Patel
4292d13240 Keep weights name unchanged during SsaRewrite (#16932)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16932

During the onnxifi transformation, the net's SSA is rewritten. At the last
step, the weight names are changed back to what they were before. This diff
keeps the weight names unchanged throughout the process.

Reviewed By: yinghai

Differential Revision: D13972597

fbshipit-source-id: 7c29857f788a674edf625c073b345f2b44267b33
2019-02-11 14:55:31 -08:00
Sebastian Messmer
920c684367 Expose GenerateProposals to PyTorch
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16880

Reviewed By: bwasti

Differential Revision: D13998092

fbshipit-source-id: 23ab886ba137377312557fa718f262f4c8149cc7
2019-02-11 14:15:47 -08:00
Sebastian Messmer
0c02d317ea Expose BBoxTransform to pytorch
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16879

Reviewed By: bwasti

Differential Revision: D13998093

fbshipit-source-id: ddfe4bff83e9a1a4cedf1e520e6d2977b21cb3af
2019-02-11 14:15:45 -08:00
Junjie Bai
f169f398d0 Change the default image size from 227 to 224 in resnet50 trainer (#16924)
Summary:
cc xw285cornell
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16924

Differential Revision: D14018509

Pulled By: bddppq

fbshipit-source-id: fdbc9e94816ce6e4b1ca6f7261007bda7b80e1e5
2019-02-09 11:18:58 -08:00
Chandler Zuo
6737190b5c Make the exception raised from "numpy.dtype(numpy.void, (INT,))" less cryptic (#16809)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16809

https://fb.facebook.com/groups/582508038765902/permalink/736710343345670/?comment_id=824042307945806&reply_comment_id=824318864584817

numpy.dtype(numpy.void, (<INT>, )) raises a cryptic message "invalid itemsize in generic type tuple" that is hard to debug.

This diff adds a message asking the user to investigate the blob causing the error.
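
A hedged repro of the cryptic message; a negative itemsize triggers the same error text, while the failing value in practice comes from the blob:

```python
import numpy as np

try:
    np.dtype((np.void, -1))  # invalid itemsize
except Exception as e:
    print(e)                 # "invalid itemsize in generic type tuple"
```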

Reviewed By: kennyhorror

Differential Revision: D13973359

fbshipit-source-id: 43a0c492ffafbabdfd7f7541c08a258e5ac0280f
2019-02-08 16:46:50 -08:00
Nikita Shulga
0799a81cb7 Extend Net.RunAllOnGPU() to support RecurrentNetwork op (#15713)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15713

[caffe2] Extend Net.RunAllOnGPU() to support RecurrentNetwork op

Reviewed By: dzhulgakov

Differential Revision: D13576507

fbshipit-source-id: f517127492c9d516ece663d42fef84338c70344e
2019-02-08 15:48:42 -08:00
Gu, Jinghui
5ada54e0bc Impl ExpandDims op and fallback to CPU if needed (#15264)
Summary:
Implement the ExpandDims op and fall back to CPU if needed
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15264

Differential Revision: D13808797

Pulled By: yinghai

fbshipit-source-id: 7795ec303a46e85f84e5490273db0ec76e8b9374
2019-02-08 12:04:53 -08:00
peter.yeh@amd.com
c65b03b9f8 Enable arg_ops_test/unique_ops_test on AMD/rocm (#16853)
Summary:
Verified both tests pass in the ROCm 2.1 environment.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16853

Differential Revision: D13996279

Pulled By: bddppq

fbshipit-source-id: c0df610d7d9ca8d80ed2d1339cdadef59105a71c
2019-02-07 16:51:15 -08:00
Sebastian Messmer
64339dbd51 Fix and re-enable test case (#16643)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16643

The test was disabled in D13908117 because it conflicted with another diff that was about to land.
The merge conflict is now fixed, and it is being re-landed.

Reviewed By: ezyang

Differential Revision: D13911775

fbshipit-source-id: b790f1c3a3f207916eea41ac93bc104d011f629b
2019-02-07 13:58:16 -08:00
Sebastian Messmer
6750e1e3e9 C10_REGISTER_CAFFE2_OPERATOR: Macro for registering c2 kernels (#16548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16548

With this macro, a caffe2 operator can now directly be registered with c10.
No need to write custom wrapper kernels anymore.

Differential Revision: D13877076

fbshipit-source-id: e56846238c5bb4b1989b79855fd44d5ecf089c9c
2019-02-07 13:58:14 -08:00
Wanchao Liang
ac00e85e36 Remove undefined tensor in jit script (#16379)
Summary:
This PR is a follow-up of #15460; it does the following things:

* remove the undefined tensor semantic in jit script/tracing mode
* change ATen/JIT schema for at::index and other index related ops with `Tensor?[]` to align with what at::index is really doing and to adopt `optional[tensor]` in JIT
* change python_print to correctly print the exported script
* register both TensorList and ListOfOptionalTensor in JIT ATen ops to support both
* Backward compatibility for `torch.jit.annotate(Tensor, None)`

List of follow ups:

* remove the undefined tensor semantic in jit autograd, autodiff and grad_of
* remove prim::Undefined fully

For easy reviews, please turn on `hide white space changes` in diff settings.
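
A hedged sketch (in today's annotation syntax) of the `Optional[Tensor]` style this PR moves toward; `torch.jit.annotate(Tensor, None)` stays supported for backward compatibility:

```python
import torch
from typing import Optional

@torch.jit.script
def masked_add(x: torch.Tensor, mask: Optional[torch.Tensor] = None) -> torch.Tensor:
    # mask is Optional[Tensor] rather than an "undefined tensor" sentinel
    if mask is not None:
        return x + mask
    return x

print(masked_add(torch.ones(2)))
```
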
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16379

Differential Revision: D13855677

Pulled By: wanchaol

fbshipit-source-id: 0e21c14d7de250c62731227c81bfbfb7b7da20ab
2019-02-07 11:02:14 -08:00
rohithkrn
aa88c2c0b6 Unify gpu_support variable in python tests (#16748)
Summary:
Assign `has_gpu_support = has_cuda_support or has_hip_support` and make corresponding changes in the Python tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16748

Differential Revision: D13983132

Pulled By: bddppq

fbshipit-source-id: ca496fd8c6ae3549b736bebd3ace7fa20a6dad7f
2019-02-07 00:29:51 -08:00
Yinghai Lu
e5e0bf4152 Add AdjustBatch Op (#16676)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16676

This op is used to change the batch size (first dimension) of a tensor.

Reviewed By: bertmaher, ipiszy

Differential Revision: D13929200

fbshipit-source-id: 4f2c3faec072d468be8301bf00c80d33adb3b5b3
2019-02-06 19:15:41 -08:00
Yinghai Lu
1b919ca93e Use bound shape inference in onnxifi transform (#16598)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16598

ATT.

Reviewed By: bertmaher, rdzhabarov

Differential Revision: D13893698

fbshipit-source-id: 8d2ad9814fe76924a46b450eb7ebd3601fbdbbc7
2019-02-06 16:34:37 -08:00
Jongsoo Park
929cd23da1 no EIGEN engine for DeformConv (#16785)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16785

There's no EIGEN engine implemented for DeformConv, but the unit test was checking it.

Reviewed By: BIT-silence

Differential Revision: D13967306

fbshipit-source-id: e29c19f59f5700fc0501c59f45d60443b87ffedc
2019-02-06 11:59:31 -08:00
Jongsoo Park
8d4b2db529 format deform_conv_test.py (#16786)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16786

Format to prepare for D13967306.

Reviewed By: BIT-silence

Differential Revision: D13967317

fbshipit-source-id: 2de895f8474b04c55ba067fbf788c553dc010c60
2019-02-06 11:59:29 -08:00