Commit Graph

792 Commits

Yanghan Wang
81e70ffa19 fix bug of not using get_score_cls_index in BoxWithNMSLimitOp (#20868)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20868

When `input_boxes_include_bg_cls` is false (which means `input_scores_fg_cls_starting_id` is 0), the op doesn't map the score class index correctly when sorting and limiting the detections over all classes after NMS.
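
A rough sketch of the index mapping involved, for illustration only (the helper name `get_score_cls_index` comes from the op; the body here is an assumption, not the actual C++ implementation):

```
# Hypothetical Python rendering of the score-class index mapping.
def get_score_cls_index(box_cls_index, input_boxes_include_bg_cls,
                        input_scores_fg_cls_starting_id):
    # Boxes start their foreground classes at 1 when a background class is
    # included, at 0 otherwise; scores start theirs at
    # input_scores_fg_cls_starting_id. Translate between the two indexings.
    boxes_fg_starting_id = 1 if input_boxes_include_bg_cls else 0
    return box_cls_index - boxes_fg_starting_id + input_scores_fg_cls_starting_id
```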

Reviewed By: newstzpz

Differential Revision: D15472706

fbshipit-source-id: dc1e808b63ad09fb4bd95acf866771bb3fa92d69
2019-05-24 22:31:01 -07:00
Yanghan Wang
371bd043d6 register ResizeNearestOp to C10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20928

Reviewed By: smessmer

Differential Revision: D15499661

fbshipit-source-id: 5af24d5c9d7ff739b8355e19dfe66b496bc026a5
2019-05-24 14:39:11 -07:00
Kittipat Virochsiri
fd2aa93b37 Exposing LengthsSum/Mean/Max in pytorch (#20802)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20802

Needed for sequence models.
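
A minimal caffe2 Python example of what LengthsSum computes (illustrative values):

```
import numpy as np
from caffe2.python import core, workspace

# Five rows of data, split into segments of length 2 and 3.
workspace.FeedBlob("data", np.arange(10, dtype=np.float32).reshape(5, 2))
workspace.FeedBlob("lengths", np.array([2, 3], dtype=np.int32))
workspace.RunOperatorOnce(
    core.CreateOperator("LengthsSum", ["data", "lengths"], ["out"]))
print(workspace.FetchBlob("out"))  # one row per segment: per-segment sums
```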

Reviewed By: dzhulgakov

Differential Revision: D15448529

fbshipit-source-id: cd5abe3b689fc0e02feff10faf8cd61c99369f4f
2019-05-22 13:55:19 -07:00
Huan Gui
fbdafdffa1 Move bucketize_op to open source
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19952
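
A small usage sketch, assuming the open-sourced op keeps the `Bucketize` name and its `boundaries` argument:

```
import numpy as np
from caffe2.python import core, workspace

workspace.FeedBlob("x", np.array([0.5, 3.0, 7.5], dtype=np.float32))
workspace.RunOperatorOnce(core.CreateOperator(
    "Bucketize", ["x"], ["idx"], boundaries=[1.0, 5.0]))
# Each element is mapped to the index of the bucket it falls into,
# here expected to be [0, 1, 2].
print(workspace.FetchBlob("idx"))
```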

Reviewed By: houseroad

Differential Revision: D15145552

fbshipit-source-id: e0074c878a5c164324a9cc477783285dedffd188
2019-05-20 18:03:27 -07:00
Jongsoo Park
ea9c6e7581 eliminate FE_INVALID in unit test (#20502)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20502

Following D15307410, this removes more floating-point exceptions in unit tests.

Reviewed By: hx89

Differential Revision: D15340930

fbshipit-source-id: 269fc75e0800bc9d39126767a0f3ca15cd8b0cad
2019-05-16 21:55:28 -07:00
Yanghan Wang
373e6a78bf make box plus one a legacy argument in detection ops
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20550

Reviewed By: newstzpz

Differential Revision: D15348610

fbshipit-source-id: 12b1e119e9bc9191ba9f2aa6d695ef215780c349
2019-05-16 18:17:12 -07:00
Yanghan Wang
61012080c8 split and register CollectAndDistributeFpnRpnProposals with C10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20509

Reviewed By: newstzpz

Differential Revision: D15302181

fbshipit-source-id: 7d3b29b667cd900f2976101f35200e1ee20b0f64
2019-05-16 13:40:46 -07:00
Jongsoo Park
5f8e849d84 eliminate FE_INVALID in optimizer related operators and tests (#20501)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20501

Eliminating FE_INVALID floating-point exceptions in optimizer-related operators and their unit tests.

Reviewed By: hx89

Differential Revision: D15307410

fbshipit-source-id: e5400c26e08f26191ee542fe6b02e0a69bc4e1ae
2019-05-16 08:23:46 -07:00
David Reiss
1891614aa5 Add GivenTensorInt16Fill (#20515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20515

Needed by the upcoming quantized version of GenerateProposals
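
A usage sketch, assuming the new op mirrors the existing `GivenTensor*Fill` interface (`values` plus `shape` arguments):

```
from caffe2.python import core, workspace

workspace.RunOperatorOnce(core.CreateOperator(
    "GivenTensorInt16Fill", [], ["y"],
    values=[1, -2, 3], shape=[3]))
print(workspace.FetchBlob("y").dtype)  # expected: int16
```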

Reviewed By: dzhulgakov

Differential Revision: D14430952

fbshipit-source-id: ea852f04cc4b070f8fbe7a1e6535bba4d5b230fd
2019-05-15 19:45:15 -07:00
Cheng Cheng
fd18b89c98 shape inference for learning rate op (#20020)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20020

Add shape inference for the LearningRate op. The output (lr) has the same shape as the input (iteration), but a different type (float vs. int).
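
A minimal example of the shapes involved (illustrative): the op consumes the int64 iteration counter and emits a float learning rate of the same shape.

```
import numpy as np
from caffe2.python import core, workspace

workspace.FeedBlob("iter", np.array([100], dtype=np.int64))
workspace.RunOperatorOnce(core.CreateOperator(
    "LearningRate", ["iter"], ["lr"], base_lr=0.1, policy="fixed"))
lr = workspace.FetchBlob("lr")
print(lr.shape, lr.dtype)  # same shape as iter, but float32
```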

Reviewed By: un-disclosed

Differential Revision: D15112300

fbshipit-source-id: 09969aefa15172a6f3c70cd9b2548e3020da5d7a
2019-05-14 23:34:32 -07:00
Bilge Acun
3ee97183b0 ScaleBlobs Operator (#19660)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19660

Implementation of an aggregated Scale operator.
The operator takes a list of tensors as input and scales all of them by the float argument value.
The tensor sizes can be different, so bookkeeping of the sizes and pointers to the tensors is
necessary for the GPU version of the kernel.
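
A usage sketch, assuming the aggregated op keeps Scale's `scale` argument and allows in-place operation:

```
import numpy as np
from caffe2.python import core, workspace

# Tensors of different sizes, all scaled by one op in a single call.
workspace.FeedBlob("x1", np.ones(3, dtype=np.float32))
workspace.FeedBlob("x2", np.ones(5, dtype=np.float32))
workspace.RunOperatorOnce(core.CreateOperator(
    "ScaleBlobs", ["x1", "x2"], ["x1", "x2"], scale=2.0))
print(workspace.FetchBlob("x1"), workspace.FetchBlob("x2"))
```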

Reviewed By: BIT-silence

Differential Revision: D14984233

fbshipit-source-id: 37cc97159a4f2c38cd6fff4f5710ab7d3a773611
2019-05-08 17:57:33 -07:00
Jongsoo Park
42e9a619b3 add decay parameter in ref_adagrad (#15329)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15329

Add a decay parameter to match the C++ Adagrad implementation.
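
A sketch of what a reference Adagrad with decay computes, assuming caffe2's update convention (`param + lr * grad / ...`, with a negative lr used for descent):

```
import numpy as np

def ref_adagrad(param, moment, grad, lr, epsilon, decay=1.0):
    # decay=1.0 reproduces plain Adagrad; decay<1.0 forgets old gradients.
    moment_out = decay * moment + np.square(grad)
    param_out = param + lr * grad / (np.sqrt(moment_out) + epsilon)
    return param_out, moment_out
```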

Reviewed By: chocjy

Differential Revision: D13300991

fbshipit-source-id: db734df0202d8f5fd156f2742207d0b5a3aa7348
2019-05-07 18:58:58 -07:00
Xue Feng
23ba0561c3 Add Gate Policy GateLearningRateOp (#20044)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20044

We do not have a gating functor; this diff adds one. I'm leveraging the existing learning rate op because there are other policies I'll need to use together with it as a union.
* Since there are other policies in LearningRateOp that will be used as a union, I chose to add this as a LearningRateOp policy.
    * constantwarmup cannot express a step function that is nonzero first and zero later.

* There are multiple uses for it:
    * e.g. as a gating blob generator, useful for turning something off.
    * e.g. as a learning rate switcher at a certain iteration.
* For generalizability, no regulation or constraint is applied on the range of the values.
* See the referenced figure for an illustration (Phabricator attachment {F157366621}, not rendered here); a minimal sketch of the policy follows.
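
A minimal sketch of the gate behavior described above, with illustrative names (the real policy's argument names may differ):

```
def gate_policy(iteration, switch_iter, value_before, value_after):
    # Step function: emit one constant before the switch-over iteration
    # and another after it; either value may be zero or nonzero.
    return value_before if iteration < switch_iter else value_after
```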

Reviewed By: ccheng16

Differential Revision: D15178229

fbshipit-source-id: 1e66e9a4bc1bfb946a57f8aefc97d8170f6be731
2019-05-05 20:11:04 -07:00
Xiaomeng Yang
271f005eeb Add elementwise_affine for LayerNormGradientOp (#19982)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19982

Add elementwise_affine for LayerNormGradientOp

Reviewed By: houseroad

Differential Revision: D15157493

fbshipit-source-id: 7465f2c1d4df4649b4903b93483c4861e9c7afa9
2019-05-03 15:33:46 -07:00
Yanghan Wang
a285cbcccf support different class modes for bbox in box_with_nms_limit_op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19820

Reviewed By: newstzpz

Differential Revision: D15112955

fbshipit-source-id: a757622a32cff7159c39735607103138dbbafc24
2019-04-30 16:02:44 -07:00
Chandler Zuo
472be69a73 Avoid Output Uninitialized Blobs in Load with load_all=1 (#19133)
Summary:
When output blob names are specified while load_all=1, they are ignored. However, this behavior is not documented. In this diff, we simply disallow users from providing output blob names when load_all=1.
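
For reference, a minimal sketch of the accepted form after this change (db path illustrative): with load_all=1 the op restores every blob stored in the db, so no output names are given.

```
from caffe2.python import core

load_op = core.CreateOperator(
    "Load", [], [],  # no outputs: load_all restores all blobs in the db
    absolute_path=1, db="/tmp/checkpoint.minidb", db_type="minidb",
    load_all=1)
```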

See discussion at https://fb.workplace.com/groups/1405155842844877/permalink/2714909788536136/
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19133

Reviewed By: dzhulgakov

Differential Revision: D14883698

Pulled By: chandlerzuo

fbshipit-source-id: 6e4171e36c4ccc4f857e79da98b858a06b7d8ad6
2019-04-27 10:45:44 -07:00
Xiaomeng Yang
2ce39de3fc Add elementwise_affine for layer_norm_op (#19713)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19713

Add elementwise_affine for layer_norm_op
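
A usage sketch, assuming `elementwise_affine` follows the torch.nn.LayerNorm convention of taking per-element gamma and beta as extra inputs:

```
import numpy as np
from caffe2.python import core, workspace

workspace.FeedBlob("X", np.random.randn(2, 4).astype(np.float32))
workspace.FeedBlob("gamma", np.ones(4, dtype=np.float32))
workspace.FeedBlob("beta", np.zeros(4, dtype=np.float32))
workspace.RunOperatorOnce(core.CreateOperator(
    "LayerNorm", ["X", "gamma", "beta"], ["Y", "mean", "std"],
    axis=1, elementwise_affine=True))
print(workspace.FetchBlob("Y").shape)  # (2, 4)
```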

Reviewed By: houseroad

Differential Revision: D15075454

fbshipit-source-id: e8a7d3da1c81e49fa55323f5e74a68bc4ef8d83f
2019-04-26 17:20:01 -07:00
Xiaomeng Yang
f5fe7aa0b2 Fix relu bug for empty tensor (#19451)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19451

Fix relu bug for empty tensor

Reviewed By: xianjiec

Differential Revision: D15009811

fbshipit-source-id: b75e567c3bec08d7d12b950d8f1380c50c138704
2019-04-19 15:21:07 -07:00
Yinghai Lu
f1f31b634d Eliminate AdjustBatch ops (#19083)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19083

As we have discussed, there are too many AdjustBatch ops; they incur reallocation overhead and hurt performance. We will eliminate these ops by
- inlining the input adjust-batch op into Glow
- inlining the output adjust-batch op into OnnxifiOp, and doing that only conditionally.

This is the C2 part of the change and requires a change on the Glow side to work e2e.

Reviewed By: rdzhabarov

Differential Revision: D14860582

fbshipit-source-id: ac2588b894bac25735babb62b1924acc559face6
2019-04-17 10:00:25 -07:00
Huamin Li
c480798a1c use C10_REGISTER for GELU op
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/19090

Reviewed By: BIT-silence

Differential Revision: D14864737

fbshipit-source-id: 8debd53171f7068726f0ab777a13ca46becbfbdf
2019-04-12 11:41:04 -07:00
Xiaomeng Yang
fd40c0eba0 Add gelu op (#18992)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18992

Add gelu op
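
A minimal example of the new op (GELU computes `x * Phi(x)`, with `Phi` the standard normal CDF):

```
import numpy as np
from caffe2.python import core, workspace

workspace.FeedBlob("X", np.random.randn(4).astype(np.float32))
workspace.RunOperatorOnce(core.CreateOperator("Gelu", ["X"], ["Y"]))
print(workspace.FetchBlob("Y"))
```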

Reviewed By: houseroad

Differential Revision: D14814811

fbshipit-source-id: 00f126b8b83763c57ebbf28fbd2de5a8fab6d491
2019-04-08 21:58:29 -07:00
Lu Fang
443a58e03d Export C10 operator in PyTorch Model (#18210)
Summary:
Almost there, feel free to review.

These c10 operators are exported to the `_caffe2` domain.

TODO:

- [x] let the onnx checker pass
- [x] test tensor list as argument
- [x] test caffe2 backend and converter
- [x] check the c10 schema can be exported to onnx
- [x] refactor the test case to share some code
- [x] fix the problem in ONNX_ATEN_FALLBACK
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18210
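
A sketch of the export path this enables (module and file names illustrative; a real model would call into registered c10 ops, which are then emitted into the `_caffe2` ONNX domain):

```
import torch

class Wrapper(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)  # stand-in for a call into a c10/_caffe2 op

torch.onnx.export(
    Wrapper(), torch.randn(2, 3), "model.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
```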

Reviewed By: zrphercule

Differential Revision: D14600916

Pulled By: houseroad

fbshipit-source-id: 2592a75f21098fb6ceb38c5d00ee40e9e01cd144
2019-04-08 16:06:00 -07:00
Xiaomeng Yang
b145dcca04 Add support for group ConvTranspose (#18794)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18794

Add support for group ConvTranspose

Reviewed By: houseroad

Differential Revision: D14741327

fbshipit-source-id: 5d947ca044bf8495dd7f8f56122441ebbcc6c7e4
2019-04-04 11:52:06 -07:00
Duc Ngo
16f07d7dac caffe2 - set up correct inheritance structure for remaining operator test classes (#18622)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18622

Set up correct inheritance structure for remaining operator test classes

Reviewed By: ezyang

Differential Revision: D14685941

fbshipit-source-id: a6b1b3be325935b7fec7515be13a4994b3016bf0
2019-04-01 15:53:22 -07:00
Yanghan Wang
f4e35d30ed register BoxWithNMSLimit with C10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17956

Reviewed By: houseroad

Differential Revision: D14417300

fbshipit-source-id: eb5e2ba84513b3b7bfa509dc442424b13fe9148f
2019-03-29 13:41:40 -07:00
Ahmed Aly
9eb0f435d9 Inference LSTM integration test (#18559)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18559

Adding integration test for inference LSTM

Reviewed By: houseroad

Differential Revision: D14656698

fbshipit-source-id: 80fb2a72be30fcb695f4471b72bf9d6e3965bf81
2019-03-28 11:31:06 -07:00
Duc Ngo
6a1a019c0a caffe2 - support flaky operator tests for caffe2 build (#18155)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18155

- Make a Python decorator caffe2_flaky for caffe2 operator unit tests.
- The environment variable CAFFE2_RUN_FLAKY_TESTS is now used to mark flaky-test mode.

During a test run:
- If flaky-test mode is on, only flaky tests are run.
- If flaky-test mode is off, only non-flaky tests are run.

Mark ctc_beam_search_decoder_op_test as flaky
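
A hypothetical sketch of the decorator's gating (the real one lives in caffe2's test utilities; everything beyond `caffe2_flaky` and `CAFFE2_RUN_FLAKY_TESTS` is illustrative):

```
import os
import unittest

def caffe2_flaky(test_cls):
    # In flaky-test mode only flaky tests run; otherwise they are skipped.
    # (The complementary skip of non-flaky tests would live in the runner.)
    flaky_mode = os.getenv("CAFFE2_RUN_FLAKY_TESTS", "0") == "1"
    if flaky_mode:
        return test_cls
    return unittest.skip("flaky test; set CAFFE2_RUN_FLAKY_TESTS=1 to run")(test_cls)
```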

Reviewed By: ezyang, salexspb

Differential Revision: D14468816

fbshipit-source-id: dceb4a48daeb5437ad9cc714bef3343e9761f3a4
2019-03-25 16:58:34 -07:00
Gerard Goossen
46990c20fa Verify def before infer tensor (#18129)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18129

A lot of tensor inference functions assume the operator passes the schema,
so call Verify to make sure this is actually the case.

I created a diff before to add checking in Concat (https://github.com/pytorch/pytorch/pull/17110), but encountered a lot more places where this is assumed (for example ElementwiseOpShapeInference).
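
One way to exercise tensor/shape inference from Python, for context (illustrative net; `InferShapesAndTypes` infers from blobs already in the workspace):

```
import numpy as np
from caffe2.python import core, workspace

net = core.Net("example")
net.Concat(["a", "b"], ["c", "split_info"], axis=1)
workspace.FeedBlob("a", np.zeros((2, 3), dtype=np.float32))
workspace.FeedBlob("b", np.zeros((2, 4), dtype=np.float32))
shapes, types = workspace.InferShapesAndTypes([net])
print(shapes["c"])  # [2, 7]
```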

Reviewed By: mdschatz

Differential Revision: D14503933

fbshipit-source-id: cf0097b8c3e4beb1cded6b61e092a6adee4b8fcb
2019-03-22 06:36:25 -07:00
Jongsoo Park
c7448aa13c remove unused parameters in optimizer tests (#18084)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18084

The data_strategy parameter was not used in some of the optimizer unit tests.

Reviewed By: hyuen

Differential Revision: D14487830

fbshipit-source-id: d757cd06aa2965f4c0570a4a18ba090b98820ef4
2019-03-15 18:06:15 -07:00
Sebastian Messmer
7a3488e0fc Expose c10 cuda ops to caffe2 (#18036)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18036

- Add macros to export c10 cuda operators to caffe2 frontend
- Instead of having a separate caffe2 registry for the c10 operator wrappers, use the existing caffe2 registries

Reviewed By: ezyang

Differential Revision: D14467495

fbshipit-source-id: 7715ed2e38d2bbe16f1446ae82c17193a3fabcb9
2019-03-15 16:58:12 -07:00
Yanghan Wang
53fb9a462a register RoIAlign with C10
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17889

Reviewed By: smessmer

Differential Revision: D14411630

fbshipit-source-id: c3b7941d725ae2c78e8d79f52a7983db92b75807
2019-03-14 11:55:29 -07:00
Jongsoo Park
8bd9465b79 make momentum non negative in adagrad test (#18009)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18009

momentum should be initialized with non-negative values

Reviewed By: hyuen

Differential Revision: D14450841

fbshipit-source-id: 5bbbd11645db9e6f2dc42b26a00ff3caf378c59f
2019-03-14 03:15:07 -07:00
Xiaomeng Yang
54b33503ec Optimize channel_stats_op (#16243)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16243

Optimize channel_stats_op and add NHWC impl

Reviewed By: takatosp1

Differential Revision: D13775515

fbshipit-source-id: decb889e646f5316d4afefdf9f9b6bc6343613cd
2019-03-12 12:08:00 -07:00
youkaichao
b87abdfc12 typo fix
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17653

Differential Revision: D14302003

Pulled By: ezyang

fbshipit-source-id: 8ad90985a392b07127c7e315d4e74ce77962b573
2019-03-06 11:36:44 -08:00
Deepali Chourasia
e3516d0a95 omit group conv NHWC test for GPU (#17715)
Summary:
Observed the test `TestGroupConvolution.test_group_convolution` failing with the following error:

```
Falsifying example: test_group_convolution(self=<caffe2.python.operator_test.group_conv_test.TestGroupConvolution testMethod=test_group_convolution>, stride=3, pad=0, kernel=5, size=8, group=4, input_channels_per_group=7, output_channels_per_group=8, batch_size=2, order='NHWC', engine='', use_bias=False, gc=, dc=[, device_type: 1])

You can reproduce this example by temporarily adding reproduce_failure('3.59.1', b'AAAA') as a decorator on your test case
```
This example generated by hypothesis has `group=4`, `order='NHWC'`, and `dc=[, device_type: 1]`.
I think this example should be skipped.

I have mimicked the change corresponding to [PR#13554](https://github.com/pytorch/pytorch/pull/13554) to skip this example.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17715

Differential Revision: D14346642

Pulled By: ezyang

fbshipit-source-id: b1f1fef09f625fdb43d31c7213854e61a96381ba
2019-03-06 11:32:35 -08:00
Sebastian Messmer
910519e45b Expose cuda kernel for caffe2::GenerateProposals
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/17066

Reviewed By: ezyang, wat3rBro

Differential Revision: D14071130

fbshipit-source-id: 6fe26503f6069c36ec31d6c09b549b932d5db242
2019-03-04 14:59:08 -08:00
rohithkrn
8c72217817 Enable boolean_mask, adadelta, adagrad fp16 on ROCm (#17235)
Summary:
- Fix bugs and indentation in the adadelta and adagrad tests to enable fp16
- Enable fp16 boolean_mask on ROCm
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17235

Differential Revision: D14240828

Pulled By: bddppq

fbshipit-source-id: ab6e8f38aa7afb83b4b879f2f4cf2277c643198f
2019-02-27 10:07:36 -08:00
Peizhao Zhang
54e4c4d7de Removed obsolete argument correct_transform_coords in bbox_transform op. (#16723)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16723

Removed obsolete argument correct_transform_coords in bbox_transform op.
* It was only for backward compatibility. We should not have models using it now.

Differential Revision: D13937430

fbshipit-source-id: 504bb066137ce408c12dc9dcc2e0a513bad9b7ee
2019-02-20 13:22:33 -08:00
Sebastian Messmer
9696fee635 Register CUDA kernels for caffe2 operators (#16691)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16691

Previous diffs already introduced a macro that registers caffe2 CPU kernels with c10.
This now also registers the CUDA kernels with it.

Reviewed By: bwasti

Differential Revision: D13901619

fbshipit-source-id: c15e5b7081ff10e5219af460779b88d6e091a6a6
2019-02-12 17:24:01 -08:00
Sebastian Messmer
920c684367 Expose GenerateProposals to PyTorch
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16880

Reviewed By: bwasti

Differential Revision: D13998092

fbshipit-source-id: 23ab886ba137377312557fa718f262f4c8149cc7
2019-02-11 14:15:47 -08:00
Sebastian Messmer
0c02d317ea Expose BBoxTransform to pytorch
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/16879

Reviewed By: bwasti

Differential Revision: D13998093

fbshipit-source-id: ddfe4bff83e9a1a4cedf1e520e6d2977b21cb3af
2019-02-11 14:15:45 -08:00
peter.yeh@amd.com
c65b03b9f8 Enable arg_ops_test/unique_ops_test on AMD/rocm (#16853)
Summary:
Verified both tests are passing on rocm 2.1 env.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16853

Differential Revision: D13996279

Pulled By: bddppq

fbshipit-source-id: c0df610d7d9ca8d80ed2d1339cdadef59105a71c
2019-02-07 16:51:15 -08:00
Sebastian Messmer
64339dbd51 Fix and re-enable test case (#16643)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16643

The test was disabled in D13908117 because it conflicted with another diff that was about to land.
This fixes the merge conflict and re-lands the test.

Reviewed By: ezyang

Differential Revision: D13911775

fbshipit-source-id: b790f1c3a3f207916eea41ac93bc104d011f629b
2019-02-07 13:58:16 -08:00
Sebastian Messmer
6750e1e3e9 C10_REGISTER_CAFFE2_OPERATOR: Macro for registering c2 kernels (#16548)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16548

With this macro, a caffe2 operator can now directly be registered with c10.
No need to write custom wrapper kernels anymore.

Differential Revision: D13877076

fbshipit-source-id: e56846238c5bb4b1989b79855fd44d5ecf089c9c
2019-02-07 13:58:14 -08:00
rohithkrn
aa88c2c0b6 Unify gpu_support variable in python tests (#16748)
Summary:
Assign `has_gpu_support = has_cuda_support or has_hip_support` and make the corresponding changes in the Python tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16748

Differential Revision: D13983132

Pulled By: bddppq

fbshipit-source-id: ca496fd8c6ae3549b736bebd3ace7fa20a6dad7f
2019-02-07 00:29:51 -08:00
Yinghai Lu
e5e0bf4152 Add AdjustBatch Op (#16676)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16676

This op is used to change the batch size (the first dimension) of a tensor.
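
A rough numpy sketch of the semantics, not of the C2 op's actual signature: grow or shrink the first (batch) dimension, zero-padding when growing.

```
import numpy as np

def adjust_batch(x, new_batch_size):
    if new_batch_size <= x.shape[0]:
        return x[:new_batch_size]
    pad = np.zeros((new_batch_size - x.shape[0],) + x.shape[1:], dtype=x.dtype)
    return np.concatenate([x, pad], axis=0)
```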

Reviewed By: bertmaher, ipiszy

Differential Revision: D13929200

fbshipit-source-id: 4f2c3faec072d468be8301bf00c80d33adb3b5b3
2019-02-06 19:15:41 -08:00
Jongsoo Park
929cd23da1 no EIGEN engine for DeformConv (#16785)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16785

There's no EIGEN engine implemented for DeformConv, but the unit test was checking for it.

Reviewed By: BIT-silence

Differential Revision: D13967306

fbshipit-source-id: e29c19f59f5700fc0501c59f45d60443b87ffedc
2019-02-06 11:59:31 -08:00
Jongsoo Park
8d4b2db529 format deform_conv_test.py (#16786)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16786

Formatting to prepare for D13967306.

Reviewed By: BIT-silence

Differential Revision: D13967317

fbshipit-source-id: 2de895f8474b04c55ba067fbf788c553dc010c60
2019-02-06 11:59:29 -08:00
Edward Yang
a3f600e394 Revert D13854304: [redo][c10] LayerNorm Registration Example
Differential Revision:
D13854304

Original commit changeset: ec463ce22721

fbshipit-source-id: 4262b9a2ef486e1c7c0283ea021331ac97cc5f56
2019-02-06 08:26:23 -08:00
Edward Yang
fc0e88dd77 Revert D13855525: [c10] Expose RoIAlign to torch
Differential Revision:
D13855525

Original commit changeset: cfee7bb1544d

fbshipit-source-id: 0b4124b78c4082b52e592a1275069c879a9aed39
2019-02-06 08:26:22 -08:00