Commit Graph

20 Commits

Author SHA1 Message Date
Richard Barnes
805dff354e Avoid type qualifier specified more than once (#72411)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72411

Fixes:
```
caffe2/caffe2/operators/max_pool_with_index.cu(16): warning: type qualifier specified more than once
caffe2/caffe2/operators/max_pool_with_index.cu(28): warning: type qualifier specified more than once
caffe2/caffe2/operators/max_pool_with_index.cu(61): warning: type qualifier specified more than once
caffe2/caffe2/operators/max_pool_with_index.cu(62): warning: type qualifier specified more than once
caffe2/caffe2/operators/max_pool_with_index.cu(74): warning: type qualifier specified more than once
```

Test Plan: Sandcastle

Reviewed By: malfet

Differential Revision: D34034382

fbshipit-source-id: 2b73c55358632090baf673b32b800656ae874040
(cherry picked from commit ab3f3f9a79)
2022-02-07 18:25:29 +00:00
Richard Barnes
6c03f8d9e5 Drop unused variables and add some const (#71106)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/71106

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D33490855

fbshipit-source-id: 9fc4a4e4a7ad5e6c31f394ec6d8221b964fdf043
2022-01-11 12:38:59 -08:00
Richard Barnes
9409a3a39b Check kernel launches in caffe2/operators (#52240)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52240

Test Plan: Sandcastle tests

Reviewed By: xush6528

Differential Revision: D26408330

fbshipit-source-id: 60779ba0e38c8f90e0e341c8faa2661e631112dd
2021-02-16 16:42:05 -08:00
Jerry Zhang
ac87488bd3 Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize (#17764)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17764

Original commit changeset: f1923fdca4a1

Reverting the int8 ops fixes the original runtime regression.
We'll ignore the memory regression since it is flaky; see D14228484

Reviewed By: dzhulgakov

Differential Revision: D13885233

fbshipit-source-id: ccbe4b94acb44b7b4cb3ae4d73e3f6091e1e1195
2019-03-07 18:38:53 -08:00
Jerry Zhang
db5a3c274d Tensor construction codemod (#16568)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16568

In caffe2/caffe2/operators and caffe2/caffe2/fb/operators
(Resize + mutable_data) and (ResizeLike + mutable_data)
motivation: https://github.com/pytorch/pytorch/pull/12407

Reviewed By: dzhulgakov

Differential Revision: D13863416

fbshipit-source-id: 90ad3971850b89bf4b2ac81e9fa59d3bc43dc1c9
2019-02-05 18:51:02 -08:00
Jerry Zhang
ae5fd10b02 Tensor method rename size()->numel() - 2/3 (#16745)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16745

Codemod generated with clangr shard mode, 25 files per diff.

Reviewed By: dzhulgakov

Differential Revision: D13944353

fbshipit-source-id: 25c2ca22204706544ee67e59c663bf495f2b4f6b
2019-02-04 23:59:10 -08:00
Jerry Zhang
db4235f31d Tensor method rename ndim()->dim() - 2/3 (#16679)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16679

Codemod generated with clangr shard mode, 25 files per diff.

Reviewed By: houseroad

Differential Revision: D13929450

fbshipit-source-id: fcc222744c28b41f2cedffc0c2ef5d04aceaa5af
2019-02-04 11:12:57 -08:00
Jerry Zhang
2af95d8e3e Back out "[pt1][tensor] Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize" (#16516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16516

Original commit changeset: 64abce3dbaed

Reviewed By: dzhulgakov

Differential Revision: D13863715

fbshipit-source-id: f1923fdca4a1a82768d9c280a8493ff15a7eb2ba
2019-01-30 12:50:38 -08:00
Jerry Zhang
ff963d4b9f Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize (#16273)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16273

Previously we had SetOutputSize, which accepted a partially initialized output Tensor and set it to the correct size;
this diff changes it to GetOutputSize, which returns the correct size instead.
e.g.
```
auto* Y = Output(0);
ConvPoolOp<Context>::SetOutputSize(X, Y, channels);
...
Y->mutable_data<T>...
```
-->
```
auto sizes = ConvPoolOp<Context>::GetOutputSize(X, channels);
auto* Y = Output(0, sizes, at::dtype<T>());
```

Reviewed By: dzhulgakov

Differential Revision: D13736281

fbshipit-source-id: 64abce3dbaed0b375098463333dfd0ea5a3b1945
2019-01-28 15:56:34 -08:00
Roy Li
30521a37ad codemod: caffe::float16 -> at::Half (#11785)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11785

Replace each instance of float16 with Half.

Reviewed By: Yangqing

Differential Revision: D9892158

fbshipit-source-id: b9225ca7bd5c84fd1c04a9d24b026c8b6cbff120
2018-09-20 18:55:19 -07:00
Jerry Zhang
aebf3b47ae Remove template parameter from Tensor (#9939)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9939

Pull Request resolved: https://github.com/facebookresearch/weakly-supervised-action-detection/pull/13

Pull Request resolved: https://github.com/pytorch/translate/pull/166

Pull Request resolved: https://github.com/pytorch/pytorch/pull/9125

Closes https://github.com/pytorch/pytorch/pull/9125

Use inheritance for polymorphism, and remove the template parameter.
This changes the templating at call sites; the core implementations will change later.

Before this change, the Caffe2 Tensor class was fixed at compile time to bind to a particular device/context. With this change, we make the device a runtime property (stored inside the tensor) while preserving the same semantics. For example, one still has to specify a device type in order to create a Tensor - there are no uninitialized tensors. More specifically, the changes are:

1. We added an extra *DeviceType* argument to most of the Tensor constructors, e.g. Tensor(DeviceType type).
2. The semantics of the constructor Tensor(const Tensor<SrcContext>& src, ContextForCopy* context) are changed: the second context is passed in so that we can call the templated Copy function. Previously it could be a different context from the source and target; now we enforce that the context, if provided, has the same device type as src.
3. To preserve the 'get-or-construct' semantics of Blob, we added a specialized getter, Blob::GetMutableTensor, which verifies both that the Blob contains a Tensor and that it is of the correct type.
4. The Tensor type is no longer default-constructible (as we don't have unknown-device tensors), so some of the code handling STL containers needs to change.

Note: Some changes are postponed just to keep this diff a bit smaller. Please see `TODO`s.

Reviewed By: ezyang, houseroad

Differential Revision: D9024330

fbshipit-source-id: e0b8295d2dc6ebe2963383ded5af799ad17164ba
2018-07-27 10:56:39 -07:00
Jerry Zhang
969b62f276 Revert D8121878: Remove template parameter from Tensor
Differential Revision: D8121878

Original commit changeset: 4a5e9a677ba4

fbshipit-source-id: d8e2c0bb145b52fbcca323b22d1d3346f0b3249e
2018-07-26 14:02:04 -07:00
Jerry Zhang
cd5adc7b5f Remove template parameter from Tensor (#13)
Summary:
Pull Request resolved: https://github.com/facebookresearch/weakly-supervised-action-detection/pull/13

Pull Request resolved: https://github.com/pytorch/translate/pull/166

Pull Request resolved: https://github.com/pytorch/pytorch/pull/9125

Closes https://github.com/pytorch/pytorch/pull/9125

Use inheritance for polymorphism, and remove the template parameter.
This changes the templating at call sites; the core implementations will change later.

Before this change, the Caffe2 Tensor class was fixed at compile time to bind to a particular device/context. With this change, we make the device a runtime property (stored inside the tensor) while preserving the same semantics. For example, one still has to specify a device type in order to create a Tensor - there are no uninitialized tensors. More specifically, the changes are:

1. We added an extra *DeviceType* argument to most of the Tensor constructors, e.g. Tensor(DeviceType type).
2. The semantics of the constructor Tensor(const Tensor<SrcContext>& src, ContextForCopy* context) are changed: the second context is passed in so that we can call the templated Copy function. Previously it could be a different context from the source and target; now we enforce that the context, if provided, has the same device type as src.
3. To preserve the 'get-or-construct' semantics of Blob, we added a specialized getter, Blob::GetMutableTensor, which verifies both that the Blob contains a Tensor and that it is of the correct type.
4. The Tensor type is no longer default-constructible (as we don't have unknown-device tensors), so some of the code handling STL containers needs to change.

Note: Some changes are postponed just to keep this diff a bit smaller. Please see `TODO`s.

Reviewed By: xw285cornell

Differential Revision: D8121878

fbshipit-source-id: 4a5e9a677ba4ac82095df959851a054c81eccf81
2018-07-26 10:25:23 -07:00
Peter Yeh
54db14e390 HIP Operators Generator--> HipOpG (#9322)
Summary:
The goal of this PR is to add infrastructure to convert (hipify) CUDA ops into [HIP](https://github.com/ROCm-Developer-Tools/HIP) ops at **compile** time.

Note that HIP ops, which are portable C++ code, can run on both AMD and NVIDIA platforms.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9322

Differential Revision: D8884707

Pulled By: bddppq

fbshipit-source-id: dabc6319546002c308c10528238e6684f7aef0f8
2018-07-19 00:26:06 -07:00
Orion Reblitz-Richardson
1d5780d42c Remove Apache headers from source.
* The LICENSE file contains the details, so they are removed from individual source files.
2018-03-27 13:10:18 -07:00
Yangqing Jia
8286ce1e3a Re-license to Apache
Summary: Closes https://github.com/caffe2/caffe2/pull/1260

Differential Revision: D5906739

Pulled By: Yangqing

fbshipit-source-id: e482ba9ba60b5337d9165f28f7ec68d4518a0902
2017-09-28 16:22:00 -07:00
Henry Lu
10667a914e Add linter for enforcing caffe operator documentation
Summary: Add a check that, every time we register a Caffe operator for CPU or GPU, documentation is added for that operator.

Reviewed By: dzhulgakov

Differential Revision: D5443110

fbshipit-source-id: 3793c3d29bea1228078cb30bdf8243ac0ab90664
2017-07-24 15:27:47 -07:00
Aapo Kyrola
95291f0f74 Revert D5348078: Add linter for enforcing caffe operator documentation
Summary: This reverts commit c3fa22fc7ca8066d5fc8fa780b23d7867fd3380e

Differential Revision: D5348078

fbshipit-source-id: f536e647cbd221b26ccbc105a5f5f8bdbcc119ab
2017-07-17 18:36:38 -07:00
Henry Lu
32b13d6243 Add linter for enforcing caffe operator documentation
Summary: Add a lint rule checking that, every time we register a Caffe operator for CPU or GPU, documentation is added for that operator.

Reviewed By: dzhulgakov

Differential Revision: D5348078

fbshipit-source-id: c3fa22fc7ca8066d5fc8fa780b23d7867fd3380e
2017-07-17 08:17:23 -07:00
Simon Layton
1d0ba2cfbd New cudnn ops
Summary:
cuDNN versions of dropout and LRN (for native fp16 support), and a port of Caffe's max pooling algorithm that uses an explicit mask to store locations (also supports fp16 storage).
Closes https://github.com/caffe2/caffe2/pull/396

Reviewed By: akyrola

Differential Revision: D4990880

Pulled By: asaadaldien

fbshipit-source-id: a716acffb656843e9b31e3e6808bd2d8aa959d03
2017-05-08 16:33:21 -07:00