Commit Graph

38 Commits

Pruthvi Madugundu
ab7a472980 [ROCm] Update HIP_VERSION to TORCH_HIP_VERSION (#62786)
Summary:
- HIP_VERSION semantic versioning will change in ROCm 4.3. These changes remove the dependency on the HIP_VERSION provided in the hip header, keeping the code compatible with both older and newer versions of ROCm.
- TORCH_HIP_VERSION is derived from HIP_VERSION_MAJOR and HIP_VERSION_MINOR (a sketch follows below)
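
A minimal sketch of how such a combined macro could be derived; the exact encoding here is an assumption for illustration, not necessarily the PR's formula:

```cpp
// Illustrative only: fold the major/minor macros from the HIP headers into a
// single comparable number, with no dependency on HIP_VERSION itself.
#if defined(HIP_VERSION_MAJOR) && defined(HIP_VERSION_MINOR)
#define TORCH_HIP_VERSION (HIP_VERSION_MAJOR * 100 + HIP_VERSION_MINOR)
#endif

// Usage: guard version-dependent code paths.
#if TORCH_HIP_VERSION >= 403  // e.g. ROCm 4.3 and newer
// ... new-ROCm code path ...
#endif
```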

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62786

Reviewed By: bdhirsh

Differential Revision: D30281682

Pulled By: seemethere

fbshipit-source-id: e41e69fb9e13de5ddd1af99ba5bbdcbb7b64b673
2021-08-13 15:00:43 -07:00
Sam Estep
8c798e0622 Forbid trailing whitespace (#53406)
Summary:
Context: https://github.com/pytorch/pytorch/pull/53299#discussion_r587882857

These are the only hand-written parts of this diff:
- the addition to `.github/workflows/lint.yml`
- the file endings changed in these four files (to appease FB-internal land-blocking lints):
  - `GLOSSARY.md`
  - `aten/src/ATen/core/op_registration/README.md`
  - `scripts/README.md`
  - `torch/csrc/jit/codegen/fuser/README.md`

The rest was generated by running this command (on macOS):
```
git grep -I -l ' $' -- . ':(exclude)**/contrib/**' ':(exclude)third_party' | xargs gsed -i 's/ *$//'
```

I looked over the auto-generated changes and didn't see anything that looked problematic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/53406

Test Plan:
This run (after adding the lint but before removing existing trailing spaces) failed:
- https://github.com/pytorch/pytorch/runs/2043032377

This run (on the tip of this PR) succeeded:
- https://github.com/pytorch/pytorch/runs/2043296348

Reviewed By: walterddr, seemethere

Differential Revision: D26856620

Pulled By: samestep

fbshipit-source-id: 3f0de7f7c2e4b0f1c089eac9b5085a58dd7e0d97
2021-03-05 17:22:55 -08:00
lcskrishna
302cf6835e [ROCm][Caffe2] Enable MIOpen 3D Pooling (#38260)
Summary:
This PR contains the following updates:
1. MIOpen 3D pooling enabled in Caffe2 (a descriptor sketch follows below).
2. Refactored the MIOpen pooling code in Caffe2.
3. Enabled unit test cases for 3D pooling.
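
A hedged sketch of the descriptor setup that 3D pooling implies, assuming MIOpen's N-dimensional pooling API is available; the PR's operator code wraps this differently:

```cpp
#include <miopen/miopen.h>

// Sketch (not the PR's code): build a 3D max-pooling descriptor for NCDHW
// input using MIOpen's Nd pooling API.
miopenPoolingDescriptor_t Make3dMaxPoolDesc() {
  miopenPoolingDescriptor_t pool_desc;
  miopenCreatePoolingDescriptor(&pool_desc);
  int window[3]  = {2, 2, 2};  // kernel depth, height, width
  int pads[3]    = {0, 0, 0};
  int strides[3] = {2, 2, 2};
  miopenSetNdPoolingDescriptor(
      pool_desc, miopenPoolingMax, /*nbDims=*/3, window, pads, strides);
  return pool_desc;
}
```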

CC: ezyang jeffdaily ashishfarmer

Pull Request resolved: https://github.com/pytorch/pytorch/pull/38260

Differential Revision: D21524754

Pulled By: xw285cornell

fbshipit-source-id: ddfe09dc585cd61e42eee22eff8348d326fd0c3b
2020-07-08 17:42:55 -07:00
Shai Szulanski
0ddaaf6a92 [codemod][caffe2] Run clang-format - 5/7
Summary:
This directory is opted-in to clang-format but is not format-clean. This blocks continuous formatting from being enabled on fbcode, and causes hassle for other codemods that leave inconsistent formatting. This diff runs clang-format, which is widely used and considered safe.

If you are unhappy with the formatting of a particular block, please *accept this diff* and then in a stacked commit undo the change and wrap that code in `// clang-format off` and `// clang-format on`, or `/* clang-format off */` and `/* clang-format on */`.
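
For example, hand-formatted code can be protected like this (an illustrative snippet, not from the diff):

```cpp
// clang-format off
const int kIdentity[3][3] = { {1, 0, 0},
                              {0, 1, 0},
                              {0, 0, 1} };  // manual alignment preserved
// clang-format on
```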

drop-conflicts

Test Plan: sandcastle it

Reviewed By: jerryzh168

Differential Revision: D22311706

fbshipit-source-id: 1ca59a82e96156a4a5dfad70ba3e64d44c5e762a
2020-06-30 15:45:11 -07:00
Johannes M Dieterich
6ade7e3a15 [ROCm] Enable 3D convolutions through ROCm (#33067)
Summary:
For both the Caffe2 and PyTorch backends, enable 3D convolutions through MIOpen.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33067

Reviewed By: BIT-silence

Differential Revision: D19880495

Pulled By: bddppq

fbshipit-source-id: 8f6f970910654c1c5aa871b48a04c1054875691c
2020-02-14 13:19:10 -08:00
Chaitanya Sri Krishna Lolla
2635055229 [ROCm] Enable 3D batch norms through MIOpen (#33262)
Summary:
Enable test for Caffe2
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33262

Differential Revision: D19880486

Pulled By: bddppq

fbshipit-source-id: af663a11137a53302e55198f38117ab6bdc9ec89
2020-02-13 11:29:51 -08:00
Peter Yeh
65a3dbdfb0 Remove hip device sync in miopen Conv implementation (#21791)
Summary:
xw285cornell bddppq
Note there are other optimizations coming.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21791

Differential Revision: D15829238

Pulled By: bddppq

fbshipit-source-id: 66c62c646f315d65b4e432ca20890faded843db4
2019-06-14 12:32:50 -07:00
Junjie Bai
814c1df29a Fix caffe2 miopen conv transpose gradient op for case of no dX gradient
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/18809

Reviewed By: ezyang

Differential Revision: D14759762

Pulled By: bddppq

fbshipit-source-id: ff795b7e58c82f67a1d7284b5ab06b0e0e5fd3ae
2019-04-04 17:29:30 -07:00
Jerry Zhang
ac87488bd3 Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize (#17764)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17764

Original commit changeset: f1923fdca4a1

Reverting the int8 ops fixes the original runtime regression.
We'll ignore the memory regression since it is flaky; see D14228484.

Reviewed By: dzhulgakov

Differential Revision: D13885233

fbshipit-source-id: ccbe4b94acb44b7b4cb3ae4d73e3f6091e1e1195
2019-03-07 18:38:53 -08:00
Xiaomeng Yang
4ae9ab24b6 Update conv_base to support empty batch (#16603)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16603

Update conv_base to support empty batch

Reviewed By: houseroad

Differential Revision: D13894111

fbshipit-source-id: fc4370ff16ba6046f374e77bd845d28e6af05ea3
2019-01-31 23:46:18 -08:00
Jerry Zhang
2af95d8e3e Back out "[pt1][tensor] Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize" (#16516)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16516

Original commit changeset: 64abce3dbaed

Reviewed By: dzhulgakov

Differential Revision: D13863715

fbshipit-source-id: f1923fdca4a1a82768d9c280a8493ff15a7eb2ba
2019-01-30 12:50:38 -08:00
Jerry Zhang
52ca4b86db Change SetOutputSize in ConvTransposeUnpoolBaseOp (#16179)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16179

to avoid passing a partially initialized Tensor around.

Reviewed By: ezyang

Differential Revision: D13744009

fbshipit-source-id: 4c545765e1cd164b3e87ce08ec4c1cb1e37e2b8f
2019-01-28 18:28:11 -08:00
Jerry Zhang
ff963d4b9f Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize (#16273)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16273

Previously we had SetOutputSize, which accepted a partially initialized output Tensor and set it to the correct size;
this diff changes it to GetOutputSize, which returns the correct size instead.
e.g.
```
auto* Y = Output(0);
ConvPoolOp<Context>::SetOutputSize(X, Y, channels);
...
Y->mutable_data<T>...
```
-->
```
auto sizes = ConvPoolOp<Context>::GetOutputSize(X, channels);
auto* Y = Output(0, sizes, at::dtype<T>());
```

Reviewed By: dzhulgakov

Differential Revision: D13736281

fbshipit-source-id: 64abce3dbaed0b375098463333dfd0ea5a3b1945
2019-01-28 15:56:34 -08:00
Xiaomeng Yang
0a2d14dd7c Optimize SpatialBNOp on GPU (#16395)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16395

Optimize SpatialBNOp on GPU

i-am-not-moving-c2-to-c10

Reviewed By: houseroad

Differential Revision: D13829833

fbshipit-source-id: 04d2a63e8e9830c4c39a91cf87fcd7aa765dc55f
2019-01-28 09:36:45 -08:00
Hoa Dinh
69b5ae4c54 Revert D13747581: Optimize SpatialBN on GPU
Differential Revision:
D13747581

Original commit changeset: 48a885a240ef

fbshipit-source-id: 58cec6023843d7459865eb80c9db8dac463cb96c
2019-01-24 15:26:37 -08:00
Xiaomeng Yang
45c3cc9174 Optimize SpatialBN on GPU (#16202)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16202

Optimize SpatialBN on GPU

Reviewed By: houseroad

Differential Revision: D13747581

fbshipit-source-id: 48a885a240ef2a325235e8f89ebbe50e7c780c84
2019-01-24 02:55:10 -08:00
Jerry Zhang
5e72e99c86 Remaining Tensor API fixes - dims() -> sizes() (#15743)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15743

Remaining fixes so that D12812029 will compile

Reviewed By: dzhulgakov

Differential Revision: D13535559

fbshipit-source-id: 2c8b3403570c8c35ac8efe2d827233abc0e6e0d1
2019-01-15 18:42:02 -08:00
Junjie Bai
692caa7211 Revert D13598894: [pytorch][PR] [Caffe2] [ROCm] Use correct workspace alloc call in MIOpen conv operator
Differential Revision:
D13598894

Original commit changeset: 44886161abdf

fbshipit-source-id: 6c6057136f1ea741fcd1734695356709aeb4bf12
2019-01-09 10:03:50 -08:00
ashishfarmer
cc402d8fa1 Use correct workspace alloc call in MIOpen conv operator (#15712)
Summary:
This PR contains changes for:
1. Using the memory allocator from HIPContext when allocating workspace for the MIOpen conv and transpose_conv operators, rather than a direct HIP memory alloc (see the sketch below)
2. Minor cleanup and removal of an unnecessary sync call from the MIOpen conv op
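
A hedged sketch of the contrast; the HIPContext allocator call is an assumption by analogy with Caffe2's context API and may not match the PR's exact names:

```cpp
#include <hip/hip_runtime.h>

// Before (illustrative): workspace allocated with a raw HIP call, invisible
// to Caffe2's allocator bookkeeping.
void* RawWorkspace(size_t nbytes) {
  void* ws = nullptr;
  hipMalloc(&ws, nbytes);  // caller must remember to hipFree(ws)
  return ws;
}

// After (hedged; HIPContext::New is assumed here for illustration): let the
// context's allocator own the workspace so it participates in Caffe2's
// memory management:
//   auto ws = HIPContext::New(nbytes);
```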

Differential Revision: D13598894

Pulled By: bddppq

fbshipit-source-id: 44886161abdf91cd29c7c93b3e23620e1b09c7c9
2019-01-08 11:38:45 -08:00
rohithkrn
11a9248d01 Enable fp16 for MIOPEN operators in Caffe2 (#14905)
Summary:
This PR enables fp16 MIOPEN operators in Caffe2.
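
Mechanically, enabling fp16 in MIOpen ops mostly means creating tensor descriptors with miopenHalf instead of miopenFloat; a minimal sketch (dimensions are arbitrary examples):

```cpp
#include <miopen/miopen.h>

// Sketch: an fp16 tensor descriptor is the same call as fp32, with
// miopenHalf as the data type.
miopenTensorDescriptor_t MakeHalfDesc() {
  miopenTensorDescriptor_t desc;
  miopenCreateTensorDescriptor(&desc);
  miopenSet4dTensorDescriptor(desc, miopenHalf,
                              /*n=*/32, /*c=*/64, /*h=*/56, /*w=*/56);
  return desc;
}
```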
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14905

Differential Revision: D13383439

Pulled By: bddppq

fbshipit-source-id: 840afa8d08bef2952ca0039dee2423f1542bb330
2018-12-07 17:26:44 -08:00
Junjie Bai
0d7a986da1 Change hip filename extension to .hip (#14036)
Summary:
xw285cornell

- To give hip files a unique filename extension, we change them from _hip.cc to .hip (it's the only blessed option other than .cu in hipcc 3d51a1fb01/bin/hipcc (L552)).
- Change to use the host compiler to compile .cc|.cpp files. Previously we used hcc to compile them, which is unnecessary.
- Change the hipify script to not replace "gpu" with "hip" in the filenames of the generated hipified files. Previously we did this because hcc had a bug when linking files that have the same filename. We have now changed to use the host linker for linking, so this is no longer necessary.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14036

Reviewed By: xw285cornell

Differential Revision: D13091813

Pulled By: bddppq

fbshipit-source-id: ea3d887751d8abb39d75f5d5104aa66ce66b9ee0
2018-11-16 11:55:59 -08:00
bddppq
874a8a321b Fix out-of-order member field initializations (#14015)
Summary:
xw285cornell

Unfortunately it's not easy to add the -Werror=reorder flag, since there are out-of-order initializations in the thrust headers as well, and the rocm cmake macro hip_include_directories doesn't offer a way to include headers as external headers.
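
A minimal illustration of the warning in question (not from this codebase):

```cpp
struct Widget {
  int a;
  int b;
  // Members are initialized in declaration order (a, then b) regardless of
  // the order written in the initializer list, so this triggers -Wreorder;
  // worse, a(b + 1) reads b before b has been initialized.
  Widget() : b(1), a(b + 1) {}
};
```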
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14015

Reviewed By: soumith

Differential Revision: D13081104

Pulled By: bddppq

fbshipit-source-id: 2540421cb29cf556c79f2d86c460bde6ea5a182e
2018-11-15 17:11:50 -08:00
Ashish
f4e502a8c5 Added MIOpen conv transpose op (#13938)
Summary:
This pull request contains changes for:
1. Removing ConvTranspose related changes from caffe2/operators/hip/conv_op_miopen.cc
2. Adding the file caffe2/operators/hip/conv_transpose_op_miopen.cc
3. Modifying the tests to run convTranspose op using MIOpen engine

Differential Revision: D13055099

Pulled By: bddppq

fbshipit-source-id: ca284f8f9a073005b22013c375cc958257815865
2018-11-13 21:01:52 -08:00
Xiaodong Wang
fbd497f169 Fix initialization order in MIOpen file (#13264)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13264

Simply change the initialization order to make hcc happy; otherwise we would have to add -Wno-error=reorder.

Reviewed By: bddppq

Differential Revision: D12827635

fbshipit-source-id: 6f4cd67209f2aa8ae85cfbdc53df0efb3b3cc473
2018-10-29 16:48:54 -07:00
Ashish
df8c5a3572 Refactoring MIOpen activation ops (#13187)
Summary:
This pull request contains changes for:
1. Adding a generalized MIOpen activation class to be used by the activation operators (see the sketch below)
2. Refactoring the MIOpen ReLU op to use the new class
3. Adding ELU, Tanh, and Sigmoid MIOpen ops
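
A hedged sketch of why one generalized class suffices: in MIOpen these activations are all the same descriptor call with a different mode (this is not the PR's class, just the underlying API pattern):

```cpp
#include <miopen/miopen.h>

// Sketch: one setup path, parameterized by mode, covers ReLU/ELU/Tanh/Sigmoid.
miopenActivationDescriptor_t MakeActDesc(miopenActivationMode_t mode,
                                         double alpha = 0.0) {
  miopenActivationDescriptor_t act;
  miopenCreateActivationDescriptor(&act);
  // alpha only matters for some modes (e.g. ELU); beta/gamma are unused here.
  miopenSetActivationDescriptor(act, mode, alpha, /*beta=*/0.0, /*gamma=*/0.0);
  return act;
}

// e.g. MakeActDesc(miopenActivationRELU), MakeActDesc(miopenActivationELU, 1.0),
//      MakeActDesc(miopenActivationTANH), MakeActDesc(miopenActivationLOGISTIC)
```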

Differential Revision: D12810112

Pulled By: bddppq

fbshipit-source-id: 9519b3a0cd733b906bcba5d8948be089029c43ac
2018-10-27 00:22:54 -07:00
Junjie Bai
ccfaf46431 Make CUDNN an alias of MIOPEN for HIP ops (#12278)
Summary:
This is mostly for reusing all the cudnn test cases in our python operator_tests.
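
A hedged sketch of the aliasing; the macro name is assumed by analogy with Caffe2's REGISTER_CUDA_OPERATOR_WITH_ENGINE, and the PR's exact mechanism may differ:

```cpp
// Register the MIOpen conv op under both engine names so Python tests that
// request engine="CUDNN" run unchanged on ROCm (names here are assumptions).
REGISTER_HIP_OPERATOR_WITH_ENGINE(Conv, MIOPEN, MIOPENConvOp);
REGISTER_HIP_OPERATOR_WITH_ENGINE(Conv, CUDNN, MIOPENConvOp);  // CUDNN alias
```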
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12278

Differential Revision: D10842592

Pulled By: bddppq

fbshipit-source-id: 4b3ed91fca64ff02060837b3270393bc2f9a9898
2018-10-24 17:07:31 -07:00
Ashish
d0df1e8ec9 Remove MIOpen Softmax operator (#12727)
Summary:
This PR contains changes for:
1. Removing the MIOpen softmax operator. It will be added back later with the required functionality.
2. Enabling softmax_ops_test on the ROCm target

Differential Revision: D10416079

Pulled By: bddppq

fbshipit-source-id: 288099903aa9e0c3378e068fffe6e7d6a9a84841
2018-10-16 16:45:46 -07:00
Edward Yang
54d9823d00 Make caffe2::Tensor::dims() return an IntList instead of a const vector& (#12180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12180

I had to fix a lot of call sites, because a lot of places assume that
you can actually get a const vector&, and if the internal representation
of sizes in a tensor is NOT a vector, it's not possible to fulfill
this API contract.

Framework changes:
- I deleted TensorImpl::dims(); caffe2::Tensor::dims() just forwards to
  sizes() now.
- De-templatized SetDims; now it is an explicit list of ArrayRef and
  variadic overloads.  This makes implicit conversions work again,
  so I don't need to explicitly list the std::vector cases too.
  - As a knock-on effect, this causes Reset() to accept at::IntList as well as
    const std::vector<int64_t>&
- Edited variadic overloads of SetDims to all forward to the underlying
  arbitrary-dim implementation, reducing code duplication. (It's probably
  marginally less efficient in the new world.)
- Replace Tensor constructor accepting const std::vector<int64_t>& with at::IntList
- Make MKLTensor accept ArrayRef along with vector in constructor and
  Reset (unfortunately, no implicit conversions here, since it's templated on
  index type.)
- There are a few other places, like cudnn, where I changed functions
  that previously took const std::vector<int64_t>& to take at::IntList
  instead.

Classification of call site changes:
- 'const std::vector<int64_t>& x_dims = x.dims()' ==>
  'at::IntList x_dims = x.dims()'
- 'std::vector<int64_t> x_dims = x.dims()' ==>
  'std::vector<int64_t> x_dims = x.dims().vec()' (we need a copy!)
  Usually this is because we're about to mutably modify the vector
  to compute some new dimension.  However, it also very commonly occurs in the
  form: 'x_dims_ = x.dims()' because we frequently cache sizes in operators.
- Instead of constructing std::vector<int64_t>{blah, blah}, construct an
  at::IntList directly

ArrayRef changes:
- cbegin()/cend() iterators; they operate the same as begin()/end() because
  everything on ArrayRef is const.
- Moved operator<< into ArrayRef.h, so that it's always available when
  working with ArrayRef.  I also templated it, so it now works on an
  ArrayRef of any type.
- Add operator== overload for ArrayRef, and also add variants to permit
  comparison of ArrayRef with std::vector, a very common operation.
  (The non-templated version of operator== can get these automatically
  via implicit conversion, but with templates C++ refuses to do
  any implicit conversions.)

I'm planning to audit all dims() call sites to make sure they don't
expect 'auto x = t.dims()' to give you an x whose lifetime can validly
outlive the tensor.
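
The hazard being audited, as a hedged sketch (makeTensor is a hypothetical helper):

```cpp
// Sketch: an ArrayRef (at::IntList) is a non-owning view, so it must not
// outlive the tensor whose sizes it refers to.
at::IntList getDanglingDims() {
  caffe2::Tensor t = makeTensor();  // hypothetical helper
  at::IntList dims = t.dims();      // view into t's size storage
  return dims;                      // BUG: t is destroyed here; the view dangles
}
// Safe alternative: return t.dims().vec(); which copies into a std::vector.
```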

I opted not to do a dims() to sizes() rename, because dims() also matches
the protobufs accessor.  Bad news!

Reviewed By: jerryzh168

Differential Revision: D10111759

fbshipit-source-id: a2a81dc4b92c22ad4b3b8ef4077a7e97b6479452
2018-10-05 15:57:41 -07:00
Ashish
c029c839a1 MIOpen 1.5 group conv API integration (#12273)
Summary:
This PR contains changes for:
1. Group convolutions introduced in MIOpen 1.5
2. Checks to initialize MIOpen conv operator descriptors only when needed (inputs or weights changed)
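
A hedged sketch of the MIOpen 1.5 group-convolution API mentioned above (descriptor values are arbitrary; the PR's operator code wraps this):

```cpp
#include <miopen/miopen.h>

// Sketch: a grouped convolution descriptor via miopenSetConvolutionGroupCount,
// the group API referenced in the summary.
miopenConvolutionDescriptor_t MakeGroupedConvDesc(int groups) {
  miopenConvolutionDescriptor_t conv_desc;
  miopenCreateConvolutionDescriptor(&conv_desc);
  miopenInitConvolutionDescriptor(conv_desc, miopenConvolution,
                                  /*pad_h=*/1, /*pad_w=*/1,
                                  /*stride_h=*/1, /*stride_w=*/1,
                                  /*dilation_h=*/1, /*dilation_w=*/1);
  miopenSetConvolutionGroupCount(conv_desc, groups);
  return conv_desc;
}
```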

Differential Revision: D10174611

Pulled By: bddppq

fbshipit-source-id: cd3d61fae350c4a5e540ce1a6e08012e0e2689fe
2018-10-03 12:26:58 -07:00
Christian Puhrsch
a6630e25af Remove many caffe2::TIndex and replace them with int64_t (#11943)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11943

See title

Reviewed By: ezyang

Differential Revision: D9992645

fbshipit-source-id: e8f80d6ea762971513e5e8072975ceea53e1f11a
2018-09-22 18:11:04 -07:00
Roy Li
30521a37ad codemod: caffe::float16 -> at::Half (#11785)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11785

Replace each instance of float16 with Half.

Reviewed By: Yangqing

Differential Revision: D9892158

fbshipit-source-id: b9225ca7bd5c84fd1c04a9d24b026c8b6cbff120
2018-09-20 18:55:19 -07:00
Ashish
c7751f4df0 MIOpen bug fixes and performance enhancements (#11766)
Summary:
This PR contains changes for:
1. Performance enhancements for group conv using MIOpen
2. Performance enhancements by removing unnecessary computations while running pooling through MIOpen
3. Added a check for bwdData computation while running the MIOpen convGradient operator
4. Fix in the MIOpen poolingGradient operator to compute the window size for the global pooling case
5. Minor code cleanup in the MIOpen spatial batch norm operator

Differential Revision: D9979050

Pulled By: bddppq

fbshipit-source-id: fabc7a44a2f9ca0307d99564d1ce8fe1de9a6fbb
2018-09-20 15:31:46 -07:00
Xiaomeng Yang
4c8cc36e34 Fix igios build (#11392)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11392

Fix igios build

Reviewed By: houseroad

Differential Revision: D9720833

fbshipit-source-id: 33acc3c658c22addd4bad142433824076233e901
2018-09-07 15:55:23 -07:00
Xiaomeng Yang
ec5404a449 Add cuda version of SpatialBNOp also optimize SpatialBN on CPU (#10888)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10888

Add cuda version of SpatialBNOp also optimize SpatialBN on CPU

Reviewed By: houseroad

Differential Revision: D9512435

fbshipit-source-id: 6f828c88d56d30dc9a2f98a297a161c35cc511b1
2018-09-06 18:26:13 -07:00
root
c3fe071483 Update hip files (#9826)
Summary:
The goal of this PR is to update the hip files to reflect relevant changes in cuda source files.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9826

Differential Revision: D9032840

Pulled By: bddppq

fbshipit-source-id: 504e55c46308eebfee3c9a7beea1f294fe03470f
2018-07-27 16:54:39 -07:00
Ashish
e70fc145a9 MIOpen fixes for Caffe2 (#9842)
Summary:
The PR contains:
- Fixes for running the MIOpen conv operator in a multi-worker scenario, along with a performance fix
- A fix for a typo in the MIOpen pool op, plus some extra checks for the MIOpen spatial BN op

bddppq
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9842

Differential Revision: D9012512

Pulled By: bddppq

fbshipit-source-id: 270e1323c20fbfbc4b725f9a4ff34cd073ddaaa8
2018-07-26 02:42:26 -07:00
Peter Yeh
54db14e390 HIP Operators Generator--> HipOpG (#9322)
Summary:
The goal of this PR is to add infrastructure to convert (hipify) CUDA ops into [HIP](https://github.com/ROCm-Developer-Tools/HIP) ops at **compile** time.

Note that HIP ops, which are portable C++ code, can run on both AMD and NVIDIA platforms.
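
A hedged illustration of the kind of source-to-source rename hipify performs (both calls are real CUDA/HIP APIs; the PR's actual rewrite rules may differ):

```cpp
#include <hip/hip_runtime.h>

// After hipify, the CUDA call
//   cudaMemcpyAsync(dst, src, n, cudaMemcpyDeviceToDevice, stream);
// becomes the mechanically renamed HIP equivalent:
void copyAsync(void* dst, const void* src, size_t n, hipStream_t stream) {
  hipMemcpyAsync(dst, src, n, hipMemcpyDeviceToDevice, stream);
}
```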
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9322

Differential Revision: D8884707

Pulled By: bddppq

fbshipit-source-id: dabc6319546002c308c10528238e6684f7aef0f8
2018-07-19 00:26:06 -07:00
Peter Yeh
c37e5b7137 [Caffe2] Enable AMD/MIOPEN ops for Caffe2 (#8306)
* Add hip support for caffe2 core

* Add MIOPEN header/wrapper to caffe2 core

* Add HIP device into caffe2 PB

* top level makefile change for rocm/hip

* makefile scaffolding for AMD/RocM/HIP

* Makefile scaffolding for AMD/RocM/HIP; add makefile/utility for HIP files

* caffe2 PB update for AMD/ROCM HIP device

* Add AMD/RocM/Thrust dependency

* HIP threadpool update

* Fix makefile macro

* makefile fix: duplicate test/binary name

* makefile clean-up

* makefile clean-up

* add HIP operator registry

* add utilities for hip device

* Add USE_HIP to config summary

* makefile fix for BUILD_TEST

* merge latest

* Fix indentation

* code clean-up

* Guard builds without HIP and use the same cmake script as PyTorch to find HIP

* Setup rocm environment variables in build.sh (ideally should be done in the docker images)

* setup locale

* set HIP_PLATFORM

* Revert "set HIP_PLATFORM"

This reverts commit 8ec58db2b390c9259220c49fa34cd403568300ad.

* continue the build script environment variables mess

* HCC_AMDGPU_TARGET

* Cleanup the mess, has been fixed in the latest docker images

* Assign protobuf field hip_gpu_id a new field number for backward compatibility

* change name to avoid conflict

* Fix duplicated thread pool flag

* Refactor cmake files to not add hip includes and libs globally

* Fix the wrong usage of environment variables detection in cmake

* Add MIOPEN CNN operators

* Revert "Add MIOPEN CNN operators"

This reverts commit 6e89ad4385b5b8967a7854c4adda52c012cee42a.

* Add MIOPEN pooling operator

* Add MIOPEN activation operator

* Add MIOPEN softmax operator

* Add MIOPEN spatial batch norm operator

* Add MIOPEN local response normalization operator

* Add MIOPEN conv operator

* Clean-up LRN ops

* enable fp16 in MIOPEN pool ops

* Enable fp16 for MIOPEN relu op

* Enable fp16 for MIOPEN spatial batch norm op

* code clean-up

* revert float16 support

* Create Caffe2 python binding for AMD/ROCM/HIP

* Add op fallback for HIP operator

* add hip src/test files in cmake

* exclude hip src/test files

* fix python binding for hip backend

* fix MIOPEN pooling op workspace

* hack to compile miopen operators

* fix include path for MIOPEN ops

* Fix include path

* Add HIP math utilities

* Fix path for HIP math utils

* cmake fix

* Cmake fix / hipcc for hip files

* suppress hipcc warning

* cmake fix / replace USE_HIP with USE_ROCM

* revert LoadHIP.cmake change

* fix include for thrust/cub-hip

* include path fix for conversion.h

* Updated with latest upstream changes

* clang format fixes

* Context_hip updates

* Fixed typo in rocblas handle get function

* Updated hipified math utils

* Updated math hip test util

* Updated context hip test

* Updated common_hip

* Updated net async dag for HIP

* Added MIOPEN in operator hip test

* fix

* C2 dependencies clean-up

* fix include path for building custom protobuf

* Decouple miopen pool op and conv_pool_op base

* cmake refactor

* fix operator_hip_test

* move all hip/miopen ops files into caffe2/operators/hip

* sanitize cmake

* permission issue

* remove extra parenthesis

* remove artifact from resolving merge conflict

* cont. sanitize cmake files

* fix syntax error

* sanitize conversion.h

* .

* Revert "."

This reverts commit 56020cb0e996a31ae27bf1f8f491955ed0b121b9.

* clang-format
2018-06-13 04:00:39 -07:00