pytorch/caffe2/operators/scale_op_gpu.cc
Roy Li 30521a37ad codemod: caffe::float16 -> at::Half (#11785)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11785

Replace each instance of float16 with Half.

Reviewed By: Yangqing

Differential Revision: D9892158

fbshipit-source-id: b9225ca7bd5c84fd1c04a9d24b026c8b6cbff120
2018-09-20 18:55:19 -07:00

#include "caffe2/core/context_gpu.h"
#include "caffe2/operators/scale_op.h"
namespace caffe2 {
template <>
bool ScaleOp<CUDAContext>::RunOnDevice() {
return DispatchHelper<TensorTypes<at::Half, float>>::call(this, Input(0));
}
REGISTER_CUDA_OPERATOR(Scale, ScaleOp<CUDAContext>);
} // namespace caffe2
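
The operator above dispatches on the element type of its input: DispatchHelper walks the TensorTypes list and invokes the typed kernel for the first matching type, which is why the codemod only had to swap the half-precision entry in that list (caffe2::float16 -> at::Half). The following standalone sketch illustrates this dispatch pattern in plain C++; it is a simplified analogue written for this note, not caffe2's actual DispatchHelper, and the names Dispatch, FakeScaleOp, and the local Half struct are illustrative stand-ins.

// Simplified sketch of type-list dispatch, assuming a runtime type tag
// (std::type_info here) stands in for a tensor's element-type metadata.
#include <cstdio>
#include <typeinfo>

struct Half { unsigned short bits; };  // stand-in for at::Half's 16-bit storage

// Try each candidate type in turn; call DoRunWithType<T>() on the first match.
template <typename... Types>
struct Dispatch;

template <typename T, typename... Rest>
struct Dispatch<T, Rest...> {
  template <typename Op>
  static bool call(Op* op, const std::type_info& meta) {
    if (meta == typeid(T)) {
      return op->template DoRunWithType<T>();
    }
    return Dispatch<Rest...>::call(op, meta);
  }
};

template <>
struct Dispatch<> {
  template <typename Op>
  static bool call(Op*, const std::type_info&) {
    return false;  // no supported element type matched
  }
};

struct FakeScaleOp {
  template <typename T>
  bool DoRunWithType() {
    std::printf("running scale kernel for type %s\n", typeid(T).name());
    return true;
  }
};

int main() {
  FakeScaleOp op;
  // The half-precision entry in the type list is the local Half stand-in here;
  // in the real operator it is at::Half (formerly caffe2::float16).
  Dispatch<Half, float>::call(&op, typeid(float));
  return 0;
}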