pytorch/caffe2/operators/copy_op.h
Junjie Bai 246f5c412e Revert "Tensor construction codemod(raw_mutable_data) (#16373)" (#18680)
Summary:
This reverts commit d73c830e23.

We have observed a significant perf drop when training ResNext101 with multiple AMD GPUs:

Before:
https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-clang7-rocmdeb-ubuntu16.04-bench/1636/console
2-GPU ResNext training got 150~160 imgs/sec
4-GPU ResNext training got 270~280 imgs/sec

After:
https://ci.pytorch.org/jenkins/job/caffe2-builds/job/py2-clang7-rocmdeb-ubuntu16.04-bench/1637/console
Both 2- and 4-GPU ResNext training drop to 110~120 imgs/sec

Similar perf drops are seen on ResNet50 training jobs as well.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/18680

Differential Revision: D14702941

Pulled By: bddppq

fbshipit-source-id: 828141805afc23f25c08d4a2eb6d4b99f817c128
2019-04-01 14:39:13 -07:00


#ifndef CAFFE2_OPERATORS_COPY_OP_H_
#define CAFFE2_OPERATORS_COPY_OP_H_

#include "caffe2/core/context.h"
#include "caffe2/core/operator.h"

namespace caffe2 {

template <class Context, class DstContext, class SrcContext>
class CopyOp : public Operator<Context> {
 public:
  USE_OPERATOR_CONTEXT_FUNCTIONS;
  USE_SIMPLE_CTOR_DTOR(CopyOp)

  bool RunOnDevice() override {
    // Read the input tensor from the source device and get the output tensor
    // on the destination device, resized to match the input.
    auto& input = this->template Input<Tensor>(0, SrcContext::GetDeviceType());
    auto* output =
        this->template Output<Tensor>(0, DstContext::GetDeviceType());
    output->ResizeLike(input);
    // Copy the raw elements from the source context to the destination
    // context; raw_mutable_data(dtype) allocates the output storage.
    this->context_.template CopyItems<SrcContext, DstContext>(
        input.dtype(),
        input.numel(),
        input.raw_data(),
        output->raw_mutable_data(input.dtype()));
    return true;
  }
};

// Thin wrapper around CopyOp that simply forwards its constructor arguments;
// device-specific builds can provide their own specializations.
template <class Context, class DstContext, class SrcContext>
class CopyOnDeviceLikeOp : public CopyOp<Context, DstContext, SrcContext> {
 public:
  template <class... Args>
  explicit CopyOnDeviceLikeOp(Args&&... args)
      : CopyOp<Context, DstContext, SrcContext>(std::forward<Args>(args)...) {}
};

} // namespace caffe2

#endif // CAFFE2_OPERATORS_COPY_OP_H_
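
For context, a minimal sketch of how an operator built from this header might be exercised from C++ follows. It assumes the "Copy" operator name is registered for CPUContext elsewhere (the registrations live in copy_op.cc and the device-specific source files, not in this header); the function RunCopyExample and the blob names "X"/"Y" are purely illustrative.

// Minimal usage sketch (not part of copy_op.h). Assumes the "Copy" operator
// is registered for CPU elsewhere; RunCopyExample and the blob names "X"/"Y"
// are illustrative only.
#include <memory>

#include "caffe2/core/operator.h"
#include "caffe2/core/workspace.h"

namespace caffe2 {

void RunCopyExample() {
  Workspace ws;

  // Create an input tensor on the CPU and fill it with some values.
  auto* input = BlobGetMutableTensor(ws.CreateBlob("X"), CPU);
  input->Resize(2, 3);
  float* data = input->mutable_data<float>();
  for (int i = 0; i < input->numel(); ++i) {
    data[i] = static_cast<float>(i);
  }

  // Build an OperatorDef for the Copy operator and run it; the output blob
  // "Y" ends up with the same shape, dtype, and contents as "X".
  OperatorDef def;
  def.set_type("Copy");
  def.add_input("X");
  def.add_output("Y");
  std::unique_ptr<OperatorBase> op = CreateOperator(def, &ws);
  CAFFE_ENFORCE(op->Run());
}

} // namespace caffe2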