Summary:
This is the last step in the custom operator implementation: providing a way to build custom ops from both C++ and Python. For this I:
1. Created a `FindTorch.cmake` (taken largely from ebetica) with a CMake function to easily create simple custom op libraries,
2. Created a `torch/op.h` header for easy inclusion of the necessary headers,
3. Created a test directory `pytorch/test/custom_operator` which includes the basic setup for a custom op:
1. It defines an op in `op.{h,cpp}`
2. Registers it with the JIT using `RegisterOperators`
3. Builds it into a shared library via a `CMakeLists.txt`
4. Binds it into Python using a `setup.py`, reusing the C++ extension setup that we already have. No extra work, yay! (A sketch follows below.)
The pure C++ and the Python builds are separate and not coupled in any way.
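For step 4, here is a minimal sketch of what such a `setup.py` could look like, leaning on the existing `torch.utils.cpp_extension` machinery. The module name `custom_op`, the source list, and the binding file `bind.cpp` (which would expose `custom_op` to Python, e.g. via pybind11) are illustrative assumptions, not the actual test sources:

# setup.py -- minimal sketch; module name and source files are assumptions.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name='custom_op',
    ext_modules=[
        # CppExtension sets up the include paths and libraries needed to
        # compile against the installed PyTorch headers.
        CppExtension('custom_op', ['op.cpp', 'bind.cpp']),
    ],
    cmdclass={
        # BuildExtension supplies the compiler flags matching the PyTorch build.
        'build_ext': BuildExtension,
    },
)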
zdevito soumith dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10226
Differential Revision: D9296839
Pulled By: goldsborough
fbshipit-source-id: 32f74cafb6e3d86cada8dfca8136d0dfb1f197a0
#include <torch/op.h>

#include <cstddef>
#include <vector>

// Returns `repeat` copies of `tensor`, each scaled by `scalar`.
std::vector<at::Tensor> custom_op(
    at::Tensor tensor,
    double scalar,
    int64_t repeat) {
  std::vector<at::Tensor> output;
  output.reserve(repeat);
  for (int64_t i = 0; i < repeat; ++i) {
    output.push_back(tensor * scalar);
  }
  return output;
}

// Registers the operator with the JIT under the name "custom::op".
static torch::RegisterOperators registry("custom::op", &custom_op);
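Once the Python extension from the `setup.py` sketch above is built, the op could be exercised roughly as follows. This is a sketch only: the `custom_op` module and its `custom_op` attribute come from the hypothetical binding, not from the actual test code.

import torch

# Hypothetical extension module produced by the setup.py sketch above.
import custom_op

x = torch.randn(3)
outputs = custom_op.custom_op(x, 2.0, 4)

# The op returns `repeat` copies of the input, each scaled by `scalar`.
assert len(outputs) == 4
for out in outputs:
    assert torch.allclose(out, x * 2.0)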