Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 00:21:07 +01:00
Summary:
This is the last step in the custom operator implementation: providing a way to build from C++ and Python. For this I:
1. Created a `FindTorch.cmake`, taken largely from ebetica, with a CMake function that makes it easy to build simple custom op libraries.
2. Created a `torch/op.h` header for easy inclusion of the necessary headers.
3. Created a test directory `pytorch/test/custom_operator` with the basic setup for a custom op:
1. It defines an op in `op.{h,cpp}`
2. Registers it with the JIT using `RegisterOperators`
3. Builds it into a shared library via a `CMakeLists.txt`
4. Binds it into Python using a `setup.py`. This step reuses the C++ extension setup we already have. No work, yay!
The pure C++ and the Python builds are separate and not coupled in any way.
zdevito soumith dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10226
Differential Revision: D9296839
Pulled By: goldsborough
fbshipit-source-id: 32f74cafb6e3d86cada8dfca8136d0dfb1f197a0
#include "op.h"

#include <cassert>
#include <iostream>
#include <vector>

int main() {
  // Look up the custom operator registered under "custom::op".
  auto& ops = torch::jit::getAllOperatorsFor(
      torch::jit::Symbol::fromQualString("custom::op"));
  assert(ops.size() == 1);

  auto& op = ops.front();
  assert(op->schema().name == "custom::op");

  // Invoke the operator through the JIT's stack calling convention:
  // push the arguments, run the operation, then pop the results.
  torch::jit::Stack stack;
  torch::jit::push(stack, torch::ones(5), 2.0, 3);
  op->getOperation()(stack);
  std::vector<at::Tensor> output;
  torch::jit::pop(stack, output);

  assert(output.size() == 3);
  for (const auto& tensor : output) {
    assert(tensor.allclose(torch::ones(5) * 2));
  }
  std::cout << "success" << std::endl;
}