Commit Graph

9 Commits

Author SHA1 Message Date
Thomas Viehmann
4c3b76c402 Add std::string to the getTypePtr for JIT inference of custom op types (#13683)
Summary:
This allows custom ops to take string parameters.
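For illustration, a minimal sketch of the kind of op this enables. The op name, its body, and the `torch::jit::RegisterOperators` registration spelling are assumptions about the 2018-era API, not code from this commit:

```cpp
#include <torch/script.h>
#include <string>

// Hypothetical custom op taking a std::string argument, whose type the JIT
// schema inference can now resolve via getTypePtr.
torch::Tensor scale_by_mode(torch::Tensor input, std::string mode) {
  // Branch on the string parameter to show it flowing through the schema.
  return mode == "double" ? input * 2 : input.clone();
}

// Static registration under a hypothetical "custom::" namespace.
static auto registry =
    torch::jit::RegisterOperators("custom::scale_by_mode", &scale_by_mode);
```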
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13683

Differential Revision: D13017010

Pulled By: soumith

fbshipit-source-id: 7c40aca7f57ba3f8812d34bc55828ff362c69bd2
2018-11-10 12:58:53 -08:00
Peter Goldsborough
393ad6582d Use torch:: instead of at:: in all C++ APIs (#13523)
Summary:
In TorchScript and C++ extensions we currently advocate a mix of `torch::` and `at::` namespace usage. In the C++ frontend I had instead exported all symbols from `at::` and some from `c10::` into the `torch::` namespace. This is far, far easier for users to understand, and also avoids bugs around creating tensors vs. variables. The same should from now on be true for the TorchScript C++ API (for running and loading models) and all C++ extensions.

Note that since we're just talking about typedefs, this change does not break any existing code.
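As a rough sketch of why nothing breaks (the function here is hypothetical, for illustration only): the `torch::` names alias the `at::` declarations, so the two spellings interoperate freely in user code.

```cpp
#include <torch/torch.h>

// Older extension code written against the at:: spelling...
at::Tensor twice(const at::Tensor& t) {
  return t * 2;
}

void demo() {
  // ...keeps working with values created through the torch:: spelling,
  // since both spellings name the same underlying declarations.
  torch::Tensor x = torch::ones({3});
  at::Tensor y = twice(x);
}
```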

Once this lands I will update stuff in `pytorch/tutorials` too.

zdevito ezyang gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13523

Differential Revision: D12942787

Pulled By: goldsborough

fbshipit-source-id: 76058936bd8707b33d9e5bbc2d0705fc3d820763
2018-11-06 14:32:25 -08:00
Peter Goldsborough
9ea19cb079 Windows CI integration for custom ops (#12928)
Summary:
Resubmission of https://github.com/pytorch/pytorch/pull/11527

ezyang orionr
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12928

Differential Revision: D10501342

Pulled By: goldsborough

fbshipit-source-id: 7ce74795aab2f13efeb38f56ce82f53055f5eade
2018-10-23 09:18:09 -07:00
Will Feng
9473e57eca Revert D10444104: [pytorch][PR] Windows CI integration for custom ops
Differential Revision: D10444104

Original commit changeset: 4c447beeb967

fbshipit-source-id: ead52444aefa27692e3f36dadad986e2313261bd
2018-10-18 14:08:18 -07:00
Peter Goldsborough
12be60cc04 Windows CI integration for custom ops (#11527)
Summary:
This is likely currently broken due to symbol visibility issues, but we will investigate it using this PR.

CC orionr yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11527

Differential Revision: D10444104

Pulled By: goldsborough

fbshipit-source-id: 4c447beeb9671598ecfc846cb5c507ef143459fe
2018-10-18 07:55:05 -07:00
Peter Goldsborough
63c811b3a6 Include some JIT things in C++ docs (#11712)
Summary:
Since we're making parts of the JIT public as part of loading script modules, they should be on the cppdocs website.

Orthogonal: We decided not to export things like `IValue` into the `torch` namespace, so `RegisterOperators` shouldn't be there either.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11712

Differential Revision: D9837578

Pulled By: goldsborough

fbshipit-source-id: 4c06d2fa9dd4b4216951f27424c2ce795febab9c
2018-09-17 23:29:04 -07:00
Peter Goldsborough
7949250295 Fixes for Torch Script C++ API (#11682)
Summary:
A couple of fixes to the TorchScript C++ API that I deem necessary after writing the tutorial:

1. When I was creating the custom op API, I created `torch/op.h` as the one-stop header for creating custom ops. I now notice that there is no good header for the TorchScript C++ story altogether, i.e. when you just want to load a script module in C++ without any custom ops necessarily. The `torch/op.h` header suits that purpose just as well, of course, but I think we should rename it to `torch/script.h`, which seems like a great name for this feature (see the sketch after this list).

2. The CMake API we provided so far defined a bunch of variables like `TORCH_LIBRARY_DIRS` and `TORCH_INCLUDES` and expected users to add those variables to their targets; we also had a CMake function that did that for you automatically. A much smarter way of doing this is to create an `IMPORTED` target for the libtorch library in CMake and add all of this to the link interface of that target. Then all downstream users have to do is `target_link_libraries(my_target torch)` and they get the proper includes, libraries, and compiler flags added to their target. This means we can get rid of the CMake function and all that stuff. orionr AFAIK this is a much, much better way of doing all of this, no?

3. Since we distribute libtorch with `-D_GLIBCXX_USE_CXX11_ABI=0`, dependent libraries must set this flag too. I now add this to the interface compile options of the imported target.

4. Fixes to JIT docs.

These could likely be four different PRs, but given the release I wouldn't mind landing them all ASAP.
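As a usage sketch for point 1 (the file name and program are illustrative, using the 1.0-preview API in which `torch::jit::load` returns a `shared_ptr`): loading a script module in C++ needs nothing custom-op specific.

```cpp
#include <torch/script.h>
#include <memory>

int main() {
  // Deserialize a ScriptModule exported from Python.
  std::shared_ptr<torch::jit::script::Module> module =
      torch::jit::load("model.pt");
  return module != nullptr ? 0 : 1;
}
```

Per point 2, building this should require nothing more than `target_link_libraries(my_target torch)` against the `IMPORTED` target.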

zdevito dzhulgakov soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11682

Differential Revision: D9839431

Pulled By: goldsborough

fbshipit-source-id: fdc47b95f83f22d53e1995aa683e09613b4bfe65
2018-09-17 09:54:50 -07:00
Peter Goldsborough
71ddd837d7 Support custom ops in ScriptModule and tidy up test files (#10610)
Summary:
This PR adds support for using custom ops in ScriptModules, the last step in our custom op strategy. You can now write

```
import torch

torch.ops.load_library('libcustom_ops.so')

class Model(torch.jit.ScriptModule):
    def __init__(self):
        super(Model, self).__init__()

    @torch.jit.script_method
    def forward(self, input):
        return torch.ops.custom.op(input) + 1

model = Model()
model.forward(torch.ones(5)) # Works
model.save("model.pt") # Works
model = torch.jit.load("model.pt") # Works
```

You can then load the `model.pt` in C++ and execute its `forward` method!
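A hedged sketch of that C++ side (paths are illustrative, the header uses the later `torch/script.h` name, and signatures follow the 2018-era API in which `forward` takes a `std::vector<torch::jit::IValue>`):

```cpp
#include <torch/script.h>
#include <vector>

int main() {
  // Linking against libcustom_ops.so statically registers custom::op
  // before the module is deserialized.
  auto module = torch::jit::load("model.pt");
  std::vector<torch::jit::IValue> inputs;
  inputs.emplace_back(torch::ones(5));
  // Runs the scripted forward, including the custom op call inside it.
  at::Tensor output = module->forward(inputs).toTensor();
  return output.numel() == 5 ? 0 : 1;
}
```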

What was missing was that the script compiler didn't know to convert `ops.custom.op` into a `BuiltinFunction`, which then emits a function call. For this I came up with the following strategy inside `torch/csrc/jit/script/init.cpp`:

1. When we access `torch.ops`, we return a `CustomOpValue` (subclass of `PythonValue`), whose purpose is only to return a `CustomOpNamespaceValue` (subclass of `PythonValue`) whenever something under it is accessed.
2. `CustomOpNamespaceValue` will then for each field accessed on it return a `BuiltinFunction`.

This doesn't reduce performance for any calls that are not to `torch.ops` (as opposed to, say, inspecting every function call's name at the call site).

I also had to fix `BuiltinFunction` to not assume the namespace is always `aten::`.

A lot of other changes are just tidying up the Python and C++ test harness before I integrate it in CI.

zdevito dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10610

Differential Revision: D9387832

Pulled By: goldsborough

fbshipit-source-id: c00f431db56c7502a66fe1f813fe78067f428ecb
2018-08-21 18:41:27 -07:00
Peter Goldsborough
c101a57a74 Build mechanism for custom operators (#10226)
Summary:
This is the last step in the custom operator implementation: providing a way to build from C++ and Python. For this I:

1. Created a `FindTorch.cmake`, taken largely from ebetica, with a CMake function to easily create simple custom op libraries.
2. Created a `torch/op.h` header for easy inclusion of the necessary headers.
3. Created a test directory `pytorch/test/custom_operator` which includes the basic setup for a custom op (sketched below).
    1. It defines an op in `op.{h,cpp}`
    2. Registers it with the JIT using `RegisterOperators`
    3. Builds it into a shared library via a `CMakeLists.txt`
    4. Binds it into Python using a `setup.py`. This step makes use of our existing C++ extension setup. No work, yay!

The pure C++ and the Python builds are separate and not coupled in any way.
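To make steps 3.1 and 3.2 concrete, here is a sketch of the definition-plus-registration pattern (the op itself is hypothetical, and the registration spelling is an assumption about the API of this era; `torch/op.h` is the one-stop header introduced here, renamed to `torch/script.h` in #11682 above):

```cpp
#include <torch/op.h>

// op.{h,cpp}: the op is an ordinary C++ function over tensors.
at::Tensor custom_add(at::Tensor a, at::Tensor b) {
  return a + b;
}

// Static registration with the JIT under a schema name; the same source
// is then built by the CMakeLists.txt and by the setup.py extension.
static auto registry =
    torch::jit::RegisterOperators("custom::add", &custom_add);
```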

zdevito soumith dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10226

Differential Revision: D9296839

Pulled By: goldsborough

fbshipit-source-id: 32f74cafb6e3d86cada8dfca8136d0dfb1f197a0
2018-08-16 18:56:17 -07:00