Commit Graph

9 Commits

Sebastian Messmer
0d7391f8b2 Test cases for custom ops with autograd (#31003)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31003

ghstack-source-id: 95663728

Test Plan: unit tests

Differential Revision: D18896189

fbshipit-source-id: d71f7678fff644536fe30452ee21a5a7df1f1f0b
2019-12-15 22:37:24 -08:00
Thomas Viehmann
4c3b76c402 Add std::string to the getTypePtr for JIT inference of custom op types (#13683)
Summary:
This allows custom ops to take string parameters.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13683

Differential Revision: D13017010

Pulled By: soumith

fbshipit-source-id: 7c40aca7f57ba3f8812d34bc55828ff362c69bd2
2018-11-10 12:58:53 -08:00
Peter Goldsborough
393ad6582d Use torch:: instead of at:: in all C++ APIs (#13523)
Summary:
In TorchScript and C++ extensions we currently advocate a mix of `torch::` and `at::` namespace usage. In the C++ frontend I had instead exported all symbols from `at::` and some from `c10::` into the `torch::` namespace. This is far, far easier for users to understand, and also avoids bugs around creating tensors vs. variables. The same should from now on be true for the TorchScript C++ API (for running and loading models) and all C++ extensions.

Note that since we're just talking about typedefs, this change does not break any existing code.

Once this lands I will update stuff in `pytorch/tutorials` too.

zdevito ezyang gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13523

Differential Revision: D12942787

Pulled By: goldsborough

fbshipit-source-id: 76058936bd8707b33d9e5bbc2d0705fc3d820763
2018-11-06 14:32:25 -08:00
Peter Goldsborough
9ea19cb079 Windows CI integration for custom ops (#12928)
Summary:
Resubmission of https://github.com/pytorch/pytorch/pull/11527

ezyang orionr
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12928

Differential Revision: D10501342

Pulled By: goldsborough

fbshipit-source-id: 7ce74795aab2f13efeb38f56ce82f53055f5eade
2018-10-23 09:18:09 -07:00
Will Feng
9473e57eca Revert D10444104: [pytorch][PR] Windows CI integration for custom ops
Differential Revision: D10444104

Original commit changeset: 4c447beeb967

fbshipit-source-id: ead52444aefa27692e3f36dadad986e2313261bd
2018-10-18 14:08:18 -07:00
Peter Goldsborough
12be60cc04 Windows CI integration for custom ops (#11527)
Summary:
This is likely currently broken due to symbol visibility issues, but we will investigate it using this PR.

CC orionr yf225
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11527

Differential Revision: D10444104

Pulled By: goldsborough

fbshipit-source-id: 4c447beeb9671598ecfc846cb5c507ef143459fe
2018-10-18 07:55:05 -07:00
Peter Goldsborough
7949250295 Fixes for Torch Script C++ API (#11682)
Summary:
A couple of fixes I deem necessary to the TorchScript C++ API after writing the tutorial:

1. When I was creating the custom op API, I created `torch/op.h` as the one-stop header for creating custom ops. I now notice that there is no good header for the TorchScript C++ story altogether, i.e. when you just want to load a script module in C++ without any custom ops necessarily. The `torch/op.h` header suits that purpose just as well of course, but I think we should rename it to `torch/script.h`, which seems like a great name for this feature.

2. The CMake API we previously provided defined a bunch of variables like `TORCH_LIBRARY_DIRS` and `TORCH_INCLUDES` and then expected users to add those variables to their targets. We also had a CMake function that did that for you automatically. I now realize a much smarter way of doing this is to create an `IMPORTED` target for the libtorch library in CMake, and then add all this stuff to the link interface of that target. Then all downstream users have to do is `target_link_libraries(my_target torch)` and they get all the proper includes, libraries and compiler flags added to their target. This means we can get rid of the CMake function and all that stuff. orionr AFAIK this is a much, much better way of doing all of this, no?

3. Since we distribute libtorch with `-D_GLIBCXX_USE_CXX11_ABI=0`, dependent libraries must set this flag too. I now add this to the interface compile options of this imported target.

4. Fixes to JIT docs.

These could likely be 4 different PRs but given the release I wouldn't mind landing them all asap.
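Point 2 above boils down to a downstream `CMakeLists.txt` like the following (a sketch with hypothetical project and target names; assumes `find_package(Torch)` resolves a libtorch distribution that defines the imported `torch` target):

```cmake
cmake_minimum_required(VERSION 3.5)
project(custom_op_example)

# Defines the IMPORTED `torch` target, whose link interface carries the
# include paths, libraries, and interface compile options (including
# -D_GLIBCXX_USE_CXX11_ABI=0) described in points 2 and 3.
find_package(Torch REQUIRED)

add_executable(my_target main.cpp)
# The single call that replaces the old TORCH_LIBRARY_DIRS/TORCH_INCLUDES
# bookkeeping and the helper CMake function.
target_link_libraries(my_target torch)
```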

zdevito dzhulgakov soumith
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11682

Differential Revision: D9839431

Pulled By: goldsborough

fbshipit-source-id: fdc47b95f83f22d53e1995aa683e09613b4bfe65
2018-09-17 09:54:50 -07:00
Peter Goldsborough
a0d4106c07 Integrate custom op tests with CI (#10611)
Summary:
This PR is stacked on https://github.com/pytorch/pytorch/pull/10610 and only adds changes in one file, `.jenkins/pytorch/test.sh`, where we now build and run the custom op tests.

I'd also like to take this PR to discuss whether the [`TorchConfig.cmake`](https://github.com/pytorch/pytorch/blob/master/cmake/TorchConfig.cmake.in) I made is robust enough (we will also see in the CI). orionr Yangqing dzhulgakov, what do you think?

Also ezyang for CI changes
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10611

Differential Revision: D9597627

Pulled By: goldsborough

fbshipit-source-id: f5af8164c076894f448cef7e5b356a6b3159f8b3
2018-09-10 15:40:21 -07:00
Peter Goldsborough
c101a57a74 Build mechanism for custom operators (#10226)
Summary:
This is the last step in the custom operator implementation: providing a way to build from C++ and Python. For this I:

1. Created a `FindTorch.cmake` taken largely from ebetica with a CMake function to easily create simple custom op libraries
2. Created a `torch/op.h` header for easy inclusion of necessary headers,
3. Created a test directory `pytorch/test/custom_operator` which includes the basic setup for a custom op.
    1. It defines an op in `op.{h,cpp}`
    2. Registers it with the JIT using `RegisterOperators`
    3. Builds it into a shared library via a `CMakeLists.txt`
    4. Binds it into Python using a `setup.py`. This step makes use of our C++ extension setup that we already have. No work, yay!

The pure C++ and the Python builds are separate and not coupled in any way.

zdevito soumith dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10226

Differential Revision: D9296839

Pulled By: goldsborough

fbshipit-source-id: 32f74cafb6e3d86cada8dfca8136d0dfb1f197a0
2018-08-16 18:56:17 -07:00