pytorch/torch
Horace He f81db8afb8 Initial torchbind prototype (#21098)
Summary:
I have some test code in there as well, along with a `test_libtorch` script to run it. You'll need to modify `test_libtorch` to point to where you have `pytorch` built. I currently require that `pybind11` is included as a subdirectory of the test, but I've added it to the `.gitignore` to keep this reviewable.

Currently, something like this works:
```cpp
// torch::jit::class_ and torch::jit::init come from the custom_class.h added in this PR.
#include <torch/custom_class.h>
#include <iostream>

struct Foo {
  int x, y;
  Foo() : x(2), y(5) {}
  Foo(int x_, int y_) : x(x_), y(y_) {}
  void display() {
    std::cout << "x: " << x << ' ' << "y: " << y << std::endl;
  }
  int64_t add(int64_t z) {
    return (x + y) * z;
  }
  void combine(Foo x);  // defined out of line; see the non-pointer-argument issue below
};
static auto test = torch::jit::class_<Foo>("Foo")
                    .def(torch::jit::init<int64_t, int64_t>())
                    .def("display", &Foo::display)
                    .def("add", &Foo::add)
                    .def("combine", &Foo::combine);

```
together with this TorchScript function:
```py
@torch.jit.script
def f(x):
    val = torch._C.Foo(5, 3)
    val.display()
    print(val.add(3))
```
The output is:
```
x: 5 y: 3
24
```

Current issues:
- [x] The Python class created by TorchScript doesn't interact properly with the surrounding code:
```py
@torch.jit.script
def f(x):
    val = torch._C.Foo(5, 3)
    return val
```
- [x] Doesn't properly take in non-pointer classes. This function signature can't be defined in the C++ bindings (I believe we don't want to support it):
```cpp
  void combine(Foo x) {
```
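One hypothetical workaround under this constraint (not from the PR; the method body is made up) is to pass the other object's fields as primitives, which the binding already handles:
```cpp
// Hypothetical sketch: avoid passing Foo by value by accepting
// the other object's fields as int64_t arguments instead.
int64_t combine(int64_t other_x, int64_t other_y) {
  return x + y + other_x + other_y;
}
```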

- [x] Has some issues with memory for blobs when constructing multiple objects (fixed the constant propagation pass to not treat capsules as the same object):
```py
@torch.jit.script
def f(x):
    val = torch._C.Foo(5, 3)
    val2 = torch._C.Foo(100, 0)
    val.display()
    print(val.add(3))
```
- [ ] Can't define multiple constructors (this needs an overload string, which isn't currently possible since we don't support overloaded methods).
- [x] `init` uses slightly different syntax than `pybind`: `.init<...>()` instead of `.def(py::init<>())`. For comparison, see the pybind11 sketch below.
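For reference, a standard pybind11 registration of the same class would look roughly like this (the module name `example` is made up):
```cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;

PYBIND11_MODULE(example, m) {
  py::class_<Foo>(m, "Foo")
      .def(py::init<int, int>())  // the constructor goes through .def(py::init<...>())
      .def("display", &Foo::display)
      .def("add", &Foo::add);
}
```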
- [x] I couldn't figure out how to add some files into the build so they'd be copied to the `include/` directories, so I symlinked them manually.
- [ ] Currently, the conversion from Python into TorchScript doesn't work.
- [ ] Torchbind also currently requires a Python/pybind11 dependency. Fixing this would probably involve some kind of macro to bind into Python only when possible; a rough sketch follows.
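A rough sketch of that macro idea (hypothetical; the `TORCHBIND_WITH_PYTHON` flag is made up):
```cpp
// Hypothetical: only compile the pybind11 glue when Python is part of
// the build, so pure-C++ consumers don't inherit the dependency.
#ifdef TORCHBIND_WITH_PYTHON
#define TORCHBIND_PYBIND(...) __VA_ARGS__
#else
#define TORCHBIND_PYBIND(...)
#endif
```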
- [ ] We currently pass back into Python by value; there's no way of passing by reference.
- [x] Currently we can only register one method per type signature. This is because we create a `static auto opRegistry` in a function templated on the type signature, so two methods with the same signature collide (sketched below).
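A minimal sketch of why that happens (illustrative names, not the PR's actual code):
```cpp
// Illustrative only: a function-local static in a template instantiated
// per *signature* yields one registry slot per signature, not per method,
// so a second method with the same signature reuses the first slot.
template <typename Func>
void defMethod(const char* name, Func f) {
  static auto opRegistry = makeRegistration<Func>(name, f);  // one per Func
}
```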

Somewhat blocked on https://github.com/pytorch/pytorch/pull/21177, since we currently use some structures that PR will refactor (namely `return_type_to_ivalue` and `ivalue_to_arg_type`).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21098

Differential Revision: D16634872

Pulled By: Chillee

fbshipit-source-id: 1408bb89ea649c27d560df59e2cf9920467fe1de
2019-08-02 18:45:15 -07:00
_thnn
autograd Added torch.autograd.profiler.record_function() as context manager. (#23428) 2019-07-30 11:10:01 -07:00
backends
contrib
csrc Initial torchbind prototype (#21098) 2019-08-02 18:45:15 -07:00
cuda Let set_rng_state and get_rng_state accept string parameter (#23448) 2019-07-29 08:08:39 -07:00
distributed make OMP_NUM_THREADS default in launch.py (#22501) 2019-07-23 16:14:24 -07:00
distributions Fix distributions.Categorical.sample bug from .view() (#23328) 2019-07-29 12:09:50 -07:00
for_onnx
jit Fix LSTM int8 quantization model size issue (#23577) 2019-08-02 13:38:30 -07:00
legacy
lib Remove superfluous check (#23370) 2019-07-25 11:26:16 -07:00
multiprocessing Add multiprocessing_context= argument to DataLoader (#22990) 2019-07-29 12:58:40 -07:00
nn fix align_corners doc 2019-08-02 12:43:35 -07:00
onnx added opset10 ORT tests (#22993) 2019-08-02 17:34:48 -07:00
optim Adam/AdamW implementation minor fix (#22628) 2019-08-01 11:42:04 -07:00
quantization Remove qconfig_dict from API (#23465) 2019-08-02 10:28:48 -07:00
sparse
testing Fix get_all_math_dtypes for device='cuda' retuning None (#23028) 2019-07-19 09:29:16 -07:00
utils Fix pin_memory_thread not exiting quickly (#23646) 2019-08-01 15:24:14 -07:00
__config__.py
__future__.py Add torch.__future__._overwrite_module_params_on_conversion global flag, and check it in nn.Module._apply() (#21613) 2019-06-19 10:30:02 -07:00
__init__.py Initial torchbind prototype (#21098) 2019-08-02 18:45:15 -07:00
__init__.pyi.in pin_memory should not copy on already pinned tensors (#23484) 2019-07-30 21:16:23 -07:00
_classes.py Initial torchbind prototype (#21098) 2019-08-02 18:45:15 -07:00
_jit_internal.py Initial torchbind prototype (#21098) 2019-08-02 18:45:15 -07:00
_ops.py Initial torchbind prototype (#21098) 2019-08-02 18:45:15 -07:00
_six.py Finished the high-priority functions (#21127) 2019-06-04 17:59:05 -07:00
_storage_docs.py Enabled BFloat16 storage (#21523) 2019-07-09 21:51:06 -07:00
_tensor_docs.py pin_memory should not copy on already pinned tensors (#23484) 2019-07-30 21:16:23 -07:00
_tensor_str.py Add names to repr for named tensors 2019-08-02 11:37:29 -07:00
_torch_docs.py Allowing batching for det/logdet/slogdet operations (#22909) 2019-07-31 10:01:32 -07:00
_utils_internal.py
_utils.py Catch and print exception traceback in parallel_apply() workers (#18055) 2019-07-26 11:41:22 -07:00
abi-check.cpp
CMakeLists.txt Initial torchbind prototype (#21098) 2019-08-02 18:45:15 -07:00
custom_class.h Initial torchbind prototype (#21098) 2019-08-02 18:45:15 -07:00
extension.h
functional.py Changed tensor comparison return type from uint8 to bool (#21113) 2019-08-01 07:54:53 -07:00
hub.py Use dst dir for temp file (#23629) 2019-07-31 19:04:03 -07:00
py.typed
quasirandom.py
random.py Refactor Random Number Generators in ATen (#21555) 2019-06-19 13:54:09 -07:00
README.txt
script.h
serialization.py fix error message 2019-07-18 23:38:55 -07:00
storage.py Enabled BFloat16 storage (#21523) 2019-07-09 21:51:06 -07:00
tensor.py pin_memory should not copy on already pinned tensors (#23484) 2019-07-30 21:16:23 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
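
For illustration, the preferred pattern versus the violation looks roughly
like this (using the generic TH C API; the exact accessors live in
`THTensor.h`, and the comment about fields is illustrative):

    // Preferred: manipulate the tensor through the public C API.
    THFloatTensor* t = THFloatTensor_newWithSize1d(10);
    int64_t n = THFloatTensor_size(t, 0);
    float* data = THFloatTensor_data(t);
    THFloatTensor_free(t);

    // Abstraction violation: reaching into fields declared only in
    // THTensor.hpp couples the caller to internal layout and must be
    // updated whenever the guts of THTensor change.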