Contents of the torch/ directory:

  Subdirectories: autograd, backends, contrib, csrc, cuda, distributed,
    distributions, for_onnx, jit, legacy, lib, multiprocessing, nn, onnx,
    optim, quantization, sparse, testing, utils

  Files: __config__.py, __future__.py, __init__.py, __init__.pyi.in,
    _classes.py, _jit_internal.py, _namedtensor_internals.py, _ops.py,
    _six.py, _storage_docs.py, _tensor_docs.py, _tensor_str.py,
    _torch_docs.py, _utils_internal.py, _utils.py, abi-check.cpp,
    CMakeLists.txt, custom_class.h, extension.h, functional.py, hub.py,
    py.typed, quasirandom.py, random.py, README.txt, script.h,
    serialization.py, storage.py, tensor.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These headers are installed alongside the public C API, but
they are really *internal implementation detail* headers, whose contents
should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
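
To make the distinction concrete, here is a minimal sketch of what "using
the public functions" versus "violating the abstraction" looks like.  It
assumes a float tensor; the first function goes through a call declared in
the installed C header, while the internal accessor in the second function
is only illustrative of the kind of struct-level access the marked sites
in torch/csrc perform (the real internal layout is exactly what this note
asks external code not to depend on).

    #include <cstdint>

    // Preferred: go through the public C interface declared in THTensor.h.
    // Callers never need to know how the THTensor struct is laid out.
    #include <TH/THTensor.h>

    int dims_via_public_api(THFloatTensor* t) {
      return THFloatTensor_nDimension(t);
    }

    // Abstraction violation (what the sites marked with this note do):
    // include the internal C++ header and poke at the object directly.
    // This only compiles because THTensor.hpp happens to be installed, and
    // it will break when the guts of THTensor are refactored.
    #include <TH/THTensor.hpp>

    int64_t dims_via_internals(THFloatTensor* t) {
      // Assumption for illustration: the internal object exposes a dim()
      // accessor; any such internal name is subject to change.
      return t->dim();
    }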