pytorch/torch
Elias Ellison 4d2f6f1bbe Remove remaining test jit expects redux (#17924)
Summary:
Relanding https://github.com/pytorch/pytorch/pull/17886, which was reverted after it broke a build
Pull Request resolved: https://github.com/pytorch/pytorch/pull/17924

Differential Revision: D14423842

Pulled By: eellison

fbshipit-source-id: f219e786bd07f7da3b7f9e866981199f5ccf6318
2019-03-12 11:33:34 -07:00
_thnn try to get rid of tmp_install (#16414) 2019-01-29 17:29:40 -08:00
autograd Return namedtuples from torch.* functions with multiple return arguments for C++ operators (#15429) 2019-01-22 11:12:18 -08:00
backends Add torch.backends.openmp.is_available(); fix some cmake messages (#16425) 2019-01-31 16:15:46 -08:00
contrib Remove stages from IR, they are no longer used 2018-10-05 13:58:15 -07:00
csrc Remove remaining test jit expects redux (#17924) 2019-03-12 11:33:34 -07:00
cuda Restore current streams on dst device after switching streams (#17439) 2019-02-25 12:06:41 -08:00
distributed Fixed a formatting issue in doc comments (#17505) 2019-03-12 09:55:29 -07:00
distributions Register KL divergence for the Independent distribution (#17681) 2019-03-11 08:10:16 -07:00
for_onnx
jit Clarify JIT docs 2019-03-09 23:13:31 -08:00
legacy Remove torch/legacy (#11823) 2018-09-20 14:00:54 -07:00
lib Remove unused parameter in ProcessGroupGloo (#17718) 2019-03-11 18:01:20 -07:00
multiprocessing Fix typo 2019-02-26 20:23:34 -08:00
nn Fix minor grammatical mistakes in torch/nn/modules/loss.py (#17892) 2019-03-12 08:42:50 -07:00
onnx Revert D14361993: [pytorch][PR] [Onnx] - refactoring serialization of ONNX initializers to be name-based 2019-03-08 16:31:14 -08:00
optim Redefine scheduler to set learning rate using recursive formula (#14010) 2018-12-18 16:44:31 -08:00
sparse Correct conv and pooling docstrings in nn module (#17052) 2019-02-15 06:58:02 -08:00
testing Lightweight String check Utility (#16858) 2019-02-19 12:31:57 -08:00
utils Passing indices as a list to Subset instead of Tensor (#17649) 2019-03-10 09:23:53 -07:00
__init__.py Revert #17191 and #17215 that no longer apply on Windows (#17567) 2019-03-01 10:37:27 -08:00
__init__.pyi.in Move argsort to C++ 2019-02-21 07:59:27 -08:00
_jit_internal.py use flake8-mypy (#17721) 2019-03-07 09:15:54 -08:00
_ops.py Use realpath for loaded libraries (#13936) 2018-11-15 11:23:20 -08:00
_six.py create type hint stub files for module torch (#12500) 2019-01-29 12:14:17 -08:00
_storage_docs.py Bool tensor. Part 0: Boolean storage implementation (#16810) 2019-02-19 08:22:13 -08:00
_tensor_docs.py Deprecate torch.pstrf (#17866) 2019-03-11 12:27:52 -07:00
_tensor_str.py Added scientific notation on set_printoptions (#16876) 2019-02-11 04:55:12 -08:00
_torch_docs.py torch.btrifact for tensors with greater than 3 dimensions (#14964) 2019-03-12 01:46:07 -07:00
_utils_internal.py Use fixed MASTER_PORT in test_distributed (#13109) 2018-10-25 08:51:34 -07:00
_utils.py create type hint stub files for module torch (#12500) 2019-01-29 12:14:17 -08:00
abi-check.cpp Fixes for Torch Script C++ API (#11682) 2018-09-17 09:54:50 -07:00
CMakeLists.txt Remove legacy way of exposing caffe2 operators to PyTorch (#17742) 2019-03-08 10:22:41 -08:00
extension.h Remove deprecated variable_tensor_functions (#15003) 2018-12-11 17:16:11 -08:00
functional.py Deprecate torch.pstrf (#17866) 2019-03-11 12:27:52 -07:00
hub.py Fix github branch prefix v (#15552) 2018-12-26 19:48:47 -08:00
random.py Improve the docstring of nn.random.fork_rng (#15960) 2019-01-14 02:41:18 -08:00
README.txt
script.h Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
serialization.py Avoid unnecessary CPU-to-GPU copy of torch.load with CUDA (#17297) 2019-02-21 01:32:19 -08:00
storage.py Bool tensor. Part 0: Boolean storage implementation (#16810) 2019-02-19 08:22:13 -08:00
tensor.py Deprecate torch.pstrf (#17866) 2019-03-11 12:27:52 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
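
For illustration, here is a minimal sketch of what respecting this abstraction
looks like from client code.  It is not taken from torch/csrc; it assumes the
classic TH C API of that era (the umbrella header `TH/TH.h` and accessors such
as `THFloatTensor_newWithSize1d`, `THFloatTensor_size`, and `THFloatTensor_free`).

```cpp
// Minimal sketch (not from torch/csrc): manipulate a TH tensor only through
// the public C API declared in THTensor.h, never by including THTensor.hpp
// and reaching into the underlying struct.
#include <TH/TH.h>   // umbrella header; pulls in the public THTensor.h API

#include <cstdio>

int main() {
  // Allocate a 1-D float tensor with 4 elements via the public constructor.
  THFloatTensor* t = THFloatTensor_newWithSize1d(4);

  // Query metadata through accessor functions.  Reading the struct's fields
  // directly would require THTensor.hpp -- exactly the abstraction violation
  // this note is about.
  std::printf("dim 0 size: %lld\n", (long long)THFloatTensor_size(t, 0));

  // Release the tensor through the public API as well.
  THFloatTensor_free(t);
  return 0;
}
```

Keeping client code on the `.h` surface is what allows the guts of THTensor to
be refactored without touching every caller; the marked sites in torch/csrc are
the exceptions this note tracks.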