pytorch/torch
Elias Ellison fe81faee5f Add more CPU tests (#47369)
2020-11-12 11:13:47 -08:00
_C Add more CPU tests (#47369) 2020-11-12 11:13:47 -08:00
autograd Add max_src_column_width to autograd profiler (#46257) 2020-11-10 18:51:39 -08:00
backends PyTorch NNAPI integration prototype (#46780) 2020-11-05 21:31:01 -08:00
contrib
csrc Add more CPU tests (#47369) 2020-11-12 11:13:47 -08:00
cuda Add nvtx.range() context manager (#42925) 2020-10-22 19:46:16 -07:00
distributed [NCCL] enable p2p tests (#47797) 2020-11-12 10:44:50 -08:00
distributions Annotate torch.nn.cpp (#46490) 2020-10-23 17:40:32 -07:00
fft torch.fft: Two dimensional FFT functions (#45164) 2020-10-17 16:23:06 -07:00
for_onnx
futures fix #45552 - adding add_done_callback(fn) to torch.futures.Future (#45675) 2020-10-13 07:47:36 -07:00
fx add cost_aware_partition (#47673) 2020-11-11 19:31:37 -08:00
jit [JIT] Add __prepare_scriptable__ duck typing to allow replacing nn.modules with scriptable preparations (#45645) 2020-11-10 08:59:45 -08:00
legacy
lib [c10d] switch ProcessGroup to be managed by intrusive_ptr (#47343) 2020-11-12 07:36:23 -08:00
linalg Added torch.linalg.tensorsolve (#46142) 2020-10-29 10:29:28 -07:00
multiprocessing Add exception classification to torch.multiprocessing.spawn (#45174) 2020-10-09 12:59:41 -07:00
nn [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) 2020-11-12 10:56:30 -08:00
onnx [ONNX] Bool inputs to index_put updated symbolic (#46866) 2020-11-11 12:45:31 -08:00
optim Revert D24262885: [pytorch][PR] Added foreach_zero_ API 2020-10-28 06:48:59 -07:00
package [packaging] simpler dependency plotting (#45686) 2020-10-06 23:40:00 -07:00
quantization [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) 2020-11-12 10:56:30 -08:00
sparse Revised sparse tensor documentation. (#45400) 2020-10-22 02:07:54 -07:00
testing [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415) 2020-11-12 10:56:30 -08:00
utils [hotfix] fix collect_env not working when torch compile/install fails (#47752) 2020-11-11 11:47:49 -08:00
__config__.py
__future__.py
__init__.py Revert D24859919: [pytorch][PR] Grammatically updated the tech docs 2020-11-11 07:43:17 -08:00
_appdirs.py
_autograd_functions.py make torch.lu differentiable. (#46284) 2020-10-23 10:13:46 -07:00
_classes.py
_jit_internal.py [JIT] add support for torch.jit.Final in python 3.6 (#47393) 2020-11-06 14:30:44 -08:00
_linalg_utils.py
_lobpcg.py Backward support for generalized eigenvalue solver with LOBPCG in forward [only k-rank SYMEIG case] (#43002) 2020-09-28 07:22:35 -07:00
_lowrank.py Allow large inputs to svd_lowrank. Fix inaccuracy in torch.svd docs. (#47440) 2020-11-09 21:04:48 -08:00
_namedtensor_internals.py
_ops.py
_six.py
_storage_docs.py
_tensor_docs.py Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad) (#47225) 2020-11-09 08:31:01 -08:00
_tensor_str.py
_torch_docs.py Added CUDA support for complex input for torch.triangular_solve (#46916) 2020-11-11 16:08:11 -08:00
_utils_internal.py Remove py2 compatible future imports (#44735) 2020-09-16 12:55:57 -07:00
_utils.py
_VF.py
_vmap_internals.py Allow vmap to accept nested python data structures as inputs (#46289) 2020-10-20 07:52:17 -07:00
abi-check.cpp
CMakeLists.txt make a way to disable callgrind (#46116) 2020-10-13 16:18:04 -07:00
custom_class_detail.h
custom_class.h [TorchBind] Support using lambda function as TorchBind constructor (#47819) 2020-11-12 09:29:34 -08:00
extension.h
functional.py Revert "Fixed einsum compatibility/performance issues (#46398)" (#47821) 2020-11-12 08:11:40 -08:00
hub.py Add a torch.hub.load_local() function that can load models from any local directory with a hubconf.py (#44204) 2020-09-21 14:17:21 -07:00
library.h Rationalize inlining of kernels into the unboxing wrapper (#42845) 2020-10-15 04:02:51 -07:00
overrides.py Implement Tensor.new_empty_strided(sizes, strides, *, dtype, device, requires_grad) (#47225) 2020-11-09 08:31:01 -08:00
py.typed
quasirandom.py Type check quasirandom (#45434) 2020-09-28 16:49:38 -07:00
random.py
README.txt
script.h
serialization.py Use storage.cpu() for moving storage to CPU in serialization. (#46028) 2020-10-13 12:51:10 -07:00
storage.py Add type informations to torch/storage.py (#46876) 2020-11-06 11:34:10 -08:00
tensor.py Fix output type of torch.max for Tensor subclasses. (#47110) 2020-11-10 19:45:36 -08:00
types.py Enable torch.tensor typechecks (#45077) 2020-09-24 08:22:06 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public C headers, but they are really *internal implementation detail*
headers whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use the public functions (declared in headers like `THTensor.h`, NOT
`THTensor.hpp`) to manipulate the structs these headers define.  However,
there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
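
To make the distinction concrete, here is a minimal C++ sketch of the two
styles.  The public call (`THFloatTensor_size` from `TH/THTensor.h`) is the
supported way to query a tensor; the commented-out second function only
stands in for the kind of struct poking that the marked sites in torch/csrc
do, and the member access shown there is hypothetical, for illustration only.

    // Minimal illustrative sketch; see the caveats above.
    #include <cstdint>
    #include <TH/THTensor.h>      // public C API: fine for external clients

    // Preferred: go through the public accessor functions.
    int64_t first_dim_size(THFloatTensor* t) {
      return THFloatTensor_size(t, /*dim=*/0);
    }

    // Abstraction violation: reaching into the THTensor struct that
    // THTensor.hpp exposes.  The accessor below is hypothetical and only
    // stands in for that pattern; real sites in torch/csrc are marked with
    // a pointer to Note [TH abstraction violation].
    //
    // #include <TH/THTensor.hpp>
    // int64_t first_dim_size_violation(THTensor* t) {
    //   return t->size(0);        // hypothetical internal accessor
    // }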