pytorch/torch
Peter Bell 9962bfb3c9 Remove THTensor (#69040)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69040

Test Plan: Imported from OSS

Reviewed By: anjali411

Differential Revision: D32872478

Pulled By: ngimel

fbshipit-source-id: f93e16509d64308d91e374744410a6a811e7f4e3
2021-12-10 02:29:11 -08:00
_C [pytorch][PR] Add ability for a mobile::Module to save as flatbuffer (#69306) 2021-12-09 14:53:31 -08:00
_masked Strided masked var. (#68738) 2021-12-01 19:19:37 -08:00
ao Back out "[wip][quant][graphmode] produce reference pattern for binary ops and then rewrite to quantized op" (#69713) 2021-12-09 21:55:09 -08:00
autograd Fix inference_mode decorator (#68617) 2021-12-09 10:45:09 -08:00
backends [Linalg] Add a runtime switch to let pytorch prefer a backend impl in linalg functions on GPU (#67980) 2021-12-03 19:06:30 -08:00
contrib
cpu
csrc Remove THTensor (#69040) 2021-12-10 02:29:11 -08:00
cuda Add nvidia-smi memory and utilization as native Python API (#69104) 2021-12-08 10:33:23 -08:00
distributed Set non-default backend names to lower case (#69400) 2021-12-07 07:58:46 -08:00
distributions Deprecate torch.triangular_solve (#63570) 2021-12-02 13:24:55 -08:00
fft
for_onnx
futures
fx Fix sign op converter (#69580) 2021-12-07 19:04:51 -08:00
jit [pytorch][PR] Add ability for a mobile::Module to save as flatbuffer (#69306) 2021-12-09 14:53:31 -08:00
legacy
lib
linalg Revert D32521980: Add linalg.lu_factor 2021-11-28 17:22:15 -08:00
multiprocessing
nn [quant][embdding qat] Re-land Add FX support for QAT EmbeddingBag (#69334) 2021-12-08 05:57:20 -08:00
onnx Added antialias flag to interpolate (CPU only, bilinear) (#65142) 2021-11-17 09:10:15 -08:00
optim Fix ChainedScheduler.get_last_lr() (#69112) 2021-12-02 13:44:12 -08:00
package
profiler [Reland] Python tracer. (#68325) 2021-11-15 23:32:49 -08:00
quantization
sparse Sparse CSR CUDA: Add torch.sparse.sampled_addmm (#68007) 2021-11-29 15:43:29 -08:00
special
testing Revert D32942007: OpInfo: Convert more sample_input_funcs to generators 2021-12-09 10:54:41 -08:00
utils [DataLoader] Implementing communication processes for Map-style DataPipes (#68549) 2021-12-08 07:27:01 -08:00
__config__.py
__future__.py
__init__.py expose return_types in Python (#66614) 2021-12-06 09:05:29 -08:00
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py [package] fix torchscript classes in package (#68028) 2021-11-16 10:01:40 -08:00
_linalg_utils.py
_lobpcg.py torch.lobpcg.backward: do not save non-Variable types with ctx.save_for_backward. (#67994) 2021-11-08 10:02:09 -08:00
_lowrank.py
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_sources.py
_storage_docs.py
_tensor_docs.py [numpy] Alias arctan2 to atan2 (#67010) 2021-11-16 09:41:09 -08:00
_tensor_str.py Add efficient zero tensors (#64837) 2021-12-08 10:37:39 -08:00
_tensor.py [quant] Remove warning for quantized Tensor in __dir__ (#69265) 2021-12-02 10:30:36 -08:00
_torch_docs.py Extend explanation of torch.cholesky_inverse to consider batched inputs. (#69069) 2021-12-09 14:01:31 -08:00
_utils_internal.py
_utils.py
_VF.py
_vmap_internals.py More aggressively market functorch.vmap when torch.vmap gets called (#67347) 2021-11-12 16:10:16 -08:00
abi-check.cpp
autocast_mode.py
CMakeLists.txt [pytorch][PR] Add ability for a mobile::Module to save as flatbuffer (#69306) 2021-12-09 14:53:31 -08:00
custom_class_detail.h
custom_class.h [NOOP][clangformat][codemod] Enable CLANGFORMAT (#67854) 2021-11-04 14:07:57 -07:00
deploy.h
extension.h
functional.py Revert D32521980: Add linalg.lu_factor 2021-11-28 17:22:15 -08:00
hub.py making import_module private and deprecating public method (#67990) 2021-11-09 07:27:57 -08:00
library.h
overrides.py Add efficient zero tensors (#64837) 2021-12-08 10:37:39 -08:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py expose return_types in Python (#66614) 2021-12-06 09:05:29 -08:00
script.h
serialization.py Avoid dtype mismatch error in torch.save if storages are unallocated (#68787) 2021-11-24 09:51:29 -08:00
storage.py
torch_version.py
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  Although these headers are installed alongside the public C
headers, they are *internal implementation detail* headers, whose contents
should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, external code
should use the public functions (declared in headers like `THTensor.h`,
NOT `THTensor.hpp`) to manipulate these structs.  However, there are a few
places in torch/csrc where we violate this abstraction; they are marked
with a pointer to this note.  Each of those sites will have to be
refactored when we refactor the guts of THTensor and related structures.
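For illustration, the distinction looked roughly like the sketch below.  This is
a hedged, non-compilable example: TH was removed from the tree by the commit at
the top of this page ("Remove THTensor (#69040)"), and the accessor names follow
the historical `THTensor_(...)` naming convention, so they may not match any
particular release exactly.

```c
/* Illustrative only -- the TH library no longer exists in the repository. */

/* Good: go through the public C API declared in THTensor.h.  The caller
 * needs no knowledge of the struct's layout. */
#include <TH/TH.h>
#include <stdio.h>

static void print_sizes(THFloatTensor *t) {
  int64_t ndim = THFloatTensor_nDimension(t);  /* public accessor */
  for (int64_t d = 0; d < ndim; d++) {
    printf("dim %lld: %lld\n",
           (long long)d, (long long)THFloatTensor_size(t, d));
  }
}

/* Bad (the abstraction violation this note describes): including the
 * internal C++ header and reading the struct's fields directly.  Any
 * code written this way breaks whenever the guts of THTensor change.
 *
 *   #include <TH/THTensor.hpp>
 *   ... t->sizes_ ...   // depends on internal struct layout
 */
```

The sites in torch/csrc marked with a pointer to this note are of the second
kind, which is why each one must be revisited when the internal representation
changes.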