pytorch/torch
2022-06-10 18:17:33 +00:00
_C Move Tensor.grad back into C++ 2022-06-10 13:44:45 +00:00
_C_flatbuffer
_decomp Revert "Revert "Revert "Added {logical_not, trace} refs, moved logical ops to use method overloads""" 2022-06-10 04:40:43 +00:00
_lazy Revert "Revert "[LT] Codegen ReuseNode for supported ops"" 2022-05-16 20:14:42 +00:00
_masked Revert "masked logsumexp/logaddexp" 2022-05-24 16:12:35 +00:00
_prims Revert "Revert "Revert "Added {logical_not, trace} refs, moved logical ops to use method overloads""" 2022-06-10 04:40:43 +00:00
_refs Revert "Revert "Revert "Added {logical_not, trace} refs, moved logical ops to use method overloads""" 2022-06-10 04:40:43 +00:00
_subclasses Add Dynamic Output Shape Tag for data-dependent ops, handle in FakeTensor 2022-06-09 22:16:16 +00:00
amp Update amp document with CPU Training/Inference Examples (#77244) 2022-05-11 15:42:45 +00:00
ao [ao] Added fx model report per_channel detector 2022-06-10 08:09:59 +00:00
autograd [forward ad] forbid non-float non-complex tangent and primal 2022-05-31 20:58:19 +00:00
backends Deprecate torch.lu 2022-06-07 22:50:14 +00:00
contrib
cpu
csrc Add mutation checks for tensor inputs 2022-06-10 18:17:33 +00:00
cuda Resolve TODO after Python 2 for custom_fwd (#78592) 2022-06-01 05:17:41 +00:00
distributed Use appropriate dtype for sharded linear implementation. 2022-06-10 07:32:15 +00:00
distributions add type annotation to distributions.kl_divergence (#78432) 2022-06-10 13:39:20 +00:00
fft [complex32] fft support (cuda only) (#74857) 2022-05-12 04:28:55 +00:00
futures
fx Ported proxy tensor tests over to core (#78890) 2022-06-07 00:28:53 +00:00
jit Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353) (#74353) (#76771) 2022-06-07 21:44:55 +00:00
legacy
lib
linalg Deprecate torch.lu 2022-06-07 22:50:14 +00:00
monitor
multiprocessing Restore old names for private funcs in legacy storages (#77861) 2022-05-20 02:03:34 +00:00
nested
nn Port index.Tensor to structured kernels. 2022-06-10 17:27:47 +00:00
onnx Add onnx support for movedim and moveaxis (#78931) 2022-06-09 19:41:09 +00:00
optim Adding maximize to Adamax (#77409) 2022-05-16 17:34:44 +00:00
package torch/package: add fix for implicit numpy dependency (#78979) 2022-06-08 17:07:00 +00:00
profiler Revert "Revert "[Profiler] Move python tracing to unified event type (Part 2)"" 2022-06-09 19:45:02 +00:00
quantization
sparse Compressed sparse layout conversion stubs (#77489) 2022-05-16 18:37:42 +00:00
special Orthogonal Polynomials (#78304) 2022-06-03 22:38:56 +00:00
testing Add mutation checks for tensor inputs 2022-06-10 18:17:33 +00:00
utils Forward fix sharding bug for DL (#79124) 2022-06-08 16:16:58 +00:00
__config__.py
__future__.py
__init__.py [CUBLAS][TF32] Fix broken docstring for set_float32_matmul_precision (#78949) 2022-06-06 22:04:10 +00:00
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353) (#74353) (#76771) 2022-06-07 21:44:55 +00:00
_linalg_utils.py Remove deprecated torch.solve (#70986) 2022-05-10 13:44:07 +00:00
_lobpcg.py
_lowrank.py
_meta_registrations.py Port index.Tensor to structured kernels. 2022-06-10 17:27:47 +00:00
_namedtensor_internals.py
_ops.py Revert "Autogen Tags enum, and allow specifying tags while defining an op" 2022-06-03 01:53:53 +00:00
_python_dispatcher.py Lint fix 2022-05-05 05:52:40 +00:00
_six.py
_sources.py Reland "[pytorch][PR] Support dataclasses in TorchScript" take 2 (#74353) (#74353) (#76771) 2022-06-07 21:44:55 +00:00
_storage_docs.py Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459) 2022-05-19 13:54:39 +00:00
_tensor_docs.py Move Tensor.grad back into C++ 2022-06-10 13:44:45 +00:00
_tensor_str.py Support str for Sparse Compressed tensors 2022-05-18 12:58:54 +00:00
_tensor.py Move Tensor.grad back into C++ 2022-06-10 13:44:45 +00:00
_torch_docs.py MAINT: Harmonize argsort params with array_api (#75162) 2022-06-09 12:32:01 +00:00
_utils_internal.py
_utils.py [DOCS] Add docstring to _get_async_or_non_blocking in _utils.py (#78036) 2022-06-01 16:19:43 +00:00
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt Make Wunused-local-typedef a hard error (#77918) 2022-06-09 18:14:01 +00:00
custom_class_detail.h
custom_class.h
deploy.h
extension.h
functional.py Deprecate torch.lu 2022-06-07 22:50:14 +00:00
hub.py Minor torchhub docs 2022-05-10 11:01:02 +00:00
library.h Revert "Autogen Tags enum, and allow specifying tags while defining an op" 2022-06-03 01:53:53 +00:00
library.py Make torch.library decorators return function 2022-06-08 01:57:00 +00:00
overrides.py add strides to slow path 2022-06-10 16:59:14 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py
script.h
serialization.py Merge torch.cuda._UntypedStorage into torch._UntypedStorage (#75459) 2022-05-19 13:54:39 +00:00
storage.py Fix _free_weak_ref error (#78575) 2022-06-01 00:07:48 +00:00
torch_version.py Move Tensor.grad back into C++ 2022-06-10 13:44:45 +00:00
types.py

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.