pytorch/torch (latest commit 2022-10-12 18:37:58 +00:00)
_C (Re-open) Adds cudaMallocAsync as an alternative backend for the CUDA allocator (#82682) 2022-10-12 03:44:21 +00:00
_C_flatbuffer
_decomp Enable aten-aten decomps (#85921) 2022-10-08 05:12:42 +00:00
_dispatch New calling convention for Python dispatcher (#85133) 2022-09-16 20:38:21 +00:00
_lazy Add step closures (#84300) 2022-09-06 20:55:34 +00:00
_prims Add prims.clone (#86705) 2022-10-12 18:22:00 +00:00
_prims_common [primTorch] decomposition for bucketize (#86366) 2022-10-12 12:25:42 +00:00
_refs [primTorch] Add ref for triplet_margin_loss, improve triplet_margin_with_distance_loss (#85614) 2022-10-12 18:37:58 +00:00
_subclasses Revert "min/max support for SymInt/Floats, finish as_strided/scatter/squeeze() backward symint support (#86643)" 2022-10-11 23:12:40 +00:00
amp
ao [ao] fixing public v private for backend_config.native.py (#86030) 2022-10-12 16:06:42 +00:00
autograd Add __all__ to torch.{autograd, fx, cuda} submodules (#85343) 2022-10-09 14:46:54 +00:00
backends Add opteinsum backend to give users control (#86219) 2022-10-05 06:33:25 +00:00
contrib
cpu Add correct __all__ for torch.distributed and torch.cuda submodules (#85702) 2022-10-10 19:15:24 +00:00
csrc [BE] Store helper functions C++ for python API parity (#82136) 2022-10-12 17:49:38 +00:00
cuda Extend torch.cuda.is_available() to attempt an NVML-based CUDA availability assessment when explicitly requested by the user (#85951) 2022-10-12 18:37:50 +00:00
distributed [nn] Add remove_duplicate flag to named_buffers (#674) (#85903) 2022-10-11 18:49:09 +00:00
distributions Add original sources/references to Wishart.py in distributions (#86543) 2022-10-11 21:21:53 +00:00
fft
futures More doctest refinements. (#83317) 2022-08-22 20:07:26 +00:00
fx Patching getitem in partitioner (#86713) 2022-10-12 07:50:46 +00:00
jit Move the asserts in shape functions upsample_nearest_2d op. (#85801) 2022-09-30 18:30:06 +00:00
legacy
lib Declare public dependencies on libshm (#82694) 2022-10-07 00:01:25 +00:00
linalg Strengthen preconditions of linalg.cross (#83798) 2022-08-24 15:17:12 +00:00
masked [maskedtensor] use masked_softmax for forward/backward instead of regular softmax (#85845) 2022-10-04 00:29:19 +00:00
monitor More doctest refinements. (#83317) 2022-08-22 20:07:26 +00:00
multiprocessing
nested Add python nested_tensor and as_nested_tensor constructors in torch.nested (#85593) 2022-09-28 20:15:02 +00:00
nn [primTorch] Add ref for triplet_margin_loss, improve triplet_margin_with_distance_loss (#85614) 2022-10-12 18:37:58 +00:00
onnx [ONNX] Support device().type() string comparison with constant (#86168) 2022-10-12 17:25:45 +00:00
optim [optim] fix: empty grad support for SparseAdam (#86459) 2022-10-07 19:24:59 +00:00
package fix typo in torch/package/_mock.py (#84508) 2022-09-05 16:48:34 +00:00
profiler Fix ITT unit-tests if PyTorch is compiled with USE_ITT=OFF (#86199) 2022-10-04 21:57:05 +00:00
quantization [Pytorch][quantization][ondevice] Add a wrapper API for server side prep (#83742) 2022-08-29 17:55:26 +00:00
sparse
special Adding multigammaln ref and fix arange (#85153) 2022-09-20 17:52:56 +00:00
testing [primTorch] Add ref for triplet_margin_loss, improve triplet_margin_with_distance_loss (#85614) 2022-10-12 18:37:58 +00:00
utils Optimize __dlpack_device__ performance (#86665) 2022-10-11 19:03:46 +00:00
__config__.py
__future__.py
__init__.py [quant][ao_migration] nn.intrinsic.quantized migration to ao (#86172) 2022-10-08 00:01:38 +00:00
_appdirs.py
_classes.py
_deploy.py
_jit_internal.py
_linalg_utils.py Remove deprecated torch.lstsq (#70980) 2022-09-23 00:16:55 +00:00
_lobpcg.py
_lowrank.py
_meta_registrations.py Revert "Enable max.unary_out (#85926)" 2022-10-11 23:53:12 +00:00
_namedtensor_internals.py
_ops.py [Modes] remove enable and rewrite mode stack (squashed) (#84774) 2022-09-27 01:04:35 +00:00
_python_dispatcher.py [PolishComment] Polish code comment, revelant->relevant (#85238) 2022-09-19 19:43:14 +00:00
_six.py
_sources.py
_storage_docs.py
_tensor_docs.py Enable sparse_dim() and dense_dim() methods for Strided tensors (#86203) 2022-10-06 18:39:22 +00:00
_tensor_str.py Fix printing regular tensors inside functorch transforms (#85556) 2022-09-26 15:35:47 +00:00
_tensor.py Allow PrivateUse1 backends to not have Storage (#86557) 2022-10-12 15:26:29 +00:00
_torch_docs.py [primTorch] decomposition for bucketize (#86366) 2022-10-12 12:25:42 +00:00
_utils_internal.py
_utils.py
_VF.py
_vmap_internals.py
abi-check.cpp
CMakeLists.txt Delete torch::deploy from pytorch core (#85953) 2022-10-06 07:20:16 +00:00
custom_class_detail.h
custom_class.h
extension.h
functional.py Add opteinsum backend to give users control (#86219) 2022-10-05 06:33:25 +00:00
hub.py Add type hints to torch.save, torch.load (#83937) 2022-08-26 18:58:25 +00:00
library.h
library.py Disable torch.library.Library with PYTORCH_DISABLE_LIBRARY (#85190) 2022-09-17 03:05:43 +00:00
overrides.py 🦊 [AI Accelerators] Consolidate native_layer_norm for nested tensor (#86295) 2022-10-06 13:10:25 +00:00
py.typed
quasirandom.py
random.py
README.txt
return_types.py Add __all__ to torch.utils submodules (#85331) 2022-09-27 14:45:26 +00:00
script.h
serialization.py Add type hints to torch.save, torch.load (#83937) 2022-08-26 18:58:25 +00:00
storage.py
torch_version.py
types.py improve annotations (#86105) 2022-10-05 10:33:26 +00:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should not, for the most part, be used
by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.