pytorch/torch
2021-11-24 14:35:59 -08:00
_C [c10d] Fix object-based collectives for debug mode (#68223) 2021-11-13 04:18:31 -08:00
_masked Strided masked softmin. (#68463) 2021-11-19 20:51:42 -08:00
ao [Quant] Refactor handling of FixedQParams operators (#68143) 2021-11-23 15:26:10 -08:00
autograd Add vectorized Jacobian and Hessian computation with forward AD (#67041) 2021-11-19 14:31:09 -08:00
backends Add an option to disable reduced precision reductions for FP16 GEMM (#67946) 2021-11-09 17:27:20 -08:00
contrib
cpu Add fp16/fp32 autocasting to JIT/TorchScript (#63939) 2021-10-27 12:11:36 -07:00
csrc Update base for Update on "WIP - Quantized BI Bytedoc workarounds" 2021-11-24 14:35:59 -08:00
cuda Update Documentation to Make CUDA Call Explicit (#67973) 2021-11-23 16:25:37 -08:00
distributed Remove extraneous logging (#68830) 2021-11-24 07:15:50 -08:00
distributions Implement Entropy methods for Binomial and Multinomial distributions (#67609) 2021-11-11 09:16:28 -08:00
fft C++ API and docs for hfftn (#66127) 2021-10-07 12:48:36 -07:00
for_onnx
futures
fx All get_attr node to be in64 type (#68818) 2021-11-23 15:21:47 -08:00
jit [clone][sparse] Add torch._C._sparse namespace (#68672) 2021-11-19 19:47:38 -08:00
legacy
lib [NOOP][clangformat][codemod] Enable CLANGFORMAT for some folders in caffe2/* (#67746) 2021-11-03 12:23:14 -07:00
linalg Add linalg.solve_triangular (#63568) 2021-11-22 12:41:06 -08:00
multiprocessing Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
nn [BC-breaking] Change dtype of softmax to support TorchScript and MyPy (#68336) 2021-11-18 11:26:14 -08:00
onnx Added antialias flag to interpolate (CPU only, bilinear) (#65142) 2021-11-17 09:10:15 -08:00
optim [pytorch] Fix loading from checkpoint after "maximize" flag was introduced in SGD (#68733) 2021-11-23 11:42:16 -08:00
package [Codemod][FBSourceBlackLinter] Daily arc lint --take BLACK 2021-10-05 20:55:56 -07:00
profiler [Reland] Python tracer. (#68325) 2021-11-15 23:32:49 -08:00
quantization fx quant: enable linear-bn1d fusion for PTQ (#66484) 2021-10-18 10:14:28 -07:00
sparse [clone][sparse] Add torch._C._sparse namespace (#68672) 2021-11-19 19:47:38 -08:00
special [special] special alias for softmax (#62251) 2021-10-01 03:55:32 -07:00
testing OpInfos for torch.atleast_{1d, 2d, 3d} (#67355) 2021-11-24 09:55:39 -08:00
utils Revert D32633806: Sparse CSR CUDA: Add block torch.addmm when mat1 is sparse 2021-11-24 09:15:17 -08:00
__config__.py
__future__.py
__init__.py Add set_deterministic_debug_mode and get_deterministic_debug_mode (#67778) 2021-11-11 12:48:29 -08:00
_appdirs.py
_classes.py
_deploy.py [deploy] fix TypedStorage serialization (#67499) 2021-10-28 22:33:04 -07:00
_jit_internal.py [package] fix torchscript classes in package (#68028) 2021-11-16 10:01:40 -08:00
_linalg_utils.py
_lobpcg.py torch.lobpcg.backward: do not save non-Variable types with ctx.save_for_backward. (#67994) 2021-11-08 10:02:09 -08:00
_lowrank.py Prefer mT and mH over transpose(-2, -1) and transpose(-2, -1).conj() (#64181) 2021-10-18 13:02:25 -07:00
_namedtensor_internals.py
_ops.py
_python_dispatcher.py
_six.py
_sources.py Disallow annotations on instance attributes outside __init__ (#67051) 2021-10-25 16:20:47 -07:00
_storage_docs.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
_tensor_docs.py [numpy] Alias arctan2 to atan2 (#67010) 2021-11-16 09:41:09 -08:00
_tensor_str.py
_tensor.py Sparse CSR: add convert_indices_from_csr_to_coo (#66774) 2021-11-17 22:28:30 -08:00
_torch_docs.py Add docs entry for adjoint. (#68869) 2021-11-24 10:03:41 -08:00
_utils_internal.py
_utils.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00
_VF.py
_vmap_internals.py More aggressively market functorch.vmap when torch.vmap gets called (#67347) 2021-11-12 16:10:16 -08:00
abi-check.cpp
autocast_mode.py Add fp16/fp32 autocasting to JIT/TorchScript (#63939) 2021-10-27 12:11:36 -07:00
CMakeLists.txt codegen: Split up source, header and Declarations.yaml generation (#67497) 2021-11-03 13:20:54 -07:00
custom_class_detail.h [NOOP][clangformat][codemod] Enable CLANGFORMAT for some folders in caffe2/* (#67746) 2021-11-03 12:23:14 -07:00
custom_class.h [NOOP][clangformat][codemod] Enable CLANGFORMAT (#67854) 2021-11-04 14:07:57 -07:00
deploy.h
extension.h
functional.py [lint] small pass to make lint clean (#68367) 2021-11-16 10:27:00 -08:00
hub.py making import_module private and deprecating public method (#67990) 2021-11-09 07:27:57 -08:00
library.h [NOOP][clangformat][codemod] Enable CLANGFORMAT for some folders in caffe2/* (#67746) 2021-11-03 12:23:14 -07:00
overrides.py Add linalg.solve_triangular (#63568) 2021-11-22 12:41:06 -08:00
py.typed
quasirandom.py
random.py
README.txt
script.h [NOOP][clangformat][codemod] Enable CLANGFORMAT for some folders in caffe2/* (#67746) 2021-11-03 12:23:14 -07:00
serialization.py Avoid dtype mismatch error in torch.save if storages are unallocated (#68787) 2021-11-24 09:51:29 -08:00
storage.py [oss][pytorch] Add quint2x4 dtype (#65545) 2021-10-06 14:22:00 -07:00
torch_version.py
types.py Remove dtype from torch.Storage and use only torch.ByteStorage (#62030) 2021-10-05 13:50:34 -07:00

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.