
Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be used by external
clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.