pytorch/torch
_awaits
_C Revert "Standardize on error types for distributed errors. (#107651)" 2023-08-28 23:58:33 +00:00
_C_flatbuffer
_custom_op Extend impl_backward to be usable with torch.library operators (#106817) 2023-08-14 14:33:46 +00:00
_decomp Implement decomposition for aten.tensor_split.tensor_indices_or_sections (#107251) 2023-08-28 17:01:23 +00:00
_dispatch Fix some fake mode confusion between inner/outer fake mode in export (#106515) 2023-08-04 15:42:23 +00:00
_dynamo [dynamo] Graph break on pack_padded_sequence (#108096) 2023-08-29 00:08:11 +00:00
_export Automatically turn on dynamo in cond (#108028) 2023-08-28 10:16:41 +00:00
_functorch [inductor][ac] preserve recompute tags through pattern matching (#107742) 2023-08-25 03:48:26 +00:00
_higher_order_ops Automatically turn on dynamo in cond (#108028) 2023-08-28 10:16:41 +00:00
_inductor [inductor] Add constant_to_device for ir.Constant (#108087) 2023-08-29 00:08:11 +00:00
_lazy
_logging [logging] Add more flags to default logs (#107912) 2023-08-29 01:01:02 +00:00
_numpy torch._numpy: keep f16 CUDA tensors in f16 where possible (#107768) 2023-08-23 18:35:47 +00:00
_prims [activation checkpointing] Add default autocast keys to functional rng wrappers (#107934) 2023-08-25 18:22:02 +00:00
_prims_common Remove dynamo+nvfuser (#105789) 2023-08-08 22:29:32 +00:00
_refs Added normal op decomposition for specializations of the normal op (#106792) 2023-08-25 16:18:28 +00:00
_subclasses [Quant][Inductor] Enable the lowering of quantized maxpool2d (#105906) 2023-08-26 08:36:47 +00:00
amp Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
ao [reland][quant][pt2e][xnnpack_quantizer] Add support for mul and mul_relu (#107930) (#107992) 2023-08-27 14:50:03 +00:00
autograd [profiler] move _enable_dynamo_cache_lookup_profiler (#107720) 2023-08-23 23:41:35 +00:00
backends [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
compiler
contrib [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
cpu Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
csrc Revert "Standardize on error types for distributed errors. (#107651)" 2023-08-28 23:58:33 +00:00
cuda Expose cudaStreamCaptureMode in CUDA Graphs, use local setting in inductor (#107407) 2023-08-25 01:44:26 +00:00
distributed Revert "Standardize on error types for distributed errors. (#107651)" 2023-08-28 23:58:33 +00:00
distributions [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
export [export] Don't save example_inputs for now. (#107978) 2023-08-26 14:36:56 +00:00
fft
func [pt2] support vmap (#101707) 2023-08-09 03:39:33 +00:00
futures
fx Fix aot sequence_nr to reset bwd flag (#107210) 2023-08-24 16:58:12 +00:00
jit [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
legacy
lib Remove some unnecessary <iostream> includes from headers (#106914) 2023-08-25 18:24:05 +00:00
linalg [CUDA][Linalg] Patch crash of linalg.eigh when input matrix is ill-conditioned, in some cusolver version (#107082) 2023-08-16 21:15:15 +00:00
masked [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
monitor
mps [MPS] Introduce torch.mps.Event() APIs (#102121) 2023-08-08 03:45:45 +00:00
multiprocessing Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
nested
nn Fix LayerNorm(bias=False) error (#108060) 2023-08-28 18:23:13 +00:00
onnx [ONNX] Support torch.compile(backend="onnxrt", options=OrtBackendOptions(...)) (#107973) 2023-08-26 18:20:18 +00:00
optim Implement "RAdamW" optimizer (#107507) 2023-08-28 20:50:25 +00:00
package [BE]: Update Ruff to 0.0.280 (#105724) 2023-07-22 23:03:34 +00:00
profiler [profiler] move _enable_dynamo_cache_lookup_profiler (#107720) 2023-08-23 23:41:35 +00:00
quantization Apply UFMT to low traffic torch modules (#106249) 2023-07-29 23:37:30 +00:00
signal [BE] Enable ruff's UP rules and autoformat optim/ (#105426) 2023-07-18 21:07:43 +00:00
sparse Add torch.sparse.as_sparse_gradcheck decorator of gradcheck that allows gradcheck input function to receive and return sparse tensors (#107150) 2023-08-26 07:24:31 +00:00
special
testing Revert "Standardize on error types for distributed errors. (#107651)" 2023-08-28 23:58:33 +00:00
utils fix typos (#108006) 2023-08-28 19:49:09 +00:00
__config__.py
__future__.py
__init__.py Initial Python 3.12 build fixes (#106083) 2023-08-25 13:23:48 +00:00
_appdirs.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_classes.py
_compile.py [dynamo] Reland #104317 - Lazy disable_dynamo API out-of-dynamo (#104664) 2023-07-06 00:48:02 +00:00
_custom_ops.py Extend impl_backward to be usable with torch.library operators (#106817) 2023-08-14 14:33:46 +00:00
_deploy.py
_guards.py [dynamo] Store originating source in the Guard object (#107634) 2023-08-22 02:16:31 +00:00
_jit_internal.py
_linalg_utils.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
_lobpcg.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
_lowrank.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_meta_registrations.py [Quant][Inductor] Enable qlinear weight prepack inside inductor constant folding (#106782) 2023-08-27 12:53:44 +00:00
_namedtensor_internals.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
_ops.py Support cond and out_dtype for predispatch (#107941) 2023-08-25 17:37:16 +00:00
_python_dispatcher.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
_sources.py
_storage_docs.py
_tensor_docs.py Modify signature for tensor.tile in doc (#106295) 2023-08-01 19:51:52 +00:00
_tensor_str.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
_tensor.py [PyTorch][Tensor] Introduce tensor.dim_order (#106835) 2023-08-25 00:06:03 +00:00
_torch_docs.py Add optional is_coalesced argument to sparse coo tensor factory function. (#107638) 2023-08-26 07:24:29 +00:00
_utils_internal.py [Dynamo] Improve PT2 fbcode logging observability (#106932) 2023-08-11 20:46:04 +00:00
_utils.py Add optional is_coalesced argument to sparse coo tensor factory function. (#107638) 2023-08-26 07:24:29 +00:00
_VF.py
_vmap_internals.py
_weights_only_unpickler.py
abi-check.cpp
CMakeLists.txt Revert "[Reland] Upgrade NVTX to NVTX3 (#97582)" 2023-08-15 20:55:12 +00:00
custom_class_detail.h
custom_class.h Remove some unnecessary <iostream> includes from headers (#106914) 2023-08-25 18:24:05 +00:00
extension.h reduce header file to boost cpp_wrapper build. (#107585) 2023-08-22 11:58:47 +00:00
functional.py fix torch.norm for custom device (#106198) 2023-08-02 06:25:52 +00:00
hub.py Default permissions for torch.hub downloads (#82869) 2023-08-24 15:48:24 +00:00
library.h
library.py Enable registering fallthroughs to (op, dk) from torch.library (#106086) 2023-07-28 19:37:59 +00:00
overrides.py Expose torch.export.{save,load} APIs (#107888) 2023-08-25 06:06:36 +00:00
py.typed
quasirandom.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
random.py
README.txt
return_types.py
script.h
serialization.py [BE]: Update ruff to 0.285 (#107519) 2023-08-22 23:16:38 +00:00
storage.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
torch_version.py [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436) 2023-07-21 07:38:46 +00:00
types.py [BE]: Apply PYI autofixes to various types (#107521) 2023-08-20 02:42:21 +00:00
version.py.tpl

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some hpp headers, which are proper C++ headers rather than
C headers.  These headers serve double duty: they are installed alongside
the public headers, but they are *internal implementation detail* headers
whose contents should largely not be used by external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
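
To make the distinction concrete, here is a minimal sketch written against
the legacy TH C API.  The calls used (THFloatTensor_new, THFloatTensor_resize1d,
THFloatTensor_set1d, THFloatTensor_get1d, THFloatTensor_free) are assumed from
the historical TH headers and may not exist in current sources; the point is
only the contrast between going through the public `THTensor.h`-style API and
reaching into fields that only `THTensor.hpp` exposes.

    /* Illustrative only: the public-API side of
     * Note [TH abstraction violation].  Assumes the legacy TH C
     * library and its historical entry points. */
    #include <TH/TH.h>

    int main(void) {
      /* Good: create, resize, and access the tensor exclusively
       * through the public C API from THTensor.h. */
      THFloatTensor *t = THFloatTensor_new();
      THFloatTensor_resize1d(t, 4);
      THFloatTensor_set1d(t, 0, 1.0f);
      float v = THFloatTensor_get1d(t, 0);  /* v == 1.0f */

      /* Bad (abstraction violation): directly poking at struct
       * fields that only the internal THTensor.hpp header exposes,
       * e.g. sizes or the refcount.  Code like that is what the
       * markers in torch/csrc flag, and it is exactly what breaks
       * when the guts of THTensor are refactored. */

      THFloatTensor_free(t);
      return (int)v - 1;  /* 0 on success */
    }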