
Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some `.hpp` headers, which are proper C++ headers rather than
C headers.  These headers serve double duty as *internal implementation
detail* headers, whose contents should largely not be relied upon by
external clients.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.