Name                        | Last commit message                                                                                                   | Last commit date
_awaits                     |                                                                                                                       |
_C                          | [fr] [xpu] Add FlightRecorder support for ProcessGroupXCCL (#158568)                                                  | 2025-08-22 09:03:35 +00:00
_C_flatbuffer               |                                                                                                                       |
_custom_op                  | [BE]: ruff PLC0207 - use maxsplit kwarg (#160107)                                                                     | 2025-08-08 03:14:59 +00:00
_decomp                     | Revert "[dynamic shapes] unbacked-safe slicing (#157944)"                                                             | 2025-08-22 20:48:46 +00:00
_dispatch                   |                                                                                                                       |
_dynamo                     | [dynamo] Support method calls on complex ConstantVariables (#161122)                                                  | 2025-08-22 21:40:03 +00:00
_export                     | [export] Remove unused Model, tensor_paths, constant_paths (#161185)                                                  | 2025-08-22 01:07:01 +00:00
_functorch                  | Revert "Close some sources of fake tensor leakages (#159923)"                                                         | 2025-08-22 20:42:50 +00:00
_higher_order_ops           | [invoke_subgraph][inductor] Thread graphsafe rng input states for hops (#160713)                                      | 2025-08-21 20:41:29 +00:00
_inductor                   | Revert "[dynamic shapes] unbacked-safe slicing (#157944)"                                                             | 2025-08-22 20:48:46 +00:00
_lazy                       |                                                                                                                       |
_library                    | Account for triton kernel source code hidden in custom ops properly in AOTAutogradCache (#160120)                     | 2025-08-12 14:11:06 +00:00
_logging                    | Add compile_id: Optional[CompileID] to torch._logging._internal.trace_structured_artifact (#160440)                   | 2025-08-13 06:28:23 +00:00
_numpy                      | Fix torch._numpy to match NumPy when empty ellipsis causes advanced indexing separation (#158297)                     | 2025-07-16 08:11:53 +00:00
_prims                      | [BE]: ruff PLC0207 - use maxsplit kwarg (#160107)                                                                     | 2025-08-08 03:14:59 +00:00
_prims_common               | [dynamic shapes] prims_common non_overlapping_and_dense (#160462)                                                     | 2025-08-19 01:35:28 +00:00
_refs                       | Revert "Remove guard_size_oblivious from default contiguity python check, and add aten.sym_is_contiguous. (#159197)"  | 2025-08-18 07:22:13 +00:00
_strobelight                |                                                                                                                       |
_subclasses                 | Revert "[dynamic shapes] unbacked-safe slicing (#157944)"                                                             | 2025-08-22 20:48:46 +00:00
_vendor                     |                                                                                                                       |
accelerator                 | Add unified memory APIs for torch.accelerator (#152932)                                                               | 2025-08-08 17:41:22 +00:00
amp                         | Fix autocast context manager when there is exception (#159565)                                                        | 2025-08-01 02:12:24 +00:00
ao                          | Remove the uncessary empty file (#160728)                                                                             | 2025-08-19 10:54:08 +00:00
autograd                    | Add ownership token when needed on GradientEdge (#160098)                                                             | 2025-08-12 20:14:18 +00:00
backends                    | [ROCm] SDPA fix mem fault when dropout is enabled (#154864)                                                           | 2025-08-21 14:23:13 +00:00
compiler                    | [PGO] add extra read/write keys (#160715)                                                                             | 2025-08-18 01:41:08 +00:00
contrib                     |                                                                                                                       |
cpu                         | Replace _device_t with torch.types.Device in torch/cpu/__init__.py (#161031)                                          | 2025-08-21 00:22:43 +00:00
csrc                        | [Profiler] Add GC Events to Python Stack Tracer (#161209)                                                             | 2025-08-22 22:11:25 +00:00
cuda                        | Remove the uncessary empty file (#160728)                                                                             | 2025-08-19 10:54:08 +00:00
distributed                 | Revert "[DTensor] Make default RNG semantics match user-passed generator (#160482)"                                   | 2025-08-22 15:04:28 +00:00
distributions               |                                                                                                                       |
export                      | Revert "Close some sources of fake tensor leakages (#159923)"                                                         | 2025-08-22 20:42:50 +00:00
fft                         |                                                                                                                       |
func                        |                                                                                                                       |
futures                     |                                                                                                                       |
fx                          | [rfc] add hint_override kwarg to mark_dynamic (#161007)                                                               | 2025-08-21 02:22:52 +00:00
headeronly                  | [Reland] Migrate ScalarType to headeronly (#159911)                                                                   | 2025-08-06 07:36:37 +00:00
jit                         | [4/n] Remove references to TorchScript in PyTorch docs (#158317)                                                      | 2025-07-16 20:01:34 +00:00
legacy                      |                                                                                                                       |
lib                         |                                                                                                                       |
linalg                      |                                                                                                                       |
masked                      | Revert "Remove guard_size_oblivious from default contiguity python check, and add aten.sym_is_contiguous. (#159197)"  | 2025-08-18 07:22:13 +00:00
monitor                     |                                                                                                                       |
mps                         |                                                                                                                       |
mtia                        | [Re-land][Inductor] Support native Inductor as backend for MTIA (#159211)                                             | 2025-07-29 17:03:24 +00:00
multiprocessing             | Support NUMA Binding for Callable Entrypoints (#160163)                                                               | 2025-08-12 20:08:49 +00:00
nativert                    | [nativert] make runtime const folding aware of run_const_graph (#160760)                                              | 2025-08-21 05:22:03 +00:00
nested                      | Revert "Remove guard_size_oblivious from default contiguity python check, and add aten.sym_is_contiguous. (#159197)"  | 2025-08-18 07:22:13 +00:00
nn                          | typing debugging.py (#160364)                                                                                         | 2025-08-15 02:09:31 +00:00
numa                        | [ez] Make NUMA signpost parameters JSON serializable (#160710)                                                        | 2025-08-15 16:52:43 +00:00
onnx                        | [ONNX] Remove enable_fake_mode and exporter_legacy (#161222)                                                          | 2025-08-22 22:15:27 +00:00
optim                       | Optimzie zero_grad description (#161239)                                                                              | 2025-08-22 06:18:25 +00:00
package                     | [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552)                                                  | 2025-08-07 00:09:56 +00:00
profiler                    | [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552)                                                  | 2025-08-07 00:09:56 +00:00
quantization                | [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552)                                                  | 2025-08-07 00:09:56 +00:00
signal                      | [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552)                                                  | 2025-08-07 00:09:56 +00:00
sparse                      | [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552)                                                  | 2025-08-07 00:09:56 +00:00
special                     | [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552)                                                  | 2025-08-07 00:09:56 +00:00
testing                     | [ROCm] SDPA fix mem fault when dropout is enabled (#154864)                                                           | 2025-08-21 14:23:13 +00:00
utils                       | [ROCm] revamp HIPCachingAllocatorMasqueradingAsCUDA (#161221)                                                         | 2025-08-22 18:13:12 +00:00
xpu                         | [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552)                                                  | 2025-08-07 00:09:56 +00:00
__config__.py               |                                                                                                                       |
__future__.py               |                                                                                                                       |
__init__.py                 | [BE] remove torch deploy - conditionals (#158288)                                                                     | 2025-07-29 17:40:49 +00:00
_appdirs.py                 |                                                                                                                       |
_classes.py                 |                                                                                                                       |
_compile.py                 |                                                                                                                       |
_custom_ops.py              |                                                                                                                       |
_environment.py             |                                                                                                                       |
_guards.py                  | Move save guard error throwing to separate phase (#160662)                                                            | 2025-08-19 14:46:43 +00:00
_jit_internal.py            |                                                                                                                       |
_linalg_utils.py            |                                                                                                                       |
_lobpcg.py                  |                                                                                                                       |
_lowrank.py                 |                                                                                                                       |
_meta_registrations.py      | Fix meta function for aten.complex (#160894)                                                                          | 2025-08-20 16:30:04 +00:00
_namedtensor_internals.py   |                                                                                                                       |
_ops.py                     | [BE] remove torch deploy - conditionals (#158288)                                                                     | 2025-07-29 17:40:49 +00:00
_python_dispatcher.py       |                                                                                                                       |
_size_docs.py               |                                                                                                                       |
_sources.py                 |                                                                                                                       |
_storage_docs.py            |                                                                                                                       |
_streambase.py              |                                                                                                                       |
_tensor_docs.py             | Add missing optional for tensor ops (#159028)                                                                         | 2025-07-25 04:36:55 +00:00
_tensor_str.py              | Fix max_width computation in _tensor_str._Formatter (#126859)                                                         | 2025-08-01 15:05:41 +00:00
_tensor.py                  | [MPS] Enable dlpack integration (#158888)                                                                             | 2025-07-24 18:05:41 +00:00
_thread_safe_fork.py        |                                                                                                                       |
_torch_docs.py              | [cuda][cupy] Improve cupy device placement when device is provided with explicit index (#158529)                      | 2025-08-15 00:27:42 +00:00
_utils_internal.py          | Wire in pt2_triton_builds (#159897)                                                                                   | 2025-08-06 07:39:51 +00:00
_utils.py                   |                                                                                                                       |
_VF.py                      |                                                                                                                       |
_vmap_internals.py          |                                                                                                                       |
_weights_only_unpickler.py  | added class or module info for functions blocked by weight-only load (#159935)                                        | 2025-08-12 20:52:25 +00:00
CMakeLists.txt              | CMake build: preserve PYTHONPATH (#160144)                                                                            | 2025-08-08 16:03:49 +00:00
custom_class_detail.h       |                                                                                                                       |
custom_class.h              |                                                                                                                       |
extension.h                 |                                                                                                                       |
functional.py               | unify broadcast_shapes functions and avoid duplicates (#160251)                                                       | 2025-08-16 00:54:32 +00:00
header_only_apis.txt        | [Reland] Migrate ScalarType to headeronly (#159911)                                                                   | 2025-08-06 07:36:37 +00:00
hub.py                      | Allow torch.hub.load with unauthorized GITHUB_TOKEN (#159896)                                                         | 2025-08-14 18:15:49 +00:00
library.h                   | Using std::make_unique<T>() instead of unique<T>(new T()) (#160723)                                                   | 2025-08-19 10:25:47 +00:00
library.py                  | Add utility to get computed kernel in torch.library (#158393)                                                         | 2025-08-13 21:00:59 +00:00
overrides.py                | [PT2]: Add Static Dispatch Kernel for wrapped_fbgemm_linear_fp16_weight (#160451)                                     | 2025-08-15 04:06:17 +00:00
py.typed                    |                                                                                                                       |
quasirandom.py              |                                                                                                                       |
random.py                   |                                                                                                                       |
return_types.py             |                                                                                                                       |
script.h                    |                                                                                                                       |
serialization.py            | added class or module info for functions blocked by weight-only load (#159935)                                        | 2025-08-12 20:52:25 +00:00
storage.py                  |                                                                                                                       |
torch_version.py            |                                                                                                                       |
types.py                    |                                                                                                                       |
version.py.tpl              |                                                                                                                       |