Directory listing of `pytorch/torch` at the latest commit (2025-09-18 17:49:16 +00:00). Each entry is shown with the message and date of the last commit that touched it; a few hedged usage sketches for APIs named in the listing follow the table.
| Name | Last commit | Date |
|---|---|---|
| _awaits | | |
| _C | Revert "[torch][cuda][device_limits] Library for querying device hardware limits for flops and bandwidth (#162942)" | 2025-09-18 17:49:16 +00:00 |
| _C_flatbuffer | | |
| _custom_op | [BE]: ruff PLC0207 - use maxsplit kwarg (#160107) | 2025-08-08 03:14:59 +00:00 |
| _decomp | support unbacked softmax / logsoftmax (#162216) | 2025-09-18 15:43:20 +00:00 |
| _dispatch | | |
| _dynamo | fixing graph break for namedtuple._replace (#160139) | 2025-09-18 14:32:36 +00:00 |
| _export | Move export_db to use new tracer, remove restriction on optional inputs (#162993) | 2025-09-18 00:43:32 +00:00 |
| _functorch | [doc]: Small typos (#162982) | 2025-09-16 17:42:19 +00:00 |
| _higher_order_ops | [dynamo][hop] Introduce Local Map HOP (#161458) | 2025-09-17 09:32:38 +00:00 |
| _inductor | support unbacked softmax / logsoftmax (#162216) | 2025-09-18 15:43:20 +00:00 |
| _lazy | | |
| _library | Add missing tags parameter to custom_op overload signatures (#162047) | 2025-09-13 19:57:23 +00:00 |
| _logging | Add compile_id: Optional[CompileID] to torch._logging._internal.trace_structured_artifact (#160440) | 2025-08-13 06:28:23 +00:00 |
| _numpy | | |
| _prims | [dynamic shapes] unbacked-safe should_swap (#160473) | 2025-09-11 18:51:25 +00:00 |
| _prims_common | are_strides_like_channels_last_or_false (#162354) | 2025-09-16 00:49:05 +00:00 |
| _refs | Don't register wrong overload to prim decomp (#163138) | 2025-09-18 17:01:19 +00:00 |
| _strobelight | | |
| _subclasses | [Reland] Return NoOpDeviceGuardImpl in replace of CudaDeviceGuard when device is not available (#163187) | 2025-09-18 04:46:26 +00:00 |
| _vendor | | |
| accelerator | Add unified memory APIs for torch.accelerator (#152932) | 2025-08-08 17:41:22 +00:00 |
| amp | Optimize AMP custom_backend_name error message (#162037) | 2025-09-04 08:27:56 +00:00 |
| ao | [doc]: Small typos (#162982) | 2025-09-16 17:42:19 +00:00 |
| autograd | [ONNX] Refactor torchscript based exporter (#161323) | 2025-09-02 16:10:30 +00:00 |
| backends | Revert "[ROCm] SDPA fix mem fault when dropout is enabled (#154864)" | 2025-08-26 20:03:59 +00:00 |
| compiler | [easy] [precompile] Convert CompileArtifacts to callable (#162169) | 2025-09-07 23:37:31 +00:00 |
| contrib | | |
| cpu | Replace _device_t with torch.types.Device in torch/cpu/__init__.py (#161031) | 2025-08-21 00:22:43 +00:00 |
| csrc | Revert "[torch][cuda][device_limits] Library for querying device hardware limits for flops and bandwidth (#162942)" | 2025-09-18 17:49:16 +00:00 |
| cuda | Revert "[torch][cuda][device_limits] Library for querying device hardware limits for flops and bandwidth (#162942)" | 2025-09-18 17:49:16 +00:00 |
| distributed | [FSDP2] idempotent reset_sharded_param: no-op if _local_tensor is already padded (#163130) | 2025-09-18 09:20:37 +00:00 |
| distributions | | |
| export | Fix inconsistent test and add new tracer as config (#162558) | 2025-09-17 17:01:48 +00:00 |
| fft | | |
| func | | |
| futures | | |
| fx | Fix: ShapeEnv not propagated properly to inductor SizeVars (#162927) | 2025-09-18 00:56:22 +00:00 |
| headeronly | Add CUDA_KERNEL_ASSERT_PRINTF, a more flexible CUDA_KERNEL_ASSERT_MSG (#160129) | 2025-09-16 00:23:48 +00:00 |
| jit | | |
| legacy | | |
| lib | | |
| linalg | Revert "Add __init__.pyi to torch/linalg (#160750)" | 2025-09-02 16:53:55 +00:00 |
| masked | Remove guard_size_oblivious from default contiguity python check, and add aten.sym_is_contiguous. [attempt2] (#160869) | 2025-09-08 22:59:13 +00:00 |
| monitor | | |
| mps | | |
| mtia | [BE] Add Documentation for Device APIs (#162834) | 2025-09-16 17:01:06 +00:00 |
| multiprocessing | Allow parallel start NUMA binding (#161576) | 2025-08-28 01:15:58 +00:00 |
| nativert | Update placement utils and weights to handle meta device (#162842) | 2025-09-17 08:12:32 +00:00 |
| nested | [BC Breaking] Remove flex + njt code paths (#161734) | 2025-09-16 00:13:56 +00:00 |
| nn | enable sync batchnorm for HPU device (#163047) | 2025-09-16 20:45:38 +00:00 |
| numa | Allow parallel start NUMA binding (#161576) | 2025-08-28 01:15:58 +00:00 |
| onnx | [mypy] add some import ignores to onnx (#163133) | 2025-09-17 09:32:38 +00:00 |
| optim | [optim] override SWALR.state_dict and load_state_dict (#163122) | 2025-09-17 18:17:26 +00:00 |
| package | [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552) | 2025-08-07 00:09:56 +00:00 |
| profiler | removed duplicate imports (#161685) | 2025-08-31 16:21:49 +00:00 |
| quantization | [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552) | 2025-08-07 00:09:56 +00:00 |
| signal | [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552) | 2025-08-07 00:09:56 +00:00 |
| sparse | [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552) | 2025-08-07 00:09:56 +00:00 |
| special | [BE][PYFMT] migrate PYFMT for torch/[p-z]*/ to ruff format (#144552) | 2025-08-07 00:09:56 +00:00 |
| testing | [ROCm] Remove HIPBLASLT_ALLOW_TF32 from codebase (#162998) | 2025-09-18 13:53:48 +00:00 |
| utils | [BE] Remove bottleneck (#163210) | 2025-09-18 12:08:13 +00:00 |
| xpu | Add a new API torch.xpu.can_device_access_peer for Intel GPU (#162705) | 2025-09-16 18:00:22 +00:00 |
| __config__.py | | |
| __future__.py | | |
| __init__.py | Update Microsoft C++ Redistributable to the latest version (#161430) | 2025-09-18 14:22:03 +00:00 |
| _appdirs.py | | |
| _classes.py | | |
| _compile.py | | |
| _custom_ops.py | | |
| _environment.py | | |
| _guards.py | fix incorrect interaction between DDPOptimizer and donated buffers (#160745) | 2025-09-04 21:57:27 +00:00 |
| _jit_internal.py | | |
| _linalg_utils.py | | |
| _lobpcg.py | | |
| _lowrank.py | | |
| _meta_registrations.py | [cuDNN][SDPA][submodule] Roll-back cuDNN frontend upgrade, update Meta registration (#163104) | 2025-09-17 15:48:54 +00:00 |
| _namedtensor_internals.py | | |
| _ops.py | Enable XPU path for FlexAttention (#143553) | 2025-08-29 23:10:58 +00:00 |
| _python_dispatcher.py | | |
| _size_docs.py | | |
| _sources.py | | |
| _storage_docs.py | | |
| _streambase.py | | |
| _tensor_docs.py | | |
| _tensor_str.py | Fix max_width computation in _tensor_str._Formatter (#126859) | 2025-08-01 15:05:41 +00:00 |
| _tensor.py | [dynamic shapes] unbacked-safe should_swap (#160473) | 2025-09-11 18:51:25 +00:00 |
| _thread_safe_fork.py | | |
| _torch_docs.py | Update docs for quantile to be clearer for nearest (#162423) | 2025-09-09 18:04:12 +00:00 |
| _utils_internal.py | Add DISABLE_JUSTKNOBS to torch/_utils_internal.py and use it for dynamo _maybe_set_eval_frame (#162298) | 2025-09-15 23:00:39 +00:00 |
| _utils.py | | |
| _VF.py | | |
| _vmap_internals.py | | |
| _weights_only_unpickler.py | Fix type checking for persistent loads in the weights-only unpickler (#161661) | 2025-09-01 19:57:19 +00:00 |
| CMakeLists.txt | [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) | 2025-09-12 10:54:42 +00:00 |
| custom_class_detail.h | | |
| custom_class.h | | |
| extension.h | | |
| functional.py | unify broadcast_shapes functions and avoid duplicates (#160251) | 2025-08-16 00:54:32 +00:00 |
| header_only_apis.txt | [Reland] Migrate ScalarType to headeronly (#159911) | 2025-08-06 07:36:37 +00:00 |
| hub.py | Allow torch.hub.load with unauthorized GITHUB_TOKEN (#159896) | 2025-08-14 18:15:49 +00:00 |
| library.h | Using std::make_unique<T>() instead of unique<T>(new T()) (#160723) | 2025-08-19 10:25:47 +00:00 |
| library.py | Leak Python filenames so that we can give good dispatcher errors. (#160418) | 2025-08-31 22:31:39 +00:00 |
| overrides.py | Add torch.Tensor._make_dtensor to accelerate DTensor.__new__ further (#161590) | 2025-09-05 18:43:41 +00:00 |
| py.typed | | |
| quasirandom.py | | |
| random.py | | |
| return_types.py | | |
| script.h | | |
| serialization.py | added class or module info for functions blocked by weight-only load (#159935) | 2025-08-12 20:52:25 +00:00 |
| storage.py | | |
| torch_version.py | | |
| version.py.tpl | | |
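A few of the commits above touch user-facing behavior; the sketches below illustrate them, hedged wherever the exact API surface is an assumption rather than something stated in the listing. First, the `_dynamo` entry fixes a graph break on `namedtuple._replace` (#160139). A minimal sketch of the pattern that fix targets, assuming a build that includes the patch:

```python
import collections

import torch

Point = collections.namedtuple("Point", ["x", "y"])

# fullgraph=True makes torch.compile raise on any graph break, so running
# this doubles as a check that _replace no longer breaks the graph.
@torch.compile(fullgraph=True)
def shift(p):
    return p._replace(x=p.x + 1)

print(shift(Point(torch.tensor(1.0), torch.tensor(2.0))))
```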
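The `optim` entry gives `SWALR` its own `state_dict`/`load_state_dict` overrides (#163122). A minimal round-trip sketch, assuming the overrides make the scheduler state restorable in the usual way:

```python
import torch
from torch.optim.swa_utils import SWALR

param = torch.zeros(1, requires_grad=True)
opt = torch.optim.SGD([param], lr=0.1)
sched = SWALR(opt, swa_lr=0.05)

# With the #163122 overrides in place, a plain state-dict round trip
# is expected to restore the scheduler.
state = sched.state_dict()
restored = SWALR(opt, swa_lr=0.05)
restored.load_state_dict(state)
```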
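The `xpu` entry adds `torch.xpu.can_device_access_peer` (#162705). The call below assumes it mirrors the `(device, peer_device)` signature of the existing `torch.cuda.can_device_access_peer`; that signature is an assumption, not confirmed by the listing:

```python
import torch

# Assumed signature, mirroring torch.cuda.can_device_access_peer.
if torch.xpu.is_available() and torch.xpu.device_count() >= 2:
    print(torch.xpu.can_device_access_peer(0, 1))
else:
    print("fewer than two XPU devices visible")
```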
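The `_torch_docs.py` entry clarifies the documentation of `torch.quantile`'s `"nearest"` interpolation mode (#162423). The difference from the default mode is easy to see on a small tensor:

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0, 4.0])

# The default "linear" mode blends the two surrounding data points...
print(torch.quantile(x, 0.4))                           # tensor(2.2000)
# ...while "nearest" returns the closer data point unchanged.
print(torch.quantile(x, 0.4, interpolation="nearest"))  # tensor(2.)
```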
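Finally, the `hub.py` entry lets `torch.hub.load` fall back to unauthenticated access when an unauthorized `GITHUB_TOKEN` is set (#159896). Standard usage is unchanged; `pytorch/vision` and `resnet18` here are just a familiar public example:

```python
import torch

# Fetches the model entrypoint defined in the repo's hubconf.py. After
# #159896, a stale GITHUB_TOKEN in the environment should no longer block
# access to public repos like this one.
model = torch.hub.load("pytorch/vision", "resnet18", weights=None)
model.eval()
```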