| Name | Last commit | Date |
| --- | --- | --- |
| _awaits/ | | |
| _C/ | Revert "[dynamo] Add guard serialization for tensor matches. (#151318)" | 2025-04-24 19:22:45 +00:00 |
| _C_flatbuffer/ | | |
| _custom_op/ | | |
| _decomp/ | Make torch._chunk_cat support non-contiguous inputs (#151263) | 2025-04-16 04:18:46 +00:00 |
| _dispatch/ | [BE][PYFMT] migrate PYFMT for torch._dynamo to ruff format (#144549) | 2025-02-28 03:03:53 +00:00 |
| _dynamo/ | Revert "[dynamo] Add guard serialization for tensor matches. (#151318)" | 2025-04-24 19:22:45 +00:00 |
| _export/ | [export] improve error message for deserializing custom triton op (#152029) | 2025-04-24 20:22:05 +00:00 |
| _functorch/ | [invoke_subgraph] Cache fake tensor if no unbacked symint in the output (#151957) | 2025-04-24 14:17:22 +00:00 |
| _higher_order_ops/ | [invoke_subgraph] Compile time traces (#151409) | 2025-04-24 13:20:50 +00:00 |
| _inductor/ | [Inductor][CPP] Optimize the epilogue for int8 GEMM Template (#152000) | 2025-04-24 23:36:00 +00:00 |
| _lazy/ | | |
| _library/ | [torchbind] fix error message when attr is a real tensor. (#151944) | 2025-04-23 17:32:11 +00:00 |
| _logging/ | [export] Beef up guard_added logs (#149465) | 2025-03-20 23:02:07 +00:00 |
| _numpy/ | | |
| _prims/ | Support torch.compile rng selective activation checkpointing with cudagraph (#146878) | 2025-02-28 00:47:03 +00:00 |
| _prims_common/ | [dynamic shapes] guard_or_false for _reshape_view_helper, utils._infer_size for wildcard dims (#150127) | 2025-04-23 05:42:30 +00:00 |
| _refs/ | [dynamic shapes] rewrite expand with guard_or_false (#150236) | 2025-04-23 06:11:11 +00:00 |
| _strobelight/ | Enable strobelight profiling specific compile frame ids using COMPILE_STROBELIGHT_FRAME_FILTER (#147549) | 2025-02-22 03:44:53 +00:00 |
| _subclasses/ | Revert "[dynamo] Add guard serialization for tensor matches. (#151318)" | 2025-04-24 19:22:45 +00:00 |
| _vendor/ | | |
| accelerator/ | Delegate torch.accelerator.device_count to torch.xxx.device_count for multi-process usage (#149924) | 2025-04-10 02:37:37 +00:00 |
| amp/ | [Intel GPU] skip a cuda api call in amp to save some host overhead on xpu (#151111) | 2025-04-13 06:37:07 +00:00 |
| ao/ | [BE][Easy]: Simplify reversed call in graph matcher (#151674) | 2025-04-19 14:14:31 +00:00 |
| autograd/ | Fix torch.autograd.backward inputs validation (#150975) | 2025-04-17 02:11:13 +00:00 |
| backends/ | Expose is_available API for torch.backends.mkldnn (#147432) | 2025-04-10 05:05:37 +00:00 |
| compiler/ | [MegaCache] Return None on no compilation (#151921) | 2025-04-23 04:32:06 +00:00 |
| contrib/ | | |
| cpu/ | [CPU Stream] Add noop for CPU stream record_event() and wait_event() (#145935) | 2025-02-20 18:50:55 +00:00 |
| csrc/ | Improve stable library apis per Scott's feedback (#152040) | 2025-04-24 20:51:03 +00:00 |
| cuda/ | Add torch.cuda._compile_kernel() (#151484) | 2025-04-24 07:14:31 +00:00 |
| distributed/ | Move verbose warning to warning_once (#152044) | 2025-04-24 16:18:34 +00:00 |
| distributions/ | add generalized pareto distribution (GPD) (#135968) | 2025-04-17 18:51:02 +00:00 |
| export/ | [DRAFT] INitial version of sticky export (#151047) | 2025-04-23 22:58:43 +00:00 |
| fft/ | | |
| func/ | | |
| futures/ | PEP585: More UP006 fixes (#146392) | 2025-02-20 06:18:13 +00:00 |
| fx/ | [dynamic shapes] user-code friendly statically_known_true, has_static_value (#151601) | 2025-04-24 02:53:59 +00:00 |
| jit/ | Fix torchscript issues with reference quantized modules (#150870) | 2025-04-10 20:14:45 +00:00 |
| legacy/ | | |
| lib/ | [1/N] Use internal linkage in torch/csrc C++ files. (#150930) | 2025-04-11 02:19:31 +00:00 |
| linalg/ | Implement gradient for the residuals of torch.linalg.lstsq (#148526) | 2025-03-10 12:35:09 +00:00 |
| masked/ | [BE][Easy]: Dedupe a TypeAlias in PrimsCommon (#151565) | 2025-04-17 19:59:41 +00:00 |
| monitor/ | | |
| mps/ | [MPS] Make torch.mps.compile_shader public (#148972) | 2025-03-11 20:20:58 +00:00 |
| mtia/ | [MTIA] Add _mtia_maybeExchangeDevice to MTIA module (#149340) | 2025-03-18 15:15:12 +00:00 |
| multiprocessing/ | | |
| nested/ | [aotd] Guess tangents stride as output strides (#144579) | 2025-03-20 15:41:36 +00:00 |
| nn/ | Optimize register_full_backward_hook description when all input no grad (#151785) | 2025-04-22 17:57:31 +00:00 |
| onnx/ | [ONNX] Update decomposition logic to loop over onnx registry (#151826) | 2025-04-22 19:40:52 +00:00 |
| optim/ | Include other accelerators in capturable docstr for optimizers (#149770) | 2025-04-24 20:38:42 +00:00 |
| package/ | Remove code for Python < 3.9 (#147097) | 2025-02-14 03:22:49 +00:00 |
| profiler/ | [profiler][retry] don't disable CUPTI_LAZY_REINIT for cuda >= 12.6 (#151124) | 2025-04-15 16:11:49 +00:00 |
| quantization/ | | |
| signal/ | | |
| sparse/ | Fix spelling (#149277) | 2025-03-20 01:02:32 +00:00 |
| special/ | | |
| testing/ | [MPS] Extend index_put to half precision floats (#151869) | 2025-04-22 22:00:08 +00:00 |
| utils/ | [xpu] set aot device flags in cpp_extension (#149459) | 2025-04-24 22:55:52 +00:00 |
| xpu/ | xpu: torch.xpu.get_arch_list() to return [] if xpu not compiled (#147431) | 2025-02-24 01:35:54 +00:00 |
| __config__.py | | |
| __future__.py | | |
| __init__.py | [profiler][retry] don't disable CUPTI_LAZY_REINIT for cuda >= 12.6 (#151124) | 2025-04-15 16:11:49 +00:00 |
| _appdirs.py | | |
| _classes.py | | |
| _compile.py | | |
| _custom_ops.py | | |
| _deploy.py | | |
| _environment.py | | |
| _guards.py | Revert "[dynamo] Add guard serialization for tensor matches. (#151318)" | 2025-04-24 19:22:45 +00:00 |
| _jit_internal.py | [BE][CI] bump ruff to 0.9.2: multiline assert statements (#144546) | 2025-02-27 20:46:16 +00:00 |
| _linalg_utils.py | | |
| _lobpcg.py | [BE][CI] bump ruff to 0.9.2: multiline assert statements (#144546) | 2025-02-27 20:46:16 +00:00 |
| _lowrank.py | | |
| _meta_registrations.py | Non-deterministic alert in histc_cuda for floating types only (#151701) | 2025-04-24 21:16:46 +00:00 |
| _namedtensor_internals.py | | |
| _ops.py | Introduce unsafe way to mark functions as cacheable (#151603) | 2025-04-21 17:37:38 +00:00 |
| _python_dispatcher.py | | |
| _size_docs.py | | |
| _sources.py | | |
| _storage_docs.py | | |
| _streambase.py | | |
| _tensor_docs.py | [Docs] Clarify behavior when integer dtype is used with requires_grad=True in tensor.to() (#150913) | 2025-04-10 02:52:58 +00:00 |
| _tensor_str.py | add torch.float4_e2m1fn_x2 to PyTorch (#148791) | 2025-03-27 17:32:20 +00:00 |
| _tensor.py | Revert "Fix non-bitwise type annotations for Tensor operators (see #145838) (#146845)" | 2025-02-18 19:01:27 +00:00 |
| _thread_safe_fork.py | | |
| _torch_docs.py | Added to docs for out_dtype arg in torch gemms (#151704) | 2025-04-21 20:09:17 +00:00 |
| _utils_internal.py | [profiler][retry] don't disable CUPTI_LAZY_REINIT for cuda >= 12.6 (#151124) | 2025-04-15 16:11:49 +00:00 |
| _utils.py | Allow torch.load under FakeTensorMode to load FakeTensors with correct devices (for plain Tensors) (#147786) | 2025-03-06 12:04:32 +00:00 |
| _VF.py | | |
| _vmap_internals.py | | |
| _weights_only_unpickler.py | Add sparse tensors constructed via legacy constructor to _sparse_tensors_to_validate (#147759) | 2025-02-25 23:51:12 +00:00 |
| CMakeLists.txt | Add new dependences for gen_pyi.py (#150391) | 2025-04-03 14:18:18 +00:00 |
| custom_class_detail.h | | |
| custom_class.h | Remove unneeded Clang-tidy suppression (#148246) | 2025-03-01 16:51:54 +00:00 |
| extension.h | | |
| functional.py | Optimize cdist param description (#151178) | 2025-04-14 13:53:10 +00:00 |
| hub.py | [BE][CI][Easy] bump ruff to 0.9.0: long statements in docstrings (#146509) | 2025-02-24 19:56:08 +00:00 |
| library.h | Overload Library::def rather than templating it (#151626) | 2025-04-18 22:51:16 +00:00 |
| library.py | fix spammy library deinit errors when user passes an invalid TORCH_LOGS argument (#151678) | 2025-04-22 20:13:52 +00:00 |
| overrides.py | [CUDA][cuBLAS] Aten GEMM overload for FP32 output from FP16/BF16 inputs (#150812) | 2025-04-18 01:53:26 +00:00 |
| py.typed | | |
| quasirandom.py | | |
| random.py | Update description for torch.random.fork_rng (#151881) | 2025-04-23 16:59:29 +00:00 |
| README.md | Rename README.txt to README.md (#149811) | 2025-03-24 22:33:33 +00:00 |
| return_types.py | | |
| script.h | | |
| serialization.py | Move get accelerator to use build time flags when possible (#146098) | 2025-03-10 13:17:58 +00:00 |
| storage.py | add torch.float4_e2m1fn_x2 to PyTorch (#148791) | 2025-03-27 17:32:20 +00:00 |
| torch_version.py | | |
| types.py | | |
| version.py.tpl | | |