| Name | Last commit | Last commit date |
| --- | --- | --- |
| _awaits | | |
| _C | Generalize torch._C._set_allocator_settings to be generic (#156175) | 2025-07-30 06:37:15 +00:00 |
| _C_flatbuffer | | |
| _custom_op | | |
| _decomp | (should_fold) gso to guard_or_false when checking folding whether to 3d bmm into 2d mm (#159184) | 2025-07-30 03:12:14 +00:00 |
| _dispatch | | |
| _dynamo | Revert "[ContextParallel][FlexAttention] Prototype of supporting FlexAttention in Context Parallel (#158692)" | 2025-07-31 18:00:30 +00:00 |
| _export | [Dynamo][Better Engineering] Add typing annotations to guard and source (#158397) (#159491) | 2025-07-30 22:57:50 +00:00 |
| _functorch | [Dynamo][Better Engineering] Add typing annotations to guard and source (#158397) (#159491) | 2025-07-30 22:57:50 +00:00 |
| _higher_order_ops | [Dynamo][Better Engineering] Add typing annotations to guard and source (#158397) (#159491) | 2025-07-30 22:57:50 +00:00 |
| _inductor | Fix grouped MM load along K when TMA loads are not used (#159485) | 2025-07-31 17:58:02 +00:00 |
| _lazy | [BE][2/16] fix typos in torch/ (torch/_*/) (#156312) | 2025-07-12 05:47:06 +00:00 |
| _library | [BE] remove torch deploy - conditionals (#158288) | 2025-07-29 17:40:49 +00:00 |
| _logging | [dynamo][be] hide warnings without invalidating warnings cache (#158520) | 2025-07-18 22:02:31 +00:00 |
| _numpy | Fix torch._numpy to match NumPy when empty ellipsis causes advanced indexing separation (#158297) | 2025-07-16 08:11:53 +00:00 |
| _prims | [BE][2/16] fix typos in torch/ (torch/_*/) (#156312) | 2025-07-12 05:47:06 +00:00 |
| _prims_common | [Dynamo][Better Engineering] Add typing annotations to guard and source (#158397) (#159491) | 2025-07-30 22:57:50 +00:00 |
| _refs | [BE][2/16] fix typos in torch/ (torch/_*/) (#156312) | 2025-07-12 05:47:06 +00:00 |
| _strobelight | [BE][2/16] fix typos in torch/ (torch/_*/) (#156312) | 2025-07-12 05:47:06 +00:00 |
| _subclasses | unbacked handling for view_copy (#159244) | 2025-07-29 07:10:46 +00:00 |
| _vendor | | |
| accelerator | Revert "Add unified memory APIs for torch.accelerator (#152932)" | 2025-07-22 01:01:41 +00:00 |
| amp | Issue warning with reference to user code rather than torch (#155112) | 2025-07-14 05:24:23 +00:00 |
| ao | [nn]: updated type alias for padddingmode in module/conv.py (#158843) | 2025-07-25 23:05:02 +00:00 |
| autograd | Fix types in graphs.py (#158192) | 2025-07-15 19:49:38 +00:00 |
| backends | fixed typo error (#159451) | 2025-07-30 17:41:30 +00:00 |
| compiler | Disable cudagraph GCs by default (#158649) | 2025-07-29 19:56:11 +00:00 |
| contrib | | |
| cpu | | |
| csrc | [Refactor] Fix Compile Warning: possibly dangling reference to a temporary (#159517) | 2025-07-31 04:49:43 +00:00 |
| cuda | Generalize torch._C._set_allocator_settings to be generic (#156175) | 2025-07-30 06:37:15 +00:00 |
| distributed | Revert "[ContextParallel][FlexAttention] Prototype of supporting FlexAttention in Context Parallel (#158692)" | 2025-07-31 18:00:30 +00:00 |
| distributions | [BE][1/16] fix typos in torch/ (#156311) | 2025-07-09 11:02:22 +00:00 |
| export | [draft export] logging (#159004) | 2025-07-31 05:52:13 +00:00 |
| fft | | |
| func | | |
| futures | Simplify the base classes of _PyFutureMeta (#157757) | 2025-07-08 15:39:56 +00:00 |
| fx | Fix duplicated sources in inductor provenance tracking (#159484) | 2025-07-30 23:03:11 +00:00 |
| headeronly | Move BFloat16.h to headeronly (#159412) | 2025-07-31 15:29:17 +00:00 |
| jit | [4/n] Remove references to TorchScript in PyTorch docs (#158317) | 2025-07-16 20:01:34 +00:00 |
| legacy | | |
| lib | | |
| linalg | | |
| masked | Fix MaskedTensor to device ignored mask (#151205) | 2025-07-21 21:44:49 +00:00 |
| monitor | | |
| mps | [BE][12/16] fix typos in torch/ (#156602) | 2025-07-02 22:55:29 +00:00 |
| mtia | [Re-land][Inductor] Support native Inductor as backend for MTIA (#159211) | 2025-07-29 17:03:24 +00:00 |
| multiprocessing | [BE][12/16] fix typos in torch/ (#156602) | 2025-07-02 22:55:29 +00:00 |
| nativert | disable execution frame cleanup (#159531) | 2025-07-31 05:02:36 +00:00 |
| nested | Add check nested_tensor_from_jagged param jagged_dim >= 1 (#157770) | 2025-07-10 00:34:39 +00:00 |
| nn | Fix the Doc of padding in avg_poolnd (#159142) | 2025-07-31 02:02:48 +00:00 |
| onnx | [ONNX] RMS Norm (#159377) | 2025-07-30 18:55:47 +00:00 |
| optim | Detach tensor before clone in SGD optimiser and other code (#159204) | 2025-07-27 03:31:12 +00:00 |
| package | [BE][Ez]: Update ruff to 0.12.2 (#157937) | 2025-07-11 15:16:20 +00:00 |
| profiler | [profiler] update CUDA runtime kernel identification logic (#157890) | 2025-07-24 19:14:08 +00:00 |
| quantization | | |
| signal | | |
| sparse | [build] modernize build-frontend: python setup.py develop/install -> [uv ]pip install --no-build-isolation [-e ]. (#156027) | 2025-07-09 11:24:27 +00:00 |
| special | | |
| testing | [ROCm] Add FP8 rowwise support to _scaled_grouped_mm + Submodule update (#159075) | 2025-07-30 23:53:58 +00:00 |
| utils | [export] _ccode for PythonMod (#158851) | 2025-07-31 16:46:51 +00:00 |
| xpu | | |
| __config__.py | | |
| __future__.py | | |
| __init__.py | [BE] remove torch deploy - conditionals (#158288) | 2025-07-29 17:40:49 +00:00 |
| _appdirs.py | | |
| _classes.py | remove allow-untyped-defs from torch/_classes.py (#157231) | 2025-07-08 00:11:52 +00:00 |
| _compile.py | | |
| _custom_ops.py | | |
| _environment.py | | |
| _guards.py | [dynamo][guards] Always record user.stack for informative tlparse guards (#159526) | 2025-07-31 03:18:33 +00:00 |
| _jit_internal.py | [BE][1/16] fix typos in torch/ (#156311) | 2025-07-09 11:02:22 +00:00 |
| _linalg_utils.py | Update is_sparse doc to mention that it is sparse_coo specific (#157378) | 2025-07-09 18:22:14 +00:00 |
| _lobpcg.py | [BE][1/16] fix typos in torch/ (#156311) | 2025-07-09 11:02:22 +00:00 |
| _lowrank.py | [BE][1/16] fix typos in torch/ (#156311) | 2025-07-09 11:02:22 +00:00 |
| _meta_registrations.py | [ROCm] Add FP8 rowwise support to _scaled_grouped_mm + Submodule update (#159075) | 2025-07-30 23:53:58 +00:00 |
| _namedtensor_internals.py | | |
| _ops.py | [BE] remove torch deploy - conditionals (#158288) | 2025-07-29 17:40:49 +00:00 |
| _python_dispatcher.py | | |
| _size_docs.py | | |
| _sources.py | | |
| _storage_docs.py | | |
| _streambase.py | | |
| _tensor_docs.py | Add missing optional for tensor ops (#159028) | 2025-07-25 04:36:55 +00:00 |
| _tensor_str.py | Revert "Fix max_width computation in _tensor_str._Formatter (#126859)" | 2025-07-30 16:56:32 +00:00 |
| _tensor.py | [MPS] Enable dlpack integration (#158888) | 2025-07-24 18:05:41 +00:00 |
| _thread_safe_fork.py | | |
| _torch_docs.py | Add basic torch.hash_tensor op (#154149) | 2025-07-23 22:28:03 +00:00 |
| _utils_internal.py | [draft export] logging (#159004) | 2025-07-31 05:52:13 +00:00 |
| _utils.py | [BE][1/16] fix typos in torch/ (#156311) | 2025-07-09 11:02:22 +00:00 |
| _VF.py | | |
| _vmap_internals.py | | |
| _weights_only_unpickler.py | | |
| CMakeLists.txt | Migrate c10/macros/cmake_macros.h.in to torch/headeronly (#158035) | 2025-07-15 19:52:59 +00:00 |
| custom_class_detail.h | | |
| custom_class.h | [BE][1/16] fix typos in torch/ (#156311) | 2025-07-09 11:02:22 +00:00 |
| extension.h | | |
| functional.py | Fix atleast_{1,2,3}d() with no arguments description (#156042) | 2025-07-28 06:25:23 +00:00 |
| header_only_apis.txt | Move BFloat16.h to headeronly (#159412) | 2025-07-31 15:29:17 +00:00 |
| hub.py | [BE][1/16] fix typos in torch/ (#156311) | 2025-07-09 11:02:22 +00:00 |
| library.h | [BE][1/16] fix typos in torch/ (#156311) | 2025-07-09 11:02:22 +00:00 |
| library.py | [BE] remove torch deploy - conditionals (#158288) | 2025-07-29 17:40:49 +00:00 |
| overrides.py | Add basic torch.hash_tensor op (#154149) | 2025-07-23 22:28:03 +00:00 |
| py.typed | | |
| quasirandom.py | | |
| random.py | | |
| return_types.py | | |
| script.h | | |
| serialization.py | Reduce random reads for offset metadata when calling torch.load under FakeTensorMode (#157931) | 2025-07-17 22:17:52 +00:00 |
| storage.py | | |
| torch_version.py | | |
| types.py | | |
| version.py.tpl | | |