Name | Last commit | Date
_awaits | |
_C | [ROCm] CK Flash Attention Backend (#143695) | 2025-01-03 22:01:36 +00:00
_C_flatbuffer | |
_custom_op | |
_decomp | [Inductor][CPU] disable bernoulli_p decomposition (#143460) | 2024-12-19 11:21:35 +00:00
_dispatch | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
_dynamo | Set enable_trace_contextlib_contextmanager flag to True (#140604) | 2025-01-06 16:56:22 +00:00
_export | Support getattr for tensor subclasses in pre-dispatch export via patching tensor.getattr (#143946) | 2025-01-06 23:55:50 +00:00
_functorch | Support getattr for tensor subclasses in pre-dispatch export via patching tensor.getattr (#143946) | 2025-01-06 23:55:50 +00:00
_higher_order_ops | [user triton] add support for prune_configs_by in @triton.autotune (#142207) | 2025-01-04 03:50:28 +00:00
_inductor | [MPSInductor] Add nan constant generation (#144281) | 2025-01-06 22:13:23 +00:00
_lazy | remove allow-untyped-defs from torch/_lazy/config.py (#143603) | 2024-12-20 05:34:19 +00:00
_library | [BE] typing for decorators (#144161) | 2025-01-04 16:40:09 +00:00
_logging | Revert "Use absolute path path.resolve() -> path.absolute() (#129409)" | 2025-01-04 14:17:20 +00:00
_numpy | [BE][CI] bump ruff to 0.8.4 (#143753) | 2024-12-24 12:24:10 +00:00
_prims | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
_prims_common | Pass allow_rhs_unbacked to the stride test in metadata test too (#143040) | 2024-12-19 09:37:50 +00:00
_refs | [BE] typing for decorators (#144161) | 2025-01-04 16:40:09 +00:00
_strobelight | Propagate callable parameter types using ParamSpec (#142306) (#143797) | 2024-12-29 23:03:14 +00:00
_subclasses | Propagate callable parameter types using ParamSpec (#142306) (#143797) | 2024-12-29 23:03:14 +00:00
_vendor | |
accelerator | torch/accelerator: fix device type comparison (#143541) | 2024-12-23 10:54:53 +00:00
amp | [MPS] Add support for bf16 autocast (#139390) | 2024-11-20 19:52:28 +00:00
ao | remove allow-untyped-defs from ao/nn/sparse/quantized/utils.py (#144232) | 2025-01-06 19:54:27 +00:00
autograd | [BE][CI] bump ruff to 0.8.4 (#143753) | 2024-12-24 12:24:10 +00:00
backends | [ROCm] CK Flash Attention Backend (#143695) | 2025-01-03 22:01:36 +00:00
compiler | Propagate callable parameter types using ParamSpec (#142306) (#144047) | 2025-01-06 16:16:18 +00:00
contrib | [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) | 2024-12-23 14:04:00 +00:00
cpu | |
csrc | Revert "export AOTI_TORCH_EXPORT on Windows. (#140030)" | 2025-01-06 18:15:52 +00:00
cuda | Refine CUDA Stream priority (#143849) | 2024-12-31 11:15:59 +00:00
distributed | Make all-reduce input contiguous in distributed.nn.all_reduce (#144267) | 2025-01-06 22:20:04 +00:00
distributions | Remove some unused type ignores (round 1) (#142325) | 2024-12-09 18:23:46 +00:00
export | Support getattr for tensor subclasses in pre-dispatch export via patching tensor.getattr (#143946) | 2025-01-06 23:55:50 +00:00
fft | [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) | 2024-12-23 14:04:00 +00:00
func | [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) | 2024-12-23 14:04:00 +00:00
futures | [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) | 2024-12-23 14:04:00 +00:00
fx | [TGIF][Easy] Slightly improve the logging for tgif split pass (#143771) | 2025-01-06 21:00:15 +00:00
jit | remove allow-untyped-defs from torch/jit/_passes/_property_propagation.py (#144132) | 2025-01-03 20:07:37 +00:00
legacy | |
lib | Add and use thread-safe strerror (#140472) | 2024-11-19 04:24:17 +00:00
linalg | [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) | 2024-12-23 14:04:00 +00:00
masked | remove allow-untyped-defs for torch/masked/maskedtensor/creation.py (#143321) | 2024-12-17 16:44:50 +00:00
monitor | [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) | 2024-12-23 14:04:00 +00:00
mps | remove allow-untyped-defs from torch/mps/event.py (#144092) | 2025-01-03 01:20:17 +00:00
mtia | Revert "[MTIA] (3/n) Implement PyTorch APIs to query/reset device peak memory usage (#143347)" | 2024-12-21 04:04:16 +00:00
multiprocessing | [BE][CI] bump ruff to 0.8.4 (#143753) | 2024-12-24 12:24:10 +00:00
nested | [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) | 2024-12-23 14:04:00 +00:00
nn | Rewrite _reparametrize_module to use contextmanager (#138203) | 2025-01-06 16:56:22 +00:00
onnx | Revert "Use absolute path path.resolve() -> path.absolute() (#129409)" | 2025-01-04 14:17:20 +00:00
optim | [BE] typing for decorators (#144161) | 2025-01-04 16:40:09 +00:00
package | Revert "Use absolute path path.resolve() -> path.absolute() (#129409)" | 2025-01-04 14:17:20 +00:00
profiler | Migrate from Tuple -> tuple in torch/profiler (#144257) | 2025-01-06 23:34:14 +00:00
quantization | |
signal | [BE] typing for decorators (#144161) | 2025-01-04 16:40:09 +00:00
sparse | [sparse] add extra options to _cslt_spare_mm (#137427) | 2024-11-27 05:32:45 +00:00
special | [BE][Easy] enable PYFMT for torch/[a-s]*/ (#138447) | 2024-12-23 14:04:00 +00:00
testing | Support getattr for tensor subclasses in pre-dispatch export via patching tensor.getattr (#143946) | 2025-01-06 23:55:50 +00:00
utils | [TreeSpec] Support enum in defaultdict (#144235) | 2025-01-07 00:10:46 +00:00
xpu | Add get_stream_from_external API for XPU backend (#141123) | 2024-12-31 11:15:52 +00:00
__config__.py | remove allow-untyped-defs for torch/__config__.py (#143320) | 2024-12-17 00:16:09 +00:00
__future__.py | |
__init__.py | Rename cache limit to recompile limit in configs (#143709) | 2024-12-22 10:03:57 +00:00
_appdirs.py | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
_classes.py | |
_compile.py | [BE] typing for decorators (#144161) | 2025-01-04 16:40:09 +00:00
_custom_ops.py | |
_deploy.py | |
_environment.py | |
_guards.py | [ca] add compiled autograd to CompileId (#141907) | 2024-12-21 00:41:24 +00:00
_jit_internal.py | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
_linalg_utils.py | |
_lobpcg.py | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
_lowrank.py | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
_meta_registrations.py | [BE] typing for decorators (#144161) | 2025-01-04 16:40:09 +00:00
_namedtensor_internals.py | |
_ops.py | Propagate callable parameter types using ParamSpec (#142306) (#144047) | 2025-01-06 16:16:18 +00:00
_python_dispatcher.py | |
_size_docs.py | remove allow-untyped-defs from torch/_size_docs.py (#143942) | 2024-12-29 01:00:46 +00:00
_sources.py | |
_storage_docs.py | |
_streambase.py | |
_tensor_docs.py | |
_tensor_str.py | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
_tensor.py | __cuda_array_interface__: Use "<V2" for bfloat16. (#143042) | 2024-12-14 06:27:52 +00:00
_thread_safe_fork.py | |
_torch_docs.py | [Easy] Add torch.range, torch.arange params optional description (#143731) | 2024-12-24 01:29:24 +00:00
_utils_internal.py | [reland] Kill capture_pre_autograd_graph API (#143426) | 2024-12-18 12:07:09 +00:00
_utils.py | Reraise worker errors as runtime errors in more cases when the original exception can't be constructed (#140911) | 2024-12-14 03:11:36 +00:00
_VF.py | |
_vmap_internals.py | |
_weights_only_unpickler.py | Remove unused Python variables in torch/[_-a]* (#133492) | 2024-12-12 17:39:14 +00:00
abi-check.cpp | |
CMakeLists.txt | Revert "export AOTI_TORCH_EXPORT on Windows. (#140030)" | 2025-01-06 18:15:52 +00:00
custom_class_detail.h | Enable readability-redundant-declaration (#143982) | 2024-12-31 00:20:10 +00:00
custom_class.h | |
extension.h | |
functional.py | |
hub.py | |
library.h | Enable more readability-redundant checks (#143963) | 2024-12-30 14:49:33 +00:00
library.py | make it clearer (in docs) one can double decorate with torch.library.impl_* APIs (#137608) | 2024-12-17 15:13:58 +00:00
overrides.py | [dim_order] raised runtime error when tensor has ambiguous dim order (#141632) | 2024-12-08 23:16:57 +00:00
py.typed | |
quasirandom.py | |
random.py | |
README.txt | |
return_types.py | |
script.h | |
serialization.py | Add config.save.use_pinned_memory_for_d2h to serialization config (#143342) | 2024-12-20 21:01:18 +00:00
storage.py | |
torch_version.py | |
types.py | |
version.py.tpl | |