Name | Last commit message | Last commit date
--- | --- | ---
_awaits | |
_C | [compiled autograd] introduce verbose logs, add autograd node info to graph (#124954) | 2024-04-27 01:10:37 +00:00
_C_flatbuffer | |
_custom_op | Move schema inference to torch._library (#124199) | 2024-04-19 17:56:30 +00:00
_decomp | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
_dispatch | |
_dynamo | Dynamo x autograd.Function supports setup_context (#124802) | 2024-04-27 04:57:13 +00:00
_export | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
_functorch | Dynamo x autograd.Function supports setup_context (#124802) | 2024-04-27 04:57:13 +00:00
_higher_order_ops | Made FlexAttention rewrite getitem calls to use aten.index in score_mod (#124799) | 2024-04-26 17:22:13 +00:00
_inductor | [cpu] [inductor] decompose bmm for memory bound in lowering (#124826) | 2024-04-27 00:19:10 +00:00
_lazy | |
_library | Fix torch.library.register_fake's module reporting (#125037) | 2024-04-26 20:53:33 +00:00
_logging | [compiled autograd] introduce verbose logs, add autograd node info to graph (#124954) | 2024-04-27 01:10:37 +00:00
_numpy | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
_prims | [BE]: Update ruff to 0.4.1 (#124549) | 2024-04-21 14:06:23 +00:00
_prims_common | Made FlexAttention rewrite getitem calls to use aten.index in score_mod (#124799) | 2024-04-26 17:22:13 +00:00
_refs | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
_subclasses | Fix mypy issues in fake_tensor.py (#124428) | 2024-04-26 15:35:53 +00:00
_vendor | |
amp | add new API torch.amp.is_autocast_available (#124938) | 2024-04-26 08:45:20 +00:00
ao | [Quant][PT2E] Enable linear-binary(-unary) post-op recipe for X86Inductor quantizer (#122387) | 2024-04-27 02:40:57 +00:00
autograd | Dynamo x autograd.Function supports setup_context (#124802) | 2024-04-27 04:57:13 +00:00
backends | preferred blas library; cublaslt gemm implementation (#122106) | 2024-04-22 15:38:22 +00:00
compiler | Fix broken docs (#124940) | 2024-04-26 19:24:52 +00:00
contrib | |
cpu | |
csrc | [Distributed] [7/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#124987) | 2024-04-27 07:22:27 +00:00
cuda | [BE]: Apply ruff FURB 118. (#124743) | 2024-04-26 14:34:52 +00:00
distributed | elastic/rendezvous: make barrier and rank assignment operations O(n) instead of O(n^2) (#124982) | 2024-04-27 02:21:44 +00:00
distributions | [BE]: Update ruff to 0.4.1 (#124549) | 2024-04-21 14:06:23 +00:00
export | [BE]: Apply ruff FURB 118. (#124743) | 2024-04-26 14:34:52 +00:00
fft | |
func | |
futures | |
fx | [compiled autograd] introduce verbose logs, add autograd node info to graph (#124954) | 2024-04-27 01:10:37 +00:00
jit | [BE]: TRY002 - Ban raising vanilla exceptions (#124570) | 2024-04-21 22:26:40 +00:00
legacy | |
lib | |
linalg | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
masked | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
monitor | |
mps | Conform torch.mps to device module interface (#124676) | 2024-04-23 18:38:48 +00:00
mtia | torch.mtia module for MTIA device backend (#123612) | 2024-04-26 16:17:54 +00:00
multiprocessing | |
nested | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
nn | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
onnx | rename ort to maia in dynamo's ort backend. (#124967) | 2024-04-26 19:09:29 +00:00
optim | add fused_sgd_kernel support for CPU device (#123629) | 2024-04-23 08:28:19 +00:00
package | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
profiler | [BE]: Apply ruff FURB 118. (#124743) | 2024-04-26 14:34:52 +00:00
quantization | |
signal | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
sparse | Fix a bug in retrieving approximate bsr_dense_addmm kernel meta data (#124371) | 2024-04-24 13:59:18 +00:00
special | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
testing | Fix torch.library.register_fake's module reporting (#125037) | 2024-04-26 20:53:33 +00:00
utils | Fix broken docs (#124940) | 2024-04-26 19:24:52 +00:00
xpu | [BE]: TRY002 - Ban raising vanilla exceptions (#124570) | 2024-04-21 22:26:40 +00:00
__config__.py | |
__future__.py | |
__init__.py | torch.mtia module for MTIA device backend (#123612) | 2024-04-26 16:17:54 +00:00
_appdirs.py | |
_classes.py | |
_compile.py | |
_custom_ops.py | [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) | 2024-04-17 19:29:34 +00:00
_deploy.py | |
_guards.py | Restore CompileContext as well in backwards (#124626) | 2024-04-23 14:39:52 +00:00
_jit_internal.py | [BE]: TRY002 - Ban raising vanilla exceptions (#124570) | 2024-04-21 22:26:40 +00:00
_linalg_utils.py | |
_lobpcg.py | |
_lowrank.py | |
_meta_registrations.py | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
_namedtensor_internals.py | |
_ops.py | Fix mypy issues in fake_tensor.py (#124428) | 2024-04-26 15:35:53 +00:00
_python_dispatcher.py | |
_size_docs.py | Added a docstring for torch.Size.numel. (#124186) | 2024-04-19 09:23:02 +00:00
_sources.py | |
_storage_docs.py | |
_streambase.py | [BE] enable ruff rule RSE and remove useless parentheses in raise statements (#124261) | 2024-04-17 19:29:34 +00:00
_tensor_docs.py | |
_tensor_str.py | |
_tensor.py | Add testing and fix weights_only load for quantized types and nn.Parameters with python attrs (#124330) | 2024-04-23 04:13:26 +00:00
_torch_docs.py | Fix global flake8 issues (#124771) | 2024-04-26 15:35:53 +00:00
_utils_internal.py | [ROCm] Triton upstream AMD backend integration (#121801) | 2024-04-25 20:44:27 +00:00
_utils.py | torch.mtia module for MTIA device backend (#123612) | 2024-04-26 16:17:54 +00:00
_VF.py | |
_vmap_internals.py | |
_weights_only_unpickler.py | Add testing and fix weights_only load for quantized types and nn.Parameters with python attrs (#124330) | 2024-04-23 04:13:26 +00:00
abi-check.cpp | |
CMakeLists.txt | [rfc] opentelemetry in pytorch (#122999) | 2024-04-21 15:20:21 +00:00
custom_class_detail.h | |
custom_class.h | |
extension.h | |
functional.py | |
hub.py | [BE]: TRY002 - Ban raising vanilla exceptions (#124570) | 2024-04-21 22:26:40 +00:00
library.h | Revert "Verify types in custom op schemas (#124520)" | 2024-04-26 08:42:11 +00:00
library.py | Fix torch.library.register_fake's module reporting (#125037) | 2024-04-26 20:53:33 +00:00
overrides.py | torch.mtia module for MTIA device backend (#123612) | 2024-04-26 16:17:54 +00:00
py.typed | |
quasirandom.py | |
random.py | |
README.txt | |
return_types.py | |
script.h | |
serialization.py | |
storage.py | Add testing and fix weights_only load for quantized types and nn.Parameters with python attrs (#124330) | 2024-04-23 04:13:26 +00:00
torch_version.py | |
types.py | |
version.py.tpl | |