| Name | Last commit message | Last commit date |
| ---- | ------------------- | ---------------- |
| codegen | | |
| data | | |
| distributed | [c10d] Enhance Error Logging in new_subgroups() for Non-Divisible World Sizes (#154124) | 2025-05-23 17:12:43 +00:00 |
| generated | | |
| opinfo | Treat dim=[] same as dim=None (#153570) | 2025-05-20 22:44:29 +00:00 |
| optests | | |
| test_module | | |
| __init__.py | | |
| autocast_test_lists.py | | |
| autograd_function_db.py | | |
| check_kernel_launches.py | | |
| common_cuda.py | [TEST][ATen][CUDA] Skip row-wise scaled matrix mmultiplication tests on sm_120+ (#152814) | 2025-05-08 19:34:20 +00:00 |
| common_device_type.py | Fix instantiate_device_type_tests() for 3rd-party devices (#152177) | 2025-04-30 06:25:59 +00:00 |
| common_dist_composable.py | | |
| common_distributed.py | [c10d] Add support for testing SIGABRT return (#153167) | 2025-05-26 00:56:05 +00:00 |
| common_dtype.py | | |
| common_fsdp.py | Enable FSDP tests on XPU device (#147518) | 2025-03-04 23:49:37 +00:00 |
| common_jit.py | | |
| common_methods_invocations.py | torch.tensordot: performance improvements when contracting to a scalar. (#145936) | 2025-05-13 10:57:30 +00:00 |
| common_mkldnn.py | [BE]: Apply ruff PERF403 to use dict comprehensions more often (#149257) | 2025-03-18 00:46:07 +00:00 |
| common_modules.py | Remove outdated skipCUDAIfCudnnVersionLessThan decoration (#148940) | 2025-03-13 18:02:50 +00:00 |
| common_mps.py | [MPS][BE] Move fmod/remainder to Metal ops (#154280) | 2025-05-24 01:45:33 +00:00 |
| common_nn.py | ROCm: Enable tf32 testing on test_nn (#148945) | 2025-04-28 23:01:04 +00:00 |
| common_optimizers.py | | |
| common_pruning.py | | |
| common_quantization.py | Revert "Patch the _is_conv_node function (#153749)" | 2025-05-23 19:04:20 +00:00 |
| common_quantized.py | Ensure mxfp8 scaled_mm works w/ max-autotune (#152744) | 2025-05-06 01:16:57 +00:00 |
| common_subclass.py | | |
| common_utils.py | [BE][CI][Easy] Run lintrunner on generated .pyi stub files (#150732) | 2025-05-27 14:58:02 +00:00 |
| composite_compliance.py | Add torch.Tensor._make_wrapper_subclass to torch/_C/__init__.pyi (#154022) | 2025-05-27 14:10:00 +00:00 |
| custom_op_db.py | | |
| custom_tensor.py | | |
| dist_utils.py | | |
| dynamo_test_failures.py | | |
| fake_config_module.py | | |
| fake_config_module2.py | | |
| fake_config_module3.py | | |
| hop_db.py | Add a HOP to bypass tracing of a wrapper function while tracing the wrapped function (#153487) | 2025-05-22 04:24:38 +00:00 |
| hypothesis_utils.py | | |
| inductor_utils.py | [Cutlass] Support float8_e4m3fn GEMM (#153890) | 2025-05-22 08:37:33 +00:00 |
| jit_metaprogramming_utils.py | | |
| jit_utils.py | | |
| logging_tensor.py | Add torch.Tensor._make_wrapper_subclass to torch/_C/__init__.pyi (#154022) | 2025-05-27 14:10:00 +00:00 |
| logging_utils.py | | |
| quantization_torch_package_models.py | | |
| static_module.py | | |
| subclasses.py | | |
| torchbind_impls.py | Fakify torchbind objects in compile_fx and add tests for SigridTransformsInstanceTorchBind (#149529) | 2025-03-21 18:58:28 +00:00 |
| triton_utils.py | [AOTInductor] Fix autotuning code's codegen (#150522) | 2025-04-03 00:08:19 +00:00 |
| two_tensor.py | Support subclass constructor capturing in export (#147014) | 2025-03-16 18:19:19 +00:00 |