pytorch/torch/testing/_internal
codegen/
data/
distributed/ [ROCm] Skip *_stress_cuda and test_ddp_apply_optim_in_backward* (#155724) 2025-06-12 21:18:04 +00:00
generated/
opinfo/ Fixes OpInfo gradient checks for ctc_loss (#154590) 2025-06-10 19:56:39 +00:00
optests/
test_module/
__init__.py
autocast_test_lists.py
autograd_function_db.py
check_kernel_launches.py
common_cuda.py Revert "[ROCm] Bump AOTriton to 0.10b (#156290)" 2025-06-20 15:35:25 +00:00
common_device_type.py Typo fixes for "overridden" in comments and function names (#155944) 2025-06-14 03:37:38 +00:00
common_dist_composable.py
common_distributed.py add device generalisation support for distributed tests (#152471) 2025-06-20 07:35:42 +00:00
common_dtype.py
common_fsdp.py mypy 1.16.0 (#155821) 2025-06-14 18:18:43 +00:00
common_jit.py
common_methods_invocations.py [MPS] Fix bug in 3d coords calculation (#156375) 2025-06-19 19:56:15 +00:00
common_mkldnn.py
common_modules.py [MPS] Migrate hardswish (forward and backward) to Metal kernel (#155479) 2025-06-11 20:58:46 +00:00
common_mps.py [MPS] Implement backward pass for interpolate_trilinear (#156373) 2025-06-20 05:41:24 +00:00
common_nn.py
common_optimizers.py [MPS] Enable RProp test for non-contiguous (#155439) 2025-06-09 21:29:09 +00:00
common_pruning.py
common_quantization.py [Reland][pytorch] Patch the _is_conv_node function (#154473) 2025-05-30 00:41:03 +00:00
common_quantized.py Ensure mxfp8 scaled_mm works w/ max-autotune (#152744) 2025-05-06 01:16:57 +00:00
common_subclass.py
common_utils.py Provide access to the cudaGraph_t underlying a CUDAGraph. (#155164) 2025-06-18 03:39:28 +00:00
composite_compliance.py Add torch.Tensor._make_wrapper_subclass to torch/_C/__init__.pyi (#154022) 2025-05-27 14:10:00 +00:00
custom_op_db.py
custom_tensor.py
dist_utils.py
dynamo_test_failures.py [ca] default on in CI, with fallback for tests in test/compiled_autograd_skips/ (#155480) 2025-06-16 18:45:03 +00:00
fake_config_module.py
fake_config_module2.py
fake_config_module3.py
hop_db.py Add a HOP to bypass tracing of a wrapper function while tracing the wrapped function (#153487) 2025-05-22 04:24:38 +00:00
hypothesis_utils.py
inductor_utils.py Custom FX pass for inductor's backend registration (#154841) 2025-06-14 17:29:54 +00:00
jit_metaprogramming_utils.py
jit_utils.py
logging_tensor.py Add torch.Tensor._make_wrapper_subclass to torch/_C/__init__.pyi (#154022) 2025-05-27 14:10:00 +00:00
logging_utils.py
quantization_torch_package_models.py
static_module.py
subclasses.py
torchbind_impls.py
triton_utils.py [test][triton pin] add device-side TMA tests (AOTI + test_triton_kernels) (#155827) 2025-06-15 20:24:19 +00:00
two_tensor.py