pytorch/test/distributed/_composable
Latest commit: 2f38eece7c by Nikita Vedeneev
[CUDA][cuBLAS] addmm -- some refactoring for easier navigation between the Lt and non-Lt paths (#163955)
As per title. Additionally, some Lt selection conditions are revisited and some redundancy is removed (especially in the ROCm vs non-ROCm paths).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163955
Approved by: https://github.com/ngimel, https://github.com/eqy
Date: 2025-10-21 20:48:12 +00:00
Name | Last commit | Date
fsdp/ | [CUDA][cuBLAS] addmm -- some refactoring for easier navigation between the Lt and non-Lt paths (#163955) | 2025-10-21 20:48:12 +00:00
test_composability/ | [Replicate][Test] tests that pp model grads are the same as single-device model grads (#164890) | 2025-10-08 21:07:05 +00:00
test_checkpoint.py | [2/N] Port 5 _composable distributed test to Intel GPU (#159241) | 2025-09-15 06:24:58 +00:00
test_contract.py | PEP585 update - test (#145176) | 2025-01-22 04:48:28 +00:00
test_replicate_mixed_precision.py | [FSDP][Replicate] tests replicate type casting behavior and edge cases in mixed precision (#162861) | 2025-09-30 22:03:23 +00:00
test_replicate_training.py | [FSDP][Replicate] tests replicate is composable with tp (#162853) | 2025-09-30 21:29:54 +00:00
test_replicate_with_compiler.py | [1/N] Apply UP035 rule in tests (#163947) | 2025-09-29 01:42:01 +00:00
test_replicate_with_fsdp.py | [replicate][be] improved readability of test case description (#160128) | 2025-08-07 22:51:58 +00:00
test_replicate.py | [2/N] Port 5 _composable distributed test to Intel GPU (#159241) | 2025-09-15 06:24:58 +00:00