Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 12:21:27 +01:00
In this series of PRs we intend to refactor the pipeline parallelism test cases to make them completely device agnostic. These changes include the following approaches:

- Allowing for multiple device types using `instantiate_device_type_tests`
- Replacing calls to the CUDA stream with `torch.get_device_module(device)` wherever it applies

This should improve usability across all devices. This PR adds support for the following devices:

- CPU (wherever applicable)
- CUDA
- HPU
- XPU

To add another device, new users can simply append their device to the device list.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146472
Approved by: https://github.com/H-Huang
Files:

- artifacts
- __init__.py
- model_registry.py
- schedule_registry.py
- test_backward.py
- test_microbatch.py
- test_pipe.py
- test_schedule_multiproc.py
- test_schedule.py
- test_stage.py
- test_transformer.py
- test_unflatten.py