pytorch/torch/distributed
_composable Add early_stop kwarg to torch.utils.checkpoint (#160781) 2025-08-26 22:32:35 +00:00
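The `_composable` entry above was last touched by #160781, which per its title adds an `early_stop` kwarg to `torch.utils.checkpoint`. A minimal sketch of non-reentrant activation checkpointing, assuming the kwarg is spelled as in the PR title (its default and exact plumbing are not confirmed here):

    import torch
    from torch.utils.checkpoint import checkpoint

    def block(x):
        # Recomputable forward work: activations are not saved here,
        # they are rebuilt during the backward pass instead.
        return torch.relu(x @ x.T)

    x = torch.randn(8, 8, requires_grad=True)
    # early_stop (assumed spelling, from the PR title) lets recomputation
    # halt once every tensor needed for this backward has been rebuilt.
    out = checkpoint(block, x, use_reentrant=False, early_stop=True)
    out.sum().backward()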
_pycute [CuTe] Change the logic of pycute manipulation ops like coalesce, complement from co-lex to lex (#162690) 2025-09-16 19:53:45 +00:00
_shard [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
_sharded_tensor [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) 2025-02-28 07:35:56 +00:00
_sharding_spec
_symmetric_memory [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
_tensor [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) 2025-02-28 07:35:56 +00:00
_tools [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
algorithms [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
autograd [remove untyped defs] batch 1 (#157011) 2025-06-30 23:54:40 +00:00
benchmarks [BE][CI] bump ruff to 0.8.4 (#143753) 2024-12-24 12:24:10 +00:00
checkpoint [DCP] DTensor slice dequantization with proper block alignment (#163532) 2025-09-23 16:48:16 +00:00
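The `checkpoint` directory is torch.distributed.checkpoint (DCP), referenced by the commit above. A minimal single-rank save/load sketch using the public `dcp.save` / `dcp.load` entry points; the path and the gloo/env settings are illustrative, and under torchrun each rank would write its own shard:

    import os
    import torch
    import torch.distributed as dist
    import torch.distributed.checkpoint as dcp

    # Single-process setup for illustration only.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = torch.nn.Linear(4, 4)
    state = {"model": model.state_dict()}
    dcp.save(state, checkpoint_id="/tmp/ckpt")  # per-rank files plus metadata
    dcp.load(state, checkpoint_id="/tmp/ckpt")  # loads in place into `state`

    dist.destroy_process_group()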
elastic [BE] Delete all pre py-3.10 checks (#163653) 2025-09-23 23:22:53 +00:00
examples Support XPU in memory tracker (#150703) 2025-06-12 21:33:52 +00:00
fsdp [FSDP2] idempotent reset_sharded_param: no-op if _local_tensor is already padded (#163130) 2025-09-18 09:20:37 +00:00
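The fsdp entry tracks FSDP2, whose per-module `fully_shard` API shards parameters into DTensors (the `_local_tensor` padding the commit above refers to is the per-rank shard). A shape-only sketch, assuming a recent release that exports `fully_shard` from torch.distributed.fsdp and a multi-GPU torchrun launch; the model is a stand-in:

    import torch
    import torch.distributed as dist
    from torch.distributed.fsdp import fully_shard

    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    model = torch.nn.Sequential(
        torch.nn.Linear(16, 16), torch.nn.Linear(16, 16)
    ).cuda()
    for layer in model:
        fully_shard(layer)  # shard each layer's parameters across ranks
    fully_shard(model)      # root wrap so any remaining params are sharded too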
launcher 154849 Add support to handle SIGUSR1 and SIGUSR2 in multiprocessing (#160690) 2025-09-09 22:23:06 +00:00
nn [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
optim [BE][5/16] fix typos in torch/ (torch/distributed/) (#156315) 2025-06-23 02:57:28 +00:00
pipelining Inspect schedule IR comms (#162996) 2025-09-16 16:59:06 +00:00
rpc [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
tensor Rename to _debug_mode.py to make it private (#163534) 2025-09-23 04:27:10 +00:00
__init__.py [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
_C_stubs.py [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
_checkpointable.py [BE]: Backport runtime_checkable perf improvements/behavior from 3.12 (#155130) 2025-06-06 13:28:05 +00:00
_composable_state.py [FSDP2] Make module-to-state mapping use weakrefs (#139650) 2024-11-05 02:16:52 +00:00
_dist2.py [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
_distributed_c10d.py [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
_functional_collectives_impl.py PEP585 update - torch/distributed (#145164) 2025-01-21 04:23:29 +00:00
_functional_collectives.py [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
_mesh_layout.py [DeviceMesh] Make CuTe layout as mesh layout to be ready for using in DeviceMesh (#162414) 2025-09-15 17:04:41 +00:00
_serialization.py [BE][5/16] fix typos in torch/ (torch/distributed/) (#156315) 2025-06-23 02:57:28 +00:00
_state_dict_utils.py fix-unpin-memory-tensor-param (#160992) 2025-08-26 21:55:25 +00:00
argparse_util.py
c10d_logger.py PEP585 update - torch/distributed (#145164) 2025-01-21 04:23:29 +00:00
collective_utils.py [C10D] add _summarize_ranks util (#160284) 2025-08-28 00:17:53 +00:00
constants.py [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
CONTRIBUTING.md fix torch/distributed contributing doc (#158934) 2025-07-28 17:01:05 +00:00
device_mesh.py [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
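device_mesh.py provides `init_device_mesh`, the entry point for building N-D meshes of ranks. A 2-D mesh sketch; the dimension names are illustrative and an 8-GPU torchrun launch is assumed:

    from torch.distributed.device_mesh import init_device_mesh

    # 2 x 4 mesh over 8 ranks: outer dim for data parallel, inner for tensor parallel.
    mesh = init_device_mesh("cuda", (2, 4), mesh_dim_names=("dp", "tp"))
    tp_group = mesh["tp"].get_group()  # 1-D submesh, then its process group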
distributed_c10d.py [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594) 2025-09-22 21:12:18 +00:00
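distributed_c10d.py is the core process-group and collectives module behind `torch.distributed`. The canonical usage pattern, assuming env-style rendezvous variables set by a launcher:

    import torch
    import torch.distributed as dist

    dist.init_process_group("gloo")  # MASTER_ADDR/PORT, RANK, WORLD_SIZE from env
    t = torch.ones(4)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)  # in-place sum across all ranks
    print(f"rank {dist.get_rank()}: {t}")
    dist.destroy_process_group()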
launch.py [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) 2025-02-28 07:35:56 +00:00
logging_handlers.py PEP585 update - torch/distributed (#145164) 2025-01-21 04:23:29 +00:00
remote_device.py
rendezvous.py [BE][5/16] fix typos in torch/ (torch/distributed/) (#156315) 2025-06-23 02:57:28 +00:00
run.py Support XPU in --nproc-per-node option to torchrun (#159474) 2025-09-12 08:32:04 +00:00
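run.py implements the `torchrun` entry point (per the commit above, `--nproc-per-node` now also accepts XPU device counts). A minimal script plus launch line; the script name is illustrative:

    # Launch with: torchrun --nproc-per-node=4 demo.py
    import torch.distributed as dist

    dist.init_process_group()  # backend inferred; rendezvous env vars come from torchrun
    print(f"rank {dist.get_rank()} of {dist.get_world_size()}")
    dist.destroy_process_group()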
utils.py Refactor to use torch.accelerator.device_index instead of torch.cuda.device for generic device context manager (#148880) 2025-04-25 09:45:25 +00:00