pytorch/torch/distributed (latest commit: 2025-07-25 02:56:34 +00:00)
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| _composable | [FSDP][Replicate] added replicate function that uses FSDP instead of DDP (#158207) | 2025-07-23 22:53:06 +00:00 |
| _shard | [BE][5/16] fix typos in torch/ (torch/distributed/) (#156315) | 2025-06-23 02:57:28 +00:00 |
| _sharded_tensor | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| _sharding_spec | | |
| _symmetric_memory | [SymmMem] Add NVSHMEM broadcast support into Triton (#158514) | 2025-07-21 22:23:26 +00:00 |
| _tensor | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| _tools | [BE] remove torch deploy - conditionals (#158288) | 2025-07-23 20:27:28 +00:00 |
| algorithms | Fix non-bitwise type annotations for Tensor operators (see #145838) (#146845) | 2025-06-24 15:41:34 +00:00 |
| autograd | [remove untyped defs] batch 1 (#157011) | 2025-06-30 23:54:40 +00:00 |
| benchmarks | | |
| checkpoint | [BE] add noqa for flake8 rule B036: found except BaseException without re-raising (#159043) | 2025-07-25 02:56:34 +00:00 |
| elastic | [BE] add noqa for flake8 rule B036: found except BaseException without re-raising (#159043) | 2025-07-25 02:56:34 +00:00 |
| examples | Support XPU in memory tracker (#150703) | 2025-06-12 21:33:52 +00:00 |
| fsdp | [BE] add noqa for flake8 rule B036: found except BaseException without re-raising (#159043) | 2025-07-25 02:56:34 +00:00 |
| launcher | [2/n]passing event log handler to record function calls (#155457) | 2025-06-12 19:35:08 +00:00 |
| nn | [BE]: Update ruff to 0.11.8 (#153249) | 2025-05-12 18:30:52 +00:00 |
| optim | [BE][5/16] fix typos in torch/ (torch/distributed/) (#156315) | 2025-06-23 02:57:28 +00:00 |
| pipelining | [PP] Add eval() API to schedule (#157795) | 2025-07-16 23:48:45 +00:00 |
| rpc | [BE] add noqa for flake8 rule B036: found except BaseException without re-raising (#159043) | 2025-07-25 02:56:34 +00:00 |
| tensor | Support sort and scatter_add strategy (#159022) | 2025-07-24 18:33:18 +00:00 |
| __init__.py | Make torch.distributed.breakpoint() set a long timeout (#158481) | 2025-07-18 02:18:43 +00:00 |
| _checkpointable.py | [BE]: Backport runtime_checkable perf improvements/behavior from 3.12 (#155130) | 2025-06-06 13:28:05 +00:00 |
| _composable_state.py | | |
| _dist2.py | dist2: add support for passing custom configs directly to PG (#158147) | 2025-07-15 00:02:54 +00:00 |
| _functional_collectives_impl.py | | |
| _functional_collectives.py | [BE] remove torch deploy - conditionals (#158288) | 2025-07-23 20:27:28 +00:00 |
| _serialization.py | [BE][5/16] fix typos in torch/ (torch/distributed/) (#156315) | 2025-06-23 02:57:28 +00:00 |
| _state_dict_utils.py | [dcp] add new checkpoint staging to preserve storage sharing and support mutable state_dicts (#155192) | 2025-06-19 02:04:21 +00:00 |
| argparse_util.py | | |
| c10d_logger.py | | |
| collective_utils.py | [BE][5/16] fix typos in torch/ (torch/distributed/) (#156315) | 2025-06-23 02:57:28 +00:00 |
| constants.py | [BE][5/16] fix typos in torch/ (torch/distributed/) (#156315) | 2025-06-23 02:57:28 +00:00 |
| CONTRIBUTING.md | [BE][EZ] Minor doc fixes (#158574) | 2025-07-18 10:34:55 -05:00 |
| device_mesh.py | [DeviceMesh][ez] Make the logic within flatten simpler (#158999) | 2025-07-24 15:40:13 +00:00 |
| distributed_c10d.py | [doc] Updates to distributed.md for XCCL backend (#155834) | 2025-07-22 21:01:43 +00:00 |
| launch.py | [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) | 2025-02-28 07:35:56 +00:00 |
| logging_handlers.py | | |
| remote_device.py | | |
| rendezvous.py | [BE][5/16] fix typos in torch/ (torch/distributed/) (#156315) | 2025-06-23 02:57:28 +00:00 |
| run.py | [BE][5/16] fix typos in torch/ (torch/distributed/) (#156315) | 2025-06-23 02:57:28 +00:00 |
| utils.py | Refactor to use torch.accelerator.device_index instead of torch.cuda.device for generic device context manager (#148880) | 2025-04-25 09:45:25 +00:00 |