pytorch/torch/distributed
Last commit: 2024-08-25 11:16:04 +00:00
| Name | Last commit | Date |
| --- | --- | --- |
| `_composable` | [FSDP2] Add cache for FSDP wrapper class (#134135) | 2024-08-22 00:41:30 +00:00 |
| `_shard` | [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200) | 2024-08-15 15:50:19 +00:00 |
| `_sharded_tensor` | [BE][Easy] enable UFMT for torch/distributed/ (#128870) | 2024-06-22 18:53:28 +00:00 |
| `_sharding_spec` | [BE][Easy] enable UFMT for torch/distributed/ (#128870) | 2024-06-22 18:53:28 +00:00 |
| `_symmetric_memory` | [micro_pipeline_tp] support all _scaled_mm args (#131984) | 2024-08-05 21:44:37 +00:00 |
| `_tensor` | Revert "[dtensor][MTPG] make sharding prop lru cache not shared among threads (#134294)" | 2024-08-25 11:16:04 +00:00 |
| `_tools` | [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200) | 2024-08-15 15:50:19 +00:00 |
| `algorithms` | [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200) | 2024-08-15 15:50:19 +00:00 |
| `autograd` | [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) | 2024-06-18 13:51:53 +00:00 |
| `benchmarks` | [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200) | 2024-08-15 15:50:19 +00:00 |
| `checkpoint` | Fix DDPLoadBalancingPlanner docstring (#134044) | 2024-08-21 21:28:22 +00:00 |
| `elastic` | [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200) | 2024-08-15 15:50:19 +00:00 |
| `examples` | [BE][Easy] enable UFMT for torch/distributed/ (#128870) | 2024-06-22 18:53:28 +00:00 |
| `fsdp` | [BE] typing for decorators - fx/_compatibility (part 1) (#134202) | 2024-08-22 17:07:33 +00:00 |
| `launcher` | [BE][Easy] enable UFMT for torch/distributed/ (#128870) | 2024-06-22 18:53:28 +00:00 |
| `nn` | Revert "added persistent option to buffers and namedbuffers (#132994)" | 2024-08-09 18:14:53 +00:00 |
| `optim` | Revert "[BE] typing for decorators - _jit_internal (#131573)" | 2024-07-28 03:29:32 +00:00 |
| `pipelining` | [BE] typing for decorators - fx/_compatibility (part 1) (#134202) | 2024-08-22 17:07:33 +00:00 |
| `rpc` | [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200) | 2024-08-15 15:50:19 +00:00 |
| `tensor` | Revert "[dtensor] move DTensor to public namespace (#133113)" | 2024-08-19 05:00:19 +00:00 |
| `__init__.py` | Remove ProcessGroupRoundRobin (#132888) | 2024-08-08 01:07:40 +00:00 |
| `_checkpointable.py` | [torchrec][pt-d][model store] introduce LocalShardsWrapper for DTensor (#129150) | 2024-06-21 01:58:51 +00:00 |
| `_composable_state.py` | | |
| `_functional_collectives_impl.py` | [BE][Easy] enable UFMT for torch/distributed/ (#128870) | 2024-06-22 18:53:28 +00:00 |
| `_functional_collectives.py` | Revert "[dtensor] move DTensor to public namespace (#133113)" | 2024-08-19 05:00:19 +00:00 |
| `_state_dict_utils.py` | Revert "[dtensor] move DTensor to public namespace (#133113)" | 2024-08-19 05:00:19 +00:00 |
| `argparse_util.py` | Flip default value for mypy disallow_untyped_defs [5/11] (#127842) | 2024-06-08 18:49:18 +00:00 |
| `c10d_logger.py` | [DCP] Fix duplicated logging messages when enable both c10d and dcp l… (#130423) | 2024-07-11 13:43:39 +00:00 |
| `collective_utils.py` | [BE][Easy] enable UFMT for torch/distributed/ (#128870) | 2024-06-22 18:53:28 +00:00 |
| `constants.py` | [BE][Easy] enable UFMT for torch/distributed/ (#128870) | 2024-06-22 18:53:28 +00:00 |
| `CONTRIBUTING.md` | Clean up distributed/CONTRIBUTING.md (#128450) | 2024-06-22 02:41:22 +00:00 |
| `device_mesh.py` | [DeviceMesh] Allow _flatten() to take in an optional mesh_dim_name (#134048) | 2024-08-25 10:36:01 +00:00 |
| `distributed_c10d.py` | [CP] Rewrite ring attention backward algorithm and enablement APIs (#131351) | 2024-08-15 16:41:51 +00:00 |
| `launch.py` | Flip default value for mypy disallow_untyped_defs [6/11] (#127843) | 2024-06-08 18:49:29 +00:00 |
| `logging_handlers.py` | [BE][Easy] enable UFMT for torch/distributed/ (#128870) | 2024-06-22 18:53:28 +00:00 |
| `remote_device.py` | [BE][Easy] fix ruff rule needless-bool (SIM103) (#130206) | 2024-07-14 08:17:52 +00:00 |
| `rendezvous.py` | [BE][Easy] enable UFMT for torch/distributed/ (#128870) | 2024-06-22 18:53:28 +00:00 |
| `run.py` | fix torchrun log message (#131652) | 2024-07-25 14:50:10 +00:00 |
| `utils.py` | [Reland][PT-D] Relaxed contract to allow Sequence[nn.Module] (#127773) (#130947) | 2024-07-17 22:40:13 +00:00 |