pytorch/torch/distributed/algorithms
Latest commit: c82c46ccc7 by Will Constable (2024-11-19 01:23:08 +00:00)

[C10D] support group_src/dst in broadcast/reduce ops (#140843)

Also add mypy annotations.

Partially addresses RFC 0042 (pytorch/rfcs#71); see #140460 for more details and motivation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/140843
Approved by: https://github.com/kwen2501
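Per the commit title, `broadcast`/`reduce` gained `group_src`/`group_dst` keyword arguments that address ranks relative to a process group rather than by global rank. A minimal sketch of how that reads in user code follows; the subgroup composition, backend choice, and launch command are illustrative assumptions, not taken from the PR:

```python
# Sketch: group-relative ranks in broadcast/reduce (per #140843).
# Run with e.g.: torchrun --nproc-per-node=4 demo.py
import torch
import torch.distributed as dist


def main() -> None:
    dist.init_process_group("gloo")  # backend choice is illustrative
    rank = dist.get_rank()

    # Subgroup containing global ranks 2 and 3 (group ranks 0 and 1).
    # new_group is collective, so every rank must call it.
    pg = dist.new_group(ranks=[2, 3])

    if rank in (2, 3):
        t = torch.ones(4) * rank

        # Previously the source had to be given as a *global* rank:
        #   dist.broadcast(t, src=2, group=pg)
        # With group_src, rank 0 *within pg* (global rank 2) is the source:
        dist.broadcast(t, group=pg, group_src=0)

        # Likewise for reduce: group_dst addresses the destination
        # relative to pg (group rank 1 == global rank 3 here).
        dist.reduce(t, group=pg, group_dst=1, op=dist.ReduceOp.SUM)

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Without these kwargs, callers had to translate group-relative ranks to global ranks themselves (e.g. via `dist.get_global_rank(group, group_rank)`) before passing `src`/`dst`.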
| Name | Last commit | Date |
|------|-------------|------|
| `_checkpoint` | [BE] [Reland] Make nn.Module state_dict load_state_dict pre-hook and state_dict post-hook public (#131690) | 2024-07-26 18:14:07 +00:00 |
| `_comm_hooks` | [BE]: Update mypy to 1.11.2 (#133816) | 2024-09-16 19:44:11 +00:00 |
| `_optimizer_overlap` | [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) | 2024-06-18 13:51:53 +00:00 |
| `_quantization` | [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) | 2024-06-18 13:51:53 +00:00 |
| `ddp_comm_hooks` | [C10D] support group_src/dst in broadcast/reduce ops (#140843) | 2024-11-19 01:23:08 +00:00 |
| `model_averaging` | Use device-agnostic runtime API in distributed DDP/FSDP instead of cuda device specific. (#137678) | 2024-11-13 05:32:19 +00:00 |
| `__init__.py` | [BE][Easy] enable UFMT for torch/distributed/{algorithms,autograd,benchmarks,checkpoint,elastic}/ (#128866) | 2024-06-18 13:51:53 +00:00 |
| `join.py` | [BE][Easy] enable ruff rule PIE790: unnecessary pass statement (#133200) | 2024-08-15 15:50:19 +00:00 |