pytorch/torch/distributed/_local_tensor
Luca Wehrstedt 58879bfafa [DeviceMesh] Prefer using _layout over _mesh for all sorts of things (#165554)
The goal of this PR is to avoid storing the explicit `mesh` Tensor inside each DeviceMesh: instead, compute it on the fly when the end user needs it, and replace all of its internal usages with `_layout` and the newly introduced `_global_rank_permutation` Tensor. The name of this attribute is up for debate. The advantage of the `_global_rank_permutation` Tensor is that it is _the same_ Tensor for the root mesh and all its children, so it doesn't need to be copied or reallocated.
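To make the idea concrete, here is a minimal sketch, assuming a layout made of per-dimension sizes and strides plus a single permutation Tensor shared by the root mesh and its submeshes. The class and attribute names below (other than `_global_rank_permutation`) are illustrative assumptions, not PyTorch's actual internals:

```python
import torch

# Sketch only: a DeviceMesh-like object that does NOT store an explicit
# `mesh` Tensor. It keeps a layout (shape + strides) and a shared
# `_global_rank_permutation` Tensor, and materializes the mesh on demand.
class _MeshSketch:
    def __init__(self, shape, strides, global_rank_permutation, offset=0):
        self._shape = tuple(shape)      # layout: size of each mesh dim
        self._strides = tuple(strides)  # layout: stride of each mesh dim
        self._offset = offset           # linear offset of this submesh
        # The same Tensor object can be shared by the root mesh and all
        # of its children, so it is never copied or reallocated.
        self._global_rank_permutation = global_rank_permutation

    @property
    def mesh(self):
        # Rebuild the explicit mesh Tensor on the fly: turn the layout
        # into linear indices, then map them through the permutation.
        idx = torch.full(self._shape, self._offset, dtype=torch.long)
        for dim, (size, stride) in enumerate(zip(self._shape, self._strides)):
            steps = torch.arange(size, dtype=torch.long) * stride
            view = [1] * len(self._shape)
            view[dim] = size
            idx = idx + steps.view(view)
        return self._global_rank_permutation[idx]

perm = torch.arange(8)  # identity permutation: linear position == global rank
root = _MeshSketch(shape=(2, 4), strides=(4, 1), global_rank_permutation=perm)
print(root.mesh)  # tensor([[0, 1, 2, 3], [4, 5, 6, 7]])
```

A 1-D child of `root` along its second dimension would reuse `perm` unchanged and differ only in its layout (e.g. shape `(4,)`, stride `(1,)`, and an offset), which is what makes the on-the-fly reconstruction cheap.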

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165554
Approved by: https://github.com/fduwjj
2025-10-17 17:57:51 +00:00
__init__.py [DeviceMesh] Prefer using _layout over _mesh for all sorts of things (#165554) 2025-10-17 17:57:51 +00:00
_c10d.py [RFC] Add pyrefly to lintrunner (#165179) 2025-10-16 20:07:09 +00:00