Aaron Orenstein
00ffeca1b1
PEP585 update - torch/distributed (#145164)
...
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145164
Approved by: https://github.com/bobrenjc93
2025-01-21 04:23:29 +00:00
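PEP 585 (Python 3.9+) lets the builtin collection types be used as generics directly, so annotations like `typing.List[int]` become `list[int]`. A hypothetical before/after sketch of the kind of annotation change such an update makes (illustrative names, not actual PyTorch code):

```python
# Hypothetical illustration of a PEP 585 annotation update.

# Before: generics imported from typing
from typing import Dict, List, Optional

def gather_ranks_old(groups: Dict[str, List[int]]) -> Optional[List[int]]:
    return groups.get("default")

# After: builtin generics per PEP 585 (Python 3.9+), no typing imports needed
def gather_ranks(groups: dict[str, list[int]]) -> list[int]:
    return groups.get("default", [])

print(gather_ranks({"default": [0, 1, 2]}))  # [0, 1, 2]
```

The runtime behavior is unchanged; only the annotations move from `typing` aliases to the builtin types.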
PyTorch MergeBot
6374332d33
Revert "PEP585 update - torch/distributed (#145164)"
...
This reverts commit 6cb186e279.
Reverted https://github.com/pytorch/pytorch/pull/145164 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing an inductor test ([comment](https://github.com/pytorch/pytorch/pull/145164#issuecomment-2602875679))
2025-01-20 16:46:46 +00:00
Aaron Orenstein
6cb186e279
PEP585 update - torch/distributed (#145164)
...
See #145101 for details.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145164
Approved by: https://github.com/bobrenjc93
2025-01-20 00:19:01 +00:00
Xuehai Pan
22d258427b
[BE][Easy] enable UFMT for torch/distributed/_shard/ (#128867)
...
Part of #123062
- #123062
Pull Request resolved: https://github.com/pytorch/pytorch/pull/128867
Approved by: https://github.com/fegin
ghstack dependencies: #128866
2024-06-18 14:39:25 +00:00
fduwjj
b4c8186774
[BE][1/N] Add deprecate msg to Sharded Partial and Replicate Tensor (#94928)
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94928
Approved by: https://github.com/wanchaol
2023-02-16 03:23:53 +00:00
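Adding a deprecation message to a class typically means emitting a `DeprecationWarning` at construction time. A minimal sketch of that pattern, using a hypothetical stand-in class (not the actual `torch.distributed._shard` code):

```python
import warnings

class ReplicatedTensor:
    """Hypothetical stand-in; the real class lives in torch.distributed._shard."""

    def __init__(self) -> None:
        warnings.warn(
            "ReplicatedTensor is deprecated and will be removed in a future release.",
            DeprecationWarning,
            stacklevel=2,  # attribute the warning to the caller, not this frame
        )

# Demonstrate that constructing the class emits the warning:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    ReplicatedTensor()

print(caught[0].category.__name__)  # DeprecationWarning
```

`stacklevel=2` makes the warning point at the user's call site, which is the conventional choice for deprecation notices.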
Rodrigo Kumpera
d2078fac11
[dist.checkpoint] Cleanup usage of collectives and introduce narrow helper (#81828)
...
Introduce a _DistWrapper class that wraps a process group and provides functional
variants of collectives. It works without c10d enabled and is robust to
exceptions.
Introduce tensor_narrow_n, which handles narrowing over multiple dimensions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81828
Approved by: https://github.com/wanchaol
2022-07-27 12:59:58 +00:00
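Narrowing over multiple dimensions amounts to applying a single-dimension narrow (start/length slice along one axis) once per dimension. A minimal pure-Python sketch of the idea behind a `tensor_narrow_n`-style helper, using nested lists instead of torch tensors (the real helper would operate on tensors via `Tensor.narrow`; names here are illustrative):

```python
def narrow_nested(x, dim, start, length):
    """Slice a nested list along one dimension: keep [start, start+length)."""
    if dim == 0:
        return x[start:start + length]
    return [narrow_nested(row, dim - 1, start, length) for row in x]

def narrow_n(x, specs):
    """Apply several (dim, start, length) narrows in sequence."""
    for dim, start, length in specs:
        x = narrow_nested(x, dim, start, length)
    return x

# Narrow a 3x4 grid to rows 1-2 and columns 1-2:
grid = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12]]
print(narrow_n(grid, [(0, 1, 2), (1, 1, 2)]))  # [[6, 7], [10, 11]]
```

Because each narrow touches only one dimension, composing them gives an arbitrary multi-dimensional sub-block without any data copies beyond the slices themselves.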
wanchaol
be354d8139
[shard] Add basic math ops to ShardedTensor and add ReplicatedTensor inter-op
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73703
This PR adds basic math ops (+, -, *, /) to ShardedTensor and adds ReplicatedTensor inter-op with ShardedTensor for those math ops. This enables ShardedTensor (op) ReplicatedTensor to avoid communication in certain cases.
Differential Revision: [D34560867](https://our.internmc.facebook.com/intern/diff/D34560867/)
**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D34560867/)!
Approved by: https://github.com/pritamdamania87
2022-04-12 04:25:10 +00:00
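The reason a sharded-with-replicated elementwise op can skip communication: every rank already holds the full replicated operand, so each rank can slice it to match its local shard and compute the result locally. A conceptual sketch with plain Python lists standing in for tensors (hypothetical names, not the actual `torch.distributed._shard` API):

```python
# Illustrative sketch only; plain lists stand in for tensors.

class Replicated:
    """Every rank holds the full value."""
    def __init__(self, values):
        self.values = values

class Sharded:
    """Each rank holds one contiguous 1-D shard of the global tensor."""
    def __init__(self, shard, offset):
        self.shard = shard    # this rank's local shard
        self.offset = offset  # global index of shard[0]

    def add(self, other):
        # Sharded + Replicated: slice the replicated operand to this rank's
        # range and add locally -- no collective communication needed.
        if isinstance(other, Replicated):
            local = other.values[self.offset:self.offset + len(self.shard)]
            return Sharded([a + b for a, b in zip(self.shard, local)],
                           self.offset)
        raise NotImplementedError

# Rank 1 of a 2-rank job, holding elements [2, 4) of a length-4 tensor:
s = Sharded([30, 40], offset=2)
r = Replicated([1, 2, 3, 4])
print(s.add(r).shard)  # [33, 44]
```

Had both operands been sharded differently, a resharding collective would be required first; the replicated case sidesteps that entirely.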