Commit Graph

9 Commits

Aaron Orenstein
00ffeca1b1 PEP585 update - torch/distributed (#145164)
See #145101 for details.
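For context, PEP 585 made the builtin containers subscriptable as generic types, so annotations imported from `typing` (e.g. `List[int]`, `Dict[str, int]`) can be written as `list[int]` and `dict[str, int]` on Python 3.9+. A minimal sketch of the kind of change such an update makes (the function here is hypothetical, not from the PR):

```python
# Hypothetical function showing the shape of a PEP 585 migration:
# builtin containers are subscriptable, so the typing imports go away.

# Before:
#     from typing import Dict, List
#     def bucket_ranks(ranks: List[int], per_node: int) -> Dict[int, List[int]]: ...

# After (Python 3.9+):
def bucket_ranks(ranks: list[int], per_node: int) -> dict[int, list[int]]:
    """Group ranks by node index, assuming per_node ranks per node."""
    buckets: dict[int, list[int]] = {}
    for r in ranks:
        buckets.setdefault(r // per_node, []).append(r)
    return buckets

print(bucket_ranks([0, 1, 2, 3], 2))  # {0: [0, 1], 1: [2, 3]}
```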

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145164
Approved by: https://github.com/bobrenjc93
2025-01-21 04:23:29 +00:00
PyTorch MergeBot
6374332d33 Revert "PEP585 update - torch/distributed (#145164)"
This reverts commit 6cb186e279.

Reverted https://github.com/pytorch/pytorch/pull/145164 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it is failing an inductor test ([comment](https://github.com/pytorch/pytorch/pull/145164#issuecomment-2602875679))
2025-01-20 16:46:46 +00:00
Aaron Orenstein
6cb186e279 PEP585 update - torch/distributed (#145164)
See #145101 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145164
Approved by: https://github.com/bobrenjc93
2025-01-20 00:19:01 +00:00
Xuehai Pan
22d258427b [BE][Easy] enable UFMT for torch/distributed/_shard/ (#128867)
Part of #123062

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128867
Approved by: https://github.com/fegin
ghstack dependencies: #128866
2024-06-18 14:39:25 +00:00
Aaron Orenstein
3a0d088517 Flip default value for mypy disallow_untyped_defs [5/11] (#127842)
See #127836 for details.
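For reference, `disallow_untyped_defs` makes mypy report an error for any function definition that lacks type annotations. Flipping the default typically looks like the fragment below (a hedged sketch; the actual PyTorch config layout and module list may differ):

```ini
; mypy.ini (sketch): require annotations everywhere by default...
[mypy]
disallow_untyped_defs = True

; ...then exempt not-yet-typed modules explicitly while they migrate.
[mypy-torch.distributed._shard.*]
disallow_untyped_defs = False
```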

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127842
Approved by: https://github.com/oulgen
2024-06-08 18:49:18 +00:00
Aaron Gokaslan
8fce9a09cd [BE]: pyupgrade Python to 3.8 - imports and object inheritance only (#94308)
Apply parts of pyupgrade to torch, starting with the safest changes.
This PR does only two things: it removes explicit inheritance from `object` and removes unused `__future__` imports.
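As an illustration, the two changes look like this (the class here is a hypothetical example, not code from the PR):

```python
# Hypothetical before/after of the two pyupgrade changes described above.

# Before (Python 2 era idioms):
#
#     from __future__ import print_function  # unused on Python 3
#
#     class NodeInfo(object):  # explicit object base is redundant on Python 3
#         ...

# After: no __future__ import, implicit object inheritance.
class NodeInfo:
    def __init__(self, rank):
        self.rank = rank

print(NodeInfo(3).rank)  # 3
```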

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94308
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-07 21:10:56 +00:00
Rodrigo Kumpera
270c518be0 [checkpoint] Implement interop between Tensor and Sharded Tensor (#78120)
This allows loading a Tensor from a checkpoint that stores a ShardedTensor under the same FQN.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/78120
Approved by: https://github.com/pritamdamania87
2022-06-16 15:31:09 +00:00
Rodrigo Kumpera
668599a673 Rewrite ShardedTensor.gather to use dist.gather instead of gather_object (#77272)
gather_object is problematic when used with Tensors, as they can unpickle on the wrong
device and lead to deadlocks or spurious failures.

This change also introduces an RPC workaround for EFA when initializing TensorPipe until
they properly address it.

Fixes #73935

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77272
Approved by: https://github.com/pritamdamania87
2022-05-17 02:14:40 +00:00
Wanchao Liang
d6c5295ec8 [shard] Extensible ShardingSpec (#72130)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72130

1. Refactor ShardingSpec: decouple PlacementSpec and ShardingSpec, as they are essentially two separate concepts.
2. Introduce a customizable ShardingSpec. With the help of two APIs, users can inherit from it and define their own customized sharding spec:
  * ShardingSpec.build_metadata: takes a tensor shape and defines how to shard a tensor of that shape across ranks, returning a ShardedTensorMetadata that describes the layout.
  * ShardingSpec.shard: defines how to shard a tensor into a ShardedTensor.
3. Refactor `ShardedTensor.__init__` and `shard_parameter` to take the new ShardingSpec, enabling these two APIs to support both ChunkShardingSpec and EnumerableShardingSpec.
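The two-API extensibility pattern can be sketched without torch as follows; the method name mirrors the commit (build_metadata), but `ShardMetadata`, `ShardedTensorMetadata`, and `ChunkAlongDim0` here are simplified hypothetical stand-ins, not the real torch.distributed._shard types:

```python
# Torch-free sketch of a custom sharding spec: subclasses describe how a
# tensor of a given shape is laid out across ranks. All classes below are
# simplified stand-ins for illustration only.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ShardMetadata:
    offsets: list   # start indices of this shard in the global tensor
    sizes: list     # sizes of this shard
    placement: str  # e.g. "rank:0/cpu"

@dataclass
class ShardedTensorMetadata:
    shards: list

class ShardingSpec(ABC):
    @abstractmethod
    def build_metadata(self, tensor_sizes, world_size) -> ShardedTensorMetadata:
        """Describe how a tensor of this shape is laid out across ranks."""

class ChunkAlongDim0(ShardingSpec):
    """Chunk dim 0 as evenly as possible, one shard per rank."""
    def build_metadata(self, tensor_sizes, world_size):
        dim0 = tensor_sizes[0]
        chunk = -(-dim0 // world_size)  # ceiling division
        shards = []
        for rank in range(world_size):
            start = rank * chunk
            size = max(0, min(chunk, dim0 - start))
            shards.append(ShardMetadata(
                offsets=[start],
                sizes=[size] + list(tensor_sizes[1:]),
                placement=f"rank:{rank}/cpu",
            ))
        return ShardedTensorMetadata(shards)

meta = ChunkAlongDim0().build_metadata([10, 4], world_size=4)
print([s.sizes for s in meta.shards])  # [[3, 4], [3, 4], [3, 4], [1, 4]]
```

A real implementation would also provide a shard method that moves local tensor data according to this metadata; the sketch covers only the layout-description half.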

ghstack-source-id: 149788833

Test Plan: wait for ci

Reviewed By: fduwjj

Differential Revision: D33923403

fbshipit-source-id: 3236beec8543da651dfd89c32b6968745c59301e
(cherry picked from commit 5994b33a7a6ad96b1fad2e121c6bdd83a877346e)
2022-02-24 04:30:48 +00:00