pytorch/test/distributed/_shard
pritam a81be44410 Fix shard_module to appropriately deal with sub process groups.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79264

The `shard_module` API didn't work correctly with a sub process group (sub-pg),
since `dist.scatter` takes the global rank, not the group-local rank, as input for `src`.

Fix this by passing the appropriate global rank to `dist.scatter`, as sketched below.
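
A minimal sketch (not the actual PR diff) of the pattern in question: when scattering within a sub-pg, the `src` argument to `dist.scatter` must be the source's global rank rather than its rank inside the subgroup. The helper name `scatter_within_subgroup` and its arguments are illustrative assumptions; the sketch presumes `dist.init_process_group` was already called on every rank (e.g. via torchrun) and a PyTorch version that exposes `dist.get_global_rank`.

```python
import torch
import torch.distributed as dist

def scatter_within_subgroup(numel, subgroup_ranks):
    # Every rank in the global group must call new_group, even ranks
    # that are not members of the resulting subgroup.
    subgroup = dist.new_group(ranks=subgroup_ranks)

    my_global_rank = dist.get_rank()
    if my_global_rank not in subgroup_ranks:
        return None  # this rank does not participate in the scatter

    # Collectives expect `src` as a GLOBAL rank even when a sub-pg is
    # passed via `group`. Translate "rank 0 of the subgroup" into the
    # corresponding global rank before calling dist.scatter.
    src_global = dist.get_global_rank(subgroup, 0)

    output = torch.empty(numel)
    scatter_list = None
    if my_global_rank == src_global:
        # Only the source rank provides the list of tensors to scatter.
        scatter_list = [
            torch.full((numel,), float(i)) for i in range(len(subgroup_ranks))
        ]

    dist.scatter(output, scatter_list, src=src_global, group=subgroup)
    return output
```

Passing the subgroup-local index (e.g. `src=0`) instead of `src_global` is the kind of mismatch this change guards against: it either targets the wrong process or hangs the collective when global rank 0 is not in the subgroup.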

Differential Revision: [D37062766](https://our.internmc.facebook.com/intern/diff/D37062766/)

Approved by: https://github.com/fduwjj, https://github.com/wanchaol
2022-06-12 03:50:45 +00:00
checkpoint [checkpoint] Synchronize error handling across all ranks (#77091) 2022-05-18 21:24:09 +00:00
sharded_optim [shard] fix some imports in tests (#73309) 2022-02-24 04:30:48 +00:00
sharded_tensor Use appropriate dtype for sharded linear implementation. 2022-06-10 07:32:15 +00:00
sharding_plan Fix shard_module to appropriately deal with sub process groups. 2022-06-12 03:50:45 +00:00
sharding_spec [PT-D] Fix Sharding spec inference to avoid invalid chunk sharding to be inferred as chunkshardingspec (#75296) 2022-04-06 16:11:14 -07:00
test_partial_tensor.py Fix partial_tensor ops. 2022-05-17 08:21:38 +00:00
test_replicated_tensor.py [shard] fix failed tests in sharded tensor 2022-05-18 23:21:47 +00:00
test_sharder.py [shard] Extensible Sharder and ShardingPlanner (#75844) 2022-04-28 04:11:10 +00:00