pytorch/torch/distributed/tensor/parallel
Will Constable 346343e6b5 [DeviceMesh] Make _validate_tp_mesh_dim support 3D (#125763)
Currently, a 3D mesh with a submesh sliced out for TP fails this
check.

According to @wanchaol in [this
comment](https://github.com/pytorch/pytorch/pull/125250#discussion_r1586653669)
it should be OK to remove these checks. That said, I would appreciate a
more careful review here, since I am not sure whether there are other
edge cases where these checks are important.
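A minimal sketch of the scenario described above: slicing a TP submesh out of a 3D device mesh and handing it to `parallelize_module`. The mesh shape and the dim names ("dp", "pp", "tp") are illustrative assumptions, and the snippet assumes an already-initialized 8-rank CUDA process group.

```python
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import ColwiseParallel, parallelize_module

# Assumes torch.distributed is initialized with 8 ranks (e.g. via torchrun).
# 3D mesh: 8 ranks laid out as 2 x 2 x 2; dim names are illustrative.
mesh_3d = init_device_mesh("cuda", (2, 2, 2), mesh_dim_names=("dp", "pp", "tp"))

# Slice out the 1D TP submesh. Before this change, passing a submesh of a
# 3D mesh to parallelize_module tripped the _validate_tp_mesh_dim check.
tp_mesh = mesh_3d["tp"]

model = torch.nn.Linear(16, 16).cuda()
parallelize_module(model, tp_mesh, ColwiseParallel())
```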

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125763
Approved by: https://github.com/wz337, https://github.com/wanchaol
2024-05-08 21:22:11 +00:00
__init__.py [TP] Introduce Sequence Parallel Style for LayerNorm/RMSNorm/Dropout (#121295) 2024-03-07 02:04:59 +00:00
_data_parallel_utils.py [reland] pass shape/stride during tensor unflatten (#117340) 2024-01-13 19:33:47 +00:00
_utils.py [DeviceMesh] Make _validate_tp_mesh_dim support 3D (#125763) 2024-05-08 21:22:11 +00:00
api.py [TP] Add wildcard support (#122968) 2024-04-02 21:23:39 +00:00
ddp.py [2D] Remove enable_2d_with_fsdp() API and make remove_enable_2d_with_fsdp private (#112473) 2023-11-16 01:14:00 +00:00
fsdp.py [FSDP1][2D] Fix FSDP1 2D state_dict to use run_check=False (#123802) 2024-04-24 01:25:11 +00:00
input_reshard.py fix: docstring error in torch/distributed module (#113241) 2023-11-09 19:10:20 +00:00
loss.py [dtensor][TP] check funcol calls and improve doc for loss parallel (#121366) 2024-03-08 01:41:31 +00:00
style.py [tp] add kwargs support to prepare_module_input (#124114) 2024-04-22 21:46:31 +00:00