pytorch/test/distributed/tensor/parallel
fduwjj 2dc5e166a5 [TP][Inference] Enable DTensor TP inference (#110751)
In https://github.com/pytorch/pytorch/pull/109977, we observed that under inference mode, aten.linear does not get decomposed. So instead of enabling sharding propagation for the linear op directly, we call func.decompose so that it gets decomposed into matmul and mm, ops for which sharding propagation already exists.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110751
Approved by: https://github.com/bdhirsh, https://github.com/wanchaol
2023-10-07 18:57:27 +00:00
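The fallback described in the commit message can be illustrated with a minimal, framework-free sketch (all names below — `SHARDING_RULES`, `DECOMPOSITIONS`, `propagate` — are hypothetical stand-ins, not the actual DTensor API): when an op has no sharding-propagation rule of its own, it is decomposed into constituent ops that do.

```python
# Hypothetical sketch of decomposition-based fallback for sharding
# propagation. In real DTensor this is done via func.decompose(); here we
# model it with plain dicts for illustration only.

# Ops assumed to have a sharding-propagation rule registered.
SHARDING_RULES = {"aten.matmul", "aten.mm"}

# Decomposition table: linear falls back to matmul/mm, per the commit
# message (simplified; the real decomposition also handles bias/transpose).
DECOMPOSITIONS = {
    "aten.linear": ["aten.matmul", "aten.mm"],
}

def propagate(op: str) -> list[str]:
    """Return the ops that sharding propagation will actually see."""
    if op in SHARDING_RULES:
        return [op]
    if op in DECOMPOSITIONS:
        # func.decompose()-style fallback: re-dispatch each decomposed op.
        ops: list[str] = []
        for sub in DECOMPOSITIONS[op]:
            ops.extend(propagate(sub))
        return ops
    raise NotImplementedError(f"no sharding rule or decomposition for {op}")
```

Under this model, `propagate("aten.linear")` never needs a linear-specific rule; it resolves entirely through ops the system already supports.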
__init__.py
test_ddp_2d_parallel.py [2D][TP] Enable DDP TP integration with unit test (#106583) 2023-08-17 02:54:17 +00:00
test_fsdp_2d_parallel.py [3/N][2D] Enable training with new 2D flow (#110034) 2023-09-26 09:14:15 +00:00
test_parallelize_api.py [TP] Add an input resharding wrapper for TP and unit test for 2D + AC (#103334) 2023-06-23 04:05:01 +00:00
test_tp_examples.py [TP][Inference] Enable DTensor TP inference (#110751) 2023-10-07 18:57:27 +00:00
test_tp_random_state.py [device_mesh][BE] remove allgather from DM (#105614) 2023-07-27 01:33:05 +00:00
test_tp_style.py [TP] Enable more generic attn in Tensor Parallelism (#100508) 2023-05-07 18:15:49 +00:00
test_view_sharding_dim_change.py