pytorch/torch/distributed/tensor/_ops
Sherlock Huang bb7c9a2d41 [DTensor] Fix DTensor.mean with uneven sharding (#163241)
Fixes #162692

When the input is unevenly sharded, redistribute it as Replicate before computing the mean.
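A small pure-Python sketch (not the DTensor implementation itself) of why uneven sharding breaks a mean that is reduced from per-shard partial results, and why replicating first restores correctness:

```python
# Illustrative sketch only: with uneven shards, averaging the per-shard
# means weights the smaller shard too heavily. Gathering/replicating the
# full tensor on every rank before reducing avoids the bias.
data = [1.0, 2.0, 3.0, 4.0, 5.0]

# Uneven sharding across 2 ranks: rank 0 holds 3 elements, rank 1 holds 2.
shards = [data[:3], data[3:]]

true_mean = sum(data) / len(data)  # 3.0

# Naive reduction of local means implicitly assumes equal shard sizes.
local_means = [sum(s) / len(s) for s in shards]
naive_mean = sum(local_means) / len(local_means)  # 3.25 -- incorrect

# After replication, every rank sees the full tensor and computes the
# correct global mean locally.
replicated = [list(data) for _ in shards]
replicated_mean = sum(replicated[0]) / len(replicated[0])  # 3.0
```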

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163241
Approved by: https://github.com/dcci
2025-09-18 19:53:51 +00:00
__init__.py [dtensor] add op support for select_backward and slice_backward (#150357) 2025-04-01 19:15:25 +00:00
_common_rules.py [dtensor] refactor PlacementStrategy -> OpSpec, move utils to OpSchema (#155592) 2025-06-12 00:51:36 +00:00
_conv_ops.py [DTensor][conv] add DTensor convolution_backward op support for case where the input Tensor has requires_grad=False (#142278) 2025-02-10 07:06:40 +00:00
_einsum_strategy.py Fix einsum strategy shard dim > ndim (#157593) 2025-07-08 20:27:17 +00:00
_embedding_ops.py [BE][15/16] fix typos in torch/ (torch/distributed/tensor/) (#156605) 2025-07-17 12:08:33 +00:00
_math_ops.py [DTensor] Fix DTensor.mean with uneven sharding (#163241) 2025-09-18 19:53:51 +00:00
_matrix_ops.py Fix SDPA sharding when return_debug_mask is False (#159205) 2025-07-26 17:41:42 +00:00
_pointwise_ops.py [DTensor] Add _foreach_pow to sharding propagation list. (#162895) 2025-09-15 21:14:06 +00:00
_random_ops.py [1/N] cost coverage improvement (#157504) 2025-07-10 23:55:45 +00:00
_tensor_ops.py [DTensor] add op support for aten.unbind.int (#162560) 2025-09-11 00:58:23 +00:00
_view_ops.py [dynamo][hop] Introduce Local Map HOP (#161458) 2025-09-17 09:32:38 +00:00
utils.py [DTensor] Fix DTensor.mean with uneven sharding (#163241) 2025-09-18 19:53:51 +00:00