pytorch/torch/distributed/tensor/_ops
Tianyu Liu 5d6ac2dced [dtensor] add op support for select_backward and slice_backward (#150357)
Inherits and rebases @awgu's PR https://github.com/pytorch/pytorch/pull/149071
- fixed an issue in `select_backward` and an issue in `slice_backward`
- removed `_experimental_ops.py`, as it became empty

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150357
Approved by: https://github.com/awgu, https://github.com/XilunWu
2025-04-01 19:15:25 +00:00
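
A minimal usage sketch (not part of this commit) of what the new op coverage enables: autograd through basic indexing on a DTensor, which dispatches to `aten.select_backward` / `aten.slice_backward` in the backward pass. It assumes a single-rank gloo process group and a CPU mesh so it can run standalone; the `torch.distributed.tensor` import paths are those of recent PyTorch releases.

```python
import os

import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Shard, distribute_tensor

# Single-rank gloo group so the sketch runs without torchrun.
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)
mesh = init_device_mesh("cpu", (1,))

x = torch.randn(4, 8, requires_grad=True)
dx = distribute_tensor(x, mesh, [Shard(0)])  # leaf DTensor, sharded on dim 0

# dx[1] lowers to aten.select.int and dx[:, 2:6] to aten.slice.Tensor;
# their autograd backwards are aten.select_backward / aten.slice_backward,
# the ops this commit teaches DTensor sharding propagation to handle.
loss = dx[1].full_tensor().sum() + dx[:, 2:6].full_tensor().sum()
loss.backward()
print(dx.grad)  # a DTensor carrying the accumulated gradient

dist.destroy_process_group()
```

Before this change, the backward of such indexing on a sharded DTensor would fail sharding propagation; with `select_backward` and `slice_backward` supported in `_tensor_ops.py`, the gradient lands on `dx.grad` without an explicit redistribute.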
__init__.py [dtensor] add op support for select_backward and slice_backward (#150357) 2025-04-01 19:15:25 +00:00
_common_rules.py [BE]: Apply ruff PERF403 to use dict comprehensions more often (#149257) 2025-03-18 00:46:07 +00:00
_conv_ops.py [DTensor][conv] add DTensor convolution_backward op support for case where the input Tensor has requires_grad=False (#142278) 2025-02-10 07:06:40 +00:00
_einsum_strategy.py [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) 2025-02-28 07:35:56 +00:00
_embedding_ops.py [dtensor] refactor sharding prop to handle cross mesh computation (#147869) 2025-03-04 18:30:44 +00:00
_math_ops.py [dtensor] refactor sharding prop to handle cross mesh computation (#147869) 2025-03-04 18:30:44 +00:00
_matrix_ops.py Add batch dim sharding rule to sdpa (#149253) 2025-03-18 07:54:02 +00:00
_pointwise_ops.py Let pointwise sharding take arg with largest number of dims in case of ties (#149721) 2025-03-24 15:39:39 +00:00
_random_ops.py [dtensor] refactor sharding prop to handle cross mesh computation (#147869) 2025-03-04 18:30:44 +00:00
_tensor_ops.py [dtensor] add op support for select_backward and slice_backward (#150357) 2025-04-01 19:15:25 +00:00
_view_ops.py [dtensor] refactor sharding prop to handle cross mesh computation (#147869) 2025-03-04 18:30:44 +00:00
utils.py [BE][PYFMT] migrate PYFMT for torch.{distributed,distributions} to ruff format (#144547) 2025-02-28 07:35:56 +00:00