pytorch/torch/distributed/_tensor/ops
Wanchao Liang a1aa32e204 [dtensor] tensor ops to use strategy based sharding prop (#100607)
This is the first in a series of PRs that adapts operator impls to use a
strategy based approach: each op utilizes OpStrategy and PlacementStrategy
to generate its own strategy. By utilizing the strategy based
approach along with the op graph, we can enable more advanced op
implementations (decomp becomes possible) and turn sharding prop into
something closer to a constraint satisfaction problem.

This PR alone only adds some basic tensor op strategies, and it directly
works on the op graph that was used for metadata propagation. The tensor ops
added in this PR mainly follow the strategy of one of their args. The next
set of PRs will add strategies for more ops.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100607
Approved by: https://github.com/XilunWu
2023-05-11 02:47:20 +00:00
__init__.py [DTensor][1/N] add DTensor RNG state APIs (#98198) 2023-04-10 23:57:00 +00:00
basic_strategy.py [dtensor] add StrategyType and TupleStrategy (#99435) 2023-04-22 05:39:20 +00:00
common_rules.py [dtensor] tensor ops to use strategy based sharding prop (#100607) 2023-05-11 02:47:20 +00:00
embedding_ops.py [SPMD] Add embedding dense backward prop rule for positional embedding (#100038) 2023-04-27 16:31:51 +00:00
math_ops.py Enable LogSoftmax for SPMD tracing (#98380) 2023-04-06 04:41:37 +00:00
matrix_ops.py [dtensor][6/N] change to a better/safer op registration (#90735) 2023-02-01 05:06:33 +00:00
pointwise_ops.py [dtensor] tensor ops to use strategy based sharding prop (#100607) 2023-05-11 02:47:20 +00:00
random_ops.py [DTensor][3/N] enable aten.native_dropout (#98577) 2023-04-10 23:57:04 +00:00
tensor_ops.py [dtensor] tensor ops to use strategy based sharding prop (#100607) 2023-05-11 02:47:20 +00:00
utils.py [dtensor] tensor ops to use strategy based sharding prop (#100607) 2023-05-11 02:47:20 +00:00
view_ops.py [spmd] quick fix on batch input view issue (#98813) 2023-04-11 14:27:01 +00:00