Wanchao Liang
|
7522ca55f1
|
[tp] additional doc fixes (#94786)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94786
Approved by: https://github.com/fduwjj
|
2023-02-14 04:52:04 +00:00 |
|
Wanchao Liang
|
2db12e3844
|
[tp] minor update to TP docs (#94748)
Minor update to TP docs for the beta release.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94748
Approved by: https://github.com/fduwjj
|
2023-02-13 21:54:19 +00:00 |
|
Aaron Gokaslan
|
1e2d82b8e4
|
[BE] Merge isinstance calls together (#94419)
Simplifies and speeds up isinstance calls by checking for multiple types in a single call.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/94419
Approved by: https://github.com/ezyang
|
2023-02-09 00:47:26 +00:00 |
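The merged-isinstance pattern the PR applies can be sketched as follows (hypothetical function names, not code from the PR):

```python
# Before: chained isinstance calls (hypothetical example, not from the PR)
def is_number_slow(x):
    return isinstance(x, int) or isinstance(x, float)

# After: one isinstance call with a tuple of types, the pattern
# the PR applies across the codebase
def is_number(x):
    return isinstance(x, (int, float))
```

Passing a tuple of types makes a single call do the work of several, which is both shorter and slightly faster than repeated `isinstance` checks.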
|
fduwjj
|
3fb6e119e2
|
[PT-D][TP] Fix the module registration in TP API (#93412)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93412
Approved by: https://github.com/XilunWu
|
2023-02-01 21:03:56 +00:00 |
|
Wanchao Liang
|
9a56997fe1
|
[dtensor][5/N] add cached propagator for TP (#90734)
This PR adds a cached propagator for TP use: it caches the sharding
propagation decision for the same input sharding on an operator, which
can improve eager-mode performance.
Differential Revision: [D42876249](https://our.internmc.facebook.com/intern/diff/D42876249)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90734
Approved by: https://github.com/XilunWu, https://github.com/fduwjj
|
2023-02-01 05:04:08 +00:00 |
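The caching idea can be sketched with a memoized propagation function. All names below are illustrative stand-ins, not the real DTensor internals:

```python
from functools import lru_cache

# Memoize the sharding decision for an (operator, input-sharding)
# signature so repeated eager-mode calls with the same signature skip
# re-running propagation. Hypothetical names, not the real DTensor API.
@lru_cache(maxsize=None)
def propagate_sharding(op_name: str, input_shardings: tuple) -> str:
    # Stand-in for the real sharding propagation logic.
    return f"{op_name}[{', '.join(input_shardings)}]"

propagate_sharding("aten.mm", ("Shard(0)", "Replicate()"))  # computed once
propagate_sharding("aten.mm", ("Shard(0)", "Replicate()"))  # cache hit
```

Since eager mode tends to call the same operators with the same input shardings repeatedly, a cache keyed on that signature avoids redundant propagation work.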
|
fduwjj
|
913866efbf
|
[PT-D][TP] Fix TP API for FQN path based parallelization (#93029)
We had not tested dict-based parallelize_module, and it turns out there were mistakes. This PR:
1. Fixes the error.
2. Adds unit test cases for it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93029
Approved by: https://github.com/wz337
|
2023-01-26 09:10:21 +00:00 |
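A dict-based plan maps fully qualified names (FQNs) to parallel styles, so the API must walk each dotted path to the target submodule. A minimal pure-Python sketch of that resolution (hypothetical classes and style names, not the real API):

```python
# Hypothetical stand-in for nn.Module FQN lookup, to illustrate how a
# dict plan {fqn: parallel_style} maps onto submodules.
class Module:
    def __init__(self, **children):
        self.children = children

    def get_submodule(self, fqn: str):
        mod = self
        for name in fqn.split("."):
            mod = mod.children[name]
        return mod

net = Module(block=Module(attn=Module(), mlp=Module()))
plan = {"block.attn": "ColwiseParallel", "block.mlp": "RowwiseParallel"}
# Resolve each FQN and record which style would be applied where.
applied = {fqn: (net.get_submodule(fqn), style) for fqn, style in plan.items()}
```

A bug in this path resolution would silently apply styles to the wrong (or no) submodule, which is why the PR adds unit tests alongside the fix.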
|
joncrall
|
ad782ff7df
|
Enable xdoctest runner in CI for real this time (#83816)
Builds on #83317 and enables running the doctests; the remaining task is to figure out what is causing the failures.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83816
Approved by: https://github.com/ezyang, https://github.com/malfet
|
2022-12-29 05:32:42 +00:00 |
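The kind of docstring example a doctest runner exercises can be sketched like this (illustrative function, not code from the PR; shown with the stdlib `doctest` module rather than xdoctest itself):

```python
import doctest

def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    """
    return a + b

# Parse and run the docstring example the way a doctest runner would.
parser = doctest.DocTestParser()
test = parser.get_doctest(add.__doc__, {"add": add}, "add", None, 0)
runner = doctest.DocTestRunner(verbose=False)
runner.run(test)
```

Enabling such a runner in CI means every `>>>` example in the codebase's docstrings is executed and checked, so stale examples surface as test failures.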
|
Wanchao Liang
|
9b5e6b029f
|
[tp] ufmt distributed.tensor.parallel (#89969)
cmd: `ufmt format torch/distributed/tensor`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89969
Approved by: https://github.com/fduwjj
|
2022-12-01 20:58:16 +00:00 |
|
Wanchao Liang
|
4451eb24e6
|
Move tensor_parallel out to distributed.tensor folder (#89878)
This PR moves tensor parallel from torch.distributed._tensor.parallel
to torch.distributed.tensor.parallel, to prepare for the beta release.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89878
Approved by: https://github.com/fduwjj
|
2022-11-30 22:13:10 +00:00 |
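After the move, the public import path is `torch.distributed.tensor.parallel` (the old private path was `torch.distributed._tensor.parallel`). A guarded import sketch, since a PyTorch build with distributed support may not be present:

```python
# Guarded import of the new public path; falls back gracefully when a
# PyTorch build with distributed support is not installed.
try:
    from torch.distributed.tensor.parallel import parallelize_module
    tp_available = True
except ImportError:
    parallelize_module = None
    tp_available = False
```

Code written against the old private path should switch to the public one, since underscore-prefixed modules carry no stability guarantees.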
|