Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48467
The current API's forward method only accepted a Tensor or a Tuple of
Tensors. This change makes it more generic by accepting any Sequence of Tensors.
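The kind of input normalization this enables can be sketched in plain Python (a hypothetical helper, not the actual Pipe implementation; note that a real version would special-case Tensors, which are not Sequences):

```python
from collections.abc import Sequence

def normalize_inputs(inputs):
    # A single (non-sequence) input becomes a one-element tuple;
    # any Sequence of inputs (list, tuple, ...) is canonicalized to a tuple.
    if isinstance(inputs, Sequence):
        return tuple(inputs)
    return (inputs,)
```

Canonicalizing to a tuple internally means the rest of the pipeline code never has to care whether the caller passed a list or a tuple.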
ghstack-source-id: 118436340
Test Plan: waitforbuildbot
Reviewed By: rohan-varma
Differential Revision: D25181944
fbshipit-source-id: 4db251dad52c01abc69f3d327788f2e4289e6c9d
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47829
As per proposal in https://github.com/pytorch/pytorch/issues/44827,
the API needs to return an RRef to support inter-host pipelining.
For now, we just return a local RRef and only support pipelining on a single
host, but having this change in the API upfront ensures we don't make any
BC-breaking changes later.
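The resulting call pattern looks roughly like the sketch below, using a toy stand-in for `torch.distributed.rpc.RRef` (all names here are hypothetical; only the single-host case is modeled):

```python
class LocalRRef:
    """Toy stand-in for an RRef wrapping a value that lives on this host."""
    def __init__(self, value):
        self._value = value

    def local_value(self):
        # On a single host, the wrapped output is available locally.
        return self._value

def pipe_forward(x):
    # Placeholder for the pipeline computation. Returning an RRef rather
    # than the raw output lets the same signature later cover the
    # inter-host case without a BC-breaking change.
    return LocalRRef(x * 2)

rref = pipe_forward(3)
out = rref.local_value()
```

Callers unwrap the RRef explicitly, so switching the wrapped value from a local result to a remote one later does not change the forward signature.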
ghstack-source-id: 118366784
Test Plan: waitforbuildbot
Reviewed By: rohan-varma
Differential Revision: D24914022
fbshipit-source-id: e711e7d12efa45645f752f0e5e776a3d845f3ef5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48432
As per our design in https://github.com/pytorch/pytorch/issues/44827,
changing the API such that the user places modules on appropriate devices
instead of having a `balance` and `devices` parameter that decides this.
This design allows us to use RemoteModule in the future.
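Roughly, the change moves device placement out of Pipe's constructor arguments and into user code. A plain-Python sketch of the two styles (all names hypothetical; real code would use torch modules and `nn.Sequential`):

```python
class Stage:
    """Toy stand-in for an nn.Module that records its device."""
    def __init__(self):
        self.device = "cpu"

    def to(self, device):
        self.device = device
        return self

# Old style (removed): Pipe decided placement from balance/devices, e.g.
#   Pipe(model, balance=[1, 1], devices=["cuda:0", "cuda:1"])

# New style: the user places each stage on its device before wrapping.
stages = [Stage().to("cuda:0"), Stage().to("cuda:1")]
placement = [s.device for s in stages]
```

Because placement is now the caller's responsibility, a stage could just as well be a RemoteModule living on another host, which is what makes this design forward-compatible.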
ghstack-source-id: 117491992
Test Plan: waitforbuildbot
Reviewed By: mrshenli
Differential Revision: D25172970
fbshipit-source-id: 61ea37720b92021596f69788e45265ac9cd41746
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46804
As per our design in https://github.com/pytorch/pytorch/issues/44827,
changing the API such that the user places modules on appropriate devices
instead of having a `balance` and `devices` parameter that decides this.
This design allows us to use RemoteModule in the future.
ghstack-source-id: 116479842
Test Plan: waitforbuildbot
Reviewed By: mrshenli
Differential Revision: D24524219
fbshipit-source-id: 9973172c2bb7636572cdc37ce06bf8368638a463
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44090
This is an initial commit pulling in the torchgpipe fork at
https://github.com/facebookresearch/fairscale.
The purpose of this commit is to just pull in the code and ensure all tests and
builds work fine. We will slowly modify this to match our intended API
mentioned in https://fb.quip.com/txurAV3zIFox#RPZACAfAKMq. Follow-up PRs will
address further changes needed on top of the initial commit.
We're pulling the code into the `torch.distributed._pipeline.sync` package. The
package is private on purpose since there is a lot of work (ex: docs, API
changes etc.) that needs to go in before we can actually officially support
this.
ghstack-source-id: 114864254
Test Plan:
1) waitforbuildbot
2) Ran all tests on my devgpu
Reviewed By: mrshenli
Differential Revision: D23493316
fbshipit-source-id: fe3c8b7dadeeb86abdc00e8a8652491b0b16743a