This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.
I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, it seems there are no instances of it in our codebase, so I'm enabling it so that it stays that way. :)
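For context, a minimal sketch of the pattern RUF017 flags: flattening a list of lists with `sum()` copies the accumulator on every addition, which is quadratic in the total number of elements. A linear alternative is `itertools.chain.from_iterable` (the variable names here are illustrative, not from the PR):

```python
import itertools

lists = [[1, 2], [3, 4], [5]]

# Quadratic: each `+` builds a brand-new list, so flattening n lists
# of total length m copies O(n * m) elements overall.
flat_quadratic = sum(lists, [])

# Linear: chain.from_iterable walks each element exactly once.
flat_linear = list(itertools.chain.from_iterable(lists))

assert flat_quadratic == flat_linear == [1, 2, 3, 4, 5]
```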
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
This is a new version of #15648 based on the latest master branch.
Unlike the previous PR where I fixed a lot of the doctests in addition to integrating xdoctest, I'm going to reduce the scope here. I'm simply going to integrate xdoctest, and then I'm going to mark all of the failing tests as "SKIP". This will let xdoctest run on the dashboards, provide some value, and still let the dashboards pass. I'll leave fixing the doctests themselves to another PR.
In my initial commit, I do the bare minimum to get something running with failing dashboards. The few tests that I marked as skip are causing segfaults. Running xdoctest results in 293 failed, 201 passed tests. The next commits will be to disable those tests. (unfortunately I don't have a tool that will insert the `#xdoctest: +SKIP` directive over every failing test, so I'm going to do this mostly manually.)
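To illustrate what marking a failing doctest looks like, here is a sketch with a hypothetical function (`scale` is made up for this example; the skipped snippet references torch, which the directive prevents from running on the dashboards):

```python
def scale(x, factor=2.0):
    """Multiply ``x`` by ``factor``.

    Example:
        >>> # xdoctest: +SKIP
        >>> scale(torch.ones(3))  # skipped: would need torch at collection time
        tensor([2., 2., 2.])
    """
    return x * factor
```

The directive is a comment inside the docstring, so adding it never changes the runtime behavior of the function itself.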
Fixes https://github.com/pytorch/pytorch/issues/71105
@ezyang
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82797
Approved by: https://github.com/ezyang
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65190
As described in https://github.com/pytorch/pytorch/issues/65093, there
could be modules which don't have any parameters/buffers. In this case, Pipe
determines that the module should be executed on CPU. However, this might result
in unnecessary GPU-to-CPU transfers when the user expected the module to be
executed on the GPU itself, keeping its inputs and outputs on the GPU.
For this use case, we introduce a `WithDevice` wrapper which can be used to
override which device a particular module should be executed on as part of the
pipeline.
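A minimal pure-Python sketch of the device-resolution logic described above (the class and function names here are stand-ins, not the actual torch implementation; parameters are mocked as dicts so the sketch runs without torch):

```python
class WithDevice:
    """Stand-in for the wrapper: pairs a module with an explicit device."""
    def __init__(self, module, device):
        self.module = module
        self.device = device

class Stage:
    """Mock pipeline-stage module; ``parameters`` may be empty."""
    def __init__(self, parameters=()):
        self.parameters = list(parameters)

def infer_device(stage, default="cpu"):
    """Pick the execution device for a stage.

    An explicit WithDevice override wins; otherwise the device of the
    first parameter is used; a module with no parameters/buffers falls
    back to the default (CPU), which is the behavior this PR lets users
    override.
    """
    if isinstance(stage, WithDevice):
        return stage.device
    if stage.parameters:
        return stage.parameters[0]["device"]
    return default

# A parameter-less module (e.g. an activation) would land on CPU...
assert infer_device(Stage()) == "cpu"
# ...unless the user overrides the placement:
assert infer_device(WithDevice(Stage(), "cuda:0")) == "cuda:0"
```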
Closes: https://github.com/pytorch/pytorch/issues/65093
ghstack-source-id: 138376272
Test Plan:
1) waitforbuildbot
2) unit tests
Reviewed By: SciPioneer
Differential Revision: D31010027
fbshipit-source-id: 4c1c61d3c6feeef341e002e5f7e83dd33ff3a516
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57325
As per the design outlined in
https://github.com/pytorch/pytorch/issues/53952, adding a `NoChunk` wrapper for
pipeline parallelism inputs.
If a Tensor is wrapped with this wrapper, the pipeline implementation does not
split the Tensor across micro-batches and instead just replicates it as-is,
similar to non-Tensors.
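A rough pure-Python sketch of the scatter behavior described above (names and the list-based "tensor" stand-ins are illustrative, not the torch implementation):

```python
class NoChunk:
    """Stand-in for the wrapper: marks a value that must not be split."""
    def __init__(self, value):
        self.value = value

def scatter(inputs, chunks):
    """Split inputs into ``chunks`` micro-batches.

    Plain lists play the role of Tensors and are split along their first
    dimension; NoChunk-wrapped values (and non-tensors) are replicated
    as-is into every micro-batch.
    """
    micro_batches = [[] for _ in range(chunks)]
    for inp in inputs:
        if isinstance(inp, NoChunk):
            for mb in micro_batches:
                mb.append(inp.value)  # replicated, not split
        elif isinstance(inp, list):
            size = len(inp) // chunks
            for i, mb in enumerate(micro_batches):
                mb.append(inp[i * size:(i + 1) * size])  # split into chunks
        else:
            for mb in micro_batches:
                mb.append(inp)  # non-tensor: replicate
    return micro_batches

batches = scatter([[1, 2, 3, 4], NoChunk([9, 9])], chunks=2)
assert batches == [[[1, 2], [9, 9]], [[3, 4], [9, 9]]]
```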
ghstack-source-id: 132009305
Test Plan:
1) unit tests.
2) waitforbuildbot.
Reviewed By: SciPioneer
Differential Revision: D28109277
fbshipit-source-id: ee78c814c715d207d2796aba40b756a8e1834898
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57226
As per the design outlined in
https://github.com/pytorch/pytorch/issues/53952, this PR adds support for
non-Tensor args in the pipeline.
The `NoChunk` wrapper hasn't been implemented yet and will be implemented in a
follow up PR.
ghstack-source-id: 132008356
Test Plan:
1) unit tests
2) waitforbuildbot
Reviewed By: SciPioneer
Differential Revision: D28083564
fbshipit-source-id: 5f09da238eec0167feff76fe98916dedb0a9ae4e
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55441
This is the first step towards supporting the proposal outlined in
https://github.com/pytorch/pytorch/issues/53952.
In this PR I've ensured `Pipe.forward()` accepts an `*inputs` argument instead of
just a single input as before. This lays the groundwork for supporting
non-Tensor and generic arguments to the Pipe API. In this PR we still only
support Tensors; non-Tensor support will come in future PRs.
For backward compatibility I've ensured a single `Tuple[Tensor]` input still
works as it did previously.
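A hypothetical sketch of what such backward-compatible argument handling can look like (`pipe_forward` and its logic are illustrative, not the actual Pipe code; plain values stand in for Tensors):

```python
def pipe_forward(*inputs):
    """Accept variadic inputs while honoring the old calling convention.

    Old style: pipe_forward((t1, t2))  -- one tuple-of-tensors argument
    New style: pipe_forward(t1, t2)    -- variadic arguments
    """
    if len(inputs) == 1 and isinstance(inputs[0], tuple):
        # Single Tuple[Tensor] argument: treat it as the full input set,
        # preserving the pre-existing behavior.
        return inputs[0]
    return inputs

# Both calling conventions yield the same normalized inputs:
assert pipe_forward((1, 2)) == (1, 2)   # old style
assert pipe_forward(1, 2) == (1, 2)     # new style
```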
ghstack-source-id: 130767499
Test Plan: waitforbuildbot
Reviewed By: SciPioneer
Differential Revision: D27613887
fbshipit-source-id: 05e19e537e6d7fe4999745fc4ba9941ac54906de
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55187
As described in https://github.com/pytorch/pytorch/issues/54927, the Pipe
docs didn't explicitly mention initializing RPC. This PR improves the docs and
also ensures Pipe throws a more useful error message, rather than an internal
assertion error, when RPC is not initialized.
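A minimal sketch of the kind of guard this implies (the flag and function name are made up for illustration; the real check would consult torch.distributed.rpc's own state):

```python
_rpc_initialized = False  # stand-in for the RPC framework's actual state

def check_rpc_initialized():
    """Raise a descriptive error instead of an internal assertion."""
    if not _rpc_initialized:
        raise RuntimeError(
            "Please initialize RPC with torch.distributed.rpc.init_rpc "
            "before creating a Pipe instance."
        )
```

The point of the change is simply that the user sees an actionable message naming `init_rpc` rather than a bare internal assertion failure.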
ghstack-source-id: 125563552
Test Plan:
1) unit test added.
2) waitforbuildbot
Reviewed By: rohan-varma
Differential Revision: D27521783
fbshipit-source-id: d1a5c6ca789b9a66c07a794468178c25cfd4b743
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48638
Polishing up some of the docs for the main `Pipe` class and its
`forward` method.
ghstack-source-id: 118820804
Test Plan: waitforbuildbot
Reviewed By: rohan-varma
Differential Revision: D25237705
fbshipit-source-id: ba3d8737b90a80024c827c0887fc56f14bf678b7