### This change
- Implements a `ruff` linter for PyTorch's lintrunner, adapted from https://github.com/justinchuby/lintrunner-adapters/blob/main/lintrunner_adapters/adapters/ruff_linter.py. It does **both linting and fixing** (see the sketch below this list). 🔧
- Migrates all flake8 configs to the `ruff` config and enables `ruff` for the whole repo. ✅
- **`ruff` lints the whole repo in under 2s** 🤯
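
A minimal sketch of the adapter pattern referenced above (adapted from `ruff_linter.py` in lintrunner-adapters): run `ruff` as a subprocess, request machine-readable output, and translate each diagnostic into a lint message. The flag spelling varies across `ruff` releases (older versions used `--format=json`, newer ones `ruff check --output-format=json`), and the JSON field names below are assumptions for illustration rather than the adapter's exact schema.

```python
# Hedged sketch, not the actual PyTorch linter adapter.
import json
import subprocess
import sys


def run_ruff(paths, fix=False):
    # Ask ruff for JSON diagnostics; flag names are assumed and may differ
    # between ruff versions.
    cmd = ["ruff", "check", "--output-format=json"]
    if fix:
        cmd.append("--fix")  # apply autofixes in place (fix mode)
    cmd.extend(paths)
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return json.loads(proc.stdout or "[]")


if __name__ == "__main__":
    for diag in run_ruff(sys.argv[1:]):
        # Field names (filename, code, message, location.row/column) are
        # assumptions about the shape of ruff's JSON output.
        loc = diag.get("location", {})
        print(f'{diag.get("filename")}:{loc.get("row")}:{loc.get("column")} '
              f'{diag.get("code")} {diag.get("message")}')
```
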
Fixes https://github.com/pytorch/pytorch/issues/94737. Replaces #99280.
@huydhn @Skylion007
### <samp>🤖 Generated by Copilot at 6b982dd</samp>
### Summary
🧹🛠️🎨
<!--
1. 🧹 This emoji represents cleaning or tidying up, which is what `ruff` does by formatting and linting the code. It also suggests improving the code quality and removing unnecessary or redundant code.
2. 🛠️ This emoji represents tools or fixing, which is what `ruff` is as a code formatter and linter. It also suggests enhancing the code functionality and performance, and resolving potential issues or bugs.
3. 🎨 This emoji represents art or creativity, which is what `ruff` allows by providing a consistent and configurable style for the code. It also suggests adding some flair or personality to the code, and making it more readable and enjoyable.
-->
Add a `[tool.ruff]` section to `pyproject.toml` to configure the `ruff` linter and autofixer. This change aims to improve code quality and consistency with a single, fast tool.
> _`ruff` cleans the code_
> _like a spring breeze in the fields_
> _`pyproject.toml`_
### Walkthrough
* Configure the `ruff` linter for the whole project ([link](https://github.com/pytorch/pytorch/pull/99785/files?diff=unified&w=0#diff-50c86b7ed8ac2cf95bd48334961bf0530cdc77b5a56f852c5c61b89d735fd711R22-R79))
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99785
Approved by: https://github.com/malfet, https://github.com/Skylion007
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57325
As per the design outlined in
https://github.com/pytorch/pytorch/issues/53952, this PR adds a `NoChunk`
wrapper for pipeline parallelism inputs.
If a Tensor is wrapped with `NoChunk`, the pipeline implementation does not
split it across micro-batches and instead replicates it as-is, the same way
non-Tensor inputs are handled.
ghstack-source-id: 132009305
Test Plan:
1) unit tests.
2) waitforbuildbot.
Reviewed By: SciPioneer
Differential Revision: D28109277
fbshipit-source-id: ee78c814c715d207d2796aba40b756a8e1834898
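
A minimal sketch of the intended usage, following the design in https://github.com/pytorch/pytorch/issues/53952: the plain Tensor is split into micro-batches while the `NoChunk`-wrapped Tensor is replicated whole. The export location of `NoChunk` (next to `Pipe` in `torch.distributed.pipeline.sync`) and the single-process RPC setup mirror the public docs but are assumptions here; the `AddBias` module is illustrative only.

```python
# Hedged sketch; NoChunk's export location is assumed.
import os

import torch
import torch.nn as nn
from torch.distributed import rpc
from torch.distributed.pipeline.sync import Pipe, NoChunk


class AddBias(nn.Module):
    def forward(self, x, bias):
        # `x` arrives as a micro-batch slice; `bias` arrives whole in every
        # micro-batch because it is wrapped in NoChunk at the call site.
        return x + bias


# Pipe relies on the RPC framework even in a single process.
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker", rank=0, world_size=1)

model = Pipe(nn.Sequential(AddBias()), chunks=4)
x = torch.randn(8, 10)   # split along dim 0 into 4 micro-batches of 2 rows
bias = torch.randn(10)   # replicated as-is, like a non-Tensor input
out = model(x, NoChunk(bias)).local_value()
print(out.shape)         # torch.Size([8, 10])

rpc.shutdown()
```
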
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57226
As per the design outlined in
https://github.com/pytorch/pytorch/issues/53952, this PR adds support for
non-Tensor args in the pipeline.
The `NoChunk` wrapper hasn't been implemented yet and will be added in a
follow-up PR.
ghstack-source-id: 132008356
Test Plan:
1) unit tests
2) waitforbuildbot
Reviewed By: SciPioneer
Differential Revision: D28083564
fbshipit-source-id: 5f09da238eec0167feff76fe98916dedb0a9ae4e
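
A minimal sketch of what this enables, assuming non-Tensor positional arguments are replicated unchanged for every micro-batch while Tensor arguments are split along the batch dimension; the `Scale` module and its argument names are illustrative only.

```python
# Hedged sketch of non-Tensor args flowing through the pipeline.
import os

import torch
import torch.nn as nn
from torch.distributed import rpc
from torch.distributed.pipeline.sync import Pipe


class Scale(nn.Module):
    def forward(self, x, factor, enabled):
        # `x` is a Tensor and is split into micro-batches; `factor` (float)
        # and `enabled` (bool) are non-Tensors, so each micro-batch sees
        # them unchanged.
        return x * factor if enabled else x


os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker", rank=0, world_size=1)

model = Pipe(nn.Sequential(Scale()), chunks=2)
out = model(torch.randn(4, 3), 0.5, True).local_value()
print(out.shape)  # torch.Size([4, 3])

rpc.shutdown()
```
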
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55441
This is the first step towards supporting the proposal outlined in
https://github.com/pytorch/pytorch/issues/53952.
This PR changes `Pipe.forward()` to accept `*inputs` instead of just a
single input as before. This lays the groundwork for supporting non-Tensor
and generic arguments to the Pipe API. For now only Tensors are supported;
non-Tensor support will come in future PRs.
For backward compatibility, a single Tuple[Tensor] input still works as it
did previously.
ghstack-source-id: 130767499
Test Plan: waitforbuildbot
Reviewed By: SciPioneer
Differential Revision: D27613887
fbshipit-source-id: 05e19e537e6d7fe4999745fc4ba9941ac54906de
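
A minimal sketch of the new calling convention, using the single-process RPC setup from the public docs: `Pipe.forward` takes positional `*inputs`, each Tensor split along the batch dimension. The `Add` module is illustrative only.

```python
# Hedged sketch of Pipe.forward(*inputs) with multiple Tensor arguments.
import os

import torch
import torch.nn as nn
from torch.distributed import rpc
from torch.distributed.pipeline.sync import Pipe


class Add(nn.Module):
    def forward(self, a, b):
        # Both `a` and `b` are Tensors, so both are split along dim 0 into
        # `chunks` micro-batches.
        return a + b


os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker", rank=0, world_size=1)

model = Pipe(nn.Sequential(Add()), chunks=2)
a, b = torch.ones(4, 2), torch.ones(4, 2)
out = model(a, b).local_value()   # new style: separate positional inputs
# Per this PR, the legacy call model((a, b)) with a single Tuple[Tensor]
# was kept working for backward compatibility.
print(out.sum())  # tensor(16.)

rpc.shutdown()
```
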