pytorch/torch/nn
Aaron Gokaslan 5a1216bb2e [BE]: Update ruff to 0.4.1 (#124549)
Update ruff to 0.4.1.
This version fixes a number of false negatives and false positives, is 20-40% faster, and includes various other bug fixes.

Below is a before-and-after table showing the execution time of ruff lint and ruff format in milliseconds, courtesy of https://astral.sh/blog/ruff-v0.4.0:

| Repository                                         | Linter (v0.3) | Linter (v0.4) | Formatter (v0.3) | Formatter (v0.4) |
|----------------------------------------------------|---------------|---------------|------------------|------------------|
| [pytorch/pytorch](https://github.com/pytorch/pytorch) | 328.7         | 251.8         | 351.1            | 274.9            |
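
For reference, a minimal sketch of how timings like those above could be reproduced locally (assumptions: a ruff >= 0.4 binary is on `PATH` and the script is run from a pytorch/pytorch checkout; the `time_ruff` helper below is hypothetical and not part of this PR):

```python
# Rough timing harness for `ruff check` and `ruff format` (sketch only).
import subprocess
import time


def time_ruff(args: list[str]) -> float:
    """Run ruff with the given arguments and return wall-clock time in milliseconds."""
    start = time.perf_counter()
    # capture_output keeps ruff's report out of the console;
    # check=False because lint findings make ruff exit non-zero.
    subprocess.run(["ruff", *args], capture_output=True, check=False)
    return (time.perf_counter() - start) * 1000.0


if __name__ == "__main__":
    print(f"linter:    {time_ruff(['check', '.', '--no-cache']):.1f} ms")
    print(f"formatter: {time_ruff(['format', '.', '--check']):.1f} ms")
```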

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124549
Approved by: https://github.com/ezyang
2024-04-21 14:06:23 +00:00
| Name              | Last commit message                                                          | Last commit date          |
|-------------------|------------------------------------------------------------------------------|---------------------------|
| attention         | Adds LSE output for templated-attention-hop if inputs require grad (#124308) | 2024-04-20 05:45:56 +00:00 |
| backends          |                                                                              |                           |
| intrinsic         |                                                                              |                           |
| modules           | [BE]: Update ruff to 0.4.1 (#124549)                                         | 2024-04-21 14:06:23 +00:00 |
| parallel          | [BE]: Update ruff to 0.4.1 (#124549)                                         | 2024-04-21 14:06:23 +00:00 |
| qat               |                                                                              |                           |
| quantizable       |                                                                              |                           |
| quantized         |                                                                              |                           |
| utils             | Add swap_tensors path to nn parametrizations (#124130)                       | 2024-04-18 22:22:08 +00:00 |
| __init__.py       | Add Support for CausalBias to torch compile (#116071)                        | 2024-01-30 02:22:48 +00:00 |
| _reduction.py     |                                                                              |                           |
| common_types.py   |                                                                              |                           |
| cpp.py            |                                                                              |                           |
| functional.py     | [BE]: Update ruff to 0.4.1 (#124549)                                         | 2024-04-21 14:06:23 +00:00 |
| functional.pyi.in | Add RMSNorm module (#121364)                                                 | 2024-03-29 18:05:28 +00:00 |
| grad.py           |                                                                              |                           |
| init.py           |                                                                              |                           |
| parameter.py      | Add guardrails preventing complex params in LBFGS & SparseAdam (#118161)     | 2024-01-24 21:22:47 +00:00 |
| parameter.pyi     |                                                                              |                           |