pytorch/torch/nn/parallel
Aaron Gokaslan 88ab3e4322 [BE]: Update ruff to 0.285 (#107519)
This updates ruff to 0.285, which is faster, better, and fixes a bunch of false negatives with regard to f-strings.

I also enabled RUF017, which looks for accidental quadratic list summation. Luckily, it seems there are no instances of it in our codebase, so I'm enabling the rule so that it stays that way. :)
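For context, the pattern RUF017 guards against is flattening a list of lists with `sum(..., [])`: each `+` inside `sum` copies the accumulated list, so the total cost is quadratic in the number of elements. A minimal sketch of the anti-pattern and a linear replacement (the variable names here are illustrative, not from the codebase):

```python
import itertools

chunks = [[1, 2], [3, 4], [5, 6]]

# What RUF017 flags: correct output, but O(m^2) in the total
# number of elements m, because sum() rebuilds the list each step.
flat_quadratic = sum(chunks, [])

# Linear alternative: lazily chain the sublists, materialize once.
flat_linear = list(itertools.chain.from_iterable(chunks))

assert flat_quadratic == flat_linear == [1, 2, 3, 4, 5, 6]
```

`itertools.chain.from_iterable` is the usual fix ruff suggests for this rule; a nested list comprehension (`[x for c in chunks for x in c]`) is an equally linear alternative.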

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107519
Approved by: https://github.com/ezyang
2023-08-20 01:36:18 +00:00
__init__.py Merge type stubs torch nn parallel (#102194) 2023-05-26 20:10:47 +00:00
_functions.py DDP forward support custom stream accelerated copy. (#98723) 2023-04-14 20:19:56 +00:00
comm.py [BE] f-stringify torch/ and scripts (#105538) 2023-07-21 19:35:24 +00:00
data_parallel.py [BE]: Update ruff to 0.285 (#107519) 2023-08-20 01:36:18 +00:00
distributed.py [BE]: Update ruff to 0.285 (#107519) 2023-08-20 01:36:18 +00:00
parallel_apply.py [BE]: Update ruff to 0.285 (#107519) 2023-08-20 01:36:18 +00:00
replicate.py Make DataParallel generic (#102455) 2023-06-03 00:33:01 +00:00
scatter_gather.py Merge type stubs torch nn parallel (#102194) 2023-05-26 20:10:47 +00:00