Applies PLW0108, which flags unnecessary lambdas in Python (lambdas that merely forward their arguments to another callable). The rule is in preview, so it is not ready to be enabled by default just yet. These are the autofixes from the rule.
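To illustrate the pattern the rule targets, here is a minimal example of the kind of rewrite the autofix performs (the snippet is illustrative, not taken from the codebase):
```python
# Before: the lambda only forwards its argument to the wrapped callable.
callbacks = [lambda msg: print(msg)]  # flagged by PLW0108 (unnecessary lambda)

# After the autofix: use the wrapped callable directly.
callbacks = [print]
```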
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113602
Approved by: https://github.com/albanD
Summary:
The `2to3` tool has a `future` fixer that can be targeted specifically to remove these redundant `__future__` imports; the `caffe2` directory has the most of them:
```2to3 -f future -w caffe2```
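For context, the `future` fixer strips `from __future__` imports, which are no-ops on Python 3. A representative before/after on an illustrative file (not from the repo):
```python
# Before: this import is redundant on Python 3.
from __future__ import absolute_import, division, print_function, unicode_literals

def describe(x):
    print("value:", x)

# After running `2to3 -f future -w` on the file: the __future__ import line
# is deleted and the rest of the module is left unchanged.
def describe(x):
    print("value:", x)
```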
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033
Reviewed By: seemethere
Differential Revision: D23808648
Pulled By: bugra
fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29045
Addresses an issue seen on GitHub: https://github.com/pytorch/pytorch/issues/28958
It seems the workers in this test sometimes don't stop cleanly. The purpose of this test is to check that the `init_fun` passed to `init_workers` works as expected, which is captured by the `assertEqual` in the test's for loop; the behavior of `stop()` is not really important here.
The fact that `stop()` returns False probably indicates that a worker is getting blocked, but that doesn't affect the correctness of the test.
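As a rough illustration of that reasoning, here is a self-contained toy analogue (not the actual caffe2 test): the correctness signal is the `assertEqual` over state written by `init_fun`, while the return value of `stop()` is deliberately ignored.
```python
import threading
import unittest


class ToyCoordinator:
    """Toy stand-in for a worker coordinator; not the caffe2 implementation."""

    def __init__(self, num_workers, init_fun):
        self._stop_event = threading.Event()
        self._threads = []
        for worker_id in range(num_workers):
            init_fun(worker_id)  # per-worker init hook, the behavior under test
            t = threading.Thread(target=self._stop_event.wait)
            t.start()
            self._threads.append(t)

    def stop(self, timeout=0.5):
        self._stop_event.set()
        for t in self._threads:
            t.join(timeout)
        # True only if every worker stopped cleanly within the timeout.
        return all(not t.is_alive() for t in self._threads)


class InitFunTest(unittest.TestCase):
    def test_init_fun_runs_for_every_worker(self):
        initialized = []
        coord = ToyCoordinator(num_workers=2, init_fun=initialized.append)
        for worker_id in range(2):
            self.assertEqual(worker_id in initialized, True)
        coord.stop()  # result intentionally not asserted, mirroring the fix above


if __name__ == "__main__":
    unittest.main()
```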
Test Plan: Ran the test 100 times; it consistently succeeds.
Reviewed By: akyrola
Differential Revision: D18273064
fbshipit-source-id: 5fdff8cf80ec7ba04acf4666a3116e081d96ffec
Summary:
parallel_workers supports calling a custom function, `init_fun`, when WorkerCoordinators are started; it is passed in as an argument to `init_workers`.
This adds an analogous argument, `shutdown_fun`, which is also passed to `init_workers` and gets called when a WorkerCoordinator is stopped.
This allows users of parallel_workers to add custom cleanup logic before the workers are stopped.
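A minimal usage sketch under the assumptions above; the worker function, the callback signatures, and the coordinator's `start()`/`stop()` methods are illustrative guesses rather than taken from the source:
```python
from caffe2.python import parallel_workers


def worker_fun(worker_id):
    # One unit of work done by each background worker thread. (signature assumed)
    pass


def init_fun(*args):
    # Custom setup run when a WorkerCoordinator is started. (exact args assumed)
    pass


def shutdown_fun(*args):
    # Custom cleanup run when a WorkerCoordinator is stopped. (exact args assumed)
    pass


# init_fun and shutdown_fun are the keyword arguments described above; the
# returned coordinator and its start()/stop() calls are assumptions here.
coordinator = parallel_workers.init_workers(
    worker_fun,
    init_fun=init_fun,
    shutdown_fun=shutdown_fun,
)
coordinator.start()
coordinator.stop()  # shutdown_fun gets a chance to clean up before workers stop
```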
Reviewed By: akyrola
Differential Revision: D6020788
fbshipit-source-id: 1e1d8536a304a35fc9553407727da36446c668a3
Summary:
data_workers.py provides a really nice, easy way to run background threads for data input. Unfortunately, it's restrictive: the output of the fetcher function has to be a numpy array.
I pulled that core thread management out into parallel_workers and updated the data_workers classes to extend those classes. The main change was refactoring most of the queue-handling logic into QueueManager.
This way parallel_workers can be used to manage background threads without having to use a queue for output.
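A toy sketch of that layering (not the caffe2 implementation): a base coordinator that only manages background threads, with the queue handling kept in a separate manager that queue-based callers can opt into.
```python
import queue
import threading


class ToyWorkerCoordinator:
    """Generic background-thread management; makes no assumption about output."""

    def __init__(self, worker_fun, num_threads=2):
        self._active = True
        self._threads = [
            threading.Thread(target=self._run, args=(worker_fun, i), daemon=True)
            for i in range(num_threads)
        ]

    def _run(self, worker_fun, worker_id):
        # Each call to worker_fun does one unit of work; output handling is
        # entirely up to the worker function.
        while self._active:
            worker_fun(worker_id)

    def start(self):
        for t in self._threads:
            t.start()

    def stop(self):
        self._active = False


class ToyQueueManager:
    """Queue-specific output handling, layered on top of the coordinator."""

    def __init__(self, maxsize=8):
        self._queue = queue.Queue(maxsize=maxsize)

    def put(self, item):
        self._queue.put(item)

    def get(self):
        return self._queue.get()
```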
Reviewed By: akyrola
Differential Revision: D5538626
fbshipit-source-id: f382cc43f800ff90840582a378dc9b86ac05b613