pytorch/test/cpp
Gaoxiang Liu 735f8cc6c2 [DI] Allow explicit taskLauncher for torchscript interpreter (#46865)
Summary:
By default, TorchScript execution is single-threaded and uses the caller's thread pool. For use cases such as distributed inference, we want to be able to customize this behavior so that the TorchScript interpreter can be executed elsewhere, for example on a caller-provided executor. This diff allows passing an explicit taskLauncher to the TorchScript interpreter.
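For illustration, a minimal C++ sketch of how a caller-supplied task launcher could be used with a TorchScript module. The run_async entry point and the TaskLauncher shape (std::function<void(std::function<void()>)>) shown here are assumptions drawn from this summary, not a verbatim copy of the PR's API; see the PR for the exact signatures.

```cpp
// Hedged usage sketch (not the verbatim API from this PR): hand interpreter
// work to a caller-provided launcher instead of the default at::launch pool.
#include <torch/script.h>
#include <iostream>

int main() {
  torch::jit::Module module = torch::jit::load("model.pt");

  // Custom launcher: runs work inline here, but a distributed-inference
  // runtime could enqueue it on its own executor / thread pool instead.
  auto launcher = [](std::function<void()> work) { work(); };

  std::vector<c10::IValue> stack;
  stack.emplace_back(torch::ones({1, 3}));

  // Assumed async overload that accepts the explicit task launcher.
  auto future = module.get_method("forward").run_async(stack, launcher);
  future->wait();

  std::cout << future->value().toTensor() << std::endl;
  return 0;
}
```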

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46865

Test Plan:
Unit tests pass.

fbshipit-source-id: 1d7b003926c0d1f8facc53206efb960cff8897ac

Reviewed By: houseroad

Differential Revision: D24616102

Pulled By: garroud

fbshipit-source-id: 79202b62f92d0b0baf72e4bf7aa3f05e0da91d59
2020-11-04 17:07:55 -08:00
api           | Add input argument to autograd.backward() cpp api (#47214) | 2020-11-04 14:43:59 -08:00
common        | Trim libshm deps, move tempfile.h to c10 (#17019) | 2019-02-13 19:38:35 -08:00
dist_autograd | Fix Windows build failure after DDP PR merged (#45335) | 2020-09-25 12:37:50 -07:00
jit           | [DI] Allow explicit taskLauncher for torchscript interpreter (#46865) | 2020-11-04 17:07:55 -08:00
rpc           | Remove lock from GraphTask::set_exception_without_signal. (#45867) | 2020-10-05 20:02:29 -07:00
tensorexpr    | Inlining all non-output buffers, including intermediate buffers. (#47258) | 2020-11-03 17:00:32 -08:00
__init__.py   | remediation of S205607 | 2020-07-17 17:19:47 -07:00