pytorch/test/cpp/api
Jeremy Lilley 468a9d448e [aten] Pass std::function<> to thread_pool by value, instead of const ref. (#37681)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37681

By passing by value, callers can std::move into the thread pool and avoid
unnecessarily copying args that are part of any std::function/lambda state
(e.g. in the JIT interpreter, a std::vector<> stack is passed in the
InterpreterContinuation).

This also makes the API consistent with, e.g., folly and general best practices.
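
A rough sketch of the pattern (illustrative names only, not the actual
at::ThreadPool / at::launch signatures):

#include <functional>
#include <utility>
#include <vector>

// Toy example: taking std::function<void()> by value lets the caller
// std::move captured state into the pool instead of copying it, which is
// what the old const-ref signature forced.
struct ToyThreadPool {
  // Before: void run(const std::function<void()>& func);  // copies state
  // After:  take by value, then move into storage.
  void run(std::function<void()> func) {
    tasks_.push_back(std::move(func));
  }
  std::vector<std::function<void()>> tasks_;
};

void example() {
  ToyThreadPool pool;
  std::vector<int> stack(256, 0);  // non-trivial captured state
  // The closure owns `stack`; run() moves the whole closure, so the
  // vector is not copied again after being captured.
  pool.run([stack = std::move(stack)]() mutable { stack.clear(); });
}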
Added a minor at::launch() benchmark to test/cpp/; the difference is most
noticeable when copying the std::function<>'s internal args is non-trivial.

Benchmarks pre/post (min over ~5 runs)
NoData: 5.81 us -> 5.63 us (-3.2%)
WithData(0): 6.67 us -> 5.88 us (-11.8%)
WithData(4): 6.98 us -> 6.51 us (-6.7%)
WithData(256): 9.44 us -> 7.89 us (-16.5%)

ghstack-source-id: 103322321

Test Plan:
- perf: buck run mode/opt caffe2/test/cpp/api:parallel_benchmark pre/post
- correctness: buck test mode/dev-nosan caffe2/test/...

Reviewed By: dzhulgakov

Differential Revision: D21355148

fbshipit-source-id: 3567e730845106f1991091e4a892d093e00571c3
2020-05-05 08:41:38 -07:00
any.cpp [C++ API] Allow skipping default arguments in module's forward method when module is used in Sequential (#33027) 2020-02-17 20:38:02 -08:00
autograd.cpp Added autograd support for C->C functions and enabled requires_grad=True for complex (#36932) 2020-04-24 12:30:49 -07:00
CMakeLists.txt [aten] Pass std::function<> to thread_pool by value, instead of const ref. (#37681) 2020-05-05 08:41:38 -07:00
dataloader.cpp Fix typos (#30606) 2019-12-02 20:17:42 -08:00
dispatch.cpp make sure dispatch test works on windows (#36729) 2020-04-16 11:36:56 -07:00
enum.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
expanding-array.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
functional.cpp Support for XNNPACK max pooling operator. (#35354) 2020-04-03 22:53:15 -07:00
init_baseline.h Kaiming Initialization (#14718) 2019-02-15 14:58:22 -08:00
init_baseline.py Kaiming Initialization (#14718) 2019-02-15 14:58:22 -08:00
init.cpp [C++ API] Remove deprecated torch::nn::BatchNorm / FeatureDropout / modules_ordered_dict and torch::nn::init::Nonlinearity / FanMode (#34508) 2020-03-12 10:09:58 -07:00
integration.cpp [C++ API] Remove deprecated torch::nn::BatchNorm / FeatureDropout / modules_ordered_dict and torch::nn::init::Nonlinearity / FanMode (#34508) 2020-03-12 10:09:58 -07:00
jit.cpp Remove attempToRecoverType (#26767) 2019-10-16 11:07:13 -07:00
memory.cpp Hide c10::optional and nullopt in torch namespace (#12927) 2018-10-26 00:08:04 -07:00
misc.cpp [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984) 2020-04-23 01:08:00 -07:00
module.cpp [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984) 2020-04-23 01:08:00 -07:00
modulelist.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
modules.cpp Add inplace tests for several torch::nn modules / functionals (#35147) 2020-03-21 10:02:56 -07:00
namespace.cpp Remove using namespace torch::autograd from header files (#34423) 2020-03-09 10:31:21 -07:00
nn_utils.cpp [C++ API] Add PackedSequence / pack_padded_sequence / pad_packed_sequence / pack_sequence (#33652) 2020-02-25 12:53:41 -08:00
optim_baseline.h [C++ API Parity] LBFGS optimizer step() update and added closure to the Optimizer step() function (#34564) 2020-03-17 22:27:24 -07:00
optim_baseline.py [C++ API Parity] LBFGS optimizer step() update and added closure to the Optimizer step() function (#34564) 2020-03-17 22:27:24 -07:00
optim.cpp [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984) 2020-04-23 01:08:00 -07:00
ordered_dict.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
parallel_benchmark.cpp [aten] Pass std::function<> to thread_pool by value, instead of const ref. (#37681) 2020-05-05 08:41:38 -07:00
parallel.cpp [PyTorch] Modify data_parallel to work with small tensors (#37704) 2020-05-04 11:06:42 -07:00
README.md Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
rnn.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
sequential.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
serialize.cpp [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984) 2020-04-23 01:08:00 -07:00
static.cpp Re-organize C++ API torch::nn folder structure (#26262) 2019-09-17 10:07:29 -07:00
support.cpp Use default dtype for torch::tensor(floating_point_values) and torch::tensor(empty braced-init-list) when dtype is not specified (#29632) 2019-11-13 15:17:11 -08:00
support.h Changes warnings generated in cpp to show point of Python origination (#36052) 2020-04-25 21:18:58 -07:00
tensor_cuda.cpp Fix MagmaInitializesCorrectly_CUDA by using an invertible matrix (#32547) 2020-01-25 20:00:54 -08:00
tensor_indexing.cpp [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984) 2020-04-23 01:08:00 -07:00
tensor_options_cuda.cpp Deprecate tensor.type() (#30281) 2019-12-05 10:55:34 -08:00
tensor_options.cpp Deprecate tensor.type() (#30281) 2019-12-05 10:55:34 -08:00
tensor.cpp Added autograd support for C->C functions and enabled requires_grad=True for complex (#36932) 2020-04-24 12:30:49 -07:00
torch_include.cpp Relax set_num_threads restriction in parallel native case (#27947) 2019-10-16 21:53:36 -07:00

C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_CUDA) { }

To make it runnable only on platforms with at least two CUDA devices, suffix it with _MultiCUDA instead of _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_MultiCUDA) { }

There is logic in main.cpp that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
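
A simplified sketch of how that filtering could work (the real logic lives in test/cpp/api/main.cpp and may differ in detail):

#include <gtest/gtest.h>
#include <torch/torch.h>

#include <string>

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);

  // Append negative patterns so *_CUDA / *_MultiCUDA tests are skipped when
  // the required hardware is missing. (Sketch only; assumes the default "*"
  // filter with no pre-existing negative patterns.)
  std::string filter = ::testing::GTEST_FLAG(filter);
  if (!torch::cuda::is_available()) {
    filter += ":-*_CUDA:*_MultiCUDA";   // no CUDA device at all
  } else if (torch::cuda::device_count() < 2) {
    filter += ":-*_MultiCUDA";          // only a single CUDA device
  }
  ::testing::GTEST_FLAG(filter) = filter;

  return RUN_ALL_TESTS();
}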

Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

$ python tools/download_mnist.py -d test/cpp/api/mnist

The required paths will be referenced as test/cpp/api/mnist/... in the test code, so you must run the integration tests from the PyTorch root folder.
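
For reference, an integration test can then construct the dataset with a relative path along these lines (a minimal sketch using the C++ data API, not the actual test code):

#include <torch/torch.h>

#include <iostream>

int main() {
  // Assumes tools/download_mnist.py has been run and that the binary is
  // executed from the PyTorch root folder, so the relative path resolves.
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Stack<>());
  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);
  for (auto& batch : *loader) {
    std::cout << "first batch size: " << batch.data.size(0) << std::endl;
    break;  // just verify the data is readable
  }
  return 0;
}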