# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the [GoogleTest](https://github.com/google/googletest) test framework.
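
Tests follow the usual GoogleTest pattern. A minimal, hypothetical example of the style (the suite name and tensor values are illustrative, not taken from an actual test file):

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example: an ordinary CPU test needs no special suffix.
TEST(ExampleSuite, TensorAddition) {
  auto a = torch::ones({2, 2});
  auto b = torch::full({2, 2}, 2.0);
  ASSERT_TRUE((a + b).allclose(torch::full({2, 2}, 3.0)));
}
```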

## CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
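
The exact bookkeeping lives in `main.cpp`; below is only a minimal sketch, assuming `torch::cuda::is_available()` / `torch::cuda::device_count()` and GoogleTest's `--gtest_filter` negative-pattern syntax, of how such filters can be assembled (not the actual implementation):

```cpp
#include <gtest/gtest.h>
#include <torch/cuda.h>

#include <string>

int main(int argc, char* argv[]) {
  ::testing::InitGoogleTest(&argc, argv);

  // Start from whatever filter the user passed on the command line.
  std::string filter = ::testing::GTEST_FLAG(filter);
  if (!torch::cuda::is_available()) {
    // No GPU at all: exclude every CUDA test.
    filter += ":-*_CUDA:*_MultiCUDA";
  } else if (torch::cuda::device_count() < 2) {
    // A single GPU: exclude only the multi-device tests.
    filter += ":-*_MultiCUDA";
  }
  ::testing::GTEST_FLAG(filter) = filter;

  return RUN_ALL_TESTS();
}
```

With no GPU present, a filter like `*:-*_CUDA:*_MultiCUDA` makes GoogleTest run everything except tests whose names end in one of the CUDA suffixes.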

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.
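
For orientation, here is a hypothetical snippet showing how a test might open the downloaded data with the C++ frontend's `torch::data::datasets::MNIST` dataset (the suite name, batch size, and loop body are illustrative):

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

#include <utility>

// Hypothetical test: the path is relative to the PyTorch root folder.
TEST(ExampleSuite, LoadsDownloadedMNIST) {
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Stack<>());
  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);
  for (const auto& batch : *loader) {
    // With Stack<>, batch.data is a [batch_size, 1, 28, 28] tensor and
    // batch.target holds the corresponding labels.
    ASSERT_EQ(batch.data.size(0), batch.target.size(0));
    break;
  }
}
```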