# C++ API Tests

In this folder live the tests for PyTorch's C++ API (formerly known as autogradpp). They use the [Catch2](https://github.com/catchorg/Catch2) test framework.

## CUDA Tests

We handle CUDA tests by separating them into their own `TEST_CASE` (e.g. `optim.cpp` contains both an `optim` and an `optim_cuda` test case) and giving them the `[cuda]` tag. At runtime, `main.cpp` detects whether CUDA is available; if it is not, it disables the CUDA tests by appending `~[cuda]` to the test specification. The `~` excludes the tag.

One annoying aspect is that Catch only allows filtering on test cases, not on sections. Ideally, one could have a section like `LSTM` inside the `RNN` test case and give that section a `[cuda]` tag so it only runs when CUDA is available. Instead, we have to create a whole separate `RNN_cuda` test case and put all of these CUDA sections in there.

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```