# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```
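
For instance, a CUDA-only test might move a tensor to the GPU and check its placement. This is a hypothetical example for illustration; `TensorTest` and its body are not taken from the actual test suite:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical CUDA-only test: the _CUDA suffix ensures it is filtered out on
// machines without a CUDA device.
TEST(TensorTest, MovesToDevice_CUDA) {
  auto tensor = torch::ones({2, 2});
  tensor = tensor.to(torch::kCUDA);
  ASSERT_TRUE(tensor.device().is_cuda());
}
```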

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```
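
Likewise, a multi-GPU test could exercise behavior across two devices. Again, this is only an illustrative sketch, not an existing test:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical multi-GPU test: the _MultiCUDA suffix means it only runs when
// at least two CUDA devices are present.
TEST(TensorTest, CopiesBetweenDevices_MultiCUDA) {
  auto a = torch::ones({2, 2}, torch::device(torch::Device(torch::kCUDA, 0)));
  auto b = a.to(torch::Device(torch::kCUDA, 1));
  ASSERT_EQ(b.device().index(), 1);
}
```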

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest, so CUDA-suffixed tests are skipped automatically on machines without the required GPUs.
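
Conceptually, that logic amounts to something like the sketch below. The filter strings and structure here are assumptions for illustration and may not match the actual `main.cpp`:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

#include <string>

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);

  // Exclude suffixed tests when the machine lacks the required CUDA devices.
  std::string extra_filter;
  const auto num_gpus = torch::cuda::device_count();
  if (num_gpus == 0) {
    extra_filter = ":-*_CUDA:*_MultiCUDA";
  } else if (num_gpus < 2) {
    extra_filter = ":-*_MultiCUDA";
  }
  ::testing::GTEST_FLAG(filter) += extra_filter;

  return RUN_ALL_TESTS();
}
```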

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The test code references the dataset via the relative path `test/cpp/api/mnist/...`, so you must run the integration tests from the PyTorch root folder.
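
For reference, this is roughly how test code can construct the dataset from that path using the C++ frontend's MNIST dataset; the function below is an illustrative sketch, not code from the integration tests:

```cpp
#include <torch/torch.h>

#include <iostream>

// Sketch only: load the MNIST training set from the files that
// tools/download_mnist.py places under test/cpp/api/mnist. The path is
// relative, which is why the tests must be run from the PyTorch root folder.
void mnist_smoke_test() {
  torch::data::datasets::MNIST mnist("test/cpp/api/mnist");
  auto example = mnist.get(0);
  // Each example holds an image tensor and its integer label.
  std::cout << example.data.sizes() << " -> label "
            << example.target.item<int64_t>() << std::endl;
}
```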