pytorch/test/cpp/api
Dmytro Dzhulgakov be99eff75a Back out "Revert D10494123: [c10] Remove at::Optional" (#12991)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12991

Remove the file proxying. Until we can land `using namespace c10` everywhere, we just keep the one-off namespace proxy. The follow-up diff is going to replace explicit at::optional but keep just `optional` usage

Reviewed By: ezyang, Yangqing

Differential Revision: D10511254

fbshipit-source-id: 8297c61d7e9810ae215a18869a6ec9b63f55d202
2018-10-25 15:17:51 -07:00
any.cpp Remove caffe2::Tensor::capacity_nbytes, at::Tensor::to##name##Data, (#11876) 2018-09-24 10:40:10 -07:00
CMakeLists.txt Implement DataLoader (#11918) 2018-10-22 10:22:41 -07:00
cursor.cpp Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
dataloader.cpp Implement DataLoader (#11918) 2018-10-22 10:22:41 -07:00
expanding-array.cpp Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
integration.cpp Remove caffe2::Tensor::capacity_nbytes, at::Tensor::to##name##Data, (#11876) 2018-09-24 10:40:10 -07:00
jit.cpp Remove caffe2::Tensor::capacity_nbytes, at::Tensor::to##name##Data, (#11876) 2018-09-24 10:40:10 -07:00
memory.cpp Back out "Revert D10494123: [c10] Remove at::Optional" (#12991) 2018-10-25 15:17:51 -07:00
misc.cpp Remove caffe2::Tensor::capacity_nbytes, at::Tensor::to##name##Data, (#11876) 2018-09-24 10:40:10 -07:00
module.cpp Do a better job of checking registered names (#13016) 2018-10-25 13:52:08 -07:00
modules.cpp Remove caffe2::Tensor::capacity_nbytes, at::Tensor::to##name##Data, (#11876) 2018-09-24 10:40:10 -07:00
optim_baseline.h Lazily create tensors in optim_baseline (#12301) 2018-10-04 10:55:53 -07:00
optim_baseline.py Lazily create tensors in optim_baseline (#12301) 2018-10-04 10:55:53 -07:00
optim.cpp Lazily create tensors in optim_baseline (#12301) 2018-10-04 10:55:53 -07:00
ordered-dict.cpp Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
parallel.cpp Move exception to C10 (#12354) 2018-10-15 13:33:18 -07:00
README.md Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
rnn.cpp Remove caffe2::Tensor::capacity_nbytes, at::Tensor::to##name##Data, (#11876) 2018-09-24 10:40:10 -07:00
sequential.cpp Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
serialize.cpp Revamp and document serialization, support streams (#12421) 2018-10-15 15:47:59 -07:00
static.cpp Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
support.h Move JIT tests to gtest (#12030) 2018-10-06 23:09:44 -07:00
tensor_cuda.cpp Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
tensor_options_cuda.cpp Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
tensor_options.cpp Support additional device types (#12293) 2018-10-05 13:15:05 -07:00
tensor.cpp Remove caffe2::Tensor::capacity_nbytes, at::Tensor::to##name##Data, (#11876) 2018-09-24 10:40:10 -07:00

C++ Frontend Tests

In this folder live the tests for PyTorch's C++ frontend. They use the GoogleTest framework.

CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_CUDA) { }

To make it runnable only on platforms with at least two CUDA devices, suffix it with _MultiCUDA instead of _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_MultiCUDA) { }

There is logic in main.cpp that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
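A minimal sketch of that idea is shown below. It is not the actual contents of main.cpp; it only assumes the public GoogleTest filter flag and torch::cuda::device_count() from the C++ API, and it appends negative patterns matching the suffix convention above.

// Sketch only: hide CUDA tests when fewer devices are available than a test needs.
#include <gtest/gtest.h>
#include <torch/cuda.h>
#include <string>

int main(int argc, char* argv[]) {
  ::testing::InitGoogleTest(&argc, argv);
  const auto device_count = torch::cuda::device_count();
  std::string negative;
  if (device_count == 0) {
    negative = "*_CUDA:*_MultiCUDA";  // no GPU: skip every CUDA test
  } else if (device_count == 1) {
    negative = "*_MultiCUDA";         // one GPU: skip multi-GPU tests
  }
  if (!negative.empty()) {
    // Append as a negative pattern so any user-supplied --gtest_filter still applies.
    ::testing::GTEST_FLAG(filter) += ":-" + negative;
  }
  return RUN_ALL_TESTS();
}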

Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

$ python tools/download_mnist.py -d test/cpp/api/mnist

The required paths will be referenced as test/cpp/api/mnist/... in the test code, so you must run the integration tests from the PyTorch root folder.
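As a hedged illustration of why the working directory matters (this snippet is not taken from the test sources, though torch::data::datasets::MNIST is the C++ frontend's MNIST loader), a program like the following only finds the data when launched from the repository root:

#include <torch/torch.h>
#include <iostream>

int main() {
  // Relative path: resolves only if the process is started in the PyTorch root folder.
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist");
  auto example = dataset.get(0);
  std::cout << "image sizes: " << example.data.sizes()
            << ", label: " << example.target.item<int64_t>() << std::endl;
}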