pytorch/test/cpp/api
Peter Goldsborough 8fafa7b6ac Remove size() from BatchDataset and templatize IndexType (#12960)
Summary:
This PR brings two changes to the recently landed C++ Frontend dataloader:

1. Removes the `size()` method from `BatchDataset`. This makes it cleaner to implement unsized ("infinite stream") datasets. The method was not used much beyond initial configuration.
2. Makes the index type of a dataset a template parameter of `BatchDataset` and `Sampler`. This essentially allows custom index types instead of only `vector<size_t>`. This greatly improves flexibility.

See the `InfiniteStreamDataset` and `TestIndex` datasets in the tests for what this enables.

Some additional minor updates and code movements too.

apaszke SsnL
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12960

Differential Revision: D12893342

Pulled By: goldsborough

fbshipit-source-id: ef03ea0f11a93319e81fba7d52a0ef1a125d3108
2018-11-05 17:13:09 -08:00

# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, suffix your test name with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```shell
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.