
C++ Frontend Tests

This folder contains the tests for PyTorch's C++ frontend. They are written with the GoogleTest framework.
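
For illustration, a minimal test in this style might look like the following sketch; the suite name, test name, and tensor values are invented for this example and do not correspond to any actual test file in this folder.

#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example test; real tests in this folder follow the same pattern.
TEST(ExampleSuite, AddsTensors) {
  torch::Tensor a = torch::ones({2, 2});
  torch::Tensor b = torch::ones({2, 2});
  ASSERT_TRUE(torch::allclose(a + b, torch::ones({2, 2}) * 2));
}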

CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_CUDA) { }

To make a test runnable only on platforms with at least two CUDA devices, suffix it with _MultiCUDA instead of _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_MultiCUDA) { }

There is logic in main.cpp that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
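As a rough illustration (not the actual contents of main.cpp), such device-dependent filtering could be set up along the following lines, assuming torch::cuda::device_count() is used to query the available devices:

#include <gtest/gtest.h>
#include <torch/cuda.h>
#include <string>

int main(int argc, char* argv[]) {
  ::testing::InitGoogleTest(&argc, argv);

  // Build a negative (exclusion) filter based on how many CUDA devices exist.
  // This is a simplified sketch; it ignores any --gtest_filter the user passed.
  std::string filter;
  const auto device_count = torch::cuda::device_count();
  if (device_count == 0) {
    filter = "-*_CUDA:*_MultiCUDA";  // no GPU: skip every CUDA test
  } else if (device_count < 2) {
    filter = "-*_MultiCUDA";         // a single GPU: skip only multi-GPU tests
  }
  if (!filter.empty()) {
    ::testing::GTEST_FLAG(filter) = filter;
  }
  return RUN_ALL_TESTS();
}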

Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

$ python tools/download_mnist.py -d test/cpp/api/mnist

The required paths will be referenced as test/cpp/api/mnist/... in the test code, so you must run the integration tests from the PyTorch root folder.
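
For reference, one way an integration test might consume the downloaded data is sketched below. It relies on the torch::data::datasets::MNIST dataset shipped with the C++ frontend and on the directory produced by the command above; the batch size and loop body are illustrative only.

#include <torch/torch.h>
#include <iostream>

int main() {
  // Load the training split from the directory created by download_mnist.py,
  // stacking individual examples into batched tensors.
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Stack<>());
  auto loader = torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);

  for (auto& batch : *loader) {
    // batch.data has shape [batch_size, 1, 28, 28]; batch.target has shape [batch_size].
    std::cout << batch.data.sizes() << " " << batch.target.sizes() << std::endl;
    break;
  }
}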