# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the [GoogleTest](https://github.com/google/googletest) test framework.
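
For orientation, here is a minimal sketch of what a test in this folder looks like; the suite name, tensor shapes, and assertion are purely illustrative and do not correspond to an actual test file:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example; the suite name and values are illustrative only.
TEST(ExampleTest, AddsTensors) {
  auto a = torch::ones({2, 2});
  auto b = torch::ones({2, 2});
  // Element-wise addition of two all-ones tensors yields all twos.
  ASSERT_TRUE(a.add(b).allclose(torch::full({2, 2}, 2.0)));
}
```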

## CUDA Tests

To make a test runnable only on platforms with CUDA, suffix your test with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
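
The sketch below illustrates the idea (it is not the actual contents of `main.cpp`, whose control flow may differ): query the device count via `torch::cuda::device_count()` and set negative `--gtest_filter` patterns so that `*_CUDA` and `*_MultiCUDA` tests are skipped when too few devices are present.

```cpp
#include <gtest/gtest.h>
#include <torch/cuda.h>

#include <string>

// Hedged sketch, not the real main.cpp: build a negative --gtest_filter so
// that CUDA-only tests are skipped on machines without enough GPUs.
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);

  const auto num_devices = torch::cuda::device_count();
  std::string negative_patterns;  // patterns after '-' are excluded
  if (num_devices == 0) {
    negative_patterns = "*_CUDA:*_MultiCUDA";
  } else if (num_devices < 2) {
    negative_patterns = "*_MultiCUDA";
  }
  if (!negative_patterns.empty()) {
    // For simplicity this overwrites any filter given on the command line.
    ::testing::GTEST_FLAG(filter) = "-" + negative_patterns;
  }

  return RUN_ALL_TESTS();
}
```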

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.
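
As an illustration of why the relative path matters, here is a sketch of how an integration test might load the downloaded data through the C++ data API (`torch::data::datasets::MNIST`); the test name, batch size, and normalization constants are illustrative and not taken from `integration.cpp`:

```cpp
#include <torch/torch.h>

#include <gtest/gtest.h>

// Hedged sketch (not taken from integration.cpp): the relative path below only
// resolves when the binary is launched from the PyTorch root folder.
TEST(IntegrationExample, LoadsMNISTBatches) {
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Normalize<>(0.1307, 0.3081))
                     .map(torch::data::transforms::Stack<>());
  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);
  for (auto& batch : *loader) {
    // After Stack<>, each batch is a single Example with batched data/target tensors.
    ASSERT_EQ(batch.data.size(0), batch.target.size(0));
    break;  // one batch is enough for this illustration
  }
}
```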