pytorch/test/cpp/api
Elias Ellison 18659e1336 Allow generic containers as module inputs (#16482)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/16326

Previously we didn't handle module inputs that included generic lists. When checking whether a generic list is a subvalue of the input argument type, I currently recurse on every element of the list. This shouldn't be too slow, since the innermost list will be specialized and we won't have to check its elements.

E.g. Tensor[][] -> GenericList[TensorList].

The error message could be improved, but extracting the complete type of nested lists would have to deal with unifying types across lists, empty lists, and type variables, so I'm going to save that for a follow-up PR.
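The recursive check described above can be sketched in plain Python. This is a hedged stand-in, not the actual TorchScript type system: leaf type tags (e.g. `"Tensor"`) match directly, mirroring how a specialized `TensorList` needs no per-element recursion, while a generic list is a subvalue of `List[T]` iff every element is a subvalue of `T`. The `is_subvalue` helper and the tuple type encoding are assumptions for illustration.

```python
def is_subvalue(value, expected):
    """expected is either a leaf tag like "Tensor" or ("List", inner)."""
    if isinstance(expected, tuple) and expected[0] == "List":
        inner = expected[1]
        # Generic list: recurse on every element (empty lists match trivially).
        return isinstance(value, list) and all(
            is_subvalue(elem, inner) for elem in value
        )
    return value == expected  # leaf: type tags must match exactly

# Tensor[][] accepts a list of lists of "tensors":
nested = [["Tensor", "Tensor"], []]
print(is_subvalue(nested, ("List", ("List", "Tensor"))))      # True
print(is_subvalue(["Tensor"], ("List", ("List", "Tensor"))))  # False
```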
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16482

Differential Revision: D13882582

Pulled By: eellison

fbshipit-source-id: 3609bc572f0ee9ebf20a77ea5ebc8fa3b165e24b
2019-01-30 14:20:56 -08:00
any.cpp Fix Windows build and test in CI (#11716) 2018-11-13 16:35:54 -08:00
CMakeLists.txt Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
dataloader.cpp Chunk dataset implementation (#15932) 2019-01-29 18:06:01 -08:00
expanding-array.cpp Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
integration.cpp Move isnan to C++ (#15722) 2019-01-08 10:42:33 -08:00
jit.cpp Allow generic containers as module inputs (#16482) 2019-01-30 14:20:56 -08:00
memory.cpp Hide c10::optional and nullopt in torch namespace (#12927) 2018-10-26 00:08:04 -07:00
misc.cpp Fix Windows build and test in CI (#11716) 2018-11-13 16:35:54 -08:00
module.cpp Make call operator on module holder call forward (#15831) 2019-01-14 14:40:33 -08:00
modules.cpp Make call operator on module holder call forward (#15831) 2019-01-14 14:40:33 -08:00
optim_baseline.h Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
optim_baseline.py Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
optim.cpp Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
ordered_dict.cpp Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
parallel.cpp Remove OptionsGuard from ATen (#14524) 2018-11-30 13:30:35 -08:00
README.md Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
rnn.cpp Pretty printing of C++ modules (#15326) 2018-12-19 21:55:49 -08:00
sequential.cpp Pretty printing of C++ modules (#15326) 2018-12-19 21:55:49 -08:00
serialize.cpp Fix serialization (#15033) 2018-12-11 22:43:36 -08:00
static.cpp Make call operator on module holder call forward (#15831) 2019-01-14 14:40:33 -08:00
support.h Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
tensor_cuda.cpp Fix Windows build and test in CI (#11716) 2018-11-13 16:35:54 -08:00
tensor_options_cuda.cpp Fix include paths for TensorOptions 2018-12-07 16:23:44 -08:00
tensor_options.cpp Fix include paths for TensorOptions 2018-12-07 16:23:44 -08:00
tensor.cpp Rename _local_scalar to item() (#13676) 2018-12-04 13:19:26 -08:00

# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, suffix your test with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.