# pytorch/test/cpp/api
| File | Last commit | Date |
|---|---|---|
| `any.cpp` | Fix Windows build and test in CI (#11716) | 2018-11-13 16:35:54 -08:00 |
| `CMakeLists.txt` | Switch to out-source builds for LibTorch | 2019-06-14 21:00:18 -07:00 |
| `dataloader.cpp` | Add support for cross-chunk shuffling in ChunkDataset (#22347) | 2019-07-01 19:13:34 -07:00 |
| `expanding-array.cpp` | Rewrite C++ API tests in gtest (#11953) | 2018-09-21 21:28:16 -07:00 |
| `init_baseline.h` | Kaiming Initialization (#14718) | 2019-02-15 14:58:22 -08:00 |
| `init_baseline.py` | Kaiming Initialization (#14718) | 2019-02-15 14:58:22 -08:00 |
| `init.cpp` | Fix torch::nn::init::orthogonal_ with CNNs (#18915) | 2019-04-09 10:39:15 -07:00 |
| `integration.cpp` | Move isnan to C++ (#15722) | 2019-01-08 10:42:33 -08:00 |
| `jit.cpp` | Use concrete types in jit test for generic lists (#23192) | 2019-07-23 10:04:12 -07:00 |
| `memory.cpp` | Hide c10::optional and nullopt in torch namespace (#12927) | 2018-10-26 00:08:04 -07:00 |
| `misc.cpp` | Kaiming Initialization (#14718) | 2019-02-15 14:58:22 -08:00 |
| `module.cpp` | Avoid unnecessary tensor clone in Cloneable (#20995) | 2019-07-26 12:46:42 -07:00 |
| `modules.cpp` | Rename BatchNorm running_variance to running_var (#17371) | 2019-02-22 08:00:25 -08:00 |
| `optim_baseline.h` | Use torch:: instead of at:: in all C++ APIs (#13523) | 2018-11-06 14:32:25 -08:00 |
| `optim_baseline.py` | Use torch:: instead of at:: in all C++ APIs (#13523) | 2018-11-06 14:32:25 -08:00 |
| `optim.cpp` | Replace cursors with OrderedDict (#13427) | 2018-11-07 11:10:05 -08:00 |
| `ordered_dict.cpp` | Replace cursors with OrderedDict (#13427) | 2018-11-07 11:10:05 -08:00 |
| `parallel.cpp` | Fix C++ data parallel (#20910) | 2019-06-06 11:57:31 -07:00 |
| `README.md` | Rewrite C++ API tests in gtest (#11953) | 2018-09-21 21:28:16 -07:00 |
| `rnn.cpp` | Bidirectional GRU and LSTM C++ API forward fix (#22850) | 2019-07-22 12:59:47 -07:00 |
| `sequential.cpp` | Include named_any.h in modules.h (#21437) | 2019-06-06 09:57:33 -07:00 |
| `serialize.cpp` | Ignore nn::Functional submodules in nn::Module serialization (#19740) | 2019-04-26 12:47:23 -07:00 |
| `static.cpp` | Make call operator on module holder call forward (#15831) | 2019-01-14 14:40:33 -08:00 |
| `support.h` | Use torch:: instead of at:: in all C++ APIs (#13523) | 2018-11-06 14:32:25 -08:00 |
| `tensor_cuda.cpp` | push magma init into lazyInitCUDA (#18527) | 2019-04-03 12:47:34 -07:00 |
| `tensor_options_cuda.cpp` | Add ScalarType argument to Type::options() (#19270) | 2019-04-21 21:16:07 -07:00 |
| `tensor_options.cpp` | Stop using Type in Python bindings (#21963) | 2019-06-30 04:11:32 -07:00 |
| `tensor.cpp` | Rename _local_scalar to item() (#13676) | 2018-12-04 13:19:26 -08:00 |
| `torch_include.cpp` | Add get/set_num_interop_threads into torch.h include (#20659) | 2019-05-20 00:34:59 -07:00 |

## C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.
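
A typical test in this folder defines a `TEST` case that exercises the C++ frontend directly. The sketch below is purely illustrative (the suite name and assertions are made up, not taken from any file listed above):

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example of a C++ frontend test: build a small module,
// run a forward pass, and assert on the output shape.
TEST(ExampleSuite, LinearForwardProducesExpectedShape) {
  torch::nn::Linear linear(/*in_features=*/4, /*out_features=*/2);
  torch::Tensor output = linear->forward(torch::ones({3, 4}));
  ASSERT_EQ(output.size(0), 3);
  ASSERT_EQ(output.size(1), 2);
}
```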

### CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```
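
For instance, a `_CUDA` test might allocate a tensor on the GPU. The body below is a hypothetical sketch, not a test from this suite:

```cpp
// Hypothetical sketch: runs only when at least one CUDA device is present.
TEST(TensorTest, MovesToGPU_CUDA) {
  torch::Tensor tensor = torch::ones({2, 2}, torch::kCUDA);
  ASSERT_TRUE(tensor.is_cuda());
}
```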

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```
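
Similarly, a `_MultiCUDA` test can assume at least two devices; again a hypothetical sketch:

```cpp
// Hypothetical sketch: runs only when at least two CUDA devices are present.
TEST(TensorTest, UsesTwoDevices_MultiCUDA) {
  torch::Tensor a = torch::ones({2, 2}, torch::Device(torch::kCUDA, 0));
  torch::Tensor b = torch::ones({2, 2}, torch::Device(torch::kCUDA, 1));
  ASSERT_EQ(a.device().index(), 0);
  ASSERT_EQ(b.device().index(), 1);
}
```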

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
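
A minimal sketch of that idea (not the actual `main.cpp`), assuming `torch::cuda::device_count()` for device detection and GoogleTest's negative `--gtest_filter` patterns, might look like this:

```cpp
#include <gtest/gtest.h>
#include <torch/cuda.h>

#include <string>

// Minimal sketch of the filtering idea (not the actual main.cpp): skip
// *_CUDA and *_MultiCUDA tests when too few CUDA devices are available.
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);

  const auto num_devices = torch::cuda::device_count();
  std::string filter;
  if (num_devices == 0) {
    filter = "-*_CUDA:*_MultiCUDA";  // no GPU: skip every CUDA test
  } else if (num_devices < 2) {
    filter = "-*_MultiCUDA";         // one GPU: skip multi-GPU tests only
  }
  if (!filter.empty()) {
    ::testing::GTEST_FLAG(filter) = filter;
  }
  return RUN_ALL_TESTS();
}
```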

### Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.
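
For reference, an integration test might read the downloaded files through the C++ data API roughly as sketched below (the names are made up; the real tests live in `integration.cpp`):

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical sketch: verify the MNIST files downloaded above can be read.
// The relative path only resolves when run from the PyTorch root folder.
TEST(IntegrationExample, FindsDownloadedMNIST) {
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Stack<>());
  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);
  for (auto& batch : *loader) {
    ASSERT_EQ(batch.data.size(0), batch.target.size(0));
    break;  // reading one batch is enough to confirm the files were found
  }
}
```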