pytorch/test/cpp/api
Will Feng 026fd36c71 Use at::kLong for torch::tensor(integer_value) when dtype is not specified (#29066)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29066

This PR is BC-breaking in the following way:

Previously, C++ `torch::tensor` with an integer literal or a braced-init-list of
integer literals produced a tensor whose dtype matched the type of the integer literal(s). After this PR, it always produces a tensor of dtype `at::kLong` (i.e. `int64_t`), matching the behavior of Python `torch.tensor`.
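A minimal sketch of the new behavior (illustrative only; the exact textual output of `dtype()` may differ):

#include <torch/torch.h>
#include <iostream>

int main() {
  // With no dtype specified, integer inputs now default to at::kLong.
  auto t = torch::tensor(1);
  std::cout << t.dtype() << "\n";  // int64_t (previously followed the literal's type)

  auto u = torch::tensor({1, 2, 3});
  std::cout << u.dtype() << "\n";  // also at::kLong

  // An explicitly requested dtype is still honored.
  auto v = torch::tensor(1, torch::dtype(torch::kInt));
  std::cout << v.dtype() << "\n";  // int32_t
}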

Test Plan: Imported from OSS

Differential Revision: D18307248

Pulled By: yf225

fbshipit-source-id: 7a8a2eefa113cbb238f23264843bdb3b77fec668
2019-11-04 21:39:10 -08:00
any.cpp Separate libtorch tests from libtorch build. (#26927) 2019-10-02 08:04:52 -07:00
autograd.cpp Fix bugs in torch::tensor constructor (#28523) 2019-10-31 12:53:06 -07:00
CMakeLists.txt Use torch::variant for enums in C++ API 2019-10-16 22:40:57 -07:00
dataloader.cpp Fix bugs in torch::tensor constructor (#28523) 2019-10-31 12:53:06 -07:00
enum.cpp Use c10::variant-based enums for Reduction 2019-10-29 14:15:48 -07:00
expanding-array.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
functional.cpp Add torch.nn.GELU for GELU activation (#28944) 2019-11-03 21:55:05 -08:00
init_baseline.h Kaiming Initialization (#14718) 2019-02-15 14:58:22 -08:00
init_baseline.py Kaiming Initialization (#14718) 2019-02-15 14:58:22 -08:00
init.cpp Use c10::variant-based enums for Nonlinearity and FanMode 2019-10-18 17:48:34 -07:00
integration.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
jit.cpp Remove attempToRecoverType (#26767) 2019-10-16 11:07:13 -07:00
memory.cpp Hide c10::optional and nullopt in torch namespace (#12927) 2018-10-26 00:08:04 -07:00
misc.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
module.cpp Allow passing undefined Tensor to Module::register_parameter (#27948) 2019-10-15 10:10:42 -07:00
modulelist.cpp C++ API: torch::nn::BatchNorm1d (#28176) 2019-10-29 17:29:42 -07:00
modules.cpp Add torch.nn.GELU for GELU activation (#28944) 2019-11-03 21:55:05 -08:00
nn_utils.cpp Add C++ API clip_grad_value_ for nn:utils (#28736) 2019-10-31 19:11:54 -07:00
optim_baseline.h Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
optim_baseline.py Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
optim.cpp Re-organize C++ API torch::nn folder structure (#26262) 2019-09-17 10:07:29 -07:00
ordered_dict.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
parallel.cpp Fix bugs in torch::tensor constructor (#28523) 2019-10-31 12:53:06 -07:00
README.md Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
rnn.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
sequential.cpp C++ API: torch::nn::BatchNorm1d (#28176) 2019-10-29 17:29:42 -07:00
serialize.cpp Include hierarchy information in C++ API loading error messages (#28499) 2019-10-30 08:41:37 -07:00
static.cpp Re-organize C++ API torch::nn folder structure (#26262) 2019-09-17 10:07:29 -07:00
support.h Add TORCH_WARN_ONCE, and use it in Tensor.data<T>() (#25207) 2019-08-27 21:42:44 -07:00
tensor_cuda.cpp Deprecate tensor.data<T>(), and codemod tensor.data<T>() to tensor.data_ptr<T>() (#24886) 2019-08-21 20:11:24 -07:00
tensor_options_cuda.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
tensor_options.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
tensor.cpp Use at::kLong for torch::tensor(integer_value) when dtype is not specified (#29066) 2019-11-04 21:39:10 -08:00
torch_include.cpp Relax set_num_threads restriction in parallel native case (#27947) 2019-10-16 21:53:36 -07:00

C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_CUDA) { }

To make it runnable only on platforms with at least two CUDA devices, suffix it with _MultiCUDA instead of _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_MultiCUDA) { }

There is logic in main.cpp that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
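A rough sketch of what that logic might look like (the actual main.cpp may differ in detail; this only illustrates the filtering mechanism):

#include <gtest/gtest.h>
#include <torch/torch.h>
#include <string>

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  std::string filter;
  if (!torch::cuda::is_available()) {
    // No CUDA device at all: exclude every *_CUDA and *_MultiCUDA test.
    filter = "-*_CUDA:*_MultiCUDA";
  } else if (torch::cuda::device_count() < 2) {
    // Only one device: exclude tests that require multiple GPUs.
    filter = "-*_MultiCUDA";
  }
  if (!filter.empty()) {
    ::testing::GTEST_FLAG(filter) = filter;
  }
  return RUN_ALL_TESTS();
}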

Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

$ python tools/download_mnist.py -d test/cpp/api/mnist

The required paths will be referenced as test/cpp/api/mnist/... in the test code, so you must run the integration tests from the PyTorch root folder.
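For orientation, here is a hypothetical test body showing how the downloaded files could be consumed via the C++ data API (test and variable names are illustrative, not taken from the actual integration tests):

#include <gtest/gtest.h>
#include <torch/torch.h>

TEST(IntegrationExample, MNISTBatches_CUDA) {
  // Path is relative to the PyTorch root folder, matching the download command above.
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Stack<>());
  auto loader = torch::data::make_data_loader(
      std::move(dataset), torch::data::DataLoaderOptions().batch_size(64));
  for (auto& batch : *loader) {
    // Each batch carries stacked image and label tensors.
    ASSERT_EQ(batch.data.size(0), batch.target.size(0));
  }
}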