pytorch/test/cpp/api
Latest commit cb74ede59e (Will Feng): Pass `F::*FuncOptions` instead of `torch::nn::*Options` to functionals, and make `F::*FuncOptions` a different class when necessary (#29364)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29364

Currently, we use `torch::nn::*Options` both as module options and functional options. However, this makes the parameters in `torch::nn::*Options` very hard to manage, because a module's constructor can take a different set of arguments than the module's equivalent functional (e.g. `torch.nn.BatchNorm1d` takes `num_features, eps=1e-5, momentum=0.1, affine=True, track_running_stats=True`, while `F::batch_norm` takes `running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-5`).

This PR resolves the above problem by making `F::*FuncOptions` a different class from `torch::nn::*Options` when necessary (i.e. when a module's constructor takes a different set of arguments than the module's equivalent functional). In the rest of the cases where the module constructor takes the same set of arguments as the module's equivalent functional, `F::*FuncOptions` is an alias of `torch::nn::*Options`.

As part of this PR, we also change all functional options to be passed by value, making the semantics consistent across all functionals.

Test Plan: Imported from OSS

Differential Revision: D18376977

Pulled By: yf225

fbshipit-source-id: 8d9c240d93bfd5af0165b6884fdc912476b1d06b
Committed: 2019-11-08 22:38:21 -08:00
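Concretely, the split looks roughly like this at call sites. This is a minimal sketch assuming the post-#29364 option names (`torch::nn::BatchNorm1dOptions` for the module, `F::BatchNormFuncOptions` for the functional):

```cpp
#include <torch/torch.h>

namespace F = torch::nn::functional;

int main() {
  // Module path: the constructor options describe the layer itself
  // (num_features, eps, momentum, ...).
  auto bn = torch::nn::BatchNorm1d(
      torch::nn::BatchNorm1dOptions(4).eps(1e-5).momentum(0.1));

  // Functional path: F::batch_norm takes the running statistics as
  // explicit arguments, so its options live in a separate class.
  auto input = torch::randn({2, 4});
  auto mean = torch::zeros({4});
  auto var = torch::ones({4});
  auto out = F::batch_norm(
      input, mean, var,
      F::BatchNormFuncOptions().momentum(0.1).eps(1e-5).training(false));
}
```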
| File | Last commit | Last updated |
| --- | --- | --- |
| `any.cpp` | Separate libtorch tests from libtorch build. (#26927) | 2019-10-02 08:04:52 -07:00 |
| `autograd.cpp` | Fix bugs in `torch::tensor` constructor (#28523) | 2019-10-31 12:53:06 -07:00 |
| `CMakeLists.txt` | Use `torch::variant` for enums in C++ API | 2019-10-16 22:40:57 -07:00 |
| `dataloader.cpp` | Fix bugs in `torch::tensor` constructor (#28523) | 2019-10-31 12:53:06 -07:00 |
| `enum.cpp` | Use `c10::variant`-based enums for Reduction | 2019-10-29 14:15:48 -07:00 |
| `expanding-array.cpp` | Change C++ API test files to only include `torch/torch.h` (#27067) | 2019-10-10 09:46:29 -07:00 |
| `functional.cpp` | Pass `F::*FuncOptions` instead of `torch::nn::*Options` to functionals, and make `F::*FuncOptions` a different class when necessary (#29364) | 2019-11-08 22:38:21 -08:00 |
| `init_baseline.h` | Kaiming Initialization (#14718) | 2019-02-15 14:58:22 -08:00 |
| `init_baseline.py` | Kaiming Initialization (#14718) | 2019-02-15 14:58:22 -08:00 |
| `init.cpp` | Use `c10::variant`-based enums for Nonlinearity and FanMode | 2019-10-18 17:48:34 -07:00 |
| `integration.cpp` | Change C++ API test files to only include `torch/torch.h` (#27067) | 2019-10-10 09:46:29 -07:00 |
| `jit.cpp` | Remove `attemptToRecoverType` (#26767) | 2019-10-16 11:07:13 -07:00 |
| `memory.cpp` | Hide `c10::optional` and `nullopt` in `torch` namespace (#12927) | 2018-10-26 00:08:04 -07:00 |
| `misc.cpp` | Change C++ API test files to only include `torch/torch.h` (#27067) | 2019-10-10 09:46:29 -07:00 |
| `module.cpp` | Allow passing undefined Tensor to `Module::register_parameter` (#27948) | 2019-10-15 10:10:42 -07:00 |
| `modulelist.cpp` | C++ API: `torch::nn::BatchNorm1d` (#28176) | 2019-10-29 17:29:42 -07:00 |
| `modules.cpp` | Pass `F::*FuncOptions` instead of `torch::nn::*Options` to functionals, and make `F::*FuncOptions` a different class when necessary (#29364) | 2019-11-08 22:38:21 -08:00 |
| `nn_utils.cpp` | Add C++ API `clip_grad_value_` for `nn::utils` (#28736) | 2019-10-31 19:11:54 -07:00 |
| `optim_baseline.h` | Use `torch::` instead of `at::` in all C++ APIs (#13523) | 2018-11-06 14:32:25 -08:00 |
| `optim_baseline.py` | Use `torch::` instead of `at::` in all C++ APIs (#13523) | 2018-11-06 14:32:25 -08:00 |
| `optim.cpp` | Re-organize C++ API `torch::nn` folder structure (#26262) | 2019-09-17 10:07:29 -07:00 |
| `ordered_dict.cpp` | Change C++ API test files to only include `torch/torch.h` (#27067) | 2019-10-10 09:46:29 -07:00 |
| `parallel.cpp` | Fix bugs in `torch::tensor` constructor (#28523) | 2019-10-31 12:53:06 -07:00 |
| `README.md` | Rewrite C++ API tests in gtest (#11953) | 2018-09-21 21:28:16 -07:00 |
| `rnn.cpp` | Change C++ API test files to only include `torch/torch.h` (#27067) | 2019-10-10 09:46:29 -07:00 |
| `sequential.cpp` | C++ API: `torch::nn::BatchNorm1d` (#28176) | 2019-10-29 17:29:42 -07:00 |
| `serialize.cpp` | Implement more of the `nn.Module` API (#28828) | 2019-11-06 22:58:25 -08:00 |
| `static.cpp` | Re-organize C++ API `torch::nn` folder structure (#26262) | 2019-09-17 10:07:29 -07:00 |
| `support.h` | Add `TORCH_WARN_ONCE`, and use it in `Tensor.data<T>()` (#25207) | 2019-08-27 21:42:44 -07:00 |
| `tensor_cuda.cpp` | Deprecate `tensor.data<T>()`, and codemod `tensor.data<T>()` to `tensor.data_ptr<T>()` (#24886) | 2019-08-21 20:11:24 -07:00 |
| `tensor_options_cuda.cpp` | Change C++ API test files to only include `torch/torch.h` (#27067) | 2019-10-10 09:46:29 -07:00 |
| `tensor_options.cpp` | Change C++ API test files to only include `torch/torch.h` (#27067) | 2019-10-10 09:46:29 -07:00 |
| `tensor.cpp` | Revert "Revert D18171156: Merge Tensor and Variable." (#29299) | 2019-11-08 09:11:20 -08:00 |
| `torch_include.cpp` | Relax `set_num_threads` restriction in parallel native case (#27947) | 2019-10-16 21:53:36 -07:00 |

# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ frontend. They use the [GoogleTest](https://github.com/google/googletest) test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
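A minimal sketch of how such filtering can be built on top of GoogleTest's `--gtest_filter` mechanism; the `add_negative_flag` helper is illustrative and not necessarily the exact code in `main.cpp`:

```cpp
#include <gtest/gtest.h>
#include <torch/cuda.h>

#include <string>

// Append a negative pattern to the current GoogleTest filter. Everything
// after the first '-' in the filter is an exclusion, separated by ':'.
std::string add_negative_flag(const std::string& flag) {
  std::string filter = ::testing::GTEST_FLAG(filter);
  if (filter.find('-') == std::string::npos) {
    filter.push_back('-');
  } else {
    filter.push_back(':');
  }
  filter += flag;
  return filter;
}

int main(int argc, char* argv[]) {
  ::testing::InitGoogleTest(&argc, argv);
  if (!torch::cuda::is_available()) {
    // No GPU at all: exclude both single- and multi-GPU tests.
    ::testing::GTEST_FLAG(filter) = add_negative_flag("*_CUDA:*_MultiCUDA");
  } else if (torch::cuda::device_count() < 2) {
    // Exactly one GPU: exclude only the multi-GPU tests.
    ::testing::GTEST_FLAG(filter) = add_negative_flag("*_MultiCUDA");
  }
  return RUN_ALL_TESTS();
}
```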

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.
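For context, libtorch ships an MNIST dataset reader that consumes such a directory. A minimal sketch of the loading pattern, assuming the relative path above and the PyTorch root as working directory:

```cpp
#include <torch/torch.h>

int main() {
  // The relative path only resolves if the working directory is the
  // PyTorch root folder, which is why the tests must be run from there.
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
      .map(torch::data::transforms::Stack<>());

  // Batch the examples into tensors of shape [64, 1, 28, 28] / [64].
  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);

  for (auto& batch : *loader) {
    // batch.data holds the images, batch.target the labels.
    break;
  }
}
```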