| File | Last change | Date |
| --- | --- | --- |
| `any.cpp` | Make torch::Tensor -> at::Tensor (#10516) | 2018-08-15 |
| `cursor.cpp` | Add OptimizerBase::add_parameters (#9472) | 2018-07-17 |
| `integration.cpp` | Refactor Device to not depend on Backend. (#10478) | 2018-08-18 |
| `main.cpp` | Functional DataParallel (#9234) | 2018-07-19 |
| `misc.cpp` | Update include paths for ATen/core (#10130) | 2018-08-03 |
| `module.cpp` | Make torch::Tensor -> at::Tensor (#10516) | 2018-08-15 |
| `modules.cpp` | Make torch::Tensor -> at::Tensor (#10516) | 2018-08-15 |
| `optim_baseline.h` | Remove use of data() in optimizers (#10490) | 2018-08-14 |
| `optim_baseline.py` | Remove use of data() in optimizers (#10490) | 2018-08-14 |
| `optim.cpp` | Bag of clang tidy fixes for torch/csrc/ and torch/csrc/autograd (#11050) | 2018-09-05 |
| `parallel.cpp` | Make torch::Tensor -> at::Tensor (#10516) | 2018-08-15 |
| `README.md` | Update C++ API tests to use Catch2 (#7108) | 2018-04-30 |
| `rnn.cpp` | Use ATen implementation of RNNs (#10761) | 2018-08-23 |
| `sequential.cpp` | Make torch::Tensor -> at::Tensor (#10516) | 2018-08-15 |
| `serialization.cpp` | Add complex32, complex64 and complex128 dtypes (#11173) | 2018-09-03 |
| `static.cpp` | Make Sequential ref-counted (#9151) | 2018-07-11 |
| `tensor_cuda.cpp` | Functional DataParallel (#9234) | 2018-07-19 |
| `tensor_options_cuda.cpp` | Move TensorOptions to ATen/core | 2018-09-04 |
| `tensor_options.cpp` | Move TensorOptions to ATen/core | 2018-09-04 |
| `tensor.cpp` | Remove Tensor constructor of Scalar. (#10852) | 2018-08-24 |
| `util.h` | Make torch::Tensor -> at::Tensor (#10516) | 2018-08-15 |

# C++ API Tests

In this folder live the tests for PyTorch's C++ API (formerly known as autogradpp). They use the [Catch2](https://github.com/catchorg/Catch2) test framework.

## CUDA Tests

We handle CUDA tests by placing them in their own `TEST_CASE` (e.g. `optim.cpp` has both an `optim` and an `optim_cuda` test case) and giving that case the `[cuda]` tag. Inside `main.cpp` we then detect at runtime whether CUDA is available. If it is not, we disable the CUDA tests by appending `~[cuda]` to the test specification; the `~` excludes the tag.

One annoying aspect is that Catch only allows filtering on test cases, not on sections. Ideally, one could have a section like `LSTM` inside the `RNN` test case and give just that section a `[cuda]` tag, so it runs only when CUDA is available. Instead, we have to create a whole separate `RNN_cuda` test case and put all of these CUDA sections in there.

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```shell
$ python tools/download_mnist.py -d test/cpp/api/mnist
```