# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ frontend. They use the [GoogleTest](https://github.com/google/googletest) test framework.
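For orientation, a minimal test in this folder follows the usual GoogleTest shape. The suite and test names below are made up for illustration:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// A plain test with no device suffix runs on every platform.
TEST(ExampleSuite, TensorAddition) {
  torch::Tensor a = torch::ones({2});
  ASSERT_EQ(a.add(a).sum().item<float>(), 4.0f);
}
```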

## CUDA Tests

To make a test runnable only on platforms with CUDA, suffix your test name with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```
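For a fuller picture, a hypothetical CUDA-only test might look like this; the suite, test, and tensor code are illustrative, not taken from the files in this folder:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Runs only when main.cpp detects at least one CUDA device,
// because the test name ends in _CUDA.
TEST(TensorTest, AllocatesOnGpu_CUDA) {
  torch::Tensor t = torch::ones({2, 2}, torch::kCUDA);
  ASSERT_TRUE(t.is_cuda());
}
```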

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```
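A hypothetical multi-device test could exercise a cross-device copy; again, the names and body are illustrative:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Runs only when main.cpp detects two or more CUDA devices,
// because the test name ends in _MultiCUDA.
TEST(TensorTest, CopiesAcrossDevices_MultiCUDA) {
  torch::Tensor a = torch::rand({4}, torch::Device(torch::kCUDA, 0));
  torch::Tensor b = a.to(torch::Device(torch::kCUDA, 1));
  ASSERT_EQ(b.device().index(), 1);
}
```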

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
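Conceptually, that logic amounts to the sketch below. This is a paraphrase under assumptions, not the actual contents of `main.cpp` (the real file would, for instance, compose with any user-supplied `--gtest_filter` rather than overwrite it). GoogleTest's filter syntax lists excluding patterns after a `-`, separated by `:`:

```cpp
#include <gtest/gtest.h>
#include <torch/cuda.h>

int main(int argc, char* argv[]) {
  ::testing::InitGoogleTest(&argc, argv);
  if (!torch::cuda::is_available()) {
    // No CUDA device at all: exclude single- and multi-device tests.
    ::testing::GTEST_FLAG(filter) = "*-*_CUDA:*_MultiCUDA";
  } else if (torch::cuda::device_count() < 2) {
    // Exactly one device: exclude only the multi-device tests.
    ::testing::GTEST_FLAG(filter) = "*-*_MultiCUDA";
  }
  return RUN_ALL_TESTS();
}
```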

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you *must* run the integration tests from the PyTorch root folder.
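To see why the working directory matters, here is a sketch of how an integration test might open the dataset through that relative path. The test name is hypothetical, but `torch::data::datasets::MNIST` is the C++ frontend's MNIST loader:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// The relative path below only resolves when the test binary is
// launched from the PyTorch root folder.
TEST(IntegrationTest, LoadsMnistFromRelativePath) {
  auto train_set = torch::data::datasets::MNIST("test/cpp/api/mnist");
  ASSERT_EQ(train_set.size().value(), 60000);  // MNIST training set size
}
```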