pytorch/test/cpp/api

| File | Last commit | Date |
| --- | --- | --- |
| any.cpp | | |
| autograd.cpp | Fix autograd when inputs contains tensors without materialized grad_fn (#51940) | 2021-02-11 09:22:15 -08:00 |
| CMakeLists.txt | Implement public API InferenceMode and its error handling (#55008) | 2021-03-31 10:48:00 -07:00 |
| dataloader.cpp | Lint trailing newlines (#54737) | 2021-03-30 13:09:52 -07:00 |
| dispatch.cpp | | |
| enum.cpp | | |
| expanding-array.cpp | | |
| fft.cpp | Remove deprecated spectral ops from torch namespace (#48594) | 2020-12-05 04:12:32 -08:00 |
| functional.cpp | Add padding_idx argument to EmbeddingBag (#49237) | 2021-04-14 09:38:01 -07:00 |
| grad_mode.cpp | [WIP]Relax some limitations of InferenceMode. (#54403) | 2021-04-09 14:40:37 -07:00 |
| inference_mode.cpp | Enable AutoGradMode in InferenceMode. (#56107) | 2021-04-19 10:24:20 -07:00 |
| init_baseline.h | Lint trailing newlines (#54737) | 2021-03-30 13:09:52 -07:00 |
| init_baseline.py | | |
| init.cpp | | |
| integration.cpp | | |
| jit.cpp | | |
| memory.cpp | | |
| misc.cpp | codegen: Resolve overload ambiguities created by defaulted arguments (#49348) | 2021-01-04 11:59:16 -08:00 |
| module.cpp | | |
| moduledict.cpp | Implement C++ ModuleDict (#47707) | 2020-11-19 08:07:51 -08:00 |
| modulelist.cpp | | |
| modules.cpp | Add padding_idx argument to EmbeddingBag (#49237) | 2021-04-14 09:38:01 -07:00 |
| namespace.cpp | | |
| nn_utils.cpp | Flip clip_grad_norm default for error_if_nonfinite to false (#55169) | 2021-04-02 12:25:32 -07:00 |
| operations.cpp | | |
| optim_baseline.h | | |
| optim_baseline.py | Remove legacy constructor calls from pytorch codebase. (#54142) | 2021-04-11 15:45:17 -07:00 |
| optim.cpp | Adding learning rate schedulers to C++ API (#52268) | 2021-03-10 23:09:51 -08:00 |
| ordered_dict.cpp | | |
| parallel_benchmark.cpp | | |
| parallel.cpp | | |
| parameterdict.cpp | | |
| parameterlist.cpp | | |
| README.md | | |
| rnn.cpp | Adding support for CuDNN-based LSTM with projections (#47725) | 2020-12-16 11:27:02 -08:00 |
| sequential.cpp | | |
| serialize.cpp | Modernize for-loops (#50912) | 2021-01-22 10:53:24 -08:00 |
| special.cpp | [special] add torch.special namespace (#52296) | 2021-03-04 00:04:36 -08:00 |
| static.cpp | | |
| support.cpp | | |
| support.h | Implement public API InferenceMode and its error handling (#55008) | 2021-03-31 10:48:00 -07:00 |
| tensor_cuda.cpp | | |
| tensor_flatten.cpp | fix unflatten_dense_tensor when there is empty tensor inside (#50321) | 2021-01-23 12:14:34 -08:00 |
| tensor_indexing.cpp | Making ops c10-full: list of optional tensors (#49138) | 2021-01-04 05:04:02 -08:00 |
| tensor_options_cuda.cpp | | |
| tensor_options.cpp | [PyTorch] Narrow Device to 2 bytes by narrowing DeviceType and DeviceIndex (#47023) | 2020-11-18 19:39:40 -08:00 |
| tensor.cpp | | |
| torch_include.cpp | | |
| transformer.cpp | | |

# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ frontend. They use the [GoogleTest](https://github.com/google/googletest) test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```
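
For illustration, a hypothetical `_CUDA`-suffixed test might exercise an actual GPU tensor. The suite and test names below are made up; only the suffix matters to the test runner.

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example: this test only runs when a CUDA device is available,
// purely because its name ends in _CUDA.
TEST(TensorExampleTest, MoveToDevice_CUDA) {
  auto cpu_tensor = torch::ones({2, 2});
  auto gpu_tensor = cpu_tensor.to(torch::kCUDA);
  ASSERT_TRUE(gpu_tensor.device().is_cuda());
}
```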

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```
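
Likewise, a hypothetical `_MultiCUDA` test might move data between two device indices. Again, the names below are illustrative only.

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example: only runs when at least two CUDA devices are present.
TEST(TensorExampleTest, CopyBetweenDevices_MultiCUDA) {
  auto on_first = torch::ones({2, 2}, torch::Device(torch::kCUDA, 0));
  auto on_second = on_first.to(torch::Device(torch::kCUDA, 1));
  ASSERT_EQ(on_second.device().index(), 1);
}
```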

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
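
The sketch below illustrates the idea behind that filtering, assuming the runner appends GoogleTest negative patterns based on what `torch::cuda` reports; it is not the actual contents of `main.cpp`.

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

#include <string>

int main(int argc, char* argv[]) {
  ::testing::InitGoogleTest(&argc, argv);

  // Append negative patterns so CUDA-gated tests are skipped when the
  // hardware they need is absent. Sketch only; details may differ from
  // the real main.cpp.
  std::string extra_filter;
  if (!torch::cuda::is_available()) {
    extra_filter = "-*_CUDA:*_MultiCUDA"; // no GPU: skip both kinds of test
  } else if (torch::cuda::device_count() < 2) {
    extra_filter = "-*_MultiCUDA"; // single GPU: skip only multi-GPU tests
  }
  if (!extra_filter.empty()) {
    ::testing::GTEST_FLAG(filter) += extra_filter;
  }
  return RUN_ALL_TESTS();
}
```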

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.
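
As a hypothetical sketch (not one of the real tests in integration.cpp), an integration test can hand that relative path to the C++ frontend's MNIST dataset, which is why the working directory must be the PyTorch root:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Relative path produced by tools/download_mnist.py; it only resolves when
// the test binary is run from the PyTorch root folder.
const char* kMnistRoot = "test/cpp/api/mnist";

TEST(IntegrationExampleTest, LoadsMnistBatch) {
  auto dataset = torch::data::datasets::MNIST(kMnistRoot)
                     .map(torch::data::transforms::Stack<>());
  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);
  for (auto& batch : *loader) {
    // Each stacked batch pairs image data with its labels.
    ASSERT_EQ(batch.data.size(0), batch.target.size(0));
    break;
  }
}
```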