# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the [GoogleTest](https://github.com/google/googletest) test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, suffix its name with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```
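For illustration, here is a minimal sketch of what a `_MultiCUDA` test body might look like; the suite and test names are hypothetical, not tests that exist in this folder:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example: copies a tensor between two CUDA devices.
// The _MultiCUDA suffix ensures this only runs when at least two
// CUDA devices are available.
TEST(MyTestSuite, TensorCopyAcrossDevices_MultiCUDA) {
  torch::Tensor a = torch::ones({2, 2}, torch::Device(torch::kCUDA, 0));
  torch::Tensor b = a.to(torch::Device(torch::kCUDA, 1));
  ASSERT_EQ(b.device().index(), 1);
  ASSERT_TRUE(b.eq(1).all().item<bool>());
}
```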

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
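For reference, a minimal sketch of how that kind of negative filtering can be implemented; this illustrates the mechanism, not the exact contents of `main.cpp`:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

#include <string>

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // A leading '-' makes the following patterns negative:
  // tests matching them are skipped.
  std::string filter = "-";
  if (!torch::cuda::is_available()) {
    filter += "*_CUDA:*_MultiCUDA"; // no CUDA devices at all
  } else if (torch::cuda::device_count() < 2) {
    filter += "*_MultiCUDA"; // only a single CUDA device
  }
  if (filter != "-") {
    ::testing::GTEST_FLAG(filter) = filter;
  }
  return RUN_ALL_TESTS();
}
```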

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.
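Putting it together, a typical session might look like the following; note that the test binary path `build/bin/test_api` and the `IntegrationTest` filter pattern are assumptions that depend on how you built PyTorch and how the integration suite is named:

```sh
$ cd /path/to/pytorch                                    # run from the repo root
$ python tools/download_mnist.py -d test/cpp/api/mnist
$ build/bin/test_api --gtest_filter='IntegrationTest.*'  # binary path may vary
```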