pytorch/test/cpp/api
Wanchao Liang 6d4c509168 [autograd] lower MAX_DEPTH limit according to TSAN limit (#36745)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36745

Since we hold a mutex for each custom C++ Node, calling reentrant
backward from a custom C++ function leaves a single thread concurrently
holding up to MAX_DEPTH mutexes. TSAN only allows 65 mutexes held at
once and complains beyond that, so this PR lowers the limit accordingly.

TSAN Reference: https://github.com/google/sanitizers/issues/950
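
For illustration, here is a minimal sketch of the pattern at issue (the
MyOp name and its depth argument are hypothetical, not from this PR): a
custom C++ Function whose backward() launches another backward pass
re-enters the engine while the current Node's mutex is still held, so the
recursion depth bounds how many mutexes one thread holds at once.

// Minimal sketch (hypothetical example, not test code from this PR).
#include <torch/torch.h>

struct MyOp : public torch::autograd::Function<MyOp> {
  static torch::Tensor forward(
      torch::autograd::AutogradContext* ctx, torch::Tensor x, int64_t depth) {
    ctx->saved_data["depth"] = depth;
    return x * 2;
  }

  static torch::autograd::variable_list backward(
      torch::autograd::AutogradContext* ctx,
      torch::autograd::variable_list grad_output) {
    if (ctx->saved_data["depth"].toInt() > 0) {
      // Reentrant backward: a fresh graph is backpropagated while the
      // engine still holds this Node's mutex, stacking one held mutex
      // per nesting level -- the count TSAN caps at 65.
      auto y = MyOp::apply(torch::ones({1}, torch::requires_grad()),
                           ctx->saved_data["depth"].toInt() - 1);
      y.sum().backward();
    }
    // Gradient for x; the depth argument is not differentiable.
    return {grad_output[0] * 2, torch::Tensor()};
  }
};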

Test Plan: Imported from OSS

Differential Revision: D21072604

Pulled By: wanchaol

fbshipit-source-id: 99cd1acab41a203d834fa4947f4e6f0ffd2e70f2
2020-04-16 20:43:20 -07:00
any.cpp [C++ API] Allow skipping default arguments in module's forward method when module is used in Sequential (#33027) 2020-02-17 20:38:02 -08:00
autograd.cpp [autograd] lower MAX_DEPTH limit according to TSAN limit (#36745) 2020-04-16 20:43:20 -07:00
CMakeLists.txt Fix/relax CMake linter rules (#35574) 2020-03-27 16:52:33 -07:00
dataloader.cpp Fix typos (#30606) 2019-12-02 20:17:42 -08:00
dispatch.cpp make sure dispatch test works on windows (#36729) 2020-04-16 11:36:56 -07:00
enum.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
expanding-array.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
functional.cpp Support for XNNPACK max pooling operator. (#35354) 2020-04-03 22:53:15 -07:00
init_baseline.h Kaiming Initialization (#14718) 2019-02-15 14:58:22 -08:00
init_baseline.py Kaiming Initialization (#14718) 2019-02-15 14:58:22 -08:00
init.cpp [C++ API] Remove deprecated torch::nn::BatchNorm / FeatureDropout / modules_ordered_dict and torch::nn::init::Nonlinearity / FanMode (#34508) 2020-03-12 10:09:58 -07:00
integration.cpp [C++ API] Remove deprecated torch::nn::BatchNorm / FeatureDropout / modules_ordered_dict and torch::nn::init::Nonlinearity / FanMode (#34508) 2020-03-12 10:09:58 -07:00
jit.cpp Remove attempToRecoverType (#26767) 2019-10-16 11:07:13 -07:00
memory.cpp Hide c10::optional and nullopt in torch namespace (#12927) 2018-10-26 00:08:04 -07:00
misc.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
module.cpp Remove dead includes in caffe2/test 2020-01-21 11:30:34 -08:00
modulelist.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
modules.cpp Add inplace tests for several torch::nn modules / functionals (#35147) 2020-03-21 10:02:56 -07:00
namespace.cpp Remove using namespace torch::autograd from header files (#34423) 2020-03-09 10:31:21 -07:00
nn_utils.cpp [C++ API] Add PackedSequence / pack_padded_sequence / pad_packed_sequence / pack_sequence (#33652) 2020-02-25 12:53:41 -08:00
optim_baseline.h [C++ API Parity] LBFGS optimizer step() update and added closure to the Optimizer step() function (#34564) 2020-03-17 22:27:24 -07:00
optim_baseline.py [C++ API Parity] LBFGS optimizer step() update and added closure to the Optimizer step() function (#34564) 2020-03-17 22:27:24 -07:00
optim.cpp Use std::abs instead of abs in lbfgs.cpp (#35974) 2020-04-04 09:37:21 -07:00
ordered_dict.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
parallel.cpp Fix bugs in torch::tensor constructor (#28523) 2019-10-31 12:53:06 -07:00
README.md Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
rnn.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
sequential.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
serialize.cpp [C++ API Parity] [Optimizers] Merged Optimizer and LossClosureOptimizer (#34957) 2020-03-26 19:53:02 -07:00
static.cpp Re-organize C++ API torch::nn folder structure (#26262) 2019-09-17 10:07:29 -07:00
support.cpp Use default dtype for torch::tensor(floating_point_values) and torch::tensor(empty braced-init-list) when dtype is not specified (#29632) 2019-11-13 15:17:11 -08:00
support.h C++ tensor indexing: more indexing tests (#30427) 2020-02-28 22:07:41 -08:00
tensor_cuda.cpp Fix MagmaInitializesCorrectly_CUDA by using an invertible matrix (#32547) 2020-01-25 20:00:54 -08:00
tensor_indexing.cpp Back out "[pytorch][PR] indexing: throw exception for masks with dtype=uint8" (#36013) 2020-04-03 21:40:02 -07:00
tensor_options_cuda.cpp Deprecate tensor.type() (#30281) 2019-12-05 10:55:34 -08:00
tensor_options.cpp Deprecate tensor.type() (#30281) 2019-12-05 10:55:34 -08:00
tensor.cpp Bug fixes: torch::tensor(floating-point values) -> default dtype, and torch::tensor(integer values) ->at::kLong (#32367) 2020-02-01 15:00:07 -08:00
torch_include.cpp Relax set_num_threads restriction in parallel native case (#27947) 2019-10-16 21:53:36 -07:00

C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_CUDA) { }

To make it runnable only on platforms with at least two CUDA devices, suffix it with _MultiCUDA instead of _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_MultiCUDA) { }

There is logic in main.cpp that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
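
In rough outline (a simplified sketch, not the exact contents of main.cpp), that filtering can be done by appending GoogleTest negative patterns before the tests run:

// Simplified sketch of main.cpp-style device filtering (assumes the
// default "*" filter; real code would merge with user-supplied filters).
#include <string>
#include <gtest/gtest.h>
#include <torch/torch.h>

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);

  std::string filter = ::testing::GTEST_FLAG(filter);
  if (!torch::cuda::is_available()) {
    filter += "-*_CUDA:*_MultiCUDA";  // no GPU: skip all CUDA tests
  } else if (torch::cuda::device_count() < 2) {
    filter += "-*_MultiCUDA";  // single GPU: skip multi-GPU tests only
  }
  ::testing::GTEST_FLAG(filter) = filter;

  return RUN_ALL_TESTS();
}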

Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

$ python tools/download_mnist.py -d test/cpp/api/mnist

The required paths will be referenced as test/cpp/api/mnist/... in the test code, so you must run the integration tests from the PyTorch root folder.
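
For reference, the tests consume that path roughly like this (a minimal sketch using the C++ frontend's built-in MNIST dataset, not the exact code from integration.cpp):

// Minimal sketch: load the downloaded MNIST data via the C++ frontend.
// The relative path only resolves when run from the PyTorch root folder,
// which is why the tests must be launched from there.
#include <torch/torch.h>

int main() {
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Stack<>());
  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);
  for (auto& batch : *loader) {
    // batch.data: [N, 1, 28, 28] images; batch.target: [N] labels.
    break;
  }
}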