# C++ API Tests

In this folder live the tests for PyTorch's C++ API (formerly known as autogradpp). They use the Catch2 test framework.

## CUDA Tests

We handle CUDA tests by separating them into their own `TEST_CASE` (e.g. `optim.cpp` contains both an `optim` and an `optim_cuda` test case) and giving them the `[cuda]` tag. Inside `main.cpp`, we then detect at runtime whether CUDA is available. If it is not, we disable the CUDA tests by appending `~[cuda]` to the test specification; the `~` negates the tag, excluding every test case that carries it.

One annoying aspect is that Catch only allows filtering on test cases, not sections. Ideally, one could have a section like `LSTM` inside the `RNN` test case and give just that section a `[cuda]` tag so it runs only when CUDA is available. Instead, we have to create a whole separate `RNN_cuda` test case and put all of the CUDA sections in there.

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```