pytorch/test/cpp/api
Brian Hirsh 439930c81b adding a beta parameter to the smooth_l1 loss fn (#44433)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44433

Not entirely sure why, but changing the type of beta from `float` to `double` in autocast_mode.cpp and FunctionsManual.h fixes my compiler errors; the build then fails at link time instead.

Fixed some type errors and updated the fn signature in a few more files.

Removed my usage of Scalar, making beta a double everywhere instead.
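
For context, beta controls where the loss switches from a quadratic to a linear penalty; here is a minimal reference sketch of that math (not the code touched by this PR), with beta = 1.0 reproducing the previous behavior:

#include <torch/torch.h>

// Reference-only sketch: smooth L1 applies a quadratic penalty while
// |input - target| < beta and a linear one past that point.
torch::Tensor smooth_l1_reference(
    const torch::Tensor& input,
    const torch::Tensor& target,
    double beta = 1.0) {
  auto diff = torch::abs(input - target);
  auto loss = torch::where(
      diff < beta,
      0.5 * diff * diff / beta,
      diff - 0.5 * beta);
  return loss.mean();
}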

Test Plan: Imported from OSS

Reviewed By: mrshenli

Differential Revision: D23636720

Pulled By: bdhirsh

fbshipit-source-id: caea2a1f8dd72b3b5fd1d72dd886b2fcd690af6d
2020-09-25 16:36:28 -07:00
any.cpp [C++ API] Allow skipping default arguments in module's forward method when module is used in Sequential (#33027) 2020-02-17 20:38:02 -08:00
autograd.cpp Don't materialize output grads (#41821) 2020-08-11 04:27:07 -07:00
CMakeLists.txt C++ API TransformerEncoderLayer (#42633) 2020-08-07 11:49:42 -07:00
dataloader.cpp Fix typos (#30606) 2019-12-02 20:17:42 -08:00
dispatch.cpp [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#39023) 2020-05-27 14:07:26 -07:00
enum.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
expanding-array.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
fft.cpp Add one dimensional FFTs to torch.fft namespace (#43011) 2020-09-19 23:32:22 -07:00
functional.cpp adding a beta parameter to the smooth_l1 loss fn (#44433) 2020-09-25 16:36:28 -07:00
init_baseline.h
init_baseline.py
init.cpp [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#39023) 2020-05-27 14:07:26 -07:00
integration.cpp [C++ API] Remove deprecated torch::nn::BatchNorm / FeatureDropout / modules_ordered_dict and torch::nn::init::Nonlinearity / FanMode (#34508) 2020-03-12 10:09:58 -07:00
jit.cpp Remove attempToRecoverType (#26767) 2019-10-16 11:07:13 -07:00
memory.cpp
misc.cpp Throw error if torch.set_deterministic(True) is called with nondeterministic CuBLAS config (#41377) 2020-08-05 12:42:24 -07:00
module.cpp [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984) 2020-04-23 01:08:00 -07:00
modulelist.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
modules.cpp Implemented torch::nn::Unflatten in libtorch (#42613) 2020-08-14 15:32:13 -07:00
namespace.cpp Remove using namespace torch::autograd from header files (#34423) 2020-03-09 10:31:21 -07:00
nn_utils.cpp [WIP] Fix cpp grad accessor API (#40887) 2020-07-16 09:11:12 -07:00
operations.cpp [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#43953) 2020-09-01 21:48:28 -07:00
optim_baseline.h Add AdamW to C++ frontend (#40009) 2020-06-18 15:28:12 -07:00
optim_baseline.py Add AdamW to C++ frontend (#40009) 2020-06-18 15:28:12 -07:00
optim.cpp [WIP] Fix cpp grad accessor API (#40887) 2020-07-16 09:11:12 -07:00
ordered_dict.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
parallel_benchmark.cpp [aten] Pass std::function<> to thread_pool by value, instead of const ref. (#37681) 2020-05-05 08:41:38 -07:00
parallel.cpp [PyTorch] Modify data_parallel to work with small tensors (#37704) 2020-05-04 11:06:42 -07:00
parameterdict.cpp Python/C++ API Parity: Add impl and tests for ParameterDict (#40654) 2020-06-29 08:50:44 -07:00
parameterlist.cpp Impl for ParameterList (#41259) 2020-07-12 20:50:31 -07:00
README.md
rnn.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
sequential.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
serialize.cpp Add AdamW to C++ frontend (#40009) 2020-06-18 15:28:12 -07:00
static.cpp
support.cpp Use default dtype for torch::tensor(floating_point_values) and torch::tensor(empty braced-init-list) when dtype is not specified (#29632) 2019-11-13 15:17:11 -08:00
support.h Changes warnings generated in cpp to show point of Python origination (#36052) 2020-04-25 21:18:58 -07:00
tensor_cuda.cpp Fix MagmaInitializesCorrectly_CUDA by using an invertible matrix (#32547) 2020-01-25 20:00:54 -08:00
tensor_indexing.cpp [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984) 2020-04-23 01:08:00 -07:00
tensor_options_cuda.cpp Deprecate tensor.type() (#30281) 2019-12-05 10:55:34 -08:00
tensor_options.cpp Deprecate tensor.type() (#30281) 2019-12-05 10:55:34 -08:00
tensor.cpp Change to.dtype_layout to c10-full (#41169) 2020-07-10 16:04:34 -07:00
torch_include.cpp Relax set_num_threads restriction in parallel native case (#27947) 2019-10-16 21:53:36 -07:00
transformer.cpp C++ APIs Transformer NN Module Top Layer (#44333) 2020-09-11 08:25:27 -07:00

C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.
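
For example, a minimal test in this style looks like the following (the suite name and assertion are illustrative, not taken from an existing file):

#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example test: any TEST registered this way is picked up
// by the GoogleTest runner built from this folder.
TEST(ExampleSuite, TensorAddition) {
  torch::Tensor a = torch::ones({2, 2});
  torch::Tensor b = torch::ones({2, 2});
  ASSERT_TRUE(torch::allclose(a + b, torch::full({2, 2}, 2.0)));
}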

CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_CUDA) { }

To make it runnable only on platforms with at least two CUDA devices, suffix it with _MultiCUDA instead of _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
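
The suffix only controls filtering; the test body still has to place its work on a CUDA device explicitly. A sketch of what such tests might look like (the names and bodies are illustrative):

#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical _CUDA test: runs only when at least one CUDA device exists.
TEST(MyTestSuite, TensorOnGpu_CUDA) {
  torch::Tensor t = torch::ones({2, 2}, torch::device(torch::kCUDA));
  ASSERT_TRUE(t.is_cuda());
}

// Hypothetical _MultiCUDA test: runs only when at least two devices exist.
TEST(MyTestSuite, TensorOnSecondGpu_MultiCUDA) {
  torch::Tensor t =
      torch::ones({2, 2}, torch::device(torch::Device(torch::kCUDA, 1)));
  ASSERT_EQ(t.device().index(), 1);
}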

There is logic in main.cpp that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
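
The gist of that logic, as an illustrative sketch rather than the actual main.cpp, is to assemble a negative filter string and hand it to GoogleTest before the tests run:

#include <string>

#include <gtest/gtest.h>
#include <torch/torch.h>

// Illustrative sketch (not the real main.cpp): skip _CUDA and _MultiCUDA
// tests on machines that cannot run them by installing a negative filter.
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);

  const auto device_count = torch::cuda::device_count();
  std::string filter;  // an empty filter means "run everything"
  if (device_count == 0) {
    filter = "-*_CUDA:*_MultiCUDA";  // the leading '-' marks exclusions
  } else if (device_count < 2) {
    filter = "-*_MultiCUDA";
  }
  if (!filter.empty()) {
    ::testing::GTEST_FLAG(filter) = filter;
  }
  return RUN_ALL_TESTS();
}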

Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

$ python tools/download_mnist.py -d test/cpp/api/mnist

The required paths will be referenced as test/cpp/api/mnist/... in the test code, so you must run the integration tests from the PyTorch root folder.
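
As an illustration of why the relative path matters, a test might construct the C++ frontend's built-in MNIST dataset like this (the normalization constants and batch size are placeholders, not values from the actual tests):

#include <utility>

#include <torch/torch.h>

// Hedged sketch: torch::data::datasets::MNIST reads the files downloaded
// above via a path relative to the working directory, which is why the
// tests must be launched from the PyTorch root folder.
void load_mnist_example() {
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
      .map(torch::data::transforms::Normalize<>(0.1307, 0.3081))
      .map(torch::data::transforms::Stack<>());
  auto loader = torch::data::make_data_loader(
      std::move(dataset), /*batch_size=*/64);
  for (const auto& batch : *loader) {
    // batch.data has shape [64, 1, 28, 28]; batch.target has shape [64].
    (void)batch;
  }
}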