pytorch/test/cpp/api
Mike Ruberry ccfce9d4a9 Adds fft namespace (#41911)
This PR creates a new namespace, torch.fft (torch::fft) and puts a single function, fft, in it. This function is analogous to is a simplified version of NumPy's [numpy.fft.fft](https://numpy.org/doc/1.18/reference/generated/numpy.fft.fft.html?highlight=fft#numpy.fft.fft) that accepts no optional arguments. It is intended to demonstrate how to add and document functions in the namespace, and is not intended to deprecate the existing torch.fft function.

Adding this namespace was complicated by the existence of the torch.fft function in Python. Creating a torch.fft Python module makes this name ambiguous: does it refer to the function or to the module? If the JIT didn't exist, a solution to this problem would have been to make torch.fft refer to a callable class that mimicked both the function and the module. The JIT, however, cannot understand this pattern. As a workaround, the torch.fft.fft function can only be accessed in Python after explicitly importing torch.fft:

```
import torch.fft  # explicit import is required; bare `torch.fft` is ambiguous (see above)

t = torch.randn(128, dtype=torch.cdouble)  # complex-valued input tensor
torch.fft.fft(t)                           # calls the new torch.fft.fft function
```

See https://github.com/pytorch/pytorch/issues/42175 for future work. Another possible future PR is to get the JIT to understand torch.fft as a callable class so it need not be imported explicitly to be used.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41911

Reviewed By: glaringlee

Differential Revision: D22941894

Pulled By: mruberry

fbshipit-source-id: c8e0b44cbe90d21e998ca3832cf3a533f28dbe8d
2020-08-06 00:20:50 -07:00
any.cpp [C++ API] Allow skipping default arguments in module's forward method when module is used in Sequential (#33027) 2020-02-17 20:38:02 -08:00
autograd.cpp retain undefined tensors in backward pass (#41490) 2020-07-17 12:42:50 -07:00
CMakeLists.txt Adds fft namespace (#41911) 2020-08-06 00:20:50 -07:00
dataloader.cpp Fix typos (#30606) 2019-12-02 20:17:42 -08:00
dispatch.cpp [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#39023) 2020-05-27 14:07:26 -07:00
enum.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
expanding-array.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
fft.cpp Adds fft namespace (#41911) 2020-08-06 00:20:50 -07:00
functional.cpp Support for XNNPACK max pooling operator. (#35354) 2020-04-03 22:53:15 -07:00
init_baseline.h
init_baseline.py
init.cpp [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#39023) 2020-05-27 14:07:26 -07:00
integration.cpp [C++ API] Remove deprecated torch::nn::BatchNorm / FeatureDropout / modules_ordered_dict and torch::nn::init::Nonlinearity / FanMode (#34508) 2020-03-12 10:09:58 -07:00
jit.cpp Remove attempToRecoverType (#26767) 2019-10-16 11:07:13 -07:00
memory.cpp
misc.cpp Throw error if torch.set_deterministic(True) is called with nondeterministic CuBLAS config (#41377) 2020-08-05 12:42:24 -07:00
module.cpp [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984) 2020-04-23 01:08:00 -07:00
modulelist.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
modules.cpp register parameters correctly in c++ MultiheadAttention (#42037) 2020-07-27 13:58:11 -07:00
namespace.cpp Remove using namespace torch::autograd from header files (#34423) 2020-03-09 10:31:21 -07:00
nn_utils.cpp [WIP] Fix cpp grad accessor API (#40887) 2020-07-16 09:11:12 -07:00
operations.cpp Add test to cross function 2020-07-29 22:48:52 -07:00
optim_baseline.h Add AdamW to C++ frontend (#40009) 2020-06-18 15:28:12 -07:00
optim_baseline.py Add AdamW to C++ frontend (#40009) 2020-06-18 15:28:12 -07:00
optim.cpp [WIP] Fix cpp grad accessor API (#40887) 2020-07-16 09:11:12 -07:00
ordered_dict.cpp Change C++ API test files to only include torch/torch.h (#27067) 2019-10-10 09:46:29 -07:00
parallel_benchmark.cpp [aten] Pass std::function<> to thread_pool by value, instead of const ref. (#37681) 2020-05-05 08:41:38 -07:00
parallel.cpp [PyTorch] Modify data_parallel to work with small tensors (#37704) 2020-05-04 11:06:42 -07:00
parameterdict.cpp Python/C++ API Parity: Add impl and tests for ParameterDict (#40654) 2020-06-29 08:50:44 -07:00
parameterlist.cpp Impl for ParameterList (#41259) 2020-07-12 20:50:31 -07:00
README.md
rnn.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
sequential.cpp [C++ API] RNN / GRU / LSTM layer refactoring (#34322) 2020-03-15 17:48:29 -07:00
serialize.cpp Add AdamW to C++ frontend (#40009) 2020-06-18 15:28:12 -07:00
static.cpp
support.cpp Use default dtype for torch::tensor(floating_point_values) and torch::tensor(empty braced-init-list) when dtype is not specified (#29632) 2019-11-13 15:17:11 -08:00
support.h Changes warnings generated in cpp to show point of Python origination (#36052) 2020-04-25 21:18:58 -07:00
tensor_cuda.cpp Fix MagmaInitializesCorrectly_CUDA by using an invertible matrix (#32547) 2020-01-25 20:00:54 -08:00
tensor_indexing.cpp [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984) 2020-04-23 01:08:00 -07:00
tensor_options_cuda.cpp Deprecate tensor.type() (#30281) 2019-12-05 10:55:34 -08:00
tensor_options.cpp Deprecate tensor.type() (#30281) 2019-12-05 10:55:34 -08:00
tensor.cpp Change to.dtype_layout to c10-full (#41169) 2020-07-10 16:04:34 -07:00
torch_include.cpp Relax set_num_threads restriction in parallel native case (#27947) 2019-10-16 21:53:36 -07:00

C++ Frontend Tests

This folder contains the tests for PyTorch's C++ Frontend. They use the GoogleTest framework.

CUDA Tests

To make a test runnable only on platforms with CUDA, suffix its name with `_CUDA`, e.g.

```
TEST(MyTestSuite, MyTestCase_CUDA) { }
```

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

```
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```
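For illustration, here is a hedged sketch of what such tests might look like in practice; the suite name, test bodies, and assertions below are invented for this example and are not taken from the actual test files:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Illustrative only: runs when at least one CUDA device is available.
TEST(MyTestSuite, TensorOnDevice_CUDA) {
  torch::Tensor t = torch::ones({2, 2}, torch::device(torch::kCUDA));
  ASSERT_TRUE(t.is_cuda());
}

// Illustrative only: runs when at least two CUDA devices are available.
TEST(MyTestSuite, CopyAcrossDevices_MultiCUDA) {
  torch::Tensor a = torch::rand({4}, torch::device(torch::Device(torch::kCUDA, 0)));
  torch::Tensor b = a.to(torch::Device(torch::kCUDA, 1));
  ASSERT_EQ(b.device().index(), 1);
}
```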

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
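
The mechanism is roughly the following; this is a minimal sketch of the idea, assuming the default `*` filter, and is not the actual contents of `main.cpp`:

```cpp
#include <string>

#include <gtest/gtest.h>
#include <torch/torch.h>

int main(int argc, char* argv[]) {
  ::testing::InitGoogleTest(&argc, argv);

  // Append negative patterns so GoogleTest skips tests whose CUDA
  // requirements this machine cannot satisfy.
  const auto num_devices = torch::cuda::device_count();
  std::string filter = ::testing::GTEST_FLAG(filter);
  if (num_devices < 1) {
    filter += "-*_CUDA:*_MultiCUDA";  // no GPU: skip all CUDA tests
  } else if (num_devices < 2) {
    filter += "-*_MultiCUDA";         // one GPU: skip multi-GPU tests only
  }
  ::testing::GTEST_FLAG(filter) = filter;

  return RUN_ALL_TESTS();
}
```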

Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

```
$ python tools/download_mnist.py -d test/cpp/api/mnist
```

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.
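
With the dataset in place, the tests can then be run from the root folder. As a hedged example, assuming the default build layout in which the C++ API tests compile into a single `test_api` binary (the binary path and suite name below are assumptions, not taken from this page):

```
$ build/bin/test_api --gtest_filter='IntegrationTest*'
```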