pytorch/test/cpp/api
Ryan Spring 4f8b986e28 Implement Tanh Gelu Approximation (#61439)
Summary:
1. Implements https://github.com/pytorch/pytorch/issues/39853
2. Adds an approximate string flag to Gelu
3. Enables Tanh Gelu approximation
4. Adds double backward support for Gelu
5. Enables Tanh Gelu in NvFuser

```
import torch

def normcdf(x):
    # Standard normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 0.5 * (1.0 + torch.erf(x * 0.7071067811865476))

def gelu(x, approximate: str = 'none'):
    if approximate == 'tanh':
        # sqrt(2/pi) = 0.7978845608028654
        return 0.5 * x * (1.0 + torch.tanh(0.7978845608028654 * (x + 0.044715 * torch.pow(x, 3.0))))
    else:
        # exact GELU: x * Phi(x)
        return x * normcdf(x)
```
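
For orientation, here is a minimal C++ sketch (not part of the PR) of how the new flag and the double-backward support might be exercised from this test suite, assuming the ATen-level overload torch::gelu(self, approximate) introduced by this change; the tolerances are illustrative only.

```cpp
#include <torch/torch.h>

int main() {
  torch::Tensor x = torch::randn({16, 64}, torch::requires_grad());

  // Exact (erf-based) GELU vs. the tanh approximation; they agree only
  // approximately, so compare with a loose tolerance.
  torch::Tensor exact = torch::gelu(x);
  torch::Tensor approx = torch::gelu(x, "tanh");
  TORCH_CHECK(torch::allclose(exact, approx, /*rtol=*/1e-2, /*atol=*/1e-2));

  // Double backward (grad of grad), which this PR enables for Gelu.
  auto g = torch::autograd::grad(
      {approx.sum()}, {x}, /*grad_outputs=*/{},
      /*retain_graph=*/true, /*create_graph=*/true);
  auto gg = torch::autograd::grad({g[0].sum()}, {x});
  TORCH_CHECK(gg[0].defined());
  return 0;
}
```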

Linking XLA PR - https://github.com/pytorch/xla/pull/3039

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61439

Reviewed By: VitalyFedyunin

Differential Revision: D33894937

Pulled By: jbschlosser

fbshipit-source-id: b65e8fb6ea66168af8f34f45ed50e92737a33851
(cherry picked from commit 6e986f91a9)
2022-02-14 03:40:32 +00:00
any.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
autograd.cpp [easy][PyTorch] Use at::native::is_nonzero (#67195) 2021-10-26 12:40:32 -07:00
CMakeLists.txt Compile without -Wno-unused-variable (take 2) (#66041) 2021-10-04 20:39:39 -07:00
dataloader.cpp use irange for loops 10 (#69394) 2021-12-09 09:49:34 -08:00
dispatch.cpp use irange for loops 10 (#69394) 2021-12-09 09:49:34 -08:00
enum.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
expanding-array.cpp use irange for loops 10 (#69394) 2021-12-09 09:49:34 -08:00
fft.cpp use irange for loops 10 (#69394) 2021-12-09 09:49:34 -08:00
functional.cpp Implement Tanh Gelu Approximation (#61439) 2022-02-14 03:40:32 +00:00
grad_mode.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
imethod.cpp [deploy][1/n] Make deploy code conform to PyTorch style. (#65861) 2021-09-30 22:59:47 -07:00
inference_mode.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
init_baseline.h
init_baseline.py Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default 2021-08-12 11:45:01 -07:00
init.cpp use irange for loops 10 (#69394) 2021-12-09 09:49:34 -08:00
integration.cpp use irange for loops 10 (#69394) 2021-12-09 09:49:34 -08:00
jit.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
memory.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
meta_tensor.cpp
misc.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
module.cpp use irange for loops 10 (#69394) 2021-12-09 09:49:34 -08:00
moduledict.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
modulelist.cpp use irange for loops 10 (#69394) 2021-12-09 09:49:34 -08:00
modules.cpp Implement Tanh Gelu Approximation (#61439) 2022-02-14 03:40:32 +00:00
namespace.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
nn_utils.cpp use irange for loops 10 (#69394) 2021-12-09 09:49:34 -08:00
operations.cpp use irange for loops 5 (#66744) 2021-10-18 21:59:50 -07:00
optim_baseline.h
optim_baseline.py Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default 2021-08-12 11:45:01 -07:00
optim.cpp use irange for loops 5 (#66744) 2021-10-18 21:59:50 -07:00
ordered_dict.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
parallel_benchmark.cpp
parallel.cpp use irange for loops 5 (#66744) 2021-10-18 21:59:50 -07:00
parameterdict.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
parameterlist.cpp use irange for loops 5 (#66744) 2021-10-18 21:59:50 -07:00
README.md
rnn.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
sequential.cpp use irange for loops 5 (#66744) 2021-10-18 21:59:50 -07:00
serialize.cpp use irange for loops 5 (#66744) 2021-10-18 21:59:50 -07:00
special.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
static.cpp use irange for loops 5 (#66744) 2021-10-18 21:59:50 -07:00
support.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
support.h
tensor_cuda.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
tensor_flatten.cpp
tensor_indexing.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
tensor_options_cuda.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
tensor_options.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
tensor.cpp use irange for loops 5 (#66744) 2021-10-18 21:59:50 -07:00
torch_include.cpp Disable avoid-non-const-global-variables lint check (#62008) 2021-07-22 18:04:40 -07:00
transformer.cpp Callable activation function support for Transformer modules (C++) (#62342) 2021-08-02 08:06:39 -07:00

C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_CUDA) { }

To make a test runnable only on platforms with at least two CUDA devices, suffix it with _MultiCUDA instead of _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
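
For illustration, here are hypothetical test bodies (not taken from the files in this directory) showing how the suffix convention typically wraps ordinary GoogleTest cases that touch one or two CUDA devices:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Skipped by main.cpp unless at least one CUDA device is present.
TEST(MyTestSuite, MyTestCase_CUDA) {
  torch::Tensor x = torch::ones({2, 2}, torch::kCUDA);
  ASSERT_TRUE(x.is_cuda());
  ASSERT_FLOAT_EQ(x.sum().item<float>(), 4.0f);
}

// Skipped by main.cpp unless at least two CUDA devices are present.
TEST(MyTestSuite, MyTestCase_MultiCUDA) {
  torch::Tensor a = torch::randn({4}, torch::Device(torch::kCUDA, 0));
  torch::Tensor b = a.to(torch::Device(torch::kCUDA, 1));
  ASSERT_EQ(b.device().index(), 1);
  ASSERT_TRUE(torch::allclose(a.cpu(), b.cpu()));
}
```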

There is logic in main.cpp that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
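
A rough sketch of that idea (not the actual main.cpp) would translate the detected device count into GoogleTest negative filters before running the tests:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

#include <string>

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);

  // Turn the detected CUDA device count into negative test filters.
  const auto num_devices = torch::cuda::device_count();
  std::string filter;
  if (num_devices == 0) {
    filter = "*-*_CUDA:*_MultiCUDA";  // no GPU: skip all CUDA-suffixed tests
  } else if (num_devices == 1) {
    filter = "*-*_MultiCUDA";         // one GPU: skip multi-GPU tests only
  }
  if (!filter.empty()) {
    ::testing::GTEST_FLAG(filter) = filter;
  }
  return RUN_ALL_TESTS();
}
```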

Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

$ python tools/download_mnist.py -d test/cpp/api/mnist

The dataset paths are referenced as test/cpp/api/mnist/... in the test code, so you must run the integration tests from the PyTorch root folder.
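
For reference, here is a hypothetical snippet showing how a test might consume the downloaded files through the C++ data API; torch::data::datasets::MNIST reads the raw files in that directory, and the normalization constants below are the usual MNIST mean/stddev, not values mandated by these tests.

```cpp
#include <torch/torch.h>

int main() {
  // Reads the raw MNIST files downloaded into test/cpp/api/mnist.
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Normalize<>(0.1307, 0.3081))
                     .map(torch::data::transforms::Stack<>());

  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);

  for (const auto& batch : *loader) {
    // batch.data is an [N, 1, 28, 28] image tensor and batch.target the
    // matching [N] label tensor.
    TORCH_CHECK(batch.data.size(0) == batch.target.size(0));
    break;
  }
  return 0;
}
```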