pytorch/test/cpp/api
Rishub Tamirisa f3b8638074 Adding nn.ZeroPad1d and nn.ZeroPad3d (#96295)
Fixes #95796

### Implementation
Adds the Python implementation of `nn.ZeroPad1d` and `nn.ZeroPad3d` in `torch/nn/modules/padding.py`.

Adds the C++ implementation of `nn::ZeroPad1d` and `nn::ZeroPad3d` in the following three files, refactored with templates following the same pattern as `nn::ConstantPad`'s implementation:
- `torch/csrc/api/include/torch/nn/modules/padding.h`
- `torch/csrc/api/include/torch/nn/options/padding.h`
- `torch/csrc/api/src/nn/modules/padding.cpp`

Also added relevant definitions in `torch/nn/modules/__init__.py`.
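
For context, a minimal usage sketch of the new modules through the C++ frontend, assuming the same options-based construction as the existing `nn::ZeroPad2d` (the shapes in the comments are illustrative):

```cpp
#include <torch/torch.h>

int main() {
  // ZeroPad1d pads only the last dimension; ZeroPad3d pads the last three.
  torch::nn::ZeroPad1d pad1(
      torch::nn::ZeroPad1dOptions({/*left=*/1, /*right=*/2}));
  auto y1 = pad1(torch::randn({1, 3, 5}));        // -> [1, 3, 8]

  // A single value expands to symmetric padding on every side.
  torch::nn::ZeroPad3d pad3(torch::nn::ZeroPad3dOptions(2));
  auto y3 = pad3(torch::randn({1, 1, 4, 4, 4}));  // -> [1, 1, 8, 8, 8]
}
```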
### Testing
Adds the following tests:
- C++ tests of similar length and structure to the existing `ConstantPad` and `ZeroPad2d` tests in `test/cpp/api/modules.cpp` (see the sketch below)
- C++ API parity tests in `torch/testing/_internal/common_nn.py`
- module init tests in `test/test_module_init.py`

Also added relevant definitions in `test/cpp_api_parity/parity-tracker.md`.
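
For reference, the new `modules.cpp` tests mirror the structure of the existing `ZeroPad2d` case. A minimal sketch of such a test follows; the test name and tensor values are illustrative, not copied from the actual file:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Sketch only: mirrors the shape of the ZeroPad2d test in modules.cpp.
TEST(ModulesTest, ZeroPad1dSketch) {
  torch::nn::ZeroPad1d m(
      torch::nn::ZeroPad1dOptions({/*left=*/1, /*right=*/2}));
  auto input = torch::ones({1, 1, 3});
  auto output = m(input);
  // Padding (1, 2) turns [1, 1, 1] into [0, 1, 1, 1, 0, 0].
  auto expected = torch::tensor({{{0., 1., 1., 1., 0., 0.}}}, torch::kFloat);
  ASSERT_EQ(output.sizes(), std::vector<int64_t>({1, 1, 6}));
  ASSERT_TRUE(torch::allclose(output, expected));
}
```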

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96295
Approved by: https://github.com/soulitzer
2023-03-10 03:51:41 +00:00
any.cpp
autograd.cpp
CMakeLists.txt
dataloader.cpp set -Wsuggest-override for builds (#89852) 2022-12-19 22:08:47 +00:00
dispatch.cpp
enum.cpp
expanding-array.cpp
fft.cpp
functional.cpp
grad_mode.cpp
inference_mode.cpp
init_baseline.h
init_baseline.py
init.cpp
integration.cpp
jit.cpp
memory.cpp
meta_tensor.cpp
misc.cpp
module.cpp [nn] zero_grad() set_to_none default True (#92731) 2023-01-26 01:04:28 +00:00
moduledict.cpp [fix] nn c++ : segfault in modulelist and moduledict (#93074) 2023-01-27 12:20:19 +00:00
modulelist.cpp [fix] nn c++ : segfault in modulelist and moduledict (#93074) 2023-01-27 12:20:19 +00:00
modules.cpp Adding nn.ZeroPad1d and nn.ZeroPad3d (#96295) 2023-03-10 03:51:41 +00:00
namespace.cpp
nested.cpp
nn_utils.cpp Changing the use from ASSERT_EQ to ASSERT_FLOAT_EQ on nn_utils test. (#83693) 2022-11-15 04:10:52 +00:00
operations.cpp
optim_baseline.h
optim_baseline.py
optim.cpp [nn] add set_to_none flag for C++ optim endpoint (#92989) 2023-01-26 04:16:52 +00:00
ordered_dict.cpp
parallel_benchmark.cpp
parallel.cpp
parameterdict.cpp
parameterlist.cpp
README.md
rnn.cpp
sequential.cpp
serialize.cpp Error on ZeroTensor serialization (#88803) 2022-11-11 08:51:29 +00:00
special.cpp
static.cpp
support.cpp
support.h
tensor_cuda.cpp
tensor_flatten.cpp
tensor_indexing.cpp
tensor_options_cuda.cpp
tensor_options.cpp
tensor.cpp [autograd] disable backward/grad for complex scalar output (#92753) 2023-02-23 11:38:27 +00:00
torch_include.cpp
transformer.cpp

# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with `_CUDA`, e.g.

    TEST(MyTestSuite, MyTestCase_CUDA) { }

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

    TEST(MyTestSuite, MyTestCase_MultiCUDA) { }

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
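
A simplified sketch of what that filtering can look like; the real `main.cpp` may differ in detail, this only illustrates the negative-filter idea:

```cpp
#include <gtest/gtest.h>
#include <torch/cuda.h>

#include <string>

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);

  // Start from whatever filter the user passed via --gtest_filter.
  std::string filter = ::testing::GTEST_FLAG(filter);
  if (!torch::cuda::is_available()) {
    // No CUDA devices: exclude both single- and multi-GPU tests.
    filter += ":-*_CUDA:*_MultiCUDA";
  } else if (torch::cuda::device_count() < 2) {
    // Only one device: exclude the multi-GPU tests.
    filter += ":-*_MultiCUDA";
  }
  ::testing::GTEST_FLAG(filter) = filter;

  return RUN_ALL_TESTS();
}
```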

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

    $ python tools/download_mnist.py -d test/cpp/api/mnist

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.
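
For orientation, a hedged sketch of how a test can consume the downloaded data through the C++ data API; the dataset path matches the download command above, and the batching details are illustrative:

```cpp
#include <torch/torch.h>

#include <iostream>

int main() {
  // Reads the files downloaded by tools/download_mnist.py, relative to the
  // PyTorch root folder.
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Stack<>());
  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);

  for (auto& batch : *loader) {
    // batch.data: [64, 1, 28, 28], batch.target: [64]
    // (the last batch may be smaller).
    std::cout << batch.data.sizes() << std::endl;
    break;
  }
}
```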