pytorch/test/cpp/api
Kurt Mohler 3fe4718d16 Add padding_idx argument to EmbeddingBag (#49237)
Summary:
This PR adds a `padding_idx` parameter to `nn.EmbeddingBag` and `nn.functional.embedding_bag`. As with the `padding_idx` argument of `nn.Embedding`, any input index equal to `padding_idx` is ignored, so the corresponding embedding is not included in the bag's reduction.
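
For context, here is a minimal C++ frontend sketch of the new argument, assuming the options builder mirrors the Python signature (i.e. `EmbeddingBagOptions(...).padding_idx(...)`); it is an illustration, not the exact test code added in `modules.cpp`/`functional.cpp`:

```cpp
#include <torch/torch.h>

// Illustrative sketch: indices equal to padding_idx are skipped, so they
// contribute nothing to each bag's sum/mean/max reduction.
void embedding_bag_padding_idx_example() {
  auto bag = torch::nn::EmbeddingBag(
      torch::nn::EmbeddingBagOptions(/*num_embeddings=*/10, /*embedding_dim=*/3)
          .mode(torch::kSum)
          .padding_idx(0));  // assumed option added by this PR
  // Each row is one bag; index 0 is treated as padding and ignored.
  auto input = torch::tensor({{1, 2, 0}, {3, 0, 0}}, torch::kLong);
  auto output = bag->forward(input);  // shape: [2, 3]
}
```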

This PR does not add `padding_idx` support for the quantized `EmbeddingBag`, nor for ONNX export with opset 10/11 (opset 9 is supported). In those cases, an error is thrown if `padding_idx` is provided.

Fixes https://github.com/pytorch/pytorch/issues/3194

Pull Request resolved: https://github.com/pytorch/pytorch/pull/49237

Reviewed By: walterddr, VitalyFedyunin

Differential Revision: D26948258

Pulled By: jbschlosser

fbshipit-source-id: 3ca672f7e768941f3261ab405fc7597c97ce3dfc
2021-04-14 09:38:01 -07:00

| File | Last commit message | Date |
| --- | --- | --- |
| any.cpp | [C++ API] Allow skipping default arguments in module's forward method when module is used in Sequential (#33027) | 2020-02-17 20:38:02 -08:00 |
| autograd.cpp | Fix autograd when inputs contains tensors without materialized grad_fn (#51940) | 2021-02-11 09:22:15 -08:00 |
| CMakeLists.txt | Implement public API InferenceMode and its error handling (#55008) | 2021-03-31 10:48:00 -07:00 |
| dataloader.cpp | Lint trailing newlines (#54737) | 2021-03-30 13:09:52 -07:00 |
| dispatch.cpp | [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#39023) | 2020-05-27 14:07:26 -07:00 |
| enum.cpp | [C++ API] RNN / GRU / LSTM layer refactoring (#34322) | 2020-03-15 17:48:29 -07:00 |
| expanding-array.cpp | Change C++ API test files to only include torch/torch.h (#27067) | 2019-10-10 09:46:29 -07:00 |
| fft.cpp | Remove deprecated spectral ops from torch namespace (#48594) | 2020-12-05 04:12:32 -08:00 |
| functional.cpp | Add padding_idx argument to EmbeddingBag (#49237) | 2021-04-14 09:38:01 -07:00 |
| grad_mode.cpp | [WIP]Relax some limitations of InferenceMode. (#54403) | 2021-04-09 14:40:37 -07:00 |
| inference_mode.cpp | [WIP]Relax some limitations of InferenceMode. (#54403) | 2021-04-09 14:40:37 -07:00 |
| init_baseline.h | Lint trailing newlines (#54737) | 2021-03-30 13:09:52 -07:00 |
| init_baseline.py | Kaiming Initialization (#14718) | 2019-02-15 14:58:22 -08:00 |
| init.cpp | [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#39023) | 2020-05-27 14:07:26 -07:00 |
| integration.cpp | [C++ API] Remove deprecated torch::nn::BatchNorm / FeatureDropout / modules_ordered_dict and torch::nn::init::Nonlinearity / FanMode (#34508) | 2020-03-12 10:09:58 -07:00 |
| jit.cpp | Remove attempToRecoverType (#26767) | 2019-10-16 11:07:13 -07:00 |
| memory.cpp | Hide c10::optional and nullopt in torch namespace (#12927) | 2018-10-26 00:08:04 -07:00 |
| misc.cpp | codegen: Resolve overload ambiguities created by defaulted arguments (#49348) | 2021-01-04 11:59:16 -08:00 |
| module.cpp | [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984) | 2020-04-23 01:08:00 -07:00 |
| moduledict.cpp | Implement C++ ModuleDict (#47707) | 2020-11-19 08:07:51 -08:00 |
| modulelist.cpp | [C++ API] RNN / GRU / LSTM layer refactoring (#34322) | 2020-03-15 17:48:29 -07:00 |
| modules.cpp | Add padding_idx argument to EmbeddingBag (#49237) | 2021-04-14 09:38:01 -07:00 |
| namespace.cpp | Remove using namespace torch::autograd from header files (#34423) | 2020-03-09 10:31:21 -07:00 |
| nn_utils.cpp | Flip clip_grad_norm default for error_if_nonfinite to false (#55169) | 2021-04-02 12:25:32 -07:00 |
| operations.cpp | [Codemod][GleanFbcode] Remove dead includes in caffe2/test (#43953) | 2020-09-01 21:48:28 -07:00 |
| optim_baseline.h | Add AdamW to C++ frontend (#40009) | 2020-06-18 15:28:12 -07:00 |
| optim_baseline.py | Remove legacy constructor calls from pytorch codebase. (#54142) | 2021-04-11 15:45:17 -07:00 |
| optim.cpp | Adding learning rate schedulers to C++ API (#52268) | 2021-03-10 23:09:51 -08:00 |
| ordered_dict.cpp | Change C++ API test files to only include torch/torch.h (#27067) | 2019-10-10 09:46:29 -07:00 |
| parallel_benchmark.cpp | [aten] Pass std::function<> to thread_pool by value, instead of const ref. (#37681) | 2020-05-05 08:41:38 -07:00 |
| parallel.cpp | [PyTorch] Modify data_parallel to work with small tensors (#37704) | 2020-05-04 11:06:42 -07:00 |
| parameterdict.cpp | Python/C++ API Parity: Add impl and tests for ParameterDict (#40654) | 2020-06-29 08:50:44 -07:00 |
| parameterlist.cpp | Impl for ParameterList (#41259) | 2020-07-12 20:50:31 -07:00 |
| README.md | Rewrite C++ API tests in gtest (#11953) | 2018-09-21 21:28:16 -07:00 |
| rnn.cpp | Adding support for CuDNN-based LSTM with projections (#47725) | 2020-12-16 11:27:02 -08:00 |
| sequential.cpp | [C++ API] RNN / GRU / LSTM layer refactoring (#34322) | 2020-03-15 17:48:29 -07:00 |
| serialize.cpp | Modernize for-loops (#50912) | 2021-01-22 10:53:24 -08:00 |
| special.cpp | [special] add torch.special namespace (#52296) | 2021-03-04 00:04:36 -08:00 |
| static.cpp | Re-organize C++ API torch::nn folder structure (#26262) | 2019-09-17 10:07:29 -07:00 |
| support.cpp | Use default dtype for torch::tensor(floating_point_values) and torch::tensor(empty braced-init-list) when dtype is not specified (#29632) | 2019-11-13 15:17:11 -08:00 |
| support.h | Implement public API InferenceMode and its error handling (#55008) | 2021-03-31 10:48:00 -07:00 |
| tensor_cuda.cpp | Fix MagmaInitializesCorrectly_CUDA by using an invertible matrix (#32547) | 2020-01-25 20:00:54 -08:00 |
| tensor_flatten.cpp | fix unflatten_dense_tensor when there is empty tensor inside (#50321) | 2021-01-23 12:14:34 -08:00 |
| tensor_indexing.cpp | Making ops c10-full: list of optional tensors (#49138) | 2021-01-04 05:04:02 -08:00 |
| tensor_options_cuda.cpp | Deprecate tensor.type() (#30281) | 2019-12-05 10:55:34 -08:00 |
| tensor_options.cpp | [PyTorch] Narrow Device to 2 bytes by narrowing DeviceType and DeviceIndex (#47023) | 2020-11-18 19:39:40 -08:00 |
| tensor.cpp | Change to.dtype_layout to c10-full (#41169) | 2020-07-10 16:04:34 -07:00 |
| torch_include.cpp | Relax set_num_threads restriction in parallel native case (#27947) | 2019-10-16 21:53:36 -07:00 |
| transformer.cpp | C++ APIs Transformer NN Module Top Layer (#44333) | 2020-09-11 08:25:27 -07:00 |

# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

## CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with `_CUDA`, e.g.

TEST(MyTestSuite, MyTestCase_CUDA) { }

To make it runnable only on platforms with at least two CUDA devices, suffix it with `_MultiCUDA` instead of `_CUDA`, e.g.

TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
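
For illustration, a hedged sketch of what a filled-in multi-GPU test body might look like (the suite, test, and variable names here are made up; only the `_MultiCUDA` suffix matters for the device-count filtering):

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example: round-trips a tensor across two CUDA devices and
// checks that the values are preserved. Only runs when >= 2 GPUs are visible.
TEST(MyTestSuite, TensorRoundTrip_MultiCUDA) {
  auto cpu = torch::arange(6, torch::kFloat).reshape({2, 3});
  auto on_gpu0 = cpu.to(torch::Device(torch::kCUDA, 0));
  auto on_gpu1 = on_gpu0.to(torch::Device(torch::kCUDA, 1));
  ASSERT_TRUE(on_gpu1.to(torch::kCPU).allclose(cpu));
}
```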

There is logic in `main.cpp` that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
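
The exact logic lives in `main.cpp`; the following is only a rough sketch of the idea (not the actual implementation), assuming GoogleTest's negative-filter syntax and `torch::cuda::device_count()`:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Sketch: exclude *_CUDA / *_MultiCUDA tests when too few devices are present.
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  std::string filter = "*";
  const auto device_count = torch::cuda::device_count();
  if (device_count == 0) {
    filter += "-*_CUDA:*_MultiCUDA";  // no GPU: skip all CUDA tests
  } else if (device_count < 2) {
    filter += "-*_MultiCUDA";         // single GPU: skip multi-GPU tests
  }
  ::testing::GTEST_FLAG(filter) = filter;
  return RUN_ALL_TESTS();
}
```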

## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

$ python tools/download_mnist.py -d test/cpp/api/mnist

The required paths will be referenced as `test/cpp/api/mnist/...` in the test code, so you must run the integration tests from the PyTorch root folder.
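
To illustrate why the working directory matters, an integration test might construct its dataset roughly like this (a sketch assuming the C++ frontend's built-in `torch::data::datasets::MNIST` reader; not the exact test code):

```cpp
#include <torch/torch.h>

// Sketch: build a data loader over the MNIST files downloaded above. The
// relative path only resolves when the test binary runs from the PyTorch root.
void load_mnist_for_integration_test() {
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Normalize<>(0.1307, 0.3081))
                     .map(torch::data::transforms::Stack<>());
  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);
  for (auto& batch : *loader) {
    // batch.data: [64, 1, 28, 28], batch.target: [64]
    (void)batch;  // silence unused-variable warnings in this sketch
    break;
  }
}
```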