pytorch/test/cpp/api
soulitzer 516f3198d6 Fix retains grad behavior after in-place (#79996)
See this doc: https://docs.google.com/document/d/1KiRdnoj6B4cI3yl017hTbCqcOGO1gWIpUf20sldipHM/edit#

Two issues are fixed: (1) hooks in general and (2) retains_grad hooks. Python hooks, which rely on a different mechanism, are not addressed here:
- Hooks in cpp in general
  - (fixed) new hooks registered to a newer version of the tensor no longer get applied to the grad_fn
    associated with the older version of the tensor (the version current when the first hook was registered)
  - (unchanged) hooks registered to the older version of the tensor remain active on the grad_fn
    associated with that older version
- Retains grad hooks
  - (fixed) now get moved to the latest grad_fn. NB: to the user, retains_grad is not considered a hook
    and is not expected to behave like one; hooks are properties of the grad_fn, whereas
    retains-grad-ness is a property of the tensor.
- (not in this PR) Python hooks
  - (will fix) same issue as the cpp hooks, where new hooks get applied to the grad_fn associated
    with the older version of the tensor
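The semantics above can be illustrated with a toy model (this is not PyTorch's actual implementation; `ToyTensor` and its members are hypothetical names for illustration only): ordinary hooks stick to the grad_fn that was current when they were registered, while the retains_grad hook follows the tensor to its latest grad_fn after an in-place update.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Toy sketch of the fixed hook semantics. Each tensor version stands in
// for the grad_fn that was current at that version.
struct ToyTensor {
  int version = 0;                               // bumped by in-place ops
  std::map<int, std::vector<std::string>> hooks; // per-version (per-grad_fn) hooks
  bool retains_grad = false;

  // Ordinary hooks attach to the grad_fn current at registration time
  // and stay there (issues (fixed)/(unchanged) above).
  void register_hook(const std::string& name) {
    hooks[version].push_back(name);
  }
  void retain_grad() { retains_grad = true; }
  // An in-place op creates a new grad_fn; the retains_grad hook (a
  // property of the tensor, not the grad_fn) follows it implicitly.
  void in_place_update() { ++version; }

  // Hooks that would fire when backprop reaches the grad_fn of version v.
  std::vector<std::string> hooks_on(int v) const {
    auto it = hooks.find(v);
    std::vector<std::string> out =
        (it == hooks.end()) ? std::vector<std::string>{} : it->second;
    if (retains_grad && v == version) {
      out.push_back("retains_grad"); // moved to the latest grad_fn
    }
    return out;
  }
};
```

Registering a hook, doing an in-place update, then registering another hook leaves the first hook on the old grad_fn, while the new hook and the retains_grad hook land on the latest one.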
Pull Request resolved: https://github.com/pytorch/pytorch/pull/79996
Approved by: https://github.com/albanD
2022-07-08 19:13:28 +00:00
any.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
autograd.cpp Fix retains grad behavior after in-place (#79996) 2022-07-08 19:13:28 +00:00
CMakeLists.txt Make Wunused-local-typedef a hard error (#77918) 2022-06-09 18:14:01 +00:00
dataloader.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
dispatch.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
enum.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
expanding-array.cpp use irange for loops 10 (#69394) 2021-12-09 09:49:34 -08:00
fft.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
functional.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
grad_mode.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
imethod.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
inference_mode.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
init_baseline.h [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
init_baseline.py Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default 2021-08-12 11:45:01 -07:00
init.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
integration.cpp use irange for loops 10 (#69394) 2021-12-09 09:49:34 -08:00
jit.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
memory.cpp
meta_tensor.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
misc.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
module.cpp Fix //:module_test Conversion_MultiCUDA (#79926) 2022-06-21 23:32:18 +00:00
moduledict.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
modulelist.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
modules.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
namespace.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
nn_utils.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
operations.cpp use irange for loops 5 (#66744) 2021-10-18 21:59:50 -07:00
optim_baseline.h [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
optim_baseline.py Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default 2021-08-12 11:45:01 -07:00
optim.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
ordered_dict.cpp
parallel_benchmark.cpp
parallel.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
parameterdict.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
parameterlist.cpp use irange for loops 5 (#66744) 2021-10-18 21:59:50 -07:00
README.md
rnn.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
sequential.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
serialize.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
special.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
static.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
support.cpp
support.h [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
tensor_cuda.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
tensor_flatten.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
tensor_indexing.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
tensor_options_cuda.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
tensor_options.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
tensor.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00
torch_include.cpp
transformer.cpp [lint] autoformat test/cpp and torch/csrc 2022-06-11 21:11:16 +00:00

C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_CUDA) { }

To make it runnable only on platforms with at least two CUDA devices, suffix it with _MultiCUDA instead of _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_MultiCUDA) { }

There is logic in main.cpp that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
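A minimal sketch of how such a negative filter could be assembled from the device count (the function name and exact filter strings here are hypothetical; the real logic lives in main.cpp):

```cpp
#include <string>

// Hypothetical helper: build a GoogleTest filter from the number of
// visible CUDA devices. Patterns after '-' are negative (excluded);
// ':' separates alternatives.
std::string build_gtest_filter(int cuda_device_count) {
  if (cuda_device_count == 0) {
    return "-*_CUDA:*_MultiCUDA"; // exclude all CUDA tests
  }
  if (cuda_device_count == 1) {
    return "-*_MultiCUDA";        // exclude only multi-GPU tests
  }
  return "*";                     // run everything
}
```

The resulting string would be passed to GoogleTest via `--gtest_filter` (or `::testing::GTEST_FLAG(filter)`) before `RUN_ALL_TESTS()`.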

Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

$ python tools/download_mnist.py -d test/cpp/api/mnist

The required paths will be referenced as test/cpp/api/mnist/... in the test code, so you must run the integration tests from the PyTorch root folder.