pytorch/test/cpp/api
Edward Yang e35418b3be New implementations of DeviceGuard, StreamGuard and MultiStreamGuard (with CUDA specializations) (#13342)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13342

This PR introduces a few new concepts:

- DeviceGuardImplInterface, and implementations for CPU and CUDA, which
  provide a generic interface for querying and setting device and stream
  state, without requiring a direct dependency on the code in question.
- InlineDeviceGuard, a general template for generating both specialized
  and dynamically dispatched device guard implementations.  Dynamic
  dispatch is done by specializing it on a VirtualGuardImpl.
- A device-independent DeviceGuard class, which can be used even from
  CPU code. It uses the aforementioned dynamic dispatch.
- A CUDA-specialized CUDAGuard class, which avoids dynamic dispatch but
  can only be used from CUDA code.
- StreamGuard, which is the same as above, but for streams rather than
  devices.
- Optional variants of all the aforementioned guards, which are a no-op
  if no device/stream is specified.
- CUDAMultiStreamGuard, specifically for the case when we want to set
  a stream on every device.

There are some subtle semantic changes, which have been thoroughly documented
in the class definition.
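
For orientation, here is a minimal sketch of how the new guards are meant to be used. It is illustrative only: the header paths and exact namespaces are assumptions about this stack's layout, not verbatim from the diff.

#include <ATen/cuda/CUDAGuard.h>   // assumed location of the CUDA guards
#include <c10/core/DeviceGuard.h>  // assumed location of c10::DeviceGuard

void guard_example(c10::Device device, at::cuda::CUDAStream stream) {
  // Device-independent, dynamically dispatched; usable even from CPU code.
  c10::DeviceGuard device_guard(device);

  // CUDA-specialized; no dynamic dispatch, but usable only from CUDA code.
  at::cuda::CUDAGuard cuda_guard(device.index());

  // Sets both the current stream and the current device.
  at::cuda::CUDAStreamGuard stream_guard(stream);
}  // destructors restore the previous device/stream state in reverse order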

BC-breaking changes:

- Move constructor/assignment have been removed from all device guard
  implementations.
- In some cases where you previously wrote set_device (or set_stream),
  you must now write reset_device (or reset_stream), because switching
  devices or device types unsets the stream/device saved for the previous
  device. This differs from the previous behavior; see the sketch after
  this list.
- CUDAGuard no longer handles streams, single or multiple. Use
  CUDAStreamGuard or CUDAMultiStreamGuard as appropriate for your use case.
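
Concretely, the reset_device change looks roughly like this (a hedged sketch; reset_device is the only method named above, and the surrounding code is illustrative):

c10::DeviceGuard g(c10::Device(c10::DeviceType::CUDA, 0));
// Previously spelled g.set_device(...). Switching to another device (or
// device type) now takes the explicit reset_device() spelling, signalling
// that the state saved for the old device is unset:
g.reset_device(c10::Device(c10::DeviceType::CUDA, 1));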

Reviewed By: dzhulgakov

Differential Revision: D12849620

fbshipit-source-id: f61956256f0b12be754b3234fcc73c2abc1be04e
2018-11-11 12:11:10 -08:00
any.cpp Remove size() from BatchDataset and templatize IndexType (#12960) 2018-11-05 17:13:09 -08:00
CMakeLists.txt Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
dataloader.cpp Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
expanding-array.cpp Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
integration.cpp Use MNIST dataset in C++ integration test (#13737) 2018-11-09 09:55:02 -08:00
jit.cpp Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
memory.cpp Hide c10::optional and nullopt in torch namespace (#12927) 2018-10-26 00:08:04 -07:00
misc.cpp Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
module.cpp Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
modules.cpp Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
optim_baseline.h Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
optim_baseline.py Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
optim.cpp Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
ordered_dict.cpp Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
parallel.cpp Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
README.md Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
rnn.cpp Use MNIST dataset in C++ integration test (#13737) 2018-11-09 09:55:02 -08:00
sequential.cpp Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
serialize.cpp Replace cursors with OrderedDict (#13427) 2018-11-07 11:10:05 -08:00
static.cpp Rewrite C++ API tests in gtest (#11953) 2018-09-21 21:28:16 -07:00
support.h Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
tensor_cuda.cpp Add tensor.to(options) (#13146) 2018-10-29 16:26:06 -07:00
tensor_options_cuda.cpp New implementations of DeviceGuard, StreamGuard and MultiStreamGuard (with CUDA specializations) (#13342) 2018-11-11 12:11:10 -08:00
tensor_options.cpp Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00
tensor.cpp Use torch:: instead of at:: in all C++ APIs (#13523) 2018-11-06 14:32:25 -08:00

C++ Frontend Tests

In this folder live the tests for PyTorch's C++ Frontend. They use the GoogleTest test framework.

CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your test with _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_CUDA) { }

To make a test runnable only on platforms with at least two CUDA devices, suffix it with _MultiCUDA instead of _CUDA, e.g.

TEST(MyTestSuite, MyTestCase_MultiCUDA) { }

There is logic in main.cpp that detects the availability and number of CUDA devices and supplies the appropriate negative filters to GoogleTest.
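
That logic amounts to something like the following (a hypothetical snippet, not the literal contents of main.cpp):

#include <gtest/gtest.h>
#include <torch/cuda.h>
#include <string>

void apply_cuda_test_filters() {
  const auto device_count = torch::cuda::device_count();
  std::string filter;
  if (device_count == 0) {
    filter = "-*_CUDA:*_MultiCUDA";  // no GPU: skip both kinds of CUDA tests
  } else if (device_count < 2) {
    filter = "-*_MultiCUDA";         // one GPU: skip only multi-GPU tests
  }
  if (!filter.empty()) {
    ::testing::GTEST_FLAG(filter) = filter;  // negative GoogleTest filter
  }
}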

Integration Tests

Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:

$ python tools/download_mnist.py -d test/cpp/api/mnist

The required paths will be referenced as test/cpp/api/mnist/... in the test code, so you must run the integration tests from the PyTorch root folder.
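
For reference, an integration test consumes the dataset along these lines (a minimal sketch using the C++ frontend data API; the exact code in integration.cpp may differ):

#include <torch/torch.h>

void mnist_example() {
  // Relative path: this only resolves when run from the PyTorch root.
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Stack<>());
  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);
  for (auto& batch : *loader) {
    // batch.data: [N, 1, 28, 28] image batch; batch.target: the N labels
  }
}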