# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ frontend. They use the [GoogleTest](https://github.com/google/googletest) test framework.
## CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your
test with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```
To make it runnable only on platforms with at least two CUDA devices, suffix
it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```
There is logic in `main.cpp` that detects the availability and number of CUDA
devices and supplies the appropriate negative filters to GoogleTest.
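As a rough sketch of how such filtering can be wired up (the structure below is illustrative, not the actual contents of `main.cpp`; it assumes the `torch::cuda::is_available()` and `torch::cuda::device_count()` APIs from `torch/cuda.h` and GoogleTest's `GTEST_FLAG(filter)`):

```cpp
#include <gtest/gtest.h>
#include <torch/cuda.h>

#include <string>

int main(int argc, char* argv[]) {
  ::testing::InitGoogleTest(&argc, argv);
  // Start from whatever filter was passed on the command line.
  std::string filter = ::testing::GTEST_FLAG(filter);
  if (!torch::cuda::is_available()) {
    // No CUDA device at all: exclude every CUDA test.
    filter += ":-*_CUDA:*_MultiCUDA";
  } else if (torch::cuda::device_count() < 2) {
    // Exactly one device: exclude only the multi-GPU tests.
    filter += ":-*_MultiCUDA";
  }
  ::testing::GTEST_FLAG(filter) = filter;
  return RUN_ALL_TESTS();
}
```

Appending negative patterns (everything after the `-`) tells GoogleTest to skip any test whose full name matches them, which is what makes the `_CUDA` and `_MultiCUDA` suffixes effective.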
## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the
following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```
The required paths will be referenced as `test/cpp/api/mnist/...` in the test
code, so you must run the integration tests from the PyTorch root folder.
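For illustration only (this snippet is not taken from the test code), the C++ frontend's `torch::data::datasets::MNIST` dataset resolves its root path relative to the current working directory, which is why starting from the PyTorch root matters:

```cpp
#include <torch/torch.h>

#include <iostream>

int main() {
  // A relative root path: this only resolves if the process was started
  // from the PyTorch root folder, mirroring what the integration tests expect.
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist");
  std::cout << "MNIST training set size: " << dataset.size().value() << '\n';
  return 0;
}
```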