# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ frontend. They use the [GoogleTest](https://github.com/google/googletest) test framework.
## CUDA Tests

To make a test runnable only on platforms with CUDA, you should suffix your
test with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```
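For instance, a minimal `_CUDA` test might assert that a tensor actually
lands on the GPU. This is a sketch, not a test from this suite; the suite
and test names are illustrative:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Illustrative sketch: only runs when the `_CUDA` filter admits it.
TEST(TensorTest, AllocatesOnGPU_CUDA) {
  torch::Tensor t = torch::ones({2, 2}, torch::kCUDA);
  ASSERT_TRUE(t.device().is_cuda());
}
```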
To make it runnable only on platforms with at least two CUDA devices, suffix
it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```
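A hypothetical `_MultiCUDA` test body might move a tensor between two
devices (again, the names here are illustrative):

```cpp
// Illustrative sketch: assumes the test runner only admits this test
// when at least two CUDA devices are present.
TEST(TensorTest, CopiesAcrossDevices_MultiCUDA) {
  torch::Tensor a = torch::ones({2, 2}, torch::Device(torch::kCUDA, 0));
  torch::Tensor b = a.to(torch::Device(torch::kCUDA, 1));
  ASSERT_EQ(b.device().index(), 1);
}
```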
There is logic in `main.cpp` that detects the availability and number of CUDA
devices and supplies the appropriate negative filters to GoogleTest.
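A rough sketch of the kind of filtering `main.cpp` applies (the real
implementation may differ; the only APIs assumed here are
`torch::cuda::device_count()` and GoogleTest's `filter` flag):

```cpp
#include <gtest/gtest.h>
#include <torch/cuda.h>
#include <string>

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // Build a negative filter ("-pattern") that excludes tests whose
  // hardware requirements this machine cannot meet.
  std::string filter;
  const auto device_count = torch::cuda::device_count();
  if (device_count == 0) {
    filter = "-*_CUDA:*_MultiCUDA";  // no GPUs: skip all CUDA tests
  } else if (device_count < 2) {
    filter = "-*_MultiCUDA";         // one GPU: skip multi-GPU tests only
  }
  if (!filter.empty()) {
    ::testing::GTEST_FLAG(filter) = filter;
  }
  return RUN_ALL_TESTS();
}
```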
## Integration Tests

Integration tests use the MNIST dataset. You must download it by running the
following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```
The required paths will be referenced as `test/cpp/api/mnist/...` in the test
code, so you must run the integration tests from the PyTorch root folder.
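As an illustration of why the working directory matters, a test might open
the dataset through that same relative path. This is a sketch with a
hypothetical test name, using the C++ frontend's `torch::data::datasets::MNIST`:

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

TEST(IntegrationTest, LoadsMNIST) {
  // This relative path only resolves when the test binary is launched
  // from the PyTorch root folder.
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist");
  ASSERT_EQ(dataset.size().value(), 60000);  // MNIST training set size
}
```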