# C++ API Tests
In this folder live the tests for PyTorch's C++ API (formerly known as autogradpp). They use the Catch2 test framework.
## CUDA Tests
The way we handle CUDA tests is by separating them into a separate `TEST_CASE`
(e.g. we have `optim` and `optim_cuda` test cases in `optim.cpp`) and giving
them the `[cuda]` tag. Then, inside `main.cpp`, we detect at runtime whether
CUDA is available. If it is not, we disable these CUDA tests by appending `~[cuda]`
to the test specification; the `~` negates the tag. A sketch of this pattern follows.
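The snippet below is a minimal sketch of this pattern, not the exact contents of `optim.cpp` or `main.cpp`; it assumes Catch's `Session`/`ConfigData` API and `torch::cuda::is_available()` from the C++ frontend:

```cpp
// Sketch only -- the real files in this folder differ in detail.
#define CATCH_CONFIG_RUNNER // we supply our own main() below
#include <catch.hpp>
#include <torch/torch.h>

// In optim.cpp: CPU and CUDA variants are kept in separate test cases.
TEST_CASE("optim") {
  // CPU-only optimizer tests go here.
}

TEST_CASE("optim_cuda", "[cuda]") {
  // Tests that require a CUDA device go here.
}

// In main.cpp: when no GPU is available, exclude every [cuda]-tagged test.
int main(int argc, char* argv[]) {
  Catch::Session session;
  if (!torch::cuda::is_available()) {
    // "~[cuda]" tells Catch to skip tests carrying the [cuda] tag.
    session.configData().testsOrTags.push_back("~[cuda]");
  }
  return session.run(argc, argv);
}
```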
One annoying aspect is that Catch only allows filtering on test cases and not
sections. Ideally, one could have a `SECTION` like `LSTM` inside the `RNN` test
case and give this section a `[cuda]` tag to only run it when CUDA is
available. Instead, we have to create a whole separate `RNN_cuda` test case and
put all of these CUDA sections in there, as illustrated below.
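A rough illustration of the workaround; the section names and bodies here are placeholders, not the real tests:

```cpp
#include <catch.hpp>

// What we would like (but Catch cannot do) is tagging a single SECTION.
// Tags attach to the whole TEST_CASE, so CUDA-dependent sections are
// moved into their own [cuda]-tagged test case instead.
TEST_CASE("RNN") {
  SECTION("cpu-section") {
    // CPU-only checks (placeholder body).
  }
}

TEST_CASE("RNN_cuda", "[cuda]") {
  SECTION("LSTM") {
    // Checks that require a CUDA device (placeholder body).
  }
}
```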
## Integration Tests
Integration tests use the MNIST dataset. You must download it by running the following command from the PyTorch root folder:
```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```