# C++ Frontend Tests

In this folder live the tests for PyTorch's C++ frontend. They use the GoogleTest test framework.
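As a minimal sketch of what such a test looks like (the suite and test names here are illustrative, not tests that actually exist in this folder):

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical example: an ordinary GoogleTest case exercising the C++ API.
TEST(TensorTest, AddsTwoTensors) {
  torch::Tensor a = torch::ones({2, 2});
  torch::Tensor b = torch::ones({2, 2});
  torch::Tensor c = a + b;
  ASSERT_TRUE(torch::allclose(c, torch::full({2, 2}, 2.0)));
}
```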
## CUDA Tests
To make a test runnable only on platforms with CUDA, you should suffix your
test with `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_CUDA) { }
```
To make it runnable only on platforms with at least two CUDA devices, suffix
it with `_MultiCUDA` instead of `_CUDA`, e.g.

```cpp
TEST(MyTestSuite, MyTestCase_MultiCUDA) { }
```
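For instance, a CUDA-gated test body might look like the following hedged sketch (again, the names are made up):

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical _CUDA-suffixed test: the suffix keeps it from running on
// machines without a GPU (see the filtering logic described below).
TEST(TensorTest, ToDevice_CUDA) {
  torch::Tensor cpu_tensor = torch::ones({2, 2});
  torch::Tensor gpu_tensor = cpu_tensor.to(torch::kCUDA);
  ASSERT_TRUE(gpu_tensor.device().is_cuda());
}
```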
There is logic in `main.cpp` that detects the availability and number of CUDA
devices and supplies the appropriate negative filters to GoogleTest.
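A simplified sketch of that idea (this is an illustration, not the actual `main.cpp`, and it ignores any user-supplied `--gtest_filter`) could look like:

```cpp
#include <gtest/gtest.h>
#include <torch/cuda.h>

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // A leading '-' in a GoogleTest filter means "run everything except
  // the following colon-separated patterns".
  if (!torch::cuda::is_available()) {
    ::testing::GTEST_FLAG(filter) = "-*_CUDA:*_MultiCUDA";
  } else if (torch::cuda::device_count() < 2) {
    ::testing::GTEST_FLAG(filter) = "-*_MultiCUDA";
  }
  return RUN_ALL_TESTS();
}
```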
## Integration Tests
Integration tests use the MNIST dataset. You must download it by running the
following command from the PyTorch root folder:

```sh
$ python tools/download_mnist.py -d test/cpp/api/mnist
```
The required paths will be referenced as `test/cpp/api/mnist/...` in the test
code, so you must run the integration tests from the PyTorch root folder.
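To see why the relative path matters, here is a hedged sketch of how an integration test might construct the dataset (the test name `MNISTLoads_Example` is hypothetical):

```cpp
#include <gtest/gtest.h>
#include <torch/torch.h>

// Hypothetical test: the relative path below only resolves when the test
// binary is executed from the PyTorch root folder.
TEST(IntegrationTest, MNISTLoads_Example) {
  auto dataset = torch::data::datasets::MNIST("test/cpp/api/mnist")
                     .map(torch::data::transforms::Stack<>());
  auto loader =
      torch::data::make_data_loader(std::move(dataset), /*batch_size=*/64);
  for (const auto& batch : *loader) {
    // Each batch is an Example<> whose data tensor stacks 64 images.
    ASSERT_EQ(batch.data.size(0), 64);
    break;  // one batch is enough for this sketch
  }
}
```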