Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 12:21:27 +01:00
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39111

In our present alias analysis, we consider any Value that enters another container as entering the heap, and thus aliasing all other heap values of the same type. There are a number of advantages to this approach:

- It is not too hard to maintain the AliasDb implementation.
- It is much easier from an op schema perspective.
- There are many composite list ops registered internally and externally that would be tricky to register and get right if we did something more complicated.
- It limits the size of the AliasDb, because a container of size 10 only contributes a single memory DAG element instead of 10 elements.

The downside is that we are unable to handle the simple and extremely common case of a list of tensors being used in an ATen op. In an example like:

```
def foo(input):
    x = torch.tensor([1, 2, 3, 4])
    y = [x, x]
    input.add_(1)
    return torch.cat(y)
```

we will consider x to be written to: any write to any wildcard element (an element that enters a tuple, an element that is taken from a list) will mark x as written to. This limits our ability to create a functional subset and fuse graphs; as a result, 4 of the TorchVision classification models could not be functionalized.

Test Plan: Imported from OSS

Reviewed By: SplitInfinity

Differential Revision: D23828003

Pulled By: eellison

fbshipit-source-id: 9109fcb6f2ca20ca897cae71683530285da9d537
Files in this directory:

- `__init__.py`
- `CMakeLists.txt`
- `gtest.cpp`
- `README.md`
- `test_alias_analysis.cpp`
- `test_argument_spec.cpp`
- `test_autodiff.cpp`
- `test_backend.cpp`
- `test_base.cpp`
- `test_base.h`
- `test_class_import.cpp`
- `test_class_parser.cpp`
- `test_class_type.cpp`
- `test_cleanup_passes.cpp`
- `test_code_template.cpp`
- `test_constant_pooling.cpp`
- `test_create_autodiff_subgraphs.cpp`
- `test_custom_class.cpp`
- `test_custom_operators.cpp`
- `test_dce.cpp`
- `test_fuser.cpp`
- `test_gpu.cpp`
- `test_graph_executor.cpp`
- `test_inliner.cpp`
- `test_interface.cpp`
- `test_interpreter.cpp`
- `test_ir.cpp`
- `test_irparser.cpp`
- `test_jit_type.cpp`
- `test_lite_interpreter.cpp`
- `test_lite_trainer.cpp`
- `test_memory_dag.cpp`
- `test_misc.cpp`
- `test_mobile_type_parser.cpp`
- `test_module_api.cpp`
- `test_peephole_optimize.cpp`
- `test_qualified_name.cpp`
- `test_save_load.cpp`
- `test_schema_matching.cpp`
- `test_subgraph_matcher.cpp`
- `test_subgraph_rewriter.cpp`
- `test_subgraph_utils.cpp`
- `test_utils.cpp`
- `test_utils.h`
- `tests_setup.py`
- `tests.h`
- `torch_python_test.cpp`
# JIT C++ Tests

## How to add a new test
First, create a new test file. Test files should be placed in this directory, with a name that starts with `test_`, like `test_foo.cpp`.

Here is an example test file you can copy-paste:
```cpp
#include <test/cpp/jit/test_base.h>

// Tests go in torch::jit
namespace torch {
namespace jit {

// 1. Test cases are void() functions.
// 2. They start with the prefix `test`.
void testCaseOne() {
  // ...
}

void testCaseTwo() {
  // ...
}

} // namespace jit
} // namespace torch
```
Then, register your test in `tests.h`:

```cpp
// Add to TH_FORALL_TESTS_CUDA instead for CUDA-requiring tests
#define TH_FORALL_TESTS(_) \
  _(ADFormulas)            \
  _(Attributes)            \
  ...
  _(CaseOne) // note that the `test` prefix is omitted.
  _(CaseTwo)
```
We glob all the test files together in `CMakeLists.txt` so that you don't have to edit it every time you add a test. Unfortunately, this means that in order to get the build to pick up your new test file, you need to re-run cmake:

```sh
python setup.py build --cmake
```
## Why do we have two different test runners?

We have two different ways of running our cpp tests:

- With `gtest`, from a standalone binary.
- With Python, from `TestJit.test_cpp` and `TestJit.test_cpp_cuda` (in `test/test_jit.py`).

We want both because we need to test things from a pure-C++ environment and with all our various Python patch-points enabled.
## How do I run the tests?

The following commands assume you are in the PyTorch root.

- With `gtest`:

  ```sh
  # (re)build the test binary
  ninja build/bin/test_jit
  # run
  build/bin/test_jit --gtest_filter='glob_style_filter*'
  ```

- With Python:

  ```sh
  python test/test_jit.py TestJit.test_cpp TestJit.test_cpp_cuda
  ```