# JIT C++ Tests

## How to add a new test
First, create a new test file. Test files should be placed in this
directory, with a name that starts with `test_`, like `test_foo.cpp`.

Here is an example test file you can copy-paste:
```cpp
#include <test/cpp/jit/test_base.h>

// Tests go in torch::jit
namespace torch {
namespace jit {

// 1. Test cases are void() functions.
// 2. They start with the prefix `test`.
void testCaseOne() {
  // ...
}

void testCaseTwo() {
  // ...
}

} // namespace jit
} // namespace torch
```
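Inside a test body you typically check results with the `ASSERT_*` helpers pulled in via `test_base.h`; whether they forward to gtest assertions or to a plain assert depends on how the tests are built. Below is a minimal, hypothetical sketch, assuming `ASSERT_EQ` and `ASSERT_TRUE` are available from that header:

```cpp
#include <test/cpp/jit/test_base.h>

namespace torch {
namespace jit {

// Hypothetical example body: compute something small and verify it with the
// ASSERT_* helpers, which fail the test on mismatch.
void testCaseOne() {
  const int expected = 4;
  const int actual = 2 + 2;
  ASSERT_EQ(expected, actual);
  ASSERT_TRUE(actual > 0);
}

} // namespace jit
} // namespace torch
```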
Then, register your test in `tests.h`:

```cpp
// Add to TH_FORALL_TESTS_CUDA instead for CUDA-requiring tests
#define TH_FORALL_TESTS(_) \
  _(ADFormulas)            \
  _(Attributes)            \
  ...
  _(CaseOne)  // note that the `test` prefix is omitted.
  _(CaseTwo)
```
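The list above is a classic X-macro: each entry is expanded once per consumer of the list. As a rough, hypothetical sketch of how such a list can be turned into gtest cases (the real expansion sites live in this directory, e.g. `gtest.cpp`, and may differ in detail):

```cpp
#include <gtest/gtest.h>

// Hypothetical illustration of the X-macro pattern, not the exact wiring
// used in this repo.
#define TH_FORALL_TESTS(_) \
  _(CaseOne)               \
  _(CaseTwo)

// Trivial bodies standing in for the ones defined in your test_*.cpp file.
void testCaseOne() { /* ... */ }
void testCaseTwo() { /* ... */ }

// Expand the list once for the gtest runner: wrap each function in a TEST case.
#define TH_GTEST_CASE(name) \
  TEST(JitTest, name) { test##name(); }
TH_FORALL_TESTS(TH_GTEST_CASE)
#undef TH_GTEST_CASE
```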
We glob all the test files together in `CMakeLists.txt` so that you don't
have to edit it every time you add a test. Unfortunately, this means that in
order to get the build to pick up your new test file, you need to re-run
cmake:

```bash
python setup.py build --cmake
```
## Why do we have two different test runners?

We have two different ways of running our cpp tests:

- With `gtest`, from a standalone binary.
- With Python, from `TestJit.test_cpp` and `TestJit.test_cpp_cuda` (in `test/test_jit.py`).

We want both because we need to test things from a pure-C++ environment and with all of our various Python patch-points enabled.
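As a rough illustration of how one registration list can serve the Python-driven path as well, here is a minimal, hypothetical sketch assuming a single `runJITCPPTests()`-style entry point that a Python binding could call; the actual binding and function names in this repo may differ:

```cpp
// Hypothetical sketch: the same X-macro list expands into one function that
// runs every test in-process. A Python binding invoking this entry point lets
// the tests run with the Python patch-points active.
#define TH_FORALL_TESTS(_) \
  _(CaseOne)               \
  _(CaseTwo)

void testCaseOne() { /* ... */ }
void testCaseTwo() { /* ... */ }

void runJITCPPTests() {
#define TH_RUN(name) test##name();
  TH_FORALL_TESTS(TH_RUN)
#undef TH_RUN
}
```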
## How do I run the tests?

The following commands assume you are in the PyTorch root.

- With `gtest`:

  ```bash
  # (re)build the test binary
  ninja build/bin/test_jit
  # run
  build/bin/test_jit --gtest_filter='glob_style_filter*'
  ```

- With Python:

  ```bash
  python test/test_jit.py TestJit.test_cpp TestJit.test_cpp_cuda
  ```