Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61791
methods from forward

During inlining we attach an InlinedCallStack to the nodes being inlined. In the process we attach module information as well, so that when a CallMethod node is inlined we know which class instance and class type the method belongs to. However, a CallMethod can also call a method of the same object that owns the graph, e.g.:

```
def forward(self, input):
    x = input + 10
    return self.forward_impl_(x, input)
```

Here forward_impl_ is a method defined on the same class in which forward is defined. The existing module hierarchy annotation mislabels this as an unknown instance, since the method is not associated with the output of a GetAttr node (it would be if we had called self.conv.forward_impl_, for example). This PR reconciles that by introducing the placeholder name "SELF" for the module instance, indicating that you can traverse the InlinedCallStack backwards to find the first node whose name is not SELF, which is the name of the owning object, e.g.:

TOP(ResNet)::forward.SELF(ResNet)::_forward_impl.layer1(Sequential)::forward.0(BasicBlock)::forward.conv1(Conv2d)::forward.SELF(Conv2d)::_conv_forward

Test Plan: Add test

Imported from OSS

Reviewed By: larryliu0820

Differential Revision: D29745443

fbshipit-source-id: 1525e41df53913341c4c36a56772454782a0ba93
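As a rough illustration of the lookup rule described above, here is a minimal sketch (not the PyTorch implementation; the flattened string format and the helper names `splitFrames` and `owningInstance` are assumptions for illustration only) that walks such a module-hierarchy string backwards and returns the instance name of the first frame that is not the SELF placeholder:

```cpp
// Minimal sketch: recover the owning module instance from a flattened
// module-hierarchy string by skipping SELF placeholder frames.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Split "a.b.c" into frames like "conv1(Conv2d)::forward".
std::vector<std::string> splitFrames(const std::string& hierarchy) {
  std::vector<std::string> frames;
  std::stringstream ss(hierarchy);
  std::string frame;
  while (std::getline(ss, frame, '.')) {
    frames.push_back(frame);
  }
  return frames;
}

// Walk the frames backwards and return the instance name of the first frame
// whose name is not "SELF" (i.e. the object whose method was inlined).
std::string owningInstance(const std::string& hierarchy) {
  const auto frames = splitFrames(hierarchy);
  for (auto it = frames.rbegin(); it != frames.rend(); ++it) {
    const auto name = it->substr(0, it->find('('));
    if (name != "SELF") {
      return name;
    }
  }
  return "TOP";
}

int main() {
  const std::string h =
      "TOP(ResNet)::forward.SELF(ResNet)::_forward_impl."
      "layer1(Sequential)::forward.0(BasicBlock)::forward."
      "conv1(Conv2d)::forward.SELF(Conv2d)::_conv_forward";
  std::cout << owningInstance(h) << "\n"; // prints "conv1"
}
```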
| Name |
|---|
| __init__.py |
| CMakeLists.txt |
| README.md |
| script_module_v4.ptl |
| script_module_v5.ptl |
| script_module_v6.ptl |
| test_alias_analysis.cpp |
| test_argument_spec.cpp |
| test_autodiff.cpp |
| test_backend_compiler_lib.cpp |
| test_backend_compiler_preprocess.cpp |
| test_backend_lib.cpp |
| test_backend.cpp |
| test_class_import.cpp |
| test_class_parser.cpp |
| test_class_type.cpp |
| test_cleanup_passes.cpp |
| test_code_template.cpp |
| test_concat_opt.cpp |
| test_constant_pooling.cpp |
| test_create_autodiff_subgraphs.cpp |
| test_cs_debug_info_serialization.cpp |
| test_custom_class_registrations.cpp |
| test_custom_class_registrations.h |
| test_custom_class.cpp |
| test_custom_operators.cpp |
| test_dce.cpp |
| test_fuser.cpp |
| test_gpu.cpp |
| test_graph_executor.cpp |
| test_inliner.cpp |
| test_interface.cpp |
| test_interpreter_async.pt |
| test_interpreter.cpp |
| test_ir.cpp |
| test_irparser.cpp |
| test_jit_logging_levels.cpp |
| test_jit_type.cpp |
| test_lite_interpreter.cpp |
| test_lite_trainer.cpp |
| test_memory_dag.cpp |
| test_misc.cpp |
| test_mobile_type_parser.cpp |
| test_module_api.cpp |
| test_peephole_optimize.cpp |
| test_qualified_name.cpp |
| test_save_load.cpp |
| test_schema_matching.cpp |
| test_script_profile.cpp |
| test_subgraph_matcher.cpp |
| test_subgraph_rewriter.cpp |
| test_subgraph_utils.cpp |
| test_utils.cpp |
| test_utils.h |
| tests_setup.py |
| torch_python_test.cpp |
# JIT C++ Tests
## Adding a new test
First, create a new test file. Test files should be placed in this
directory, with a name that starts with `test_`, like `test_foo.cpp`.
In general, each test file should contain a single test suite.
Add your test file to the `JIT_TEST_SRCS` list in `test/cpp/jit/CMakeLists.txt`.
A test file may look like:
```cpp
#include <gtest/gtest.h>

using namespace ::torch::jit;

TEST(FooTest, BarBaz) {
  // ...
}

// Appending '_CUDA' to the test case name will automatically filter it out if
// CUDA is not compiled.
TEST(FooTest, NeedsAGpu_CUDA) {
  // ...
}

// Similarly, if only one GPU is detected, tests with `_MultiCUDA` at the end
// will not be run.
TEST(FooTest, NeedsMultipleGpus_MultiCUDA) {
  // ...
}
```
## Building and running the tests
The following commands assume you are in the PyTorch root directory.
```bash
# ... build PyTorch from source, e.g.
python setup.py develop

# (re)build just the binary
ninja -C build bin/test_jit

# run tests
build/bin/test_jit --gtest_filter='glob_style_filter*'
```