Following the creation of effect tokens (https://github.com/pytorch/pytorch/pull/120296), we now add support for these tokens in export, because the calling/returning convention has changed. The inputs are now `(tokens, params, buffers, constants, user_inputs)` and the outputs are `(tokens, buffer_mutations, user_mutations, user_outputs)`. The graph looks something like:

```
graph():
    %arg0_1 : [num_users=1] = placeholder[target=arg0_1]
    %attr : [num_users=2] = placeholder[target=attr]
    %arg1_1 : [num_users=2] = placeholder[target=arg1_1]
    %with_effects : [num_users=2] = call_function[target=torch._higher_order_ops.effects.with_effects](args = (%arg0_1, _TorchScriptTesting.takes_foo.default, %attr, %arg1_1), kwargs = {})
    %getitem : [num_users=1] = call_function[target=operator.getitem](args = (%with_effects, 0), kwargs = {})
    %getitem_1 : [num_users=1] = call_function[target=operator.getitem](args = (%with_effects, 1), kwargs = {})
    %with_effects_1 : [num_users=2] = call_function[target=torch._higher_order_ops.effects.with_effects](args = (%getitem, _TorchScriptTesting.takes_foo.default, %attr, %getitem_1), kwargs = {})
    %getitem_2 : [num_users=1] = call_function[target=operator.getitem](args = (%with_effects_1, 0), kwargs = {})
    %getitem_3 : [num_users=1] = call_function[target=operator.getitem](args = (%with_effects_1, 1), kwargs = {})
    %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%arg1_1, %getitem_3), kwargs = {})
    return (getitem_2, add)
```

During unlifting, we first remove the tokens and `with_effects` calls using the `remove_effect_tokens` pass (cc @SherlockNoMad on the pass to remove tokens), so that retracing does not change the calling convention. The graph after unlifting looks something like:

```
graph():
    %attr_1 : [num_users=2] = get_attr[target=attr]
    %arg1_1 : [num_users=2] = placeholder[target=arg1_1]
    %takes_foo_default_1 : [num_users=1] = call_function[target=torch.ops._TorchScriptTesting.takes_foo.default](args = (%attr_1, %arg1_1), kwargs = {})
    %takes_foo_default : [num_users=1] = call_function[target=torch.ops._TorchScriptTesting.takes_foo.default](args = (%attr_1, %takes_foo_default_1), kwargs = {})
    %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%arg1_1, %takes_foo_default), kwargs = {})
    return (add,)
```

Serialization support will be added in a follow-up. Note: for now, tokens only affect custom ops that take in ScriptObjects, not ScriptObject methods.

Differential Revision: [D54639390](https://our.internmc.facebook.com/intern/diff/D54639390)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121424

Approved by: https://github.com/tugsbayasgalan
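As a rough illustration (not the real API), the two graphs above behave like the following self-contained Python sketch, where `with_effects` and `takes_foo` are simplified stand-ins for `torch._higher_order_ops.effects.with_effects` and `torch.ops._TorchScriptTesting.takes_foo.default`:

```python
# Rough sketch of the token-threading convention above. Assumption: both
# functions below are simplified stand-ins; the real takes_foo is a custom op
# that takes a ScriptObject, and the real with_effects is a higher-order op.

def takes_foo(attr, x):
    # Stand-in for the side-effectful custom op.
    return x + 1

def with_effects(token, op, *args):
    # Consume the incoming token, run the op, and return (new_token, result).
    # Chaining the tokens orders the side-effectful calls in the graph.
    return object(), op(*args)

def lifted_graph(token, attr, arg1):
    # Mirrors the exported graph: the token produced by the first call feeds
    # the second, so the two takes_foo calls cannot be reordered.
    token, y = with_effects(token, takes_foo, attr, arg1)
    token, z = with_effects(token, takes_foo, attr, y)
    return token, arg1 + z          # corresponds to (getitem_2, add)

def unlifted_graph(attr, arg1):
    # After the remove_effect_tokens pass: the same computation, with no
    # tokens in the calling convention.
    y = takes_foo(attr, arg1)
    z = takes_foo(attr, y)
    return arg1 + z                 # corresponds to (add,)
```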
# JIT C++ Tests

## Adding a new test
First, create a new test file. Test files should be placed in this
directory, with a name that starts with `test_`, like `test_foo.cpp`.
In general, each test file should contain a single test suite.

Add your test file to the `JIT_TEST_SRCS` list in `test/cpp/jit/CMakeLists.txt`, as in the sketch below.
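For illustration, registering a hypothetical `test_foo.cpp` might look like the following (an abridged sketch; the actual `JIT_TEST_SRCS` list contains many more entries):

```cmake
# test/cpp/jit/CMakeLists.txt (abridged sketch; test_foo.cpp is hypothetical)
set(JIT_TEST_SRCS
  ${JIT_TEST_ROOT}/test_alias_analysis.cpp
  # ... existing entries ...
  ${JIT_TEST_ROOT}/test_foo.cpp  # newly added test file
)
```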
A test file may look like:

```cpp
#include <gtest/gtest.h>

using namespace ::torch::jit;

TEST(FooTest, BarBaz) {
  // ...
}

// Appending '_CUDA' to the test case name will automatically filter it out if
// PyTorch is not compiled with CUDA.
TEST(FooTest, NeedsAGpu_CUDA) {
  // ...
}

// Similarly, if only one GPU is detected, tests with `_MultiCUDA` at the end
// will not be run.
TEST(FooTest, NeedsMultipleGpus_MultiCUDA) {
  // ...
}
```
## Building and running the tests

The following commands assume you are in the PyTorch root directory.

```bash
# ... build PyTorch from source, e.g.
python setup.py develop
# (re)build just the test binary
ninja -C build bin/test_jit
# run the tests
build/bin/test_jit --gtest_filter='glob_style_filter*'
```