pytorch/test/cpp/jit
angelayi e8836759d0 [export] Add effect token to export (#121424)
Following the creation of effect tokens (https://github.com/pytorch/pytorch/pull/120296), we now want to add support for these tokens in export, since they change the calling/returning convention. The inputs are now `(tokens, params, buffers, constants, user_inputs)` and the outputs are `(tokens, buffer_mutations, user_mutations, user_outputs)`. The graph looks something like:
```
graph():
    %arg0_1 : [num_users=1] = placeholder[target=arg0_1]
    %attr : [num_users=2] = placeholder[target=attr]
    %arg1_1 : [num_users=2] = placeholder[target=arg1_1]
    %with_effects : [num_users=2] = call_function[target=torch._higher_order_ops.effects.with_effects](args = (%arg0_1, _TorchScriptTesting.takes_foo.default, %attr, %arg1_1), kwargs = {})
    %getitem : [num_users=1] = call_function[target=operator.getitem](args = (%with_effects, 0), kwargs = {})
    %getitem_1 : [num_users=1] = call_function[target=operator.getitem](args = (%with_effects, 1), kwargs = {})
    %with_effects_1 : [num_users=2] = call_function[target=torch._higher_order_ops.effects.with_effects](args = (%getitem, _TorchScriptTesting.takes_foo.default, %attr, %getitem_1), kwargs = {})
    %getitem_2 : [num_users=1] = call_function[target=operator.getitem](args = (%with_effects_1, 0), kwargs = {})
    %getitem_3 : [num_users=1] = call_function[target=operator.getitem](args = (%with_effects_1, 1), kwargs = {})
    %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%arg1_1, %getitem_3), kwargs = {})
    return (getitem_2, add)
```
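
For illustration, here is a minimal sketch of a module that could produce a graph of this shape. It assumes the test-only `_TorchScriptTesting` custom class and `takes_foo` op (registered in `test_custom_class_registrations.cpp`) are available in the build; the constructor arguments are illustrative only.
```
import torch
from torch.export import export

# Sketch only: assumes the test-only _TorchScriptTesting._Foo class and
# _TorchScriptTesting.takes_foo op from test_custom_class_registrations.cpp
# are registered in this build; constructor arguments are illustrative.
class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.attr = torch.classes._TorchScriptTesting._Foo(10, 20)

    def forward(self, x):
        a = torch.ops._TorchScriptTesting.takes_foo(self.attr, x)
        b = torch.ops._TorchScriptTesting.takes_foo(self.attr, a)
        return x + b

ep = export(M(), (torch.ones(2, 3),))
print(ep.graph)            # with_effects nodes thread the effect token
print(ep.graph_signature)  # the token shows up in the input/output specs
```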

During unlifting, we will first remove the tokens and `with_effects` calls using the `remove_effect_tokens` pass (cc @SherlockNoMad on the pass to remove tokens). This is so that the calling convention does not change when retracing. The graph after unlifting looks something like:
```
graph():
    %attr_1 : [num_users=2] = get_attr[target=attr]
    %arg1_1 : [num_users=2] = placeholder[target=arg1_1]
    %takes_foo_default_1 : [num_users=1] = call_function[target=torch.ops._TorchScriptTesting.takes_foo.default](args = (%attr_1, %arg1_1), kwargs = {})
    %takes_foo_default : [num_users=1] = call_function[target=torch.ops._TorchScriptTesting.takes_foo.default](args = (%attr_1, %takes_foo_default_1), kwargs = {})
    %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%arg1_1, %takes_foo_default), kwargs = {})
    return (add,)
```
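
Continuing the sketch above (same assumptions), one way to exercise the unlifting path is through `ExportedProgram.module()`, which returns the unlifted module; with this change the token plumbing is stripped first:
```
# Continuing the sketch above: ExportedProgram.module() returns the unlifted
# module, with the tokens/with_effects wrappers removed along the way.
unlifted = ep.module()
print(unlifted.graph)   # direct takes_foo calls, no token arguments
out = unlifted(torch.ones(2, 3))
```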

Serialization support will be added in a followup.
Note: tokens only affect custom ops that take in ScriptObjects, not ScriptObject methods yet.

Differential Revision: [D54639390](https://our.internmc.facebook.com/intern/diff/D54639390)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121424
Approved by: https://github.com/tugsbayasgalan
2024-03-09 02:43:26 +00:00
upgrader_models
__init__.py
CMakeLists.txt [ROCm] remove HCC references (#111975) 2023-10-26 02:39:10 +00:00
README.md
script_module_v4.ptl
script_module_v5.ptl
script_module_v6.ptl
source_range_test.cpp
test_add_if_then_else.cpp
test_alias_analysis.cpp [Reland] Move torch::make_unique to std::make_unique (#109780) 2023-09-21 18:30:21 +00:00
test_argument_spec.cpp [2/N] Cleanup header inclusions in torch_cpu by iwyu (#109964) 2023-11-19 20:56:32 +00:00
test_autodiff.cpp
test_backend_compiler_lib.cpp [c10] Move profiler clock to libc10 for timestamps (#111972) 2023-10-27 16:18:40 +00:00
test_backend_compiler_preprocess.cpp
test_backend_lib.cpp [BE] Enforce missing override keyword (#104032) 2023-06-24 02:34:24 +00:00
test_backend.cpp
test_class_import.cpp
test_class_parser.cpp
test_class_type.cpp
test_cleanup_passes.cpp
test_code_template.cpp
test_concat_opt.cpp [2/N] Cleanup header inclusions in torch_cpu by iwyu (#109964) 2023-11-19 20:56:32 +00:00
test_constant_pooling.cpp
test_create_autodiff_subgraphs.cpp
test_cs_debug_info_serialization.cpp
test_custom_class_registrations.cpp [export] Add effect token to export (#121424) 2024-03-09 02:43:26 +00:00
test_custom_class_registrations.h
test_custom_class.cpp
test_custom_operators.cpp
test_dce.cpp
test_exception.cpp Fix typo under test directory (#111304) 2023-10-16 23:06:06 +00:00
test_file_format.cpp
test_flatbuffer.cpp Segmentation fault in flatbuffers when parsing malformed modules (#95221) 2023-05-24 21:16:19 +00:00
test_fuser.cpp
test_graph_executor.cpp
test_graph_iterator.cpp
test_inliner.cpp
test_interface.cpp
test_interpreter_async.pt
test_interpreter.cpp
test_ir.cpp
test_irparser.cpp
test_jit_logging_levels.cpp
test_jit_type.cpp
test_lite_interpreter_direct.cpp
test_lite_interpreter.cpp [Reland] Add -Wdeprecated and related fixes (#110019) 2023-09-28 03:34:29 +00:00
test_lite_trainer.cpp Segmentation fault in flatbuffers when parsing malformed modules (#95221) 2023-05-24 21:16:19 +00:00
test_load_upgraders.cpp
test_memory_dag.cpp Fix C++20 build (#112333) 2024-02-13 05:10:19 +00:00
test_misc.cpp Check QNNPACK support for the platform before running test (#119139) 2024-02-12 20:21:07 +00:00
test_mobile_type_parser.cpp
test_module_api.cpp Fix typo under test directory (#111304) 2023-10-16 23:06:06 +00:00
test_op_replacement.cpp
test_peephole_optimize.cpp
test_qualified_name.cpp
test_save_load.cpp Add support for PickleOpCode::APPEND in torch unpickler (#104027) 2023-08-30 14:24:50 +00:00
test_schema_info.cpp
test_schema_matching.cpp
test_script_profile.cpp
test_shape_analysis.cpp [PyTorch] Redirect c10::optional to std::optional (#101995) 2023-11-30 02:46:41 +00:00
test_stack_opt.cpp [2/N] Cleanup header inclusions in torch_cpu by iwyu (#109964) 2023-11-19 20:56:32 +00:00
test_subgraph_matcher.cpp
test_subgraph_rewriter.cpp
test_subgraph_utils.cpp
test_union.cpp
test_upgrader_utils.cpp
test_utils.cpp
test_utils.h
tests_setup.py Revert "Fix ordered dict loading with LibTorch (#100743)" 2023-05-10 15:29:14 +00:00
torch_python_test.cpp

JIT C++ Tests

Adding a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with test_, like test_foo.cpp.

In general, a single test file should contain a single test suite of related tests.

Add your test file to the JIT_TEST_SRCS list in test/cpp/jit/CMakeLists.txt.

A test file may look like:

#include <gtest/gtest.h>

using namespace ::torch::jit;

TEST(FooTest, BarBaz) {
   // ...
}

// Appending '_CUDA' to the test case name will automatically filter it out if
// CUDA support is not compiled in.
TEST(FooTest, NeedsAGpu_CUDA) {
   // ...
}

// Similarly, tests with `_MultiCUDA` at the end of their name will not be run
// if fewer than two GPUs are detected.
TEST(FooTest, NeedsMultipleGpus_MultiCUDA) {
   // ...
}

Building and running the tests

The following commands assume you are in the PyTorch root directory.

# ... Build PyTorch from source, e.g.
python setup.py develop
# (re)build just the binary
ninja -C build bin/test_jit
# run tests
build/bin/test_jit --gtest_filter='glob_style_filter*'