pytorch/test/cpp/jit
Raziel Alvarez Guevara c5cd993add Adds a bool is_available() method to the backend contract (#53068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53068

Adds a ```bool is_available()``` method to the backend contract: it returns ```true``` if ```compile()``` and ```execute()``` can be called; ```false``` otherwise.

It is used to implement the following changes in the ```LoweredModule```:
* ```compile()``` in ```__setstate__``` runs only if ```is_available()``` returns ```true```; otherwise ```__setstate__``` throws an exception (“Backend not available.”).
* ```compile()``` at ```LoweredModule``` creation runs only if ```is_available()``` returns ```true```; otherwise a warning is emitted (compilation may still happen on-target).
* ```execute()``` runs only if ```is_available()``` returns ```true```; otherwise it throws an exception (“Backend not available.”).
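The resulting contract can be sketched with a plain-C++ mock (the struct name and the string-based ```compile```/```execute``` are illustrative stand-ins, not the real ```PyTorchBackendInterface``` signatures):

```cpp
#include <stdexcept>
#include <string>

// Illustrative stand-in for the backend contract described above.
struct MockBackend {
  bool available;

  // Returns true iff compile() and execute() can be called.
  bool is_available() const { return available; }

  std::string compile(const std::string& processed) {
    // Callers (e.g. __setstate__) are expected to check is_available() first.
    if (!is_available()) {
      throw std::runtime_error("Backend is not available.");
    }
    return "compiled(" + processed + ")";
  }

  std::string execute(const std::string& handle) {
    if (!is_available()) {
      throw std::runtime_error("Backend is not available.");
    }
    return "ran " + handle;
  }
};
```

An unavailable backend then throws from ```execute()``` exactly as the ```LoweredModule``` behaviour above requires.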

The goal of these changes is to ensure well-defined behaviour for the different combinations of backend availability on-host and on-target.

More specifically, backends may have different capabilities to compile and/or execute the Module, depending on whether this happens on-host (i.e. where the program is being written) or on-target (where the program is being executed).

First of all, we know that "preprocess" always takes place, and that it happens only on-host at creation time. So, if any compilation is needed and possible on-host, all of it can be pushed into that step.

Overall, we want to ensure the following:

**On host**

| compile | execute | Outcome |
| -- | -- | -- |
| No | No | On module creation, the LoweredModule is generated with a warning (since compilation and execution can still take place on-target). On module load, an exception is thrown (since execution is not possible). |
| No | Yes | This configuration should not be possible: if the full compiler is not available, then even if some work was done in preprocess, the program cannot be finalized for execution. |
| Yes | No | In this case, the expectation is for is_available() to return false, and for the compilation logic to move into preprocess. |
| Yes | Yes | All good. This is the only case in which is_available() should return true. |

**On target**

| compile | execute | Outcome |
| -- | -- | -- |
| No | No | Loading the LoweredModule throws an exception, since execution is not possible. |
| No | Yes | Effectively another instance of Yes/Yes: compilation per se may not be possible on device, so compile() can be called without issue but is a no-op, and thus is_available() should return true. Consequently, loading the LoweredModule succeeds if the preprocessed module is ready for execution, and fails with an exception otherwise. |
| Yes | No | This configuration should not be possible. Listed here only for completeness. |
| Yes | Yes | All good. This and the No/Yes case (compilation is assumed to have happened on-host, making it just another instance of Yes/Yes) are the cases in which is_available() should return true. |

**Refactoring existing code**
This change also updates the code of other backends (e.g. Glow) to implement the is_available() method with the same behaviour as before this change (i.e. always available).

This should not cause backward incompatibilities with already-saved models, since we're adding a new method to the PyTorchBackendInterface.
Models saved with the old interface, which lacked is_available(), will still find the other two methods (compile and execute) in the bound object, and the saved LoweredModule logic will be the old one.

**Future**
We plan to use is_available() to implement support for fallback to the PyTorch interpreter.
ghstack-source-id: 123498571

Test Plan: Added C++ (test_backend.cpp) and Python (test_backends.py) tests to validate the exceptions.

Reviewed By: jackm321, spaugh, iseeyuan

Differential Revision: D26615833

fbshipit-source-id: 562e8b11db25784348b5f86bbc4179aedf15e0d3
2021-03-10 00:24:16 -08:00
__init__.py remediation of S205607 2020-07-17 17:19:47 -07:00
CMakeLists.txt Add a demo backend with compiler (#52603) 2021-02-26 11:53:34 -08:00
README.md port all JIT tests to gtest (#45264) 2020-09-25 11:37:43 -07:00
test_alias_analysis.cpp [codemod][fbcode/caffe2] Apply clang-format update fixes 2021-01-09 14:37:36 -08:00
test_argument_spec.cpp [codemod][fbcode/caffe2] Apply clang-format update fixes 2021-01-09 14:37:36 -08:00
test_autodiff.cpp Add inputs argument to autograd.backward() (#46855) 2020-11-02 14:32:38 -08:00
test_backend_compiler_lib.cpp Adds a bool is_available() method to the backend contract (#53068) 2021-03-10 00:24:16 -08:00
test_backend_lib.cpp Adds a bool is_available() method to the backend contract (#53068) 2021-03-10 00:24:16 -08:00
test_backend.cpp Adds a bool is_available() method to the backend contract (#53068) 2021-03-10 00:24:16 -08:00
test_class_import.cpp gtest-ify JIT tests, through the letter c (#45249) 2020-09-24 00:21:20 -07:00
test_class_parser.cpp port all JIT tests to gtest (#45264) 2020-09-25 11:37:43 -07:00
test_class_type.cpp [JIT] Print out CU address in ClassType::repr_str() (#50194) 2021-01-19 23:04:30 -08:00
test_cleanup_passes.cpp gtest-ify JIT tests, through the letter c (#45249) 2020-09-24 00:21:20 -07:00
test_code_template.cpp gtest-ify JIT tests, through the letter c (#45249) 2020-09-24 00:21:20 -07:00
test_constant_pooling.cpp [JIT] Fix Dict bug in constant hashing (#45929) 2020-10-07 17:40:17 -07:00
test_create_autodiff_subgraphs.cpp gtest-ify JIT tests, through the letter c (#45249) 2020-09-24 00:21:20 -07:00
test_custom_class_registrations.cpp [WIP][FX] Fix tracing support for torchbind (#52884) 2021-03-05 23:40:16 -08:00
test_custom_class_registrations.h gtest-ify JIT tests, through the letter c (#45249) 2020-09-24 00:21:20 -07:00
test_custom_class.cpp [TorchScript] Support user defined classes as constants (#5062) 2020-11-16 20:52:02 -08:00
test_custom_operators.cpp op_whitelist -> op_allowlist (#52150) 2021-02-22 12:23:42 -08:00
test_dce.cpp gtest-ify JIT tests, through the letter c (#45249) 2020-09-24 00:21:20 -07:00
test_fuser.cpp Use c10::irange for great good (#52153) 2021-02-24 18:43:50 -08:00
test_gpu.cpp Fix wrong TORCH_CHECK usages (#52670) 2021-02-23 14:47:51 -08:00
test_graph_executor.cpp [JIT] Fix archive file extension in examples and docs (#50649) 2021-01-20 02:04:46 -08:00
test_inliner.cpp port all JIT tests to gtest (#45264) 2020-09-25 11:37:43 -07:00
test_interface.cpp port all JIT tests to gtest (#45264) 2020-09-25 11:37:43 -07:00
test_interpreter_async.pt [DI] Allow explicit taskLauncher for torchscript interpreter (#46865) 2020-11-04 17:07:55 -08:00
test_interpreter.cpp [JIT] Fix archive file extension in examples and docs (#50649) 2021-01-20 02:04:46 -08:00
test_ir.cpp port all JIT tests to gtest (#45264) 2020-09-25 11:37:43 -07:00
test_irparser.cpp Fix stride printing/parsing formatting (#45156) 2020-10-06 15:06:46 -07:00
test_jit_type.cpp [PyTorch][codemod] Replace immediately-dereferenced expect calls w/expectRef (#50228) 2021-01-13 16:13:55 -08:00
test_lite_interpreter.cpp Add a demo backend with compiler (#52603) 2021-02-26 11:53:34 -08:00
test_lite_trainer.cpp port all JIT tests to gtest (#45264) 2020-09-25 11:37:43 -07:00
test_memory_dag.cpp [jit] gtest-ify test_alias_analysis.cpp (#45018) 2020-09-21 12:19:37 -07:00
test_misc.cpp Add a filter to remove mutation (#51923) 2021-03-01 21:22:33 -08:00
test_mobile_type_parser.cpp port all JIT tests to gtest (#45264) 2020-09-25 11:37:43 -07:00
test_module_api.cpp [JIT] Update freezing api (#52337) 2021-02-18 00:17:27 -08:00
test_peephole_optimize.cpp port all JIT tests to gtest (#45264) 2020-09-25 11:37:43 -07:00
test_qualified_name.cpp port all JIT tests to gtest (#45264) 2020-09-25 11:37:43 -07:00
test_save_load.cpp Adding JIT support for cuda streams and events (#48020) 2020-12-29 20:24:57 -08:00
test_schema_matching.cpp port all JIT tests to gtest (#45264) 2020-09-25 11:37:43 -07:00
test_subgraph_matcher.cpp [JIT] Support multiple outputs in subgraph matcher. (#48992) 2020-12-15 13:09:24 -08:00
test_subgraph_rewriter.cpp [JIT] Support multiple outputs in subgraph matcher. (#48992) 2020-12-15 13:09:24 -08:00
test_subgraph_utils.cpp Extend subgraph utils to cover merging a node following a subgraph (#52513) 2021-03-01 21:22:43 -08:00
test_utils.cpp port all JIT tests to gtest (#45264) 2020-09-25 11:37:43 -07:00
test_utils.h Add a demo backend with compiler (#52603) 2021-02-26 11:53:34 -08:00
tests_setup.py Add default arguments to cuda stream and events (#53025) 2021-03-02 14:37:24 -08:00
torch_python_test.cpp [jit] Pull (most) tests out of libtorch_python (#44795) 2020-09-18 14:04:40 -07:00

JIT C++ Tests

Adding a new test

First, create a new test file. Test files should be placed in this directory, with a name that starts with test_, like test_foo.cpp.

In general, each file should contain a single test suite.

Add your test file to the JIT_TEST_SRCS list in test/cpp/jit/CMakeLists.txt.
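For example, assuming the new file is test_foo.cpp, the registration amounts to appending it to the existing list (excerpt; the surrounding entries and the JIT_TEST_ROOT variable reflect the current CMakeLists.txt, but check the file for the exact shape):

```cmake
set(JIT_TEST_SRCS
  ${JIT_TEST_ROOT}/test_alias_analysis.cpp
  ...
  ${JIT_TEST_ROOT}/test_foo.cpp  # your new test file
)
```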

A test file may look like:

#include <gtest/gtest.h>

using namespace ::torch::jit;

TEST(FooTest, BarBaz) {
   // ...
}

// Appending '_CUDA' to the test case name will automatically filter it out if
// PyTorch was built without CUDA.
TEST(FooTest, NeedsAGpu_CUDA) {
   // ...
}

// Similarly, if only one GPU is detected, tests with `_MultiCUDA` at the end
// will not be run.
TEST(FooTest, NeedsMultipleGpus_MultiCUDA) {
   // ...
}

Building and running the tests

The following commands assume you are in PyTorch root.

# ... Build PyTorch from source, e.g.
python setup.py develop
# (re)build just the binary
ninja -C build bin/test_jit
# run tests
build/bin/test_jit --gtest_filter='glob_style_filter*'