pytorch/test/cpp/jit/test_backend_lib.cpp
Raziel Alvarez Guevara c5cd993add Adds a bool is_available() method to the backend contract (#53068)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53068

Adds a ```bool is_available()``` method to the backend contract: it returns ```true``` if ```compile()``` and ```execute()``` can be called; ```false``` otherwise.

It is used to implement the following changes in the ```LoweredModule```:
* ```compile()``` in ```__setstate__``` will run if ```is_available()``` returns true; otherwise ```__setstate__``` throws an exception (“Backend not available.”).
* ```compile()``` at ```LoweredModule``` creation will run if ```is_available()``` returns true; otherwise a warning is emitted (since compilation and execution may still be possible on-target).
* ```execute()``` will only run if ```is_available()``` returns true; otherwise it throws an exception (“Backend not available.”). A sketch of this guard follows the list.
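As a rough illustration (this is not the actual generated LoweredModule code, which is TorchScript emitted by the backend codegen; the helper name here is hypothetical), the guard added in all three places boils down to:

```cpp
#include <torch/csrc/jit/backends/backend.h>

#include <utility>

namespace torch {
namespace jit {

// Hypothetical helper; the real logic lives inside the generated
// LoweredModule methods. Refuses to compile if the backend cannot run here.
c10::impl::GenericDict compileOrThrow(
    PyTorchBackendInterface& backend,
    c10::IValue processed,
    c10::impl::GenericDict method_compile_spec) {
  TORCH_CHECK(backend.is_available(), "Backend not available.");
  return backend.compile(std::move(processed), std::move(method_compile_spec));
}

} // namespace jit
} // namespace torch
```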

The goal of these changes is to ensure we have well-defined behaviour for each combination of backend availability on-host and on-target.

More specifically, backends may have different capabilities to compile and/or execute the Module, depending on whether this happens on-host (i.e. where the program is being written) or on-target (i.e. where the program is being executed).

First of all, we know that "preprocess" always takes place, and that it only happens on-host at creation time. So, if any compilation is needed and possible on-host, then all of it can be pushed into preprocess.
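For illustration, here is a minimal sketch using the preprocess signature from the test file below; the comment marks where such a hypothetical backend would do its host-side work (this is an assumption about a real backend, not code from this change):

```cpp
// Sketch only: a backend whose compiler exists solely on-host can do all of
// its real compilation in preprocess(), which always runs on-host at module
// creation time.
c10::IValue preprocess(
    const Module& mod,
    const c10::Dict<IValue, IValue>& method_compile_spec) {
  // Hypothetically, invoke the host-side compiler here and return its output
  // (e.g. a serialized blob) instead of the raw module, so that on-target
  // compile() can be a cheap no-op over it.
  return mod._ivalue();
}
```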

Overall, we want to ensure the following:

**On host**

| compile | execute | Outcome |
| -- | -- | -- |
| No | No | On module creation, LoweredModule is generated, with a warning  (since compilation and execution can still take place on-target). On module load, throws an exception (since execution is not possible). |
| No | Yes | This configuration should not be possible: it would mean the full compiler is not available on-host, so even if some work was done in preprocess, the program could not be finalized for execution. |
| Yes | No | In this case, the expectation would be for is_available() to return false, and the compilation logic to move into preprocess (as sketched above). |
| Yes | Yes | All good. This is the only case that is_available() should return true. |

**On target**

| compile | execute | Outcome |
| -- | -- | -- |
| No | No | Loading the LoweredModule throws an exception, since execution is not possible. |
| No | Yes | Basically this is another instance of Yes/Yes: compilation per se may not be possible on-target, which means compile() can be called without issue (it is a no-op), and thus is_available() should return true. Consequently, loading the LoweredModule succeeds if the preprocessed module is ready for execution, and fails with an exception otherwise. |
| Yes | No | This configuration should not be possible; it is included here only for completeness. |
| Yes | Yes | All good. This and the No/Yes case (where compilation is assumed to have happened on-host, making it just another instance of Yes/Yes) are the cases where is_available() should return true. |
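Concretely, the on-target No/Yes row corresponds to a compile() that does no real work. A sketch, modeled on the test backend's compile() below, as a member of a hypothetical PyTorchBackendInterface subclass whose preprocess already produced an executable artifact:

```cpp
// Sketch: on-target no-op compile(). All heavy lifting happened in
// preprocess() on-host, so this only maps each requested method to a handle.
c10::impl::GenericDict compile(
    c10::IValue processed,
    c10::impl::GenericDict method_compile_spec) override {
  auto spec =
      c10::impl::toTypedDict<std::string, at::IValue>(method_compile_spec);
  auto handles = c10::Dict<std::string, std::string>();
  for (const auto& entry : spec) {
    // `processed` already holds everything needed to execute; the handle
    // here is just the method name.
    handles.insert(entry.key(), entry.key());
  }
  return c10::impl::toGenericDict(handles);
}
```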

**Refactoring existing code**
This change also updates other backends' code (e.g. Glow) to implement the is_available() method with the same behaviour as before this change (i.e. always available).

This should not cause backward incompatibilities with already-saved models, since we're only adding a new method to the PyTorchBackendInterface.
Models saved with the old interface that lacked is_available() will still find the other two methods in the bound object (i.e. compile() and execute()), and the saved LoweredModule logic will be the old one.

**Future**
We plan to use is_available() to implement support for fallback to the PyTorch interpreter.
ghstack-source-id: 123498571

Test Plan: Added C++ (test_backend.cpp) and Python (test_backends.py) tests to validate the exceptions.
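For flavor, here is a sketch of the kind of check such a test can make (illustrative only, not the actual test code; it assumes torch::jit::detail::codegen_backend_module from backend_detail.h is available for lowering in C++):

```cpp
#include <gtest/gtest.h>

#include <ATen/ATen.h>
#include <torch/csrc/jit/api/module.h>
#include <torch/csrc/jit/backends/backend_detail.h>

TEST(BackendTest, UnavailableBackendThrowsOnExecute) {
  torch::jit::Module m("m");
  m.define(R"(
    def forward(self, x):
        return x
  )");

  // Empty per-method compile spec for "forward".
  c10::Dict<c10::IValue, c10::IValue> method_spec(
      c10::StringType::get(), c10::AnyType::get());
  method_spec.insert(
      "forward",
      c10::Dict<c10::IValue, c10::IValue>(
          c10::StringType::get(), c10::AnyType::get()));
  auto any_dict_ty =
      c10::DictType::create(c10::StringType::get(), c10::AnyType::get());

  // Lowering to the backend registered with is_available() == false succeeds
  // (with a warning), but executing any method must throw.
  auto lm = torch::jit::detail::codegen_backend_module(
      "test_backend_unavailable", m, method_spec, any_dict_ty);
  EXPECT_ANY_THROW(lm.forward({at::ones({2, 2})}));
}
```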

Reviewed By: jackm321, spaugh, iseeyuan

Differential Revision: D26615833

fbshipit-source-id: 562e8b11db25784348b5f86bbc4179aedf15e0d3
2021-03-10 00:24:16 -08:00

#include <torch/csrc/jit/backends/backend.h>

namespace torch {
namespace jit {

// This test JIT backend is intended to do the minimal amount of work
// necessary to test that the JIT backend registration endpoints and
// code generation are working correctly. It is not intended to
// produce numerically correct results.
template <bool isAvailable>
class TestBackend : public PyTorchBackendInterface {
 public:
  // Constructor.
  explicit TestBackend() {}
  virtual ~TestBackend() = default;

  bool is_available() override {
    return isAvailable;
  }

  c10::impl::GenericDict compile(
      c10::IValue processed,
      c10::impl::GenericDict method_compile_spec) override {
    auto spec =
        c10::impl::toTypedDict<std::string, at::IValue>(method_compile_spec);

    // Return the same string as a value for every key in method_compile_spec.
    auto handles = c10::Dict<std::string, std::string>();
    for (const auto& it : spec) {
      handles.insert(it.key(), it.key());
    }
    return c10::impl::toGenericDict(handles);
  }

  c10::impl::GenericList execute(
      c10::IValue handle,
      c10::impl::GenericList inputs) override {
    TORCH_INTERNAL_ASSERT(handle.isString());
    TORCH_INTERNAL_ASSERT(inputs.size() > 0);

    c10::List<at::Tensor> output_list;

    // Implement simple accumulator and negative accumulator (?) ops. Return
    // one or both of them depending on the handle to make sure multiple
    // outputs are handled.
    c10::IValue value = inputs[0];
    at::Tensor accum = value.toTensor();
    accum = accum.clone();
    at::Tensor sub_accum = value.toTensor();
    sub_accum = sub_accum.clone();

    for (size_t i = 1, e = inputs.size(); i < e; ++i) {
      value = inputs[i];
      accum.add_(value.toTensor(), 1.0);
      sub_accum.sub_(value.toTensor(), 1.0);
    }

    if (handle.toStringRef() == "accum") {
      output_list.emplace_back(accum);
    } else if (handle.toStringRef() == "sub_accum") {
      output_list.emplace_back(sub_accum);
    } else if (handle.toStringRef() == "forward") {
      output_list.emplace_back(accum);
      output_list.emplace_back(sub_accum);
    }
    return c10::impl::toList(output_list);
  }
};

namespace {

c10::IValue preprocess(
    const Module& mod,
    const c10::Dict<IValue, IValue>& method_compile_spec) {
  return mod._ivalue();
}

static auto cls_available =
    torch::jit::backend<TestBackend<true>>("test_backend", preprocess);
static auto cls_unavailable = torch::jit::backend<TestBackend<false>>(
    "test_backend_unavailable",
    preprocess);

} // namespace
} // namespace jit
} // namespace torch