Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55462

Handle and symbolicate exception callstacks thrown from a backend.

The objective of this diff is to improve error reporting when exceptions are raised from a lowered backend. We would effectively like to get the same model-level stack trace that you would get without having lowered some module to a backend. For example:

```
class AA(nn.Module):
  def forward(self, x, y):
    return x + y

class A(nn.Module):
  def __init__(...):
    self.AA0 = AA()

  def forward(self, x, y):
    return self.AA0.forward(x, y) + 3

class B(nn.Module):
  def forward(self, x):
    return x + 2

class C(nn.Module):
  def __init__(...):
    self.A0 = A()
    self.B0 = B()

  def forward(self, x, y):
    return self.A0.forward(x, y) + self.B0.forward(x)
```

If we then run `C().forward(torch.rand((2, 3)), torch.rand((14, 2)))`, we will likely see an error stack like:

```
C++ exception with description "The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "<string>", line 3, in forward

    def forward(self, x, y):
      return self.A0.forward(x, y) + self.B0.forward(x)
             ~~~~~~~~~~~~~~~ <--- HERE

  File "<string>", line 3, in forward

    def forward(self, x, y):
      return self.AA0.forward(x, y) + 3
             ~~~~~~~~~~~~~~~~ <--- HERE

  File "<string>", line 3, in forward

    def forward(self, x, y):
      return x + y
             ~~~~~ <--- HERE
```

We would like to see the same error stack if we lowered C.A0 to some backend. With this diff we get something like:

```
Module hierarchy:top(C).A0(backend_with_compiler_demoLoweredModule).AA0(AA)
Traceback of TorchScript (most recent call last):
  File "<string>", line 3, in FunctionName_UNKNOWN

    def forward(self, x, y):
      return self.A0.forward(x, y) + self.B0.forward(x)
             ~~~~~~~~~~~~~~~ <--- HERE

  File "<string>", line 5, in FunctionName_UNKNOWN
    typed_inputs: List[Any] = [x, y, ]
    if self.__backend.is_available() :
      _0, = self.__backend.execute(self.__handles["forward"], typed_inputs)
            ~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
      assert isinstance(_0, Tensor)
      return _0

  File "<string>", line 3, in FunctionName_UNKNOWN

    def forward(self, x, y):
      return self.AA0.forward(x, y) + 3
             ~~~~~~~~~~~~~~~~ <--- HERE

  File "<string>", line 3, in FunctionName_UNKNOWN

    def forward(self, x, y):
      return x + y
             ~~~~~ <--- HERE
```

This is achieved in 3 parts:

Part 1:
A. BackendDebugInfoRecorder: during backend lowering, in `to_backend`, a BackendDebugInfoRecorder is created before calling the preprocess function corresponding to the backend. This facilitates recording of debug info (such as source range + inlined callstack) for the lowered module.
B. Instantiate WithBackendDebugInfoRecorder with the BackendDebugInfoRecorder. This initializes a thread-local pointer to the BackendDebugInfoRecorder.
C. generate_debug_handles: in the preprocess function, the backend calls generate_debug_handles separately for each method being lowered. generate_debug_handles takes the `Graph` of the method being lowered and returns a map of Node*-to-debug_handles. The backend is responsible for storing the debug handles appropriately, so that it can raise an exception (and, later, report profiling) using the debug handle when the failure corresponds to a particular Node that was lowered. Inside generate_debug_handles, we query the current BackendDebugInfoRecorder, which issues the debug handles. This debug-handle manager issues debug handles as well as records the debug_handles-to-<source range, inlined callstack> map.
D. Back in `to_backend`, once the preprocess function has finished lowering the module, we call `stopRecord` on the BackendDebugInfoRecorder. This returns the debug info map, which is then stored inside the lowered module.
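To make Part 1 concrete, below is a minimal, hypothetical sketch of how a backend's preprocess function might consume generate_debug_handles. It is not part of this diff's code: the exact signatures of preprocess and generate_debug_handles, the header that declares generate_debug_handles, and the per-node blob layout are assumptions chosen to match the description above and the `instruction<debug_handle>N` token format that the demo backend file below parses.

```
// Hypothetical sketch only; signatures and header locations are assumed.
#include <torch/csrc/jit/api/module.h>
#include <torch/csrc/jit/ir/ir.h>
// (header declaring generate_debug_handles is assumed to be provided by this diff)

#include <string>

c10::IValue preprocess(
    const torch::jit::Module& mod,
    const c10::Dict<c10::IValue, c10::IValue>& method_compile_spec) {
  c10::Dict<std::string, std::string> compiled;
  for (const auto& method : mod.get_methods()) {
    auto graph = method.graph()->copy();
    // generate_debug_handles hands out one debug handle per Node and records
    // the handle -> <source range, inlined callstack> mapping in the active
    // BackendDebugInfoRecorder installed by to_backend.
    auto node_to_debug_handle = generate_debug_handles(graph);
    std::string blob;
    for (auto* node : graph->nodes()) {
      if (!blob.empty()) {
        blob += ",";
      }
      // One token per node: "instruction<debug_handle>N", which is exactly
      // what the demo backend's compile()/parseMethodHandle expects.
      blob += node->kind().toQualString();
      blob += "<debug_handle>";
      blob += std::to_string(node_to_debug_handle.at(node));
    }
    compiled.insert(method.name(), blob);
  }
  return compiled;
}
```

The key point is that the backend, not the runtime, decides how to persist the debug handles; it only has to hand the right handle back when raising an exception for a lowered Node.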
Part 2:
Serialization: during serialization for bytecode (lite interpreter), we do two things:
1. Extract all the source ranges contained inside the debug_handles-to-<source range, inlined callstack> map for the lowered module. These are the source ranges corresponding to the debug handles, including the ranges inside the inlined callstacks. Since we replaced the original module with the lowered module, we will not be serializing code for the original module, and thus there are no source ranges for it; that is why the source ranges have to be stored separately. We lump the source ranges for all the lowered modules into one single debug_pkl file.
2. Then we serialize the debug_handles-to-<source range, inlined callstack> map itself. During deserialization we can then reconstruct the debug_handles-to-<source range, inlined callstack> map. Given that all debug_handles are unique, we do not need any module information.

Test Plan: Tests are added in test_backend.cpp

Imported from OSS

Differential Revision: D27621330

Reviewed By: raziel

Pulled By: kimishpatel

fbshipit-source-id: 0650ec68cda0df0a945864658cab226a97ba1890
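The file below is the demo backend (`backend_with_compiler_demo`) referenced in the example above. As a purely illustrative assumption about what a preprocessed blob might contain (the instruction string and handle value 102 are made up), its helper `parseMethodHandle` would turn a blob into (instruction, debug handle) tokens like so:

```
// Illustration only; calls parseMethodHandle from the file below conceptually.
std::string blob = "aten::add<debug_handle>102";
auto tokens = parseMethodHandle(blob);
// tokens == { ("aten::add", 102) }
//
// If executing that instruction later fails, execute() rethrows with
// TORCH_DELEGATED_BACKEND_THROW(false, e.what(), debug_handle), and the
// runtime uses the recorded debug-handle map to symbolicate the failure
// into the module-level stack trace shown in the summary above.
```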
#include <torch/csrc/jit/backends/backend.h>
#include <torch/csrc/jit/backends/backend_exception.h>

#include <sstream>

namespace torch {
namespace jit {

// Implementation of a PyTorch Backend that can process, compile and execute
// TorchScript Modules composed of 'add' and 'sub' operators. It only supports
// modules that implement a sum or subtraction of 2 inputs (i.e. in1 + in2
// or in1 - in2). Hence the methods of the models expect exactly 2 inputs of
// type Tensor. This backend is used to demonstrate the flow of compilation
// and execution with a minimum amount of work. It's not intended to be a
// practical backend that can be used for actual inference.

// Implementation details:
//
// Compilation
// 1. A backend with minimum compilation features, "backend_with_compiler_demo"
//    is added.
// 2. The compilation happens AOT in the preprocess function registered to this
//    backend.
// 3. Compiled results are stored in a string blob for each method. They are
//    serialized to the lowered module with the __getstate__ function.
// 4. An error message with the model source code is thrown for features not
//    handled by the backend compiler.
//
// Runtime
// 1. The compiled blob is loaded in the __setstate__ method.
// 2. The compile function of the backend parses the preprocessed blob into the
//    format (a list of tokens) that the backend can understand.
// 3. The execute function of the backend executes the specified method
//    (handle).

namespace {
// Splits a preprocessed blob into (instruction, debug_handle) tuples. Each
// comma-separated token has the form "instruction<debug_handle>N"; tokens
// without the "<debug_handle>" marker get a debug handle of -1.
std::vector<std::tuple<std::string, int64_t>> parseMethodHandle(
    const std::string& blob) {
  std::vector<std::tuple<std::string, int64_t>> result;
  std::stringstream s_stream(blob);
  constexpr char debug_handle_token[] = "<debug_handle>";
  while (s_stream.good()) {
    std::string substr;
    getline(s_stream, substr, ',');
    auto debug_handle_pos = substr.find(debug_handle_token);
    int64_t debug_handle{-1};
    auto instruction = substr.substr(0);
    if (debug_handle_pos != std::string::npos) {
      instruction = substr.substr(0, debug_handle_pos);
      // 14 == strlen("<debug_handle>"), i.e. skip past the marker.
      debug_handle = stoi(substr.substr(debug_handle_pos + 14));
    }
    result.push_back(std::make_tuple(instruction, debug_handle));
  }
  return result;
}
} // namespace

class BackendWithCompiler : public PyTorchBackendInterface {
 public:
  // Constructor.
  // NOLINTNEXTLINE(modernize-use-equals-default)
  explicit BackendWithCompiler() {}
  // NOLINTNEXTLINE(modernize-use-override)
  virtual ~BackendWithCompiler() = default;

  bool is_available() override {
    return true;
  }

  // Since the actual compilation is done AOT in the preprocess function,
  // compile here only parses the precompiled blob of each method into the
  // (instruction, debug_handle) tokens that execute consumes.
  c10::impl::GenericDict compile(
      c10::IValue processed,
      c10::impl::GenericDict method_compile_spec) override {
    auto dict = processed.toGenericDict();
    auto handles =
        c10::Dict<std::string, std::vector<std::tuple<std::string, int64_t>>>();
    for (const auto& kv : dict) {
      auto tokens = parseMethodHandle(kv.value().toStringRef());
      handles.insert(kv.key().toStringRef(), tokens);
    }
    return c10::impl::toGenericDict(handles);
  }

  c10::impl::GenericList execute(
      c10::IValue handle,
      c10::impl::GenericList inputs) override {
    TORCH_INTERNAL_ASSERT(inputs.size() == 2);
    c10::IValue val0 = inputs[0];
    at::Tensor x = val0.toTensor();
    c10::IValue val1 = inputs[1];
    at::Tensor h = val1.toTensor();

    c10::List<at::Tensor> output_list;
    double scalar_val = 1.0;
    for (const auto& token : handle.toList()) {
      IValue val = token;
      auto instruction = val.toTuple()->elements()[0].toStringRef();
      auto debug_handle = val.toTuple()->elements()[1].toInt();
      double const_val = 1.0;
      try {
        if (instruction.rfind("prim::Constant", 0) == 0) {
          TORCH_CHECK(
              instruction.size() > 15,
              "Constant value is expected in ",
              instruction);
          // NOLINTNEXTLINE(cppcoreguidelines-avoid-magic-numbers)
          auto sub = instruction.substr(15);
          // NOLINTNEXTLINE(clang-analyzer-deadcode.DeadStores)
          const_val = stod(sub);
        } else if (instruction == "aten::add") {
          output_list.emplace_back(x.add(h, const_val));
        } else if (instruction == "aten::sub") {
          output_list.emplace_back(x.sub(h, const_val));
        } else {
          TORCH_CHECK(
              false,
              "Instruction, ",
              instruction,
              " is not supported. ",
              "Contact the backend POC for details. ");
        }
      } catch (c10::Error& e) {
        // Rethrow with the debug handle attached so the runtime can
        // symbolicate the failure back to the original model source.
        TORCH_DELEGATED_BACKEND_THROW(false, e.what(), debug_handle);
      }
    }
    return c10::impl::toList(output_list);
  }
};

namespace {
constexpr auto backend_name = "backend_with_compiler_demo";
// NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)
static auto cls = torch::jit::backend<BackendWithCompiler>(backend_name);
} // namespace

} // namespace jit
} // namespace torch