pytorch/torch/csrc/utils/init.cpp
David Riazati 1ec12fd491 Add minidump collection via breakpad (#55647)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55647

This adds [breakpad](https://github.com/google/breakpad), which comes with out-of-the-box utilities to register a signal handler that writes out a minidump on an unhandled exception. Right now this is gated behind a flag in `torch.utils`, but in the future it could be on by default. Size-wise this adds about 500 KB to `libtorch_cpu.so` (187275968 B to 187810016 B).

```bash
$ cat <<EOF > test.py
import torch

torch.utils.enable_minidump_collection()

# temporary util that just segfaults
torch._C._crash()
EOF

$ python test.py
Wrote minidump to /tmp/pytorch_crashes/6a829041-50e9-4247-ea992f99-a74cf47a.dmp
fish: “python test.py” terminated by signal SIGSEGV (Address boundary error)
$ minidump-2-core /tmp/pytorch_crashes/6a829041-50e9-4247-ea992f99-a74cf47a.dmp -o core.dmp
$ gdb python core.dmp
... commence debugging ...
```

Right now, exceptions that are passed up to Python do not trigger the signal handler (which by default only handles [these signals](https://github.com/google/breakpad/blob/main/src/client/linux/handler/exception_handler.cc#L115)). It would be possible for PyTorch exceptions to explicitly write a minidump when they are passed up to Python (perhaps only when the exception goes unhandled).
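
For illustration only, a minimal sketch of what such an explicit hook could look like. It assumes a hypothetical `torch._C._write_minidump()` binding that does not exist in this PR; the real mechanism would need to be added to the crash handler.

```python
# Hypothetical sketch: torch._C._write_minidump() is NOT part of this PR.
# One option would be an excepthook that writes a minidump for otherwise
# unhandled Python exceptions before falling back to the default behavior.
import sys

import torch


def _minidump_excepthook(exc_type, exc_value, exc_traceback):
    torch._C._write_minidump()  # assumed binding, for illustration only
    sys.__excepthook__(exc_type, exc_value, exc_traceback)


sys.excepthook = _minidump_excepthook
```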

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D27679767

Pulled By: driazati

fbshipit-source-id: 1ab3b5160b6dc405f5097eb25acc644d533358d7
2021-04-16 13:05:01 -07:00


#include <ATen/core/ivalue.h>
#include <torch/csrc/utils/init.h>
#include <torch/csrc/utils/throughput_benchmark.h>
#include <torch/csrc/utils/crash_handler.h>

#include <pybind11/functional.h>

namespace torch {
namespace throughput_benchmark {

void initThroughputBenchmarkBindings(PyObject* module) {
  auto m = py::handle(module).cast<py::module>();
  using namespace torch::throughput_benchmark;
  py::class_<BenchmarkConfig>(m, "BenchmarkConfig")
      .def(py::init<>())
      .def_readwrite(
          "num_calling_threads", &BenchmarkConfig::num_calling_threads)
      .def_readwrite("num_worker_threads", &BenchmarkConfig::num_worker_threads)
      .def_readwrite("num_warmup_iters", &BenchmarkConfig::num_warmup_iters)
      .def_readwrite("num_iters", &BenchmarkConfig::num_iters)
      .def_readwrite(
          "profiler_output_path", &BenchmarkConfig::profiler_output_path);

  py::class_<BenchmarkExecutionStats>(m, "BenchmarkExecutionStats")
      .def_readonly("latency_avg_ms", &BenchmarkExecutionStats::latency_avg_ms)
      .def_readonly("num_iters", &BenchmarkExecutionStats::num_iters);

  py::class_<ThroughputBenchmark>(m, "ThroughputBenchmark", py::dynamic_attr())
      .def(py::init<jit::Module>())
      .def(py::init<py::object>())
      .def(
          "add_input",
          [](ThroughputBenchmark& self, py::args args, py::kwargs kwargs) {
            self.addInput(std::move(args), std::move(kwargs));
          })
      .def(
          "run_once",
          [](ThroughputBenchmark& self, py::args args, py::kwargs kwargs) {
            // Depending on whether this is a ScriptModule or an nn.Module, the
            // GIL may or may not be released further down the stack.
            return self.runOnce(std::move(args), std::move(kwargs));
          })
      .def("benchmark", [](ThroughputBenchmark& self, BenchmarkConfig config) {
        // The benchmark always runs without the GIL; it is reacquired only
        // where needed, i.e. in nn.Module mode when manipulating inputs and
        // running the actual inference.
        pybind11::gil_scoped_release no_gil_guard;
        return self.benchmark(config);
      });
}

} // namespace throughput_benchmark

namespace crash_handler {

void initCrashHandlerBindings(PyObject* module) {
  auto m = pybind11::handle(module).cast<pybind11::module>();

  // Minidump collection controls exposed to Python; used by
  // torch.utils.enable_minidump_collection().
  m.def("_enable_minidump_collection", _enable_minidump_collection)
      .def("_get_minidump_directory", _get_minidump_directory);
}

} // namespace crash_handler
} // namespace torch
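
As a usage note (not part of the file above), here is a hedged sketch of driving the `ThroughputBenchmark` bindings directly from Python. It assumes the bindings are registered on `torch._C`, which is how the `torch.utils.ThroughputBenchmark` wrapper consumes them.

```python
# Hedged sketch: assumes initThroughputBenchmarkBindings() is called on the
# torch._C module, so the classes bound above are reachable as torch._C.*.
import torch

model = torch.nn.Linear(8, 8)       # plain nn.Module; a ScriptModule also works

bench = torch._C.ThroughputBenchmark(model)
bench.add_input(torch.randn(2, 8))  # args/kwargs are forwarded to the module

config = torch._C.BenchmarkConfig()
config.num_calling_threads = 2
config.num_warmup_iters = 10
config.num_iters = 100

stats = bench.benchmark(config)     # runs with the GIL released, per the binding above
print(stats.latency_avg_ms, stats.num_iters)
```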