pytorch/torch/csrc/utils/cuda_lazy_init.cpp
Edward Yang 72822ee6b2 Fix #11430 (CPU only builds raise opaque error message when calling .cuda()) (#11533)
Summary:

While I was at it, I audited all other ways I know how we might get a CUDA
type from PyTorch and fixed more constructors which don't work.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11533

Differential Revision: D9775786

Pulled By: ezyang

fbshipit-source-id: cd07cdd375fdf74945539ec475a48bf08cbc0c17
2018-09-14 09:10:08 -07:00

#include "cuda_lazy_init.h"

#include "torch/csrc/python_headers.h"
#include "torch/csrc/Exceptions.h"
#include "torch/csrc/utils/object_ptr.h"

#include <mutex>

namespace torch {
namespace utils {

void cuda_lazy_init() {
  AutoGIL g;
  // Protected by the GIL. We don't use call_once because under ASAN it
  // has a buggy implementation that deadlocks if an instance throws an
  // exception. In any case, call_once isn't necessary, because we
  // have taken a lock.
  static bool run_yet = false;
  if (!run_yet) {
    auto module = THPObjectPtr(PyImport_ImportModule("torch.cuda"));
    if (!module) throw python_error();
    auto res = THPObjectPtr(PyObject_CallMethod(module.get(), "_lazy_init", ""));
    if (!res) throw python_error();
    run_yet = true;
  }
}

} // namespace utils
} // namespace torch
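The pattern above — a plain static flag checked under a lock instead of `std::call_once` — can be sketched in standalone C++, with a `std::mutex` standing in for the GIL. The names `lazy_init`, `fake_backend_init`, and `init_calls` are hypothetical, used only to illustrate the pattern; this is a minimal sketch, not PyTorch's implementation.

```cpp
#include <mutex>

// Hypothetical stand-ins for the GIL and torch.cuda._lazy_init.
static std::mutex init_mutex;   // plays the role of the GIL
static bool run_yet = false;    // same flag-under-lock idiom as above
static int init_calls = 0;      // counts how often real init actually runs

// Hypothetical expensive initializer (illustration only).
void fake_backend_init() {
  ++init_calls;
}

void lazy_init() {
  std::lock_guard<std::mutex> lock(init_mutex);
  if (!run_yet) {
    fake_backend_init();
    // The flag is set only after the initializer returns: if it threw,
    // a later call would retry rather than report a half-done init.
    // std::call_once would give the same retry-on-throw semantics, but
    // the commit comment notes its ASAN implementation can deadlock in
    // exactly that throwing case, hence the manual flag.
    run_yet = true;
  }
}
```

Calling `lazy_init()` any number of times performs the initialization exactly once, which is the observable contract `cuda_lazy_init()` provides to the CUDA type constructors the commit fixes.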