pytorch/torch/csrc/utils/pybind.h
Yangqing Jia 713e706618 Move exception to C10 (#12354)
Summary:
There is still some work to be done:

- Move logging and unify AT_WARN with LOG(ERROR).
- A few header files are still being plumbed through, need cleaning.
- caffe2::EnforceNotMet aliasing is not done yet.
- need to unify the macros. See c10/util/Exception.h

This is mainly a codemod and does not cause functional changes. If you find your job failing and trace it back to this diff, it can usually be fixed by one of the following approaches:

(1) add //caffe2/c10:c10 to your dependency (or transitive dependency).
(2) change objects such as at::Error, at::Optional to the c10 namespace.
(3) change functions to the c10 namespace. In particular, caffe2::MakeString is not overridden by the unified c10::str function; nothing else changes. (A rough sketch of these renames follows this list.)
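
As a hedged illustration of approaches (2) and (3), not taken from the diff itself (the call sites and header paths below are assumptions based on the c10 layout this diff introduces):

// Hypothetical call sites showing the namespace moves; the exact
// header locations are assumptions, not prescribed by this diff.
#include <c10/util/Optional.h>     // assumed home of c10::optional
#include <c10/util/StringUtil.h>   // assumed home of c10::str

c10::optional<int64_t> axis;                 // was: at::Optional<int64_t> axis;
std::string msg = c10::str("bad axis ", 3);  // was: caffe2::MakeString("bad axis ", 3);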

Please consider not reverting this diff: it involved multiple rounds of rebasing, and the fix on your end is usually simple. Contact jiayq@ or AI Platform Dev for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/12354

Reviewed By: orionr

Differential Revision: D10238910

Pulled By: Yangqing

fbshipit-source-id: 7794d5bf2797ab0ca6ebaccaa2f7ebbd50ff8f32
2018-10-15 13:33:18 -07:00


#pragma once

#include "torch/csrc/python_headers.h"

#include <ATen/ATen.h>
#include <pybind11/pybind11.h>
#include <pybind11/stl.h>

#include "torch/csrc/DynamicTypes.h"
#include "torch/csrc/autograd/python_variable.h"
#include "torch/csrc/utils/python_tuples.h"
#include "torch/csrc/utils/python_numbers.h"

#include <stdexcept>

namespace py = pybind11;
namespace pybind11 { namespace detail {

// torch.autograd.Variable <-> at::Tensor conversions (without unwrapping)
template <>
struct type_caster<at::Tensor> {
public:
  PYBIND11_TYPE_CASTER(at::Tensor, _("at::Tensor"));

  // Accepts a torch.autograd.Variable from Python and stores it as an
  // at::Tensor; rejects anything that is not a Variable.
  bool load(handle src, bool) {
    PyObject* obj = src.ptr();
    if (THPVariable_Check(obj)) {
      value = reinterpret_cast<THPVariable*>(obj)->cdata;
      return true;
    }
    return false;
  }

  // Wraps an at::Tensor (which must already hold a Variable) back into a
  // Python torch.autograd.Variable.
  static handle
  cast(at::Tensor src, return_value_policy /* policy */, handle /* parent */) {
    if (!src.is_variable()) {
      throw std::runtime_error(
          "Expected tensor's dynamic type to be Variable, not Tensor");
    }
    return handle(THPVariable_Wrap(torch::autograd::Variable(src)));
  }
};
template <>
struct type_caster<torch::autograd::Variable> {
public:
  PYBIND11_TYPE_CASTER(torch::autograd::Variable, _("torch::autograd::Variable"));

  bool load(handle src, bool) {
    PyObject* source = src.ptr();
    if (THPVariable_Check(source)) {
      value = reinterpret_cast<THPVariable*>(source)->cdata;
      return true;
    }
    return false;
  }

  static handle cast(torch::autograd::Variable src, return_value_policy /* policy */, handle /* parent */) {
    return handle(THPVariable_Wrap(src));
  }
};
// Converts Python lists/tuples of ints (or 0-dim Variables) to at::IntList.
template <>
struct type_caster<at::IntList> {
public:
  PYBIND11_TYPE_CASTER(at::IntList, _("at::IntList"));

  bool load(handle src, bool) {
    PyObject* source = src.ptr();
    auto tuple = PyTuple_Check(source);
    if (tuple || PyList_Check(source)) {
      auto size = tuple ? PyTuple_GET_SIZE(source) : PyList_GET_SIZE(source);
      v_value.resize(size);
      for (int idx = 0; idx < size; idx++) {
        PyObject* obj = tuple ? PyTuple_GET_ITEM(source, idx) : PyList_GET_ITEM(source, idx);
        if (THPVariable_Check(obj)) {
          v_value[idx] = THPVariable_Unpack(obj).item<int64_t>();
        } else if (PyLong_Check(obj)) {
          // python_numbers.h is included above, so THPUtils_unpackLong is safe here.
          v_value[idx] = THPUtils_unpackLong(obj);
        } else {
          return false;
        }
      }
      value = v_value;
      return true;
    }
    return false;
  }
  static handle cast(at::IntList src, return_value_policy /* policy */, handle /* parent */) {
    return handle(THPUtils_packInt64Array(src.size(), src.data()));
  }

private:
  // Backing storage for the converted list; at::IntList is a non-owning
  // ArrayRef, so this data must outlive the caster's `value`.
  std::vector<int64_t> v_value;
};
// http://pybind11.readthedocs.io/en/stable/advanced/cast/stl.html#c-17-library-containers
template <typename T>
struct type_caster<c10::optional<T>> : optional_caster<c10::optional<T>> {};

}} // namespace pybind11::detail
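
As a hedged usage sketch (not part of this header): because these casters are visible wherever pybind.h is included, a Torch C++ extension can bind functions that take or return at::Tensor, at::IntList, or c10::optional<T> directly. The module and function names below are hypothetical, and the sketch assumes it is compiled as a Torch extension so that ATen and the Python headers are available.

// Hypothetical extension module relying on the casters defined above.
#include "torch/csrc/utils/pybind.h"

// at::Tensor arguments/returns go through type_caster<at::Tensor>, so
// Python callers pass torch.autograd.Variable objects directly.
at::Tensor double_tensor(at::Tensor t) {
  return t * 2;
}

// A Python list/tuple of ints (or 0-dim Variables) is converted by
// type_caster<at::IntList>.
int64_t sum_sizes(at::IntList sizes) {
  int64_t total = 0;
  for (auto s : sizes) total += s;
  return total;
}

// c10::optional<T> parameters accept None thanks to the optional_caster above.
int64_t dim_or_default(c10::optional<int64_t> dim) {
  return dim.value_or(0);
}

PYBIND11_MODULE(example_ext, m) {
  m.def("double_tensor", &double_tensor);
  m.def("sum_sizes", &sum_sizes);
  m.def("dim_or_default", &dim_or_default);
}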