pytorch/torch/csrc/autograd/python_variable_indexing.cpp
Edward Yang aa49aa856c Tensor type set (#25308)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25308

Instead of storing a single TensorTypeId in a Tensor, we now store a bitset of tensor type IDs in a Tensor, TensorTypeSet. This class comes with some unit tests. This is in preparation for making Variable a TensorTypeId. To flush out places where this makes a semantic difference, we rename `Tensor::type_id()` to `Tensor::type_set()` and audit all of the locations where the old single-ID behavior was semantically meaningful.

Because the new tensor type set is 64-bits, this increases the size of Tensor by a word.
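To make the core idea concrete, here is a minimal, self-contained sketch of a bitset of type IDs. It is an illustration only: the names `TensorTypeId` and `TensorTypeSet` follow the PR, but the enumerators, member functions (`has`, `add`, `remove`, `highestPriorityTypeId`) and priority rule shown here are assumptions, not the real c10 implementation.

```
#include <cstdint>

// Illustrative only: each tensor type is a small integer ID, and a
// TensorTypeSet is a 64-bit mask with one bit per ID (hence the extra
// word added to Tensor).
enum class TensorTypeId : uint8_t {
  UndefinedTensorId = 0,
  CPUTensorId,
  CUDATensorId,
  SparseCPUTensorId,
  SparseCUDATensorId,
  VariableTensorId,  // to be added to the set once Variable becomes a type ID
};

class TensorTypeSet {
 public:
  TensorTypeSet() = default;
  explicit TensorTypeSet(TensorTypeId t)
      : repr_(1ULL << static_cast<uint8_t>(t)) {}

  // Set-inclusion test: this replaces the old `type_id() == x` equality test.
  bool has(TensorTypeId t) const {
    return (repr_ & (1ULL << static_cast<uint8_t>(t))) != 0;
  }

  TensorTypeSet add(TensorTypeId t) const {
    return fromRepr(repr_ | (1ULL << static_cast<uint8_t>(t)));
  }
  TensorTypeSet remove(TensorTypeId t) const {
    return fromRepr(repr_ & ~(1ULL << static_cast<uint8_t>(t)));
  }

  // Pick one ID out of the set for dispatch; "highest bit wins" stands in
  // for whatever priority ordering the real implementation uses.
  TensorTypeId highestPriorityTypeId() const {
    if (repr_ == 0) return TensorTypeId::UndefinedTensorId;
    uint64_t r = repr_;
    uint8_t idx = 0;
    while (r >>= 1) idx++;
    return static_cast<TensorTypeId>(idx);
  }

 private:
  static TensorTypeSet fromRepr(uint64_t r) {
    TensorTypeSet s;
    s.repr_ = r;
    return s;
  }
  uint64_t repr_ = 0;  // the 64-bit bitset
};
```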

Listing of semantic changes:
* Many TensorImpl related constructors just propagate TensorTypeId to a parent constructor. These are pretty simple to adjust.
  * Backend extensions are now in the business of explicitly constructing a TensorTypeSet and then passing it in. This is probably OK for now, but once the Variable change lands, these dispatch IDs may get immediately overwritten to also have Variable set.
* `sparseTensorSetToDeviceType` and similar functions previously did an equality test with TensorTypeId to determine the appropriate device type. That equality test is now replaced with a set inclusion test (see the sketch after this list). This is valid under the assumption that we never have weird sets like "this tensor is simultaneously a sparse CPU tensor and a sparse CUDA tensor", an assumption that holds under the short-term plan of adding Variable to the dispatch IDs.
* `impl::dispatchTypeId` was generally introduced for cases where we legitimately need to convert from `TensorTypeSet -> TensorTypeId` in a dispatch-related manner. At the moment the implementation is trivial, but it will soon be adjusted to handle TLS. I've tried to make these call sites as forwards compatible as possible:
  * `checked_tensor_unwrap` and co now use `dispatchTypeId`. When Variable is added to the type set, these will always be called in a context where the Variable type ID is disabled, so we will get the correct underlying tensor type ID.
  * Uses of `Backend` in dispatch are now replaced with `TensorTypeSet`. The general heuristic for whether to accept a `TensorTypeId` or a `TensorTypeSet` is that we want to keep the generated code as simple as possible; it is easier to retrieve a `TensorTypeSet`, so that is the more appropriate API in these cases.
* In some cases, I could not conveniently switch an implementation to the new semantics because it was blocked on some other refactor. In those cases, I introduced `legacyExtractTypeId`, which provides a BC-compatible `TensorTypeSet` -> `TensorTypeId` conversion that continues to report the same values it would have prior to this change. This is **different** from `dispatchTypeId`: `legacyExtractTypeId` does NOT respect TLS and always ignores Variable type IDs (also illustrated in the sketch after this list).
  * c10 dispatcher tests, which are oblivious to Variable dispatch, use this BC function (actually, they use `extractTypeId`, an overload for Tensor).
  * The implementation of the `new_*` methods relies heavily on tensor type ID; I chose not to unwind this here. A PR to refactor it is at https://github.com/pytorch/pytorch/pull/25475
  * Slicing also relies on tensor type ID; see `torch/csrc/autograd/python_variable_indexing.cpp` (though in some cases in this file, I was able to replace the use of tensor type ID with TensorOptions).
* In some cases, there is an equality test on tensor type ID which would be better done by testing individual "tensor axes" (e.g. device type, layout, scalar type). In those cases, I replaced the single type ID equality test with equality tests on those axes.
  * Example: `torch/csrc/nn/type_checks.h`
  * There is a total punt in `torch/csrc/tensor/python_tensor.cpp` where "instance of" checking is done via dispatch ids. In general, the Variable-ness of a tensor doesn't participate in instanceof testing. It's not entirely clear what to do here.
  * Instead of storing `Backend` in `VariableInfo`, we now just store Layout.
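
To tie the bullets above together, here is a schematic of the three idioms mentioned: the set-inclusion test, `legacyExtractTypeId`, and `dispatchTypeId`. It builds on the hypothetical `TensorTypeSet` sketch earlier in this summary; the real signatures live in c10/ATen and will differ, in particular once `dispatchTypeId` starts consulting TLS.

```
// Schematic, building on the TensorTypeSet sketch above; not the real c10 code.

// Old style: equality on a single type ID.
//   bool is_sparse_cuda = (tensor.type_id() == TensorTypeId::SparseCUDATensorId);
// New style: set inclusion, which stays correct once VariableTensorId is also
// present in the same set.
bool isSparseCUDA(TensorTypeSet ts) {
  return ts.has(TensorTypeId::SparseCUDATensorId);
}

// legacyExtractTypeId: BC-compatible. It never respects TLS and always strips
// the Variable bit, so callers keep seeing the same ID they saw before this PR.
TensorTypeId legacyExtractTypeId(TensorTypeSet ts) {
  return ts.remove(TensorTypeId::VariableTensorId).highestPriorityTypeId();
}

// dispatchTypeId: the dispatch-time extraction. Trivial today, but this is the
// hook where a thread-local "excluded" set (e.g. Variable, once Variable joins
// the type set) would be masked out before picking an ID.
TensorTypeId dispatchTypeId(TensorTypeSet ts) {
  return ts.highestPriorityTypeId();
}
```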

c10 dispatcher test updates were done with:

```
:%s/\([^ ]\+\)\.type_id()/extractTypeId(\1)/g
:%s/\([^( ]\+\)->type_id()/extractTypeId(*\1)/g
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/25308

Differential Revision: D17092791

Test Plan: sandcastle and ossci

Reviewed By: bwasti

Pulled By: ezyang

fbshipit-source-id: 22207d14fe62dd31ee19cc5011af22e3d9aabb5b
2019-09-10 10:30:54 -07:00


#include <torch/csrc/autograd/python_variable_indexing.h>

#include <torch/csrc/DynamicTypes.h>
#include <torch/csrc/Exceptions.h>
#include <torch/csrc/THP_export.h>
#include <torch/csrc/autograd/function.h>
#include <torch/csrc/autograd/python_variable.h>
#include <torch/csrc/autograd/utils/wrap_outputs.h>
#include <torch/csrc/autograd/variable.h>
#include <torch/csrc/utils/python_compat.h>
#include <torch/csrc/utils/python_numbers.h>
#include <torch/csrc/utils/tensor_new.h>
#include <torch/csrc/jit/tracer.h>
#include <torch/csrc/utils/tensor_types.h>

#include <ATen/DeviceGuard.h>
#include <ATen/ExpandUtils.h>
#include <c10/core/TensorOptions.h>
#include <ATen/core/LegacyTypeDispatch.h>

#include <vector>
#include <tuple>
using namespace at;
using namespace torch::autograd::utils;

namespace torch { namespace autograd {

Py_ssize_t THPVariable_length(PyObject* self) {
  HANDLE_TH_ERRORS
  auto& self_ = reinterpret_cast<THPVariable*>(self)->cdata;
  if (self_.dim() == 0) {
    return 0;
  }
  return (Py_ssize_t)self_.size(0);
  END_HANDLE_TH_ERRORS_RET(-1)
}

// We allow indexing by integers, slices, ellipsis, None, Variables,
// and tuples of those types. We also handle bools as if they were a
// Variable[ByteTensor].
static int64_t count_specified_dimensions(PyObject* index) {
  // Count the number of indexed dimensions (everything but ellipsis and None)
  int64_t count = 0;
  auto size = PyTuple_GET_SIZE(index); // NOLINT(cppcoreguidelines-pro-type-cstyle-cast)
  for (Py_ssize_t i = 0; i < size; i++) {
    PyObject* obj = PyTuple_GET_ITEM(index, i); // NOLINT(cppcoreguidelines-pro-type-cstyle-cast)
    if (THPVariable_Check(obj)) {
      auto& var = reinterpret_cast<THPVariable*>(obj)->cdata;
      if (var.scalar_type() == kByte || var.scalar_type() == kBool) {
        count += var.dim();
      } else {
        count++;
      }
    } else if (obj != Py_None && obj != Py_Ellipsis && obj != Py_True && obj != Py_False) { // NOLINT(cppcoreguidelines-pro-type-cstyle-cast)
      count++;
    }
  }
  return count;
}

[[noreturn]]
static void invalid_index(PyObject* obj) {
  throw IndexError(
    "only integers, slices (`:`), ellipsis (`...`), None and long or byte "
    "Variables are valid indices (got %s)", Py_TYPE(obj)->tp_name);
}

static Variable applySlice(const Variable& self, int64_t dim, PyObject* slice, bool ensure_view=false) {
  Py_ssize_t start, stop, step;
  auto length = self.size(dim);
  if (!THPUtils_unpackSlice(slice, &start, &stop, &step)) {
    throw python_error();
  }
  if (step == 0) {
    throw ValueError("step cannot be zero");
  }
  if (step < 0) {
    // TODO: implement negative step
    throw ValueError("negative step not yet supported");
  }
  // Skip this optimization if we are tracing, as the trace may be polymorphic
  // over the shape of the `self` tensor, and we still want to record
  // the slice.
  if (!ensure_view && start == 0 && stop == length && step == 1 && !jit::tracer::isTracing()) {
    return self;
  }
  return self.slice(dim, start, stop, step);
}

static Variable applySelect(const Variable& self, int64_t dim, int64_t index, int64_t real_dim=0) {
  if (index == 0 && dim == 0 && self.dim() == 0) {
    throw IndexError(
        "invalid index of a 0-dim tensor. "
        "Use tensor.item() to convert a 0-dim tensor to a Python number");
  }
  int64_t size = self.size(dim);
  if (index < -size || index >= size) {
    throw IndexError("index %lld is out of bounds for dimension %lld with size %lld",
      index, real_dim, size);
  }
  // if the index is negative, do not normalize it because that would fix the index
  // on the current tensor size in the tracer.
  // aten::select also works on negative indices
  return self.select(dim, index);
}

static Variable sequenceToVariable(c10::TensorTypeId type_id, PyObject* seq) {
  return torch::utils::indexing_tensor_from_data(type_id, kLong, c10::nullopt, seq);
}

static Variable valueToTensor(c10::TensorOptions options, PyObject* value) {
  if (THPVariable_Check(value)) {
    return reinterpret_cast<THPVariable*>(value)->cdata;
  }
  options = options.is_variable(true);
  if (THPUtils_checkLong(value) || PyBool_Check(value)) {
    return at::scalar_tensor(Scalar(THPUtils_unpackLong(value)), options);
  }
  if (PyFloat_Check(value)) {
    return at::scalar_tensor(Scalar(THPUtils_unpackDouble(value)), options);
  }
  throw TypeError(
    "can't assign a %s to a %s",
    Py_TYPE(value)->tp_name,
    torch::utils::type_to_string(getNonVariableDeprecatedTypeProperties(options.backend(), typeMetaToScalarType(options.dtype()))).c_str());
}

static Variable boolToIndexingTensor(const Variable& self, bool value) {
  // booleans add a dimension of size 1. true indexes this dimension as if 0:, false as empty.
  if (value) {
    return at::zeros({1}, self.options().dtype(kLong));
  } else {
    return at::empty({0}, self.options().dtype(kLong));
  }
}
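
// Walk the (already wrapped) index tuple once: basic indices (integers,
// slices, None, Ellipsis, 0-dim integer tensors) are applied to `self` via
// select/slice/unsqueeze, while the remaining indices (bools, byte/bool
// tensors, other tensors, nested sequences) are converted to indexing tensors
// in `outIndices` for a later dispatch to index()/index_put_().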
static Variable applySlicing(const Variable& self, PyObject* index, variable_list& outIndices) {
  int64_t size = PyTuple_GET_SIZE(index); // NOLINT(cppcoreguidelines-pro-type-cstyle-cast)
  int64_t dim = 0;
  int64_t specified_dims = count_specified_dimensions(index);

  auto handle_var = [&](const Variable& var) {
    // TODO: check scalarType
    outIndices.resize(dim + 1);
    outIndices[dim] = var;
    dim++;
  };

  if (specified_dims > self.dim()) {
    throw IndexError("too many indices for tensor of dimension %d", (int)self.dim());
  }

  Variable result = self;
  for (int64_t i = 0; i < size; i++) {
    PyObject* obj = PyTuple_GET_ITEM(index, i); // NOLINT(cppcoreguidelines-pro-type-cstyle-cast)
    if (THPUtils_checkLong(obj)) {
      result = applySelect(result, dim, THPUtils_unpackLong(obj), i);
    } else if (PySlice_Check(obj)) {
      result = applySlice(result, dim, obj);
      dim++;
    } else if (obj == Py_Ellipsis) {
      dim += self.dim() - specified_dims;
    } else if (obj == Py_None) {
      result = result.unsqueeze(dim);
      dim++;
    } else if (PyBool_Check(obj)) {
      result = result.unsqueeze(dim);
      handle_var(boolToIndexingTensor(result, obj == Py_True)); // NOLINT(cppcoreguidelines-pro-type-cstyle-cast)
    } else if (THPVariable_Check(obj)) {
      auto& var = THPVariable_Unpack(obj);
      auto scalar_type = var.scalar_type();
      if (var.dim() == 0 && at::isIntegralType(scalar_type, /*includeBool=*/true)) {
        if (scalar_type != at::kByte && scalar_type != at::kBool) {
          result = applySelect(result, dim, THPUtils_unpackLong(obj), i);
        } else {
          result = result.unsqueeze(dim);
          if (scalar_type == at::kBool) {
            handle_var(boolToIndexingTensor(result, var.item<bool>() != 0));
          } else {
            handle_var(boolToIndexingTensor(result, var.item<uint8_t>() != 0));
          }
        }
      } else {
        handle_var(var);
      }
    } else if (PySequence_Check(obj)) {
      // TODO: Naughty naughty get out of jail free
      // (Fixing this means I have to fix the call chain though :/)
      handle_var(sequenceToVariable(legacyExtractTypeId(self), obj));
    } else {
      auto index = THPObjectPtr(PyNumber_Index(obj));
      if (!index) {
        PyErr_Clear();
        invalid_index(obj);
      }
      result = applySelect(result, dim, THPUtils_unpackLong(index), i);
    }
  }
  return result;
}
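
// Move each defined indexing tensor onto `self`'s device so that the later
// index()/index_put_() call does not fail on a device mismatch; undefined
// (skipped) indices are passed through unchanged.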
static std::vector<Tensor> typeConvertIndices(const Variable& self, const variable_list& indices) {
  std::vector<Tensor> converted_inds(indices.size());
  for (size_t i = 0; i < indices.size(); ++i) {
    const auto &ind = indices[i];
    if (ind.defined()) {
      converted_inds[i] = ind.to(ind.options().device(self.device()));
    } else {
      converted_inds[i] = indices[i];
    }
  }
  return converted_inds;
}

static Variable dispatch_index(const Variable& self, const variable_list& indices) {
  AutoNoGIL no_gil;
  std::vector<Tensor> converted_indices = typeConvertIndices(self, indices);
  OptionalDeviceGuard device_guard(device_of(self));
  return self.index(converted_indices);
}

static Variable dispatch_index_put_(Variable& self, const variable_list& indices, const Variable& value) {
  AutoNoGIL no_gil;
  std::vector<Tensor> converted_indices = typeConvertIndices(self, indices);
  OptionalDeviceGuard device_guard(device_of(self));
  return self.index_put_(converted_indices, value);
}

static bool treatSequenceAsTuple(PyObject* index) {
  if (PyTuple_Check(index)) {
    return true;
  }
  if (!PySequence_Check(index)) {
    return false;
  }
  // This uses a heuristic from NumPy for determining whether to treat
  // non-tuple sequences as if they were a tuple. From the NumPy code comments:
  //
  // "At this point, we're left with a non-tuple, non-array, sequence:
  //  typically, a list. We use some somewhat-arbitrary heuristics from here
  //  onwards to decided whether to treat that list as a single index, or a
  //  list of indices. Backwards compatibility only takes effect for short
  //  sequences - otherwise we treat it like any other scalar."
  auto n = PySequence_Size(index);
  if (n < 0) {
    // Negative size indicates a Python error in the PySequence_Size call.
    PyErr_Clear();
    return false;
  }
  if (n >= 32) {
    return false;
  }
  for (Py_ssize_t i = 0; i < n; i++) {
    auto obj = THPObjectPtr{PySequence_GetItem(index, i)};
    if (!obj.get()) {
      PyErr_Clear();
      return false;
    }
    if (THPVariable_Check(obj.get()) || PySequence_Check(obj.get()) || PySlice_Check(obj.get())) {
      return true;
    }
    if (obj.get() == Py_Ellipsis || obj.get() == Py_None) {
      return true;
    }
  }
  return false;
}

static THPObjectPtr wrapTuple(PyObject* index) {
  THPObjectPtr res;
  if (treatSequenceAsTuple(index)) {
    res = PySequence_Tuple(index);
  } else {
    res = PyTuple_Pack(1, index); // NOLINT(cppcoreguidelines-pro-type-cstyle-cast)
  }
  if (!res) throw python_error();
  return res;
}

PyObject* THPVariable_getitem(PyObject* self, PyObject* index) {
  HANDLE_TH_ERRORS
  auto& self_ = reinterpret_cast<THPVariable*>(self)->cdata;
  OptionalDeviceGuard device_guard(device_of(self_));

  // handle simple types: integers, slices, ellipsis
  if (index == Py_None) {
    return wrap(self_.unsqueeze(0));
  } else if (index == Py_Ellipsis) {
    return wrap(at::alias(self_));
  } else if (THPUtils_checkLong(index)) {
    return wrap(applySelect(self_, 0, THPUtils_unpackLong(index)));
  } else if (PySlice_Check(index)) {
    return wrap(applySlice(self_, 0, index, true));
  }

  // wrap index in a tuple if it's not already one
  THPObjectPtr holder = wrapTuple(index);

  variable_list variableIndices;
  Variable sliced = applySlicing(self_, holder.get(), variableIndices);
  if (variableIndices.empty()) {
    if (sliced.is_same(self_)) {
      // ensure we return a shallow copy for things like x[...]
      sliced = at::alias(sliced);
    }
    return wrap(sliced);
  }

  // indexing by tensors ("advanced" indexing)
  return wrap(dispatch_index(sliced, variableIndices));

  Py_RETURN_NONE;
  END_HANDLE_TH_ERRORS
}

// To match numpy semantics:
// As a special case for backwards compatibility,
// strip away unit dimensions from the left of 'src'
static IntArrayRef slicePrefix1sSize(IntArrayRef sizes) {
  size_t first_non1_src = sizes.size();
  for (size_t i = 0; i < sizes.size(); ++i) {
    if (sizes[i] != 1) {
      first_non1_src = i;
      break;
    }
  }
  return sizes.slice(first_non1_src);
}

// Copy `src` into `dst`: strip leading size-1 dimensions from `src` (numpy
// setitem semantics), broadcast it against `dst`, then copy_ in place.
static void copy_to(Variable dst, const Variable& src) {
  Tensor b_src;
  IntArrayRef sliced_src_sizes = slicePrefix1sSize(src.sizes());
  std::tie(b_src) = expand_inplace(dst, src.view(sliced_src_sizes), "setitem");
  dst.copy_(b_src);
}

int THPVariable_setitem(PyObject* self, PyObject* index, PyObject* py_value) {
  HANDLE_TH_ERRORS
  if (py_value == nullptr) {
    throw TypeError("Tensor does not support deleting items");
  }

  auto& self_ = reinterpret_cast<THPVariable*>(self)->cdata;
  OptionalDeviceGuard device_guard(device_of(self_));
  Variable value;
  // TODO: This qint special case looks very suspicious...
  if (isQIntType(self_.scalar_type())) {
    value = valueToTensor(device(kCPU).dtype(kFloat), py_value);
  } else {
    value = valueToTensor(self_.options(), py_value);
  }

  // handle simple types: integers, slices, ellipsis, bool
  if (index == Py_False) { // NOLINT(cppcoreguidelines-pro-type-cstyle-cast)
    // do nothing for false (technically we should check the size, but we don't have
    // real 0-sized shapes.)
    return 0;
  } else if (index == Py_Ellipsis) {
    copy_to(self_, value);
    return 0;
  } else if (index == Py_None || index == Py_True) { // NOLINT(cppcoreguidelines-pro-type-cstyle-cast)
    copy_to(self_.unsqueeze(0), value);
    return 0;
  } else if (THPUtils_checkLong(index)) {
    copy_to(applySelect(self_, 0, THPUtils_unpackLong(index)), value);
    return 0;
  } else if (PySlice_Check(index)) {
    copy_to(applySlice(self_, 0, index), value);
    return 0;
  }

  // wrap index in a tuple if it's not already one
  THPObjectPtr holder = wrapTuple(index);

  variable_list variableIndices;
  Variable sliced = applySlicing(self_, holder.get(), variableIndices);
  if (variableIndices.empty()) {
    copy_to(sliced, value);
    return 0;
  }

  IntArrayRef slicedValueSizes = slicePrefix1sSize(value.sizes());
  torch::autograd::Variable valuesSliced;
  if (!value.sizes().equals(slicedValueSizes)) {
    valuesSliced = value.view(slicedValueSizes);
  } else {
    valuesSliced = value;
  }
  dispatch_index_put_(sliced, variableIndices, valuesSliced);
  return 0;
  END_HANDLE_TH_ERRORS_RET(-1)
}

}} // namespace torch::autograd