pytorch/torch/csrc/autograd/cpp_hook.cpp
Peter Bell d701357d92 Factor out TensorBase that doesn't depend on native operators (#63612)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63612

This makes Tensor inherit from a new class TensorBase that provides a subset of Tensor that doesn't
directly depend on native_functions.yaml. Code that only includes TensorBase.h will thus not need to
be rebuilt every time someone changes an operator signature.

Making `Tensor` inherit from this class means that `const TensorBase&` parameters will be callable
with an ordinary `Tensor`. I've also made `Tensor` constructible and assignable from `TensorBase` to
minimize friction in code mixing the two types.
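For illustration, a minimal sketch of the pattern this enables (the helper below is hypothetical, not
part of this PR): a function that only inspects tensor metadata can take `TensorBase` and stay off the
operator-rebuild path, while remaining callable with a plain `Tensor`:

```cpp
#include <ATen/core/TensorBase.h>

// Hypothetical helper: it touches only metadata, so TensorBase suffices
// and this file never needs Tensor.h or Functions.h. An at::Tensor
// argument binds to the parameter via the new inheritance relationship.
bool is_dense_and_defined(const at::TensorBase& t) {
  return t.defined() && t.is_contiguous();
}
```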

To help enforce that `Tensor.h` and `Functions.h` aren't accidentally included, I've added an error
to `Operators.h` that fires if `TORCH_ASSERT_NO_OPERATORS` is defined. We can either set this in the
build system for certain folders, or just define it at the top of any file.
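As a sketch, a translation unit would opt in like this:

```cpp
// Define before any ATen include; if anything here transitively pulls in
// Operators.h, the build fails instead of silently regaining a dependency
// on native_functions.yaml.
#define TORCH_ASSERT_NO_OPERATORS
#include <ATen/core/TensorBase.h>  // fine: no operator dependency
// #include <ATen/Functions.h>     // would now trigger the error
```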

I've also included an example of manually special-casing the commonly used `contiguous` operator.
The inline function's slow path defers to `TensorBase::__dispatch_contiguous`, which is defined in
`Tensor.cpp`. I've made `OptionalTensorRef` constructible from `TensorBase`, so I can materialize
a `Tensor` for use in dispatch without actually increasing its refcount.
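A simplified sketch of the shape this takes (illustrative, not the verbatim header):

```cpp
// In TensorBase.h: the inline fast path avoids the dispatcher entirely
// when the tensor is already contiguous; the slow path lives out-of-line
// in Tensor.cpp, so including this header adds no operator dependency.
class TensorBase {
 public:
  TensorBase contiguous(MemoryFormat memory_format = MemoryFormat::Contiguous) const {
    if (is_contiguous(memory_format)) {
      return *this;  // fast path, fully inline
    }
    return __dispatch_contiguous(memory_format);  // defined in Tensor.cpp
  }
  // ...
};
```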

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D30728580

Pulled By: ezyang

fbshipit-source-id: 2cbc8eee08043382ee6904ea8e743b1286921c03
2021-09-08 13:28:54 -07:00

#include <torch/csrc/autograd/cpp_hook.h>
#include <torch/csrc/autograd/variable.h>
#include <torch/csrc/autograd/custom_function.h>
namespace {
using torch::autograd::Variable;
void check_single_result(
    const at::TensorBase& value,
    const at::TensorBase& result,
    const std::string& hook_name) {
  if (!value.defined()) {
    throw std::runtime_error(
        "can't replace an empty gradient with a non-empty value");
  }
  torch::autograd::check_variable_result(value, result, hook_name);
}
} // namespace
namespace torch { namespace autograd {

// NOLINTNEXTLINE(modernize-pass-by-value)
CppFunctionPreHook::CppFunctionPreHook(const std::shared_ptr<hooks_list>& hooks, int value_idx)
    : hooks_(hooks), value_idx_(value_idx) {}

variable_list CppFunctionPreHook::operator()(const variable_list& values) {
  auto value = values[value_idx_];
  for (unsigned i = 0; i < hooks_->size(); ++i) {
    auto& hook = (*hooks_)[i];
    if (!hook) {
      // Hook was removed; its slot stays so later hook indices remain valid.
      continue;
    }
    auto res = hook(value);
    if (!res.defined()) {
      // Hook returned an undefined tensor: leave the gradient unchanged.
      continue;
    }
    check_single_result(value, res, c10::to_string(i));
    // Feed the (possibly replaced) gradient to the next hook.
    value = std::move(res);
  }
  // Return a copy of the inputs with only the hooked value swapped out.
  variable_list results(values);
  results[value_idx_] = value;
  return results;
}

}} // namespace torch::autograd
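
For context, a minimal usage sketch (plain libtorch API, not part of this diff) showing how a C++ hook
flows through `CppFunctionPreHook::operator()` during backward:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::ones({2, 2}, torch::requires_grad());
  // The lambda is stored in a hooks_list and invoked by
  // CppFunctionPreHook::operator() above; returning a defined tensor
  // replaces the gradient seen by downstream consumers.
  x.register_hook([](torch::Tensor grad) { return grad * 2; });
  (x * x).sum().backward();
  // d(sum(x*x))/dx = 2x = 2 everywhere; the hook doubles it to 4.
  std::cout << x.grad() << std::endl;
}
```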