pytorch/torch/csrc/utils/tensor_numpy.h
Peter Bell 40d1f77384 Codegen: python_torch_functions only include relevant operators (#68693)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68693

Generation of Python bindings for native functions is split over 8
different files: one for each namespace, with the torch namespace
split into 3 shards, and methods in their own file as well. This
change ensures that editing any single (non-method) operator only
causes one of these files to be rebuilt.

Test Plan: Imported from OSS

Reviewed By: jbschlosser

Differential Revision: D32596270

Pulled By: albanD

fbshipit-source-id: 0570ec69e7476b8f1bc21138ba18fe8f95ebbe3f
(cherry picked from commit ba0fc71a3a)
2022-01-21 15:37:06 +00:00

#pragma once

#include <torch/csrc/python_headers.h>
#include <ATen/core/Tensor.h>

namespace torch { namespace utils {

// Converts an ATen tensor to a NumPy ndarray.
PyObject* tensor_to_numpy(const at::Tensor& tensor);
// Creates an ATen tensor from a NumPy ndarray, optionally warning when the
// array is read-only.
at::Tensor tensor_from_numpy(PyObject* obj, bool warn_if_not_writeable = true);

// Conversions between ATen scalar types and NumPy dtype numbers.
int aten_to_numpy_dtype(const at::ScalarType scalar_type);
at::ScalarType numpy_dtype_to_aten(int dtype);

// Runtime checks for NumPy availability and for NumPy scalar objects.
bool is_numpy_available();
bool is_numpy_int(PyObject* obj);
bool is_numpy_scalar(PyObject* obj);

// Emits the warning used when a non-writeable NumPy array is wrapped.
void warn_numpy_not_writeable();

// Creates a tensor from an object implementing __cuda_array_interface__.
at::Tensor tensor_from_cuda_array_interface(PyObject* obj);

}} // namespace torch::utils
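
For reference, below is a minimal, hypothetical usage sketch of this API, not part of the header or of PyTorch itself. It assumes the Python interpreter is initialized, the caller holds the GIL, and NumPy is importable; roundtrip_example is an illustrative name.

// Hypothetical usage sketch: round-trips a small tensor through NumPy.
// Assumes the GIL is held, the interpreter is initialized, and NumPy is
// importable; roundtrip_example is illustrative, not a PyTorch function.
#include <torch/csrc/utils/tensor_numpy.h>
#include <ATen/ATen.h>

static PyObject* roundtrip_example() {
  if (!torch::utils::is_numpy_available()) {
    PyErr_SetString(PyExc_RuntimeError, "NumPy is not available");
    return nullptr;
  }
  // Expose an ATen tensor to Python as a NumPy array.
  at::Tensor t = at::arange(6).reshape({2, 3});
  PyObject* array = torch::utils::tensor_to_numpy(t);
  if (array == nullptr) {
    return nullptr; // a Python exception has already been set
  }
  // Convert back; warn_if_not_writeable=true matches the declared default.
  at::Tensor back =
      torch::utils::tensor_from_numpy(array, /*warn_if_not_writeable=*/true);
  (void)back;
  return array; // hand the NumPy array back to the caller
}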