Summary:
This PR restructures the public-facing C++ headers in a backwards-compatible way. The problem right now is that the C++ extension header `torch/extension.h` does not include the C++ frontend headers from `torch/torch.h`, even though those headers can be convenient in extensions. Further, including the main C++ frontend header `torch/torch.h` in a C++ extension currently raises a warning, because we want to move people away from including only `torch/torch.h` in extensions (which was the correct thing to do six months ago): that header *used* to be the main C++ extension header but is now the main C++ frontend header. In short: it should be possible to use the C++ frontend functionality provided by `torch/torch.h` in extensions, but without including that header directly, since doing so is deprecated for extensions.
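Concretely, the goal is that a C++ extension can use the C++ frontend (e.g. `torch::nn`) through `torch/extension.h` alone. A minimal sketch, where `apply_linear` is a hypothetical example function:

// extension.cpp -- hypothetical C++ extension using C++ frontend features
// through torch/extension.h only (no torch/torch.h include).
#include <torch/extension.h>

torch::Tensor apply_linear(torch::Tensor input) {
  // torch::nn comes from the C++ frontend headers that torch/extension.h
  // now pulls in.
  torch::nn::Linear linear(/*in_features=*/4, /*out_features=*/2);
  return linear->forward(input);
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("apply_linear", &apply_linear, "Apply an nn::Linear module");
}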
For clarification: why is `torch/torch.h` deprecated for extensions? Because extensions need to include the Python bindings, while the C++ frontend must stay free of any Python dependency. For now, the Python includes are still pulled into `torch/torch.h` whenever the header is used from a C++ extension (enabled by a macro that `cpp_extensions.py` passes to the compiler), so as not to break existing users, but this should change in the future.
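Roughly, that compatibility shim amounts to a guarded include in `torch/torch.h`; a sketch, assuming the define passed by `cpp_extensions.py` is named `TORCH_API_INCLUDE_EXTENSION_H`:

// Sketch of the compatibility shim in torch/torch.h.
#pragma once

// ... C++ frontend includes ...

#ifdef TORCH_API_INCLUDE_EXTENSION_H
// Defined when building a C++ extension: keep the Python includes so that
// existing extensions do not break, but point users at the right header.
#include <torch/extension.h>
#warning \
    "Including torch/torch.h for C++ extensions is deprecated. Please include torch/extension.h"
#endif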
The overall fix is simple:
1. The C++ frontend sub-headers move from `torch/torch.h` into `torch/all.h`.
2. `torch/all.h` is then included in:
   1. `torch/torch.h`, as before.
   2. `torch/extension.h`, so that C++ extension users now also get this functionality (see the sketch after this list).
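The resulting layout, as a sketch (the sub-header names under `torch/all.h` are illustrative, not exhaustive, and the Python-bindings include is an assumption):

// torch/all.h -- aggregates the C++ frontend sub-headers
// (illustrative list).
#pragma once
#include <torch/data.h>
#include <torch/nn.h>
#include <torch/optim.h>
#include <torch/serialize.h>

// torch/extension.h -- the main header for C++ extensions.
#pragma once
#include <torch/all.h>     // C++ frontend functionality, newly available here
#include <torch/python.h>  // Python bindings that extensions need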
With the next release we can then:
1. Remove the Python includes from `torch/torch.h`.
2. Move the C++-only sub-headers from `torch/all.h` back into `torch/torch.h`.
3. Make `torch/extension.h` include `torch/torch.h` and `Python.h`.
This will then break old C++ extensions that include `torch/torch.h`, since the correct header for C++ extensions is `torch/extension.h`.
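In that future state the headers would look roughly like the following sketch:

// torch/torch.h -- future: the pure C++ frontend, with the C++-only
// sub-headers moved back in and no Python includes.
#pragma once
#include <torch/nn.h>
#include <torch/optim.h>
// ... remaining C++ frontend sub-headers ...

// torch/extension.h -- future: the C++ frontend plus Python, for extensions.
#pragma once
#include <torch/torch.h>
#include <Python.h>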
I've also gone ahead and deprecated `torch::CPU` et al., since those are long overdue for removal.
ezyang soumith apaszke fmassa
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13482
Differential Revision: D12924999
Pulled By: goldsborough
fbshipit-source-id: 5bb7bdc005fcb7b525195b769065176514efad8a
46 lines | 1.7 KiB | C++
#pragma once

#include <ATen/ATen.h>
#include <ATen/core/Deprecated.h>
#include <torch/csrc/THP_export.h>

namespace torch {

// NOTE: This API is currently highly experimental and may change drastically
// in the near future.

// These functions provide a small wrapper around ATen, ensuring that we
// create tensors with type Variable rather than raw tensors when we create
// new tensors. We also provide a few accessors, like requires_grad, that make
// it easier to get to Variable information when we have an at::Tensor.

/// Returns a `TypeExtendedInterface` object for the given backend (e.g.
/// `at::kCPU`) and `ScalarType` (e.g. `at::kDouble`).
/// TODO: Eliminate this function as much as possible
AT_DEPRECATED(THP_CLASS at::TypeExtendedInterface& getVariableType(
    at::Backend backend,
    at::ScalarType type));

/// Returns a `TypeExtendedInterface` object for the CPU backend and the given
/// `ScalarType` (e.g. `at::kDouble`). Equivalent to `getVariableType(kCPU,
/// type)`.
/// TODO: Eliminate this function as much as possible
AT_DEPRECATED(THP_CLASS at::TypeExtendedInterface& CPU(at::ScalarType type));

/// Returns a `TypeExtendedInterface` object for the CUDA backend and the given
/// `ScalarType` (e.g. `at::kDouble`). Equivalent to `getVariableType(kCUDA,
/// type)`.
/// TODO: Eliminate this function as much as possible
AT_DEPRECATED(THP_CLASS at::TypeExtendedInterface& CUDA(at::ScalarType type));

/// Sets the `requires_grad` property of the given `Tensor`.
AT_DEPRECATED(THP_CLASS void set_requires_grad(
    at::Tensor& tensor,
    bool requires_grad) noexcept);

/// Returns the `requires_grad` of the given `Tensor`.
AT_DEPRECATED(THP_CLASS bool requires_grad(const at::Tensor& tensor) noexcept);

} // namespace torch
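For callers of the deprecated functions above, migration looks roughly like this sketch; the replacements are the standard `TensorOptions`-based factories and `Tensor` member functions:

#include <torch/torch.h>

int main() {
  // Before (deprecated): construct through a fetched Type.
  //   at::Tensor a = torch::CPU(at::kFloat).ones({2, 2});

  // After: pass dtype/device as TensorOptions to a factory function.
  at::Tensor a =
      torch::ones({2, 2}, torch::dtype(torch::kFloat).device(torch::kCPU));

  // Prefer the Tensor member functions over the deprecated free accessors.
  a.set_requires_grad(true);         // instead of torch::set_requires_grad(a, true)
  return a.requires_grad() ? 0 : 1;  // instead of torch::requires_grad(a)
}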