pytorch/torch/csrc/utils/device_lazy_init.h
rzou 889e3eeed3 Avoid cuda init to FakeTensorMode (#124413)
Also partially fixes #122109

This PR:
- We add a C++ flag (only_lift_cpu_tensors) to toggle the
  torch.tensor(1, device='cuda') ctor strategy. When false (the
  default), PyTorch keeps its current behavior of unconditionally
  constructing a concrete CUDA tensor and then calling lift_fresh on
  it. When true, it instead constructs a concrete CPU tensor, calls
  lift_fresh, and then calls Tensor.to(device) (under any ambient
  modes); see the sketch after this list.
- FakeTensorMode flips this flag depending on whether CUDA is
  available. We don't unconditionally set the flag to True because
  that would likely be BC-breaking.
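
For illustration, a minimal sketch of the observable effect (assumes a
process in which CUDA has not been touched yet; the exact behavior
depends on the availability check above):

    import torch
    from torch._subclasses.fake_tensor import FakeTensorMode

    # Under FakeTensorMode the new path constructs a CPU tensor, lifts
    # it, and moves it to the target device under the mode, so the real
    # CUDA runtime is never initialized.
    with FakeTensorMode():
        t = torch.tensor(1, device='cuda')

    print(type(t))                      # a FakeTensor with device 'cuda'
    print(torch.cuda.is_initialized())  # expected: False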

Test Plan:
- existing tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124413
Approved by: https://github.com/eellison
2024-04-19 02:39:35 +00:00

#pragma once

#include <c10/core/TensorOptions.h>

// device_lazy_init() is always compiled, even for CPU-only builds.

namespace torch::utils {

/**
 * This lazy-initialization mechanism is designed to be shared by the
 * device backends; currently, CUDA and XPU follow this design. The
 * function `device_lazy_init` MUST be called before you attempt to
 * access any Type (CUDA or XPU) object from ATen, in any way. It
 * guarantees that the device runtime is lazily initialized the first
 * time a runtime API is requested.
 *
 * Here are some common ways that a device object may be retrieved:
 * - You call getNonVariableType or getNonVariableTypeOpt
 * - You call toBackend() on a Type
 *
 * It's important to get this right, because if you forget a call you'll
 * get an oblique error message like "Cannot initialize CUDA without
 * ATen_cuda library" or "Cannot initialize XPU without ATen_xpu library"
 * when you try to use CUDA or XPU functionality from a CPU-only build,
 * which is not good UX.
 */
void device_lazy_init(at::DeviceType device_type);
void set_requires_device_init(at::DeviceType device_type, bool value);
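
// Illustrative (hypothetical) call site, not part of this header: code
// that is about to touch CUDA state directly would first run
//
//   torch::utils::device_lazy_init(at::kCUDA);
//
// so the CUDA runtime is brought up on demand rather than eagerly.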
static inline void maybe_initialize_device(at::Device& device) {
  // Add more devices here to enable lazy initialization.
  if (device.is_cuda() || device.is_xpu() || device.is_privateuseone()) {
    device_lazy_init(device.type());
  }
}

static inline void maybe_initialize_device(c10::optional<at::Device>& device) {
  if (!device.has_value()) {
    return;
  }
  maybe_initialize_device(device.value());
}

static inline void maybe_initialize_device(const at::TensorOptions& options) {
  auto device = options.device();
  maybe_initialize_device(device);
}
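
// Illustrative (hypothetical) usage, not part of this header: a factory
// binding that receives user-supplied TensorOptions can ensure the
// target backend is initialized before allocating, e.g.
//
//   at::TensorOptions options = at::device(at::kXPU);
//   torch::utils::maybe_initialize_device(options);
//
// For device types other than CUDA/XPU/PrivateUse1, these helpers are
// no-ops.
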
bool is_device_initialized(at::DeviceType device_type);
} // namespace torch::utils