mirror of
https://github.com/zebrajr/pytorch.git
synced 2025-12-07 12:21:27 +01:00
Summary: …cuda()) While I was at it, I audited all other ways I know how we might get a CUDA type from PyTorch and fixed more constructors which don't work.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11533
Differential Revision: D9775786
Pulled By: ezyang
fbshipit-source-id: cd07cdd375fdf74945539ec475a48bf08cbc0c17
25 lines
750 B
C++
#pragma once

// cuda_lazy_init() is always compiled, even for CPU-only builds.
// Thus, it does not live in the cuda/ folder.

namespace torch {
namespace utils {

// The INVARIANT is that this function MUST be called before you attempt
// to get a CUDA Type object from ATen, in any way. Here are some common
// ways that a Type object may be retrieved:
//
//    - You call getNonVariableType or getNonVariableTypeOpt
//    - You call toBackend() on a Type
//
// It's important to do this correctly, because if you forget to add it
// you'll get an oblique error message about "Cannot initialize CUDA without
// ATen_cuda library" if you try to use CUDA functionality from a CPU-only
// build, which is not good UX.
//
void cuda_lazy_init();

} // namespace utils
} // namespace torch