mirror of
https://github.com/zebrajr/pytorch.git
synced 2025-12-07 12:21:27 +01:00
* Add hip support for caffe2 core
* Add MIOPEN header/wrapper to caffe2 core
* Add HIP device into caffe2 PB
* top level makefile change for rocm/hip
* makefile scaffolding for AMD/RocM/HIP
* Makefile scaffolding for AMD/RocM/HIP; add makefile/utility for HIP files
* caffe2 PB update for AMD/ROCM HIP device
* Add AMD/RocM/Thrust dependency
* HIP threadpool update
* Fix makefile macro
* makefile fix: duplicate test/binary name
* makefile clean-up
* makefile clean-up
* add HIP operator registry
* add utilities for hip device
* Add USE_HIP to config summary
* makefile fix for BUILD_TEST
* merge latest
* Fix indentation
* code clean-up
* Guard builds without HIP and use the same cmake script as PyTorch to find HIP
* Setup rocm environment variables in build.sh (ideally should be done in the docker images)
* setup locale
* set HIP_PLATFORM
* Revert "set HIP_PLATFORM" (reverts commit 8ec58db2b390c9259220c49fa34cd403568300ad)
* continue the build script environment variables mess
* HCC_AMDGPU_TARGET
* Clean up the mess; has been fixed in the latest docker images
* Assign protobuf field hip_gpu_id a new field number for backward compatibility
* change name to avoid conflict
* Fix duplicated thread pool flag
* Refactor cmake files to not add hip includes and libs globally
* Fix the wrong usage of environment variable detection in cmake
* Add MIOPEN CNN operators
* Revert "Add MIOPEN CNN operators" (reverts commit 6e89ad4385b5b8967a7854c4adda52c012cee42a)
* Resolve merge conflicts
* .
* Update GetAsyncNetHIPThreadPool
* Enable BUILD_CAFFE2 in pytorch build
* Unify USE_HIP and USE_ROCM
* always check USE_ROCM
* .
* remove unrelated change
* move all core hip files to separate subdirectory
* .
* .
* recurse glob core directory
* .
* correct include
* .
40 lines
897 B
C++
#include <atomic>
#include <map>

#include "caffe2/core/common.h"

namespace caffe2 {

// Global flags that mark whether Caffe2 has the CUDA (or HIP) runtime
// linked into the current process. Do not use these variables directly;
// use the HasCudaRuntime() / HasHipRuntime() functions below.
std::atomic<bool> g_caffe2_has_cuda_linked{false};
std::atomic<bool> g_caffe2_has_hip_linked{false};

bool HasCudaRuntime() {
  return g_caffe2_has_cuda_linked.load();
}

bool HasHipRuntime() {
  return g_caffe2_has_hip_linked.load();
}

namespace internal {
void SetCudaRuntimeFlag() {
  g_caffe2_has_cuda_linked.store(true);
}

void SetHipRuntimeFlag() {
  g_caffe2_has_hip_linked.store(true);
}
} // namespace internal
const std::map<string, string>& GetBuildOptions() {
#ifndef CAFFE2_BUILD_STRINGS
#define CAFFE2_BUILD_STRINGS {}
#endif
  static const std::map<string, string> kMap = CAFFE2_BUILD_STRINGS;
  return kMap;
}

} // namespace caffe2