pytorch/caffe2/predictor/ThreadLocalPtr.cc
Alexander Sidorov d522b3ca58 BlackBoxPredictor OSS part N: ThreadLocalPtr, InferenceGraph (#23257)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23257

Overall context: open-source BlackBoxPredictor as the entry
point for inference in Caffe2 (a thread-safe abstraction for Caffe2
inference). This should be used in ThroughputBenchmark for the purpose
of framework comparison.

This specific diff:
There should be no harm in moving the transformation code to
OSS. On the advantages side, we will be able to compare a production
Caffe2 setup with PyTorch in the fairest way via
ThroughputBenchmark. This approach avoids any complicated
transformation registries. Building those properly would be a significant
engineering effort as well as a production risk. In the past we had SEVs
related to transforms being turned off due to various refactors. Given
that we don't plan to make any other significant investments in
transformation logic beyond the existing ones (like TVM and Glow), and
those also relate to open-source technologies, I came to the
conclusion of moving the whole thing to OSS.

Reviewed By: zrphercule

Differential Revision: D16428124

fbshipit-source-id: b35deada5c015cd97b91ae12a7ea4aac53bd14b8
2019-07-24 14:35:30 -07:00


#include "ThreadLocalPtr.h"

#include <algorithm>

namespace caffe2 {

// Meyers singleton
AllThreadLocalHelperVector* getAllThreadLocalHelperVector() {
  // Leak the pointer to avoid dealing with static destruction order issues.
  static auto* instance = new AllThreadLocalHelperVector();
  return instance;
}

ThreadLocalHelper* getThreadLocalHelper() {
  static thread_local ThreadLocalHelper instance;
  return &instance;
}
// AllThreadLocalHelperVector

void AllThreadLocalHelperVector::push_back(ThreadLocalHelper* helper) {
  std::lock_guard<std::mutex> lg(mutex_);
  vector_.push_back(helper);
}

void AllThreadLocalHelperVector::erase(ThreadLocalHelper* helper) {
  std::lock_guard<std::mutex> lg(mutex_);
  vector_.erase(
      std::remove(vector_.begin(), vector_.end(), helper), vector_.end());
}

void AllThreadLocalHelperVector::erase_tlp(ThreadLocalPtrImpl* ptr) {
  std::lock_guard<std::mutex> lg(mutex_);
  for (auto* ins : vector_) {
    ins->erase(ptr);
  }
}
// ThreadLocalHelper

ThreadLocalHelper::ThreadLocalHelper() {
  getAllThreadLocalHelperVector()->push_back(this);
}

ThreadLocalHelper::~ThreadLocalHelper() {
  getAllThreadLocalHelperVector()->erase(this);
}

void ThreadLocalHelper::insert(
    ThreadLocalPtrImpl* tl_ptr,
    std::shared_ptr<void> ptr) {
  std::lock_guard<std::mutex> lg(mutex_);
  mapping_.insert(std::make_pair(tl_ptr, std::move(ptr)));
}
void* ThreadLocalHelper::get(ThreadLocalPtrImpl* key) {
  /* Hold the mutex for the thread-local map to guard against the case
   * where another thread concurrently destroys a ThreadLocalPtrImpl
   * (~ThreadLocalPtrImpl() calls erase_tlp()), which would remove the
   * element from the map and invalidate the iterator returned by find().
   */
  std::lock_guard<std::mutex> lg(mutex_);
  auto it = mapping_.find(key);
  if (it == mapping_.end()) {
    return nullptr;
  }
  return it->second.get();
}
void ThreadLocalHelper::erase(ThreadLocalPtrImpl* key) {
  std::lock_guard<std::mutex> lg(mutex_);
  mapping_.erase(key);
}

ThreadLocalPtrImpl::~ThreadLocalPtrImpl() {
  getAllThreadLocalHelperVector()->erase_tlp(this);
}

} // namespace caffe2