pytorch/test/cpp/api/torch_include.cpp
Ilia Cherniavskii 19956b200d Relax set_num_threads restriction in parallel native case (#27947)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27947

Don't throw an exception if the requested number of threads is the same as the one currently in use.

Test Plan:
ATEN_THREADING=NATIVE python setup.py develop --cmake

Imported from OSS

Differential Revision: D17919416

fbshipit-source-id: 411f7c9bd6a46e7a003b43a200c2ce3b76453a2e
2019-10-16 21:53:36 -07:00


#include <gtest/gtest.h>
#include <torch/torch.h>
// NOTE: This test suite exists to make sure that common `torch::` functions
// can be used without additional includes beyond `torch/torch.h`.
TEST(TorchIncludeTest, GetSetNumThreads) {
  torch::init_num_threads();
  torch::set_num_threads(2);
  torch::set_num_interop_threads(2);
  torch::get_num_threads();
  torch::get_num_interop_threads();
}