pytorch/test/cpp/api/meta_tensor.cpp
Brian Hirsh 27a3204982 generate C++ API for meta functions using at::meta:: (#58570)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/58570

**What the PR does**
Generate a fast-path `at::meta::{op}` API for calling meta functions without having to go through the dispatcher. This will be important for perf for external backends that want to use meta functions for shape checking (which seems likely to be what we end up doing for LazyTensorCore).

**Details**
In order to avoid naming collisions I had to make two small changes:
- rename `MetaFunctions.h` template -> `NativeMetaFunctions.h` (this is the file that declares the impl() function for every structured operator).
- rename the meta class: `at::meta::{op}::meta()` -> `at::meta::structured_{op}::meta()`

I also deleted a few unnecessary includes, since any file that includes NativeFunctions.h will automatically include NativeMetaFunctions.h.

**Why I made the change**
This change isn't actually used anywhere yet. I originally wrote it because I thought it would be useful for structured composite ops, but that turned out not to be the case (see [comment](https://github.com/pytorch/pytorch/pull/58266#issuecomment-843213147)). The change still feels useful and unambiguous, though, so I think it's safe to add. I added explicit tests for C++ meta function calls to ensure that I wrote it correctly, which is actually how I hit the internal linkage issue in the PR below this one in the stack.

Test Plan: Imported from OSS

Reviewed By: pbelevich

Differential Revision: D28711299

Pulled By: bdhirsh

fbshipit-source-id: d410d17358c2b406f0191398093f17308b3c6b9e
2021-06-15 16:54:46 -07:00


#include <gtest/gtest.h>
#include <torch/torch.h>

#include <ATen/MetaFunctions.h>

#include <vector>

TEST(MetaTensorTest, MetaDeviceApi) {
  auto a = at::ones({4}, at::kFloat);
  auto b = at::ones({3, 4}, at::kFloat);
  // at::add() will return a meta tensor if its inputs are also meta tensors.
  auto out_meta = at::add(a.to(c10::kMeta), b.to(c10::kMeta));
  ASSERT_EQ(a.device(), c10::kCPU);
  ASSERT_EQ(b.device(), c10::kCPU);
  ASSERT_EQ(out_meta.device(), c10::kMeta);
  c10::IntArrayRef sizes_actual = out_meta.sizes();
  std::vector<int64_t> sizes_expected = std::vector<int64_t>{3, 4};
  ASSERT_EQ(sizes_actual, sizes_expected);
}

TEST(MetaTensorTest, MetaNamespaceApi) {
  auto a = at::ones({4}, at::kFloat);
  auto b = at::ones({3, 4}, at::kFloat);
  // Functions in the at::meta:: namespace take in tensors from any backend
  // and return a meta tensor.
  auto out_meta = at::meta::add(a, b);
  ASSERT_EQ(a.device(), c10::kCPU);
  ASSERT_EQ(b.device(), c10::kCPU);
  ASSERT_EQ(out_meta.device(), c10::kMeta);
  c10::IntArrayRef sizes_actual = out_meta.sizes();
  std::vector<int64_t> sizes_expected = std::vector<int64_t>{3, 4};
  ASSERT_EQ(sizes_actual, sizes_expected);
}