pytorch/torch/csrc/jit/mobile/function.h
Martin Yuan 04cd777ed4 Create BUCK build for lite-interpreter (#27546)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27546

Add files in the csrc/jit/mobile folder to torch_core, as a first step toward having the lite interpreter built in BUCK. Next, the files will be made independent of torch_core (T54912812)
ghstack-source-id: 91523987

Test Plan:
buck build -c pytorch.enable_rtti=1 -c project.ignore= -c ndk.app_platform=android-23 -c user.libcxx_cflags=-DFOLLY_USE_LIBCPP=1 -c user.libcxx_cxxflags=-DFOLLY_USE_LIBCPP=1 -c ndk.cxx_runtime=libcxx -c user.ndk_cxxflags=-g0 //xplat/experimental/pytorch/mobile:lite_predictorAndroid#android-armv7 && adb push buck-out/gen/xplat/experimental/pytorch/mobile/lite_predictorAndroid#android-armv7 /data/local/tmp/
In adb shell:
/data/local/tmp/lite_predictorAndroid\#android-armv7 add_it.bc

buck build -c project.ignore= @//fbcode/mode/dev-asan //xplat/experimental/pytorch/mobile:lite_predictor

Reviewed By: ljk53

Differential Revision: D17717547

fbshipit-source-id: 4c00a35eb231968d05d0d7b56bcfd5dc0258d4bb
2019-10-08 15:20:30 -07:00

37 lines
958 B
C++

#pragma once

#include <ATen/core/ivalue.h>
#include <ATen/core/qualified_name.h>

#include <cstdint>
#include <memory>
#include <string>
#include <vector>

namespace torch {
namespace jit {
using Stack = std::vector<c10::IValue>;
enum OpCode : uint8_t;

namespace mobile {
struct Code;

class Function {
 public:
  explicit Function(c10::QualifiedName name);
  bool run(Stack& stack) const;
  const std::string& name() const;
  const c10::QualifiedName& qualname() const;
  void append_instruction(OpCode op, int X, int N);
  void append_operator(const std::string& name,
                       const std::string& overload_name);
  void append_vararg_operator(const std::string& name,
                              const std::string& overload_name);
  void build_vararg_operator_table();
  void append_constant(const c10::IValue& constant);
  void set_register_size(size_t size);

 private:
  c10::QualifiedName name_;
  std::shared_ptr<Code> code_;
};
} // namespace mobile
} // namespace jit
} // namespace torch