pytorch/torch/csrc/jit/mobile/function.h
Ann Shan 9b3c72d46e [pytorch] Make mobile find_method return an optional (#43965)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/43965

As part of a larger effort to unify the API between the lite interpreter and the full JIT:
- implement torch::jit::mobile::Method, a proxy for torch::jit::mobile::Function
- add support for an overloaded operator() to mobile Method and Function
- make mobile find_method return a c10::optional<Method>, so its signature matches the full JIT
- move some of the implementation of Function from module.cpp to function.cpp
ghstack-source-id: 111161942

Test Plan: CI

Reviewed By: iseeyuan

Differential Revision: D23330762

fbshipit-source-id: bf0ba0d711d9566c92af31772057ecd35983ee6d
2020-09-03 14:46:18 -07:00


#pragma once

#include <ATen/core/ivalue.h>

#include <cstdint>
#include <vector>

namespace torch {
namespace jit {
using Stack = std::vector<c10::IValue>;
enum OpCode : uint8_t;

namespace mobile {
struct Code;

class Function {
 public:
  Function(c10::QualifiedName name);
  bool run(Stack& stack) const;
  c10::IValue operator()(Stack& stack);
  const std::string& name() const;
  const c10::QualifiedName& qualname() const;
  void append_instruction(OpCode op, int X, int N);
  bool append_operator(
      const std::string& name,
      const std::string& overload_name,
      int64_t model_version);
  void set_module_debug_info_list_size(size_t size);
  void set_module_info(const std::string& module_info, size_t pc);
  void append_constant(const c10::IValue& constant);
  void append_type(const c10::TypePtr& type);
  void set_register_size(size_t size);
  std::string get_module_debug_info(size_t pc) const;

 private:
  c10::QualifiedName name_;
  std::shared_ptr<Code> code_;
  std::vector<std::string> pc_to_module_debug_info_;
};
} // namespace mobile
} // namespace jit
} // namespace torch