pytorch/torch/csrc/jit/mobile/function.h
Kimish Patel 17a5c67796 Add support to dump unsupported ops. Add lite_interpreter_load test. (#34072)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34072

This diff dumps all of the ops not supported by the lite interpreter. It is
mainly helpful for finding every op that needs to be added at once, rather than
discovering and adding them one by one.

Test Plan:
buck run caffe2/binaries:lite_interpreter_model_load --
--model=<bytecode-model-path>

Reviewed By: iseeyuan

Differential Revision: D20194092

fbshipit-source-id: 0d596cd0204308027194af7ed738551d0c32a374
2020-03-04 13:18:12 -08:00

#pragma once

#include <ATen/core/ivalue.h>

#include <vector>

namespace torch {
namespace jit {
using Stack = std::vector<c10::IValue>;
enum OpCode : uint8_t;

namespace mobile {
struct Code;

class Function {
 public:
  Function(c10::QualifiedName name);
  bool run(Stack& stack) const;
  const std::string& name() const;
  const c10::QualifiedName& qualname() const;
  void append_instruction(OpCode op, int X, int N);
  // Returns false if the operator cannot be resolved, i.e. it is not
  // supported by the lite interpreter.
  bool append_operator(const std::string& name,
                       const std::string& overload_name);
  void append_constant(const c10::IValue& constant);
  void append_type(const c10::TypePtr& type);
  void set_register_size(size_t size);

 private:
  c10::QualifiedName name_;
  std::shared_ptr<Code> code_;
};
} // namespace mobile
} // namespace jit
} // namespace torch