pytorch/test/cpp
Han Qi (qihqi) fed12ff680 [BE][flatbuffer] Remove code duplications and refactor (#79184)
Summary:
Remove code duplication in import.cpp / export_modules.cpp so that:
1. Only one copy of the switching logic exists (detect flatbuffer / is_flatbuffer);
2. Detection of whether flatbuffer support is included happens at runtime (so no more macros).

This also reverses the dependency direction: import.cpp -> flatbuffer_loader.cpp becomes flatbuffer_loader.cpp -> import.cpp (a sketch of the runtime detection idea follows the commit metadata below).

Differential Revision: D36926217

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79184
Approved by: https://github.com/zhxchen17
2022-06-20 16:37:38 +00:00
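
The refactor described above replaces compile-time flatbuffer guards with a runtime check of the serialized module's header. Below is a minimal sketch of that detect-and-dispatch idea, not the actual PyTorch sources: the loader functions are hypothetical stand-ins for the real entry points in flatbuffer_loader.cpp and import.cpp, and the "PTMF" file identifier at byte offset 4 is an assumption about the mobile flatbuffer schema.

// Minimal sketch of runtime format detection; not the actual PyTorch code.
#include <array>
#include <cstring>
#include <fstream>
#include <iostream>
#include <string>

// Hypothetical stand-in loaders; the real entry points live in
// flatbuffer_loader.cpp and import.cpp respectively.
void loadFlatbufferModule(const std::string& filename) {
  std::cout << "loading flatbuffer module: " << filename << "\n";
}

void loadPickleModule(const std::string& filename) {
  std::cout << "loading zip/pickle module: " << filename << "\n";
}

// Single copy of the switching logic: peek at the file header at runtime
// instead of compiling separate code paths behind a macro.
bool isFlatbufferFile(const std::string& filename) {
  std::ifstream in(filename, std::ios::binary);
  std::array<char, 8> header{};
  if (!in.read(header.data(), header.size())) {
    return false;  // too short to carry a flatbuffer file identifier
  }
  // A flatbuffer file identifier occupies bytes 4..7 of the buffer;
  // "PTMF" is assumed here as the mobile module identifier.
  return std::memcmp(header.data() + 4, "PTMF", 4) == 0;
}

void loadModule(const std::string& filename) {
  if (isFlatbufferFile(filename)) {
    loadFlatbufferModule(filename);  // decided at runtime, no #ifdef
  } else {
    loadPickleModule(filename);
  }
}

int main(int argc, char** argv) {
  if (argc > 1) {
    loadModule(argv[1]);
  }
  return 0;
}

Keeping the switch in one function means the format decision is made per file at load time rather than per build via macros, matching the commit's goal of a single copy of the switching logic.
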
Entry                     Last commit message                                              Last commit date
api                       [lint] autoformat test/cpp and torch/csrc                       2022-06-11 21:11:16 +00:00
c10d                      [lint] autoformat test/cpp and torch/csrc                       2022-06-11 21:11:16 +00:00
common                    Trim libshm deps, move tempfile.h to c10 (#17019)               2019-02-13 19:38:35 -08:00
dist_autograd             [lint] autoformat test/cpp and torch/csrc                       2022-06-11 21:11:16 +00:00
jit                       [BE][flatbuffer] Remove code duplications and refactor (#79184)  2022-06-20 16:37:38 +00:00
lazy                      Revert "Put symint overloads on a different name"               2022-06-15 17:15:21 +00:00
lite_interpreter_runtime  [lint] autoformat test/cpp and torch/csrc                       2022-06-11 21:11:16 +00:00
monitor                   torch/monitor: merge Interval and FixedCount stats (#72009)     2022-01-30 23:21:59 +00:00
profiler                  [lint] autoformat test/cpp and torch/csrc                       2022-06-11 21:11:16 +00:00
rpc                       [lint] autoformat test/cpp and torch/csrc                       2022-06-11 21:11:16 +00:00
tensorexpr                [BE] Use CamelCase for enum class members (#79772)              2022-06-17 05:53:57 +00:00
__init__.py               remediation of S205607                                          2020-07-17 17:19:47 -07:00