mirror of
https://github.com/zebrajr/pytorch.git
synced 2025-12-06 12:20:52 +01:00
Summary:
This PR serves two purposes:
1. Design an abstraction over a serialization scheme for C++ modules, optimizers and tensors in general,
2. Add serialization to the ONNX/PyTorch proto format.
This is currently a rough prototype I coded up today to get quick feedback.
For this I propose the following serialization interface within the C++ API:
```cpp
namespace torch { namespace serialize {
class Reader {
public:
virtual ~Reader() = default;
virtual void read(const std::string& key, Tensor& tensor, bool is_buffer = false) = 0;
virtual void finish() { }
};
class Writer {
public:
virtual ~Writer() = default;
virtual void write(const std::string& key, const Tensor& tensor, bool is_buffer = false) = 0;
virtual void finish() { }
};
}} // namespace torch::serialize
```
There are then subclasses of these two for (1) Cereal and (2) Protobuf (called `DefaultWriter` and `DefaultReader` to hide the implementation details). See `torch/serialize/cereal.h` and `torch/serialize/default.h`. This abstraction, with subclasses for these two backends, allows us to:
1. Provide a cereal-less serialization path that we can ship and iterate on going forward,
2. Provide no-friction backwards compatibility with existing C++ API uses, mainly StarCraft.
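To make the subclassing scheme concrete, here is a toy, self-contained sketch (not code from this PR): a `double` stands in for `Tensor`, and a `std::map` stands in for the archive that a cereal- or protobuf-backed subclass would manage. The names `MapWriter`/`MapReader` are illustrative only.

```cpp
// Toy sketch of one serialize backend. NOT real PyTorch code:
// `double` stands in for Tensor, std::map for the archive.
#include <map>
#include <string>

namespace toy {

// Mirrors the shape of torch::serialize::Writer from the PR.
class Writer {
 public:
  virtual ~Writer() = default;
  virtual void write(const std::string& key, const double& value,
                     bool is_buffer = false) = 0;
  virtual void finish() {}
};

// Mirrors the shape of torch::serialize::Reader from the PR.
class Reader {
 public:
  virtual ~Reader() = default;
  virtual void read(const std::string& key, double& value,
                    bool is_buffer = false) = 0;
  virtual void finish() {}
};

// Concrete backend: stores everything in a map, the way a cereal- or
// protobuf-backed subclass would store entries in its archive.
class MapWriter : public Writer {
 public:
  explicit MapWriter(std::map<std::string, double>& storage)
      : storage_(storage) {}
  void write(const std::string& key, const double& value,
             bool /*is_buffer*/ = false) override {
    storage_[key] = value;
  }
 private:
  std::map<std::string, double>& storage_;
};

class MapReader : public Reader {
 public:
  explicit MapReader(const std::map<std::string, double>& storage)
      : storage_(storage) {}
  void read(const std::string& key, double& value,
            bool /*is_buffer*/ = false) override {
    value = storage_.at(key);
  }
 private:
  const std::map<std::string, double>& storage_;
};

}  // namespace toy
```

The point of the indirection is that `torch::save`/`torch::load` only see the abstract `Writer`/`Reader`, so swapping cereal for protobuf needs no changes at call sites.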
The user-facing API is (conceptually):
```cpp
void torch::save(const Module& module, Writer& writer);
void torch::save(const Optimizer& optimizer, Writer& writer);
void torch::read(Module& module, Reader& reader);
void torch::read(Optimizer& optimizer, Reader& reader);
```
with implementations for both optimizers and modules that write into the `Writer` and read from the `Reader`.
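Conceptually, those implementations just walk the object's named parameters and hand each one to the `Writer` (or pull it back from the `Reader`) under its key. A minimal, self-contained sketch of that round trip, with a "module" reduced to named doubles and the writer/reader reduced to callbacks (all names here are illustrative, not the PR's actual helpers):

```cpp
// Hypothetical sketch of what save/read do conceptually. NOT real
// PyTorch code: a "module" is just named doubles, and the writer and
// reader are plain callbacks instead of Writer/Reader subclasses.
#include <functional>
#include <map>
#include <string>

using Module = std::map<std::string, double>;
using WriteFn = std::function<void(const std::string&, double)>;
using ReadFn = std::function<void(const std::string&, double&)>;

// Conceptual torch::save: push every named parameter into the writer.
void save(const Module& module, const WriteFn& write) {
  for (const auto& kv : module) {
    write(kv.first, kv.second);  // Writer::write(key, tensor)
  }
}

// Conceptual torch::read: pull every named parameter from the reader.
void read(Module& module, const ReadFn& read_fn) {
  for (auto& kv : module) {
    read_fn(kv.first, kv.second);  // Reader::read(key, tensor)
  }
}
```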
ebetica ezyang zdevito dzhulgakov
Pull Request resolved: https://github.com/pytorch/pytorch/pull/11619
Differential Revision: D9984664
Pulled By: goldsborough
fbshipit-source-id: e03afaa646221546e7f93bb8dfe3558e384a5847
32 lines
965 B
Python
```python
import argparse
import os
import shlex
import subprocess
import sys

from setup_helpers.cuda import USE_CUDA

if __name__ == '__main__':
    # Placeholder for future interface. For now just gives a nice -h.
    parser = argparse.ArgumentParser(description='Build libtorch')
    options = parser.parse_args()

    os.environ['BUILD_TORCH'] = 'ON'
    os.environ['BUILD_TEST'] = 'ON'
    os.environ['ONNX_NAMESPACE'] = 'onnx_torch'
    os.environ['PYTORCH_PYTHON'] = sys.executable

    tools_path = os.path.dirname(os.path.abspath(__file__))
    build_pytorch_libs = os.path.join(tools_path, 'build_pytorch_libs.sh')

    command = [build_pytorch_libs, '--use-nnpack']
    if USE_CUDA:
        command.append('--use-cuda')
        if os.environ.get('USE_CUDA_STATIC_LINK', False):
            command.append('--cuda-static-link')
    command.append('caffe2')

    sys.stdout.flush()
    sys.stderr.flush()
    subprocess.check_call(command, universal_newlines=True)
```