pytorch/caffe2
Rohith Menon 879a90b322 [ModelLoading] Use byte encoding for uint8, fp16 etc. instead of int32 (#34343)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34343

Use byte encoding for uint8, fp16, etc. instead of int32 in TensorProto serialization/deserialization; a rough size sketch follows the tl;dr below.

tl;dr
- fp16 tensor deserialization 12x faster, serialized size 25% lower
- uint8 tensor deserialization 36x faster, serialized size 25% lower
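
For intuition on where the size win comes from, here is a hypothetical back-of-the-envelope sketch (not the Caffe2 serializer itself; `varint_size` is an illustrative helper): it compares the protobuf wire size of uint8 data stored one varint per element in a packed repeated int32 field against the same data stored as a single raw bytes field. The exact ratio depends on the value distribution; the benchmarked models land around 25%.

```
import random

def varint_size(value: int) -> int:
    """Bytes protobuf needs to encode `value` as an unsigned varint."""
    size = 1
    while value >= 0x80:
        value >>= 7
        size += 1
    return size

# Pretend tensor: one million uint8 elements.
values = [random.randint(0, 255) for _ in range(1_000_000)]

# Legacy path: each element stored as one varint in a packed repeated int32 field.
int32_packed_bytes = sum(varint_size(v) for v in values)
# Byte path: the whole tensor stored as a single bytes field, one byte per element.
byte_data_bytes = len(values)

print(f"packed int32: {int32_packed_bytes:,d} bytes")
print(f"byte encoded: {byte_data_bytes:,d} bytes "
      f"({100 * (1 - byte_data_bytes / int32_packed_bytes):.0f}% smaller)")
```

The same reasoning applies to fp16: reinterpreted as 16-bit integers, values can take up to three varint bytes each in int32_data, versus exactly two bytes each in byte_data, and decoding becomes a straight memcpy instead of a per-element loop.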

Test Plan:
```
============================================================================
caffe2/caffe2/fb/predictor/ModelLoaderBenchmark.cpp  relative  time/iter  iters/s
============================================================================
BlobProtoInt32DeserializationFloat16                        12.37ms    80.82
BlobProtoByteDeserializationFloat16             1125.46%     1.10ms   909.64
----------------------------------------------------------------------------
BlobProtoInt32DeserializationUInt8                          17.57ms    56.92
BlobProtoByteDeserializationUInt8               3629.45%   484.02us    2.07K
============================================================================
```

Reviewed By: yinghai

Differential Revision: D20137451

fbshipit-source-id: 8ed4be2286a6d4c7e134fcb0832f22bc645039a1
2020-03-06 11:58:30 -08:00
contrib Add Scalar::type() (#33603) 2020-02-26 22:25:18 -08:00
core [ModelLoading] Use byte encoding for uint8, fp16 etc. instead of int32 (#34343) 2020-03-06 11:58:30 -08:00
cuda_rtc Change ConvPoolOp<Context>::SetOutputSize to ConvPoolOp<Context>::GetOutputSize (#17764) 2019-03-07 18:38:53 -08:00
db [Caffe2] Skip //caffe2/caffe2:caffe2_test_cpu -- 'DBSeekTest\.RocksDB' 2020-02-21 21:30:02 -08:00
distributed Manual hipify caffe2/distributed and rocm update (no hcc modules support) (#18088) 2019-03-29 11:07:32 -07:00
experiments Fix typos, via a Levenshtein-type corrector (#31523) 2020-01-17 16:03:19 -08:00
ideep Enable mkldnn on windows (#31355) 2020-01-27 09:00:02 -08:00
image Fix typos, via a Levenshtein-type corrector (#31523) 2020-01-17 16:03:19 -08:00
mobile Fix spelling errors 2020-01-28 04:46:15 -08:00
mpi
observers preprocessor cleanup (#33957) 2020-03-02 13:37:19 -08:00
onnx Fix typos, via a Levenshtein-type corrector (#31523) 2020-01-17 16:03:19 -08:00
operators [caffe2] Fix signed unsigned comparison warning (#34161) 2020-03-04 08:02:44 -08:00
opt Add backward Int8Quantize shape inference (#34152) 2020-03-03 22:04:25 -08:00
perfkernels [caffe2] simplify caffe2 code with fbgemm handling block size 1 emb (#33774) 2020-02-27 14:45:28 -08:00
predictor Fix typos, via a Levenshtein-type corrector (#31523) 2020-01-17 16:03:19 -08:00
proto Add partition info message to NetDef (#33616) 2020-02-26 14:54:58 -08:00
python [AMD] Remove num_gpu check for remote execution (#34318) 2020-03-06 09:53:57 -08:00
quantization Tuck the packing logic into Int8FCPackWeight op (#34289) 2020-03-05 13:43:08 -08:00
queue Replace c10::guts::stuff with std::stuff (#30915) 2019-12-16 13:57:19 -08:00
serialize [jit] Add type tags to lists/dicts in pickle (#33255) 2020-03-03 16:48:21 -08:00
sgd preprocessor cleanup (#33957) 2020-03-02 13:37:19 -08:00
share [caffe2] simplify relative error expr (#32999) 2020-02-19 16:35:44 -08:00
test
transforms fix -Wsign-compare warnings for some files inside c2 (#18123) 2019-03-19 10:39:20 -07:00
utils Added nullptr check for pthradpool_get_threads_count (#34087) 2020-03-04 11:10:53 -08:00
video Fix compilation error when buildng with FFMPEG (#27589) 2020-02-13 11:23:48 -08:00
__init__.py
.clang-format
c2_aten_srcs.bzl Back out "Back out "Back out "Revert D18542342: Boxed variable dispatch""" (#30650) 2019-12-06 11:45:09 -08:00
CMakeLists.txt [pytorch][mobile] change mobile build scripts to build PyTorch by default (#34203) 2020-03-05 23:40:47 -08:00
README.md
release-notes.md
requirements.txt Add requests as a legit dependency (#25596) 2019-09-04 17:43:37 -07:00
VERSION_NUMBER

Caffe2

Jenkins Build Status

Caffe2 is a lightweight, modular, and scalable deep learning framework. Building on the original Caffe, Caffe2 is designed with expression, speed, and modularity in mind.

Questions and Feedback

Please use GitHub issues (https://github.com/pytorch/pytorch/issues) to ask questions, report bugs, and request new features.

Further Resources on Caffe2.ai