Mirror of https://github.com/zebrajr/pytorch.git, synced 2025-12-07 12:21:27 +01:00
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61997

After profiling model-loading latency on AI Bench (Android Galaxy S8, US), a significant amount of time was spent reading data through FileAdapter, which internally delegates to IStreamAdapter. IStreamAdapter uses `std::istream` under the hood, which is not particularly efficient. This change reduces model loading time from [~293ms](https://www.internalfb.com/intern/aibench/details/600870874797229) to [~254ms](https://www.internalfb.com/intern/aibench/details/163731416457694), a reduction of ~13%.

ghstack-source-id: 134634610

Test Plan: See the AI Bench links above.

Reviewed By: raziel

Differential Revision: D29812191

fbshipit-source-id: 57810fdc1ac515305f5504f88ac5e9e4319e9d28
| File |
|---|
| CMakeLists.txt |
| crc_alt.h |
| crc.cc |
| file_adapter.cc |
| file_adapter.h |
| inline_container_test.cc |
| inline_container.cc |
| inline_container.h |
| istream_adapter.cc |
| istream_adapter.h |
| read_adapter_interface.cc |
| read_adapter_interface.h |
| versions.h |