Mirror of https://github.com/zebrajr/pytorch.git
While using save_cache_artifacts on internal workloads, we noticed that repeatedly calling this function after every batch is incredibly expensive. This PR significantly speeds up the call by opting out of pickle and redesigning the serialization algorithm: essentially, we want to be able to call serialize many times without paying the full cost from scratch each time.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/148227
Approved by: https://github.com/jamesjwu
ghstack dependencies: #148226
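The commit message does not include the new code, but the core idea of making repeated serialize calls cheap can be sketched as an append-only, length-prefixed byte buffer: entries are encoded once when appended, so a later serialize call only returns the already-built buffer instead of re-pickling every entry. This is a minimal sketch of that idea; the class name `IncrementalSerializer` and its methods are hypothetical and are not the PR's actual API.

```python
# Hypothetical sketch of incremental serialization; not the PyTorch implementation.
import struct
from typing import List


class IncrementalSerializer:
    """Accumulates byte entries and serializes them incrementally.

    Each entry is encoded exactly once (length-prefixed) when appended,
    so calling serialize() after every batch does not re-encode earlier
    entries the way re-pickling the whole container would.
    """

    def __init__(self) -> None:
        self._buffer = bytearray()
        self._count = 0

    def append(self, entry: bytes) -> None:
        # Length-prefix each entry with an 8-byte little-endian size.
        self._buffer += struct.pack("<Q", len(entry))
        self._buffer += entry
        self._count += 1

    def serialize(self) -> bytes:
        # Cheap: prepend the entry count and copy out the prebuilt buffer;
        # no per-entry re-encoding happens here.
        return struct.pack("<Q", self._count) + bytes(self._buffer)

    @staticmethod
    def deserialize(data: bytes) -> List[bytes]:
        # Read the entry count, then walk the length-prefixed entries.
        (count,) = struct.unpack_from("<Q", data, 0)
        offset = 8
        entries = []
        for _ in range(count):
            (size,) = struct.unpack_from("<Q", data, offset)
            offset += 8
            entries.append(data[offset:offset + size])
            offset += size
        return entries
```

With a scheme like this, calling serialize after every batch costs one buffer copy rather than a full re-serialization of all cached artifacts, which matches the goal stated in the commit message.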
Files in this directory:

- __init__.py
- _cache.py
- config.py