Summary: Pretty simple: if a planner exists, which implies that planning is enabled, create a manager for each frame. The associated serial executor will use the withMemoryPlanner fn to ensure deallocation is done after execution completes.

Test Plan: CI

Differential Revision: D73635809

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157053
Approved by: https://github.com/henryoier, https://github.com/georgiaphillips
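A minimal C++ sketch of the flow the commit describes: a per-frame manager is created only when a planner exists, and the serial executor wraps execution in a `withMemoryPlanner`-style scope so buffers are released after the frame finishes running. The type names (`LayoutPlanner`, `LayoutManager`, `ExecutionFrame`) and the exact signatures here are assumptions for illustration, not the actual nativert API.

```cpp
#include <functional>
#include <memory>

// Hypothetical stand-in for the planner: decides how frame-local
// buffers are laid out and reused when planning is enabled.
struct LayoutPlanner {};

// Hypothetical per-frame manager that owns the planned allocations.
class LayoutManager {
 public:
  explicit LayoutManager(LayoutPlanner& planner) : planner_(planner) {}
  void deallocate() { /* release this frame's planned buffers */ }

 private:
  [[maybe_unused]] LayoutPlanner& planner_;
};

// Each execution frame gets a manager only if a planner exists,
// i.e. only when memory planning is enabled.
class ExecutionFrame {
 public:
  explicit ExecutionFrame(LayoutPlanner* planner) {
    if (planner != nullptr) {
      manager_ = std::make_unique<LayoutManager>(*planner);
    }
  }

  // Run the body, then deallocate once execution completes --
  // the role the commit ascribes to the withMemoryPlanner fn.
  void withMemoryPlanner(const std::function<void()>& body) {
    body();
    if (manager_) {
      manager_->deallocate();
    }
  }

 private:
  std::unique_ptr<LayoutManager> manager_;
};

// A serial executor would wrap each frame's graph execution in that scope.
void executeSerially(ExecutionFrame& frame,
                     const std::function<void()>& runGraph) {
  frame.withMemoryPlanner(runGraph);
}
```

If planning is disabled (no planner, hence no manager), the same path runs the body and the deallocation step is simply a no-op.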
| Name |
|---|
| aoti_abi_check |
| aoti_inference |
| api |
| c10d |
| common |
| dist_autograd |
| jit |
| lazy |
| lite_interpreter_runtime |
| monitor |
| nativert |
| profiler |
| rpc |
| tensorexpr |
| __init__.py |