MemPool is a separate pool of memory handled by the caching allocator. This PR associates a ``use_on_oom`` bool with each MemPool and adds the option to let the caching allocator use a pool's memory as a last resort instead of OOMing.
Usage:
Users can optionally specify a ``use_on_oom`` bool (False by default) during MemPool creation. If true, the CUDACachingAllocator may use memory in this pool as a last resort instead of OOMing.
```
import torch

# `allocator` is a pre-built pluggable allocator (e.g. constructed with
# torch.cuda.memory.CUDAPluggableAllocator); its construction is not shown.
pool = torch.cuda.MemPool(allocator, use_on_oom=True)
with torch.cuda.use_mem_pool(pool):
    a = torch.empty(40 * 1024 * 1024, dtype=torch.uint8, device="cuda")
del a
# At the memory limit, this allocation succeeds by using the pool's freed
# memory instead of raising an OOM.
b = torch.empty(40 * 1024 * 1024, dtype=torch.uint8, device="cuda")
```
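To see the fallback actually trigger, one way is to cap the caching allocator with ``torch.cuda.set_per_process_memory_fraction`` so that a fresh segment cannot be opened for the second allocation. The following is a minimal sketch under that assumption, backing the pool with the default caching allocator (no pluggable allocator) and using illustrative sizes; it is not the PR's test:
```
import torch

# Cap the caching allocator at ~60 MiB so that one 40 MiB segment plus a
# second fresh 40 MiB segment would exceed the limit (sizes illustrative).
total = torch.cuda.get_device_properties(0).total_memory
torch.cuda.set_per_process_memory_fraction(60 * 1024 * 1024 / total, 0)

# No pluggable allocator: the pool is backed by the default caching
# allocator; use_on_oom marks it as a last-resort source of memory.
pool = torch.cuda.MemPool(use_on_oom=True)

with torch.cuda.use_mem_pool(pool):
    a = torch.empty(40 * 1024 * 1024, dtype=torch.uint8, device="cuda")
del a  # the 40 MiB block stays cached inside the pool

# Opening a new 40 MiB segment would exceed the cap, so instead of raising
# an OOM the allocator reuses the freed block from the pool.
b = torch.empty(40 * 1024 * 1024, dtype=torch.uint8, device="cuda")
```
Without ``use_on_oom=True``, the final allocation would be expected to raise a CUDA out-of-memory error under the same cap.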
Testing:
```
python test/test_cuda.py -k test_mempool_limited_memory_with_allocator
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/151487
Approved by: https://github.com/eqy, https://github.com/syed-ahmed, https://github.com/ngimel