pytorch/torch/cuda
Michael Wootton 2f3be2735f Don't split oversize cached blocks (#44742)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/35901

This change is designed to prevent fragmentation in the Caching Allocator.  Permissive block splitting in the allocator allows very large blocks to be split into many pieces.  Once split too finely, it is unlikely that all pieces will be 'free' at the same time, so the original allocation can never be returned.  Anecdotally, we've seen a model run out of memory failing to allocate a 50 MB block on a 32 GB card while the caching allocator was holding 13 GB of 'split free blocks'.
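
One way to observe this kind of fragmentation from Python is to compare what the caching allocator has reserved from CUDA with what live tensors actually use; a large, persistent gap points at memory held in free (possibly split) cached blocks. This is a minimal sketch, not part of the PR, and it assumes a CUDA-capable build and device:

```python
import torch

device = torch.device("cuda")

# Allocate several large tensors (~64 MB each), then free every other one;
# the freed blocks stay in the caching allocator rather than being returned.
tensors = [torch.empty(16 * 1024 * 1024, device=device) for _ in range(8)]
del tensors[::2]

allocated = torch.cuda.memory_allocated(device)  # bytes used by live tensors
reserved = torch.cuda.memory_reserved(device)    # bytes held by the caching allocator
print(f"allocated: {allocated / 2**20:.1f} MiB")
print(f"reserved:  {reserved / 2**20:.1f} MiB")
print(f"held in cache: {(reserved - allocated) / 2**20:.1f} MiB")
```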

Approach:

- Large blocks above a certain size are designated "oversize".  This limit is currently set one decade above the 'large' threshold, at 200 MB
- Oversize blocks cannot be split
- Oversize blocks must closely match the requested size (e.g. a 200 MB request will match an existing 205 MB block, but not a 300 MB block); see the sketch after this list
- In lieu of splitting oversize blocks, there is a mechanism to quickly free a single oversize block (back to the system allocator) so that an appropriately sized block can be allocated.  This is activated under memory pressure and prevents _release_cached_blocks()_ from triggering
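
A minimal sketch of the matching policy described above, written as Python pseudocode. The constant names and the 20 MB reuse slack are illustrative assumptions; the real logic lives in the C++ caching allocator, not in this directory, and may use different values.

```python
# Illustrative constants; actual values/names in the C++ allocator may differ.
LARGE_THRESHOLD = 20 * 1024 * 1024        # blocks this size or larger are "large"
OVERSIZE_THRESHOLD = 200 * 1024 * 1024    # one decade above "large": "oversize"
OVERSIZE_MATCH_SLACK = 20 * 1024 * 1024   # assumed tolerance for reusing an oversize block

def can_reuse_cached_block(request_size: int, cached_block_size: int) -> bool:
    """Decide whether a cached free block may satisfy an allocation request."""
    if cached_block_size < request_size:
        return False
    if request_size < OVERSIZE_THRESHOLD:
        # Ordinary blocks: any sufficiently large cached block can be used,
        # and the unused remainder may be split off and kept in the cache.
        return True
    # Oversize blocks are never split, so only a close match is acceptable;
    # otherwise the allocator frees a single cached oversize block back to
    # the system and allocates a fresh one of the right size.
    return cached_block_size - request_size <= OVERSIZE_MATCH_SLACK

# Example from the description: a 200 MB request matches a 205 MB block
# but not a 300 MB block.
MB = 1024 * 1024
assert can_reuse_cached_block(200 * MB, 205 * MB)
assert not can_reuse_cached_block(200 * MB, 300 * MB)
```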

Initial performance tests show this is similar to, or quicker than, the original strategy.  Additional tests are ongoing.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44742

Reviewed By: zou3519

Differential Revision: D29186394

Pulled By: ezyang

fbshipit-source-id: c88918836db3f51df59de6d1b3e03602ebe306a9
2021-06-21 11:46:08 -07:00
amp [CUDA graphs] [BC-breaking] Makes torch.cuda.amp.GradScaler scale updates in-place for better composability with graph capture (#55562) 2021-04-30 13:03:05 -07:00
__init__.py Make old_gpu warning dynamic (#56621) 2021-04-23 17:52:07 -07:00
_utils.py Merge CUDA Streams and Events (#53902) 2021-04-05 08:19:55 -07:00
comm.py Add torch.cuda.comm to typechecking CI (#45350) 2020-09-25 12:13:43 -07:00
error.py remediation of S205607 2020-07-17 17:19:47 -07:00
memory.py Don't split oversize cached blocks (#44742) 2021-06-21 11:46:08 -07:00
nccl.py Clean up usage of torch._six partially (#49785) 2021-02-08 13:58:34 -08:00
nvtx.py [*.py] Rename "Arguments:" to "Args:" (#49736) 2020-12-28 09:34:47 -08:00
profiler.py Replace map(lambda constructs (#46462) 2020-10-22 09:50:22 -07:00
random.py doc string fix for torch.cuda.set_rng_state_all (#40544) 2020-06-25 08:37:14 -07:00
sparse.py
streams.py External stream (#59527) 2021-06-14 13:46:11 -07:00