mirror of
https://github.com/zebrajr/pytorch.git
synced 2025-12-06 12:20:52 +01:00
Fix setting of memory fraction in test_garbage_collect_expandable (#164000)
Fixes #160598
Fixes #160551
Fixes #160507

This PR fixes a bug in the `test_garbage_collect_expandable` unit test where the `finally` block incorrectly re-read the current per-process memory fraction instead of restoring the original value. Without the fix, other tests in the `test/test_cuda.py` suite were affected and failed with OOM errors on ROCm. The fix ensures proper cleanup and isolation of test state, maintaining test correctness and avoiding side effects such as the OOM error below.

For example, `test_autocast_checkpointing` failed on ROCm (https://github.com/pytorch/pytorch/actions/runs/17982223758/job/51153974194) with:

`torch.OutOfMemoryError: HIP out of memory. Tried to allocate 76.00 MiB. GPU 0 has a total capacity of 255.69 GiB of which 252.97 GiB is free. 1.20 GiB allowed; Of the allocated memory 1.14 GiB is allocated by PyTorch, with 17.00 MiB allocated in private pools (e.g., HIP Graphs), and 18.63 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164000
Approved by: https://github.com/jeffdaily
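The bug is a general save/restore anti-pattern: cleanup code that re-reads mutated state inside `finally` instead of restoring the value captured before the test ran. A minimal sketch of the pattern, using a hypothetical module-level `_fraction` setting with `get_fraction`/`set_fraction` helpers in place of the real `torch.cuda` memory-fraction API:

```python
# Hypothetical global setting standing in for the CUDA per-process memory fraction.
_fraction = 1.0

def get_fraction():
    return _fraction

def set_fraction(value):
    global _fraction
    _fraction = value

def run_test_buggy():
    # Buggy cleanup: the finally block re-reads the *current* (mutated)
    # value into orig, so the original value is never restored.
    orig = get_fraction()
    try:
        set_fraction(0.5)  # the test mutates global state
    finally:
        orig = get_fraction()  # BUG: overwrites the saved original with 0.5

def run_test_fixed():
    # Correct cleanup: capture the value once, before mutating,
    # and restore exactly that value in finally.
    orig = get_fraction()
    try:
        set_fraction(0.5)
    finally:
        set_fraction(orig)  # restores the value saved up front
```

After `run_test_buggy()` the global stays at 0.5 and leaks into later tests; after `run_test_fixed()` it is back at its original value, which is the isolation this PR restores.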
parent ed3085814a
commit e4ffd718ec
@@ -4471,7 +4471,7 @@ class TestCudaMallocAsync(TestCase):
                 # expandable_segment blocks can be in the free list when this is called.
                 alloc(80)
             finally:
-                orig = torch.cuda.get_per_process_memory_fraction(0)
+                torch.cuda.memory.set_per_process_memory_fraction(orig)
 
     def test_allocator_settings(self):
         def power2_div(size, div_factor):