[Reference Cycle Detector] Ignore FakeTensor in cycle leak detection (#117116)
Summary: Skip FakeTensors, since these tensors do not actually use GPU memory, so the Reference Cycle Detector does not need to generate plots for them.

Test Plan: CI and internal testing.

Differential Revision: D52637209

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117116
Approved by: https://github.com/zdevito, https://github.com/tianfengfrank
parent 3e9bb8d4de
commit 7e37f63e5e
@@ -310,7 +310,7 @@ def escape(n):

 def is_cuda_tensor(obj):
-    return isinstance(obj, torch.Tensor) and obj.is_cuda
+    return isinstance(obj, torch.Tensor) and obj.is_cuda and not isinstance(obj, torch._subclasses.FakeTensor)

 def cuda_allocation_context():
     snapshot = torch.cuda.memory._snapshot()
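To illustrate why the extra check matters, here is a minimal sketch (not part of the commit): a FakeTensor created under FakeTensorMode can report is_cuda as True even though no device memory is ever allocated, so the updated predicate now excludes it. The standalone is_cuda_tensor helper below simply mirrors the changed line from the diff; the assumption that FakeTensorMode can fake a CUDA device on a CPU-only machine is hedged in the comments.

# Minimal sketch, not part of the commit: shows how the updated check treats
# a FakeTensor. Assumes FakeTensorMode can fake a CUDA device on this machine.
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

def is_cuda_tensor(obj):
    # Same predicate as the new line in the diff above.
    return (
        isinstance(obj, torch.Tensor)
        and obj.is_cuda
        and not isinstance(obj, torch._subclasses.FakeTensor)
    )

with FakeTensorMode():
    # No real CUDA memory is allocated for a fake tensor.
    fake = torch.empty(1024, device="cuda")

print(fake.is_cuda)          # True: the fake tensor still reports a CUDA device
print(is_cuda_tensor(fake))  # False with this change; True before it

Because the fake tensor passes the old is_cuda check while holding no real allocation, the cycle detector would previously have drawn it into its leak plots; filtering on the FakeTensor subclass keeps the plots limited to tensors that actually occupy GPU memory.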