[Reference Cycle Detector] Ignore FakeTensor in cycle leak detection (#117116)

Summary: Skip FakeTensors, since they do not actually consume GPU memory; the Reference Cycle Detector therefore does not need to generate plots for them.
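
A minimal illustration of why these tensors can be skipped (a sketch, not part of this change): a tensor created under torch._subclasses.fake_tensor.FakeTensorMode carries CUDA device metadata but never touches the CUDA caching allocator, so plotting it would not help diagnose a GPU memory leak.

    import torch
    from torch._subclasses.fake_tensor import FakeTensorMode

    with FakeTensorMode():
        # The fake tensor itself allocates no GPU memory; only metadata is created.
        fake = torch.empty(1024, 1024, device="cuda")

    print(type(fake).__name__)                             # FakeTensor
    print(fake.is_cuda)                                    # True, despite holding no GPU memory
    print(isinstance(fake, torch._subclasses.FakeTensor))  # True, so the detector now skips it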

Test Plan: CI and internal testing.

Differential Revision: D52637209

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117116
Approved by: https://github.com/zdevito, https://github.com/tianfengfrank
Author: Aaron Shi
Date: 2024-01-10 21:33:56 +00:00
Committed by: PyTorch MergeBot
Parent: 3e9bb8d4de
Commit: 7e37f63e5e


@@ -310,7 +310,7 @@ def escape(n):
 def is_cuda_tensor(obj):
-    return isinstance(obj, torch.Tensor) and obj.is_cuda
+    return isinstance(obj, torch.Tensor) and obj.is_cuda and not isinstance(obj, torch._subclasses.FakeTensor)
 def cuda_allocation_context():
     snapshot = torch.cuda.memory._snapshot()
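
For context, here is a sketch of how a predicate like the updated is_cuda_tensor might be used when scanning live objects; the gc-based walk and the helper name is_real_cuda_tensor are illustrative only and not the detector's actual traversal.

    import gc

    import torch
    from torch._subclasses.fake_tensor import FakeTensor

    def is_real_cuda_tensor(obj):
        # Keep only tensors that occupy real GPU memory; FakeTensors report
        # is_cuda=True but never hit the CUDA caching allocator.
        return (
            isinstance(obj, torch.Tensor)
            and obj.is_cuda
            and not isinstance(obj, FakeTensor)
        )

    # Walk the objects tracked by the garbage collector and count the tensors
    # that actually hold GPU memory.
    cuda_tensors = [o for o in gc.get_objects() if is_real_cuda_tensor(o)]
    print(f"{len(cuda_tensors)} tensors holding GPU memory")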