pytorch/torch/csrc/cuda/memory_snapshot.h
Shivam Raikundalia 1083bc749d [Memory Snapshot] Add Flag to Toggle Global and Local Callbacks for Annotations (#154932)
Summary:
There are some cases where we want only local annotations for memory snapshots, such as when executing inside a CUDA stream callback, which cannot launch CUDA operators. Otherwise, CUDA errors occur: `Exception in RecordFunction callback: CUDA error: operation not permitted`

However, we still need an option to turn the annotations on globally so that on-demand snapshots can get annotations. Additionally, there may be cases in which auto-trace will also want annotations via record functions, so we expose the flag to auto-trace as well.

Test Plan:
Run the MVAI executable and verify that the errors go away.

Rollback Plan:

Differential Revision: D75831687

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154932
Approved by: https://github.com/mzzchy, https://github.com/sanrise
2025-06-04 23:15:19 +00:00


#pragma once
#include <torch/csrc/Export.h>
#include <cstdint>
#include <optional>
#include <string>
namespace torch::cuda {
// C++-only versions of these, for python use
// those defined in cuda/Module.cpp which also record python state.
TORCH_CUDA_CU_API void _record_memory_history(
    bool enabled,
    bool record_context = true,
    int64_t trace_alloc_max_entries = 1,
    bool trace_alloc_record_context = false,
    bool record_cpp_context = false,
    bool clearHistory = false,
    bool compileContext = false,
    bool globalRecordAllocations = false);
TORCH_CUDA_CU_API void _record_memory_history(
    std::optional<std::string> enabled = "all",
    std::optional<std::string> context = "all",
    const std::string& stacks = "all",
    size_t max_entries = SIZE_MAX,
    bool clearHistory = false,
    bool compileContext = false,
    bool globalRecordAllocations = false);
TORCH_CUDA_CU_API std::string _memory_snapshot_pickled();
} // namespace torch::cuda