pytorch/torch/cuda/profiler.py
Alexander Grund 93719440b8 Replace map(lambda constructs (#46462)
Summary:
Follow-up of https://github.com/pytorch/pytorch/issues/46461 with a similar goal

Makes them more readable and possibly faster. Care has to be taken when doing the replacement: in Python 3 `map` is evaluated lazily, so rewriting it as an eager list comprehension could change behavior or memory use, whereas a generator expression `(x for x in xs)` preserves the lazy evaluation. That is a benefit in cases where the full list of values never needs to exist in memory (e.g. when the result is passed to `tuple`, `extend`, or `join`).
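
As an illustration, a before/after sketch of the kind of rewrite this PR performs (the "before" line is a hypothetical reconstruction, not the exact prior code):

flags = ["gpustarttimestamp", "streamid"]

# Before: map with a lambda -- lazy in Python 3, but noisier to read
data = b'\n'.join(map(lambda f: f.encode('ascii'), flags))

# After: a generator expression -- equally lazy and more idiomatic
data = b'\n'.join(f.encode('ascii') for f in flags)

Both forms avoid building an intermediate list; only the list-comprehension form `[f.encode('ascii') for f in flags]` would materialize one.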

Pull Request resolved: https://github.com/pytorch/pytorch/pull/46462

Reviewed By: zou3519

Differential Revision: D24422343

Pulled By: ezyang

fbshipit-source-id: 252e33499c92ac0b15238f2df32681dbbda2b237
2020-10-22 09:50:22 -07:00


import tempfile
import contextlib
from . import cudart, check_error


# Default options written to the CUDA profiler configuration file.
DEFAULT_FLAGS = [
    "gpustarttimestamp",
    "gpuendtimestamp",
    "gridsize3d",
    "threadblocksize",
    "streamid",
    "enableonstart 0",
    "conckerneltrace",
]


def init(output_file, flags=None, output_mode='key_value'):
    rt = cudart()
    if not hasattr(rt, 'cudaOutputMode'):
        raise AssertionError("HIP does not support profiler initialization!")
    flags = DEFAULT_FLAGS if flags is None else flags
    if output_mode == 'key_value':
        output_mode_enum = rt.cudaOutputMode.KeyValuePair
    elif output_mode == 'csv':
        output_mode_enum = rt.cudaOutputMode.CSV
    else:
        raise RuntimeError("supported CUDA profiler output modes are: key_value and csv")
    # Write the profiler flags to a temporary config file and hand it to the CUDA runtime.
    with tempfile.NamedTemporaryFile(delete=True) as f:
        f.write(b'\n'.join(f.encode('ascii') for f in flags))
        f.flush()
        check_error(rt.cudaProfilerInitialize(f.name, output_file, output_mode_enum))


def start():
    check_error(cudart().cudaProfilerStart())


def stop():
    check_error(cudart().cudaProfilerStop())


@contextlib.contextmanager
def profile():
    # Start the profiler on entry and stop it on exit, even if the profiled code raises.
    try:
        start()
        yield
    finally:
        stop()
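
For context, a minimal usage sketch (not part of this file; the nvprof invocation below reflects a typical workflow rather than anything this module mandates): `profile()` brackets a region with cudaProfilerStart()/cudaProfilerStop(), so an external profiler launched with capture disabled at start records only that region.

# Illustrative usage -- run under an external profiler, e.g.:
#   nvprof --profile-from-start off python my_script.py
import torch
import torch.cuda.profiler as profiler

model = torch.nn.Linear(64, 64).cuda()
x = torch.randn(32, 64, device='cuda')

with profiler.profile():        # cudaProfilerStart() on entry, cudaProfilerStop() on exit
    y = model(x)
    torch.cuda.synchronize()    # ensure the kernels of interest have actually executed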