# Profiler Overview

This README describes the details of how the profiler is implemented.

The profiler instruments PyTorch to collect information about the model's execution. Its main features are:

- Instrumenting op calls on the CPU side
- Interfacing with Kineto to collect information from the GPU (or other accelerators)
- Collecting Python stack traces
- Exporting this information, e.g. as a chrome trace or for processing by downstream tools like HTA (Holistic Trace Analysis); a usage sketch follows below
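For orientation, here is a minimal sketch of driving these features through the user-facing `torch.profiler` API; the model, tensor sizes, and output file name are arbitrary choices for illustration:

```python
import torch
from torch.profiler import ProfilerActivity, profile

model = torch.nn.Linear(128, 128)
x = torch.randn(32, 128)

# Collect CPU-side op events; add GPU events (via Kineto) when CUDA is present.
activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():
    activities.append(ProfilerActivity.CUDA)

# with_stack=True also collects Python stack traces for each event.
with profile(activities=activities, with_stack=True) as prof:
    model(x)

# Export everything as a chrome trace, viewable in chrome://tracing or Perfetto.
prof.export_chrome_trace("trace.json")
```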

## Table of Contents

- [Codebase Structure](#codebase-structure)
- [RecordFunction](#recordfunction)
- [Autograd Integration](#autograd-integration)
- [Collection and Post-Processing](#collection-and-post-processing)
- [Kineto Integration](#kineto-integration)
- [Python Tracing](#python-tracing)

## Codebase Structure

TODO

## RecordFunction

`/aten/src/ATen/record_function.h`

RecordFunction is used by the profiler to instrument CPU-side events.

RecordFunction is a general mechanism for instrumenting function calls in PyTorch. It can also be used for other applications, e.g. see [Features for Large-Scale Deployments](https://pytorch.org/docs/stable/notes/large_scale_deployments.html). PyTorch already includes RecordFunction guards at some important locations; notably, in the dispatcher, surrounding every op call.

Users (or PyTorch itself) can register callbacks that are executed whenever a RecordFunction guard is encountered. The profiler uses this mechanism to record the start and end times of each op call, as well as user-provided RecordFunction annotations. The RecordFunction machinery is designed to have relatively low overhead, especially when no callbacks are registered; nevertheless, some overhead remains.

There is also a Python binding for RecordFunction (`torch.profiler.record_function`); it is often used to annotate module-level events.
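As an illustration, here is a minimal sketch of annotating a region with this binding; the label `"my_forward_block"` and the model are arbitrary:

```python
import torch
from torch.profiler import ProfilerActivity, profile, record_function

model = torch.nn.Linear(128, 128)
x = torch.randn(32, 128)

with profile(activities=[ProfilerActivity.CPU]) as prof:
    # The annotation appears as a user-scope event wrapping the ops inside it.
    with record_function("my_forward_block"):
        model(x)

# Summarize the recorded events, including the annotated region.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```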

## Autograd Integration

The autograd engine is responsible for automatically computing gradients.

The profiler records two pieces of information from the autograd engine:

- Sequence number: a unique-per-thread index assigned to each op call(*) in the forward pass. When a backward op is triggered, it is assigned the sequence number of the forward op that caused it to be executed. Using this information, the profiler can match forward and backward ops; in chrome traces, this matching is exposed through the "fwd_bwd" flow events (see the sketch after this note).
- Forward thread ID: autograd can be used in multi-threaded environments. The forward thread ID identifies the thread on which the forward op was executed. This information is needed because the sequence number is only unique within a thread; the forward thread ID differentiates ops that share a sequence number.

(*) Note that only op invocations whose inputs require gradients are assigned a sequence number.
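To see the matching in practice, here is a minimal sketch that profiles a forward and backward pass; the model and exported file name are arbitrary, and the flow arrows are visible in viewers that render flow events (e.g. Perfetto):

```python
import torch
from torch.profiler import ProfilerActivity, profile

model = torch.nn.Linear(128, 128)
# Inputs that require gradients get sequence numbers (see the note above).
x = torch.randn(32, 128, requires_grad=True)

with profile(activities=[ProfilerActivity.CPU]) as prof:
    loss = model(x).sum()  # forward: ops receive per-thread sequence numbers
    loss.backward()        # backward: ops reuse the matching sequence numbers

# The trace contains "fwd_bwd" flow events linking forward ops to the
# backward ops they triggered.
prof.export_chrome_trace("fwd_bwd_trace.json")
```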

## Collection and Post-Processing

TODO

## Kineto Integration

TODO

## Python Tracing

TODO