pytorch/tools
Kimish Patel d6d726f781 [Pytorch Backend delegation] Add api for backend lowering to query debug (#55462)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55462

This adds an API for backend lowering to query debug handles and to symbolicate exception callstacks thrown from the backend.

The objective of this diff is to improve error reporting when exceptions are
raised from a lowered backend. We would effectively like to get the same
model-level stack trace that you would get without having lowered a module
to a backend.

For example:
```
import torch
from torch import nn

class AA(nn.Module):
  def forward(self, x, y):
    return x + y

class A(nn.Module):
  def __init__(self):
    super().__init__()
    self.AA0 = AA()

  def forward(self, x, y):
    return self.AA0.forward(x, y) + 3

class B(nn.Module):
  def forward(self, x):
    return x + 2

class C(nn.Module):
  def __init__(self):
    super().__init__()
    self.A0 = A()
    self.B0 = B()

  def forward(self, x, y):
    return self.A0.forward(x, y) + self.B0.forward(x)
```
If we script C and then call its forward with torch.rand((2, 3)) and
torch.rand((14, 2)) (mismatched shapes), we will likely see an error stack like:
```
C++ exception with description "The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "<string>", line 3, in forward

    def forward(self, x, y):
      return self.A0.forward(x, y) + self.B0.forward(x)
             ~~~~~~~~~~~~~~~ <--- HERE

  File "<string>", line 3, in forward

    def forward(self, x, y):
      return self.AA0.forward(x, y) + 3
             ~~~~~~~~~~~~~~~~ <--- HERE

  File "<string>", line 3, in forward

    def forward(self, x, y):
      return x + y
             ~~~~~ <--- HERE
```

We would like to see the same error stack if we lowered C.A0 to some
backend.
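
For concreteness, lowering the A submodule to a backend from Python might look
roughly like the sketch below. This is an illustrative sketch only: it assumes
the test backend backend_with_compiler_demo (the backend named in the trace
below) has been registered, and the compile-spec contents are placeholders.
```
import torch

# Illustrative sketch (not part of this diff). Assumes the test backend
# "backend_with_compiler_demo" from the JIT backend tests is registered;
# the compile spec below is a backend-specific placeholder.
lowered_A = torch._C._jit_to_backend(
    "backend_with_compiler_demo",  # backend name, as seen in the trace below
    torch.jit.script(A()),         # scripted submodule to lower
    {"forward": {"": ""}},         # method-name -> compile spec (placeholder)
)
out = lowered_A(torch.rand(2, 3), torch.rand(14, 2))  # mismatched shapes raise
```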

With this diff we get something like:
```
  Module hierarchy:top(C).A0(backend_with_compiler_demoLoweredModule).AA0(AA)
Traceback of TorchScript (most recent call last):
  File "<string>", line 3, in FunctionName_UNKNOWN

    def forward(self, x, y):
      return self.A0.forward(x, y) + self.B0.forward(x)
             ~~~~~~~~~~~~~~~ <--- HERE

  File "<string>", line 5, in FunctionName_UNKNOWN
                typed_inputs: List[Any] = [x, y, ]
                if self.__backend.is_available() :
                  _0, = self.__backend.execute(self.__handles["forward"], typed_inputs)
                        ~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
                  assert isinstance(_0, Tensor)
                  return _0
  File "<string>", line 3, in FunctionName_UNKNOWN

    def forward(self, x, y):
      return self.AA0.forward(x, y) + 3
             ~~~~~~~~~~~~~~~~ <--- HERE

  File "<string>", line 3, in FunctionName_UNKNOWN

    def forward(self, x, y):
      return x + y
             ~~~~~ <--- HERE
```
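
From Python, the improved trace surfaces as an ordinary RuntimeError, so it can
be captured and inspected directly (continuing the hypothetical lowering sketch
above):
```
# Continuing the hedged sketch above: the symbolicated trace, including the
# "Module hierarchy:" line, is carried in the message of the raised error.
try:
    lowered_A(torch.rand(2, 3), torch.rand(14, 2))
except RuntimeError as e:
    print(e)
```
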
This is achieved in 3 parts:
Part 1:
A. BackendDebugInfoRecorder:
   During backend lowering, in `to_backend`, a BackendDebugInfoRecorder is
   created before calling the preprocess function corresponding to the
   backend. It facilitates recording debug info (such as source range +
   inlined callstack) for the lowered module.
B. Instantiate WithBackendDebugInfoRecorder with the BackendDebugInfoRecorder.
   This initializes a thread-local pointer to the BackendDebugInfoRecorder.
C. generate_debug_handles:
   In the preprocess function, the backend calls generate_debug_handles
   separately for each method being lowered. generate_debug_handles takes
   the `Graph` of the method being lowered and returns a map of
   Node*-to-debug_handles. The backend is responsible for storing the debug
   handles appropriately, so that when a raised exception corresponds to a
   particular lowered Node, the exception (and, later, profiling) can be
   reported via those debug handles.
   Inside generate_debug_handles, we query the current
   BackendDebugInfoRecorder, which issues the debug handles. This debug
   handle manager issues debug handles and records the
   debug_handles-to-<source range, inlined callstack> map.
D. Back in `to_backend`, once the preprocess function has finished lowering
   the module, we call `stopRecord` on the BackendDebugInfoRecorder. This
   returns the debug info map, which is then stored inside the lowered
   module.

Part 2:
Serialization:
During serialization for bytecode (lite interpreter), we do two things:
1. Extract all the source ranges contained inside the
debug_handles-to-<source range, inlined callstack> map for the lowered
module. These are the source ranges corresponding to the debug handles,
including the ones inside the inlined callstack. Since we replaced the
original module with the lowered module, we won't be serializing code for
the original module, and thus there are no source ranges for it; that is
why the source ranges have to be stored separately. We lump all the source
ranges for all the lowered modules into one single debug_pkl file.
2. Then we serialize the debug_handles-to-<source range, inlined
callstack> map.

During deserialization we can then reconstruct the
debug_handles-to-<source range, inlined callstack> map. Given that all
debug_handles are unique, we do not need any module information.
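
As a rough Python-level illustration of this path (a hedged sketch; it assumes
the underscore-prefixed lite-interpreter helpers `_save_for_lite_interpreter`
and `_load_for_lite_interpreter` are available, and the on-disk layout of the
debug info is an implementation detail):
```
import torch
from torch.jit.mobile import _load_for_lite_interpreter

# Hedged sketch: save the lowered module in bytecode (lite interpreter) format.
# The debug_handles-to-<source range, inlined callstack> map and the lumped
# source ranges described above are written as part of this step.
lowered_A._save_for_lite_interpreter("lowered_A.ptl")

# Reload with the lite interpreter; debug handles recorded at lowering time can
# now be symbolicated back into the full module-level stack trace on error.
mobile_A = _load_for_lite_interpreter("lowered_A.ptl")
out = mobile_A(torch.rand(2, 3), torch.rand(2, 3))
```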

Test Plan:
Tests are added in test_backend.cpp

Imported from OSS

Differential Revision: D27621330

Reviewed By: raziel

Pulled By: kimishpatel

fbshipit-source-id: 0650ec68cda0df0a945864658cab226a97ba1890
2021-05-22 08:33:07 -07:00
amd_build Disallow versionless Python shebangs (#58275) 2021-05-14 08:26:02 -07:00
autograd Change native functions to take c10::string_view args instead of std::string (#57680) 2021-05-20 18:15:45 -07:00
clang_format_hash [tools] Remove newline from clang-format reference hashes (#55328) 2021-04-06 17:17:19 -07:00
code_analyzer [CUDA graphs] [BC-breaking] Makes torch.cuda.amp.GradScaler scale updates in-place for better composability with graph capture (#55562) 2021-04-30 13:03:05 -07:00
code_coverage Disallow versionless Python shebangs (#58275) 2021-05-14 08:26:02 -07:00
codegen Change native functions to take c10::string_view args instead of std::string (#57680) 2021-05-20 18:15:45 -07:00
config
coverage_plugins_package torch.jit.ignore as a context manager (#55172) 2021-05-14 01:53:50 -07:00
fast_nvcc Use .gv instead of .dot for Graphviz in fast_nvcc (#53208) 2021-03-03 15:01:21 -08:00
gdb Fix Flake8 (#54540) 2021-03-23 13:50:03 -07:00
jit [nnc] Started codegenning some external calls (#58118) 2021-05-13 19:56:50 -07:00
lite_interpreter [Pytorch] Build lite interpreter as default for Android 2021-05-17 14:12:48 -07:00
pyi Add inference mode python bindings and tests (#58045) 2021-05-13 08:55:35 -07:00
rules [codemod][fbcode][1/n] Apply buildifier 2021-04-12 11:04:32 -07:00
setup_helpers Remove distutils (#57040) 2021-04-29 12:10:11 -07:00
shared matches_jit_signatures is dead (#53637) 2021-04-15 12:31:19 -07:00
stats_utils fix boto3 resource not close (#55082) 2021-03-31 16:49:15 -07:00
test [lint] Move shellcheck to its own step (#58623) 2021-05-21 18:23:40 -07:00
__init__.py remediation of S205607 2020-07-17 17:19:47 -07:00
actions_local_runner.py [lint] Move shellcheck to its own step (#58623) 2021-05-21 18:23:40 -07:00
build_libtorch.py Remove Incorrect Comment in tools/build_libtorch and remove Python2 support in the module import (#44888) 2020-09-18 10:03:36 -07:00
build_pytorch_libs.py Remove distutils (#57040) 2021-04-29 12:10:11 -07:00
build_variables.bzl [Pytorch Backend delegation] Add api for backend lowering to query debug (#55462) 2021-05-22 08:33:07 -07:00
clang_format_all.py [PyTorch] Autoformat c10 (#56830) 2021-04-30 21:23:28 -07:00
clang_format_ci.sh [PyTorch] Autoformat c10 (#56830) 2021-04-30 21:23:28 -07:00
clang_format_utils.py [tools] Remove newline from clang-format reference hashes (#55328) 2021-04-06 17:17:19 -07:00
clang_tidy.py Disallow versionless Python shebangs (#58275) 2021-05-14 08:26:02 -07:00
download_mnist.py Remove Incorrect Comment in tools/build_libtorch and remove Python2 support in the module import (#44888) 2020-09-18 10:03:36 -07:00
explicit_ci_jobs.py Allow zero jobs in tools/explicit_ci_jobs.py (#58176) 2021-05-12 13:03:34 -07:00
export_slow_tests.py Disallow versionless Python shebangs (#58275) 2021-05-14 08:26:02 -07:00
extract_scripts.py Harden "Add annotations" workflow (#56071) 2021-04-16 07:46:20 -07:00
flake8_hook.py Disallow versionless Python shebangs (#58275) 2021-05-14 08:26:02 -07:00
generate_torch_version.py Remove distutils (#57040) 2021-04-29 12:10:11 -07:00
generated_dirs.txt
git_add_generated_dirs.sh
git_reset_generated_dirs.sh
git-clang-format Disallow versionless Python shebangs (#58275) 2021-05-14 08:26:02 -07:00
git-pre-commit [ONNX] Utilize ONNX shape inference for ONNX exporter (#40628) 2020-08-30 18:35:46 -07:00
mypy_wrapper.py Print stderrs in tools/mypy_wrapper.py (#58265) 2021-05-13 16:25:42 -07:00
nightly.py Add lint for unqualified type: ignore (#56290) 2021-04-21 08:07:23 -07:00
print_test_stats.py catch exception when running print regression (#58751) 2021-05-21 14:59:42 -07:00
pytorch.version
README.md Share VS Code settings/extensions nicely (#57671) 2021-05-05 15:19:59 -07:00
render_junit.py Convert assert -> cast. (#57458) 2021-05-12 13:54:16 -07:00
run_shellcheck.sh Run ShellCheck on scripts in GitHub Actions workflows (#55486) 2021-04-08 13:15:00 -07:00
test_history.py Catch KeyboardInterrupt in tools/test_history.py (#57780) 2021-05-06 16:19:28 -07:00
trailing_newlines.py Lint trailing newlines (#54737) 2021-03-30 13:09:52 -07:00
translate_annotations.py Translate annotation line numbers from merge to head (#55569) 2021-04-09 11:12:40 -07:00
vscode_settings.py Share VS Code settings/extensions nicely (#57671) 2021-05-05 15:19:59 -07:00

This folder contains a number of scripts which are used as part of the PyTorch build process. This directory also doubles as a Python module hierarchy (thus the __init__.py).

Overview

Modern infrastructure:

  • autograd - Code generation for autograd. This includes definitions of all our derivatives.
  • jit - Code generation for JIT
  • shared - Generic infrastructure that scripts in tools may find useful.
    • module_loader.py - Makes it easier to import arbitrary Python files in a script, without having to add them to the PYTHONPATH first.

Legacy infrastructure (we should kill this):

  • cwrap - Implementation of legacy code generation for THNN/THCUNN. This is used by nnwrap.

Build system pieces:

  • setup_helpers - Helper code for searching for third-party dependencies on the user system.
  • build_pytorch_libs.py - Cross-platform script that builds all of the constituent libraries of PyTorch, but not the PyTorch Python extension itself.
  • build_libtorch.py - Script for building libtorch, a standalone C++ library without Python support. This build script is tested in CI.
  • fast_nvcc - Mostly-transparent wrapper over nvcc that parallelizes compilation when used to build CUDA files for multiple architectures at once.
    • fast_nvcc.py - Python script, entrypoint to the fast nvcc wrapper.

Developer tools which you might find useful:

  • clang_tidy.py - Script for running clang-tidy on lines of your script which you changed.
  • extract_scripts.py - Extract scripts from .github/workflows/*.yml into a specified dir, on which linters such as run_shellcheck.sh can be run. Assumes that every run script has shell: bash unless a different shell is explicitly listed on that specific step (so defaults doesn't currently work), but also has some rules for other situations such as actions/github-script. Exits with nonzero status if any of the extracted scripts contain GitHub Actions expressions: ${{ <expression> }}
  • git_add_generated_dirs.sh and git_reset_generated_dirs.sh - Use this to force add generated files to your Git index, so that you can conveniently run diffs on them when working on code-generation. (See also generated_dirs.txt which specifies the list of directories with generated files.)
  • mypy_wrapper.py - Run mypy on a single file using the appropriate subset of our mypy*.ini configs.
  • run_shellcheck.sh - Find *.sh files (recursively) in the directories specified as arguments, and run ShellCheck on all of them.
  • test_history.py - Query S3 to display history of a single test across multiple jobs over time.
  • trailing_newlines.py - Take names of UTF-8 files from stdin, print names of nonempty files whose contents don't end in exactly one trailing newline, exit with status 1 if no output printed or 0 if some filenames were printed.
  • translate_annotations.py - Read Flake8 or clang-tidy warnings (according to a --regex) from a --file, convert to the JSON format accepted by pytorch/add-annotations-github-action, and translate line numbers from HEAD back in time to the given --commit by running git diff-index --unified=0 appropriately.
  • vscode_settings.py - Merge .vscode/settings_recommended.json into your workspace-local .vscode/settings.json, preferring the former in case of conflicts but otherwise preserving the latter as much as possible.

Important if you want to run on AMD GPU:

  • amd_build - HIPify scripts, for transpiling CUDA into AMD HIP. Right now, PyTorch and Caffe2 share logic for how to do this transpilation, but have separate entry-points for transpiling either PyTorch or Caffe2 code.
    • build_amd.py - Top-level entry point for HIPifying our codebase.

Tools which are only situationally useful: