pytorch/torch/_inductor/codegen
Shangdi Yu aa99e0958f Separate provenance tracking to different levels (#160383)
Summary: As titled. We've received requests from various parties interested in turning on provenance tracking by default. This PR prepares to turn on, by default, the parts of provenance tracking that don't add much overhead.

- Change `provenance_tracking` config to `provenance_tracking_level`
- Turn on the following provenance tracking by default when `basic_provenance_tracking`=True:
    - `set_kernel_post_grad_provenance_tracing` for kernels; this adds a mapping between Triton kernels and post_grad nodes
    - `dump_inductor_provenance_info` when we're dumping the tlparse log
    - `get_graph_provenance_json`, and dump `create_mapping_pre_post_grad_nodes`. This creates a mapping between pre_grad and post_grad nodes. Since we're not turning on provenance tracking in GraphTransformObserver by default, the mapping here may be incomplete/limited.
    - Add stack traces from post_grad nodes to Inductor IR nodes
    - Add exception swallowing for all of the functions above
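The "exception swallowing" item above can be sketched as a best-effort decorator: provenance bookkeeping should never break compilation, so failures are logged and ignored. This is a minimal illustration, not the actual PR code; the function and parameter names (`swallow_provenance_errors`, `record_kernel_provenance`) are hypothetical, and the real helpers live in `torch/_inductor`.

```python
import functools
import logging

log = logging.getLogger(__name__)


def swallow_provenance_errors(fn):
    """Provenance tracking is best-effort: log failures and continue
    instead of letting a bookkeeping error abort compilation."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except Exception:
            log.warning("provenance tracking failed in %s", fn.__name__,
                        exc_info=True)
            return None
    return wrapper


@swallow_provenance_errors
def record_kernel_provenance(kernel_name, post_grad_nodes, mapping):
    # Hypothetical helper: map a generated Triton kernel name back to the
    # post_grad FX nodes it was built from (kernel -> node-name list).
    mapping.setdefault(kernel_name, []).extend(n.name for n in post_grad_nodes)
```

Because the decorator returns `None` on failure, callers must treat the recorded mapping as possibly incomplete, which matches the caveat above about the pre_grad/post_grad mapping being limited.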

Test Plan:
CI

Rollback Plan:

Differential Revision: D80031559

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160383
Approved by: https://github.com/angelayi
2025-08-15 04:59:35 +00:00
aoti_runtime [AOTI] Save data sizes to constants_info (#154534) 2025-05-29 06:39:13 +00:00
cuda Revert "[cutlass] fix dictionary iteration error (#160552)" 2025-08-14 21:41:28 +00:00
mtia [Re-land][Inductor] Support native Inductor as backend for MTIA (#159211) 2025-07-29 17:03:24 +00:00
rocm Remove unnecessary "# noqa: set_linter" comments (#159467) 2025-08-06 21:31:52 +00:00
xpu [inductor][triton] support profile_scratch launcher arg (#159772) 2025-08-08 14:27:38 +00:00
__init__.py
aoti_hipify_utils.py [BE][3/16] fix typos in torch/ (torch/_inductor/) (#156313) 2025-06-23 02:57:12 +00:00
block_analysis.py [Inductor] Restrict block analysis to only match integer dims and strides (#149615) 2025-06-24 22:43:12 +00:00
common.py [inductor][triton] support profile_scratch launcher arg (#159772) 2025-08-08 14:27:38 +00:00
cpp_bmm_template.py
cpp_flex_attention_template.py [Inductor] Set the default value of min_chunk_size to 512 (#150762) 2025-07-21 12:46:05 +00:00
cpp_gemm_template.py [inductor] Add typing to _inductor/ir.py (#149958) 2025-06-30 15:56:35 +00:00
cpp_grouped_gemm_template.py
cpp_micro_gemm.py [Pyrefly][Refactor] Replace dict() calls with literal dict syntax for improved readability (#157735) 2025-07-08 18:10:33 +00:00
cpp_template_kernel.py [Inductor] Set the default value of min_chunk_size to 512 (#150762) 2025-07-21 12:46:05 +00:00
cpp_template.py codecache: Remove cpp_prefix.h duplication per build, then precompile it (#144293) 2025-05-16 17:41:36 +00:00
cpp_utils.py [aoti] Initial Metal support (#153959) 2025-05-23 05:45:35 +00:00
cpp_wrapper_cpu_array_ref.py [inductor] allocate non-blocking copy destinations in pinned memory (#155121) (#158758) 2025-08-07 17:07:26 +00:00
cpp_wrapper_cpu.py Fix unbacked symint and memory leak in inductor memory planning (#159839) 2025-08-11 17:16:15 +00:00
cpp_wrapper_gpu.py [inductor][triton] support profile_scratch launcher arg (#159772) 2025-08-08 14:27:38 +00:00
cpp_wrapper_mps.py [aoti][mps] Initialize mps kernels first (#159753) 2025-08-06 07:54:29 +00:00
cpp.py Separate provenance tracking to different levels (#160383) 2025-08-15 04:59:35 +00:00
cpu_device_op_overrides.py
cuda_combined_scheduling.py multi-kernel matmuls based on varying hint sizes (#156628) 2025-07-12 15:08:21 +00:00
debug_utils.py [Inductor] Refactor wrapper codegen to use Wrapper IR. (#150458) 2025-04-15 17:28:36 +00:00
halide.py [inductor] more size_hint_or_throw usage (#157394) 2025-07-02 20:20:59 +00:00
memory_planning.py Fix unbacked symint and memory leak in inductor memory planning (#159839) 2025-08-11 17:16:15 +00:00
mps_device_op_overrides.py [aoti] Initial Metal support (#153959) 2025-05-23 05:45:35 +00:00
mps.py [aoti][mps] Initialize mps kernels first (#159753) 2025-08-06 07:54:29 +00:00
multi_kernel.py multi-kernel matmuls based on varying hint sizes (#156628) 2025-07-12 15:08:21 +00:00
python_wrapper_mtia.py [Re-land][Inductor] Support native Inductor as backend for MTIA (#159211) 2025-07-29 17:03:24 +00:00
segmented_tree.py [inductor] dont reuse buffers if it affects peak (#145883) (#159530) 2025-08-14 21:14:36 +00:00
simd_kernel_features.py Replace runtime type parameterization (#155221) 2025-06-05 21:43:54 +00:00
simd.py Separate provenance tracking to different levels (#160383) 2025-08-15 04:59:35 +00:00
subgraph.py [inductor] Add typing to _inductor/ir.py (#149958) 2025-06-30 15:56:35 +00:00
triton_combo_kernel.py [BE][3/16] fix typos in torch/ (torch/_inductor/) (#156313) 2025-06-23 02:57:12 +00:00
triton_split_scan.py
triton_utils.py [Inductor] Fix a user-defined Triton kernel bool param codegen issue (#158845) 2025-07-24 00:19:27 +00:00
triton.py [inductor] fix triton bucketize mask propagation (#159961) 2025-08-12 19:59:32 +00:00
wrapper_fxir.py Fix launch grid calculation (#159497) 2025-08-02 01:12:58 +00:00
wrapper.py Separate provenance tracking to different levels (#160383) 2025-08-15 04:59:35 +00:00