pytorch/benchmarks/dynamo/pr_time_benchmarks

Instructions on how to make a new compile time benchmark

  1. Make a new benchmark file in benchmarks/dynamo/pr_time_benchmarks/benchmarks/, e.g. benchmarks/dynamo/pr_time_benchmarks/benchmarks/add_loop.py (see the sketch after this list)
  2. cd into the pr_time_benchmarks directory: cd benchmarks/dynamo/pr_time_benchmarks
  3. Run PYTHONPATH=./ python benchmarks/[YOUR_BENCHMARK].py a.txt
  4. (Optional) Flip a flag that you know will change the benchmark and run again, this time writing to b.txt: PYTHONPATH=./ python benchmarks/[YOUR_BENCHMARK].py b.txt
  5. Compare a.txt and b.txt (both written inside the benchmarks/dynamo/pr_time_benchmarks folder) to make sure the results look as you expect
  6. Check in your new benchmark file and submit a new PR
  7. In a few days, if your benchmark is stable, bug Laith Sakka to enable running your benchmark on all PRs. If you are a meta employee, you can find the dashboard here: https://internalfb.com/intern/unidash/dashboard/pt2_diff_time_metrics
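
Below is a minimal sketch of what a new benchmark file could look like, loosely modeled on the existing add_loop.py. The _prepare/_work hooks and the enable_compile_time_instruction_count().collect_all().append_results(...) chain follow the pattern used by the current benchmarks, but they are assumptions for illustration; check benchmark_base.py and an existing benchmark for the exact BenchmarkBase interface before copying this.

```python
import sys

from benchmark_base import BenchmarkBase  # provided in pr_time_benchmarks/ (hence PYTHONPATH=./)

import torch


class Benchmark(BenchmarkBase):
    # Hypothetical toy benchmark: measure the compile-time instruction count
    # of torch.compile on a tiny function.

    def name(self):
        return "my_new_benchmark"

    def description(self):
        return "compile a trivial add function"

    def _prepare(self):
        # Reset dynamo so every measured iteration starts from a cold cache.
        torch._dynamo.reset()

    def _work(self):
        @torch.compile(backend="eager")
        def f(x):
            return x + 1

        f(torch.randn(8))


def main():
    result_path = sys.argv[1]  # a.txt or b.txt from the steps above
    # The chained calls below mirror existing benchmarks; verify the exact
    # method names against benchmark_base.py.
    Benchmark().enable_compile_time_instruction_count().collect_all().append_results(
        result_path
    )


if __name__ == "__main__":
    main()
```

Running PYTHONPATH=./ python benchmarks/my_new_benchmark.py a.txt should then append one result line per enabled metric to a.txt, which is the file you compare against b.txt in step 5.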