pytorch/benchmarks/dynamo
IvanKobzarev 894ef8c1e3 [torchbench] Inductor freezing bfloat16 conv folding needs high tolerance (#145623)
Issue:
https://github.com/pytorch/pytorch/issues/144888

Torchbench of the timm lcnet_050 model fails the accuracy check with `--freezing --inference --bfloat16`:
`res_error==0.12`
With inductor convolution constant folding turned off, `res_error==0.016`.

`float16 error ~ 0.00669`
`float16 without conv folding ~ 0.0018`

Convolution folding increases the error by almost an order of magnitude.

We should revisit conv folding and try to improve its accuracy, e.g. by doing the folding at compilation time in float64.

For now, this PR adds counters to identify whether convolution folding happened and, when bfloat16 is combined with conv folding, raises the tolerance multiplier to the maximum level (10) so the accuracy test passes.
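
The general idea, as a minimal sketch (the counter key and helper name below are illustrative, not the exact code added to common.py):

    import torch
    from torch._dynamo.utils import counters

    def accuracy_tolerance_multiplier(dtype, freezing, base_multiplier):
        # Hypothetical helper: if inductor constant-folded a convolution during
        # freezing and the run is in bfloat16, relax the accuracy tolerance to
        # the maximum multiplier (10) so the known precision loss from folding
        # does not fail the accuracy check.
        conv_folded = counters["inductor"]["binary_folding_conv"] > 0  # illustrative counter key
        if freezing and conv_folded and dtype == torch.bfloat16:
            return 10.0
        return base_multiplier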

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145623
Approved by: https://github.com/eellison
2025-01-30 12:46:35 +00:00
ci_expected_accuracy Revert "[dynamo] Use polyfill to implement comparison operators (#144485)" 2025-01-29 21:30:42 +00:00
microbenchmarks PEP585 update - benchmarks tools torchgen (#145101) 2025-01-18 05:05:07 +00:00
pr_time_benchmarks partitioner: avoid inserting duplicates into heap (#145082) 2025-01-28 23:44:45 +00:00
__init__.py
all_torchbench_models_list.txt Add benchmarks.py to run all benchmarks, add new file with all torchbench model names (#94146) 2023-02-08 01:18:38 +00:00
benchmarks.py PEP585 update - benchmarks tools torchgen (#145101) 2025-01-18 05:05:07 +00:00
check_accuracy.py Fix unused Python variables outside torch/ and test/ (#136359) 2024-12-11 17:10:23 +00:00
check_csv.py Enable inductor CI for huggingface (#86792) 2022-10-21 01:38:46 +00:00
check_graph_breaks.py Fix unused Python variables outside torch/ and test/ (#136359) 2024-12-11 17:10:23 +00:00
check_memory_compression_ratio.py [inductor] Check memory compression ratio in model tests (#89305) 2023-01-30 22:01:06 +00:00
check_perf_csv.py [AOTI] Turn on the ABI-compatible mode as default (#136534) 2024-10-13 14:42:58 +00:00
combine_csv.py [BE][Easy][3/19] enforce style for empty lines in import segments in benchmarks/ (#129754) 2024-07-17 14:34:42 +00:00
common.py [torchbench] Inductor freezing bfloat16 conv folding needs high tolerance (#145623) 2025-01-30 12:46:35 +00:00
dist_util.py Fix unused Python variables outside torch/ and test/ (#136359) 2024-12-11 17:10:23 +00:00
distributed.py [BE][Easy][3/19] enforce style for empty lines in import segments in benchmarks/ (#129754) 2024-07-17 14:34:42 +00:00
expected_ci_perf_inductor_torchbench.csv [dynamo][hf_bigbird] Actually graph break on tensor.unsqueeze_/resize_ (#99986) 2023-04-26 18:50:06 +00:00
expected_ci_speedup_inductor_torchbench_cpu.csv [AOTI] Add a boxed_run API (#142213) 2025-01-14 18:47:42 +00:00
huggingface_models_list_cpu.txt tuned best BS with inductor on cpu for E2E models (#94181) 2023-02-09 13:32:57 +00:00
huggingface_models_list.txt [dynamo][benchmarks] HF - Fix seq len and batch sizes (#89165) 2022-11-17 06:14:24 +00:00
huggingface.py Enable autograd cache on inductor tests (#140890) 2024-11-27 20:41:43 +00:00
huggingface.yaml change GPT2ForSequenceClassification inference accuracy tolerance (#136749) 2024-10-12 01:12:28 +00:00
join_results.py [BE] Format .ci/ / .github/ / benchmarks/ / functorch/ / tools/ / torchgen/ with ruff format (#132577) 2024-10-11 18:30:26 +00:00
Makefile Fix the inductor ci (#128879) 2024-06-17 22:20:33 +00:00
parse_logs.py [BE][Easy][3/19] enforce style for empty lines in import segments in benchmarks/ (#129754) 2024-07-17 14:34:42 +00:00
README.md [doc] Rewrite benchmarks/dynamo/README.md (#115485) 2023-12-10 00:37:53 +00:00
run_all.sh Add benchmarks.py to run all benchmarks, add new file with all torchbench model names (#94146) 2023-02-08 01:18:38 +00:00
run_delta.sh Utility for running delta comparisons between two flag configs (#95411) 2023-02-25 02:30:22 +00:00
runner.py [BE] fix ruff rule E226: add missing whitespace around operator in f-strings (#144415) 2025-01-08 21:55:00 +00:00
summarize_perf.py [BE]: Enable RUF015 codebase wide (#115507) 2023-12-11 15:51:01 +00:00
test.py [BE][Easy][3/19] enforce style for empty lines in import segments in benchmarks/ (#129754) 2024-07-17 14:34:42 +00:00
timm_models_list_cpu.txt [CI] Update the pinned timm version (#108076) 2023-09-07 11:38:13 +00:00
timm_models_list.txt tune down batch-size for res2net to avoid OOM (#122977) 2024-03-30 03:54:53 +00:00
timm_models.py [dynamo][benchmarks] Stop benchmarking compile time of dead code (#145590) 2025-01-29 22:14:47 +00:00
timm_models.yaml [dynamo][benchmarks] Stop benchmarking compile time of dead code (#145590) 2025-01-29 22:14:47 +00:00
torchao_backend.py Rename cache limit to recompile limit in configs (#143709) 2024-12-22 10:03:57 +00:00
torchbench_models_list_cpu.txt tuned best BS with inductor on cpu for E2E models (#94181) 2023-02-09 13:32:57 +00:00
torchbench_models_list.txt
torchbench.py [torchbench] Fix mobilenetv2 inductor freezing fail_accuracy (#145296) 2025-01-22 15:54:09 +00:00
torchbench.yaml [torchbench] Fix mobilenetv2 inductor freezing fail_accuracy (#145296) 2025-01-22 15:54:09 +00:00
training_loss.py [BE] fix ruff rule E226: add missing whitespace around operator in f-strings (#144415) 2025-01-08 21:55:00 +00:00

torch.compile() Benchmarking

This directory contains benchmarking code for TorchDynamo and many backends including TorchInductor. It includes three main benchmark suites:

  • TorchBenchmark: A diverse set of models, initially seeded from highly cited research models as ranked by Papers With Code. See torchbench installation and torchbench.py for the low-level runner. Makefile also contains the commands needed to set up TorchBenchmark to match the versions used in PyTorch CI.

  • Models from HuggingFace: Primarily transformer models, with representative models chosen for each category available. The low-level runner (huggingface.py) automatically downloads and installs the needed dependencies on first run.

  • Models from TIMM: Primarily vision models, with representative models chosen for each category available. The low-level runner (timm_models.py) automatically downloads and installs the needed dependencies on first run.

GPU Performance Dashboard

Daily results from the benchmarks here are available in the TorchInductor Performance Dashboard, currently run on an NVIDIA A100 GPU.

The inductor-perf-test-nightly.yml workflow generates the data in the performance dashboard. If you have the needed permissions, you can benchmark your own branch on the PyTorch GitHub repo by:

  1. Select "Run workflow" at the top right of the workflow page
  2. Select the branch you want to benchmark
  3. Choose the options (such as training vs inference)
  4. Click "Run workflow"
  5. Wait for the job to complete (4 to 12 hours depending on backlog)
  6. Go to the dashboard
  7. Select your branch and commit at the top of the dashboard

The dashboard compares two commits: a "Base Commit" and a "New Commit". An entry such as 2.38x → 2.41x means that performance improved from a 2.38x speedup at the base commit to 2.41x at the new commit. All performance results are normalized to eager-mode PyTorch (1x), and higher is better.

CPU Performance Dashboard

The TorchInductor CPU Performance Dashboard is tracked on a GitHub issue and updated periodically.

Running Locally

Raw commands used to generate the data for the performance dashboards can be found here.

To summarize, there are three scripts, one to run each set of benchmarks:

  • ./benchmarks/dynamo/torchbench.py ...
  • ./benchmarks/dynamo/huggingface.py ...
  • ./benchmarks/dynamo/timm_models.py ...

Each of these scripts takes the same set of arguments. The ones used by dashboards are:

  • --accuracy or --performance: selects between checking correctness and measuring speedup (both are run for the dashboard).
  • --training or --inference: selects between measuring training and inference (both are run for the dashboard).
  • --device=cuda or --device=cpu: selects device to measure.
  • --amp, --bfloat16, --float16, --float32: selects the precision to use; --amp is used for training and --bfloat16 for inference.
  • --cold-start-latency: disables caching to accurately measure compile times.
  • --backend=inductor: selects TorchInductor as the compiler backend to measure. Many more are available, see --help.
  • --output=<filename>.csv: where to write results to.
  • --dynamic-shapes --dynamic-batch-only: used when the dynamic config is enabled.
  • --disable-cudagraphs: used by configurations without cudagraphs enabled (default).
  • --freezing: enable additional inference-only optimizations.
  • --cpp-wrapper: enable C++ wrapper code to lower overheads.
  • TORCHINDUCTOR_MAX_AUTOTUNE=1 (environment variable): used to measure max-autotune mode, which is run weekly due to its longer compile times (see the example after this list).
  • --export-aot-inductor: benchmarks ahead-of-time compilation mode.
  • --total-partitions and --partition-id: used to parallelize benchmarking across different machines.
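
For example, a weekly max-autotune inference measurement could be launched as follows (the output filename here is illustrative; the flags are the ones listed above):

TORCHINDUCTOR_MAX_AUTOTUNE=1 ./benchmarks/dynamo/torchbench.py --performance --inference --bfloat16 --backend=inductor --cold-start-latency --output=torchbench_inference_max_autotune.csv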

For debugging, you can run just a single benchmark by adding the --only=<NAME> flag, as in the example below.
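
For instance, an accuracy check of a single TIMM model under inference freezing might look like this (the model name and output filename are illustrative):

./benchmarks/dynamo/timm_models.py --accuracy --inference --bfloat16 --freezing --backend=inductor --only=lcnet_050 --output=timm_accuracy_debug.csv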

A complete list of options can be seen by running each of the runners with the --help flag.

As an example, the commands to generate the first line of the dashboard (performance only) would be:

./benchmarks/dynamo/torchbench.py --performance --training --amp --backend=inductor --output=torchbench_training.csv
./benchmarks/dynamo/torchbench.py --performance --inference --bfloat16 --backend=inductor --output=torchbench_inference.csv

./benchmarks/dynamo/huggingface.py --performance --training --amp --backend=inductor --output=huggingface_training.csv
./benchmarks/dynamo/huggingface.py --performance --inference --bfloat16 --backend=inductor --output=huggingface_inference.csv

./benchmarks/dynamo/timm_models.py --performance --training --amp --backend=inductor --output=timm_models_training.csv
./benchmarks/dynamo/timm_models.py --performance --inference --bfloat16 --backend=inductor --output=timm_models_inference.csv