
Benchmarking tool for the autograd API

This folder contains a set of self-contained scripts that allow you to benchmark autograd with different common models. It is designed to run the benchmark before and after your change and generate a table you can share on the PR.

To do so, run functional_autograd_benchmark.py before your change (writing its output to before.txt) and again after your change (writing its output to after.txt). You can then use compare.py to generate a markdown table comparing the two runs.
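The comparison step can be sketched in a few lines of Python. This is a hypothetical illustration of the kind of table generation compare.py performs; the real script parses the benchmark's actual output format, so the line format and field names below are assumptions:

```python
# Hypothetical sketch of a before/after comparison; the real compare.py
# parses functional_autograd_benchmark.py's own output format.

def parse_results(text):
    """Parse lines of the assumed form 'model, task: <seconds>' into a dict."""
    results = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        results[key.strip()] = float(value)
    return results

def markdown_table(before, after):
    """Build a markdown table comparing two runs keyed by 'model, task'."""
    rows = [
        "| model, task | before (s) | after (s) | speedup |",
        "| --- | --- | --- | --- |",
    ]
    for key in before:
        b, a = before[key], after[key]
        rows.append(f"| {key} | {b:.4f} | {a:.4f} | {b / a:.2f}x |")
    return "\n".join(rows)

before = parse_results("resnet18, vjp: 0.40\nresnet18, jacobian: 3.20")
after = parse_results("resnet18, vjp: 0.20\nresnet18, jacobian: 1.60")
print(markdown_table(before, after))
```

The markdown output can be pasted directly into a GitHub PR description, which renders it as a table.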

In general, the default arguments of functional_autograd_benchmark.py should be used. You can change them, however, to force a given device or to force running even the (very) slow settings.

Sample usage

# Make sure you compile pytorch in release mode and with the same flags before/after
export DEBUG=0
# When running on CPU, it might be required to limit the number of cores to avoid oversubscription
export OMP_NUM_THREADS=10

# Compile pytorch with the base revision
git checkout main
python -m pip install --no-build-isolation -v -e .

# Install dependencies:
# Scipy is required by detr
pip install scipy

# Run the benchmark for the base
# This will use the GPU if available.
pushd benchmarks/functional_autograd_benchmark
python functional_autograd_benchmark.py --output before.txt

# Compile pytorch with your change
popd
git checkout your_feature_branch
python -m pip install --no-build-isolation -v -e .

# Run the benchmark for the new version
pushd benchmarks/functional_autograd_benchmark
python functional_autograd_benchmark.py --output after.txt

# Get the markdown table that you can paste in your github PR
python compare.py

popd

Files in this folder:

  • functional_autograd_benchmark.py is the main entry point to run the benchmark.
  • compare.py is the entry point to run the comparison script that generates a markdown table.
  • torchaudio_models.py and torchvision_models.py contain code extracted from torchaudio and torchvision so that the models can be run without having a specific version of these libraries installed.
  • ppl_models.py, vision_models.py and audio_text_models.py contain all the getter functions used for the benchmark.
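The core measurement these scripts perform can be sketched in plain Python: warm up, time a callable over several iterations, and report summary statistics. The stand-in workload and iteration counts below are assumptions for illustration; the real benchmark times forward passes and autograd APIs on actual torch models:

```python
import statistics
import time

# Stand-in for a model's forward+backward; the real benchmark exercises
# autograd APIs (vjp, jacobian, hessian, ...) on torch models instead.
def dummy_workload():
    return sum(i * i for i in range(10_000))

def benchmark(fn, warmup=3, iters=10):
    """Time fn over several iterations; return (mean, stddev) in seconds."""
    for _ in range(warmup):  # warm-up runs are excluded from timing
        fn()
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return statistics.mean(times), statistics.stdev(times)

mean, std = benchmark(dummy_workload)
print(f"mean: {mean:.6f}s, stddev: {std:.6f}s")
```

Warm-up iterations matter in practice: the first calls often pay one-time costs (allocation, caching, JIT compilation) that would otherwise skew the mean.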

Benchmarking against functorch

# Install stable functorch:
pip install functorch
# or install from source:
pip install git+https://github.com/pytorch/functorch

# Run the benchmark for the base
# This will use the GPU if available.
pushd benchmarks/functional_autograd_benchmark
python functional_autograd_benchmark.py --output bench-with-functorch.txt