Benchmarking tool for the autograd API

This folder contains a set of self-contained scripts that allow you to benchmark autograd with different common models. It is designed so that you can run the benchmark before and after your change and generate a table to share on the PR.

To do so, you can use functional_autograd_benchmark.py to run the benchmarks before your change (writing the output to before.txt) and after your change (writing the output to after.txt). You can then use compare.py to get a markdown table comparing the two runs.

The default arguments of functional_autograd_benchmark.py should be used in general. You can change them, though, to force a given device or to run even the (very) slow settings.
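
For example, a hypothetical invocation that pins the run to a given device and includes the slow tasks could look like the line below. The flag names are an assumption, not a guarantee; run python functional_autograd_benchmark.py --help to see the options actually available in your checkout.

# Assumed flag names -- check --help before relying on them
python functional_autograd_benchmark.py --gpu 0 --run-slow-tasks --output before.txt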

Sample usage

# Make sure you compile pytorch in release mode and with the same flags before/after
export DEBUG=0
# When running on CPU, it might be required to limit the number of cores to avoid oversubscription
export OMP_NUM_THREADS=10

# Compile pytorch with the base revision
git checkout master
python setup.py develop

# Install dependencies:
# Scipy is required by detr
pip install scipy

# Run the benchmark for the base
# This will use the GPU if available.
pushd benchmarks/functional_autograd_benchmark
python functional_autograd_benchmark.py --output before.txt

# Compile pytorch with your change
popd
git checkout your_feature_branch
python setup.py develop

# Run the benchmark for the new version
pushd benchmarks/functional_autograd_benchmark
python functional_autograd_benchmark.py --output after.txt

# Get the markdown table that you can paste in your github PR
python compare.py

popd

Files in this folder:

  • functional_autograd_benchmark.py is the main entry point to run the benchmark.
  • compare.py is the entry point to run the comparison script that generates a markdown table.
  • torchaudio_models.py and torchvision_models.py contain code extracted from torchaudio and torchvision so that the models can be run without a specific version of those libraries installed.
  • ppl_models.py, vision_models.py and audio_text_models.py contain all the getter functions used for the benchmark; a sketch of the getter convention follows this list.
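
As a rough illustration of that getter convention, here is a minimal sketch assuming the shape "a forward callable plus the tensors it is differentiated with respect to". The toy model and signature below are illustrative assumptions; see utils.py and the *_models.py files for the real definitions.

import torch
from torch import Tensor

def get_toy_model(device):
    # Hypothetical getter: returns (forward, inputs) so that the benchmark can
    # differentiate the scalar output of `forward` with respect to `inputs`.
    weight = torch.randn(4, 4, device=device, requires_grad=True)
    x = torch.randn(4, device=device)

    def forward(weight: Tensor) -> Tensor:
        # Scalar output that the benchmark differentiates w.r.t. `weight`.
        return (x @ weight).sum()

    return forward, (weight,)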

Benchmarking against functorch

# Install stable functorch:
pip install functorch
# or install from source:
pip install git+https://github.com/pytorch/functorch

# Run the benchmark for the base
# This will use the GPU if available.
pushd benchmarks/functional_autograd_benchmark
python functional_autograd_benchmark.py --output bench-with-functorch.txt