pytorch/benchmarks
Simon Fan 7c94652d7d [benchmarks] Add --use-warm-peak-memory (#124326)
Measuring peak memory on the first run can capture cases where compiled artifacts leak into runtime, but it also introduces a lot of noise from cudnn/triton autotuning, which generally uses as much memory as it can get. Making this flag the default will need some discussion, so for now I am only adding it to unblock compiled backward benchmarking (where all autotuning memory use is exposed).

```
e.g. resnet50
# without --warm-peak-memory
memory: eager: 1.95 GB, dynamo: 6.68 GB, ratio: 0.29

# with --warm-peak-memory
memory: eager: 1.96 GB, dynamo: 2.06 GB, ratio: 0.95
```
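The idea behind warm peak-memory measurement can be sketched outside PyTorch with the standard-library `tracemalloc` module: record the peak after a cold run (autotuning noise included), reset the peak counter, and measure again on a warm run. The `run` workload and its one-off cache below are hypothetical stand-ins for cudnn/triton autotuning scratch memory, not part of the benchmark harness.

```python
import tracemalloc

_autotune_cache = {}

def run(x):
    # Hypothetical workload: the first call makes a large one-off
    # allocation, standing in for autotuning scratch memory.
    if "tuned" not in _autotune_cache:
        scratch = [0] * 1_000_000
        _autotune_cache["tuned"] = len(scratch)
    return [v * 2 for v in x]

tracemalloc.start()
data = list(range(10_000))

run(data)                     # cold run: one-off allocation is included
_, cold_peak = tracemalloc.get_traced_memory()

tracemalloc.reset_peak()      # analogous to torch.cuda.reset_peak_memory_stats()
run(data)                     # warm run: steady-state peak only
_, warm_peak = tracemalloc.get_traced_memory()

print(f"cold peak: {cold_peak}, warm peak: {warm_peak}")
tracemalloc.stop()
```

The warm peak is much smaller because the transient "autotuning" buffer no longer dominates, mirroring the resnet50 numbers above.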

![image](https://github.com/pytorch/pytorch/assets/9547562/36cd8687-a7f7-4ec6-b989-7e1263aa7d37)

This issue may also affect large models. Here's an example case of cudnn_convolution_backward autotuning allocating 30GB to tune a model otherwise using 5GB memory:
![image](https://github.com/pytorch/pytorch/assets/9547562/4e544b11-3579-4c69-811a-91d896f1ba66)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124326
Approved by: https://github.com/jansel
ghstack dependencies: #119411
2024-04-18 02:57:01 +00:00
| Name | Last commit | Date |
|------|-------------|------|
| distributed | IntraNodeComm primitives for allgather_matmul (#118038) | 2024-04-04 00:46:08 +00:00 |
| dynamo | [benchmarks] Add --use-warm-peak-memory (#124326) | 2024-04-18 02:57:01 +00:00 |
| fastrnns | Apply UFMT to all files in benchmarks/ (#105928) | 2023-07-26 01:18:48 +00:00 |
| framework_overhead_benchmark | [BE]: Enable F821 and fix bugs (#116579) | 2024-01-01 08:40:46 +00:00 |
| functional_autograd_benchmark | [BE]: Enable F821 and fix bugs (#116579) | 2024-01-01 08:40:46 +00:00 |
| fuser | Apply UFMT to all files in benchmarks/ (#105928) | 2023-07-26 01:18:48 +00:00 |
| gpt_fast | [BE]: Optimize min/max/sum comprehensions C419 (#123960) | 2024-04-12 23:54:15 +00:00 |
| inference | Allow more backend worker threads with each using a separate cuda stream (#116190) | 2023-12-20 22:08:29 +00:00 |
| instruction_counts | Use strict to toggle strict options in MYPYSTRICT (#118479) | 2024-01-28 19:22:22 +00:00 |
| nested | Apply UFMT to all files in benchmarks/ (#105928) | 2023-07-26 01:18:48 +00:00 |
| operator_benchmark | highlight readme code block (#120228) | 2024-02-22 21:23:08 +00:00 |
| overrides_benchmark | [BE]: Update ruff to 0.285 (#107519) | 2023-08-22 23:16:38 +00:00 |
| profiler_benchmark | Apply UFMT to all files in benchmarks/ (#105928) | 2023-07-26 01:18:48 +00:00 |
| record_function_benchmark | Apply UFMT to all files in benchmarks/ (#105928) | 2023-07-26 01:18:48 +00:00 |
| serialization | Apply UFMT to all files in benchmarks/ (#105928) | 2023-07-26 01:18:48 +00:00 |
| sparse | [BE]: Enable F821 and fix bugs (#116579) | 2024-01-01 08:40:46 +00:00 |
| static_runtime | [PyTorch] fix mixed int32/int64 indices/offsets for embedding_bag_out (#120752) | 2024-02-28 20:13:30 +00:00 |
| tensorexpr | [BE]: Optimize min/max/sum comprehensions C419 (#123960) | 2024-04-12 23:54:15 +00:00 |
| transformer | ScoreMod API (#121845) | 2024-04-06 01:10:44 +00:00 |
| compare-fastrnn-results.py | Apply UFMT to all files in benchmarks/ (#105928) | 2023-07-26 01:18:48 +00:00 |
| compare.sh | | |
| README.md | Add more child links to benchmark readme (#104627) | 2023-07-06 12:11:00 +00:00 |
| upload_scribe.py | Apply UFMT to all files in benchmarks/ (#105928) | 2023-07-26 01:18:48 +00:00 |

# PyTorch Benchmarks

This folder contains scripts that produce reproducible timings of various PyTorch features.

It also provides mechanisms to compare PyTorch with other frameworks.

## Setup environment

Make sure you're on a machine with CUDA available, then install pytorch and torchvision in the following order:

```bash
# Install torchvision. It comes with the pytorch stable release binary.
conda install pytorch torchvision -c pytorch

# Install the latest pytorch master from source.
# It should supersede the installation from the release binary.
cd $PYTORCH_HOME
python setup.py build develop

# Check the pytorch installation version
python -c "import torch; print(torch.__version__)"
```
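Before running any suite, a quick pre-flight check can confirm the expected packages are importable without initializing CUDA. This is a hypothetical convenience, not part of the benchmark scripts, and individual suites may require more packages (see each subfolder):

```python
import importlib.util

# Packages the setup above installs; extend this list per suite as needed.
required = ["torch", "torchvision"]
missing = [name for name in required if importlib.util.find_spec(name) is None]

if missing:
    print("missing packages:", ", ".join(missing))
else:
    print("all required packages found")
```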

## Benchmark List

Please refer to each subfolder to discover each benchmark suite. Links are provided where descriptions exist: