
# PyTorch Benchmarks

This folder contains scripts that produce reproducible timings of various PyTorch features. It also provides mechanisms to compare PyTorch with other frameworks.

## Setup environment

Make sure you're on a machine with CUDA available, then install pytorch and torchvision in the following order:

```bash
# Install torchvision. It comes with the pytorch stable release binary.
conda install pytorch torchvision -c pytorch

# Install the latest pytorch master from source.
# It should supersede the installation from the release binary.
cd $PYTORCH_HOME
python setup.py build develop

# Check the pytorch installation version.
python -c "import torch; print(torch.__version__)"
```

## Benchmark List

Please refer to each subfolder for the individual benchmark suites; links are provided where descriptions exist: