### Instructions on how to make a new compile time benchmark
1. Make a new benchmark file in `/benchmarks/dynamo/pr_time_benchmarks/benchmarks/`, e.g. `0b75b7ff2b/benchmarks/dynamo/pr_time_benchmarks/benchmarks/add_loop.py` (see the sketch after this list).
2. `cd` into the pr_time_benchmarks directory: `cd benchmarks/dynamo/pr_time_benchmarks`
3. Run `PYTHONPATH=./ python benchmarks/[YOUR_BENCHMARK].py a.txt`
4. (Optional) Flip a flag that you know will change the benchmark and run again with `b.txt`: `PYTHONPATH=./ python benchmarks/[YOUR_BENCHMARK].py b.txt`
5. Compare `a.txt` and `b.txt`, located within the `benchmarks/dynamo/pr_time_benchmarks` folder, to make sure things look as you expect.
6. Check in your new benchmark file and submit a new PR.
7. In a few days, if your benchmark is stable, bug Laith Sakka to enable running your benchmark on all PRs. If you are a Meta employee, you can find the dashboard here: https://internalfb.com/intern/unidash/dashboard/pt2_diff_time_metrics
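
For orientation, here is a minimal benchmark file in the style of `add_loop.py`. The `BenchmarkBase` hooks (`name`, `description`, `_prepare_once`, `_prepare`, `_work`) and the `enable_compile_time_instruction_count().collect_all().append_results(...)` chain follow that example; the benchmark name and the compiled function body are made up for illustration, so treat this as a sketch against the current API, not a drop-in file.

```python
import sys

from benchmark_base import BenchmarkBase

import torch


class Benchmark(BenchmarkBase):
    def name(self):
        # This string identifies the benchmark in the result files.
        return "my_new_benchmark"  # hypothetical name

    def description(self):
        return "compile a small pointwise function"

    def _prepare_once(self):
        # One-time setup shared by all measured iterations.
        self.a = torch.ones(1000)
        self.b = torch.ones(1000)

    def _prepare(self):
        # Reset dynamo so each iteration measures a fresh compile.
        torch._dynamo.reset()

    def _work(self):
        # The measured work: compiling and running a tiny function.
        @torch.compile(backend="eager", fullgraph=True)
        def f(a, b):
            return a + b

        f(self.a, self.b)


def main():
    result_path = sys.argv[1]  # e.g. a.txt or b.txt

    Benchmark().enable_compile_time_instruction_count().collect_all().append_results(
        result_path
    )


if __name__ == "__main__":
    main()
```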
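End to end, steps 2 through 5 look roughly like the following, with `my_new_benchmark.py` standing in for your file:

```bash
cd benchmarks/dynamo/pr_time_benchmarks

# Baseline run; results are appended to a.txt in this directory.
PYTHONPATH=./ python benchmarks/my_new_benchmark.py a.txt

# (Optional) flip the flag you expect to matter, then rerun into b.txt.
PYTHONPATH=./ python benchmarks/my_new_benchmark.py b.txt

# Both files land in benchmarks/dynamo/pr_time_benchmarks; compare them.
diff a.txt b.txt
```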