pytorch/benchmarks/dynamo/pr_time_benchmarks
Laith Sakka 39df901b2a introduce definitely_contiguous and use it for reshape and tensor meta data computation. (#153432)
When a tensor has unbacked symbols, it can be general enough to represent both contiguous and non-contiguous tensors. In that case we can't really evaluate is_contiguous. In many places in the code base we check is_contiguous to take a fast path, but the general path usually works for both contiguous and non-contiguous tensors; in those cases we probably want to use the definitely_contiguous API instead.

This is applied to reshape in this PR, and also to tensor metadata computation: the metadata now has an attribute saying the tensor is contiguous only when it is always contiguous. We store that attribute only when definitely_contiguous is true.
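To illustrate the idea (this is a simplified sketch, not the actual PyTorch implementation): is_contiguous must answer True or False, which forces guards on unbacked symbols, while a definitely_contiguous-style check returns True only when contiguity is provable and False otherwise, so callers fall back to the general path instead of guarding. Here `None` stands in for an unbacked (unknown) symbol:

```python
def definitely_contiguous(sizes, strides):
    """Return True only if (sizes, strides) provably describes a contiguous
    tensor. None models an unbacked symbol whose value is unknown."""
    expected = 1
    for size, stride in reversed(list(zip(sizes, strides))):
        if size is None or stride is None:
            return False  # unknown symbol: cannot prove contiguity
        if size == 1:
            continue  # size-1 dims impose no stride constraint
        if stride != expected:
            return False
        expected *= size
    return True
```

A caller written against this check takes the fast path only on a provably contiguous layout, and the "False" answer means "not provably contiguous", not "definitely non-contiguous".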

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153432
Approved by: https://github.com/bobrenjc93
2025-05-28 03:41:26 +00:00
benchmarks [dynamo][pr_time_benchmark] Add dynamo benchmark to stress test inlining (#153159) 2025-05-09 00:09:19 +00:00
test_check_result Several enhancements for check_results.py (#137925) 2024-10-26 16:27:55 +00:00
__init__.py
benchmark_runner.sh [inductor] Minor compile time optimizations in DefaultHandler (#146282) 2025-02-08 18:00:40 +00:00
check_results.py refresh expected results (#150166) 2025-05-13 04:04:42 +00:00
expected_results.csv introduce definitely_contiguous and use it for reshape and tensor meta data computation. (#153432) 2025-05-28 03:41:26 +00:00
log_benchmarking_time.py Only keep ListOfLinears module in basic_modules_benchmarks and add gpu version. (#135730) 2024-09-14 16:45:52 +00:00
README.md add README.md for compile time benchmarks (#143145) 2024-12-13 05:12:26 +00:00

Instructions on how to make a new compile time benchmark

  1. Make a new benchmark file in /benchmarks/dynamo/pr_time_benchmarks/benchmarks/, e.g. benchmarks/dynamo/pr_time_benchmarks/benchmarks/add_loop.py
  2. cd into the pr_time_benchmarks directory: cd benchmarks/dynamo/pr_time_benchmarks
  3. Run PYTHONPATH=./ python benchmarks/[YOUR_BENCHMARK].py a.txt
  4. (Optional) Flip a flag that you know will change the benchmark and run again with b.txt: PYTHONPATH=./ python benchmarks/[YOUR_BENCHMARK].py b.txt
  5. Compare a.txt and b.txt located within the benchmarks/dynamo/pr_time_benchmarks folder to make sure things look as you expect
  6. Check in your new benchmark file and submit a new PR
  7. In a few days, if your benchmark is stable, ask Laith Sakka to enable running your benchmark on all PRs. If you're a Meta employee, you can find the dashboard at internalfb.com/intern/unidash/dashboard/pt2_diff_time_metrics
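
Steps 2–5 above can be run as the following shell session. This is a sketch: it assumes you start at the pytorch repository root and uses the existing add_loop.py benchmark as a stand-in for [YOUR_BENCHMARK].

```shell
# From the pytorch repository root:
cd benchmarks/dynamo/pr_time_benchmarks

# Baseline run; results are written to a.txt
PYTHONPATH=./ python benchmarks/add_loop.py a.txt

# Flip the flag you are testing, then produce b.txt
PYTHONPATH=./ python benchmarks/add_loop.py b.txt

# Compare the two result files
diff a.txt b.txt
```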