pytorch/benchmarks/instruction_counts/main.py
Aaron Orenstein 07669ed960 PEP585 update - benchmarks tools torchgen (#145101)
This is one of a series of PRs to update us to PEP 585 (changing `Dict` -> `dict`, `List` -> `list`, etc.). Most of the PRs were completely automated with RUFF as follows:

Since RUFF UP006 is considered an "unsafe" fix, we first need to enable unsafe fixes:

```
--- a/tools/linter/adapters/ruff_linter.py
+++ b/tools/linter/adapters/ruff_linter.py
@@ -313,6 +313,7 @@
                     "ruff",
                     "check",
                     "--fix-only",
+                    "--unsafe-fixes",
                     "--exit-zero",
                     *([f"--config={config}"] if config else []),
                     "--stdin-filename",
```
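For context, the adapter drives `ruff check` as a subprocess that reads source on stdin and writes the fixed source to stdout. Below is a simplified sketch of the patched invocation, with a hypothetical `apply_ruff_fixes` wrapper; the real adapter in `ruff_linter.py` also handles retries, timeouts, and error reporting:

```python
import subprocess
from typing import Optional


def apply_ruff_fixes(code: str, filename: str, config: Optional[str] = None) -> str:
    # Paraphrased sketch of the (patched) command built by ruff_linter.py.
    proc = subprocess.run(
        [
            "ruff",
            "check",
            "--fix-only",
            "--unsafe-fixes",  # the flag this PR adds, so UP006's fix can apply
            "--exit-zero",
            *([f"--config={config}"] if config else []),
            "--stdin-filename",
            filename,
            "-",  # read the source to fix from stdin
        ],
        input=code,
        capture_output=True,
        text=True,
        check=True,
    )
    return proc.stdout  # ruff writes the fixed source to stdout
```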

Then we need to tell RUFF to apply UP006 (once all of these PRs have landed, a final PR will make this permanent):

```
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -40,7 +40,7 @@

 [tool.ruff]
-target-version = "py38"
+target-version = "py39"
 line-length = 88
 src = ["caffe2", "torch", "torchgen", "functorch", "test"]

@@ -87,7 +87,6 @@
     "SIM116", # Disable Use a dictionary instead of consecutive `if` statements
     "SIM117",
     "SIM118",
-    "UP006", # keep-runtime-typing
     "UP007", # keep-runtime-typing
 ]
 select = [
```
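The `target-version` bump is what lets UP006 fire at all: ruff only rewrites typing aliases to builtin generics when the targeted Python version is 3.9+ (or annotations are deferred via `from __future__ import annotations`), because on 3.8 the new spelling fails at runtime. A hypothetical illustration:

```python
# On Python 3.8 this raises at import time:
#     TypeError: 'type' object is not subscriptable
# because parameter and return annotations are evaluated at `def` time.
# On Python 3.9+ builtin generics are subscriptable, hence the bump to
# target-version = "py39" before un-ignoring UP006.
def tally(xs: list[int]) -> dict[int, int]:
    counts: dict[int, int] = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    return counts
```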

Finally, running `lintrunner -a --take RUFF` fixes up the deprecated uses.
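
As an illustration, here is the kind of rewrite this produces (hypothetical snippet, not code from this PR):

```python
# Before: PEP 484 spelling, requires imports from typing.
from typing import Dict, List

def count_words(lines: List[str]) -> Dict[str, int]: ...

# After the RUFF UP006 fix: PEP 585 builtin generics (Python 3.9+).
def count_words(lines: list[str]) -> dict[str, int]: ...
```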

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145101
Approved by: https://github.com/bobrenjc93
2025-01-18 05:05:07 +00:00

"""Basic runner for the instruction count microbenchmarks.
The contents of this file are placeholders, and will be replaced by more
expressive and robust components (e.g. better runner and result display
components) in future iterations. However this allows us to excercise the
underlying benchmark generation infrastructure in the mean time.
"""
# mypy: ignore-errors

import argparse
import sys

from applications import ci
from core.expand import materialize
from definitions.standard import BENCHMARKS
from execution.runner import Runner
from execution.work import WorkOrder


def main(argv: list[str]) -> None:
    # Expand the benchmark definitions into one WorkOrder per concrete case.
    work_orders = tuple(
        WorkOrder(label, autolabels, timer_args, timeout=600, retries=2)
        for label, autolabels, timer_args in materialize(BENCHMARKS)
    )

    results = Runner(work_orders).run()
    for work_order in work_orders:
        print(
            work_order.label,
            work_order.autolabels,
            work_order.timer_args.num_threads,
            results[work_order].instructions,
        )


if __name__ == "__main__":
    modes = {
        "debug": main,
        "ci": ci.main,
    }

    parser = argparse.ArgumentParser()
    parser.add_argument("--mode", type=str, choices=list(modes.keys()), default="debug")

    # parse_known_args receives the full sys.argv, so the program name ends up
    # in remaining_args; strip it before dispatching to the selected mode.
    args, remaining_args = parser.parse_known_args(sys.argv)
    modes[args.mode](remaining_args[1:])