pytorch/benchmarks/gpt_fast
Huy Do eb553ae3cf Fix broken gpt_fast micro benchmark after #144315 (#145235)
The benchmark is failing with the following error:

```
  File "/var/lib/jenkins/workspace/benchmarks/gpt_fast/benchmark.py", line 333, in <module>
    main(output_file=args.output, only_model=args.only)
  File "/var/lib/jenkins/workspace/benchmarks/gpt_fast/benchmark.py", line 308, in main
    lst = func(device)
  File "/var/lib/jenkins/workspace/benchmarks/gpt_fast/benchmark.py", line 66, in run_mlp_layer_norm_gelu
    us_per_iter = benchmarker.benchmark(compiled_mod, (x,)) * 1000
  File "/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/_inductor/runtime/benchmarking.py", line 39, in wrapper
    return fn(self, *args, **kwargs)
TypeError: benchmark() missing 1 required positional argument: 'fn_kwargs'
```

An example of the failure can be seen at https://github.com/pytorch/pytorch/actions/runs/12862761823/job/35858912555
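The traceback shows that, after #144315, `benchmarker.benchmark` takes the callable's keyword arguments as a required positional parameter (`fn_kwargs`), which the gpt_fast call site does not pass. A minimal sketch of the kind of call-site fix this implies, assuming an empty kwargs dict is all that is needed (the exact change in #145235 may differ):

```python
# Hypothetical call-site fix in benchmarks/gpt_fast/benchmark.py.
# benchmarker.benchmark expects the callable, its positional args, and its
# keyword args explicitly, per the error message above.

# Before (fails: fn_kwargs is a required positional argument):
# us_per_iter = benchmarker.benchmark(compiled_mod, (x,)) * 1000

# After: pass an empty kwargs dict so the signature is satisfied.
us_per_iter = benchmarker.benchmark(compiled_mod, (x,), {}) * 1000
```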

I also assign `oncall: pt2` as the owner of this job going forward.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145235
Approved by: https://github.com/nmacchioni
2025-01-21 17:42:24 +00:00
| File | Last commit | Date |
|---|---|---|
| benchmark.py | Fix broken gpt_fast micro benchmark after #144315 (#145235) | 2025-01-21 17:42:24 +00:00 |
| common.py | PEP585 update - benchmarks tools torchgen (#145101) | 2025-01-18 05:05:07 +00:00 |
| generate.py | Migrate from Tuple -> tuple in benchmarks (#144259) | 2025-01-07 04:09:52 +00:00 |
| mixtral_moe_model.py | Reduce the number of layers for mixtral moe model to adapt CI memory limitation (#125608) | 2024-05-06 21:52:25 +00:00 |
| mixtral_moe_quantize.py | Fix unused Python variables outside torch/ and test/ (#136359) | 2024-12-11 17:10:23 +00:00 |
| model.py | | |
| quantize.py | Fix unused Python variables outside torch/ and test/ (#136359) | 2024-12-11 17:10:23 +00:00 |