CI: Run js-benchmarks with --continue-on-failure for manual builds

If we ever want to rerun the benchmarks for an older commit, e.g.
because we added new benchmarks that we want historical performance
data for, we need to account for the possibility that some benchmarks
fail due to a bug or missing feature in that older build. For manual
runs of the upstream build, pass `--continue-on-failure` so we simply
skip these failing benchmarks.
Author:    Jelle Raaijmakers
Committer: Jelle Raaijmakers
Date:      2025-10-06 21:26:38 +02:00
parent 5f12c091ac
commit 988c7bb758
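
For context, a manual rerun of this kind against an older build could look
roughly like the sketch below. The /path/to/old-build paths are illustrative
placeholders; the commands and flags are the ones the workflow passes to
run.py.

    # Hypothetical local rerun of js-benchmarks against an older build;
    # the /path/to/old-build paths are placeholders.
    cd js-benchmarks
    python3 -m venv .venv
    source .venv/bin/activate
    python3 -m pip install -r requirements.txt
    ./run.py \
        --executable=/path/to/old-build/bin/js \
        --wasm-executable=/path/to/old-build/bin/wasm \
        --iterations=5 \
        --continue-on-failure  # skip benchmarks the older build cannot run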

@@ -72,14 +72,25 @@ jobs:
           cd wasm-repl
           tar -xvzf ladybird-wasm-${{ matrix.os_name }}-${{ matrix.arch }}.tar.gz
-      - name: 'Run the JS and Wasm benchmarks'
+      - name: 'Run the benchmarks'
         shell: bash
         run: |
           cd js-benchmarks
           python3 -m venv .venv
           source .venv/bin/activate
           python3 -m pip install -r requirements.txt
-          ./run.py --executable=${{ github.workspace }}/js-repl/bin/js --wasm-executable=${{ github.workspace }}/wasm-repl/bin/wasm --iterations=5
+          run_options="--iterations=5"
+          if [ "${{ github.event.workflow_run.event }}" = "workflow_dispatch" ]; then
+            # Upstream was run manually; we might run into failing benchmarks as a result of older builds.
+            run_options="${run_options} --continue-on-failure"
+          fi
+          ./run.py \
+            --executable=${{ github.workspace }}/js-repl/bin/js \
+            --wasm-executable=${{ github.workspace }}/wasm-repl/bin/wasm \
+            ${run_options}
       - name: 'Save results as an artifact'
         uses: actions/upload-artifact@v4
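
Note: this benchmarks workflow is triggered by a `workflow_run` event from
the upstream build, so `${{ github.event.workflow_run.event }}` names
whatever event triggered that upstream run; comparing it against
`workflow_dispatch` is what distinguishes manually started builds from
ordinary push-triggered ones.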