The upstream workflow `head_sha` contains the wrong hash when the
workflow was run manually against a provided commit hash. We use the
COMMIT file that CPack adds to the archive to determine the actual
commit hash that was used to build the binaries.
Note that this will only work for builds since the introduction of that
file, so we cannot process benchmarks for older commits, unfortunately.
This new file in the root of the archives contains the git commit hash,
to be used by e.g. the js-benchmarks webhook to determine which commit
was used to build the utilities.
This commit replaces the default backtrace logic with cpptrace, for
nicer, colored backtraces. Cpptrace runs on all of our supported
platforms except Android. As such, backtrace.h is left in place.
All the backtrace functions are made noinline to have a consistent
number of frames. A maximum depth parameter is added to dump_backtrace
with a default of 100; this should be enough in practice and can easily
be changed where a tighter limit is needed.
Setting the LADYBIRD_BACKTRACE_SNIPPETS environment variable enables
surrounding code snippets in the backtrace, specifically 2 lines above
and below. This number can be changed by calling snippet_context on the
formatter. For the full list of formatting options, see the cpptrace
repository.
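As a rough illustration (a sketch only, not the actual Ladybird
implementation; the skip count and environment variable handling are
paraphrased from the description above):

#include <cpptrace/cpptrace.hpp>
#include <cstddef>
#include <cstdlib>

// noinline keeps the number of frames above this function consistent.
[[gnu::noinline]] void dump_backtrace(std::size_t max_depth = 100)
{
    // Skip this frame itself; walk at most max_depth frames.
    auto trace = cpptrace::generate_trace(/* skip */ 1, max_depth);

    auto formatter = cpptrace::formatter {};
    if (std::getenv("LADYBIRD_BACKTRACE_SNIPPETS") != nullptr) {
        formatter.snippets(true);     // show source around each frame
        formatter.snippet_context(2); // 2 lines above and below
    }
    formatter.print(trace);
}

int main()
{
    dump_backtrace();
}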
On Windows we skipped frames when verification fails, but when
dump_backtrace was added the skipping logic was wrong and would have
skipped frames we care about.
This commit also implements skipping frames on Linux.
The only case where this does not skip all frames is when the call to
backtrace gets intercepted; then we end up skipping one frame fewer
than needed.
To keep delayload on Windows, a patch and an overlay port are used. Once
upstream accepts these changes and vcpkg bumps the version, the patch
can be removed, leaving just the CMake define.
If the Ladybird process crashes or just ends normally, the IPC transport
connection with WebContent may be shut down after a send sync event (for
example: WebContentClient DidRequestCookie) was sent from WebContent,
but before the Ladybird process provided the matching sync event
response (for example: WebContentClient DidRequestCookieResponse). This
can lead to a runaway WebContent process if other IPC events (for
example: WebContentServer DidPaint, or SetSystemVisibilityState, or
MouseEvent, or CloseServer, etc.) are also queued when the IPC
connection is shut down.
At the core of the issue is that the loop waiting for the matching
send sync response will prioritize waiting for the response and remain
spinning even if the IPC connection reports that it was shut down, but
only if other unrelated events happen to be received before the IPC
shutdown is detected. These unrelated events will not be processed
because the loop is stuck waiting for a response that, with the
Ladybird process stopped, will never be sent.
Because the shutdown of the IPC connection is not handled when other
events also happen to be present, new events may be posted for transfer
by the WebContent process if the page is very active. If many new events
are posted this could lead to a slow or very quick memory leak in the
WebContent process due to the queue growing large, sometimes all the way
to total system memory exhaustion. If no events or only a few new events
are sent, then the leak may be hard to detect.
This PR fixes the faulty IPC shutdown handling by not getting stuck if
any messages are present in the receive queue. Before returning to the
caller, any remaining messages are processed immediately.
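A simplified sketch of the fixed wait loop; every name below is a
hypothetical stand-in rather than Ladybird's actual IPC machinery:

#include <deque>
#include <optional>

struct Message { int id = 0; };

struct Connection {
    std::deque<Message> receive_queue;
    bool peer_shut_down = false;

    void pump() { /* assumed: blocks until data arrives or shutdown is seen */ }
    void process(Message const&) { /* assumed: dispatches a non-response message */ }

    std::optional<Message> wait_for_sync_response(int response_id)
    {
        for (;;) {
            // Take the matching response out of the queue if it has arrived.
            for (auto it = receive_queue.begin(); it != receive_queue.end(); ++it) {
                if (it->id == response_id) {
                    Message response = *it;
                    receive_queue.erase(it);
                    return response;
                }
            }
            // Previously a shutdown was only acted on when the queue was
            // empty, so unrelated queued events kept this loop spinning.
            if (peer_shut_down) {
                // The fix: process whatever is left, then return to the caller.
                while (!receive_queue.empty()) {
                    process(receive_queue.front());
                    receive_queue.pop_front();
                }
                return std::nullopt;
            }
            pump();
        }
    }
};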
This gets rid of a lot of pointer chasing from interpreter to executable
to identifier table to the actual identifier.
1.05x speed-up on Kraken/ai-astar.js
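Illustratively, the change amounts to resolving the identifier once up
front instead of on every execution (hypothetical types; LibJS's real
ones differ):

#include <cstddef>
#include <string>
#include <vector>

struct IdentifierTable { std::vector<std::string> identifiers; };

// Before: the instruction stores an index, so every execution chases
// interpreter -> executable -> identifier table -> identifier.
struct GetByIdBefore { std::size_t identifier_index; };

// After: the instruction carries a direct reference to the identifier,
// resolved once when the executable is created.
struct GetByIdAfter { std::string const* identifier; };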
These will generally be cached the vast majority of the time except on
first encounter, and sprinkling [[likely]] gives us a nice boost.
1.10x speed-up on this micro-benchmark:
(() => {
    var a = 3;
    for (let i = 0; i < 100_000_000; ++i) { a; }
    eval("");
})();
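The annotated pattern is roughly the following; the names are
illustrative, and the real caches live in LibJS's bytecode interpreter:

#include <cstddef>
#include <optional>

struct EnvironmentCoordinate { std::size_t index; };

EnvironmentCoordinate resolve_binding(std::optional<EnvironmentCoordinate>& cache)
{
    if (cache.has_value()) [[likely]]
        return *cache; // hot path: taken on every run after the first
    cache = EnvironmentCoordinate { /* slow scope-chain lookup... */ 0 };
    return *cache;
}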
The src IDL attribute was previously implemented as an inline getter
that returned the raw attribute value. This broke spec semantics and
sites like Telegram Web that rely on document.currentScript.src to
compute Webpack’s publicPath.
According to the HTML Standard:
https://html.spec.whatwg.org/multipage/common-dom-interfaces.html#reflecting-content-attributes-in-idl-attributes
For URL-reflecting attributes:
1. If contentAttributeValue is null, then return the empty string.
2. Let urlString be the result of encoding-parsing-and-serializing
a URL given contentAttributeValue,
relative to element’s node document.
3. If urlString is not failure, then return urlString.
This patch moves the getter to HTMLScriptElement.cpp and implements
these steps.
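As a self-contained sketch of those steps (the types are simplified
stand-ins for the DOM classes, and encoding_parse_and_serialize_url is
an assumed helper returning nullopt on failure):

#include <optional>
#include <string>

struct Document {
    std::optional<std::string> encoding_parse_and_serialize_url(std::string const& input) const;
};

std::string script_src_getter(Document const& document,
                              std::optional<std::string> const& content_attribute)
{
    // 1. If contentAttributeValue is null, return the empty string.
    if (!content_attribute.has_value())
        return {};
    // 2. Encoding-parse-and-serialize the value relative to the node document.
    if (auto url_string = document.encoding_parse_and_serialize_url(*content_attribute))
        // 3. If that did not fail, return the serialized URL.
        return *url_string;
    // Otherwise the spec returns the raw attribute value.
    return *content_attribute;
}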
Before this change, you could only scroll the currently hovered scroll
container, even if it was already at the beginning or end and scrolling
it thus had no effect.
Now, if the container doesn't actually scroll, the event is not classed
as handled and moves on to the next scroll container.
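In sketch form, with simplified types (the real hit-testing and scroll
plumbing is more involved):

#include <algorithm>

struct ScrollContainer {
    double offset = 0;
    double max_offset = 0;
    ScrollContainer* parent = nullptr;
};

// Returns true only if the container actually moved, i.e. the event
// counts as handled; otherwise the caller tries the next ancestor.
bool scroll_by(ScrollContainer& container, double delta)
{
    double old_offset = container.offset;
    container.offset = std::clamp(container.offset + delta, 0.0, container.max_offset);
    return container.offset != old_offset;
}

void handle_wheel(ScrollContainer* hovered, double delta)
{
    // Walk up from the hovered container until someone consumes the scroll.
    for (auto* container = hovered; container; container = container->parent) {
        if (scroll_by(*container, delta))
            return;
    }
}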
Instead of converting Int32 operands to doubles and doing double math,
just do the arithmetic operation in i64 space.
This gives us a ~1.25x speed-up on this kind of micro-benchmark:
(() => {
    let a = -2124299999;
    for (let i = 0; i < 100_000_000; ++i) {
        a + a;
    }
})()
Same idea for Add, Sub, and Mul.
There's a fair bit of overflowing Int32 arithmetic in some of the
JetStream benchmarks, and this seems like an obvious improvement.
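A sketch of the Add fast path under these assumptions (Sub and Mul are
analogous; Value here is a simplified stand-in for LibJS's Value):

#include <cstdint>
#include <limits>

struct Value {
    bool is_int32;
    int32_t i32;
    double number;
};

Value add_int32(int32_t lhs, int32_t rhs)
{
    // Do the math in i64 space: the sum of two int32s always fits in an i64.
    int64_t result = int64_t(lhs) + int64_t(rhs);
    if (result >= std::numeric_limits<int32_t>::min()
        && result <= std::numeric_limits<int32_t>::max())
        return Value { true, int32_t(result), 0.0 };
    // Overflowed Int32: fall back to a double, which still holds the sum
    // exactly since its magnitude is below 2^32.
    return Value { false, 0, double(result) };
}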
Currently we default to GITHUB_REF for the checkout of js-benchmarks,
and if we've built up a large backlog of benchmark jobs to run, this can
mean we're running against a severely outdated version of js-benchmarks.
This new behavior allows us to see a failed js-benchmarks job, fix the
bug, and restart the failed job. This is important since we're testing a
specific commit and build from the upstream JS/WASM workflow.
Reverts c8bd58c and 0df9c22. These were used to absolutize values stored
within other `StyleValue`s, but we should store these values as
`StyleValue`s directly rather than within `{Calculated,Percentage}Or`.
This struct will in the future hold information other than a length
resolution context (e.g. context for tree counting functions), and a
single struct is easier to work with than multiple parameters.
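Hypothetically, the shape is something like the following; the field
names are invented for illustration:

struct LengthResolutionContext {
    double viewport_width { 0 };
    double viewport_height { 0 };
    double font_size { 0 };
};

struct CalculationResolutionContext {
    LengthResolutionContext length_context;
    // Future context, e.g. state for tree counting functions, can be
    // added here without changing every resolve function's signature.
};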
The [[Set]] operation on objects will normally end up doing
[[GetOwnProperty]] twice, but if the object and the receiver are one and
the same, we can avoid the double work.
AFAIK this is not observable via proxies or other mechanisms.
1.10x speed-up on MicroBench/object-set-with-rope-strings.js
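A rough sketch of the shortcut, with simplified stand-ins for LibJS's
Object internals:

#include <optional>
#include <string>

struct PropertyDescriptor { /* value, writability, ... */ };

struct Object {
    std::optional<PropertyDescriptor> get_own_property(std::string const& key) const;
    bool set_with_descriptor(std::string const& key,
                             std::optional<PropertyDescriptor> own_descriptor);

    bool internal_set(std::string const& key, Object& receiver)
    {
        auto own_descriptor = get_own_property(key);
        if (this == &receiver) {
            // Object and receiver are one and the same: reuse the descriptor
            // we already fetched instead of a second [[GetOwnProperty]].
            return set_with_descriptor(key, own_descriptor);
        }
        // Generic path: the receiver's own property is looked up separately.
        return receiver.set_with_descriptor(key, receiver.get_own_property(key));
    }
};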
Pending changes to an ancestor document's layout can affect an element's
computed style; e.g. an iframe's width being changed can affect media
query evaluation and the value of the `vw` unit.
Previously we would always use the window's viewport, which was
incorrect if we were within an iframe.
This is likely applicable to all uses of
`Length::ResolutionContext::for_window`.
We were using the font's point size instead of its pixel size. We were
already computing this information earlier in the function anyway, so
let's just use that.