Especially on new setups, it is handy for `headless-browser --run-tests`
to "just work". Anyone using ladybird.py will have LADYBIRD_SOURCE_DIR
set in their environment already.
The streams AO that we were calling to close the stream would assert
if it was not in a readable state. This version of close is exported
publicly in the streams specification, and properly handles this
situation.
Fixes a crash in the imported test, and happens to fix some others!
This header file is never included by any other headers, and including
it would require adding the fontconfig include paths to the swift
interop header generation, which is not desirable.
We haven't built a universal binary in over 11 months. The name is
confusing, and actually breaks esvu on macOS. The fact that nobody
has complained suggests that this is not a common use case.
On my Linux machine with 32 cores, ninja actually defaults to 34 jobs.
By defaulting ourselves to multiprocessing.cpu_count(), we would
decrease the number of jobs used.
We don't need to pick a host compiler every time ladybird.py is invoked.
We only need to do so when configuring the Build directory.
This shaves about 10ms off an invocation of `ladybird.py build` on my
machine, going from ~170ms to ~160ms.
Not cleaning these up by rejecting or resolving the promise causes
the main thread to try to reject them at EventLoop::exit() time.
If the RenderThread has already been destroyed by then, we get into
use-after-free territory and segfault.
We already had everything necessary for pseudo-element support in place,
except the ability to save transition properties in Animatable. This
commit adds the missing part.
The way we echo the cache keys for output variables strips these quotes.
So when we save the caches at the end of CI, the keys are sans quotes.
This patch adds an extra cache restore key without quotes to allow jobs
to fetch their own caches again. This will become moot once all runners
are able to use the Blacksmith cache action.
In WindowProxy.[[Get]] it's not guaranteed that the current principal
global object has an associated document at the moment. This may happen
if a script is continuing to execute while a navigation has been
initiated.
Because of that, we can't blindly dereference the active document
pointer, so this patch adds a null check.
By default, we want `ladybird.py build` to build everything. We were
previously defaulting to only building the Ladybird target.
We now only fall back to the Ladybird target for commands that run a
program. So `ladybird.py run` will build and run Ladybird.
As opposed to just running subprocess.check_call, our `run_command`
utility handles e.g. ctrl+c to avoid spamming the terminal with
KeyboardInterrupt stack traces.
This will allow us to re-use this logic from within other python
scripts. The find_compiler.sh script still exists, as it is used by
some other bash scripts. The pick_host_compiler() function will now
execute find_compiler.py and store its result in $CC and $CXX.
Note that the python script supports Windows.
This is the default python version on macOS, so let's support it since
it is trivial for now. Using "str | None" as a type annotation is only
supported in python 3.10 or later.
This will be needed by other scripts.
To do so, this patch gives up on the importlib method of importing
packages. I tried extracting this helper to e.g. __init__.py, but the
python runtime was unable to find the imported symbols.
When recording the display list for a stacking context, the following
operations (relevant to this bug) happened:
* push a stacking context
* as part of that push a None-value to the scroll frame id stack
* apply filters
* apply masking
* paint recursively
This meant that mask-images were always recorded without scroll frame
id, causing them to be painted without any scroll offset. As a result
mask-images would break as soon as the website using them was scrolled.
Instead, push to the scroll frame id stack later to solve the problem:
* push a stacking context
* apply filters
* apply masking
* push a None-value to the scroll frame id stack
* paint recursively
When serializing CSS declarations we now support combining multiple
properties into a single shorthand property in some cases.
This comes with a healthy dose of FIXMEs, including work to be done
around supporting:
- Nested shorthands (e.g. background, border, etc)
- Shorthands which aren't represented by the ShorthandStyleValue type
- Subproperties pending substitution
This gains us a bunch of new test passes, both for WPT and in-tree.
This exposed a few bugs which caused the following tests to behave
incorrectly:
- `tab-size-text-wrap.html`: This previously relied on a bug where we
incorrectly treated `white-space: pre` as allowing text wrapping. The
fix here is to implement the text-wrap CSS shorthand property.
- `execCommand-preserveWhitespace.html`: We don't correctly serialize
shorthand properties. This is covered by an existing FIXME in
`CSSStyleProperties::serialized()`
- `white-space-shorthand.html`: The last 5 subtests here fail as we
don't correctly handle shorthand properties in
`CSSStyleProperties::remove_property()`. This is covered by an
existing FIXME in said function.
This commit enables building and testing js.exe for windows. The needed
libraries are built in CI, and tests for those that pass were added.
Tests for LibJS which don't require javascripttestrunner were added, but
the rest need to wait for that to be ported to windows.
This commit allows building js.cpp on windows. The repl functionality is
ifdef'ed out. To decrease the number of ifdefs, the code that runs the
repl on Linux was moved into one place. Some globals that are unused as
a result of that are marked maybe_unused. The following commit enables
building and testing js in cmake for windows.
This commit adds the minimal export macros needed to run js.exe on
windows. A followup commit is planned to move to explicit export
entirely.
A static_assert for the size of a struct is also ifdef'ed out, as the
semantics around object layout and inheritance are different in the MSVC
ABI, and the struct IteratorRecord ends up being 40 bytes instead of 32.
We can't iterate over m_cached_list_of_options and call set_selected()
in the loop, since that may end up rebuilding m_cached_list_of_options,
disrupting iteration.
Make copies of the animation timeline list and animations to dispatch
events at before iterating over them. This ensures that they can't be
modified during iteration.
We move m_pending_nodes_for_style_invalidation_due_to_presence_of_has to
a local variable before iterating over it. This ensures that nothing can
be added to it while iterating.
`start_needed_transitions()` decides which animations need to be started
based on previous and current style property values. Before this change,
we were using the style value without animations applied as the
"current" value. This caused issues such as starting a new transition
from the animation’s end value when an ongoing animation was
interrupted.
These tests previously only ran on SerenityOS. They needed test input
location changes. The Stream tests also needed to explicitly set
SO_REUSEADDR for the tcp servers.
Swift jobs were failing a test due to the llvm-symbolizer not being
available in the default location next to the clang binary. swift.org
toolchains don't ship this tool, so LSAN suppressions were not being
applied, failing TestWOFF2.
This was hard to reproduce locally, because I have always had a set of
alternatives set up for the full suite of LLVM tools on my machine.
Their cache action only works on their runners. For jobs that run on
other runners, we have to use the default cache action, at least until
they update their cache product to work (or fall back) on other runners.
This makes the build system aware of which macOS version we're
targeting, and will make it an error to use newer APIs without
explicitly checking their availability.
Note that the js REPL CI job still sets the deployment target to 11.0
explicitly.
We used to subtract the maximum right margin of any containing box,
but we want to subtract the entire right margin instead. The old
behavior yielded incorrect intrusions for right floats not placed in
the root.
If a property uses discrete interpolation and TransitionBehavior is
not set to `AllowDiscrete`, that property should be non-transitionable.
This is now also true for properties whose animation type is not
discrete, but falls back to discrete.
Everything that's not self-hosted or macOS is now pointing to
Blacksmith.sh. Nightly jobs and JS artifact builds use 8VCPU machines,
while regular integration builds & tests use 16VCPU machines.
The people over at Blacksmith.sh have generously offered usage of their
runners for our organization, so let's try to switch over some simple
workflows. The runners should be drop-in replacements.
This commit changes `AK::TypeErasedParameter` to store integer,
floating-point and string types as well as characters and booleans
directly instead of through a type-erased `void const*`.
This is advantageous for binary size:
- The compiler no longer needs to push both the parameter value and a
pointer to it to the stack (which contains the argument array);
storing just the value is enough.
- Instead of instantiating `__format_value` for these types and taking
its address (which generates a GOT entry and an ADRP+LDR pair in
assembly), we just store a constant `TypeErasedParameter::Type` value.
- The biggest saving comes from the fact that we used to instantiate a
distinct `__format_value` for each length of string literal. For
LibJS, this meant 69 different functions! We can now just store them
as a `char const*`.
I opted to use a manual tagged union instead of `Variant`: the code
wouldn't be much shorter if we used a `Variant` since we'd have to
handle converting the standard integer types to `u64`/`i64` and custom
types to the type erased wrapper via explicit constructors anyway. And
compile time overhead is smaller this way.
This gives us some nice binary size savings (numbers are from arm64
macOS LibJS):
    FILE SIZE        VM SIZE
 --------------  --------------
  +52%  +10.3Ki   +52%  +10.3Ki   [__TEXT]
 +5.2%     +768  +5.2%     +768   [__DATA_CONST]
 -0.0%       -7  -0.0%       -7   __TEXT,__cstring
 -3.0%     -144  -3.0%     -144   __TEXT,__stubs
 -1.2%     -176  -1.2%     -176   Function Start Addresses
-11.6%     -432 -11.6%     -432   Indirect Symbol Table
 -1.0%     -448  -1.0%     -448   Code Signature
-18.1%     -768 -18.1%     -768   __DATA_CONST,__got
 -0.8%  -6.83Ki  -0.8%  -6.83Ki   Symbol Table
 -1.0%  -11.2Ki  -1.0%  -11.2Ki   String Table
 -0.9%  -26.1Ki  -0.9%  -26.1Ki   __TEXT,__text
 -7.2%  -20.9Ki  -9.6%  -28.9Ki   [__LINKEDIT]
 -1.0%  -56.0Ki  -1.1%  -64.0Ki   TOTAL
This commit also introduces a GlobalFontConfig class that is now
responsible for fontconfig initialization. That seems sane, even though
FcInit's docs state that if the default configuration has already been
loaded, the routine does nothing.
The goal is to rely on fontconfig for font directory resolution. It
doesn't seem like it would be appropriate to call fontconfig functions
from within LibCore.
I'm not 100% confident that FontDatabase is the correct place, but it
seems okay.
This was the only remaining data type used in display lists that wasn't
atomically ref-counted.
Now that it is, we no longer crash when scrolling on https://vercel.com/
When we try to retrieve benchmark results in the webhook call, we cannot
use the `head_sha` parameter since the workflow run might have a
different `head_sha` associated with it than the upstream workflow run.
This can happen when the JS repl binary workflow runs, a new commit is
pushed to master, and a JS benchmarks workflow run follows, causing this
latter run to be associated with a different commit ID.
This extends the webhook payload to include the current run ID, which
can eventually be used by the webhook script to specifically download
the benchmark results associated with the current run.
Additionally, this changes the JS artifact download to use the upstream
run ID which seems nicer to do anyway.
In the following example:
```js
const f = (i) => ({
obj: { a: { x: i }, b: { x: i } },
g: () => {},
});
```
The body of function `f` is initially parsed as an arrow function. As a
result, what is actually an object expression is interpreted as a formal
parameter with a binding pattern. Since duplicate identifiers are not
allowed in this context (`i` in the example), the parser generates an
error, causing the entire script to fail parsing.
This change ignores the "Duplicate parameter names in bindings" error
during arrow function parameter parsing, allowing the parser to continue
and recognize the object expression of the outer arrow function with an
implicit return.
Fixes error on https://chat.openai.com/
The SharedSingleProducerCircularQueue used here has dubious value. This
queue is used to pass commands to the audio thread, such as play/pause/
seek/volume change/etc. We can make do with a simple locked vector, as
we were blocking to enqueue tasks anyway. We can also use an atomic
bool to tell the audio thread when it needs to take a lock on the task
queue, to keep the thread lock-free most of the time.
This is quite a frequent FIXME log on a number of sites, and it does
not provide much value at this stage. So instead of marking it
with a FIXME extended IDL attribute, let's just comment it out.
This limitation of the underlying Unix socket implementation can
cause IPC failures on pages with tons of images and network requests.
Modify the code called from TransportSocket's send thread to limit the
number of fds to MAX_TRANSFER_FDS, and ensure that we will keep sending
as long as we have either bytes or file descriptors to send.
The spec seems to indicate in its wording that while opaque
origins only serialize to 'null', they can still be tested
for equality with one another. We will probably need to
generate some ID that is unique across processes.
...to become writable.
Solves a triangular deadlock problem that happened in the following case
(copied from https://github.com/LadybirdBrowser/ladybird/issues/1816):
- The WebContent process is spinning on
`send_sync_but_allow_failure` waiting for the UI process to respond
- The UI process is spinning on `send_sync_but_allow_failure`, waiting
for RequestServer to respond
- RequestServer is stuck in this loop, trying to write to the
WebContent's socket file (when I attach to RS, we are always in the
sched_yield call, so we're spinning on EAGAIN).
For me the issue was reliably reproducible on Google Maps and with this
change we no longer deadlock there.
This fixes two race conditions and ASAN crashes in the test for the
same.
The first comes from destroying the internals struct, which was
previously using the standard, thread-unsafe RefCounted CRTP. The
second is from destroying the name, which is secretly another
RefCounted object, in a thread-unsafe manner.
This function attempts to resolve `lighter` and `bolder`, which we don't
want to do when serializing - that should happen in style computation.
This has the unexpected bonus of 37 more WPT passes!
To be vendor-prefixed, an ident has to start with a '-', then have
another '-' later. If the ident simply starts with a '-' then that's
perfectly fine.
Fixes 62 in-tree WPT subtests. :^)
We don't want to reset the values of `font-variant-*` here, as that will
override whatever our parsed font-variant-css2 was, so stop doing that.
Also, font-stretch is mentioned in the spec, but it's a legacy name
alias for font-width, so we don't need to do anything for it.
Gets us 319 WPT passes!
These are not the same as parsing their properties, but are limited to a
small set of keywords each. The easiest way to handle these is parsing
them directly here.
Update some comments while I'm here.
There is an open spec issue for this, and I'm certainly not sure
what the client should be here, but based on my reading of the spec
issue, using the source snapshot from the global seems like a
reasonable enough client for now.
This can be reproduced by performing a javascript URL navigation
with any CSP policy set. For simplicity, simply edit an existing
testcase to add such a policy.
Fixes: #4853
"format(woff-variations)" and pals are supposed to expand like so:
"format(woff) tech(variations)".
However, since we don't support tech() yet, this patch just adds a small
hack where we still treat "woff-variations" as "woff" so that fonts
load and get used, even if we don't make use of the variations yet.
...when Array.prototype and Object.prototype are intact.
If `internal_set()` is called on an array exotic object with a numeric
PropertyKey, and:
- the prototype chain has not been modified (i.e., there are no getters
or setters for indexed properties), and
- the array is not the target of a Proxy object,
then we can directly store the value in the receiver's indexed
properties, without checking whether it already exists somewhere in the
prototype chain.
1.7x improvement on the following program:
```js
function f() {
let a = [];
let i = 0;
while (i < 10_000_000) {
a.push(i);
i++;
}
}
f();
```
Replace the implementation of maths in `UnsignedBigInteger`
and `SignedBigInteger` with LibTomMath. This gives benefits in terms of
less code to maintain, correctness and speed.
These changes also remove now-unused methods and improve the error
propagation for functions allocating lots of memory. Additionally, the
new implementation is always trimmed and won't have dangling zeros when
exporting it.
An overlay port is required to add the `stdc-iec-559` and `install-pc`
patches.
The `stdc-iec-559` patch is required because Clang doesn't define
`__STDC_IEC_559__`. However, glibc and musl define it if `__GCC_IEC_559`
is not defined. The macro is taken from glibc source code.
The `install-pc` patch is required because libtommath doesn't install
the pkg-config files when building statically, compromising our ability
to find it during the build.
Clang: https://clang.llvm.org/c_status.html#:~:text=Yes-,IEC%2060559%20support,-Unknown
glibc: https://sourceware.org/git/?p=glibc.git;a=blob;f=include/stdc-predef.h
Having it as a method instead of a free function is necessary for the
next commits and generally allows for optimizations that require deeper
access into the `UnsignedBigInteger` / `SignedBigInteger`.
Also restrict the exponent to 32 bits to avoid huge memory allocations.
Having it as a method instead of a free function is necessary for the
next commits and generally allows for optimizations that require deeper
access into the `UnsignedBigInteger`.
The spec has a general rule for this, which is roughly that "If it's not
a falsey value, it's true". However, a couple of media-features are
always false, apparently breaking this rule. To handle that, we have an
array of false keywords in the JSON, instead of a single keyword. For
those always-false media-features, we can enter all their values into
this array.
Gets us 2 more WPT subtest passes.
Despite what the spec suggests, modern displays are not progressive, and
WPT expects `@media (scan: progressive)` to fail. So, return `none`
here to accurately represent that.
I've left a FIXME in case we can detect the display type from the OS
somehow in the future.
Gets us 4 WPT subtest passes.
Previously, for `foo < 30px` ranges, we'd flip them and store them as
`30px > foo` instead. That worked fine, but meant the serialization was
wrong. So instead, keep them in their original form.
I experimented with giving Range two optional sub-structs instead of 4
optional members, thinking it would be smaller - but it's actually
larger, because the two Optional<Comparison>s fit snugly together. So,
the slightly-goofy all-Optionals remains.
This gets us 2 WPT passes that I'm aware of.
Instead of rejecting invalid media-feature values at parse time, we are
expected to parse them, and then treat them as Unknown when evaluating
their media query. To implement this, we first try to parse a valid
value as before. If that fails, or we have any trailing tokens that
could be part of a value, we then scoop up all the possible
ComponentValues and treat that as an "unknown"-type MediaFeatureValue.
This gets us 66 WPT passes.
Functionally this is the same as before, as result is always True or
False before this point, and `True && Foo` evaluates to `Foo`. But this
is more clearly correct, instead of correct by coincidence.
This commit on its own has no observable behaviour changes, as we still
only return True or False, but for the next commit, we'll need to be
able to return the Unknown state here, and without this change we'd get
regressions.
This is very hacky and wrong, but it means there's one place to fix,
instead of one for UnresolvedStyleValue, and one for invalid
MediaFeatureValues which I'm about to implement.
The current spec defines this simply as `<ident>`, but does apparently
serialize as lowercase.
Because of this change, we no longer need to care about the deprecated
media types, as they all behave the same as unknown ones.
We still keep an enum around for KnownMediaType, to avoid repeated
string comparisons when evaluating it.
Gets us 2 WPT passes.
When restructuring a block node inside an inline parent, if the
nearest block ancestor is `display: inline-block`, ensure that
the generated anonymous wrappers also have `display: inline-block`.
This fixes layout issues with block elements nested
inside inline-block elements.
Previously floats would be placed next to the highest float that
fitted the new float on its line. However, this violates the rule
that floats should be placed under the preceding float if it does
not fit next to it.
This is functionally the same, since caller_context is the topmost
execution context on the stack, but it makes it more clear that
we are directly inheriting the strict mode from the caller context
when pushing the next context onto the stack.
This test has flaked over the years, so let's add a screenshot test to
catch future regressions.
This copy of the test was taken from:
https://www.webstandards.org/files/acid2/test.html#top
Our CI infra does not support navigating to the "#top" anchor out of the
gate. So the intro section was removed from this copy so that we render
the happy face immediately.
We are required to delay the load event while any fetch is ongoing. We
currently have a very hacky re-fetch for images to go through the shared
resource request infrastructure. We must delay the load event during
this re-fetch.
This becomes an issue in an upcoming commit to import the acid2 test.
If we queue the <object> representation task multiple times in a row, we
would end up clearing the delayer after the first task completed. We
must continue delaying the load event until the last task completes.
This becomes an issue in an upcoming commit to import the acid2 test.
In a recent refactor of font styles, the new FontStyleStyleValue was
introduced; however, the `to_font_slope()` function was not changed to
understand this new type. When it tried to convert such a font style to
a keyword, it failed, and we then rendered the wrong font-style.
When an element's ID is removed, we only want to remove it from
`m_potentially_named_elements` if it is not also considered a
potentially named element due to its name content attribute.
It is currently possible to hit this limit on pages with large numbers
of images. This temporary workaround prevents some WPT tests with large
numbers of images from failing.
The previous commit preserved existing behavior from `master`, but made
it clear that the script was trying to use the `run` arguments in the
build step. We currently cannot specify both build-time and run-time
arguments, so only pass the positional arguments to the run step.
There's no measurable benefit to hiding these in functions, even if they
are only used once (or not at all). Only the BuildVcpkg import is left
alone, as that function does some sys.path manipulation first.
These are meant to indicate private functions by convention, and while
we can argue these functions are "private" to this file, let's err on
the side of readability over every single function being prefixed.
We require pep8 conformance via flake8 in CI, and flake8 seems happy
with this patch. In the future, we should enforce use of black in CI
as well.
Formatted with:
black --line-length 120 ./Meta/ladybird.py
This job uses the windows_ci_ninja preset to build just the
components and unit tests that are known to work with ClangCL on the
amd64-pc-windows-msvc target triple.
As a nightly job, its failures are non-blocking for any PRs, though
they should be fixed eventually or the job will get turned off by
email-annoyed maintainers.
We were previously only testing for network errors, which includes e.g.
DNS resolution and connection errors. It does not include e.g. HTTP 404
responses, which is exercised by Acid 2.
This begins implementation on form-associated custom elements.
This fixes a few WPT tests which I'm importing.
Co-authored-by: Sam Atkins <sam@ladybird.org>
Which has an optimization if both of the strings being passed
through are FlyStrings, which actually ends up being the case
in some places during selector matching when comparing attribute names.
Instead of maintaining more overloads of
Infra::is_ascii_case_insensitive_match, switch
everything over to equals_ignoring_ascii_case instead.
The spec isn't super clear on what disentangling a MessagePort means. But
we are required to send all pending messages before closing the port.
This is a bit tricky because the transport socket performs writes on a
background thread. From the main thread, where the disentanglement will
occur, we don't really know the state of the write thread. So what we do
here is stop the background thread then flush all remaining data from
the main thread.
* We need the full definition of IPC::File in the header.
* We need(ed) Core::System in the header. Move AutoCloseFileDescriptor's
ctor and dtor out-of-line to avoid this.
Fixes a bug that reproduces with the following steps (see the sketch
after the list):
1. Create an object with a getter for property "a" in its prototype,
where the getter adds an "a" property to the object itself.
2. Call the "a" getter in a loop for the first time. This triggers
caching of metadata indicating that the "a" property is located in
the prototype chain.
3. Call the "a" getter in a loop for the second time. Oops, the cache
says the getter is in the prototype chain, but the object now
also has its own "a" property that was added by the first getter
call.
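A minimal sketch of those steps, using hypothetical values rather than
the actual regression test:
```js
const proto = {
    get a() {
        // The getter shadows itself with an own "a" on the receiver.
        Object.defineProperty(this, "a", { value: "own" });
        return "getter";
    },
};
const obj = Object.create(proto);

let first;
for (let i = 0; i < 2; i++) first = obj.a;  // caches "a lives in the prototype chain"

let second;
for (let i = 0; i < 2; i++) second = obj.a; // must observe the own "a" property

console.log(first, second); // expected "own own"; with the stale cache, the getter ran again
```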
We now explicitly enable support for the minimum libraries needed
to build and run the AK testsuite. 81/82 tests are running and
passing. The exception is LexicalPath, as some path behaviour on
Windows is different than on Unix, so the current tests will have lots
of platform-specific failures. The implementer of LexicalPathWindows
recommended Windows-specific tests here, so I will do that in a
follow-up.
This will allow us to gradually build up official support for Windows,
as only targets we have explicitly marked as supported on windows will
be built and ran during CI.
When position changes, we may need to make larger structural updates
to the layout tree. A simple relayout is not sufficient.
This was a source of flakiness in the engine, and gives us at least
+28 new WPT subtest passes.
This test app was created to run test262-based tests on the SerenityOS
target, without having to port the entire test262 runner and driver
python script. We have no need for it here.
This is a utility more than it is a test in itself. We use it to run
test262 tests, which are external to this repo. The test-js runner is
still private test infrastructure though, so it stays where it is.
Before this change, we were at the mercy of hashed pointer addresses
when processing fragment relocation in LayoutState::commit().
This made inline fragment order non-deterministic, causing layouts to
shift around seemingly randomly on page reload.
By simply using OrderedHashMap, we automatically get tree order
processing here. This fixes a bunch of flaky tests on WPT.
While width and height are presentational hints on canvas, they actually
map to the CSS aspect-ratio property, not to CSS width and height.
For this reason, we actually need to manually mark for relayout here.
Also import a WPT test that was flaky before this change.
Returning a Vector of Slottable is not very nice here, but this
matches find_slottable (which this calls), and as far as I can
tell this is technically 'safe' at the moment in the way in
which it is / will be called.
It's also not great that like find_slottable it takes a non-const
ref, but changing that causes a bunch of other fallout.
This test was only useful when AK/PrintfImplementation.h existed. But
that was removed 11 months ago, so since then this has just been
testing std library functions not implemented by us.
Also push the onconnect event for the initial connection.
This still doesn't properly handle sending an onconnect event to a
pre-existing SharedWorkerGlobalScope with the same name for the same
origin, but it does give us a lot of WPT passes in the SharedWorker
category.
This fixes an issue where we'd serialize some floating point numbers
with excessive precision, resulting in unpleasant-looking numbers like
0.49999999999999999 and such.
At least 90 new subtests passing on WPT, possibly more. :^)
This gives us a more human-looking serialization of numbers by default,
and in case a fixed number of decimal digits is actually wanted, we
still have the 'f' specifier.
We have implemented all commands in the editing spec that potentially
reference one another, so we can now rely on the fact that any command
that gets passed to these methods has a definition. User-provided
commands still get checked by means of `queryCommandSupported()` and
friends.
No functional changes.
Previously, heap-allocated `CallableWrapper` objects were destroyed in a
very roundabout way: `AK::Function` would call a virtual `destroy()`
method, which then invoked delete manually. This was unnecessary, since
`CallableWrapper` already has a virtual destructor — deleting it through
a `CallableWrapperBase*` correctly calls the closure's destructor.
This fixes GCC `-Wfree-nonheap-object` false positive warnings (#4721)
and coincidentally removes 8 KB of vtable entries (and the corresponding
relative relocations) from LibJS.
- Avoids unnecessary conversions between StringOrSymbol and PropertyKey
on the hot path of property access.
- Simplifies the code by removing StringOrSymbol and using PropertyKey
directly. There was no reason to have a separate StringOrSymbol type
representing the same data as PropertyKey, just with the index key
stored as a string.
PropertyKey has been updated to use a tagged pointer instead of a
Variant, so it still occupies 8 bytes, same as StringOrSymbol.
12% improvement on JetStream/gcc-loops.cpp.js
12% improvement on MicroBench/object-assign.js
7% improvement on MicroBench/object-keys.js
There's discussion in the linked spec issue, but the short version is,
this algorithm will see "foo,bar," as a list of two groups, with "foo"
in the first group and "bar" in the second. However, users of this want
to get a list of three groups, with the last one being empty. So, do
that!
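This matches how JavaScript's own String.prototype.split treats a
trailing separator, for comparison:
```js
"foo,bar,".split(","); // ["foo", "bar", ""]: three groups, the last one empty
```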
Some dimensions would always serialize in a canonical unit, others never
did, and for others we would do so manually in their StyleValue. This
commit moves all of that into the dimension types, which means, for
example, that Length can apply its special rounding.
Our local serialization test now produces the same output as other
browsers. :^)
If we are doing a statically linked build, there is no need for full
`-fPIC`, just `-fpie` is enough (which lets the compiler assume that
global variables can be accessed directly without the GOT, etc.). CMake
does the right thing already when we set the `POSITION_INDEPENDENT_CODE`
property.
Selector::serialize() is used for both normal and relative selectors.
For the latter, we need to serialize their initial combinator, and for
the former, we always set the initial combinator as None anyway, so
this would be a no-op there.
Gets us 3 WPT passes.
The spec requires us to accept any ident here, not just ltr/rtl, and
also serialize it back out. That means we need to keep the original
string around.
In order to not call keyword_from_string() every time we want to match
a :dir() selector, we still attempt to parse the keyword and keep it
around.
A small behaviour change is that now we'll serialize the ident with its
original casing, instead of always lowercase. Chrome and Firefox
disagree on this, so I think either is fine until that can be
officially decided.
Gets us 2 WPT passes (including 1 from the as-yet-unmerged :dir() test).
The spec gives us a hard-coded list of functional pseudo-classes and how
to serialize them - but this list is incomplete and likely to always be
outdated compared to the list of pseudo-classes that exist. So instead,
use the generated metadata we already have to serialize their arguments
based on their type.
This fixes :dir() and :has(), which previously did not serialize their
arguments.
Gets us 26 passes (including 6 from that as-yet-unmerged :dir() test).
Submitted to WPT as https://github.com/web-platform-tests/wpt/pull/52598
but in the meantime here's a local version.
The spec for this isn't super thorough, so the tests are based on how
Chrome and Firefox behave. Specifically, Firefox returns the ltr/rtl
keyword in lowercase but Chrome keeps the original case for it.
We currently fail most of these but that will be fixed in subsequent
commits.
Major browsers seem to preserve `white-space: pre/pre-wrap` styles in a
`<div>` when deleting the current selection through an editing command.
The idiomatic way to support this is to have a command with a "relevant
CSS property" to make sure the value is recorded and restored where
appropriate, however, no such command exists.
Create a custom command (internal to Ladybird) that implements this
behavior.
This reworks EventHandler so text insertion, backspace, delete and
return actions are now handled by the Editing API. This was the whole
point of the execCommand spec, to provide an implementation of both
editing commands and the expected editing behavior on user input.
Responsibility of firing the `input` event is moved from EventHandler to
the Editing API, which also gets rid of duplicate events whenever
dealing with `<input>` or `<textarea>` events.
The `beforeinput` event still needs to be fired by `EventHandler`
however, since that is never fired by `execCommand()`.
We added a weak symbol to AK to support overriding the default assertion
handler for test cases. However, on macOS, we need to explicitly mark
the possibly-null symbol as 'it's ok for this to be undefined'.
When AK is a shared library, the flag can be applied to the link step of
the dylib, but when it's a static library, we need to apply it to the
executable that links against it.
Now that all EXPECT_CRASH related macros have been replaced in
favour of using EXPECT_DEATH related macros, CrashTest is no
longer used and can be deleted.
A challenge for getting LibTest working on Windows has always
been CrashTest. It implements death tests similar to Google Test
where a child process is cloned to invoke the expression that
should abort/terminate the program. Then the exit code of the
child is used by the parent test process to verify if the
application correctly aborted/terminated due to invoking
the expression.
The problem was that finding an equivalent way to port Crash::run()
to Windows did not look very likely, as publicly exposed Win32/
Native APIs have no equivalent to fork(); however, Windows actually
does have native support for process cloning via undocumented NT
APIs that clever people reverse engineered and published, see
`NtCreateUserProcess()`.
All that being said, this `EXPECT_DEATH()` implementation avoids
needing to use a child process in general, allowing us to remove
CrashTest in favour of a single cross-platform solution for death
tests.
When a `VERIFY()` assertion fails, `ak_verification_failed` is invoked,
which means the program will invoke `__builtin_trap` and immediately
quit. But with this change we have now allowed for an optional hook
into `ak_*_failed` that lets us perform some custom
action before the program exits. This foundation is what we will
(ab)use to implement death tests without the process cloning
CrashTest infrastructure.
Originally part of a fix in 15103d172c, it
appears that this is no longer necessary and received a better fix in a
more recent commit. Resolves a visual regression with the ACID3 test.
The main motivation here is that the CSS Parser needs to know about
PageSelectorList so that we can parse one in
`CSSPageRule::set_selector_text()`. Including all of `CSSPageRule.h`
there would pull in a lot of other headers that aren't needed.
Previously we only matched against the first attribute with a given
local name. What we actually want to do is look at each attribute with
that local name in turn and only return false if none of them match.
Also remove a hack for HTML elements in HTML documents, where we would
refuse to match any namespaced attributes. This doesn't seem to be
based on the spec, but we had regressions without it, until now. :^)
Gets us 21 more WPT subtest passes.
The HTML spec gives us a list of HTML attributes that must have their
values compared case-insensitively by default (when the attribute
selector does not specify a case-sensitivity). However, if we have a
namespace, then we are not looking for an HTML attribute, so this
should not apply.
Gets us 8 more WPT subtest passes.
This is a bit under-specced; specifically, there's no definition of
CSSMarginDescriptors, so I've gone with CSSStyleProperties for now. Gets
us 17 WPT subtest passes.
By doing that we avoid lots of `PropertyKey` -> `Value` -> `PropertyKey`
transforms, which are quite expensive because of underlying
`FlyString` -> `PrimitiveString` -> `FlyString` conversions.
10% improvement on MicroBench/object-keys.js
With this, we pass the 8 ref tests in css/selectors/selectors-4/ which
previously failed. This is not technically a full implementation, as we
are supposed to first canonicalize the language range and tag, but that
will require downloading and processing the IANA language subtag
registry:
https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry
That's significantly more work, and WPT doesn't seem to test any cases
that require that, so we can leave it for now.
Adds `matches_webm_signature()` and `parse_vint()` helpers per WPT
spec. Uses these helpers to resolve the WebM FIXME that was in
`match_an_audio_or_video_type_pattern()`.
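A hedged sketch of the signature half of that check; the helper name
mirrors the C++ one added by this commit, the byte values are the
standard EBML header magic, and the real helper additionally uses
`parse_vint()` to walk the EBML structure looking for a "webm" DocType:
```js
// Simplified illustration; the actual implementation is C++ in LibWeb.
function matchesWebmSignature(bytes) {
    // Every WebM (EBML) file begins with the magic bytes 0x1A 0x45 0xDF 0xA3.
    return bytes.length >= 4
        && bytes[0] === 0x1a
        && bytes[1] === 0x45
        && bytes[2] === 0xdf
        && bytes[3] === 0xa3;
}
```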
This commit adds support in AK/Random for a high quality RNG on windows.
This requires moving the code into a cpp file to avoid spreading windows
headers around.
This commit adds a convenience method to secure random for initializing
single types. It changes the random number generator in JS Math.random()
to use newer constants from the author, and initializes it with a
higher quality seed.
For our js-benchmarks and libjs-test262 workflow runs, we already know
that they're provisioned with these repositories and can skip adding the
key and repo altogether.
We were using both wget and curl arbitrarily; use curl exclusively since
that is installed by default on our machines and containers. Fixes the
js-benchmarks workflow.
This was previously used by the WPT runner to determine the git hash
of a particular WPT run. This mechanism is no longer used, since it
doesn't work with chunked WPT runs.
Our Ranges should maintain the invariant that their offsets are always
within range of 0..length (inclusive) of their respective containers.
Note that we cannot maintain this in AbstractRange, which is the base
for StaticRange and can still have invalid offsets.
156c1083e9 introduced a text blocks cache
for better performance when searching through text on a page, but when
we partially recreate the layout tree, this cache does not get
invalidated. We now rebuild the entire text blocks cache after a layout
update.
We were calling into `Range::set_start_or_end()` indirectly through
`::set_start()` and `::set_end()`, but that algorithm only calls for an
invocation whenever the start or end of a range needs to be set to a
boundary point. If an algorithm step calls for setting the node or
offset, we should directly modify the range.
The problem with calling into `::set_start_or_end()` is that this
algorithm potentially modifies _both_ the start and end of the range,
but algorithms trying to update a range's start or end often have
explicit steps to take both the start and end into account and end up
overcompensating for the start or end offset resulting in an invalid
range (e.g. with an end offset beyond a node's length).
This makes updating a range's start/end a bit more efficient and removes
a piece of ad-hoc code in CharacterData needed to make it work before.
I was wrong when I added those notes before about this being impossible;
it's *very* possible, for example with the `@page margin` descriptor.
However, until we have a large number of these shorthands and not just a
single example, we can get away with hard-coding support for it.
Ideally we'd be able to share the code between page selectors and style
ones, but given how simple page selectors are, some code duplication is
the simpler option.
Previously, we would just assign the UnresolvedStyleValue to each
longhand, which was completely wrong but happened to work if it was a
ShorthandStyleValue (because that's basically a list of "set property X
to Y", and doesn't care which property it's the value of).
For example, the included `var-in-margin-shorthand.html` test would:
1. Set `margin-top` to `var(--a) 10px`
2. Resolve it to `margin-top: 5px 10px`
3. Reject that as invalid
What now happens is:
1. Set `margin-top` to a PendingSubstitutionValue
2. Resolve `margin` to `5px 10px`
3. Expand that out into its longhands
4. `margin-top` is `5px` 🎉
In order to support this, `for_each_property_expanding_shorthands()` now
runs the callback for the shorthand too if it's an unresolved or
pending-substitution value. This is so that we can store those in the
CascadedProperties until they can be resolved - otherwise, by the time
we want to resolve them, we don't have them any more.
`cascade_declarations()` has an unfortunate hack: it tracks, for each
declaration, which properties have already been given values, so that
it can avoid overwriting an actual value with a pending one. This is
necessary because of the unfortunate way that CSSStyleProperties holds
expanded longhands, and not just the original declarations. The spec
disagrees with itself about this, but we do need to do that expansion
for `element.style` to work correctly. This HashTable is unfortunate
but it does solve the problem until a better solution can be found.
If `value` was UnresolvedStyleValue, we'd attempt to `set_property...()`
with its resolved value, then call that again with the original
UnresolvedStyleValue. For any other kind of `value`, we'd simply call
`set_property...()` twice with the same parameters.
This commit adds a fast path for putting values into a TypedArray of an
integer type, when the value being put in is a double. This leads to a
6% speedup on JetStream/gcc-loops.js.
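A hypothetical micro-example of the pattern this fast path targets:
```js
const out = new Int32Array(1024);
let x = 0.0;
for (let i = 0; i < out.length; i++) {
    out[i] = x; // a double stored into an integer-typed array hits the new fast path
    x += 1.5;   // the value is converted to an integer on store
}
```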
The root had an identical copy of what was being done in Meta/Lagom.
We now ensure this is still included globally, but it is isolated
into its own CMake module to make the sanitizer config easier to
discover.
As LibRegex was not specified in TEST_DIRECTORIES, the existing
Tests/LibRegex subdirectory was not actually included during
configuration. Also, the RegexLibC test has not been needed
since the migration away from Serenity's LibC, so that test has been
fully removed. I also renamed the Regex.cpp test to TestRegex.cpp to
match the naming convention of most test targets.
The required and recommended compiler versions are sort of scattered
across several documents. Let's list them in a single document, and
have other documents refer to that location.
The language here intentionally recommends the same compiler versions
used in CI. The find_compiler.sh script can be updated with the
minimum known good version.
We were already silencing it at the site of the delete call, but gcc in
distribution mode is more aggressive about inlining and still sees a
delete that it doesn't like.
This implements the previously stubbed out `report_validity` method.
The specification is not very clear on how to exactly report the
validity. For now, we bring the first visible invalid control into
view and focus it. In the future, however, it would make sense to
support more complex scenarios and be more aligned with the other
implementations.
The contain-paint-stacking-context-001a.html test has been removed
for now because it has a 1px tall blue line at the top that should
not be there. With paint containment, this line is removed only in
the actual test case, but not in the reference. This is because of
the font that we use in testing and happens in Chromium as well if
the test is run with that font.
We already have a fast path for built-in iterators that skips the
`next()` lookup and iteration result object allocation, applied for
`for..of` and `for..in` loops. This change extends it to
`iterator_step()` to cover `Array.from()`, `[...arr]`, and many other
cases.
Makes the following function go 2.35x faster on my computer:
```js
(function f() {
let arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
for (let i = 0; i < 1000000; i++) {
let [a, ...rest] = arr;
}
})();
```
We don't use C++20 modules, and CMake will look for clang-scan-deps in
clang builds if this setting is not disabled. This is a problem when
clang-scan-deps is not installed, as it will cause CMake to fail to link
hidden visibility libraries properly.
- `Threading::Thread` is not polymorphic, there is no need for a virtual
destructor.
- `HTMLAnchorElement::has_download_preference` isn't overridden by
anything.
This warning was introduced in llvm/llvm-project#131188.
We don't override anything with definitions of this function in
`SwitchStatement` and `LabelledStatement`. Also, we can make the
`IterationStatement` abstract, there is no need to add a fallback
error-generating stub implementation of this method.
`ConservativeVector`, `RootVector` and `RootHashMap` are final types,
and their base classes have a protected destructor, so when their
destructor is called, the static and dynamic types will be the same
(can't destruct them through a pointer to a base or derived class).
Therefore, there is no need for a virtual destructor.
This fixes the newly introduced `-Wunnecessary-virtual-specifier`
Clang warning (llvm/llvm-project#131188).
When using non-BFD linkers, something about our CMake setup causes
visibility checks from GenerateExportHeader to fail when clang-scan-deps
is not found.
81b6a11 regressed correctness by always bypassing the `next()` method
resolution for built-in iterators, causing incorrect behavior when
`next()` was redefined on built-in prototypes. This change fixes the
issue by storing a flag on built-in prototypes indicating whether
`next()` has ever been redefined.
This also moves the next_serial class static into a file scope static.
The public class static was causing visibility issues with certain Linux
builds when hidden visibility was enabled. However, the current design
makes more sense anyway :^).
On Windows, `set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS ON)` is currently
used. But that will need to change as LibWeb exceeds the maximum
symbols that can be exported from a single executable/dll
(See LNK1189 error). So to prevent bitrot and Unix/Windows getting
out of sync, this function that generates the export header for the
target also sets the symbol visibility to hidden on Unix to ensure
link errors also occur on Unix if the required library components
are not annotated with the FOO_API macro.
Clang's `x86_64-pc-windows-msvc` target requires
`[[msvc::no_unique_address]]`, which is properly set in the
`NO_UNIQUE_ADDRESS` macro in `AK/Platform.h`. Without this, building
on Windows fails due to `-Wunknown-attributes`.
The play_or_cancel_animations_after_display_property_change() helper
was being called by Node::inserted() and Node::removed_from() and then
recursing into the shadow-including subtree.
This had quadratic complexity since inserted() and removed_from() are
themselves already invoked recursively for everything in the
shadow-including subtree.
Only one caller of this API actually needed the recursive behavior,
so this patch moves that responsibility to the caller and puts the logic
in style recomputation instead.
1.02x speedup on Speedometer's TodoMVC-jQuery.
Currently, enabling search is a bit hokey. You have to click a checkbox
to get the search engines to show up, then you have to click a drop down
to select an engine. Let's go with a simpler approach; now you only have
to interact with the drop down.
By following the spec to the letter, our mapped arguments objects ended
up with many extra GC allocations:
- 1 extra Object for the internal [[ParameterMap]].
- 2 extra NativeFunctions for each mapped parameter accessor.
- 1 extra Accessor to hold the aforementioned NativeFunctions.
This patch removes all those allocations and lets ArgumentsObject model
the desired behavior in custom C++ instead of using script primitives.
1.06x speedup on Speedometer's TodoMVC-jQuery.
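The aliasing behavior being modeled in C++ is observable from script;
as a reminder, sloppy-mode functions with simple parameter lists get a
mapped arguments object:
```js
function f(a) {
    arguments[0] = 42; // writes through the parameter map alias...
    return a;          // ...so the parameter itself now reads as 42
}
console.log(f(1)); // 42
```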
This is a remnant from SerenityOS. Let's avoid confusion as to why we
negate errno when we call Error::from_syscall just to negate it again
when we store the error code.
Instead of using UTF-8 iterators to traverse the HTMLTokenizer input
stream one code point at a time, we now do a one-shot conversion up
front from the input encoding to a Vector<u32> of Unicode code points.
This simplifies the tokenizer logic somewhat, and ends up being faster
as well, so win-win.
1.02x speedup on Speedometer 2.1
This commit makes windows errors store the code instead of being a
formatted string literal. This will allow comparing against the error,
and checking what it is. The formatting is now done in the formatter,
and moved to an implementation file to avoid polluting headers with
windows.h. The kind of the error is now determined by an enum Kind which
will allow matching against the error kind.
Co-Authored-By: Andrew Kaster <andrew@ladybird.org>
In the previous fix, we were still drawing IDAT data to the reference
frame even when no fcTL was present. This would cause rendering issues
when subsequent frames use APNG_BLEND_OP_OVER blending mode, as they
would composite over the incorrect reference frame. This commit adds a
simple check to properly skip any frame without an fcTL chunk.
We now properly handle OS/2 format BMPs that use 3 bytes per color
entry instead of 4. While OS/2 2.x officially specified 4 bytes per
color, some tools still produce files with 3-byte entries. We can
identify such files by checking the available color table space.
This extends the ICO loader to support Windows cursor files. There is no
point in creating a separate loader for this, as the ICO format is
very similar to the CUR format. The only differences are the bytes used
to identify the file and the presence of a hotspot in the CUR header.
Color masks should only be used when the compression type is either
BITFIELDS or ALPHABITFIELDS. They were always read before and produced
corrupted images when there was random data in the mask fields.
Other browsers don't think that BMP files with more than 1024 colors are
invalid. They clamp the palette instead, and now we do the same. This
allows us to load more BMPs.
We're able to efficiently draw repeated bitmaps through Skia, but for
backgrounds we only did so if the background was `repeat-x` _and_
`repeat-y`, and not if just one was set. This meant that for backgrounds
that were only repeating in one direction, we were taking the slow path.
Turns out that this slow path also produced graphical artifacts when
zooming in and out, so let's not do that :^)
Otherwise, the arrow painted next to the <details> element does not
update.
Using a screenshot test here because apparently the direction of the
arrow has no effect on the layout or paint trees.
By piggybacking on the already-optimized implementation in
StringBuilder, we can get simdutf for the AllowInvalidCodeUnits::Yes
case here as well.
1.03x speedup on Speedometer's TodoMVC-jQuery.
We were unnecessarily discarding the shadow trees of various elements
when they were removed or detached from the DOM.
This especially caused a *lot* of churn when creating input elements via
setting .innerHTML on something. We ended up building each input
element's shadow tree 3 times instead of 1.
The original issue that we were trying to solve by discarding shadow
trees appears to have been solved elsewhere, and nothing else seems to
break by just allowing them to remain in place.
1.05x speedup on Speedometer's TodoMVC-jQuery.
We would only correct for the left margin of the containing block of the
child box that we are laying out, but we actually need to correct for
all left margins up until the containing block that contains both the
float and the child box.
Fixes #4639.
Saves us two NativeFunction allocations per `await`.
With this change, the following function goes 80% faster on my computer:
```js
(async () => {
const resolved = Promise.resolve();
for (let i = 0; i < 5_000_000; i++) {
await resolved;
}
})();
```
- Create less GC pressure by making each `await` in an async function
skip the iteration result object allocation.
- Skip uncached `Object::get()` calls to extract `value` and `done` from
the iteration result object.
With this change, the following function goes 30% faster on my computer:
```js
(async () => {
const resolved = Promise.resolve();
for (let i = 0; i < 5_000_000; i++) {
await resolved;
}
})();
```
https://tc39.es/ecma262/#sec-jobs specifies that we should only be
running queued promise jobs and host-defined cleanup when the
execution context stack is empty. It is asserted to _not_ be empty
the line above, so remove it.
No impact on test262 or our test suites, Interpreter::run_executable
is already (incorrectly) performing this unconditionally.
run_promise_jobs also happens to do nothing when LibJS is embedded
into LibWeb.
This holds the boilerplate that's needed by any CSSStyleDeclaration
subclass that holds Descriptors. CSSFontFaceDescriptors now only has to
worry about initialization and its own exposed properties.
Currently, window resize events are used to resize the webview.
But when extra window items appear (for example, the search or DevTools
bar), the webview is clipped out of the window. This also happens
when the tab bar is opened and closed. Listening to layout events
resolves these clipping issues completely.
A scrollbar contains a mouse position only if its gutter rect contains
that position. We were incorrectly deciding a position was contained if
it fell to the right of a vertical scrollbar, or below a horizontal
scrollbar.
The provided patchelf from vcpkg is only version 0.14.3, which is too
old to produce working binaries on Fedora 42. Using that old version
causes hard to debug issues where applications segfault during startup.
This is the "intended" way of parallelism with wpt, but instead of
requiring N different systems (or VMs), this does it all on one system
with the power of namespaces.
After f7a3f785a8, sibling nodes' styles
were no longer invalidated after a node was removed. This reuses the
flag for `:first-child` and `:last-child` to indicate that a node's
style might be affected by any structural change in its siblings.
Fixes #4631.
Resolves the `:only-child` ACID3 failure as documented in #1231.
Instead of creating a second ExecutionContext in BoundFunction.[[Call]],
we now implement BoundFunction::get_stack_frame_size() and combine
information from the target + the bound arguments list.
This allows BoundFunction.[[Call]] to reuse the already-established
ExecutionContext for the callee.
1.20x speedup on MicroBench/bound-call-04-args.js
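A hypothetical loop in the spirit of that microbenchmark:
```js
function target(a, b, c, d) {
    return a + b + c + d;
}
const bound = target.bind(null, 1, 2, 3, 4);
for (let i = 0; i < 1_000_000; i++) {
    bound(); // [[Call]] now reuses one ExecutionContext instead of setting up a second
}
```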
Instead of monomorphic (1 shape), GetById inline caches are now
polymorphic (4 shapes).
This improves inline cache hit rates greatly on most web JavaScript.
For example, Speedometer 2.1 sees 88% -> 97% cache hit rate improvement.
1.71x speedup on MicroBench/pic-get-own.js
1.82x speedup on MicroBench/pic-get-pchain.js
Similar to other ad-hoc behavior in this method, we need to handle
requests which are not associated with a style sheet. For example:
<script>
const element = document.createElement('div');
element.style['background'] = 'url(https://foo.com/img.png)';
document.body.appendChild(element);
</script>
Will not be associated with a style sheet.
This is needed to ensure we fire a PerformanceResourceTiming event for
this resource load. This is not currently testable in CI, as this event
is also gated by HTTP/S requests.
At this point, I've repeatedly felt the desire to log stacktraces so I
can more easily see what kind of call-sites exist for a given piece of
code. So this commit exposes `dump_backtrace()` in the header so it can
be used for this purpose.
This `translate_by` function is invoked with the cumulative scroll
offset during display list execution. Applying the offset to the gutter
was missed in 66e422b4f1.
Importing these tests now because they are for input-element types that
have requirements related to the constraint-validation API, which we've
been implementing recently.
This commit only imports tests, without any changes to our code.
This is *extremely* common on the web, but barely shows up at all in
JavaScript benchmarks.
A typical example is setting Element.innerHTML on an HTMLDivElement.
HTMLDivElement doesn't have innerHTML, so it has to travel up the
prototype chain until it finds it.
Before this change, we didn't cache this at all, so we had to travel
the prototype chain every time a setter like this was used.
We now use the same mechanism we already had for GetById and cache
PutById setter accesses in the prototype chain as well.
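For illustration, a minimal sketch of the access pattern this caches
(the names are made up):
```js
class Base {
    set title(value) {
        this._title = value;
    }
}
class Derived extends Base {}

const d = new Derived();
// The setter lives on Base.prototype, so each PutById here has to walk
// the prototype chain; the inline cache now remembers where the setter
// was found.
for (let i = 0; i < 1_000_000; i++) {
    d.title = i;
}
```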
1.74x speedup on MicroBench/setter-in-prototype-chain.js
Browsers such as Chrome and Firefox apply an arbitrary scale to the
current font size if `normal` is used for `line-height`. Firefox uses
1.2 while Chrome uses 1.15. Let's go with the latter for now; it's
relatively easy to change if we ever want to go back on that decision.
This also requires updating the expectations for a lot of layout tests.
The upside of this is that it's a bit easier to compare our layout
results to other browsers', especially Chrome.
This is a normative change in the ECMA-262 spec. See:
https://github.com/tc39/ecma262/commit/0fb1859
As noted in the PR for this change, this is not actually testable via
either test262 or WPT.
`var` bindings are never in the temporal dead zone (TDZ), and so we
know accessing them will not throw.
We now take advantage of this by having a specialized environment
binding value getter that doesn't check for exceptional cases.
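A minimal sketch of the difference, assuming nothing beyond standard JS
semantics:
```js
{
    // `let` bindings have a temporal dead zone; this access must be
    // checked and throws a ReferenceError:
    try { x; } catch (e) { /* ReferenceError */ }
    let x = 1;
}

function f() {
    // `var` bindings are hoisted and initialized to undefined, so this
    // access can never throw and needs no TDZ check:
    y;
    var y = 1;
}
f();
```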
1.08x speedup on JetStream.
Before this change, setting a global would end up as SetLexicalBinding.
That instruction always failed to cache the access if the global was a
property of the global object.
1.14x speedup on Octane/earley-boyer.js
2.04x speedup on MicroBench/for-of.js
Note that MicroBench/for-of.js was more of a "set global" benchmark
before this. After this change, it's actually a for..of benchmark. :^)
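A sketch of the kind of code that benefits (in a classic script, `var`
at the top level becomes a property of the global object):
```js
var counter = 0;

function bump() {
    for (let i = 0; i < 1_000_000; i++) {
        // This store previously always missed the cache; it now goes
        // through the same inline cache machinery as other property
        // sets.
        counter = i;
    }
}
bump();
```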
We were spending a lot of time removing each property name from the
iterator's underlying HashMap while iterating over it. This wasn't
actually necessary, so let's stop doing it and instead just iterate
over the property names with a stored HashTable iterator.
1.10x speedup on MicroBench/for-in-indexed-properties.js
Before this change, we would call [[OwnPropertyKeys]] on the target
objects, then convert the returned keys from Value into PropertyKey.
Then, when actually iterating, we'd convert them back into Value again.
This was particularly costly for numeric property keys, since we had
to go through string-from-number construction.
Now, we simply keep the original values returned by [[OwnPropertyKeys]]
around and use them for the enumeration.
1.09x speedup on MicroBench/for-in-indexed-properties.js
1.01x speedup on MicroBench/for-in-named-properties.js
This is a GC-aware wrapper around AK::HashMap. Entry values are treated
as GC roots, much like the GC::RootVector we already had.
We also provide GC::OrderedRootHashMap as a convenience.
We were previously unable to use simdutf for base64 decoding operations
other than "loose". Upstream has added support for the "strict" and
"stop-before-partial" operations, so let's make use of them!
I was investigating an optimization in this area, and while it
didn't seem to have a noticeable improvement, it still seems
useful to apply this change.
ParsedFontFace and FontLoader now both keep track of which
CSSStyleSheet (if any) was the source of the font-face, so the URLs can
be completed correctly.
This is the simplest fix I could find that resolves a buggy interaction
between this and the CSS fetch algorithms, which also doesn't regress
anything. (As far as I can tell.)
Convert FontLoader to use fetch_a_style_resource(). ResourceLoader used
to keep its downloaded data around for us, but fetch doesn't, so we use
Gfx::Typeface::try_load_from_temporary_memory() so that the font has a
permanent copy of that data.
Typeface::try_load_from_externally_owned_memory() relies on that
external owner keeping the memory around. However, neither WOFF nor
WOFF2 do so - they both create separate ByteBuffers to hold the TTF
data. So, rename them to make it clearer that they don't have any
requirements on the byte owner.
Even though this code was already optimized to re-use a single result
object, returning { value, done } directly in output parameters still
provides a substantial speedup.
1.21x speedup on MicroBench/for-in-indexed-properties.js
Apply a little ensure_capacity() to avoid excessive rehashing of the
property key table when enumerating a large number of properties.
1.23x speedup on MicroBench/for-in-indexed-properties.js
Shared workers are essentially just workers that may be accessed from
scripts within the same origin. There are plenty of FIXMEs here (mostly
building on existing worker FIXMEs that are already in place), but this
lets us run the shared worker variants of WPT tests.
We currently rely on UI-specific APIs to interact with the system
clipboard from AppKit and Qt. We do not have access to these from
headless-browser.
We should ultimately implement these APIs without relying on the UI as
a middle-man. For now, store a clipboard item so that we may exercise
the clipboard WPT tests.
We currently have a single IPC to set clipboard data. We will also need
an IPC to retrieve that data from the UI. This defines system clipboard
data in LibWeb to handle this transfer, and adds the IPC to provide it.
Instead of going through String::formatted(), we now have a specialized
code path for base-10 serialization directly to UTF-8.
This is roughly 5-10x faster than the previous implementation, depending
on how many digits we end up outputting.
1.07x speedup on MicroBench/for-in-indexed-properties.js
"return" method is not defined on any of builtin iterators, so we could
skip it, avoiding method lookup.
I measured 10% improvement in array-destructuring-assignment.js micro
benchmark with this change.
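For example, destructuring that stops before the iterator is exhausted
closes the iterator, which is where the `return` lookup used to happen:
```js
// IteratorClose looks up a "return" method on the array iterator here;
// built-in iterators don't define one, so the lookup can be skipped.
const [first] = [1, 2, 3];
```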
...by avoiding `{ value, done }` iterator result value allocation. This
change applies the same optimization 81b6a11 added for `for..in` and
`for..of`.
Makes the following micro-benchmark go 22% faster on my computer:
```js
function f() {
    const arr = [];
    for (let i = 0; i < 10_000_000; i++) {
        arr.push([i]);
    }
    let sum = 0;
    for (let [i] of arr) {
        sum += i;
    }
}
f();
```
The editing command that relies the most on this, `insertLinebreak`,
did not perform a layout update after inserting a `<br>` which caused
this algorithm to always return false. But instead of actually building
the layout tree needlessly, we can check the DOM tree instead.
Before this change, IDAT data was mistakenly always included in the
animation. Now we only include frames with explicit fcTL chunks.
As per the PNG spec (third edition):
"The static image may be included as the first frame of the animation
by the presence of a single fcTL chunk before IDAT. Otherwise, the
static image is not part of the animation."
We also fall back to the IDAT data when APNG has acTL but no fcTL
chunks. Test image is 062.png from fDAT-inherits-cICP.html from WPT.
The assertion in `WebSocketImplCurl::did_connect()` keeps failing for
multiple websockets when loading `https://www.speedtest.net/` since
commit 14ebcd4. This fixes that by returning false if something went
wrong, letting the caller handle it.
Introduce a special instruction for `for..of` and `for..in` loops that
skips the `{ value, done }` result object allocation if the iterator is
built-in (array, map, set, string). This reduces GC pressure
significantly and avoids extracting the `value` and `done` properties.
This change makes this micro benchmark 48% faster on my computer:
```js
const arr = new Array(10_000_000);
let counter = 0;
for (let _ of arr) {
    counter++;
}
```
Expose a method on built-in iterators that allows retrieving the next
iteration result without allocating a JS::Object. This change is a
preparation for optimizing for..of and for..in loops.
This is a LibWeb special. We keep running into cases where we end up
with one or more Platform or event loop spin_until() calls on the stack
after the event loop has been cancelled and the WebContent process has
been asked to exit.
To prevent the nonsense of processes exiting early from affecting our
other, more well-behaved processes, put this special logic in the
critical path of such Web-specific event loop spins.
This method was removed in e015a43b51. However, it was not exactly
*unused*, as the commit message would suggest.
This method was the only thing that allowed spin_until to exit when
the event loop was cancelled. This happens normally when IPC connections
are closed, but also when the process is killed.
The logic to properly handle process exit from event loop spins needs to
actually notify the caller that their goal condition was not met though.
That will be handled in a later commit.
We currently store Web::Fetch::Infrastructure::Response objects in the
HTTP cache. They are associated with their original realm, but when we
use a cached response, we clone it into the target realm. For example,
two <iframe> objects loading the same HTML will be in different realms.
When we clone the response, we must use the target realm throughout the
entire cloning process. We neglected to do this for the cloned response
body stream, which is cloned via teeing. The result was that the stream
for the "cloned" response was created in the original realm, causing
issues down the line when reading from that stream tried to handle read
promises on behalf of the original realm. There are protections in place
to prevent this from happening, so the cached response read would never
complete.
This is a normative change in the ECMA-262 spec. See:
https://github.com/tc39/ecma262/commit/de62e8d
This did not actually seem to affect our implementation as we were not
using generators here to begin with. So this patch is basically just
adding spec comments.
Attach a 'job' to the main thread event loop, trusting that the event
loop implementation will cancel it when asked to quit. This is something
that our Unix implementation does, but isn't strictly part of the
contract of EventLoopImplementation.
This reverts commit 36bb2824a6.
Although this was faster on my M3 MacBook Pro, other Apple machines
disagree, including our benchmark runner. So let's revert it.
The spec calls for a couple of very specific whitespace padding
techniques whenever we canonicalize whitespace during the execution of
editing commands, but it seems that other browsers have a simpler
strategy - let's adopt theirs!
This is a simple trick to generate better native code for access to
registers, locals, and constants. Before this change, each access had
to first dereference the member pointer in Interpreter, and then get to
the values. Now we always have a pointer directly to the values on hand.
Here's how it looks:
```
class StackFrame {
public:
    Value get(Operand) const;
    void set(Operand, Value);

private:
    Value m_values[];
};
```
And we just place one of these as a window on top of the execution
context's array of values (registers, locals, and constants).
This is implemented by these related changes:
* The Skia alpha type 'Opaque' is selected for surfaces that were
created with the intention of not having an alpha channel.
Previously we were simply creating one with alpha.
* Clearing now happens through Skia's `clear()` which always uses the
source color's value for the result, instead of setting all values
to 0.
* CanvasRenderingContext2D selects a different clearing color based on
the `alpha` context attribute's value.
Getting the running_execution_context() already verifies that the
execution context stack is non-empty; we don't need to do it separately
here as well.
The old accumulator register is really only used to pass the end
completion to the caller of run_bytecode() nowadays. As such, we don't
need to cache a pointer to it for fast access. One less thing to do
on run_bytecode() entry.
This way it's always automatically correct, and we don't have to
manually flush it in push_execution_context().
~7% speedup on the MicroBench/call* tests :^)
Most of the time there are no queued promise jobs to run after exiting
a stack frame. By moving the check inline, leaving a function call gets
a measurable speedup in the common case.
NonnullRefPtr almost always has a non-null pointer internally, that's
what the class is for, after all! The one edge case where it has null
internally is after you move() it. But it's always a bug to use an
NNRP after moving from it, and we have clang-tidy yelling at us if
that ever happens.
Demoting this removes a gazillion overly paranoid null checks.
These are not associated with a JavaScript realm, so to avoid
confusion about which realm these need to be created in, make
all of these objects a GC::Cell, and deal with the fallout.
Previously, the `|=` would not compare strings containing `-`
characters correctly because it would only compare the element
attribute up to the first `-` character.
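A sketch of the case that was broken, using Element.matches():
```js
const el = document.createElement("div");
el.setAttribute("lang", "en-US");

// `|=` matches a value equal to the given string, or starting with it
// followed by a hyphen. The second case below is the one the old
// comparison got wrong, since it only compared up to the first `-`.
el.matches('[lang|="en"]');    // true
el.matches('[lang|="en-US"]'); // true (previously false)
```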
This concept is rarely used in the codebase and is very much error-prone
if you forget to check it.
Instead, make operations that would produce invalid integers return an
error.
Instead of letting every [[Call]] implementation allocate an
ExecutionContext, we now make that a responsibility of the caller.
The main point of this exercise is to allow the Call instruction
to write function arguments directly into the callee ExecutionContext
instead of copying them later.
This makes function calls significantly faster:
- 10-20% faster on micro-benchmarks (depending on argument count)
- 4% speedup on Kraken
- 2% speedup on Octane
- 5% speedup on JetStream
For attributes like Element.ariaControlsElements, which are a reflection
of FrozenArray<Element>, we must return the same JS::Array object every
time the attribute is invoked - until its contents have changed. This
patch implements caching of the reflected array in accordance with the
spec.
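A minimal sketch of the observable behavior (the attribute value is
made up):
```js
const div = document.createElement("div");
div.setAttribute("aria-controls", "item1 item2");

// The same JS::Array object must be returned until the underlying
// contents change:
div.ariaControlsElements === div.ariaControlsElements; // true
```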
This silenced warning was added a long time ago when we were first
bringing up Lagom: a619943001
The commit message indicates we were seeing errors due to this warning
in many places. We now compile just fine with this warning enabled, so
let's remove its silencer to be a little more consistent between clang
and GCC.
Object defines an is_error virtual method to be overridden by Error for
fast-is. This is the same name as the Error.isError constructor method.
Rename the former to avoid conflicts, as GCC 15 just started warning on
this.
We don't yet support out-of-process subframes. Explicitly disable even
attempting to isolate subframes. Otherwise, navigating a subframe to a
non-same-site URL would actually cause the top-level frame to navigate
with our current implementation.
This allows us to get rid of instructions that move arguments to locals
and allocate smaller JS::Value vector in ExecutionContext by reusing
slots that were already allocated for arguments.
With this change, for the following function:
```js
function f(x, y) {
return x + y;
}
```
we now produce the following bytecode:
```
[ 0] 0: Add dst:reg6, lhs:arg0, rhs:arg1
[ 10] Return value:reg6
```
instead of:
```
[ 0] 0: GetArgument 0, dst:x~1
[ 10] GetArgument 1, dst:y~0
[ 20] Add dst:reg6, lhs:x~1, rhs:y~0
[ 30] Return value:reg6
```
Previously we blocked using locals for function arguments whenever
`arguments` was mentioned in the function body. However, this is not
necessary in strict mode, where mutations to the arguments object are
not reflected in the function arguments and vice versa.
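A sketch of why strict mode makes this safe:
```js
function sloppy(x) {
    arguments[0] = 99;
    return x; // 99: the arguments object aliases x
}

function strict(x) {
    "use strict";
    arguments[0] = 99;
    return x; // 1: no aliasing, so x can safely live in a local slot
}

sloppy(1); // 99
strict(1); // 1
```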
It turns out it was a mistake to make this a virtual since
ServiceWorkerAgents are effectively the exact same as
DedicatedWorkerAgents and SharedWorkerAgents just with [[CanBlock]]
set to false.
This helps unwind a niggly dependency where the VM owns and constructs
the Heap and the Agent, but the Agent wants customized construction
that depends on the Heap. Solve this by deferring the initialization
of the Agent until after we have constructed the VM and the Heap.
This allows the concept of a WorkerAgent to be reused between shared
and dedicated workers. An event loop is the commonality between the
different agent types, though there are some differences between those
event loops, which we customize on construction of the HTML::EventLoop.
This is to differentiate the agent representation in the parent
process from the WorkerAgent in the child process, which is actually
hooked up to the JavaScript VM.
I am not sure if this is a good name, but I can't really think of
anything better which is consistent with the names used by the rest
of the codebase.
Sometimes fixed positioned boxes would extend the viewport's scrollable
overflow, which according to the spec should never happen. There are
some nuances to this, such as properly determining the fixed positioning
containing block for a fixed position box, but for now this prevents
some pages from being overly scrollable.
Fixes horizontal scrollability of https://tweakers.net.
If attachment fails for whatever reason (e.g. the host element is not
allowed to be a host), the HTML spec tells us to insert the template
element anyway and proceed.
Before this change, we were recomputing the insertion location at this
point, which caused it to be *inside* the template element. Inserting
the template element into itself didn't work, and so the DOM would end
up incorrect.
The fix here is to simply use the insertion point we determined earlier
in the same function, before putting a template element on the stack of
open elements. We already do this elsewhere.
Fixes at least 228 subtests on WPT. :^)
We are meant to store a weak reference to the element indicated by this
attribute, rather than a GC-protected strong reference. This also hoists
the "get the attr-associated element" AO into its own function, rather
than being hidden in IDL, to match "get the attr-associated elements".
There are ARIA attributes, e.g. ariaControlsElements, which refer to a
list of elements by their ID. For example:
<div aria-controls="item1 item2">
The div.ariaControlsElements attribute would be a list of elements whose
ID matches the values in the aria-controls attribute.
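For example, assuming a document set up like this, the IDs are resolved
against the tree:
```js
document.body.innerHTML = `
    <div aria-controls="item1 item2"></div>
    <span id="item1"></span>
    <span id="item2"></span>`;

const div = document.querySelector("div[aria-controls]");
// The reflected attribute resolves each ID to its element:
div.ariaControlsElements.map((el) => el.id); // ["item1", "item2"]
```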
The property-reflection.html test was partially split into a second file
recently, property-reflection-imperative-setup.html. Let's re-import to
ensure we have the latest. See:
https://github.com/web-platform-tests/wpt/commit/2518df1
This is a workaround for the deprecation of the cache v1 REST API that
was replaced with a new protobuf RPC based API this month. vcpkg was
using the private cache backend API without the knowledge of the GitHub
actions team, and was thus broken by the deprecation.
While we wait for Microsoft to talk to Microsoft to get this feature
restored, we can use the raw actions/cache step to get almost the same
cache behavior. The only difference is that the cache will be less fine
grained than the per-package cache that VCPKG_BINARY_SOURCES of x-gha
was giving us before.
Math functions like abs(), clamp(), round(), etc, can be used by
themselves in property values, without wrapping them in calc().
Before this change, we were neglecting to run calc simplification on the
generated calculation node trees. By doing that manually after parsing a
standalone math function, we score at least a couple hundred WPT points.
Whenever we introduce a block element in a container that at that point
has only had inline children, we create an anonymous wrapper for all the
inline elements so we can keep the invariant that each container
contains either inline or non-inline children. For some reason, we
ignore all the out-of-flow nodes since they are laid out separately and
it was thought that this shouldn't matter.
However, if we are dealing with inline blocks and floating blocks, the
order of the inline contents _including_ out-of-flow nodes becomes very
important: floating blocks need to take the order of nodes into account
when positioning themselves.
Fix this by simply hoisting the out-of-flow nodes into the anonymous
wrapper as well.
Fixes the order of blocks in #4212. The gap is still not present.
Instead, use the generic create_independent_formatting_context_if_needed
so that unusual situations like image-as-table-caption don't crash.
This logic clearly needs more work, but let's at least do better than
crashing. This gives us 26 new subtest passes on WPT.
We were incorrectly deciding that abspos elements shouldn't treat many
max-width and max-height values as `none`. My best understanding is that
this was a hack in 2023 for an issue that has been solved since then.
By removing the incorrect short-circuit, we stop at least one WPT test
from crashing due to infinite recursion and get ourselves +34 passes.
By doing that we avoid a separate allocation for each such vector,
which was really expensive on JS-heavy websites. For example, this
change helps get EC allocation down from ~17% to ~2% on Google Maps.
This comes at the cost of adding extra complexity to the custom
execution context allocator, because an EC no longer has a fixed size
and we need to maintain a list of buckets.
This is better because:
- Better data locality
- Allocate vector for registers+constants+locals+arguments in one go
instead of allocating two vectors separately
Fixes a bunch of websites breaking because we now verify jump offsets by
trying to remove 0-offset jumps.
This has been broken for a good while; it was just rare to see Repeat
inside alternatives that lent themselves well to tree alts.
The about:settings page has gotten beefy enough that it is a bit of a
pain to manage in a single HTML file. This patch breaks the settings
into several modules to make this page easier to maintain.
Modules have some nice benefits over classic scripts, such as enabling
strict mode by default, and allowing each module to function without
polluting the global object.
We had a bit of an awkward setup where we want the Application to be a
SettingsObserver, but neither the Settings object nor the Application
itself was fully initialized by the time the observer was created. So
we invented a deferred observer initializer specifically for the
Application.
Instead, let's just create a dedicated observer subclass that is owned
by the Application. We can then create it once we have the singleton
Application appropriately set up.
JavaScript module requests (in a non-worker context) always have CORS
enabled. However, CORS requests are only allowed for same-origin or
HTTP/S requests. This patch extends this to allow resource:// requests
from opaque origins (e.g. about: URLs).
We must also set the Access-Control-Allow-Origin header to "null" to
ensure that the response is accepted by the CORS checks. This does not
affect requesting resource:// URLs from resource:// URLs as those are
same-origin and skip CORS checks.
This ultimately enables requesting resource:// JS modules from the
about:settings page.
The special case for anonymous table wrappers actually ended up hurting
correctness by preventing the full ancestor chain from being marked
for intrinsic size cache invalidation.
Caused Layout/input/table/propagate-style-update-to-wrapper.html to
flake on CI, and was easy to reproduce locally with sanitizers.
The fix here is simply to remove the special handling of anonymous table
wrapper parents, since *all* parents are invalidated automatically
anyway!
Began flaking in fa9c463ffd.
Previously we were counting the total number of *nodes* in the tree for
the chain cost, which greatly underestimated its cost when large
bytecode entries were present.
This commit switches to estimating it using the total bytecode *size*,
which is a closer value to the true cost than the tree node count.
This corresponds to a ~4x perf improvement on /<script|<style|<link/ in
speedometer.
Previously, if a transition property was not wrapped in a list, it
would be replaced with the default value in
`StyleComputer::compute_transitioned_properties`.
For the slight cost of counting code points when converting between
encodings and a teeny bit of memory, this commit adds a fast path for
all-happy utf-16 substrings and code point operations.
This seems to be a significant chunk of time spent in many regex
benchmarks.
When playing games with cross-realm construction, we need to make sure
that any calls to ensure_web_prototype for LegacyNamespace objects use
the correctly namespaced prototype name.
In 15103d172c we applied any remaining vertical float clearance to the
BFC's current Y offset for the next block to layout, because a `<br
style="clear: left">` could introduce clearance that would otherwise be
ignored. However, a `<div>` that floats _and_ clears `right` also
introduces this clearance and it is obvious that this should not push
down any subsequent blocks to layout in the current BFC.
Turns out, we don't need this change anymore. Some other later change
also fixed the underlying issue, and by getting rid of the original fix
we can now render https://en.wikipedia.org/wiki/SerenityOS correctly
again.
Fixes #4418.
We already store the resulting margin box rect for floating boxes
relative to their root coordinate space, so don't calculate them again.
No functional changes.
This allows the media paintable to be redrawn when the media element is
paused. We change the color of some components on hover, but if the
media was paused, that would not be rendered. This wasn't an issue while
the media is playing because the update to the timeline would take care
of redrawing the paintable.
Keep track of which CSSRule owns a CSSRuleList, and then use that to
produce a stack of RuleContexts for the CSS Parser to use.
There are certainly other places we should do this!
The spec algorithm changed at some point to support nested declarations,
but I only just noticed. The subtest regression is one we were passing
incorrectly.
We have two different code paths that implement the "parse a CSS
declaration block" algorithm, for properties and descriptors. COmbining
them isn't straightforward, and doesn't seem especially useful.
There appears to be no reason for Bitmap::get_pixel() to be split into
three layers of methods. Make the code simpler by inlining everything
into a single method.
Bitmap::get_pixel() was only handling two out of the four possible pixel
formats, asserting when called with the other two. The asserting code
path was triggered when loading JPEG XL images, causing crashes on pages
like https://jpegxl.info/resources/jpeg-xl-test-page or
https://html5test.co/.
Type changes are now signaled to radio buttons. This causes other radio
buttons in the group to be unchecked if the input element is a checked
radio button after the type change.
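A sketch of the behavior this fixes:
```js
const form = document.createElement("form");

const radio = document.createElement("input");
radio.type = "radio";
radio.name = "group";
radio.checked = true;

const box = document.createElement("input");
box.type = "checkbox";
box.name = "group";
box.checked = true;

form.append(radio, box);
document.body.append(form);

// Turning the checked checkbox into a radio puts a second checked
// radio into the group, which must uncheck the first one:
box.type = "radio";
radio.checked; // false
```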
This will change behaviour for moved-from `Optional<T>`s, since they
will now no longer clear their value if `T` is trivial. However, a
moved-from value should be considered to be in an unspecified state.
Use `Optional<T>::clear` or `Optional<T>::release_value` instead.
If a type is non-move constructible but move assignable,
its container type may still be move assignable.
Similarly, if a type is non-copy constructible but copy assignable,
its container type may still be copy assignable.
The implicit default CMAKE_OSX_SYSROOT was a workaround in CMake for
macOS ~10.8. As of CMake 4.x, CMake expects macOS host compilers to have
their own default sysroot detection logic. However, upstream llvm does
not actually do this, only Apple Clang does. To keep supporting homebrew
clang and manually compiled clang from llvm/llvm-project, we need to
set the sysroot explicitly.
The behavior difference and lack of default detection logic in the clang
driver is tracked at https://gitlab.kitware.com/cmake/cmake/-/issues/26863
By doing that we consistently use the Identifier node for identifiers,
and also enable the mechanism that registers identifiers in the
corresponding ScopePusher for catch parameters, which is necessary for
the work in the upcoming changes.
similar-origin window agents have the [[CanBlock]] flag set to false.
Achieve this by hooking up JS's concept of an agent to HTML::Agent.
For now, this is only hooked up to the similar-origin window agent
case but should be extended to the other agent types in the future.
Currently, compute_scrollbar_data does not adjust the position of the
scrollbar thumb based on the actual scroll offset. This is because we
perform this offset in most cases inside the display list executor, in
order to allow us to avoid recomputing the display list.
However, there are cases where we do want the thumb rect with an offset
inside PaintableBox. We currently use scroll_thumb_rect to perform that
computation.
In an upcoming patch, we will need both this offset thumb rect and the
scrollbar gutter rect. So this patch moves the computation of the offset
to compute_scrollbar_data, performed behind an optional parameter.
When a scrollbar is not interacting with the mouse, we now draw the
scrollbar slightly slimmer. When the mouse enters the space occupied
by the scrollbar, we enlarge it for easier mouse interactivity.
This was completely busted (where it would generate a variable inside a
block, and try to access it outside the block); this commit fixes this
in the least annoying way possible.
...instead of specially handling JS::Completion.
This makes it possible for LibWeb/LibJS to have full control over how
these things are made, stored, and visited (whenever).
Fixes an issue where we couldn't roundtrip a JS exception through Wasm.
There's a specific (and thankfully very common!) scenario where we can
actually skip calculating the automatic minimum size for flex items.
In single-line (no wrapping) flex containers, if the sum of all item
flex base sizes is <= the flex container's main size, we know that
none of the items will be shrunk by the layout algorithm.
And so for any flex item with definite main size AND automatic minimum
main size, we can treat the automatic minimum size as 0.
We were missing the step to use realm's global object if thisValue
was nullish. This is very trivial to fix, as `impl_this` already
handles everything as it should, allowing us to also remove the
special casing for WindowProxy.
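For example, this must call through to the realm's global object
instead of throwing:
```js
// Detached from `window`, so thisValue is undefined at the call site:
const later = setTimeout;

// Per Web IDL, a nullish thisValue falls back to the realm's global
// object, so this works rather than throwing.
later(() => {}, 0);
```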
We were calculating the fit-content cross size and then throwing it
away and doing it again. You might think this wasn't so bad since
fit-content relies on cacheable intrinsic sizes *buuuuut* since we're
actually modifying the constraints for the second call, we were indeed
doing completely wasted work here.
For unscrolled viewports (already at 0, 0), we can short-circuit here
and avoid doing a synchronous relayout of the page.
This avoids a bunch of synchronous layouts on Speedometer3's NewsSite
(Nuxt version) subtests.
For unscrolled elements (already at 0, 0) that aren't eligible to be the
Document.scrollingElement, we can short-circuit here and avoid doing a
synchronous relayout of the page.
This avoids a bunch of synchronous layouts on Speedometer3's NewsSite
subtests.
Instead of using the first font from the FontCascadeList for all glyphs
in a text, we perform a text shaping process that finds a suitable font
for each glyph and returns a list of glyph runs, where each glyph run
represents consecutive glyphs using the same font.
Canvas text painting needs to support per-glyph font fallbacks, which
means we can't hand over responsibility for text shaping to Skia and
instead need to extract glyph paths from our own shaped GlyphRun.
Instead of indiscriminately clearing the cache for all anonymous boxes,
we now only clear it for those that were generated by a non-anonymous
box in need of layout update.
This increases the cache hit rate and allows us to avoid more work.
These elements are quite special, so let's treat them like we do for
substantial CSS display changes and rebuild the layout tree starting
from the parent element instead of self.
Before this change, we were going through the chain of base classes for
each IDL interface object and having them set the prototype to their
prototype.
Instead of doing that, reorder things so that we set the right prototype
immediately in Foo::initialize(), and then don't bother in all the base
class overrides.
This knocks off a ~1% profile item on Speedometer 3.
Many elements never end up needing this string, so instead of eagerly
generating it in the Element constructor, let's defer it until someone
actually requests it.
Knocks off a ~1% profile item on Speedometer3's jQuery test.
Instead of using the more generic define_native_accessor() here,
we poke directly at indexed property storage for the parameter map.
We can also construct the NativeFunction objects directly, without
giving them names like "get 0" etc, since these are not observable
by userspace.
Finally, by using default property attributes (not observable anyway),
we get simple indexed storage instead of generic (hash map) storage.
Instead, let JS::NativeFunction store the AK::Function directly, and
take care of conservatively marking its captured data.
This avoids an extra GC allocation for every JS::NativeFunction.
This will cause an exception to be thrown if user attempts to read from
the response stream of a failed request.
This is unfortunately not testable in CI. It requires a network response
(i.e. not a file:// URL). We also cannot import relevant WPT tests; they
exercise this condition with a python-generated response.
The HTTPS server used by WPT will close TLS connections without sending
a "close notify" alert. For responses that did not have a Content-Length
header, curl treats this as an error.
The basic idea is that style sheets can block script execution under
some circumstances. With this commit, we now handle the simplest cases
where a parser-inserted link element gets to download its style sheet
before script execution continues.
This improves performance on Speedometer 3 where JavaScript APIs that
depend on layout results (like Element.scrollIntoView()) would get
called too early (before the relevant CSS was downloaded), and so we'd
perform premature layout work. This work then had to be redone after
downloading the CSS anyway, wasting time.
Note that our Text/input/link-re-enable-crash.html test had to be
tweaked after these changes, since it relied on the old, incorrect,
behavior where scripts would run before downloading CSS.
No code now relies on using URL's valid state.
A URL can still be _technically_ invalid through use of the URL
constructor or by directly changing URL fields.
However, all URLs should be constructed through the URL parser,
and we should ideally be getting rid of the default constructor
at some stage.
Also, any code which manually sets URL fields needs to be aware
that this is full of pitfalls, since there are many different
forms of canonicalization which are bypassed by not going through
the URL parser.
Creating a URL should almost always go through the URLParser to
handle all of the small edge cases involved. This reduces the
need for URL valid state.
Before this change, we were waiting for Document to lazily evaluate
sheet media and media rules. This often meant that we'd get two
full-document style invalidations: one from adding a new style sheet,
and one from the media queries changing state.
By evaluating the rules eagerly on insertion, we coalesce the two
invalidations into one. This reduces the number of full-document
invalidations on Speedometer 3 from 51 to 34.
We now cache the containing block (box) once at the start of layout,
which allows Layout::Node::containing_block() to return instantly
instead of doing tree traversal.
Removes a 0.7% profile item on Speedometer 3.
We already had a really nice hash that had a single issue, this commit
fixes that and makes it *the* hash for the hash table, so we avoid
double-hashing and making a long chain.
This is an easy 10% perf gain.
These are slightly unfortunate as we're crossing the library boundary,
but there's precedent with Object::is_dom_node(), and these are just
knocking down a few more items that were showing up in profiles.
The remaining AOs can stay where they are. This patch just sorts them in
spec order to match the other AO files. Section 8.1 is last, however, as
these are all templates and need the declarations of other AOs above.
The main streams AO file has gotten very large, and is a bit difficult
to navigate. In an effort to improve DX, this migrates TransformStream
AOs to their own file.
The main streams AO file has gotten very large, and is a bit difficult
to navigate. In an effort to improve DX, this migrates WritableStream
AOs to their own file.
The main streams AO file has gotten very large, and is a bit difficult
to navigate. In an effort to improve DX, this migrates ReadableStream
AOs to their own file. And the helper classes used for the tee and pipe-
to operations are also in their own files.
Corresponds to dba6e7b6c2
and 4c0401186c
The spec algorithms now use "the relevant global object's associated
document", so remove the concept of the History object itself having an
associated document. The spec has also combined the implementations for
forward/back/go so I've matched that too.
Corresponds to 03ab71775b
I've also split the `Document::has_focus()` method for clarity. Actually
implementing the "has focus steps" turns out to be quite involved so
I've left it for now.
This fixes Layout/input/dialog-open-modal.html which began flaking
super hard after the preceding commits that reduced style invalidation
for focus-related pseudo class selectors.
We achieve this by keeping track of all checked pseudo class selectors
in the SelectorEngine code. We also give StyleComputer per-pseudo-class
rule caches.
For example, Google uses ISO-8859-1 encoding. This patch allows us to
decode such responses, falling back to UTF-8 if a Content-Type was not
specified or could not be parsed. We should also now handle if decoding
fails, rather than crashing inside JsonParser.
Previously, `CSSStyleSheet.replace()` and `CSSStyleSheet.replaceSync()`
parsed the given CSS text into a temporary stylesheet object, from
which a list of rules was extracted. Doing this had the unintended
side-effect that a fetch request would be started if the given CSS text
referenced any external resources. This fetch request would cause a
crash, since the temporary stylesheet object didn't set the constructed
flag, or constructor document. We now parse the given CSS text as a
list of rules without constructing a temporary stylesheet.
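A sketch of the input that used to crash (the URL is illustrative):
```js
const sheet = new CSSStyleSheet();

// The url() references an external resource. Parsing this used to go
// through a temporary stylesheet whose fetch then crashed; the text is
// now parsed directly as a list of rules.
sheet.replaceSync("div { background: url(https://example.com/img.png); }");
```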
There are parts of the codebase where properly const-correctifying the
code would be a giant spaghetti mess, so add this loud workaround
to defer the refactoring for later.
This was actually an older change to the Streams spec that we missed
when we implemented TransformStreams. This fixes a crash in the imported
WPT tests.
See: https://github.com/whatwg/streams/commit/007d729
These callbacks are evaluated synchronously via JS::Call. We do not need
to construct an expensive RootVector container just to immediately
invoke the callbacks.
Stylistically, this also helps indicate where the actual arguments start
at the call sites, by wrapping the arguments in braces.
This is preparation for removing the endianness override, since it was
only used by a single client: LibTextCodec.
While here, add helpers and make use of simdutf for fast conversion.
When we need the callback to return a promise, we can use this alternate
invoker to construct the WebIDL::Promise for us. Currently, the Streams
API will use WebIDL::invoke_callback to create a JS::Promise, and then
wrap that result in a resolved WebIDL::Promise. This results in rejected
JS::Promise instances not being propagated.
There is an open issue to clarify exactly what realm is to be used when
creating promises. There are surely many more places we will need to
update to use the correct realm (which will be the realm of `this`'s
relevant global object).
Memory stream is a more suitable container for the socket send queue,
as using it results in fewer allocations than trying to emulate a stream
using a Vector.
Chaining workflows does not cause the subsequently spawned workflow runs
to use the same event, but rather it uses the latest head SHA based on
the branch it runs on. This would cause the JS benchmarks jobs to not be
able to find artifacts (if a new JS repl workflow was started before the
previous one could finish) and/or assign the wrong commit SHA to the
benchmark results.
Since `github.event` contains information about the original workflow
run that spawned the JS benchmarks jobs, we can take the commit SHA from
there and use it to download the correct artifact.
If the user clicked directly on the input inside a label, then it
already received a click event. Dispatching a second one via the label
is redundant, and means that if the input is a checkbox, it gets its
value toggled twice.
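A sketch of the scenario, assuming a checkbox nested in its label:
```js
const label = document.createElement("label");
const box = document.createElement("input");
box.type = "checkbox";
label.append(box);
document.body.append(label);

let clicks = 0;
box.addEventListener("click", () => clicks++);

// A click on the checkbox itself must produce exactly one click event;
// re-dispatching through the label would toggle the value back again.
box.click();
clicks;      // 1
box.checked; // true
```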
This is very frequently returned by Object.prototype.toString(),
so we benefit from caching it instead of recreating it every time.
Takes Speedometer2.1/EmberJS-Debug-TodoMVC from ~4000ms to ~3700ms
on my M3 MacBook Pro.
We had concurrency set on the JS artifacts and JS benchmarks workflows
causing them to not run in parallel for the same combination of
(workflow, OS name). You'd expect that this causes a FIFO queue to exist
of the jobs to run sequentially, but in reality GitHub maintains a
single job to prioritize and cancels all others. We don't want that for
our artifacts and benchmarks: we want them to run on each push.
For example, a new push could have workflows getting cancelled because
someone restarted a previously failed workflow, resulting in the
following message:
"Canceling since a higher priority waiting request for [..] exists"
By removing the concurrency setting from these workflows, we make use of
all available runners to execute the jobs and potentially run some of
them in parallel. For the benchmarks however, we currently only have one
matching self-hosted runner per job, and as such they are still not run
in parallel.
We now don't absolutize the URL during parsing, keeping it as a CSS::URL
object which we then pass to the "fetch an external image for a
stylesheet" algorithm. Our version of this algorithm is a bit ad-hoc,
in order to make use of SharedResourceRequest. To try and reduce
duplication, I've pulled all of fetch_a_style_resource() into a static
function, apart from the "actually do the fetch" step.
The CSS `fetch_foo()` functions resolve the URL relative to the
CSSStyleSheet if one is provided. So, style values that do so need to
know what CSSStyleSheet they are part of so that, for example, `url
(foo.png)` is loaded relative to the style sheet's URL instead of the
document's one.
That all works without this change because we currently absolutize URLs
during parsing, but we're in the process of stopping that.
This commit adds the infrastructure for telling style values what their
CSSStyleSheet is.
See the spec issue in the comments for details. There are situations
where we will need to call this but don't have a CSSStyleSheet to pass
in, but we always have a Document, so use that where possible.
These actually always return a value, despite the `CSSStyleSheet*`
return type. So, make that clearer by returning `GC::Ref<CSSStyleSheet>`
instead. This also means we can remove some ad-hoc error-checking code.
The spec is unclear about when exactly we should parse the style sheet.
Previously we'd do so before calling this algorithm, which was
error-prone, as seen by the bug fixed by the previous commit. The spec
for step 1 of "create a CSS style sheet" says:
1. Create a new CSS style sheet object and set its properties as
specified.
The definitions linked are UA-defined enough that it seems reasonable to
put the parsing here. That simplifies the user code a little and makes
it harder to mess up. It does raise the question of what to do if
parsing fails. I've matched our previous behaviour by just logging and
returning in that case.
While I'm modifying this method, I've also converted the bool params to
enums so they're a little clearer to read.
Before this change, we assigned the style sheet's location *after* its
content rules were parsed and added to it. This meant any `@import`s
would try to fetch their style sheet before they knew the URL they
should fetch it relative to.
Previously if we encountered any attributes with a namespace other than
`xml` or `xmlns`, we treated it as a parse error. Now, allow it as long
as it's been declared in the current context.
We also handle errors more gracefully - instead of exploding if setting
the namespace fails, treat it as an error and carry on.
Names like ":hi", "wow:", or "a🅱️c" are invalid, so early-out instead
of searching our namespace stack for matching prefixes.
And also rename the function because it's relevant to attributes too.
With this change we ensure that Skia surfaces are allowed to be created
or destroyed only by one thread at a time. Not enforcing this before
resulted in "Single owner failure." assertion crashes in debug mode on
pages with canvas elements.
A couple of minor things I came across while debugging BYOB streams.
Adjust some spec text to match the latest spec, and use GC::Ref instead
of a raw pointer where applicable.
This adds support for async iterators of the form:
```
async iterable<value_type>;
async iterable<value_type>(/* arguments... */);
```
It does not yet support the value pairs of the form:
```
async iterable<key_type, value_type>;
async iterable<key_type, value_type>(/* arguments... */);
```
Async iterators have an optional `return` data property. There's not a
particularly good way to know what interfaces implement this property.
So this adds a new extended attribute, DefinesAsyncIteratorReturn, which
interfaces can use to declare their support.
I don't quite see what spec text requires this, but it is explicitly
checked by WPT. We used to pass this test, but that regressed after
commit 3c6010c663.
This required a bit of manual manipulation. These tests dynamically
fetch generated IDL files, e.g.:
https://github.com/web-platform-tests/wpt/blob/master/interfaces/streams.idl
Our WPT importer is not able to detect the IDL files that need to be
imported, so dom.idl and streams.idl was copied over manually. Further,
idlharness.js would create URLs of the form "file://interfaces/dom.idl".
So idlharness.js was adapted to create a URL relative to the test file.
If we don't do this, then we endlessly spin trying to read data which
ends up in a deadlock.
The description for SSL_ERROR_ZERO_RETURN states:
> The TLS/SSL connection has been closed. If the protocol version is SSL
> 3.0 or TLS 1.0, this result code is returned only if a closure alert
> has occurred in the protocol, i.e., if the connection has been closed
> cleanly. Note that in this case SSL_ERROR_ZERO_RETURN does not
> necessarily indicate that the underlying transport has been closed.
This widens the assertion from only checking if the WritableStream's
state is Errored or Erroring to asserting that the WritableStream is not
in a Writable state.
By the time we're executing bytecode, we know the bytecode will be
flattened. This means we can use ReadonlySpan to look into it instead of
DisjointChunks::spans(), which allocates.
This removes another Match member that required destruction. The "API"
for accessing the strings is definitely a bit awkward. We'll think of
something nicer eventually.
Checking whether a box has an associated layout node is not mentioned
in the spec, but it is required to match the behavior of other browsers,
which do not invoke intersection observer steps for boxes without a
layout node.
This introduces a matrix for the js-benchmarks workflow and runs both
the Linux x86_64 and macOS arm64 JS repl builds against our benchmarks
repository.
The workflow-webhook action that was being used didn't work on macOS or
machines without Docker, so let's create the payload ourselves, sign it
and send it over using plain old `curl`.
The spec states:
Public API must not be used: while reading or writing, or performing
any of the operations below, the JavaScript-modifiable reader,
writer, and stream APIs (i.e. methods on the appropriate prototypes)
must not be used. Instead, the streams must be manipulated directly.
This migrates the default read request we were using to a custom read
request which does not involve extra promises.
I'm not sure about an analogous change with the way we write chunks to
the receiving end. There isn't a "WriteRequest" utility to be used here,
and no matter what AO we use, promises will be involved. Our current
implementation at least does not seem to affect any tests.
This change adds "default step" and "step scale factor" handling for
all remaining HTMLInputElement input types for which the spec defines
them and which we didn't yet handle.
EventTarget::dispatch_event, per its comments, implements an ad-hoc
solution for the UA. I don't know why it checks whether `this` is a
Window or an Element, but web platform tests would fail because `this`
can actually be a `Document`.
Before this change, we'd skip storing the new ComputedProperties in
Element::recompute_style() if there was no invalidation needed.
This caused us to lose the information about which properties are
inherited and/or important (which is also carried by ComputedProperties,
but doesn't affect invalidation).
Consequently, we'd then fail to recompute inherited styles, since that
mechanism depends on this data.
The fix is simply to always store the new ComputedProperties.
This near enough matches what CI does to build fuzzers, with the
differences being the explicit -GNinja and setting CMAKE_OSX_SYSROOT,
as CMake 4 no longer does that for us.
This fixes one source of flakiness on WPT (of many) where we wouldn't
recompute style after programmatically altering the contents of a style
sheet, but instead had to wait for something else to cause invalidation.
Our engine already keeps track of the home realm for all objects.
This is stored in Shape::realm(). We can use that instead of having
a dedicated member in ESFO for the same pointer.
Since there's always a home realm these days, we can also remove some
outdated fallback code from the days when having a realm was not
guaranteed due to LibWeb shenanigans.
If we have two PrimitiveString objects that are both backed by UTF-16
data, we don't have to convert them to UTF-8 for equality checking.
Just compare the underlying UTF-16 data. :^)
With this change we save a copy of the scroll state at the time of
recording a display list, instead of the actual ScrollState pointer
that could be modified by the main thread while the display list is
being rasterized on the rendering thread, which led to frames painted
with inconsistent scroll state.
Fixes https://github.com/LadybirdBrowser/ladybird/issues/4288
When the cursor was positioned at the end of text, attempting to move
it left (using the left arrow key) would fail because align_boundary()
was rejecting the end-of-text position as a valid boundary.
Our existing implementation of stream piping was extremely ad-hoc. It
did nothing to handle closed/errored streams, and did not read from or
write to streams in a way required by the spec.
This new implementation uses a custom JS::Cell to drive the read/write
loop.
This will be needed by Streams. To support this, we now store callbacks
in a hash table, keyed by an ID. Callers may use that ID to remove the
callback at a later point.
While debugging a spec-compliant implementation of ReadableStreamPipeTo,
I spent a lot of time inspecting promise internals. This is much less
noisy if we halve the number of temporary promises.
It is received from user JS as a double and is only used as a double in
all subsequent calculations. This bug would cause UBSAN errors in an
upcoming imported WPT test, which passes Infinity as the HWM.
Note there is an equivalent HWM for ReadableStream, which already stores
the value as a double.
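For example, this is valid and must not be coerced to an integer:
```js
// Infinity is a legal high water mark and stays a double throughout:
const strategy = new CountQueuingStrategy({ highWaterMark: Infinity });
const stream = new WritableStream({}, strategy);
```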
When a message is posted to multiple ports at once, the order in which
the callbacks for these messages are invoked is non-deterministic.
To account for this, the test has been rewritten to accumulate logs
for each port separately, and then print them grouped by port.
This fixes a really nasty EventLoop bug which I debugged for 2 weeks.
The spin_until([&]{return completed_tasks == total_tasks;}) in
TraversableNavigable::check_if_unloading_is_canceled spins forever.
Cause of the bug:
check_if_unloading_is_canceled is called in a deferred manner.
check_if_unloading_is_canceled creates a task:
```
queue_global_task(..., [&] {
    ...
    completed_tasks++;
}));
```
This task is never executed.
queue_global_task calls TaskQueue::add:
```
void TaskQueue::add(task)
{
    m_tasks.append(task);
    m_event_loop->schedule();
}

void HTML::EventLoop::schedule()
{
    if (!m_system_event_loop_timer)
        m_system_event_loop_timer = Timer::create_single_shot(
            0, // delay
            [&] { process(); });

    if (!m_system_event_loop_timer->is_active())
        m_system_event_loop_timer->restart();
}
```
EventLoop::process executes one task from task queue and calls
schedule again if there are more tasks.
So task processing relies on one single-shot zero-delay timer,
m_system_event_loop_timer.
Timers and other notification events are handled by Core::EventLoop
and Core::ThreadEventQueue, these are different from HTML::EventLoop
and HTML::TaskQueue mentioned above.
check_if_unloading_is_canceled is called using deferred_invoke
mechanism, different from m_system_event_loop_timer,
see Navigable::navigate and Core::EventLoop::deferred_invoke.
The core of the problem is that Core::EventLoop::pump is called again
(from spin_until) after timer fired but before its handler is executed.
In ThreadEventQueue::process, events are moved into a local variable
before executing. The first of those events is
check_if_unloading_is_canceled. One of the remaining events is
Web::HTML::EventLoop::process, scheduled in EventLoop::schedule using
m_system_event_loop_timer.
When check_if_unloading_is_canceled calls queue_global_task its
m_system_event_loop_timer is still active because Timer::timer_event
was not yet called, so the timer is not restarted.
But Timer::timer_event (and hence EventLoop::process) will never execute
because check_if_unloading_is_canceled calls spin_until after
queue_global_task, and EventLoop::process is no longer in
event_queue.m_private->queued_events.
By making the single-shot timer manually-reset we allow it to fire
several times. So when spin_until is executed, m_system_event_loop_timer
fires again. Not an ideal solution, but this is the best I could
come up with. This commit makes the behavior match EventLoopImplUnix,
in which a single-shot timer can also fire several times.
Adding event_queue.process(); at the start of pump like in EvtLoopImplQt
doesn't fix the problem.
Note: Timer::start calls EventReceiver::start_timer, which calls
EventLoop::register_timer with should_reload always set to true
(single-shot vs periodic are handled in Timer::timer_event instead),
so I use static_cast<Timer&>(object).is_single_shot() instead of
!should_reload.
This fixes the problem when none of the timers or notifiers get
executed if wake() is called frequently.
Note that calling WaitForMultipleObjects repeatedly until it fails
will not work because rapidly firing timer can get all the attention.
That's why I check every event individually with WaitForSingleObject.
This behavior matches EventLoopImplementationUnix.
and unregister_timer in EventLoopManagerWindows
Destructors for thread local objects are called before destructors of
global not thread local objects.
This is a partial stack of the problem; thread_data is already
destroyed at this point:
```
>WebContent.exe!Core::ThreadData::the
 WebContent.exe!Core::EventLoopManagerWindows::unregister_notifier
 WebContent.exe!Core::EventLoop::unregister_notifier
 WebContent.exe!Core::Notifier::set_enabled
 WebContent.exe!Core::LocalSocket::~LocalSocket
 WebContent.exe!Requests::RequestClient::~RequestClient
 WebContent.exe!Web::`dynamic atexit destructor for 's_resource_loader'
```
Bring back d6080d1fdc with a previously missing check for whether the
underlying socket is closed before accessing `fd()`, which is optional
and empty in the case of a closed socket.
This allows us to remove the BoundFunction::m_name field, which we
were initializing with a formatted FlyString on every function binding,
despite never using it for anything.
With this change TransportSocket becomes capable of sending large
messages without relying on workarounds, such as sending the message as
a shared memory file descriptor when it can't fully fit into the socket
buffer.
It's implemented by combining all enqueued messages into two buffers:
one for bytes and another for fds, and repeatedly attempting to write
them in smaller chunks, waiting for the socket to become writable again
if the receiver needs time to consume the data.
Another significant improvement brought by this change is that we no
longer drop messages queued for sending if the socket doesn't become
writable after a 100ms timeout. Instead, we return the message to the
send buffer and continue waiting for the socket to become writable.
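To illustrate the approach, here is a minimal sketch of such a chunked
write loop (hypothetical helper, not the actual TransportSocket code):
```
#include <cerrno>
#include <poll.h>
#include <unistd.h>

// Write `size` bytes to a non-blocking socket, waiting for writability
// whenever the kernel's socket buffer is full instead of dropping data.
static bool send_all(int fd, unsigned char const* data, size_t size)
{
    size_t offset = 0;
    while (offset < size) {
        ssize_t nwritten = ::write(fd, data + offset, size - offset);
        if (nwritten >= 0) {
            offset += static_cast<size_t>(nwritten);
            continue;
        }
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return false; // hard error; the real code would propagate it
        // Socket buffer is full: block until the receiver consumes data.
        pollfd pfd { fd, POLLOUT, 0 };
        if (::poll(&pfd, 1, -1) < 0 && errno != EINTR)
            return false;
    }
    return true;
}
```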
FJCVTZS (Floating-point Javascript Convert to Signed fixed-point,
rounding toward Zero) does exactly what we need for ToInt32 in the
JavaScript specification.
This isn't world-changing, but it does give a ~2% boost on compute-
heavy benchmarks like JetStream, so we should obviously use it.
The fast path of to_i32() can be neatly inlined everywhere, and we still
have to_i32_slow_case() for non-trivial conversions.
For to_u32(), it really can just be implemented as a static cast to i32!
See the linked spec issue for more details. The MediaList can be null
internally, and this was upsetting GCC as it meant our bindings code
was dereferencing a null pointer.
The regression in the "conditional-CSSGroupingRule" test is that we now
fail the "inserting an `@import`" subtests differently, and the subtests
aren't independent. Specifically, we don't yet implement the checks in
`CSSRuleList::insert_a_css_rule()` that reject certain rules from being
inserted. Previously we didn't insert the `@import` rule because we
failed to parse its relative URL. Now that we parse it correctly, we end
up inserting it.
When `CSSRuleList::remove_a_css_rule()` is called, the removed rule has
its parent style sheet set to null. We shouldn't try to fetch an import
in this case.
It's possible to parse an `@import` rule that isn't attached to a
document. We only actually need it to have one when fetching the linked
style sheet, and that should only happen when the CSSImportRule is
attached to a document. So, we can just accept a null pointer when
constructing it.
We relied on that Document to get the Realm, so we now pass the Realm in
as a separate parameter.
This is ad-hoc, and the spec doesn't seem to tell us what to actually do
here. Without this, following the spec steps for loading relative
`@import` URLs from a `<style>` tag always fails, because that uses the
parent style sheet's location as the base URL.
Our previous approach to `<url>` had a couple of issues:
- We'd complete the URL during parsing, when we should actually keep it
as the original string until it's used.
- There's nowhere for us to store `<url-modifier>`s on a `URL::URL`.
So, `CSS::URL` is a solution to this. It holds the original URL string,
and later will also hold any modifiers. This commit parses all `<url>`s
as `CSS::URL`, but then converts them into `URL::URL`s, so no user code
is changed. These will be modified in subsequent commits.
For `@namespace`, we were never supposed to complete the URL at all, so
this makes that more correct already. However, in practice all
`@namespace`s are absolute URLs already, so this should have no
observable effects.
To prepare for introducing a CSS::URL type, we need to qualify any use
of LibURL as `::URL::foo` instead of `URL::foo` so the compiler doesn't
get confused.
Many of these uses will be replaced, but I don't want to mix this in
with what will likely already be a large change.
Instead of wrapping all non-movable members of TransportSocket in OwnPtr
to keep it movable, make TransportSocket itself non-movable and wrap it
in OwnPtr.
~2% of the Speedometer 2.1 profile was just repeatedly performing the
shape transitions to add these two properties. We can avoid all that
work by caching a premade shape.
This ends up making RegexStringView smaller, which means less stuff to
copy when forking in the regex engine.
Thanks to Leon for suggesting the [[no_unique_address]] trick!
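To illustrate the trick (a generic example, not the actual
RegexStringView definition): an empty member annotated with
`[[no_unique_address]]` may share its address with the next member, so
it no longer costs a padded byte.
```
struct Empty { };

struct Padded {
    Empty tag; // one byte, padded up to pointer alignment
    void* data;
};

struct Packed {
    [[no_unique_address]] Empty tag; // may overlap with `data`
    void* data;
};

// Holds with the Itanium C++ ABI (GCC/Clang); MSVC needs
// [[msvc::no_unique_address]] instead.
static_assert(sizeof(Packed) < sizeof(Padded));
```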
We can use the placeholder syntax to specify the precision dynamically.
Note that `fraction_digits` is a double, which we do not support as a
precision argument. It's safe to cast to an integer here because we
guaranteed above that the value is in the range [0, 100], and is not
fractional.
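For illustration, the same idea expressed with std::format's nested
placeholder (an analogy only; the actual code uses AK's formatter):
```
#include <format>
#include <string>

std::string format_fraction(double value, double fraction_digits)
{
    // Safe: fraction_digits was validated to be a whole number in
    // [0, 100] before this point.
    return std::format("{:.{}f}", value, static_cast<int>(fraction_digits));
}
```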
It's generally considered a security issue to use non-format string
literals. We would likely just crash in practice, but let's avoid the
issue altogether.
We were handling removing the style sheet from the shadow root, but not
appending to it. Fixing this also revealed a bug where a removed link
element would always try to remove its style sheet from the document's
list, as the root is no longer the shadow root it's in. The fix is to
use the passed-in old root when removing the style sheet.
Fixes the cookie banner on https://nos.nl/
By doing this, we also make MessagePort, which relies on IPC transport,
send messages from a separate thread. This solves the problem where
WebWorker and WebContent could deadlock if both were trying to post
messages at the same time.
Fixes https://github.com/LadybirdBrowser/ladybird/issues/4254
The way this test was written didn't guarantee a deterministic message
order, resulting in different output in Chromium and Firefox. This
change slightly rearranges the message order to make it deterministic.
This change is necessary as preparation for an upcoming change that
makes MessagePort post messages from a separate thread, which would've
revealed the non-deterministic message order.
Reimplements c3121c9d at the transport layer, allowing us to solve the
same problem once, in a single place, for both the LibIPC connection and
MessagePort. This avoids exposing a workaround for a macOS specific Unix
domain socket issue to higher abstraction layers.
We don't need the [[Fields]] and [[PrivateMethods]] slots for most ESFO
instances, so let's reduce their impact on class size!
This shrinks ESFO from 200 bytes to 160 bytes, allowing more allocations
before we have to collect garbage.
We were doing way too much computation every time an ESFO was
instantiated. This was particularly sad, since the results of these
computations were identical every time!
This patch adds a new SharedFunctionInstanceData object that gets
shared between all instances of an ESFO instantiated from some kind of
AST FunctionNode.
~5% speedup on Speedometer 2.1 :^)
These generate what seem to be nonsense warnings on Function and
ByteBuffer; they *should* be investigated at some point, but they don't
provide anything useful right now.
This is an editorial change in the ECMA-402 spec. See:
https://github.com/tc39/ecma402/commit/a2beb66
We implement this change by introducing a virtual interface that all
Intl "service" objects must implement. A future patch will make use of
the virtualized RelevantExtensionKeys and ResolutionOptionDescriptors
accessors, and we will need to be able to use those slots from a generic
instance type.
This is an editorial change in the ECMA-402 spec. See:
https://github.com/tc39/ecma402/commit/e3f7260
Note the other changes in this commit do not apply to our implementation
as we defer to ICU for the affected steps.
There's a bit of a UTF-8 assumption with this change. But nearly every
caller of these methods was immediately creating a String from the
resulting ByteString anyway.
Don't use a Vector to form the transformed string. We can construct the
string immediately and store the result in its buffer, and thus avoid a
double allocation.
In a synthetic benchmark, lowercasing a 500 character ASCII string
1 million times reduced from 550ms to 65ms on my machine.
* Use `any_of` instead of manual loops
* Don't check if a code point is upper/lowercase twice. The check we
are using is already present inside the case converter.
* Move StringImpl's implementation into ByteString. ByteString is its
only user, so let's avoid some jumping around. The other ASCII case
methods on StringImpl will soon also be removed.
With this change, the responsibility for prepending messages with their
size and ensuring the entire message is received before returning it to
the caller is moved to TransportSocket. This removes the need to
duplicate this logic in both LibIPC and MessagePort.
Another advantage of reducing message granularity at IPC::Transport
layer is that it will make it easier to support alternative transport
implementations (like Mach ports, which unlike Unix domain sockets are
not stream oriented).
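A minimal sketch of the length-prefix framing idea (hypothetical
helpers; the actual wire format may differ):
```
#include <cstdint>
#include <cstring>
#include <optional>
#include <vector>

// Prepend a fixed-size length header to a payload.
static std::vector<uint8_t> frame_message(std::vector<uint8_t> const& payload)
{
    uint32_t size = static_cast<uint32_t>(payload.size());
    std::vector<uint8_t> framed(sizeof(size) + payload.size());
    std::memcpy(framed.data(), &size, sizeof(size));
    std::memcpy(framed.data() + sizeof(size), payload.data(), payload.size());
    return framed;
}

// Return a complete message (removing it from `buffer`), or nullopt if
// not all of the message's bytes have arrived yet.
static std::optional<std::vector<uint8_t>> try_take_message(std::vector<uint8_t>& buffer)
{
    uint32_t size = 0;
    if (buffer.size() < sizeof(size))
        return std::nullopt;
    std::memcpy(&size, buffer.data(), sizeof(size));
    if (buffer.size() < sizeof(size) + size)
        return std::nullopt;
    std::vector<uint8_t> message(buffer.begin() + sizeof(size),
        buffer.begin() + sizeof(size) + size);
    buffer.erase(buffer.begin(), buffer.begin() + sizeof(size) + size);
    return message;
}
```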
We were not properly handling the case where the prefix code point was
the empty string (which we represent as an OptionalNone). While this
still resulted in the correct pattern string being generated, an
incorrect regular expression was being generated, causing matching to
fail.
This has no functional difference as far as I can tell, but for clarity
we now explicitly do not attempt to do this, which has the nice side
effect of not checking for whitespace known not to exist.
It turns out that the problem here was simply that we were trimming
trailing whitespace when we did not need to, which meant that the port
number of '80 ' was being converted to the empty string per URLPattern
elision, as the port matches the http scheme.
We cached the length identifier for GetLength, but not
GetLengthWithThis. This caused a `has_value()` verification failure
when accessing super.length. Found by Fuzzilli.
This commit disallows "default" as a font-family name when the name is
not quoted, because unquoted names are treated as custom-idents, for
which the name "default" is not allowed.
Shorthand subproperties that match their initial values are now
excluded from serialization, by default.
Properties where this behavior is not desired, like `gap`, are
special-cased.
This also rearranges the code to follow the spec better: We create an
empty FontFace first and then fill it in, instead of creating it
fully-formed at the end.
Read the descriptor style values instead of producing a ParsedFontFace
first, as this means we know if a descriptor is actually present, or
has been defaulted to an initial value. This lets us correctly skip the
unicode-range if it was not explicitly set.
Firefox and Chromium both serialize using the "font-stretch" name
(which is an alias for font-width), following the outdated CSSOM spec,
so I've done the same to match them.
The one thing that we still do differently in this test is that those
browsers check explicitly if `font-stretch` was set, and ignore when
`font-width` is.
I've also inlined the `serialize_a_local()` function into the one place
it's used. The style value's to_string() method was already wrapping the
string in quotes, so calling serialize_a_string() on it was producing
`local("\this mess\"")`. It's clearer what's happening when the code
isn't split up.
This is a slightly mystified attempt to recover the performance
regression seen on our JS benchmark runner after 3c2a2bb39f
and c7bba505ea.
With this change, c7bba505ea is effectively reverted from the
perspective of x86_64.
Corresponds to URL spec change:
https://github.com/whatwg/url/commit/cc8b776b
Note that the new test failure being introduced here is from an
unrelated WPT test change, bundled in the resources test file update,
that I am not convinced is correct.
Our handling of 'optional' return values was previously incorrect, in
that we would always call 'create_data_property' for every single
member of the returned dictionary, even if that property did not have a
value (by falling back to JS::js_null).
This was resulting in a massive number of test failures for URLPattern,
which was expecting 'undefined' as the member value instead of 'null'.
The URL spec represents its path as a:
Variant<String, Vector<String>>
A URL is defined as having an opaque path if it has a single String,
with the URL path otherwise containing a list of path components.
We (like an older version of the spec) track this through a boolean
and only use a Vector with a single component for opaque paths.
This means it was incorrect to simply assign the path to a list with
a single empty string without setting that URL as opaque, which
meant that the path serialization was producing incorrect results.
It may make sense to change the API so this situation is a little
clearer. But for now, we simply need to set the opaque path boolean
to true here.
This provides the infrastructure for taking a part list from the
pattern parser and generating the actual regexp object which is
used for matching against URLs from the pattern.
Compiling a URLPattern component will generate a 'parts list' which
is used for generating the regular expression that is used for
matching against URLs.
This parts list is also used to generate (through this function) a
pattern string. The pattern string of a URL component is what is
exposed on the USVString getters of the URLPattern class itself.
As an example, the following:
```
let pattern = new URLPattern({ "pathname": "/foo/(.*)*" });
console.log(pattern.pathname);
```
Will log the pattern string of: '/foo/**'.
These control how a pattern string is generated, which can vary for
different components and is also impacted by the 'ignoreCase' option
that can be provided in the URLPattern constructor.
This is to avoid a future name conflict between the options IDL
dictionary and the options struct, both of which are present in the
spec.
It is also a nicer interface for now, given there is only a single
option at the moment.
It seems both aarch64 and x86_64 are extremely sensitive to the use of
bitfields here. Unfortunately, aarch64 gains a huge speedup from them
while x86_64 sees a very noticeable slowdown.
Since we're talking about 5%+ swings in both directions here, let's go
for the best of both worlds and use ifdefs in the Operand memory layout.
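A hedged sketch of what that can look like (field names and widths here
are made up):
```
#include <cstdint>

class Operand {
#if defined(__aarch64__)
    // The packed layout wins on aarch64.
    uint32_t m_type : 4;
    uint32_t m_index : 28;
#else
    // Plain members avoid the bitfield extraction cost on x86_64.
    uint32_t m_type;
    uint32_t m_index;
#endif
};
```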
This allows the user to store custom search engines via about:settings.
Custom engines will be displayed below the builtin engines in the drop-
down to select the default engine.
A couple of edge cases here:
1. We currently reject a custom engine if one with the same name already
exists. In the future, we should allow editing custom engines.
2. If a custom engine which was the default engine is removed, we will
disable search rather than falling back to any other engine.
This is to prepare for custom search engines. If we used AK::format, it
would be trivial for a user (or bad actor) to come up with a template
search engine URL that ultimately crashes the browser due to internal
assertions in AK::format. For example:
https://example.com/crash={1}
Rather than coming up with a complicated pre-format validator, let's
just not use AK::format. Custom URLs will signify their template query
parameters with "%s". So we can do the same with our built-in engines.
When it comes time to format the URL, we will do a simple string
replacement.
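A minimal sketch of that replacement (std::string used for illustration;
the real code presumably also URL-encodes the query):
```
#include <string>

static std::string apply_search_template(std::string url_template, std::string const& query)
{
    // Substitute the first "%s" placeholder with the query, e.g.
    // apply_search_template("https://example.com/search?q=%s", "cats").
    if (auto pos = url_template.find("%s"); pos != std::string::npos)
        url_template.replace(pos, 2, query);
    return url_template;
}
```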
In order to support custom search engines, we will need to store the
engine properties as String to hold user-provided data.
This also caused a compile error trying to assign Optional<SearchEngine>
to Optional<SearchEngine const&>, so there's a bit of extra churn here.
We know that the payload is always 0 for these three Value types, and so
we can implement checking for them as full 64-bit compares against
constant values instead of checking just the tag.
This avoids shifting the tag 48 bits to the right before comparing it.
Since these are used all over the place, it actually leads to a nice
code size reduction.
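A hedged sketch of the idea (the tag value and layout are illustrative,
not the engine's actual NaN-boxing constants):
```
#include <cstdint>

struct Value {
    uint64_t encoded { 0 };

    static constexpr uint64_t UNDEFINED_TAG = 0x7ff9; // illustrative

    // Before: shift the tag down 48 bits, then compare.
    bool is_undefined_via_tag() const { return (encoded >> 48) == UNDEFINED_TAG; }

    // After: the payload is known to be zero for this type, so one
    // full 64-bit compare against a constant suffices.
    bool is_undefined() const { return encoded == (UNDEFINED_TAG << 48); }
};
```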
Before this change, we were inlining this function after every
handler for instructions that could throw.
Forcing it out-of-line shrinks the main bytecode interpreter by 15%
and yields a decent 2.5% speedup on JetStream/gcc-loops.cpp.js
Fixes a bug where we always returned null from getElementById() on
unconnected roots, because the id-to-element cache is only maintained
for connected roots.
Fixes the broken Perf-Dashboard suite in Speedometer 3.
This packs the bytecode much better and gives us a decent performance
boost on throughput-focused benchmarks.
Measured on my M3 MacBook Pro:
- 4.7% speedup on Kraken
- 2.3% speedup on Octane
- 2.7% speedup on JetStream1
We can use the index's invalid state to signal an empty optional.
This makes Optional<StringTableIndex> 4 bytes instead of 8,
shrinking every bytecode instruction that uses these.
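A simplified sketch of the pattern (AK's actual Optional machinery
differs):
```
#include <cstdint>

struct StringTableIndex {
    static constexpr uint32_t invalid_value = 0xffffffff;
    uint32_t value { invalid_value };
    bool is_valid() const { return value != invalid_value; }
};

// The invalid bit pattern doubles as the "empty" state, so no separate
// bool (plus padding) is needed.
class OptionalStringTableIndex {
public:
    OptionalStringTableIndex() = default;
    OptionalStringTableIndex(StringTableIndex index) : m_index(index) { }
    bool has_value() const { return m_index.is_valid(); }
    StringTableIndex value() const { return m_index; }

private:
    StringTableIndex m_index {}; // defaults to invalid, i.e. empty
};

static_assert(sizeof(OptionalStringTableIndex) == 4);
```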
This is a homegrown implementation that wasn't actually used in
dependent classes. If this is needed in the future, using OpenSSL would
probably be a better option.
This change ensures that instead of immediately deallocating the message
buffer after sending, we retain it in an acknowledgement wait queue
until an acknowledgement is received from the peer. This is necessary
to handle a behavior of the macOS kernel, which may prematurely
garbage-collect file descriptors contained within the message buffer
before the peer receives them.
The acknowledgement mechanism assumes messages are received in the same
order they were sent, so each acknowledgement message simply indicates
the count of successfully received messages, specifying how many entries
can safely be removed from the acknowledgement wait queue.
By specializing this template and using the special empty JS::Value as a
marker for the `void` state, we shrink this very common class from 16
bytes to 8 bytes.
This allows bytecode instruction handlers to return their result in a
single 64-bit register, allowing tighter code generation.
This is no longer needed since `IPv6Address::from_string` supports
square brackets. After the update to curl, `CURLOPT_RESOLVE` now
supports replacing IPv6 hosts as well.
This supports IPv6 strings that start with `[` and end with `]` in
accordance with RFC3986 which states:
A host identified by an Internet Protocol literal address, version 6
[RFC3513] or later, is distinguished by enclosing the IP literal
within square brackets ("[" and "]"). This is the only place where
square bracket characters are allowed in the URI syntax.
This prevents the variables declared inside a class static initializer
from escaping to the nearest containing function, causing all sorts of
memory corruption.
These were *extremely* hot in profiles (noticed when looking at
disassembly).
Now that we've made the special empty JS::Value much harder to create
accidentally, we can feel better about turning these into ASSERT and
catching them in debug builds.
The special empty value (that we use for array holes, Optional<Value>
when empty, and a few other placeholder/sentinel tasks) still
exists, but you now create one via JS::js_special_empty_value() and
check for it with Value::is_special_empty_value().
The main idea here is to make it very unlikely to accidentally create an
unexpected special empty value.
Similar to the existing macros for compile options and link options,
this macro wraps the command line definitions for swiftc in a way that
avoids warnings about conditional compilation flags not having values.
Before, if the cache was empty we would try to evict non-existent
entries and crash. So the fix is to make sure that we don't saturate
the cache with a single parse result.
When determining the content/margin box rects within their ancestor's
coordinate space, we were returning early if the passed in values
already belonged to the requested ancestor. Unfortunately, we had
already applied the used values' offset to the rect, which is the offset
to the ancestor's ancestor.
This simplifies the logic to always apply the rect offset after checking
if we've reached the ancestor. Fixes determining float intrusions inside
block elements with `margin: auto` set.
Fixes #4083.
This commit removes the -Wno-unused-private-field flag, thus
re-enabling the warning. Unused fields were either removed or marked
[[maybe_unused]] when unsure.
For example, `@font-face` is not only invalid inside a style rule, it's
also invalid inside a child of a style rule. This fixes a test
regression that we previously passed by accident.
CSSFontFaceRule now stores its values as a CSSFontFaceDescriptors, with
a ParsedFontFace produced on request. This is exposed via the `style`
attribute, so we pass a lot of tests that try to read values from
that.
We have one test regression, which we passed by mistake before: The test
wanted to ensure we don't allow `@font-face` nested inside other rules.
We passed it just because we discarded any `@font-face` without a
`font-family`. What we're supposed to do is 1) keep at-rules with
missing required descriptors and just not use them, and 2) reject
certain ones when nested.
We may want to cache the ParsedFontFace in the future, but I didn't here
because 1) it's called rarely, and 2) that would mean knowing to
invalidate it when the CSSFontFaceDescriptors changes, which isn't
obvious to me right now.
The goal here is to do something a bit smarter with the parsing here
than we do for properties. Instead of the JSON saying "here are the
values, and here are the keywords, and we can have up to 3", here we
place the syntax in the JSON directly (though currently broken up as
one string per option) and then we attempt to parse each one in
sequence. It's something we'll need eventually for `@property` among
other things.
...However, in this first pass, I've gone with the simplest option of
hard-coding the types instead of figuring them out properly. So there's
a PositivePercentage type and a UnicodeRangeTokens type, instead of
properly implementing the grammar for those in a generic way.
Add a new JSON file describing at-rule descriptors, and then use it to
generate a DescriptorID enum, and code to check if it's accepted in a
given at-rule.
This implements a setting to change the languages provided to websites
from `navigator.language(s)` and the `Accept-Language` header. Whereas
the existing Qt settings dialog allows users to type their language of
choice, this setting allows users to select from a predefined list of
languages. They may choose any number of languages and their preferred
order.
This patch only implements the persisted settings and their UI. It does
not integrate the chosen languages into the WebContent process.
The dialog classes here have specific names that will be used in
another dialog. This patch renames them to be more generic, and moves a
border style so it does not assume a dialog-footer element will exist.
This commit removes the unused m_heap member from ConnectionFromClient.
This also works around an issue where some CMake versions don't apply
compiler options from within a subdirectory globally.
Turns out we need this directory to pass to the -frontend command
for creating the interop header, so refactor the whole find module
to find it on each platform.
Start work on a speculative HTML Parser in Swift. This component will
walk ahead of the normal HTML parser looking for fetch() requests to
make while the normal parser is blocked. This work exposed many holes in
the Swift C++ interop component, which have been reported upstream.
Add the proper annotations for the Cell and Cell::Visitor classes to be
visible in Swift. This lets us remove some OpaquePointer shenanigans in
the Swift bindings.
This is a workaround for the lack of support for imported class
hierarchies in Swift. Swift's ClangImporter doesn't tell the Swift
frontend about derived class relationships between imported types.
This patch adds a workaround for a Swift issue where boolean bitfields
with getters and setters in SWIFT_UNSAFE_REFERENCE types are improperly
imported, causing an ICE.
This was added to be used with `kfree_sized` when we construct a String
from a StringBuilder. As of 53cac71cec, it
is unused, causing some compilers to raise a warning.
This reduces the size of StringData from 24 to 16 bytes.
Skia has a check in debug mode to verify that surface is only used
within one thread. Before this change we were violating this by
allocating surfaces on the main thread while using and destructing them
on the rendering thread.
Basically convert o["foo"]=x into o.foo=x when emitting bytecode.
These are effectively the same thing, and the latter format opts
into using an inline cache for the property lookups.
Basically convert o["foo"] into o.foo when emitting bytecode. These are
effectively the same thing, and the latter format opts into using an
inline cache for the property lookups.
It turned out that some web applications want to send fairly large
messages to WebWorker through IPC (for example, MapLibre GL sends
~1200KiB), which led to failures (at least on macOS) because buffer size
of TransportSocket is limited to 128KiB. This change solves the problem
by wrapping messages that exceed socket buffer size into another message
that holds wrapped message content in shared memory.
Co-Authored-By: Luke Wilde <luke@ladybird.org>
`DisplayListPlayer::execute_impl()` can receive a null surface if it was
invoked to rasterize nested display lists (we use them for iframes). In
this case we should not call `flush()` at the end of execution, but
instead wait for the outer display list execution to do that.
In CMake 4.0, having a minimum policy version set to less than 3.5 is
a hard error at configure time. Add an override until the issue can be
resolved in vcpkg itself.
IntrusiveRedBlackTree relies on a member pointer for accessing the value
of a node. On Windows, member pointers can be of variable length,
depending on the inheritance structure of the class. This commit casts
the 4-byte member pointer, or rather offset, to a full pointer type so
that the bit_cast to u8* works, as previously the source was smaller
than the destination, which fails inside __builtin_bit_cast().
This implements an autocomplete engine inside LibWebView, to replace the
engine currently used by Qt. Whereas Qt uses the Qt Network framework to
perform autocomplete requests, LibWebView uses RequestServer. This moves
downloading this untrusted data out of the browser process.
This patch only implements the persisted settings and their UI. It does
not integrate this engine into the browser UI.
When the request is stopped, we clear its internal stream data. There is
a window where RequestServer may have sent an IPC message whose callback
will try to access that data in the time between the data being cleared
and RS receiving the stop signal. When this happens, just bail from IPC.
This is very clearly a very dangerous API to have, and was causing
a crash on Linux as a result of a stack use-after-free when visiting
https://www.index.hr/.
Fixes #3901
If a class field initializer is just a simple literal, we can skip
creating (and calling) a wrapper function for it entirely.
1.44x speedup on JetStream3/raytrace-private-class-fields.js
1.53x speedup on JetStream3/raytrace-public-class-fields.js
The Skia Ganesh backend we currently use doesn't support painting from
multiple threads, which could happen before this change when the main
thread used Skia to paint on the HTML canvas while the rendering thread
was working on display list rasterization.
Fixes https://github.com/LadybirdBrowser/ladybird/issues/4172
Deleting set_surface() makes the DisplayListPlayer API a bit more
intuitive, because now the caller doesn't have to think about whether
it's necessary to restore the previous surface after execution;
instead, DisplayListPlayer takes care of it by maintaining a stack of
surfaces.
This seems to have broken in some recent-ish AppKit update. When we add
the status label to the view hierarchy / change its visibility state,
the NSApp is resetting the cursor to the standard cursor. By overriding
the cursorUpdate method to do nothing, we prevent this from happening.
Instead of reaching into the IFC of the LineBuilder from the BFC, we
should let LineBuilder determine how to deal with the running vertical
float clearance. No functional changes.
If a block with inline children ends with a line break clearing any
floats, we not only need to take the introduced clearance into account
for the next line box, but the containing block needs to set the correct
height as well.
Since the spec calls for using the last line box's bottom as the
resolved height (if treated as auto), we now correctly apply the
clearance to the previous line box's bottom coordinate.
Fixes #4058.
We were accidentally providing it with absolute Y-coordinates, messing
up stacked floating boxes that would otherwise intrude on each other.
Fixes #4160.
This is the same PRNG used by major browser engines, and although it's
a step down in randomness, it massively improves performance of
Math.random().
1.20x speedup on JetStream3/sync-file-system.js :^)
This is an improved version of a73cd88f0c.
The old commit was reverted in 552dd18696.
The new version only paints an element into a new layer if background
blend modes other than normal are used. The rasterization performance
of most websites should therefore not suffer.
Co-Authored-By: Alexander Kalenik <kalenik.aliaksandr@gmail.com>
This change allows us to overlap rasterization and rendering work across
threads: while the rasterization thread processes frame N, the main
thread can simultaneously work on producing the display list for frame
N+1.
This test is timing-sensitive, similar to some other animation tests we
had to disable, and starts failing on CI after the change in an upcoming
commit, so let's disable it for now.
Instead of returning internal generator results as ordinary JS::Objects
with properties, we now use GeneratorResult and CompletionCell which
both inherit from Cell directly and allow efficient access to state.
1.59x speedup on JetStream3/lazy-collections.js :^)
The display list is an immutable data structure, so once it's created,
rasterization can be moved to a separate thread. This allows more room
for performing other tasks between processing HTML rendering tasks.
This change makes PaintingSurface, ImmutableBitmap, and GlyphRun atomic
ref-counted, as they are shared between the main and rendering threads
by being included in the display list.
This removes the old autoplay allowlist file in favor of the new site
setting. We still support the command-line flag to enable autoplay
globally, as this is needed for WPT.
The idea with the UI here is that it will serve as a generic component
for all site settings, such as autoplay, notifications, etc. When the
site settings dialog is opened, it is filled with the requested setting
data, and messages sent to the browser process are based on the setting.
This patch only implements the UI and persisted settings. It does not
apply autoplay changes to the WebContent process.
By moving the LHS and RHS pointers used by rope strings into a
RopeString subclass, we shrink PrimitiveString by 16 bytes. Most strings
are not rope strings, so this ends up saving quite a bit of memory.
Previously, all `GC::Cell` derived classes were Weakable. Marking only
those classes that require this functionality as Weakable allows us to
reduce the memory footprint of some frequently used classes.
Increase the step timeouts on Linux from 50 to 75 milliseconds, since
we're seeing the occasional timeout on CI. For macOS, we should probably
be able to execute the tests a bit quicker than 500ms per step.
This reverts commit a73cd88f0c.
Emitting SaveLayer for each paintable made rasterization a lot slower
on every website because now Skia has to allocate enormous amounts of
temporary surfaces. Let's revert it for now and figure out how to
implement it with less aggressive SaveLayer usage.
PrimitiveString is now internally either UTF-8, UTF-16, or both.
We no longer convert them to/from ByteString anywhere, nor does the VM
have a ByteString cache.
When we build internal pages (e.g. about:settings), there is currently
quite a lot of boilerplate needed to communicate between the browser and
the page. This includes creating IDL for the page and the IPC for every
message sent between the processes.
These internal pages are also special in that they have privileged
access to and control over the browser process.
The framework introduced here serves to ease the setup of new internal
pages and to reduce the access that WebContent processes have to the
browser process. WebUI pages can send requests to the browser process
via a `ladybird.sendMessage` API. Responses from the browser are passed
through a WebUIMessage event. So, for example, an internal page may:
```
ladybird.sendMessage("getDataFor", { id: 123 });

document.addEventListener("WebUIMessage", event => {
    if (event.name === "gotData") {
        console.assert(event.data.id === 123);
    }
});
```
To handle these messages, we set up a new IPC connection between the
browser and WebContent processes. This connection is torn down when
the user navigates away from the internal page.
This adds a command for saving the current layer of the canvas.
This is useful for painting content onto a blank background in
isolation and later compositing it onto the canvas.
This makes them accessible outside of PropertyParsing.cpp (which will be
useful if/when descriptors can include them). I've also renamed them to
use the correct term: "arbitrary substitution function".
It might be useful to have these artifacts, even for older commits. As
an added bonus, this causes the JS benchmarks to run as well giving us
more datapoints.
This stops the frame-ancestors WPT tests from crashing when an iframe
is blocked from loading. The crash happened because the test would get
an undefined location.href from the cross-origin iframe while expecting
it to be there.
If we don't have parameter expressions, we don't need to collect
metadata about whether instantiated var names collide with parameter
names or function names, as these flags are only used in the parameter
code path.
This avoids going through all the shape transitions when setting up the
most common form of ESFO.
This is extremely hot on Uber Eats, and this provides some relief.
With search enabled:
- For a single word that does not look like a URL: search (a reported
  bug).
- If no suffix or a suffix that is not recognized: search.
In general:
- Respect the user's choice if they specified a scheme.
- Respect localhost and registered TLDs.
- As before, add file:// to filesystem files that exist.
We can use the index's invalid state to signal an empty optional.
This makes Optional<IdentifierTableIndex> 4 bytes instead of 8,
shrinking every bytecode instruction that uses these.
Whenever we generate line boxes, we might end up with a residual
vertical float clearance by way of having a `<br>` with `clear: ..` set.
Set the Y offset of the next block level box to place by this vertical
clearance.
Relates to #4058.
This involves yeeting the 'invalid' union member as it was not really
checked against properly anyway; now the 'invalid' state is simply
StringData*{nullptr}, which was assumed to not exist previously.
Note that this is still accessing inactive union members, but is
promising to the compiler that they're fine where they are (the provided
debug macro AK_STRINGBASE_VERIFY_LAUNDER_DEBUG makes the
would-be-UB-if-not-for-launder ops verify that the operation is correct)
Should fix the GCC build.
Instead of making a copy of the Vector<FunctionParameter> from the AST
every time we instantiate an ECMAScriptFunctionObject, we now keep the
parameters in a ref-counted FunctionParameters object.
This reduces memory usage, and also allows us to cache the bytecode
executables for default parameter expressions without recompiling them
for every instantiation. :^)
Let's simply reinsert the element, respecting its new position in the
DOM tree, instead of crashing.
Fixes a regression in WPT tests caused by the introduction of the cache
for getElementById().
We were introducing a line break and applying vertical clearance to the
inline formatting context, but that vertical clearance only applied to
new floating boxes. We should move the current block offset to the
vertical clearance to make sure the next line box starts beyond the
cleared floats.
There was a layout test for `<br>` with `clear: ..` set, but that test
did not actually do anything - removing the `clear` property would
result in the same layout. Replace that test with something that
actually tests float clearing.
Relates to #4058.
This works because at the end of the finally chunk, a
ContinuePendingUnwind is generated which copies the saved return value
register into the return value register. In cases where
ContinuePendingUnwind is not generated, such as when there is a break
statement in the finally block, the function will return undefined,
which is consistent with V8 and SpiderMonkey.
These will be used by a similar generator for CSS at-rule descriptors.
For `get_snake_case_function_name_for_css_property_name()`, I've rolled
its behaviour into `snake_casify()` with an optional ability to trim
leading underscores.
Having this hidden away in ImageStyleValue meant that
CSSStyleProperties (and anyone else who holds style values) had to know
exactly which types need visiting. This is a footgun waiting to happen,
so make this a virtual method on CSSStyleValue instead.
Our recent change to get rid of the "move 1px at a time" algorithm in
the float positioning logic introduced the issue that potentially
intersecting float boxes were not evaluated in order anymore. This could
result in float boxes being pushed down further than strictly necessary.
By finding the highest point we can move the floating box to and
repeating the process until we're no longer intersecting any floating
box, we also solve some edge cases like intersecting with very long
floating boxes whose edges lie outside the current box's edges.
This is by no means the most efficient solution, but it is more correct
than what we had until now.
Fixes #4110.
We used to always clear the side data after encountering a box with
`clear: ..`, but this is not the right thing to do if that same box also
has `float: ..` set. For example, with `clear: right` and `float: left`
it might be that the next box still wants to clear the right side, and
since the previous box is floating it did not push the next box down far
enough to justify clearing the side data at that point.
This changes the logic to only clear the float side if the clearing box
itself is not floating. We also no longer clear the opposite side after
placing a floating box; that doesn't seem to be necessary anymore.
Fixes #4102.
The Busybox version of the sort utility doesn't support the
--version-sort and --check CLI args, only their short alternatives
(-V and -c, respectively).
Use the short alternatives to gain Busybox compatibility.
Now we use `before_paint()` and `after_paint()` calls for SVG root box
to make sure that both clip and scroll are applied.
Fixes painting of SVG arrows on www.ubereats.com
When generating line boxes, we place floats simultaneously with the
other items on the same line. The CSS text spec requires us to trim the
whitespace at the end of each line, but we only did so after laying out
all the line boxes.
This changes the way we calculate the current line box width for floats
by subtracting the number of pixels that the current trailing whitespace
is using.
Fixes #4050.
Allows us to avoid DOM node lookup whenever we need to query natural
size of a box during layout.
Makes 3-4% of `Box::preferred_aspect_ratio()` go away from profiles on
www.nyan.cat
It's safe to remove this pointer because intrinsic layout should never
look up a box's state beyond its containing block.
This change affects the expectations of two layout tests, but both
already differ slightly from other browsers, and the difference between
expectations is less than 5px.
More work on recovering the performance regression from
DeprecatedFlyString removal.
Local measurements on my MBP:
- 2.5% speedup on Octane/zlib.js
- 2% speedup on Octane/typescript.js
In practice this does not make a big difference, but technically it
could happen that a second JS Repl artifact was built before the first
JS Benchmarks job is executed. So make sure to filter on commit ID.
See https://github.com/LadybirdBrowser/ladybird/issues/3911. The
html/rendering/replaced-elements/svg-inline-sizing/svg-inline.html WPT
test is causing a hang/timeout that prevents being able to run the test
suite to completion.
With this change we maintain a data structure that maps ids to
corresponding elements. This allows us to avoid tree traversal in
getElementById() in all cases except ones when lookup happens for
unconnected elements.
All abspos boxes are expected to be blockified, so we are certain that
we can ignore non-box elements when collecting abspos nodes for layout.
Fixes a crash caused by an attempt to cast a BreakNode to a Box while
performing abspos layout.
Instead of reparsing the style attributes every time we instantiate
the internal shadow tree for a text input element, we now parse them
once (in the internal CSS realm) and reuse them for all elements.
Roughly a ~10% speedup on Speedometer 2.1 :^)
This is used for default UA style right now, and we'll expand its use in
the near future.
Note that this required teaching the CSS parser to handle url()
functions when there's no document URL to resolve them against. If we
don't handle that, namespace rules in UA style don't parse correctly.
The JS runtime is full of checks for is<NumberObject> and friends.
They were showing up in a Speedometer profile as ~1% spent in
dynamic_cast, and this basically chops that down to nothing.
On some Macs, the default maximum number of file descriptors is 256.
This quickly makes the WPT runner run out of descriptors, so let's check
the active value and increase the soft limit if necessary.
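Roughly what that check looks like with the POSIX rlimit API (a sketch,
not the runner's exact code):
```
#include <sys/resource.h>

// Raise the soft RLIMIT_NOFILE limit toward `desired` if it is lower,
// without ever exceeding the hard limit.
static void ensure_open_file_limit(rlim_t desired)
{
    rlimit limit {};
    if (getrlimit(RLIMIT_NOFILE, &limit) != 0)
        return;
    if (limit.rlim_cur >= desired)
        return;
    limit.rlim_cur = desired < limit.rlim_max ? desired : limit.rlim_max;
    setrlimit(RLIMIT_NOFILE, &limit);
}
```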
The `transform` property supports transform functions that sometimes
need their `calc(percentage)` values to be converted to a number instead
of a length. Currently this only applies to the `scale*` family of
functions, which are marked as such in `TransformFunctions.json`.
We were not consistently applying the `NumberPercentage` type to these
functions though, and in addition, any `NumberPercentage` value would
not consider calculated values.
"Functional" as in "it's a function token" and not "it works", because
the behaviour for these is unimplemented. :^)
This is modeled after the pseudo-class parsing, but with some changes
based on things I don't like about that implementation. I've
implemented the `<pt-name-selector>` parameter used by view-transitions
for now, but nothing else.
There were several issues with the previous parsing code, like ignoring
trailing tokens, not handling whitespace, and not requiring the value
to be a `<family-name>`. So, fix all that.
Also correct the serialization code, which didn't call
`serialize_a_string()` previously.
Instead of reserving space for data required to run animations in every
DOM element, we now allocate it lazily, only if an element actually has
some animations. This allows us to save 336 bytes on non-animated DOM
elements.
This mode made a lot of incorrect assumptions about string lifetimes,
and instead of fixing it, let's just remove it and tweak the few unit
tests that used it.
We should always prefer working with String, and Value::to_string() may
even return a cached String if the Value refers to a primitive string,
but no caching occurs for ByteString.
Previously, `String.prototype.split()` caused the construction of a
temporary StringObject when a string primitive was passed as an
argument, solely to perform a Symbol.split lookup. This change allows
skipping that allocation by looking directly into the prototype of
primitive values.
As a result, we can avoid ~200000 StringObject allocations in a single
test from the Speedometer 2 benchmark.
Co-Authored-By: Andreas Kling <andreas@ladybird.org>
A significant portion of reported build problems come from people trying
to build Ladybird with Nix, and it seems there's always something broken
for someone. The maintainers are currently not focused on supporting
Nix, and as a result PRs are not reviewed as well as they could have
been.
This removes all Nix-related files.
Quite simply, ignore any declarations for properties we don't want,
while computing a pseudo-element's style.
I've imported a WPT test for this, which fails without this patch.
Pseudo-elements have specific rules about which CSS properties can be
applied to them. This is a first step to supporting that.
- If a property whitelist isn't present, all properties are allowed.
- Properties are named as in CSS.
- Names of property groups are prefixed with `#`, which makes this match
the spec more clearly. These groups are implemented directly in the
code generator for now.
- Any property name beginning with "FIXME:" is ignored, so we can mark
properties we don't implement yet.
We previously supported a few -webkit vendor-prefixed pseudo-elements.
This patch adds those back, along with -moz equivalents, by aliasing
them to standard ones. They behave identically, except for serializing
with their original name, just like for unrecognized -webkit
pseudo-elements.
It's likely to be a while before the forms spec settles and authors
start using the new pseudo-elements, so until then, we can still make
use of styles they've written for the non-standard ones.
The upcoming generated types will match those for pseudo-classes: A
PseudoElementSelector type, that then holds a PseudoElement enum
defining what it is. That enum will be at the top level in the Web::CSS
namespace.
In order to keep the diffs clearer, this commit renames and moves the
types, and then a following one will replace the handwritten enum with
a generated one.
Before:
- a separate Word element allocation of the underlying Vector<Word> was
necessary for every new word in a multi-word shift
- two additional temporary UnsignedBigInteger buffers were allocated
and passed through, including in downstream calls (e.g. Multiplication)
- an additional allocation and word shift for the carry
- FIXME note seems to point to some of these issues
After:
- main change is in LibCrypto/BigInt/Algorithms/BitwiseOperations.cpp
- one single allocation per call, using shift_left_by_n_words
- only the input "number" and "output" need to be allocated by the
caller
- downstream calls are adapted not to allocate or pass temporary
buffers
- noticeable performance improvement when running TestBigInteger:
  0.41-0.42s (before) to 0.28-0.29s (after) on an Intel Core i9 laptop
Bonus: remove unused variables from UnsignedBigInteger::divided_by
- These were likely cut-and-paste artifacts from
  UnsignedBigInteger::multiplied_by; not caught by "unused-variable".
NOTE: making this change in a separate commit from shift_right, even if
it touches the same file BitwiseOperations.cpp, since:
- it is a "bonus" addition: not necessary for fixing the shift_right
  bug, and logically unrelated to the shift_right code
- it brings a chain of downstream interface modifications (7 files),
  unrelated to shift_right
- Before: UnsignedBigInteger::shift_right(n) triggered an index
  verification error for n>31. An assumption of
  num_bits < UnsignedBigInteger::BITS_IN_WORD was being made.
- After: shift_right(n) works correctly for n>31.
NOTE: "bonus" change; not necessary for fixing BigFraction::to_double
This fixes three WPT test cases at html/dom/elements/global-attributes/dir-assorted.window.html
Update test expectations for Tests/LibWeb/Text/expected/wpt-import/css/selectors/dir-pseudo-on-bdi-element.txt
Instead of bothering the GC heap with a bunch of DOMRect allocations,
we can just pass around CSSPixelRect internally in many cases.
Before this change, we were generating so much DOMRect garbage that
we had to do a garbage collection *every frame* on the Immich demo.
This was due to the large number of intersection observers checked.
We still need to relax way more when idle, but for comparison, before
this change, when doing nothing for 10 seconds on Immich, we'd spend
2.5 seconds updating intersection observers. After this change, we now
spend 600 ms.
When decoding data into bitmaps, we end up with different alpha types
(premultiplied vs. unpremultiplied color data). Unfortunately, Skia only
seems to handle premultiplied color data well when scaling bitmaps with
an alpha channel. This might be due to Skia historically only supporting
premultiplied color blending, with unpremultiplied support having been
added more recently.
When using Skia to blend bitmaps, we need the color data to be
premultiplied. ImmutableBitmap gains a new method to enforce the alpha
type to be used, which is now used by SharedResourceRequest and
CanvasRenderingContext2D to enforce the right alpha type.
Our LibWeb tests actually had a couple of screenshot tests that exposed
the graphical glitches caused by Skia; see the big smiley faces in the
CSS backgrounds tests for example. The failing tests are now updated to
accommodate the new behavior.
Chromium and Firefox both seem to apply the same behavior; e.g. they
actively decode PNGs (which are unpremultiplied in nature) to a
premultiplied bitmap.
Fixes #3691.
This replaces the existing menu item that would open the Qt settings
dialog. That menu item still exists, but is no longer the default
settings action.
This adds a basic settings page to manage persistent Ladybird settings.
As a first pass, this exposes settings for the new tab page URL and the
default search engine.
The way the search engine option works is that once search is enabled,
the user must choose their default search engine; we do not apply any
default automatically. Search remains disabled until this is done.
There are a couple of improvements that we should make here:
* Settings changes are not broadcasted to all open about:settings pages.
So if two instances are open, and the user changes the search engine
in one instance, the other instance will have a stale UI.
* Adding an IPC per setting is going to get annoying. It would be nice
if we can come up with a smaller set of IPCs to send only the relevant
changed settings.
This adds a WebView::Settings class to own persistent browser settings.
In this first pass, it now owns the new tab page URL and search engine
settings.
For simplicity, we currently use a JSON format for these settings. They
are stored alongside the cookie database. As of this commit, the saved
JSON will have the form:
```
{
    "newTabPageURL": "about:blank",
    "searchEngine": {
        "name": "Google"
    }
}
```
(The search engine is an object to allow room for a future patch to
implement custom search engine URLs.)
For Qt, this replaces the management of these particular settings in the
Qt settings UI. We will have an internal browser page to control these
settings instead. In the future, we will want to port all settings to
this new class. We will also want to allow UI-specific settings (such as
whether the hamburger menu is displayed in Qt).
It really doesn't make sense for GitHub to be the default search engine.
If someone really wants this, we can eventually implement setting custom
search engine URLs.
In order to maintain a consistent look and feel between internal about:
pages going forward, let's use a central CSS file to define Ladybird
colors and some common form control styles.
When the return key is pressed, we try to handle it as a commit action
for input elements. However, we would then go on to actually insert the
return key's code point (U+000D) into the input element. This would be
sanitized out, but would leave the input element in a state where it
thinks it has text to commit. This would result in a change event being
fired when the return key is pressed multiple times in a row.
We were also firing the beforeinput/input events twice for all return
key presses.
To fix this, this patch changes the input event target to signify if it
actually handled the return key. If not (i.e. for textarea elements),
only then do we insert the code point. We also must not fall through to
the generic key handler, to avoid the repeated input events.
The select dropdown was doing its own ad-hoc method of handling DPR. We
now handle it just like other context menus. Previously, the drop down
in the AppKit chrome was twice as large as it should be.
For some reason, after some seemingly unrelated upcoming changes, the
unqualified `move`s in this header result in an ADL failure:
AK/TemporaryChange.h:22:39: error: call to function 'move' that is
neither visible in the template definition nor found by argument-
dependent lookup
22 | ~TemporaryChange() { m_variable = move(m_old_value); }
| ^
Libraries/LibDNS/Resolver.h:491:29: note: in instantiation of member
function 'AK::TemporaryChange<bool>::~TemporaryChange' requested here
491 | TemporaryChange change(m_attempting_restart, true);
Previously, we would only invalidate style when setting the `media` IDL
attribute; changing the attribute via `setAttribute()` and
`removeAttribute()` had no immediate effect.
There are two changes happening here: a correctness fix, and an
optimization. In theory they are unrelated, but the optimization
actually paves the way for the correctness fix.
Before this commit, the HTML tokenizer would attempt to look for named
character references by checking from after the `&` until the end of
m_decoded_input, which meant that it was unable to recognize things like
named character references that are inserted via `document.write` one
byte at a time. For example, if `&notin;` was written one-byte-at-a-time
with `document.write`, then the tokenizer would only check against `n`
since that's all that would exist at the time of the check and therefore
erroneously conclude that it was an invalid named character reference.
This commit modifies the approach taken for named character reference
matching by using a trie-like structure (specifically, a deterministic
acyclic finite state automaton or DAFSA), which allows for efficiently
matching one-character-at-a-time and therefore it is able to pick up
matching where it left off after each code point is consumed.
Note: Because it's possible for a partial match to not actually develop
into a full match (e.g. `&notindo` which could lead to `&notindot;`),
some backtracking is performed after-the-fact in order to only consume
the code points within the longest match found (e.g. `&notindo` would
backtrack back to `&not`).
With this new approach, `document.write` being called one-byte-at-a-time
is handled correctly, which allows for passing more WPT tests, with the
most directly relevant tests being
`/html/syntax/parsing/html5lib_entities01.html`
and
`/html/syntax/parsing/html5lib_entities02.html`
when run with `?run_type=write_single`. Additionally, the implementation
now better conforms to the language of the spec (and resolves a FIXME)
because exactly the matched characters are consumed and nothing more, so
SWITCH_TO is able to be used as the spec says instead of RECONSUME_IN.
The new approach is also an optimization:
- Instead of a linear search using `starts_with`, the usage of a DAFSA
means that it is always aware of which characters can lead to a match
at any given point, and will bail out whenever a match is no longer
possible.
- The DAFSA is able to take advantage of the note in the section
`13.5 Named character references` that says "This list is static and
will not be expanded or changed in the future." and tailor its Node
struct accordingly to tightly pack each node's data into 32-bits.
Together with the inherent DAFSA property of redundant node
deduplication, the amount of data stored for named character reference
matching is minimized.
In my testing:
- A benchmark tokenizing an arbitrary set of HTML test files was about
1.23x faster (2070ms to 1682ms).
- A benchmark tokenizing a file with tens of thousands of named
character references mixed in with truncated named character
references and arbitrary ASCII characters/ampersands runs about 8x
faster (758ms to 93ms).
- The size of `liblagom-web.so` was reduced by 94.96KiB.
Some technical details:
A DAFSA (deterministic acyclic finite state automaton) is essentially a
trie flattened into an array, but it also uses techniques to minimize
redundant nodes. This provides fast lookups while minimizing the
required data size, but normally does not allow for associating data
related to each word. However, by adding a count of the number of
possible words from each node, it becomes possible to also use it to
achieve minimal perfect hashing for the set of words (which allows going
from word -> unique index as well as unique index -> word). This allows
us to store a second array of data so that the DAFSA can be used as a
lookup for e.g. the associated code points.
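As a rough illustration of such a packed node (the exact field widths
below are assumptions, not the real layout):
```
#include <cstdint>

// One DAFSA node squeezed into 32 bits. `number` counts the words
// reachable through this node; summing these counts while walking a
// word yields its unique index, giving minimal perfect hashing.
struct Node {
    uint32_t character : 8;       // code unit matched at this node
    uint32_t ends_word : 1;       // a named reference ends here
    uint32_t is_last_sibling : 1; // last entry in this sibling list
    uint32_t number : 8;          // words reachable through this node
    uint32_t first_child_index : 14;
};
static_assert(sizeof(Node) == 4);
```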
For the Swift implementation, the new NamedCharacterReferenceMatcher
was used to satisfy the previous API and the tokenizer was left alone
otherwise. In the future, the Swift implementation should be updated to
use the same implementation for its NamedCharacterReference state as
the updated C++ implementation.
This workflow starts after a successful js-artifacts workflow, picks up
the JS repl binary and runs our js-benchmarks tool. It does not yet
publish or otherwise store the benchmark results, but it's a start!
...boxes with non-auto height.
We know for sure that by the time we lay out abspos boxes, their
containing block has a definite height, so it's possible to resolve
non-auto heights and mark them as definite before doing inner layout.
Big step towards having reasonable performance on
https://demo.immich.app/photos because now we avoid a bunch of work
initiated by mistakenly invoked intersection observer callbacks.
Co-Authored-By: Andreas Kling <andreas@ladybird.org>
It was annoying to maintain two separate but almost identical functions
that gradually accumulated small differences over time. This change
replaces them with a single function that resolves either width or
height, depending on the specified dimension.
This avoids emitting TDZ checks for multiple bindings declared and
referenced within one variable declaration, e.g.:
var foo = 0, bar = foo;
In the above case, we'd emit an unnecessary TDZ check for the second
reference to `foo`.
This reverts commits bf333eaea2 and
6f69a445bd.
The commit linter needs to run on event `pull_request_target` to have
access to its secret token, which means we cannot have a dependency on
that job from another workflow that is run as a result of the
`pull_request` event.
Additionally, the linters were no longer run for first-time
contributors. This isn't a huge problem but it was nice that a
preliminary check took place before running the full CI on their PRs.
Before this change, we would enumerate all the keys with
[[OwnPropertyKeys]], and then do [[GetOwnPropertyDescriptor]] twice for
each key as we went through them.
We now only do one [[GetOwnPropertyDescriptor]] per key, which
drastically reduces the number of proxy traps when those are involved.
The new trap sequence matches what you get with V8, so I don't think
anyone will be unpleasantly surprised here.
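Roughly, the shape of the change is the following (a self-contained
sketch with stand-in types; the real code operates on JS objects and
their internal methods):
```cpp
#include <functional>
#include <optional>
#include <string>
#include <vector>

struct Descriptor { bool enumerable { false }; };

// Stand-ins for the [[OwnPropertyKeys]] / [[GetOwnPropertyDescriptor]]
// internal methods (possibly proxy traps) of some object.
struct ObjectLike {
    std::function<std::vector<std::string>()> own_property_keys;
    std::function<std::optional<Descriptor>(std::string const&)> get_own_property;
};

// One [[OwnPropertyKeys]] call, then exactly one
// [[GetOwnPropertyDescriptor]] per key; previously the descriptor was
// fetched twice per key.
std::vector<std::string> enumerable_own_keys(ObjectLike& object)
{
    std::vector<std::string> result;
    for (auto& key : object.own_property_keys()) {
        auto descriptor = object.get_own_property(key); // one trap per key
        if (descriptor && descriptor->enumerable)
            result.push_back(key);
    }
    return result;
}
```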
The "disable DevTools" button looked like a "close this notification"
button to me, and although a tooltip was set, it only showed up
immediately on the AppKit UI and not the Qt version.
This makes the behavior of clicking the disable button a lot clearer by
showing a button with "Disable" as its title.
If all the parameter default values end up in locals, the lexical
environment we create to hold them would never be used for anything,
and so we can elide it and avoid the GC work.
Instead of creating a new iterator result Object for every step of
for..in iteration, we can create a single object up front and reuse it
for every step. This avoids generating a bunch of garbage that isn't
observable by author code anyway.
We can also reuse the existing premade shape for these objects.
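A rough sketch of the reuse, with stand-in types (the real code mutates
a JS object created from the premade shape):
```cpp
#include <string>
#include <vector>

struct IterResult {
    std::string value;
    bool done { false };
};

// One result object allocated up front and mutated per step, instead of
// a fresh allocation per iteration; author code can't observe the reuse.
void run_for_in(std::vector<std::string> const& keys,
    void (*body)(IterResult const&))
{
    IterResult reusable; // created once, reused for every step
    for (auto const& key : keys) {
        reusable.value = key; // overwrite the "value" slot
        body(reusable);
    }
    reusable.done = true;
}
```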
Instead of pruning as-we-go, which means a ton of hash lookups,
we now only do a single pass to prune all non-enumerable keys when
setting up for for..in iteration.
This fixes a compile warning on GCC 13.3.0 where it warns about the
use of Variant within this class with no key function. Since the class
was almost a CRTP base class, make it a real one.
Previously, when serializing an angle value, we would always convert it
to degrees. We now canonicalize the angle value only when serializing
its computed value.
Previously, when serializing a time value, we would always convert it
to seconds. We now canonicalize the time value only when serializing
its computed value.
The GIF loader was failing completely when it encountered errors in
frame descriptors or individual frames, even when some frames had been
successfully loaded. Now we attempt to decode at least some frames
and fail only when no frames can be decoded at all.
This currently does not enforce it on Layout tests, even though
it seems most necessary there.
This is to speed up the review on this PR due to an excessive
amount of layout tests that would need rebaselining if DOCTYPEs
were added to them.
This requires us to always run the CI job and check the individual jobs'
results, since only having `needs:` will not work when `lint_commits` is
potentially skipped.
We can prevent running builds unnecessarily by requiring the linters to
succeed first. If either the code or commit linter fails, it means the
author of the PR needs to rework their branch and after pushing their
changes, we need to do a full new CI run anyway.
JsonParser has a footgun where it does not retain ownership of the
string to be parsed. For example, the following results in UAF:
```cpp
JsonParser parser(something_returning_a_string());
parser.parse();
```
Let's avoid this altogether by only allowing use of JsonParser via
a static, safe method.
JsonParser only holds a view into the provided string; the caller must
keep it alive. Though we can actually just use JsonValue::from_string
here instead.
When we inspect a DOM node, we currently serialize many properties for
that node, including its layout, computed style, used fonts, etc. Now
that we aren't piggy-backing on the Inspector interface, we can instead
only serialize the specific information required by DevTools.
Now that we aren't piggy-backing on the Inspector interface, we can make
our box-model serialization provide exactly the values that DevTools
requires.
With this change we no longer stretch "width: auto" for replaced
elements, and also use the "width calculation rules for block-level
replaced elements", as suggested by the spec.
The intent is that this will replace the separate Task Manager window.
This will allow us to more easily add features such as actual process
management, better rendering of the process table, etc. Included in this
page is the ability to sort table rows.
This also lays the groundwork for more internal `about` pages, such as
about:config.
CSSStyleDeclaration is a base class that's used by various collections
of style properties or descriptors. This commit moves all
style-property-related code into CSSStyleProperties, where it belongs.
As noted in the previous commit, we also apply the CSSStyleProperties
prototype now.
We previously had PropertyOwningCSSStyleDeclaration and
ResolvedCSSStyleDeclaration, representing the current style properties
and resolved style respectively. Both of these were the
CSSStyleDeclaration type in the CSSOM. (We also had
ElementInlineCSSStyleDeclaration but I removed that in a previous
commit.)
In the meantime, the spec has changed so that these should now be a new
CSSStyleProperties type in the CSSOM. Also, we need to subclass
CSSStyleDeclaration for things like CSSFontFaceRule's list of
descriptors, which means it wouldn't hold style properties.
So, this commit does the fairly messy work of combining these two types
into a new CSSStyleProperties class. A lot of what previously was done
as separate methods in the two classes, now follows the spec steps of
"if the readonly flag is set, do X" instead, which is hopefully easier
to follow too.
There is still some functionality in CSSStyleDeclaration that belongs in
CSSStyleProperties, but I'll do that next. To avoid a huge diff for
"CSSStyleDeclaration-all-supported-properties-and-default-values.txt"
both here and in the following commit, we don't apply the (currently
empty) CSSStyleProperties prototype yet.
Previously, parse_css_style_attribute() would parse the string, extract
the properties, add them to a newly-created
ElementInlineCSSStyleDeclaration, and then user code would take the
properties back out of it again and throw it away. Instead, just return
the list of properties, and the caller can create an EICSD if it needs
one.
The spec defines several properties on the declaration which we
previously made virtual or stored on subclasses. This commit moves these
into CSSStyleDeclaration itself, and updates some of the naming.
We have the "Element, but also maybe a pseudo-element of it" concept in
a lot of places, so let's wrap it up in a single type to make it easier
to deal with.
The `as_corner()` and `floored_device_pixels()` functions popped up
frequently in profiles when selecting text on some Tweakers.net pages.
For every corner we were performing multiple device pixel calculations
regardless of whether any radius was actually set.
Add an early return if no radius is set. On my machine this reduces the
time taken in both `as_corner()` and `floored_device_pixels()` by 46%
(63% fewer calls).
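The fix is conceptually just a guard at the top; a self-contained
sketch with assumed type and member names (checks abbreviated):
```cpp
struct CornerRadius { int horizontal { 0 }; int vertical { 0 }; };
struct CornerRadii { CornerRadius top_left, top_right, bottom_right, bottom_left; };

struct BorderRadiiData {
    CornerRadius top_left, top_right, bottom_right, bottom_left;

    bool has_any_radius() const
    {
        return top_left.horizontal || top_right.horizontal
            || bottom_right.horizontal || bottom_left.horizontal;
    }

    CornerRadii as_corners(double device_pixels_per_css_pixel) const
    {
        // Early return added by this change: most boxes have no radius.
        if (!has_any_radius())
            return {};

        // Expensive per-corner device pixel math only happens here.
        auto scale = [&](CornerRadius r) {
            return CornerRadius { int(r.horizontal * device_pixels_per_css_pixel),
                int(r.vertical * device_pixels_per_css_pixel) };
        };
        return { scale(top_left), scale(top_right), scale(bottom_right), scale(bottom_left) };
    }
};
```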
This also implements the `:high-value` and `:low-value` pseudo-classes
that are in the spec.
Same note as before about this being based on the very-drafty CSS Forms
spec. In fact, some of this isn't even in that spec yet. Specifically,
the `:suboptimal-value` and `:even-less-good-value` names are undecided
and subject to change. However, it's clear that this is a pseudo-class
situation, not a pseudo-element one, so I think this is still an
improvement, as it allows styling of the `::fill` pseudo-element
regardless of what state it is in.
Relevant spec issue: https://github.com/openui/open-ui/issues/1130
This spec is very early on, and likely to change. However, it still
feels preferable to use these rather than the prefixed -webkit ones.
Plus, as we have a `::fill` on range inputs, we can use that for styling
the bar instead of inserting CSS from C++.
Currently, we create `this_argument` with
`ordinary_create_from_constructor`, then we use `arguments_list` to
build the callee_context.
The issue is that we don't properly model the side effects of
`ordinary_create_from_constructor`: if `new_target` is a proxy object,
then when we `get` the prototype, arbitrary JavaScript can run.
That JavaScript could perform a function call with enough arguments to
reallocate the interpreter's m_argument_values_buffer vector. This is
dangerous and leads to a use-after-free, as our stack frame maintains a
pointer into m_argument_values_buffer (`arguments_list`).
LibCore's list of ignored header files for Swift was missing the Apple
only files on non-Apple platforms. Additionally, any generic glue code
cannot use -fobjc-arc, so we need to rely on -fblocks only.
This has two slightly different implementations for ARC and non-ARC
compiler modes. The main idea is to store a block pointer as our
closure and use either ARC magic or BlockRuntime methods to manage
the memory for the block. Things are complicated by the fact that
we don't yet force-enable swift, so we can't count on the swift.org
llvm fork being our compiler toolchain. The patch adds some CMake
checks and ifdefs to still support environments without support
for blocks or ARC.
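As a rough illustration of the two memory-management paths (assuming
clang with -fblocks; this is a sketch, not the committed code):
```cpp
#include <Block.h> // Block_copy / Block_release from the blocks runtime

// Stores a block as a closure; ARC manages the block's lifetime when
// available, otherwise we fall back to the BlockRuntime functions.
class BlockClosure {
public:
    explicit BlockClosure(void (^block)(void))
#if __has_feature(objc_arc)
        : m_block(block) // ARC retains and releases for us
#else
        : m_block(Block_copy(block))
#endif
    {
    }

    ~BlockClosure()
    {
#if !__has_feature(objc_arc)
        Block_release(m_block);
#endif
    }

    void operator()() const { m_block(); }

private:
    void (^m_block)(void);
};
```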
This implementation also fixes an issue where the individual components
of the `border-radius` shorthand were always assumed to be of type
`BorderRadiusStyleValue`, which could lead to a crash when CSS-wide
keywords were used.
A Storage object may be created with an existing storage bottle. For
example, if you navigate from site.com/page1 to site.com/page2, they
will have different localStorage objects, but will use the same bottle
for actual storage.
Previously, if page1 set some key/value item, we would initialize the
byte count to 0 on page2 despite having a non-empty bottle. Thus, if
page2 set a smaller value with the same key, we would overflow the
computed byte count, and all subsequent writes would be rejected.
This was seen navigating from the chess.com home page to the daily
puzzle page.
On macOS, we should use the Cmd (Super) modifier key along with the
arrow keys to scroll to the beginning/end of the document, or navigate
back and forth in the history, rather than the Ctrl or Alt keys.
This is a normative change in the ECMA-402 spec. See:
https://github.com/tc39/ecma402/commit/7508197
In our implementation, we don't have the affected AOs directly, as we
delegate to ICU. So instead, we must ensure we provide ICU a locale with
the relevant extension keys present.
This improves the output of `getComputedStyle()` for the `top`,
`bottom`, `left` and `right` properties, where the used value is now
returned rather than the computed value, where applicable.
...positioned element is a descendant of an inline-block.
We set inline block offsets in InlineFormattingContext.cpp, but this is
not enough. When the static position rect is calculated during layout,
not all ancestors of the abspos box may have their offsets calculated
yet (more info here:
https://github.com/LadybirdBrowser/ladybird/pull/2583#issuecomment-2507140272).
So now the static position rect is calculated relative to the static
containing block during layout, and the calculation relative to the
actual containing block is done later in
FormattingContext::layout_absolutely_positioned_element.
Fixes wpt/css/CSS2/abspos/static-inside-inline-block.html
A couple of fixes here:
- Parse a `<complex-selector>` instead of a `<selector-list>`
- Don't match if any unknown `::-webkit-*` pseudo-elements are found
Both `@supports` and `@font-face` need this. There may be some automatic
way of querying whether our renderer supports these, but I couldn't
figure it out, so here's a basic hard-coded list. I think the font-tech
list has false negatives, as I don't know enough about fonts to
determine what we support accurately.
A MediaFeature either has a MediaFeatureValue, or a Range, or nothing.
Combining these into a Variant reduces the size from 176 bytes to 128,
and also makes constructing these a little less awkward.
CSS Values 5 now defines a `<boolean-expr[]>` type that is used in place
of the bespoke grammar that previously existed for `@media` and
`@supports` queries. This commit implements some BooleanExpression
types to represent the nodes in a `<boolean-expr[]>`, and reimplements
`@media` and `@supports` queries using this.
The one part of this implementation I'm not convinced on is that the
`evaluate()` methods take a `HTML::Window*`. This is a compromise
because `@media` requires a Window, and `@supports` does not require
anything at all. As more users of `<boolean-expr[]>` get implemented in
the future, it will become clear if this is sufficient, or if we need
to do something smarter.
As a bonus, this actually improves our serialization of media queries!
Instead of parsing the parts of a `@supports` query, then only
evaluating them when constructing the Supports itself, we can instead
evaluate them as we parse them. This simplifies things as we no longer
need to pass a Realm around, and don't have to re-parse the conditions
again with a new Parser instance.
Fixes underinvalidation caused by resolving text-decoration-thickness
during layout commit, even though this property can be invalidated
independently of layout.
JavaScript URLs run in the same document context the navigation was
started in, so they're not eligible to be moved to a new WebContent
process.
Fixes the "Login as demo user" link on https://demo.immich.app/
Instead of directly invoking `to_px()`,
`calculate_min_content_contribution()` needs to use
`calculate_inner_width()` and `calculate_inner_height()`, which are
aware of how to correctly handle `min-content` and `max-content` values.
Fixes https://github.com/LadybirdBrowser/ladybird/issues/3469
Height resolution assumes that when the available space is definite, it
matches the size of the non-anonymous containing block. With this
change, we correctly maintain this assumption when a box is wrapped in
an anonymous node.
Fixes https://github.com/LadybirdBrowser/ladybird/issues/3422
While we don't want to be writing this type of code in 'normal' code,
it is useful in tests, as:
```cpp
EXPECT_EQ(my_optional, OptionalNone {});
```
has a much better error message on assertion failure than:
```cpp
EXPECT(!my_optional.has_value());
```
Exactly one place seems to define these at the moment: service worker
clients. Since we don't have a type for these and just use
EnvironmentSettingsObject, I've put it there.
The plugin may go out of scope before the callbacks we've queued have
fired. It is undefined behavior for these callbacks to access the plugin
data after that has occurred.
This patch protects these callbacks with weak pointers. Note that the
plugin is not ref-counted, so we cannot use `strong_ref` here.
The NodeIdentifier struct essentially contains the DOM node ID within
its WebContent process. These will repeat across multiple processes,
thus we cannot use them to search for node actors in the global actor
registry.
Instead, we can store a map of all node actors created by the walker
itself. The NodeIdentifier is then appropriate for actor lookups on
that map. This has the added benefit of not needing to search the entire
actor registry many times while forming the DOM node caches.
This fix allows us to inspect multiple tabs at once.
We must reply to requests received from the client in the order they are
received. The wrench in this requirement is handling requests that must
be performed asynchronously, such as fetching the serialized DOM tree
from the WebContent process.
We currently handle this with a "block token". Async request handlers
hold a token that blocks any subsequent responses from being sent. When
that token is removed (i.e. the async request now has a response to be
sent), the async response is then sent followed by the blocked responses
in-order.
This strategy had a limitation that we could not handle an actor trying
to take 2 block tokens, meaning only one async request could be handled
at a time. This has been fine so far, but an upcoming feature (style
sheet sources) will break this limitation. The client will request N
sources at a time, which would try to take N block tokens.
The new strategy is to assign all requests an ID, and store a list of
request IDs that are awaiting a response. When the server wants to send
a reply, we match the ID of the replied-to message to this list of IDs.
If it is not the first in this list, then we are blocked waiting for an
earlier reply, and just store the response. When the earlier request(s)
receive their response, we can then send out all blocked replies (up to
the next request that has not yet received a response).
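A condensed sketch of the new bookkeeping (types and member names are
assumptions, not the actual DevTools server code):
```cpp
#include <cstdint>
#include <deque>
#include <optional>
#include <string>

struct PendingRequest {
    uint64_t id { 0 };
    std::optional<std::string> response; // set once a reply is ready
};

class Actor {
public:
    void handle_request(uint64_t id) { m_pending.push_back({ id, {} }); }

    // Called by any handler, sync or async, once its reply is ready.
    void reply(uint64_t id, std::string response)
    {
        for (auto& request : m_pending) {
            if (request.id == id)
                request.response = std::move(response);
        }
        // Flush replies in request order, stopping at the first request
        // that is still waiting on its (possibly async) response.
        while (!m_pending.empty() && m_pending.front().response.has_value()) {
            send_to_client(*m_pending.front().response);
            m_pending.pop_front();
        }
    }

private:
    void send_to_client(std::string const&); // writes to the DevTools client
    std::deque<PendingRequest> m_pending;
};
```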
This is to prepare for an upcoming change where we will need to track
replies to messages by ID. We will be able to add parameters to this
structure without having to edit every single actor subclass header
file.
Element styles should be invalidated when they depend on CSS variables
defined in one of their ancestors, or when one of their ancestors
changes its class, due to the possibility of new CSS variables coming
into scope.
This allows us to parse the Content-Security-Policy header and
Referrer-Policy header from navigation responses and actually allow
them to start having an effect.
We now parse `<counter-name>` values as a `<custom-ident>`. This
disallows `default` and CSS-wide keywords as counter names. The
specification additionally disallows `none` as a counter name.
Before this change, the layout tree for slots was only updated after
creating a layout node for the slot itself. This was not enough to
account for partial layout tree rebuilds when the slot content changed.
With this change, we always recurse into the slot content.
Fixes expanding and collapsing of nodes in DOM tree inspector broken in
9b26f7eb0f
This removes some boilerplate around executing async requests, such as
calling dbgln_if on any errors, handling weak pointers to `this`, and
dealing with block tokens.
This is just to help make the message handlers a bit briefer. I had
considered adding a TRY-like macro to auto-return when the lookup fails,
but since statement expressions cannot return references, that would
result in a copy of all e.g. object and array lookups.
The "from" field is required in every response. It is the name of the
actor sending the message. This patch fills in the "from" field in the
Actor base class so that subclasses don't have to.
DevTools will ask us to find nodes by their actor names. If the actor is
missing, we should inform DevTools of the error instead of just dropping
the request.
The diff here is a bit noisy, just due to a leftward shift of code that
used to be in an if-statement.
This commit:
- Prevents path traversal via the about: scheme
- Prevents loading about:inspector
- Requires about: URIs to be opaque paths
- Prevents crashes with invalid percent encoded paths
These commands are used for the "Edit As HTML" feature in DevTools. This
renames our existing HTML getter IPC to indicate that it is for outer
HTML. DevTools will need a separate inner HTML getter.
Since cross-site navigation is a pretty frequent task, creating a spare
process is commonplace in other browsers to reduce the overhead of
directing the target site to a new process.
We store this process on the WebView application. If it is unavailable,
we queue a task to create it later.
Site isolation is a common technique to reduce the chance that malicious
sites can access data from other sites. When the user navigates, we now
check if the target site is the same as the current site. If not, we
instruct the UI to perform the navigation in a new WebContent process.
The phrase "site" here is defined as the public suffix of the URL plus
one level. This means that navigating from "www.example.com" to
"sub.example.com" remains in the same process.
There's plenty of room for optimization around this. For example, we can
create a spare WebContent process ahead of time to hot-swap the target
site. We can also create a policy to keep the navigated-from process
around, in case the user quickly navigates back.
We were using the public suffix of the URL's host as its registrable
domain. But the registrable domain is actually the public suffix plus
one additional label.
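In other words (a sketch; `public_suffix_of` stands in for the real
public suffix lookup):
```cpp
#include <string>

std::string public_suffix_of(std::string const& host); // assumed helper

// Registrable domain = public suffix plus one more label to its left.
std::string registrable_domain(std::string const& host)
{
    auto suffix = public_suffix_of(host); // "com" for "www.example.com"
    if (host == suffix)
        return host; // the host is itself a public suffix

    auto remainder = host.substr(0, host.size() - suffix.size() - 1);
    auto last_dot = remainder.rfind('.');
    auto label = last_dot == std::string::npos ? remainder : remainder.substr(last_dot + 1);
    return label + "." + suffix; // "example.com"
}
```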
If an element is affected only by selectors using the direct sibling
combinator `+`, we can calculate the maximum invalidation distance and
use it to limit style invalidation. For example, the selector
`.a + .b + .c` has a maximum invalidation distance of 2, meaning we can
skip invalidating any element affected by this selector if it's more
than two siblings away from the element that triggered the style
invalidation.
This change results in a visible performance improvement when hovering
over the PR list on GitHub.
The public one at echo.websocket.org tends to be slow at times,
frequently causing timeouts. Ideally we would run a local websocket
server but for now we can make do with a self-hosted alternative.
Instead of trying to manually determine which parts of a bitmap fall
within the box of the `<img>` element, just draw the whole bitmap and
let Skia clip the draw-area to the correct rectangle.
This fixes a bug where the entire bitmap was squashed into the rectangle
of the image box instead of being clipped.
With this change, image rendering is now correct enough to import some
of the WPT tests for object-fit and object-position. To get some good
coverage I have imported all tests for the `<img>` tag. I also wanted to
import a subset of the tests for the `<object>` tag, since those are
passing as well now. Unfortunately, they are flaky for unknown reasons.
This is the second attempt at this bugfix. The prior one was e055927ead
and broke image rendering whenever the page was scrolled. It has
subsequently been reverted in 16b14273d1. Hopefully this time it is not
horribly broken.
e055927ead introduced a regression that caused images to not be rendered
when the page was scrolled. This was subsequently reverted in
16b14273d1. To ensure this mistake cannot happen again, this commit
introduces a regression test for that error case.
When setting scroll position during page load we need to consider
whether we actually have a fragment to scroll to. A script may already
have run at that point and may already have set a scroll position.
If there is an actual fragment to scroll to, it is fine to scroll to
that fragment, since it should take precedence. If we don't have a
fragment however, we should not unnecessarily overwrite the scroll
position set by the script back to (0, 0).
Since this problem is caused by a spec bug, I have tested the behavior
in the three major browser engines. Unfortunately, they do not agree
fully with each other. If there is no fragment at all (e.g. `foo.html`),
all browsers will respect the scroll position set by the script. If
there is a fragment (e.g. `foo.html#bar`), all browsers will set the
scroll position to the fragment element and ignore the one set by
script. However, when the fragment is empty (e.g. `foo.html#`), then
Blink and WebKit will set scroll position to the fragment, while Gecko
will set scroll position from script. Since all of this is ad-hoc
behavior anyway, I simply implemented the Blink/WebKit behavior because
of the majority vote for now.
This fixes a regression introduced in 51102254b5.
This has been a longstanding ergonomic issue with our IPC compiler. Non-
trivial types were previously passed by const&. So if we wanted to avoid
expensive copies, we would have to const_cast and move the data.
We now pass ownership of all transferred data to the client subclasses.
This allows us to remove const_cast from these methods, and allows us to
avoid some trivial expensive copies that we didn't bother to const_cast.
This removes a couple of places where we were constructing strings or
vectors just to transfer data over IPC. And passes some values by const&
to remove clangd noise.
For example, consider the following IPC message:
```
do_something(u64 page_id, String string, Vector<Data> data) =|
```
We would previously generate the following C++ method to encode/transfer
this message:
```cpp
void do_something(u64 page_id, String string, Vector<Data> data);
```
This required the caller to either copy the non-trivial types or `move`
them in. In some places, this meant we had to construct temporary
vectors just to send an IPC.
This isn't necessary because we weren't holding onto these parameters
anyway. We would construct an IPC::Message subclass with them (which
does require owning types), but then immediately encode the message to
an IPC::MessageBuffer and send it.
We now generate code such that we don't need to construct a Message. We
can simply encode the parameters directly without needing ownership.
This allows us to take view-types for IPC parameters.
So the above example now becomes:
```cpp
void do_something(u64, StringView, ReadonlySpan<Data>);
```
This will be needed in an upcoming commit so that this method may call
itself recursively to generate overloads. Doing this extraction ahead of
time will simply make that diff easier to grok.
This isn't particularly important, but when staring at generated IPC
files, it's nice not to have an extra newline after every line of code
throughout the files.
This isn't actually necessary, since we already invalidate style for the
entire document, and the subsequent style update will discover any
additional layout invalidation needed as well.
Because we cache the transformed text string in text nodes affected by
text-transform, we have to actually update the layout tree when this
property value changes.
If a CSS animation or transition was being used to manipulate a property
that itself does not affect layout, we were still doing a full relayout
whenever any animation or transition related property was changed.
As it turns out, we can just not do that, and we avoid a bunch of
unnecessary layout work on many pages. When a layout-affecting property
is being animated, the animation/transition update code takes care to
invalidate layout as appropriate anyway!
This was very noticeable on GitHub, where moving the mouse cursor
between "Issues" and "Pull requests" would trigger a full relayout every
time. Now that doesn't happen, and it's much more responsive. :^)
This is 12c6ac78e2, with a fix for a mistake where the cache slot was
copied instead of referenced:
```cpp
auto cache =
box.cached_intrinsic_sizes().min_content_height.ensure(width);
```
while it should've been:
```cpp
auto& cache =
box.cached_intrinsic_sizes().min_content_height.ensure(width);
```
This change moves the intrinsic sizes cache from LayoutState, which is
local to the current layout run, to layout nodes, so it can be reused
between layout runs. This optimization is possible because we can
guarantee that these measurements will remain unchanged unless the
style of the element or any of its descendants changes.
For now, invalidation is implemented simply by resetting the cache on
the whole ancestor chain once we figure out that an element needs a
layout update.
The case when layout is invalidated by the DOM's structural changes is
covered by layout tree invalidation, which drops the intrinsic sizes
cache along with the layout nodes.
I measured the improvement on a couple of websites:
- Mail list on GMail: 28ms -> 6ms
- GitHub large code page: 47ms -> 36ms
- Discord chat history: 15ms -> 8ms
(Time does not include `commit()`)
This was completely unnecessary, and we can just let the internal
DOM tree changes trigger partial layout updates instead.
Noticed we were repeatedly dropping layout trees on ChatGPT and this
was one of the culprits.
If the CharacterData node has no layout node when we're changing its
text, we don't need to mark the document for relayout.
This is fine, because if the node ends up getting a layout node attached
to it, we'll naturally perform relayout after that anyway.
We don't need to perform inside layout of flex and grid formatting
contexts when one of their ancestors is undergoing intrinsic size
measurement. This is because the parent formatting context will have
already sized the flex/grid container, and thus inside layout is
completely redundant work.
This full invalidation was just papering over earlier bugs in the
partial layout tree update code. The DOM mutations that happen here
should be enough to drive the necessary invalidation now.
Note that this is covered by a regression test added with the
invalidation.
This requires a couple of amendments to the DOM node serialization.
Namely, we need to include the HTML namespace, otherwise the context
menu item to create a new node is disabled.
For DevTools, we will want to forward mutation events to the UI in order
to inform the DevTools client about changed DOM nodes. The API for this
requires the new values associated with the events; for example, for
character data events, this will be the node's new text data.
This patch delays the queueing of the mutation record until after the
new character data has been stored. This is not observable.
This is a preparatory commit to be able to handle DOM mutations. Once a
node actor is created, the DOM node it is created for must continue to
be associated with the same actor even after DOM mutations. This change
stores an identifier on the node actor, and only creates new actors when
an actor for a node does not exist.
The pattern of:
```cpp
auto tab = get_tab();
auto walker = get_walker();
if (tab && walker) {
    if (auto node = walker->dom_node(node_id)) {
        delegate().do_something(tab->description(), node);
    }
}
```
is getting a bit unergonomic and is often repeated. This patch just adds
a helper to WalkerActor to do most of this work, so now we have:
```cpp
if (auto node = WalkerActor::dom_node_for(get_walker(), node_id)) {
    delegate().do_something(node->tab->description(), node);
}
```
This change implements the requirements for the “suffering from an
overflow” and “suffering from an underflow” algorithms for
HTMLInputElement constraint validation.
This patch removes two unused algorithms:
1. `fetch_internal_module_script_graph`
2. `fetch_descendants_of_a_module_script`
These algorithms were removed from the spec and are not used in our
codebase.
This allows us to avoid a full layout tree rebuild after change of
"display" property, which happens frequently in practice. It also
allows us to avoid a full rebuild after DOM node insertion, since
previously, computing styles for newly inserted nodes would trigger a
complete layout tree rebuild.
The User Timing, Resource Timing and Navigation Timing specifications
have not been updated to account for the queue method also appending to
the underlying performance buffer, nor for its method of checking
whether the buffer is full.
This separates the functionality into a different function, with a flag
to indicate whether to use the custom full buffer logic or not.
This change fixes a bug in our implementation of the “step base”
algorithm at https://html.spec.whatwg.org/#concept-input-min-zero. We
were using the “value” IDL/DOM attribute in a particular step, where the
spec instead actually requires using the “value” content attribute.
A font-size with rem units needs to resolve against the default font
metrics for the root element; otherwise, every time we compute style,
the reference value for rem units grows.
This fixes an issue where text on some web pages would grow every time
there was a relayout. This was very noticeable on https://proton.me/
Fixes #339
We already have a check to skip the layout of descendants if the
available size is intrinsic, but this is not sufficient in nested
intrinsic layout cases, where the available size might be definite even
though we are in intrinsic layout mode.
This commit adds support for using the standard library implementation
of <stacktrace> if libbacktrace is not found. This can also be
explicitly enabled through ENABLE_STD_STACKTRACE for platforms that have
libbacktrace available.
Co-Authored-By: Andrew Kaster <andrew@ladybird.org>
Instead of trying to manually determine which parts of a bitmap fall
within the box of the `<img>` element, just draw the whole bitmap and
let Skia clip the draw-area to the correct rectangle.
This fixes a bug where the entire bitmap was squashed into the rectangle
of the image box instead of being clipped.
With this change, image rendering is now correct enough to import some
of the WPT tests for object-fit and object-position. To get some good
coverage I have imported all tests for the `<img>` tag. I also wanted to
import a subset of the tests for the `<object>` tag, since those are
passing as well now. Unfortunately, they are flaky for unknown reasons.
This change implements HTMLInputElement type=email constraint validation
in conformance with the current spec requirements (which happens to also
produce behavior that’s interoperable with other existing engines).
Before this change, an element masked with 'mask-image: url(...)' would
show the mask, but 'mask: url(...)' would not. On e.g. dialogic.nl it
would show white boxes instead of the actual images in the top
navigation bar. We still do not support many of the other mask
properties, but with this change at least the masks show up in both
cases.
There is further work needed to complete the implementation of
URL::Pattern::Pattern, but this implements the remaining URLPattern
exec and test IDL interfaces, leaving all remaining work to LibURL.
As the comment in this file explains, callers of the LibURL APIs are
meant to assume that any error they see is a TypeError, since that is
all the spec throws at the moment.
A custom error type exists here so that we can include more
information in the TypeErrors which are thrown.
All URLs are now either constructed through the URL Parser or by
default constructing a URL, and setting each of the fields of that
URL manually. This makes it much more difficult to create invalid
URLs.
This is a bit weird in the spec in that it passes a string through
here instead of a URL record. However, the string used in this case
should only ever be a valid URL string, if it is not the empty string.
Instead of relying on the implicit URL constructor. Parsing should
never fail here, as urlString before adding the suffix was already
parsed above, and the suffix should only be a valid query string.
This is the same type as what is spec'd. We cannot use a URL record
for this member as the spec in some scenarios will set and compare
the URL string to an invalid URL value, such as the empty string.
With the implicit string constructors for the URL class removed,
explicitly using URL::Parser::basic_parse makes the code look
quite silly in those places.
Trivia is whatever whitespace and comments appear before a token.
Previously this was always given a TokenCategory of Invalid, so it
would be displayed as an error in the view-source page, with red wiggly
underlines. Instead, treat it as what it actually is: whitespace and
comments!
The "on_received_console_message" and "on_received_console_messages"
were indistinguishable in purpose based on their name. This renames them
to:
on_console_message_available - WebContent has output a console message
and it is available for the client to retrieve.
on_received_styled_console_messages - WebContent has replied to a
request for the available console messages.
The "styled" qualifier is used here to indicate that the messages have
been styled with CSS for display in a WebView. This is to prepare for
an upcoming patch where DevToolsConsoleClient will not stylize the
output; DevTools will want the raw JS values.
The idea originally was that the WebContentConsoleClient would perform
some amount of console handling that both InspectorConsoleClient and
DevToolsConsoleClient needed. But in implementing the DevTools console,
it's become clear that these implementations will not overlap at all. So
this patch moves the existing Inspector functionality away from
WebContentConsoleClient.
There were a couple of issues here. First, the IterableContainerOf
concept was testing if dereferencing an iterator of ContainerType<T>
returns a value of type T. However, it should return a T&.
Second, the constructor was trying to move values out of a constant
reference to the container. We now accept both lvalue and rvalue
containers.
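A minimal sketch of the corrected concept in standard C++20 (Ladybird's
AK concepts use different names):
```cpp
#include <concepts>
#include <vector>

// Dereferencing an iterator of ContainerType<T> must yield T&, not a
// prvalue T.
template<template<typename> typename ContainerType, typename T>
concept IterableContainerOf = requires(ContainerType<T> container) {
    { *container.begin() } -> std::same_as<T&>;
};

static_assert(IterableContainerOf<std::vector, int>);
```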
This change implements HTMLInputElement type=url constraint validation
in such a way as to match the behavior in other existing engines (which
is, however, very different from what the spec currently requires).
We already have logic to play or cancel animations in an element's
subtree when the display property changes to or from none. However,
this was not sufficient to cover the case when an element starts/stops
being nested in display none after insertion.
The DOMParsing spec is in the process of being merged into the HTML one,
gradually. The linked spec change moves XMLSerializer, but many of the
algorithms are still in the DOMParsing spec so I've left the links to
those alone.
I've done my best to update the GN build but since I'm not actually
using it, I might have done that wrong.
Corresponds to 2edb8cc7ee
Add and expose a `ladybird` package in the Nix flake. This allows
building Ladybird using only Nix, without relying on the dev shell.
Other users will be able to build Ladybird from source using the nix3
CLI via `nix build github:LadybirdBrowser/ladybird`, or add it as a
flake input to consume the package.
We also re-use the package in the devshell, to keep dependencies in-sync
between the bleeding-edge source package, and the dev shell. This is an
upgrade to how we previously inferred dependencies from the Nixpkgs
package for Ladybird, which could lack dependencies required to build
the latest commit.
Motive: flake-utils is an unnecessary abstraction library that manages
to creep into every project under the guise of making things "easier."
There is no reason to use flake-utils here, as Nix is a powerful enough
DSL to handle system abstractions in-house without relying on more 3rd
party code, especially in a flake as small as Ladybird's.
We are not re-inventing the wheel; we are merely going back to circle
wheels.
These form the basis of Content Security Policy. A policy is a
collection of directives that are parsed from either the
Content-Security-Policy(-Report-Only) HTTP header, or the `<meta>`
element.
The directives are what restrict the operations that can be performed
in the current global execution context. For example, "frame-ancestors: none"
tells us to prevent the page from being loaded in an embedded context,
such as `<iframe>`.
You can see it a bit like OpenBSD's pledge() functionality, but for the
web platform: https://man.openbsd.org/pledge.2
Previously, despite CTRL being held, webpage elements such as
checkboxes (if present) could 'hijack' moving to the next and previous
tab with CTRL+TAB and CTRL+SHIFT+TAB.
In the case where IDL enums start with a character that is an invalid
start character for C++ identifiers (e.g. a number), the C++ generation
for enums fails. An example would be "2d"; see #3788.
Previously, the IDL parser complained that a type with the name ''
(an empty string) couldn't be found. It wasn't easy to see the
mistake, as the unnamed type was printed without '' around it, so the
message seemed to be missing a type. This now catches this specific
error earlier and reports it cleanly to the user. An example of this
occurring would be: `typedef A (B or //FIXME: C )`
All necessary invalidations are issued while invalidating animated
style. There is no need to drop display list simply because there are
some animations that might need an update.
Note that "becomes browsing-context connected" is defined as:
> When the insertion steps are invoked with it as the argument and it is
> now browsing-context connected.
This fixes an issue where WPT editing tests would clone the entire DOM
thousands of times and re-fetch all the linked CSS files once per clone.
We set the page's focused navigable upon mouse-down events from the UI.
However, we neglected to ever clear that focused navigable upon events
such as subsequent page navigations. This left the page with a stale
reference to a no-longer-active navigable. The effect was that any key
events from the UI would not be sent to the new page until either the
reference was collected by GC, or another mouse-down event occurred.
In the test added here, without this fix, the text sent to the input
element would not be received, and the change event would not fire.
In some cases, we might be hovering directly over an element that is
scrollable, e.g. horizontally, but we are scrolling vertically.
In these cases, we need to delegate the scroll to the parent
instead of stalling the user's scroll.
This is an editorial change in the Temporal proposal. See:
https://github.com/tc39/proposal-temporal/commit/03770bb
Note: We were actually already using the Temporal definition of this AO
in Intl.DurationFormat, so there's no change needed there.
These are going to be included in the ECMA-262 AOs once Temporal reaches
stage 4. There's no need to keep them in the Temporal namespace. Some
upcoming Temporal editorial changes will get awkward without this patch.
Instead of marking all nodes in the subtree for style recalculation,
including subtrees of subsequent siblings, we can fall back to the
default invalidation path, which is optimized to skip siblings
unaffected by sibling selectors.
Makes scrolling on https://frame.work/pl/en/about go a lot smoother.
The generic `ssl` feature selects Secure Transport on macOS, which is a
deprecated library and support for it in curl is also deprecated and
scheduled for removal after May 2025: https://daniel.haxx.se/blog/tag/securetransport/
Secure Transport is replaced by Network Framework, but as per the blog
post above, there's no foreseeable future of curl supporting it.
With this information, we now explicitly use OpenSSL as the backend for
curl, in line with the default choice for Linux.
This gives us some key benefits:
- A maintained and current TLS library
- TLS 1.0 and 1.1 is disabled by default
- TLS 1.3 is now available
- Modern cipher suites
- Removal of TLS_EMPTY_RENEGOTIATION_INFO_SCSV extension
- Opportunity to support HTTP/3 with nghttp3 and OpenSSL's QUIC support
- More extensions, key exchanges, EC point formats, etc.
This has been left unimplemented since we switched to the Skia renderer.
Now `text-decoration-style: wavy` actually paints a wavy line. :^)
We had a text-decoration test, but it only checked `solid` lines, so
I've replaced it with a modified version of the old test page from
Serenity, without the blink option, and with some thickness parameters.
I did experiment with using a `SkPath1DPathEffect` to make it repeat the
pattern for us, but I couldn't make it look good at all.
Previously, we only returned the first result that looked like an IPv6
or IPv4 address.
This cropped up when attempting to connect to https://cxbyte.me/ whilst
IPv6 on the server wasn't working. Since we only returned the first
result, which happened to be the IPv6 address, we weren't able to
connect.
Returning all results allows curl to attempt to connect to a different
IP if one of them isn't working, and potentially make a successful
connection.
The `cursor` property accepts a list of possible cursors, which behave
as a fallback: We use whichever cursor is the first available one. This
is a little complicated because initially, any remote images have not
loaded, so we need to use the fallback standard cursor, and then switch
to another when it loads.
So, ComputedValues stores a Vector of cursors, and then in EventHandler
we scan down that list until we find a cursor that's ready for use.
The spec defines cursors as being `<url>`, but allows for `<image>`
instead. That includes functions like `linear-gradient()`.
This commit implements image cursors in the Qt UI, but not AppKit.
A cursor is an image, with an optional x,y hotspot.
We know that a CursorStyleValue's bitmap never needs to change size, so
we create the ShareableBitmap once and then cache it, so that we don't
have to repeatedly create an FD for it or do the work of painting.
To avoid repainting that bitmap, we cache the values that were used to
create it - what currentColor is and its length resolution context -
and only repaint when those change.
None of the code here actually needs a NodeWithStyleAndBoxModelMetrics,
and we'll need to be able to resolve images from inside
NodeWithStyle::apply_style().
IntegerOrCalculated and NumberOrCalculated's T types don't have a
to_string() method because they're i64 and double respectively, so use
String::number() for them instead.
Also rearrange this method to avoid checking the variant's contents
multiple times.
We previously only invalidated the cached color-stop data when the
painted area's size changed. However, multiple elements can use the
same gradient and be the same size, but have different parameters that
affect the gradient stop positions, for example if a stop has an em
position. This can also change for the same element over time.
The new cache instead uses these parameters as the cache key. So we
recompute the cache if lengths would resolve differently, or the area's
size is different.
The included test fails without this change.
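Conceptually, the cache key now looks something like this (member names
and types are illustrative, not the actual ones):
```cpp
#include <cstdint>

// Anything that can change where the color stops resolve takes part in
// the key; the old cache only compared the painted area's size.
struct GradientCacheKey {
    double area_width { 0 };
    double area_height { 0 };
    uint32_t current_color { 0 }; // RGBA; stops may use currentColor
    double font_size { 0 };       // em-based stop positions resolve against this
    double root_font_size { 0 };  // rem-based stop positions
    bool operator==(GradientCacheKey const&) const = default;
};
```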
This supports evaluating the script and replying with the result. We
currently serialize JS objects to a string, but we will need to support
dynamic interaction with the objects over IPC. This does not yet support
sending console messages to DevTools.
Our existing WebContentConsoleClient is very specific to our home-grown
Inspector. It renders console output to an HTML string. For DevTools, we
will not want this behavior; we will want to send representations of raw
JS values.
This patch makes WebContentConsoleClient a base class to handle console
input from the user, either from the Inspector or from DevTools. It then
moves the HTML rendering needed for the Inspector to a new class,
InspectorConsoleClient. And we add a DevToolsConsoleClient (currently
just stubbed) to handle needs specific to DevTools.
We choose at runtime which console client to install, based on the
--devtools command line flag.
Otherwise, the finalization step of the initial `about:blank`
navigation might cancel user-initiated navigations by changing the
ongoing navigation id.
This is implemented by marking the navigable as ready to start
processing navigations in an SHTQ task, because we know for sure this
task cannot be processed until the finalization step of the initial
`about:blank` navigation is done.
The main users were the `dump()` functions, which now dump their
children instead, which is more correct anyway.
The others are for serializing numeric values, so
NumericCalculationNode's to_string() is renamed to value_to_string()
and used for those for convenience.
This gets us 37 new subtest passes in css/css-values, and 13 passes in
our other in-tree tests (and probably some random other ones!)
As noted in comments, a few parts of this algorithm have ad-hoc
behaviour to handle some issues in the spec.
Having multiple kinds of node that hold numeric values made things more
complicated than they needed to be, and we were already converting
ConstantCalculationNodes to NumericCalculationNodes in the first
simplification pass that happens at parse-time, so they didn't exist
after that.
As noted, the spec allows for other contexts to introduce their own
numeric keywords, which might be resolved later than parse-time. We'll
need a different mechanism to support those, but
ConstantCalculationNode could not have done so anyway.
This commit implements the main "render blocking" behavior for link
elements, drastically reducing the amount of FOUC (flash of unstyled
content) we subject our users to.
The document will now block rendering until linked style sheets
referenced by parser-created link elements have loaded (or failed).
Note that we don't yet extend the blocking period until "critical
subresources" such as imported style sheets have been downloaded
as well.
Previously, we would only keep the cell that must survive alive, but
none of its edges.
This cropped up as a GC UAF in must_survive_garbage_collection of
WebSocket in .NET's SignalR frontend implementation, where an
out-of-scope WebSocket had its underlying EventTarget properties
garbage collected, and must_survive_garbage_collection read from the
destroyed EventTarget properties.
See: https://github.com/dotnet/aspnetcore/blob/main/src/SignalR/clients/ts/signalr/src/WebSocketTransport.ts#L81
Found on https://www.formula1.com/ during a live session.
Co-Authored-By: Tim Flynn <trflynn89@pm.me>
This change fixes a bug that can be reproduced with the following steps:
```js
const iframe = document.createElement("iframe");
document.body.appendChild(iframe);
iframe.contentWindow.location.href = ("http://localhost:8080/demo.html");
```
These steps are executed in the following order:
1. Create iframe and schedule session history traversal task that adds
session history entry for the iframe.
2. Generate navigation id for scheduled navigation to
`http://localhost:8080/demo.html`.
3. Execute the scheduled session history traversal task, which adds
session history entry for the iframe.
4. Oops, navigation to `http://localhost:8080/demo.html` is aborted
because adding an SHE for the iframe resets the navigation id.
This change fixes this by delaying all navigations until SHE for a
navigable is created.
We hold a raw pointer to the mouse selection target, which is a mixin-
style class inherited only by JS::Cell classes. By not visiting this
object, we sometimes had a dangling reference to it after it had been
garbage collected.
LibDevTools was implicitly including generated IPC endpoints from
LibWebView. This is not a dependency declared in the CMakeLists.txt. So
updates to the IPC file might not have caused the endpoint header to be
regenerated by the time LibDevTools is compiled, resulting in a build
error.
This patch removes that implicit dependency entirely.
Before this change, we only parsed fit-content as a standalone keyword,
but CSS-SIZING-3 added it as a function as well. I don't know of
anything else in CSS that is overloaded like this, so it ends up looking
a little awkward in the implementation.
Note that a lot of code had already been prepped for fit-content values
to have an argument, we just weren't parsing it.
This adds all parameters supported by Skia by default. Supporting the
other parameters would just be a matter of defining the constants, but
that's work for another time.
Before this change, we were always processing all row groups first,
and then all rows. This led to incorrect table layouts when rows and row
groups were encountered in anything but that order.
Previously, they would stay open for the entire WebContent lifetime,
or until the server closed the connection. This was particularly
noticeable on collaborative websites/games such as
https://jigsawpuzzles.io/, where the user using Ladybird would stick
around even after they had navigated away.
This change — part of the HTML constraint-validation API (aka
“client-side form validation”) — implements the willValidate IDL/DOM
attribute/property for all form controls that support it.
When setting `font-family: monospace;` in CSS, we have to interpret
the keyword font sizes (small, medium, large, etc) as slightly smaller
for historical reasons. Normally the medium font size is 16px, but
for monospace it's 13px.
The way this needs to behave is extremely strange:
When encountering `font-family: monospace`, we have to go back and
replay the CSS cascade as if the medium font size had been 13px all
along. Otherwise relative values like 2em/200%/etc could have gotten
lost in the inheritance chain.
We implement this in a fairly naive way by explicitly checking for
`font-family: monospace` (note: it has to be *exactly* like that,
it can't be `font-family: monospace, Courier` or similar.)
When encountered, we simply walk the element ancestors and re-run the
cascade for the font-size property. This is clumsy and inefficient,
but it does work for the common cases.
Other browsers do more elaborate things that we should eventually care
about as well, such as user-configurable font settings, per-language
behavior, etc. For now, this is just something that allows us to handle
more WPT tests where things fall apart due to unexpected font sizes.
To learn more about the wonders of font-size, see this blog post:
https://manishearth.github.io/blog/2017/08/10/font-size-an-unexpectedly-complex-css-property/
This essentially reverts 1b46a52cfc
and adds more tests.
The reverted change was an incorrect workaround for the real issue,
which was that we weren't creating anonymous wrapper boxes around inline
children of table-cell boxes.
Now that this has been fixed, we can go back to aligning text properly.
This fixes an issue where CSS vertical-align on a table-cell box would
incorrectly apply to both the table-cell box and any inline content it
had inside.
The spec changes seem to mostly be about introducing a TrustedHTML type
which we do not yet support, so we have a couple of FIXMEs.
TrustedTypes::InjectionSink is an attempt at matching the spec, but it's
not entirely clear to me how it should work. I'm sure it'll get
revisited once we start implementing trusted types.
Before, adding an overflowed `Checked<T>` to another `Checked<T>` would
cause a verification failure, when instead it should propagate
m_overflow and allow the user to handle the overflow.
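A condensed sketch of the intended semantics (AK's real `Checked` is
richer than this):
```cpp
template<typename T>
class Checked {
public:
    Checked(T value = {}) : m_value(value) {}

    Checked& operator+=(Checked const& other)
    {
        // Propagate overflow from either operand instead of asserting.
        m_overflow |= other.m_overflow;
        if (__builtin_add_overflow(m_value, other.m_value, &m_value))
            m_overflow = true;
        return *this;
    }

    bool has_overflow() const { return m_overflow; }

private:
    T m_value {};
    bool m_overflow { false };
};
```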
The DevTools client will ask for this actor before trying to render any
box model or computed style information. We can just stub out this actor
for now.
The DevTools client will now send requests to the node actor, rather
than just sending messages to other actors on the node's behalf.
This exposed a slight issue in the way we assign actor IDs. Node actors
are created in the walker actor constructor, which executes before the
actor ID is incremented. So we must be sure to increment the actor ID
before invoking any actor constructors. Otherwise, the walker actor and
the first node actor have the same numeric ID.
We will be asked for different highlighters throughout the DevTools
session, e.g. ViewportSizeOnResizeHighlighter and BoxModelHighlighter.
The latter will be responsible for rendering an overlay on DOM nodes
when the user hovers over a node in the inspector panel.
Our own Inspector differs from most other DevTools implementations with
regard to highlighting DOM nodes as you hover elements in the inspected
DOM tree. In other implementations, as you change the hovered node, the
browser will render a box model overlay onto the page for that node. We
currently don't do this; we wait until you click the node, at which
point we both paint the overlay and inspect the node's properties.
This patch does not change that behavior, but separates the IPCs and
internal tracking of inspected nodes to support the standard DevTools
behavior. So the DOM document now stores an inspected node and a
highlighted node. The former is used for features such as "$0" in the
JavaScript console, and the latter is used for the box model overlay.
Our Inspector continues to set these to the same node.
We currently receive serialized JSON values over IPC and forward them
to the WebView callbacks, leaving it to the implementations of those
callbacks to parse the strings as JSON objects. This patch hoists that
parsing up to WebContentClient as soon as the IPC message is received.
This is to reduce the work needed for secondary implementations of these
callbacks (i.e. our Firefox DevTools server).
I recently questioned whether this would work as expected:
    JsonValue value { JsonObject {} };
    auto object = move(value.as_object());
So this just adds a unit test to ensure that it does.
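The test looks roughly like this (a sketch in LibTest style; the actual
assertions may differ):

    TEST_CASE(move_json_object_out_of_json_value)
    {
        JsonValue value { JsonObject {} };
        auto object = move(value.as_object());

        // The moved-to object must be valid and independently usable.
        EXPECT(object.is_empty());
    }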
Previously, when following the standard's steps to convert the charset to
UTF-8, we checked against a single charset named "UTF-16BE/LE". In
reality, the charsets "UTF-16BE" and "UTF-16LE" must each be checked
separately.
Co-authored-by: Jelle Raaijmakers <jelle@ladybird.org>
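The shape of the fix is essentially the following (simplified; the
surrounding spec steps are omitted):

    // Each label must be compared separately; there is no single
    // "UTF-16BE/LE" charset.
    if (encoding.equals_ignoring_ascii_case("UTF-16BE"sv)
        || encoding.equals_ignoring_ascii_case("UTF-16LE"sv)) {
        encoding = "UTF-8"sv;
    }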
This information is not particularly interesting, and it can be quite
verbose (such as style-propagation-for-long-continuation-chain.html,
which would log hundreds of lines of "a"s and "b"s).
This reduces the number of `.cpp` files that need to be recompiled when
one of the below header files changes as follows:
CSS/ComputedProperties.h: 1113 -> 49
CSS/ComputedValues.h: 1120 -> 209
This reduces the number of `.cpp` files that need to be recompiled when
one of the below header files changes as follows:
Painting/Command.h: 1030 -> 61
Painting/DisplayList.h: 1030 -> 60
Painting/DisplayListRecorder.h: 557 -> 59
One point to note is that I am not entirely sure what the result
of the pre-existing valueAsNumber test should be for this strange
case which does not lie exactly on a week/day boundary. Chrome
gives a negative timestamp, which seems more wrong than the result
we give, and neither Gecko nor WebKit appears to support the 'week'
type. So I'm considering this result acceptable for now, and this
may be something that will need more WPT tests added in the future.
This is intended to be used by the <input> element 'week' type state
and converts that to milliseconds from the Unix epoch (ignoring leap
seconds).
I am certain there is a much better way of writing this that does
not need a loop, but I am less convinced about writing that without
running into some edge case I didn't consider.
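For reference, one loop-free formulation is sketched below. It is a
self-contained toy based on the standard civil-date algorithm, not the
code added in this commit:

    #include <cstdint>
    #include <cstdio>

    // Days since 1970-01-01 for a civil date (Howard Hinnant's algorithm).
    static std::int64_t days_from_civil(std::int64_t y, unsigned m, unsigned d)
    {
        y -= m <= 2;
        std::int64_t const era = (y >= 0 ? y : y - 399) / 400;
        unsigned const yoe = static_cast<unsigned>(y - era * 400);
        unsigned const doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;
        unsigned const doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;
        return era * 146097 + static_cast<std::int64_t>(doe) - 719468;
    }

    // Milliseconds from the Unix epoch (ignoring leap seconds) for the
    // Monday starting ISO week `week` of `year`; week 1 is the week
    // containing January 4th.
    static std::int64_t week_to_epoch_milliseconds(int year, int week)
    {
        std::int64_t const jan4 = days_from_civil(year, 1, 4);
        std::int64_t const weekday = ((jan4 % 7) + 10) % 7; // 0 = Monday
        std::int64_t const week1_monday = jan4 - weekday;
        return (week1_monday + static_cast<std::int64_t>(week - 1) * 7) * 86'400'000;
    }

    int main()
    {
        // 1970-W01 began on Monday 1969-12-29, 3 days before the epoch.
        std::printf("%lld\n", static_cast<long long>(week_to_epoch_milliseconds(1970, 1)));
    }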
Even though we don't actually make use of these values at the moment,
we still want them to be reflected correctly once we start exposing used
margin values soon.
We were incorrectly shrinking the used right margin to make it fit
within the available space, even though this was not necessary and the
margin is allowed to stretch beyond the containing block.
This is not observable yet, but will be once we start exposing used
margin values in a subsequent change.
By the time we're measuring the height of a BFC root, we've already
collapsed all relevant margins for the root and its descendants.
Given this, we should simply use 0 (relative to the BFC root) as the
lowest block axis coordinate (i.e. the Y value) for the margin edges.
This fixes a long-standing issue where BFC roots were sometimes not tall
enough to contain their children due to margins.
The `on_ready_to_read` callback on the underlying socket will be called
for various reasons which do not always guarantee that the next read
operation will be successful. For example, the server might have sent an
alert or a TCP RST.
We handle fatal errors on the SSL connection before calling to the user
so that `can_read_without_blocking` does not falsely advertise readable
data. The same checks should be performed there, but that is not
possible because the function is const.
The OpenSSL documentation mentions that after `SSL_ERROR_SYSCALL` or
`SSL_ERROR_SSL` no further operations should be performed and
`SSL_shutdown` should not be called.
When a fatal error occurs, close the underlying socket and free the
`SSL` struct.
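A sketch of that teardown path, using real OpenSSL calls but with the
surrounding socket plumbing simplified:

    #include <openssl/ssl.h>
    #include <unistd.h>

    // Returns false if the connection is dead and has been torn down.
    static bool handle_ssl_io_result(SSL*& ssl, int fd, int result)
    {
        int const error = SSL_get_error(ssl, result);
        if (error == SSL_ERROR_SYSCALL || error == SSL_ERROR_SSL) {
            // Per the OpenSSL docs, no further operations are allowed after
            // these errors, and SSL_shutdown() must not be called.
            SSL_free(ssl);
            ssl = nullptr;
            close(fd);
            return false;
        }
        return true;
    }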
Corresponds to part of https://github.com/whatwg/html/pull/9841 and then
https://github.com/whatwg/html/pull/11047
Adding `Auto` as a type state feels a little odd, as it's not an actual
type allowed in HTML. However, it's the default state when the value is
missing or invalid, which works out the same, as long as we never
serialize "auto", which we don't.
Analysis of selectors on modern websites shows that the `:hover`
pseudo-class is mostly used in the subject position within relatively
simple selectors like `.a:hover`. This suggests that we could greatly
benefit from segregating them by id/class/tag name, thereby reducing the
number of selectors tested during hover style invalidation.
With this change, hover invalidation on Discord goes down from 70ms to
3ms on my machine. I also tested GMail and GitHub where this change
shows nice 2x-3x speedup.
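A toy sketch of the bucketing idea (the engine keys on interned
identifiers rather than std::string):

    #include <string>
    #include <unordered_map>
    #include <vector>

    struct HoverRule {
        std::string selector_text;
    };

    struct HoverRuleBuckets {
        std::unordered_map<std::string, std::vector<HoverRule>> by_subject_id;
        std::unordered_map<std::string, std::vector<HoverRule>> by_subject_class;
        std::unordered_map<std::string, std::vector<HoverRule>> by_subject_tag_name;
        std::vector<HoverRule> other; // unbucketable; always tested
    };

    int main()
    {
        HoverRuleBuckets buckets;
        buckets.by_subject_class["a"].push_back({ ".a:hover" });

        // On a hover change, an element with class "a" only tests the "a"
        // class bucket plus `other`, instead of every `:hover` rule.
    }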
This is required to store Content Security Policies, as their
Directives are implemented as subclasses with overridden virtual
functions. Thus, they cannot be stored as generic Directive classes, as
that would lose the ability to call the overridden functions when they
are copied.
This avoids static builds putting the "no-op" implementation of
PlaybackStream::create() in the static archive before the strong
implementation later. On ELF (and probably PE/COFF too), a weak
definition found earlier in a static archive is chosen and locked in as
the definition, even if a strong definition appears later in the archive.
Before this change, static builds would have no audio support
at all.
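A minimal reproduction of the archive behavior (file layout and symbol
name are illustrative; the real symbol is PlaybackStream::create()):

    // no_op.cpp -- archived first:
    __attribute__((weak)) char const* playback_backend_name()
    {
        return "no-op";
    }

    // real.cpp -- archived later:
    char const* playback_backend_name()
    {
        return "real";
    }

    // main.cpp:
    #include <cstdio>
    char const* playback_backend_name();
    int main()
    {
        // Linked via `ar rcs libaudio.a no_op.o real.o`, the reference is
        // satisfied by the first archive member that defines the symbol --
        // the weak no-op -- and real.o is never loaded at all.
        std::puts(playback_backend_name());
    }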
This was a relic from the SerenityOS CI, where architecture meant
what architecture to build Serenity for. For just Ladybird, we might
want to build Ladybird for multiple architectures per OS.
Regressed in 036327332f.
This commit moves the optimization a little later in replaceData(),
still avoiding relayout (the important part).
Recovers 480 points on WPT. :^)
This change updates the contents of the AriaRoles.json file to match the
data in the Characteristics tables in the ARIA spec — at, for example,
https://w3c.github.io/aria/#application.
Most of the changes are to match the “Supported States and Properties”
and “Inherited States and Properties” data — but some changes are to
match the “Name From” data and “Accessible Name Required” data.
This change also intentionally removes all states and properties the
Characteristics tables list as “deprecated on this role in ARIA 1.2”.
This fixes an issue where `vertical-align: middle` would incorrectly
shift the text away from the natural alphabetic baseline.
Fixing this makes many WPT table tests work correctly, so I'm also
importing one of them here. :^)
We were incorrectly treating cellpadding=0 as if the attribute was
missing. This commit fixes it so it behaves as `padding: 0` on cells.
When adding a test, I discovered that we were not invalidating style for
cells when their containing table's cellpadding attribute changed.
So this commit fixes that as well.
This removes the use of StringBuilder::OutputType (which was ByteString,
and only used by the JSON classes). And it removes the StringBuilder
template parameter from the serialization methods; this was only ever
used with StringBuilder, so a template is pretty overkill here.
This implementation can be improved in the future by ripping
out a lot of the manual logic in LibWebSocket and rely on libcurl to
parse our message payloads. But for now, this uses the 'raw mode' of
curl websockets in connect-only mode to allow for somewhat seamless
integration into our event loop.
The previous implementation would call send a half-dozen times
when sending each frame of WebSocket data. This is excessive,
especially since we need to allocate a new buffer for the payload
in order to mask it anyway. Let's just allocate one buffer up front,
and send all the completed data at the end of the method.
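A toy sketch of single-buffer framing (an RFC 6455 client frame with a
payload under 126 bytes; the real code handles all length encodings):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    static std::vector<std::uint8_t> build_masked_frame(std::uint8_t opcode,
        std::vector<std::uint8_t> const& payload, std::uint8_t const (&mask)[4])
    {
        std::vector<std::uint8_t> frame;
        frame.reserve(2 + 4 + payload.size());
        frame.push_back(0x80 | opcode); // FIN + opcode
        frame.push_back(0x80 | static_cast<std::uint8_t>(payload.size())); // MASK + length
        frame.insert(frame.end(), mask, mask + 4);
        for (std::size_t i = 0; i < payload.size(); ++i)
            frame.push_back(payload[i] ^ mask[i % 4]); // mask while copying
        return frame; // one buffer -> one send() call
    }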
If a selector does not have any descendant combinators, then we know for
sure it won't be filtered out by the ancestor filter, which means there
is no need to check it.
This change makes hover style invalidation go faster on Discord, where
we now spend 4-5% of the time in `should_reject_with_ancestor_filter()`
instead of 20%.
These are causing too many macOS CI timeouts, so let's focus on only
running the ones that add completely new parser coverage (i.e. new
variants of the tests that were already skipped.)
There's a quirk in HTML where the parser should ignore any line feed
character immediately following a `pre` or `textarea` start tag.
This was working fine when we could peek ahead in the input stream and
see the next token, but didn't work in character-at-a-time parsing with
document.write().
This commit adds the "can ignore next line feed character" as a parser
flag that is maintained across invocations, making it work in this
parsing mode as well.
20 new passes in WPT/html/syntax/parsing/ :^)
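A toy model of the flag (plain C++, not the actual tokenizer): it must
live on the parser so it survives across character-at-a-time
document.write() calls.

    #include <cstdio>
    #include <string>
    #include <string_view>

    struct Parser {
        bool ignore_next_line_feed { false };
        std::string text;

        void on_start_tag(std::string_view name)
        {
            if (name == "pre" || name == "textarea")
                ignore_next_line_feed = true;
        }

        void on_character(char c)
        {
            bool skip = ignore_next_line_feed && c == '\n';
            ignore_next_line_feed = false; // only the very next character
            if (!skip)
                text += c;
        }
    };

    int main()
    {
        Parser parser;
        parser.on_start_tag("pre");
        // Fed one character at a time, as document.write() may do:
        for (char c : std::string_view("\nhello"))
            parser.on_character(c);
        std::printf("%s\n", parser.text.c_str()); // prints "hello"
    }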
Instead of always inserting a new text node, we now continue appending
to an existing text node if the parser's character insertion point is
a suitable text node.
This fixes an issue where multiple invocations of document.write() would
create unnecessary sequences of text nodes. Such sequences are now
merged automatically.
19 new passes in WPT/html/syntax/parsing/ :^)
We were neglecting to return after handling the `frameset` start tag,
which caused us to process it twice, once properly and once generically.
54 new passes in WPT/html/syntax/parsing/ :^)
Before this change, the explicit EOF inserted by document.close() would
instantly abort the parser. This meant that parsing algorithms that ran
as part of the parser unwinding on EOF would never actually run.
591 new passes in WPT/html/syntax/parsing/ :^)
This exposed a problem where the parser would try to insert a root
<html> element on EOF in a document where someone already inserted such
an element via direct DOM manipulation. The parser now gracefully
handles this scenario. It's covered by existing tests (which would
crash without this change.)
Since we don't support the "variant" meta tag stuff in WPT, I've simply
copied the test files here, and then test.js looks at the filename to
figure out which test function to use.
This increases our coverage of the HTML parser substantially by also
invoking it via document.write() one-shot, and character-at-a-time.
When constructing an entry list, XHR::FormDataEntry is created
manually and appended to the entry list instead of using the
spec-defined method of creating an entry.
Allows us to avoid invalidating layout when CharacterData didn't change.
Results in a visible improvement on Discord, which continuously invokes
this function with the same data; previously this resulted in a relayout
on every frame.
Before this change, we did the following:
1. Created a bitmap with the matching state for each rule containing
`:hover`.
2. Changed the actively hovered element in the document.
3. Created another bitmap with the matching state for each rule
containing `:hover`.
With this change, we iterate rule by rule and compare the matching
state. This allows us to break early once we find the first rule whose
matching state changes after the hovered element update. Additionally,
this removes the need to allocate a bitmap.
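A sketch of the early-exit walk (hypothetical types; `matches` stands in
for full selector matching with the given hovered element):

    #include <vector>

    struct Element { };

    struct HoverRule {
        bool (*matches)(Element const* hovered);
    };

    // Bail out at the first rule whose matching state differs between the
    // old and new hovered element. No bitmaps are allocated, and no second
    // pass is needed once a difference is found.
    static bool needs_hover_invalidation(std::vector<HoverRule> const& rules,
        Element const* old_hovered, Element const* new_hovered)
    {
        for (auto const& rule : rules) {
            if (rule.matches(old_hovered) != rule.matches(new_hovered))
                return true;
        }
        return false;
    }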
Instead of using `has_pseudo_elements()`, which iterates over all pseudo
elements, only check whether `::before` or `::after` are present.
Before this change, `has_pseudo_elements()` was 10% of profiles on
Discord while now it's 1-2%.
This adds a command line option to enable the DevTools server. Here, we
make the application the DevToolsDelegate to reply to requests from the
DevTools server about the state of the application.
There is a lot needed all at once to actually inspect a tab's DOM tree.
It begins with requesting a "watcher" from a TabActor. It seems there
can be many types of watchers, but here we implement the "frame" watcher
only. The watcher creates an "inspector", which in turn creates a
"walker", which is the actor ultimately responsible for serializing and
inspecting the DOM tree.
In between all that, the DevTools client will send a handful of other
informational requests. If we do not reply to these, the client will not
move forward with the walker. For example, the CSSPropertiesActor will
be asked for a list of all known CSS properties.
Previously, we could connect to our DevTools server from Firefox, but
could not see any information on Ladybird's opened tabs. This implements
enough of the protocol to see a tab list, but we cannot yet inspect the
tabs.
To aid with debugging web page issues in Ladybird without needing to
implement a fully fledged inspector, we can implement the Firefox
DevTools protocol and use their DevTools. The protocol is described
here:
https://firefox-source-docs.mozilla.org/devtools/backend/protocol.html
This commit contains just enough to connect to Ladybird from a DevTools
client.
For Firefox DevTools, we will need to track WebViews by a numerical ID.
Here, we just increment a static 64-bit counter. We can switch to using
IDAllocator if we ever have an issue with this.
This patch adds such an ID to the views and a couple of APIs to access
WebViews after creation.
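A sketch of the counter scheme (hypothetical member names):

    #include <cstdint>

    class WebViewBase {
    public:
        WebViewBase()
            : m_view_id(s_next_view_id++)
        {
        }

        std::uint64_t view_id() const { return m_view_id; }

    private:
        static inline std::uint64_t s_next_view_id { 1 };
        std::uint64_t m_view_id { 0 };
    };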
We have mutable accessors on the JsonValue class already. This will be
needed for interaction with Firefox's DevTools, where we will want to
mutate the serialized DOM tree we receive from WebContent.
Invalid URLs should be signified by a wrapper class, such as an
Optional<URL::URL> in the IPC file. I do not believe that we have
anything which currently relies on passing through an invalid URL.
Having one big `if` to determine whether or not we should restructure an
inline layout node became a bit unwieldy. This extracts the logic into a
separate method.
vcpkg requires this flag to be set when building on non-x86 Linux,
as they don't ship enough cached tools for those platforms.
Put another way, we must allow system-installed versions of tools
such as CMake, Ninja, and pkg-config into the vcpkg build process
on these platforms.
When restructuring inline nodes because of a block element insertion
during the layout tree build, we might end up with a `display: contents`
element in the ancestor stack that is not part of the actual layout
tree, since it's never actually used as a parent for any node. Because
we were only rewinding the ancestor stack with actual new layout nodes,
it became corrupted and layout nodes were added to the wrong parent.
This new logic leaves the ancestor stack intact, only replacing layout
nodes whenever a new one is created.
Fixes the sidebar on https://reddit.com.
Fixes #3590.
We only need a Page for file:// URLs. At some point we probably
needed it for other kinds of requests, but the current functionality
doesn't need to store the Page pointer on the ResourceLoader.
This makes it so that the IDL generator no longer assumes that
dictionary members with a default value are optional, since they
will always, at least, have the default value.
This is the core object behind a URL pattern which, when constructed,
can be used for matching the pattern against URLs.
However, the implementation here is missing key functions such as
the constructor and the 'test'/'exec' functions as that relies on
a significant amount of supporting URLPattern infrastructure such
as two different parsers and a tokenizer.
Still, this is enough for us to implement some more of the IDL
wrapper layer of this specification.
A URL pattern consists of components such as the 'port', 'password',
'hostname', etc. A component is compiled from the input to the
URLPattern constructor and is what is used for matching against
URLs to produce a match result.
This is also where the regex dependency is introduced into LibURL
to support the URLPattern implementation.
...in inherited style update. Instead of comparing the old absolutized
value with the new non-absolutized value, we should wait until
`absolutize_values()` and then compare the old and new values, when both
are absolutized.
Improves performance on pages with GitHub action logs where previously
we had to invalidate layout after hover style recalculation, because
there was `margin-left: 1rem`.
// `Error::from_string_view(ByteString::formatted(...))` is a somewhat common mistake, which leads to a UAF situation.
// If your string outlives this error and _isn't_ a temporary being passed to this function, explicitly call .view() on it to resolve to the StringView overload.
static_assert(DependentFalse<T>, "Error::from_string_view(String) is almost always a use-after-free");
VERIFY_NOT_REACHED();

return Error(syscall_name, code);
}

static Error copy(Error const& error)
{
    return Error(error);
}

// NOTE: Prefer `from_string_literal` when directly typing out an error message:
//
//     return Error::from_string_literal("Class: Some failure");
//
// If you need to return a static string based on a dynamic condition (like
// picking an error from an array), then prefer `from_string_view` instead.

// NOTE: This overload is optional. It exists purely to match the SerenityOS and `std::optional` behaviour.
// The only (observable) difference between this overload and the next one is that this one calls the move
// assignment operator when both `this` and `other` have a value; the other overload just unconditionally
// calls the move constructor.