This change switches the Meta/import-wpt-test.py script to using the
standard html.parser module rather than BeautifulSoup.
Without this change, the script fails the first time a contributor
tries to run it without having BeautifulSoup installed.
Note that this patch also includes an unrelated small change that
switches to using os.path.normpath — rather than Path.absolute() — to
“normalize” the destination names of the downloaded test files.
This is not in the spec, but I did see a null pointer dereference here
while browsing the web, and it seems completely harmless for this
function to skip over navigables without an active document.
This is just a wrapper to easily construct an Array from a span. This
avoids creating a Vector of values that are possibly Objects. One such
case is in ArrayIteratorPrototype::next.
This is not safe from GC. Unfortunately we cannot add a test to capture
the issue, as the allocation which may trigger GC is internal and not
observable from JS.
This was previously still valid since the `Optional<String>` move
constructor copied its input, but since the move is now destructive,
the code no longer works.
`Optional` and `Variant` both use essentially the same pattern of only
declaring a copy constructor/move constructor/destructor and copy/move
assignment operator if all of their template parameters have one.
Let's move these into a macro to avoid code duplication and to give a
name to the thing we are trying to accomplish.
Stealing the callbacks from the AnimationFrameCallbackDriver made them
no longer safe from GC. Continue to store them on the class until we
have finished their execution.
Better support for CSS shorthands when setting the style attribute. This
improves some tests in WPT /css/css-align/default-alignment/*shorthand.
When setting the style attribute in JS via element.style = '..' or the
setAttribute method, shorthand properties were not expanded to longhand
properties.
The DOM spec gets overridden by both the SVG2 and MathML Core specs in
that unknown elements should not inherit from DOM::Element, but from
SVG::SVGElement and MathML::MathMLElement respectively.
This is a mixin in the IDL, so let's treat it as a mixin in our code and
let both SVGElement and MathMLElement reuse the implementations that we
wrote for HTMLElement.
This change helps to ensure issue authors include the details needed to
reproduce problems — including repro steps, expected behavior, a reduced
test case, and logs/backtrace.
Previously, attempting to parse a floating point number with an integer
part larger than `(2 ^ 31) - 1` would cause the browser to crash. We now
avoid this by converting the integer part of the number to a `double`
rather than an `i32`.
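For illustration, here is a minimal standalone sketch of the approach (a hypothetical helper, not the actual LibWeb parser): the integer part is accumulated into a `double`, so a huge integer part loses precision at worst instead of overflowing an `i32`.

```cpp
#include <cctype>
#include <cstdio>
#include <string>

// Accumulate the integer part into a double so values above 2^31 - 1 cannot
// overflow a 32-bit integer. (Illustrative only; not the actual LibWeb code.)
static double parse_decimal(std::string const& input)
{
    size_t i = 0;
    double integer_part = 0;
    while (i < input.size() && std::isdigit(static_cast<unsigned char>(input[i])))
        integer_part = integer_part * 10 + (input[i++] - '0');

    double fractional_part = 0;
    double scale = 0.1;
    if (i < input.size() && input[i] == '.') {
        ++i;
        while (i < input.size() && std::isdigit(static_cast<unsigned char>(input[i]))) {
            fractional_part += (input[i++] - '0') * scale;
            scale /= 10;
        }
    }
    return integer_part + fractional_part;
}

int main()
{
    // 4294967296 does not fit in an i32, but parses fine as a double.
    std::printf("%f\n", parse_decimal("4294967296.5"));
}
```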
This change makes the Meta/import-wpt-test.py script handle URLs such as
https://wpt.live//WebCryptoAPI/generateKey/../util/helpers.js and paths
containing, e.g., wpt-import/WebCryptoAPI/generateKey/../util/helpers.js
(that is, URLs and paths with “..” parent-directory references in them).
Otherwise, without this change, when the import-wpt-test.py script tries
a URL like https://wpt.live//WebCryptoAPI/generateKey/../util/helpers.js
which contains a “..” parent-directory reference, the script fails with
a “urllib.error.HTTPError: HTTP Error 404: Not Found” error message.
This avoids needing to create root handles for each heap-allocated
object captured in the animation callback. An upcoming commit will add
several of these.
Fixes wpt/png/exif-chunk.html.
At some point there should probably be some mechanism to handle this
outside of the individual decoder plugins. The TIFF decoder seems to
have its own version of this, and as far as I can tell, the JPEG decoder
doesn't handle this at all, even though that's probably the most common
use case for Exif orientations. :^)
This change imports the WPT accname/name/comp_embedded_control.html
test, along with related resource files it depends on.
Note that in the wai-aria/scripts/aria-utils.js file, this changes the
get_computed_label call to use our window.internals.getComputedLabel.
This change implements the “C: Embedded Control” step at
https://w3c.github.io/accname/#step2C in the “Accessible Name and
Description Computation” spec — to compute the accessible names for
labeled form controls.
Similar to commit a9c858fc78, this patch
implements the WebDriver spec concept of shadow references grouped
according to their browsing context groups.
We currently spin the event loop to wait for the specified element to
become available. As we've seen with other endpoints, this can result
in deadlocks if another web component also spins the event loop.
This patch makes the locator implementations asynchronous.
We paint grid item nodes as a stacking context when they have no
`z-index` style set. However, a grid item could already have a stacking
context established - for example, when the `filter` style is applied.
This causes these nodes to be drawn twice.
Skip painting grid item nodes if a stacking context is already present.
This seems to have vanished from the spec, but in any case, we still
need it. Without this change we erroneously thought that calculations
that match <percentage> did not match <number-percentage>.
This will allow us to remove the use of SafeFunction in its
implementation. This requires a fair amount of plumbing to wire up the
GC heap to the appropriate places in order to create the timers.
This input event handling change is intended to address the following
design issues:
- Having `DOM::Position` is unnecessary complexity when `Selection`
exists because caret position could be described by the selection
object with a collapsed state. Before this change, we had to
synchronize those whenever one of them was modified, and there were
already bugs caused by that, i.e., caret position was not changed when
selection offset was modified from the JS side.
- Selection API exposes selection offset within `<textarea>` and
`<input>`, which is not supposed to happen. These objects should
manage their selection state by themselves and have selection offset
even when they are not displayed.
- `EventHandler` looks only at `DOM::Text` owned by `DOM::Position`
while doing text manipulations. It works fine for `<input>` and
`<textarea>`, but `contenteditable` needs to consider all descendant
text nodes; i.e., if the cursor is moved outside of
`DOM::Text`, we need to look for an adjacent text node to move the
cursor there.
With this change, `EventHandler` no longer does direct manipulations on
caret position or text content, but instead delegates them to the active
`InputEventsTarget`, which could be either
`FormAssociatedTextControlElement` (for `<input>` and `<textarea>`) or
`EditingHostManager` (for `contenteditable`). The `Selection` object is
used to manage both selection and caret position for `contenteditable`,
and text control elements manage their own selection state that is not
exposed by Selection API.
This change improves text editing on Discord, as now we don't have to
refocus the `contenteditable` element after character input. The problem
was that selection manipulations from the JS side were not propagated
to `DOM::Position`.
I expect this change to make future correctness improvements for
`contenteditable` (and `designMode`) easier, as now it's decoupled from
`<input>` and `<textarea>` and separated from `EventHandler`, which is
quite a busy file.
Makes this method not fail if updating the start offset (which happens
before updating the end offset) has already moved the end offset to the
end of the string in the following step:
> 1. If range’s root is not equal to node’s root, or if bp is after the
range’s end, set range’s end to bp.
We use the CSSRule::Type enum for identifying the type of a CSSRule, but
the spec requires that only some of these types are exposed via the
`type` attribute. For the rest, we're required to return 0, so let's do
so. :^)
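As a rough sketch of the required behavior (the enum and function here are illustrative stand-ins, not our actual CSSRule::Type; the numeric constants are the legacy values CSSOM defines):

```cpp
#include <cstdint>

// Illustrative rule-type enum; not the actual CSSRule::Type.
enum class RuleType { Style, Import, Media, FontFace, Keyframes, Supports, LayerBlock, Container };

// Only the legacy rule types report their historical constant through the
// `type` attribute; newer rule types must report 0.
static std::uint16_t exposed_type(RuleType type)
{
    switch (type) {
    case RuleType::Style:
        return 1;
    case RuleType::Import:
        return 3;
    case RuleType::Media:
        return 4;
    case RuleType::FontFace:
        return 5;
    // ... remaining legacy constants elided ...
    default:
        return 0;
    }
}
```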
The spec says that "isTrusted is a convenience that indicates whether
an event is dispatched by the user agent (as opposed to using
dispatchEvent())"
But when dispatching a pageshow event the flag was incorrectly set
to false.
This fixes https://wpt.fyi/results/html/syntax/parsing/the-end.html
In particular, this property now interacts correctly when the flex
container has flex-wrap: wrap-reverse.
This caused some "regressions" in WPT tests for negative overflow in
flex containers, but the previous behavior wasn't correct either,
it just happened to give false positives on tests.
...when running in test mode. This cuts down on the time it takes to run
the imported WPT tests, and you can still get the full error by opening
tests in the browser.
When a window containing a WebView becomes visible, we have to inform
WebContent. This was only implemented for the Tab class (not Inspector
or Task Manager).
This patch adds LadybirdWebViewWindow to contain the bare minimum needed
to render a LadybirdWebView. All windows containing a WebView inherit
from this class, to ensure their WebContent processes know they became
visible.
We implement these built-in accessors via a lexical environment override
on the inspected document's global scope. However, ClassicScript will
parse the script we provide as a JS program, in which any evaluated
bindings will be interpreted as global bindings. Our global binding
lookup in the bytecode interpreter does not search the lexical env for
the binding, thus scripts like "$0" fail to evaluate.
Instead, we can create an ECMAScriptFunctionObject to evaluate scripts
entered into the Inspector. These are not evaluated as JS programs, and
thus any evaluated bindings are interpreted as non-global bindings. The
lexical environment override we set will then be considered.
This change imports the WPT html/dom/aria-attribute-reflection.html
test as an in-tree test, and deletes the existing test from
https://github.com/LadybirdBrowser/ladybird/commit/a924e8747a4 that was
previously "ported" from WPT with changes to run under our (non-WPT)
in-tree test harness.
Similar to LadybirdBrowser/ladybird#1714.
We don't implement the linejoin values `miter-clip` and `arcs`, because
according to the SVG 2 spec:
> The values miter-clip and arcs of the stroke-linejoin property are at
> risk. There are no known browser implementations. See issue Github
> issue w3c/svgwg#592.
Nothing uses this yet. The next step is to change
SVGPathPaintable::paint() to read `graphics_element.stroke_linejoin()`
and `graphics_element.stroke_miterlimit()` when painting.
The cols and rows attributes are limited to only positive numbers with
fallback. The cols IDL attribute's default value is 20. The rows IDL
attribute's default value is 2.
The default value was previously returned only for negative numbers. I
added an additional check for the case when the attribute is 0 to match
the specification.
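A standalone sketch of the reflection rule, using a hypothetical helper (not the actual LibWeb code): absent, unparsable, negative, out-of-range, and now zero values all fall back to the default.

```cpp
#include <cstdio>
#include <optional>
#include <string>

// "Limited to only positive numbers with fallback": anything that is not a
// positive integer in range falls back to the default value.
static unsigned long reflect_positive_with_fallback(std::optional<std::string> const& attribute, unsigned long default_value)
{
    if (!attribute.has_value())
        return default_value;
    long long parsed = 0;
    try {
        parsed = std::stoll(*attribute);
    } catch (...) {
        return default_value; // not a valid integer at all
    }
    if (parsed <= 0 || parsed > 2147483647)
        return default_value; // zero, negative, and out-of-range values fall back
    return static_cast<unsigned long>(parsed);
}

int main()
{
    std::printf("%lu\n", reflect_positive_with_fallback(std::nullopt, 20));    // 20 (cols default)
    std::printf("%lu\n", reflect_positive_with_fallback(std::string("0"), 2)); // 2 (the new zero check)
    std::printf("%lu\n", reflect_positive_with_fallback(std::string("5"), 2)); // 5
}
```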
Not only does this match the spec, but otherwise when the UI process
sends us the initial visibility update, we would ignore the message as
we believed we were already visible (thus the update would not reach the
document).
It's currently possible for window size/position updates to hang, as the
underlying IPCs are synchronous. This updates the WebDriver endpoint to
be async, to unblock the WebContent process while the update is ongoing.
The UI process is now responsible for informing WebContent when the
update is complete.
On macOS, this is resulting in values of window.screenX, window.screenY,
window.outerWidth and window.outerHeight that are 2x larger than Safari,
Firefox, and our AppKit UI.
Window origins in AppKit are the bottom-left position of the NSWindow,
relative to the bottom-left of the screen. So we must do some alignment
of the top-left position received from WebDriver.
We can currently crash on WebDriver session shutdown when we receive a
Delete Session command. This destroys the WebDriver client while we are
inside the client's socket's on_ready_to_read callback. This is not
allowed by AK::Function.
To avoid this, we now only read data from the socket in the callback. We
then defer handling the message to break out of the callback.
By making use of the known set of supported dictionary names in that
overload set. Note that this list is typically very small (the max that
we have currently is 1).
It would be strange for the IDL to be defined as such, so instead of
leaving a FIXME comment, let's just verify that this doesn't happen in
practice, in case it does end up happening in reality.
This is really bare-bones, as we only support the `xyz-d50` color space
for the moment.
It makes us pass the following WPT tests:
- css/css-color/predefined-016.html
- css/css-color/xyz-d50-001.html
- css/css-color/xyz-d50-002.html
All its overrides return constants, and without virtual dispatch the
`qualified_layer_name` and `absolutized_selectors` functions can benefit
from slightly better optimizations.
`CSSRule`s aren't allocated that often, so the memory impact is minimal.
We were transforming coordinates for SVG gradients in a pretty
convoluted way: an inverse, unscaled transformation matrix was set up in
order to work around some (old?) technical limitations.
Rework this so the coordinate transformation no longer needs to be
inverted. This fixes gradients with "userSpaceOnUse" for their
gradientUnits attribute, which might cause coordinates to lie outside of
the bounding box of the gradient.
Two tests have updated reference screenshots with minor pixel updates;
this is probably the result of floating point precision improvements
from not inverting the matrix.
One test (svg-text-effects) has a bigger change: the gradient stops seem
to have moved along the text. This does seem to match other browsers
slightly better, so I'm moving forward with this ref update.
This resolves compiler errors in HelperProcess.cpp when instantiating
Process::spawn() with various client types like WebContentClient and
RequestClient.
1. We were not propagating selectedness updates from option to select
if the option was inside an optgroup.
2. When two or more options were selected, we were always favoring the
last one in tree order, instead of the last one that got checked.
3. We were neglecting to return in the `display size is 1` case when
all elements were disabled.
This was covered by some of the :has() selector tests. :^)
We basically need to do this for every invocation of invalidate_style()
right now, so let's just do it inside invalidate_style() itself.
Fixes one missing invalidation issue caught by a WPT test. :^)
The traversal for these was incorrect and awkward. Now it's less
incorrect but still very awkward. We should find better ways to
implement this, but for now this at least passes many more WPT tests.
This sucks, and we're gonna have to do better, but for now let's
invalidate the whole document's style, so that we get correct behavior
if there are :has() selectors present.
The specialization for the destructor of `AK::Optional` is ambiguous
when `T` is neither TriviallyDestructible nor Destructible.
The fix adds an additional check for `Destructible` when generating the
destructor of `AK::Optional`, since it calls the destructor of `T`, and
therefore already required `T` to be `Destructible` anyway.
Attempting this resulted in an error because the `m_pointer` field does
not exist on `Optional<T>`. Creating a shared `ptr()` function and
adding the necessary overloads solves this issue.
And here's the wild part: instead of cloning WPT tests, import the
relevant WPT tests that this fixes into our own test suite.
This works by adding a small Ladybird-specific callback in
resources/testharnessreport.js (which is what that file is meant for!)
Note that these run as text tests, and so they must signal the runner
when they are done. Tests using the "usual" WPT harness should just
work, but tests that do something more freestyle will need manual
signaling if they are to be imported.
I've also increased the test timeout here from 30 to 60 seconds,
to accommodate the larger WPT-style tests.
Reading the RFC9111 spec makes it clear that the stored response was
not intended to be cloned. This is because there is a "clone response"
operation that is used in other places, but never for stored responses.
Responses returned from `http_network_or_cache_fetch` were copied
directly from the cache, which is incorrect, since revalidation may
later modify the response, or even invalidate it, such as when the
`Access-Control-Allow-Origin` header is changed.
This fixes the WPT test [wpt/cors/304.htm](http://wpt.live/cors/304.htm).
When aspect-ratio is degenerate (e.g. 0/1 or 1/0) we should fall back
to the same behaviour as `aspect-ratio: auto` according to the spec.
This commit explicitly handles this case and fixes five WPT tests in
css/css-sizing/aspect-ratio (zero-or-infinity-[006-010]).
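A minimal sketch of the degenerate check, with hypothetical struct and field names:

```cpp
#include <cmath>

// Hypothetical representation of a preferred aspect ratio.
struct AspectRatio {
    double width { 0 };
    double height { 0 };
};

// A ratio with a zero or non-finite component is degenerate and should behave
// as if `aspect-ratio: auto` had been specified.
static bool is_degenerate(AspectRatio const& ratio)
{
    return ratio.width == 0 || ratio.height == 0
        || !std::isfinite(ratio.width) || !std::isfinite(ratio.height);
}
```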
The support in LibWeb is quite easy as the previous commits introduced
helpers to support lab-like colors.
Now for the methods in Color:
- The formulas in `from_lab()` are derived from the CIEXYZ to CIELAB
formulas in the "Colorimetry" paper published by the CIE.
- The conversion in `from_xyz50()` can be decomposed into multiple
steps: XYZ D50 -> XYZ D65 -> Linear sRGB -> sRGB. The first two
conversions are done with a single matrix operation. This matrix was
generated with a Python script [1].
This commit makes us pass all the `css/css-color/lab-00*.html` WPT
tests (0 to 7 at the time of writing).
[1] Python script used to generate the XYZ D50 -> Linear sRGB
conversion:
```python
import numpy as np
# http://www.brucelindbloom.com/index.html?Eqn_ChromAdapt.html
# First let's convert from D50 to D65 using the Bradford method.
m_a = np.array([
    [0.8951000, 0.2664000, -0.1614000],
    [-0.7502000, 1.7135000, 0.0367000],
    [0.0389000, -0.0685000, 1.0296000]
])
# D50
chromaticities_source = np.array([0.96422, 1, 0.82521])
# D65
chromaticities_destination = np.array([0.9505, 1, 1.0890])
cone_response_source = m_a @ chromaticities_source
cone_response_destination = m_a @ chromaticities_destination
cone_response_ratio = cone_response_destination / cone_response_source
m = np.linalg.inv(m_a) @ np.diagflat(cone_response_ratio) @ m_a
D50_to_D65 = m
# https://en.wikipedia.org/wiki/SRGB#From_CIE_XYZ_to_sRGB
# Then, the matrix to convert to linear sRGB.
xyz_65_to_srgb = np.array([
    [3.2406, -1.5372, -0.4986],
    [-0.9689, 1.8758, 0.0415],
    [0.0557, -0.2040, 1.0570]
])
# Finally, let's retrieve the final transformation.
xyz_50_to_srgb = xyz_65_to_srgb @ D50_to_D65
print(xyz_50_to_srgb)
```
LLVM recommends compiling with at least -O1 to have decent performance
with sanitizers enabled. Indeed, this improves CI performance of LibWeb
tests as follows:
GCC on Linux: 160.61s to 119.68s (40.93s faster)
Clang on Linux: 65.56s to 55.64s ( 9.92s faster)
If statements without an else clause generated jumps to the next
instruction. This commit fixes the if statement generation so that it
doesn't produce them anymore.
This is an example of JS code that generates the useless jumps:
(a => { if (a) {} })();
This avoids having to do O(n) contains() in the various flag accessors.
Yields a ~20% speed-up on the following microbenchmark:
const re = /foo/dgimsvy;
for (let i = 0; i < 1_000_000; ++i)
re.flags;
To help people in troubleshooting problems when running the WPT.sh
script, this change makes the script echo to stdout the complete
“wpt run” invocation (including all the flags and path args).
Invoking exec() entirely blocks the UI application's main thread. Qt
explicitly recommends against this. In practice, it seems to prevent
some IPC messages from being handled by the UI until the dialog is
closed by the user.
Instead, use open() (which is non-blocking) and set up a signal handler
to deal with the result.
There was a timing issue here where WebDriver would dismiss a dialog,
and then invoke another endpoint before the dialog was actually closed.
This is because the dismissal first has to hop over to the UI process to
close the graphical dialog, which then asynchronously informs WebContent
of the result. It's not until WebContent receives that result that the
dialog is considered closed, thus those subsequent endpoints would abort
due to a dialog being "open".
We now wait for dialogs to be fully closed before returning from the
dismissal endpoints.
Similar to commit c2cf65adac, we should
avoid spinning the event loop from the WebContent-side of the WebDriver
connection. This can result in deadlocks if another component in LibWeb
also spins the event loop.
The AO to await navigations has two event loop spinners - waiting for
the navigation to complete and for the document to reach the target
readiness state. We now use NavigationObserver and DocumentObserver to
be notified when these conditions are met. And we use the same async IPC
mechanism as script execution to notify the WebDriver process when all
conditions are met (or timed out).
This change also removes as much direct use of JS::Promise in LibWeb
as possible. When specs refer to `Promise<T>` they should be assumed
to be referring to the WebIDL Promise type, not the JS::Promise type.
The one exception is the HostPromiseRejectionTracker hook on the JS
VM. This facility and its associated sets and events are intended to
expose the exact opaque object handles that were rejected to author
code. This is not possible with the WebIDL Promise type, so we have
to use JS::Promise or JS::Object to hold onto the promises.
It also exposes which specs need some updates in the area of
promises. WebDriver stands out in this regard. WebAudio could use
some more cross-references to WebIDL as well to clarify things.
This change was made in the HTML spec to address a comment from the
Gecko team for the Streams API in
a20ca78975
It also opens the door for some more Promise related refactors.
This condition was included to implement flex containers with auto
height, but it actually can reset the definitive height to 0 for inline
blocks with only replaced elements such as an SVG. Removing the
condition does not break any in-tree test, so let's improve the
situation on the SVG side of things for now.
Since `Storage::item_value` never returns an empty Optional,
and since `PlatformObject::is_supported_property_index` only
returns false when `item_value` returns an empty Optional,
the loop in `PlatformObject::internal_own_property_keys` will
never terminate when executed on a `Storage` instance.
This fix allows youtube.com to load successfully :^)
We can reuse the same HeapFunction when queueing up a rendering task
on the HTML event loop. No need to create extra work for the garbage
collector like this.
This commit makes LibRegex's atomic loop rewrite opt also accept cases
where the follow block jumps to the end of the forking block
(which is essentially a loop without a proper header in fancy clothes)
This makes patterns like /([^x]*)x/ where the loop is not _immediately_
followed by a block significantly faster.
The order of precedence with the `*` operator sometimes makes it a bit
harder to detect whether or not the result is actually used. Let's fail
compilation if anyone tries to discard the result.
These files seem to have been marked as executable by mistake.
Found by running the command:
find \( -name WPT -or -name Toolchain -or -name Build \) \
-prune -or -executable \! -type d -print \
| grep -Pv '\.(sh|py)$'
It is easy to forget to set this flag on macOS, where doing so causes
many tests to fail. So let's just set it via code along with other
options to make it a bit more foolproof.
We were already caching UTF-8 and byte strings, so let's add a cache
for UTF-16 strings as well. This is particularly profitable whenever we
run regular expressions, since the output of regex execution is a set of
UTF-16 strings.
Note that this is a weak cache like the other JS string caches, meaning
that strings are removed from the cache as they are garbage collected.
This avoids billions of PrimitiveString allocations across a run of WPT,
significantly reducing GC activity.
A NodeIterator rooted at some element cannot produce an element before
that root. That is, in a DOM tree such as:
<div id=one><div id=two><div id=three></div></div></div>
If we create a NodeIterator rooted at element `three`, then invoking the
previousNode() method on that iterator is guaranteed to return null.
There was also a bug here where if we ever did enter the while() loop,
we would have looped indefinitely, as we were not updating the current
node.
Our currently ad-hoc method of tracking element references is basically
a process-wide map, rather than grouping elements according to their
browsing context groups. This prevents us from recognizing when an
element reference is invalid due to its browsing context having been
closed.
This implements the WebDriver spec concept of element references grouped
according to their browsing context groups.
This patch is a bit noisy because we now need to plumb the current BC
through to the element reference AOs.
This change completes handling for all ARIA properties defined in the
current ARIA spec — by adding handling for the following properties:
- aria-braillelabel
- aria-brailleroledescription
- aria-colindextext
- aria-description
- aria-rowindextext
While LibJS does cache regex literals, there's more than one way to
create regex objects. This cache is hit regularly just browsing the web,
though no real measurement has been done on its potential speedups.
There are many WPT subtests which validate how we behave against frames
that have been removed. They do this by adding an iframe element with a
button whose click action removes the iframe element. When the click is
dispatched, the spec would have us generate a mouse event relative to
that iframe, rather than the top-level frame, thus the click would miss
the target button.
Serendipitously, a spec issue and PR were just opened to generate mouse
events relative to the top-level frame. This patch implements that PR;
it has some editorial issues to be resolved, but is a clear improvement
for these tests.
This implementation is incomplete in that we do not fully implement the
steps to match the given font against the fonts in the set.
This is used by fonts.google.com to load the fonts used for sample text.
`find_binding_and_index` was doing a linear search, and while most
environments are small, websites using JavaScript bundlers can have
functions with very large environments, like youtube.com, which has
environments with over 13K bindings, causing environment lookups to
take a noticeable amount of time, showing up high while profiling.
Adding a HashMap significantly increases performance on such websites.
We call ns_event_to_key_event for the NSFlagsChanged event as well.
Retrieving the isARepeat flag on those events is guaranteed to raise an
exception.
See:
https://developer.apple.com/documentation/appkit/nsevent/1528049-arepeat?language=objc
Raises an NSInternalInconsistencyException if sent to an
NSFlagsChanged event or other non-key event.
This abstraction will help us to support multiple IPC transport
mechanisms going forward. For now, we only have a TransportSocket that
implements the same behavior we previously had, using Unix Sockets for
IPC.
Ladybird's HelperProcess.cpp was the only user of the IPCProcess class.
Moving the helper class from LibCore allows for some more interesting
LibIPC changes in the upcoming commits.
`git` and `bash` are most likely already installed, `bash` is part of
`base-system`, and `git` is required to pull the repository in the
first place, but they are not included in `base-minimal` or
`base-container`, and they ARE required for a successful build,
so I have added them regardless.
`qt6-tools-devel` and `qt6-wayland-devel` were not required to compile
and link Ladybird on my machine, but I have included them as they are
installed on the other Linux distributions.
This aligns with an update to the HTML specification which stores these
promises on the global object instead of the settings object.
It also makes progress towards implementing the ShadowRealm proposal
as these promises are not present in the 'synthetic' realm for that
proposal.
This allows us to align our implementation in the same order as the
specification.
No functional change with the current implementation of this AO.
However, this change is required in order to correctly implement a
proposed update of the shadow realm proposal for integration with
the HTML spec host bindings in order to give the ShadowRealm
object the correct 'intrinsic' realm.
This is due to that proposed change adding a step which manipulates the
currently executing JavaScript execution context, making the ordering
important.
When we check whether navigationParams is null, we should also check
whether it is `NullWithError`, since `NullWithError` is equivalent to
`Empty` but carries an error message.
This ensures we cannot set or get cookies on non-HTTP(S) origins. Since
this would prevent our ability to test cookies during LibWeb tests, this
also adds an internals API to allow cookie access on file:// URLs.
Bring together the docs on running tests with the ones on writing them,
which were hidden in Browser/Patterns.md.
I've made a few adjustments while I was at it, because RunningTests.md
was a bit outdated and didn't mention `Meta/ladybird.sh test`. It's
possible they're still outdated and wrong, but I'm not familiar enough
with that area to know.
Handling tabs during text shaping caused issues because we tried to
index 'input_glyph_info' whilst iterating until 'glyph_count' and these
can be different sizes.
The difference arises when one or more characters get merged into the
same glyph in 'input_glyph_info' (see
https://lazka.github.io/pgi-docs/HarfBuzz-0.0/classes/glyph_info_t.html).
We don't want to render tabs as they come up as tofu characters so
instead let's strip them out of the text chunk before starting text
shaping.
Platforms such as X11 will typically send repeated keyRelease/keyup
events as a result of auto-repeating:
* KeyPress (initial)
* KeyRelease (repeat)
* KeyPress (repeat)
* KeyRelease (repeat)
* ad infinitum
Make our EventHandler more spec-compliant by ignoring all repeated keyup
events. This fixes long-pressing the arrow keys on
https://playbiolab.com/.
When a platform key press or release event is repeated, we now pass
along a `repeat` flag to indicate that auto-repeating is happening. This
flag eventually ends up in `KeyboardEvent.repeat`.
We need this, because https://www.slatejs.org/ that is used by Discord
checks this function to decide whether a browser has "beforeinput" event
support.
We have more work to do before we can run WPT headlessly by default
(i.e. handling alerts). But for now, we can run it headlessly locally
with the --headless flag.
We previously only supported enabling headless mode on a per-session
basis via the capabilities record. We don't have the ability to mutate
this record from WPT, so this adds a flag to set the default mode.
Previously, tests would intermittently fail because the current session
wasn't yet aware of a newly created window handle.
Co-authored-by: Timothy Flynn <trflynn89@pm.me>
With 6a549f6270 we need to check whether the optional scrollable
overflow exists for a paintable box, because it's not computed for
inline nodes.
Fixes crashing after navigating to the direct messages screen on
Discord.
The reason we were keeping track of the pre-shaping buffer was to know
where we had tab characters in the input. This is a very strange way of
doing that, but since it broke the web, let's patch it up quickly.
Follow-up to #1870 which broke text layout on many web pages.
This fixes a browser crash as experienced on Wikipedia when encountering
the ≠ entity. As a side-effect, this also affects some tab-align and
-wrap tests.
Per css-ui-4, setting `appearance: none` is supposed to suppress the
creation of a native-looking widget for stuff like checkboxes, radio
buttons, etc.
This patch implements this behavior by simply falling back to creating
a layout node based on the CSS `display` property in such cases.
This fixes an issue on the hey.com imbox page where we were rendering
checkboxes on top of sender profile photos.
Some callers (namely WebDriver) will need access to the navigable opened
by these steps. But if the noopener parameter is set, the returned proxy
will always be null.
This splits some of the Window open steps into an internal method that
returns the chosen navigable.
This is strictly nicer than passing them around as i32 everywhere,
and by switching to i64 as the underlying type, ID allocation becomes as
simple as incrementing an integer.
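Roughly, the allocation scheme this enables (illustrative sketch only):

```cpp
#include <cstdint>

// With 64 bits, the counter will not wrap in practice, so a freshly
// incremented value is always a unique ID.
class IdAllocator {
public:
    std::int64_t allocate() { return ++m_last_id; }

private:
    std::int64_t m_last_id { 0 };
};
```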
On the view-source page, generate anchor tags for any 'href' or 'src'
attribute value we come across. This handles both when the attribute
contains an absolute URL and a URL relative to the page.
This requires sending the document's base URL over IPC to resolve
relative URLs.
These flags always propagate to the root, so once we encounter an
ancestor with the flag set, we can stop traversal since everything above
it will already be set as well.
For pseudo elements that represent a browser-generated shadow tree
element, such as ::placeholder, we were reparsing their style attribute
in StyleComputer for some reason.
Instead of doing this, just access the already-parsed version via
Element::inline_style().
We only supported named properties on Storage, and as a result
`localStorage[0]` would be disconnected from the Storage's backing map.
Fixes at least 20 subtests in WPT in /webstorage.
As efforts to begin porting to Windows are underway, doing so should be a
bit less daunting if we clean up syscall wrappers that aren't used.
Note: While this removes Serenity-only wrappers, it leaves the Serenity
implementations of used wrappers in place for now, to not needlessly
complicate merging between the two orgs.
Now that InlinePaintable is gone, it's possible to make this function
accept a PaintableBox instead of the more broad
Layout::NodeWithStyleAndBoxModelMetrics type.
We've added a few JS::Handle members to this class over time. Let's
avoid creating a new GC root for each of these, and explicitly add a
visitation method.
Some of this code is older than widespread use of GCPtr. These functions
returning raw pointers has been a point of confusion at times, so let's
just indicate that they are non-null.
Without this, a worker can be GC'd in a very simple script such as:
const worker = new Worker("script.js");
worker.onmessage = () => {};
Where script.js attempts to post a message back to the parent window.
When the Worker is GC'd, the IPC connection from the WebContent process
to the WebWorker process is closed. When this occurs, the WebWorker will
exit() from LibIPC, and any message from the worker to its parent does
not have a chance to run.
This just updates our copied spec steps - new steps are not implemented
here. This is mostly just to highlight new steps we are missing around
MessagePorts.
No behavior change, but this does resolve an outstanding FIXME around
spec step ordering.
1.25x speed-up on this microbenchmark:
let o = { get x() { return 1; } };
for (let i = 0; i < 10_000_000; ++i)
o.x;
I looked into this because I noticed getter invocation when profiling
long-running WPT tests. We already had the mechanism for non-getter
properties, and the change to support getters turned out to be trivial.
These are created when a style rule has properties listed after another
rule. For example:
```css
.test {
--a: 1;
--b: 1;
--c: 1;
.thing {
/* ... */
}
/* These are after a rule (.thing) so they're wrapped in a
CSSNestedDeclarations: */
--d: 1;
--e: 1;
--f: 1;
}
```
They're treated like a nested style rule with the exact same selectors
as their containing style rule.
For example, this:
```css
.foo {
color: red;
&:hover {
color: green;
}
}
```
now has the same effect as this:
```css
.foo {
color: red;
}
.foo:hover {
color: green;
}
```
CSSStyleRule now has "absolutized selectors", which are its selectors
with any `&`s resolved. We use these instead of the "real" selectors
when matching them, meaning the style computer doesn't have to know or
care about where the selector appears in the CSS document.
Through the CSSOM, rules can be moved around, and so anything cached
(for now just the qualified layer name) needs to be recalculated when
that happens. This method is virtual so that other rules will be able
to clear their cached data too.
We were hard-coding "about:blank" as the document URL for parsed HTML
documents, which was definitely not correct.
This fixes a bunch of WPT tests under /domparsing/ :^)
On any `display: list-item` Node a CSS pseudo element (`::marker`) needs
to be created. This commit allows the ::marker pseudo element to be
nested within other pseudo elements (e.g. ::before or ::after).
This fixes this WPT test:
http://wpt.live/css/CSS2/generated-content/after-content-display-003.xht
This resolves all WPT timeouts in html/canvas/element/manual/imagebitmap
We can now run an additional 6 tests and 126 subtests :)
This also adds regression tests for this behavior.
InlinePaintable was an ad-hoc paintable type required to support the
fragmentation of inline nodes across multiple lines. It existed because
there was no way to associate multiple paintables with a single layout
node. This resulted in a lot of duplicated code between PaintableBox and
InlinePaintable. For example, most of the CSS properties like
background, border, shadows, etc. and hit-testing are almost identical
for both of them. However, the code had to be duplicated to account for
the fact that InlinePaintable creates a box for each line. And we had
quite a few places that operate on paintables with code like:
```
if (box.is_paintable_box()) {
    // do something
} else if (box.is_inline_paintable()) {
    // do exactly the same as for paintable box but using InlinePaintable
}
```
This change replaces the usage of `InlinePaintable` with
`PaintableWithLines` created for each line, which is now possible
because we support having multiple paintables per layout node. By doing
that, we remove lots of duplicated code and bring our implementation
closer to the spec.
CSS fragmentation implies 1:N relationship between layout nodes and
paintables. This change is a preparation for implementation of inline
fragmentation where InlinePaintable will be replaced with
PaintableWithLines corresponding to each line.
MSG_NOSIGNAL is a no-op for Windows, so we can define it to 0.
At the same *time*, none of the CLOCK_* macros are defined on
Windows, as clock_gettime does not exist. Put AK_OS_WINDOWS in the
same category as the BSDs for the COARSE versions of those macros.
Co-authored-by: Andrew Kaster <akaster@serenityos.org>
Monotonic uses QueryPerformanceCounter, while realtime uses
GetSystemTimeAsFileTime. These should approximate clock_gettime
fairly accurately. The QPC implementation only grabs microseconds,
but if we have actual use cases for nanos, we can bump that up.
Co-authored-by: Andrew Kaster <akaster@serenityos.org>
Always create a new formatting context for <math> elements. Previously
that didn't happen if they only had inline children, e.g. mtable.
This fixes a crash in the WPT MathML test
mathml/crashtests/children-with-negative-block-sizes.html
A couple of parts of this:
- Store the source text for Declarations of custom properties.
- Then save that in the UnresolvedStyleValue.
- Serialize UnresolvedStyleValue using the saved source when available -
that is, for custom properties but not for regular properties that
include var() or attr().
This is in a weird position where the spec tells us to discard the
comments, but then we have to preserve the original source text which
may include comments. As a compromise, I'm treating each comment as a
whitespace token - comments are functionally equivalent to whitespace
so this should not have any behaviour changes beyond preserving the
original text.
Ignoring the fact that we should serialize a simplified form of calc()
expressions, the following are wrong:
- grid-auto-columns
- grid-auto-rows
- grid-template-columns
- grid-template-rows
- transform-origin
Generated in part with this python script (though I've since iterated on
the output repeatedly so it's quite different):
```py
import json
properties_file = open("./Userland/Libraries/LibWeb/CSS/Properties.json")
properties = json.load(properties_file)
for (key, value) in properties.items():
    if not 'valid-types' in value:
        continue
    if 'longhands' in value:
        continue
    valid_types = value['valid-types']
    for type_string in valid_types:
        name, *suffix = type_string.split(None, 1)
        match name:
            case 'integer' | 'number':
                print(f'{key}: calc(2 * var(--n));')
            case 'angle':
                print(f'{key}: calc(2deg * var(--n));')
            case 'flex':
                print(f'{key}: calc(2fr * var(--n));')
            case 'frequency':
                print(f'{key}: calc(2hz * var(--n));')
            case 'length':
                print(f'{key}: calc(2px * var(--n));')
            case 'percentage':
                print(f'{key}: calc(2% * var(--n));')
            case 'resolution':
                print(f'{key}: calc(2x * var(--n));')
            case 'time':
                print(f'{key}: calc(2s * var(--n));')
```
Previously we would serialize these as the empty string, e.g. this:
```
<div style="grid-auto-columns: auto"></div>
```
would have a computed `grid-auto-columns` value of ``.
In order to know whether `calc(2.5)` is a number or an integer, we have
to see what the property will accept. So, add that knowledge to
`Parser::expand_unresolved_values()`.
This makes `counter-increment: foo calc(2 * var(--n));` work correctly,
in a test I'm working on.
Selectors and at-rules both made assumptions about their indentation
level, which made it difficult to read the dump output. It'll become
even worse once rules can be further nested within each other, so let's
fix it now. :^)
This will be the first step in making better use of system libraries
like fontconfig and CoreText to load system fonts for use by the UI
process and the CSS style computer.
This reverts 6d25bf3aac
Invalidating the style here means that transitions can cause an element
to leave style computation with its "needs style update" flag set to
true. This then causes a VERIFY to fail in the TreeBuilder.
This invalidation does not otherwise seem to have any effect. The
original commit suggests this was to fix a bug, but it's not clear what
bug that was. If it reappears, we can try to solve the issue in a
different way.
The clang-format version released with llvm 19 will format many files
differently than clang-format-18.
This change presents the existing warning shown for incorrect
clang-format versions to those with versions greater than 18.
Fixes issue #1750
I had made a stab at implementing this to determine whether it could
assist in fixing an issue where scroll_to_the_fragment was not getting
called at the appropriate time. It did not fix that issue, and actually
ended up breaking one of our in-tree tests. In the meantime, factor out
this method into a standalone function.
These don't have to worry about the input not being valid UTF-8 and
so can be infallible (and can even return self if no changes are
needed).
We use this instead of Infra::to_ascii_{upper,lower}_case in LibWeb.
- Include vertical border spacing in row group offset calculation so
that they are axis-aligned with child row/cell elements. This makes it
so there isn't horizontal and vertical overflow caused by child
row/cell elements.
- Include horizontal border spacing in tr width calculations. This makes
it so tr elements don't have overflow anymore when there are multiple
columns.
- Apply vertical caption offset to row group top offset.
- Don't double-count top padding when calculating vertical offset for
tr and row groups.
Instead of re-symbolicating entire stacks from scratch every time
we want a JS VM backtrace, we now use the ExecutionContext object as
cache storage via a new CachedSourceRange object.
This means that once a stack frame has been symbolicated, we don't
have to resymbolicate it again (unless the program counter moves
within that stack frame).
This drastically reduces time spent in symbolication in some WPT tests.
Cookies have a minimum expiry resolution of 1 second. So to test cookie
expiration, the test had to idle for at least a second, which is quite a
noticeable delay now that LibWeb tests are parallelized.
Instead, we can add an internal API to expire cookies with a time offset
to avoid this idle delay.
Useful for finding tests that take a long time to execute.
As of this commit, on macOS, we have:
Text/input/cookie.html: 1228ms
Text/input/css/transition-basics.html: 1060ms
Text/input/HTML/DedicatedWorkerGlobalScope-instanceof.html: 182ms
Text/input/WebAnimations/misc/animation-events-basic.html: 148ms
Text/input/Crypto/SubtleCrypto-deriveBits.html: 130ms
Text/input/IntersectionObserver/observe-box-inside-container-with-scrollable-overflow.html: 117ms
Text/input/navigation/attempt-navigating-object-without-a-document.html: 109ms
Text/input/css/getComputedStyle-print-all.html: 71ms
Text/input/WebAnimations/misc/animation-single-iteration-no-repeat.html: 61ms
Text/input/WebAnimations/animation-methods/updatePlaybackRate.html: 55ms
And on Linux:
Text/input/cookie.html: 1326ms
Text/input/css/transition-basics.html: 1155ms
Screenshot/text-shadow.html: 772ms
Screenshot/css-background-repeat.html: 622ms
Screenshot/object-fit-position.html: 541ms
Screenshot/css-background-position.html: 456ms
Screenshot/css-gradients.html: 451ms
Screenshot/border-radius.html: 400ms
Screenshot/svg-radialGradient.html: 398ms
Text/input/css/getComputedStyle-print-all.html: 325ms
CSS Syntax 3 (https://drafts.csswg.org/css-syntax) has changed
significantly since we implemented it a couple of years ago. Just about
every parsing algorithm has been rewritten in terms of the new token
stream concept, and to support nested styles. As all of those
algorithms call into each other, this is an unfortunately chonky diff.
As part of this, the transitory types (Declaration, Function, AtRule...)
have been rewritten. That's both because we have new requirements of
what they should be and contain, and also because the spec asks us to
create and then gradually modify them in place, which is easier if they
are plain structs.
This is an ad-hoc change to account for the fact that we may run
arbitrary code while waiting for the tasks in this function to complete.
I don't have a way to reproduce it, but I've seen trouble caused by
navigables disappearing, which causes the history step numbers to be
disturbed.
Previously Selection.extend() used only the relative node order to
decide which direction to extend the selection. This led to incorrect
behaviour if
both the existing and new boundary points are within the same DOM node
and the selection direction is reversed.
This change fixes all the failing subtests in the WPT extend-* test
suites.
Loading Ladybird on GitHub results in 37 debug logs about being unable
to parse an empty Date string. This log is intended to catch Date
formats we do not support to detect web compatibility problems, which
makes this case not particularly useful to log.
Instead of trying to parse all of the different date formats and
logging that the string is not valid, let's just return NAN immediately.
This was previously negated due to a misread of
https://url.spec.whatwg.org/#concept-url-equals. This change fixes a
bunch of WPT crashes such as
"/html/browsers/history/the-history-interface/001".
Prior to this change, running ./Meta/ladybird.sh rebuild would not
remove the user-variables.cmake file that was generated by the build
script. This caused errors when testing out the .devcontainer on my
Mac because the pkg-config binary lived in different dirs in the
container vs host.
The optimized devcontainer workflow downloads an image from the
GitHub container registry. Now that we've made that image, which is
built in CI, public, it would help to have the correct org name.
There was no need to use FlyString for error messages, and it just
caused a bunch of churn since these strings typically only existed
during the lifetime of the error.
Update the base image and the feature images.
Add new packages to the install.sh command, as they are now needed by
some dependencies that are built via vcpkg.
Add a newer clang version, but the default stays the same.
DedicatedWorkerGlobalScope is an object with a Global extended
attribute, but does not define any named property getters. This needs to
be handled by setting the prototype chain to:
DedicatedWorkerGlobalScope
^ DedicatedWorkerGlobalScopePrototype
^ WorkerGlobalScopePrototype
(This is different from something like Window, where there is an
intermediate WindowProperties object for named properties.)
Previously, we treated the GlobalMixin object as if it was a simple
prototype object, accidentally setting DedicatedWorkerGlobalScope's
prototype to WorkerGlobalScopePrototype. This caused the expression
self instanceof DedicatedWorkerGlobalScope
to return false inside workers.
This makes us pass many more of the "/xhr/idlharness.any.worker" WPT
tests than before, rather than failing early.
If the image data to decode is incomplete, e.g. a corrupt image missing
its last scanlines, the decoder would previously keep looping forever.
By breaking out of the loop if no more scanlines were produced we can at
least display the partial image up to that point.
Contrary to the spec, the Set Timeouts endpoint should update the
existing timeouts configuration in-place, rather than replacing it. WPT
expects this, and other browsers already implement this endpoint this
way.
This is a method defined in the WebDriver spec, but requires access to a
bunch of private fields in these classes, so this is implemented in the
same manner as the reset algorithm.
The "isCollapsed" attribute on a selection must "return true if and only
if the anchor and focus are the same".
In addition to checking that the anchor and focus belonged to the same
DOM node, we now also check that they refer to the same position within
the node.
With this change Ladybird passes all the subtests in the "isCollapsed"
WPT suite.
https://wpt.live/selection/isCollapsed.html
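The essence of the fix, sketched with hypothetical types:

```cpp
// Hypothetical boundary-point representation; the real Selection stores a
// node pointer and an offset for both the anchor and the focus.
struct BoundaryPoint {
    void const* node { nullptr };
    unsigned offset { 0 };
};

// isCollapsed: true if and only if anchor and focus are the same position,
// i.e. the same node *and* the same offset within that node.
static bool is_collapsed(BoundaryPoint const& anchor, BoundaryPoint const& focus)
{
    return anchor.node == focus.node && anchor.offset == focus.offset;
}
```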
This way we don't have to allocate a separate vector with both scroll
and sticky frames for the display list player (scroll and sticky frames
share an id pool), so the player can access an offset by frame id.
No behavior change.
...and add a display list item that does translation instead. By doing
that, we no longer need to map each coordinate in the display list by
the translation in the recorder state.
Set the connection timeout, which only limits the connection phase of the
request.
Previously, CURLOPT_TIMEOUT would apply to all transfer operations which
could result in legitimate upload or download operations being
cancelled.
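A hedged sketch of the libcurl calls involved (the 90-second value is an arbitrary placeholder, not necessarily what RequestServer uses):

```cpp
#include <curl/curl.h>

static void configure_timeouts(CURL* handle)
{
    // CURLOPT_CONNECTTIMEOUT_MS only caps the connection/handshake phase...
    curl_easy_setopt(handle, CURLOPT_CONNECTTIMEOUT_MS, 90000L);

    // ...whereas CURLOPT_TIMEOUT_MS would cap the whole transfer, cancelling
    // legitimate long-running uploads or downloads (so we avoid setting it).
}
```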
Printing the whole array causes the WPT test
console/console-log-large-array.any.html to crash.
This limits logged arrays to 100 elements and truncates the rest
with "...".
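In spirit, the truncation looks something like this standalone sketch (not the actual console formatter):

```cpp
#include <algorithm>
#include <cstddef>
#include <string>
#include <vector>

// Build a bounded preview of an array: at most 100 elements, then "...".
static std::string format_array_preview(std::vector<std::string> const& elements)
{
    constexpr std::size_t max_logged_elements = 100;
    std::string out = "[";
    std::size_t count = std::min(elements.size(), max_logged_elements);
    for (std::size_t i = 0; i < count; ++i) {
        if (i > 0)
            out += ", ";
        out += elements[i];
    }
    if (elements.size() > max_logged_elements)
        out += ", ...";
    out += "]";
    return out;
}
```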
The headless-browser source is getting a bit unwieldy. The ordering of
class and method definitions is fragile; e.g. the application and web
view classes each require full definitions of each other. So it has
reached the point where it makes sense to give headless-browser some
better file structure.
To prepare for that, this patch simply moves its source to live along-
side the other browser chromes. This location is a bit better prepared
for creating more files, as the Utilities folder doesn't even have its
own CMakeLists.txt.
- Add support for placement of abspos items into the track formed by
the last line and the padding edge of the grid container
- Correctly handle auto-positioned abspos items by placing them between
the padding edges of the grid container
Fixes crashing on https://wpt.live/css/css-grid/abspos/positioned-grid-descendants-001.html
Disable some non-supported flags on Windows platforms, and
pull in some flags from the other Windows support branches.
Co-Authored-By: Andrew Kaster <andrew@ladybird.org>
The video was accidentally removed in commit d5ba665f89.
This adds the video back to the LibWeb/Text/data folder, and validates
that the video loads in the test that depends on it loading.
We have support for using (shift+)tab to move focus to the next/previous
element on the page. However, there were several ways for this to crash
as written. This updates our implementation to check if we did not find
a node to move focus to, and to reset focus to the first/last node in
the document.
This doesn't seem to work when wrapping around from the first to the
last node. A FIXME has been added for that, as this would already not
work before this patch (the main focus here is not crashing).
The spec says we don't need to await navigations if we navigate to the
same URL that we are already on, but at least in our implementation, we
should still await the page load. Otherwise, we will invoke WebDriver
endpoints on the wrong page.
This is necessary when we add more ServiceWorker capabilities that
actually check this value. The more this spoof functionality is used,
the more we'll need to actually support serving test files over https.
Our handling of left vs. right modifier keys (shift, ctrl, etc.) was
largely not to spec. This patch adds explicit UIEvents::KeyCode values
for these keys, and updates the UI to match native key events to these
keys (as best as we are able).
If the user only presses the shift key, for example, we are required to
still send that event to WebContent and generate the corresponding JS
events. Unfortunately, NSApp does not inform us of these events via the
keyDown/keyUp methods. We have to implement the flagsChanged interface,
and track for ourselves what modifier keys were pressed or released.
...traversal. We've already fixed steps 3 and 9 to not filter out
non-positioned stacking contexts, because modern CSS has more ways to
create a stacking context besides being positioned with z-index (like
by using "transform", "filter" or "clip-path" properties).
See the following spec issue for more details:
https://github.com/w3c/csswg-drafts/issues/2717
Visual improvement on https://basecamp.com/
Prior to this change, SVGs were following the CSS painting order, which
means SVG boxes could have established stacking context and be sorted by
z-index. There is a section in the spec that defines what kind of SVG
boxes should create a stacking context
https://www.w3.org/TR/SVG2/render.html#EstablishingStackingContex
Although this spec is marked as a draft and rendering order described in
this spec does not match what other engines do.
This spec issue comment has a good summary of what other engines
actually do regarding painting order
https://github.com/w3c/svgwg/issues/264#issuecomment-246432360
"as long as you're relying solely on the default z-index (which SVG1
does, by definition), nothing ever changes order when you apply
opacity/filter/etc".
This change aligns our implementation with other engines by forbidding
SVGs from creating a formatting context and painting them in the order
they are defined in the tree.
When the TokenStream code was originally written, there was no such
concept in the CSS Syntax spec. But since then, it's been officially
added, (https://drafts.csswg.org/css-syntax/#css-token-stream) and the
parsing algorithms are described in terms of it. This patch brings our
implementation in line with the spec. A few deprecated TokenStream
methods are left around until their users are also updated to match the
newer spec.
There are a few differences:
- They name things differently. The main confusing one is we had
`next_token()` which consumed a token and returned it, but the spec
has a `next_token()` which peeks the next token. The spec names are
honestly better than what I'd come up with. (`discard_a_token()` is a
nice addition too!)
- We used to store the index of the token that was just consumed, and
they instead store the index of the token that will be consumed next.
This is a perfect breeding ground for off-by-one errors, so I've
finally added a test suite for TokenStream itself.
- We use a transaction system for rewinding, and the spec uses a stack
of "marks", which can be manually rewound to. These should be able to
coexist as long as we stick with marks in the parser spec algorithms,
and stick with transactions elsewhere.
Between WPT.sh and ladybird.sh.
This is useful to me as I set my default build configuration to Debug,
and have been hacking around with the WPT script to align with this
configuration.
ladybird.sh allows the source directory to be overridden to point to
another source directory. I am not sure if anyone is actually using this
behaviour in practice, but let's make the behaviour at least common
between the two scripts with a helper function.
We have a bit of forgiveness around allowing tests to pass with varying
trailing newlines. Only write a rebaselined test to disk if it would not
have passed under those conditions.
Before this change, we transferred the input element's line-height to
both the editable text *and* the placeholder. This caused some strange
doubling of the effective line-height when the editable text was empty,
pushing down the placeholder.
These were used to provide a layer of abstraction between ResourceLoader
and the networking backend. Now that we only have RequestServer, we can
remove these adapters to make the code a bit easier to follow.
Now that we use libcurl, there's no reason to keep Qt networking around.
Further, it doesn't support all features we need anyways, such as non-
buffered request handling for SSE.
The spec expects `postMessage()` to act as if it is invoked
immediately. Since `postMessage()` isn't actually invoked immediately,
keep tasks with source `PostedMessage` in the task queue, so that these
tasks are processed. Fixes a hang when `WorkerGlobalScope.close()` is
called immediately after `postMessage()`.
https://www.w3.org/TR/event-timing/#sec-performance-event-timing
Add IDL, header and stubs for the PerformanceEventTiming interface.
Two missing `PerformanceEntry` types that have come up in issues
are the `first-input` and `event` entryTypes. Both of those are
this interface.
Also, because both of those are this same interface, the static
methods from the parent class are difficult to implement because
of instance-specific details. We might either need subclasses or to
edit the parent and also everything that inherits from it :/
This test caused some flakiness due to the about:blank load it triggers.
It causes headless-browser to receive a load event for about:blank. If
we have moved onto the next test before that event arrived, that test
would ultimately time out, as its own load will have been dropped while
the about:blank load is still ongoing.
This patch makes us wait for that iframe load event before completing
the test.
We may want to consider never sending subframe load events to the UI
process as well. We really only care about top-level page loads in the
receivers of that event.
We don't create a ChromeProcess in headless-browser, so it is currently
not increasing its open file limit. This is causing crashes on macOS,
which has a very low default limit.
This uses the setup-python action in the setup action to install python
3.12, and removes the --break-system-packages hack that the system pip
requires.
Before this change, the viewport was allowed to be scrolled whenever it
had scrollable overflow, which is not correct when overflow is specified
to be hidden.
Partially reverting a3149c1ce9
Spinning the event loop was causing a crash on:
https://wpt.live/url/percent-encoding.window.html
As it was turning what is meant to be a synchronous operation into an
asynchronous one.
The sequence demonstrated by the reproducing test is as follows:
* A src attribute is changed for the iframe
* process_the_iframe_attributes entered with valid content navigable
* Event loop is spun, allowing the queued iframe removal to execute
* process_the_iframe_attributes continues with null content navigable
* 💥
This reverts commit 556a0936dd.
This was causing a large slow down in WPT, and a crash on macOS during
session shutdown when running WebDriver manually.
Transitions are currently not implemented for pseudo-elements, which
causes the transition to be applied to the "real"/"parent" element. When
a transition adjusts width/height on a pseudo-element, this causes the
real element's layout to break.
As a quick fix, we just skip doing transitions when they are against
pseudo-elements.
We were overly aggressive in clipping SVG roots, which effectively made
them behave as if they always had `overflow: hidden`.
This fixes incorrect clipping of the logo on https://basecamp.com/
This fixes the following WPT test, which was failing due to issues
stemming from all of the windows that had been opened:
https://wpt.live/url/failure.html
This will give us 1205 new subtests passing in WPT.
We currently create a single WebView and run all 1400+ LibWeb tests in
serial over that WebView. Instead, let's create as many WebViews as
there are processors on the system, and run LibWeb tests concurrently
over those views.
To do this performantly requires that we never block the main thread of
the headless-browser process once the tests are running. Doing so will
effectively pause execution of all other tests. So test execution is now
Promise-based.
On my machine (with a hardware concurrency of 32), this reduces the run
time of LibWeb tests from 31.382s to 3.640s. CPU utilization increases
from 5% to 67%.
Instead, just log the view's current URL when the WebView crashes. It
won't make any sense to track the executing test this way once there are
many tests running concurrently.
We currently run all tests in a single WebView instance. That instance
owns the process-wide RequestClient / ImageDecoderClient, so if we were
to create a second instance, we'd run into trouble.
This migrates ownership of these services to the Application class, and
makes the Application own the WebView. In the future, this will let the
Application own a list of views.
We currently pass around the individual fields of the Application class
to a bunch of free functions. This makes adding a new field, and passing
it all the way to e.g. run_dump_test pretty annoying, as we have to go
through about 5 function calls.
This will get much worse in an upcoming patch to run LibWeb tests
concurrently. There, we will have to further pass these flags around as
async lambda value captures.
To make this nicer, just access the flags from Application::the(), which
is how the "real" UIs access their application objects as well.
If, by the time we need to schedule rendering of the next frame, the
previous one is still not processed, we could skip it instead of growing
the task queue.
Should help with https://github.com/LadybirdBrowser/ladybird/issues/1647
When the flex container is sized under a min-content constraint in the
main axis, any flex items with a percentage main size should collapse
to zero width, not take up their own intrinsic min-content size.
This is not in the spec, but matches how other browsers behave.
Fixes an issue where the cartoons on https://basecamp.com/ were way
too large. :^)
The static position of a box is defined by the type of formatting
context it belongs to, so let's define this algorithm separately for
each FC instead of assuming
FormattingContext::calculate_static_position_rect() understands how to
handle all of them.
Also, with this change, calculate_static_position_rect() is no longer a
virtual function.
At least on my Mac, clock_gettime only provides millisecond resolution.
So if many WebContent processes are opened at once, it is not unlikely
that they will all create their backing stores within the same ms. When
that happens, all but the first will fail (and crash).
To prevent this, generate the shared memory file name based on the PID
and a static counter.
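A minimal sketch of the naming scheme (standalone; the actual helper
and name prefix in the codebase differ):

    #include <atomic>
    #include <string>
    #include <unistd.h>

    // Sketch: build a shared-memory name that is unique per process
    // *and* per call, so two WebContent processes created within the
    // same millisecond can never collide on a name.
    static std::string generate_shared_memory_name()
    {
        static std::atomic<unsigned> s_counter { 0 };
        auto serial = s_counter.fetch_add(1, std::memory_order_relaxed);
        return "/ladybird-backing-store-" + std::to_string(getpid())
            + "-" + std::to_string(serial);
    }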
We are currently trying to access the current parent and top-level
browsing contexts from the current BC itself. However, if the current BC
is closed, its association to the parent and top-level BCs is lost, and
we are no longer able to handle WebDriver endpoints involving those BCs.
Instead, let's store the parent and top-level BCs separately, and update
them in accordance with the spec.
We were already allowing intrinsic height layout to see definite widths,
and I can't think of a reason *not* to allow it the other way around.
More importantly, this fixes an issue where things with an aspect ratio
didn't have a height to resolve against before.
Makes the logo show up on https://basecamp.com/ :^)
If we end up in a situation where the navigable no longer has an active
window, we can't perform navigation or many other navigable operations.
These are all ad-hoc, since the navigables spec is basically all written
as if there's always an active window. Unfortunately, the active window
comes from the active document's browsing context, which is a nullable
concept even in the spec, so we do need to deal with null here.
This removes all the locally reproducible crashes when running WPT over
the legacy Japanese encoding directory on my computer.
Yes, this is a bit of a monkey patch, but it should be harmless since
we're (as I understand it) dealing with navigables that are still
hanging around with related tasks queued on them. Once all these tasks
have been completed, the navigables will go away anyway.
Because of the previous awkward factoring of Origin, we had two
implementations of Origin serialization and creation. Move the
implementation of DOMURL::url_origin into URL::origin, and
instead use the implementation of URL::Origin::serialize for
serialization (replacing URL::serialize_origin).
This happens to fix 8 URL subtests, as the two implementations had
diverged, and URL::serialize_origin was previously missing the spec
changes of whatwg/url@eee49fd and whatwg/url@fff33c3.
This more closely matches the specification, and removes any dependency
on LibWeb in the implementation of DOMURL::url_origin.
It is also one step closer to moving BlobURLRegistry to a singleton
process to match LibWeb's multiprocess Worker architecture.
While Origin is defined in the HTML spec, this leaves us with quite an
awkward relationship, as the URL spec makes use of AOs defined in the
HTML spec.
To simplify this factoring, relocate Origin into LibURL.
Previously, if there was an unhandled exception in an async test, it
might fail to call done() and timeout. Now we have a default "error"
handler to catch unhandled exceptions and fail the test. A few tests
want to actually test the behavior of window.onerror, so they need an
escape hatch.
Before this change we were serializing them in a bogus 8-digit hex color
format that isn't actually recognized by HTML.
This code will need more work when we start supporting color spaces
other than sRGB.
Now we can register jobs and they will be executed on the event loop
"later". This doesn't feel like the right place to execute them, but
the spec needs some updates in this regard anyway.
Implements https://github.com/whatwg/html/pull/10007 which basically
moves style, layout and painting from the HTML processing task into an
HTML task with the "rendering" source.
The biggest difference is that we no longer schedule HTML event loop
processing whenever we might need a repaint, but instead queue a global
rendering task 60 times per second that checks if any documents need a
style/layout/paint update.
That is a great simplification of our repaint scheduling model. Before
we had:
- An optional timer that scheduled animation updates at 60 Hz
- An optional timer that scheduled rAF updates
- A PaintWhenReady state to schedule a paint if the navigable didn't
  have a rendering opportunity on the last event loop iteration
Now all of that is gone, replaced with a single timer that drives
repainting at 60 Hz, and we don't have to worry about excessive
repaints.
In the future, the hard-coded 60 Hz refresh interval could be replaced
with CADisplayLink on macOS and a similar API on Linux to drive
repainting in sync with the display's refresh rate.
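A rough sketch of the single-timer model (standalone and simplified;
the real scheduling is integrated with the HTML event loop and its
task sources):

    #include <vector>

    // Sketch: one repeating ~60 Hz task replaces the ad-hoc repaint
    // timers. Each tick simply checks whether any document has pending
    // rendering work.
    struct Document {
        bool needs_style_update { false };
        bool needs_layout { false };
        bool needs_repaint { false };

        void update_style() { needs_style_update = false; }
        void update_layout() { needs_layout = false; }
        void paint() { needs_repaint = false; }
    };

    // Called from a timer firing roughly every 16 ms (~60 Hz).
    void rendering_task(std::vector<Document*>& documents)
    {
        for (auto* document : documents) {
            if (document->needs_style_update)
                document->update_style();
            if (document->needs_layout)
                document->update_layout();
            if (document->needs_repaint)
                document->paint();
            // Documents with nothing to do cost almost nothing per tick.
        }
    }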
update_layout() needs to be invoked before checking if a layout node is
present, because layout not being updated might be the reason why the
layout node doesn't exist yet.
...otherwise animated style invalidation will be skipped.
This change is preparation before applying the latest HTML event loop
processing spec changes, to avoid regressing our tests.
Fixes at least one WPT test that was previously timing out:
- html/semantics/document-metadata/the-base-element/base_target_does_not_affect_iframe_src_navigation.html
The horizontal scrollbar has to leave space at the right edge for the
vertical scrollbar to fully extend from the top to the bottom edge of
the viewport. Before, this was done by just moving it leftward beyond
the edge of the viewport. Now, it gets scaled down appropriately to fit
between the left edge of the viewport and the vertical scrollbar without
clipping.
The existing rebaseline script is a bit limiting in that it can only
rebaseline a single test at a time. When making sweeping changes, this
patch will let us rebaseline any number of tests at once.
For example, in the following abbreviated test HTML:
<span>some text</span>
<script>println("whf")</script>
We would have to craft the expectation file to include the "some text"
segment, usually with some leading whitespace. This is a bit annoying,
and makes it difficult to manually craft expectation files.
So instead of comparing the expectation against the entire DOM inner
text, we now send the inner text of just the <pre> element containing
the test output when we invoke `internals.signalTextTestIsDone`.
The events tested here are decidedly async. We also can't really write
sync tests of the form "test(async () => {})". Nothing will await the
async callback.
These tests were mostly async tests written in a manual way. This ports
them to use the standard asyncTest() infrastructure.
This is mostly just to reduce calls to internals.signalTextTestIsDone,
which will have a required parameter in an upcoming commit.
These used to serve as tests before we had proper testing infrastructure
set up. Now, they just sit forgotten about, gathering dust. Let's remove
them.
`STDERR_FILENO` is pretty common but not part of the standard. On
Windows, in order to get a file number, one must use `_fileno` to
convert the `FILE *`. Adopt this pattern, similar to the Android path,
which uses platform-specific operations.
There is an issue where GIFs with many frames cannot be loaded, as each
bitmap is sent over IPC using a separate file descriptor, and there is a
limit on the maximum number of descriptors per IPC message. Thus, trying
to load GIFs with more than 64 frames (the current limit) causes the
image decoder process to die.
This commit introduces the BitmapSequence class, which is a thin wrapper
around the type Vector<Optional<NonnullRefPtr<Gfx::Bitmap>>> and
provides an IPC encode/decode routine that collates all bitmap data into
a single buffer so that only a single file descriptor is required per
IPC transfer, even if multiple frames are being sent.
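The collation idea, roughly (a standalone sketch, not the actual
BitmapSequence/IPC encoder; real Gfx::Bitmap frames also carry size and
format metadata):

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <vector>

    // Sketch: instead of one shared buffer (and thus one file
    // descriptor) per frame, append every frame's pixel data into a
    // single byte buffer and remember each frame's size so the
    // receiver can split it back apart.
    struct CollatedBitmaps {
        std::vector<std::optional<size_t>> frame_sizes; // nullopt == missing frame
        std::vector<uint8_t> data;                      // all frames back to back
    };

    CollatedBitmaps collate(std::vector<std::optional<std::vector<uint8_t>>> const& frames)
    {
        CollatedBitmaps result;
        for (auto const& frame : frames) {
            if (!frame.has_value()) {
                result.frame_sizes.push_back(std::nullopt);
                continue;
            }
            result.frame_sizes.push_back(frame->size());
            result.data.insert(result.data.end(), frame->begin(), frame->end());
        }
        return result; // One buffer -> one file descriptor per IPC transfer.
    }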
Previously, if you ran with a relative path, then everything would run
fine except that the test name would be blank in the output, e.g.:
1/1234:
instead of:
1/1234: Text/input/canvas/export.html
Multiple font properties are either the `normal` keyword or some
non-keyword value, so this lets us avoid some boilerplate for those, at
the cost of the existing `none` users having marginally more verbose
code.
This is a special form of `<string>`, so it doesn't need its own style
value type. It's used in a couple of font-related properties. For
completeness, it's included in ValueType.
Two font properties, font-feature-settings and font-variation-settings,
contain a list of values that are an `<opentype-tag>` followed by a
single value. This class is intended to fill both roles.
StyleComputer is responsible for assigning animation targets, so we
have to make sure there are no pending style updates before querying
the animations of an element.
This change also introduces a version of getAnimations() that does not
check style updates and is used by StyleComputer to avoid mutual
recursion.
Merely specifying `self-hosted` is not enough - to get closer to
reproducible timings, we want these jobs to run on dedicated hardware.
The `test262-runner` label was introduced for runners that offer this.
swift-format is available in the Xcode 16 Beta and homebrew.
We will need some extra docs to tell Linux developers how to get it on
their distribution.
This also makes use of the fact that you can pass git diff a colon
delimited pattern to include ':*pattern' or exclude ':!*pattern'
matching files, which is pretty neat.
swift-format is only packaged for homebrew, Arch, and nixpkgs at the
moment. Rather than installing swiftly and a swift toolchain, let's
change the job to run on macOS.
We are currently creating a signal socket and socket notifier before the
Qt event loop itself has been created. Thus, when we receive a signal,
we are not actually notified when we write that signal number to the
signal socket.
This was also the source of the following error message being displayed
on every launch of the browser:
QSocketNotifier: Can only be used with threads started with QThread
The HTML tokenizer specification says that we're supposed to do this
when leaving the attribute name state or when emitting the token, as
appropriate.
Hopefully 'as appropriate' can mean only when emitting the token, as
that's the easiest place to insert this logic without complicating the
tokenizer any more.
Similar to script execution, this spins the WebDriver process until the
action is complete (rather than spinning the WebContent process, which
we've seen result in deadlocks).
This implements execution of the pointer up, pointer down, and pointer
move actions.
This isn't 100% complete. Pointer move actions are supposed to break
the move into iterations over the specified duration, which we currently
do not do.
In particular, we need to convert web element references to the actual
element. The AO isn't fully implemented because we will need to work out
mixing JsonValue types with JS value types, which currently isn't very
straightforward with our JSON clone algorithm.
This is only used for finding font directories for now, but having a
convenient function for it means if anyone needs to use XDG_DATA_DIRS
in future, they're less likely to implement it themselves and miss the
case of it being present but empty.
We also now canonicalize the data directory paths, as we do for the
other standard paths.
The XDG spec repeatedly says, for example:
> If $XDG_DATA_HOME is either not set or empty, a default equal to
$HOME/.local/share should be used.
- https://specifications.freedesktop.org/basedir-spec/latest/index.html
It's rare in practice, but does happen, for example in #1507 where we
would fail to find any system fonts if `XDG_DATA_DIRS` was blank.
This code now treats whitespace-only variables as empty too, which may
be overkill, but seems better to me than not doing so.
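A standalone sketch of the lookup behaviour (illustrative names; the
real helper lives in the standard-paths code and returns a list of
canonicalized paths):

    #include <algorithm>
    #include <cctype>
    #include <cstdlib>
    #include <string>

    // Sketch: per the XDG basedir spec, an unset *or* empty variable
    // means "use the default". We also treat whitespace-only values as
    // empty.
    static bool is_blank(std::string const& value)
    {
        return std::all_of(value.begin(), value.end(),
            [](unsigned char c) { return std::isspace(c); });
    }

    std::string xdg_data_dirs_or_default()
    {
        char const* raw = std::getenv("XDG_DATA_DIRS");
        if (raw == nullptr || is_blank(raw))
            return "/usr/local/share:/usr/share"; // Spec-defined default.
        return raw;
    }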
We had numerous NIH-based implementations of audio formats and metadata
that we now no longer need, because we either don't make use of the
code, or we replaced its implementation with FFmpeg.
This loader supports whatever format libavformat and libavcodec can
handle. Currently only seekable streams are supported, and we still have
some limitations as to the number of channels and sample format.
Plays all non-streaming audio files at:
https://tools.woolyss.com/html5-audio-video-tester/
Previously, we set the "needs style update" flag to false at the
beginning of recomputing the style. This meant that if any code within
the cascade set this flag to true, then we would end style computation
thinking the element still needed its style updating. This could occur
when starting a transition, and would make TreeBuilder crash.
By ensuring that we always set the flag to false at the very end of
style computation, this is avoided, along with any similar issues - I
noticed a comment in `Animation::cancel()` which sounds like a
workaround was needed for a similar problem previously.
This has no visible effect, but internally it's also highlighting any
CSS and JS embedded in the page, which will be made use of later. We'll
also be able to use this code for highlighting CSS or JS files directly
in the future.
It's not a perfect fit - the syntax highlighters give specific styles to
their spans, which we then ignore and just use their data integer to
figure out which CSS class to give to the span. It feels cleaner to me
to produce HTML styled that way, instead of every token having
`style="color: ...; font-weight: ...; text-decoration: ...;"` set on
it.
Most of this new `to_html_string()` code is adapted from Serenity's
`TextEditor::paint_event()`, so it should be pretty solid.
The code previously ensured that JS/CSS tokens did not share values with
the HTML tokens, but still let them share values with each other. The
numbers chosen (1000 and 2000) are somewhat arbitrary, but give us
plenty of room to avoid overlaps.
Fixes crashing on https://playbiolab.com/ in
VERIFY(page.client().is_ready_to_paint()) caused by attempting to start
the next repaint before the ongoing repaint is done.
This is an ad-hoc implementation that resolves the ready() promise once
the document and all fonts collected by the style computer are done
loading. A spec-compliant implementation would include creating a proxy
CSS::FontFace for each @font-face and correctly implementing the
specification steps for font fetching, but we are far from there yet.
This hackish implementation should yield good WPT progress because it
will actually start waiting for the Ahem font to load before capturing
layout measurements. For example, it makes
https://wpt.live/css/css-grid/abspos/positioned-grid-descendants-001.html
go from 0/100 to 36/100 passing subtests.
We were always generating click events with the primary mouse button
instead of the provided button, and with the `buttons` field set to that
provided button.
After closing a window, it is the client's job to switch to another
window before executing any other command. Currently, we will crash if
that did not happen when we try to send an IPC to a window handle that
we no longer hold. This patch makes us return a "no such window" error
instead.
The exceptions to this new check are the "Switch to Window" and "Get
Window Handles" commands.
This is what the spec tells us to do:
The root element’s display type is always blockified,
and its principal box always establishes an independent
formatting context.
Additionally, a display of contents computes to block
on the root element.
Spec link: https://drafts.csswg.org/css-display/#root
Fixes #1562
CSS Fonts level 4 renames font-stretch to font-width, with font-stretch
being left as a legacy alias. Unfortunately the other specs have not yet
been updated, so both terms are used in different places.
It's possible to resolve a box's height without doing inner layout when
the computed value is not auto. Doing that fixes height resolution when
a box with a percentage height has a containing block with a percentage
height.
Before:
- resolve used width
- layout box's content
- resolve height
After:
- resolve used width
- resolve height if treated as not auto
- layout box's content
- resolve height if treated as auto
When a property is a "legacy name alias", any time it is used in CSS or
via the CSSOM its aliased name is used instead.
(See https://drafts.csswg.org/css-cascade-5/#legacy-name-alias)
This means we only care about the alias when parsing a string as a
PropertyID - and we can just return the PropertyID it is an alias for.
No need for a distinct PropertyID for it, and no need for LibWeb to
care about it at all.
Previously, we had a bunch of these properties, which misused our code
for "logical aliases", some of which I've discovered were not even
fully implemented. But with this change, all that code can go away, and
making a legacy alias is just a case of putting it in the JSON. This
also shrinks `StyleProperties` as it doesn't need to contain data for
these aliases, and removes a whole load of `-webkit-*` spam from the
style inspector.
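Roughly, the parse-time resolution looks like this (a standalone
sketch; the alias table here is illustrative, and the real mapping is
generated from the properties JSON):

    #include <optional>
    #include <string>
    #include <unordered_map>

    // Sketch: a legacy name alias never becomes its own PropertyID.
    // When parsing a property name, we simply return the PropertyID it
    // is an alias for, so the rest of the engine never sees the alias.
    enum class PropertyID {
        FontWidth,
        OverflowWrap,
        // ...
    };

    std::optional<PropertyID> property_id_from_string(std::string const& name)
    {
        // Legacy name aliases, resolved straight to their target
        // (https://drafts.csswg.org/css-cascade-5/#legacy-name-alias).
        static std::unordered_map<std::string, PropertyID> const legacy_aliases {
            { "font-stretch", PropertyID::FontWidth },
            { "word-wrap", PropertyID::OverflowWrap },
        };
        if (auto it = legacy_aliases.find(name); it != legacy_aliases.end())
            return it->second;

        static std::unordered_map<std::string, PropertyID> const properties {
            { "font-width", PropertyID::FontWidth },
            { "overflow-wrap", PropertyID::OverflowWrap },
        };
        if (auto it = properties.find(name); it != properties.end())
            return it->second;
        return std::nullopt;
    }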
The vcpkg install is handled through an action to run vcpkg install with
the private --x-install-root flag that their CMake toolchain file uses
to install dependencies into a build-time directory.
We now use the "report an exception" AO when a script has an execution
error. This has mostly replaced the older "report the exception" AO in
various specifications. Using this newer AO ensures that
`window.onerror` is invoked when a script has an execution error.
Rather than checking the avcodec version in CMake, check it using the
avcodec version macros in the only source file that needs to know about
the AVFrame API/ABI change in version 59.24.100. This is friendlier to
other build systems that would rather avoid configure time checks.
We are currently returning a JSON object of the form:

    {
        "name": "element-6066-11e4-a52e-4f735466cecf",
        "value": "foo"
    }

Instead, we are expected to return an object of the form:

    {
        "element-6066-11e4-a52e-4f735466cecf": "foo"
    }
Very similar to commit e5877cda61.
By sending as much data as we can in a single write, we see a massive
performance improvement on WPT tests that hammer WebDriver with errors.
On my Linux machine, this reduces the runtime of:
/webdriver/tests/classic/perform_actions/invalid.py
from 45-60s down to 3-4s.
We must send a Cache-Control header, which then also requires that we
respond with an HTTP/1.1 response (the Pragma cache option is HTTP/1.0).
We should also send the Content-Type header using the same casing as is
written in the WebDriver spec (lowercase).
Both of these are explicitly tested by WPT.
First, this isn't actually helpful, as we no longer store 32-bit values
in JsonValue. They are stored as 64-bit values anyways.
But more importantly, there was a bug here when trying to coerce an i64
to an i32. All negative values were cast to an i32 without checking if
the value is below NumericLimits<i32>::min.
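A minimal sketch of the corrected coercion (standalone, using
std::numeric_limits in place of AK's NumericLimits):

    #include <cstdint>
    #include <limits>
    #include <optional>

    // Sketch: only coerce an i64 to an i32 when it fits in *both*
    // directions. The old code checked the upper bound but cast all
    // negative values blindly.
    std::optional<int32_t> coerce_to_i32(int64_t value)
    {
        if (value < std::numeric_limits<int32_t>::min()
            || value > std::numeric_limits<int32_t>::max())
            return std::nullopt;
        return static_cast<int32_t>(value);
    }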
Instead of creating a unique new prototype shape every time a function
object is instantiated, we now keep one cached with the intrinsics.
This avoids a whole lot of shape allocations, reducing GC pressure.
Instead of converting images to alpha masks on the CPU, we now delegate
that work to the GPU if possible, by way of SkSL shaders.
This noticeably speeds up https://vercel.com/ which has a ton of SVG
masking going on. The old implementation used 15% of CPU time when
loading the page, this one uses basically none.
I originally believed that this could never receive a null URL and the
spec was inaccurate, but it seems like it can indeed.
I don't have a distilled test, but this makes logging in with GitHub
work on https://v0.dev/
The spec allows us to either treat them as part of the UA origin, or as
its own origin before author styles. This second behaviour turns out to
be what we are currently doing, which is nice!
Funnily enough this was clarified in the spec barely a month after this
original comment was written. :^)
`revert` is supposed to revert to the previous cascade origin, but we
previously had it reverting to the previous layer. To support both,
track them separately during the cascade.
As part of this, we make `set_property_expanding_shorthands()` fall back
to `initial` if it can't find a previous value to revert to. Previously
we would just shrug and do nothing if that happened, which only works
if the value you want to revert to is whatever is currently in `style`.
That's no longer the case, because `revert` should skip over any layer
styles that have been applied since the previous origin.
The build assumed Qt and AppKit are the only UI frameworks we build.
This extends the default assumption away from that to start
experimenting with building on other platforms.
The `gn` build did not generate the CMake configuration file for the
backtrace module. Update the rules to configure the generated macros
mirroring the CMake build.
It's difficult to know what we need to implement if we silently ignore
these endpoints. Let's log the endpoints and their parameters, and clean
up the wall of FIXME comments to be easier to grok.
The file is not committed to disk until the close, which occurs at the
termination of the scope. Extract the `rename` to outside the scope,
allowing this to work on Windows. The `download_file` utility downloads
a file in the `gn` build.
WPT uses Python's http.client.HTTPConnection to send/receive WebDriver
messages. For some reason, on Linux, we see an ~0.04s delay between the
WPT server receiving the WebDriver response headers and its body. There
are tests which make north of 1100 of these requests, which adds up to
~44s.
These connections are almost always going to be over localhost and able
to be sent in a single write. So let's send the response all at once.
On my Linux machine, this reduces the runtime of /cookies/name/name.html
from 45-60s down to 3-4s.
If we don't recognize a given transition-property value as a known CSS
property (i.e. one that we know about; it's not necessarily invalid),
we should not extrapolate the other transition-foo values for it.
Fixes #1480
https://github.com/whatwg/console/pull/240 is an editorial change to use
the term "implementation-defined" more consistently. This seems to be
the only instance in the spec text which we quote verbatim.
On macOS, it's a bit trickier to not install them, as we're using the
MACOSX_PACKAGE_LOCATION file property to get them into the build
directory and install tree in the same way.
This mainly uses forward declarations as appropriate for input element
related files. This reduces the number of targets being built when we
change HTMLInputElement.h from 430 to 44.
Previously, using `ladybird.sh run` with any target that was part of the
macOS app bundle would try to run the given executable from the wrong
directory.
This is confirmed to work on Xcode 16 and Xcode 16.1 Beta 2, with CMake
3.28 or higher.
On Linux, the 6.0.0 release from swiftly is still missing my libstdc++
workaround, so it needs a snapshot to work.
When the detected SDK for CMAKE_OSX_SYSROOT and friends has the same
version as your current macOS system version, CMake helpfully doesn't
set CMAKE_OSX_DEPLOYMENT_TARGET. Unfortunately, in this case, swiftc
will default to macOS 10.4, which is absolutely ancient. Grab the target
triple from the -print-target-info JSON when CMAKE_OSX_DEPLOYMENT_TARGET
is not provided at configure time.
Previously, we would crash when attempting to establish a web socket
connection from inside a worker, as we were assuming that the ESO's
global object was a `Window`.
Previously, some otherwise unimplemented WebDriver endpoints were
indicating that they had executed successfully. This was causing a large
number of Web Platform Tests to time out when they should have failed.
The thread pool test is currently flakey and takes over 2 minutes to run
on CI. It also currently has no users now that RequestServer uses curl,
so let's just remove it for now. If we need it in the future, we can
revive it from git history.
If we decide to fetch another linked resource, we don't care about the
earlier fetch and can safely abort it.
This fixes an issue on GitHub where we'd load the same style sheet
multiple times and invalidate style for the entire document every time
it finished fetching.
By aborting the ongoing fetch, no excess invalidation happens.
As useful as they may be to web developers, :has() selectors complicate
the style invalidation process quite a lot.
Let's have StyleComputer keep track of whether they are present at all
in the current set of active style sheets. This will allow us to
implement fast-path optimizations when there are no :has() selectors.
When an element is invalidated, it's possible for any subsequent sibling
or any of their descendants to also need invalidation. (Due to the CSS
sibling combinators, `+` and `~`)
For DOM node insertion/removal, we must also invalidate preceding
siblings, since they could be affected by :first-child, :last-child or
:nth-child() selectors.
This increases the amount of invalidation we do, but it's more correct.
In the future, we will implement optimizations that drastically reduce
the number of elements invalidated.
The expensive part of creating a segmenter is doing the locale and UCD
data lookups at creation time. Instead of doing this once per text node,
cache the segmenters on the document, and clone them as needed (cloning
is much, much cheaper).
On a profile of loading Ladybird's GitHub repo, the following hot
methods changed:
- ChunkIterator ctor: 6.08% -> 0.21%
- Segmenter factory: 5.86% -> 0%
- Segmenter clone: N/A -> 0.09%
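A standalone sketch of the caching pattern (illustrative types, not the
actual LibUnicode/LibWeb API):

    #include <memory>

    struct Segmenter {
        // Stand-in for the expensive locale + UCD data lookups.
        static std::unique_ptr<Segmenter> create() { return std::make_unique<Segmenter>(); }
        // Stand-in for the cheap copy of already-prepared state.
        std::unique_ptr<Segmenter> clone() const { return std::make_unique<Segmenter>(*this); }
    };

    struct Document {
        // Created once per document, on first use.
        Segmenter& grapheme_segmenter()
        {
            if (!m_grapheme_segmenter)
                m_grapheme_segmenter = Segmenter::create();
            return *m_grapheme_segmenter;
        }

    private:
        std::unique_ptr<Segmenter> m_grapheme_segmenter;
    };

    // Each text node clones the document's segmenter instead of
    // creating a fresh one from scratch.
    std::unique_ptr<Segmenter> segmenter_for_text_node(Document& document)
    {
        return document.grapheme_segmenter().clone();
    }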
We now expand shorthands into their respective longhand values when
assigning to a shorthand named property on a CSSStyleDeclaration.
We also make sure that shorthands can be round-tripped by correctly
routing named property access through the getPropertyValue() AO,
and expanding it to handle shorthands as well.
A lot of WPT tests for CSS parsing rely on these mechanisms and should
now start working. :^)
Note that multi-level recursive shorthands like `border` don't work
100% correctly yet. We're going to need a bunch more logic to properly
serialize e.g. `border-width` or `border` itself.
The algorithm for starting a transition requires us to examine the
before-change and after-change values of the property, without taking
any current animations into account.
...and delay static position calculation in IFC until trailing
whitespace is removed, because otherwise it's not possible to correctly
calculate the x offset.
The containing block for abspos grid items depends on their grid
placement:
- if the element has a definite grid position, then the corresponding
  grid area should be used as the containing block
- if the element does not have a definite grid position, then the
  padding edge of the grid container should be used as the containing
  block
So the offset should be adjusted for paddings only for boxes without a
definite grid position.
The web server for WPT has a tendency to just disconnect after sending
us a resource. This makes curl think an error occurred, but it's
actually still recoverable and we have the data.
So instead of just bailing, do what we already do for other kinds of
resources and try to parse the data we got. If it works out, great!
It would be nice to solve this in the networking layer instead, but
I'll leave that as an exercise for our future selves.
This fixes an issue where document.write() with only text input would
leave all the character data as unflushed text in the parser.
This fixes many of the WPT tests for document.write().
...instead of directly mutating Gfx::Bitmap.
This change is preparation for using a GPU backend for canvas painting,
where directly mutating the backing storage in a way that bypasses the
painter is no longer possible.
Our current text iterator is not aware of multi-code point graphemes.
Instead of simply incrementing an iterator one code point at a time, use
our Unicode grapheme segmenter to break text into fragments.
Instead of trying to locate the relevant StyleSheetList on style element
removal from the DOM, we now simply keep a pointer to the list instead.
This fixes an issue where using attachShadow() on an element that had
a declarative shadow DOM would cause any style elements present to use
the wrong StyleSheetList when removing themselves from the tree.
When a block container has `clear` set and some clearance is applied,
that clearance prevents margins from adjoining and therefore resets
the margin state. But when a floating box has `clear` set, that
clearance only goes between floating boxes so should not reset margin
state. BlockFormattingContexts already do that correctly, and this PR
changes InlineFormattingContext to do the same.
Fixes #1462; adds reduced input from that issue as a test.
This is not that easy to use for test developers, as forgetting to set
the url back to its original state after testing your specific API will
cause future navigations to fail in inexplicable ways.
This patch implements `Range::getClientRects` and
`Range::getBoundingClientRect`. Since the rects returned by invoking
getClientRects can be accessed without adding them to the Selection,
`ViewportPaintable::recompute_selection_states` has been updated to
accept a Range as a parameter, rather than acquiring it through the
Document's Selection.
With this change, the following tests now pass:
- wpt[css/cssom-view/range-bounding-client-rect-with-nested-text.html]
- wpt[css/cssom-view/DOMRectList.html]
Note: The test
"css/cssom-view/range-bounding-client-rect-with-display-contents.html"
still fails due to an issue with Element::getClientRects, which will
be addressed in a future commit.
The current min/max zoom levels are supposed to be 30% and 500%.
Before, due to floating point error accumulation in the incremental
addition of the zoom step to the zoom level, one extra zoom step would
get allowed, enabling the user to zoom from 20% to 510%.
Now, using rounding, the intermediate zoom-level values should be as
close to the theoretical value as FP32 can represent. E.g. the zoom
level of 70% (theoretical multiplier 0.7) is 0.69....
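A minimal sketch of the rounding approach (standalone; constants and
helper names are illustrative):

    #include <algorithm>
    #include <cmath>

    static constexpr float min_zoom = 0.3f;
    static constexpr float max_zoom = 5.0f;
    static constexpr float zoom_step = 0.1f;

    // Sketch: after each increment, round the zoom level to the step
    // granularity so repeated additions of 0.1f cannot drift past the
    // 30%..500% bounds.
    float apply_zoom_step(float current_zoom, int direction /* +1 or -1 */)
    {
        float zoom = current_zoom + zoom_step * static_cast<float>(direction);
        zoom = std::round(zoom / zoom_step) * zoom_step; // cancel accumulated FP error
        return std::clamp(zoom, min_zoom, max_zoom);
    }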
The IPCs to request a page's text, layout tree, etc. are currently all
synchronous. This can result in a deadlock when WebContent also makes
a synchronous IPC call, as both ends will be waiting on each other.
This replaces the page info IPCs with a single, asynchronous IPC. This
new IPC is promise-based, much like our screenshot IPC.
If we already destroyed our timer during destruction, and then curl
tries to flush its timeouts when we tear down the multi, we can just
ignore the timer callbacks.
DOM nodes that didn't have a layout node before being removed from the
DOM are not going to change the shape of the layout tree after being
removed.
Observing this, we can avoid a full layout tree rebuild on some DOM node
removals.
This avoids a bunch of tree building work when loading https://x.com/
This makes a big difference on macOS, where the default buffer size
for local sockets is 8 KiB. With bigger buffers, we don't have to
block on IPC nearly as often.
To prevent deadlocks when both IPC peers are trying to send to each
other but both sides have too much in their buffer already, we now
move the send operation to a secondary thread where it can block until
the peer is able to handle it.
Attributes have a max value length of 1024. So we theoretically need to
support values in the range -${"9".repeat(1023)} to ${"9".repeat(1024)}.
These obviously do not fit in an i64, so we were previously failing to
parse the attribute.
We will now cap the parsed value to the numeric limits of an i64, after
ensuring that the attribute value is indeed a number.
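A standalone sketch of the saturating parse (illustrative; the real
code goes through our string-to-number utilities):

    #include <cctype>
    #include <cstdint>
    #include <limits>
    #include <optional>
    #include <string>

    // Sketch: verify the attribute is a (possibly signed) run of
    // digits, parsing digit by digit and saturating at the i64 limits
    // instead of failing.
    std::optional<int64_t> parse_integer_attribute_saturating(std::string const& value)
    {
        size_t i = 0;
        bool negative = false;
        if (i < value.size() && (value[i] == '+' || value[i] == '-')) {
            negative = value[i] == '-';
            ++i;
        }
        if (i == value.size())
            return std::nullopt;

        constexpr int64_t min = std::numeric_limits<int64_t>::min();
        constexpr int64_t max = std::numeric_limits<int64_t>::max();
        int64_t result = 0;
        for (; i < value.size(); ++i) {
            if (!std::isdigit(static_cast<unsigned char>(value[i])))
                return std::nullopt; // Not a number at all.
            int digit = value[i] - '0';
            if (!negative) {
                if (result > (max - digit) / 10)
                    return max; // Saturate instead of overflowing.
                result = result * 10 + digit;
            } else {
                if (result < (min + digit) / 10)
                    return min;
                result = result * 10 - digit;
            }
        }
        return result;
    }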
We were only looking at the current top-level navigable and its children
when searching for the specified window handle. We need to search *all*
known navigables if the handle belongs to a window not in the current
tree.
We very much assume that the SQL storage backend runs in a singleton
process. When this is not the case, and multiple UI processes try to
write to the database at the same time, one of them will fail.
Since --force-new-process is a testing mode flag, let's just disable the
SQL backend when that flag is present.
Previously, attempting to get the computed value for a
grid-template-rows or grid-template-columns property would cause a crash
if the element had no associated paintable.
Computing the "contained text auto directionality" is now its own
algorithm, with an extra parameter, and is additionally called from
step 2.1.3.2 instead of calling "auto directionality".
At least on my Linux machine using zsh, this line was interpreted as
( cd "$build_dir" || echo ... ) && exit 1
instead of the intended
cd "$build_dir" || ( echo ... && exit 1 )
...meaning that it always exited regardless of whether it found the
build dir or not. So, let's make the intended precedence explicit.
This has been implemented in Qt for quite some time. This patch adds the
same feature to AppKit. This is needed to run many WPT subtests with the
AppKit chrome. This is also needed to handle window.open, target=_blank
link clicks, etc.
This is overriding the URL passed to e.g. window.open and link clicks on
an <a target=_blank> element.
Note: This alone is not enough to support such use cases. We will also
need to actually implement opening child web views. But getting this fix
out of the way first makes that patch a bit simpler.
...because calculate_inner_width() assumes the layout state has resolved
paddings that could be used to account for "box-sizing: border-box".
Fixes a regression introduced in 5f74da6ae8
The function is defined as `round(<rounding-strategy>?, A, B?)`.
With this change, the resolved type is `typeof(resolve(A))` instead of
`typeof(A)`.
For example, `round(up, 20%, 1px)` with a 200px percentage basis is now
correctly resolved to 40px instead of 40%.
Progress on https://www.notion.so/ landing page.
Before this change, each BFC child that established an FC root was laid
out at least twice: the first time to perform a normal layout, and the
second time to perform an intrinsic layout to determine the automatic
content height. With this change, we avoid the second run by querying
the formatting context for the height it used after performing the
normal layout.
The `calculate_inner_width()` and `calculate_inner_height()` functions resolve
percentage paddings using the width returned by
`containing_block_width_for()`. However, this function does not account
for grids where the containing block is defined by the grid area to
which an item belongs.
This change fixes the issue by modifying `calculate_inner_width()` and
`calculate_inner_height()` to use the already resolved paddings from the
layout state. Corresponding changes ensure that paddings are resolved
and saved in the state before box-sizing is handled.
As a side effect, this change also improves abspos layout for BFC, where
paddings are now resolved using the padding box of the containing block
instead of its content box.
Fixes yet another GFC bug, where Node::containing_block() should not be
used for grid items, because their containing block is the grid area,
which is not represented in the layout tree.
We currently implement the official cookie RFC, which was last updated
in 2011. Unfortunately, web reality conflicts with the RFC. For example,
all of the major browsers allow nameless cookies, which the RFC forbids.
There have since been draft versions of the RFC published to address
such issues. This patch implements the latest draft.
Major differences include:
* Allowing nameless or valueless (but not both) cookies
* Formal cookie length limits
* Formal same-site rules (not fully implemented here)
* More rules around cookie domains
This is one of the few endpoints that does not ensure a top-level BC is
open. It's a bit of an implementation-defined endpoint, so let's protect
against a non-existent BC explicitly.
Reftest screenshots are now captured using the dimensions specified in
the "draw a bounding box from the framebuffer" AO defined in the
WebDriver specification.
Although the parameter is named "available size," it is always supposed
to represent the containing block size whenever it has a definite value.
Therefore, it is possible to simply use this value instead of performing
a containing block lookup.
This change actually improves correctness for grid items whose
containing block is defined by the grid area, as
`Node::containing_block()` does not account for this.
Our abspos layout code assumes that available space is containing block
size, so this change aligns us with the spec by using grid area for this
value.
This change does not have an attached test because it is required for an
upcoming fix in calculate_inner_height() that will reveal the problem.
compute_width() could never be invoked for abspos boxes because they
are skipped during normal layout and processed in
parent_context_did_dimension_child_root_box()
In all places where text shaping happens, the callback is used simply to
append a glyph to the end of the glyphs vector. This change removes the
callback parameter and makes the text shaping function return a glyph
run.
When we create a WebDriverConnection object, we currently hand it the
page client for which it was opened, and perform all actions on that
client. However, some WebDriver endpoints change the browsing context
(and therefore page client) on which future commands should be executed.
For example, the switch-frame endpoint will switch the current browsing
context to a frame/iframe context.
This patch implements the current browsing context (and current top-
level browsing context) concepts. They are initialized to that of the
original page. Most of this patch is making sure we execute actions on
the correct context.
This change allows the user to specify the format of the log file to be
generated by the `WPT.sh` script. Multiple logging arguments may now be
specified.
The supported logging arguments are: `--log-raw`, `--log-unittest`,
`--log-xunit`, `--log-html`, `--log-mach`, `--log-tbpl`,
`--log-grouped`, `--log-chromium`, `--log-wptreport` and
`--log-wptscreenshot`. These arguments act the same as the equivalent
arguments supported by `wpt run`.
The short `--log` argument may also be used as an alias for `--log-raw`.
We currently spin the platform event loop while awaiting scripts to
complete. This causes WebContent to hang if another component is also
spinning the event loop. The particular example that instigated this
patch was the navigable's navigation loop (which spins until the fetch
process is complete), triggered by a form submission to an iframe.
So instead of spinning, we now return immediately from the script
executors, after setting up listeners for either the script's promise to
be resolved or for a timeout. The HTTP request to WebDriver must finish
synchronously though, so now the WebDriver process spins its event loop
until WebContent signals that the script completed. This should be ok -
the WebDriver process isn't expected to be doing anything else in the
meantime.
Also, as a consequence of these changes, we now actually handle
timeouts. We were previously creating the timeout timer, but not
starting it.
Use this cached pointer to the containing block's used values when
obviously possible. This avoids a hash lookup each time, and these
hash lookups do show up in profiles.
Change try_compute_width() to check whether min-width/max-width or width
is auto instead of always using `computed_values.width()`.
The `grid/min-max-content.html` test is affected, but it's a
progression.
UI event handlers currently return a boolean where false means the event
was cancelled by a script on the page, or otherwise dropped. It has been
a point of confusion for some time now, as it's not particularly clear
what should be returned in some special cases, or how the UI process
should handle the response.
This adds an enumeration with a few states that indicate exactly how the
WebContent process handled the event. This should remove all ambiguity,
and let us properly handle these states going forward.
There should be no behavior change with this patch. It's meant to only
introduce the enum, not change any of our decisions based on the result.
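For illustration, such an enumeration might look roughly like this
(names are assumptions, not necessarily the ones introduced by the
patch):

    // Sketch: replace the old bool with explicit outcomes so the UI
    // process knows exactly what WebContent did with the event.
    enum class EventResult {
        Dropped,   // We ignored the event entirely.
        Accepted,  // We handled the event ourselves.
        Cancelled, // A script on the page called preventDefault().
        Handled,   // The event was consumed by internal UI (e.g. scrollbars).
    };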
Logging a parse error when the attribute is not present is not useful,
but it does fill the debug log with errors that hide any real parsing
errors. This patch introduces an early-out in this situation to prevent
this spam.
The following spec algorithms had changed since we implemented them:
- "parse a sizes attribute"
- "update the source set"
- "create a source set"
This commit brings them up to date, as well as adding some additional
logging when parsing the sizes attribute fails in some way.
Before this change, a formatting context was responsible for layout of
absolutely positioned boxes whose FC root box was their parent (either
directly or indirectly). This only worked correctly when the containing
block of the absolutely positioned child did not escape the FC root.
This is because the width and height of an absolutely positioned box are
resolved based on the size of its containing block, so we needed to
ensure that the containing block's layout was completed before laying
out an absolutely positioned box.
With this change, the layout of absolutely positioned boxes is delayed
until the FC responsible for the containing block's layout is complete.
This has affected the way we calculate the static position. It is no
longer possible to ask the FC for a box's static position, as this FC's
state might be gone by the time the layout for absolutely positioned
elements occurs. Instead, the "static position rectangle" (a concept
from the spec) is saved in the layout state, along with information on
how to align the box within this rectangle when its width and height are
resolved.
Absolutely positioned boxes do not affect the size of the formatting
context box they belong to, so it's safe to skip their layout entirely
when calculating intrinsic size.
FormattingContext::run() does not allow reentrancy, so it's safe to
save the layout mode on the FC object and access it from there. This
avoids the need to drill it through methods of a formatting context and
makes it clear that this value can never change after FC construction.
The root formatting context box is passed into the constructor and saved
in the FC, so it's possible to access it from there instead of passing
the same box into run().
This adds a new script for linting WebIDL files, and adds it to the set
of scripts Meta/lint-ci.sh runs. Initially, this script does just one
thing: normalizes IDL definition lines so they start with four spaces.
This change takes all existing WebIDL files in the repo that had
definition lines without four leading spaces, and fixes them so they
have four leading spaces.
This change ensures that the value sanitization algorithm is run and
the text cursor is set to the correct position when the type attribute
of an input is changed.
Instead of throwing all pseudo-element rules in one bucket, let's have
one bucket per pseudo-element.
This means we only run ::before rules for ::before pseudo-elements,
only ::after rules for ::after, etc.
Average style update time on https://tailwindcss.com/ 250ms -> 215ms.
Once we know the final value of the `content` property for a
pseudo-element, we can bail early if the value is `none` or `normal`
(note that `normal` only applies to ::before and ::after).
In those cases, no pseudo-element will be generated, so everything
that follows in StyleComputer would be wasted work.
This noticeably improves performance on many pages, such as
https://tailwindcss.com/ where style updates go from 360ms -> 250ms.
This makes the way we've implemented the CSS `revert` keyword a lot less
expensive.
Until now, we were making a deep copy of all property values at the
start of each cascade origin. (Those are the values that `revert` would
bring us back to if encountered.)
With this patch, the revert property set becomes a shallow copy, and we
only clone the property set if the cascade ends up writing something.
This knocks a 5% profile item down to 1.3% on https://tailwindcss.com
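A standalone sketch of the copy-on-write idea (illustrative types; the
real code operates on LibWeb's cascaded property storage):

    #include <map>
    #include <memory>
    #include <string>

    // Sketch: instead of deep-copying all properties at the start of
    // every cascade origin, keep a shared (shallow) snapshot for
    // `revert`, and only clone the live set once the cascade actually
    // writes to it.
    using PropertySet = std::map<std::string, std::string>;

    struct CascadeState {
        std::shared_ptr<PropertySet> live;            // properties being cascaded
        std::shared_ptr<PropertySet const> revert_to; // snapshot `revert` falls back to
    };

    void begin_origin(CascadeState& state)
    {
        state.revert_to = state.live; // shallow: just bumps a refcount
    }

    void write_property(CascadeState& state, std::string name, std::string value)
    {
        // Copy-on-write: clone only if the snapshot still aliases the live set.
        if (state.revert_to == state.live)
            state.live = std::make_shared<PropertySet>(*state.live);
        (*state.live)[std::move(name)] = std::move(value);
    }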
Instead of asking Skia for the family name every time we're called,
just cache the string once and make subsequent calls fast.
This knocks a 3.2% item off the profile entirely on
https://tailwindcss.com (at least on macOS with the CoreText backend)
This brings back the optimization we had in the old OpenType
implementation where we cache lookup tables for code point / glyph ID
mappings.
This noticeably improves performance on https://tailwindcss.com/ by
knocking an 8% profile item down to 0.2%. :^)
Skia is more permissive when it comes to font loading compared to our
own OpenType implementation, which it has superseded: parsing an invalid
TTF does not result in an error, but rather produces a font that is
incorrectly displayed. This change updates the FontLoader to address
this behavior and to stop attempting to parse a font as a last resort
when format detection has failed.
Fixes a regression introduced in a9d5a99568 where text was not displayed
on x.com.
This script only checks Tests/AK, and verifies that all source files
that match Tests/AK/*.cpp are listed in the CMakeLists.txt.
This is a bit excessive. We don't have this check for any other test
files. This sort of error will definitely ™️ be caught in review.
Also, remove blank lines. (https://w3c.github.io/aria/#ARIAMixin source
doesn’t have any blank lines, and it’s not clear that the blank lines in
ours follow any intended structure/logic.)
This change adds the [CEReactions] attributes to all ARIA attributes in
the ARIAMixin WebIDL — as required by the WebIDL in the current spec at
https://w3c.github.io/aria/#ARIAMixin, and by the WPT test case at
http://wpt.live/custom-elements/reactions/AriaMixin-string-attributes.html,
and as implemented in other existing engines.
Otherwise, without this change, Ladybird doesn’t conform to the current
spec, fails all those tests, and isn’t interoperable with other engines.
CMake reads CMakePresets.json before it reads CMakeLists.txt. This
causes `CMake Error: Unrecognized "version" field` if the installed
CMake is older than preset support, or older than the version required
by the presets' version field.
The fix is to check the CMake version in ladybird.sh before trying to
create the build directory.
Co-Authored-By: Andrew Kaster <andrew@ladybird.org>
If grid-template-rows or grid-template-columns is queried for a box that
is not a grid container, the result should be the computed value instead
of null.
Fixes crashing in the Inspector.
Instead of only bucketing these by class name, let's also bucket by
tag name and ID.
Reduces the number of selectors evaluated on https://tailwindcss.com/
from 2.9% to 1.9%.
By filtering first, we end up allocating much less vector space
most of the time.
This is mostly helpful in pathological cases where there's a huge number
of rules present, but most of them get rejected early.
Previously, there was a bug in the specification that would cause an
assertion failure, due to the abort event being fired before all
dependent signals were aborted.
That's awkward, but getComputedStyle needs to return used track values
for the gridTemplateColumns and gridTemplateRows properties. This change
implements that by saving style values with used values into the layout
state, so they can be assigned to paintables during
LayoutState::commit() and later accessed by style_value_for_property().
I haven't seen it used in the wild, but WPT grid tests extensively use
it. For example this change helps to go from 0/10 to 8/10 on this test:
https://wpt.live/css/css-grid/layout-algorithm/grid-fit-content-percentage.html
Before this change, the ancestor filter would only reject rules that
required a certain set of descendant strings (class, ID or tag name)
to be present in the current element's ancestor chain.
An immediate child is also a descendant, so we can include this
relationship in the ancestor filter as well.
This substantially improves the efficiency of the ancestor filter on
websites using Tailwind CSS.
For example, https://tailwindcss.com/ itself goes from full style
updates taking ~1400ms to ~350ms. Still *way* too long, but a huge
improvement nonetheless.
By bucketing these selectors by class or ID, we can avoid running them
in more cases.
Before, we were only avoiding them if the context element wasn't a div.
Now we avoid them for any element that doesn't have that specific class
or ID.
This reduces the number of selectors run on https://vercel.com by a bit
more, from 1.90% to 1.65%.
These are just roundabout ways of writing .foo, so we can still put them
in the rules-by-class bucket and skip running them when the element
doesn't have that class.
Note that :is(.foo .bar) is also bucketed as a class rule, since the
context element must have the `bar` class for the selector to match.
This is a massive speedup on https://vercel.com/ as it cuts the number
of selectors we actually evaluate from 7.0% to 1.9%.
Fixes implementation of the following line from the spec:
"However, limit the growth of any fit-content() tracks by their
fit-content() argument."
Now we correctly apply the limit to the increased growth limit rather
than to the planned increase.
The change in "Tests/LibWeb/Layout/input/grid/fit-content-2.html" is a
progression, and "Item as wide as the content." is actually as wide as
the content.
Fixes a bug where "'Await' expression is not allowed in formal
parameters of an async function" was thrown for "await" encountered in a
function definition assigned to a default function parameter.
Fixes loading of https://excalidraw.com/
Added the following Routes, IPC definitions, and boilerplates for the
missing endpoints:
- Switch To Frame
- Switch To Parent Frame
- Element Clear
- Element Send Keys
Getting the first font in a font cascade list with an empty font list
results in an out-of-bounds index error. If the font list is empty, the
last-resort font should be returned instead.
This didn't make any sense, and was already handled by pushing a new
execution context anyway.
By simply removing these bogus lines of code, we fix a bug where
throwing inside a function whose bytecode was shorter than the calling
function would crash trying to generate an Error stack trace (because
the bytecode offset we were trying to symbolicate was actually from
the longer caller function, and not valid in the callee function.)
This makes --log-all-js-exceptions less crash prone and more helpful.
This change enables using the rebaseline-libweb-test script with Debug
and Sanitizer builds — and allows specifying which build to use when
using rebaseline-libweb-test to generate new test-expectations files.
The mechanism used is to check the BUILD_PRESET environment variable.
Otherwise, without this change, there’s no way to use the
rebaseline-libweb-test script with Debug and Sanitizer builds — except
by manually hacking the script locally to hardcode a directory name.
This permits the user to use shift and the arrow/home/end keys to mutate
the document selection. Arrow key presses on non-editable text without
the shift key held have no effect.
This behavior differs browser-to-browser. The behavior here most closely
matches Firefox, though all browsers support this to some degree.
When the user clicks on a text node, the event handler sets the cursor
position to the location that was clicked. But it would then be set back
to 0 in the DOM node's focus handler. Leave the cursor alone, unless
the DOM node was never set as the cursor position node (which will occur
when the user clicks on the DOM node, but outside the shadow text node).
In that case, move the cursor to the end of the text node.
The end result here is that the cursor is placed where the user clicked,
or set to the end of node if the user clicked outside of the shadow text
node.
This allows rendering the elements with a dark color in dark mode. We
must also assign a `fill` color to the <select> element's chevron SVG
to match the text color.
When setting an element attribute to the value it already had, we don't
need to update style or invalidate anything that depends on the DOM
version counter.
This was a source of much pointless busywork.
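A minimal sketch of the early-out, using a hypothetical Element stand-in rather than the actual DOM classes: when the new value equals the old one, neither style updates nor the DOM version counter are touched.

```cpp
#include <string>
#include <unordered_map>

// Hypothetical stand-in for a DOM element.
struct Element {
    std::unordered_map<std::string, std::string> attributes;
    unsigned dom_version { 0 };

    void set_attribute(std::string const& name, std::string const& value)
    {
        auto it = attributes.find(name);
        if (it != attributes.end() && it->second == value)
            return; // Same value as before: nothing to invalidate.
        attributes[name] = value;
        ++dom_version; // Only a real change bumps the version counter.
    }
};
```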
You can now build with STYLE_INVALIDATION_DEBUG and get a debug stream
of reasons why style invalidations are happening and where.
I've rewritten this code many times, so instead of throwing it away once
again, I figured we should at least have it behind a flag.
Instead of switching on the PropertyID and doing a boatload of
comparisons, we reorder the PropertyID enum so that all inherited
properties are in two contiguous ranges (one for shorthands,
one for longhands).
This replaces the switch statement with two simple range checks.
Note that the property order change is observable via
window.getComputedStyle(), but the order of those properties is
implementation defined anyway.
Removes a 1.5% item from the profile when loading https://hemnet.se/
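A minimal sketch of the range-check approach, with hypothetical enumerator names for the range markers: once inherited properties sit in two contiguous ranges, "is this property inherited?" becomes two comparisons instead of a switch over every PropertyID.

```cpp
#include <cstdint>

// Hypothetical enumerator names for the range boundaries.
enum class PropertyID : std::uint16_t {
    // ... non-inherited properties ...
    FirstInheritedShorthand,
    // ... inherited shorthand properties ...
    LastInheritedShorthand,
    FirstInheritedLonghand,
    // ... inherited longhand properties ...
    LastInheritedLonghand,
};

constexpr bool is_inherited_property(PropertyID id)
{
    return (id >= PropertyID::FirstInheritedShorthand && id <= PropertyID::LastInheritedShorthand)
        || (id >= PropertyID::FirstInheritedLonghand && id <= PropertyID::LastInheritedLonghand);
}
```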
It would be nice if we could somehow move this work to the GPU, but even
with some basic local optimization (mostly coalescing bounds checks and
inlining pixel data access), this knocks a 13% item down to 9% in a
profile of loading https://vercel.com/
When accessed on the root/document element, the following properties are
derived from the viewport, not layout-dependent metrics:
- scrollLeft
- scrollTop
- scrollWidth
- scrollHeight
We now avoid synchronous layout in such cases. This was causing some
unnecessary layout work when loading https://vercel.com/
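A minimal sketch of the fast path, with hypothetical names rather than the actual LibWeb API: when the element is the document element, the metric comes from the viewport, so no synchronous layout is forced.

```cpp
// Hypothetical stand-ins; the real code lives on Element/Document.
struct Size { double width { 0 }; double height { 0 }; };

struct ScrollMetrics {
    bool is_document_element { false };
    Size viewport_size {};
    double layout_scroll_width { 0 };

    double scroll_width()
    {
        if (is_document_element)
            return viewport_size.width; // Viewport-derived; skip layout entirely.
        update_layout_if_needed();      // Slow path for all other elements.
        return layout_scroll_width;
    }

    void update_layout_if_needed() { /* synchronous layout would happen here */ }
};
```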
Before this change, we were cascading custom properties for each layer,
and then replacing any previously cascaded properties for the element
with only the set from this latest layer.
The patch fixes the issue by making each pass of the custom property
cascade add to the same set, and then finally assigning that set of
properties to the element.
Looking at the spec it doesn't seem like there's a chance for a service
worker client to be an environment but not an environment settings
object. Should that change in the implementation, we can move it.
This API is a relic from the time when it was important for objects to
have easy access to the Window, and to know it was the global object.
We now have more spec-related concepts like relevant_global_object and
current_global_object to pull the Window out of thin air.
This adds a storage tab which contains just a cookie viewer for now. In
the future, storage like Local Storage and Indexed DB can be added here
as well.
In this patch, the cookie table is read-only.
Cookies are typically deleted by setting their expiry time to an ancient
time stamp (i.e. this is how WebDriver is required to delete cookies).
Previously, we would update the cookie in the cookie jar, which would
mark the cookie as dirty. We would then purge expired cookies from the
jar's transient storage, which removed the cookie from the dirty list.
If the cookie was also in the persisted storage, it would never become
expired there as it was no longer in the dirty list when the timer for
synchronization fired.
Now, we don't remove any cookies from the transient dirty list when we
purge expired cookies. We hold onto the dirty cookie until sync time,
where we now update the cookie in the persisted storage *before* we
delete expired cookies.
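A minimal sketch of the new ordering, with hypothetical storage types rather than the actual cookie jar classes: dirty cookies are flushed to persistent storage first, and only then are expired cookies purged, so a cookie "deleted" via an ancient expiry time still reaches the on-disk store.

```cpp
#include <vector>

// Hypothetical stand-ins for the real cookie jar types.
struct Cookie {
    long long expiry_time { 0 };
    bool dirty { false };
};

struct CookieJar {
    std::vector<Cookie> transient_cookies;

    void synchronize(long long now)
    {
        // 1. Persist every dirty cookie, including already-expired ones.
        for (auto& cookie : transient_cookies) {
            if (cookie.dirty) {
                persist(cookie);
                cookie.dirty = false;
            }
        }
        // 2. Only afterwards drop expired cookies from transient storage.
        std::erase_if(transient_cookies, [&](auto const& cookie) { return cookie.expiry_time <= now; });
    }

    void persist(Cookie const&) { /* write to the on-disk cookie store */ }
};
```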
It's getting a bit unwieldy to maintain as an inlined string. Move it to
its own file so it can be edited with syntax highlighting and other IDE
features.
This header held a bunch of utility functions shared across several code
generators. The only user of any of these utilities now is the public
suffix generator. Move the one used function to that generator, and
remove the common header.
This change adds green and red pass/fail emoji indicators to an in-tree
test — to make it easier to manually scan through the test results and
quickly see which cases are passing, and which are failing.
This change makes the aria-relevant content attribute get reflected by
the ariaRelevant IDL attribute — which makes the Ladybird behavior
interoperable with the implemented behavior in other existing engines.
Otherwise, without this change, Ladybird fails the relevant test case in
https://wpt.fyi/results/html/dom/aria-attribute-reflection.html — which
other existing engines all pass.
Before this change, we were only checking for actual glyph containment
in a font if unicode ranges were specified. However that is not
sufficient for emoji support, where we want to continue searching for
a font until one containing emojis is found.
This change updates `ExecuteScript::execute_script()` and
`ExecuteScript::execute_async_script()` to bring their behavior in line
with each other and the current specification text.
Instances of the variable `timeout` have also been renamed to
`timeout_ms`, for clarity.
I didn't want to add another set of boilerplatey tree-walking methods,
so here's a general-purpose one. :^)
`for_each_effective_rule()` walks the tree of effective style rules, and
runs the callback on each one, in either pre- or postorder. The
previous `for_each_effective_style/keyframes_rule()` methods of
`CSSStyleSheet` are then reimplemented in terms of
`for_each_effective_rule()`, and we can get rid of their equivalents
elsewhere.
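A minimal sketch of such a general-purpose walker, using a hypothetical Rule type rather than the real CSSOM classes: one recursive function takes the visit order as a parameter, and the more specific "for each style/keyframes rule" helpers can be layered on top of it.

```cpp
#include <functional>
#include <vector>

enum class TraversalOrder { Preorder, Postorder };

// Hypothetical stand-in for a CSS rule that may contain child rules.
struct Rule {
    std::vector<Rule> children;
};

inline void for_each_effective_rule(Rule const& rule, TraversalOrder order,
    std::function<void(Rule const&)> const& callback)
{
    if (order == TraversalOrder::Preorder)
        callback(rule);
    for (auto const& child : rule.children)
        for_each_effective_rule(child, order, callback);
    if (order == TraversalOrder::Postorder)
        callback(rule);
}
```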
Depending on usage, `@layer` has two forms, with two different CSSOM
types. One simply lists layer names and the other defines a layer with
its contained rules.
On Linux/Windows, the ctrl key is used in conjunction with arrow keys to
jump word-by-word in text documents. On macOS, the option key is used
(which is mapped to the alt key code).
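A minimal sketch of the platform split, with hypothetical modifier flags rather than the actual key-event types: the word-by-word jump modifier is Ctrl on Linux/Windows and Option on macOS, which arrives as the Alt key code.

```cpp
// Hypothetical modifier flags.
enum KeyModifier : unsigned {
    Mod_Ctrl = 1 << 0,
    Mod_Alt = 1 << 1,
};

inline bool is_word_jump_modifier(unsigned modifiers)
{
#if defined(__APPLE__)
    return (modifiers & Mod_Alt) != 0;  // Option key, delivered as Alt.
#else
    return (modifiers & Mod_Ctrl) != 0; // Ctrl on Linux/Windows.
#endif
}
```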
The main motivator here was noticing that --disable-sql-database did not
work with AppKit. Rather than re-implementing this there, move ownership
of these classes to WebView::Application, so that each UI does not need
to individually worry about it.
A hard-coded value of 50px is too large for text boxes with a size that
is less than 50px. Reduce this to 24px, and further limit it by the size
of the overflowed box.
This change should move us forward toward emoji support, as we are no
longer limited by our own OpenType implementation, which was failing
to parse the TrueType Collection format used to store emoji fonts
(at least on macOS).
This is a preparation for upcoming changes where Gfx::Typeface will
depend on `FontDatabase::should_force_fontconfig()`, so we will no
longer be able to construct typefaces from the FontDatabase constructor
because of a circular dependency.
Currently we rely on the parser returning an error if the encoded data
cannot be parsed into a valid WOFF or WOFF2 font. That will no longer
hold after switching to Skia, which sometimes does not fail even when
the data does not represent a valid font.
When an editable node is focused and one of the arrow/home/end keys is
pressed while shift is held, we will now create or update the document's
selection. There is a bit of nuance to the behavior here, which matches
how the cursor behaves in other engines.
We will of course want to abstract this in the future to also support
text selection in non-editable nodes. This also does not implement
holding ctrl to jump by word rather than by grapheme.
AppKit uses Private Use Area code points for a large collection of
functional keys (arrows, home/end, etc.). Re-assign them to 0 to avoid
tripping up WebContent's key handler.
When performing a hit test of type TextCursor, it would check if the
position is around each fragment and not just inside it. This resulted
in always selecting the first fragment checked.
This commit computes the distance of each hit test result, and picks the
closest one.
When deciding if the grid container's min size should be limited by a
max size, check for a max height or width depending on the dimension,
instead of just always checking for a max width.
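A minimal sketch of the dimension-aware clamp, with a hypothetical helper rather than the actual grid formatting context code: the min size is capped by the max constraint that matches the dimension being sized, not always by max-width.

```cpp
#include <algorithm>
#include <optional>

enum class Dimension { Width, Height };

// Hypothetical helper: pick the max constraint for the dimension being sized.
inline double limit_min_size(double min_size, Dimension dimension,
    std::optional<double> max_width, std::optional<double> max_height)
{
    auto const& max_in_dimension = (dimension == Dimension::Width) ? max_width : max_height;
    return max_in_dimension.has_value() ? std::min(min_size, *max_in_dimension) : min_size;
}
```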
If a key is pressed when the media player is in focus, which causes the
media player to perform some action, that key event is no longer
propagated further.
When we want to inject a CSS counter for a line, we need to be sure to
handle the case where we had previously opened a styled span for the
current source substring. For example, if we see a newline in the middle
of a comment,
we will have previously opened the following tag:
<span class="comment">
So when injecting a new line and the <span class="line"> element (for
CSS counters), we need to close the previous span and insert a newly
opened tag to continue using the style.
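A minimal sketch of that newline handling, with hypothetical helper and markup names rather than the actual highlighter code: close the token's span, end the current line span, start the next line span, and reopen a token span with the same class so the style continues.

```cpp
#include <string>

// Hypothetical helper: emit a newline inside the highlighted output.
inline void emit_newline(std::string& html, std::string const& open_token_class)
{
    if (!open_token_class.empty())
        html += "</span>"; // Close e.g. <span class="comment">.
    html += "</span>\n<span class=\"line\">"; // End this line, begin the next.
    if (!open_token_class.empty())
        html += "<span class=\"" + open_token_class + "\">"; // Continue the style.
}
```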
If the Downloads directory exists, we will use it (note that this will
respect the XDG_DOWNLOAD_DIR environment variable).
Otherwise, we will ask the UI layer to retrieve a download directory
from the user. This directory is not saved, so it will be re-prompted
every time. Once a proper settings UI is complete, we will of course
integrate with that for persistent settings.
In some cases, we have a timestamp as a double in milliseconds. We then
would convert it to nanoseconds as a BigInt, just to bring it back to a
double for TZDB lookups. Add an overload to avoid this needless round
trip.
Even though the underlying time zone is already cached by LibUnicode, JS
performs additional expensive lookups with that time zone. There's no
need to do those lookups again until the system time zone has changed.
Note that we can currently only use simdutf for Base64 decoding if the
provided stopBeforePartial option is loose, which is the default. There
is an open issue for simdutf to implement strict and stop-before-partial
options. Until then, for those options, we implement a slow decoder that
is written exactly as the spec steps dictate.
See: https://github.com/simdutf/simdutf/issues/440
Some callers (LibJS) will want to control the size of the output buffer,
to decode up to a maximum length. They will also want to receive partial
results in the case of an error. This patch adds a method to provide
those capabilities, and makes the existing implementation use it.
Choosing options from the `<select>` will load and display that style
sheet's source text, with some checks to make sure that the text that
just loaded is the one we currently want.
The UI is a little goofy when scrolling, as it uses `position: sticky`
which we don't implement yet. But that's just more motivation to
implement it! :^)
This will be used by the inspector, for showing style sheet contents.
Identifying a specific style sheet is a bit tricky. Depending on where
it came from, a style sheet may have a URL, it might be associated with
a DOM element, both, or neither. This varied information is wrapped in
a new StyleSheetIdentifier struct.
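A minimal sketch of the shape of that struct, with hypothetical field types (the real one lives in LibWebView/LibWeb): a sheet can be identified by a URL, by the DOM element that owns it, both, or neither, so everything here is optional.

```cpp
#include <cstdint>
#include <optional>
#include <string>

// Hypothetical field layout for illustration only.
struct StyleSheetIdentifier {
    std::optional<std::string> url;              // Set for sheets fetched from a URL.
    std::optional<std::uint64_t> owner_node_id;  // Set for <style>/<link>-owned sheets.
};
```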
This is to enable the inspector to show this source.
There's a fairly hefty FIXME here because duplicating the source text is
a significant waste of memory. But I don't want to get too sidetracked.
This is only used for CSS style sheets. One case wants it as a String,
and the others don't care, but will in future also want to have the
source as a String.
When trying to use pkgconfig to find libjxl, the build fails trying to
link the cross-compiler's libc++.
Finding libjxl this way also requires the hwy library.
Findlibjxl.cmake was taken from SDL_image and altered to include its license.
On Android there's no real way to provide command-line flags. We are
using a dummy Main::Argument that only contains "ladybird" as the name
of the program.
The strings of Main::Argument cannot be empty, otherwise the program
throws an error. However, argc and argv can be set to 0 and nullptr.
Calls to `Document::set_needs_display()` and
`Paintable::set_needs_display()` now invalidate the display list by
default. This behavior can be changed by passing
`InvalidateDisplayList::No` to the function where invalidating the
display list is not necessary.
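A minimal sketch of the call shape, with hypothetical types rather than the actual LibWeb classes: display list invalidation defaults to Yes, and callers that know it is unnecessary pass InvalidateDisplayList::No explicitly.

```cpp
enum class InvalidateDisplayList { No, Yes };

// Hypothetical stand-in for a paintable.
struct Paintable {
    bool display_list_needs_rebuild { false };

    void set_needs_display(InvalidateDisplayList invalidate_display_list = InvalidateDisplayList::Yes)
    {
        // ... mark the paintable's rect as needing repaint ...
        if (invalidate_display_list == InvalidateDisplayList::Yes)
            display_list_needs_rebuild = true;
    }
};
```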
By checking the lengths and then looking directly at the bytes, the
generated code becomes a lot nicer.
This gives a 1.23x speedup when parsing the JS from x.com
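A minimal sketch of the generated-code shape, using a hypothetical keyword set rather than the real generated matcher: switch on the length first, then compare the raw bytes, instead of doing a chain of full string comparisons.

```cpp
#include <cstring>
#include <string_view>

// Hypothetical keyword set for illustration.
inline bool is_keyword(std::string_view identifier)
{
    switch (identifier.size()) {
    case 3:
        return std::memcmp(identifier.data(), "for", 3) == 0
            || std::memcmp(identifier.data(), "let", 3) == 0;
    case 5:
        return std::memcmp(identifier.data(), "const", 5) == 0
            || std::memcmp(identifier.data(), "while", 5) == 0;
    default:
        return false;
    }
}
```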
Use the offset from the ScrollFrame, which is the actual value a box is
shifted by while painting.
Also change `update_paint_and_hit_testing_properties_if_needed()` to
refresh scroll frames state, because `getBoundingClientRect()` now
depends on them.
Fixes the wrong file tree sidebar location and excessive layout
invalidations caused by miscalculations on the JS side when a wrong
bounding client rect is provided, on GitHub PR pages like
https://github.com/LadybirdBrowser/ladybird/pull/1232/files
description: Create a report to help us reproduce and fix a bug.
body:
  - type: textarea
    id: summary
    attributes:
      label: Summary
      description: Describe the problem in 1 or 2 short sentences.
      placeholder: When I … in Ladybird, …
    validations:
      required: true
  - type: dropdown
    id: os
    attributes:
      label: Operating system
      options:
        - Linux
        - macOS
        - Windows
        - Android
    validations:
      required: true
  - type: textarea
    id: reproduction-steps
    attributes:
      label: Steps to reproduce
      description: Describe the exact steps we can follow to reproduce the problem.
      value: |
        1.
        2.
        3.
    validations:
      required: true
  - type: textarea
    id: expected-behavior
    attributes:
      label: Expected behavior
      description: Describe what you expected to happen when you followed [the steps you described above](#description-reproduction-steps).
    validations:
      required: true
  - type: textarea
    id: actual-behavior
    attributes:
      label: Actual behavior
      description: Describe what actually happened when you followed [the steps you described above](#description-reproduction-steps).
    validations:
      required: true
  - type: markdown
    id: reduced-test-case
    attributes:
      value: |
        <!-- add some vertical whitespace -->
        ## Reduced test case
        Either provide the URL for a [reduced test case](https://github.com/LadybirdBrowser/ladybird/blob/master/ISSUES.md#how-you-can-write-a-reduced-test-case) that reproduces the problem, or else HTML/SVG/etc. source for a reduced test case.
        > [!IMPORTANT]
        > A [reduced test case](https://github.com/LadybirdBrowser/ladybird/blob/master/ISSUES.md#how-you-can-write-a-reduced-test-case) may be the most important thing you can give us; without it, we’re much less likely to isolate the cause.
  - type: input
    id: reduced-test-case-url
    attributes:
      label: URL for a reduced test case
      description: |
        Provide the URL for a [reduced test case](https://github.com/LadybirdBrowser/ladybird/blob/master/ISSUES.md#how-you-can-write-a-reduced-test-case) that reproduces the problem (e.g., using a site such as [CodePen](https://codepen.io/pen/), [JS Bin](https://jsbin.com), or [JSFiddle](https://jsfiddle.net)). Or if you don’t have a [reduced test case](https://github.com/LadybirdBrowser/ladybird/blob/master/ISSUES.md#how-you-can-write-a-reduced-test-case), at least provide the URL for a website/page that causes the problem. Otherwise just enter `N/A` here.
    validations:
      required: true
  - type: textarea
    id: reduced-test-case-source
    attributes:
      label: HTML/SVG/etc. source for a reduced test case
      description: If you’ve not provided the URL for a [reduced test case](https://github.com/LadybirdBrowser/ladybird/blob/master/ISSUES.md#how-you-can-write-a-reduced-test-case) that reproduces the problem, then paste in below the HTML/SVG/etc. source for a reduced test case. Otherwise just enter `N/A` here. What you paste in will be formatted as a code block — so, no need to put code fence backticks around it.
      value:
      render: html
    validations:
      required: true
  - type: markdown
    attributes:
      value: |
        <!-- add some vertical whitespace -->
  - type: textarea
    id: log-output
    attributes:
      label: |
        Log output and (if possible) backtrace
      description: |
        Copy and paste the full log output from Ladybird — including error messages — as well as any backtrace that Ladybird reported.
        What you paste in will be formatted as a code block — so, no need to put code fence backticks around it.
      value:
      render: shell
    validations:
      required: true
  - type: textarea
    id: screenshots
    attributes:
      label: Screenshots or screen recordings
      description: Drag and drop in below any screenshots or screen recordings you’ve made which show the problem.
  - type: textarea
    id: build-flags
    attributes:
      label: Build flags or config settings
      description: If you’re building with any non-default build flags or other non-default config settings in your environment, list them out below.
  - type: checkboxes
    id: will-patch
    attributes:
      label: Contribute a patch?
      description: |
        If you plan to contribute a patch for this issue yourself, please check the box below — to tell us and others looking at the issue that someone’s already working on it. If you do check this box, please try to send a pull request within 7 days or so.
      options:
        - label: I’ll contribute a patch for this myself.
  - type: markdown
    attributes:
      value: |
        <!-- add some vertical whitespace -->
        ## :heart: Become a Ladybird supporter
        Ladybird is funded entirely by sponsorships and donations from people and companies who care about the open web.\
        We accept one-time and recurring monthly donations via [**Donorbox**](https://donorbox.org/ladybird).