Commit Graph

254 Commits

michael faith
5ccfcd17ff
feat(eslint-plugin-react-hooks): merge rule from eslint-plugin-react-compiler into react-hooks plugin (#32416)
This change merges the `react-compiler` rule from
`eslint-plugin-react-compiler` into the `eslint-plugin-react-hooks`
plugin. In order to do the move in a way that keeps commit history with
the moved files, but also not remove them from their origin until a
future cleanup change can be done, I did the `git mv` first, and then
recreated the files that were moved in their original places, as a
separate commit. Unfortunately GH shows the moved files as new instead
of the ones that are truly new. But in the IDE and `git blame`, commit
history is intact with the moved files.

Since this change adds new dependencies, and one of those dependencies
has a higher `engines` declaration for `node` than what the plugin
currently has, this is technically a breaking change and will have to go
out as part of a major release.
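For consumers, enabling the merged rule might look roughly like this in a flat config (a sketch; the exact rule name exposed after the merge is an assumption here):

```js
// eslint.config.js: hypothetical setup after the merge; the
// 'react-hooks/react-compiler' rule name is an assumption.
import reactHooks from 'eslint-plugin-react-hooks';

export default [
  {
    plugins: {'react-hooks': reactHooks},
    rules: {
      'react-hooks/rules-of-hooks': 'error',
      'react-hooks/exhaustive-deps': 'warn',
      // Previously provided by eslint-plugin-react-compiler:
      'react-hooks/react-compiler': 'error',
    },
  },
];
```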

### Related Changes
- https://github.com/facebook/react/pull/32458

---------

Co-authored-by: Lauren Tan <poteto@users.noreply.github.com>
2025-03-12 21:43:06 -04:00
Sebastian Markbåge
e9252bcdcc
During a Swipe Gesture Render a Clone Offscreen and Animate it Onscreen (#32500)
This is really the essential mechanism of the `useSwipeTransition`
feature.

We don't want to immediately switch to the destination state when
starting a gesture. The effects remain mounted on the current state. We
want the current state to be "live". This is important to, for example,
allow a video to keep playing while starting a swipe (think
TikTok/Reels) and not stop until you've committed the action. The only
thing that can be live is the "new" state. Therefore we treat the
destination as the "old" state and perform a reverse animation from
there.

Ideally we could apply the old state to the DOM tree, take a snapshot
and then revert it back in the mutation of `startViewTransition`.
Unfortunately, the way `startViewTransition` was designed it always
paints one frame of the "old" state which would lead this to cause a
flicker.

To work around this, we need to create a clone of any View Transition
boundary that might be mutated and then render that offscreen. That way
we can render the "current" state on screen and the "destination" state
offscreen for the screenshots. Being mutated can be either due to React
doing a DOM mutation or if a child boundary resizes that causes the
parent to relayout. We don't have to do this for insertions or deletions
since they only appear on one side.

The worst case scenario is that we have to clone the whole root. That's
what this first PR implements. We clone the container and if it's not
absolutely positioned, we position it on top of the current one. If the
container is `document` or `<html>` we instead clone the `<body>` tag
since it's the only one we can insert a duplicate of. If the container
is deep in the tree we clone just that, even though technically we should
probably clone the whole document in that case; this keeps the impact
smaller. Ideally though we'd never hit this case. In fact, if we clone
the document we issue a warning (always for now) since you probably
should optimize this. In the future I intend to add optimizations when
affected View Transition boundaries are absolutely positioned since they
cannot possibly relayout the parent. This would be the ideal way to use
this feature most efficiently but it still works without it.

Since we render the "old" state outside the viewport, we need to then
adjust the animation to put it back into the viewport. This is the
trickiest part to get right while still preserving any customization of
the View Transitions done using CSS. This current approach reapplies all
the animations with adjusted keyframes.

In the case of an "exit" the pseudo-element itself is positioned outside
the viewport but since we can't programmatically update the style of the
pseudo-element itself we instead adjust all the keyframes to put it back
into the viewport. If there is no animation on the group we add one.

In the case of an "update" the pseudo-element is positioned on the new
state which is already inside the viewport. However, the auto-generated
animation of the group has a starting keyframe that starts outside the
viewport. In this case we need to adjust that keyframe.
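Roughly, the keyframe adjustment amounts to something like this (a hand-written sketch of the idea, not the actual implementation; the selector and offset are illustrative):

```js
// Sketch: shift a view-transition group's auto-generated animation so the
// clone that was rendered offscreen appears back inside the viewport.
function adjustGroupKeyframes(groupPseudoSelector, offsetY) {
  for (const animation of document.getAnimations()) {
    const effect = animation.effect;
    if (!(effect instanceof KeyframeEffect)) continue;
    if (effect.pseudoElement !== groupPseudoSelector) continue;
    const keyframes = effect.getKeyframes();
    // Rewrite the starting keyframe to compensate for the offscreen position.
    keyframes[0].transform =
      `translateY(${offsetY}px) ${keyframes[0].transform ?? ''}`;
    effect.setKeyframes(keyframes);
  }
}
```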

In the future I might explore a technique that inserts stylesheets
instead of mutating the animations. It might be simpler. But whatever
hacks maximize compatibility are best.
2025-03-04 20:10:08 -05:00
Sebastian "Sebbie" Silbermann
22e39ea72e
Include component name in "async/await is not supported" error message if available (#32435) 2025-02-25 10:48:44 +01:00
Sebastian Markbåge
88479c6fc3
Rerender useSwipeTransition when direction changes (#32379)
We can only render one direction at a time with View Transitions. When
the direction changes we need to do another render in the new direction
(returning previous or next).

To determine direction we store the position we started at and anything
moving to a lower value (left/up) is "previous" direction (`false`) and
anything else is "next" (`true`) direction.
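In code terms the heuristic is roughly (a simplified sketch):

```js
// Simplified sketch: compare the current gesture position against the
// position recorded when the gesture started.
function isNextDirection(startPosition, currentPosition) {
  // Moving toward a lower value (left/up) means "previous" (false);
  // anything else is treated as "next" (true).
  return currentPosition >= startPosition;
}
```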

For the very first render we won't know which direction you're going
since you're still on the initial position. It's useful to start the
render to allow the view transition to take control before anything
shifts around so we start from the original position. This is not
guaranteed though if the render suspends.

For now we start the first render by guessing the direction such as if
we know that prev/next are the same as current. With the upcoming auto
start mode we can guess more accurately there before we start. We can
also add explicit APIs to `startGesture` but ideally it wouldn't matter.
Ideally we could just start after the first change in direction from the
starting point.
2025-02-20 18:13:09 -05:00
Sebastian Markbåge
a53da6abe1
Add useSwipeTransition Hook Behind Experimental Flag (#32373)
This Hook will be used to drive a View Transition based on a gesture.

```js
const [value, startGesture] = useSwipeTransition(prev, current, next);
```

The `enableSwipeTransition` flag will depend on `enableViewTransition`
flag but we may decide to ship them independently. This PR doesn't do
anything interesting yet. There will be a lot more PRs to build out the
actual functionality. This is just wiring up the plumbing for the new
Hook.

This first PR is mainly concerned with how the whole thing starts (and stops).
The core API is the `startGesture` function (although there will be
other conveniences added in the future). You can call this to start a
gesture with a source provider. You can call this multiple times in one
event to batch multiple Hooks listening to the same provider. However,
each render can only handle one source provider at a time and so it does
one render per scheduled gesture provider.
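A hypothetical usage sketch (the provider type, a `ScrollTimeline` here, and the return value of `startGesture` are assumptions about the API shape, not confirmed behavior):

```js
// Hypothetical sketch: start a gesture-driven transition from a touch
// handler. The ScrollTimeline provider and the returned stop function are
// assumptions, not the settled API.
function Carousel({prev, current, next, scrollerRef}) {
  const [page, startGesture] = useSwipeTransition(prev, current, next);

  function handleTouchStart() {
    const provider = new ScrollTimeline({source: scrollerRef.current});
    // Calling startGesture for several Hooks with the same provider in the
    // same event batches them into a single gesture render.
    return startGesture(provider);
  }

  return <div onTouchStart={handleTouchStart}>{page}</div>;
}
```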

This uses a separate `GestureLane` to drive gesture renders by marking
the Hook as having an update on that lane. Then we schedule a render. These
renders should be blocking and in the same microtask as the
`startGesture` to ensure it can block the paint. So it's similar to
sync.

It may not be possible to finish it synchronously e.g. if something
suspends. If so, it just tries again later when it can like any other
render. This can also happen because it also may not be possible to
drive more than one gesture at a time like if we're limited to one View
Transition per document. So right now you can only run one gesture at a
time in practice.

These renders never commit. This means that we can't clear the
`GestureLane` the normal way. Instead, we have to clear only the root's
`pendingLanes` if we don't have any new renders scheduled. Then wait
until something else updates the Fiber after all gestures on it have
stopped before it really clears.
2025-02-13 16:06:01 -05:00
lauren
2c5fd26c07
[crud] Merge useResourceEffect into useEffect (#32205)
Merges the useResourceEffect API into useEffect while keeping the
underlying implementation the same. useResourceEffect will be removed in
the next diff.

To fork between behavior we rely on a `typeof` check for the updater or
destroy function in addition to the CRUD feature flag. This does now
have to be checked every time (instead of inlined statically like before
due to them being different hooks) which will incur some non-zero amount
(possibly negligible) of overhead for every effect.
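The fork itself is conceptually just a runtime check along these lines (an illustrative sketch, not the actual source):

```js
// Illustrative sketch: with the CRUD flag enabled, detect the resource
// overload of useEffect by the type of the extra update/destroy arguments
// instead of forking statically into a separate hook.
function isResourceEffectOverload(enableCrudFlag, update, destroy) {
  return (
    enableCrudFlag &&
    (typeof update === 'function' || typeof destroy === 'function')
  );
}
```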
---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/facebook/react/pull/32205).
* #32206
* __->__ #32205
2025-02-11 14:18:50 -05:00
Josh Story
b25bcd460f
[Fizz] Support Suspense boundaries anywhere (#32069)
Suspense is meant to be composable but there has been a longstanding
limitation with using Suspense above the `<body>` tag of an HTML
document due to peculiarities of how HTML is parsed. For instance if you
used Suspense to render an entire HTML document and had a fallback that
might flush an alternate Document, the comment nodes which describe this
boundary scope won't be where they need to be in the DOM for client
React to properly hydrate them. This is somewhat a problem of our own
making in that we have a concept of a Preamble and we leave the closing
body and html tags behind until streaming has completed which produces a
valid HTML document that also matches the DOM structure that would be
parsed from it. However Preambles as a concept are too important to
features like Float to imagine moving away from this model and so we can
either choose to just accept that you cannot use Suspense anywhere
except inside the `<body>` or we can build special support for Suspense
into react-dom that has a coherent semantic with how HTML documents are
written and parsed.

This change implements Suspense support for react-dom/server by
correctly serializing boundaries during rendering, prerendering, and
resuming on the server. It does not yet support Suspense everywhere on
the client but this will arrive in a subsequent change. In practice
Suspense cannot be used above the `<body>` tag today so this is not a
breaking change since no programs in the wild could be using this
feature anyway.

React's streaming rendering of HTML doesn't lend itself to replacing the
contents of the documentElement, head, or body of a Document. These are
already special cased in fiber as HostSingletons and similarly for Fizz
the values we render for these tags must never be updated by the Fizz
runtime once written. To accomplish these we redefine the Preamble as
the tags that represent these three singletons plus the contents of the
document.head. If you use Suspense above any part of the Preamble then
nothing will be written to the destination until the boundary is no
longer pending. If the boundary completes then the preamble from within
that boundary will be output. If the boundary postpones or errors then
the preamble from the fallback will be used instead.

Additionally, by default anything that is not part of the preamble is
implicitly in body scope. This leads to the somewhat counterintuitive
consequence that the comment nodes we use to mark the borders of a
Suspense boundary in Fizz can appear INSIDE the preamble that was
rendered within it.

```typescript
render((
  <Suspense>
    <html lang="en">
      <body>
        <div>hello world</div>
      </body>
    </html>
  </Suspense>
))
```
will produce an HTML document like this
```html
<!DOCTYPE html>
<html lang="en">
  <head></head>
  <body>
    <!--$--> <-- this is the comment Node representing the outermost Suspense
    <div>hello world</div>
    <!--/$-->
  </body>
</html>
```

Later when I update Fiber to support Suspense anywhere hydration will
similarly start implicitly in the document body when the root is part of
the preamble (the document or one of its singletons).
2025-01-17 10:54:11 -08:00
Sebastian Markbåge
a4d122f2d1
Add <ViewTransition> Component (#31975)
This will provide the opt-in for using [View
Transitions](https://developer.mozilla.org/en-US/docs/Web/API/View_Transition_API)
in React.

View Transitions only trigger for async updates like `startTransition`,
`useDeferredValue`, Actions or `<Suspense>` revealing from fallback to
content. Synchronous updates provide an opt-out but also guarantee that
they commit immediately which View Transitions can't.

There's no need to opt-in to View Transitions at the "cause" side like
event handlers or actions. They don't know what UI will change and
whether that has an animated transition described.

Conceptually the `<ViewTransition>` component is like a DOM fragment
that transitions its children in its own isolate/snapshot. The API works
by wrapping a DOM node or inner component:

```js
import {ViewTransition} from 'react';

<ViewTransition><Component /></ViewTransition>
```

The default is `name="auto"` which will automatically assign a
`view-transition-name` to the inner DOM node. That way you can add a
View Transition to a Component without controlling its DOM nodes styling
otherwise.

A difference between this and the browser's built-in
`view-transition-name: auto` is that switching the DOM nodes within the
`<ViewTransition>` component preserves the same name so this example
cross-fades between the DOM nodes instead of causing an exit and enter:

```js
<ViewTransition>{condition ? <ComponentA /> : <ComponentB />}</ViewTransition>
```

This becomes especially useful with `<Suspense>` as this example
cross-fades between Skeleton and Content:

```js
<ViewTransition>
  <Suspense fallback={<Skeleton />}>
    <Content />
  </Suspense>
</ViewTransition>
```

Whereas this example triggers an exit of the Skeleton and an enter of
the Content:

```js
<Suspense fallback={<ViewTransition><Skeleton /></ViewTransition>}>
  <ViewTransition><Content /></ViewTransition>
</Suspense>
```

Managing instances and keys becomes extra important.

You can also specify an explicit `name` property for example for
animating the same conceptual item from one page onto another. However,
best practice is to properly namespace these since they can easily
collide. It's also useful to add an `id` to it if available.

```js
<ViewTransition name="my-shared-view">
```

The model in general is the same as plain `view-transition-name` except
React manages a set of heuristics for when to apply it. A problem with
the naive View Transitions model is that it overly opts in every
boundary that *might* transition into transitioning. This leads to
unfortunate effects like things floating around when unrelated updates
happen. It also causes the whole document to animate, which means that
nothing is clickable in the meantime. It makes it not useful for smaller
and more local transitions. Best practice is to add
`view-transition-name` only right before you're about to need to animate
the thing. This is tricky to manage globally on complex apps and is not
compositional. Instead we let React manage when a `<ViewTransition>`
"activates" and add/remove the `view-transition-name`. This is also when
React calls `startViewTransition` behind the scenes while it mutates the
DOM.

I've come up with a number of heuristics that I think will make it a lot
easier to coordinate this. The principle is that we only activate a
boundary if something updates that particular boundary. I hope that one day
maybe browsers will have something like these built-in and we can remove
our implementation.

A `<ViewTransition>` only activates if:

- If a mounted Component renders a `<ViewTransition>` within it outside
the first DOM node, and it is within the viewport, then that
ViewTransition activates as an "enter" animation. This avoids inner
"enter" animations trigger when the parent mounts.
- If an unmounted Component had a `<ViewTransition>` within it outside
the first DOM node, and it was within the viewport, then that
ViewTransition activates as an "exit" animation. This avoids inner
"exit" animations triggering when the parent unmounts.
- If an explicitly named `<ViewTransition name="...">` is deep within an
unmounted tree and one with the same name appears in a mounted tree at
the same time, then both are activated as a pair, but only if they're
both in the viewport. This avoids these triggering "enter" or "exit"
animations when going between parents that don't have a pair.
- If an already mounted `<ViewTransition>` is visible and a DOM
mutation that might affect how it's painted happens within its
children but outside any nested `<ViewTransition>`. This allows it to
"cross-fade" between its updates.
- If an already mounted `<ViewTransition>` resizes or moves as the
result of direct DOM node siblings changing or moving around. This
allows insertion, deletion and reorders into a list to animate all
children. It is only within one DOM node though, to keep unrelated
changes in the parent from triggering this. If an item is outside the
viewport before and after, then it's skipped to avoid things flying
across the screen.
- If a `<ViewTransition>` boundary changes size, due to a DOM mutation
within it, then the parent activates (or the root document if there are
no more parents). This ensures that the container can cross-fade to
avoid abrupt relayout. This can be avoided by using absolutely
positioned children. When this can avoid bubbling to the root document,
whatever is not animating is still responsive to clicks during the
transition.

Conceptually each DOM node has its own default that activates the parent
`<ViewTransition>` or no transition if the parent is the root. That
means that if you add a DOM node like `<div><ViewTransition><Component
/></ViewTransition></div>` this won't trigger an "enter" animation since
it was the div that was added, not the ViewTransition. Instead, it might
cause a cross-fade of the parent ViewTransition or no transition if it
had no parent. This ensures that only explicit boundaries perform coarse
animations instead of every single node which is really the benefit of
the View Transitions model. This ends up working out well for simple
cases like switching between two pages immediately while transitioning
one floating item that appears on both pages. Because only the floating
item transitions by default.

Note that it's possible to add manual `view-transition-name` with CSS or
`style={{ viewTransitionName: 'auto' }}` that always transitions as long
as something else has a `<ViewTransition>` that activates. For example a
`<ViewTransition>` can wrap a whole page for a cross-fade but inside of
it an explicit name can be added to something to ensure it animates as a
move when something else related changes its layout, instead of just
cross-fading it along with the Page, which would be the default.
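For example (a sketch of that pattern; the component names are illustrative):

```js
// Sketch: the page cross-fades via the surrounding <ViewTransition>, while
// the manually named child animates as a move whenever its layout changes.
<ViewTransition>
  <Page>
    <div style={{viewTransitionName: 'sidebar-item-42'}}>
      <SidebarItem />
    </div>
  </Page>
</ViewTransition>
```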

There's more PRs coming with some optimizations, fixes and expanded
APIs. This first PR explores the above core heuristic.

---------

Co-authored-by: Sebastian "Sebbie" Silbermann <silbermann.sebastian@gmail.com>
2025-01-08 12:11:18 -05:00
Jack Pope
909ed63e0a
Clean up context access profiling experiment (#31806)
We introduced the `unstable_useContextWithBailout` API to run compiler
based experiments. This API was designed to be an experiment proxy for
alternative approaches which would be heavier to implement. The
experiment turned out to be inconclusive. Since most of our performance
critical usage is already optimized, we weren't able to find a clear win
with this approach.

Since we don't have further plans for this API, let's clean it up.
2024-12-16 12:32:07 -05:00
Ricky
5b0ef217ef
s/server action/server function (#31005)
## Overview

Changes the error message to say "Server Functions" instead of "Server
Actions" since this error can fire in cases like:

```
<button onClick={serverFunction} />
```

Which is calling a server function, not a server action.
2024-12-02 10:02:31 -05:00
lauren
c11c9510fa
[crud] Fix deps comparison bug (#31599)
Fixes a bug with the experimental `useResourceEffect` hook where we
would compare the wrong deps when there happened to be another kind of
effect preceding the ResourceEffect. To do this correctly we need to add
a pointer to the ResourceEffect's identity on the update.

I also unified the previously separate push effect impls for resource
effects since they are always pushed together as a unit.
2024-11-20 16:54:41 -05:00
Sebastian Markbåge
92c0f5f85f
Track separate SuspendedOnAction flag by rethrowing a separate SuspenseActionException sentinel (#31554)
This lets us track separately if something was suspended on an Action
using useActionState rather than suspended on Data.

This approach feels quite bloated and it seems like we might eventually
want to read more information about the Promise that suspended and
the context it suspended in. As a more general reason for suspending.

The way useActionState works in combination with the prewarming is quite
unfortunate because 1) it renders blocking to update the isPending flag
whether you use it or not 2) it prewarms and suspends the useActionState
3) then it does a third render to get back into the useActionState
position again.
2024-11-15 17:52:24 -05:00
Sebastian Silbermann
5c9243d153
Rename renderToMarkup to renderToHTML (#30689) 2024-08-14 19:35:16 +02:00
Sebastian Silbermann
e0a0e65412
Move react-html to react-markup (#30688) 2024-08-14 19:22:44 +02:00
Jack Pope
1350a85980
Add unstable context bailout for profiling (#30407)
**This API is not intended to ship. This is a temporary unstable hook
for internal performance profiling.**

This PR exposes `unstable_useContextWithBailout`, which takes a compare
function in addition to Context. The comparison function is run to
determine if Context propagation and render should bail out earlier.
`unstable_useContextWithBailout` returns the full Context value, same as
`useContext`.

We can profile this API against `useContext` to better measure the cost
of Context value updates and gather more data around propagation and
render performance.

The bailout logic and test cases are based on
https://github.com/facebook/react/pull/20646

Additionally, this implementation allows multiple values to be compared
in one hook by returning a tuple to avoid requiring additional Context
consumer hooks.
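A hypothetical usage sketch (the compare-function contract is paraphrased from the description above):

```js
// Hypothetical sketch: read the full context value but bail out of
// propagation/render when the compared tuple is unchanged.
const appState = unstable_useContextWithBailout(
  AppStateContext,
  (value) => [value.theme, value.locale] // values compared for the bailout
);
```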
2024-07-26 14:38:24 -04:00
Sebastian Markbåge
c3cdbec0a7
[Flight] Add context for non null prototype error (#30293)
We already added this for other thrown errors, not just console.errors.
There's a production form of this. We just missed adding this context.

Mainly the best context is the line number though which comes from owner
stacks.
2024-07-08 22:51:59 -04:00
Sebastian Markbåge
6e169fc65d
[Flight] Allow String Chunks to Passthrough in Node streams and renderToMarkup (#30131)
It can be efficient to accept raw string chunks to pass through a stream
instead of encoding them into a binary copy first.

Previously our Flight parsers didn't accept receiving string chunks.
That's partly because we sometimes need to encode binary chunks anyway
so string only transport isn't enough but some chunks can be strings.
This adds a partial ability for chunks to be received as strings.

However, accepting strings comes with some downsides. E.g. if the
strings are split up we need to buffer it which compromises the perf for
the common case. If the chunk represents binary data, then we'd need to
encode it back into a typed array which would require a TextEncoder
dependency in the parser. If the string chunk represents a byte length
encoded string we don't know how many unicode characters to read without
measuring them in terms of binary - also requiring a TextEncoder.

This PR is mainly intended for use for pass-through within the same
memory. We can simplify the implementation by assuming that any string
chunk is passed as the original chunk. This requires that the server
stream config doesn't arbitrarily concatenate strings (e.g. large
strings should not be concatenated which is probably a good heuristic
anyway). It also means that this is not suitable to be used with for
example receiving string chunks on the client by passing them through
SSR hydration data - except if the encoding that way was only used with
chunks that were already encoded as strings by Flight.

Web streams mostly just work on binary data anyway so they can't use
this.

In Node.js streams we concatenate precomputed and small strings into
larger buffers. It might make sense to do that using string ropes
instead. However, in the meantime we can at least pass large strings
that are outside our buffer view size as raw strings. There's no benefit
to us eagerly encoding those.

Also, let Node accept string chunks as long as they're following our
expected constraints. This lets us test the mixed protocol using
pass-throughs. This can also be useful when the RSC server is in the
same environment as the SSR server as they don't have to go from strings
to typed arrays back to strings.

Now we can also use this in the pass-through used in renderToMarkup.
This lets us avoid the dependency on TextDecoder/TextEncoder in that
package.
2024-07-03 13:25:04 -04:00
Sebastian Markbåge
1e241f9d6c
Add renderToMarkup for Client Components (#30121)
Follow up to #30105.

This supports `renderToMarkup` in a non-RSC environment (not the
`react-server` condition).

This is just a Fizz renderer but it errors at runtime when you use
state, effects or event handlers that would require hydration - like the
RSC version would. (Except RSC can give early errors too.)
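A hedged usage sketch of that runtime error (the import path uses the straw `react-html` package name from #30105; the exact error text is not quoted here):

```js
import {useState} from 'react';
import {renderToMarkup} from 'react-html'; // straw package name from #30105

function Counter() {
  const [count] = useState(0); // state would require hydration
  return <button>{count}</button>;
}

// Rejects at runtime instead of silently emitting non-interactive HTML.
await renderToMarkup(<Counter />);
```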

To do this I have to move the `react-html` builds to a new `markup`
dimension out of the `dom-legacy` dimension so that we can configure
this differently from `renderToString`/`renderToStaticMarkup`.
Eventually that dimension can go away though if deprecated. That also
helps us avoid dynamic configuration and we can just compile in the
right configuration so the split helps anyway.

One consideration is that if a compiler strips out useEffects or inlines
initial state from useState, then it would not get called and the error
wouldn't happen. Therefore to preserve semantics, a compiler would need
to inject some call that can check the current renderer and whether it
should throw.

There is an argument that it could be useful to not error for these
because it's possible to write components that work with SSR but are
just optionally hydrated. However, there's also an argument that doing
that silently is too easy to lead to mistakes and it's better to error -
especially for the e-mail use case where you can't take it back but you
can replay a queue that had failures. There are other ways to
conditionally branch components intentionally. Besides if you want it to
be silent you can still use renderToString (or better yet
renderToReadableStream).

The primary mechanism is the RSC environment and the client-environment
is really the secondary one that's only there to support legacy
environments. So this also ensures parity with the primary environment.
2024-06-28 09:25:10 -04:00
Sebastian Markbåge
ffec9ec5b5
Add new package with renderToMarkup export (#30105)
Name of the package is tbd (straw: `react-html`). It's a new package
separate from `react-dom` though and can be used as a standalone package
- e.g. also from a React Native app.

```js
import {renderToMarkup} from '...';
const html = await renderToMarkup(<Component />);
```

The idea is that this is a helper for rendering HTML that is not
intended to be hydrated. It's primarily intended to support a subset of
HTML that can be used as embedding and not served as HTML documents from
HTTP. For example as e-mails or in RSS/Atom feeds or other
distributions. It's a successor to `renderToStaticMarkup`.

A few differences:

- This doesn't support "Client Components". It can only use the Server
Components subset. No useEffect, no useState etc. since it will never be
hydrated. Use of those are errors.
- You also can't pass Client References so you can't use components
marked with `"use client"`.
- Unlike `renderToStaticMarkup` this does support async so you can
suspend and use data from these components.
- Unlike `renderToReadableStream` this does not support streaming or
Suspense boundaries and any error rejects the promise. Since there's no
feasible way to "client render" or patch up the document.
- Form Actions are not supported since in an embedded environment
there's no place to post back to across versions. You can render plain
forms with fixed URLs though.
- You can't use any resource preloading like `preload()` from
`react-dom`.

## Implementation

This first version in this PR only supports Server Components since
that's the thing that doesn't have an existing API. Might add a Client
Components version later that errors.

We don't want to maintain a completely separate implementation for this
use case so this uses the `dom-legacy` build dimension to wire up a
build that encapsulates a Flight Server -> Flight Client -> Fizz stream
to render Server Components that then get SSR:ed.

There's no problem to use a Flight Client in a Server Component
environment since it's already supported for Server-to-Server. Both of
these use a bundler config that just errors for Client References though
since we don't need any bundling integration and this is just a
standalone package.

Running Fizz in a Server Component environment is a problem though
because it depends on "react" and it needs the client version.
Therefore, for this build we embed the client version of "react" shared
internals into the build. It doesn't need anything to be able to use
those APIs since you can't call the client APIs anyway.

One unfortunate thing though is that since Flight currently needs to go
to binary and back, we need TextEncoder/TextDecoder to be available but
this shouldn't really be necessary. Also since we use the legacy stream
config, large strings that use byteLengthOfChunk error atm. This needs
to be fixed before shipping. I'm not sure what would be the best
layering though that isn't unnecessarily burdensome to maintain. Maybe
some kind of pass-through protocol that would also be useful in general
- e.g. when Fizz and Flight are in the same process.

---------

Co-authored-by: Sebastian Silbermann <silbermann.sebastian@gmail.com>
2024-06-27 12:09:40 -04:00
Jan Kassens
b565373afd
lint: enable reportUnusedDisableDirectives and remove unused suppressions (#28721)
This enables linting against unused suppressions and removes the ones
that were unused.
2024-06-21 12:24:32 -04:00
Josh Story
c4b433f8cb
[Flight] Allow aborting during render (#29764)
Stacked on #29491

Previously if you aborted during a render the currently rendering task
would itself be aborted, which would cause the entire model to be replaced
by the aborted error rather than just the slot currently being rendered.

This change updates the abort logic to mark currently rendering tasks as
aborted but allowing the current render to emit a partially serialized
model with an error reference in place of the current model.

The intent is to support aborting from rendering synchronously, in
microtasks (after an await or in a .then) and in lazy initializers. We
don't specifically support aborting from things like proxies that might
be triggered during serialization of props
2024-06-06 14:41:27 -07:00
Josh Story
def67b9b32
Fix stylesheet typo in 29693 (#29732)
stylehsheet -> stylesheet
2024-06-03 07:51:21 -07:00
Josh Story
47d0c30246
[Fiber][Float] Error when a host fiber changes "flavor" (#29693)
Host Components can exist as four semantic types

1. regular Components (Vanilla obv)
2. singleton Components
3. hoistable components
4. resources

Each of these component types has its own rules related to mounting
and reconciliation; however, they are not directly modeled as their own
unique fiber type. This is partly for code size but also because
reconciling the inner type of these components would be in a very hot
path in fiber creation and reconciliation and it's just not practical to
do this logic check here.

Right now we have three Fiber types used to implement these 4 concepts
but we probably need to reconsider the model and think of Host
Components as a single fiber type with an inner implementation. Once we
do this we can regularize things like transitioning between a resource
and a regular component or a singleton and a hoistable instance. The
cases where these transitions happen today aren't particularly common
but they can be observed and currently the handling of these transitions
is incomplete at best and buggy at worst. The most egregious case is the
link type. This can be a regular component (stylesheet without
precedence), a hoistable component (non-stylesheet link tags) or a
resource (stylesheet with a precedence), and if you have a single JSX
slot that tries to reconcile transitions between these types it just
doesn't work well.
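For reference, those three `<link>` flavors look roughly like this (a hand-written illustration):

```js
// Illustration of the three <link> "flavors" one JSX slot could shift between:
function Links() {
  return (
    <>
      {/* regular component: stylesheet without a precedence prop */}
      <link rel="stylesheet" href="/a.css" />
      {/* hoistable component: a non-stylesheet link tag */}
      <link rel="preconnect" href="https://example.com" />
      {/* resource: stylesheet with a precedence */}
      <link rel="stylesheet" href="/b.css" precedence="default" />
    </>
  );
}
```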

This commit adds an error for when a Hoistable goes from Instance to
Resource. Currently this is only possible for `<link>` elements going to
and from stylesheets with precedence. Hopefully we'll be able to remove
this error and implement this as an inner type before we encounter new
categories for the Hoistable types.

Detecting type shifting to and from regular components is harder to do
efficiently because we don't want to reevaluate the type on every update
for host components, which is currently not required and would add
overhead to a very hot path.

Singletons can't really type shift in their one practical implementation
(DOM), so they are only a problem in theory, not practice.
2024-06-03 07:47:45 -07:00
Sophie Alpert
38e3b23483
Tweak error message for "Should have a queue" (#29626) 2024-05-29 08:41:10 -07:00
Andrew Clark
681a4aa810
Throw if React and React DOM versions don't match (#29236)
Throw an error during module initialization if the version of the
"react-dom" package does not match the version of "react".

We used to be more relaxed about this, because the "react" package
changed so infrequently. However, we now have many more features that
rely on an internal protocol between the two packages, including Hooks,
Float, and the compiler runtime. So it's important that both packages
are versioned in lockstep.

Before this change, a version mismatch would often result in a cryptic
internal error with no indication of the root cause.

Instead, we will now compare the versions during module initialization
and immediately throw an error to catch mistakes as early as possible
and provide a clear error message.
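Conceptually the check amounts to something like this (a sketch of the idea, not the actual source; the embedded version string is illustrative):

```js
// Sketch: compare the two package versions at module initialization and
// fail fast with a clear message instead of a cryptic internal error later.
import {version as reactVersion} from 'react';

const reactDOMVersion = '19.0.0'; // illustrative: react-dom's own version

if (reactVersion !== reactDOMVersion) {
  throw new Error(
    'Incompatible React versions: react-dom@' + reactDOMVersion +
      ' requires react@' + reactDOMVersion +
      ', but react@' + reactVersion + ' was loaded.'
  );
}
```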
2024-05-28 14:06:30 -04:00
Sebastian Markbåge
6c409acefd
[Flight Reply] Encode Objects Returned to the Client by Reference (#29010)
Stacked on #28997.

We can use the technique of referencing an object by its row + property
name path for temporary references - like we do for deduping. That way
we don't need to generate an ID for temporary references. Instead, they
can just be an opaque marker in the slot and it has the implicit ID of
the row + path.

Then we can stash all objects, even the ones that are actually available
to read on the server, as temporary references. Without adding anything
to the payload since the IDs are implicit. If the same object is
returned to the client, it can be referenced by reference instead of
serializing it back to the client. This also helps preserve object
identity.

We assume that the objects are immutable when they pass the boundary.

I'm not sure if this is worth it but with this mechanism, if you return
the `FormData` payload from a `useActionState` it doesn't have to be
serialized on the way back to the client. This is a common pattern for
having access to the last submission as "default value" to the form
fields. However you can still control it by replacing it with another
object if you want. In MPA mode, the temporary references are not
configured and so it needs to be serialized in that case. That's
required anyway for hydration purposes.

I'm not sure if people will actually use this in practice though or if
FormData will always be destructured into some other object like with a
library that turns it into typed data, and back. If so, the object
identity is lost.
2024-05-09 20:00:56 -04:00
Sebastian Markbåge
151cce3740
Track Stack of JSX Calls (#29032)
This is the first step to experimenting with a new type of stack traces
behind the `enableOwnerStacks` flag - in DEV only.

The idea is to generate stacks that are more like if the JSX was a
direct call even though it's actually a lazy call. Not only can you see
which exact JSX call line number generated the erroring component but if
that's inside an abstraction function, which function called that
function and if it's a component, which component generated that
component. For this to make sense it really needs to be the "owner" stack
rather than the parent stack like we do for other component stacks. On
one hand it has more precise information but on the other hand it also
loses context. For most types of problems the owner stack is the most
useful though since it tells you which component rendered this
component.

The problem with the platform in its current state is that there's two
ways to deal with stacks:

1) `new Error().stack` 
2) `console.createTask()`

The nice thing about `new Error().stack` is that we can extract the
frames and piece them together in whatever way we want. That is great
for constructing custom UIs like error dialogs. Unfortunately, we can't
take custom stacks and set them in the native UIs like Chrome DevTools.

The nice thing about `console.createTask()` is that the resulting stacks
are natively integrated into the Chrome DevTools in the console and the
breakpoint debugger. They also automatically follow source mapping and
ignoreLists. The downside is that there's no way to extract the async
stack outside the native UI itself so this information cannot be used
for custom UIs like errors dialogs. It also means we can't collect this
on the server and then pass it to the client for server components.

The solution here is that we use both techniques and collect both an
`Error` object and a `Task` object for every JSX call.
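A rough sketch of collecting both per JSX call (illustrative only; the names do not mirror the internal implementation):

```js
// Sketch (DEV only): capture an Error whose frames can be parsed for custom
// UIs, plus a console Task for native DevTools async stacks when available.
function collectDebugStackInfo(componentName) {
  const debugStack = new Error('owner-stack');
  const debugTask =
    typeof console.createTask === 'function'
      ? console.createTask('<' + componentName + '>')
      : null;
  return {debugStack, debugTask};
}
```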

The main concern about this approach is the performance so that's the
main thing to test. It's certainly too slow for production but it might
also be too slow even for DEV.

This first PR doesn't actually use the stacks yet. It just collects them
as the first step. The next step is to start utilizing this information
in error printing etc.

For RSC we pass the stack along across over the wire. This can be
concatenated on the client following the owner path to create an owner
stack leading back into the server. We'll later use this information to
restore fake frames on the client for native integration. Since this
information quickly gets pretty heavy if we include all frames, we strip
out the top frame. We also strip out everything below the functions that
call into user space in the Flight runtime. To do this we need to figure
out the frames that represents calling out into user space. The
resulting stack is typically just the one frame inside the owner
component's JSX callsite. I also eagerly strip out things we expect to
be ignoreList:ed anyway - such as `node_modules` and Node.js internals.
2024-05-09 12:23:05 -04:00
Sebastian Markbåge
6bac4f2f31
[Fizz] Fallback to client replaying actions if we're trying to serialize a Blob (#28987)
This follows the same principle as in #28611.

We cannot serialize Blobs of a form data into HTML because you can't
initialize a file input to some value. However the serialization of
state in an Action can contain blobs. In this case we do error but
outside the try/catch that recovers to error to client replaying instead
of MPA mode. This errors earlier to ensure that this works.

Testing this is a bit annoying because JSDOM doesn't have any of the
Blob methods but the Blob needs to be compatible with FormData and the
FormData needs to be compatible with `<form>` nodes in these tests. So I
polyfilled those in JSDOM with some hacks.

A possible future enhancement would be to encode these blobs in a base64
mode instead and have some way to receive them on the server. It's just
a matter of layering this. I think the RSC layer's `FORM_DATA`
implementation can pass some flag to encode as base64 and then have
decodeAction include some way to parse them. That way this case would
work in MPA mode too.
2024-05-07 21:53:10 -04:00
Sebastian Markbåge
3b551c8284
Rename the react.element symbol to react.transitional.element (#28813)
We have changed the shape (and the runtime) of React Elements. To help
avoid precompiled or inlined JSX having subtle breakages or deopting
hidden classes, I renamed the symbol so that we can early error if
private implementation details are used or mismatching versions are
used.

Why "transitional"? Well, because this is not the last time we'll change
the shape. This is just a stepping stone to removing the `ref` field on
the elements in the next version so we'll likely have to do it again.
2024-04-22 12:39:56 -04:00
Sebastian Markbåge
7909d8eabb
[Flight] Encode ReadableStream and AsyncIterables (#28847)
This adds support in Flight for serializing four kinds of streams:

- `ReadableStream` with objects as a model. This is a single shot
iterator so you can read it only once. It can contain any value
including Server Components. Chunks are encoded as is so if you send in
10 typed arrays, you get the same typed arrays out on the other side.
- Binary `ReadableStream` with `type: 'bytes'` option. This supports the
BYOB protocol. In this mode, the receiving side just gets `Uint8Array`s
and they can be split across any single byte boundary into arbitrary
chunks.
- `AsyncIterable` where the `AsyncIterator` function is different than
the `AsyncIterable` itself. In this case we assume that this might be a
multi-shot iterable and so we buffer its value and you can iterate it
multiple times on the other side. We support the `return` value as a
value in the single completion slot, but you can't pass values in
`next()`. If you want single-shot, return the AsyncIterator instead.
- `AsyncIterator`. These get serialized as a single-shot as it's just
an iterator.

`AsyncIterable`/`AsyncIterator` yield Promises that are instrumented
with our `.status`/`.value` convention so that they can be synchronously
looped over if available. They are also lazily parsed upon read.

We can't do this with `ReadableStream` because we use the native
implementation of `ReadableStream` which owns the promises.

The format is a leading row that indicates which type of stream it is.
Then a new row with the same ID is emitted for every chunk. Followed by
either an error or close row.

`AsyncIterable`s can also be returned as children of Server Components
and then they're conceptually the same as fragment arrays/iterables.
They can't actually be used as children in Fizz/Fiber but there's a
separate plan for that. Only `AsyncIterable` not `AsyncIterator` will be
valid as children - just like sync `Iterable` is already supported but
single-shot `Iterator` is not. Notably, neither of these streams
represent updates over time to a value. They represent multiple values
in a list.

When the server stream is aborted we also close the underlying stream.
However, closing a stream on the client, doesn't close the underlying
stream.

A couple of possible follow ups I'm not planning on doing right now:

- [ ] Free memory by releasing the buffer if an Iterator has been
exhausted. Single shots could be optimized further to release individual
items as you go.
- [ ] We could clean up the underlying stream if the only pending data
that's still flowing is from streams and all the streams have cleaned
up. It's not very reliable though. It's better to do cancellation for
the whole stream - e.g. at the framework level.
- [ ] Implement smarter Binary Stream chunk handling. Currently we wait
until we've received a whole row for binary chunks and copy them into
consecutive memory. We need this to preserve semantics when passing
typed arrays. However, for binary streams we don't need that. We can
just send whatever pieces we have so far.
2024-04-16 12:20:07 -04:00
Andrew Clark
374b5d26c2
Scaffolding for requestFormReset API (#28808)
Based on:

- #28804 

---

This adds a new ReactDOM export called requestFormReset, including
setting up the export and creating a method on the internal ReactDOM
dispatcher. It does not yet add any implementation.

Doing this in its own commit for review purposes.

The API itself will be explained in the next PR.
2024-04-10 16:55:15 -04:00
Josh Story
4c12339ce3
[DOM] move flushSync out of the reconciler (#28500)
This PR moves `flushSync` out of the reconciler. There is still an
internal implementation that is used when these semantics are needed for
React methods such as `unmount` on roots.

This new isomorphic `flushSync` is only used in builds that no longer
support legacy mode.

Additionally all the internal uses of flushSync in the reconciler have
been replaced with more direct methods. There is a new
`updateContainerSync` method which updates a container but forces it to
the Sync lane and flushes passive effects if necessary. This combined
with flushSyncWork can be used to replace flushSync for all instances of
internal usage.

We still maintain the original flushSync implementation as
`flushSyncFromReconciler` because it will be used as the flushSync
implementation for FB builds. This is because it has special legacy mode
handling that the new isomorphic implementation does not need to
consider. It will be removed from production OSS builds by closure
though
2024-04-08 09:03:20 -07:00
Sebastian Markbåge
6090cab099
Use a Wrapper Error for onRecoverableError with a "cause" Field for the real Error (#28736)
We basically have four kinds of recoverable errors:

- Hydration mismatches.
- Server errored but client didn't.
- Hydration render errored but client render didn't (in Root or Suspense
boundary).
- Concurrent render errored but synchronous render didn't.

For the first three we log an additional error that the root or Suspense
boundary didn't error. This provides some context about what happened.
However, the problem is that for hydration mismatches that's unnecessary
extra context that is confusing. We also don't log any additional
context for concurrent render errors that could recover. This used to be
the only recoverable error so it didn't need extra context but now we
need to distinguish them. When we log these to `reportError` it's
confusing to just see the error because you didn't see anything error on
the page. It's also hard to group them together as one.

In this PR, I remove the unnecessary context for hydration mismatches.

For hydration and concurrent errors, I now wrap them in an error that
describes that what happened but then use the new `cause` field to link
the original error so we can keep that as the cause. The error that
happened was that hydration client rendered or that you deopted to sync
render; the cause of that error is some other error.
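For consumers, the `cause` field can be read in a custom handler, roughly like this (a usage sketch; `container` and `<App />` are placeholders):

```js
// Usage sketch: log both the wrapper message and the original error that
// caused the recoverable client render or sync-render deopt.
import {hydrateRoot} from 'react-dom/client';

hydrateRoot(container, <App />, {
  onRecoverableError(error, errorInfo) {
    console.error(error); // the wrapper describing what happened
    if (error.cause) {
      console.error('Caused by:', error.cause); // the error that actually threw
    }
  },
});
```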

For server errors, we control the Error object so I already had added
some context to that error object's message. Since we hide the message
in prod, it's nice not to have the raw message in DEV either. We could
potentially split these into two errors for parity though.
2024-04-03 21:53:07 -04:00
Sebastian Markbåge
f7aa5e0aa3
Move Hydration Mismatch Errors to Throw or Log Once (Kind of) (#28502)
Stacked on #28476.

We used to `console.error` for every mismatch we found, up until the
error we threw for the hydration mismatch.

This changes it so that we build up a set of diffs up until we either
throw or complete hydrating the root/suspense boundary. If we throw, we
append the diff to the error message which gets passed to
onRecoverableError (which by default is also logged to console). If we
complete, we append it to a `console.error`.

Since we early abort when something throws, it effectively means that we
can only collect multiple diffs if there were preceding non-throwing
mismatches - i.e. only properties mismatched but tag name matched.

There can still be multiple logs if multiple sibling Suspense
boundaries all error hydrating but then they're separate errors
entirely.

We still log an extra line about something erroring but I think the goal
should be that it leads to a single recoverable or console.error.

This doesn't yet actually print the diff as part of this message. That's
in a follow up PR.
2024-03-26 20:01:41 -04:00
Sebastian Markbåge
72e02a8350
[Flight Reply] Don't allow Symbols to be passed to a reply (#28610)
As mentioned in #28609 there's a potential security risk if you allow a
value passed to the server to spoof Elements, because it allows a hacker
to POST cross origin. This is only an issue if your framework allows
this which it shouldn't but it seems like we should provide an extra
layer of security here.

```js
function action(errors, payload) {
  try {
    ...
  } catch (x) {
    return [newError].concat(errors);
  }
}
```
```js
const [errors, formAction] = useActionState(action);
return <div>{errors}</div>;
```
This would allow you to construct a payload where the previous "errors"
set includes something like `<script src="danger.js" />`.

We could block only elements from being received but it could
potentially be a risk with creating other React types like Context too.
We use symbols as a way to securely brand these.

Most JS don't use this kind of branding with symbols like we do. They're
generally properties which we don't support anyway. However in theory
someone else could be using them like we do. So in an abundance of
carefulness I just ban all symbols from being passed (except by
temporary reference) - not just ours.

This means that the format isn't fully symmetric even beyond just React
Nodes.

#28611 allows code that includes symbols/elements to continue working
but may have to bail out to replaying instead of no JS sometimes.
However, you still can't access the symbols inside the server - they're
by reference only.
2024-03-21 16:53:02 -04:00
Sebastian Markbåge
83409a1fdd
[Flight] Encode React Elements in Replies as Temporary References (#28564)
Currently you can accidentally pass a React Element to a Server Action. It
warns but in prod it actually works because we can encode the symbol and
otherwise it's mostly a plain object. It only works if you only pass
host components and no function props etc. which makes it potentially
error later. The first thing this does it just early hard error for
elements.

I made Lazy work by unwrapping though since that will be replaced by
Promises later which works.

Our protocol is not fully symmetric in that elements flow from Server ->
Client. Only the Server can resolve Components and only the client
should really be able to receive host components. It's not intended that
a Server can actually do something with them other than passing them to
the client.

In the case of a Reply, we expect the client to be stateful. It's
waiting for a response. So anything we can't serialize we can still pass
by reference to an in memory object. So I introduce the concept of a
TemporaryReferenceSet which is an opaque object that you create before
encoding the reply. This then stashes any unserializable values in this
set and encode the slot by id. When a new response from the Action then
returns we pass the same temporary set into the parser which can then
restore the objects. This lets you pass a value by reference to the
server and back into another slot.
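A hypothetical client-side sketch of that flow (the module path, helper names and options are assumptions, not the exact public API):

```js
// Hypothetical sketch: create a temporary reference set before encoding the
// reply, then hand the same set to the response parser so by-reference slots
// can be restored to the original in-memory objects.
import {
  createTemporaryReferenceSet,
  encodeReply,
  createFromFetch,
} from 'react-server-dom-webpack/client'; // assumed entry point

async function callServerAction(args) {
  const temporaryReferences = createTemporaryReferenceSet();
  const body = await encodeReply(args, {temporaryReferences});
  const response = fetch('/action', {method: 'POST', body});
  return createFromFetch(response, {temporaryReferences});
}
```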

For example it can be used to render children inside a parent tree from
a server action:

```
export async function Component({ children }) {
  "use server";
  return <div>{children}</div>;
}
```

(You wouldn't normally do this due to the waterfalls but for advanced
cases.)

A common scenario where this comes up accidentally today is in
`useActionState`.

```
export function action(state, formData) {
  "use server";
   if (errored) {
     return <div>This action <strong>errored</strong></div>;
   }
   return null;
}
```

```
const [errors, formAction] = useActionState(action);
return <div>{errors}</div>;
```

It feels like I'm just passing the JSX from server to client. However,
because `useActionState` also sends the previous state *back* to the
server this should not actually be valid. Before this PR this actually
worked accidentally. You get a DEV warning but it used to work in prod.
Once you do something like pass a client reference it won't work though. We
could perhaps make client references work by stashing where we got them
from but it wouldn't work with all possible JSX.

By adding temporary references to the action implementation this will
work again - on the client. It'll also be more efficient since we don't
send back the JSX content that you shouldn't introspect on the server
anyway.

However, a flaw here is that the progressive enhancement of this case
won't work because we can't use temporary references for progressive
enhancement since there's no in memory stash. What is worse is that it
won't error if you hydrate. ~It also will error late in the example
above because the first state is "undefined" so invoking the form once
works - it errors on the second attempt when it tries to send the error
state back again.~ It actually errors on the first invocation because we
need to eagerly serialize "previous state" into the form. So at least
that's better.

I think maybe the solution to this particular pattern would be to allow
JSX to serialize if you have no temporary reference set, and remember
client references so that client references can be returned back to the
server as client references. That way anything you could send from the
server could also be returned to the server. But it would only deopt to
serializing it for progressive enhancement. The consequence of that
would be that there's a lot of JSX that might accidentally seem like it
should work but it's only if you've gotten it from the server before
that it works. This would have to pair them somehow though since
you can't take a client reference from one implementation of Flight and
use it with another.
2024-03-19 16:59:52 -04:00
Josh Story
1c02b9d2bd
[DOM] disable legacy mode behind flag (#28468)
Adds a flag to disable legacy mode. Currently this flag is used to cause
legacy mode apis like render and hydrate to throw. This change also
removes render, hydrate, unmountComponentAtNode, and
unstable_renderSubtreeIntoContainer from the experimental entrypoint.
Right now for Meta builds this flag is off (legacy mode is still
supported). In OSS builds this flag matches __NEXT_MAJOR__ which means
it currently is on in experimental. This means that after merging
legacy mode is effectively removed from experimental builds. While this
is a breaking change, experimental builds are not stable and users can
pin to older versions or update their use of react-dom to no longer use
legacy mode APIs.
2024-03-04 08:19:17 -08:00
Ricky
1940cb27b2
Update /link URLs to react.dev (#28477)
Depends on https://github.com/reactjs/react.dev/pull/6670 [merged]
2024-03-03 17:34:33 -05:00
Andrew Clark
2f8f776022
Move ref type check to receiver (#28464)
The runtime contains a type check to determine if a user-provided ref is
a valid type — a function or object (or a string, when
`disableStringRefs` is off). This currently happens during child
reconciliation. This changes it to happen only when the ref is passed to
the component that the ref is being attached to.

This is a continuation of the "ref as prop" change — until you actually
pass a ref to a HostComponent, class, etc, ref is a normal prop that has
no special behavior.
2024-02-29 21:26:36 -05:00
Sebastian Markbåge
d579e77482
Remove method name prefix from warnings and errors (#28432)
This pattern is a pet peeve of mine. I don't consider this best practice
and so most don't have these prefixes. Very inconsistent.

At best this is useless noise that you have to parse because the
information is also in the stack trace.

At worst these are misleading because they're highlighting something
internal (like validateDOMNesting) which even suggests an internal bug.
Even the ones public to React aren't necessarily what you called because
you might be calling a wrapper around it.

That would be properly reflected in a stack trace - which can also be
properly ignore-listed so that the first frame you see is your callsite,
which might be `render()` in react-testing-library rather than
`createRoot()` for example.
2024-02-23 15:16:54 -05:00
Sebastian Markbåge
2e84e16299
[Flight] Better error message if you pass a function as a child to a client component (#28367)
Similar to #28362 but if you pass it to a client component.
2024-02-19 12:36:16 -05:00
Sebastian Markbåge
e41ee9ea70
Throw a better error when Lazy/Promise is used in React.Children (#28280)
We could in theory actually support this case by throwing a Promise when
it's used inside a render. Allowing it to be synchronously unwrapped.
However, it's a bit sketchy because we officially only support this in
the render's child position or in `use()`.

Another alternative could be to actually pass the Promise/Lazy to the
callback so that you can reason about it and just return it again or
even unwrapping with `use()` - at least for the forEach case maybe.
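
A small sketch of the situation (names are made up):

```js
import * as React from 'react';

async function loadText() {
  return 'Hello';
}

function List({children}) {
  // React.Children.map can't synchronously unwrap a Promise (or a
  // React.lazy node); it now throws a clearer error instead of failing
  // further down in a confusing way.
  return <ul>{React.Children.map(children, child => <li>{child}</li>)}</ul>;
}

// A promise as a child is fine in a plain render child position
// (unwrapped via use()/Suspense), but hits the improved error here:
// <List>{loadText()}</List>
```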
2024-02-08 17:10:19 -05:00
Sebastian Markbåge
b229f540e2
[Flight] Emit debug info for a Server Component (#28272)
This adds a new DEV-only row type `D` for DebugInfo. If we see this in
prod, that's an error. It can contain extra debug information about the
Server Components (or Promises) that were compiled away during the
server render. It's DEV-only since this can contain sensitive
information (similar to errors) and since it'll be a lot of data, but
it's worth using the same stream for simplicity rather than a
side-channel.

In this first pass it's just the Server Component's name but I'll keep
adding more debug info to the stream, and it won't always just be a
Server Component's stack frame.

Each row can get more debug rows data streaming in as it resolves and
renders multiple server components in a row.

The data structure is just a side-channel and it would be perfectly fine
to ignore the D rows and it would behave the same as prod. With this
data structure though the data is associated with the row ID / chunk, so
you can't have inline meta data. This means that an inline Server
Component that doesn't get an ID otherwise will need to be outlined. The
way I outline Server Components is using a direct reference where it's
synchronous though so on the client side it behaves the same (i.e.
there's no lazy wrapper in this case).

In most cases the `_debugInfo` is on the Promises that we yield and we
also expose this on the `React.Lazy` wrappers. In the case where it's a
synchronous render it might attach this data to Elements or Arrays
(fragments) too.

In a future PR I'll wire this information up with Fiber to stash it in
the Fiber data structures so that DevTools can pick it up. This property
and the information in it is not limited to Server Components. The name
of the property that we look for probably shouldn't be `_debugInfo`
since it's semi-public. Should consider the name we use for that.

If it's a synchronous render that returns a string or number (text node)
then we don't have anywhere to attach them to. We could add a
`React.Lazy` wrapper for those but I chose to prioritize keeping the
data structure untouched. Can be useful if you use Server Components to
render data instead of React Nodes.
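
Very roughly, and purely as a DEV-time sketch (the property is internal and
its shape may change):

```js
// After a row resolves on the client, the Promise for it (or the
// React.Lazy wrapper that outlines it) can carry the debug info, at
// this point roughly just the Server Component's name:
//
//   rowPromise._debugInfo; // e.g. [{name: 'ProfilePage'}]
```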
2024-02-08 11:01:32 -05:00
dan
472854820b
[Flight] Delete Server Context (#28225)
Server Context was never documented, and has been deprecated in
https://github.com/facebook/react/pull/27424.

This PR removes it completely, including the implementation code.

Notably, `useContext` is removed from the shared subset, so importing it
from a React Server environment should now be a build error in
environments that are able to enforce that.
2024-02-05 22:39:15 +00:00
Josh Story
1219d57fc9
[Fizz] Support aborting with Postpone (#28183)
Semantically if you make your reason for aborting a Postpone instance
the render should not hit the error pathways but should instead follow
the postpone pathways. It's awkward today to actually get your hands on
a Postpone instance because you have to catch the throw from postpone
and then pass that into `abort()` or `AbortController.abort()`
(depending on the renderer API you are using)

This change makes it so that in most circumstances if you abort with a
postpone the `onPostpone` handler will be called and the Suspense
boundaries still pending will be put into client render mode with the
appropriate postpone digest to avoid triggering recoverable error pathways
on the client.

Similar to postponing in the shell during a resume or render, however, if
you abort before the shell is complete we will
fatally error. The fatal error is contextualized by React to avoid
passing the postpone object itself to the `onError` and related options.
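
A rough sketch of the pattern, assuming the experimental `unstable_postpone`
export and a web-streams renderer (`<App />` is a placeholder):

```js
import {unstable_postpone as postpone} from 'react';
import {renderToReadableStream} from 'react-dom/server';

// Get your hands on a Postpone instance by catching the throw.
let timeoutReason;
try {
  postpone('Timed out waiting for data');
} catch (x) {
  timeoutReason = x;
}

export async function handler(request) {
  const controller = new AbortController();
  // Aborting with the Postpone instance puts still-pending Suspense
  // boundaries into client-render mode instead of the error pathway.
  setTimeout(() => controller.abort(timeoutReason), 5000);
  const stream = await renderToReadableStream(<App />, {
    signal: controller.signal,
    onPostpone(reason) {
      console.log('Postponed:', reason);
    },
  });
  return new Response(stream, {headers: {'Content-Type': 'text/html'}});
}
```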
2024-02-01 07:14:08 -08:00
Josh Story
2983249dd2
[Fizz] implement onHeaders and headersLengthHint options (#27641)
Adds a new option to `react-dom/server` entrypoints.

`onHeaders: (headers: Headers) => void` (non node envs)
`onHeaders: (headers: { Link?: string }) => void` (node envs)

When any `renderTo...` or `prerender...` function is called and this
option is provided the supplied function will be called sometime on or
before completion of the render with some preload link headers.

When provided during a `renderTo...` the callback will usually be called
after the first pass at work. The idea here is we want to get a set of
headers to start the browser loading well before the shell is ready. We
don't wait for the shell because if we did we may as well send the
preloads as tags in the HTML.

When provided during a `prerender...` the callback will be called after
the entire prerender is complete. The idea here is we are not responding
to a live request and it is preferable to capture as much as possible
for preloading as Headers in case the prerender was unable to finish the
shell.

Currently the following resources are always preloaded as headers when
the option is provided
1. prefetchDNS and preconnects
2. font preloads
3. high priority image preloads

Additionally if we are providing headers when the shell is incomplete
(regardless of whether it is render or prerender) we will also include
any stylesheet Resources (ones with a precedence prop)

There is a second option `maxHeadersLength?: number` which allows you to
specify the maximum length of the header content in unicode code units.
This is what you get when you read the length property of a string in
javascript. It's important to note that this is not the same as the
utf-8 byte length when these headers are serialized in a Response. The
utf8 representation may be the same size, or larger but it will never be
smaller.

If you do not supply a `maxHeadersLength` we default to `2000`. This was
chosen as half the value of the max headers length supported by commonly
known web servers and CDNs. Many browsers and web servers can support
significantly more headers than this so you can use this option to
increase the headers limit. You can also of course use it to be even
more conservative. Again it is important to keep in mind there is no
direct translation between the max length and the bytelength and so if
you want to stay under a certain byte length you need to be potentially
more aggressive in the maxHeadersLength you choose.

Conceptually `onHeaders` could be called more than once as new headers
are discovered, as long as we haven't started flushing yet. But since most
server APIs, including the web standard Response, only allow you to set
headers once, the current implementation will only call it one time.
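
A minimal usage sketch in a web environment (`<App />` and the response wiring
are placeholders for whatever your server setup looks like):

```js
import {renderToReadableStream} from 'react-dom/server';

export async function handler(request) {
  let preloadHeaders = null;
  const stream = await renderToReadableStream(<App />, {
    // Called once, on or before completion, with early preload headers
    // (preconnects, font preloads, high priority image preloads, ...).
    onHeaders(headers) {
      preloadHeaders = headers; // a Headers instance in web environments
    },
    maxHeadersLength: 1000, // optional; defaults to 2000 code units
  });
  const responseHeaders = new Headers(preloadHeaders ?? undefined);
  responseHeaders.set('Content-Type', 'text/html');
  return new Response(stream, {headers: responseHeaders});
}
```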
2023-11-07 10:16:33 -08:00
Andrew Clark
77c4ac2ce8
[useFormState] Allow sync actions (#27571)
Updates useFormState to allow a sync function to be passed as an action.

A form action is almost always async, because it needs to talk to the
server. But since we support client-side actions too, there's no reason
we can't also allow sync actions.

I originally chose not to allow them to keep the implementation simpler
but it's not really that much more complicated because we already
support this for actions passed to startTransition. So now it's
consistent: anywhere an action is accepted, a sync client function is a
valid input.
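
For example, something like this now works without wrapping the function in an
async one (a rough sketch):

```js
import {useFormState} from 'react-dom';

// A sync client function is now a valid action.
function increment(previousCount, formData) {
  return previousCount + 1;
}

function Counter() {
  const [count, formAction] = useFormState(increment, 0);
  return (
    <form action={formAction}>
      <p>Count: {count}</p>
      <button type="submit">Increment</button>
    </form>
  );
}
```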
2023-10-31 23:32:31 -04:00
Sebastian Markbåge
e61a60fac0
[Flight] Enforce "simple object" rule in production (#27502)
We only allow plain objects that can be faithfully serialized and
deserialized through JSON to pass through the serialization boundary.

It's a bit too expensive to do all the possible checks in production so
we do most checks in DEV, so it's still possible to pass an object in
production by mistake. This is currently exacerbated by frameworks
because the logs on the server aren't visible enough. Even so, it's
possible to make a mistake by not testing in DEV or by missing a
conditional branch. That might have security implications if that object
wasn't supposed to be passed.

We can't rely on only checking if the prototype is `Object.prototype`
because that wouldn't work with cross-realm objects which is
unfortunate. However, if it isn't, we can check whether it has exactly
one prototype on the chain which would catch the common error of passing
a class instance.
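
Roughly the shape of the check (a sketch, not the actual implementation):

```js
// Accept objects with at most one prototype on the chain, which also
// works for plain objects created in another realm.
function isSimpleObject(object) {
  const proto = Object.getPrototypeOf(object);
  if (proto === null) {
    return true; // e.g. Object.create(null)
  }
  // Plain objects have exactly Object.prototype (of some realm) here.
  return Object.getPrototypeOf(proto) === null;
}

class User {}
isSimpleObject({id: 1}); // true
isSimpleObject(new User()); // false: User.prototype -> Object.prototype -> null
```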
2023-10-11 12:18:49 -04:00
Sebastian Markbåge
0fba3ecf73
[Fizz] Reset error component stack and fix error messages (#27456)
The way we collect component stacks right now are pretty fragile.

We expect that we'll call captureBoundaryErrorDetailsDev whenever an
error happens. That resets lastBoundaryErrorComponentStackDev to null,
but if we don't call it, the stack just lingers and never gets set to
anything new, which leaks the previous component stack into the next
error. So we need to reset it in a bunch of places.

This is still broken with erroredReplay because it has the inverse
problem that abortRemainingReplayNodes can call
captureBoundaryErrorDetailsDev more than one time. So the second
boundary won't get a stack.

We probably should try to figure out an alternative way to carry along
the stack. Perhaps WeakMap keyed by the error object.

This also fixes an issue where we weren't invoking the onShellReady
event if we error during a replay. That event is a bit weird for resuming
because we're probably really supposed to just invoke it immediately if
we have already flushed the shell in the prerender, which is currently
always the case.
Right now, it gets invoked later than necessary because you could have a
resumed hole ready before a sibling in the shell is ready and that's
blocked.
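
Something like the WeakMap alternative mentioned above might look like this
(a sketch, not what this PR does):

```js
// Carry the component stack alongside the error object itself, so it
// can't linger past the error it belongs to.
const componentStacksDev = new WeakMap();

function rememberStack(error, componentStack) {
  componentStacksDev.set(error, componentStack);
}

function readStack(error) {
  return componentStacksDev.get(error) ?? null;
}
```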
2023-10-04 16:48:12 -04:00
Sebastian Markbåge
843ec07021
[Flight] Taint APIs (#27445)
This lets a registered object or value be "tainted", which we block from
crossing the serialization boundary. It's only allowed to stay
in-memory.

This is an extra layer of protection against mistakes of transferring
data from a data access layer to a client. It doesn't provide perfect
protection, because it doesn't trace through derived values and
substrings. So it shouldn't be used as the only security layer but more
layers are better.

`taintObjectReference` is for specific object instances, not any nested
objects or values inside that object. It's useful for preventing specific
objects from getting passed as-is. It ensures that you don't
accidentally leak values in a specific context. It can be for security
reasons like tokens, privacy reasons like personal data or performance
reasons like avoiding passing large objects over the wire.

It might be a privacy violation to leak the age of a specific user, but
the number itself isn't blocked in any other context. As soon as the
value is extracted and passed specifically without the object, it can
therefore leak.

`taintUniqueValue` is useful for high entropy values such as hashes,
tokens or crypto keys that are very unique values. In that case it can
be useful to taint the actual primitive values themselves. These can be
encoded as a string, bigint or typed array. We don't currently check for
this value in a substring or inside other typed arrays.

Since values can be created from different sources they don't just
follow garbage collection. In this case an additional object must be
provided that defines the lifetime of this value, i.e. how long it should
be blocked. It can be `globalThis` for essentially forever, but that
risks leaking memory forever when you're dealing with dynamic values
like reading a token from a database. So in that case the idea is that
you pass the object that might end up in cache.

A request is the only thing that is expected to do any work. The
principle is that you can derive values out of a tainted entry during a
request, including stashing them in a per-request cache.
What you can't do is store a derived value in a global module level
cache. At least not without also tainting the object.
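
Usage looks roughly like this, assuming the experimental_-prefixed exports and
a hypothetical data access layer (`db`):

```js
import {
  experimental_taintObjectReference,
  experimental_taintUniqueValue,
} from 'react';

export async function getUser(id) {
  const user = await db.users.find(id);
  // Block this exact object from crossing the serialization boundary.
  experimental_taintObjectReference(
    'Do not pass the full user record to the client.',
    user,
  );
  // Block this exact value for as long as `user` stays in memory.
  experimental_taintUniqueValue(
    'Do not pass the session token to the client.',
    user,
    user.sessionToken,
  );
  return user;
}
```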
2023-10-02 13:55:39 -04:00