When you use the `createFromFetch` API, we assume that the request
started at the same time as when you call `createFromFetch`, but in
principle you could use it with a Promise that started earlier and just
happens to resolve to a `Response`.
When you use `createFromReadableStream`, that is almost certainly the
case. E.g. you might have started the request way earlier and you don't
call `createFromReadableStream` until you get the headers back (when the
fetch promise resolves).
This adds an option to pass in the start time for debug purposes if you
started the request before starting to parse it.
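For illustration, a minimal sketch of how this option might be passed,
assuming it's named `startTime` (the description doesn't name the
option, so treat that as hypothetical):

```ts
import {createFromReadableStream} from 'react-server-dom-webpack/client';

async function load(url: string) {
  // The request starts here, well before we begin parsing the body.
  const requestStart = performance.now();
  const response = await fetch(url);
  // Pass the original start time so the debug I/O entry reflects the
  // real request, not the moment we started parsing.
  return createFromReadableStream(response.body!, {
    startTime: requestStart, // hypothetical option name for debug timing
  });
}
```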
One thing that can suspend is the downloading of the RSC stream itself.
This tracks an I/O entry for each Promise (`SomeChunk<T>`) that
represents the request to the RSC stream. As the value we use the
`Response` for `createFromFetch` (or the `ReadableStream` for
`createFromReadableStream`). The start time is when you called those.
Since we're not awaiting the whole stream, each I/O entry represents the
part of the stream up until it got unblocked. However, in a production
environment, with TLS packets and buffering, the chunks received by the
client don't in practice align exactly with the boundary of each row;
they get buffered into larger chunks. From testing, it seems like
multiples of 16kb or 64kb uncompressed are common. To simulate a
production environment we group rows into roughly 64kb chunks if they
arrive in rapid sequence. Note that this might still be too small to
give a good picture, because with the throttle many boundaries might be
skipped anyway, so this might show too many entries.
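A rough sketch of the grouping idea (not the actual implementation; the
5ms window for what counts as "rapid sequence" is an assumption):

```ts
const TARGET_BYTES = 64 * 1024; // ~64kb uncompressed
const RAPID_WINDOW_MS = 5; // assumed threshold for "rapid sequence"

type IOEntry = {start: number; end: number; byteLength: number};

function groupChunks(
  chunks: Array<{time: number; byteLength: number}>,
): IOEntry[] {
  const entries: IOEntry[] = [];
  for (const chunk of chunks) {
    const current = entries[entries.length - 1];
    if (
      current !== undefined &&
      chunk.time - current.end < RAPID_WINDOW_MS &&
      current.byteLength < TARGET_BYTES
    ) {
      // Fold this chunk into the current ~64kb I/O entry.
      current.end = chunk.time;
      current.byteLength += chunk.byteLength;
    } else {
      // Start a new I/O entry.
      entries.push({
        start: chunk.time,
        end: chunk.time,
        byteLength: chunk.byteLength,
      });
    }
  }
  return entries;
}
```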
The React DevTools will see each I/O entry as separate, but dedupe if an
outer boundary already depends on the same chunk. This deduping makes it
so that small boundaries that are blocked on the same chunk don't get
treated as having unique suspenders. If you have a boundary with large
content, then that content will likely be in a separate chunk which is
not in the parent, and then it gets marked as a unique suspender.
This is all just an approximation. The goal is just to highlight that
very large boundaries will very likely suspend even if they don't
suspend on any I/O on the server. In practice, these boundaries can
float around a lot; really, any Suspense boundary might suspend, but
some are more likely than others, which is what this is meant to
highlight.
It also lets you inspect how many bytes need to be transferred before
you can show a particular part of the content, to give you an idea that
it's not just I/O on the server that might suspend.
If you don't use the debug channel, this can be misleading, since the
development-mode stream will have a lot more data in it, which leads to
more chunking.
Similarly to "client references" these I/O infos don't have an "env"
since it's the client that has the I/O and so those are excluded from
flushing in the Server performance tracks.
Note that currently the same Response can appear many times in the same
instance of SuspenseNode in DevTools when there are multiple chunks. In
a follow-up I'll show only the last one per Response at any given level.
Note that when a separate debugChannel is used, it has its own I/O entry
that's on the `_debugInfo` for the debug chunks in that channel.
However, if everything works correctly these should never leak into the
DevTools UI, since they should never be propagated from a debug chunk to
the values awaited by the runtime. This is easy to break though.
When the Flight Client is waiting for pending debug chunks, it drops the
debug info if no writable side of the debug channel is defined. However,
it should instead check whether a readable side is defined.
Fixing this is not only important for browser clients that don't want or
need a return channel, but it's also crucial for server-side rendering,
because the Node and Edge clients only accept a readable side of the
debug channel. So they can't even define a noop writable side as a
workaround.
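In sketch form, assuming a `debugChannel` option with optional
`readable`/`writable` sides, the corrected condition would look roughly
like this:

```ts
type DebugChannel = {
  readable?: ReadableStream<Uint8Array>;
  writable?: WritableStream<Uint8Array>;
};

function shouldWaitForDebugChunks(debugChannel?: DebugChannel): boolean {
  // Before: checked `debugChannel?.writable`, which dropped debug info
  // for Node/Edge clients that can only pass a readable side.
  return debugChannel?.readable !== undefined;
}
```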
When a debug channel is defined, we must ensure that we don't close the
Flight Client's response when the debug channel's readable is done, but
the RSC stream is still flowing. Now, we wait for both streams to end
before closing the response.
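Conceptually (not React's actual internals), the close now waits on both
streams, roughly like:

```ts
async function drain(
  stream: ReadableStream<Uint8Array>,
  onChunk: (chunk: Uint8Array) => void,
): Promise<void> {
  const reader = stream.getReader();
  while (true) {
    const {done, value} = await reader.read();
    if (done) return;
    onChunk(value);
  }
}

async function readBoth(
  rscStream: ReadableStream<Uint8Array>,
  debugReadable: ReadableStream<Uint8Array>,
  processRow: (chunk: Uint8Array) => void,
  processDebugRow: (chunk: Uint8Array) => void,
  closeResponse: () => void,
): Promise<void> {
  // Close the response only once the RSC stream AND the debug channel's
  // readable have both ended, whichever finishes last.
  await Promise.all([
    drain(rscStream, processRow),
    drain(debugReadable, processDebugRow),
  ]);
  closeResponse();
}
```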
When a debug channel is used between the Flight server and a browser
Flight client, we want to allow the same RSC stream to be used for
server-side rendering. To support this, the Edge and Node Flight clients
also need to accept a `debugChannel` option. Without it, debug
information would be missing (e.g. for SSR error stacks), and in some
cases this could result in `Connection closed` errors.
This PR adds support for the `debugChannel` option in the Edge and Node
clients for ESM, Parcel, Turbopack, and Webpack. Unlike the browser
clients, these clients only support a one-way channel, since the Flight
server’s return protocol is not designed for multiple clients.
The implementation follows the approach used in the browser clients, but
excludes the writable parts.
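A hedged sketch of SSR usage with the Edge client; the exact option
shape (a `debugChannel` object carrying only a `readable` side) is
inferred from the description above:

```ts
import {createFromReadableStream} from 'react-server-dom-webpack/client.edge';

declare const rscStream: ReadableStream<Uint8Array>;
declare const debugReadable: ReadableStream<Uint8Array>;
declare const serverConsumerManifest: any;

const root = createFromReadableStream(rscStream, {
  serverConsumerManifest,
  // One-way channel: only the readable side, no writable return path.
  debugChannel: {readable: debugReadable},
});
```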
This lets us pass a writable on the server side and a readable on the
client side to send debug info through a separate channel, so that it
doesn't interfere with the main payload as much. The main payload still
refers to chunks defined in the debug info, though, which means it's
still blocked on it. This ensures that the debug data has loaded by the
time the value is rendered, so that the next step can forward the data.
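On the producing side, this might look roughly like the following
sketch; the `debugChannel: {writable}` option shape is an assumption
based on the description:

```ts
import {renderToReadableStream} from 'react-server-dom-webpack/server.edge';

declare const model: any;
declare const webpackMap: any;

// An identity TransformStream whose readable end is forwarded to the
// client over a separate connection.
const {readable: debugReadable, writable: debugWritable} =
  new TransformStream<Uint8Array, Uint8Array>();

const rscStream = renderToReadableStream(model, webpackMap, {
  debugChannel: {writable: debugWritable}, // assumed option shape
});
// `rscStream` carries the main payload; `debugReadable` carries the
// debug info on the side.
```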
This will be a bit fragile to race conditions until #33665 lands.
Another follow up needed is the ability to skip the debug channel on the
receiving side. Right now it'll block forever if you don't provide one
since we're blocking on the debug data.
This is used to register Server References that exist in the current
environment but also exist on a server it might call into, such as a
remote server.
If the value comes from the remote server in the first place, then this
is called automatically to ensure that you can pass a reference back to
where it came from - even if the `serverModuleMap` option is used. This
was already the case when `serverModuleMap` wasn't passed; this is how
you can pass server references back to the server. However, when we
added `serverModuleMap`, that registration was skipped because we were
getting real functions instead of proxies.
For functions that weren't yet passed from the remote server to the
current server, we can register them eagerly, just like we do for
`import('/server').registerServerReference()`. You can now also do this
with `import('/client').registerServerReference()`. We could make them
shared so you only have to do this once, but it might not be possible to
pass the reference to the remote server, and the remote server might not
even be the same RSC renderer. Therefore I split them. It's up to the
compiler whether it should do that or not; it has to know that any
function you might call might be able to receive it. This is currently
global to a specific RSC renderer.
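For illustration, eager registration from the client entry might look
like this; the `(id, exportName)` arguments mirror the server entry's
`registerServerReference` signature, and both the module id and export
name here are hypothetical:

```ts
import {registerServerReference} from 'react-server-dom-webpack/client';

// A function that exists in this environment but should also be
// serializable back to the remote server it might call into.
async function updateThing(id: string): Promise<void> {
  // ...calls into the remote server
}

// Hypothetical module id / export name for this example.
registerServerReference(updateThing, './actions.js', 'updateThing');
```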
Stacked on #31299.
We already have an option for resolving Client References to other
Client References when consuming an RSC payload on the server.
This lets you resolve Server References on the consuming side when the
environment where you're consuming the RSC payload also has access to
those Server References. Basically, they become like Client References
for this consumer, even though for another consumer they wouldn't be.
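Sketched usage, assuming the map is passed on the server consumer's
manifest (the `serverModuleMap` field name and placement are assumptions
here):

```ts
import {createFromReadableStream} from 'react-server-dom-webpack/client.edge';

declare const rscStream: ReadableStream<Uint8Array>;
declare const moduleMap: any; // resolves Client References for this consumer
declare const serverModuleMap: any; // resolves Server References locally

const root = createFromReadableStream(rscStream, {
  serverConsumerManifest: {
    moduleMap,
    serverModuleMap, // assumed field: makes Server References resolvable here
    moduleLoading: null,
  },
});
```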
Allow aborting the encoding of arguments to a Server Action if a Promise
doesn't resolve. That way, at least part of the arguments can be used on
the receiving side. The pending value is left unresolved in the stream
rather than encoded as an error.
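A sketch of what this enables on the sending side; wiring the abort
through an `AbortSignal` passed to `encodeReply` is my assumption from
the description:

```ts
import {encodeReply} from 'react-server-dom-webpack/client';

const controller = new AbortController();
const neverResolves = new Promise(() => {}); // a Promise that never settles

// Encode arguments that include a pending Promise.
const bodyPromise = encodeReply(['ready', neverResolves], {
  signal: controller.signal, // assumed option for aborting the encoding
});

// After a deadline, abort: the resolved parts of the arguments are still
// usable on the receiving side, while the pending row stays unresolved
// in the stream instead of being encoded as an error.
setTimeout(() => controller.abort(), 1000);
```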
This should error on the receiving side when the stream closes, but it
doesn't right now in the Edge/Browser versions, because closing happens
immediately, before we've had a chance to call `.then()`, so the Chunks
are still in a pending state. This is an existing bug in FlightClient as
well.
This commit updates the file locations and build configurations for
Flight in preparation for new static entrypoints. This follows a
structure similar to Fizz, which has a unified build but exports methods
from different top-level entrypoints. This PR doesn't actually add the
new top-level entrypoints, however; those will arrive in a later update.