This PR reorganizes the `react-dom` entrypoint to only pull in code that
is environment agnostic. Previously, requiring anything from this
entrypoint in any environment loaded the entire client reconciler.
In a prior release we added a server rendering stub which you could
alias in server environments to omit this unnecessary code. After this
change lands, the entrypoint should not load any environment-specific
code.
While a few APIs are truly client (browser) only, such as createRoot and
hydrateRoot, many of the APIs you import from this package are only
useful in the browser but could conceivably be imported in shared code
(components running in Fizz, or shared components as part of an RSC app).
To avoid making these require opting into the client bundle, we are
keeping them in the `react-dom` entrypoint and changing their
implementation so that, in environments where they are not particularly
useful, they do something benign and expected.
#### Removed APIs
The following APIs are being removed in the next major. They have
largely already been deprecated and are part of legacy rendering modes
where concurrent features of React are not available:
* `render`
* `hydrate`
* `findDOMNode`
* `unmountComponentAtNode`
* `unstable_createEventHandle`
* `unstable_renderSubtreeIntoContainer`
* `unstable_runWithPriority`
#### Moved client APIs
These APIs were available on both `react-dom` (with a warning) and
`react-dom/client`. After this change they are only available on
`react-dom/client`.
* `createRoot`
* `hydrateRoot`
#### Retained APIs
These APIs still exist on the `react-dom` entrypoint but have normalized
behavior depending on which renderers are currently in scope:
* `flushSync`: will execute the function (if provided) inside the
flushSync implementation of the Flight Server, Fizz, and Fiber DOM renderers.
* `unstable_batchedUpdates`: This is a noop in concurrent mode, which is
now the only supported behavior since there is no longer a legacy
rendering mode.
* `createPortal`: This just produces an object. It can be called from
anywhere, but since you will probably not have a handle on a DOM node to
pass to it, it will likely warn in environments other than the browser.
* Preloading APIs such as `preload`: These methods will execute the
preload across all renderers currently in scope. Since we resolve the
Request object on the server using AsyncLocalStorage or the current
function stack, in practice only one renderer should act upon the
preload (see the sketch below).
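For example, `preload` can be imported from `react-dom` in shared code and called during render; a minimal sketch (component and resource names are illustrative):

```js
import {preload} from 'react-dom';

// Runs under Fizz, Flight, and the browser alike. Whichever renderer owns the
// current request acts on the hint; for the others the call is benign.
function Heading({children}) {
  preload('/fonts/heading.woff2', {as: 'font', crossOrigin: ''});
  return <h1>{children}</h1>;
}
```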
In addition to these changes, the server rendering stub now just re-exports
everything from `react-dom`. In a future minor we will add a warning
when using the stub, and in the next major we will remove the stub
altogether.
This removes the automatic patching of the global `fetch` function in
Server Components environments to dedupe requests using `React.cache`, a
behavior that some RSC framework maintainers have objected to.
We may revisit this decision in the future, but for now it's not worth
the controversy.
Frameworks that have already shipped this behavior, like Next.js, can
reimplement it in userspace.
I considered keeping the implementation in the codebase and disabling it
by setting `enableFetchInstrumentation` to `false` everywhere, but since
that also disables the tests, it doesn't seem worth it because without
test coverage the behavior is likely to drift regardless. We can just
revert this PR later if desired.
Used this test scenario to clarify how callback refs behave when detached,
depending on whether a cleanup function is provided, in order to update the
documentation in https://github.com/reactjs/react.dev/pull/6770.
Checking it in for additional test coverage and test-based documentation.
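A minimal sketch of the two behaviors the tests cover (the component is illustrative):

```js
function Example() {
  return (
    <div
      ref={(node) => {
        // Without a cleanup function, React calls this ref again with null
        // when the node detaches.
        console.log('attached', node);
        // When a cleanup function is returned, React calls it on detach
        // instead of calling the ref with null.
        return () => console.log('detached');
      }}
    />
  );
}
```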
Stacked on #28849, #28854, #28853. Behind a flag.
If you're following along from the sidelines, this is probably not what
you think it is.
It's NOT a way to get updates to a component over time. An
AsyncIterable works the same way an Iterable already works in React,
which is the same way an Array works: it's a list of children, not the
value of a child over time.
It also doesn't actually render one component at a time. The way it
works is more like awaiting the entire list to become an array and then
it shows up. Before that it suspends the parent.
To actually get these to display one at a time, you have to opt in with
`<SuspenseList>` to describe how they should appear. That's really the
interesting part, and it's not implemented yet.
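A rough sketch of the shape this enables (component names are illustrative; the `<SuspenseList>` reveal behavior is the part that isn't implemented yet):

```js
async function* friendList() {
  yield <Friend key="alice" name="Alice" />;
  yield <Friend key="bob" name="Bob" />;
}

// The AsyncIterable is just a list of children, like an array. Without
// <SuspenseList> the parent suspends until the whole list has resolved.
function Friends() {
  return <SuspenseList revealOrder="forwards">{friendList()}</SuspenseList>;
}
```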
Additionally, since these are effectively Async Functions and uncached
promises, they're not actually fully "supported" on the client yet for
the same reason rendering plain Promises and Async Functions aren't.
They warn. This is only really useful when paired with RSC, which produces
instrumented versions of these. Ideally we'd publish instrumented
helpers for map/filter-style operations that yield new instrumented
AsyncIterables.
The implementation basically just relies on `unwrapThenable` and
otherwise works like a plain Iterator.
There is one quirk with these that is different from plain promises. We
ask for a new iterator each time we rerender. This means that upon retry
we kick off another iteration, which itself might kick off new requests
that block iterating further. To solve this and make it actually
efficient enough to use on the client, we'd need to stash something like
a buffer of the previous iteration (and maybe the iterator) on the iterable
so that we can continue where we left off, or synchronously iterate if we've
seen it before. Similar to our `.value` convention on Promises.
In Fizz, I had to do a special case because when we render an iterator
child we don't actually rerender the parent again like we do in Fiber.
However, it's more efficient to just continue on where we left off by
reusing the entries from the thenable state from before in that case.
We have changed the shape (and the runtime) of React Elements. To help
avoid precompiled or inlined JSX having subtle breakages or deopting
hidden classes, I renamed the symbol so that we can error early if
private implementation details are used or versions are mismatched.
Why "transitional"? Well, because this is not the last time we'll change
the shape. This is just a stepping stone to removing the `ref` field on
the elements in the next version so we'll likely have to do it again.
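As an illustration, precompiled or inlined JSX bakes the element shape into the output, keyed off this symbol; the shape below is a sketch based on my reading of the change, not a stable contract:

```js
// Inlined element as a compiler might emit it (illustrative).
const element = {
  $$typeof: Symbol.for('react.transitional.element'), // renamed from 'react.element'
  type: 'div',
  key: null,
  ref: null,
  props: {children: 'hi'},
};
// A runtime built against the old symbol name can now error early instead of
// silently treating this object as an unknown child.
```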
Stacked on #28853 and #28854.
React supports rendering `Iterable` and will soon support
`AsyncIterable`, as long as it's multi-shot, since during an update we
may have to rerender with new inputs and loop over the iterable again.
Therefore the `Iterator` and `AsyncIterator` types are not supported
directly as a child of React - and really they shouldn't be passed between
Hooks or components either, for the same reason. For parity, that's also the
case when used in Server Components.
However, there is a special case when the rendered component itself is a
generator function. While its return value (the child) is an `Iterator`,
the React Element itself can act as an `Iterable` because we can
re-evaluate the function to create a new generator whenever we need to.
It's also much more convenient to use generator functions than to construct
an `AsyncIterable` by hand. So this is a proposal to special-case the
`Generator`/`AsyncGenerator` returned by an (Async) Generator Function.
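A sketch of what the special case allows (names and data fetching are illustrative):

```js
// An async generator function used directly as a Server Component. React can
// re-invoke the function to get a fresh generator, so the element behaves
// like an AsyncIterable even though each call returns a single-shot
// AsyncGenerator.
async function* StoryList({ids}) {
  for (const id of ids) {
    const story = await loadStory(id); // hypothetical data fetch
    yield <Story key={id} story={story} />;
  }
}
```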
In Flight this means that when we render a Server Component we can
serialize this value as an `Iterable`/`AsyncIterable` since that's
effectively what rendering it on the server reduces down to. That way
Fiber can receive the result in any position.
For SuspenseList this would also need another special case because the
children of SuspenseList represent "rows".
`<SuspenseList><Component /></SuspenseList>` currently is a single "row"
even if the component renders multiple children or is an iterator. This
is currently different if Component is a Server Component, because it'll
reduce down to an array/AsyncIterable and therefore be treated as one
row per child. That differs from `<SuspenseList><Component
/><Component /></SuspenseList>`, since that has a wrapper array and so
is always two rows.
It probably makes sense to special-case a single-element child in
`SuspenseList` to represent a component that generates rows. That way
you can use an `AsyncGeneratorFunction` to do this.
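Concretely, the row counting difference looks roughly like this (illustrative):

```js
// One child: today this is a single row, even if Component expands to many
// children. If Component is a Server Component that reduces to an
// array/AsyncIterable, it instead becomes one row per child.
<SuspenseList revealOrder="forwards">
  <Component />
</SuspenseList>;

// Two children: always two rows, because of the wrapper array.
<SuspenseList revealOrder="forwards">
  <Component />
  <Component />
</SuspenseList>;
```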
For [`AsyncIterable`](https://github.com/facebook/react/pull/28847) we
encode `AsyncIterator` as a separate tag.
Previously we encoded `Iterator` as just an Array. This adds a special
encoding for this. Technically this is a breaking change.
It's kind of an edge case where you'd care about the difference, but it
becomes more important to treat these correctly for the warnings in
#28853.
This doesn't change production behavior. We always make a best effort to
render Iterables in prod, even if they're Iterators.
But this does change the DEV warnings, which indicate which patterns are
valid to use.
It's a footgun to use an Iterator as a prop passed between components,
because if an intermediate component rerenders without its parent, React
won't be able to iterate it again to reconcile, and any mappers won't be
able to re-apply. This is typically not a problem when it's passed only
to React host components, but as a pattern it's a problem for
composability.
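A small illustration of the footgun (component and data names are illustrative):

```js
const itemSet = new Set(['a', 'b', 'c']);

// Footgun: .values() returns an Iterator. If Middle rerenders without its
// parent, the iterator has already been consumed and can't be replayed.
<Middle items={itemSet.values()} />;

// Fine: pass the Iterable (or an array) so each render can get a fresh iterator.
<Middle items={itemSet} />;
```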
We used to warn only for Generators - i.e. Iterators returned from
Generator functions. This adds a warning for Iterators created by other
means too (e.g. Flight or the native Iterator utils). The heuristic is
to check whether the Iterator is the same as the Iterable because that
means it's not possible to get new iterators out of it. This case used
to just yield nonsense like empty sets in DEV, but not in prod.
However, a new realization is that when the Component itself is a
Generator Function, it's not actually a problem. That's because the
React Element itself works as an Iterable since we can ask for new
generators by calling the function again. So this adds a special case to
allow the Generator returned by a Generator Function component as its
direct child.
The principle is “don’t pass iterators around” but in this case there is
no iterator floating around because it’s between React and the JS VM.
Also see #28849 for context on AsyncIterables.
Related to this, but Hooks should ideally be banned in these for the
same reason they're banned in Async Functions.
This disables symbol renaming in production builds. The original
variable and function names are preserved. All other forms of
compression applied by Closure (dead code elimination, inlining, etc)
are unchanged — the final program is identical to what we were producing
before, just in a more readable form.
The motivation is to make it easier to debug React issues that only
occur in production — the same reason we decided to start shipping
sourcemaps in #28827.
However, because most apps run their own minification step on their npm
dependencies, it's not necessary for us to minify the symbols before
publishing — it'll be handled by the app, if desired.
This is the same strategy Meta has used to ship React for years. The
React build itself has unminified symbols, but they get minified as part
of Meta's regular build pipeline.
Even if an app does not minify their npm dependencies, gzip covers most
of the cost of symbol renaming anyway.
This saves us from having to ship sourcemaps, which means even apps that
don't have sourcemaps configured will be able to debug the React build
as easily as they would any other npm dependency.
Adds an experimental feature flag to the implementation of useMemoCache,
the internal cache used by the React Compiler (Forget).
When enabled, instead of treating the cache as copy-on-write, like we do
with fibers, we share the same cache instance across all render
attempts, even if the component is interrupted before it commits.
If an update is interrupted, either because it suspended or because of
another update, we can reuse the memoized computations from the previous
attempt. We can do this because the React Compiler performs atomic
writes to the memo cache, i.e. it will not record the inputs to a
memoization without also recording its output.
This gives us a form of "resuming" within components and hooks.
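For reference, compiled output roughly follows this pattern, which is why a cache shared across interrupted attempts stays consistent; this is a simplified sketch, not the compiler's exact emit:

```js
function FilteredTodos({todos, filter}) {
  const $ = useMemoCache(3);
  let filtered;
  if ($[0] !== todos || $[1] !== filter) {
    filtered = todos.filter((t) => t.status === filter);
    // Inputs and output are written together ("atomic"), so no render
    // attempt ever observes inputs without their corresponding output.
    $[0] = todos;
    $[1] = filter;
    $[2] = filtered;
  } else {
    filtered = $[2];
  }
  return filtered.map((t) => <Todo key={t.id} todo={t} />);
}
```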
This only works when updating a component that already mounted. It has
no impact during initial render, because the memo cache is stored on the
fiber, and since we have not implemented resuming for fibers, it's
always a fresh memo cache, anyway.
However, this alone is pretty useful — it happens whenever you update
the UI with fresh data after a mutation/action, which is extremely
common in a Suspense-driven (e.g. RSC or Relay) app.
So the impact of this feature is faster data mutations/actions (when the
React Compiler is used).
Meta uses various tools built on top of the "react-reconciler" package
but that package needs to match the version of the "react" package.
This means that it should be synced at the same time. However, more than
that, the feature flags between the "react" package and the
"react-reconciler" package need to line up. Since FB has custom feature
flags, it can't use the OSS version of react-reconciler.
In #26446 we started publishing non-minified versions of our production
build artifacts, along with source maps, for easier debugging of React
when running in production mode.
The way it's currently set up is that these builds are generated
*before* Closure compiler has run, which means they're missing many of the
optimizations that are in the final build, like dead code elimination.
This PR changes the build process to run Closure on the non-minified
production builds, too, by moving the sourcemap generation to later in
the pipeline.
The non-minified builds will still preserve the original symbol names,
and we'll use Prettier to add back whitespace. This is the exact same
approach we've been using for years to generate production builds for
Meta.
The idea is that the only difference between the minified and non-
minified builds is whitespace and symbol mangling. The semantic
structure of the program should be identical.
To implement this, I disabled symbol mangling when running Closure
compiler. Then, in a later step, the symbols are mangled by Terser. This
is when the source maps are generated.
Stacked on #28872
renderToStaticNodeStream was not originally deprecated when
renderToNodeStream was deprecated because it did not yet have a clear
analog in the modern streaming implementation for SSR. In React 19 we
have already removed renderToNodeStream. This change removes
renderToStaticNodeStream as well because you can replicate its
semantics using renderToPipeableStream with onAllReady, or
renderToReadableStream with `await stream.allReady` (see the sketch below).
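For example, with the Web Streams variant (a minimal sketch; the helper name is illustrative):

```js
import {renderToReadableStream} from 'react-dom/server';

async function renderFullyBuffered(element) {
  const stream = await renderToReadableStream(element);
  // Waiting for allReady before reading reproduces the "nothing streams until
  // everything is done" semantics of renderToStaticNodeStream.
  await stream.allReady;
  return new Response(stream, {headers: {'Content-Type': 'text/html'}});
}
```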
This commit adds warnings indicating that `renderToStaticNodeStream`
will be removed in an upcoming React release. This API is legacy,
is not widely used (renderToStaticMarkup is more common), and has
semantically equivalent implementations with renderToReadableStream and
renderToPipeableStream.
Stacked on #28870
Inline script children have been encoded as HTML for a while now, but
this can easily break script parsing, so in practice if you were
rendering inline scripts you were using dangerouslySetInnerHTML. This is
not great because then there is no escaping at all, so you have to be even
more careful. While care should always be taken when rendering untrusted
script content, driving users to use dangerous APIs is not the right
approach, so in this PR the escaping functionality used for
bootstrapScripts and importMaps is being extended to any inline script.
The approach is to escape "s" or "S" with the appropriate unicode code
point when it appears inside a `<script` or `</script` sequence. This has
the nice benefit of minimally escaping the text for readability while still
preserving full JS parsing capabilities. As articulated when we
introduced this escaping for prior use cases, this is only safe because
we are escaping the entire script content. It would be unsafe if we were
not escaping the entirety of the script, because we would no longer be
able to ensure there are no earlier or later `<script` sequences that put
the parser in unexpected states.
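To illustrate the idea (the exact emitted bytes are my assumption, not copied from the implementation):

```js
// Source:
<script>{`if (items.length < 1) console.log("</script>");`}</script>;
// Could be emitted roughly as:
//   <script>if (items.length < 1) console.log("<\u0073cript>");</script>
// The JS parser reads "\u0073" back as "s" inside the string, so the code is
// unchanged, but the HTML parser never sees a closing </script> sequence.
```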
Style text content has historically been escaped as HTML, which is
nonsensical and often leads users to reach for dangerouslySetInnerHTML as
a matter of course. While rendering untrusted style rules is a security
risk, React doesn't really provide any special protection here, and
forcing users to use a completely unescaped API is, if anything, worse. So
this PR updates the style escaping rules for Fizz to only escape the
text content enough to ensure the tag scope cannot be closed early. This is
accomplished by encoding "s" and "S" as the hexadecimal unicode
representations "\73 " and "\53 " respectively when found within a
sequence like `</style>`. We have to be careful to support casing here,
just like with the script closing tag regex for bootstrap scripts.
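An illustrative example of the same idea for styles (again, the exact output is my assumption):

```js
// Source:
<style>{`.warn::after { content: "</style>" }`}</style>;
// Could be emitted roughly as:
//   <style>.warn::after { content: "<\73 tyle>" }</style>
// CSS reads "\73 " back as "s", so the rule is unchanged, but the tag scope
// can no longer be closed early.
```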
## Summary
The Forget codename needs to be hidden from the UI to avoid confusion.
Going forward, we'll be referring to this set of features as part of the
larger React compiler. We'll be describing the primary feature that
we've built so far as auto-memoization, and this badge helps devs see
which components have been automatically memoized by the compiler.
## How did you test this change?
- force Forget badge on with and without the presence of other badges
- confirm colors/UI in light and dark modes
- force badges on for `ElementBadges`, `InspectableElementBadges`,
`IndexableElementBadges`
- run `yarn start` in packages/react-devtools-shell
[demo
video](https://github.com/facebook/react/assets/973058/fa829018-7644-4425-8395-c5cd84691f3c)
For fbsource we've historically used a separate repo for imports due to
internal limitations in Diff Train. Those have been lifted so we can now
commit this branch here and then import from this repo (and get rid of
the other repo).
Previously, the `refs` property of a class component instance was
read-only by user code — only React could write to it, and until/unless
a string ref was used, it pointed to a shared empty object that was
frozen in dev to prevent userspace mutations.
Because string refs are deprecated, we want users to be able to codemod
all their string refs to callback refs. The safest way to do this is to
output a callback ref that assigns to `this.refs`.
So to support this, we need to make `this.refs` writable by userspace.
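A sketch of the codemod output this enables (the component is illustrative):

```js
class Form extends React.Component {
  componentDidMount() {
    this.refs.email.focus(); // existing reads of this.refs keep working
  }
  render() {
    // Before: <input ref="email" />
    // After the codemod: a callback ref that writes to this.refs, which is
    // why this.refs now needs to be writable from userspace.
    return <input ref={(node) => { this.refs.email = node; }} />;
  }
}
```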
## Summary
This PR adds an early return to the `diff` function. We don't need to go
through all the entries of `nextProps`, processing and deep-diffing the
values, if `nextProps` is the same object as `prevProps`. Roughly 6% of
all `diffProperties` calls can be skipped.
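A minimal sketch of the early return (simplified; the real function takes more parameters):

```js
function diff(prevProps, nextProps) {
  if (prevProps === nextProps) {
    // Same object: nothing to process or deep-diff.
    return null;
  }
  // ...existing per-entry processing and deep diffing of nextProps...
}
```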
## How did you test this change?
RNTester.
## Summary
This pull request converts the CircleCI workflows to GitHub Actions
workflows. [GitHub Actions
Importer](https://github.com/github/gh-actions-importer) was used to
convert the workflows initially, then I edited them manually to correct
errors in translation.
**Issues**
1. facebook/react/devtools_regression_tests
The scripts that this workflow calls need to be modified.
## How did you test this change?
I tested these changes in a forked repo. You can [view the logs of this
workflow in my fork](https://github.com/robandpdx/react/actions).
https://fburl.com/workplace/f6mz6tmw
In React 19 React will finally stop publishing UMD builds. This is
motivated primarily by the lack of use of UMD format and the added
complexity of maintaining build infra for these releases. Additionally,
with ESM becoming more prevalent in browsers, and services like esm.sh
able to host React as an ESM module, there are other options for
script-tag-based React loading.
This PR removes all the UMD build configs and forks.
There are some fixtures that still have references to UMD builds; however,
many of them already do not work (for instance, they use legacy
features like ReactDOM.render), and rather than block the removal on
bringing these fixtures up to date, we'll just move forward and fix
or remove fixtures as necessary in the future.
Fixes an issue where, if the first letter of the expected string appeared
anywhere in the actual message, the assertion would pass, leading to false
negatives. We should check the entire expected string.
---------
Co-authored-by: Ricky <rickhanlonii@gmail.com>
It wasn't clearly articulated or tested why the code structure is
like this, but I think the logic is correct - or at least consistent with
the weird semantics.
We place this top-level fragment check inside the recursion so that you
can resolve however many Lazy or Usable wrappers you want and it still
preserves the same semantics as if they weren't there (which they might not
be, as a matter of a race condition).
However, we don't actually recurse with the top-level fragment
unwrapping itself, because nesting a bunch of keyless fragments isn't the
same as a single fragment/element.
This way, when we end up referring to it in more places, there's only one.
We don't use the same pattern for regular `Symbol.iterator` because we
also support the string `"@@iterator"` for backwards compatibility.
This adds support in Flight for serializing four kinds of streams:
- `ReadableStream` with objects as a model. This is a single-shot
iterator, so you can read it only once. It can contain any value,
including Server Components. Chunks are encoded as-is, so if you send in
10 typed arrays, you get the same typed arrays out on the other side.
- Binary `ReadableStream` with `type: 'bytes'` option. This supports the
BYOB protocol. In this mode, the receiving side just gets `Uint8Array`s
and they can be split across any single byte boundary into arbitrary
chunks.
- `AsyncIterable` where the `AsyncIterator` function is different from
the `AsyncIterable` itself. In this case we assume that this might be a
multi-shot iterable, so we buffer its values and you can iterate it
multiple times on the other side. We support the `return` value as a
value in the single completion slot, but you can't pass values in
`next()`. If you want single-shot, return the AsyncIterator instead.
- `AsyncIterator`. These get serialized as single-shot since it's just
an iterator.
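A sketch of what this enables across the Flight boundary (component and prop names are illustrative):

```js
// Server Component: pass a ReadableStream of objects as a prop.
function Page() {
  const updates = new ReadableStream({
    start(controller) {
      controller.enqueue({step: 1, label: 'queued'});
      controller.enqueue({step: 2, label: 'running'});
      controller.close();
    },
  });
  return <ProgressClient updates={updates} />;
}

// Client: read it once; each chunk arrives with the same shape it was
// enqueued with on the server.
async function readAll(updates) {
  const reader = updates.getReader();
  while (true) {
    const {done, value} = await reader.read();
    if (done) break;
    console.log(value.step, value.label);
  }
}
```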
`AsyncIterable`/`AsyncIterator` yield Promises that are instrumented
with our `.status`/`.value` convention so that they can be synchronously
looped over if available. They are also lazily parsed upon read.
We can't do this with `ReadableStream` because we use the native
implementation of `ReadableStream` which owns the promises.
The format is a leading row that indicates which type of stream it is.
Then a new row with the same ID is emitted for every chunk. Followed by
either an error or close row.
`AsyncIterable`s can also be returned as children of Server Components
and then they're conceptually the same as fragment arrays/iterables.
They can't actually be used as children in Fizz/Fiber but there's a
separate plan for that. Only `AsyncIterable` not `AsyncIterator` will be
valid as children - just like sync `Iterable` is already supported but
single-shot `Iterator` is not. Notably, neither of these streams
represent updates over time to a value. They represent multiple values
in a list.
When the server stream is aborted we also close the underlying stream.
However, closing a stream on the client doesn't close the underlying
stream.
A couple of possible follow ups I'm not planning on doing right now:
- [ ] Free memory by releasing the buffer if an Iterator has been
exhausted. Single shots could be optimized further to release individual
items as you go.
- [ ] We could clean up the underlying stream if the only pending data
that's still flowing is from streams and all the streams have cleaned
up. It's not very reliable though. It's better to do cancellation for
the whole stream - e.g. at the framework level.
- [ ] Implement smarter Binary Stream chunk handling. Currently we wait
until we've received a whole row for binary chunks and copy them into
consecutive memory. We need this to preserve semantics when passing
typed arrays. However, for binary streams we don't need that. We can
just send whatever pieces we have so far.
Per team discussion, this upgrades the `initialValue` argument for
`useDeferredValue` from experimental to canary.
- Original implementation PR:
https://github.com/facebook/react/pull/27500
- API documentation PR: https://github.com/reactjs/react.dev/pull/6747
I left it disabled at Meta for now in case there's old code somewhere
that is still passing an `options` object as the second argument.
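Usage sketch of the second argument (component names are illustrative):

```js
import {useDeferredValue} from 'react';

function SearchResults({query}) {
  // On the initial render, deferredQuery is '' and React immediately
  // schedules a background rerender with the real query.
  const deferredQuery = useDeferredValue(query, '');
  return <Results query={deferredQuery} />;
}
```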
With the enableBinaryFlight flag on we should encode typed arrays and
blobs in the Reply direction too for parity.
It's already possible to pass Blobs inside FormData but you should be
able to pass them inside objects too.
We encode typed arrays as blobs and then unwrap them automatically to
the right typed array type.
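For example (the action and payload shape are illustrative):

```js
// Client: typed arrays nested in a plain object now round-trip in the Reply
// direction when enableBinaryFlight is on.
const payload = {
  name: 'capture.bin',
  bytes: new Uint8Array([0xde, 0xad, 0xbe, 0xef]),
};
// A Server Action; on the server, payload.bytes arrives as a Uint8Array
// again rather than as an opaque Blob.
await saveCapture(payload);
```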
Unlike the other protocol, I encode the type as a reference tag instead
of a row tag. Therefore I need to rename the tags to avoid conflicts with
other tags in references. We are running out of characters, though.
This hasn't been updated in a long time, and it's getting really large.
You can view the authors locally via the git history with:
```
git shortlog -se | perl -spe 's/^\s+\d+\s+//'
```
## Overview
There's currently a bug in RN now that we no longer re-throw errors. The
`showErrorDialog` function in React Native only logs the errors as soft
errors, and never as fatal. RN was depending on the global handler for
fatal error handling and logging.
Instead of fixing this in `ReactFiberErrorDialog`, we can implement the
new root options in RN to handle caught/uncaught/recoverable in the
respective functions, and delete ReactFiberErrorDialog. I'll follow up
with a RN PR to implement these options and fix the error handling.
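For reference, these root options already exist on the DOM side; the RN integration would wire its dialog/logging into its own equivalents (sketch only):

```js
import {createRoot} from 'react-dom/client';

const root = createRoot(container, {
  onUncaughtError(error, errorInfo) {
    // fatal path: nothing caught the error
  },
  onCaughtError(error, errorInfo) {
    // caught by an error boundary: log as a soft error
  },
  onRecoverableError(error, errorInfo) {
    // React recovered on its own (e.g. hydration mismatch)
  },
});
```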
## Summary
The ReadableStreamController for [direct
streams](https://bun.sh/docs/api/streams#direct-readablestream) in Bun
supports a flush() method to flush all buffered items to its underlying
sink.
Without manually calling flush(), all buffered items are only flushed to
the underlying sink when the stream is closed. This behavior causes the
shell rendered against Suspense boundaries never to be flushed to the
underlying sink.
## How did you test this change?
A lot of changes to the test runner will need to be made in order to
support the Bun runtime. A separate test was manually run in order to
ensure that the changes made are correct.
The test works by sanity-checking that the shell rendered against
Suspense boundaries is emitted first in the stream.
This test was written and run on Bun v1.1.3.
```tsx
import { Suspense } from "react";
import { renderToReadableStream } from "react-dom/server";
if (!import.meta.resolveSync("react-dom/server").endsWith("server.bun.js")) {
  throw new Error(
    "react-dom/server is not the correct version:\n " +
      import.meta.resolveSync("react-dom/server")
  );
}

const A = async () => {
  await new Promise(resolve => setImmediate(resolve));
  return <div>hi</div>;
};

const B = async () => {
  return (
    <Suspense fallback={<div>loading</div>}>
      <A />
    </Suspense>
  );
};
const stream = await renderToReadableStream(<B />);
let text = "";
let count = 0;
for await (const chunk of stream) {
  text += new TextDecoder().decode(chunk);
  count++;
}

if (
  text !==
`<!--$?--><template id="B:0"></template><div>loading</div><!--/$--><div hidden id="S:0"><div>hi</div></div><script>$RC=function(b,c,e){c=document.getElementById(c);c.parentNode.removeChild(c);var a=document.getElementById(b);if(a){b=a.previousSibling;if(e)b.data="$!",a.setAttribute("data-dgst",e);else{e=b.parentNode;a=b.nextSibling;var f=0;do{if(a&&8===a.nodeType){var d=a.data;if("/$"===d)if(0===f)break;else f--;else"$"!==d&&"$?"!==d&&"$!"!==d||f++}d=a.nextSibling;e.removeChild(a);a=d}while(a);for(;c.firstChild;)e.insertBefore(c.firstChild,a);b.data="$"}b._reactRetry&&b._reactRetry()}};$RC("B:0","S:0")</script>`
) {
  throw new Error("unexpected output");
}

if (count !== 2) {
  throw new Error("expected 2 chunks from react ssr stream");
}
```
The useMemoCache polyfill doesn't have access to the fiber, and it
simply uses state, which does not work with the existing devtools
badge for the compiler.
With this PR, devtools will look on the very first hook's state for the
memo cache sentinel and display the Forget badge if present.
The polyfill will add this sentinel to its state (the cache array).