Ensures that effects are well-formed with respect to the following rules (a
sketch of the check follows the list):
* For a given instruction, each place is only initialized once (with one of
Create, CreateFrom, or Assign)
* Ensures that Alias targets are already initialized within the same
instruction (should have a Create before them)
* Preserves Create and similar instructions
* Avoids duplicate instructions when inferring effects of function
expressions
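A minimal sketch of what such a check might look like (the effect shapes and
names here are illustrative assumptions, not the actual compiler API):
```js
// Illustrative sketch only: validates a per-instruction effect list, assuming
// each effect names the place it writes to as `into`.
function validateInstructionEffects(effects) {
  const initialized = new Set();
  for (const effect of effects) {
    switch (effect.kind) {
      case 'Create':
      case 'CreateFrom':
      case 'Assign': {
        // Each place may only be initialized once per instruction
        if (initialized.has(effect.into.id)) {
          throw new Error(`Place ${effect.into.id} is initialized more than once`);
        }
        initialized.add(effect.into.id);
        break;
      }
      case 'Alias': {
        // Alias targets must already be initialized within the same instruction
        if (!initialized.has(effect.into.id)) {
          throw new Error(`Alias target ${effect.into.id} has no preceding Create`);
        }
        break;
      }
      default:
        break;
    }
  }
}
```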
---
Adds some typed helpers to represent aliasing, assign, capture,
createfrom, and mutate effects along with representative runtime
behavior, and then adds tests to demonstrate that we model
capture->createfrom and createfrom->capture correctly.
There is one case (createfrom->capture in a lambda) where we infer a
less precise effect, but in the more conservative direction (we include
more code/deps than necessary rather than fewer).
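To give a sense of what "representative runtime behavior" means for two of
these effects, here is a hedged sketch (the helper names and shapes are
illustrative, not the actual test utilities):
```js
// Sketch: `capture` retains a reference to the input, so later mutations of
// the input are observable through the output.
function capture(into, value) {
  into.captured = value;
  return into;
}

// Sketch: `createFrom` derives a new value without retaining the input, so
// later mutations of the input are not observable through the output.
function createFrom(value) {
  return {derived: value.id};
}
```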
---
Now that we have support for defining aliasing signatures in
moduleTypeProvider, which uses string names for
receiver/args/returns/etc, we can reuse that same form for builtin
declarations. The declarations are written in the unparsed form and then
parsed/validated when registered (in the addFunction/addHook call).
This also required fleshing out configs/schemas for more effect types.
---
In comparing compilation output of the old/new inference models I found
this case (heavily distilled into a fixture). Roughly speaking the
scenario is:
* Create a mutable object `x`
* Extract part of that object and pass it to a hook/jsx so that _part_
becomes frozen
* Mutate `x`, even indirectly (sketched below).
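A minimal sketch of a fixture along these lines (the component and hook names
are hypothetical, not the actual fixture):
```js
function Component() {
  const x = {items: []};       // create a mutable object `x`
  const part = x.items;        // extract part of that object
  useData(part);               // passing `part` to a hook freezes that part
  x.items.push('later');       // mutate `x`, which reaches `part` indirectly
  return <List items={part} />;
}
```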
In the old model we can still independently memoize the value from the
middle step, since we assume that part of the larger value is not
changing. In the new model, the mutation from the later step effectively
overrides the freeze effect in step 2, and considers the value to have
changed later anyway.
We've already rolled out and vetted the previous behavior, confirming
that the heuristic of "that part of the mutable object is frozen now" is
generally safe. I'll fix this in a follow-up.
---
The previous error for hoisting violations pointed only to the variable
declaration, but didn't show where the value was accessed before that
declaration. We now track where each hoisted variable is first accessed
and report two errors, one for the reference and one for the
declaration. When we improve our diagnostic infra to support reporting
errors at multiple locations we can merge these into a single conceptual
error.
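As a hedged illustration of the kind of code this applies to (hypothetical
example, not an actual fixture), both locations are now reported:
```js
function Component() {
  // error 1: `theme` is accessed here, before its declaration
  const style = computeStyle(theme);
  // error 2: the declaration that the access above was hoisted over
  const theme = useTheme();
  return <div style={style} />;
}
```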
---
The previous error message was generic, because the old-style function
signature didn't support a way to specify a reason alongside a freeze effect.
This meant we could only say why a value was frozen for instructions, but not
for hooks, which use function signatures. By defining a new aliasing signature
for custom hooks we can specify a reason and provide a better error message.
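For illustration (hypothetical hook and component, not an actual fixture), the
kind of case this improves is a mutation of a value previously passed to a
custom hook:
```js
function Component() {
  const options = {limit: 10};
  const data = useCustomHook(options); // the argument is frozen here
  options.limit = 20; // the error can now explain it was frozen because it was passed to a hook
  return <List data={data} />;
}
```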
---
AnalyzeFunctions had logic to reset the mutable ranges of context
variables after visiting inner function expressions. However, there was
a bug in that logic: InferReactiveScopeVariables makes all the
identifiers in a scope point to the same mutable range instance. That
meant that it was possible for a later function expression to indirectly
cause an earlier function expression's context variables to get a non-zero
mutable range.
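The shared-instance problem can be illustrated in plain JavaScript
(illustrative only, not the compiler's actual data structures):
```js
const sharedRange = {start: 0, end: 0};
const earlierFnContextVar = {range: sharedRange};
const laterFnContextVar = {range: sharedRange};

// "Resetting" by mutating fields doesn't break the aliasing...
earlierFnContextVar.range.start = 0;
earlierFnContextVar.range.end = 0;

// ...so a later write through the other reference leaks back:
laterFnContextVar.range.end = 7;
console.log(earlierFnContextVar.range.end); // 7
```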
The fix is to not just reset start/end of context var ranges, but assign
a new range instance. Thanks for the help on debugging, @mofeiz!
---
Squashed, review-friendly version of the stack from
https://github.com/facebook/react/pull/33488.
This is a new version of our mutability and inference model, designed to
replace the core algorithm for determining the sets of instructions involved
in constructing a given value or set of values. The new model replaces
InferReferenceEffects, InferMutableRanges (and all of its subcomponents), and
parts of AnalyzeFunctions. The new model does not use per-Place effect values,
but to make this a drop-in replacement the end _result_ of the inference adds
these per-Place effects.
I'll write up a larger document on the model; first I'm doing some
housekeeping to rebase the PR.
---
This was really meant to be there from the beginning. A `cache()`-ed entry has
a lifetime. On the server this ends when the render finishes. On the client
this ends when the cache of that scope gets refreshed. When a cache is no
longer needed, it should be possible to abort any outstanding network requests
or release other resources. That's what `cacheSignal()` gives you. It returns
an `AbortSignal` which aborts when the cache lifetime is done, based on the
same execution scope as a `cache()`-ed function - i.e. `AsyncLocalStorage` on
the server or the render scope on the client.
```js
import {cacheSignal} from 'react';
async function Component() {
  await fetch(url, { signal: cacheSignal() });
}
```
For `fetch` in particular, a patch should really just do this
automatically for you. But it's useful for other resources like database
connections.
Another reason it's useful to have `cacheSignal()` is to ignore any errors
that might have been triggered by the act of aborting. This is a generally
useful JavaScript pattern if you have access to a signal:
```js
async function getData(id, signal) {
  try {
    await queryDatabase(id, { signal });
  } catch (x) {
    if (!signal.aborted) {
      logError(x); // only log if it's a real error and not due to cancellation
    }
    return null;
  }
}
```
This just gives you a convenient way to get at the signal without drilling it
through arguments, so more idiomatic React code might look something like this:
```js
import {cacheSignal} from "react";
async function getData(id) {
  try {
    await queryDatabase(id);
  } catch (x) {
    if (!cacheSignal()?.aborted) {
      logError(x);
    }
    return null;
  }
}
```
If it's called outside of a React render, we normally treat any cached
functions as uncached. Calling them is not an error; they can still load data,
it's just not cached. This is not like an aborted signal, because then you
couldn't issue any requests. It's also not like a never-aborting signal,
because the data isn't actually cached forever. Therefore, `cacheSignal()`
returns `null` when called outside of a React render scope.
Notably the `signal` option passed to `renderToReadableStream` in both
SSR (Fizz) and RSC (Flight Server) is not the same instance that comes
out of `cacheSignal()`. If you abort the `signal` passed in, then the
`cacheSignal()` is also aborted with the same reason. However, the
`cacheSignal()` can also get aborted if the render completes successfully or
fatally errors during render - allowing any outstanding work that wasn't used
to be cleaned up. In the future we might also expand on this to give different
[`TaskSignal`](https://developer.mozilla.org/en-US/docs/Web/API/TaskSignal)s
to different scopes to pass different render or network priorities.
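As a minimal sketch of how the two relate (assuming a web-streams environment;
the `App` component and abort reason are placeholders):
```js
import {renderToReadableStream} from 'react-dom/server';

function App() {
  return null; // placeholder app
}

const controller = new AbortController();
const stream = await renderToReadableStream(<App />, {
  signal: controller.signal,
});

// Aborting the request-level signal also aborts the cacheSignal() inside the
// render, with the same reason, even though they are different instances.
controller.abort(new Error('The request was cancelled.'));
```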
On the client version of `"react"` this exposes a noop (both for Fiber/Fizz)
due to the `disableClientCache` flag, but it's exposed anyway so that you can
write shared code.
As discussed in chat, this is a simple fix to stop introducing labels
inside expressions.
The useMemo-with-optional test was added in
d70b2c2c4e
and crashes for the same reason: an unexpected label as a value block
terminal.
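A hypothetical shape of the `useMemo-with-optional` case (not the exact
fixture): the optional chain lowers to a value block, and emitting a label in
that expression position is what crashed.
```js
import {useMemo} from 'react';

function Component(props) {
  const count = useMemo(() => props.items?.length ?? 0, [props.items]);
  return <span>{count}</span>;
}
```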
---
Previously the experimental workflow relied on the canary one running
first to avoid race conditions. However, I didn't account for the fact
that the canary one can now be skipped.
It may be useful at times to publish only specific packages under the
experimental tag. For example, if we need to cherry-pick some fixes for an old
release, we can first do so by creating an experimental release just for that
package to allow for quick testing by downstream projects.
Similar to .github/workflows/runtime_releases_from_npm_manual.yml, I added
three options (`dry`, `only_packages`, `skip_packages`) to
`runtime_prereleases.yml`, which both the manual and nightly workflows reuse.
I also added a Discord notification when the manual workflow is run.
## Summary
The devtools Components tab's component tree view currently has a
behavior where the indentation of each level of the tree scales based on
the available width of the view. If the view is narrow or component
names are long, all indentation showing the hierarchy of the tree scales
down with the view width until there is no indentation at all. This
makes it impossible to see the nesting of the tree, making the tree view
much less useful. With long component names and deep hierarchies this
issue is particularly egregious. For comparison, the Chrome Dev Tools
Elements panel uses a fixed indentation size, so it doesn't suffer from
this issue.
This PR adds a minimum pixel value for the indentation width, so that
even when the window is narrow some indentation will still be visible,
maintaining the visual representation of the component tree hierarchy.
Alternatively, we could match the behavior of the Chrome Dev Tools and
just use a constant indentation width.
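A rough sketch of the clamping idea (the constant and function names here are
illustrative, not the actual devtools code):
```js
// Illustrative only: clamp the per-level indentation so deep trees stay
// visibly nested in narrow views.
const MIN_INDENT_PX = 6; // hypothetical minimum; the real value may differ

function getIndentPerLevel(availableWidth, maxRowDepth) {
  const scaledIndent = availableWidth / (maxRowDepth + 1);
  return Math.max(MIN_INDENT_PX, scaledIndent);
}

console.log(getIndentPerLevel(120, 30)); // narrow view: clamped to the minimum
console.log(getIndentPerLevel(800, 10)); // wide view: keeps the scaled value
```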
## How did you test this change?
- tests (`yarn test-build-devtools`)
- tested in browser:
  - added an alternate left/right split pane layout to react-devtools-shell to test with (https://github.com/facebook/react/pull/33516)
  - tested resizing the tree view in different layout modes
### before this change:
https://github.com/user-attachments/assets/470991f1-dc05-473f-a2cb-4f7333f6bae4
with a long component name:
https://github.com/user-attachments/assets/1568fc64-c7d7-4659-bfb1-9bfc9592fb9d
### after this change:
https://github.com/user-attachments/assets/f60bd7fc-97f6-4680-9656-f0db3d155411
with a long component name:
https://github.com/user-attachments/assets/6ac3f58c-42ea-4c5a-9a52-c3b397f37b45
## Summary
This PR adds a 'Layout' selector to the devtools shell main example, as
well as a resizable split pane, allowing more realistic testing of how
the devtools behaves when used in a vertical or horizontal layout and at
different sizes (e.g. when resizing the Chrome Dev Tools pane).
## How did you test this change?
https://github.com/user-attachments/assets/81179413-7b46-47a9-bc52-4f7ec414e8be
This bug was reported via our wg and appears to only affect values
created as a ref.
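A sketch of the kind of source that hits this path (hypothetical, not the
exact repro):
```js
import {useRef, useCallback} from 'react';

function useNextModalId() {
  const modalId = useRef(0);
  // A postfix increment on a ref inside a callback: the callback should
  // return the value *before* incrementing.
  return useCallback(() => modalId.current++, []);
}
```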
Currently, postfix operators used in a callback get compiled to:
```js
modalId.current = modalId.current + 1; // 1
const id = modalId.current; // 1
return id;
```
which is semantically incorrect. The postfix increment operator should
return the value before incrementing. In other words something like this
should have been compiled instead:
```js
const id = modalId.current; // 0
modalId.current = modalId.current + 1; // 1
return id;
```
This bug does not trigger when the incremented value is a plain primitive;
instead there is a TODO bailout.
Stacked on #33482.
There's a flaw with getting information from the execution context of
the ping. For the soft-deprecated "throw a promise" technique, this is a
bit unreliable because you could in theory throw the same one multiple
times. Similarly, a more fundamental flaw with that API is that it
doesn't allow for tracking the information of Promises that are already
synchronously able to resolve.
This stops tracking the async debug info in the case of throwing a Promise,
and only tracks it when you render a Promise. That means some loss of data,
but we should just warn for throwing a Promise anyway.
Instead, this also adds support for tracking `use()`d thenables and
forwarding `_debugInfo` from them. This is done by extracting the info
from the Promise after the fact instead of in the resolve so that it
only happens once at the end after the pings are done.
This also supports passing the same Promise in multiple places and
tracking the debug info at each location, even if it was already
instrumented with a synchronous value by the time of the second use.
Previously you weren't guaranteed to have only advancing time entries - you
could jump back in time - but now it omits unnecessary duplicates and clamps
automatically if you emit an earlier time entry, enforcing forwards-only
order.
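A hedged sketch of the clamping rule (illustrative, not the actual Flight
implementation):
```js
let lastEmittedTime = -Infinity;

function advanceTime(time) {
  if (time <= lastEmittedTime) {
    // Duplicate or backwards entry: clamp to the last emitted time rather
    // than jumping back, so the emitted timeline only moves forwards.
    return lastEmittedTime;
  }
  lastEmittedTime = time;
  return time;
}
```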
The reason I didn't do this originally is that `await`s can jump around in the
order, because we're trying to encode a graph into a flat timeline for
simplicity of the protocol and consumers.
```js
async function a() {
  await fetch1();
  await fetch2();
}
async function b() {
  await fetch3();
}
async function foo() {
  const p = a();
  await b();
  return p;
}
```
This can effectively create two parallel sequences:
```
--1.................----2.......--
------3......---------------------
```
This can now be flattened to either:
```
--1.................3---2.......--
```
Or:
```
------3......1......----2.......--
```
Depending on which one we visit first. Regardless, information is lost.
I'd say that the second one is a worse encoding of this scenario because
it pretends that we weren't waiting for part of the timespan that we
were. To solve this I think we should probably make `emitAsyncSequence`
create a temporary flat list and then sort it by start time before
emitting.
Although we weren't actually blocked since there was some CPU time that
was able to proceed to get to 3. So maybe the second one is actually
better. If we wanted that consistently we'd have to figure out what the
intersection was.
---------
Co-authored-by: Hendrik Liebau <mail@hendrik-liebau.de>
Includes #31412.
The issue is that `pushTreeFork` stores some global state when reconciling
children. This gets popped by `popTreeContext` in `completeWork`.
Normally `completeWork` returns its own `Fiber` again if it wants to do a
second pass, which will call `pushTreeFork` again in the next pass.
However, `SuspenseList` doesn't return itself; it returns the next child to
work on.
The fix is to keep track of the count and push it again when we return the
next child to attempt.
There are still some outstanding issues with hydration. For example, the
backwards test still has the wrong behavior because it hydrates backwards and
so picks up the DOM nodes in reverse order. `tail="hidden"` also doesn't work
correctly.
There's also another issue with `useId` and `AsyncIterable` in SuspenseList
when there's an unknown number of children. We don't support those showing one
at a time yet, so it's not an issue yet. To fix it we'd need to add a variable
total count to the `useId` algorithm, e.g. by falling back to varint encoding.
---------
Co-authored-by: Rick Hanlon <rickhanlonii@fb.com>
Co-authored-by: Ricky <rickhanlonii@gmail.com>
This adds some I/O to go fetch the third-party thing, to test how it overlaps.
With #33482, this is what it looks like. The await gets cut off when the
third-party component starts rendering, i.e. after the latency to start.
<img width="735" alt="Screenshot 2025-06-08 at 5 42 46 PM"
src="https://github.com/user-attachments/assets/f68d9a84-05a1-4125-b3f0-8f3e4eaaa5c1"
/>
This doesn't fully simulate everything because it should actually also
simulate each chunk of the stream coming back too. We could wrap the
ReadableStream to simulate that. In that scenario, it would probably get
some awaits on the chunks at the end too.
## Summary
In react-native, props that are passed as a function get converted to a
boolean (`true`). This is the default pattern for event handlers in
react-native.
However, there are reasons why you might want to opt out of this behavior and
instead pass along the actual function as the prop. Right now, there is no way
to do this, and props that are functions always get set to `true`.
The `ViewConfig` attributes already have an API for a `process` function. I
simply moved the check for the process function up, so if a ViewConfig's prop
attribute configures a process function, it is always called first.
This provides an API to opt out of the default behavior.
This is the accompanying PR for react-native:
- https://github.com/facebook/react-native/pull/48777
## How did you test this change?
I modified the code manually in a template react-native app and confirmed
it's working. This is a code path you only need in very special cases, so
it's a bit hard to provide a test for this. I recorded a video where you can
see that the changes are active and the prop is being passed as a native
value.
For this I created a custom native component with a view config that looked
like this:
```js
const viewConfig = {
  uiViewClassName: 'CustomView',
  bubblingEventTypes: {},
  directEventTypes: {},
  validAttributes: {
    nativeProp: {
      process: (nativeProp) => {
        // Identity function that simply returns the prop function callback
        // to opt out of this prop being set to `true` as it's a function
        return nativeProp;
      },
    },
  },
};
```
https://github.com/user-attachments/assets/493534b2-a508-4142-a760-0b1b24419e19
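A usage sketch for that view config (the component name is hypothetical): with
the identity `process` function, the prop reaches the native side as the
actual function value instead of being collapsed to `true`.
```js
function Screen() {
  return (
    <CustomView
      nativeProp={() => {
        // passed through as-is rather than being converted to `true`
      }}
    />
  );
}
```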
Additionally, I made sure that this doesn't conflict with any existing view
configs in react-native. In general, this shouldn't be a breaking change: for
existing view configs it didn't make a difference whether you set
`myProp: true` or `myProp: { process: () => {...} }`, because as soon as the
prop was detected to be a function the config wouldn't be used (which is what
this PR fixes).
Probably everyone, including the react-native core components, uses
`myProp: true` for callback props, so this change should be fine.
Technically the async call graph spans basically all the way back to the
start of the app potentially, but we don't want to include everything.
Similarly we don't want to include everything from previous components
in every child component. So we need some heuristics for filtering out
data.
What we roughly want to be able to inspect is what might contribute to a
Suspense loading sequence, even if it didn't contribute this time, e.g. due to
a race condition.
One flaw with the previous approach was that awaiting a cached promise in a
sibling that happened to finish after another sibling would be excluded.
However, in a different race condition that promise might end up being used,
so I wanted to include an empty "await" in that scenario to keep some
association with that component.
However, for data that resolved fully before the request even started,
it's a little different. This can be things that are part of the start
up sequence of the app or externally cached data. We decided that this
should be excluded because it doesn't contribute to the loading sequence
in the expected scenario, i.e. if it's cached. Things that end up being cache
misses would still be included. If you want to test externally cached data
misses, then it's up to you or the framework to simulate those, e.g. by
dropping the cache. This also helps reduce noise, since static / cached data
can be excluded from visualizations.
I also apply this principle to forwarding debug info. If you reuse a cached
RSC payload, then the Server Component render time and its awaits get clamped
to the caller as if they had zero render/await time. The I/O entry is still
backdated, but if it was fully resolved before we started then it's completely
excluded.
I noticed that the ThirdPartyComponent in the fixture was showing the wrong
stack and the `"use third-party"` badge was in the wrong location.
<img width="628" alt="Screenshot 2025-06-06 at 11 22 11 PM"
src="https://github.com/user-attachments/assets/f0013380-d79e-4765-b371-87fd61b3056b"
/>
When creating the initial JSX inside the third-party server, we should make
sure that it has no owner. In a real cross-server environment you get this by
default just by executing in a different context. But since the fixture
example runs inside the same AsyncLocalStorage as the parent, it already has
an owner which gets transferred. So we should make sure that where we create
the JSX it has no owner, to simulate this.
When we then parse a null owner on the receiving side, we replace its
owner/stack with the owner/stack of the call to `createFrom...` to
connect them. This worked fine with only two environments. The bug was
that when we did this and then transferred the result to a third
environment we took the original parsed stack trace. We should instead
parse a new one from the replaced stack in the current environment.
The second bug was that the `"use third-party"` badge ended up in the wrong
place when we do this kind of thing. This is because the stack of the thing
entering the new environment is the call to `createFrom...`, which is in the
old environment, even though the component itself executes in the new
environment. So to see if there's a change, we should be comparing the current
environment of the task to the owner's environment instead of the next
environment after the task.
After:
<img width="494" alt="Screenshot 2025-06-07 at 1 13 28 AM"
src="https://github.com/user-attachments/assets/e2e870ba-f125-4526-a853-bd29f164cf09"
/>
This effectively lets us consume Web Streams in a Node build. In fact
the Node entry point is now just adding Node stream APIs.
For the client, this is simple because the configs are not actually
stream type specific. The server is a little trickier.