Stacked on #29807.
This lets the nearest Suspense/Error Boundary handle it even if that
boundary is defined by the model itself.
It also ensures that when we have an error during serialization of
properties, those errors can be associated with the nearest JSX element, and
since we have a stack/owner for that element we can use it to point to
the source code of that line. We can't track the source of arbitrary
objects nested deeper inside, since objects don't track their stacks,
but that's close enough. Ideally we'd have the property path, but we don't
have that right now; we have a partial one in the message itself.
<img width="813" alt="Screenshot 2024-06-09 at 10 08 27 PM"
src="https://github.com/facebook/react/assets/63648/917fbe0c-053c-4204-93db-d68a66e3e874">
Note: The component name (Counter) is lost in the first message because
we don't print it in the Task. We use `"use client"` instead because we
expect the next stack frame to have the name. We also don't include it
in the actual error message because the Server doesn't know the
component name yet. Ideally Client References should be able to have a
name. If the nearest element is a Host Component then we do use its name.
However, the error doesn't actually happen inside that Component; it
happens in App, and that points to the right line number.
An interesting case is that if something that's going to be consumed by
the props of a Suspense/Error Boundary, or by the Client Component that
wraps them, fails, then it can't be handled by the boundary. A
counterintuitive case is when that happens on the `children` prop. E.g.
`<ErrorBoundary>{clientReferenceOrInvalidSerialization}</ErrorBoundary>`.
This value can be inspected by the boundary itself, so it's not safe to
pass it; if it errors, it is not caught.
## Implementation
The first insight is that this is best solved on the Client rather than
on the Server, because that way it also covers Client References that end
up erroring.
The key insight is that while we don't have a true stack when using
`JSON.parse`, and therefore no begin/complete callbacks, we can still infer
these phases for Elements because the first child of an Element is always
`'$'`, which is also a leaf. In depth-first order that's our begin phase.
When the Element itself completes, we have the complete phase. Anything in
between is within the Element.
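As a rough illustration of that insight (not the actual Flight client code), a `JSON.parse` reviver can treat the `'$'` leaf as the begin phase and the element's own array as the complete phase; `createErrorChunkPlaceholder` below is a hypothetical helper:

```js
// Elements are encoded as ['$', type, key, props], so in JSON.parse's
// depth-first reviver order the literal '$' marker is the first leaf visited
// inside an element, and the element's own array is visited last.
const handlerStack = [];

function reviver(key, value) {
  if (key === '0' && value === '$') {
    // "Begin" phase: we just entered an element.
    handlerStack.push({deps: 0, errored: null});
  } else if (Array.isArray(value) && value[0] === '$') {
    // "Complete" phase: anything revived since the matching begin belonged to
    // this element, so the nearest handler absorbs blocked/errored state here.
    const handler = handlerStack.pop();
    if (handler !== undefined && handler.errored !== null) {
      return createErrorChunkPlaceholder(handler.errored); // hypothetical helper
    }
  }
  return value;
}
```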
Using this idea I was able to refactor the blocking tracking mechanism
to stash the blocked information on `initializingHandler`, and then on
the way up we let whatever is nearest handle it - whether that's an
Element or the root Chunk. It's kind of like an Algebraic Effect.
cc @unstubbable This is something you might want to deep dive into to
find more edge cases. I'm sure I've missed something.
---------
Co-authored-by: eps1lon <sebastian.silbermann@vercel.com>
Stacked on #29807.
Conceptually the error's owner/task should ideally be captured when the
Error constructor is called, but `console.createTask` doesn't do that, and
we don't override `Error` to capture our `owner`. So instead, we use
the nearest parent as the owner/task of the error. This is usually the
same thing when it's thrown from the same async component, but not if you
await a promise started from a different component/task.
Before this stack, the "owner" and "task" of a Lazy that errors was the
nearest Fiber, but if the thing erroring is a Server Component, we need
to get that as the owner from the innermost part of debugInfo.
To get the Task for that Server Component, we need to expose it on the
ReactComponentInfo object. Unfortunately that makes the object not
serializable so we need to special case this to exclude it from
serialization. It gets restored again on the client.
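For context, a minimal sketch of the `console.createTask` mechanism this builds on (a Chromium DevTools API; names below are illustrative, not React's internals):

```js
// A task created where the Server Component was rendered...
const debugTask = console.createTask('<MyServerComponent>');

// ...can later re-enter that async context. Reporting an error inside run()
// attributes its async stack to the component's task in DevTools, even though
// the Error object itself was constructed somewhere else.
function reportErrorInOwnerTask(error) {
  debugTask.run(() => {
    console.error(error);
  });
}
```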
Before (Shell):
<img width="813" alt="Screenshot 2024-06-06 at 5 16 20 PM"
src="https://github.com/facebook/react/assets/63648/7da2d4c9-539b-494e-ba63-1abdc58ff13c">
After (App):
<img width="811" alt="Screenshot 2024-06-08 at 12 29 23 AM"
src="https://github.com/facebook/react/assets/63648/dbf40bd7-c24d-4200-81a6-5018bef55f6d">
## Overview
Updates `eslint-plugin-jest` and enables the recommended rules, with some
unhelpful ones turned off.
The main motivations are:
a) we have a few duplicated tests, which this found and I deleted
b) making sure we don't accidentally commit skipped tests
This lets us ensure that we use the original V8 format and it lets us
skip source mapping. Source mapping every call can be expensive since we
do it eagerly for server components even if an error doesn't happen.
In the case of an error being thrown, we don't actually always do this in
practice, because if a try/catch before us touches it, or if something in
onError touches it (which the default console.error does), it has
already been initialized. So we have to be resilient to thrown errors
having other formats.
These paths are not as perf sensitive since something actually threw, but if
you want better perf in these cases, you can simply do something like
`onError(error) { console.error(error.message) }` instead.
The server has to be aware of whether it's looking up original or compiled
output. I currently use the `file://` check to determine whether it's
referring to a source-mapped file or a compiled file in the fixture. A
bundled app can more easily check whether it's a bundle or not.
This one should be fully behind the `enableOwnerStacks` flag.
Instead of printing the parent Component stack all the way to the root,
this now prints the owner stack of every JSX callsite. It also includes
intermediate callsites between the Component and the JSX call, so it has
potentially more frames. Mainly it provides the line number of the JSX
callsite. In terms of the number of components it's a subset of the parent
component stack, so it's less information in that regard. This is usually
better since it's more focused on components that might affect the
output, but if something is contextual based on where it renders, it's
still good to have the parent stack. Therefore, I still use the parent
stack when printing DOM nesting warnings, but I plan on switching that
format to a diff view format instead (Next.js already reformats the parent
stack like this).
__Follow ups__
- Server Components show up in the owner stack for client logs but logs
done by Server Components don't yet get their owner stack printed as
they're replayed. They're also not yet printed in the server logs of the
RSC server.
- Server Component stack frames are formatted on the server and added to
the end, but this might be a different format than the browser's. E.g. if
the server is running V8 and the browser is running JSC, or vice versa.
Ideally we'd reformat them in terms of the client formatting.
- This doesn't yet update Fizz or DevTools. Those will be follow ups.
Fizz still prints parent stacks in the server side logs. The stacks
added to user space `console.error` calls by DevTools still get the
parent stacks instead.
- It also doesn't yet expose these to user space so there's no way to
get them inside `onCaughtError` for example or inside a custom
`console.error` override.
- In another follow up I'll use `console.createTask` instead and
completely remove these stacks if it's available.
In React 19, React will finally stop publishing UMD builds. This is
motivated primarily by the lack of use of the UMD format and the added
complexity of maintaining build infra for these releases. Additionally,
with ESM becoming more prevalent in browsers, and with services like
esm.sh that can host React as an ESM module, there are other options for
script-tag-based React loading.
This PR removes all the UMD build configs and forks.
There are some fixtures that still have references to UMD builds; however,
many of them already do not work (for instance, they use legacy
features like ReactDOM.render), and rather than block the removal on
these fixtures being brought up to date, we'll just move forward and fix
or remove fixtures as necessary in the future.
This adds support in Flight for serializing four kinds of streams:
- `ReadableStream` with objects as a model. This is a single-shot
iterator so you can read it only once. It can contain any value
including Server Components. Chunks are encoded as is, so if you send in
10 typed arrays, you get the same typed arrays out on the other side
(see the sketch after this list).
- Binary `ReadableStream` with `type: 'bytes'` option. This supports the
BYOB protocol. In this mode, the receiving side just gets `Uint8Array`s
and they can be split across any single byte boundary into arbitrary
chunks.
- `AsyncIterable` where the `AsyncIterator` function is different from
the `AsyncIterable` itself. In this case we assume that this might be a
multi-shot iterable, so we buffer its values and you can iterate it
multiple times on the other side. We support the `return` value as a
value in the single completion slot, but you can't pass values in
`next()`. If you want single-shot, return the AsyncIterator instead.
- `AsyncIterator`. This gets serialized as single-shot since it's just
an iterator.
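A rough usage sketch of the first kind above, with illustrative component and data names, assuming a typical RSC setup:

```js
// Server Component: create a single-shot object stream and forward it as a prop.
export default function Page() {
  const stream = new ReadableStream({
    start(controller) {
      controller.enqueue({text: 'first chunk'});
      controller.enqueue(new Uint8Array([1, 2, 3])); // typed arrays come out unchanged
      controller.close();
    },
  });
  return <ClientFeed stream={stream} />;
}

// On the client, the forwarded prop is a real ReadableStream and is read with
// the regular Web Streams API (it can only be consumed once):
async function logAll(stream) {
  const reader = stream.getReader();
  for (let step = await reader.read(); !step.done; step = await reader.read()) {
    console.log(step.value);
  }
}
```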
`AsyncIterable`/`AsyncIterator` yield Promises that are instrumented
with our `.status`/`.value` convention so that they can be synchronously
looped over if available. They are also lazily parsed upon read.
We can't do this with `ReadableStream` because we use the native
implementation of `ReadableStream` which owns the promises.
The format is a leading row that indicates which type of stream it is.
Then a new row with the same ID is emitted for every chunk, followed by
either an error or a close row.
`AsyncIterable`s can also be returned as children of Server Components
and then they're conceptually the same as fragment arrays/iterables.
They can't actually be used as children in Fizz/Fiber but there's a
separate plan for that. Only `AsyncIterable` not `AsyncIterator` will be
valid as children - just like sync `Iterable` is already supported but
single-shot `Iterator` is not. Notably, neither of these streams
represent updates over time to a value. They represent multiple values
in a list.
When the server stream is aborted we also close the underlying stream.
However, closing a stream on the client doesn't close the underlying
stream.
A couple of possible follow ups I'm not planning on doing right now:
- [ ] Free memory by releasing the buffer if an Iterator has been
exhausted. Single shots could be optimized further to release individual
items as you go.
- [ ] We could clean up the underlying stream if the only pending data
that's still flowing is from streams and all the streams have cleaned
up. It's not very reliable though. It's better to do cancellation for
the whole stream - e.g. at the framework level.
- [ ] Implement smarter Binary Stream chunk handling. Currently we wait
until we've received a whole row for binary chunks and copy them into
consecutive memory. We need this to preserve semantics when passing
typed arrays. However, for binary streams we don't need that. We can
just send whatever pieces we have so far.
## Summary
We want to enable the new event loop in React Native
(https://github.com/react-native-community/discussions-and-proposals/pull/744)
for all users in the new architecture (determined by the use of
bridgeless, not by the use of Fabric). In order to leverage that, we
need to also set the flag for the React reconciler to use microtasks for
scheduling (so we'll execute them at the right time in the new event
loop).
This migrates from the previous approach, which used a dynamic flag (to be
used at Meta), to checking a global set by React Native. The reasons
for doing this are:
1) We still need to determine this dynamically in OSS (based on
Bridgeless, not on Fabric).
2) We still need the ability to configure the behavior at Meta, and for
internal build system reasons we cannot access the flag that enables
microtasks in
[`ReactNativeFeatureFlags`](6c28c87c4d/packages/react-native/src/private/featureflags/ReactNativeFeatureFlags.js (L121)).
## How did you test this change?
Manually synchronized the changes to React Native and ran all tests for
the new architecture on it. Also tested manually.
> [!NOTE]
> This change depends on
> https://github.com/facebook/react-native/pull/43397 which has been
> merged already.
Stacked on https://github.com/facebook/react/pull/28351, please review
only the last commit.
Top-level description of the approach:
1. Once the user selects an element from the tree, the frontend asks the
backend to return the inspected element; this is where we simulate an error
happening in the `render` function of the component and then parse the
error stack. As an improvement, we should probably migrate from our custom
error stack parser implementation to `error-stack-parser` from npm.
2. When the frontend receives the inspected element and this object is being
propagated, we create a Promise for the symbolicated source, which is then
passed down to all components that use `source`.
3. These components read this promise with the `use` hook and are wrapped in
Suspense (see the sketch below).
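A hedged sketch of step 3, with illustrative names rather than the actual DevTools components:

```js
import {use, Suspense} from 'react';

// Suspends until the symbolicated source Promise resolves.
function SourceLocation({symbolicatedSourcePromise}) {
  const source = use(symbolicatedSourcePromise);
  if (source == null) {
    return null;
  }
  const {sourceURL, line, column} = source;
  return <span>{`${sourceURL}:${line}:${column}`}</span>;
}

// Wrapped in Suspense so the inspected-element panel renders immediately and
// the location fills in once symbolication finishes:
// <Suspense fallback={null}>
//   <SourceLocation symbolicatedSourcePromise={promise} />
// </Suspense>
```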
Caching:
1. For the browser extension, we cache Promises based on the requested
resource + key + column; we also added use of the
`chrome.devtools.inspectedWindow.getResource` API.
2. For the standalone case (RN), we cache based on the requested resource
url; we cache its content.
This Flow upgrade includes 2 fixes:
- Remove `React$StatelessFunctionalComponent` as that was replaced by
just `React$AbstractComponent`, since Flow doesn't make any guarantees; see
the Flow change here:
521317c48f
- Flow no longer allows `number`-type indexing into objects, which
uncovered an incorrect type that is actually an array of the data.
I used this command to upgrade:
```
yarn add -W flow-bin flow-remove-types hermes-parser hermes-eslint
```
and ran `yarn flow-ci` to check for errors in different configurations.
This wires up the use of `async_hooks` in the Node build (as well as the
Edge build when a global is available) in DEV mode only. This will be
used to track debug info about what suspended during an RSC pass.
Enabled behind a flag for now.
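A minimal sketch of the underlying mechanism (not React's actual hook), just to show how `async_hooks` can observe promise lifecycles during a render:

```js
const async_hooks = require('async_hooks');

const pendingPromises = new Map();

async_hooks
  .createHook({
    init(asyncId, type, triggerAsyncId) {
      if (type === 'PROMISE') {
        // Record which async resource spawned this promise; this is the raw
        // signal for "what suspended here" during an RSC pass.
        pendingPromises.set(asyncId, {triggerAsyncId, createdAt: Date.now()});
      }
    },
    promiseResolve(asyncId) {
      pendingPromises.delete(asyncId);
    },
  })
  .enable();
```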
This PR adds a new FB-specific configuration of Flight. We also need to
bundle a version of ReactSharedSubset that will be used for running
Flight on the server.
This initial implementation does not support Server Actions yet.
The FB Flight build still uses the text protocol on the server (the flag
`enableBinaryFlight` is set to false). It looks like we need some
changes in Hermes to properly support the binary format.
This lets a registered object or value be "tainted", which we block from
crossing the serialization boundary. It's only allowed to stay
in-memory.
This is an extra layer of protection against mistakes of transferring
data from a data access layer to a client. It doesn't provide perfect
protection, because it doesn't trace through derived values and
substrings. So it shouldn't be used as the only security layer but more
layers are better.
`taintObjectReference` is for specific object instances, not any nested
objects or values inside that object. It's useful for preventing specific
objects from getting passed as is. It ensures that you don't
accidentally leak values in a specific context. That can be for security
reasons like tokens, privacy reasons like personal data, or performance
reasons like avoiding passing large objects over the wire.
It might be a privacy violation to leak the age of a specific user, but
the number itself isn't blocked in any other context. As soon as the
value is extracted and passed on its own, without the object, it can
therefore leak.
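A hedged usage sketch of `taintObjectReference` (the exact export name/prefix depends on the release channel, and the data-access call is illustrative):

```js
import {experimental_taintObjectReference as taintObjectReference} from 'react';

export async function getUser(id) {
  const user = await db.user.findUnique({where: {id}}); // illustrative data layer
  taintObjectReference(
    'Do not pass the entire user record to the client.',
    user
  );
  return user; // Serializing `user` itself to a Client Component now errors.
}
```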
`taintUniqueValue` is useful for high entropy values such as hashes,
tokens or crypto keys that are very unique values. In that case it can
be useful to taint the actual primitive values themselves. These can be
encoded as a string, bigint or typed array. We don't currently check for
this value in a substring or inside other typed arrays.
Since values can be created from different sources, they don't just
follow garbage collection. In this case an additional object must be
provided that defines the lifetime of this value, i.e. how long it should
be blocked. It can be `globalThis` for essentially forever, but that
risks leaking memory forever when you're dealing with dynamic values
like a token read from a database. So in that case the idea is that
you pass the object that might end up in a cache.
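A hedged sketch of `taintUniqueValue`, tying the blocked value's lifetime to the object that holds it (again, the export name and data layer are illustrative):

```js
import {experimental_taintUniqueValue as taintUniqueValue} from 'react';

export async function getSession(id) {
  const session = await db.session.findUnique({where: {id}}); // illustrative
  taintUniqueValue(
    'Do not send the session token to the client.',
    session,       // lifetime object: blocked for as long as this object lives
    session.token  // the high-entropy primitive to block
  );
  return session;
}
```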
A request is the only thing that is expected to do any work. The
principle is that you can derive values from a tainted entry during a
request, including stashing them in a per-request cache. What you can't do
is store a derived value in a global module-level cache. At least not
without also tainting the object.
I do this by simply renaming the secret export name in the "subset"
bundle and this renamed version is what the FlightServer uses.
This requires us to be more diligent about always using the correct
instance of "react" in our tests, so there's a bunch of cleanup for
that.
Stacked on #27314.
Turbopack requires a different module loading strategy than Webpack, and
as such this PR implements a new package, `react-server-dom-turbopack`,
which largely follows `react-server-dom-webpack` but is implemented
for this new bundler.
To support MPA-style form submissions, useFormState sends down a key
that represents the identity of the hook on the page. It's based on the
key path of the component within the React tree; for deeply nested
hooks, this keypath can become very long. We can hash the key to make it
shorter.
Adds a method called createFastHash to the Stream Config interface.
We're not using this for security or obfuscation, only to generate a
more compact key without sacrificing too much collision resistance.
- In Node.js builds, createFastHash uses the built-in crypto module.
- In Bun builds, createFastHash uses Bun.hash. See:
https://bun.sh/docs/api/hashing#bun-hash
I have not yet implemented createFastHash in the Edge, Browser, or FB
(Hermes) stream configs because those environments do not have a
built-in hashing function that meets our requirements. (We can't use the
web standard `crypto` API because those methods are async, and yielding
to the main thread is too costly to be worth it for this particular use
case.) We'll likely use a pure JS implementation in those environments;
for now, they just return the original string without hashing it. I'll
address this in separate PRs.
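A hedged sketch of what the Node.js version could look like; the actual algorithm and digest encoding React uses may differ:

```js
import {createHash} from 'crypto';

// Maps a long useFormState keypath to a short, stable string. We only need
// collision resistance for keying, not cryptographic strength.
function createFastHash(keypath) {
  return createHash('md5').update(keypath).digest('base64url');
}

// e.g. a deeply nested keypath like '0>3>children>Form>children>1>0' becomes a
// fixed-length token instead of growing with tree depth.
```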
This exposes, but does not yet implement, a new experimental API called
useFormState. It's gated behind the enableAsyncActions flag.
useFormState has a similar signature to useReducer, except instead of a
reducer it accepts an (async) action function. React will wait until the
promise resolves before updating the state:
```js
async function action(prevState, payload) {
// ..
}
const [state, dispatch] = useFormState(action, initialState)
```
When used in combination with Server Actions, it will also support
progressive enhancement — a form that is submitted before it has
hydrated will have its state transferred to the next page. However, like
the other action-related hooks, it works with fully client-driven
actions, too.
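A hedged sketch of wiring the hook to a form, assuming `signup` is an async action in the shape shown above:

```js
function SignupForm() {
  const [state, dispatch] = useFormState(signup, {message: null});
  return (
    <form action={dispatch}>
      <input name="email" type="email" required />
      <button type="submit">Sign up</button>
      {state.message != null ? <p role="alert">{state.message}</p> : null}
    </form>
  );
}
```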
This exposes a `resume()` API to go with `prerender()` (only in
experimental). It doesn't work yet since we don't yet emit the postponed
state, so it's not yet tested.
The main thing this does is rename ResponseState->RenderState and
Resources->ResumableState. We separated out resources into a separate
concept preemptively since it seemed separate enough, but it probably
doesn't warrant being its own concept. The result is that we have a
per-request RenderState in the Config, which is really just temporary state
and things that must be flushed completely in the prerender. Most things
should be ResumableState.
Most options are specified in the `prerender()` and transferred into the
`resume()` but certain options that are unique per request can't be.
Notably `nonce` is special. This means that bootstrap scripts and
external runtime can't use `nonce` in this mode. They need to have a CSP
configured to deal with external scripts, but not inline.
We need to be able to restore the state of things that we've already emitted
in the prerender. We could have separate snapshot/restore methods that
do this work when it happens, but that means we have to explicitly do
that work. This design tries to keep to the principle that we just
work with resumable data structures instead, so that we're designing for
it with every feature. It also makes restoring faster since it goes
straight into the data structure.
This is not yet a serializable format. That can be done in a follow up.
We also need to vet that each step makes sense. Notably, it's a bit
unclear how stylesToHoist will work.
This uses the same mechanism as [large
strings](https://github.com/facebook/react/pull/26932) to encode chunks
of length based binary data in the RSC payload behind a flag.
I introduce a new BinaryChunk type that's specific to each stream and
ways to convert into it. That's because we sometimes need all chunks to
be Uint8Array for the output, even if the source is another array buffer
view, and sometimes we need to clone it before transferring.
Each type of typed array is its own row tag. This lets us ensure that
the instance is directly in the right format in the cached entry instead
of creating a wrapper at each reference. Ideally this is also how
Map/Set should work but those are lazy which complicates that approach a
bit.
We assume both server and client use little-endian for now. If we want
to support other modes, we'd convert it to/from little-endian so that
the transfer protocol is always little-endian. That way the common
clients can be the fastest possible.
So far this only implements Server to Client. Still need to implement
Client to Server for parity.
NOTE: This is the first time we make RSC effectively a binary format.
This is not compatible with existing SSR techniques which serialize the
stream as unicode in the HTML. To be compatible, those implementations
would have to use base64 or something like that. Which is what we'll do
when we move this technique to be built-in to Fizz.
This isn't really meant to be actually used; there are many issues with
this approach, but it shows the capabilities as a proof of concept.
It's a new reference implementation package `react-server-dom-esm` as
well as a fixture in `fixtures/flight-esm` (fork of `fixtures/flight`).
This works pretty much the same as pieces we already have in the Webpack
implementation but instead of loading modules using Webpack on the
client it uses native browser ESM.
To really show it off, I don't use any JSX in the fixture and so it also
doesn't use Babel or any compilation of the files.
This works because we don't actually bundle the server in the reference
implementation in the first place. We instead use [Node.js
Loaders](https://nodejs.org/api/esm.html#loaders) to intercept files
that contain `"use client"` and `"use server"` and replace them. There's
a simple check for those exact bytes, and no parsing, so this is very
fast.
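A hedged sketch of that loader idea (not the actual `react-server-dom-esm` loader); `generateClientReferenceSource` is a hypothetical helper:

```js
// Registered via `node --loader ./loader.js` (or module.register in newer Node).
export async function load(url, context, nextLoad) {
  const result = await nextLoad(url, context);
  const source = result.source.toString();
  if (source.includes('"use client"')) {
    // Swap the real implementation for client-reference exports that only carry
    // the module's URL; the browser then loads the real file natively.
    return {...result, source: generateClientReferenceSource(url, source)};
  }
  // A similar check for '"use server"' swaps in server-reference stubs.
  return result;
}
```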
Since the client isn't actually bundled, there's no module map needed.
We can just send the file path to the file we want to load in the RSC
payload for client references.
Since the existing reference implementation for Node.js already used ESM
to load modules on the server, that all works the same, including Server
Actions. No bundling.
There is one case that isn't implemented here: importing a `"use
server"` file from a Client Component. We don't have that implemented in
the Webpack reference implementation either - only in Next.js atm. In
Webpack it would be implemented as a Webpack loader.
There are a few ways this can be implemented without a bundler:
- We can intercept the request from the browser importing this file in
the HTTP server, and do a quick scan for `"use server"` in the file and
replace it just like we do with loaders in Node.js. This is effectively
how Vite works and likely how anyone using this technique would have to
support JSX anyway.
- We can use native browser "loaders" once that's eventually available
in the same way as in Node.js.
- We can generate import maps for each file and replace it with a
pointer to a placeholder file. This requires scanning these ahead of
time, which defeats the purpose.
Another case that's not implemented is the inline `"use server"` closure
in a Server Component. That would require the existing loader to be a
bit smarter, but it would still only "compile" files that contain those
bytes in the fast-path check. This would also happen in the loader that
already exists, so it wouldn't do anything substantially different from
what we currently have here.
The bindings upstream in Relay have been removed so we don't need these
builds anymore. The idea is to revisit an FB integration of Flight, but
it wouldn't use the Relay-specific bindings. It's a bit unclear how it
would look, but likely more like the OSS version, so it's not worth keeping
these around.
The `dom-relay` name also included the FB specific Fizz implementation
of the streaming config so I renamed that to `dom-fb`. There's no Fizz
implementation for Native yet so I just removed `native-relay`.
We created a configurable fork for how to encode the output of Flight
and the Relay implementation encoded it as JSON objects instead of
strings/streams. The new implementation would likely be more stream-like
and just encode it directly as string/binary chunks. So I removed those
indirections so that this can just be declared inline in
ReactFlightServer/Client.
- substr is Annex B
- substring silently flips its arguments if they're in the "wrong order", which is confusing
- slice is better than sliced bread (no pun intended) and also it works the same way on Arrays so there's less to remember
---
> I'd be down to just lint and enforce a single form just for the potential compression savings by using a repeated string.
_Originally posted by @sebmarkbage in https://github.com/facebook/react/pull/26663#discussion_r1170455401_
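A quick comparison of the three methods:

```js
'react'.substr(1, 3);    // 'eac' (Annex B; takes start and length, not a range)
'react'.substring(3, 1); // 'ea'  (arguments silently flipped to (1, 3))
'react'.slice(3, 1);     // ''    (out-of-order range stays empty, like Array#slice)
'react'.slice(-3);       // 'act' (negative indices work the same as on arrays)
```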
Over the years, we've gradually aligned on a set of best practices for
testing concurrent React features in this repo. The default in most
cases is to use `act`, the same as you would do when testing a real
React app. However, because we're testing React itself, as opposed to an
app that uses React, our internal tests sometimes need to make
assertions on intermediate states that `act` intentionally disallows.
For those cases, we built a custom set of Jest assertion matchers that
provide greater control over the concurrent work queue. It works by
mocking the Scheduler package. (When we eventually migrate to using
native postTask, it would probably work by stubbing that instead.)
A problem with these helpers that we recently discovered is that, because
they are synchronous function calls, they aren't sufficient if the work
you need to flush is scheduled in a microtask - we don't control the
microtask queue, and can't mock it.
`act` addresses this problem by encouraging you to await the result of
the `act` call. (It's not currently required to await, but in future
versions of React it likely will be.) It will then continue flushing
work until both the microtask queue and the Scheduler queue are
exhausted.
We can follow a similar strategy for our custom test helpers, by
replacing the current set of synchronous helpers with a corresponding
set of async ones:
- `expect(Scheduler).toFlushAndYield(log)` -> `await waitForAll(log)`
- `expect(Scheduler).toFlushAndYieldThrough(log)` -> `await
waitFor(log)`
- `expect(Scheduler).toFlushUntilNextPaint(log)` -> `await
waitForPaint(log)`
These APIs are inspired by the existing best practice for writing e2e
React tests. Rather than mock all task queues, in an e2e test you set up
a timer loop and wait for the UI to match an expected condition. Although
we are mocking _some_ of the task queues in our tests, the general
principle still holds: it makes it less likely that our tests will
diverge from real world behavior in an actual browser.
In this commit, I've implemented the new testing helpers and converted
one of the Suspense tests to use them. In subsequent steps, I'll codemod
the rest of our test suite.
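A hedged before/after sketch of that conversion, assuming the helpers live in the repo's `internal-test-utils` package:

```js
const {waitForAll, waitFor} = require('internal-test-utils');

it('flushes work, including work scheduled in a microtask', async () => {
  // Before: expect(Scheduler).toFlushAndYield(['A', 'B']);
  await waitForAll(['A', 'B']);

  // Before: expect(Scheduler).toFlushAndYieldThrough(['C']);
  await waitFor(['C']);
  // ...assert on the intermediate state here before flushing the rest.
});
```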
Hermes parser is the preferred parser for Flow code going forward. We
need to upgrade to this parser to support new Flow syntax like function
`this` context type annotations or `ObjectType['prop']` syntax.
Unfortunately, there are quite a few upgrades here to make it work (the
changes depend on each other):
- ~Upgrade `eslint` to `8.*`~ reverted this as the React eslint plugin
tests depend on the older version and there's a [yarn
bug](https://github.com/yarnpkg/yarn/issues/6285) that prevents resolving
`devDependencies` and `peerDependencies` to different versions.
- Remove the `eslint-config-fbjs` preset dependency and inline the rules;
imho this makes it a lot clearer what the rules are.
- Remove the turned-off `jsx-a11y/*` rules and their dependency instead
of inlining those from the `fbjs` config.
- Update the parser and its dependency from `babel-eslint` to `hermes-eslint`.
- The `ft-flow/no-unused-expressions` rule replaces `no-unused-expressions`,
which now allows standalone type asserts, e.g. `(foo: number);`
- A bunch of globals added to the eslint config
- Disabled `no-redeclare`; it seems like the eslint upgrade made this check
more precise and it now warns against redefined globals like
`__EXPERIMENTAL__` (in rollup scripts) or `fetch` (when importing fetch
from node-fetch).
- Minor lint fixes like duplicate keys in objects.
We've heard from multiple contributors that the Reconciler forking
mechanism was confusing and/or annoying to deal with. Since it's
currently unused and there are no immediate plans to start using it again,
this removes the forking.
Fully removing the fork is split into 2 steps to preserve file history:
**This PR**
- remove `enableNewReconciler` feature flag.
- remove `unstable_isNewReconciler` export
- remove eslint rules for cross fork imports
- remove `*.new.js` files and update imports
- merge non-suffixed files into `*.old` files where both exist
(sometimes types were defined there)
**#25775**
- rename `*.old` files
This extends the scope of the cache and fetch instrumentation using
AsyncLocalStorage for microtasks. This is an intermediate step. It sets
up the dispatcher only once. This is unique to RSC because it uses the
react.shared-subset module for its shared state.
Ideally we should support multiple renderers. We should also have this
take over from an outer SSR's instrumented fetch. We should also be able
to fall back to per-request global state where AsyncLocalStorage doesn't
exist, and then to the fully client-side solutions. I'm still figuring out
the right wiring for that, so this is a temporary hack.
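A hedged sketch of the AsyncLocalStorage idea itself (not React's actual wiring): a per-request store that cache and fetch instrumentation can read from any microtask spawned during that request.

```js
const {AsyncLocalStorage} = require('async_hooks');

const requestStorage = new AsyncLocalStorage();

function runWithRequestCache(render) {
  // Everything awaited inside `render`, including microtasks, sees this Map.
  return requestStorage.run(new Map(), render);
}

function getCacheForType(createInitial) {
  // Assumes we're inside runWithRequestCache for the current request.
  const cache = requestStorage.getStore();
  if (!cache.has(createInitial)) {
    cache.set(createInitial, createInitial());
  }
  return cache.get(createInitial);
}
```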
* Extract `act` environment check into function
`act` checks the environment to determine whether to fire a warning.
We're changing how this check works in React 18. As a first step, this
refactors the logic into a single function. No behavior changes yet.
* Use IS_REACT_ACT_ENVIRONMENT to disable warnings
If `IS_REACT_ACT_ENVIRONMENT` is set to `false`, we will suppress
any `act` warnings. Otherwise, the behavior of `act` is the same as in
React 17: if `jest` is defined, it warns.
In concurrent mode, the plan is to remove the `jest` check and only warn
if `IS_REACT_ACT_ENVIRONMENT` is true. I have not implemented that
part yet.
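For reference, a hedged sketch of what opting in or out would look like in a test setup file once this lands:

```js
// In a unit-test environment that wraps updates in `act`:
globalThis.IS_REACT_ACT_ENVIRONMENT = true;

// In an end-to-end-style environment where real scheduling should happen,
// suppress the warnings instead:
globalThis.IS_REACT_ACT_ENVIRONMENT = false;
```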
* Hoist error codes import to module scope
When this code was written, the error codes map (`codes.json`) was
created on-the-fly, so we had to lazily require from inside the visitor.
Because `codes.json` is now checked into source, we can import it a
single time in module scope.
* Minify error constructors in production
We use a script to minify our error messages in production. Each message
is assigned an error code, defined in `scripts/error-codes/codes.json`.
Then our build script replaces the messages with a link to our
error decoder page, e.g. https://reactjs.org/docs/error-decoder.html/?invariant=92
This enables us to write helpful error messages without increasing the
bundle size.
Right now, the script only works for `invariant` calls. It does not work
if you throw an Error object. This is an old Facebookism that we don't
really need, other than the fact that our error minification script
relies on it.
So, I've updated the script to minify error constructors, too:
Input:
Error(`A ${adj} message that contains ${noun}`);
Output:
Error(formatProdErrorMessage(ERR_CODE, adj, noun));
It only works for constructors that are literally named Error, though we
could add support for other names, too.
As a next step, I will add a lint rule to enforce that errors written
this way must have a corresponding error code.
* Minify "no fallback UI specified" error in prod
This error message wasn't being minified because it doesn't use
invariant. The reason it didn't use invariant is because this particular
error is created without begin thrown — it doesn't need to be thrown
because it's located inside the error handling part of the runtime.
Now that the error minification script supports Error constructors, we
can minify it by assigning it a production error code in
`scripts/error-codes/codes.json`.
To support the use of Error constructors more generally, I will add a
lint rule that enforces each message has a corresponding error code.
* Lint rule to detect unminified errors
Adds a lint rule that detects when an Error constructor is used without
a corresponding production error code.
We already have this for `invariant`, but not for regular errors, i.e.
`throw new Error(msg)`. There's also nothing that enforces the use of
`invariant` besides convention.
There are some packages where we don't care to minify errors. These are
packages that run in environments where bundle size is not a concern,
like react-pg. I added an override in the ESLint config to ignore these.
* Temporarily add invariant codemod script
I'm adding this codemod to the repo temporarily, but I'll revert it
in the same PR. That way we don't have to check it in but it's still
accessible (via the PR) if we need it later.
* [Automated] Codemod invariant -> Error
This commit contains only automated changes:
npx jscodeshift -t scripts/codemod-invariant.js packages --ignore-pattern="node_modules/**/*"
yarn linc --fix
yarn prettier
I will do any manual touch ups in separate commits so they're easier
to review.
* Remove temporary codemod script
This reverts the codemod script and ESLint config I added temporarily
in order to perform the invariant codemod.
* Manual touch ups
A few manual changes I made after the codemod ran.
* Enable error code transform per package
Currently we're not consistent about which packages should have their
errors minified in production and which ones should.
This adds a field to the bundle configuration to control whether to
apply the transform. We should decide what the criteria is going
forward. I think it's probably a good idea to minify any package that
gets sent over the network. So yes to modules that run in the browser,
and no to modules that run on the server and during development only.
* Revise ESLint rules for string coercion
Currently, react uses `'' + value` to coerce mixed values to strings.
This code will throw for Temporal objects or symbols.
To make string coercion safer and to improve user-facing error messages,
this commit adds a new ESLint rule called `safe-string-coercion`.
This rule has two modes: a production mode and a non-production mode.
* If the `isProductionUserAppCode` option is true, then `'' + value`
coercions are allowed (because they're faster, although they may
throw) and `String(value)` coercions are disallowed. Exception:
when building error messages or running DEV-only code in prod
files, `String()` should be used because it won't throw.
* If the `isProductionUserAppCode` option is false, then `'' + value`
coercions are disallowed (because they may throw, and in non-prod
code it's not worth the risk) and `String(value)` are allowed.
Production mode is used for all files which will be bundled with
developers' userland apps. Non-prod mode is used for all other React
code: tests, DEV blocks, devtools extension, etc.
In production mode, in addition to flagging `String(value)` calls,
the rule will also flag `'' + value` or `value + ''` coercions that may
throw. The rule is smart enough to silence itself in the following
"will never throw" cases:
* When the coercion is wrapped in a `typeof` test that restricts to safe
(non-symbol, non-object) types. Example:
if (typeof value === 'string' || typeof value === 'number') {
thisWontReport('' + value);
}
* When what's being coerced is a unary function result, because unary
functions never return an object or a symbol.
* When the coerced value is a commonly-used numeric identifier:
`i`, `idx`, or `lineNumber`.
* When the statement immediately before the coercion is a DEV-only
call to a function from shared/CheckStringCoercion.js. This call is a
no-op in production, but in DEV it will show a console error
explaining the problem, then will throw right after a long explanatory
code comment so that debugger users will have an idea what's going on.
The check function call must be in the following format:
if (__DEV__) {
checkXxxxxStringCoercion(value);
};
Manually disabling the rule is usually not necessary because almost all
prod use of the `'' + value` pattern falls into one of the categories
above. But in the rare cases where the rule isn't smart enough to detect
safe usage (e.g. when a coercion is inside a nested ternary operator),
manually disabling the rule will be needed.
The rule should also be manually disabled in prod error handling code
where `String(value)` should be used for coercions, because it'd be
bad to throw while building an error message or stack trace!
The prod and non-prod modes have differentiated error messages to
explain how to do a proper coercion in that mode.
If a production check call is needed but is missing or incorrect
(e.g. not in a DEV block or not immediately before the coercion), then
a context-sensitive error message will be reported so that developers
can figure out what's wrong and how to fix the problem.
Because string coercions are now handled by the `safe-string-coercion`
rule, the `no-primitive-constructor` rule no longer flags `String()`
usage. It still flags `new String(value)` because that usage is almost
always a bug.
* Add DEV-only string coercion check functions
This commit adds DEV-only functions to check whether coercing
values to strings using the `'' + value` pattern will throw. If it will
throw, these functions will:
1. Display a console error with a friendly error message describing
the problem and the developer can fix it.
2. Perform the coercion, which will throw. Right before the line where
the throwing happens, there's a long code comment that will help
debugger users (or others looking at the exception call stack) figure
out what happened and how to fix the problem.
One of these check functions should be called before all string coercion
of user-provided values, except when the coercion is guaranteed not
to throw, e.g.
* if inside a typeof check like `if (typeof value === 'string')`
* if coercing the result of a unary function like `+value` or `value++`
* if coercing a variable named in a whitelist of numeric identifiers:
`i`, `idx`, or `lineNumber`.
The new `safe-string-coercion` internal ESLint rule enforces that
these check functions are called when they are required.
Only use these check functions in production code that will be bundled
with user apps. For non-prod code (and for production error-handling
code), use `String(value)` instead which may be a little slower but will
never throw.
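A hedged sketch of the shape of such a check function in shared/CheckStringCoercion.js (wording and helper names are illustrative):

```js
function checkFormFieldValueStringCoercion(value) {
  if (__DEV__) {
    if (willCoercionThrow(value)) {
      console.error(
        'Form field values must be strings, not %s.' +
          ' This value must be coerced to a string before using it here.',
        typeName(value)
      );
      // Retry the coercion so the throw happens right after an explanatory
      // comment in this file, making the call stack easier to interpret.
      return testStringCoercion(value);
    }
  }
}
```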
* Add failing tests for string coercion
Added failing tests to verify:
* That input, select, and textarea elements with value and defaultValue
set to Temporal-like objects will throw when coerced to string
using the `'' + value` pattern.
* That text elements will throw for Temporal-like objects
* That dangerouslySetInnerHTML will *not* throw for Temporal-like
objects because this value is not cast to a string before passing to
the DOM.
* That keys that are Temporal-like objects will throw
All tests above validate the friendly error messages thrown.
* Use `String(value)` for coercion in non-prod files
This commit switches non-production code from `'' + value` (which
throws for Temporal objects and symbols) to instead use `String(value)`
which won't throw for these or other future plus-phobic types.
"Non-produciton code" includes anything not bundled into user apps:
* Tests and test utilities. Note that I didn't change legacy React
test fixtures because I assumed it was good for those files to
act just like old React, including coercion behavior.
* Build scripts
* Dev tools package - In addition to switching to `String`, I also
removed special-case code for coercing symbols which is now
unnecessary.
* Add DEV-only string coercion checks to prod files
This commit adds DEV-only function calls to check if string coercion
using `'' + value` will throw, which it will if the value is a Temporal
object or a symbol, because those types can't be added with `+`.
If it will throw, then in DEV these checks will show a console error
to help the user understand what went wrong and how to fix the
problem. After emitting the console error, the check functions will
retry the coercion which will throw with a call stack that's easy (or
at least easier!) to troubleshoot because the exception happens right
after a long comment explaining the issue. So whether the user is in
a debugger, looking at the browser console, or viewing the in-browser
DEV call stack, it should be easy to understand and fix the problem.
In most cases, the safe-string-coercion ESLint rule is smart enough to
detect when a coercion is safe. But in rare cases (e.g. when a coercion
is inside a ternary) this rule will have to be manually disabled.
This commit also switches error-handling code to use `String(value)`
for coercion, because it's bad to crash when you're trying to build
an error message or a call stack! Because `String()` is usually
disallowed by the `safe-string-coercion` ESLint rule in production
code, the rule must be disabled when `String()` is used.
* Move files
* Update paths
* Rename import variables
* Rename /server to /writer
This is mainly because "React Server Server" is weird so we need another
dimension.
* Use "react-server" convention to enforce that writer is only loaded in a server
* Add new effect fields to old fork
So that when comparing relative performance, we don't penalize the new
fork for using more memory.
* Add firstEffect, et al fields to new fork
We need to bisect the changes to the recent commit phase refactor. To
do this, we'll need to add back the effect list temporarily.
This only adds them to the Fiber type so that the memory is the same
as the old fork.
This adds a new dimension similar to dom-relay. It's different from
"native" which would be Flight for RN without Relay.
This has some copy-pasta that's the same between the two Relay builds but
the key difference will be Metro and we're not quite sure what other
differences there will be yet.
* bump package to latest
* update files to respect lint
* disable object-type-delimiter rule to work with prettier
* disable rule to let flow check pass
Also resolve an uncaught error in extension build (#18843).
Co-authored-by: Brian Vaughn <brian.david.vaughn@gmail.com>
Co-authored-by: Brian Vaughn <bvaughn@fb.com>