* First pass at scaffolding out the Node implementation of react-data.
While incomplete, this patch contains some changes to the react-data
package in order to start adding support for Node.
The first part of this change accounts for splitting react-data/fetch
into two discrete entries, adding (and defaulting to) the Node
implementation.
The second part is sketching out a rough approximation of `fetch` for
Node. This implementation is not complete by any means, but provides a
starting point.
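To make the shape of that approximation concrete, here is a minimal sketch of a Node `fetch` built on the built-in `https` module. It is illustrative only; the function name, response shape, and module layout are not the actual react-data code.
```js
// A very rough sketch of a Node fetch shim (illustrative, not the react-data
// implementation). Only handles https URLs and buffers the whole body.
const https = require('https');

function nodeFetch(url) {
  return new Promise((resolve, reject) => {
    https
      .get(url, response => {
        const chunks = [];
        response.on('data', chunk => chunks.push(chunk));
        response.on('end', () => {
          const body = Buffer.concat(chunks).toString('utf8');
          resolve({
            status: response.statusCode,
            ok: response.statusCode >= 200 && response.statusCode < 300,
            text: () => Promise.resolve(body),
            json: () => Promise.resolve(JSON.parse(body)),
          });
        });
      })
      .on('error', reject);
  });
}
```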
* Remove NodeFetch module and put it directly into ReactDataFetchNode.
* Replaced react-data with react-fetch.
This patch shuffles around some of the scaffolding that was in
react-data in favor of react-fetch. It also removes the additional
"fetch" package in favor of something flatter.
* Tweak package organization
* Simplify and add a test
Co-authored-by: Dan Abramov <dan.abramov@me.com>
* Expose LegacyHidden type
I will use this internally at Facebook to migrate away from
<div hidden />. The end goal is to migrate to the Offscreen type, but
that has different semantics. This is an incremental step.
* Disable <div hidden /> API in new fork
Migrates to the unstable_LegacyHidden type instead. The old fork does
not support the new component type, so I updated the tests to use an
indirection that picks the correct API. I will remove this once the
LegacyHidden (and/or Offscreen) type has landed in both implementations.
* Add gated warning for `<div hidden />` API
Only exists so we can detect callers in www and migrate them to the new
API. Should not be visible to anyone outside the React Core team.
Adds several new experimental APIs to aid with automated testing.
Each of the methods below accepts an array of "selectors" that identifies a path (or paths) through a React tree. There are four basic selector types:
* Component: Matches Fibers with the specified React component type
* Role: Matches Host Instances with the specified (explicit or implicit) accessibility role.
* Test name: Matches Host Instances with a matching data-testname attribute.
* Text: Matches Host Instances that directly contain the specified text.
* There is also a special lookahead selector type that enables further matching within a path (without actually including the path in the result). This selector type was inspired by the :has() CSS pseudo-class. It enables e.g. matching a <section> that contains a specific header text, and then finding a "like" button within that <section>.
API
* findAllNodes(): Finds all Host Instances (e.g. HTMLElement) within a host subtree that match the specified selector criteria.
* getFindAllNodesFailureDescription(): Returns an error string describing the matched and unmatched portions of the selector query.
* findBoundingRects(): For all React components within a host subtree that match the specified selector criteria, return a set of bounding boxes that covers the bounds of the nearest (shallowest) Host Instances within those trees.
* observeVisibleRects(): For all React components within a host subtree that match the specified selector criteria, observe whether their bounding rects are visible in the viewport and not occluded.
* focusWithin(): For all React components within a host subtree that match the specified selector criteria, set focus within the first focusable Host Instance (as if you started before this component in the tree and moved focus forwards one step).
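As a rough illustration of how these selectors compose, here is a hypothetical usage sketch. The selector-constructor names and the import path are assumptions based on the selector types listed above, not confirmed exports, and `App` stands in for an application component rendered into `container`.
```js
// Hypothetical sketch: find every button inside a region whose text includes
// "Comments". The createXSelector helpers and the import path are assumed.
import {
  createComponentSelector,
  createRoleSelector,
  createTextSelector,
  createHasPseudoClassSelector,
  findAllNodes,
} from 'react-dom/unstable_testing';
import App from './App';

const container = document.getElementById('root');

const likeButtons = findAllNodes(container, [
  createComponentSelector(App),
  createRoleSelector('region'),
  createHasPseudoClassSelector([createTextSelector('Comments')]),
  createRoleSelector('button'),
]);
```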
* Rename Flight to Transport
Flight is still the codename for the implementation details (like Fiber).
However, now the public package is react-transport-... which is only
intended to be used directly by integrators.
* Rename names
See PR #18796 for more information.
All of the changes I've made in this commit are behind the
`enableNewReconciler` flag. Merging this to master will not affect the
open source builds or the build that we ship to Facebook.
The only build that is affected is the `ReactDOMForked` build, which is
deployed to Facebook **behind an experimental flag (currently disabled
for all users)**. We will use this flag to gradually roll out the new
reconciler, and quickly roll it back if we find any problems.
Because we have those protections in place, what I'm aiming for with
this initial PR is the **smallest possible atomic change that lands
cleanly and doesn't rely on too many hacks**. The goal has not been to
get every single test or feature passing, and it definitely is not to
implement all the features that we intend to build on top of the new
model. When possible, I have chosen to preserve existing semantics and
defer changes to follow-up steps. (Listed in the section below.)
(I did not end up having to disable any tests, although if I had, that
should not have necessarily been a merge blocker.)
For example, even though one of the primary goals of this project is to
improve our model for parallel Suspense transitions, in this initial
implementation, I have chosen to keep the same core heuristics for
sequencing and flushing that existed in the ExpirationTimes model: low
priority updates cannot finish without also finishing high priority
ones.
Despite all these precautions, **because the scope of this refactor is
inherently large, I do expect we will find regressions.** The flip side
is that I also expect the new model to improve the stability of the
codebase and make it easier to fix bugs when they arise.
* Rename ReactCache -> ReactCacheOld
We still use it in some tests so I'm going to leave it for now. I'll start making the new one in parallel in the react package.
* Add react/unstable-cache entry point
* Add react-data entry point
* Initial implementation of cache and data/fetch
* Address review
* DevTools console override handles new component stack format
DevTools does not attempt to mimic the default browser console format for its component stacks but it does properly detect the new format for Chrome, Firefox, and Safari.
* Detect double stacks in the new format in tests
* Remove unnecessary uses of getStackByFiberInDevAndProd
These all execute in the right execution context already.
* Set the debug fiber around the cases that don't have an execution context
* Remove stack detection in our console log overrides
We never pass custom stacks as part of the args anymore.
* Bonus: Don't append getStackAddendum to invariants
We print component stacks for every error anyway so this is just duplicate
information.
We typecheck the reconciler against each one of our host configs.
`yarn flow dom` checks it against the DOM renderer, `yarn flow native`
checks it against the native renderer, and so on.
To do this, we generate separate flowconfig files.
Currently, there is no root-level host config, so running Flow
directly via `flow` CLI doesn't work. You have to use the `yarn flow`
command and pick a specific renderer.
A drawback of this design, though, is that our Flow setup doesn't work
with other tooling. Namely, editor integrations.
I think the intent of this was maybe so you don't run Flow against a
renderer other than the one you intended, see it pass, and wrongly think you fixed
all the errors. However, since they all run in CI, I don't think this
is a big deal. In practice, I nearly always run Flow against the same
renderer (DOM), and I'm guessing that's the most common workflow for
others, too.
So what I've done in this commit is modify the `yarn flow` command to
copy the generated `.flowconfig` file into the root directory. The
editor integration will pick this up and show Flow information for
whatever was the last renderer you checked.
Everything else about the setup is the same, and all the renderers will
continue to be checked by CI.
* Migrate conditional tests to gate pragma
I searched through the codebase for this pattern:
```js
describe('test suite', () => {
  if (!__EXPERIMENTAL__) { // or some other condition
    test("empty test so Jest doesn't complain", () => {});
    return;
  }
  // Unless we're in experimental mode, none of the tests in this block
  // will run.
});
```
and converted them to the `@gate` pragma instead.
The reason this pattern isn't preferred is that you end up disabling
more tests than you need to.
* Add flag for www release channels
Using a heuristic where I check a flag that is known to only be enabled
in www. I left a TODO to instead set the release channel explicitly in
each test config.
* Add pragma for feature testing: @gate
The `@gate` pragma declares under which conditions a test is expected to
pass.
If the gate condition passes, then the test runs normally (same as if
there were no pragma).
If the gate condition fails, then the test runs and is *expected to fail*.
An alternative to `it.experimental` and similar proposals.
Examples
--------
Basic:
```js
// @gate enableBlocksAPI
test('passes only if Blocks API is available', () => {/*...*/})
```
Negation:
```js
// @gate !disableLegacyContext
test('depends on a deprecated feature', () => {/*...*/})
```
Multiple flags:
```js
// @gate enableNewReconciler
// @gate experimental
test('passes only if both flags are enabled', () => {/*...*/})
```
Logical operators (yes, I'm sorry):
```js
// @gate experimental && (enableNewReconciler || disableSchedulerTimeoutBasedOnReactExpirationTime)
test('concurrent mode, doesn\'t work in old fork unless Scheduler timeout flag is disabled', () => {/*...*/})
```
Strings and comparison operators
No use case yet but I figure eventually we'd use this to gate on
different release channels:
```js
// @gate channel === "experimental" || channel === "modern"
test('works in OSS experimental or www modern', () => {/*...*/})
```
How does it work?
I'm guessing those last two examples might be controversial. Supporting
those cases did require implementing a mini-parser.
The output of the transform is very straightforward, though.
Input:
```js
// @gate a && (b || c)
test('some test', () => {/*...*/})
```
Output:
```js
_test_gate(ctx => ctx.a && (ctx.b || ctx.c), 'some test', () => {/*...*/});
```
It also works with `it`, `it.only`, and `fit`. It leaves `it.skip` and
`xit` alone because those tests are disabled anyway.
`_test_gate` is a global method that I set up in our Jest config. It
works about the same as the existing `it.experimental` helper.
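A minimal sketch of what such a helper could look like follows; the real setup lives in our Jest config and differs in details, and `getTestFlags` and `expectTestToFail` are assumed helpers, not actual names.
```js
// Illustrative sketch only: composes Jest's `test` instead of patching it.
global._test_gate = (gateFn, testName, callback) => {
  let shouldPass;
  try {
    shouldPass = gateFn(getTestFlags());
  } catch (error) {
    // If the gate expression itself is broken, surface that as a test failure.
    test(testName, () => {
      throw error;
    });
    return;
  }
  if (shouldPass) {
    test(testName, callback);
  } else {
    // Gate is off: run the test anyway, but expect it to fail.
    test(`[GATED, SHOULD FAIL] ${testName}`, () => expectTestToFail(callback));
  }
};
```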
The context (`ctx`) argument is whatever we want it to be. I set it up
so that it throws if you try to access a flag that doesn't exist. I also
added some shortcuts for common gating conditions, like `old`
and `new`:
```js
// @gate experimental
test('experimental feature', () => {/*...*/})
// @gate new
test('only passes in new reconciler', () => {/*...*/})
```
Why implement this as a pragma instead of a runtime API?
- Doesn't require monkey patching built-in Jest methods. Instead it
compiles to a runtime function that composes Jest's API.
- Will be easy to upgrade if Jest ever overhauls their API or we switch
to a different testing framework (unlikely but who knows).
- It feels lightweight so hopefully people won't feel gross using it.
For example, adding or removing a gate pragma will never affect the
indentation of the test, unlike if you wrapped the test in a
conditional block.
* Compatibility with console error/warning tracking
We patch console.error and console.warn to track unexpected calls
in our tests. If there's an unexpected call, we usually throw inside
an `afterEach` hook. However, that's too late for tests that we
expect to fail, because our `_test_gate` runtime can't capture the
error. So I also check for unexpected calls inside `_test_gate`.
* Move test flags to dedicated file
Added some instructions for how the flags are set up and how to
use them.
* Add dynamic version of gate API
Receives same flags as the pragma.
If we ever decide to revert the pragma, we can codemod them to use
this instead.
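For illustration, a sketch of how the dynamic form might be used inside a test body; it receives the same flag context as the pragma, and the assertions below are placeholders.
```js
// Sketch of the dynamic API: same flags as the pragma, but usable inline.
test('branches on a feature flag at runtime', () => {
  if (gate(flags => flags.experimental)) {
    // ...assert experimental-only behavior...
  } else {
    // ...assert the default behavior...
  }
});
```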
* Implement component stack extraction hack
* Normalize errors in tests
This drops the requirement to include the owner to pass the test.
* Special case tests
* Add destructuring to force toObject which throws before the side-effects
This ensures that we don't double call yieldValue or advanceTime in tests.
Ideally we could use empty destructuring but ESLint doesn't like it.
* Cache the result in DEV
In DEV it's somewhat likely that we'll see many logs that add component
stacks. This could be slow so we cache the results of previous components.
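A rough sketch of that memoization; the names are illustrative, and `computeFrameByThrowing` stands in for the control/sample stack-diffing described elsewhere in this change.
```js
// Cache extracted component-stack frames per component function in DEV so
// repeated logs don't pay the extraction cost again.
const componentFrameCache = __DEV__
  ? new (typeof WeakMap === 'function' ? WeakMap : Map)()
  : null;

function describeComponentFrame(fn, construct) {
  if (__DEV__) {
    const cached = componentFrameCache.get(fn);
    if (cached !== undefined) {
      return cached;
    }
  }
  const frame = computeFrameByThrowing(fn, construct);
  if (__DEV__) {
    componentFrameCache.set(fn, frame);
  }
  return frame;
}
```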
* Fixture
* Add Reflect to lint
* Log if out of range.
* Fix special case when the function call throws in V8
In V8 we need to ignore the first line. Normally we would never get there
because the stacks would differ before that, but the stacks are the same if
we end up throwing at the same place as the control.
Modules that belong to one fork should not import modules that belong to
the other fork.
Helps make sure you correctly update imports when syncing changes across
implementations.
Also could help protect against code size regressions that might happen
if one of the forks accidentally depends on two copies of the same
module.
Adds command `yarn merge-fork`.
```sh
yarn merge-fork --base-dir=packages/react-reconciler/src ReactFiberWorkLoop
```
This will take all the changes in `ReactFiberWorkLoop.new.js` and apply
them to `ReactFiberWorkLoop.old.js`.
You can merge multiple modules at a time:
```sh
yarn merge-fork \
--base-dir=packages/react-reconciler/src \
ReactFiberWorkLoop \
ReactFiberBeginWork \
ReactFiberCompleteWork \
ReactFiberCommitWork
```
You can provide explicit "old" and "new" file names. This only works
for one module at a time:
```sh
yarn merge-fork \
--base-dir=packages/react-reconciler/src \
--old=ReactFiberExpirationTime.js \
--new=ReactFiberLane.js
```
The default is to merge changes from the new module to the old one. To
merge changes in the opposite direction, use `--reverse`.
```sh
yarn merge-fork \
--reverse \
--base-dir=packages/react-reconciler/src \
ReactFiberWorkLoop
```
By default, the changes are compared to HEAD. You can use `--base-ref`
to compare to any rev. For example, while working on a PR, you might
make multiple commits to the new fork before you're ready to backport
them to the old one. In that case, you want to compare to the merge
base of your PR branch:
```sh
yarn merge-fork \
--base-ref=$(git merge-base HEAD origin/master) \
--base-dir=packages/react-reconciler/src \
ReactFiberWorkLoop
```
Some of our internal reconciler types have leaked into other packages.
Usually, these types are treated as opaque; we don't read or write
their fields. This is good.
However, the type is often passed back to a reconciler method. For
example, React DOM creates a FiberRoot with `createContainer`, then
passes that root to `updateContainer`. It doesn't do anything with the
root except pass it through, but because `updateContainer` expects a
full FiberRoot, React DOM is still coupled to all its fields.
I don't know if there's an idiomatic way to handle this in Flow. Opaque
types are similar, but those only work within a single file. AFAIK,
there's no way to use a package as the boundary for opaqueness.
The immediate problem this presents is that the reconciler refactor will
involve changes to our internal data structures. I don't want to have to
fork every single package that happens to pass through a Fiber or
FiberRoot, or access any one of its fields. So my current plan is to
share the same Flow type across both forks. The shared type will be a
superset of each implementation's type, e.g. Fiber will have both an
`expirationTime` field and a `lanes` field. The implementations will
diverge, but not the types.
To do this, I lifted the type definitions into a separate module.
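As a sketch of that idea: the module name and the fields other than `expirationTime` and `lanes` below are placeholders, not the actual definitions.
```js
// ReactInternalTypes.js (illustrative name): a single Fiber type shared by
// both forks. It's a superset of what either implementation uses, so packages
// that only pass Fibers through don't need to fork.
export type Fiber = {|
  tag: number,
  key: null | string,
  // Used by the old (ExpirationTimes) implementation:
  expirationTime: number,
  // Used by the new (Lanes) implementation:
  lanes: number,
|};
```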
* Add useOpaqueIdentifier Hook
We currently use unique IDs in a lot of places. Examples are:
* `<label for="ID">`
* `aria-labelledby`
This can cause some issues:
1. If we server side render and then hydrate, this could cause a
hydration ID mismatch
2. If we server side render one part of the page and client side
render another part of the page, the ID for one part could be
different than the ID for another part even though they are
supposed to be the same
3. If we conditionally render something with an ID, this might also
cause an ID mismatch because the ID will be different on other
parts of the page
This PR creates a new hook, `useOpaqueIdentifier`, that generates a different
unique ID depending on whether the hook was called on the server or the client.
If the hook is called during hydration, it generates an opaque object
that will rerender the hook so that the IDs match.
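A quick sketch of the intended usage for an ID shared between a label and an input; whether the export ultimately carries an `unstable_` prefix is an assumption here.
```js
// Illustrative only: the exact export name from the `react` package is assumed.
import * as React from 'react';

function Subscribe() {
  const id = React.unstable_useOpaqueIdentifier();
  return (
    <>
      <label htmlFor={id}>Subscribe</label>
      <input id={id} type="checkbox" />
    </>
  );
}
```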
Co-authored-by: Andrew Clark <git@andrewclark.io>
* Lazily initialize models as they're read instead of eagerly when received
This ensures that we don't spend CPU cycles processing models that we're
not going to end up rendering.
This model will also allow us to suspend during this initialization if
data is not yet available to satisfy the model.
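A rough sketch of the lazy pattern; the chunk field names and statuses are illustrative, not the actual Flight client code.
```js
// Keep the raw row on the chunk when it's received and only parse/initialize
// it the first time something reads it; if the row hasn't arrived yet, throw
// the chunk itself so the reader can suspend.
function readChunk(chunk) {
  switch (chunk._status) {
    case 'resolved':
      return chunk._value;
    case 'raw': {
      // Initialize on first read instead of eagerly on receive.
      const value = JSON.parse(chunk._rawJSON, chunk._response._fromJSON);
      chunk._status = 'resolved';
      chunk._value = value;
      return value;
    }
    default:
      // Pending: data not available yet, suspend by throwing the chunk.
      throw chunk;
  }
}
```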
* Refactoring carefully to ensure bundles still compile to something optimal
* Remove generic from Response
The root model needs to be cast at one point or another, the same as other
chunks. So we can parameterize the read instead of the whole Response.
* Read roots from the 0 key of the map
The special case to read the root isn't worth the field and code.
* Store response on each Chunk
Instead of storing it on the data tuple which is kind of dynamic, we store
it on each Chunk. This uses more memory, especially compared to just making
initializeBlock a closure, but overall it is simpler.
* Rename private fields to underscores
Response objects are exposed.
* Encode server components as delayed references
This allows us to stream in server components one after another over the
wire. It also allows parallelizing their fetches and resuming only the
server component instead of the whole parent block.
This doesn't yet allow us to suspend deeper while waiting on this content
because we don't have "lazy elements".
* Eject CRA from Flight
We need to eject because we're going to add a custom Webpack Plugin.
We can undo this once the plugin has been upstreamed into CRA.
* Add Webpack plugin build
I call this entry point "webpack-plugin" instead of just "plugin", even though
this is already a webpack-specific package, because there will also be a
Node.js plugin to do the server transform.
* Add Flight Webpack plugin to fixture
* Rm UMD builds
* Transform classes
* Rename webpack-plugin to plugin
This avoids the double webpack name. We're going to reuse this for both
server and client.
* Remove /dist/ UMD builds
We still publish UMDs to npm (and we're considering stopping even that).
This means we'll stop publishing to http://react.zpao.com/builds/master/latest/
* Update fixture paths
* Enable prefer-const rule
Stylistically I don't like this but Closure Compiler takes advantage of
this information.
* Auto-fix lints
* Manually fix the remaining callsites
* Upgrade Closure
There are newer versions but they don't yet have corresponding releases
of google-closure-compiler-osx.
* Configure build
* Refactor ReactSymbols a bit
Provides a little better output.
This is equivalent to the jsx-runtime in that this is what the compiled
output on the server is supposed to target.
It's really just the same code for all the different Flights, but they
have different types in their arguments so each one gets its own entry
point. We might use this to add runtime warnings per entry point.
Unlike the client-side React.block call, this doesn't provide the factory
function that curries the load function. The compiler is expected to wrap
this call in the currying factory.
* Formalize the Wakeable and Thenable types
We use two subsets of Promises throughout React APIs. This introduces
the smallest subset - Wakeable. It's the thing that you can throw to
suspend. It's something that can ping.
I also use a shared type for Thenable in the cases where we expect a value
so we can be a bit more rigid with our use of them.
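Roughly, the two shapes look like this; a Flow sketch in which the exact method signatures are assumptions, not the actual type definitions.
```js
// Wakeable: the minimal thing you can throw to suspend, i.e. something that
// can ping when it settles. Thenable: the larger subset used when a value is
// expected back. (Sketch only; exact signatures are assumptions.)
export interface Wakeable {
  then(onFulfill: () => mixed, onReject: () => mixed): void | Wakeable;
}

export interface Thenable<+R> {
  then<U>(
    onFulfill: (value: R) => void | Thenable<U> | U,
    onReject: (error: mixed) => void | Thenable<U> | U,
  ): void | Thenable<U>;
}
```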
* Make Chunks into Wakeables instead of using native Promises
This value is just going from here to React so we can keep it a lighter
abstraction throughout.
* Renamed thenable to wakeable in variable names
DevTools previously used the NPM events package for dispatching events. This package has an unfortunate flaw, though: if a listener throws during event dispatch, no subsequent listeners are called. This commit replaces that event emitter with a custom implementation that ensures all listeners are called before re-throwing an error.
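A minimal sketch of that dispatch behavior; this is illustrative, not the exact DevTools code.
```js
// Every listener for an event runs even if an earlier one throws; the first
// caught error is re-thrown after the loop so it still surfaces.
class EventEmitter {
  constructor() {
    this._listeners = new Map();
  }

  addListener(event, listener) {
    const listeners = this._listeners.get(event);
    if (listeners === undefined) {
      this._listeners.set(event, [listener]);
    } else if (!listeners.includes(listener)) {
      listeners.push(listener);
    }
  }

  emit(event, ...args) {
    const listeners = this._listeners.get(event);
    if (listeners === undefined) {
      return;
    }
    let caughtError = null;
    for (const listener of listeners.slice()) {
      try {
        listener(...args);
      } catch (error) {
        if (caughtError === null) {
          caughtError = error;
        }
      }
    }
    if (caughtError !== null) {
      throw caughtError;
    }
  }
}
```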