This implements `findSourceMapURL` in react-server-dom-parcel, enabling
source maps for replayed server errors on the client. It utilizes a new
endpoint in the Parcel dev server that returns the source map for a
given bundle/file. The error overlay UI has also been updated to handle
these stacks. See https://github.com/parcel-bundler/parcel/pull/10082
Also updated the fixture to the latest Parcel canary; a few APIs have
changed. We now have a higher-level library wrapper (`@parcel/rsc`,
added in https://github.com/parcel-bundler/parcel/pull/10074), but I left
the fixture using the lower-level APIs directly here since that makes it
easier to see how react-server-dom-parcel is used.
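For reference, wiring this up on the client looks roughly like the sketch below; the entry point and option shape are assumed to match the other react-server-dom-* clients, and the dev-server endpoint path is illustrative.
```js
// sketch only: entry/option names assumed, endpoint path illustrative
import { createFromFetch } from 'react-server-dom-parcel/client';

const root = createFromFetch(fetch('/rsc'), {
  findSourceMapURL(fileName) {
    // ask the Parcel dev server for the source map of the given bundle/file
    return '/__parcel_source_map?fileName=' + encodeURIComponent(fileName);
  },
});
```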
Corresponding Parcel PR:
https://github.com/parcel-bundler/parcel/pull/10073
Parcel avoids [cascading cache
invalidation](https://philipwalton.com/articles/cascading-cache-invalidation/)
by injecting a bundle manifest containing a mapping of stable bundle ids
to hashed URLs. When using an HTML entry point, this is done (as of the
above PR) via a native import map. This means that if a bundle's hash
changes, only that bundle will be invalidated (plus the HTML itself
which typically has a short caching policy), not any other bundles that
reference it.
For RSCs, we cannot currently use native import maps because of
client-side navigations, where a new HTML file is not requested.
Eventually, multiple `<script type="importmap">` elements will be
supported (https://github.com/whatwg/html/pull/10528) ([coming in Chrome
133](https://chromestatus.com/feature/5121916248260608)), at which point
React could potentially inject them. In the meantime, I've added some
APIs to Parcel to polyfill this. With this change, an import map can be
sent along with a client reference, containing a mapping for any dynamic
imports and URL dependencies (e.g. images) that are referenced by the JS
bundles. On the client, the import map is extended with these new
mappings prior to executing the referenced bundles. This preserves the
caching advantages described above while supporting client navigations.
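Conceptually, the extension that travels with a client reference is an import-map-shaped fragment that gets merged into the existing mapping before the referenced bundles execute. A rough sketch (ids, URLs, and the registry below are made up):
```js
// illustrative only: ids, URLs, and the registry are made up
const bundleURLs = new Map([
  // mappings injected with the initial HTML
  ['parcel:main.client', '/dist/main.client.0f1e2d.js'],
]);

// an import-map-shaped extension sent along with a client reference
const extension = {
  imports: {
    'parcel:ProfilePage.client': '/dist/ProfilePage.client.1a2b3c.js',
    'parcel:avatar.png': '/dist/avatar.7f9e2d.png',
  },
};

// merged before the referenced bundles execute, so only bundles whose
// hashes actually changed need to be re-downloaded
for (const [id, url] of Object.entries(extension.imports)) {
  bundleURLs.set(id, url);
}
```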
Followup to #31725
This implements `prepareDestinationForModule` in the Parcel Flight
client. On the Parcel side, the `<Resources>` component now only inserts
`<link>` elements for stylesheets (along with a bootstrap script when
needed), and React is responsible for inserting scripts. This ensures
that components that are conditionally dynamically imported during render
are also preloaded.
CSS must be added to the RSC tree using `<Resources>` to avoid FOUC.
This must be manually rendered in both the top-level page and in any
component that is dynamically imported. It would be nice if there were a way
for React to automatically insert CSS as well, but unfortunately
`prepareDestinationForModule` only knows about client components and not
CSS for server components. Perhaps there could be a way to
annotate components at code-splitting boundaries with the resources they
need? More thoughts in this thread:
https://github.com/facebook/react/pull/31725#discussion_r1884867607
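For concreteness, the manual pattern described above looks roughly like this; the import source for `<Resources>` is a placeholder for whatever Parcel generates/exposes in your setup:
```js
// sketch only: the Resources import path is a placeholder
import { Resources } from './parcel-resources';

// top-level page (a server component)
export function Page({ children }) {
  return (
    <html>
      <head>
        <Resources />
      </head>
      <body>{children}</body>
    </html>
  );
}

// a dynamically imported component has to render it too,
// so the CSS it pulls in is present before it paints
export function ProfileCard({ user }) {
  return (
    <>
      <Resources />
      <div className="profile-card">{user.name}</div>
    </>
  );
}
```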
This adds a new `react-server-dom-parcel` package, which is an RSC
integration for the Parcel bundler. It is mostly copied from the
existing webpack/turbopack integrations, with some changes to utilize
Parcel runtime APIs for loading and executing bundles/modules.
See https://github.com/parcel-bundler/parcel/pull/10043 for the Parcel
side of this, which includes the plugin needed to generate client and
server references. https://github.com/parcel-bundler/rsc-examples also
includes examples of various ways to use RSCs with Parcel.
Differences from other integrations:
* Client and server modules are all part of the same graph, and we use
Parcel's
[environments](https://parceljs.org/plugin-system/transformer/#the-environment)
to distinguish them. The server is the Parcel build entry point, and it
imports and renders server components in route handlers. When a `"use
client"` directive is seen, the environment changes and Parcel creates a
new client bundle for the page, combining all client modules together.
CSS from both client and server components is also combined
automatically.
* There is no separate manifest file that needs to be passed around by
the user. A [Runtime](https://parceljs.org/plugin-system/runtime/)
plugin injects client and server references as needed into the relevant
bundles, and registers server action ids using `react-server-dom-parcel`
automatically.
* A special `<Resources>` component is also generated by Parcel to
render the `<script>` and `<link rel="stylesheet">` elements needed for
a page, using the relevant info from the bundle graph.
Note: I've already published a 0.0.x version of this package to npm for
testing purposes, but I'm happy to add whoever needs access to it as well.
### Questions
* How to test this in the React repo. I'll have integration tests in
Parcel, but setting up all the different mocks and environments to
simulate that here seems challenging. I could try to copy how
Webpack/Turbopack do it but it's a bit different.
* Where to put TypeScript types. Right now I have some ambient types in
my [example
repo](https://github.com/parcel-bundler/rsc-examples/blob/main/types.d.ts)
but it would be nice for users not to have to copy and paste these. Can I
include them in the package, or do they need to be maintained separately
in DefinitelyTyped? I would really prefer not to have to maintain code in
three different repos.
---------
Co-authored-by: Sebastian Markbage <sebastian@calyptus.eu>
## Summary
This PR bumps Flow all the way to the latest 0.245.2.
Most of the suppressions come from Flow v0.239.0's change to include
`undefined` in the return type of `Array.pop`.
I also enabled `react.custom_jsx_typing=true` and added custom JSX
typing to match the old behavior, where `React.createElement` is
effectively `any`-typed. This is necessary since various builtin
components like `React.Fragment` are actually symbols in the React repo
instead of `React.AbstractComponent<...>`. It can be made more accurate
by customizing the `React$CustomJSXFactory` type, but I will leave that to
the React team to decide.
## How did you test this change?
`yarn flow` for all the renderers
This one should be fully behind the `enableOwnerStacks` flag.
Instead of printing the parent Component stack all the way to the root,
this now prints the owner stack of every JSX callsite. It also includes
intermediate callsites between the Component and the JSX call so it has
potentially more frames. Mainly it provides the line number of the JSX
callsite. In terms of the number of components, it's a subset of the
parent component stack, so it's less information in that regard. This is
usually better since it's more focused on the components that might affect
the output, but if the issue is contextual based on where something
renders, the parent stack is still good to have. Therefore, I still use
the parent stack when printing DOM
nesting warnings but I plan on switching that format to a diff view
format instead (Next.js already reformats the parent stack like this).
__Follow ups__
- Server Components show up in the owner stack for client logs but logs
done by Server Components don't yet get their owner stack printed as
they're replayed. They're also not yet printed in the server logs of the
RSC server.
- Server Component stack frames are formatted by the server and added to
the end, but this might be a different format than the browser's, e.g. if
the server is running V8 and the browser is running JSC or vice versa.
Ideally we could reformat them in terms of the client's formatting.
- This doesn't yet update Fizz or DevTools. Those will be follow ups.
Fizz still prints parent stacks in the server-side logs. The stacks
added to user-space `console.error` calls by DevTools still use the
parent stacks instead.
- It also doesn't yet expose these to user space so there's no way to
get them inside `onCaughtError` for example or inside a custom
`console.error` override.
- In another follow-up I'll use `console.createTask` instead, where
available, and completely remove these appended stacks.
In React 19, React will finally stop publishing UMD builds. This is
motivated primarily by the lack of use of the UMD format and the added
complexity of maintaining build infra for these releases. Additionally,
with ESM becoming more prevalent in browsers, and with services like
esm.sh that can host React as an ESM module, there are other options for
script-tag-based React loading.
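For example, script-tag-based loading via ESM looks roughly like this (the esm.sh URL pattern is shown for illustration; pin exact versions in practice):
```js
// inside <script type="module"> in the browser
import { createElement } from 'https://esm.sh/react@19';
import { createRoot } from 'https://esm.sh/react-dom@19/client';

createRoot(document.getElementById('root')).render(
  createElement('h1', null, 'Hello from ESM React'),
);
```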
This PR removes all the UMD build configs and forks.
There are some fixtures that still reference UMD builds; however, many of
them already do not work (for instance, they use legacy features like
`ReactDOM.render`), and rather than block this removal on bringing those
fixtures up to date, we'll move forward and fix or remove fixtures as
necessary in the future.
When developing in an RSC environment, you should be able to work as if
it were a single, unified environment. With thrown errors, we already
serialize them and then rethrow them on the client.
Since by default we log them via onError both in Flight and Fizz, you
can get the same log in the RSC runtime, the SSR runtime and on the
client.
With console logs made in SSR renders, you typically replay the same
code during hydration on the client. So for example warnings already
show up both in the SSR logs and on the client (although not guaranteed
to be the same). You could just spend your time in the client and you'd
be fine.
Previously, RSC logs would not be replayed because they don't hydrate.
So it's easy to miss warnings for example.
With this approach, we replay RSC logs both during SSR so they end up in
the SSR logs and on the client. That way you can just stay in the
browser window during normal development cycles. You shouldn't have to
care if your component is a server or client component when working on
logical things or iterating on a product.
With this change, you probably should mostly ignore the Flight log
stream and just look at the client or maybe the SSR one. Unless you're
digging into something specific. In particular, if you naively run both
Flight and Fizz in the same terminal, you get duplicates. In our fixture,
I like to run `yarn dev:region` and `yarn dev:global` in two separate
terminals.
Console logs may contain complex objects which can be inspected. Ideally
a DevTools inspector could reach into the RSC server and remotely
inspect objects using the remote inspection protocol. That way complex
objects can be loaded on demand as you expand into them. However, that
is a complex environment to set up and the server might not even be
alive anymore by the time you inspect the objects. Therefore, I do a
best effort to serialize the objects using the RSC protocol but limit
the depth that can be rendered.
This feature is only on in dev mode since it can be expensive.
In a follow-up, I'll give these logs a special styling treatment to
clearly differentiate them from logs coming from the client, as well as
deal with stacks.
This wires up the use of `async_hooks` in the Node build (as well as the
Edge build when a global is available) in DEV mode only. This will be
used to track debug info about what suspended during an RSC pass.
Enabled behind a flag for now.
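For context, this is the kind of bookkeeping `async_hooks` enables; the sketch below illustrates the Node mechanism, not React's actual tracking code:
```js
// minimal sketch of the mechanism, not React's implementation
const { createHook, executionAsyncId } = require('async_hooks');

const pendingAsyncResources = new Map();

const hook = createHook({
  init(asyncId, type, triggerAsyncId) {
    // record where each async resource (e.g. a PROMISE) was created
    pendingAsyncResources.set(asyncId, {
      type,
      triggerAsyncId,
      createdIn: executionAsyncId(),
    });
  },
  promiseResolve(asyncId) {
    // the promise settled; it's no longer something we're suspended on
    pendingAsyncResources.delete(asyncId);
  },
});

hook.enable(); // DEV-only: enabling hooks has measurable overhead
```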
This upgrade made the `React$Element` type opaque, which is good for
product code where accessing props of elements is a code smell, but React
needs to use that internally. I overrode the type to restore it.
This lets a registered object or value be "tainted", which we block from
crossing the serialization boundary. It's only allowed to stay
in-memory.
This is an extra layer of protection against mistakenly transferring
data from a data access layer to a client. It doesn't provide perfect
protection, because it doesn't trace through derived values and
substrings. So it shouldn't be used as the only security layer but more
layers are better.
`taintObjectReference` is for specific object instances, not any nested
objects or values inside that object. It's useful to prevent specific
objects from getting passed as is. It ensures that you don't
accidentally leak values in a specific context. It can be for security
reasons like tokens, privacy reasons like personal data or performance
reasons like avoiding passing large objects over the wire.
It might be a privacy violation to leak the age of a specific user, but
the number itself isn't blocked in any other context. As soon as the
value is extracted and passed separately from the object, it can
therefore leak.
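Usage looks roughly like the sketch below; the `experimental_` prefix and the `db` data-access layer are assumptions for illustration.
```js
// sketch; the export name prefix and `db` are assumptions
import { experimental_taintObjectReference as taintObjectReference } from 'react';
import { db } from './data'; // illustrative data-access layer

export async function getUser(id) {
  const user = await db.users.findById(id);
  taintObjectReference(
    'Do not pass the entire user row to the client.',
    user,
  );
  // passing `user` itself across the serialization boundary now errors,
  // but individual fields read off of it are not blocked
  return user;
}
```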
`taintUniqueValue` is useful for high-entropy values such as hashes,
tokens or crypto keys. In that case it can be useful to taint the actual
primitive values themselves. These can be
encoded as a string, bigint or typed array. We don't currently check for
this value in a substring or inside other typed arrays.
Since values can be created from different sources, they don't just
follow garbage collection. In this case an additional object must be
provided that defines the lifetime of the value, i.e. how long it should
be blocked. It can be `globalThis` for essentially forever, but that
risks leaking memory forever when you're dealing with dynamic values
like a token read from a database. So in that case the idea is that
you pass the object that might end up in a cache.
A request is the only thing that is expected to do any work. The
principle is that you can derive values out of a tainted entry during a
request, including stashing them in a per-request cache. What you can't
do is store a derived value in a global, module-level cache. At least not
without also tainting the object.
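A sketch of the unique-value case, tying the lifetime to the object that holds the value (again, the `experimental_` prefix and `db` are assumptions):
```js
// sketch; the export name prefix and `db` are assumptions
import { experimental_taintUniqueValue as taintUniqueValue } from 'react';
import { db } from './data'; // illustrative data-access layer

export async function getSession(id) {
  const session = await db.sessions.findById(id);
  taintUniqueValue(
    'Do not pass the session token to the client.',
    session,       // lifetime: blocked for as long as this object is alive
    session.token, // the high-entropy value itself
  );
  return session;
}
```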
stacked on #27314
Turbopack requires a different module loading strategy than Webpack, so
this PR implements a new package, `react-server-dom-turbopack`, which
largely follows `react-server-dom-webpack` but is implemented for this
new bundler.
Currently when we SSR a Flight response we do not emit any resources for
module imports. This means that when the client hydrates it won't have
already loaded the necessary scripts to satisfy the imports defined in
the Flight payload, which will lead to a delay in hydration completing.
This change updates `react-server-dom-webpack` and
`react-server-dom-esm` to emit async script tags in the head when we
encounter a module in the Flight response.
To support this we need some additional server configuration. We need to
know the path prefix for chunk loading and whether the chunks will load
with CORS or not (and if so with what configuration).
To support MPA-style form submissions, useFormState sends down a key
that represents the identity of the hook on the page. It's based on the
key path of the component within the React tree; for deeply nested
hooks, this keypath can become very long. We can hash the key to make it
shorter.
Adds a method called createFastHash to the Stream Config interface.
We're not using this for security or obfuscation, only to generate a
more compact key without sacrificing too much collision resistance.
- In Node.js builds, createFastHash uses the built-in crypto module.
- In Bun builds, createFastHash uses Bun.hash. See:
https://bun.sh/docs/api/hashing#bun-hash
I have not yet implemented createFastHash in the Edge, Browser, or FB
(Hermes) stream configs because those environments do not have a
built-in hashing function that meets our requirements. (We can't use the
web standard `crypto` API because those methods are async, and yielding
to the main thread is too costly to be worth it for this particular use
case.) We'll likely use a pure JS implementation in those environments;
for now, they just return the original string without hashing it. I'll
address this in separate PRs.
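A minimal sketch of the Node variant; the algorithm and encoding choices here are illustrative, not necessarily the exact ones shipped:
```js
// sketch of a Node createFastHash; algorithm/encoding are illustrative
import { createHash } from 'crypto';

export function createFastHash(input) {
  // not used for security, so a fast algorithm with a short digest is fine
  const hash = createHash('md5');
  hash.update(input);
  return hash.digest('base64');
}
```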
If something throws as a result of `flushSync`, and there's remaining
work left in the queue, React should keep working until all the work is
complete.
If multiple errors are thrown, React will combine them into an
AggregateError object and throw that. In environments where
AggregateError is not available, React will rethrow in an async task.
(All the evergreen runtimes support AggregateError.)
The scenario where this happens is relatively rare, because `flushSync`
will only throw if there's no error boundary to capture the error.
This adds `encodeReply` to the Flight Client and `decodeReply` to the
Flight Server.
Basically, it's a reverse Flight. It serializes values passed from the
client to the server. I call this a "Reply". The tradeoffs and
implementation details are a bit different, so it requires its own
implementation, but it's basically a clone of the Flight Server/Client
in reverse. Either through callServer or ServerContext.
The goal of this project is to provide the equivalent serialization as
passing props through RSC to the client, except for React Elements and
Components and such, so that you can pass a value to the client and back
with the same serialization constraints. When we add features in one
direction we should mostly add them in the other.
Browser support for streaming request bodies is currently very limited
in that only Chrome supports it. So this doesn't produce a
ReadableStream. Instead `encodeReply` produces either a JSON string or
FormData. It uses a JSON string if it's a simple enough payload. For
advanced features it uses FormData. This will also let the browser
stream things like File objects (even though they're not yet supported
since it follows the same rules as the other Flight).
On the server side, you can either consume this by blocking on
generating a FormData object, or you can stream in the
`multipart/form-data`. Even if the client isn't streaming data, the
network still does. On Node.js, busboy seems to be the canonical library
for this, so I exposed a `decodeReplyFromBusboy` in the Node build.
However, if there's ever a web-standard way to stream form data, or if a
library wins in that space, we can support it. We could also just build
in a multipart parser that takes a ReadableStream.
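Roughly, the round trip looks like the sketch below (shown against the webpack build; the endpoint, header name, and manifest plumbing are illustrative):
```js
// Client: serialize the arguments and post them (sketch)
import { encodeReply } from 'react-server-dom-webpack/client';

export async function callServer(id, args) {
  const body = await encodeReply(args); // a JSON string or FormData
  return fetch('/action', {
    method: 'POST',
    headers: { 'rsc-action-id': id }, // header name is illustrative
    body,
  });
}

// Server: decode the arguments back into values (sketch)
import { decodeReply } from 'react-server-dom-webpack/server';

export async function handleAction(request, serverManifest) {
  // or stream it with decodeReplyFromBusboy in the Node build
  const formData = await request.formData();
  const args = await decodeReply(formData, serverManifest);
  // ...look up the referenced server function and invoke it with `args`
}
```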
On the server, server references passed as arguments are loaded from
Node or Webpack just like the client or SSR does. This means that you
can create higher-order functions on the client or server. These can be
tokenized when done from a server component, but this has a security
implication: it might be tempting to think that these are not fungible,
but you can swap one function for another on the client. So you basically
have to treat an incoming argument as insecure, even if it's a function.
I'm not too happy with the naming parity:
Encode: `server.renderToReadableStream` Decode: `client.createFromFetch`
Encode: `client.encodeReply` Decode: `server.decodeReply`
This is mainly an implementation detail of frameworks, but it's annoying
nonetheless. It comes from the fact that `renderToReadableStream` does do some
"rendering" by unwrapping server components etc. The `create` part comes
from the parity with Fizz/Fiber where you `render` on the server and
`create` a root on the client.
Open to bike-shedding this some more.
---------
Co-authored-by: Josh Story <josh.c.story@gmail.com>
To wait for the microtask queue to empty, our internal test helpers
schedule an arbitrary task using `setImmediate`. It doesn't matter what
kind of task it is, only that it's a separate task from the current one,
because by the time it fires, the microtasks for the current event will
have already been processed.
The issue with `setImmediate` is that Jest mocks it, which can lead to
weird behavior.
I've changed it to instead use a message event, via the MessageChannel
implementation exposed by the `node:worker_threads` module.
We should consider doing this in the public implementation of `act`,
too.
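The helper is internal, but the technique looks roughly like this:
```js
// minimal sketch of the technique used by the internal test helper
const { MessageChannel } = require('node:worker_threads');

function waitForMicrotasks() {
  return new Promise(resolve => {
    const { port1, port2 } = new MessageChannel();
    port1.on('message', () => {
      port1.close(); // closes both ports
      resolve();
    });
    // by the time this message task runs, the microtasks queued
    // before it have already been processed
    port2.postMessage(undefined);
  });
}
```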
This splits out the Edge and Node implementations of Flight Client into
their own implementations. The Node implementation now takes a Node
Stream as input.
I removed the bundler config from the Browser variant because you're
never supposed to use that in the browser since it's only for SSR.
Similarly, it's required on the server. This also enables generating an
SSR manifest from the Webpack plugin. This is necessary for SSR so that
you can reverse look up what a client module is called on the server.
I also removed the option to pass a callServer from the server. We might
want to add it back in the future but basically, we don't recommend
calling Server Functions from render for initial render because if that
happened client-side it would be a client-side waterfall. If it's never
called in initial render, then it also shouldn't ever happen during SSR.
This might be considered too restrictive.
~This also compiles the unbundled packages as ESM. This isn't strictly
necessary because we only need access to dynamic import to load the
modules but we don't have any other build options that leave
`import(...)` intact, and seems appropriate that this would also be an
ESM module.~ Went with `import(...)` in CJS instead.
The old version of prettier we were using didn't support the Flow syntax
to access properties in a type using `SomeType['prop']`. This updates
`prettier` and `rollup-plugin-prettier` to the latest versions.
I added the prettier config `arrowParens: "avoid"` to reduce the diff
size as the default has changed in Prettier 2.0. The largest amount of
changes comes from function expressions now having a space. This doesn't
have an option to preserve the old behavior, so we have to update this.
This extends the scope of the cache and fetch instrumentation using
AsyncLocalStorage for microtasks. This is an intermediate step. It sets
up the dispatcher only once. This is unique to RSC because it uses the
react.shared-subset module for its shared state.
Ideally we should support multiple renderers. We should also have this
take over from an outer SSR's instrumented fetch. We should also be able
to have a fallback to per-request global state where AsyncLocalStorage
doesn't exist, and then to the full client-side solutions. I'm still
figuring out the right wiring for that, so this is a temporary hack.
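For context, this is the mechanism AsyncLocalStorage provides; the sketch below is generic and not React's actual wiring:
```js
// generic sketch of the mechanism, not React's wiring
const { AsyncLocalStorage } = require('async_hooks');

const requestStorage = new AsyncLocalStorage();

function runWithRequestScope(render) {
  // everything inside `render`, including code resumed from microtasks,
  // sees the same per-request store via getStore()
  return requestStorage.run({ cache: new Map() }, render);
}

const originalFetch = globalThis.fetch;
globalThis.fetch = function instrumentedFetch(resource, options) {
  const store = requestStorage.getStore();
  if (store) {
    // a per-request cache could dedupe identical requests here (sketch)
  }
  return originalFetch(resource, options);
};
```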
* Facebook -> Meta in copyright
`rg --files | xargs sed -i 's#Copyright (c) Facebook, Inc. and its affiliates.#Copyright (c) Meta Platforms, Inc. and affiliates.#g'`
* Manual tweaks
* Add fixture for comparing baseline render perf for renderToString and renderToPipeableStream
Modified from ssr2 and https://github.com/SuperOleg39/react-ssr-perf-test
* Implement buffering in pipeable streams
The previous implementation of pipeable streaming (Node) suffered from performance issues brought about by high chunk counts and inefficiencies in how Node streams handle this situation. In particular, the use of cork/uncork was meant to alleviate this, but these methods do not do anything unless the receiving Writable stream implements `_writev`, which many don't.
This change adopts the view-based buffering techniques previously implemented for the Browser execution context. The main difference is the use of backpressure provided by the writable stream, which is not implementable in the other context. Another change to note is the use of standard constructs like TextEncoder and TypedArrays.
* Implement encodeInto during flushCompletedQueues
encodeInto allows us to write directly to the view buffer that will end up getting streamed instead of encoding into an intermediate buffer and then copying that data.
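A minimal illustration of the technique (not the actual buffering code):
```js
// minimal illustration of the technique, not the actual buffering code
const encoder = new TextEncoder();

function writeChunkIntoView(view, offset, chunk) {
  // encode directly into the remaining space of the preallocated view
  const { read, written } = encoder.encodeInto(chunk, view.subarray(offset));
  return {
    offset: offset + written,
    // if the view filled up, this is the part of the string still to write
    remaining: read < chunk.length ? chunk.slice(read) : '',
  };
}
```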
* [RFC] Add onHydrationError option to hydrateRoot
This is not the final API but I'm pushing it for discussion purposes.
When an error is thrown during hydration, we fallback to client
rendering, without triggering an error boundary. This is good because,
in many cases, the UI will recover and the user won't even notice that
something has gone wrong behind the scenes.
However, we shouldn't recover from these errors silently, because the
underlying cause might be pretty serious. Server-client mismatches are
not supposed to happen, even if the UI doesn't break from the user's
perspective. Ignoring them could lead to worse problems later. De-opting
from server to client rendering could also be a significant performance
regression, depending on the scope of the UI it affects.
So we need a way to log when hydration errors occur.
This adds a new option for `hydrateRoot` called `onHydrationError`. It's
symmetrical to the server renderer's `onError` option, and serves the
same purpose.
When no option is provided, the default behavior is to schedule a
browser task and rethrow the error. This will trigger the normal browser
behavior for errors, including dispatching an error event. If the app
already has error monitoring, this likely will just work as expected
without additional configuration.
However, we can also expose additional metadata about these errors, like
which Suspense boundaries were affected by the de-opt to client
rendering. (I have not exposed any metadata in this commit; API needs
more design work.)
There are other situations besides hydration where we recover from an
error without surfacing it to the user, or notifying an error boundary.
For example, if an error occurs during a concurrent render, it could be
due to a data race, so we try again synchronously in case that fixes it.
We should probably expose a way to log these types of errors, too. (Also
not implemented in this commit.)
* Log all recoverable errors
This expands the scope of onHydrationError to include all errors that
are not surfaced to the UI (an error boundary). In addition to errors
that occur during hydration, this also includes errors that are
recoverable by de-opting to synchronous rendering. Typically (or really,
by definition) these errors are the result of a concurrent data race;
blocking the main thread fixes them by preventing subsequent races.
The logic for de-opting to synchronous rendering already existed. The
only thing that has changed is that we now log the errors instead of
silently proceeding.
The logging API has been renamed from onHydrationError
to onRecoverableError.
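For reference, usage of the renamed option looks roughly like this (`react-dom/client` is the entry point React 18 eventually shipped; the reporting helper is hypothetical):
```js
// sketch; reportToMyService is a hypothetical logging helper
import { hydrateRoot } from 'react-dom/client';
import App from './App';

hydrateRoot(document.getElementById('root'), <App />, {
  onRecoverableError(error) {
    reportToMyService(error);
  },
});
```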
* Don't log recoverable errors until commit phase
If the render is interrupted and restarts, we don't want to log the
errors multiple times.
This change only affects errors that are recovered by de-opting to
synchronous rendering; we'll have to do something else for errors
during hydration, since they use a different recovery path.
* Only log hydration error if client render succeeds
Similar to previous step.
When an error occurs during hydration, we only want to log it if falling
back to client rendering _succeeds_. If client rendering fails,
the error will get reported to the nearest error boundary, so there's
no need for a duplicate log.
To implement this, I added a list of errors to the hydration context.
If the Suspense boundary successfully completes, they are added to
the main recoverable errors queue (the one I added in the
previous step.)
* Log error with queueMicrotask instead of Scheduler
If onRecoverableError is not provided, we default to rethrowing the
error in a separate task. Originally, I scheduled the task with
idle priority, but @sebmarkbage made the good point that if there are
multiple error logs, we want to preserve the original order. So I've
switched it to a microtask. The priority can be lowered in userspace
by scheduling an additional task inside onRecoverableError.
* Only use host config method for default behavior
Redefines the contract of the host config's logRecoverableError method
to be a default implementation for onRecoverableError if a user-provided
one is not provided when the root is created.
* Log with reportError instead of rethrowing
In modern browsers, reportError will dispatch an error event, emulating
an uncaught JavaScript error. We can do this instead of rethrowing
recoverable errors in a microtask, which is nice because it avoids any
subtle ordering issues.
In older browsers and test environments, we'll fall back
to console.error.
* Naming nits
queueRecoverableHydrationErrors -> upgradeHydrationErrorsToRecoverable
* Don't allocate the inner cache unnecessarily
We only need it when we're asking for text. I anticipate I'll want to avoid allocating it in other methods too when it's not strictly necessary.
* Add fs.access
* Add fs.lstat
* Add fs.stat
* Add fs.readdir
* Add fs.readlink
* Add fs.realpath
* Rename functions to disambiguate two caches
* [Flight] Add rudimentary FS binding
* Throw for unsupported
* Don't mess with hidden class
* Use absolute path as the key
* Warn on relative and non-normalized paths
* Rename "name"->"filepath" field on Webpack module references
This field name will get confused with the imported name or the module id.
* Switch back to transformSource instead of getSource
getSource would be more efficient in the cases where we don't need to read
the original file but we'll need to most of the time.
Even then, we can't return a JS file if we're trying to support non-JS
loaders because it'll end up being transformed.
Similarly, we'll need to parse the file and we can't parse it before it's
transformed. So we need to chain with other loaders that know how.
* Add acorn dependency
This should be the version used by Webpack since we have a dependency on
Webpack anyway.
* Parse exported names of ESM modules
We need to statically resolve the names that a client component will
export so that we can export a module reference for each of the names.
For `export * from`, this gets tricky because we also need to load the
source of the next file and parse that. We don't know exactly how the
client is built, so we guess that it's a fairly default setup.
* Handle imported names one level deep in CommonJS using a Proxy
We use a proxy to see which property the server accesses, and that tells
us which property we'll want to import on the client.
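The idea, reduced to a sketch (the reference shape returned here is illustrative, not the exact internal representation):
```js
// sketch of the idea; the reference shape is illustrative
function createClientModuleProxy(moduleId) {
  return new Proxy(
    {},
    {
      get(_target, name) {
        // the server only touches the properties it actually imports, so
        // record the module id plus that export name as the reference
        return { moduleId, name };
      },
    },
  );
}

// e.g. `const { Button } = require('./Button.client')` on the server
// would yield a reference like { moduleId: './Button.client', name: 'Button' }
```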
* Add export name to module reference and Webpack map
To support named exports each name needs to be encoded as a separate
reference. It's possible with module splitting that different exports end
up in different chunks.
It's also possible that the export is renamed as part of minification.
So the map also includes a map from the original to the bundled name.
* Special case plain CJS requires and conditional imports using __esModule
This models whether the server tries to import `.default` or does a plain require.
We should replicate the same thing on the client when we load that
module reference.
* Dedupe acorn-related deps
Co-authored-by: Mateusz Burzyński <mateuszburzynski@gmail.com>
* Formalize the Wakeable and Thenable types
We use two subsets of Promises throughout React APIs. This introduces
the smallest subset - Wakeable. It's the thing that you can throw to
suspend. It's something that can ping.
I also use a shared type for Thenable in the cases where we expect a value
so we can be a bit more rigid with our use of them.
* Make Chunks into Wakeables instead of using native Promises
This value is just going from here to React so we can keep it a lighter
abstraction throughout.
* Renamed thenable to wakeable in variable names
In CI, we run our test suite against multiple build configurations. For
example, we run our tests in both dev and prod, and in both the
experimental and stable release channels. This is to prevent accidental
deviations in behavior between the different builds. If there's an
intentional deviation in behavior, the test author must account
for it.
However, we currently don't run tests against the www builds. That's
a problem, because it's common for features to land in www before they
land anywhere else, including the experimental release channel.
Typically we do this so we can gradually roll out the feature behind
a flag before deciding to enable it.
The way we test those features today is by mutating the
`shared/ReactFeatureFlags` module. There are a few downsides to this
approach, though. The flag is only overridden for the specific tests or
test suites where you apply the override. But usually what you want is
to run *all* tests with the flag enabled, to protect against unexpected
regressions.
Also, mutating the feature flags module only works when running the
tests against source, not against the final build artifacts, because the
ReactFeatureFlags module is inlined by the build script.
Instead, we should run the test suite against the www configuration,
just like we do for prod, experimental, and so on. I've added a new
command, `yarn test-www`. It automatically runs in CI.
Some of the www feature flags are dynamic; that is, they depend on
a runtime condition (i.e. a GK). These flags are imported from an
external module that lives in www. Those flags will be enabled for some
clients and disabled for others, so we should run the tests against
*both* modes.
So I've added a new global `__VARIANT__`, and a new test command `yarn
test-www-variant`. `__VARIANT__` is set to false by default; when
running `test-www-variant`, it's set to true.
If we were going for *really* comprehensive coverage, we would run the
tests against every possible configuration of feature flags: 2 ^
numberOfFlags total combinations. That's not practical, though, so
instead we only run against two combinations: once with `__VARIANT__`
set to `true`, and once with it set to `false`. We generally assume that
flags can be toggled independently, so in practice this should
be enough.
You can also refer to `__VARIANT__` in tests to detect which mode you're
running in. Or, you can import `shared/ReactFeatureFlags` and read the
specific flag you care about. However, we should stop mutating that
module going forward. Treat it as read-only.
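For example, a test can branch on the variant or read a flag directly (sketch; the flag name is made up):
```js
// sketch; `enableSomeWWWFeature` is a made-up flag name
const ReactFeatureFlags = require('shared/ReactFeatureFlags');

test('feature behind a www GK', () => {
  if (__VARIANT__ && ReactFeatureFlags.enableSomeWWWFeature) {
    // assert the GK-enabled behavior
  } else {
    // assert the default behavior
  }
});
```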
In this commit, I have only set up the www tests to run against source.
I'll leave running against build for a follow up.
Many of our tests currently assume they run only in the default
configuration, and break when certain flags are toggled. Rather than fix
these all up front, I've hard-coded the relevant flags to the default
values. We can incrementally migrate those tests later.
* Update Flow to 0.84
* Fix violations
* Use inexact object syntax in files from fbsource
* Fix warning extraction to use a modern parser
* Codemod inexact objects to new syntax
* Tighten types that can be exact
* Revert unintentional formatting changes from codemod
* Don't bother including `unstable_` in error
The method names don't get stripped out of the production bundles
because they are passed as arguments to the error decoder.
Let's just always use the unprefixed APIs in the messages.
* Set up experimental builds
The experimental builds are packaged exactly like builds in the stable
release channel: same file structure, entry points, and npm package
names. The goal is to match what will eventually be released in stable
as closely as possible, but with additional features turned on.
Versioning and Releasing
------------------------
The experimental builds will be published to the same registry and
package names as the stable ones. However, they will be versioned using
a separate scheme. Instead of semver versions, experimental releases
will receive arbitrary version strings based on their content hashes.
The motivation is to thwart attempts to use a version range to match
against future experimental releases. The only way to install or depend
on an experimental release is to refer to the specific version number.
Building
--------
I did not use the existing feature flag infra to configure the
experimental builds. The reason is that feature flags are designed
to configure a single package. They're not designed to generate multiple
forks of the same package; for each set of feature flags, you must
create a separate package configuration.
Instead, I've added a new build dimension called the **release
channel**. By default, builds use the **stable** channel. There's
also an **experimental** release channel. We have the option to add more
in the future.
There are now two dimensions per artifact: build type (production,
development, or profiling), and release channel (stable or
experimental). These are separate dimensions because they are
combinatorial: there are stable and experimental production builds,
stable and experimental development builds, and so on.
You can add something to an experimental build by gating on
`__EXPERIMENTAL__`, similar to how we use `__DEV__`. Anything inside
these branches will be excluded from the stable builds.
This gives us a low effort way to add experimental behavior in any
package without setting up feature flags or configuring a new package.
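For example (the API and module names below are made up):
```js
// illustrative only; the API and module names are made up
import { unstable_someNewAPI } from './ReactSomeNewAPI';

// included in experimental-channel builds, stripped from stable builds
export const someNewAPI = __EXPERIMENTAL__ ? unstable_someNewAPI : undefined;
```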
* Merged interaction-tracking package into react-scheduler
* Add tracking API to FB+www builds
* Added Rollup plugin to strip no-side-effect imports from Rollup bundles
* Re-bundle tracking and scheduling APIs on SECRET_INTERNALS object for UMD build (and provide lazy forwarding methods)
* Added some additional tests and fixtures
* Fixed broken UMD fixture in master (#13512)
* Remove EventListener fbjs utility
EventListener normalizes event subscription for <= IE8. This is no
longer necessary. element.addEventListener is sufficient.
* Remove an extra allocation for open source bundles
* Split into two functions to avoid extra runtime checks
* Revert unrelated changes
* Fix dead code elimination for feature flags
Turning flags into named exports fixes dead code elimination.
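For example (flag names are made up), the difference looks like this:
```js
// Before: a single object export defeats dead code elimination,
// because the bundler can't statically prove the property values.
// export default { enableSomeFeature: false };

// After: named constant exports are statically analyzable, so
// `if (enableSomeFeature) { ... }` branches can be dropped entirely.
export const enableSomeFeature = false;
export const enableOtherFeature = true;
```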
This required some restructuring of how we verify that flag types match up. I used the Check<> trick combined with import typeof, as suggested by @calebmer.
For www, we can no longer re-export `require('ReactFeatureFlags')` directly, and instead destructure it. This means flags have to be known at init time. This is already the case so it's not a problem. In fact it may be better since it removes extra property access in tight paths.
For things that we *want* to be dynamic on www (currently, only the performance flag), we can export a function to toggle it and then put it on the secret exports. In fact this is better than just letting everyone mutate the flag at arbitrary times, since we can provide, e.g., a ref-counting interface to it.
* Record sizes