## Overview
**Internal React repo tests only**
Depends on https://github.com/facebook/react/pull/28710
Adds three new assertions:
- `assertConsoleLogDev`
- `assertConsoleWarnDev`
- `assertConsoleErrorDev`
These will replace this pattern:
```js
await expect(async () => {
  await expect(async () => {
    await act(() => {
      root.render(<Fail />);
    });
  }).toThrow();
}).toWarnDev('Warning');
```
With this:
```js
await expect(async () => {
  await act(() => {
    root.render(<Fail />);
  });
}).toThrow();

assertConsoleWarnDev('Warning');
```
It works similarly to our other `assertLog` matchers: they clear the log
and assert on it, failing the test if the log is not asserted on before
the test ends.
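As a rough sketch of that behavior (the component name and warning message here are made up for illustration):
```js
// Sketch only: <ComponentThatWarns /> and the message are illustrative.
it('warns when rendering a component that logs a dev warning', async () => {
  const root = ReactNoop.createRoot();
  await act(() => {
    root.render(<ComponentThatWarns />);
  });

  // Clears the captured console.warn calls and asserts on them.
  assertConsoleWarnDev('Warning');

  // If this assertion were omitted, the captured warning would still be in
  // the log when the test ends, and the test would fail.
});
```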
## Diffs
I also added a few other improvements, including better log diffs and
more logging.
When there's a failure, the output will look something like:
<img width="655" alt="Screenshot 2024-04-03 at 11 50 08 AM"
src="https://github.com/facebook/react/assets/2440089/0c4bf1b2-5f63-4204-8af3-09e0c2d752ad">
Check out the test suite for snapshots of all the failures we may log.
`act` uses the `didScheduleLegacyUpdate` field to simulate the batching
behavior of React 17 and below. It's a quirk left over from a previous
implementation, not intentionally designed.
This sets `didScheduleLegacyUpdate` every time a legacy root receives an
update, as opposed to only when the `executionContext` is empty. There's
no real reason to prefer this approach over another, except that it's how
things worked before #26512, and we should try our best to maintain the
existing behavior, quirks and all, since existing tests may have come to
rely on it accidentally.
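As a rough illustration of the before/after shape (a simplified sketch, not the actual reconciler code):
```js
// Illustrative sketch only; the function body is simplified.
function scheduleUpdateOnFiber(root, fiber, lane) {
  if (root.tag === LegacyRoot) {
    // After this change: mark the flag for every update on a legacy root...
    didScheduleLegacyUpdate = true;
  }
  // ...rather than only marking it when nothing else is executing,
  // i.e. when `executionContext === NoContext`, as before.
}
```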
This should fix some (though not all) of the internal Meta tests that
started failing after #26512 landed.
Will add a regression test before merging.
(This only affects our own internal repo; it's not a public API.)
I think most of us agree this is a less confusing name. It's possible
someone will confuse it with `console.log`. If that becomes a problem we
can warn in dev or something.
* Swap expect(ReactNoop) for expect(Scheduler)
In the previous commits, I upgraded our custom Jest matchers for the
noop and test renderers to use Scheduler under the hood.
Now that all these matchers are using Scheduler, we can drop
support for passing ReactNoop and test roots and always pass
Scheduler directly.
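As a sketch of the conversion (the matcher name and yielded values here are illustrative, in the style of our existing Scheduler matchers):
```js
// Before: the matcher accepted a renderer and reached into its Scheduler log.
expect(ReactNoop).toFlushAndYield(['A', 'B']);

// After: pass the Scheduler module directly.
expect(Scheduler).toFlushAndYield(['A', 'B']);
```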
* Externalize Scheduler in noop and test bundles
I also noticed we don't need the regenerator runtime in noop anymore.
* Replace test renderer's fake Scheduler implementation with mock build
The test renderer has its own mock implementation of the Scheduler
interface, with the ability to partially render work in tests. Now that
this functionality has been lifted into a proper mock Scheduler build,
we can use that instead.
* Fix Profiler tests in prod
* Replace noop's fake Scheduler implementation with mock Scheduler build
The noop renderer has its own mock implementation of the Scheduler
interface, with the ability to partially render work in tests. Now that
this functionality has been lifted into a proper mock Scheduler build,
we can use that instead.
Most of the existing noop tests were unaffected, but I did have to make
some changes. The biggest one involved passive effects: previously, they
were scheduled on a separate queue from the queue that handles
rendering. After this change, both rendering and effects are scheduled
in the Scheduler queue. I think this is a better approach because tests
no longer have to worry about the difference; if you call `flushAll`,
all the work is flushed, both rendering and effects. But for those few
tests that do care to flush the rendering without the effects, that's
still possible using the `yieldValue` API.
Follow-up: Do the same for test renderer.
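For illustration, a noop test under the new model might look like this sketch, using the `flushAll`/`yieldValue` names described above (the component and yielded values are made up):
```js
// Sketch: rendering and passive effects now share the Scheduler queue.
function App() {
  React.useEffect(() => {
    Scheduler.yieldValue('Passive effect');
  });
  Scheduler.yieldValue('Render');
  return null;
}

ReactNoop.render(<App />);

// Flushing everything drains both the render and the passive effect,
// yielding 'Render' followed by 'Passive effect'.
Scheduler.flushAll();
```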
* Fix import to scheduler/unstable_mock
* Add new mock build of Scheduler with flush, yield API
Test environments need a way to take control of the Scheduler queue and
incrementally flush work. Our current tests accomplish this either by
using dynamic injection or by using Jest's fake timers. Both of these
options are fragile and rely too much on implementation details.
In this new approach, we have a separate build of Scheduler that is
specifically designed for test environments. We mock the default
implementation like we would any other module; in our case, via Jest.
This special build has methods like `flushAll` and `yieldValue` that
control when work is flushed. These methods are based on equivalent
methods we've been using to write incremental React tests. Eventually
we may want to migrate the React tests to interact with the mock
Scheduler directly, instead of going through the host config like we
currently do.
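In a Jest environment, the swap can look roughly like this; `scheduler/unstable_mock` is the entry point referenced in these commits:
```js
// Roughly how a test environment swaps in the mock build via Jest.
jest.mock('scheduler', () => require('scheduler/unstable_mock'));

// Any code that imports 'scheduler' now gets the mock implementation,
// and tests can drive the queue with the flush/yield methods.
const Scheduler = require('scheduler');
```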
For now, I'm using our custom static injection infrastructure to create
the two builds of Scheduler — a default build for DOM (which falls back
to a naive timer-based implementation), and the new mock build. I did it
this way because it allows me to share most of the implementation, which
isn't specific to a host environment — e.g. everything related to the
priority queue. It may be better to duplicate the shared code instead,
especially considering that future environments (like React Native) may
have entirely forked implementations. I'd prefer to wait until the
implementation stabilizes before worrying about that, but I'm open to
changing this now if we decide it's important enough.
* Mock Scheduler in bundle tests, too
* Remove special case by making regex more restrictive
* Throw in tests if work is done before emptying log
Test renderer already does this. It makes it harder to miss unexpected
behavior by forcing you to assert on every logged value.
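As an illustrative sketch of the failure mode (names and values are made up, using the `flushAll` API described above):
```js
// Made-up failure case: more work is flushed while the log still contains
// an unasserted value.
ReactNoop.render(<App />);
Scheduler.flushAll(); // yields 'A'

// Forgetting to assert on 'A' before flushing more work now throws,
// instead of letting the earlier value go unasserted.
ReactNoop.render(<App />);
Scheduler.flushAll();
```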
* Convert ReactNoop tests to use jest matchers
The matchers warn if work is flushed while the log is not yet empty. This is
the pattern we already follow for test renderer. I've used the same APIs
as test renderer, so it should be easy to switch between the two.