test_runner: add option to rerun only failed tests

PR-URL: https://github.com/nodejs/node/pull/59443
Reviewed-By: Benjamin Gruenbaum <benjamingr@gmail.com>
Reviewed-By: Pietro Marchini <pietro.marchini94@gmail.com>
Reviewed-By: Chemi Atlow <chemi@atlow.co.il>
Moshe Atlow 2025-08-19 10:42:00 +03:00 committed by GitHub
parent ae0aaecfd7
commit 64355ae97e
13 changed files with 359 additions and 2 deletions


@@ -2647,6 +2647,20 @@ changes:
The destination for the corresponding test reporter. See the documentation on
[test reporters][] for more details.
### `--test-rerun-failures`
<!-- YAML
added:
- REPLACEME
-->
A path to a file that allows the test runner to persist the state of the test
suite between runs. The test runner uses this file to determine which tests
have already succeeded or failed, so that failed tests can be rerun without
re-running the entire test suite. The test runner creates this file if it does
not exist.
See the documentation on [test reruns][] for more details.
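For example (the state file path is arbitrary; the file is created on first
use):
```bash
node --test --test-rerun-failures=.test-rerun-state.json
```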
### `--test-shard`
<!-- YAML
@@ -3508,6 +3522,7 @@ one is included in the list below.
* `--test-only`
* `--test-reporter-destination`
* `--test-reporter`
* `--test-rerun-failures`
* `--test-shard`
* `--test-skip-pattern`
* `--throw-deprecation`
@@ -4082,6 +4097,7 @@ node --stack-trace-limit=12 -p -e "Error.stackTraceLimit" # prints 12
[snapshot testing]: test.md#snapshot-testing
[syntax detection]: packages.md#syntax-detection
[test reporters]: test.md#test-reporters
[test reruns]: test.md#rerunning-failed-tests
[test runner execution model]: test.md#test-runner-execution-model
[timezone IDs]: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
[tracking issue for user-land snapshots]: https://github.com/nodejs/node/issues/44014


@@ -153,6 +153,46 @@ test('skip() method with message', (t) => {
});
```
## Rerunning failed tests
The test runner supports persisting the state of a run to a file, allowing
it to rerun only the failed tests without re-running the entire test suite.
Use the [`--test-rerun-failures`][] command-line option to specify a file path where the
state of the run is stored. If the state file does not exist, the test runner will
create it.
The state file is a JSON file that contains an array of run attempts.
Each run attempt is an object mapping each successful test to the attempt in which it passed.
The key identifying a test in this map is the test file path, followed by the line and column
where the test is defined.
If a test defined at a specific location is run multiple times,
for example within a function or a loop,
a counter is appended to the key to disambiguate the test runs.
Note that changing the order of test execution or the location of a test can lead the test runner
to incorrectly consider tests as having passed on a previous attempt,
so `--test-rerun-failures` should only be used when tests run in a deterministic order.
An example of a state file:
```json
[
{
"test.js:10:5": { "passed_on_attempt": 0, "name": "test 1" },
},
{
"test.js:10:5": { "passed_on_attempt": 0, "name": "test 1" },
"test.js:20:5": { "passed_on_attempt": 1, "name": "test 2" }
}
]
```
In this example, there are two run attempts and two tests defined in `test.js`:
the first test succeeded on the first attempt, and the second test succeeded on the second attempt.
When the `--test-rerun-failures` option is used, the test runner will only run tests that have not yet passed.
```bash
node --test-rerun-failures /path/to/state/file
```
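As an illustration, the current attempt number is exposed on the test context
(`0` on the first run), so a deliberately flaky test can demonstrate the rerun
behavior (a minimal sketch, mirroring the fixture used by this feature's
tests):
```cjs
const test = require('node:test');

// Fails on the first attempt (attempt === 0) and passes when rerun via
// --test-rerun-failures, since the attempt index grows with each recorded run.
test('passes on the second attempt', (t) => {
  if (t.attempt < 1) {
    throw new Error('first attempt fails');
  }
});
```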
## TODO tests
Individual tests can be marked as flaky or incomplete by passing the `todo`
@@ -1342,6 +1382,9 @@ added:
- v18.9.0
- v16.19.0
changes:
- version: REPLACEME
pr-url: https://github.com/nodejs/node/pull/59443
description: Added the `rerunFailuresFilePath` option.
- version: v23.0.0
pr-url: https://github.com/nodejs/node/pull/54705
description: Added the `cwd` option.
@@ -1432,6 +1475,10 @@ changes:
that specifies the index of the shard to run. This option is _required_.
* `total` {number} is a positive integer that specifies the total number
of shards to split the test files to. This option is _required_.
* `rerunFailuresFilePath` {string} A file path where the test runner will
store the state of the tests to allow rerunning only the failed tests on the
next run, as sketched below. See [Rerunning failed tests][] for more information.
**Default:** `undefined`.
* `coverage` {boolean} enable [code coverage][] collection.
**Default:** `false`.
* `coverageExcludeGlobs` {string|Array} Excludes specific files from code coverage
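For illustration, a minimal sketch of driving `run()` with this option (the
test file name and state file path are placeholders; the reporter wiring
follows the standard `run()` examples):
```cjs
const { run } = require('node:test');
const { tap } = require('node:test/reporters');

run({
  files: ['./example.test.js'], // placeholder test file
  rerunFailuresFilePath: '.test-rerun-state.json', // state persisted between runs
})
  .compose(tap)
  .pipe(process.stdout);
```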
@@ -3220,6 +3267,8 @@ Emitted when a test is enqueued for execution.
* `cause` {Error} The actual error thrown by the test.
* `type` {string|undefined} The type of the test, used to denote whether
this is a suite.
* `attempt` {number|undefined} The attempt number of the test run,
present only when using the [`--test-rerun-failures`][] flag.
* `file` {string|undefined} The path of the test file,
`undefined` if test was run through the REPL.
* `line` {number|undefined} The line number where the test is defined, or
@@ -3244,6 +3293,10 @@ The corresponding execution ordered event is `'test:complete'`.
* `duration_ms` {number} The duration of the test in milliseconds.
* `type` {string|undefined} The type of the test, used to denote whether
this is a suite.
* `attempt` {number|undefined} The attempt number of the test run,
present only when using the [`--test-rerun-failures`][] flag.
* `passed_on_attempt` {number|undefined} The attempt number the test passed on,
present only when using the [`--test-rerun-failures`][] flag (see the example
below).
* `file` {string|undefined} The path of the test file,
`undefined` if test was run through the REPL.
* `line` {number|undefined} The line number where the test is defined, or
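As an illustration, both fields surface under `data.details` on the stream
returned by `run()` (a minimal sketch; the file names are placeholders):
```cjs
const { run } = require('node:test');

const stream = run({
  files: ['./example.test.js'], // placeholder test file
  rerunFailuresFilePath: '.test-rerun-state.json',
});
stream.on('test:pass', ({ name, details }) => {
  // details.attempt is the index of the current run attempt;
  // details.passed_on_attempt is set when a previously passed test is
  // skipped on a rerun, recording the attempt it originally passed in.
  console.log(name, details.attempt, details.passed_on_attempt);
});
```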
@@ -3947,6 +4000,7 @@ Can be used to abort test subtasks when the test has been aborted.
[`--test-only`]: cli.md#--test-only
[`--test-reporter-destination`]: cli.md#--test-reporter-destination
[`--test-reporter`]: cli.md#--test-reporter
[`--test-rerun-failures`]: cli.md#--test-rerun-failures
[`--test-skip-pattern`]: cli.md#--test-skip-pattern
[`--test-update-snapshots`]: cli.md#--test-update-snapshots
[`--test`]: cli.md#--test


@@ -446,6 +446,9 @@
}
]
},
"test-rerun-failures": {
"type": "string"
},
"test-shard": {
"type": "string"
},
@@ -698,6 +701,9 @@
}
]
},
"test-rerun-failures": {
"type": "string"
},
"test-shard": {
"type": "string"
},


@@ -493,6 +493,10 @@ A test reporter to use when running tests.
.It Fl -test-reporter-destination
The destination for the corresponding test reporter.
.
.It Fl -test-rerun-failures
Configures the test runner to persist the state of tests to allow
rerunning only failed tests.
.
.It Fl -test-only
Configures the test runner to only execute top level tests that have the `only`
option set.


@@ -26,7 +26,10 @@ const {
reporterScope,
shouldColorizeTestFiles,
setupGlobalSetupTeardownFunctions,
parsePreviousRuns,
} = require('internal/test_runner/utils');
const { PassThrough, compose } = require('stream');
const { reportReruns } = require('internal/test_runner/reporter/rerun');
const { queueMicrotask } = require('internal/process/task_queues');
const { TIMEOUT_MAX } = require('internal/timers');
const { clearInterval, setInterval } = require('timers');
@@ -69,6 +72,7 @@ function createTestTree(rootTestOptions, globalOptions) {
shouldColorizeTestFiles: shouldColorizeTestFiles(globalOptions.destinations),
teardown: null,
snapshotManager: null,
previousRuns: null,
isFilteringByName,
isFilteringByOnly,
async runBootstrap() {
@@ -203,6 +207,25 @@ function collectCoverage(rootTest, coverage) {
return summary;
}
function setupFailureStateFile(rootTest, globalOptions) {
if (!globalOptions.rerunFailuresFilePath) {
return;
}
rootTest.harness.previousRuns = parsePreviousRuns(globalOptions.rerunFailuresFilePath);
if (rootTest.harness.previousRuns === null) {
rootTest.diagnostic(`Warning: The rerun failures file at ` +
`${globalOptions.rerunFailuresFilePath} is not a valid rerun file. ` +
'The test runner will not be able to rerun failed tests.');
rootTest.harness.success = false;
process.exitCode = kGenericUserError;
return;
}
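// Attach the rerun reporter only in the orchestrating process; child test
// processes (which have NODE_TEST_CONTEXT set) must not write the state file.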
if (!process.env.NODE_TEST_CONTEXT) {
const reporter = reportReruns(rootTest.harness.previousRuns, globalOptions);
compose(rootTest.reporter, reporter).pipe(new PassThrough());
}
}
function setupProcessState(root, globalOptions) {
const hook = createHook({
__proto__: null,
@@ -230,6 +253,9 @@
const rejectionHandler =
createProcessEventHandler('unhandledRejection', root);
const coverage = configureCoverage(root, globalOptions);
setupFailureStateFile(root, globalOptions);
const exitHandler = async (kill) => {
if (root.subtests.length === 0 && (root.hooks.before.length > 0 || root.hooks.after.length > 0)) {
// Run global before/after hooks in case there are no tests


@@ -0,0 +1,40 @@
'use strict';
const {
ArrayPrototypePush,
JSONStringify,
} = primordials;
const { relative } = require('path');
const { writeFileSync } = require('fs');
function reportReruns(previousRuns, globalOptions) {
return async function reporter(source) {
const obj = { __proto__: null };
const disambiguator = { __proto__: null };
for await (const { type, data } of source) {
if (type === 'test:pass') {
let identifier = `${relative(globalOptions.cwd, data.file)}:${data.line}:${data.column}`;
const count = disambiguator[identifier];
if (count !== undefined) {
// A test at this location was already recorded during this run (for
// example, one defined in a loop); bump the counter on the original key
// so each repeated occurrence gets a unique suffix.
disambiguator[identifier] = count + 1;
identifier += `:(${count})`;
} else {
disambiguator[identifier] = 1;
}
obj[identifier] = {
__proto__: null,
name: data.name,
passed_on_attempt: data.details.passed_on_attempt ?? data.details.attempt,
};
}
}
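// Append this run's passing tests as a new attempt and persist the full
// attempt history so subsequent runs can skip them.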
ArrayPrototypePush(previousRuns, obj);
writeFileSync(globalOptions.rerunFailuresFilePath, JSONStringify(previousRuns, null, 2), 'utf8');
};
}
module.exports = {
__proto__: null,
reportReruns,
};


@@ -148,6 +148,7 @@ function getRunArgs(path, { forceExit,
only,
argv: suppliedArgs,
execArgv,
rerunFailuresFilePath,
root: { timeout },
cwd }) {
const processNodeOptions = getOptionsAsFlagsFromBinding();
@@ -170,6 +171,9 @@
if (timeout != null) {
ArrayPrototypePush(runArgs, `--test-timeout=${timeout}`);
}
if (rerunFailuresFilePath) {
ArrayPrototypePush(runArgs, `--test-rerun-failures=${rerunFailuresFilePath}`);
}
ArrayPrototypePushApply(runArgs, execArgv);
@@ -588,6 +592,7 @@ function run(options = kEmptyObject) {
execArgv = [],
argv = [],
cwd = process.cwd(),
rerunFailuresFilePath,
} = options;
if (files != null) {
@@ -620,6 +625,10 @@
);
}
if (rerunFailuresFilePath) {
validatePath(rerunFailuresFilePath, 'options.rerunFailuresFilePath');
}
if (shard != null) {
validateObject(shard, 'options.shard');
// Avoid re-evaluating the shard object in case it's a getter
@@ -702,6 +711,7 @@
coverage,
coverageExcludeGlobs,
coverageIncludeGlobs,
rerunFailuresFilePath,
lineCoverage: lineCoverage,
branchCoverage: branchCoverage,
functionCoverage: functionCoverage,
@@ -735,6 +745,7 @@
isolation,
argv,
execArgv,
rerunFailuresFilePath,
};
if (isolation === 'process') {


@@ -71,6 +71,7 @@ const {
} = require('timers');
const { TIMEOUT_MAX } = require('internal/timers');
const { fileURLToPath } = require('internal/url');
const { relative } = require('path');
const { availableParallelism } = require('os');
const { innerOk } = require('internal/assert/utils');
const { bigint: hrtime } = process.hrtime;
@@ -290,6 +291,10 @@ class TestContext {
return this.#test.passed;
}
get attempt() {
return this.#test.attempt ?? 0;
}
diagnostic(message) {
this.#test.diagnostic(message);
}
@@ -537,6 +542,7 @@ class Test extends AsyncResource {
this.childNumber = 0;
this.timeout = kDefaultTimeout;
this.entryFile = entryFile;
this.testDisambiguator = new SafeMap();
} else {
const nesting = parent.parent === null ? parent.nesting :
parent.nesting + 1;
@@ -646,6 +652,8 @@
this.endTime = null;
this.passed = false;
this.error = null;
this.attempt = undefined;
this.passedAttempt = undefined;
this.message = typeof skip === 'string' ? skip :
typeof todo === 'string' ? todo : null;
this.activeSubtests = 0;
@@ -690,6 +698,23 @@
this.loc.file = fileURLToPath(this.loc.file);
}
}
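// When --test-rerun-failures is active, recompute the disambiguated
// identifier that the rerun reporter writes, and skip any test that already
// passed in a previous attempt by replacing its body with a noop.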
if (this.loc != null && this.root.harness.previousRuns != null) {
let testIdentifier = `${relative(this.config.cwd, this.loc.file)}:${this.loc.line}:${this.loc.column}`;
const count = this.root.testDisambiguator.get(testIdentifier);
if (count !== undefined) {
this.root.testDisambiguator.set(testIdentifier, count + 1);
testIdentifier += `:(${count})`;
} else {
this.root.testDisambiguator.set(testIdentifier, 1);
}
this.attempt = this.root.harness.previousRuns.length;
const previousAttempt = this.root.harness.previousRuns[this.attempt - 1]?.[testIdentifier]?.passed_on_attempt;
if (previousAttempt != null) {
this.passedAttempt = previousAttempt;
this.fn = noop;
}
}
}
applyFilters() {
@@ -1329,6 +1354,12 @@
if (!this.passed) {
details.error = this.error;
}
if (this.attempt !== undefined) {
details.attempt = this.attempt;
}
if (this.passedAttempt !== undefined) {
details.passed_on_attempt = this.passedAttempt;
}
return { __proto__: null, details, directive };
}


@@ -8,6 +8,7 @@ const {
ArrayIsArray,
ArrayPrototypePush,
ArrayPrototypeReduce,
ArrayPrototypeSome,
JSONParse,
MathFloor,
MathMax,
MathMin,
@@ -28,7 +29,7 @@
const { AsyncResource } = require('async_hooks');
const { relative, sep, resolve } = require('path');
const { createWriteStream } = require('fs');
const { createWriteStream, readFileSync } = require('fs');
const { pathToFileURL } = require('internal/url');
const { getOptionValue } = require('internal/options');
const { green, yellow, red, white, shouldColorize } = require('internal/util/colors');
@@ -150,6 +151,20 @@ function shouldColorizeTestFiles(destinations) {
});
}
function parsePreviousRuns(rerunFailuresFilePath) {
let data;
try {
data = readFileSync(rerunFailuresFilePath, 'utf8');
} catch (err) {
if (err.code === 'ENOENT') {
data = '[]';
} else {
throw err;
}
}
// Treat unparsable or non-array content as invalid (null) so the caller
// can surface a diagnostic instead of crashing on a corrupt state file.
try {
const parsed = JSONParse(data);
return ArrayIsArray(parsed) ? parsed : null;
} catch {
return null;
}
}
async function getReportersMap(reporters, destinations) {
return SafePromiseAllReturnArrayLike(reporters, async (name, i) => {
const destination = kBuiltinDestinations.get(destinations[i]) ??
@@ -202,6 +217,7 @@ function parseCommandLine() {
const updateSnapshots = getOptionValue('--test-update-snapshots');
const watch = getOptionValue('--watch');
const timeout = getOptionValue('--test-timeout') || Infinity;
const rerunFailuresFilePath = getOptionValue('--test-rerun-failures');
const isChildProcess = process.env.NODE_TEST_CONTEXT === 'child';
const isChildProcessV8 = process.env.NODE_TEST_CONTEXT === 'child-v8';
let globalSetupPath;
@@ -308,9 +324,12 @@
validateInteger(functionCoverage, '--test-coverage-functions', 0, 100);
}
if (rerunFailuresFilePath) {
validatePath(rerunFailuresFilePath, '--test-rerun-failures');
}
const setup = reporterScope.bind(async (rootReporter) => {
const reportersMap = await getReportersMap(reporters, destinations);
for (let i = 0; i < reportersMap.length; i++) {
const { reporter, destination } = reportersMap[i];
compose(rootReporter, reporter).pipe(destination);
@@ -343,6 +362,7 @@
timeout,
updateSnapshots,
watch,
rerunFailuresFilePath,
};
return globalTestOptions;
@@ -637,4 +657,5 @@ module.exports = {
shouldColorizeTestFiles,
getCoverageReport,
setupGlobalSetupTeardownFunctions,
parsePreviousRuns,
};


@@ -923,6 +923,11 @@ EnvironmentOptionsParser::EnvironmentOptionsParser() {
&EnvironmentOptions::test_global_setup_path,
kAllowedInEnvvar,
OptionNamespaces::kTestRunnerNamespace);
AddOption("--test-rerun-failures",
"specifies the path to the rerun state file",
&EnvironmentOptions::test_rerun_failures_path,
kAllowedInEnvvar,
OptionNamespaces::kTestRunnerNamespace);
AddOption("--test-udp-no-try-send",
"", // For testing only.
&EnvironmentOptions::test_udp_no_try_send,


@@ -198,6 +198,7 @@ class EnvironmentOptions : public Options {
bool test_runner_update_snapshots = false;
std::vector<std::string> test_name_pattern;
std::vector<std::string> test_reporter;
std::string test_rerun_failures_path;
std::vector<std::string> test_reporter_destination;
std::string test_global_setup_path;
bool test_only = false;

test/fixtures/test-runner/rerun.js (new file)

@@ -0,0 +1,25 @@
const { test } = require('node:test');
test('should fail on first two attempts', ({ attempt }) => {
if (attempt < 2) {
throw new Error('This test is expected to fail on the first two attempts');
}
});
test('ok', ({ attempt }) => {
if (attempt > 0) {
throw new Error('Test should not rerun once it has passed');
}
});
function ambiguousTest(expectedAttempts) {
test(`ambiguous (expectedAttempts=${expectedAttempts})`, ({ attempt }) => {
if (attempt < expectedAttempts) {
throw new Error(`This test is expected to fail on the first ${expectedAttempts} attempts`);
}
});
}
ambiguousTest(0);
ambiguousTest(1);


@@ -0,0 +1,117 @@
'use strict';
const common = require('../common');
const fixtures = require('../common/fixtures');
const assert = require('node:assert');
const { rm, readFile } = require('node:fs/promises');
const { setTimeout } = require('node:timers/promises');
const { test, beforeEach, afterEach, run } = require('node:test');
const fixture = fixtures.path('test-runner', 'rerun.js');
const stateFile = fixtures.path('test-runner', 'rerun-state.json');
beforeEach(() => rm(stateFile, { force: true }));
afterEach(() => rm(stateFile, { force: true }));
const expectedStateFile = [
{
'test/fixtures/test-runner/rerun.js:17:3': { passed_on_attempt: 0, name: 'ambiguous (expectedAttempts=0)' },
'test/fixtures/test-runner/rerun.js:9:1': { passed_on_attempt: 0, name: 'ok' },
},
{
'test/fixtures/test-runner/rerun.js:17:3': { passed_on_attempt: 0, name: 'ambiguous (expectedAttempts=0)' },
'test/fixtures/test-runner/rerun.js:17:3:(1)': { passed_on_attempt: 1, name: 'ambiguous (expectedAttempts=1)' },
'test/fixtures/test-runner/rerun.js:9:1': { passed_on_attempt: 0, name: 'ok' },
},
{
'test/fixtures/test-runner/rerun.js:17:3': { passed_on_attempt: 0, name: 'ambiguous (expectedAttempts=0)' },
'test/fixtures/test-runner/rerun.js:17:3:(1)': { passed_on_attempt: 1, name: 'ambiguous (expectedAttempts=1)' },
'test/fixtures/test-runner/rerun.js:9:1': { passed_on_attempt: 0, name: 'ok' },
'test/fixtures/test-runner/rerun.js:3:1': { passed_on_attempt: 2, name: 'should fail on first two attempts' },
},
];
const getStateFile = async () => JSON.parse((await readFile(stateFile, 'utf8')).replaceAll('\\\\', '/'));
test('test should pass on third rerun', async () => {
const args = ['--test-rerun-failures', stateFile, fixture];
let { code, stdout, signal } = await common.spawnPromisified(process.execPath, args);
assert.strictEqual(code, 1);
assert.strictEqual(signal, null);
assert.match(stdout, /pass 2/);
assert.match(stdout, /fail 2/);
assert.deepStrictEqual(await getStateFile(), expectedStateFile.slice(0, 1));
({ code, stdout, signal } = await common.spawnPromisified(process.execPath, args));
assert.strictEqual(code, 1);
assert.strictEqual(signal, null);
assert.match(stdout, /pass 3/);
assert.match(stdout, /fail 1/);
assert.deepStrictEqual(await getStateFile(), expectedStateFile.slice(0, 2));
({ code, stdout, signal } = await common.spawnPromisified(process.execPath, args));
assert.strictEqual(code, 0);
assert.strictEqual(signal, null);
assert.match(stdout, /pass 4/);
assert.match(stdout, /fail 0/);
assert.deepStrictEqual(await getStateFile(), expectedStateFile);
});
test('test should pass on third rerun with `--test`', async () => {
const args = ['--test', '--test-rerun-failures', stateFile, fixture];
let { code, stdout, signal } = await common.spawnPromisified(process.execPath, args);
assert.strictEqual(code, 1);
assert.strictEqual(signal, null);
assert.match(stdout, /pass 2/);
assert.match(stdout, /fail 2/);
assert.deepStrictEqual(await getStateFile(), expectedStateFile.slice(0, 1));
({ code, stdout, signal } = await common.spawnPromisified(process.execPath, args));
assert.strictEqual(code, 1);
assert.strictEqual(signal, null);
assert.match(stdout, /pass 3/);
assert.match(stdout, /fail 1/);
assert.deepStrictEqual(await getStateFile(), expectedStateFile.slice(0, 2));
({ code, stdout, signal } = await common.spawnPromisified(process.execPath, args));
assert.strictEqual(code, 0);
assert.strictEqual(signal, null);
assert.match(stdout, /pass 4/);
assert.match(stdout, /fail 0/);
assert.deepStrictEqual(await getStateFile(), expectedStateFile);
});
test('using `run` api', async () => {
let stream = run({ files: [fixture], rerunFailuresFilePath: stateFile });
stream.on('test:pass', common.mustCall(2));
stream.on('test:fail', common.mustCall(2));
// eslint-disable-next-line no-unused-vars
for await (const _ of stream);
await setTimeout(common.platformTimeout(10)); // Wait for the stream to finish processing
assert.deepStrictEqual(await getStateFile(), expectedStateFile.slice(0, 1));
stream = run({ files: [fixture], rerunFailuresFilePath: stateFile });
stream.on('test:pass', common.mustCall(3));
stream.on('test:fail', common.mustCall(1));
// eslint-disable-next-line no-unused-vars
for await (const _ of stream);
await setTimeout(common.platformTimeout(10)); // Wait for the stream to finish processing
assert.deepStrictEqual(await getStateFile(), expectedStateFile.slice(0, 2));
stream = run({ files: [fixture], rerunFailuresFilePath: stateFile });
stream.on('test:pass', common.mustCall(4));
stream.on('test:fail', common.mustNotCall());
// eslint-disable-next-line no-unused-vars
for await (const _ of stream);
await setTimeout(common.platformTimeout(10)); // Wait for the stream to finish processing
assert.deepStrictEqual(await getStateFile(), expectedStateFile);
});