
[Feature Request]: Option to run tests in parallel? #1777

Open
NullVoxPopuli opened this issue Jul 8, 2024 · 3 comments
Labels
Component: Core For module, test, hooks, and reporters. Type: Enhancement New idea or feature request. Type: Meta Seek input from maintainers and contributors.

Comments

@NullVoxPopuli
Contributor

NullVoxPopuli commented Jul 8, 2024

It would be great if I could declare a module as parallel, like this:

import { module, test } from 'qunit';

module.parallel('my suite', function (hooks) {
  // all 3 of these tests run "at the same time"
  test('a', async function (assert) {
    await 0;
    assert.ok(true);
  });

  test('b', async function (assert) {
    await 0;
    assert.ok(true);
  });

  test('c', async function (assert) {
    await 0;
    assert.ok(true);
  });
});

These tests are overly simplified, but I imagine this could lead to an implementation along these lines:

// pseudocode
if (module.isParallel) {
  await Promise.all(module.tests.map(test => runTest(test)));
} else {
  // current behavior
  for (let test of module.tests) {
    await runTest(test);
  }
}

This could greatly improve the performance of async tests that are I/O bound (and thus spend most of their time awaiting external work).
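As a self-contained sketch of that performance claim (plain JavaScript, not QUnit internals; `runAll` and `delay` are hypothetical stand-ins, with `delay` playing the role of an I/O-bound test body):

```javascript
// Demonstrates why awaiting I/O-bound tests concurrently reduces wall time.
// `delay` stands in for a test that mostly waits on external work.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Runs all tests either concurrently (Promise.all) or one at a time,
// and returns the elapsed wall time in milliseconds.
async function runAll(tests, parallel) {
  const start = Date.now();
  if (parallel) {
    await Promise.all(tests.map((t) => t()));
  } else {
    for (const t of tests) {
      await t();
    }
  }
  return Date.now() - start;
}

const tests = [() => delay(50), () => delay(50), () => delay(50)];

const result = (async () => {
  const sequential = await runAll(tests, false); // roughly 150ms
  const concurrent = await runAll(tests, true);  // roughly 50ms
  console.log(concurrent < sequential); // logs: true
  return { sequential, concurrent };
})();
```

The concurrent run finishes in about the duration of the single slowest test, rather than the sum of all of them.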


Maybe this should be module.concurrent, since you can't actually have single-threaded parallelism.

To get true parallelism, you'd need to split test running into workers -- which would be useful as well, and maybe easier from a context-isolation perspective?

@Krinkle Krinkle added Type: Enhancement New idea or feature request. Type: Meta Seek input from maintainers and contributors. Component: Core For module, test, hooks, and reporters. labels Jul 9, 2024
@Krinkle
Member

Krinkle commented Jul 9, 2024

That sounds great to me! I've seen ad-hoc ways to do this in a few places already. Most recently, I did one myself here in this repo to speed up the CLI tests.

function concurrentMapKeys (input, concurrency, asyncFn) {
  // …
}

QUnit.module('CLI Main', () => {
  QUnit.test.each('fixtures',
    // Faster testing: Let the commands run in the background with concurrency,
    // and only await/assert the already-started command.
    concurrentMapKeys(readFixtures(FIXTURES_DIR), 0, (runFixture) => runFixture()),
    async (assert, fixture) => {
      assert.timeout(10000);
      const result = await fixture;
      assert.equal(result.snapshot, result.expected, result.name);
    }
  );
});
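The body of `concurrentMapKeys` is elided above. A hedged sketch of a helper with that shape (the signature matches the snippet, but this is an assumption, not the implementation from the QUnit repository) could be:

```javascript
// Sketch only. Maps each key of `input` through `asyncFn`, starting at most
// `concurrency` calls at a time (0 or less is treated here as "unlimited" --
// an assumption). Returns an object of promises synchronously, so the result
// can be handed straight to QUnit.test.each().
function concurrentMapKeys(input, concurrency, asyncFn) {
  const keys = Object.keys(input);
  const limit = concurrency > 0 ? concurrency : keys.length;
  const queue = [];
  let active = 0;

  // Start the next queued callback if a concurrency slot is free.
  function next() {
    if (active < limit && queue.length > 0) {
      active++;
      queue.shift()();
    }
  }

  const output = {};
  for (const key of keys) {
    output[key] = new Promise((resolve, reject) => {
      queue.push(() => {
        Promise.resolve(asyncFn(input[key])).then(
          (value) => { active--; next(); resolve(value); },
          (error) => { active--; next(); reject(error); }
        );
      });
    });
    next();
  }
  return output;
}
```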

A few things that come to mind:

  • Hooks should be fine. These tend to be contextual and well-written in practice, although any use of lexical variables will of course be racy. That's something to document as a gotcha. But because we already automatically create and tear down these context objects, I'm not too worried about having multiple of them.
  • Modules can already overlap via QUnit.config.reorder. I think you're proposing concurrency within a single module only, but this means we could potentially go further.
  • Events are where we could get into some difficulty. I believe there's a pretty strong assumption that testStart/log/testEnd happen in that order and don't overlap. Given that reporters are not usually maintained by the people writing the tests, this is not something you can take care of by writing tests differently/better. We may want to fake a thing or two here so that we neatly present a sequential timeline, e.g. buffer events and make sure they get reported FIFO based on the order in which the tests start -- so that if test 1 starts and takes a second, tests 2 and 3 can run meanwhile but won't be reported until test 1 is finished.
  • Runtime metrics. These would presumably remain measured within a single test. But it would become common for a test to be wrongly reported as slow just because it did one async thing, after which the main thread moved on to a different test that is slow, and that test's time gets observed as part of the first. In addition, the total runtime would no longer be the sum of the test runtimes. I think that's already the case; afaik we measure that separately, so total runtime should remain the walltime total, without double-counting. We may want to add a boolean flag to the event data so that reporters can either choose to report the module runtime only, or mark it as "async" so that readers know these numbers may be off.
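The event-buffering idea in the third bullet could be sketched as follows. The `testStart`/`log`/`testEnd` names come from the comment above; the class and the event shapes are hypothetical, not QUnit's real reporter interface:

```javascript
// Buffers per-test events so a downstream reporter sees each test's
// testStart/log/testEnd as one uninterrupted sequence, reported FIFO
// in the order the tests started -- even if the tests ran concurrently.
class SequentialEventBuffer {
  constructor(emit) {
    this.emit = emit;  // downstream reporter callback (hypothetical shape)
    this.queue = [];   // one entry per test, in start order
  }

  testStart(testId) {
    this.queue.push({ testId, events: [{ type: 'testStart', testId }], done: false });
    this.flush();
  }

  log(testId, data) {
    this.find(testId).events.push({ type: 'log', testId, data });
    this.flush();
  }

  testEnd(testId) {
    const entry = this.find(testId);
    entry.events.push({ type: 'testEnd', testId });
    entry.done = true;
    this.flush();
  }

  find(testId) {
    return this.queue.find((entry) => entry.testId === testId);
  }

  flush() {
    // Only the head of the queue may report; later-started tests are
    // held back until the earlier-started test has fully ended.
    while (this.queue.length) {
      const head = this.queue[0];
      for (const event of head.events.splice(0)) {
        this.emit(event);
      }
      if (!head.done) break;
      this.queue.shift();
    }
  }
}
```

With this, if test "a" starts first and test "b" finishes while "a" is still running, all of "b"'s events are buffered until "a" has ended.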

In terms of API, the natural choice is a submethod indeed, but we do have a lot of them now:

https://qunitjs.com/api/QUnit/module/

And more are coming in #1772.

For concurrency, perhaps we can use an option instead.

QUnit.module('example', { concurrency: boolean | number }, function (hooks) {
  QUnit.test();
  QUnit.test();
  QUnit.test();
});

Given that the options object doubles as the seed for the test context object, this would potentially conflict with existing code. This is not unprecedented, in that we already reserve the before, beforeEach, afterEach, and after keys in the same object. But it is something to consider.

Another idea would be to add it to the hooks object. That avoids conflicts, and has the benefit of working within the simpler-looking pattern of only using the scope parameter. Passing three parameters feels more complex, with another thing to learn.

QUnit.module('example', function ( hooks ) {
  hooks.enableParallel();

  QUnit.test();
  QUnit.test();
  QUnit.test();
});

> to have parallelism, you'd need to split test-running in to workers -- which would be useful as well -- and maybe easier from a context isolation perspective?

Yeah, that's something we can't really do in core, but something the CLI and runners like Testem could do. See the previous discussion in "Split tests for parallelization" (#947).

To do this well, without needing upfront awareness of the modules and tests, you'd want to deterministically shard and filter by moduleId. This has been requested in "Dynamically filter tests to run" (#1183) as well.
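Deterministic sharding could look roughly like this. The hash function here (FNV-1a) is purely illustrative -- it is not QUnit's moduleId algorithm -- and `belongsToShard` is a hypothetical helper name:

```javascript
// Each worker computes the same shard assignment from a stable module
// identifier, so no upfront coordination or module discovery is needed.
// FNV-1a is used here for illustration only.
function fnv1a(str) {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

// Worker `shardIndex` (of `shardCount` total) runs a module if and only
// if its hashed moduleId falls into that worker's bucket.
function belongsToShard(moduleId, shardIndex, shardCount) {
  return fnv1a(moduleId) % shardCount === shardIndex;
}
```

Because the assignment is a pure function of the moduleId, every module lands in exactly one shard, and re-running the same shard is reproducible.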

@Krinkle
Member

Krinkle commented Jul 10, 2024

Fixture reset. If we think parallel is likely to be useful in a browser, we may want to give each test its own fixture. That could involve providing an automatic DOM reference like "this.fixture" or "assert.getFixture()", or some other mechanism.

QUnit.config.current may also need some thought. There are still extension features for which this is the only documented way to access certain information. That's solvable. We could document assert.test as an officially supported accessor to make up for it.

@Krinkle
Member

Krinkle commented Jul 10, 2024

If we limit this to the CLI, then a safer option would indeed be to use subprocesses. A fairly straightforward approach could be to spread out by test file. This is cruder, in that tests concentrated in one module wouldn't run in parallel. But that might be offset by the gain of truly parallel main threads, and by having entire modules run concurrently.
