This page documents how jsenv can be used to write and execute tests in a web browser. For Node.js testing, refer to I) Test in Node.js.
Key features of jsenv tests:
- Debugging a test file is identical to debugging a source file.
- Test execution is standard, making it easy to switch between source and test files.
- Isolated environment: each test file runs in a dedicated runtime.
- Test files can be executed in Chrome, Firefox, and Safari.
- Smart parallelism.
- Human-friendly logs: dynamic, colorful, and easy to read.
This section demonstrates how to write and execute tests for a source file using jsenv.
project/
  src/
    sum.js
    index.html
  package.json
Let's write a test for sum.js:
export const sum = (a, b) => a + b;
To test sum.js, the following files are needed:
project/
  + scripts/
    + dev.mjs
    + test.mjs
  src/
    sum.js
    + sum.test.html
    index.html
  package.json
src/sum.test.html
<!doctype html>
<html>
  <head>
    <title>Title</title>
    <meta charset="utf-8" />
    <link rel="icon" href="data:," />
  </head>
  <body>
    <script type="module">
      import { sum } from "./sum.js";

      const actual = sum(1, 2);
      const expect = 3;
      if (actual !== expect) {
        throw new Error(`sum(1,2) should return 3, got ${actual}`);
      }
    </script>
  </body>
</html>
scripts/dev.mjs: starts a web server; it will be used to execute sum.test.html in a browser.
import { startDevServer } from "@jsenv/core";

await startDevServer({
  sourceDirectoryUrl: new URL("../src/", import.meta.url),
  port: 3456,
});
scripts/test.mjs: executes the test file(s).
import { executeTestPlan, chromium } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});
Before executing tests, install the dependencies:
npm i --save-dev @jsenv/core
npm i --save-dev @jsenv/test
npm i --save-dev @playwright/browser-chromium
☝️ playwright is used by @jsenv/test to start a web browser (Chromium).
Run the tests with the following command:
node ./scripts/test.mjs
The terminal will display the test execution logs.
In a real project there may be many test files:
project/
  src/
    sum.test.html
    foo.test.html
    bar.test.html
    ... and so on ...
Each test file can be executed in isolation, directly in the browser:
The page is blank because sum.test.html completed without error and without rendering anything. Some tests may render UI, but that is not the case here.
Debugging test execution can be done using browser dev tools:
The example above compares actual and expect without an assertion library. In practice, tests often use assertion libraries. Below is an example using @jsenv/assert. Note that any assertion library can be used.
+ import { assert } from "@jsenv/assert";
  import { sum } from "./sum.js";

  const actual = sum(1, 2);
  const expect = 3;
- if (actual !== expect) {
-   throw new Error(`sum(1,2) should return 3, got ${actual}`);
- }
+ assert({ actual, expect });
Your web server is automatically started if needed. This is done using the webServer parameter.

If a server is already running at webServer.origin:

- Tests are executed using the existing server.

If no server is running at webServer.origin:

1. webServer.moduleUrl or webServer.command is executed in a separate process.
2. The code waits for the server to start. If it doesn't start within 5 seconds, an error is thrown.
3. Tests are executed using the server started in step 1.
4. After tests complete, the server is stopped by killing the process.
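The wait-for-server behaviour described above can be sketched as a polling loop. This is an illustrative sketch, not jsenv's actual implementation; the helper name and the 100ms polling interval are assumptions:

```javascript
// Illustrative sketch of "wait for the server, fail after a timeout".
// Not jsenv's actual code; waitForServer and the polling interval are assumptions.
import http from "node:http";

const waitForServer = async (origin, timeoutMs = 5000) => {
  const startedAt = Date.now();
  while (true) {
    try {
      // Attempt a request; success means the server is up.
      await new Promise((resolve, reject) => {
        const request = http.get(origin, (response) => {
          response.resume(); // drain the response
          resolve();
        });
        request.on("error", reject);
      });
      return;
    } catch {
      if (Date.now() - startedAt > timeoutMs) {
        throw new Error(`server not reachable at ${origin} after ${timeoutMs}ms`);
      }
      await new Promise((resolve) => setTimeout(resolve, 100)); // retry shortly
    }
  }
};

// Usage: start a throwaway server on a random free port, then wait for it.
const server = http.createServer((request, response) => response.end("ok"));
await new Promise((resolve) => server.listen(0, resolve));
const origin = `http://localhost:${server.address().port}`;
await waitForServer(origin);
console.log("server is up");
server.close();
```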
import { executeTestPlan, chromium, firefox, webkit } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
      },
      firefox: {
        runtime: firefox(),
      },
      webkit: {
        runtime: webkit(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});
Before executing tests, install the Firefox and WebKit dependencies:
npm i --save-dev @playwright/browser-firefox
npm i --save-dev @playwright/browser-webkit
The terminal output:
Each test is executed in a browser tab using one instance of the browser.
For further isolation, you can dedicate a browser instance per test by using chromiumIsolatedTab instead of chromium. The same applies to Firefox and WebKit.
import { executeTestPlan, chromiumIsolatedTab } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromiumIsolatedTab(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});
Executions are started one after another, without waiting for the previous one to finish. Parallelism can be configured using the parallel parameter.
import { executeTestPlan, chromium } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  parallel: {
    max: "50%",
    maxCpu: "50%",
    maxMemory: "50%",
  },
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});
max: controls the maximum number of parallel executions.

| max | Max executions in parallel |
| --- | --- |
| 1 | Only one (disables parallelism) |
| 5 | 5 |
| "80%" | 80% of the cores available on the machine |
The default value is "80%": for a machine with 10 processors, up to 8 executions can run in parallel.
Parallelism can also be disabled with parallel: false, which is equivalent to parallel: { max: 1 }.
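To make the percentage concrete, here is a hypothetical helper (not part of @jsenv/test's API; the name and exact rounding are assumptions) showing how a max value could resolve to a concrete execution count:

```javascript
// Hypothetical helper (not part of @jsenv/test) resolving a `max` value
// such as "80%", a plain number, or `false` into an execution count.
import os from "node:os";

const resolveMaxExecutions = (max, cpuCount = os.cpus().length) => {
  if (max === false) return 1; // parallelism disabled
  if (typeof max === "string" && max.endsWith("%")) {
    const ratio = parseFloat(max) / 100;
    return Math.max(1, Math.floor(cpuCount * ratio));
  }
  return max; // plain number: used as-is
};

console.log(resolveMaxExecutions("80%", 10)); // 8
console.log(resolveMaxExecutions(false, 10)); // 1
```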
maxCpu: prevents new executions from starting if CPU usage is too high. The default value is 80%: new executions start only while CPU usage is below 80% of the total available CPU.
maxMemory: prevents new executions from starting if memory usage is too high. The default value is 50%: new executions start only while memory usage is below 50% of the total available memory.
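Taken together, max, maxCpu, and maxMemory can be thought of as a gate checked before each new execution starts. A hypothetical sketch (not jsenv's actual code; the function name and parameter shapes are assumptions):

```javascript
// Hypothetical gate (not jsenv's actual code) combining the three limits:
// a new execution starts only when all of them allow it.
const canStartNewExecution = ({
  runningCount,
  cpuUsageRatio, // 0..1, fraction of total CPU in use
  memoryUsageRatio, // 0..1, fraction of total memory in use
  max = 8, // e.g. 80% of a 10-core machine
  maxCpu = 0.8,
  maxMemory = 0.5,
}) => {
  return (
    runningCount < max &&
    cpuUsageRatio < maxCpu &&
    memoryUsageRatio < maxMemory
  );
};

console.log(
  canStartNewExecution({ runningCount: 2, cpuUsageRatio: 0.4, memoryUsageRatio: 0.3 }),
); // true
console.log(
  canStartNewExecution({ runningCount: 2, cpuUsageRatio: 0.9, memoryUsageRatio: 0.3 }),
); // false: CPU usage above maxCpu
```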
Each test file is given 30s to execute. If this duration is exceeded, the browser tab is closed, and the execution is marked as failed. This duration can be configured as shown below:
import { executeTestPlan, chromium } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
        allocatedMs: 60_000,
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});
You can generate HTML files showing code coverage for test executions:
The coverage above was generated by the following code:
import { executeTestPlan, chromium, reportCoverageAsHtml } from "@jsenv/test";

const testResult = await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
  coverage: true,
});
reportCoverageAsHtml(testResult, new URL("./coverage/", import.meta.url));
Coverage can also be written to a JSON file.
import { executeTestPlan, chromium, reportCoverageAsJson } from "@jsenv/test";

const testResult = await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
  coverage: true,
});
reportCoverageAsJson(testResult, new URL("./coverage.json", import.meta.url));
This JSON file can be used with other tools, such as https://github.com/codecov/codecov-action.
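For example, a hypothetical GitHub Actions step uploading the file (the action version, the `files` input name, and the coverage file path are assumptions; check the action's documentation):

```yml
# Hypothetical workflow step uploading the coverage file produced by
# reportCoverageAsJson; input names may differ across action versions.
- uses: codecov/codecov-action@v4
  with:
    files: ./scripts/coverage.json
```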
Now let's say we want to get code coverage for a file where code behaves differently depending on the browser:
if (window.navigator.userAgent.includes("Firefox")) {
  console.log("firefox");
} else if (window.navigator.userAgent.includes("Chrome")) {
  console.log("chrome");
} else if (window.navigator.userAgent.includes("AppleWebKit")) {
  console.log("webkit");
} else {
  console.log("other");
}
The file is executed by the following HTML file:
<!doctype html>
<html>
  <head>
    <title>Title</title>
  </head>
  <body>
    <script type="module" src="./demo.js"></script>
  </body>
</html>
Execute the HTML file in Firefox, Chrome, and Webkit, and generate the coverage:
import {
  executeTestPlan,
  chromium,
  firefox,
  webkit,
  reportCoverageAsHtml,
} from "@jsenv/test";

const testPlanResult = await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./client/**/many.test.html": {
      chromium: {
        runtime: chromium(),
      },
      firefox: {
        runtime: firefox(),
      },
      webkit: {
        runtime: webkit(),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../client/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
  coverage: true,
});
reportCoverageAsHtml(testPlanResult, new URL("./coverage/", import.meta.url));
The resulting coverage:
And the following warnings in the console:
Coverage conflict on "./client/many.js", found two coverage that cannot be merged together: v8 and istanbul. The istanbul coverage will be ignored.
--- details ---
This happens when a file is executed on a runtime using v8 coverage (node or chromium) and on runtime using istanbul coverage (firefox or webkit)
--- suggestion ---
disable this warning with coverage.v8ConflictWarning: false
--- suggestion 2 ---
force coverage using istanbul with coverage.methodForBrowsers: "istanbul"
At this point, either disable the warning with coverage: { v8ConflictWarning: false }, or force Chromium coverage to be collected using "istanbul":

coverage: {
  methodForBrowsers: "istanbul",
},
During test execution, browsers are opened in headless mode, and once all tests have executed, the browsers are closed. It's possible to display the browsers and keep them open using keepRunning: true:
  import { executeTestPlan, chromium } from "@jsenv/test";

  await executeTestPlan({
    rootDirectoryUrl: new URL("../", import.meta.url),
    testPlan: {
      "./src/**/*.test.html": {
        chromium: {
          runtime: chromium(),
        },
      },
    },
    webServer: {
      origin: "http://localhost:3456",
      rootDirectoryUrl: new URL("../src/", import.meta.url),
      moduleUrl: new URL("./dev.mjs", import.meta.url),
    },
+   keepRunning: true,
  });
In that case execution timeouts are disabled.
The following code forwards custom launch options to playwright:
import { executeTestPlan, chromium } from "@jsenv/test";

await executeTestPlan({
  rootDirectoryUrl: new URL("../", import.meta.url),
  testPlan: {
    "./src/**/*.test.html": {
      chromium: {
        runtime: chromium({
          playwrightLaunchOptions: {
            ignoreDefaultArgs: ["--mute-audio"],
          },
        }),
      },
    },
  },
  webServer: {
    origin: "http://localhost:3456",
    rootDirectoryUrl: new URL("../src/", import.meta.url),
    moduleUrl: new URL("./dev.mjs", import.meta.url),
  },
});
See https://playwright.dev/docs/api/class-browsertype#browser-type-launch
The value returned by executeTestPlan is an object called testPlanResult.
import { executeTestPlan } from "@jsenv/test";
const testPlanResult = await executeTestPlan();
It contains all execution results and additional information.