Mark tests that test overall behaviour other tests expand on in more detail #5823
Comments
Basically, have a series of "smoke tests" that are run first, then go deeper if those pass? Interesting idea. @dantman You can also run jest with the …
I like this idea! Maybe we could have a flag that marks later … This will totally mess with …
Another choice could be to describe this as part of the test itself:

```js
describe('foo()', () => {
  testBeforeAll('should be a number', () => {
    expect(typeof foo()).toBe('number');
  });
  test('should be a positive number', () => {
    expect(foo()).toBeGreaterThan(0);
  });
});
```
@cpojer any thoughts on this feature?
I'm not sure I see the value of making tests dependent on each other. Worst case, every test depends on every test before it. In that case, `--bail` on a per-test basis makes more sense than adding new APIs imho.
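For reference, a minimal sketch of what `bail` offers today, assuming current semantics (it stops the run after a number of failing test *files*, not after a failing test within a file):

```js
// jest.config.js — stop the whole run after the first failing test file.
// There is no built-in per-test bail at this point.
module.exports = {
  bail: 1, // equivalent to passing --bail on the CLI
};
```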
@jamonholmgren Nice idea, but less practical for the example in the OP, where you want to bail on only a certain `describe`. Validating some value before using it for the rest of a test is very common imo.
It also completely falls apart when you have tests in different test folders grouped with relevant code and do not want to have to put all your smoke tests in a single spot disconnected from the code they are testing.
I'm starting to like this idea:

```js
test.essential(name, fn, timeout) // if it fails, other tests within the scope will not be executed
describe.essential(name, fn, timeout) // if it fails, other tests within the scope will not be executed
```

It gives lots of flexibility in designing the test suite.

```js
describe('foo()', () => {
  test.essential('should be a number', () => {
    expect(typeof foo()).toBe('number');
  });
  test('should be a positive number', () => {
    expect(foo()).toBeGreaterThan(0);
  });
});
```
I like `test.essential`. Though here's another idea to throw into the mix: what about defining preconditions for tests? I often do things like this:

```js
describe('foo()', () => {
  test('should be a number', () => {
    expect(typeof foo()).toBe('number');
  });
  test('should be a positive number', () => {
    precondition(() => {
      // Greater than/less than tests require the result to be a number.
      // If this fails, then the test isn't actually valid.
      expect(typeof foo()).toBe('number');
    });
    expect(foo()).toBeGreaterThan(0);
  });
});
```

Test "dependencies" could be written by putting the test body inside a function, executing that function as a test once on its own, and executing that function as a precondition inside other tests:

```js
describe('foo()', () => {
  function fooIsANumber() {
    expect(typeof foo()).toBe('number');
  }
  test('should be a number', () => {
    fooIsANumber();
  });
  test('should be a positive number', () => {
    precondition(fooIsANumber);
    expect(foo()).toBeGreaterThan(0);
  });
});
```

Though perhaps `essential` and `precondition` are features for separate use cases that are both useful and both would be good to implement. Preconditions are more useful for tests that use functions from other tests, where those functions aren't themselves 'essential' to an entire test suite.
I'm not saying we would do this, but from an API standpoint, what about:

```js
describe('foo()', () => {
  test('should be a number', () => {
    expect(typeof foo()).toBe('number');
  });
  test.depends(['should be a number'], 'should be a positive number', () => {
    expect(foo()).toBeGreaterThan(0);
  });
});
```
I'm more a fan of just "all other tests in the same and descendant scopes are skipped" than being able to point at specific tests directly from other tests.
@SimenB agreed, what about:

```js
describe('foo()', () => {
  test.required('should be a number', () => {
    expect(typeof foo()).toBe('number');
  });
  test('should be a positive number', () => {
    expect(foo()).toBeGreaterThan(0);
  });
});
```
I kind of like `test.required`. I too don't like the idea of having to repeat the name of a test, which is normally a human-readable sentence, not an identifier key intended to be used multiple times. And of course, like I said, I think it would be good to support both `essential`-style marking and `precondition`s.
I like the idea of both `test.depends` and `test.essential`. Having them both provides more flexibility and allows us to truly test behavior, not just units of code.
I like …

My motivation/urge for this feature: I will have nested describes, and my current plan is to have the deepest describe contain a list of dependent tests. These tests verify UI element behaviors for all the various cases, e.g. (in)valid inputs. For the sake of speed, the behavior tests are dependent on each other, on test order. That way I do not need to restore UI state (with before/after-each-test hooks) for each of the 3000 tests. The acceptable downside of this speedup is that complete test sequences fail (inside the deepest describe). (If you have any other idea, e.g. with a custom environment or custom globals, please drop a note.) Argumentation for using …
Has there been any progress on this? I just ran into this today. Here's my testing scenario: first, I need to test to ensure that some … Obviously, all the tests using … Expected behavior would be: …
I'm also encountering this problem, and I worked around it by creating a gating promise that tests depend on and that wraps the required test. However, this is cumbersome and ugly, plus it lists all tests as failing. I would prefer explicitly marking a certain test as a requirement; that way, when you run individual tests, it can run the required tests too. E.g.:

```js
let thing
test('creation works', async () => {
  thing = await getExpensiveThing()
  expect(thing).toHaveProperty('bar')
})
test.requirePrevious('bar is 5', () => {
  expect(thing.bar).toBe(5)
})
test.require('creation works')('can cleanup', async () => {
  await expect(thing.cleanup()).resolves.toBeTruthy()
})
```

And if you would run only the … If creation fails, it would just skip the other tests. Perhaps the requirement methods could instead be named …
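For reference, a minimal sketch of the gating-promise workaround described above (the shape of the gate is my assumption; `getExpensiveThing` comes from the example):

```js
// Gating promise: dependent tests await it, so they fail (rather than
// skip) when the required test failed — the downside described above.
let resolveGate, rejectGate
const gate = new Promise((resolve, reject) => {
  resolveGate = resolve
  rejectGate = reject
})

test('creation works', async () => {
  try {
    const thing = await getExpensiveThing()
    expect(thing).toHaveProperty('bar')
    resolveGate(thing)
  } catch (err) {
    rejectGate(err)
    throw err
  }
})

test('bar is 5', async () => {
  const thing = await gate // rejects here if creation failed
  expect(thing.bar).toBe(5)
})
```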
I already used this feature a lot in TestNG for large projects, and it was good for avoiding too many false positives and showing the root cause more easily. With "depends", the test framework can more easily auto-organize the order in which the tests are performed. Currently there is no "ordering", so tests tend to be large in order to perform various steps in sequence. See an example at http://websystique.com/java/testing/testng-dependsonmethods-example/

We could have something like:

```js
let thing
let one = it('do one', async () => {
  thing = await doOne()
})
let two = it('do two', async () => {
  await thing.doSomething()
}, one) // here it is the "depends on" argument
```

So when Jest is run, it will organize the execution graph according to the dependencies, and if one node fails, all dependent nodes won't run.
Actually, that seems very intuitive. How about this refinement:

```js
let thing
let one = it('do one', async () => {
  thing = await doOne()
})
it('do two', async () => {
  await one
  await thing.doSomething()
})
```

So `it` and `test` return a Promise for completion, and you simply await it. If `do one` fails, it will throw a subclass of Error so that other tests can detect that they failed due to a dependent test, and skip instead of erroring.

It means that you can't easily analyze the code to find a dependency graph, but that's not necessary either.
It seems like a nice idea, at least for an application that only has narrow complexity. But when we have hundreds of tests in one file, it will make a complex tree of waiting, and maybe circular `await`s, I think 🤔. Or maybe the point of running tests one by one is decoupling and making tests parallel. CMIIW.
Actually, I just realized that this way you can't force a parent test to run; it needs to be like this:

```js
let thing
let testOne = it('do one', async () => {
  thing = await doOne()
  return thing
})
it('do two', async () => {
  const thing = await testOne()
  await thing.doSomething()
})
```

When you run …
When I import this at the top of a test file, it does exactly what I want:

```js
// https://github.com/facebook/jest/issues/5823
class TestError extends Error {
  constructor(title, error) {
    super(title)
    this.message = `test "${title}" failed: ${error.message}`
    this.stack = error.stack
  }
}

const origTest = global.test
global.test = (title, fn) => {
  let alreadyRan = false
  let result = undefined
  let fnError = undefined
  const handleError = (error, saveError) => {
    let newError
    if (error instanceof TestError) {
      error.message = `Dependency: ${error.message}`
      newError = error
    } else {
      newError = new TestError(title, error)
    }
    if (saveError) fnError = newError
    throw newError
  }
  const runFnOnce = () => {
    if (alreadyRan) {
      if (fnError) throw fnError
      else return result
    }
    alreadyRan = true
    try {
      result = fn()
    } catch (err) {
      // synchronous error
      handleError(err, true)
    }
    // async error (guard added: a sync fn may return undefined)
    if (result && result.catch) result = result.catch(err => handleError(err, false))
    return result
  }
  origTest(title, runFnOnce)
  return runFnOnce
}
// add .each etc
Object.assign(test, origTest)
```

It wraps the global `test` function. I tried adding it to the global test setup, but it seems that …
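A hypothetical usage sketch of the wrapper (my own illustration, following the earlier `testOne` pattern): the wrapped `test` returns the run-once function, so a dependent test can call and await it.

```js
// `test` now returns runFnOnce: calling it from another test either
// reuses the cached result or rethrows the wrapped TestError, so the
// dependent test fails with a "Dependency: ..." message.
const testOne = test('do one', async () => {
  return await doOne() // doOne() is assumed from the earlier example
})

test('do two', async () => {
  const thing = await testOne()
  await thing.doSomething()
})
```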
That’s interesting. Do you know if Jest always runs the tests in the order they appear (are registered), or do they run in parallel in some cases?
In theory, Jest could run individual tests in different processes, but I think it runs tests in the same file in order. However, when the tests are async, they then run in parallel after starting up, and that's where the awaiting of tests comes in.

If Jest ran tests from the same file in multiple processes, it couldn't do that with the approach I showed here. It would work, but the parent tests would run multiple times for no reason. That would require some static analysis, seeing where the test handles are used and building a dependency graph, or it would require manual hinting.
I updated #10077 with a new API which I've been using for a while, and it works great. It doesn't need any big changes, it enforces dependencies exactly, it is robust when you move tests around, it's easy to read, it can handle sync and async results, etc. I really like it.
I really like this. It's nice and explicit, which I like. However, my team has hundreds of tests this would need to be added to, and many developers from different teams and of different skill levels adding new tests every day. An implicit solution like … I am 100% for adding …
I also like the idea of adding the …
@enriqu3l I don't like `test.essential` because it's extra information you need to provide and maintain. With `test.result`, you enforce a dependency when you need it and only when you need it, and as you stop needing it, the dependency gets removed as well. You can even wait for multiple tests in a single test.
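A sketch of waiting on multiple tests (the exact `test.result` signature here is assumed from the proposal in #10077; this is not a shipped Jest API):

```js
// Assumed shape: test.result(name) resolves with the named test's
// return value and rejects if that test failed.
test('combined behaviour', async () => {
  const [user, session] = await Promise.all([
    test.result('creates a user'),   // hypothetical earlier tests
    test.result('opens a session'),
  ]);
  expect(session.userId).toBe(user.id);
});
```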
@wmertens I like … Conceptually, using … In my case, and I believe the case of others in this thread, our "dependent" test cases don't actually depend on the first test case or its result. They will run just fine without the first test case. The problem is simply that they might run unnecessarily, and will always fail, in the case that the first test fails. This pollutes the test output and makes it slightly more difficult to spot the actual failure condition, and also makes things a bit slower.
@apples would your use case not be served already by `beforeAll`? Perhaps with a test on a global that `beforeAll` sets up?
@wmertens No, it's not that there is some kind of setup required. Consider a scenario in which you have a function `foo()`. You could test it like this:

```js
it('is not null', () => {
  expect(foo()).not.toBeNull();
});
it('A', () => {
  expect(foo().a).toBe(1);
});
it('B', () => {
  expect(foo().b).toBe(2);
});
```

In this case, tests A and B are completely irrelevant if the first test fails, but they will still be run anyway. With your `test.result` proposal:

```js
it('A', () => {
  test.result('is not null'); // don't care about the actual result, as there is none
  expect(foo().a).toBe(1);
});
```

But this would need to be carefully maintained for each test (although, admittedly, it's not a big deal if it is forgotten for one or two tests). Currently, the best solution (afaik) is to simply let these tests fail. But this pollutes the output with failed tests, which makes it difficult to see that the issue is accurately identified by the first test alone. Ideally, tests A and B would be skipped entirely and show up in the output as "skipped".
It may read a little bit weird, but that's exactly what `beforeAll` does: if it fails, none of the tests run.
No, … If you have 2 …
@apples so `beforeAll` is almost what you want, but it needs better error behavior? Except that you would also want `test.essential` to apply halfway into a file; I wonder if that would not be better expressed by a `beforeAll` in a `describe` block (not sure if possible, but it sounds nice).
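For what it's worth, `beforeAll` inside a `describe` block is supported; a minimal sketch (note that when the hook throws, Jest reports the tests in that block as failed rather than skipped, which is the error behavior under discussion):

```js
describe('foo()', () => {
  // Scoped hook: runs once, before the tests in this block only.
  beforeAll(() => {
    if (typeof foo() !== 'number') {
      throw new Error('foo() must return a number');
    }
  });

  test('should be a positive number', () => {
    expect(foo()).toBeGreaterThan(0);
  });
});
```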
I'm looking for this feature today (´-ωก`). In my case, the tests look up JS modules under directories and ensure each index.js exposes specific named exports. What I expect would look like:

```js
describe('~store/*/index.js', () => {
  let moduleEntries
  // if this throws, report the title and suspend execution of the other tests
  test.ensure('every store contains index.js', () => {
    moduleEntries = resolveModuleEntries()
    return true
  })
  describe.each(
    moduleEntries /* already ensured */
  )('~store/%s/index.js', (name, module) => {
    it('should expose { reducer }', () => {
      expect(typeof module.reducer).toBe('function')
    })
  })
})

// example that uses `fs` and `require()`
const fs = require('fs')
const path = require('path')

function resolveModuleEntries () {
  const basedir = path.join(__dirname, '../store')
  return fs.readdirSync(basedir, { withFileTypes: true })
    .filter(dirent => dirent.isDirectory())
    .map(dirent => {
      const moduleId = path.join(basedir, dirent.name, 'index.js')
      // this will throw when moduleId cannot be resolved
      return [dirent.name, require(moduleId)]
    })
}
```
@shirohana I think what you want won't be helped by what is requested in this issue. In any case, you could save errors during test lookup and then throw them in `beforeAll()`. That will integrate well with Jest.
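A minimal sketch of that suggestion, reusing `resolveModuleEntries()` from the previous comment (the guard around `describe.each` and the sentinel test are my additions, since `.each` rejects an empty table and a suite must contain at least one test):

```js
// Resolve at module load time so describe.each has a table to iterate;
// stash any lookup error and rethrow it inside beforeAll so Jest
// reports it as an ordinary failure.
let moduleEntries = []
let lookupError = null
try {
  moduleEntries = resolveModuleEntries()
} catch (err) {
  lookupError = err
}

beforeAll(() => {
  if (lookupError) throw lookupError
})

// Keeps the suite non-empty even when the lookup failed.
it('resolves the store modules', () => {
  expect(lookupError).toBeNull()
})

if (moduleEntries.length > 0) {
  describe.each(moduleEntries)('~store/%s/index.js', (name, module) => {
    it('should expose { reducer }', () => {
      expect(typeof module.reducer).toBe('function')
    })
  })
}
```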
@wmertens Thank you for your advice. I've tried initializing variables (the argument for …), also … These hooks are not designed for assertions and controlling test flow through exceptions, I guess.
Is this feature still under consideration?
Another scenario that wants tests to be able to depend on one another: we are (maybe wrongfully so) using Jest for some end-to-end integration tests, where we chain multiple steps. For example: …

Currently we model this as 1 Jest test. I think it would be nicer for test failure reporting to model this as 5 tests. I know you can also create a test sequencer, but that's poorly documented, and I don't like having an out-of-band file with the sequence; I'd like to define it in the same place as my tests, e.g. by being able to specify the dependency between tests (…).
@ottokruse we've been using our `jest.result()` prototype at #10077 for a couple of years now and it works fine.
What about making tests inside other tests? If the upper test fails, the tests inside will not run... or register as tests:

```js
// Old implementation
function foo() {
  return 1;
}
// New implementation
function foo() {
  return "string";
}

describe('foo()', () => {
  it('should be a number', () => {
    expect(typeof foo()).toBe('number');
    it('should be a positive number', () => {
      expect(foo()).toBeGreaterThan(0);
    });
    it('should be a finite number', () => {
      expect(foo()).toBeLessThan(Infinity);
      expect(foo()).toBeGreaterThan(-Infinity);
    });
    it('should not be a fraction', () => {
      expect(Number.isInteger(foo())).toBe(true);
    });
  });
});
```

or:

```js
describe('foo()', () => {
  it('should be a number', () => {
    expect(typeof foo()).toBe('number');
  }).then(() => {
    it('should be a positive number', () => {
      expect(foo()).toBeGreaterThan(0);
    });
    it('should be a finite number', () => {
      expect(foo()).toBeLessThan(Infinity);
      expect(foo()).toBeGreaterThan(-Infinity);
    });
    it('should not be a fraction', () => {
      expect(Number.isInteger(foo())).toBe(true);
    });
  });
});
```

In more elegant syntax:

```js
describe('foo()', async () => {
  await it('should be a number', () => {
    expect(typeof foo()).toBe('number');
  });
  it('should be a positive number', () => {
    expect(foo()).toBeGreaterThan(0);
  });
  it('should be a finite number', () => {
    expect(foo()).toBeLessThan(Infinity);
    expect(foo()).toBeGreaterThan(-Infinity);
  });
  it('should not be a fraction', () => {
    expect(Number.isInteger(foo())).toBe(true);
  });
});
```
@perymimon it's nice, but then the inner tests won't be known until the outer test runs, and then running only certain tests won't work. That could be resolved by looking at the AST, though 🤔 that would be cool...
@wmertens With the …
This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 30 days.
This issue was closed because it has been stalled for 30 days with no activity. Please open a new issue if the issue is still relevant, linking to this one. |
It should be possible to mark a test as testing the overall behaviour that other tests test more specific behaviour of. I.e., we should be able to let Jest know that if a certain test fails, a bunch of other tests are expected to fail and aren't really relevant.

We could do this either with a way to mark a test in a suite as important, or some way of marking a bunch of tests as dependent on another test.
Imagine this abstract scenario where we have a `foo()` that returns `1`, and then it's changed to return `"string"`.
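The suite for that scenario might look like this (a reconstruction; the four tests below are the ones quoted in the comments above):

```js
// Old implementation
function foo() {
  return 1;
}
// New, broken implementation: every test below now fails, but only the
// first failure is actually informative.
// function foo() {
//   return "string";
// }

describe('foo()', () => {
  test('should be a number', () => {
    expect(typeof foo()).toBe('number');
  });
  test('should be a positive number', () => {
    expect(foo()).toBeGreaterThan(0);
  });
  test('should be a finite number', () => {
    expect(foo()).toBeLessThan(Infinity);
    expect(foo()).toBeGreaterThan(-Infinity);
  });
  test('should not be a fraction', () => {
    expect(Number.isInteger(foo())).toBe(true);
  });
});
```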
Every one of these tests is going to fail because the return of `foo()` is a string instead of a number. But `should be a number` is the only test that matters here; the moment `should be a number` fails, every other test is irrelevant. E.g. `should be a finite number` telling you that a string is not an infinite number is irrelevant to solving the fact that a string is not a number, which is already tested.

However, when Jest runs a test suite like this, it will output all 4 failures to the console with a long diff of details for each and every one of them. But the only one that actually tells us what we want is the first test. In fact, in a complex application where the more detailed failures give us overly specific errors, unless we scroll all the way back up, the failures at the end will just make it difficult to figure out what has failed.
If we are able to mark a test in some way as a parent to, dependent of, or more important than other tests, then Jest can try to improve the test results in this case. We could ignore the other tests entirely, or we could output the fact that the test failed but not output the detailed diffs for any tests we are expecting to fail, so the developer can go straight to figuring out why the parent test failed.
As a practical example, I am writing tests for a Redux store that represents a form template. Instead of hardcoding the structure of this template in my test suite, I use a snapshot test to make sure that the `newTemplate()` action is behaving correctly; then I use the `newTemplate()` action to generate an initial template. Then I test the other actions (the ones that just modify the template) by using `newTemplate()` to create the empty template that the modification actions modify.

This keeps my tests sane. However, if `newTemplate()` is broken, I know that every other test is going to fail. And their failures are all irrelevant, since the real failure is in `newTemplate()`, and that is what I have to fix rather than the individual modification actions.
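A hypothetical sketch of that pattern (the `reducer` and `setTitle` names and the module path are illustrative; only `newTemplate()` comes from the description above):

```js
// Hypothetical imports; the real module layout isn't shown in the issue.
import { reducer, newTemplate, setTitle } from './templateStore';

// Snapshot-test the template structure once...
test('newTemplate() creates the expected structure', () => {
  expect(reducer(undefined, newTemplate())).toMatchSnapshot();
});

// ...then reuse newTemplate() to seed every modification test. If
// newTemplate() breaks, all of these fail too, which is the problem
// this issue is about.
test('setTitle() updates the template title', () => {
  const initial = reducer(undefined, newTemplate());
  const state = reducer(initial, setTitle('My form'));
  expect(state.title).toBe('My form');
});
```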