Allow to skip() test inside the function #591
skip() inside of a test case can't automagically jump out of it; just do |
Thanks, but just as an idea: it('should skip if not fit', function(done, skip){, or maybe do it with the done function by supplying some parameters, e.g. done({skip: true}). By the way, the done('some message') output is huge and looks like an error; I wonder why? |
? there's no reason for skip functionality, just |
This fails to highlight to the tester when a test has actually been skipped; calling |
yeah that's true, it would be simple to add but it does seem like a strange feature to me at least |
It would be really useful when testing optional dependencies, for example an abstraction layer supporting several adapters for different environments. Currently we work around this in the following way:

```js
(condition ? describe : describe.skip)("Optional foo tests", function() {
  // ...
});
```

but this loses some of the natural readability of the tests. |
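The ternary workaround above can be traced outside a test runner; here `describe` and `describe.skip` are stand-in stubs (not mocha's real implementations), purely to show which branch gets chosen:

```javascript
// Stand-in stubs for describe / describe.skip, only to trace which branch runs.
const trace = [];
function describe(title, fn) { trace.push('run: ' + title); fn(); }
describe.skip = function (title, fn) { trace.push('skip: ' + title); };

// The workaround from the comment: choose the variant before declaring the suite.
const condition = false; // e.g. the optional adapter is unavailable
(condition ? describe : describe.skip)('Optional foo tests', function () {
  // adapter tests would go here
});
```

With `condition` false, the `describe.skip` variant is selected and the suite body is never executed.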
👍 for this feature, for exactly the same reason. I'd like to have some complex code that decides if a test should be skipped, and have the test output show a skip, not a pass. I'd be OK with calling Regarding skip() magically getting out of the test case -- how about throwing an exception just like when an assertion fails? |
I'd like this too. We have tests that are only sensible if you have a local identd running. I'd like to be able to skip the tests when we detect that identd is not running. |
+1 |
1 similar comment
+1 |
Maybe something along the line:
where |
if you want environment or otherwise specific tests, you create those suits in different files / folders, so you can run them separately. |
That may be OK for running the tests manually at a console, but not great if you want an entire test suite to be run with a single entry point (e.g. I added a pull request to the similar issue referenced above, and @visionmedia did make some initial comments on it. We've been using it for a while in a forked repo, and it has been quite helpful. See also my comment on #332 for example usage. |
@hallas I have a suite of tests that run in browsers. I'd like to skip tests when some features are missing (for example, XHR level 2, or IndexedDB). My tests are grouped by the features they cover in my library, and I'd really like to have the feature detection (and therefore skipping decision) right next to the test code. |
Alright I hear you, let's see what happens |
It sounds like you should have different directories for different types of tests. For example, I use GruntJS and I have unit tests and integration tests; they are in separate directories. I can run these all together or independent of each other. |
@shanestillwell see the comments between @hallas and @pwnall above. The problem being addressed here can't be solved simply by grouping of tests in folders; it is more concerned with cases where certain tests cannot be run in certain environments. Imagine that your test suite runs in-browser, and one of the integration tests requires local storage. One option is to simply not run the test if |
Easy, already implemented: |
Not really. The goal is to skip after the initial pass (e.g. async). |
+1 |
Here's a rather elegant way to do it that I made for ship:

```coffee
describe 's3', ->
  maybeIt = (
    if fs.existsSync('test/ship.s3.opts')
      it
    else
      it.skip
  )

  maybeIt 'should deploy via CLI', (done) ->
    cmd = exec 'bin/ship test/ship.s3.opts'
    [...]
```
|
@slang800 I mentioned that workaround a while back; it's better than some of the other options, but it's not great when you have a number of different conditions. |
Oh, whoops - you're right! That's the exact same thing... prolly should've read through the full thread before posting. |
Thank you for
We should probably add a |
Why can't you name your tests in such a way that the |
@boneskull have a look at the linked issue #332 to see why |
Because I want to run npm test regardless of which version of express I am
|
@starsquare Because users want one command to run the tests? Sorry if I'm being dense. There's no reason you can't have |
Perhaps, which is why I opened the following: #946 |
@starsquare Something's been bothering me about this, #946 and #332. I'm in the camp that says once your unit tests start depending on their environment, you no longer have unit tests, and instead have integration tests. Of course, that's fine, but is it Mocha's intent to be used for both unit and integration tests? Sure, people do it, but was it written with this in mind? The word "unit" is conspicuously absent from the homepage, but I don't want to presume anything. If it wasn't the intent, is the intent to move in this direction? If so, then 👍, let's make it more flexible. If not, then 👎, and we should do our part to promote better unit testing practices, which would mean leaving the unfriendly API as-is and maybe writing a tutorial about precisely why it's difficult to do. Hopefully @travisjeffery can shed a bit of light on this. |
Well, regardless, people seem to use the hell out of this for integration tests. I'm behind merging #946. |
Thx for:

```js
(version > 3 ? it : it.skip)("should works with express.Router", function(done) {
```
|
This can help. Not beautiful, but it works:

```coffee
self = null
index = 0
nextTestShouldBeSkipped = true

describe 'something: ', ->
  self = this

  it 'test1', ->
    index++
    if(nextTestShouldBeSkipped)
      self.tests[index].pending = true
      # You can modify the test title if needed
      self.tests[index].title += " (skipped)."

  it 'test2', (done)->
    done()
```
|
About the build, I'm not in charge of that. I guess both @boneskull and @travisjeffery are quite busy lately. |
+1 |
+1 I think I can provide a good use case for this. Let's say that you are running an integration test -- hitting a real server. You want the test to actually log in, but it would be a bad idea to put user/password credentials in the repository, and for CI purposes you don't want them to appear on the command line. So, you'd add them to an environment variable. In this case, you'd want the test to conditionally skip if the environment variable was not present. That use case could be covered by @slang800's comment, although it would be simpler as a check within the test itself. However, the other reason you'd conditionally skip would be if your server couldn't be reached. If that happens, you really can't say whether the code you're testing is working correctly or not; the only logical thing to do is abort the test and mark it as skipped. |
This is available as of v2.2.0 (back in March) |
Aah, OK. Your previous comment on this suggested that it hadn't yet made it into a tagged release. FYI, I got here from a google search, and none of the other docs that turned up in my searching made any mention of |
It looks like it was merged into a tag the day after my comment :-) The docs don't look to mention it, so maybe worth filing an issue on mochajs/mochajs.github.io. |
So, I'm using v2.3.4 and got this when I tried the following:

```js
it('authenticates services', (done) => {
  if (undefined === process.env.USER || undefined === process.env.PASSWORD) {
    console.log("Set environment variables for USER and PASSWORD to run network tests");
    this.skip("foo");
  }
  // etc
});
```
|
Perhaps mochajs/mochajs.github.io#14 is relevant here regarding ES6? |
Very much so -- thanks! |
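For context on the ES6 point above: mocha invokes a test callback with the test's context bound as `this`, but arrow functions ignore the `this` they are called with, so `this.skip()` inside an arrow can't reach the context. A minimal stub sketch (not mocha itself):

```javascript
// Minimal stand-in for a runner that invokes a test callback with a context.
function runTest(fn) {
  const ctx = { skipped: false, skip() { this.skipped = true; } };
  fn.call(ctx);
  return ctx;
}

// A function expression receives the context as `this`, so skip() works.
const viaFunction = runTest(function () { this.skip(); });

// An arrow function keeps the enclosing `this`, so the context is unreachable;
// calling this.skip() here would throw instead of skipping.
const viaArrow = runTest(() => { /* this.skip() would fail here */ });
```

This is why the snippet above needs `function(done) { ... }` rather than `(done) => { ... }` for `this.skip()` to work.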
thanks for .. (version > 3 ? it : it.skip)("should works with express.Router", function(done) .... which skips one test (or not) based on a condition also works for .. (version > 3 ? describe : describe.skip)("should works with express.Router", function(done) .... if you want to skip an entire describe block |
ok, so I need this ;) However, in my case I get the error |
This is my try at writing a conditional test based on a value set in an async function callback:

```js
describe("add admin account if it doesn't exist", function() {
  it("isAdmin()", function(done) {
    isAdmin(db, (value) => {
      // true or false
      this.admin = value;
      done();
    });
  });

  it("create admin account or skip if it exists", function(done) {
    if (this.admin) {
      this.skip();
    }
    createAccount(db, {
      profileData
    }, {
      validate
    }, (profile) => {
      done();
    });
  });
});
```
|
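A hedged alternative sketch for the pattern above: keep the computed flag in a closure variable shared by both tests, rather than on `this`. The `isAdmin` here is a synchronous stub standing in for the commenter's async check, just to make the sketch self-contained:

```javascript
// Synchronous stub standing in for the async isAdmin(db, cb) above;
// the real check would hit the database.
function isAdmin(db, cb) { cb(true); }

// Keep the flag in a closure variable shared by both tests,
// instead of relying on mocha's shared `this` context.
let admin = null;

// "test 1": resolve the condition once, up front.
isAdmin(null, (value) => { admin = value; });

// "test 2": with mocha, the truthy branch would call this.skip().
const outcome = admin ? 'skipped' : 'created account';
```

The closure variable sidesteps any question about what `this` refers to inside nested callbacks.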
Use this:

```js
describe.if = function (condition, message, func) {
  const run = condition ? this : this.skip
  run(message, func)
}
```

Example of use:

```js
var condition = false
describe.if(condition, 'Some test', function () {
  // test
})
```
|
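To sanity-check that helper outside a runner, one can stub `describe` itself; the sketch assumes that calling `describe.if(...)` binds `this` inside the helper to `describe`:

```javascript
// Stubbed describe / describe.skip to exercise the helper outside mocha
// (assumption: calling describe.if(...) binds `this` to describe itself).
const calls = [];
function describe(message, func) { calls.push(['run', message]); }
describe.skip = function (message, func) { calls.push(['skip', message]); };

describe.if = function (condition, message, func) {
  const run = condition ? this : this.skip;
  run(message, func);
};

describe.if(true, 'runs when true', function () {});
describe.if(false, 'skipped when false', function () {});
```

A true condition dispatches to `describe`, a false one to `describe.skip`, which is exactly the ternary workaround folded into one call site.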
Here are TypeScript typings for @aMarCruz's approach in case anyone finds them useful.

```ts
/// <reference path="../node_modules/@types/mocha/index.d.ts" />
/* Note: this reference path may be in a different location for your project. */

declare module "mocha" {
  interface SuiteFunction {
    if: (condition: boolean, message: string, func: (this: Mocha.Suite) => void) => void;
  }
}

/* https://github.com/mochajs/mocha/issues/591#issuecomment-443841252 */
describe.if = function (condition: boolean, message: string, func: (this: Mocha.Suite) => void): void {
  const run: Mocha.PendingSuiteFunction = condition ? this : this.skip;
  run(message, func);
};
```
|
@shirakaba Thanks for this! I get the intent, but can't get it working. Sorry if this is a novice question, but where am I supposed to add these declarations? While typescript compiles fine when I use |
@PhABC Ah, yes – sorry for neglecting that. You should have an entry file that always gets run first (e.g. Note: The "reference path" line needs to strictly be at the top of the file (no comments before it; it can only be preceded by other reference paths), otherwise it'll be completely ignored. |
Did you solve this issue (#591 (comment))? |
In my tests, sometimes a test needs to run only if certain conditions are met, for example if the previous test passed (and was not skipped, as with consecutive load and save tests).
It would be great if, inside the test function, I could do something like this:
if (!this.fit) this.skip() // and mocha would skip the test.
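For readers landing here: per the maintainers' comments above, `this.skip()` shipped in v2.2.0. The mechanism can be sketched as skip-by-throwing: `skip()` throws a marker error that the runner catches and records as pending. A rough sketch of that idea (not mocha's actual source; names here are illustrative):

```javascript
// Rough sketch of the skip-by-throwing mechanism (not mocha's actual source):
// skip() throws a marker error that the runner catches and records as pending.
class Pending extends Error {}

function runTest(fn) {
  const ctx = { skip() { throw new Pending('sync skip'); } };
  try {
    fn.call(ctx);
    return 'passed';
  } catch (err) {
    return err instanceof Pending ? 'pending' : 'failed';
  }
}

const result = runTest(function () {
  const conditionsMet = false;
  if (!conditionsMet) this.skip(); // jumps out of the test body immediately
  // code after skip() never runs
});
```

Throwing is what lets `skip()` "automagically jump out" of the test body, which was the sticking point at the start of this thread.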