
Feature request: TODO tests #234

Closed
ghost opened this issue Mar 4, 2015 · 6 comments

Comments

@ghost

ghost commented Mar 4, 2015

In Perl's TAP testing framework, the developer can mark certain tests as "TODO", which are useful when you know what the correct result should be, but the code doesn't produce it yet:

http://testanything.org/tap-specification.html

Could this be added to the various expect_* and is_* functions, perhaps as an extra todo argument, or maybe as a style=c('test','todo','skip') argument that allows different options (to prevent future arg proliferation)?
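For reference, this is what the TODO directive looks like in TAP output (test names here are illustrative). Per the linked spec, a harness treats a failing TODO test as an expected failure rather than counting it against the run, and flags a passing TODO test as unexpectedly succeeding:

```
1..2
ok 1 - parses input
not ok 2 - reverses a string # TODO not written yet
```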

ghost mentioned this issue Mar 4, 2015
@HarlanH

HarlanH commented Mar 10, 2015

The skip function is sorta the right idea, but I was hoping I'd be able to do something like:

skip(expect_equal(not_implemented_fn(), "cat"))

And have the result be a warning that displays the argument...
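Something along these lines could be built on top of testthat today. Note that todo() here is a hypothetical helper, not part of testthat; it assumes that a failed expectation signals a condition of class expectation_failure, which recent testthat versions do:

```r
library(testthat)

# Hypothetical helper: run an expectation, but downgrade any
# failure to a warning that carries the failure message.
todo <- function(expr) {
  tryCatch(
    expr,
    expectation_failure = function(e) {
      warning("TODO: ", conditionMessage(e), call. = FALSE)
    }
  )
}

# Usage (not_implemented_fn is the placeholder from the comment above):
# todo(expect_equal(not_implemented_fn(), "cat"))
```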

@krlmlr
Member

krlmlr commented Jan 3, 2016

I'd be fine with the summary reporter displaying the location of skipped tests (if no failures or errors occur). Or would a dedicated reporter be better?

@richierocks

Rather than having extra code in the test to say that the thing being tested isn't ready yet, it seems to make more sense (to me at least) for the function itself to use .NotYetImplemented from the base package. Reporters can then optionally choose to display something different for not-yet-implemented errors.

From a test-driven-development point of view this seems semantically more correct. The test should fail because the code isn't correct, but you still have the information there to report a sensible reason as to why it failed.
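A minimal sketch of that approach (not_implemented_fn is a placeholder name; .NotYetImplemented() is the real base R helper, which signals an ordinary error whose message says the calling function is not implemented yet):

```r
library(testthat)

# Stub from the discussion: until it is written, it signals a
# standard "not implemented yet" error via base R's .NotYetImplemented().
not_implemented_fn <- function() {
  .NotYetImplemented()
}

# In a test file this simply fails, with the error message explaining
# why, until the stub is replaced by a real implementation:
# test_that("not_implemented_fn returns 'cat'", {
#   expect_equal(not_implemented_fn(), "cat")
# })
```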

@kenahoo
Contributor

kenahoo commented Jan 30, 2017

I've looked in #343, but I don't see how this supports marking certain tests as TODO. Is there a writeup of it somewhere else?

@richierocks It's not always feasible, or even desirable, to handle it on the package side. For example, you may have found a bug and written a test for it that currently fails, but have no idea where the bug sits in the package. Specifying the TODO semantics in the tests is a good way to define the desired behavior even before you know how or where you're going to implement it.

@krlmlr
Member

krlmlr commented Jan 31, 2017

If you call skip("TODO: ...") in a test, it will remind you that the test (or the code that it tests) doesn't work yet. We have a bunch of these in dplyr.
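Concretely, that pattern looks like this inside a test file (the function and test names are illustrative). skip() stops the test immediately and the reporter records it as skipped, with the reason, so the expectations below it stay in place without failing the suite:

```r
library(testthat)

test_that("not_implemented_fn returns 'cat'", {
  skip("TODO: implement not_implemented_fn()")
  # Kept but not run until the skip() line above is removed:
  expect_equal(not_implemented_fn(), "cat")
})
```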

@kenahoo
Contributor

kenahoo commented Jan 31, 2017

Oh okay, thanks @krlmlr.

Labels: none yet
Projects: none yet
Development: no branches or pull requests
4 participants