expect_known_results #782

Closed
hadley opened this issue Jul 29, 2018 · 10 comments · Fixed by #906
Labels: expectation 🙀, feature (a feature request or enhancement), wip (work in progress)

Comments

hadley commented Jul 29, 2018

I've noticed that I've started writing output regression tests like this:

  expect_known_output(
    file = test_path("test-type-df.txt"),
    {
      cat("mtcars:\n")
      print(vec_type(mtcars))
      cat("\n")
      cat("iris:\n")
      print(vec_type(iris))
    }
  )

This is starting to feel like a poor man's knitr, which made me realise we need an expect_known_something() that uses evaluate to show both the input code and the output results.
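
Roughly something like this, a minimal sketch only (the implementation and the helper name are placeholders, not a decided API): evaluate records each expression together with what it prints, replay() renders both, and the rendered text is compared against a reference file.

  # Sketch only, not the final API: use evaluate to record both the input
  # code and its printed results, then compare against a saved reference.
  expect_known_results <- function(code, path) {
    results <- evaluate::evaluate(code)
    new <- paste(utils::capture.output(evaluate::replay(results)), collapse = "\n")

    if (!file.exists(path)) {
      writeLines(new, path)  # first run: record the reference output
    }
    old <- paste(readLines(path), collapse = "\n")
    testthat::expect_identical(new, old)
  }

  # expect_known_results(
  #   "vec_type(mtcars)\nvec_type(iris)",
  #   test_path("test-type-df.txt")
  # )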

This would subsume #681 (assuming we added crayon options) and #768.

hadley commented Jul 29, 2018

I think this leads to a natural split when testing errors: you have many tests where you use the class argument to verify that the correct error is thrown, and then you have a few tests with expect_known_result() where you check that the error generates a message that's useful to humans.
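
For instance (a sketch; the vctrs calls and the class string are illustrative), most tests would only assert the condition class, and a few would record the rendered message:

  # Many tests only check that the right class of error is signalled ...
  expect_error(vec_c(1, "a"), class = "vctrs_error_incompatible_type")

  # ... while a few record the human-facing message for review in a diff.
  expect_known_output(
    file = test_path("test-error-messages.txt"),
    tryCatch(vec_c(1, "a"), error = function(e) cat(conditionMessage(e), "\n"))
  )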

cc @jennybc

hadley commented Oct 8, 2018

Maybe we should have a top-level equivalent to test_that() that ran its contents and saved to disk.

Then the test above would become:

regress_that("df.txt", {
  vec_type(mtcars)
  vec_type(iris)
})

This takes us full circle back to the regression tests that base R uses extensively.

gaborcsardi commented

It is nice to have a single test_that(), but I agree that the benefit of having multiple expectations within a single knitr run outweighs that.

As an alternative, maybe expect_known_output() could create multiple expectations? I think that's possible: just call expect() multiple times. Not sure if it is better; multiple expectations within a single expect_*() call seem confusing. But for the end user it might not matter much.
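
A rough sketch of what that could look like (the function name and inputs here are made up): a single expectation function can report once per recorded chunk by calling expect() in a loop.

  # Sketch only: one expect_*() call producing one expectation per chunk,
  # assuming `new` and `old` are character vectors of per-expression output.
  expect_known_chunks <- function(new, old) {
    for (i in seq_along(new)) {
      testthat::expect(
        identical(new[[i]], old[[i]]),
        sprintf("Chunk %d does not match the reference output.", i)
      )
    }
    invisible(new)
  }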

gaborcsardi commented

TBH I am not 100% sure that we should base everything on printing. For example, even in the simple case of a tibble, only the first n rows will be printed, but we want to detect differences in all rows.

Maybe what we need is some special printing? Or a way to compare the full object to the reference, and then have a way to explore the differences if a test fails?
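
A small illustration of the concern (assuming tibble's default of printing 10 rows): two tibbles that differ only in a later row print identically, so comparing printouts cannot tell them apart.

  # The printed output is identical even though the data differ in row 50,
  # because only the first 10 rows appear in the default print.
  library(tibble)
  x <- tibble(a = 1:100)
  y <- x
  y$a[[50]] <- -1L
  identical(capture.output(print(x)), capture.output(print(y)))  # TRUE
  identical(x, y)                                                # FALSE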

hadley commented Oct 8, 2018

I think printing gets you 90% of the way there, and you can always customise the printing.

The disadvantage of basically any other approach is that you lose nice diffs.
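
For example (a sketch; df stands in for whatever tibble is under test), the recorded block can print the full object so the diff covers every row:

  # Print the whole tibble inside the recorded block so the diff is complete.
  expect_known_output(
    file = test_path("test-full-df.txt"),
    print(df, n = Inf, width = Inf)
  )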

gaborcsardi commented Oct 8, 2018

Yeah, but you only need the diffs if a test case fails. So I think I would:

  1. save and compare the full objects
  2. if a test fails, print the objects (by default) to produce the diffs

To make it more flexible, we could have a test_diff() or print_diff() generic, to be able to customize what is printed and what is diffed.
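
A hedged sketch of that idea (print_diff() is the generic suggested above; compare_known() and the default method are purely illustrative):

  # Compare the full serialized object, and only render a diff
  # (via a customisable generic) when the comparison fails.
  compare_known <- function(object, path) {
    old <- readRDS(path)
    if (identical(object, old)) {
      return(invisible(TRUE))
    }
    print_diff(object, old)
    invisible(FALSE)
  }

  print_diff <- function(new, old, ...) UseMethod("print_diff")
  print_diff.default <- function(new, old, ...) {
    cat(format(all.equal(old, new)), sep = "\n")  # fallback: all.equal()'s report
  }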

krlmlr commented Oct 8, 2018

There's diffobj, which specializes in object diffs for testing.
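
For example (a minimal illustration with arbitrary inputs):

  # diffPrint() shows the difference between the printed forms of two objects;
  # diffObj() picks between print- and str-style output automatically.
  library(diffobj)
  diffPrint(target = mtcars, current = head(mtcars, 20))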

hadley commented Oct 8, 2018

That behaviour is still just a special case of running arbitrary code in an Rmd 😄

gaborcsardi commented

Yeah, I am not arguing against the Rmds. Only against comparing the printouts instead of comparing the objects.

hadley commented Mar 28, 2019

Needs an option to override crayon behaviour.
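
One possibility (an assumption, not a settled design) is to disable crayon's ANSI styling for the duration of the recorded block, e.g. via withr:

  # Make the recorded output deterministic by turning off ANSI colouring
  # while the expectation runs (crayon consults the crayon.enabled option).
  withr::with_options(
    list(crayon.enabled = FALSE),
    expect_known_output(
      file = test_path("test-type-df.txt"),
      print(vec_type(mtcars))
    )
  )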

hadley added the feature (a feature request or enhancement), reporter 📝, and expectation 🙀 labels and removed the reporter 📝 label on Mar 28, 2019
hadley added a commit that referenced this issue on Jul 18, 2019
hadley added the wip (work in progress) label on Jul 18, 2019
hadley added a commit that referenced this issue on Jul 19, 2019