Regression versus failure #834
I don't really see the advantage to what you describe over using …
This is what I planned to do initially, but Hadley suggested using a special expectation. Perhaps it is about giving more structure to this class of expectation? Is …
I think there's a class of expectations that should be ignored on CRAN by default (i.e. regression tests). I don't have any strong feelings about how it should be implemented.
If these tests are mostly about outputs, it would be useful to have a structured expectation with before/after fields. We could then provide tools to review and validate regressions, similar to what we have in vdiffr.
Would it make sense to use the standard testing machinery, but name the test files differently? e.g. …
Interesting idea. Advantages:
Disadvantages:
About …
How about using a special block instead of a different file?

```r
see_that("this thing looks like this", {
  expect_true(...)
  expect_known_output(...)
  vdiffr::expect_doppelganger(...)
})
```
Oh yeah, I like that idea too. Maybe …
The motivation for this is that while we cannot always control how an exact output evolves over time, the code generating that output should be robust and always work.
This is about better support for tests that should not cause R CMD check failures because they are testing output that is not under the developer's control, at least not entirely. For instance, rlang has a lot of `expect_known_output()` tests for backtraces. These outputs sometimes change across R versions, e.g. because `eval()` produces a different backtrace. Another case is vdiffr tests, because the appearance of plots is sensitive to upstream code such as the computation of margins or spacing between elements.

Such tests should fail on platforms where the `CI` envvar is set (Travis, AppVeyor), or where the `NOT_CRAN` envvar is set (tests run locally). This allows the developer to monitor and assess regressions during development.

@hadley suggested calling these tests regression tests. They could be implemented with a new expectation class `expectation_regression`.