Replace [TestCase] tests with individual tests #194
Comments
In terms of the argument that we're missing the richness of the canonical data, I think we could work around that by adding comments alongside the test cases. From a usability standpoint, though, I'm all for making the tests easier to run if they're difficult to run today. In my setup I'm using Visual Studio, which can very easily run all of the cases for a single test, or run them one by one, but I know different tools don't support the same functionality. Here is the list of exercises using `[TestCase]`:
Great work gathering this data @robkeim!
I can start working on these. I will always leave a comment here before starting to convert the tests for an exercise - this way you'll know which ones are still up for grabs :)
Sounds great, thanks @balazsbotond.
@balazsbotond Would you mind waiting until I've merged #193? Otherwise we'll have lots of merge conflicts :)
@ErikSchierboom OK, then I'll start with the F# track :)
Great!
@balazsbotond I've just merged #199, which means that this issue is now free to be worked on (if you're still interested).
@ErikSchierboom I am interested in helping with this, if you or anyone is up for a little mentoring along the way.
@morrme Great! We love to receive contributions, and as for the mentoring, we'd be happy to help you get started. The first thing to note is that we are in the process of adding test data generators (#195), which are basically code to automatically generate an exercise's test file. The generated code is based on the exercise's canonical data, which can be found in the x-common repository. I've looked at the remaining exercises for which to replace the `[TestCase]` tests with individual tests, and I think your best bet would be to start with one of those. The generators can be found in this repository. If you have any specific questions, please post them in our Gitter room. Good luck and enjoy!
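To make the generator idea concrete, the sketch below shows one way such a program could work: read an exercise's canonical data (JSON) and emit one named test method per case. This is purely illustrative and is not the actual generator code in this repository; the `GeneratorSketch` class, the flat `cases` array it assumes, the method-name derivation, and the `Isogram.IsIsogram` call it emits are all assumptions.

```csharp
using System;
using System.Linq;
using System.Text;
using System.Text.Json;

// Hypothetical sketch of a test data generator: turn each canonical-data
// case into an individual [Test] method instead of a [TestCase] row.
public static class GeneratorSketch
{
    public static string Generate(string canonicalJson)
    {
        using var doc = JsonDocument.Parse(canonicalJson);
        var sb = new StringBuilder();
        foreach (var c in doc.RootElement.GetProperty("cases").EnumerateArray())
        {
            // Use the case description as the test name, so failures are self-explaining.
            var name = ToMethodName(c.GetProperty("description").GetString() ?? "unnamed");
            var input = c.GetProperty("input").GetRawText();
            var expected = c.GetProperty("expected").GetRawText();
            sb.AppendLine("[Test]");
            sb.AppendLine($"public void {name}()");
            sb.AppendLine("{");
            sb.AppendLine($"    Assert.That(Isogram.IsIsogram({input}), Is.EqualTo({expected}));");
            sb.AppendLine("}");
            sb.AppendLine();
        }
        return sb.ToString();
    }

    // "word is an isogram" -> "WordIsAnIsogram"
    private static string ToMethodName(string description) =>
        string.Concat(description
            .Split(' ', StringSplitOptions.RemoveEmptyEntries)
            .Select(w => char.ToUpperInvariant(w[0]) + w.Substring(1)));
}
```

The key design point is that each emitted method carries the canonical description in its name, which is exactly the richness that `[TestCase]` rows lose.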
Hi @ErikSchierboom, do you still need help with this ticket?
Yeah, sure!
atbash-cipher, isogram, nth-prime, perfect-numbers, pig-latin are the ones which still need to be done, right?
@GalaDe Eh, not really. If you look at the list in the second comment, you'll find the remaining exercises listed. The ones you mentioned have all been checked.
Oops :) @ErikSchierboom OK :)
@GalaDe By the way, have you read this comment? In it, I explain that from now on, we want to use test data generators to generate the test files automatically. There are already several test data generators available. Links to most of the PRs can be found on this page.
@ErikSchierboom, yes, I did. Thank you!
@ErikSchierboom I'm closing this because I feel that it no longer makes sense given that we're moving to test generators (assuming the test generators are creating individual tests and not test cases). Please feel free to re-open the issue if you disagree.
Agreed! |
We have several places where we use `[TestCase]` to create a single test with multiple input values. While efficient (no code duplication), this is not the best user experience: it makes it harder to run a single test, and it doesn't really allow us to mark the various input values with a descriptive text. It also doesn't align with how the canonical data for exercises is defined, which only works on single test cases.

Consider the `isogram` exercise's tests: what's the difference between test case one and two? It's not immediately obvious. The canonical data of this exercise looks much nicer. I think we should convert the `[TestCase]` tests to individual tests.
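As an illustration of the conversion being proposed, here is a before/after sketch using NUnit. The test names, the sample words, the trivial `Isogram` implementation, and the `IsIsogram` signature are assumptions made so the example is self-contained; they are not the track's actual test file.

```csharp
using System.Linq;
using NUnit.Framework;

// Before: one test method, multiple anonymous inputs. Which case failed,
// and why it matters, is not visible from the test name.
public class IsogramTestsWithTestCases
{
    [TestCase("banana", ExpectedResult = false)]
    [TestCase("subdermatoglyphic", ExpectedResult = true)]
    public bool IsIsogram_returns_expected(string word) => Isogram.IsIsogram(word);
}

// After: one named test per canonical case, each runnable on its own,
// with the canonical description carried in the method name.
public class IsogramTestsWithIndividualTests
{
    [Test]
    public void Word_with_repeated_letters_is_not_an_isogram() =>
        Assert.That(Isogram.IsIsogram("banana"), Is.False);

    [Test]
    public void Long_word_with_no_repeated_letters_is_an_isogram() =>
        Assert.That(Isogram.IsIsogram("subdermatoglyphic"), Is.True);
}

// Minimal implementation so the sketch compiles; not the exercise's real solution.
public static class Isogram
{
    public static bool IsIsogram(string word) =>
        word.ToLowerInvariant()
            .Where(char.IsLetter)
            .GroupBy(c => c)
            .All(g => g.Count() == 1);
}
```

The individual-test form trades a little duplication for names that match the canonical data's descriptions, which is the point of this issue.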