This repository was archived by the owner on Nov 19, 2022. It is now read-only.

Document our philosophy / best practices about test suites #8

Closed
1 of 3 tasks
kytrinyx opened this issue Mar 31, 2017 · 9 comments

Comments

@kytrinyx
Member

kytrinyx commented Mar 31, 2017

I couldn't find a document that talks about how we think about the exercise test suites.
There's a little bit in here: https://github.com/exercism/docs/blob/master/contributing-to-language-tracks/improving-consistency-across-tracks.md, but I think it would be great to have a top-level philosophy document that other documentation could refer to.

@parkerl @jtigger @IanWhitney @petertseng @kotp @ErikSchierboom you all immediately come to mind as maintainers who care about this, and have a good understanding of this.

Would you suggest some bullet points of what should be included in such a doc?


TODO

@petertseng
Member

petertseng commented Mar 31, 2017

@kotp
Member

kotp commented Mar 31, 2017

This is an excellent set to start from, @petertseng.

@kytrinyx
Member Author

kytrinyx commented Apr 1, 2017

Also perhaps:

  • consider the user experience (what do the failure and error messages look like? are they communicative?)
  • consider the test cases: avoid a test that introduces multiple requirements at once, avoid tests that check the same thing twice, and order the tests so that the requirements increase gradually (a rough sketch follows below)
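For illustration only, a sketch of what that can look like, assuming a hypothetical leap-year exercise on a Python track (the `leap` module and `is_leap_year` function are made up for the sketch): each test in the file introduces exactly one new rule, and each assertion message tells the user what was expected and why.

    import unittest

    from leap import is_leap_year  # hypothetical solution module


    class LeapYearTest(unittest.TestCase):
        # First requirement only: years not divisible by 4 are common years.
        def test_year_not_divisible_by_4_is_common_year(self):
            self.assertIs(is_leap_year(2015), False,
                          "2015 is not divisible by 4, so it is a common year")

        # Each later test adds one new rule (divisible by 4, then 100, then 400).
        def test_year_divisible_by_4_not_by_100_is_leap_year(self):
            self.assertIs(is_leap_year(1996), True,
                          "1996 is divisible by 4 but not by 100, so it is a leap year")

        def test_year_divisible_by_100_not_by_400_is_common_year(self):
            self.assertIs(is_leap_year(2100), False,
                          "2100 is divisible by 100 but not by 400, so it is a common year")

        def test_year_divisible_by_400_is_leap_year(self):
            self.assertIs(is_leap_year(2000), True,
                          "2000 is divisible by 400, so it is a leap year")


    if __name__ == "__main__":
        unittest.main()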

@petertseng
Member

That's a great point about user experience. It reminded me of a point:

@kytrinyx
Member Author

kytrinyx commented Apr 1, 2017

Can we make it as easy as possible to run the tests?

Yeah, that's a good point. We should optimize for ease of running the tests, while still balancing against idiomatic use of the language's tools.

@ErikSchierboom
Member

The only thing I would add is:

  • Try to make the test code as simple as can be.

I think it is very useful if people can actually read the tests, which means that they should be as simple as possible.

Furthermore, perhaps "Follow x-common's canonical-data.json if it's available" should be specified in more detail. Something like:

  • Use the test description as specified in the canonical data
  • Use the test order as specified in the canonical data (a rough sketch follows below)
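As a rough sketch of what following the canonical data could mean in practice (the canonical-data.json structure shown here is approximate, and the exercise content is hypothetical): the description of each canonical case becomes the test name, and the canonical order is preserved.

    import json

    # Approximate excerpt of a canonical-data.json file (hypothetical content).
    canonical = json.loads("""
    {
      "exercise": "leap",
      "cases": [
        {"description": "year not divisible by 4 is a common year",
         "input": {"year": 2015}, "expected": false},
        {"description": "year divisible by 4 but not by 100 is a leap year",
         "input": {"year": 1996}, "expected": true}
      ]
    }
    """)

    # Keep both the wording and the position of each canonical case:
    # the description becomes the test name, and the order is preserved.
    for position, case in enumerate(canonical["cases"]):
        test_name = "test_" + case["description"].replace(" ", "_")
        print(position, test_name)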

@kytrinyx
Member Author

kytrinyx commented Apr 3, 2017

"Follow x-common's canonical-data.json if it's available" should be specified in more detail

I think we should mention that these are what we think are good defaults, but if you have a good reason (language idioms, etc.) then don't hesitate to deviate from them.

@parkerl
Contributor

parkerl commented Apr 3, 2017

Dropping this here for reference from https://github.com/exercism/request-new-language-track/edit/master/TRAVIS.

When implementing an exercise test suite, we want to provide a good user experience for the people writing a solution to the exercise. People should not be confused or overwhelmed.

In most Exercism language tracks, we simulate Test-Driven Development (TDD) by implementing the tests in order of increasing complexity. We try to ensure that each test either

- helps triangulate a solution to be more generic, or
- requires new functionality incrementally.

Many test frameworks will randomize the order of the tests when running them. This is an excellent practice, which helps ensure that subsequent tests are not dependent on side effects from earlier tests. However, in order to simulate TDD we want tests to run *in the order that they are defined*, and we want them to *fail fast*, that is to say, as soon as the test suite encounters a failure, we want the execution to stop. This ensures that the person implementing the solution sees only one error or failure message at a time, unless they make a change which causes prior tests to fail.

This is the same experience that they would get if they were implementing each new test themselves.

Most testing frameworks do not have the necessary configuration options to get this behavior directly, but they often do have a way of marking tests as _skipped_ or _pending_. The mechanism for this will vary from language to language and from test framework to test framework.

Whatever the mechanism—functions, methods, annotations, directives, commenting out tests, or some other approach—these are changes made directly to the test file. The person solving the exercise will need to edit the test file in order to "activate" each subsequent test.
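A minimal sketch of one such mechanism, staying with the hypothetical Python leap-year exercise above and unittest's skip decorator: every test after the first is marked as skipped, and the person solving the exercise deletes each marker to activate the next test.

    import unittest

    from leap import is_leap_year  # hypothetical solution module


    class LeapYearTest(unittest.TestCase):
        def test_year_not_divisible_by_4_is_common_year(self):
            self.assertIs(is_leap_year(2015), False)

        # Delete the decorator below to "activate" this test once the previous one passes.
        @unittest.skip("remove this skip marker to run the next test")
        def test_year_divisible_by_4_not_by_100_is_leap_year(self):
            self.assertIs(is_leap_year(1996), True)

        @unittest.skip("remove this skip marker to run the next test")
        def test_year_divisible_by_400_is_leap_year(self):
            self.assertIs(is_leap_year(2000), True)


    if __name__ == "__main__":
        unittest.main()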

Any tests that are marked as skipped will not be verified by the track test suite unless special care is taken.

Additionally, in some programming languages, the name of the file containing the solution is hard-coded in the test suite, and the example solution is not named in the way that we expect people to name their files.

We will need to temporarily (and programmatically) edit the exercise test suites to ensure that all of their tests are active. We may also need to rename the example solution file(s) in order for the exercise test suite to run against it.
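A rough sketch of that programmatic step, again assuming the hypothetical Python exercise and the @unittest.skip markers shown earlier (real tracks will use whatever mechanism their framework provides): rename the example solution to the file name the tests expect, and write out a copy of the test file with every skip marker stripped before running it.

    import re
    import shutil

    # Hypothetical CI helper for the sketch above.
    # 1. Rename the example solution to the file name the tests import.
    shutil.copy("example.py", "leap.py")

    # 2. Write a copy of the test file with every skip marker removed,
    #    so that all tests are active when run against the example solution.
    with open("leap_test.py") as src:
        lines = src.readlines()

    with open("leap_test_unskipped.py", "w") as dst:
        dst.writelines(line for line in lines
                       if not re.match(r"\s*@unittest\.skip", line))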

@jtigger

jtigger commented Apr 4, 2017

As part of this work, as discussed in exercism/generic-track#12, please add a link from https://github.com/exercism/request-new-language-track/blob/master/TRAVIS to the document you create here.

parkerl added a commit to parkerl/docs-1 that referenced this issue May 2, 2017