How do you choose which version to test against? #29
For C# and F#, I've chosen to run the same language version as the one that is specified by default in the exercises. Both of these languages have a project file that is downloaded along with each test suite file, and that project file specifies an exact version of the runtime it targets. The test runner subsequently tests using that version. |
For Rust, we actually use two different versions, both brought in at the point of building the runner dockerfile. The runner itself is built using the most recent stable. This should update every six weeks or so. The student's solution is built with the most recent nightly build. Some students will submit solutions requiring nightly Rust. I'd be OK with the runner failing in those cases, though; nightly is only required for new or advanced features which probably want a mentor's attention anyway. The real reason I use nightly at all is that emitting the test output as JSON, last time I checked (August), counted as a "new or advanced feature". It was worth using the nightly compiler to get JSON reports instead of attempting to parse text output intended for human consumption. |
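For context, nightly `cargo test` can emit one JSON object per test event via `-- -Z unstable-options --format json`, and consuming that stream is trivial compared to scraping the human-oriented text output. A small sketch of parsing one such line (the event schema is unstable nightly output, so treat it as illustrative):

```python
import json

# One event line as emitted by nightly libtest's unstable JSON format.
line = '{ "type": "test", "name": "tests::two_times_two", "event": "ok" }'

event = json.loads(line)
passed = event["type"] == "test" and event["event"] == "ok"
```

Each test produces its own line, so a runner can stream results without buffering the whole output.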
The current plan for Erlang is to run against the version that was current ~3 months ago. This roughly matches my speed at updating exercise CI and documentation for the recommended version. Though, for Erlang, I do not expect things to break because the compiler is too new. |
In Scala the |
I think this is something each test runner and/or analysis tool should be able to handle based on the solution. The strategy per language might need to vary. For example:

Python

Rust
I think it might be possible to get warnings about use of unstable features from Clippy, so if that is run by the analysis tool first, that might help.

Elixir
(similar to how Erik says he's doing it for C# and F#)

Implementation
This will require the Docker image having all the versions you might want to run the tests with, which is probably not appropriate. Maybe the analysis tool should run first, and one of the outputs for it should be the name of the docker image to use for the test run? This could then also return nothing if the analysis tool encounters a syntax error or something, in which case the test runner wouldn't need to run at all. |
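That flow could be sketched roughly like this (all names here are hypothetical, not an existing Exercism API): the analysis tool inspects the solution and returns either the Docker image tag to test with, or nothing when the test run should be skipped.

```python
def pick_test_image(track, required_version, supported_versions):
    """Return the image tag to run tests with, or None to skip the run
    (e.g. the solution has a syntax error or needs an unsupported version)."""
    if required_version not in supported_versions.get(track, []):
        return None
    return f"exercism/{track}-test-runner:{required_version}"

# Usage sketch with invented version tables:
supported = {"python": ["3.6", "3.7"], "rust": ["stable", "nightly"]}
image = pick_test_image("python", "3.7", supported)  # "exercism/python-test-runner:3.7"
skip = pick_test_image("python", "2.7", supported)   # None
```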
One can specify the version, though usually it will work with newer versions without problems, unless |
I'm not excited by the prospect of needing to download and setup a particular Rust version based on the student's tests. TBH I'm not thrilled to have needed to go to nightly at all. I'd much rather say "your program may work great on nightly, but for automatic test results, your code needs to work on stable". |
The machine that runs the test doesn't have internet, so yeah, this is definitely not going to happen.
Agree. |
So a general question here, is how we tell students what version their code is going to run against. Where do we put that information so that the website can display it? Maybe in the config.json of the tracks? Thoughts? |
Wherever we put that information, the Dockerfile needs to have access at build time. No point duplicating information, after all. The other option is to assume that the Dockerfile is the source of truth (as it is, de facto, anyway) and add a requirement that it include a file in a canonical location which reports the test runner's version. |
For the Julia track the plan is to use the latest stable x64 version. It uses semver versioning so it will be backwards compatible. The testsuites themselves are tested against all Julia versions with official support, so they're also backwards compatible. That way students can use the latest features if they like but submissions by students on older or LTS versions will also work. If they use release candidates or nightly builds, they should be used to things that don't work :D
Couldn't the website extract that information from the Dockerfile directly? |
I'm going to say no. I don't want to have to work out how to parse 50 Dockerfiles to get the relevant string, and I figure even if I did this would be very brittle. I think we'll need a file in a specific place, or an addition to an existing config file. |
Okay, so from what I'm gathering for the elixir track, I need to pose this question to the track maintainers and propose that they decide on a version so that the test runner and the track can move in lock step. @iHiD, I'm not sure what all the other languages would need to adequately specify their needs, but elixir should only need a version number, wherever that should be located. |
I think so, yes. And then we need to collectively decide where to store that version so that I can easily get it to show to a user, which I increasingly feel would be best as a string in config.json. |
The strongest argument against putting it in `config.json` is that it puts key information, the version, in two places: both the Dockerfile and the configuration. It will be much simpler to keep things in sync if the user-visible version information lives in a text file at a canonical path in the docker image, so that a single change updates things everywhere.
Right now, `config.json` and the `Dockerfile` for the test generators aren't even in the same repository. I've done my share of parallel PRs to keep distinct repos synced, and I'd prefer to avoid that in the future if possible!
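A minimal sketch of that convention (the path and base image are hypothetical; the point is that the Dockerfile, as the de facto source of truth, writes the version it actually installs to a well-known location in the image):

```dockerfile
FROM rust:1.39

# Record the toolchain version at a canonical path, so anything
# inspecting the image can report it without parsing the Dockerfile.
RUN mkdir -p /opt/test-runner \
    && rustc --version > /opt/test-runner/version.txt
```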
|
I agree with your concerns about maintaining it in two places.
The website doesn't interact with Docker images though, so we can't keep it in there. I don't mind having a file at the base of each directory if we want to do that, but I think we're then still maintaining the same thing in two places. It's important that the Ruby track is tested on Travis against the same version of Ruby as the test-runner, and the analyzer and representer. All these things should be on the same version. How about we have a github action that automatically applies any changes to a `lang-version` key in the track's `config.json` to a `lang-version` file in the analyzer, representer and test-runner repos, which the Dockerfiles can read from when they build? That keeps everything in sync and removes any manual burden on maintainers. |
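A rough sketch of such a workflow (the filename, key name, and repos are illustrative only, and the push-to-sibling-repos step needs credentials not shown here):

```yaml
# .github/workflows/sync-lang-version.yml (hypothetical)
on:
  push:
    paths:
      - config.json
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Extract the lang-version key
        run: jq -r '.["lang-version"]' config.json > lang-version
      # A further step would commit the lang-version file to the
      # analyzer, representer and test-runner repos (omitted: it
      # needs a token with write access to those repos).
```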
I haven't played with github actions at all. If they can be set up to forward files between repos on changes like that, then yes, that's a good solution.
|
Not to throw gas on the fire, but for the JVM languages a single version does not really tell the whole story. For the JVM languages - Scala, Clojure and Kotlin - there is the JVM version being used. And, there is the language version. Let's take Scala as an example - we could be running the tests using JDK 1.8 and Scala language version 2.12.8. Currently the students know what to use by looking at the INSTALLATION.MD for the Scala track. And, each exercise specifies the Scala language version in the exercise build.sbt file. To make matters worse - each exercise could potentially use a different Scala version. Though they are all synced up now, and we would want to keep the Scala versions in sync, it is a non-trivial task when upgrading. I am just not sure a single "version" is sufficient to identify the test system environment. |
Couldn't Scala just use a version string like "JDK 1.8 / Scala 2.12.8"? That's pretty palatable for humans, but it's also easy to unpack into the individual versions necessary to construct an unambiguous Docker image. Then, it's just one more value to sync with the exercises.
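Unpacking such a string really is a one-liner in most languages. A sketch, assuming the "Name Version / Name Version" convention proposed above:

```python
def unpack_versions(s):
    """Split 'JDK 1.8 / Scala 2.12.8' into {'JDK': '1.8', 'Scala': '2.12.8'}."""
    pairs = (part.strip().rsplit(" ", 1) for part in s.split("/"))
    return {name: version for name, version in pairs}

versions = unpack_versions("JDK 1.8 / Scala 2.12.8")
```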
|
That would be fine as long as the value is just some human-readable string. It is not clear to me how the value is expected to be used, and whether it is expected to be just a version number.
|
My impression is that it's a human-readable string, and it's the responsibility of the maintainer of the test runner / analyzer / whatever to parse that string within the Docker build and do the right things with it. |
Even if it isn't, it's easy to do in a structured way for @iHiD to capture for the purposes of the website: {
"language": "elixir",
"version": "1.9.1",
"copy": "Elixir 1.9.1, running on OTP 22",
"env_versions": [
{
"name": "OTP",
"version": "22"
}
]
} |
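Given a structure like that, the website side could derive the display string without any per-track parsing. A minimal sketch; the field names simply follow the example above:

```python
def display_version(cfg):
    """Prefer the human-readable 'copy'; otherwise compose one
    from the machine-readable fields."""
    if cfg.get("copy"):
        return cfg["copy"]
    envs = ", ".join(f'{e["name"]} {e["version"]}' for e in cfg.get("env_versions", []))
    base = f'{cfg["language"].capitalize()} {cfg["version"]}'
    return f"{base} ({envs})" if envs else base

cfg = {
    "language": "elixir",
    "version": "1.9.1",
    "copy": "Elixir 1.9.1, running on OTP 22",
    "env_versions": [{"name": "OTP", "version": "22"}],
}
display_version(cfg)  # "Elixir 1.9.1, running on OTP 22"
```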
With my website-maintainer hat on I care about a human-readable string we can display to students in the website. Track/tooling-maintainers will care about being able to reference that to avoid the syncing/duplication issues that @coriolinus has pointed out. So I'm totally happy with track-maintainers parsing the human string, or having both a human, and a machine-readable version, encoded into the config.json. |
How are you all choosing which language version to run the solution against?
In the elixir track, the repo targets versions 1.6-1.9 with TravisCI, but running the solution against all four major versions in that range is likely not the intent of the project.
So how do you choose? Newest? Stable?
As I also said, this may be a bigger issue for the elixir track to consider: rather than targeting four versions, maybe it should target just one, which would reduce the complexity of testing.
I am also posting @iHiD's response as requested and I think it's useful in having this discussion.
I think choose a version and stick to it. But as we've not deployed our first test-runner yet, we've not locked that in. I think somewhere we need to expose this to the student. Could you open an issue at https://github.com/exercism/automated-tests asking this question pls, and linking to this comment?
Originally posted by @iHiD in exercism/elixir-test-runner#3 (comment)