All tracks that have discussed whether to use a build system or not #117
Comments
I'm having some doubts about whether this topic is exploring build systems or test frameworks. If it's the former, the following two discussions are not applicable:
I think after some deliberation I will say... I did this survey in order to figure out how tracks answer the question of "How do we make it as easy as possible to run the tests?" So, I would consider discussions about a particular test framework to be in scope if they make a difference in this regard. For others who wish to answer related questions such as the level of exposure to language ecosystems offered by the various tracks, however, the choice of test framework can also be interesting.
My first reaction to this is that I would want to optimize each track for using tooling that is common/idiomatic for that programming language, rather than optimizing for consistency across the different Exercism tracks.
I think this is an excellent question. If we have a good discussion here about the philosophical aspects of it, and decide on a general approach or guidelines, I think we should include this topic in the set of issues that we generate when we create a new language track.
This is spot on. We should be aiming to use the most idiomatic solution throughout the track, from the tests to the expected implementation to the build system. This has several obvious examples.
I hope I won't be misunderstood (I'll edit my post to be clear).
No intent. Within a single track, though.
Another advantage I see, one that has been made clear by the C# and F# efforts: cross-platform compatibility. A disadvantage I see, or at least a point of discussion: it may prevent students from learning about the fundamental building blocks making up the test command they are running, what's going on beneath the covers. I'll edit both these points into the initial comment. Supporting details have been moved there as well so I don't have to maintain them in two places; y'all have them in the notification email too.
I agree that it would be ideal if people could first learn how to build a simple program from scratch, and then increase the build complexity until the student can grasp the more modern, and actually used, build systems. That teaching paradigm worked well for a long time, but as tooling developed more functionality and we started depending more on it, things changed a lot in some languages. Sometimes, the fundamental building blocks can only make sense after you have enough exposure to understand some concepts. Because of that, I would divide the users into two groups:
If I were to guess, I think we have a lot of people in the second group in Exercism... Because of that, I think we should make it as easy as possible to run the tests, maybe even when that is not the most idiomatic solution. To give an example, in the Haskell track we used stack, which is an easy-to-use and popular tool, together with hpack, which is not popular at all but makes things simpler for maintainers and users. This lowers the entry barrier a lot, so that students can get more experience before having to deal with cabal, and eventually face dependency problems that I still think are really hard to solve, even after an entire year of Haskell-only amateur programming.
I can only agree with @rbasso. For many languages, the building blocks are actually quite complex to understand. For example, when starting out with C#, you wouldn't want to know anything about MSBuild. The same goes for Scala and SBT, albeit to a lesser extent. Perhaps what we can do is include links to documentation if people want to know more about the fundamental building blocks?
This may depend on different learning styles. Including links seems good, as then each student could choose an individually preferred point to start learning about the lower-level tools. Actually, someone might argue that the job of Exercism is just to allow students to run the tests and that it's not our job to teach about the fundamental building blocks, but I figure it can't hurt to go above and beyond our job and include a few links...
There's a chance I fall into this group. There are a few language tracks where I had to make multiple attempts to follow their setup instructions before I found something that stuck. I really just wanted something that would work and did not want to have to dig too deep to figure all the tools out, because I was just evaluating the various languages at the time.
Another thing that might be interesting, if we can get feedback: see how many people find that sort of thing valuable.
One of the things that we've been using rikki- for in the Go track is to introduce people to tooling as they do the exercises. It's not done perfectly, but I think that it helps to introduce things at the point where you have a reason to be curious about them. If someone says "Hey, your solution is formatted differently than usual; the Go community uses gofmt to remove inconsistent formatting", now it's about you, not just arbitrary-feeling tools/documentation. I think that the idea of making it super easy first, and slowly helping people get used to the idiomatic choices of the language, is a good one.
I'm seeing common ground around the idea of "the idiomatic setup that requires the least understanding (both in installation and usage) to be successful." It seems that if there are details about compilation/execution that can safely be ignored when properly handled by tooling (in JVM languages, such a detail is the classpath), we'll likely serve our practitioners better by encouraging them to install and use the build tool.
Thanks for discussing, everyone. I'll close this to get it out of the queue, since there is never really a state when we can call this "done". A possible action to be taken (take user experience into account) has already been noted in the linked issue (docs). Even though this is closed, feel free to continue updating it.
Hello. Sometimes, when trying to run the tests for a certain language track, the command is very involved. Let's take a look at what the Erlang track's test command used to be:
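I don't have the original command verbatim, so the block below is only a rough sketch of that style of hand-rolled EUnit invocation; the file names, module names, and flags are assumptions chosen to match the `accumulate` exercise mentioned next.

```sh
# Rough sketch only (not the verbatim original command): compile the solution
# and its test module, start an Erlang VM, run the tests, then shut it down.
erlc accumulate.erl accumulate_tests.erl
erl -noshell -pa . -eval "eunit:test(accumulate_tests, [verbose])" -s init stop
```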
Note in addition that `accumulate` is the exercise slug, so the command depends on the exercise too. Okay, you could probably script that, but it's a little annoying to have to script it.
Wouldn't it be better if the command to run the tests was easy to remember and constant across all exercises within the same track?
Thus, the Erlang track adopted rebar3, so that no matter what the exercise, you run `rebar3 eunit` and the tests are run.

I would like to get a list of all tracks that have made similar decisions.
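As a small illustration of what that buys us (the exercise directories named here are only examples, not taken from the issue), the invocation no longer changes from exercise to exercise:

```sh
# Each exercise directory is set up as a rebar3 project, so the command is
# identical everywhere; only the directory you are standing in changes.
cd accumulate && rebar3 eunit
cd ../hamming && rebar3 eunit
```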
I would be pleased if you could add to the list below.
In particular, I am interested in any discussions on whether to use a build system, with arguments for and against each.
That means that your opinion is welcome even if your track does not currently use a build system (maybe in your track, each exercise's tests are just a single script that you invoke the compiler and/or interpreter on).
This is because it's still possible that your language ecosystem has a build system that you explicitly decided not to use.
- `make`
- `cmake .. && make`
- `dotnet test`
- `ceylon test $(basename sources/*)`
- `lein test`
- `crystal spec`
- `dub test`
- `npm run test`
- `elixir *test.exs`
- `npm test`
- `rebar3 eunit`
- `dotnet test`
- `go test`
- `groovy *Spec.groovy`, using `@Grab` (self-contained) instead of Gradle
- `stack test`
- `make`
- `gradle test`
- `jasmine *.spec.js`
- `julia runtests.jl`
- `gradle test`
- `busted .`
- `make`
- `perl *.t`
- `pulp test`
- `python *test.py`
- `raco test *test.rkt`
- `ruby *spec.rb`
- `cargo test`
- `sbt test`
- `swift test`
- `yarn test`
In reading these past discussions, I would say the main question that I noticed is:
Should Exercism tracks be about just the language, or should it be about language plus ecosystem?
Some arguments for both sides that I observed:

- In favor of including the ecosystem: a single command such as `dotnet test` is all that will be necessary to run the tests.
- Against: a student may lean on `make` and never learn how to invoke `gcc` and its associated flags (and this knowledge would be important to be able to write future Makefiles!); see the sketch below.
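To make that last point concrete, here is a hedged sketch of what a track-provided Makefile hides from a C student; the file names and compiler flags are invented for illustration and not taken from any track.

```sh
# With a Makefile supplied by the track, running the tests is one opaque step:
make

# Without it, the student has to know the compiler invocation themselves,
# roughly something like this (flags and file names are assumptions):
gcc -Wall -Wextra -o tests test_hello_world.c hello_world.c && ./tests
```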