All tracks that have discussed whether to use a build system or not #117

Closed
petertseng opened this issue Feb 13, 2017 · 11 comments

@petertseng
Member

petertseng commented Feb 13, 2017

Hello. Sometimes, when trying to run the tests for a certain language track, the command is very involved. Let's take a look at what running Erlang's tests used to require:

    erl -make
    erl -noshell -eval "eunit:test(accumulate, [verbose])" -s init stop

Note in addition that accumulate is the exercise slug, so the command depends on the exercise too.
Okay, you could probably script that, but it's a little annoying to have to script it.
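
For concreteness, here is a minimal sketch of such a wrapper (purely illustrative, not something the track ever shipped); it assumes it is run from the exercise directory and that the directory name matches the exercise slug, as with accumulate above:

    #!/bin/sh
    # Hypothetical wrapper: compile and run the EUnit tests for one exercise.
    # Assumes the current directory is the exercise directory and that its
    # name matches the exercise slug (e.g. "accumulate").
    slug=$(basename "$PWD")
    erl -make
    erl -noshell -eval "eunit:test($slug, [verbose])" -s init stop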

Wouldn't it be better if the command to run the tests was easy to remember and constant across all exercises within the same track?

Thus, the Erlang track adopted rebar3: no matter the exercise, you run rebar3 eunit and the tests are run.

I would like to get a list of all tracks that have made similar decisions.
I would be pleased if you could add to the below list.

In particular, I am interested in any discussions on whether to use a build system, with arguments for and against each.

That means your opinion is welcome even if your track does not currently use a build system (maybe in your track, each exercise's tests are a single script on which you invoke the compiler and/or interpreter).
This is because it's still possible that your language's ecosystem has a build system that you explicitly decided not to use.

| Track | Build system | Command | Discussion | Notes |
| --- | --- | --- | --- | --- |
| C | Make | `make` | exercism/c#4 | Rake (unusual for C) -> Make |
| C++ | CMake | `cmake .. && make` | exercism/exercism#1758 | Pro: Simplifies setup, cross-platform; con: overkill, hides fundamental details |
| C# | .NET Core | `dotnet test` | exercism/csharp#200 | |
| Ceylon | built-in | `ceylon test $(basename sources/*)` | ? | |
| Clojure | Leiningen | `lein test` | exercism/exercism#508 | language vs ecosystem |
| Crystal | built-in | `crystal spec` | exercism/crystal#47 | conventional dirs |
| D | DUB | `dub test` | exercism/d#30 | |
| ECMAScript | Node Package Manager | `npm run test` | exercism/javascript#12, exercism/javascript#272 | |
| Elixir | script = test | `elixir *test.exs` | exercism/elixir#218 | script is simple, so no Mix |
| Elm | Node Package Manager | `npm test` | exercism/elm#76 | shell script -> npm test |
| Erlang | Rebar3 | `rebar3 eunit` | exercism/erlang#106 | |
| F# | .NET Core | `dotnet test` | exercism/fsharp#308 | In progress |
| Go | built-in | `go test` | ? | |
| Groovy | script = test | `groovy *Spec.groovy` | exercism/groovy#31 | language vs ecosystem, @Grab (self-contained) instead of Gradle |
| Haskell | Stack | `stack test` | exercism/haskell#182 | |
| Idris | Make | `make` | exercism/idris#11, exercism/idris#17 | |
| Java | Gradle | `gradle test` | exercism/exercism#1046 | Gradle (simple install) over Maven |
| JavaScript | Node Package Manager | `jasmine *.spec.js` | https://github.com/exercism/xjavascript/issues/341 | |
| Julia | script = test | `julia runtests.jl` | ? | |
| Kotlin | Gradle | `gradle test` | ? | Gradle (simple install) |
| Lua | Busted | `busted .` | exercism/lua#70 | conventional filenames |
| OCaml | Make | `make` | ? | |
| Perl5 | script = test | `perl *.t` | ? | |
| PureScript | Pulp | `pulp test` | ? | |
| Python | script = test | `python *test.py` | ? | |
| Racket | raco | `raco test *test.rkt` | ? | |
| Ruby | script = test | `ruby *spec.rb` | ? | |
| Rust | Cargo | `cargo test` | exercism/rust#18 | |
| Scala | Scala Build Tool | `sbt test` | exercism/exercism#850 | simplest test runner |
| Swift | Swift Package Manager | `swift test` | exercism/swift#233 | |
| TypeScript | Yarn | `yarn test` | ? | |

In reading these past discussions, I would say the main question I noticed is:
Should Exercism tracks be about just the language, or about the language plus its ecosystem?

Some arguments for both sides that I observed:

  • Advantage: This prepares the student for non-Exercism projects that use that build system. This is a large advantage if the build system is widely used.
  • Advantage: Fewer and shorter commands needed to run the tests.
    • The size of this advantage depends on what the commands would be without a build system.
    • Simplifying assumption: Large advantage for compiled languages, smaller for interpreted.
  • Advantage: The same command is used to run the tests, no matter what the exercise.
  • Advantage: Cross-platform compatibility (esp. CMake and .NET Core)
    • I had to craft special scripts to run the C# and F# tests on Linux, but build systems are meant to mask away those differences. After .NET Core, dotnet test is all that will be necessary.
  • Disadvantage: A build system would be overkill for Exercism's simple exercises.
  • Disadvantage: May prevent the student from learning about the fundamental building blocks behind the test command they are running, i.e. what's going on beneath the covers.
    • For example, the student might only learn how to use make and never learn how to invoke gcc and its associated flags (knowledge that would be important for writing future Makefiles!); see the sketch after this list.
  • Disadvantage: Student now has to learn about the ecosystem in addition to the language.
    • Exercism has been explicitly stated to target the language only; is adding the ecosystem acceptable?
  • Disadvantage: If a build system requires configuration, the track maintainers now have to maintain that configuration as well.
  • Disadvantage: Now we must explain to the student how to install the build system in addition to the language.
    • Disadvantage mitigated for those languages where the build system installs the language for you.
  • Disadvantage: Constrains the freedom of choice of the student, for those who wish to use different (or no!) build system.
  • Advantages and disadvantages that I missed.
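
To make the make/gcc point concrete, the sketch below shows what a build system can hide; the file names and flags are hypothetical, chosen only for illustration:

    # With a Makefile in place, the student runs:
    make
    # Without one, they would need the underlying compiler invocation, e.g.
    # for a C exercise (hypothetical file names and flags):
    gcc -std=c99 -Wall -Wextra -Werror accumulate.c test_accumulate.c -o tests.out && ./tests.out
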
@petertseng
Member Author

I have some doubts about whether this topic is exploring build systems or test frameworks.

If it's the former, the following two discussions are not applicable:

I think after some deliberation I will say... I did this survey in order to figure out how tracks answer the question of "How do we make it as easy as possible to run the tests?"

So, I would consider discussions about a particular test framework to be in scope if they make a difference in this regard.

For others who wish to answer related questions such as the level of exposure to language ecosystems offered by the various tracks, however, the choice of test framework can also be interesting.

@kytrinyx
Member

> Wouldn't it be better if the command to run the tests was easy to remember and constant across all exercises?

My first reaction to this is that I would want to optimize each track for using tooling that is common/idiomatic for that programming language, rather than optimizing for consistency across the different Exercism tracks.

> How do we make it as easy as possible to run the tests?

I think this is an excellent question. If we have a good discussion here about the philosophical aspects of it, and decide on a general approach or guidelines, I think we should include this topic in the set of issues that we generate when we create a new language track.

@ErikSchierboom
Member

> My first reaction to this is that I would want to optimize each track for using tooling that is common/idiomatic for that programming language, rather than optimizing for consistency across the different Exercism tracks.

This is spot on. We should be aiming to use the most idiomatic solution throughout the track, from the tests to the expected implementation to the build system. This has several obvious advantages:

  1. Easier to get started for people that are already familiar with the ecosystem
  2. Easier for new people to find documentation on the ecosystem
  3. Most likely good tooling support

@petertseng
Member Author

I hope I won't be misunderstood (I'll edit my post to be clear)

> consistency across the different Exercism tracks.

That was not my intent; I meant consistency within a single track.

@petertseng
Member Author

petertseng commented Feb 17, 2017

Another advantage I see, that has been made clear by the C# and F# efforts:

Cross-platform compatibility.

A disadvantage I see, or at least a point of discussion:

May prevent student from learning about the fundamental building blocks making up the test command they are running, what's going on beneath the covers.

I'll edit both of these points into the initial comment. Supporting details have been moved there as well so I don't have to maintain them in two places; y'all have them in the notification email anyway.

@rbasso

rbasso commented Feb 19, 2017

> Cross-platform compatibility.
>
> A disadvantage I see, or at least a point of discussion:
>
> May prevent student from learning about the fundamental building blocks making up the test command they are running, what's going on beneath the covers.

I agree that it would be ideal if people could first learn how to build a simple program from scratch, and then increase the build complexity until the student can grasp the more modern build systems that are actually used.

That teaching paradigm worked well for a long time, but as the tooling gained more functionality and we started depending more on it, things changed a lot in some languages.

Sometimes, the fundamental building blocks only make sense after you have had enough exposure to understand some concepts. Because of that, I would divide the users into two groups:

  • Students who are willing to spend a lot of time really understanding how the tooling works before playing with any project.
  • Students who want to first get a feel for what it is like to program in that language, so that they can decide whether it makes sense to learn all the commonly used tooling.

If I were to guess, I think we have a lot of people from the second group on Exercism...

Because of that, I think we should make it as easy as possible to run the tests, maybe even when that is not the most idiomatic solution.

To give an example, in the Haskell track we used stack, an easy-to-use and popular tool, together with hpack, which is not popular at all but makes things simpler for maintainers and users. This lowers the entry barrier a lot, so that students can get more experience before having to deal with cabal and eventually facing dependency problems that I still find really hard to solve, even after an entire year of Haskell-only amateur programming.
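
As a rough illustration of that contrast (the cabal workflow below is an assumed, older-style example, not the track's actual setup):

    # With stack (and hpack generating the .cabal file), a student only runs:
    stack test
    # A cabal-only workflow of that era involved noticeably more steps, e.g.:
    cabal sandbox init
    cabal install --only-dependencies --enable-tests
    cabal configure --enable-tests
    cabal build
    cabal test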

@ErikSchierboom
Member

I can only agree with @rbasso. For many languages, the building blocks are actually quite complex to understand. For example, when starting out with C#, you wouldn't want to know anything about MSBuild. The same goes for Scala and SBT, albeit to a lesser extent.

Perhaps what we can do is include links to documentation if people want to know more about the fundamental building blocks?

@petertseng
Member Author

This may depend on different learning styles. Including links seems good, as each student could then choose their own preferred point at which to start learning about the lower-level tools.

Actually, someone might argue that the job of Exercism is just to allow students to run the tests and it's not our job to teach about the fundamental building blocks, but I figure it can't hurt to go above and beyond our job and include a few links...

> Students who want to first get a feel for what it is like to program in that language, so that they can decide whether it makes sense to learn all the commonly used tooling.

There's a chance I fall into this group. There are a few language tracks where I had to make multiple attempts to follow their setup instructions before I found something that stuck. I really just wanted something that would work, and I did not want to dig too deep to figure all the tools out because I was just evaluating the various languages at the time.

> Students who are willing to spend a lot of time really understanding how the tooling works before playing with any project.

Another thing that might be interesting, if we can get feedback: see how many people find that sort of thing valuable.

@kytrinyx
Member

One of the things we've been using rikki- for in the Go track is to introduce people to tooling as they do the exercises. It's not done perfectly, but I think it helps to introduce things at the point where you have a reason to be curious about them. If someone says "Hey, your solution is formatted differently than usual; the Go community uses gofmt to remove inconsistent formatting", now it's about you, not just arbitrary-feeling tools/documentation.

I think the idea of making it super easy first and slowly helping people get used to the idiomatic choices of the language is a good one.

@jtigger

jtigger commented Feb 23, 2017

I'm seeing common ground around the idea of "the idiomatic setup that requires the least understanding (both in installation and usage) to be successful."

It seems that if there are details of compilation/execution that can safely be ignored when properly handled by tooling (in JVM languages, the classpath is such a detail), we will likely serve our practitioners better by encouraging them to install and use the build tool.
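
A quick sketch of that classpath point; the jar versions and class names below are hypothetical, just to show what the build tool spares the practitioner from typing:

    # With the build tool, running an exercise's tests is one short command:
    gradle test
    # Without it, the student manages the classpath by hand, along these lines:
    javac -cp junit-4.12.jar:hamcrest-core-1.3.jar Accumulate.java AccumulateTest.java
    java -cp .:junit-4.12.jar:hamcrest-core-1.3.jar org.junit.runner.JUnitCore AccumulateTest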

@petertseng
Member Author

Thanks for discussing, everyone.

I'll close this to get it out of the queue, since there is never really a point at which we can call this "done".

A possible action to take (take user experience into account) has already been noted in the linked issue (docs).

Even though this is closed, feel free to continue updating it.
