
New exercise: High Scores #886
Merged: 2 commits merged into exercism:master on Oct 25, 2018

Conversation

@emcoding (Contributor) commented on Oct 24, 2018:

This is part of the rearrangement of the track.
One of the things missing is an easy exercise that uses an array as a list, with some list manipulations.
The Problem Specification PR is here: exercism/problem-specifications#1378

This High Scores exercise is meant to be a core exercise:
TwoFer -> Acronym -> Isogram -> High Scores -> Matrix -> ...

As we want to test-drive it first, it's added here as a side exercise.
This exercise is also going to be the first core exercise in the track where students have to instantiate a class and work with multiple methods. We paid extra attention to the sequence of the tests so that the code builds up nicely, and at the end there should be some nice mentoring opportunities left.
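To make that concrete, here is a minimal sketch of the kind of class a student might end up writing. This is illustrative only: the constructor and the latest / personal_best method names are assumptions based on the description above, not the canonical data (a sketch of top follows the test snippet quoted later in this thread).

# Hypothetical sketch of a student solution; method names are assumptions.
class HighScores
  attr_reader :scores

  # Students instantiate the class with a list of scores.
  def initialize(scores)
    @scores = scores
  end

  # The most recently recorded score (assumes oldest-to-newest order).
  def latest
    scores.last
  end

  # The highest score achieved.
  def personal_best
    scores.max
  end
end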

Thanks to @pgaspar, who took care of the implementation. 🙇

@Insti (Contributor) commented on Oct 24, 2018:

Awesome, 👍

I want to have a closer look at this before I hit the merge button, but I expect that you two will have done a great job, and I'm not anticipating any problems.

Will review as soon as I can.

def workload
  [
    "scores = #{scores}",
    "output = #{expected.inspect}",
@Insti (Contributor) commented:

I'd prefer that the variable here was expected

@pgaspar (Member) replied:

The reason I went with output (I also thought of result) was to make it easier to parse visually, since we have them in every spec:

scores = [1, 2, 3, 4]
output = [1, 2, 3, 4]
result = [1, 2, 3, 4]
expected = [1, 2, 3, 4]

That being said, if you think we're losing meaning by using those variable names, I'll change it. What do you think?

@Insti (Contributor) replied:

I'd rather choose variable names because they are meaningful rather than because they make the code look prettier.

You can always add a space after "scores" if you really care about lining things up.

"output" can be ambiguious, what is it the "output" of? In this case I understand it to be the the expected_output which I'd rather shorten to expected
as it makes the assert_equal line make more sense.

The general format being:
assert_equal expected, actual

Then we have:

assert_equal output, HighScores.new(scores).#{property}
vs
assert_equal expected, HighScores.new(scores).#{property}

"expected" is also consistent with what most of the other exercises use in their tests.

(Although, this might not be a valid argument since I'm probably responsible for all of them being that way :) )
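As a self-contained illustration of that convention (a sketch, not this exercise's actual test file): minitest's assert_equal reports its first argument as "Expected" and its second as "Actual" on failure, so naming the first variable expected keeps the code and the failure output aligned.

require 'minitest/autorun'

class NamingConventionTest < Minitest::Test
  def test_expected_comes_first
    scores   = [5, 3, 1]
    expected = [5, 3, 1]
    # assert_equal(expected, actual): a failure here would print the first
    # argument under "Expected" and the second under "Actual".
    assert_equal expected, scores.sort.reverse
  end
end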

@Insti (Contributor) commented on Oct 24, 2018:

Just to be clear: I think you should change it. :)

@pgaspar (Member) commented on Oct 24, 2018:

Thank you for the detailed comments. I don't feel strongly about this, and I'm typically not a proponent of making sure everything lines up, because in "the real world" it tends to be hard to maintain.

This alone is enough reason for me:

"expected" is also consistent with what most of the other exercises use in their tests.

Consistency++ I'll change it!

Although, this might not be a valid argument since I'm probably responsible for all of them being that way :)

Heh, good point - I think it's a good call though 👍
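After the rename, a generated test case would read roughly like this (a sketch; the test method name is hypothetical, and the case mirrors the snippet quoted below):

require 'minitest/autorun'
require_relative 'high_scores'

class HighScoresTest < Minitest::Test
  def test_top_scores # hypothetical test method name
    skip
    scores = [5, 3, 1]
    expected = [5, 3, 1]
    assert_equal expected, HighScores.new(scores).top
  end
end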

require 'minitest/autorun'
require_relative 'high_scores'

# Common test data version: 1.0.0 dd88548
@Insti (Contributor) commented:

Maybe the Common test data version should be 0.0.0 until this gets merged into problem-specifications.

skip
scores = [5, 3, 1]
output = [5, 3, 1]
assert_equal output, HighScores.new(scores).top
@Insti (Contributor) commented on Oct 24, 2018:

I like top with no arguments as a method name here 👍
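A minimal implementation consistent with that single test might look like the sketch below, assuming top returns all scores sorted highest-first; the canonical data may define it differently.

class HighScores
  def initialize(scores)
    @scores = scores
  end

  # One reading consistent with the quoted test: all scores, highest first.
  def top
    @scores.sort.reverse
  end
end

HighScores.new([5, 3, 1]).top # => [5, 3, 1]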

@Insti (Contributor) commented on Oct 24, 2018:

OK, I've left some feedback; most of it is fairly minor, and this is certainly mergeable as is ⭐

Also see my comments in the exercism/problem-specifications#1378 PR.

Do you want to get this PR merged ASAP or are you happy to wait a week or so for the problem-specifications PR to go through?

@pgaspar (Member) commented on Oct 24, 2018:

@Insti heads up that I'm planning on squashing these commits before we merge.

@emcoding merged commit a2a9d97 into exercism:master on Oct 25, 2018.
@emcoding (Contributor, Author) commented:
YAY!

@emcoding (Contributor, Author) commented:
And after merging I found this in the docs:

Reference the PR in problem-specifications if it hasn't been merged yet, this must not be merged until the exercism/problem-specifications PR is merged.

Should I revert the merge? @Insti, do you know?

cc @pgaspar

@pgaspar (Member) commented on Oct 25, 2018:

@F3PiX I saw that before too but forgot to ask @Insti about it.

From my limited testing we should be good. It just means that anyone trying to generate the tests for the whole track will see this at the end of the output:

Broken generators: (Run these individually to see the error raised.)

high-scores

It doesn't block the other generators from running and it doesn't change our existing tests. @Insti will know better though.

@Insti (Contributor) commented on Oct 26, 2018:

@pgaspar that sounds about right.

I'm trying to remember if there was some other reason, but I think it's mostly so the generator works correctly and has the correct version information.

I think it's not going to be a problem in this case.

@Insti (Contributor) commented on Oct 26, 2018:

Thanks for your work on this @F3PiX and @pgaspar ❤️

@emcoding (Contributor, Author) commented:
Thanks for your support @Insti :-)) Much appreciated.

@emcoding deleted the high-scores branch on October 26, 2018, 10:53.