New exercise: High Scores #886
Conversation
Awesome, 👍 I want to have a closer look at this before I hit the merge button, but I expect that you two will have done a great job and am not expecting any problems. Will review as soon as I can.
def workload
  [
    "scores = #{scores}",
    "output = #{expected.inspect}",
I'd prefer that the variable here was "expected".
The reason why I went with "output" (also thought of "result") was to make it easier to parse visually, since we have them in every spec:
scores = [1, 2, 3, 4]
output = [1, 2, 3, 4]
result = [1, 2, 3, 4]
expected = [1, 2, 3, 4]
That being said, if you think we're losing meaning by using those variable names, I'll change it. What do you think?
I'd rather choose variable names because they are meaningful rather than because they make the code look prettier.
You can always add a space after "scores" if you really care about lining things up.
"output" can be ambiguious, what is it the "output" of? In this case I understand it to be the the expected_output
which I'd rather shorten to expected
as it makes the assert_equal
line make more sense.
The general format being:
assert_equal expected, actual
Then we have:
assert_equal output, HighScores.new(scores).#{property}
vs
assert_equal expected, HighScores.new(scores).#{property}
"expected" is also consistent with what most of the other exercises use in their tests.
(Although, this might not be a valid argument since I'm probably responsible for all of them being that way :) )
Just to be clear: I think you should change it. :)
Thank you for the detailed comments. I don't feel strongly about this, and I'm typically not a proponent of making sure everything lines up, because in "the real world" it tends to be hard to maintain.
This alone is enough reason for me:
"expected" is also consistent with what most of the other exercises use in their tests.
Consistency++ I'll change it!
> Although, this might not be a valid argument since I'm probably responsible for all of them being that way :)
Heh, good point - I think it's a good call though 👍
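For illustration, here's a minimal sketch of what the renamed generator lines might produce. This is hypothetical: the TestCase wrapper, its keyword arguments, and the join step are stand-ins for the generator internals, which aren't shown in this thread.

```ruby
# Hypothetical stand-in for the generator's test case object;
# the real generator class is not part of this diff.
class TestCase
  def initialize(scores:, expected:, property:)
    @scores = scores
    @expected = expected
    @property = property
  end

  # After the rename: "expected" instead of "output", so the generated
  # assertion reads as assert_equal expected, actual.
  def workload
    [
      "scores = #{@scores}",
      "expected = #{@expected.inspect}",
      "assert_equal expected, HighScores.new(scores).#{@property}",
    ].join("\n")
  end
end

puts TestCase.new(scores: [5, 3, 1], expected: [5, 3, 1], property: "top").workload
```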
require 'minitest/autorun'
require_relative 'high_scores'

# Common test data version: 1.0.0 dd88548
Maybe the Common test data version should be 0.0.0 until this gets merged into problem-specifications.
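In other words, the generated file header would carry a placeholder version until the canonical data is merged and versioned, something like this hypothetical sketch:

```ruby
require 'minitest/autorun'
require_relative 'high_scores'

# Common test data version: 0.0.0
# (placeholder until the problem-specifications PR is merged)
```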
skip
scores = [5, 3, 1]
output = [5, 3, 1]
assert_equal output, HighScores.new(scores).top
I like "top" with no arguments as a method name here 👍
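As a side note, here is a minimal HighScores sketch that would satisfy the quoted test. It's a hypothetical illustration (not the example solution from this PR), assuming top returns the three highest scores in descending order:

```ruby
# Hypothetical solution sketch, not the track's official example;
# assumes "top" returns the three highest scores, highest first.
class HighScores
  attr_reader :scores

  def initialize(scores)
    @scores = scores
  end

  # Array#max(3) returns the three largest elements in descending order.
  def top
    scores.max(3)
  end
end

p HighScores.new([5, 3, 1]).top # => [5, 3, 1]
```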
OK, I've left some feedback. Most of it is fairly minor and this is certainly mergeable as is ⭐ Also see my comments in the exercism/problem-specifications#1378 PR. Do you want to get this PR merged ASAP, or are you happy to wait a week or so for the problem-specifications PR to go through?
@Insti heads up that I'm planning on squashing these commits before we merge.
Co-authored-by: Maud de Vries <[email protected]>
YAY!
@F3PiX I saw that before too but forgot to ask @Insti about it. From my limited testing we should be good. It just means that anyone trying to generate the tests for the whole track will see this at the end of the output:
It doesn't block the other generators from running and it doesn't change our existing tests. @Insti will know better though.
@pgaspar that sounds about right. I'm trying to remember if there was some other reason, but I think it's mostly so the generator works correctly and will have the correct version information. I think it's not going to be a problem in this case.
Thanks for your work on this @F3PiX and @pgaspar ❤️
Thanks for your support @Insti :-)) Much appreciated.
This is part of the rearrangement of the track.
One of the things missing is an easy exercise with an array as a list and some list manipulations.
The Problem Specification PR is here: exercism/problem-specifications#1378
This High Scores exercise is meant to be a core exercise:
TwoFer -> Acronym -> Isogram -> High Scores -> Matrix -> ...
As we want to test-drive it, it's added here as a side exercise first.
This exercise is also going to be the first core exercise in the track where students have to instantiate a class and work with multiple methods. We paid extra attention to the sequence of the tests, so the code builds up nicely, and at the end there should be some nice mentoring opportunities left.
Thanks to @pgaspar, who took care of the implementation. 🙇