If we change the difficulty, don't we need to put this exercise into the correct order in this array?
Hmm, where we want to put this exercise just depends on: do we think grains is well-placed in this track, given its difficulty relative to the exercises around it? Maybe the picture would be easier to see if we had a single PR proposing difficulty numbers for all the exercises, without adding topics to most of them yet (it's what #345 suggests doing).
For example, is there any exercise before grains that we would rate a 3? Any exercise after grains that we would rate a 1? If so, that is a bit strange, and then maybe some of those exercises should get moved.
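The check described above (no exercise should be followed by one with a lower difficulty) can be sketched as a small script. This is a minimal sketch, not the track's actual data: the exercise slugs and difficulty values below are illustrative, and it assumes the track's config.json lists exercises as objects with "slug" and "difficulty" fields.

```python
import json

# Hypothetical config.json fragment in the assumed track format;
# slugs and difficulties here are made up for illustration.
config = json.loads("""
{
  "exercises": [
    {"slug": "hello-world", "difficulty": 1},
    {"slug": "leap", "difficulty": 1},
    {"slug": "grains", "difficulty": 3},
    {"slug": "variable-length-quantity", "difficulty": 2}
  ]
}
""")

def out_of_order(exercises):
    """Return adjacent pairs where a later exercise has a LOWER
    difficulty than the one before it, i.e. spots where the track
    ordering and the difficulty ratings disagree."""
    flagged = []
    for earlier, later in zip(exercises, exercises[1:]):
        if later["difficulty"] < earlier["difficulty"]:
            flagged.append((earlier["slug"], later["slug"]))
    return flagged

print(out_of_order(config["exercises"]))
# prints [('grains', 'variable-length-quantity')]
```

Running such a check after each difficulty PR would surface exactly the "a 3 before a 1" situations mentioned above, without requiring all exercises to be rated at once.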
For me, I've been adding the difficulty and topics at the same time, as the process of making those judgements is intrinsically combined. I'll look at other people's submissions and see if there are any comments indicating people's feelings about the exercise, so that the difficulty rating is not confined to my own experience. This is especially important for exercises that I completed a while back, where maybe I've forgotten some of the pain of not knowing the solution. At the same time I'm looking at what topics are being covered. For some exercises the submissions are all very similar, so the topics list and difficulty are quite easy. For others, such as this one, there can be a number of different approaches covering different topics.
I think it would be a more difficult task to rate all the exercises in one fell swoop and then add topics as we go. Would someone need to have completed all the exercises to do this? If they're reviewing an exercise to gauge difficulty, is it not worth making a note of topics simultaneously? Or would the initial difficulty ratings be a quick and simple best-guess effort, to be adjusted over time? (My concern with this is starting an exercise's rating at a misleading level of difficulty.) My intention from here is to add topics to exercises in their current order; I only added grains out of step because I completed it yesterday and wanted to tackle it while it was fresh in my mind.
I agree with @petertseng, it's hard to make a judgement on order until there are adjacent exercises to compare against.
Also worth considering is the idea that this exercise may be simple in one way while potentially introducing a difficult concept.
I've updated the config.json in the xpython track in a similar way to this, and my assumption was that any reordering could occur once there's sufficient information to compare the exercises. I wanted to get started on this to at least have a kind of baseline to work from. Anyway, I've rambled on, sorry.
My suspicion is that if we think the track is already well-ordered, we can take it slow.
My experience in another track: in Haskell, it was slightly urgent to get the track ordered quickly, since some problems were in wildly wrong places, so I did everything in one go by roughly guessing how I'd approach each problem and estimating the difficulty of that approach, resulting in exercism/haskell#402. I had not completed every exercise personally, but I was at least familiar with the tests given in every single exercise, since I had reviewed PRs updating every single one of them. I also cross-referenced with F#, as that track has implemented almost every exercise and assigned a difficulty to each one. You're right that these guesses could have been wrong, but it was better than the alternative.
That approach may not be necessary for this track. I did some previous work checking ordering in #279, but that only checked which exercises have stub files and which don't. I haven't checked other aspects of track ordering.
At a quick glance, I would say react is probably too early, and variable-length-quantity probably doesn't belong at the very end.
Ah, that explains it. Thanks. I was wondering why you suddenly jumped to grains...
so, just so I'm clear, are you & @ferhatelmas OK if I continue adding difficulty & topics at the same time to exercises? I'll mostly go in order, unless I complete another exercise as I go, though it shouldn't take too long to catch up with myself. I'm just not familiar enough with the exercises to tackle all the difficulties at once. I'll also cross-reference with other tracks as I go.
OK with me