
Implement survey based difficulty #92

Closed
behrtam opened this issue Oct 24, 2016 · 8 comments
@behrtam

behrtam commented Oct 24, 2016

In the last months most of the tracks adopted the new config.json format #60, which added a difficulty for each problem on a 1-10 scale. Right now we try to come up with good numbers in endless contributor discussions, or most of the time it just stays at 1. Maybe we should ask the users instead ...

We could add a simple way of rating the difficulty on the site after you submit the problem, or only if you do a second iteration or comment on another solution. Then we take the average or a trimmed mean to determine the difficulty.

With this user data it might also be possible to automatically update the order of the exercises.
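The average-vs-trimmed-mean idea could be sketched like this (a minimal illustration in Python; the 20% trim fraction is an assumption, not anything decided in this thread):

```python
def trimmed_mean(ratings, trim=0.2):
    """Mean of the ratings after dropping the lowest and highest
    `trim` fraction, so a few outlier votes can't skew the result."""
    if not ratings:
        return None
    k = int(len(ratings) * trim)  # number of values to drop at each end
    kept = sorted(ratings)[k:len(ratings) - k]
    return sum(kept) / len(kept)

# A single troll rating of 10 barely moves the trimmed mean:
ratings = [2, 3, 3, 3, 3, 3, 4, 10]
plain = sum(ratings) / len(ratings)  # 3.875
robust = trimmed_mean(ratings)       # ~3.17
```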

@kotp
Member

kotp commented Oct 24, 2016

I like the ideas in here. New exercises generally get added to the end of the list; at what point in the list should they appear instead, to help avoid the "this is too hard, I am frustrated" reaction, which can lead to students not realizing they can skip an exercise and move on?

@sguermond

Exercises could be assigned a base difficulty value by the creator/track mentors for initial placement, which is then adjusted based on user ratings (somewhat like reviewer rating/user rating on IMDB).
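One way to read the IMDB comparison is as a Bayesian-average blend: the maintainer's base value acts as a fixed number of "phantom" votes, so sparse user data barely moves the rating. A sketch (the prior_weight of 10 is purely illustrative):

```python
def blended_difficulty(base, user_ratings, prior_weight=10):
    """Blend a maintainer-assigned base difficulty (1-10 scale) with
    user ratings. `base` counts as `prior_weight` phantom votes, so a
    handful of ratings only nudges the value; many ratings dominate."""
    total = base * prior_weight + sum(user_ratings)
    return total / (prior_weight + len(user_ratings))

# Two users rating a base-3 exercise as 7 and 8 nudge it to 3.75;
# fifty such ratings would pull it much closer to 7.5.
```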

@jtigger

jtigger commented Nov 1, 2016

This relates (but is not exactly the same as) #86.

@jtigger

jtigger commented Nov 1, 2016

There are multiple variables that go into the ordering of a track, not just difficulty. This would be a great "read-out" informing the track maintainers of experienced difficulty. Imagine something like a candlestick chart that plots the range of experienced difficulty over the current ordering. If there's a sudden spike, that would reflect in the graph.

As a first iteration, we could just export these stats as a CSV (to import into a spreadsheet) to view the graph.
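That CSV read-out might look something like this (a sketch; the column set and min/median/max summary are assumptions standing in for the candlestick ranges):

```python
import csv
from statistics import median

def export_difficulty_stats(ratings_by_exercise, path):
    """Write one row per exercise (in track order) with the spread of
    experienced difficulty, ready to chart in any spreadsheet."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["exercise", "votes", "min", "median", "max"])
        for slug, ratings in ratings_by_exercise.items():
            rs = sorted(ratings)
            writer.writerow([slug, len(rs), rs[0], median(rs), rs[-1]])
```

A sudden spike in the max or median columns relative to neighbouring exercises would flag the ordering problem described above.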

@robphoenix

I've also recently come across the potential for two different difficulty ratings. For instance, in the xgo track the grains exercise can be approached using math.Pow() or using bitwise operators; the latter, in my mind, requires understanding of a more difficult subject. I was unsure whether to rate the exercise lower or higher, depending on whether the easier or the harder-to-arrive-at solution is the benchmark. (431 & 430)
A survey of users' experiences would help with this.
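The two grains approaches, shown here in Python rather than Go for brevity (the Go versions use math.Pow and the << operator, respectively), compute the same value but lean on different concepts:

```python
def grains_pow(square):
    """Exponentiation: the conceptually simpler route."""
    return 2 ** (square - 1)

def grains_shift(square):
    """Bit shift: same value, but assumes familiarity with bitwise operators."""
    return 1 << (square - 1)

# Both agree on every square of the chessboard:
assert grains_pow(64) == grains_shift(64) == 2 ** 63
```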

@kotp
Member

kotp commented Jan 6, 2017

When there is a simpler approach, I would use that rating. The student can decide to solve the problem in a way that is much more complex and difficult. If the simpler solution is difficult to realize, though, maybe bump it up a level on the difficulty scale.

@robphoenix

OK, sounds good @kotp, thanks.

@kytrinyx
Member

We also had an interesting (related) discussion in exercism/exercism#1081
