Implement survey based difficulty #92
Comments
I like the ideas in here. New exercises generally get added to the end of the list; at what point in the list should new exercises appear, to help avoid the "This is too hard, I am frustrated" reaction, which can lead to not realizing you can skip and move on?
Exercises could be assigned a base difficulty value by the creator/track mentors for initial placement, which is then adjusted based on user ratings (somewhat like reviewer rating/user rating on IMDB).
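The blending described above could look something like the following sketch. This is not from the discussion itself; the function name and the `weight` parameter (how many "virtual votes" the maintainer's base value counts for) are illustrative assumptions:

```python
def adjusted_difficulty(base, ratings, weight=10):
    """Blend a maintainer-assigned base difficulty with user ratings.

    `weight` is the number of "virtual votes" the base value counts for,
    so a handful of early user ratings cannot swing the placement wildly.
    The default of 10 is an illustrative choice, not a value agreed on
    in this thread.
    """
    total = base * weight + sum(ratings)
    return total / (weight + len(ratings))
```

With no user ratings the result is simply the base value; as ratings accumulate, they gradually dominate the maintainer's initial guess.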
This relates to (but is not exactly the same as) #86.
There are multiple variables that go into the ordering of a track, not just difficulty. This would be a great "read-out" informing the track maintainers of experienced difficulty. Imagine something like a candlestick chart that plots the range of experienced difficulty over the current ordering. If there's a sudden spike, that would reflect in the graph. As a first iteration, we could just export these stats as a CSV (to import into a spreadsheet) to view the graph.
I've also recently come across the potential for two different difficulty ratings. For instance in the xgo track, the
When there is a simpler approach, I would use that rating. The student can decide to solve the problem in a way that is much more complex and difficult. If the simpler solution is difficult to realize, though, maybe bump it up a level on the difficulty scale.
OK, sounds good @kotp, thanks.
We also had an interesting (related) discussion in exercism/exercism#1081 |
In the last months most of the tracks adopted the new config.json format (#60), which added a difficulty for each problem on a 1-10 scale. Right now we try to come up with good numbers in endless contributor discussions, or most of the time it just stays 1. Maybe we should ask the users instead. We could add a simple way of rating the difficulty on the site after you submit the problem, or only if you did a second iteration or if you comment on another solution. Then we take the average or a trimmed mean to determine the difficulty.
With this user data it might also be possible to automatically update the order of the exercises.
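The trimmed-mean aggregation mentioned above could be sketched as follows; the 10% trim fraction is an illustrative assumption, not a value settled in this thread:

```python
def trimmed_mean(ratings, trim=0.1):
    """Average the ratings after dropping the most extreme `trim`
    fraction from each end, so a few outlier votes (everyone rating
    1 or 10 out of spite or confusion) don't skew the difficulty.

    `trim=0.1` (drop the lowest and highest 10%) is an illustrative
    default, not a value agreed on in this discussion.
    """
    data = sorted(ratings)
    k = int(len(data) * trim)
    kept = data[k:len(data) - k] if k else data
    return sum(kept) / len(kept)
```

With few ratings, nothing is trimmed and this reduces to the plain average, which matches the "average or a trimmed mean" framing above.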