update optimal retention and parameters tooltip #3148
Conversation
In this case, we'll probably want to 1) move the existing strings to the bottom of the file and 2) add new strings with an updated key (e.g. deck-config-weights-tooltip -> deck-config-weights-tooltip2). If the keys remain the same, translators won't be notified.
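For illustration, a minimal sketch of that rename pattern in the Fluent file (the string bodies below are placeholders, not Anki's actual text):

# Deprecated key, moved to the bottom of the file so existing translations keep
# working until the old string is eventually dropped (placeholder body).
deck-config-weights-tooltip = Old tooltip text

# New key with a bumped suffix; because the key changed, translators are
# notified that the string needs translating again (placeholder body).
deck-config-weights-tooltip2 = Updated tooltip text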
OK. I will update the PR tonight.
This reverts commit 32fdc5c.
Co-authored-by: user1823 <[email protected]>
Co-authored-by: user1823 <[email protected]>
Co-authored-by: user1823 <[email protected]>
I think that when the number of reviews is exactly 0 (which most likely implies an error in the search query), an error should be shown. One example of a user entering an incorrect search query and getting no feedback (as would happen if the same query were run in the 24.04.2 beta): https://forums.ankiweb.net/t/why-cant-i-optmize-my-fsrs-parameters/43830/3
For this case, it's better to warn the user that the query doesn't match any reviews. But I don't know how to add a new warning popup in Anki. @dae, what do you think? I hope you can help.
Why don't we push that problem into a separate issue? We could take a similar approach to #3123 and display the item count somewhere; then users would know the search is wrong, without us having to make a separate error.
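If that count ends up being surfaced as a localized string, a hypothetical Fluent entry could look something like this (the key name and wording are invented for illustration, not an existing Anki string):

deck-config-found-reviews-count =
    { $count ->
        [one] Found 1 review matching the search
       *[other] Found { $count } reviews matching the search
    }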
deck-config-compute-optimal-retention-tooltip3 =
    This tool assumes that you’re starting with 0 learned cards, and will attempt to find the desired retention value
    that will lead to the most material learnt, in the least amount of time. To accurately simulate your learning process,
    this feature requires a minimum of 400+ reviews. The calculated number can serve as a reference when deciding what to
How likely is it that this number will change again? We should not be specific if there is a chance it will change again.
This number was not changed by me. Here is the discussion: #3019 (comment)
I don't have a plan to modify it in the future.
Are you happy with it at 400? Sorry, we should have checked with you at the time.
400 is OK with me. And I plan to make the simulation less sensitive to outliers.
Yes, we can definitely move it to a new issue. I just thought that it would be easier to make all related changes at once.
That would depend on exactly how you are planning to display the count.
Co-authored-by: Damien Elmes <[email protected]>
Co-authored-by: Damien Elmes <[email protected]>
Looks good, pending that one question above.
This PR doesn't prevent evaluation with fewer than 400 reviews.
This PR doesn't introduce any limit.
Thanks all!
I removed the wording about the 1000-review limitation from the tooltip of FSRS parameters, and mentioned the 400-review minimum in the tooltip of optimal retention.