Brushing over notebook #1470
Conversation
…on where possible. Thermodynamic more robust.
…g to naming scheme in benchmark models as well
…according to naming scheme in benchmark models as well" This reverts commit a1a54f7.
Codecov Report

All modified and coverable lines are covered by tests ✅

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

```
@@             Coverage Diff             @@
##           develop    #1470      +/-   ##
===========================================
- Coverage    82.92%   82.89%   -0.03%
===========================================
  Files          163      163
  Lines        13786    13786
===========================================
- Hits         11432    11428       -4
- Misses        2354     2358       +4
```

☔ View full report in Codecov by Sentry.
Mostly looks good! Thanks.
I added a few comments of the same type about reducing the number of optimization starts and the number of samples for sampling. I think it's good to reduce as much as we can so the notebooks run faster. But I would not reduce so far that the figures become less presentable. If the histograms turn into just random bars, there's not much use in showing them. So if you haven't already, check that the figures keep the same character, especially if there's no random seed controlling the start points and how well the model is fit.
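To illustrate the reproducibility concern above: if the start points are drawn with a fixed seed, the reduced multistart run gives identical results on every notebook execution. This is a generic NumPy sketch, not the notebook's actual start-point code; the bounds and parameter count are made up for illustration.

```python
import numpy as np

# With a fixed seed, the sampled start points (and hence the multistart
# optimization results) are reproducible across notebook runs.
rng = np.random.default_rng(seed=1)

lb = np.full(3, -5.0)  # hypothetical lower bounds, 3 parameters
ub = np.full(3, 5.0)   # hypothetical upper bounds

# 10 uniformly sampled start points within the bounds.
startpoints = rng.uniform(lb, ub, size=(10, 3))
```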
doc/example/amici.ipynb
Outdated
"n_starts = 20 # usually a value >= 100 should be used\n", | ||
"n_starts = 10 # usually a value >= 100 should be used\n", |
I would just check that these 10 starts with the set random seed give a good model fit; it's quite important to have sampling and profiling that are both fast and good.
Also, is 10 enough to produce a parameter histogram and scatter plot that look reasonable?
I think the problem here would rather be that we find the optima very well, and hence the correlation plot looks hideous. Do you think we should switch back to Boehm? The histogram looks so-so.
Yeah, if it makes the notebook, the figures, and the conclusions drawn from them make more sense, I would go for that rather than having it run a bit faster.
doc/example/getting_started.ipynb
Outdated
"outputs": [ | ||
{ | ||
"data": { | ||
"text/plain": [ | ||
"[0.0,\n", | ||
" 0.0,\n", | ||
" 0.0,\n", | ||
" 0.0,\n", | ||
" 0.0,\n", | ||
" 0.0,\n", | ||
" 3.056285312137946e-36,\n", | ||
" 3.76158192263132e-36,\n", | ||
" 6.018531076210112e-36,\n", | ||
" 9.780112998841433e-36]" | ||
] | ||
}, | ||
"execution_count": 42, | ||
"metadata": {}, | ||
"output_type": "execute_result" | ||
} | ||
], |
Remove all outputs from the notebook
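For reference, notebook outputs can also be stripped programmatically. This is a stdlib-only sketch, not necessarily how this repo does it (tools like `jupyter nbconvert --clear-output --inplace` or `nbstripout` exist for the same purpose):

```python
import json

def strip_outputs(path: str) -> None:
    """Remove all cell outputs and execution counts from a .ipynb file in place."""
    with open(path) as f:
        nb = json.load(f)  # a notebook file is plain JSON
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    with open(path, "w") as f:
        json.dump(nb, f, indent=1)
```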
Fine for me, thanks!
> I would not reduce so low that the figures look less presentable or nice.

re: this balance between CI computation time and sufficient computation for nice figures, we could use an environment variable (e.g. `CI` and `GITHUB_ACTIONS` are always set to `true` during GitHub Actions CI) to determine the computation effort in each notebook. It could look like this at the top of the notebooks:

```python
import os

n_starts = 100
n_samples = 10000
if os.environ.get("CI", None) is not None:
    n_starts = 10
    n_samples = 1000
```
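A possible refinement of the suggestion above: the CI check and the full/reduced pair could be factored into small helpers so each notebook picks its effort in one line. The names `in_ci` and `effort` are hypothetical, not pyPESTO API; this is only a sketch of the same pattern.

```python
import os

def in_ci() -> bool:
    # GitHub Actions sets both CI and GITHUB_ACTIONS to "true".
    return os.environ.get("CI", "").lower() == "true"

def effort(full: int, reduced: int) -> int:
    """Use the full value locally and the reduced value on CI."""
    return reduced if in_ci() else full

n_starts = effort(100, 10)
n_samples = effort(10_000, 1_000)
```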
Co-authored-by: Dilan Pathirana <[email protected]>
Looks good! Thanks.
Ohh, I just remembered: it would be nice if this were implemented in the getting_started notebook as a rough guide to estimating models. That might take some restructuring of the notebook, so I'm also OK with doing it myself in a separate PR. I wanted to mention it here so it's not forgotten :D
Mainly: hopefully closes #1460 and #1396.