Feature prior #130
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           develop     #130     +/-  ##
===========================================
+ Coverage     67.3%   68.44%   +1.13%
===========================================
  Files           11       11
  Lines         1040     1090      +50
  Branches       241      256      +15
===========================================
+ Hits           700      746      +46
- Misses         305      307       +2
- Partials        35       37       +2
```

Continue to review the full report at Codecov.
```python
sp = scale(np.random.normal(loc=p_params[0], scale=p_params[1],
                            size=(n_starts,)))

elif p_type == 'logNormal':
```
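For context, the excerpt above samples normal-prior startpoints. A minimal runnable sketch of the exponentiated-normal (`'logNormal'`) variant being discussed here, where the values of `p_params` and `n_starts` and the identity `scale` transform are illustrative stand-ins, not the PR's actual code:

```python
import numpy as np

# Illustrative assumptions: (mu, sigma) of the underlying normal
# distribution, and the number of optimization startpoints.
p_params = (0.0, 1.0)
n_starts = 5
scale = lambda x: x  # placeholder for the parameter-scale transform

# Log-normal startpoints: exponentiate draws from the underlying normal,
# i.e. the parametrization on the linear parameter space agreed on below.
sp = scale(np.exp(np.random.normal(loc=p_params[0],
                                   scale=p_params[1],
                                   size=(n_starts,))))
```

With this parametrization, `p_params` always describes the underlying normal, so the normal and log-normal branches share the same parameter meaning.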
We need to decide on a consistent parametrization of the distributions.
I understood from our discussion (it might be my misunderstanding) that we agreed on another parametrization, but I prefer your parametrization here. What do you think @LeonardSchmiester? :)
I thought that was the way we agreed on it: introducing log-normal as an exponentiated normal distribution on the linear parameter space? What should be different?
I thought that we parametrized it as it is parametrized in scipy. But as I said, I prefer your version and just wanted to make sure that Leonard implements the same parametrization. I will adapt my code accordingly.
Ah...
Yes, scipy does it differently...
Yes, but we do everything via numpy here, so I would also prefer not to pull in yet another package that has redundant functionality but uses a different definition...
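To make the difference concrete: `numpy` parametrizes the log-normal directly by the mean and standard deviation of the underlying normal, while `scipy.stats.lognorm` uses a shape parameter `s = sigma` and `scale = exp(mu)`. A small sketch (the values of `mu` and `sigma` are arbitrary) showing that the two describe the same distribution:

```python
import numpy as np
from scipy import stats

mu, sigma = 1.0, 0.5  # parameters of the underlying normal (assumed values)

rng = np.random.default_rng(0)
# numpy: parametrized by (mean, sigma) of the underlying normal.
samples = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

# scipy: the same distribution needs shape s = sigma and scale = exp(mu).
scipy_median = stats.lognorm(s=sigma, scale=np.exp(mu)).median()
```

The median of a log-normal is `exp(mu)`, so the sample median of the numpy draws should be close to the scipy median under this translation.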
Looks fine to me; approved, if we agree on this parametrization of the logNormal distribution (which I prefer anyhow!)
@paulstapor: Can you please include a section in the documentation on the (meaning of the) supported values for the respective columns?
Indeed, that's still missing. Will do that before merging!
Okay, I updated the documentation, including the additional columns we're actually still discussing (initializationPriorType, initializationPriorParameters, objectivePriorType, and objectivePriorParameters). These are yet to come, but fairly simple to implement. I think/hope it was safe to include this in the "Extensions" part of the documentation...
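For illustration, a hypothetical parameter-table fragment using these columns. The column names are taken from this discussion, but the example values and the semicolon-separated parameter format are assumptions about the eventual specification, not something this PR defines:

```
parameterId  initializationPriorType  initializationPriorParameters  objectivePriorType  objectivePriorParameters
k1           normal                   0;1                            logNormal           0;1
k2           uniform                  -3;3                           normal              0;2
```

Here the prior-parameter pairs would carry the same meaning as `p_params` in the sampling code, e.g. `(mu, sigma)` of the underlying normal for `normal` and `logNormal` priors.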
Can we agree that this now fixes #17?
Okay for me.
May/Can/Shall I merge?
Both things (this PR and the issue) are PEtab things...
Yeah, sorry. Just ignore my comment 🤦‍♂️
* added an initial parameter sampling functionality
* added tests for startpoint sampling
* added documentation for the additional columns initialization and objectivePriorType and Parameters
Added some functionality to sample starting points according to priors; added files for a test. (Fixes #17)