Prior predictive guide #1334
Conversation
Codecov Report: all modified and coverable lines are covered by tests ✅

@@ Coverage Diff @@
##             main    #1334   +/-  ##
=======================================
  Coverage   95.31%   95.31%
=======================================
  Files          47       47
  Lines        4912     4913     +1
=======================================
+ Hits         4682     4683     +1
  Misses        230      230
wd60622 commented on 2025-01-04T16:25:51Z, on Line #6: "likelihood": Prior("Normal", sigma=Prior("HalfNormal", sigma=6)),
Is the sigma meant to be massive?
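To see why the reviewer flags this, one can simulate the hierarchy the Prior expression describes. This is an illustrative numpy sketch, not the notebook's code: sigma ~ HalfNormal(6), y ~ Normal(0, sigma), where a HalfNormal(6) draw is just the absolute value of a Normal(0, 6) draw.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate sigma ~ HalfNormal(6), then y ~ Normal(0, sigma).
# (Illustrative sketch; HalfNormal(6) sampled as |Normal(0, 6)|.)
n = 100_000
sigma = np.abs(rng.normal(0.0, 6.0, size=n))
y = rng.normal(0.0, sigma)

# E[sigma] = 6 * sqrt(2 / pi), roughly 4.8, and marginally sd(y) = 6,
# which is very wide if the data are on (or near) unit scale.
sigma_mean = sigma.mean()
y_sd = y.std()
```

For data on roughly unit scale, something like HalfNormal(sigma=1) would be a more weakly informative choice for an observation-noise scale.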
wd60622 commented on 2025-01-04T16:25:52Z, on: fig.suptitle("ROAS Prior Distributions", fontsize=18, fontweight="bold", y=1.06)
I believe this should be "Prior" instead of "Posterior".
Can you add ";" at the end so the "Text(...)" output does not show up?
wd60622 commented on 2025-01-04T16:25:53Z: Could you do an iteration of this model to show some better priors?
Maybe try the new Censored wrapper and/or reduce the variation, then show the same plots (ideally side by side).
@wd60622 I am using the plotting functions from the MMM components! They are amazing! We should promote them more!
Yeah, I like them too. They make a convenient workflow for both prior and posterior. I think they should be available on all component-based items with the same (or a similar) API.
ErikRingen commented on 2025-01-08T11:43:20Z: One thing that I think would be useful to highlight, and perhaps demonstrate, is how prior sensitivity declines with more data. In teaching, I find it often eases students' anxiety about setting the "right" prior to know that in many cases the likelihood will dominate.
juanitorduz commented on 2025-01-09T10:35:04Z: This is a great point. I think I will add it as a remark, if that is ok :)
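The "likelihood dominates" point can be seen in closed form with a conjugate Normal-Normal model. This is an illustrative sketch (not code from the guide): observation noise sigma is known, and the prior is mu ~ Normal(mu0, tau^2).

```python
import numpy as np

def posterior_mean(ybar, n, mu0=0.0, tau=1.0, sigma=1.0):
    """Posterior mean of mu in the conjugate Normal-Normal model
    (known sigma): a precision-weighted average of prior and data."""
    prec_prior = 1.0 / tau**2
    prec_data = n / sigma**2
    return (prec_prior * mu0 + prec_data * ybar) / (prec_prior + prec_data)

# With a sample mean of 3.0, the prior (centered at 0) pulls hard for
# small n and becomes negligible as n grows:
small_n = posterior_mean(ybar=3.0, n=5)     # = 2.5, shrunk toward 0
large_n = posterior_mean(ybar=3.0, n=5000)  # ~ 2.9994, essentially the data mean
```

As n increases, the data precision term n / sigma^2 swamps the fixed prior precision 1 / tau^2, so the exact prior choice matters less and less.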
ErikRingen commented on 2025-01-08T11:43:21Z: It would be nice to have a legend denoting which color is the prior distribution, for consistency with the other plots.
juanitorduz commented on 2025-01-09T11:02:24Z: done!
ErikRingen commented on 2025-01-08T11:43:22Z: The standard deviation should be exponential, not lognormal, right?
juanitorduz commented on 2025-01-09T11:02:20Z: good catch!
ErikRingen commented on 2025-01-08T11:43:22Z: If this is targeted at beginners, I would include more explanation, or at least a reference to maximum entropy distributions.
juanitorduz commented on 2025-01-09T12:14:02Z: Great point! Added a reference!
ErikRingen commented on 2025-01-08T11:43:23Z: I would have expected the prior mean function to be nearly flat over date? This seems to mirror the actual fluctuations in the data. If this is correct behavior, maybe add an explanation for readers (like me) who would expect to see a flat line.
Taken as a whole, this is a pretty diffuse prior, no? The predictive distribution extends ~3x higher than the maximum observed value. Also, it is probably worth acknowledging somewhere that the prior predictions include negative sales, which we would want to avoid in any real analysis.
juanitorduz commented on 2025-01-09T12:29:01Z: These are great points!
I added a Truncated Normal to make sales positive.
ErikRingen commented on 2025-01-08T11:43:24Z: Given the emphasis on domain knowledge about reasonable bounds, the negative sales in the predictive check stand out as a contradiction of this advice. Could we just use a HalfNormal likelihood to fix it?
That said, I think it is actually useful to show such nonsensical prior predictions as a teaching exercise.
juanitorduz commented on 2025-01-09T12:29:51Z: I added a Truncated Normal :)
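The truncation fix discussed here can be sketched with scipy. This is a hypothetical example with made-up mu/sigma values, not the notebook's actual model; it shows why a plain Normal prior predictive produces negative "sales" and how a lower bound at zero removes them.

```python
import numpy as np
from scipy import stats

mu, sigma = 5.0, 3.0  # made-up "sales" location and scale

# A plain Normal(5, 3) puts roughly 5% of its mass below zero:
p_negative = stats.norm(mu, sigma).cdf(0.0)

# scipy's truncnorm takes its bounds in standard-deviation units,
# so a lower bound at 0 becomes a = (0 - mu) / sigma.
a = (0.0 - mu) / sigma
truncated = stats.truncnorm(a=a, b=np.inf, loc=mu, scale=sigma)

draws = truncated.rvs(size=10_000, random_state=np.random.default_rng(1))
all_positive = bool((draws >= 0).all())  # no negative "sales" remain
```

In PyMC-Marketing this corresponds to swapping the Normal likelihood for a truncated one via the Prior class; the exact spelling depends on the library version, so treat the scipy version above as the sanity check rather than the canonical API.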
wd60622 commented on 2025-01-08T14:33:33Z: Can we put these into a single chart, in order to contrast the different prior information with the one observed data set?
juanitorduz commented on 2025-01-09T12:29:58Z: done!
wd60622 commented on 2025-01-08T14:33:34Z: Thoughts on using Prior("InverseGamma", mu=4).constrain(...) instead?
The parameters can be bracket-indexed with Prior instances in order to be used with the HSGPKwargs. I also created an issue for dot attribution of the Prior class.
juanitorduz commented on 2025-01-09T12:46:51Z: cool!
Thank you @wd60622 and @ErikRingen, great feedback! I think I have addressed all your comments (if not, please let me know). This one is ready for a second review round 🙏
Looks really good to me! It is such a big topic that of course we could think of more to add, but this seems like the right length/amount of detail for a first tutorial on prior predictions. Something perhaps for a follow-up notebook is the different types of informative/regularizing priors, the most important distinction I think being informative with respect to magnitude (i.e., a priori we expect the effect to be small) vs. informative with respect to sign (i.e., a priori we expect the effect to be positive). When teaching, I often reference the "5 levels" of priors from the Stan devs: https://github.com/stan-dev/stan/wiki/prior-choice-recommendations
Thanks for the comments! Indeed, this is a very interesting topic! I will add the Stan link as a reference (looks great!). If you are up for it, would you write a follow-up notebook, or extend this one (in an iteration), based on your valuable experience teaching these topics 💪?
@ErikRingen would you mind approving this PR then? 🤗
Yay! Thank you for the reviews!
Many of our users (and most Bayesian practitioners, as it is a key component of the Bayesian workflow) have questions about prior predictive checks. We do this in many notebooks but go through it very fast. I'd like to suggest a very introductory guide to the topic. This could be a PyMC Example, but it explicitly tackles some prior specifications in PyMC-Marketing.
📚 Documentation preview 📚: https://pymc-marketing--1334.org.readthedocs.build/en/1334/