TuringLang Roadmap #774
Comments
One thing I was thinking about is how cool it would be for us to provide a couple of "canned" models, like Bayesian linear regression, to interface with StatsModels.jl. I could see someone calling

`fit = bayeslm(@formula(y ~ 1 + x1 + x2), data = my_data_frame, nchains = 4)`

We could also do a logistic regression model, binomial, you name it. Plus, we could pretty heavily optimize the inference on the back end. It'd be a relatively quick way to get us more involved with the JuliaStats people and provide a rapid way of using Turing's awesome features without too much stress. It might get more people involved in MCMCChains as well, so we can figure out how to make it work better for everyone.
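As a sketch of what such a canned model could expand to under the hood: `bayeslm`, the particular priors, and the design-matrix handling below are all assumptions for illustration, not an existing Turing.jl API.

```julia
using Turing
using LinearAlgebra  # for the identity scaling `I`

# Hypothetical internals of a canned Bayesian linear regression:
# X is the design matrix built from the formula, y the response.
@model function linreg(X, y)
    σ ~ truncated(Normal(0, 10), 0, Inf)     # noise scale (assumed prior)
    α ~ Normal(0, 10)                        # intercept (assumed prior)
    β ~ filldist(Normal(0, 10), size(X, 2))  # coefficients (assumed prior)
    y ~ MvNormal(α .+ X * β, σ^2 * I)
end

# A `bayeslm(@formula(y ~ 1 + x1 + x2), df; nchains = 4)` wrapper would
# construct X and y from the formula and data frame, then do roughly:
# chain = sample(linreg(X, y), NUTS(), MCMCThreads(), 1_000, 4)
```

The wrapper's job would mostly be plumbing: StatsModels.jl already turns a `@formula` plus a table into a model matrix, so the Turing model itself stays generic.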
It would be cool to have this and might be a killer app for Julia. In fact, this should be low-hanging fruit given Julia's meta-programming support and Turing's inference engines.
I had actually started typing this up a little while ago and it was pretty straightforward. I can put it on my plate if you'd like, whenever I'm not working on the API overhaul.
Hi guys,

What is the current status of the MH sampler? Does it auto-optimise the variances of the proposal distributions? I am using the MH sampler for my current work (since I am facing a few troubles with autodiff, and hence cannot use HMC), but the results returned from the MH sampler looked really strange, i.e. the chain barely moved. I have tried setting initial values for my parameters in the hope of improving the sampling, but the chain only moved a bit at the beginning and then got stuck completely (this happened to all the parameters).

Anyway, I have not tried to run a long chain; the results above were from a few quick tests with only 1000 iterations. I am not sure whether this problem comes from the way I implement my model or from the MH sampler itself, so I would appreciate some advice before running more tests. Thanks a lot.

Lam
Our Metropolis-Hastings implementation is really bare-bones. Currently it's the most basic possible implementation of MH -- no adaptation of any kind. If your model is a complex one you should probably try running a lot more samples (by the way, we can't see whatever result you are mentioning). You should open a new issue with your model specification -- it might also be worth fixing your autodiff problem if possible so you can use a better sampler than MH.
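Since the MH implementation does no adaptation, the proposal scale has to be chosen by hand. A minimal sketch, assuming a recent Turing.jl API in which `MH` accepts per-parameter proposal distributions (the model and the `0.5` scale here are illustrative):

```julia
using Turing

# Toy model: a single location parameter with one observation.
@model function demo(x)
    μ ~ Normal(0, 5)
    x ~ Normal(μ, 1)
end

# Default MH: a static proposal with no adaptation -- this is the
# configuration that can leave the chain barely moving.
chain_default = sample(demo(1.5), MH(), 2_000)

# Hand-tuned random-walk proposal for μ; widen or shrink the 0.5
# until the acceptance rate looks reasonable.
chain_tuned = sample(demo(1.5), MH(:μ => Normal(0, 0.5)), 2_000)
```

If the chain still sticks, that usually points at a proposal scale mismatched to the posterior geometry rather than a bug in the model.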
The other example of "canned" Bayesian models is rstanarm: https://mc-stan.org/users/interfaces/rstanarm.html
Roadmap

- posterior server
- New Sampling Algorithms (PDSampler.jl)
- Deterministic Inference Algorithms (ForneyLab.jl)
- Tutorials

Feature Requests
Feature Requests
These are assembled in no particular order, and represent things various Turing users have requested or mentioned as something it would be nice to have.
- A formula interface like `@model Y ~ x1 + x2`. This would create a Turing model on the backend, and support functions like `predict(model)` or something. See R's brms for more info.

Misc
- … the `sample` call.