RFC Sampler type #771
Comments
Not exactly related to your specific question of sampler vs. info specialization, but I wrote the following to summarize my thoughts on separating the compiler and inference parts of Turing in general; #634 is relevant. FWIW, I think sampler specialization is probably neater, but info specialization is less hectic because of how we currently dispatch.

On the separation of compiler and inference: we currently have a number of components on Turing's inference side. An important question we need to answer here is where each struct and function should live — in Turing or in external packages? Take the most complicated case, where we need a sampler to interact with the compiler's internal state. Given the above, I think one thing that would help make Turing more modular is to combine all the compiler jargon into a single variable, and to define functions for the common operations performed on it. The above is probably in line with the thinking in #634, with the main difference being in the details.
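The idea of combining the compiler state into a single variable with functions for the common operations could be sketched roughly as below. This is a hypothetical illustration only — `CompilerState` and the accessor names are made up for this sketch, not Turing's actual API:

```julia
# Hypothetical sketch: bundle all compiler-side state into one container,
# and expose the operations inference code needs as plain functions, so
# external sampler packages depend only on this small interface.

struct CompilerState
    values::Dict{Symbol,Any}      # sampled values keyed by variable name
    logp::Base.RefValue{Float64}  # accumulated log density
end

CompilerState() = CompilerState(Dict{Symbol,Any}(), Ref(0.0))

# Common operations, defined as functions rather than direct field access:
getval(st::CompilerState, name::Symbol) = st.values[name]
setval!(st::CompilerState, name::Symbol, v) = (st.values[name] = v; st)
acclogp!(st::CompilerState, lp::Real) = (st.logp[] += lp; st)
getlogp(st::CompilerState) = st.logp[]
```

With such an interface, an external sampler never touches the container's fields directly, so the compiler internals can change without breaking inference code.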
I had some similar thoughts here as well.
@mohamed82008 There are many good points in your comments. I have been thinking about some similar ideas. I think we need to carefully re-design some core APIs to make the boundary between Turing and its external world (e.g. …) cleaner.

At the moment, these PRs still need some effort on separating, simplifying, and documenting APIs and internal functions. As a side note, we might need a more thorough code-review process and a few more team hackathons. Fortunately, the Turing code base is relatively small, so with time we can turn Turing into a Swiss-knife-style library for probabilistic machine learning and Bayesian statistics.
@mohamed82008 Most of what you wrote is pretty much along the lines of what I'm currently trying. Once we have the PRs ready and merged we will probably need a few more rounds, but I feel we are going in the right direction.

@yebai A Swiss-knife-style library for probabilistic machine learning sounds awesome; this would be a great selling point for Turing. I think the documentation and tutorials of Turing already highlight that we are headed in this direction. Maybe we can put even more effort into the tutorials once in a while and showcase the wide application of Turing to various domains and tasks.
Duplicate of #746 (comment)
We currently use `spl.info`, which is a dictionary, and we would like to remove it. There are a few ways:

- define specialized `Sampler` types like `HMCSampler`, `PGSampler`, etc.; or
- define `HMCInfo`, `PGInfo`, etc. types and set them as `spl.info`.

Related issues: #602
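A minimal sketch of the two options, to make the trade-off concrete. The type names `HMCSampler`/`HMCInfo` come from the issue text above, but the fields and the `step_size` accessor are illustrative assumptions, not Turing's actual API:

```julia
abstract type AbstractSampler end

# Option 1: specialize the Sampler type itself.
struct HMCSampler <: AbstractSampler
    n_leapfrog::Int
    step_size::Float64
end

# Option 2: keep a generic Sampler whose info field is a concrete
# per-algorithm struct (replacing the current Dict-typed spl.info).
struct HMCInfo
    n_leapfrog::Int
    step_size::Float64
end

struct Sampler{A,I} <: AbstractSampler
    alg::A
    info::I   # e.g. HMCInfo instead of Dict{Symbol,Any}
end

# Either way, inference code can dispatch on a concrete type:
step_size(spl::HMCSampler) = spl.step_size
step_size(spl::Sampler{<:Any,HMCInfo}) = spl.info.step_size
```

In both options dispatch replaces dictionary lookups; the difference is whether the specialization lives in the sampler type itself or only in its `info` field.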