# Networks of discrete random variables #98
Hi @nathanielvirgo! Thanks for trying out ReactiveMP.jl. At the moment the package runs inference analytically through message passing, which requires a conjugate model structure. As for the second model, there are two problems with it as well. In the near future we plan to broaden support for models like these. In case of urgency, you can check out ForneyLab.jl, in particular the notebook on Variational Laplace and Sampling, which provides approximate posteriors in a wider range of models (these features will be ported to ReactiveMP soon, so stay tuned).

I guess @bvdmitri and @bartvanerp will elaborate more on this issue/question.
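To make the conjugacy point concrete (a textbook fact, added here only for context): with a Beta prior, the Bernoulli likelihood updates in closed form, which is exactly what analytical message passing exploits:

$$
\mathrm{Beta}(\theta \mid a, b)\,\prod_{i=1}^{n}\mathrm{Bernoulli}(y_i \mid \theta)
\;\propto\;
\mathrm{Beta}\Big(\theta \;\Big|\; a + \sum_{i} y_i,\; b + n - \sum_{i} y_i\Big).
$$

With a Bernoulli prior on $\theta$ the situation differs: the prior lives on $\{0, 1\}$ while the Beta-shaped message is a density on $[0, 1]$, and no analytical product rule for that pairing is registered in the package (even though, as the question notes, the product can be evaluated by hand on the two support points).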
---

Hey @nathanielvirgo! Thanks for your question. As Albert already pointed out, the speed and efficiency of message passing (and of the ReactiveMP.jl implementation) come from the fact that we run inference in large parts of the graph analytically, hence requiring a conjugate model structure.

Regarding your first model: there is no analytical solution for the product of Bernoulli and Beta distributions. What we mean here is that there is no known closed-form distribution implemented in the package that represents the product of those two.

However, ReactiveMP.jl provides a straightforward API to inject approximation methods and run inference on non-conjugate models. This is somewhat advanced usage of our API, but it is definitely possible. We support this through constraints on the posterior marginals, for example:

```julia
@model function coin_model(n)
    y = datavar(Float64, n)
    θ ~ NormalMeanVariance(0.0, 10.0) # I changed the prior here, also non-conjugate
    for i in 1:n
        y[i] ~ Bernoulli(θ)
    end
    return y, θ
end
```

If you run this model without constraints, it will give a similar error.
However, with the following constraints it runs:

```julia
constraints = @constraints begin
    q(θ) :: SampleList(1000, RightProposal())
end

n = 10

result = inference(
    model = Model(coin_model, n),
    constraints = constraints,
    returnvars = (θ = KeepLast(), ),
    data = (y = rand(n), )
)

mean(result.posteriors[:θ]) # some value here, 0.450794357590217 in my case
```

If you really want to use a Bernoulli prior here, the same approach should apply. As for the second model, I'm not sure what you mean by scaling and shifting (the `+` in your snippet?).
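A note on what the constraint does (my reading of the API, not an authoritative description): `SampleList(1000, RightProposal())` tells the engine to represent `q(θ)` by 1000 weighted samples, drawing proposals from one of the two colliding messages (here the right one) and weighting them by the other, instead of requiring a closed-form product. That is how it sidesteps the conjugacy requirement discussed above.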
---

This is more of a question than an issue. (If there's a better place to ask it, please let me know.)
I'm hoping to do inference on networks of (mostly) discrete random variables. ReactiveMP sounded like it might be well suited to my applications, because message passing is likely to perform a lot better than the MCMC algorithms that other PPLs tend to use.
However, I can't get some simple examples to work, and I'm wondering whether ReactiveMP can be used for this sort of thing or if I'm just barking up the wrong tree.
Here's the simplest thing I tried, modifying the coin flip example from the getting started page:
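A minimal sketch matching the description below; the exact snippet is an assumption, reusing the `datavar` pattern from the getting-started example:

```julia
@model function coin_model()
    y = datavar(Float64)
    θ ~ Bernoulli(0.5)  # the 'parameter' θ is itself a Bernoulli draw
    y ~ Bernoulli(θ)    # y is an exact copy of θ
    return y, θ
end

result = inference(
    model = Model(coin_model),
    data  = (y = 1.0, )  # condition on y = 1
)
```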
Here the 'parameter' θ is drawn from a Bernoulli distribution and y is just a copy of θ. We condition on y = 1, so we hope to find that the posterior for θ is also concentrated on 1. However, it instead fails with an error message (plus a stack trace) about the product of a Bernoulli and a Beta distribution.
I understand this error, though it seems that calculating the product of those distributions analytically should be straightforward (you just evaluate the beta distribution at each point in the support of the Bernoulli one). Does this mean this kind of thing isn't supported?
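For what it's worth, that by-hand product is easy to sketch with Distributions.jl (the `Beta(2, 1)` message below is just an illustrative stand-in for whatever the likelihood side would send):

```julia
using Distributions

# Product of a Bernoulli prior and a Beta-shaped message, evaluated
# on the Bernoulli's support {0, 1} and renormalized by hand.
prior   = Bernoulli(0.5)
message = Beta(2.0, 1.0)  # stand-in for the likelihood-side message

unnorm    = [pdf(prior, x) * pdf(message, x) for x in (0.0, 1.0)]
posterior = unnorm ./ sum(unnorm)  # a two-point (Bernoulli) distribution
```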
I also tried some variations on this slightly less silly model, where y is a noisy copy of θ instead of an exact copy:
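One variation along those lines (the exact form and the flip probability are assumptions; the point is the `*` and `+` applied to θ inside the Bernoulli rate):

```julia
@model function noisy_coin_model()
    y = datavar(Float64)
    θ ~ Bernoulli(0.5)
    # y equals θ but is flipped with probability 0.1,
    # written as a scaled-and-shifted Bernoulli rate
    y ~ Bernoulli(θ * 0.8 + 0.1)
    return y, θ
end
```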
but I couldn't get any version of this to work either. (This version doesn't seem to like the `+`.)

In short, my question is whether I can use ReactiveMP to build up networks of discrete random variables and perform inference on them, or whether this is just not supported.