Filter free estimation #30
base: filter-free
Conversation
Implementation of the filter-free DSGE estimation in MacroModelling.jl
The file is a test suite for filter-free DSGE estimation. It:
- solves the RBC model using MacroModelling.jl,
- draws structural shocks and initial conditions from a t-distribution,
- simulates the data,
- re-estimates the shocks using filter-free estimation.
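The steps above could be sketched roughly as follows (a minimal stand-in using a generic first-order solution `x_t = A x_{t-1} + B ε_t` with observation `y_t = C x_t`; the matrices and dimensions here are illustrative placeholders, not the actual decision rules MacroModelling.jl returns for the RBC model):

```julia
using Distributions, LinearAlgebra, Random

Random.seed!(1)

# Illustrative first-order solution matrices (stand-ins for the RBC solution)
A = [0.9 0.1; 0.0 0.5]
B = [1.0 0.0; 0.0 1.0]
C = [1.0 0.0]

T, ν = 200, 4.0
ε  = rand(TDist(ν), 2, T)      # structural shocks drawn from a t-distribution
x0 = rand(TDist(ν), 2)         # t-distributed initial condition

x = zeros(2, T)
x[:, 1] = A * x0 + B * ε[:, 1]
for t in 2:T
    x[:, t] = A * x[:, t-1] + B * ε[:, t]
end
y = C * x                      # simulated observables to re-estimate from
```

The filter-free step then treats `ε` (and `x0`) as latent objects to be recovered from `y` alongside the structural parameters.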
Fixed simulation exercise
Codecov Report

Coverage Diff (base `filter-free` vs. #30; base coverage unknown):

|          | filter-free | #30    |
|----------|-------------|--------|
| Coverage | ?           | 60.76% |
| Files    | ?           | 7      |
| Lines    | ?           | 3051   |
| Branches | ?           | 0      |
| Hits     | ?           | 1854   |
| Misses   | ?           | 1197   |
| Partials | ?           | 0      |

View full report in Codecov by Sentry.
…delling.jl into pr/matyasfarkas/30
@matyasfarkas Suggested next steps (with the setup that works):
I continued playing around with it, and there is an issue with the gradient size of the shocks relative to the parameters. The gradients for the shocks need to be large enough that the sampler recovers the shocks correctly, but if they are too large it barely moves the parameters at all. Todos:
diffstatespacemodels uses the MvNormal and they also do not recover the latent shocks to high precision. Using other loss functions instead of the MvNormal comes down to the same thing and is a dead end. The std is the key tuning parameter, balancing convergence (too small → no convergence), speed (smaller → slower), and accuracy (smaller → more accurate estimates of the latent shocks and shock size).
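The trade-off around the measurement std can be seen in a toy log-likelihood (a sketch, not the actual code under discussion): the data are matched through `MvNormal` with scale `Ω`, so a small `Ω` rewards an exact fit of the latent shocks but makes any misfit, e.g. from a slightly wrong parameter draw, extremely costly, which is what stalls convergence.

```julia
using Distributions, LinearAlgebra

# Toy measurement log-likelihood: y_t ~ MvNormal(ŷ_t(ε, θ), Ω² I).
# Ω controls how hard the sampler is pulled toward fitting the data exactly.
function loglik(y, ŷ, Ω)
    sum(logpdf(MvNormal(ŷ[:, t], Ω^2 * I(size(y, 1))), y[:, t])
        for t in 1:size(y, 2))
end

y = [0.0 1.0; 1.0 0.0]
ŷ = y .+ 0.5          # model-implied observables for a somewhat-off draw

# The same misfit is punished far more harshly under a small Ω,
# so the posterior becomes very peaked and hard to traverse.
loglik(y, ŷ, 0.1) < loglik(y, ŷ, 1.0)
```

This is why shrinking the std improves the accuracy of the recovered shocks but slows, and eventually breaks, convergence.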
I wrote an issue for MuseInference to help us get it working again. In the meantime, I would suggest trying the approach that is suggested as an alternative in the MuseInference paper and is easy to implement in Turing. Here is some theory on combining samplers and some more info about variational inference. I think this could serve as a second best, with the first best potentially being MuseInference.
In hindsight, maybe the trick is really to break the correlation structure between the latent states and the parameters, because for small Omega that seems to be where the sampler got stuck.
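One standard way to weaken exactly this kind of latent-state/parameter correlation (not something proposed in this thread, just the textbook non-centered reparameterization, sketched here in Turing with made-up model names and a fixed illustrative measurement std of 0.1):

```julia
using Turing, LinearAlgebra

# Centered: for small σ the latents z and σ are strongly correlated
# (the classic "funnel" geometry that samplers get stuck in).
@model function centered(y)
    σ ~ truncated(Normal(0, 1); lower = 0)
    z ~ MvNormal(zeros(length(y)), σ^2 * I)
    y ~ MvNormal(z, 0.1^2 * I)
end

# Non-centered: sample standardized latents and scale them afterwards,
# which removes the direct dependence of the latents' prior scale on σ.
@model function noncentered(y)
    z_std ~ MvNormal(zeros(length(y)), I)
    σ ~ truncated(Normal(0, 1); lower = 0)
    y ~ MvNormal(σ .* z_std, 0.1^2 * I)
end
```

In the DSGE setting the analogue would be sampling standardized shocks and scaling them by the shock standard deviations inside the model, rather than sampling the shocks on their natural scale.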
After some investigation we figured out that MuseInference works if Julia is single-threaded; it errors with more than one thread.
I experimented a bit with Turing, using a Gibbs sampler to separate the sampling of the latent states and the parameters. My experiments showed that this doesn't really help in making the sampling more efficient. While playing around I came up with another idea: instead of integrating out the latent states, minimise the squared sum of the shock sizes that make the model fit the data exactly. Another issue is the initial value: instead of working out the likelihood of the initial state under the ergodic distribution, I would add extra shocks before the time series starts and set the initial value to the NSSS.
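The proposed objective could be sketched as follows (assuming a linear solution `x_t = A x_{t-1} + B ε_t`, `y_t = C x_t`, with as many shocks as observables so that the fit can be exact; names and matrices are illustrative): back out, period by period, the shocks that make the model reproduce the data, and score a parameter draw by their squared sum.

```julia
using LinearAlgebra

# Sum of squared shocks needed to fit the data exactly.
# x0 defaults to zero, i.e. starting at the (deviations-from-)NSSS,
# as suggested in the comment above.
function shock_sum_of_squares(y, A, B, C; x0 = zeros(size(A, 1)))
    CB  = C * B
    x   = x0
    obj = 0.0
    for t in 1:size(y, 2)
        ε = CB \ (y[:, t] - C * A * x)   # shocks that fit y_t exactly
        x = A * x + B * ε
        obj += sum(abs2, ε)
    end
    return obj
end
```

At the true parameters and initial condition, the recovered shocks coincide with the ones that generated the data, so the objective reduces to the true shocks' sum of squares; wrong parameters require larger shocks to fit the data and are penalised accordingly.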
Another thing I figured out is that any symmetric distribution is approximated by a minimal sum of squares. In that sense, estimating the degrees of freedom of the Student-t distribution with a uniform prior was at best confusing for the sampler.
After some thinking: the easiest would be to use a UKF. See an implementation here.
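For reference, a generic textbook unscented Kalman filter log-likelihood for additive Gaussian noise (a self-contained sketch of the technique, not the implementation linked above; the simplified symmetric sigma-point weights omit the usual α/β corrections):

```julia
using LinearAlgebra

# Sigma points and weights for mean μ, covariance P (simplified scheme).
function sigma_points(μ, P; λ = 1e-3)
    n = length(μ)
    S = cholesky(Symmetric((n + λ) * P)).L
    X = [μ (μ .+ S) (μ .- S)]                     # n × (2n+1) sigma points
    w = [λ / (n + λ); fill(1 / (2(n + λ)), 2n)]
    return X, w
end

# UKF log-likelihood for x_t = f(x_{t-1}) + w_t, y_t = h(x_t) + v_t,
# with w_t ~ N(0, Q) and v_t ~ N(0, R).
function ukf_loglik(y, f, h, Q, R, μ0, P0)
    μ, P, ll = μ0, P0, 0.0
    for t in 1:size(y, 2)
        X, w = sigma_points(μ, P)
        Xp = reduce(hcat, [f(X[:, i]) for i in eachindex(w)])   # propagate
        μp = Xp * w
        Pp = sum(w[i] * (Xp[:, i] - μp) * (Xp[:, i] - μp)' for i in eachindex(w)) + Q
        X, w = sigma_points(μp, Pp)
        Yp = reduce(hcat, [h(X[:, i]) for i in eachindex(w)])   # predict obs
        ŷ  = Yp * w
        S  = sum(w[i] * (Yp[:, i] - ŷ) * (Yp[:, i] - ŷ)' for i in eachindex(w)) + R
        Cxy = sum(w[i] * (X[:, i] - μp) * (Yp[:, i] - ŷ)' for i in eachindex(w))
        K = Cxy / S
        r = y[:, t] - ŷ
        μ = μp + K * r
        P = Pp - K * S * K'
        ll += -0.5 * (length(r) * log(2π) + logdet(S) + r' * inv(S) * r)
    end
    return ll
end
```

Unlike the filter-free approach, this integrates out the latent states (approximately, via the unscented transform) and handles nonlinear `f` and `h` without derivatives, at the cost of assuming Gaussian noise.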