Question about code errors and output interpretation #1
Comments
Thanks for this. I think there are a few things to mention here.
The message you report should not stop the script (can you confirm)?
After you've run the command, everything else sort of depends on this, I think: you're missing one file, which in turn doesn't let you save other outputs, which in turn breaks the rest of the script... Have you tried running the script bit-by-bit (following the numbering)? Can you post the list of files you get in the output folder?
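For reference, a minimal way to produce that list from R (a hedged sketch; it assumes the working directory is the repository root):

# Hedged sketch: show which output files have actually been written so far
list.files("./Output/Females")
list.files("./Output/Males")
file.exists("./Graphs/nordovest.graph")   # the graph file built from area = "NordOvest"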
Hello, Thanks for the reply.
> # Code for the paper
> # "Estimating weekly excess mortality at sub-national level in Italy during the COVID-19 pandemic"
> # by Marta Blangiardo, Michela Cameletti, Monica Pirani, Gianni Corsetti, Marco Battaglini, Gianluca Baio
>
> # Last version: 13/08/2020
>
> library(dplyr)
Attaching package: ‘dplyr’
The following objects are masked from ‘package:stats’:
filter, lag
The following objects are masked from ‘package:base’:
intersect, setdiff, setequal, union
> library(INLA)
Loading required package: Matrix
Loading required package: sp
Loading required package: parallel
Loading required package: foreach
This is INLA_20.03.17 built 2020-11-16 11:05:43 UTC.
See www.r-inla.org/contact-us for how to get help.
>
> # Load extra functions
> source("make.functions.R")
>
> ############################################################################
> # 1. PREPARE DATA
> ############################################################################
> ## Load the macro areas data
> load("./Data/MacroRegions.Rdata")
>
> # Now prepare the data
> Sex = "Females" #other possible choice: Males
> area = "NordOvest" #other possible choices: NordEst, Sud, Centro, Lombardia
> data = make.data(macro.regions,area,Sex=Sex)
Warning messages:
1: Problem with `mutate()` input `IDarea`.
i The `...` argument of `group_keys()` is deprecated as of dplyr 1.0.0.
Please `group_by()` first
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated.
i Input `IDarea` is `group_indices(., ID_Ita)`.
2: Problem with `mutate()` input `IDarea`.
i The `...` argument of `group_keys()` is deprecated as of dplyr 1.0.0.
Please `group_by()` first
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated.
i Input `IDarea` is `group_indices(., ID_Ita)`.
3: The `...` argument of `group_keys()` is deprecated as of dplyr 1.0.0.
Please `group_by()` first
This warning is displayed once every 8 hours.
Call `lifecycle::last_warnings()` to see where this warning was generated.
> graph = paste0("./Graphs/",tolower(area),".graph")
>
> ############################################################################
>
>
> ############################################################################
> # 2. RUN INLA & SAVE THE OUTPUT
> ############################################################################
>
> ## Formula with temperature
> formula = morti ~ 1 +
+ f(ID1,model="bym",graph=graph,scale.model=T,
+ hyper=list(theta1=list(prior="loggamma",param=c(1,0.1)),theta2=list(prior="loggamma",param=c(1,0.1)))) +
+ f(week,model="rw1",replicate=ID_prov,scale.model=TRUE,hyper=list(prec=list(prior="loggamma",param=c(1,0.1)))) +
+ f(Anno,model="iid") +
+ f(IDtemp,model="rw2",scale.model=TRUE,hyper=list(theta=list(prior="loggamma",param=c(1,0.1))))
>
>
> # INLA SET UP
> # Under Poisson uses default set up
> control.family=inla.set.control.family.default()
> # Defines the correct variable to offset the rates in the log-linear predictor
> offset = data$E
>
> m = inla(formula,
+ data=data,
+ E=offset,
+ family="Poisson",
+ control.family=control.family,
+ verbose = TRUE,
+ num.threads = round(parallel::detectCores()*.8),
+ control.compute=list(config = TRUE)
+ )
hgid: 8b30e851fed2 date: Tue Mar 17 10:38:12 2020 +0300
Report bugs to <help@r-inla.org>
Process file[C:\Users\claud\AppData\Local\Temp\RtmpEHnKic\file48f03ca36bfa/Model.ini] threads[13] blas_threads[1]
inla_build...
number of sections=[11]
parse section=[0] name=[INLA.libR] type=[LIBR]
inla_parse_libR...
section[INLA.libR]
R_HOME=[C:/PROGRA~1/R/R-40~1.3]
parse section=[10] name=[INLA.Expert] type=[EXPERT]
inla_parse_expert...
section[INLA.Expert]
disable.gaussian.check=[0]
cpo.manual=[0]
jp.file=[(null)]
jp.model=[(null)]
parse section=[1] name=[INLA.Model] type=[PROBLEM]
inla_parse_problem...
name=[INLA.Model]
R-INLA tag=[Version_20.03.17]
Build tag=[Version_20.03.17]
openmp.strategy=[default]
pardiso-library installed and working? = [no]
smtp = [taucs]
strategy = [default]
store results in directory=[C:\Users\claud\AppData\Local\Temp\RtmpEHnKic\file48f03ca36bfa/results.files]
output:
cpo=[0]
po=[0]
dic=[0]
kld=[1]
mlik=[1]
q=[0]
graph=[0]
gdensity=[0]
hyperparameters=[1]
summary=[1]
return.marginals=[1]
nquantiles=[3] [ 0.025 0.5 0.975 ]
ncdf=[0] [ ]
parse section=[3] name=[Predictor] type=[PREDICTOR]
inla_parse_predictor ...
section=[Predictor]
dir=[predictor]
PRIOR->name=[loggamma]
hyperid=[53001|Predictor]
PRIOR->from_theta=[function (x) <<NEWLINE>>exp(x)]
PRIOR->to_theta = [function (x) <<NEWLINE>>log(x)]
PRIOR->PARAMETERS=[1, 1e-005]
initialise log_precision[12]
fixed=[1]
user.scale=[1]
n=[139400]
m=[0]
ndata=[139400]
compute=[0]
read offsets from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b]
read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b]
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 0/139400 (idx,y) = (0, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 1/139400 (idx,y) = (1, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 2/139400 (idx,y) = (2, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 3/139400 (idx,y) = (3, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 4/139400 (idx,y) = (4, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 5/139400 (idx,y) = (5, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 6/139400 (idx,y) = (6, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 7/139400 (idx,y) = (7, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 8/139400 (idx,y) = (8, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06675280b] 9/139400 (idx,y) = (9, 0)
Aext=[(null)]
AextPrecision=[1e+008]
output:
summary=[1]
return.marginals=[1]
nquantiles=[3] [ 0.025 0.5 0.975 ]
ncdf=[0] [ ]
parse section=[2] name=[INLA.Data1] type=[DATA]
inla_parse_data [section 1]...
tag=[INLA.Data1]
family=[POISSON]
likelihood=[POISSON]
file->name=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f027895b5b]
file->name=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f08127ab0]
file->name=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f04d82196]
read n=[418200] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f027895b5b]
mdata.nattributes = 0
0/139400 (idx,a,y,d) = (0, 0.61143, 1, 1)
1/139400 (idx,a,y,d) = (1, 0.61143, 0, 1)
2/139400 (idx,a,y,d) = (2, 0.61143, 0, 1)
3/139400 (idx,a,y,d) = (3, 0.61143, 0, 1)
4/139400 (idx,a,y,d) = (4, 0.61143, 2, 1)
5/139400 (idx,a,y,d) = (5, 0.61143, 0, 1)
6/139400 (idx,a,y,d) = (6, 0.61143, 0, 1)
7/139400 (idx,a,y,d) = (7, 0.61143, 0, 1)
8/139400 (idx,a,y,d) = (8, 0.61143, 2, 1)
9/139400 (idx,a,y,d) = (9, 0.61143, 0, 1)
likelihood.variant=[0]
Link model [LOG]
Link order [-1]
Link variant [-1]
Link ntheta [0]
mix.use[0]
parse section=[5] name=[ID1] type=[FFIELD]
inla_parse_ffield...
section=[ID1]
dir=[random.effect00000001]
model=[bym]
PRIOR0->name=[loggamma]
hyperid=[10001|ID1]
PRIOR0->from_theta=[function (x) <<NEWLINE>>exp(x)]
PRIOR0->to_theta = [function (x) <<NEWLINE>>log(x)]
PRIOR0->PARAMETERS0=[1, 0.1]
PRIOR1->name=[loggamma]
hyperid=[10002|ID1]
PRIOR1->from_theta=[function (x) <<NEWLINE>>exp(x)]
PRIOR1->to_theta = [function (x) <<NEWLINE>>log(x)]
PRIOR1->PARAMETERS1=[1, 0.1]
correct=[-1]
constr=[0]
diagonal=[1.01511e-005]
id.names=<not present>
compute=[1]
nrep=[1]
ngroup=[1]
read covariates from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e]
read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e]
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 0/139400 (idx,y) = (0, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 1/139400 (idx,y) = (1, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 2/139400 (idx,y) = (2, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 3/139400 (idx,y) = (3, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 4/139400 (idx,y) = (4, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 5/139400 (idx,y) = (5, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 6/139400 (idx,y) = (6, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 7/139400 (idx,y) = (7, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 8/139400 (idx,y) = (8, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f013e9677e] 9/139400 (idx,y) = (9, 0)
read graph from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f035f73d0e]
file for locations=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f060e11874]
nlocations=[2050]
locations[0]=[1]
locations[1]=[2]
locations[2]=[3]
locations[3]=[4]
locations[4]=[5]
locations[5]=[6]
locations[6]=[7]
locations[7]=[8]
locations[8]=[9]
locations[9]=[10]
initialise log_precision (iid component)[4]
fixed=[0]
initialise log_precision (spatial component)[4]
fixed=[0]
adjust.for.con.comp[1]
scale.model[1]
connected component[0] size[2050] scale[0.522871]
scale.model: prec_scale[0.522871]
read extra constraint from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06cb54d15]
Constraint[0]
A[2050] = 1.000000
A[2051] = 1.000000
A[2052] = 1.000000
A[2053] = 1.000000
A[2054] = 1.000000
A[2055] = 1.000000
A[2056] = 1.000000
A[2057] = 1.000000
A[2058] = 1.000000
A[2059] = 1.000000
A[2060] = 1.000000
e[0] = 0.000000
rank-deficiency is *defined* [1]
output:
summary=[1]
return.marginals=[1]
nquantiles=[3] [ 0.025 0.5 0.975 ]
ncdf=[0] [ ]
parse section=[6] name=[week] type=[FFIELD]
inla_parse_ffield...
section=[week]
dir=[random.effect00000002]
model=[rw1]
PRIOR->name=[loggamma]
hyperid=[4001|week]
PRIOR->from_theta=[function (x) <<NEWLINE>>exp(x)]
PRIOR->to_theta = [function (x) <<NEWLINE>>log(x)]
PRIOR->PARAMETERS=[1, 0.1]
correct=[-1]
constr=[1]
diagonal=[1.01511e-005]
id.names=<not present>
compute=[1]
nrep=[19]
ngroup=[1]
read covariates from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3]
read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3]
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 0/139400 (idx,y) = (0, 255)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 1/139400 (idx,y) = (1, 256)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 2/139400 (idx,y) = (2, 257)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 3/139400 (idx,y) = (3, 258)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 4/139400 (idx,y) = (4, 259)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 5/139400 (idx,y) = (5, 260)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 6/139400 (idx,y) = (6, 261)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 7/139400 (idx,y) = (7, 262)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 8/139400 (idx,y) = (8, 263)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f056101aa3] 9/139400 (idx,y) = (9, 264)
file for locations=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f01472b0e]
nlocations=[17]
locations[0]=[1]
locations[1]=[2]
locations[2]=[3]
locations[3]=[4]
locations[4]=[5]
locations[5]=[6]
locations[6]=[7]
locations[7]=[8]
locations[8]=[9]
locations[9]=[10]
cyclic=[0]
initialise log_precision[4]
fixed=[0]
scale.model[1]
scale.model: prec_scale[2.41758]
computed/guessed rank-deficiency = [1]
output:
summary=[1]
return.marginals=[1]
nquantiles=[3] [ 0.025 0.5 0.975 ]
ncdf=[0] [ ]
parse section=[7] name=[Anno] type=[FFIELD]
inla_parse_ffield...
section=[Anno]
dir=[random.effect00000003]
model=[iid]
PRIOR->name=[loggamma]
hyperid=[1001|Anno]
PRIOR->from_theta=[function (x) <<NEWLINE>>exp(x)]
PRIOR->to_theta = [function (x) <<NEWLINE>>log(x)]
PRIOR->PARAMETERS=[1, 5e-005]
correct=[-1]
constr=[0]
diagonal=[0]
id.names=<not present>
compute=[1]
nrep=[1]
ngroup=[1]
read covariates from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5]
read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5]
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 0/139400 (idx,y) = (0, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 1/139400 (idx,y) = (1, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 2/139400 (idx,y) = (2, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 3/139400 (idx,y) = (3, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 4/139400 (idx,y) = (4, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 5/139400 (idx,y) = (5, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 6/139400 (idx,y) = (6, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 7/139400 (idx,y) = (7, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 8/139400 (idx,y) = (8, 0)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f029024f5] 9/139400 (idx,y) = (9, 0)
file for locations=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f058f67d9b]
nlocations=[4]
locations[0]=[1]
locations[1]=[2]
locations[2]=[3]
locations[3]=[4]
cyclic=[0]
initialise log_precision[4]
fixed=[0]
computed/guessed rank-deficiency = [0]
output:
summary=[1]
return.marginals=[1]
nquantiles=[3] [ 0.025 0.5 0.975 ]
ncdf=[0] [ ]
parse section=[8] name=[IDtemp] type=[FFIELD]
inla_parse_ffield...
section=[IDtemp]
dir=[random.effect00000004]
model=[rw2]
PRIOR->name=[loggamma]
hyperid=[5001|IDtemp]
PRIOR->from_theta=[function (x) <<NEWLINE>>exp(x)]
PRIOR->to_theta = [function (x) <<NEWLINE>>log(x)]
PRIOR->PARAMETERS=[1, 0.1]
correct=[-1]
constr=[1]
diagonal=[1.01511e-005]
id.names=<not present>
compute=[1]
nrep=[1]
ngroup=[1]
read covariates from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4]
read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4]
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 0/139400 (idx,y) = (0, 17)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 1/139400 (idx,y) = (1, 28)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 2/139400 (idx,y) = (2, 10)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 3/139400 (idx,y) = (3, 28)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 4/139400 (idx,y) = (4, 49)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 5/139400 (idx,y) = (5, 31)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 6/139400 (idx,y) = (6, 27)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 7/139400 (idx,y) = (7, 43)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 8/139400 (idx,y) = (8, 40)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f07a3d37b4] 9/139400 (idx,y) = (9, 22)
file for locations=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f06c473572]
nlocations=[100]
locations[0]=[1]
locations[1]=[2]
locations[2]=[3]
locations[3]=[4]
locations[4]=[5]
locations[5]=[6]
locations[6]=[7]
locations[7]=[8]
locations[8]=[9]
locations[9]=[10]
cyclic=[0]
initialise log_precision[4]
fixed=[0]
scale.model[1]
scale.model: prec_scale[1678.49]
computed/guessed rank-deficiency = [2]
output:
summary=[1]
return.marginals=[1]
nquantiles=[3] [ 0.025 0.5 0.975 ]
ncdf=[0] [ ]
section=[4] name=[(Intercept)] type=[LINEAR]
inla_parse_linear...
section[(Intercept)]
dir=[fixed.effect00000001]
file for covariates=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651]
read n=[278800] entries from file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651]
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 0/139400 (idx,y) = (0, 1)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 1/139400 (idx,y) = (1, 1)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 2/139400 (idx,y) = (2, 1)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 3/139400 (idx,y) = (3, 1)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 4/139400 (idx,y) = (4, 1)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 5/139400 (idx,y) = (5, 1)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 6/139400 (idx,y) = (6, 1)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 7/139400 (idx,y) = (7, 1)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 8/139400 (idx,y) = (8, 1)
file=[C:/Users/claud/AppData/Local/Temp/RtmpEHnKic/file48f03ca36bfa/data.files/file48f05e26651] 9/139400 (idx,y) = (9, 1)
prior mean=[0]
prior precision=[0]
compute=[1]
output:
summary=[1]
return.marginals=[1]
nquantiles=[3] [ 0.025 0.5 0.975 ]
ncdf=[0] [ ]
Index table: number of entries[6], total length[143928]
tag start-index length
Predictor 0 139400
ID1 139400 4100
week 143500 323
Anno 143823 4
IDtemp 143827 100
(Intercept) 143927 1
parse section=[9] name=[INLA.Parameters] type=[INLA]
inla_parse_INLA...
section[INLA.Parameters]
lincomb.derived.only = [Yes]
lincomb.derived.correlation.matrix = [No]
global_node.factor = 2.000
global_node.degree = 2147483647
reordering = -1
Contents of ai_param 0000000005B4BD90
Optimiser: DEFAULT METHOD
Option for GSL-BFGS2: tol = 0.1
Option for GSL-BFGS2: step_size = 1
Option for GSL-BFGS2: epsx = 0.005
Option for GSL-BFGS2: epsf = 0.000353553
Option for GSL-BFGS2: epsg = 0.005
Restart: 0
Mode known: No
Gaussian approximation:
tolerance_func = 0.0005
tolerance_step = 0.0005
optpar_fp = 0
optpar_nr_step_factor = -0.1
Gaussian data: No
Strategy: Use a mean-skew corrected Gaussian by fitting a Skew-Normal
Fast mode: On
Use linear approximation to log(|Q +c|)? Yes
Method: Compute the derivative exact
Parameters for improved approximations
Number of points evaluate: 9
Step length to compute derivatives numerically: 0.000100002
Stencil to compute derivatives numerically: 5
Cutoff value to construct local neigborhood: 0.0001
Log calculations: On
Log calculated marginal for the hyperparameters: On
Integration strategy: Automatic (GRID for dim(theta)=1 and 2 and otherwise CCD)
f0 (CCD only): 1.100000
dz (GRID only): 0.750000
Adjust weights (GRID only): On
Difference in log-density limit (GRID only): 6.000000
Skip configurations with (presumed) small density (GRID only): On
Gradient is computed using Central difference with step-length 0.010000
Hessian is computed using Central difference with step-length 0.100000
Hessian matrix is forced to be a diagonal matrix? [No]
Compute effective number of parameters? [Yes]
Perform a Monte Carlo error-test? [No]
Interpolator [Auto]
CPO required diff in log-density [3]
Stupid search mode:
Status [On]
Max iter [1000]
Factor [1.05]
Numerical integration of hyperparameters:
Maximum number of function evaluations [100000]
Relative error ....................... [1e-005]
Absolute error ....................... [1e-006]
To stabilise the numerical optimisation:
Minimum value of the -Hessian [-1.#INF]
Strategy for the linear term [Keep]
CPO manual calculation[No]
Laplace-correction is Disabled.
inla_build: check for unused entries in[C:\Users\claud\AppData\Local\Temp\RtmpEHnKic\file48f03ca36bfa/Model.ini]
inla_INLA...
Strategy = [DEFAULT]
Sparse-matrix library... = [taucs]
OpenMP strategy......... = [huge]
Density-strategy........ = [Low]
Size of graph........... = [143928]
Number of constraints... = [21]
Found optimal reordering=[amdc] nnz(L)=[1311252] and use_global_nodes(user)=[no]
List of hyperparameters:
theta[0] = [Log precision for ID1 (idd component)]
theta[1] = [Log precision for ID1 (spatial component)]
theta[2] = [Log precision for week]
theta[3] = [Log precision for Anno]
theta[4] = [Log precision for IDtemp]
Optimise using DEFAULT METHOD
max.logdens= -96250.2972 fn= 1 theta= 4.0100 4.0000 4.0000 4.0000 4.0000 range=[-0.57 0.87]
max.logdens= -96219.2278 fn= 11 theta= 4.7114 4.2403 4.6589 4.0302 3.9658 range=[-0.55 0.76]
max.logdens= -96219.1571 fn= 12 theta= 4.7114 4.2403 4.6589 4.0302 3.9758 range=[-0.55 0.76]
max.logdens= -96218.9925 fn= 13 theta= 4.7114 4.2403 4.6689 4.0302 3.9658 range=[-0.55 0.76]
Iter=1 |grad|=17.4 |x-x.old|=0.447 |f-f.old|=31.5
max.logdens= -96215.9208 fn= 23 theta= 4.6077 4.0979 4.9817 4.0792 3.9266 range=[-0.55 0.79]
max.logdens= -96215.8882 fn= 24 theta= 4.6077 4.0979 4.9917 4.0792 3.9266 range=[-0.55 0.79]
max.logdens= -96215.8857 fn= 26 theta= 4.6077 4.0979 4.9817 4.0792 3.9166 range=[-0.55 0.79]
max.logdens= -96215.8802 fn= 29 theta= 4.6177 4.0979 4.9817 4.0792 3.9266 range=[-0.55 0.79]
max.logdens= -96215.8789 fn= 31 theta= 4.6077 4.0979 4.9817 4.0892 3.9266 range=[-0.55 0.79]
Iter=2 |grad|=4.23 |x-x.old|=0.167 |f-f.old|=3.31
max.logdens= -96215.6120 fn= 35 theta= 4.6854 4.1372 5.0459 4.1556 3.8729 range=[-0.55 0.77]
max.logdens= -96215.5920 fn= 36 theta= 4.6854 4.1372 5.0559 4.1556 3.8729 range=[-0.55 0.77]
max.logdens= -96215.5650 fn= 37 theta= 4.6854 4.1372 5.0459 4.1556 3.8629 range=[-0.55 0.77]
max.logdens= -96215.5594 fn= 40 theta= 4.6854 4.1272 5.0459 4.1556 3.8729 range=[-0.55 0.78]
max.logdens= -96215.5556 fn= 42 theta= 4.6854 4.1372 5.0459 4.1656 3.8729 range=[-0.55 0.77]
Iter=3 |grad|=3.78 |x-x.old|=0.064 |f-f.old|=0.309
max.logdens= -96215.0678 fn= 46 theta= 4.6841 4.1003 5.0745 4.3431 3.7536 range=[-0.55 0.78]
max.logdens= -96215.0613 fn= 49 theta= 4.6841 4.1003 5.0745 4.3431 3.7436 range=[-0.55 0.78]
max.logdens= -96215.0462 fn= 53 theta= 4.6841 4.1003 5.0745 4.3531 3.7536 range=[-0.55 0.78]
max.logdens= -96214.7201 fn= 57 theta= 4.6827 4.0607 5.1051 4.5438 3.6259 range=[-0.55 0.78]
max.logdens= -96214.7195 fn= 60 theta= 4.6827 4.0607 5.1051 4.5438 3.6159 range=[-0.55 0.78]
max.logdens= -96214.6982 fn= 64 theta= 4.6827 4.0607 5.1051 4.5538 3.6259 range=[-0.55 0.78]
max.logdens= -96214.6894 fn= 65 theta= 4.6827 4.0607 5.0951 4.5438 3.6259 range=[-0.55 0.78]
max.logdens= -96214.5806 fn= 69 theta= 4.6810 4.0110 5.1435 4.7952 3.4659 range=[-0.55 0.79]
max.logdens= -96214.5751 fn= 70 theta= 4.6810 4.0110 5.1435 4.7952 3.4559 range=[-0.55 0.79]
max.logdens= -96214.5625 fn= 72 theta= 4.6810 4.0110 5.1435 4.7952 3.4759 range=[-0.55 0.79]
max.logdens= -96214.5220 fn= 77 theta= 4.6810 4.0110 5.1335 4.7952 3.4659 range=[-0.55 0.79]
Iter=4 |grad|=6.94 |x-x.old|=0.346 |f-f.old|=1.03
max.logdens= -96213.2111 fn= 80 theta= 4.6921 4.0598 5.0180 5.4982 3.1000 range=[-0.55 0.78]
max.logdens= -96213.1908 fn= 83 theta= 4.6921 4.0598 5.0180 5.4982 3.1100 range=[-0.55 0.78]
max.logdens= -96213.1216 fn= 92 theta= 4.6954 4.0742 4.9809 5.7061 2.9918 range=[-0.54 0.78]
max.logdens= -96213.0983 fn= 94 theta= 4.6954 4.0742 4.9809 5.7061 3.0018 range=[-0.54 0.78]
Iter=5 |grad|=3.63 |x-x.old|=0.466 |f-f.old|=1.46
max.logdens= -96212.6772 fn= 104 theta= 4.6090 4.1596 5.0397 6.1392 2.9764 range=[-0.54 0.77]
max.logdens= -96212.6529 fn= 105 theta= 4.6090 4.1596 5.0397 6.1392 2.9864 range=[-0.54 0.77]
Iter=6 |grad|=3.27 |x-x.old|=0.203 |f-f.old|=0.444
max.logdens= -96212.0271 fn= 115 theta= 4.6394 4.1127 5.0223 6.5846 3.1337 range=[-0.54 0.78]
max.logdens= -96212.0082 fn= 117 theta= 4.6394 4.1127 5.0223 6.5846 3.1437 range=[-0.54 0.78]
max.logdens= -96212.0069 fn= 136 theta= 4.6699 4.0758 5.0050 7.0300 3.2910 range=[-0.54 0.78]
max.logdens= -96211.9308 fn= 137 theta= 4.6556 4.0878 5.0131 6.8214 3.2173 range=[-0.54 0.78]
max.logdens= -96211.9172 fn= 139 theta= 4.6556 4.0878 5.0131 6.8214 3.2273 range=[-0.54 0.78]
Iter=7 |grad|=1.97 |x-x.old|=0.326 |f-f.old|=0.746
max.logdens= -96211.7961 fn= 149 theta= 4.6767 4.1390 5.0375 6.8401 3.3465 range=[-0.54 0.77]
max.logdens= -96211.7699 fn= 151 theta= 4.6767 4.1290 5.0375 6.8401 3.3465 range=[-0.54 0.77]
Iter=8 |grad|=2.66 |x-x.old|=0.0643 |f-f.old|=0.135
max.logdens= -96211.6479 fn= 160 theta= 4.6400 4.1280 5.0413 6.8377 3.5261 range=[-0.55 0.78]
max.logdens= -96211.6408 fn= 161 theta= 4.6400 4.1180 5.0413 6.8377 3.5261 range=[-0.55 0.78]
max.logdens= -96211.6406 fn= 179 theta= 4.6399 4.1180 5.0413 6.8377 3.5265 range=[-0.55 0.78]
max.logdens= -96211.6405 fn= 188 theta= 4.6399 4.1180 5.0413 6.8377 3.5267 range=[-0.55 0.78]
max.logdens= -96211.6403 fn= 215 theta= 4.6399 4.1180 5.0413 6.8377 3.5267 range=[-0.55 0.78]
Iter=9 |grad|=1.23 |x-x.old|=0.0824 |f-f.old|=0.149 Reached numerical limit!
Optim: Number of function evaluations = 224
Compute the Hessian using central differences and step_size[0.1]. Matrix-type [dense]
max.logdens= -96211.6003 fn= 226 theta= 4.6399 4.1280 5.0413 6.7377 3.5267 range=[-0.55 0.78]
Mode not sufficient accurate; switch to a stupid local search strategy.
max.logdens= -96211.5850 fn= 238 theta= 4.6399 4.1280 5.0413 6.6327 3.5267 range=[-0.55 0.78]
max.logdens= -96211.5847 fn= 239 theta= 4.6399 4.1280 5.0413 6.7377 3.6317 range=[-0.55 0.78]
max.logdens= -96211.5691 fn= 250 theta= 4.6399 4.1280 5.0413 6.6275 3.6317 range=[-0.55 0.78]
max.logdens= -96211.5690 fn= 309 theta= 4.6399 4.1280 5.0413 6.6275 3.6317 range=[-0.55 0.78]
47.923824 15.991226 -0.143604 -0.050339 -0.088881
15.991226 45.140827 -0.072920 0.007869 -0.006437
-0.143604 -0.072920 44.287863 -0.014403 -0.067313
-0.050339 0.007869 -0.014403 2.771321 0.007341
-0.088881 -0.006437 -0.067313 0.007341 4.990696
Eigenvectors of the Hessian
0.737101 0.675764 0.004130 -0.002371 0.001357
0.675728 -0.737107 0.008030 0.000781 -0.000698
-0.008473 0.003123 0.999958 -0.001721 0.000345
-0.000530 -0.001439 -0.000351 -0.003241 0.999994
-0.001203 -0.002179 -0.001724 -0.999990 -0.003245
Eigenvalues of the Hessian
62.585394
30.480715
44.286805
4.990398
2.771218
StDev/Correlation matrix (scaled inverse Hessian)
0.153835 -0.343819 0.002756 0.004901 0.005966
0.158502 0.000585 -0.002345 -0.001640
0.150267 0.001303 0.004542
0.600708 -0.001939
0.447644
Compute corrected stdev for theta[0]: negative 0.961023 positive 1.053797
Compute corrected stdev for theta[1]: negative 0.929659 positive 1.118742
Compute corrected stdev for theta[2]: negative 1.121293 positive 0.923264
Compute corrected stdev for theta[3]: negative 0.908362 positive 1.169934
Compute corrected stdev for theta[4]: negative 1.279785 positive 0.903255
config 0/27=[ 1.159 -1.023 -1.233 -0.999 -1.408] log(rel.dens)=-2.964, [9] accept, compute, 1811.79s
config 1/27=[ -0.000 -0.000 0.000 0.000 2.222] log(rel.dens)=-3.136, [4] accept, compute, 1822.76s
config 2/27=[ 0.000 0.000 0.000 0.000 0.000] log(rel.dens)=-0.008, [0] accept, compute, 1829.57s
config 3/27=[ 0.000 -0.000 2.271 0.000 -0.000] log(rel.dens)=-2.975, [2] accept, compute, 1829.27s
config 4/27=[ -1.057 1.231 -1.233 -0.999 -1.408] log(rel.dens)=-2.962, [7] accept, compute, 1831.82s
config 5/27=[ 0.000 2.752 0.000 0.000 -0.000] log(rel.dens)=-3.202, [1] accept, compute, 1836.11s
config 6/27=[ 1.159 -1.023 1.016 -0.999 0.994] log(rel.dens)=-2.945, [10] accept, compute, 1837.52s
config 7/27=[ -1.057 -1.023 1.016 -0.999 -1.408] log(rel.dens)=-3.264, [6] accept, compute, 1838.81s
config 8/27=[ -1.057 -1.023 -1.233 -0.999 0.994] log(rel.dens)=-2.943, [5] accept, compute, 1840.14s
config 9/27=[ -1.057 1.231 1.016 -0.999 0.994] log(rel.dens)=-2.948, [8] accept, compute, 1841.91s
config 10/27=[ -0.000 -0.000 -0.000 2.878 -0.000] log(rel.dens)=-2.897, [3] accept, compute, 1842.85s
config 11/27=[ 1.159 1.231 -1.233 -0.999 0.994] log(rel.dens)=-2.601, [11] accept, compute, 1799.88s
config 12/27=[ 1.159 1.231 1.016 -0.999 -1.408] log(rel.dens)=-2.901, [12] accept, compute, 1827.93s
config 13/27=[ 1.159 -1.023 -1.233 1.287 0.994] log(rel.dens)=-2.875, [9] accept, compute, 1800.71s
config 14/27=[ -0.000 -0.000 0.000 -0.000 -3.148] log(rel.dens)=-2.549, [4] accept, compute, 1828.54s
config 15/27=[ 0.000 -0.000 -2.758 -0.000 0.000] log(rel.dens)=-3.161, [2] accept, compute, 1829.26s
config 16/27=[ -1.057 1.231 -1.233 1.287 0.994] log(rel.dens)=-2.880, [7] accept, compute, 1834.38s
config 17/27=[ 2.592 -0.000 0.000 -0.000 0.000] log(rel.dens)=-3.032, [0] accept, compute, 1836.92s
config 18/27=[ -0.000 -2.287 0.000 -0.000 -0.000] log(rel.dens)=-2.989, [1] accept, compute, 1839.75s
config 19/27=[ 1.159 -1.023 1.016 1.287 -1.408] log(rel.dens)=-3.167, [10] accept, compute, 1840.65s
config 20/27=[ -1.057 -1.023 1.016 1.287 0.994] log(rel.dens)=-3.175, [6] accept, compute, 1842.59s
config 21/27=[ -1.057 -1.023 -1.233 1.287 -1.408] log(rel.dens)=-3.200, [5] accept, compute, 1840.58s
config 22/27=[ -1.057 1.231 1.016 1.287 -1.408] log(rel.dens)=-3.150, [8] accept, compute, 1841.07s
config 23/27=[ -0.000 0.000 -0.000 -2.234 0.000] log(rel.dens)=-3.117, [3] accept, compute, 1840.54s
config 24/27=[ 1.159 1.231 -1.233 1.287 -1.408] log(rel.dens)=-2.789, [11] accept, compute, 1772.68s
config 25/27=[ 1.159 1.231 1.016 1.287 0.994] log(rel.dens)=-2.811, [12] accept, compute, 1755.74s
config 26/27=[ -2.364 -0.000 -0.000 0.000 0.000] log(rel.dens)=-3.052, [0] accept, compute, 329.30s
Combine the densities with relative weights:
config 0/27=[ 0.000 0.000 0.000 0.000 0.000] weight = 1.000 neff = 664.77
config 1/27=[ 2.592 -0.000 0.000 -0.000 0.000] weight = 0.183 neff = 598.52
config 2/27=[ -2.364 -0.000 -0.000 0.000 0.000] weight = 0.180 neff = 730.00
config 3/27=[ 0.000 2.752 0.000 0.000 -0.000] weight = 0.155 neff = 667.58
config 4/27=[ -0.000 -2.287 0.000 -0.000 -0.000] weight = 0.191 neff = 686.72
config 5/27=[ 0.000 -0.000 2.271 0.000 -0.000] weight = 0.194 neff = 648.87
config 6/27=[ 0.000 -0.000 -2.758 -0.000 0.000] weight = 0.161 neff = 685.76
config 7/27=[ -0.000 -0.000 -0.000 2.878 -0.000] weight = 0.210 neff = 670.11
config 8/27=[ -0.000 0.000 -0.000 -2.234 0.000] weight = 0.168 neff = 661.63
config 9/27=[ -0.000 -0.000 0.000 0.000 2.222] weight = 0.165 neff = 664.32
config 10/27=[ -0.000 -0.000 0.000 -0.000 -3.148] weight = 0.297 neff = 665.13
config 11/27=[ -1.057 -1.023 -1.233 -0.999 0.994] weight = 0.200 neff = 708.46
config 12/27=[ -1.057 -1.023 -1.233 1.287 -1.408] weight = 0.155 neff = 712.61
config 13/27=[ -1.057 -1.023 1.016 -0.999 -1.408] weight = 0.145 neff = 692.43
config 14/27=[ -1.057 -1.023 1.016 1.287 0.994] weight = 0.159 neff = 695.77
config 15/27=[ -1.057 1.231 -1.233 -0.999 -1.408] weight = 0.197 neff = 698.08
config 16/27=[ -1.057 1.231 -1.233 1.287 0.994] weight = 0.214 neff = 701.40
config 17/27=[ -1.057 1.231 1.016 -0.999 0.994] weight = 0.199 neff = 681.31
config 18/27=[ -1.057 1.231 1.016 1.287 -1.408] weight = 0.163 neff = 685.21
config 19/27=[ 1.159 -1.023 -1.233 -0.999 -1.408] weight = 0.196 neff = 648.93
config 20/27=[ 1.159 -1.023 -1.233 1.287 0.994] weight = 0.214 neff = 652.26
config 21/27=[ 1.159 -1.023 1.016 -0.999 0.994] weight = 0.200 neff = 632.13
config 22/27=[ 1.159 -1.023 1.016 1.287 -1.408] weight = 0.160 neff = 636.22
config 23/27=[ 1.159 1.231 -1.233 -0.999 0.994] weight = 0.282 neff = 639.72
config 24/27=[ 1.159 1.231 -1.233 1.287 -1.408] weight = 0.234 neff = 643.58
config 25/27=[ 1.159 1.231 1.016 -0.999 -1.408] weight = 0.209 neff = 623.55
config 26/27=[ 1.159 1.231 1.016 1.287 0.994] weight = 0.229 neff = 626.86
Done.
Expected effective number of parameters: 665.603(28.161), eqv.#replicates: 209.434
Marginal likelihood: Integration -96214.111984 Gaussian-approx -96213.965819
Compute the marginal for each of the 5 hyperparameters
Interpolation method: Auto
Compute the marginal for theta[0] to theta[4] using numerical integration...
Compute the marginal for theta[0] to theta[4] using numerical integration... Done.
Compute the marginal for the hyperparameters... done.
Store results in directory[C:\Users\claud\AppData\Local\Temp\RtmpEHnKic\file48f03ca36bfa/results.files]
Wall-clock time used on [C:\Users\claud\AppData\Local\Temp\RtmpEHnKic\file48f03ca36bfa/Model.ini]
Preparations : 0.262 seconds
Approx inference: 4284.324 seconds [0.3|0.0|5.1|94.5|0.1]%
Output : 0.366 seconds
---------------------------------
Total : 4284.952 seconds
>
> file=paste0("Output/",Sex,"/output",area,".Rdata")
> save(m,file=file)
Warning message:
In save(m, file = file) : 'package:stats' may not be available when loading
>
> ############################################################################
> # 3. GIVEN THE INLA OUTPUTS,
> # SIMULATE FROM THE POSTERIOR DISTRIBUTION & SAVE THE OUTPUT
> ############################################################################
> #
> make.posteriors(area,Sex)
>
> ############################################################################
> # 4. GIVEN THE INLA OUTPUTS AND SIMULATIONS,
> # COMPUTE THE PREDICTIONS FOR 2020 & SAVE THE OUTPUT
> ############################################################################
> make.predictions(area,Sex)
Joining, by = c("COD_PROVCOM", "week")
Joining, by = c("COD_PROVCOM", "COMUNE", "COD_PROV", "DEN_UTS", "SIGLA", "DEN_REG", "COD_REG", "week", "Anno", "sex", "ID_Ita", "Vuln2011", "p.africa", "p.america", "p.asia", "p.europa", "age.group", "temperature", "temp_grp", "IDtemp")
There were 50 or more warnings (use warnings() to see the first 50)
The warnings are:
> warnings()
Warning messages:
1: In rpois(nsim, x) : NAs produced
2: In rpois(nsim, x) : NAs produced
3: In rpois(nsim, x) : NAs produced
4: In rpois(nsim, x) : NAs produced
5: In rpois(nsim, x) : NAs produced
6: In rpois(nsim, x) : NAs produced
7: In rpois(nsim, x) : NAs produced
8: In rpois(nsim, x) : NAs produced
9: In rpois(nsim, x) : NAs produced
10: In rpois(nsim, x) : NAs produced
11: In rpois(nsim, x) : NAs produced
12: In rpois(nsim, x) : NAs produced
13: In rpois(nsim, x) : NAs produced
14: In rpois(nsim, x) : NAs produced
15: In rpois(nsim, x) : NAs produced
16: In rpois(nsim, x) : NAs produced
17: In rpois(nsim, x) : NAs produced
18: In rpois(nsim, x) : NAs produced
19: In rpois(nsim, x) : NAs produced
20: In rpois(nsim, x) : NAs produced
21: In rpois(nsim, x) : NAs produced
22: In rpois(nsim, x) : NAs produced
23: In rpois(nsim, x) : NAs produced
24: In rpois(nsim, x) : NAs produced
25: In rpois(nsim, x) : NAs produced
26: In rpois(nsim, x) : NAs produced
27: In rpois(nsim, x) : NAs produced
28: In rpois(nsim, x) : NAs produced
29: In rpois(nsim, x) : NAs produced
30: In rpois(nsim, x) : NAs produced
31: In rpois(nsim, x) : NAs produced
32: In rpois(nsim, x) : NAs produced
33: In rpois(nsim, x) : NAs produced
34: In rpois(nsim, x) : NAs produced
35: In rpois(nsim, x) : NAs produced
36: In rpois(nsim, x) : NAs produced
37: In rpois(nsim, x) : NAs produced
38: In rpois(nsim, x) : NAs produced
39: In rpois(nsim, x) : NAs produced
40: In rpois(nsim, x) : NAs produced
41: In rpois(nsim, x) : NAs produced
42: In rpois(nsim, x) : NAs produced
43: In rpois(nsim, x) : NAs produced
44: In rpois(nsim, x) : NAs produced
45: In rpois(nsim, x) : NAs produced
46: In rpois(nsim, x) : NAs produced
47: In rpois(nsim, x) : NAs produced
48: In rpois(nsim, x) : NAs produced
49: In rpois(nsim, x) : NAs produced
50: In rpois(nsim, x) : NAs produced
The files in ./Output/Females are:
Thank you very much for your time.
Are the files valid? So, for example, if you try to load them back,
does it throw an error? The output you show (BTW: you don't need to replicate the first part of the script --- the code up to the INLA output seems to work OK; I think if there's an issue, it's in saving the output to the relevant file and then using it in the rest of the script) seems to indicate a problem when simulating from the posterior predictive distribution. I can go back and check the original script in case we missed something when uploading the files to the GitHub repo (though I don't think we did) --- but I think it'd be good to check whether, e.g., the saved output loads back correctly.
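For example, a check along these lines (a hedged sketch; the file name follows the paste0() pattern in the script above, and since the script calls save(m, file = file), load() should restore an object called m):

# Hedged check: does the saved INLA output load back and look like an INLA fit?
load("Output/Females/outputNordOvest.Rdata")   # an error here would mean the file is not valid
class(m)                                       # expect "inla"
summary(m)                                     # quick sanity check on the fitted model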
A good way of debugging this is actually to run the function
and then try
Here something should be clear already --- if the file is not correct, it'll throw an error. You could even inspect the loaded object to see what's inside it. Then you can continue with the script and see if, line-by-line, you get an error that can explain what is wrong...
See if this is still OK. Then you can simulate from the posterior --- perhaps, for the purpose of debugging, just do 1 simulation (notice the code I'm using).
And again see what happens here (it should be very fast as it's only one simulation). If there are no errors, try the rest.
At this point, it should have saved the simulations to the relevant file.
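A minimal sketch of that sequence (assumptions: the output file name follows the pattern in the script, and the simulation step uses INLA's inla.posterior.sample() --- an illustration, not necessarily the exact code inside make.posteriors()):

library(INLA)
load("Output/Females/outputNordOvest.Rdata")   # restores 'm' saved by model.run.R
stopifnot(inherits(m, "inla"))                 # the restored object should be an INLA fit
# one draw from the joint posterior: fast, and enough to see whether this step errors
sim1 <- inla.posterior.sample(1, m)            # possible because the script sets control.compute = list(config = TRUE)
str(sim1, max.level = 1)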
The error you get here is, I think, because you've only done 1 simulation from the joint posterior, so the dimension is not correct. As for the main point, I think the posterior estimates are OK --- but we now need to debug why it gives you a problem in the prediction. I'd suggest doing the same kind of line-by-line analysis for the prediction step.
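As a hypothetical illustration (continuing from the object m loaded above; the number of draws below is an assumption, not the value used in the repository), the real run would use many joint posterior draws rather than one:

nsim <- 1000                           # assumed number of posterior draws
sims <- inla.posterior.sample(nsim, m)
length(sims)                           # one list element per draw, each with $latent and $hyperpar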
Also, if it helps, we could filter the output from our original model to only include Piemonte and somehow make it available for you?
Hello, yes, it would really help if you could filter the output from your original model to include Piemonte and make it available for us. You may contact me on Twitter via DM (my username is @ClaudioMoroni5) or email me at [email protected]. One option could be a shared drive. I'll try to perform the line-by-line analysis in the meantime.
Hello,
Thanks for making the methods of your research publicly available. Despite following your instructions, namely installing the packages INLA, ggplot2, dplyr and gridExtra, and creating the folders ./Output/Females and ./Output/Males, when we run the make.functions.R script and then the model.run.R script, we get the following errors:
If I look in the ./Output/Females folder, I see 4 files:
outputNordOvest
posteriorsNordOvest
predictionsNordOvest
predItaly
Is this correct, or do we have to fix those errors? If so, how?
Our goal is to get age-stratified excess mortality for Piedmont (so we need at least regional resolution). I think these figures are in predItaly[["predictions_prov"]]: is that right? Unfortunately, we are not domain experts. Could you provide an explicit interpretation of the output files and their fields?
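For reference, this is how one could inspect what those files contain (a minimal sketch; it assumes the files are .Rdata files saved from R, and captures the object names restored by load() rather than guessing them):

# Hedged inspection sketch for one of the saved outputs
obj <- load("./Output/Females/predItaly.Rdata")
print(obj)                            # names of the restored objects
str(get(obj[1]), max.level = 1)       # top-level structure of the first one
# if the restored object is indeed called predItaly:
# str(predItaly[["predictions_prov"]], max.level = 1)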
Thank you very much.