
Diagnosing Overdispersion in Mixed Models with Parametric Bootstrapping

Xavier Harrison <xav.harrison at ...> writes:
> Yes, although keeping in mind that the whole thing is an
> approximation anyway, getting the residual df wrong by a few
> percent is hopefully not that important
This seems quite reasonable.
This particular model seems to be more or less exactly equivalent
to using an observation-level random effect -- see below. The
parametric bootstrap is worthwhile for finding reliable confidence
intervals, but otherwise a parametric approach (e.g. likelihood
profiling, as in the code below) may be good enough.
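As an aside, a quick way to gauge whether an observation-level random effect is needed at all is to compare the sum of squared Pearson residuals from the simpler model to the (approximate) residual degrees of freedom. This helper is my own sketch (not part of lme4); it works on anything with `df.residual` and Pearson residuals, including merMod fits:

```r
## Sketch of an overdispersion check for a fitted (g)lm or merMod.
## The ratio should be near 1 if the Poisson variance assumption holds;
## note the residual df is only approximate for GLMMs (see the quoted
## caveat above).
overdisp_check <- function(model) {
  rdf  <- df.residual(model)                        # approximate residual df
  prss <- sum(residuals(model, type = "pearson")^2) # Pearson chi-squared
  c(chisq = prss,
    ratio = prss / rdf,                             # overdispersion factor
    rdf   = rdf,
    p     = pchisq(prss, df = rdf, lower.tail = FALSE))
}
```

Applied to a model *without* the `(1|obs)` term, a ratio well above 1 (with a small p-value) suggests overdispersion.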

  I would do this as follows (slightly more compactly):

n.pops <- 10    ## number of populations
n.indiv <- 50   ## individuals per population

set.seed(101)
popid <- gl(n.pops, n.indiv)
bodysize <- rnorm(length(popid), 30, 4)
## one 'obs' level per row gives the observation-level random effect
d <- data.frame(popid, bodysize, obs = factor(1:(n.pops*n.indiv)))
library(lme4)
## simulate Poisson counts with observation-level and among-population
## variation (theta = random-effect SDs, beta = fixed effects)
d$y <- simulate(~ bodysize + (1|popid) + (1|obs), family = "poisson",
                newdata = d,
                newparams = list(theta = c(0.5, 0.8),
                                 beta = c(-0.5, 0.05)))[[1]]

f <- glmer(y ~ bodysize + (1|popid) + (1|obs), family = "poisson",
           data = d)
p <- profile(f, which = "theta_")  ## profile only the random-effect SDs
confint(p)

Random effects:
 Groups Name        Std.Dev.
 obs    (Intercept) 0.4795  
 popid  (Intercept) 0.7990  
Number of obs: 500, groups:  obs, 500; popid, 10
Fixed Effects:
(Intercept)     bodysize  
   -1.05462      0.06281  

The intercept estimate (-1.05) looks a bit low relative to the true
value of -0.5, but its standard error is also large (0.37), so it is
within about 1.5 standard errors of the truth.

Profile confidence intervals for the random-effect standard
deviations (.sig01 = obs, .sig02 = popid):

           2.5 %    97.5 %
.sig01 0.4076925 0.5568827
.sig02 0.5328584 1.3516996
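Since the thread title mentions parametric bootstrapping: `confint()` on a merMod fit can also compute bootstrap intervals directly, which is a useful cross-check on the profile intervals above (this snippet assumes the fitted model `f` from the code above; `nsim` is kept small here for speed and should be larger in practice):

```r
## Parametric bootstrap CIs for the same random-effect SDs;
## slower than profiling but makes fewer distributional assumptions
## about the shape of the likelihood surface
set.seed(101)
confint(f, method = "boot", nsim = 200, parm = "theta_")
```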