Selecting random effects in lme4: ML vs. QAICc

Richard Feldman wrote:
(1) Yes. I would agree that we don't quite know what's going on with
quasi- in glmer, and that using other methods is better if possible:
various people have reported odd results, Doug Bates has gone on record
as saying he wouldn't really know how to interpret a quasi-likelihood
GLMM anyway (I think that's a fair summary of his position), and it's
not clear whether the problem is bugs in a little-tested corner of
the software or fundamental problems with the definition of the model.
That said, quasi- is also the easiest way forward ...
(2) glmmPQL.quasi uses penalized quasi-likelihood, so at least it's
consistent in the way it handles the random effects and the individual
variance structures. PQL is known to be a bit dicey for data with small
numbers (e.g. means < 5) [Breslow 2003], not that that has stopped lots
of people from using it, because for a long time it was the only game in
town.

(3) The dispersion approx. 1 in the neg binom models does look
reasonable. See Venables and Ripley's section on overdispersion for some
cautions on this approach ...

(4) I don't know why the deviance is coming out slightly higher for the
zero-inflated neg binom -- seems odd.
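For checking that the dispersion really is near 1, a common quick diagnostic is the Pearson chi-square divided by the residual degrees of freedom. A minimal sketch on simulated data (the data frame and variable names here are purely illustrative, not from the poster's models):

```r
# Sketch: quick overdispersion check for a count GLM.
# Simulated Poisson data, so the dispersion estimate should come out near 1.
set.seed(1)
d <- data.frame(x = rnorm(100))
d$y <- rpois(100, lambda = exp(0.5 + 0.3 * d$x))

fit <- glm(y ~ x, family = poisson, data = d)

# Sum of squared Pearson residuals over residual df;
# values much greater than 1 suggest overdispersion.
disp <- sum(residuals(fit, type = "pearson")^2) / df.residual(fit)
disp
```

The same ratio can be computed for a fitted negative binomial model to see whether the extra variance parameter has soaked up the overdispersion.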
(5) Since you've gone to all this trouble to fit the overdispersed and
zero-inflated likelihood models, your best bet is to try likelihood
ratio tests between nested models (e.g. #8 vs #6 to test for
zero-inflation, #8 vs #7 or #6 vs #5 to test for overdispersion). The
quasi- and overdispersion calculations are usually done as a way of
avoiding having to fit the more complex models at all.
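As a sketch of the likelihood-ratio mechanics on a toy example (the model numbers #5-#8 above refer to the poster's fits, not to these): Poisson vs. negative binomial is the overdispersion comparison, here done with MASS::glm.nb on simulated overdispersed counts.

```r
# Sketch: LRT for overdispersion, Poisson vs. negative binomial.
library(MASS)  # for glm.nb

set.seed(1)
d <- data.frame(x = rnorm(200))
# Simulate overdispersed counts (negative binomial, size = 1)
d$y <- rnbinom(200, mu = exp(1 + 0.5 * d$x), size = 1)

m_pois <- glm(y ~ x, family = poisson, data = d)
m_nb   <- glm.nb(y ~ x, data = d)

lrt <- as.numeric(2 * (logLik(m_nb) - logLik(m_pois)))
# The dispersion parameter sits on the boundary of its space under the
# null, so the naive chi-square(1) p-value is conservative; halving it
# is the usual correction for this boundary case.
p <- 0.5 * pchisq(lrt, df = 1, lower.tail = FALSE)
c(LRT = lrt, p = p)
```

The zero-inflation comparison works the same way, with the zero-inflated and non-inflated fits in place of `m_nb` and `m_pois`; the boundary caveat applies there too.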