
Model comparison with anova and AIC: p=0 and different AIC-values

On 13-11-15 09:30 PM, Stefan Th. Gries wrote:
I'm not sure about that, it looks like it *might* be a glitch in the
display.  Do the results of anova(m.01b.reml, m.01a.reml) look more
sensible?
See http://glmm.wikidot.com/faq#error_anova_lmer_AIC:

As pointed out by several users (here, here, and here, for example), the
AICs computed for lmer models in summary and anova are different;
summary uses the REML specification as specified when fitting the model,
while anova always uses REML=FALSE (to safeguard users against
incorrectly using restricted MLs to compare models with different fixed
effect components). (This behavior is slightly overzealous, since users
might conceivably be using anova to compare models with different random
effects [although this is also subject to boundary effects, as described
elsewhere in this document].)

  Note that the AIC differences are almost identical (140)
Well, anova() gave you a likelihood ratio test (based on a ML
refitting, which is not necessarily what you want: see
https://github.com/lme4/lme4/issues/141 ).
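For intuition about the "p=0" in the subject line: the likelihood ratio statistic anova() reports is just twice the ML log-likelihood difference between the two fits, compared to a chi-square with df equal to the number of extra parameters. A minimal sketch with invented log-likelihoods (not the actual fits from this thread), using the closed-form chi-square tail probability exp(-x/2) that holds when df = 2, shows why the printed p-value collapses to 0:

```python
import math

def lr_test(ll_reduced, ll_full, df):
    """Likelihood-ratio test: statistic and (for df = 2) its exact p-value."""
    stat = 2.0 * (ll_full - ll_reduced)   # twice the ML log-likelihood gain
    # For a chi-square with 2 degrees of freedom, P(X > x) = exp(-x/2).
    assert df == 2, "closed form below only holds for df = 2"
    p = math.exp(-stat / 2.0)
    return stat, p

# Hypothetical ML log-likelihoods ~70 units apart, like the fits discussed here.
stat, p = lr_test(ll_reduced=-1000.0, ll_full=-930.0, df=2)
print(stat, p)  # stat = 140.0, p ~ 4e-31 -- printed as 0 at default precision
```

Any p-value this small is displayed as 0 in R's default output, which is all the "p=0" means.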

  I would say that the bottom line here is that since the more complex
model m.01b.reml is about 60 log-likelihood units (or "REML criterion
units") better, it doesn't really matter what test you do -- the more
complex model is overwhelmingly better.
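The arithmetic behind "it doesn't really matter what test you do": AIC = 2k - 2*logLik, so when the two models' parameter counts are close, the AIC gap is roughly twice the log-likelihood gap. A sketch with invented numbers (chosen to mirror the ~60-70-unit, delta-AIC-of-about-140 situation in this thread, not taken from the actual fits):

```python
def aic(loglik, k):
    """Akaike information criterion: 2k - 2*logLik (smaller is better)."""
    return 2 * k - 2 * loglik

# Invented values: complex model 70 log-likelihood units better, 2 extra parameters.
aic_simple  = aic(-1000.0, k=5)   # 2010.0
aic_complex = aic(-930.0,  k=7)   # 1874.0
delta = aic_simple - aic_complex  # 136.0 -- roughly twice the 70-unit gap
print(delta)
```

With a gap this large, the penalty term (a few units) is negligible, so ML AIC, REML criterion, and the LRT all point the same way.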