
how to know if random factors are significant?

On 01/04/2008, Leonel Arturo Lopez Toledo <llopez at oikos.unam.mx> wrote:
No need to be sorry. It seems, though, that you are also using lmer.
In the case of lme, there is no indication of the significance of the
variance parameters in the standard output. To test a variance
component you fit another model excluding that parameter, which I
guess is why you came to think of update(). It is, however, not
possible to fit a model with lme that does not contain any random
effects, so you have to fit a linear model with lm (or gls in package
nlme, in case other non-standard stuff is at stake) and make the
likelihood ratio test with anova(fm.lme, fm.lm). Note that the order
of the arguments to anova matters in this case (cf. ?anova.lme).

To obtain a p-value, you need to compare the statistic with some
distribution, and a chi-square with one df is the default output.
Often, however, a 50:50 mixture of chi-squares with 0 and 1 df is more
appropriate, so a more correct p-value is half the one the software
reports. You can check these distributions with the simulate function
in the nlme package. When you have more than one random effect in your
model, update works just fine. You should consult the book
Mixed-Effects Models in S and S-PLUS by Pinheiro and Bates for
further details.
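To make the recipe concrete, here is a minimal sketch of the lme vs.
gls comparison. It uses the Orthodont data shipped with nlme purely as
a stand-in for your own data; the model formula and variables are
illustrative, not your actual model.

```r
library(nlme)

## Mixed model with a random intercept per Subject, and the
## corresponding model with the same fixed effects but no random effect
fm.lme <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
fm.gls <- gls(distance ~ age, data = Orthodont)

## Likelihood ratio test; the order of the arguments matters here
## (see ?anova.lme)
av <- anova(fm.lme, fm.gls)

## The reported p-value uses a chi-square with 1 df; halving it
## accounts for the 50:50 mixture of 0 and 1 df at the boundary
p.naive   <- av[["p-value"]][2]
p.mixture <- p.naive / 2
```

Both fits use REML by default, which is fine here because the two
models share the same fixed effects and differ only in the random
part.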
The variance parameter in the first model indeed seems rather small
compared to the residual variation. The latter model is not comparable
to the former, since it is a binomial (logit) GLMM. The obvious thing
to do would be to compare the deviance of this model with that of the
corresponding GLM without the variance component (I am, however,
unsure of how the constant terms in the likelihood are handled by glm
and lmer in this case, so the comparison is perhaps not simple). But
then again, the interpretation of the fixed-effect parameters changes,
so other issues should also have a say in choosing an appropriate
model.
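For what that comparison might look like, here is an illustrative
sketch only: the cbpp data and the herd grouping come from lme4's
example data and stand in for your own binomial data. (In current
lme4 the binomial mixed model is fitted with glmer rather than lmer.)

```r
library(lme4)

## Binomial (logit) GLMM with a random intercept per herd
fm.glmm <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
                 data = cbpp, family = binomial)

## The corresponding GLM without the variance component
fm.glm  <- glm(cbind(incidence, size - incidence) ~ period,
               data = cbpp, family = binomial)

## Likelihood ratio statistic; given the caveat about how constant
## terms in the likelihoods are handled, treat this as a rough guide
lrt <- as.numeric(-2 * (logLik(fm.glm) - logLik(fm.glmm)))
pchisq(lrt, df = 1, lower.tail = FALSE) / 2  # halved for the boundary case
```

Comparing logLik values rather than glm's deviance slot avoids mixing
a deviance measured against the saturated model with one that is not,
though the constant-terms caveat above still applies.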

Best
Rune