Dear R-list members,
I am running mixed models with the lme4 package. During model selection, terms were
eliminated from a maximal model (with a random intercept) to reach a simpler
model that retained only the significant main effects and interactions,
using the Akaike information criterion. My final model includes three fixed
factors plus a random intercept. I then performed a likelihood ratio test for
the significance of the random term. However, because the variance of a random
effect is tested on the boundary of its parameter space (zero), the p-value from
the standard chi-squared table is conservative, so I followed Zuur et al.
(2009) and obtained a corrected p-value by dividing the naive p-value by 2.
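For reference, here is a minimal sketch of that procedure. The variable names and the data are illustrative stand-ins (simulated, not my actual dataset):

```r
library(lme4)
set.seed(1)

## Simulated stand-in data (illustrative only)
d <- data.frame(
  NestID     = factor(rep(1:20, each = 3)),
  Year       = factor(rep(c(2006, 2007), length.out = 60)),
  HatchOrder = factor(rep(1:3, times = 20)),
  SibPresent = factor(sample(c("Present", "Absent"), 60, replace = TRUE)),
  Protein    = rnorm(60)
)

## Model with the random intercept, fitted by ML so the LRT is valid
m1 <- lmer(Protein ~ Year + HatchOrder + SibPresent + (1 | NestID),
           data = d, REML = FALSE)
## Same fixed effects without the random intercept (lm is always ML)
m0 <- lm(Protein ~ Year + HatchOrder + SibPresent, data = d)

## LRT statistic and the boundary-corrected (halved) p-value
lrt  <- as.numeric(2 * (logLik(m1) - logLik(m0)))
pval <- pchisq(lrt, df = 1, lower.tail = FALSE) / 2
```

Both models are fitted by maximum likelihood (REML = FALSE for lmer) so that the two log-likelihoods are comparable.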
Briefly, my best-fit model consists of three main effects: Year (2006,
2007), Hatching Order (1st, 2nd, 3rd) and Sibling Competence
(Present/Absent), plus NestID as a random intercept. The modelled outcome is
the level of plasma proteins (continuous).
I tested the random effect (NestID), which has variance 2.1795e-16 and Std.
Dev. 1.4763e-08 (see output). The LRT yields a p-value of 0.00031 (0.00015 after
dividing by 2, as suggested). This would mean that adding the random effect
NestID to the model is a significant improvement. However, the random-effect
variance is essentially zero, which would indicate otherwise.
Further supporting a non-significant random effect, I think, the coefficients
and standard errors are exactly the same for the models with and without the
random effect, as seen in the outputs.
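One sanity check worth sketching (again with simulated stand-in data, not my actual dataset): if the fitted random-intercept variance is essentially zero, the ML log-likelihoods of the two models should coincide, so an LRT statistic computed from two ML fits should be near zero, giving a halved p-value near 0.5 rather than a tiny one. A large statistic despite a zero variance can arise when the two log-likelihoods are not comparable, for instance a REML lmer fit set against an lm fit; refitting with REML = FALSE rules that out.

```r
library(lme4)
set.seed(2)

## Simulated data with no true nest effect, so the fitted NestID
## variance should collapse toward zero (illustrative only)
d <- data.frame(NestID  = factor(rep(1:20, each = 3)),
                Protein = rnorm(60))

m1 <- lmer(Protein ~ 1 + (1 | NestID), data = d, REML = FALSE)
m0 <- lm(Protein ~ 1, data = d)

## If the fitted variance is ~0, both ML log-likelihoods coincide
## and this statistic is ~0 (it is never negative for nested ML fits)
lrt <- as.numeric(2 * (logLik(m1) - logLik(m0)))
```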
Q 1. In your opinion, should I trust this LRT with its small p-value and keep
the random effect in my model, or follow the parsimony principle and
eliminate it?
Q 2. Is it possible, under certain conditions, for a random effect with such
a low variance to still yield an LRT p-value indicating that it improves the
model fit?
Outputs for both models, with and without the random effect, followed by the
LRT output:
MIXED MODEL