
Beginner help setting up a mixed model

On Tue, Oct 12, 2010 at 3:08 PM, Christopher Desjardins
<desja004 at umn.edu> wrote:
Hmm - I think you have that backwards.  When comparing models with
different *fixed* effects you must use REML=FALSE.  The REML criterion
depends on the structure of the fixed-effects model matrix and you
can't easily compare values of the REML criterion for models with
different fixed-effects structure.  There may be advice to use
REML=TRUE when the random effects change but, if so, I don't know why.
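A minimal sketch of the point, using the sleepstudy data that ships with lme4 (the particular terms here are illustrative; the comparison itself is what matters):

```r
library(lme4)

## Two REML fits with different fixed effects: each REML criterion is
## computed relative to a different fixed-effects model matrix, so the
## two criterion values are not directly comparable.
r1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = TRUE)
r0 <- lmer(Reaction ~ 1    + (1 | Subject), sleepstudy, REML = TRUE)

## For a valid comparison of the fixed-effects structures, refit both
## models by maximum likelihood and use a likelihood ratio test.
m1 <- update(r1, REML = FALSE)
m0 <- update(r0, REML = FALSE)
anova(m0, m1)
```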

My current attitude is that the usual justification for using REML
(reducing the bias in estimates of variance components) isn't as
important as many people believe, because the distribution of the
estimator can be highly skewed and why should we be overwhelmingly
interested in the mean value of a highly skewed distribution?  I
prefer to stay with maximum likelihood fits so that likelihood ratio
tests and derived criteria such as AIC and BIC are well-defined.

Not sure I understand that statement.  All mixed models have random
effects.  The definition of a mixed model is that it incorporates
both fixed-effects parameters and random effects.
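In lme4's formula syntax the two kinds of terms are visible at a glance: terms in parentheses with a `|` are the random effects, everything else is fixed. A small sketch, again using the sleepstudy data:

```r
library(lme4)

## Reaction ~ Days        : fixed-effects part (intercept and slope for Days)
## (Days | Subject)       : random intercept and slope varying by Subject
fm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

fixef(fm)   # estimates of the fixed-effects parameters
VarCorr(fm) # estimated variances/covariances of the random effects
```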

You mentioned p-values and I think you are reversing SAS and lme4
there.  Many people consider it to be a deficiency of the lme4 package
(which is different from R itself - you can blame SAS Institute for
everything that is in SAS but you should not blame the R Foundation
for everything that is in contributed R packages) that it does not
give p-values for fixed-effects coefficients in a model fit by lmer.
The preferred approach with lme4 is to fit the model with and without
a particular term and use a likelihood ratio test to compare the
quality of the fits.  SAS PROC MIXED, on the other hand, is happy to
provide p-values.  In fact, SAS seem to pride themselves on providing
many different p-values for the same test, even when the test wouldn't
make sense.
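The with-and-without workflow described above can be sketched as follows (the choice of `Days` as the term to test is just for illustration):

```r
library(lme4)

## Fit by maximum likelihood so the likelihood ratio test is well-defined.
full    <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = FALSE)
reduced <- update(full, . ~ . - Days)  # drop the term being tested

## Likelihood ratio test comparing the quality of the two fits;
## also reports AIC and BIC for each model.
anova(reduced, full)
```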