degrees of freedom and random effects in lmer

1 message · Douglas Bates

I also apologize because I sent an incomplete reply.  I hit the "send"
key sequence before I planned to.

I was going to say that it is not entirely clear exactly what one
should regard as "the degrees of freedom" for random effects terms.
Fixed-effects models have a solid geometric interpretation that gives
an unambiguous definition of degrees of freedom.  Models with random
effects don't have nearly the same clarity.

If one counts parameters to be estimated, then the random effects for
the levels of a factor cost only 1 degree of freedom (a single
variance component), regardless of the number of levels.  This is the
lowest count one could imagine for the degrees of freedom and, if you
regard degrees of freedom as measuring the explanatory power of a
model, it can be a severe underestimate.

If one goes with the geometric argument and measures something like
the dimension of the predictor space, then the degrees of freedom
would be the number of levels, or that number minus 1, which is what
you were assuming.  This is the same as counting the number of
coefficients in the linear predictor.  The problem here is that the
predictor doesn't have all of the degrees of freedom associated with
the geometric subspace.  The "estimates" of the random effects are not
the solution of a least squares problem; they are the solution of a
penalized least squares problem, and the penalty has a damping effect
on the coefficients.
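To see the damping concretely, here is a small sketch (not lmer's
actual algorithm; the balanced one-way layout, the response, and the
penalty value below are all invented for illustration) comparing
ordinary and penalized least squares coefficients for a factor with 5
levels:

```python
import numpy as np

# Hypothetical balanced one-way layout: q = 5 levels, 10 observations each.
rng = np.random.default_rng(42)
q, n_per = 5, 10
levels = np.repeat(np.arange(q), n_per)
Z = np.eye(q)[levels]                  # indicator matrix for the factor levels
y = rng.normal(loc=levels, scale=1.0)  # made-up response

lam = 2.0  # penalty; in the mixed-model analogy, roughly sigma^2 / sigma_b^2

b_ols = np.linalg.solve(Z.T @ Z, Z.T @ y)                     # least squares
b_pen = np.linalg.solve(Z.T @ Z + lam * np.eye(q), Z.T @ y)   # penalized LS

# In this balanced case each coefficient is shrunk by the same factor,
# n_per / (n_per + lam) = 10/12, toward zero:
print(b_pen / b_ols)
```

The penalized coefficients are uniformly damped versions of the least
squares ones, which is why counting them as full coefficients
overstates the flexibility of the fit.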

An argument can be made that the effective degrees of freedom lies
between these two extremes and can be measured according to the trace
of the "hat" matrix.
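As a hedged illustration of this middle-ground count (again using an
invented balanced one-way layout and an invented penalty value), the
trace of the hat matrix of the penalized least squares problem can be
computed directly:

```python
import numpy as np

# Hypothetical balanced one-way layout: q = 5 levels, 10 observations each.
q, n_per = 5, 10
levels = np.repeat(np.arange(q), n_per)
Z = np.eye(q)[levels]        # indicator matrix for the factor levels

lam = 2.0                    # penalty; roughly sigma^2 / sigma_b^2 in a mixed model

# Hat matrix of the penalized problem: H = Z (Z'Z + lam I)^{-1} Z'
H = Z @ np.linalg.solve(Z.T @ Z + lam * np.eye(q), Z.T)

edf = np.trace(H)            # effective degrees of freedom
print(edf)                   # 5 * 10/12 ~ 4.17, strictly between 1 and q = 5
```

As lam approaches 0 the trace approaches the number of levels, and as
lam grows it shrinks toward 0, which matches the idea that the
effective degrees of freedom lie between the two extremes.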

I really don't know what the best answer is.  In a way I think it is
best to avoid trying to force a definition of degrees of freedom in
mixed models.
On Jan 16, 2008 12:17 PM, Feldman, Tracy <tsfeldman at noble.org> wrote: