Extracting Standard Errors of Uncorrelated Random Effects?

3 messages · Douglas Bates, Reinhold Kliegl, Andrew Robinson

#
Douglas Bates, replying to Derek Dunfield's message of Wed, Dec 14, 2011 at 2:42 PM (dunfield at mit.edu):
There are differences of opinion on this.  I have spent the last 30+
years in the Department of Statistics at the University of Wisconsin -
Madison, a department that was founded by George Box.  To me it is
natural to pursue parsimony in a model, which means that I delete
terms that do not appear to be contributing significantly to the model
fit.  Like the quote attributed to Einstein, "Make things as simple as
possible, but no simpler."

In fact, in this particular case it is of interest to test the
hypothesis of no correlation of the random effects because the
experimenters want to know if they can predict the extent of sleep
deprivation's effect on response time from the initial response time.
(I.e., are those who have faster response times initially less
affected by sleep deprivation?)
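
For concreteness, the experiment described here appears to be the
sleepstudy data shipped with lme4; assuming that, a minimal sketch of
the test is a likelihood-ratio comparison of the models with and
without the correlation parameter:

library(lme4)

## full model: correlated random intercept and slope for each subject
m1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

## reduced model: independent random intercept and slope
## (written (Days || Subject) in later versions of lme4)
m0 <- lmer(Reaction ~ Days + (1 | Subject) + (0 + Days | Subject),
           sleepstudy)

## one-degree-of-freedom likelihood-ratio test of the correlation;
## anova() refits both models with ML before comparing
anova(m0, m1)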

Others (feel free to chime in here, Ben) believe that "model building"
by deleting apparently insignificant terms results in overfitting of
the model, and I don't dispute that.  In some ways it depends on what
the objective in fitting the model is.  For the purposes of prediction
I want to pay attention to the bias-variance trade-off and aim for a
simple, adequate model.  For the purposes of establishing the
significance of a fixed-effects term, preliminary simplification of
the model may bias the test of that effect.

The part of the "stay with the initial model regardless" approach that
I don't like is that I am not convinced that the initial model is
necessarily a good model.
#
May I add one consideration that guides my approach in model building
and ask a question? I usually delete non-significant correlation
parameters and variance components with the argument that my current
set of data is most likely not rich enough to support stable estimates
of these model parameters. From this perspective, can I expect that
the simple model yields more stable estimates of the parameters than
the full model? The fact that I drop non-significant parameters does
not imply that I accept the null hypothesis about them. In other
words, I consider it very likely that with a larger, more reliable set
of data than the present one I would be able to estimate these
parameters (and keep them in the model).  Therefore, in a way, I
expect model complexity (and theoretical impact) to grow as I improve
the reliability of my measures or increase the database.
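
As a sketch of that workflow, continuing the (assumed) sleepstudy
example above: a variance component can be dropped and tested the same
way, keeping in mind that the null value of a variance lies on the
boundary of the parameter space, so the naive chi-square p-value from
the likelihood-ratio test is conservative; halving it is a common
rough correction.

library(lme4)

## uncorrelated model with random intercept and slope per subject
m0 <- lmer(Reaction ~ Days + (1 | Subject) + (0 + Days | Subject),
           sleepstudy)

## simpler model: drop the slope variance component entirely
m00 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy)

lrt <- anova(m00, m0)  # refits with ML; 1-df chi-square test
pchisq(lrt$Chisq[2], df = 1, lower.tail = FALSE) / 2  # boundary-corrected p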

Reinhold Kliegl
#
Andrew Robinson, replying to Douglas Bates's message of Wed, Dec 14, 2011 at 03:34:18PM -0600:
I think that it really does depend on the objective in fitting the
model and the provenance of the data.  If an experimental design has
been established and one wishes to perform some inference conditional
on this design, then it is essential that the design be reflected in
the model structure.  Hence, in my opinion, testing some random
effects for inclusion makes no sense, but others are fair game, and
which is which depends on the design.
 
Cheers

Andrew