
Random vs. fixed effects

Here's my question for the group:

  Given that it is a reasonable *philosophical* position to say 'treat
effects that are philosophically random as random no matter what, and
leave them in the model even if they don't appear to be statistically
significant', and given that with small numbers of random-effect levels
this approach is likely to lead to numerical difficulties in most (??)
mixed model packages (warnings, errors, or low -- often zero -- estimates
of the variance), what should one do?  (Suppose one is in a situation
that is too complicated for classical method-of-moments approaches --
crossed designs, highly unbalanced data, GLMMs ...)

 1. philosophy, schmilosophy.  Fit these factors as fixed effects;
anything else is too dangerous/misleading/unworkable.
 2. proceed with the 'standard' mixed model (lme4, nlme, PROC MIXED,
...) and hope it doesn't break.  Ignore warnings.
 3. use Bayesian-computational approaches (MCMCglmm, WinBUGS, AD Model
Builder with post-hoc MCMC calculation? Data cloning?)?  Possibly with
half-Cauchy priors on the variance-component scale parameters, as
recommended by Gelman [Bayesian Analysis (2006) 1, Number 3, pp. 515-533]?
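To make the "low estimates of the variance" problem concrete, here is a
minimal base-R sketch (my own illustration, not from any of the packages
above) of the simplest balanced one-way case: with only a handful of
random-effect levels, the classical ANOVA/method-of-moments estimate of
the between-group variance is frequently negative, i.e. would be truncated
to zero -- the same boundary problem that surfaces as warnings or singular
fits in likelihood-based packages.  All numbers (4 groups, true SDs, etc.)
are arbitrary choices for the demonstration:

```r
## Simulate balanced one-way data y_ij = b_i + e_ij and count how often
## the method-of-moments estimate of sigma_b^2 comes out non-positive.
set.seed(1)
n_groups <- 4    # few random-effect levels -- the problematic case
n_per    <- 5    # observations per group
sigma_b  <- 0.5  # true between-group SD
sigma_e  <- 1.0  # true residual SD
n_sim    <- 2000

hit_zero <- 0
for (s in seq_len(n_sim)) {
  g <- gl(n_groups, n_per)
  y <- rnorm(n_groups, sd = sigma_b)[g] +
       rnorm(n_groups * n_per, sd = sigma_e)
  ms <- anova(lm(y ~ g))$"Mean Sq"        # MSB (between), MSW (within)
  sb2_hat <- (ms[1] - ms[2]) / n_per      # MoM estimate of sigma_b^2
  if (sb2_hat <= 0) hit_zero <- hit_zero + 1
}
cat(sprintf("Proportion of simulations with sigma_b^2 estimated <= 0: %.2f\n",
            hit_zero / n_sim))
```

With these settings a substantial fraction of datasets yields a
non-positive estimate even though the true sigma_b is not zero; increasing
n_groups makes the problem shrink, which is the crux of the small-number-
of-levels issue.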
Gabor Grothendieck wrote: