Random vs. fixed effects
The answer is effect-size dependent, is it not? If you fit the random effect and the algorithm works without failure, why not use it? If it doesn't work, you have a faulty tool for estimation. Punting to a fixed model is one way out of the problem. Another is an analysis matched on the random factor. Pragmatism is certainly an issue.

But what if you have 10 centers as a factor with known correlation issues? If you analyze with one set of predictors, missing values leave you with only 5 centers, so you treat center as a fixed effect with 5 levels. If you use another set of predictors, you have all 10 levels, so you treat center as a random effect with a variance. Isn't intellectual consistency an issue here too? How do you explain this in the executive summary?

One thing you can do if the mixed modeling fails is to use the standard deviation among levels of the random-treated-as-fixed factor as an estimate of the random-effect standard deviation. This would at least maintain consistency of concept.

Note that I'm not a mixed modeling expert, so my opinions may not be worth much.
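A minimal sketch of that fallback, assuming a hypothetical data frame 'dat' with response 'y' and factor 'center' (my illustration, not a prescribed recipe):

## Fit the random-treated-as-fixed factor with no intercept,
## so each coefficient is a center mean; then take the SD of
## those means as a crude between-center SD estimate.
fit_fixed <- lm(y ~ center - 1, data = dat)
sd(coef(fit_fixed))

Note that this SD includes within-center sampling error on top of the true between-center variation, so it will tend to overstate the variance component; it is a consistency-of-concept device, not an unbiased estimator.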
At 02:11 PM 4/23/2010, Ben Bolker wrote:
Here's my question for the group: Given that it is a reasonable *philosophical* position to say 'treat philosophically random effects as random no matter what, and leave them in the model even if they don't appear to be statistically significant', and given that with small numbers of random-effect levels this approach is likely to lead to numerical difficulties in most (??) mixed model packages (warnings, errors, or low estimates of the variance), what should one do? (Suppose one is in a situation that is too complicated to use classical method-of-moments approaches -- crossed designs, highly unbalanced data, GLMMs ...)

1. Philosophy, schmilosophy. Fit these factors as a fixed effect; anything else is too dangerous/misleading/unworkable.

2. Proceed with the 'standard' mixed model (lme4, nlme, PROC MIXED, ...) and hope it doesn't break. Ignore warnings.

3. Use Bayesian-computational approaches (MCMCglmm, WinBUGS, AD Model Builder with post-hoc MCMC calculation? Data cloning?), possibly with half-Cauchy priors on the variance as recommended by Gelman [Bayesian Analysis (2006) 1, Number 3, pp. 515-533]?
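For concreteness, a sketch of what option 2 looks like in lme4, and of the symptom Ben describes; the data frame 'dat' and variables 'y', 'x', 'center' are hypothetical stand-ins:

## Fit a random intercept per center and inspect the variance
## component; with few levels the estimate often lands at or
## near zero (a boundary fit).
library(lme4)
fit_mixed <- lmer(y ~ x + (1 | center), data = dat)
VarCorr(fit_mixed)       # estimated between-center SD
isSingular(fit_mixed)    # TRUE flags a zero-variance boundary fit (recent lme4)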
================================================================
Robert A. LaBudde, PhD, PAS, Dpl. ACAFS    e-mail: ral at lcfltd.com
Least Cost Formulations, Ltd.              URL: http://lcfltd.com/
824 Timberlake Drive                       Tel: 757-467-0954
Virginia Beach, VA 23464-3239              Fax: 757-467-2947
"Vere scire est per causas scire"
================================================================