
testing fixed effects in binomial lmer...again?

On Jan 8, 2008 5:38 AM, Achaz von Hardenberg <fauna at pngp.it> wrote:
Yes, that is the best choice in lmer.  (In the development version it
is, in fact, the only choice.)
The change in the log-likelihood between two nested models is, in my
opinion, the most sensible test statistic for comparing the models.
However, it is not clear how one should convert this test statistic to
a p-value.  The use of the chi-squared distribution is based on
asymptotic results and can give an "anti-conservative" (i.e. lower
than would be obtained through a randomization test or via simulation)
p-value for small samples.  As far as I can see, the justification for
the use of AIC as a comparison criterion is even more vague.
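[The discussion above concerns R's lmer, but the anti-conservative behaviour of the chi-squared approximation in small samples can be illustrated in a self-contained Python sketch. This is not code from the thread: it uses the simplest possible case, a likelihood-ratio test for a single binomial proportion with made-up numbers, and compares the asymptotic chi-squared p-value with one obtained by simulating the null distribution of the statistic.]

```python
import numpy as np
from scipy.special import xlogy   # xlogy(0, 0) == 0, avoids log(0) issues
from scipy.stats import chi2

# Illustrative small sample: k successes out of n trials; H0: p = 0.5
n, k = 12, 10

def loglik(k, n, p):
    return xlogy(k, p) + xlogy(n - k, 1.0 - p)

def lrt(k, n, p0=0.5):
    # 2 * (log-likelihood at the MLE k/n minus log-likelihood under H0)
    return 2.0 * (loglik(k, n, k / n) - loglik(k, n, p0))

obs = lrt(k, n)
p_asymptotic = chi2.sf(obs, df=1)       # the usual chi-squared p-value

# Reference distribution of the statistic by simulation under H0
rng = np.random.default_rng(1)
ks = rng.binomial(n, 0.5, size=200_000)
null_stats = lrt(ks, n)
p_simulated = float(np.mean(null_stats >= obs - 1e-9))

print(f"asymptotic p = {p_asymptotic:.4f}, simulated p = {p_simulated:.4f}")
# The asymptotic p-value comes out smaller than the simulated one here,
# i.e. the chi-squared approximation is anti-conservative for this n
```

[For a mixed model the same idea becomes a parametric bootstrap: simulate data from the fitted null model, refit both models to each simulated data set, and compare the observed change in log-likelihood to that reference distribution.]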

For linear fixed-effects models one can compensate for small samples
by changing from z-tests to t-tests and from chi-squared tests to F
tests.  The exact theory breaks down for mixed-effects models or for
generalized linear models and is even more questionable for
generalized linear mixed models.  As Ben Bolker mentioned, I think
that one way to deal with the hypothesis testing question while
preserving the integrity of the model is to base inferences on a
Markov-chain Monte Carlo sample from the (Bayesian) posterior
distribution of the parameters.
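[As a toy illustration of inference from an MCMC sample of the posterior (the actual lmer/mcmcsamp machinery is R code and much more elaborate), here is a hedged Python sketch: a random-walk Metropolis sampler for the posterior of a single binomial proportion under a uniform prior, with hypothetical data. The point is only the mechanism: draw a dependent sample from the posterior, then read intervals straight off the sample.]

```python
import numpy as np

# Hypothetical data: k successes in n Bernoulli trials
k, n = 10, 12

def log_post(p):
    # Log-posterior under a Uniform(0, 1) prior: just the binomial log-likelihood
    if not 0.0 < p < 1.0:
        return -np.inf
    return k * np.log(p) + (n - k) * np.log(1.0 - p)

rng = np.random.default_rng(0)
p, draws = 0.5, []
for _ in range(60_000):
    proposal = p + rng.normal(0.0, 0.1)       # random-walk proposal
    # Metropolis accept/reject step
    if np.log(rng.uniform()) < log_post(proposal) - log_post(p):
        p = proposal
    draws.append(p)

samples = np.array(draws[10_000:])            # discard burn-in
lo, hi = np.quantile(samples, [0.025, 0.975])
print(f"posterior mean {samples.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```

[In a real GLMM the target would be the joint posterior of the fixed effects and variance components, and one would check mixing and use a better sampler, but the resulting sample is used the same way: quantiles of the sampled parameter give credible intervals without appeal to asymptotic chi-squared theory.]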

Code for MCMC samples for parameters in GLMMs is not yet fully
developed (or documented).  In the meantime I would use the likelihood
ratio tests but exercise caution in reporting p-values for
small-sample cases.