
Random effects in clmm() of package ordinal

On 14-08-29 07:31 AM, Christian Brauner wrote:
It sounds like something else is going on.  In my experience the
advice to not test random effects is based more on philosophy (the
random effects are often a nuisance variable that is implicit in the
experimental design, and is generally considered necessary for
appropriate inference -- see e.g. Hurlbert 1984 _Ecological Monographs_ on
"sacrificial pseudoreplication") than on the difficulties of inference
for random effects (boundary effects, finite-size effects, etc.).  A
large p-value either means that the point estimate of the RE variance is
small, or that its confidence interval is very large (or both);
especially in the former case, it is indeed surprising that its
inclusion should change inference so much.

  That's about as much as I think it's possible to say without more
detail.  I would suggest double-checking your data and model diagnostics
(is there something funny about the data and model fit?) and comparing
point estimates and confidence intervals from the different fits to try
to understand what the different models are saying about the data (not
just why the p-value changes so much).
Are you using different types of p-value estimation in different models
(Wald vs LRT vs ... ?) ?  Are you inducing complete separation or severe
imbalance by including the RE?  Is one of your random-effect levels
confounded with your main effect (an example along these lines came up
on the list a few months ago:
https://stat.ethz.ch/pipermail/r-sig-mixed-models/2014q2/022188.html )?
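The comparison suggested above could be sketched roughly as follows (a non-authoritative sketch, not the poster's actual analysis; `d`, `resp`, `trt`, and `grp` are hypothetical names for the data frame, ordered response, fixed effect, and grouping factor):

```r
library(ordinal)

m0 <- clm(resp ~ trt, data = d)               # no random effect
m1 <- clmm(resp ~ trt + (1 | grp), data = d)  # random intercept for grp

## Likelihood-ratio test of the random-effect variance; because the
## null value (variance = 0) lies on the boundary of the parameter
## space, the naive LRT p-value is conservative (halving it is a
## common rough correction)
anova(m0, m1)

## Compare point estimates and confidence intervals across the two
## fits, not just the p-values
summary(m0)
confint(m0)   # profile CIs for the fixed effects
summary(m1)   # do estimates/SEs change drastically with the RE?
```

If the fixed-effect estimates shift substantially between `m0` and `m1`, that itself is informative about confounding between the grouping factor and the main effect.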

  good luck
    Ben Bolker