Random effects in clmm() of package ordinal
Hi Jarrod,
Hi Malcolm,
Thank you Jarrod! In the meantime I wrote Rune and asked him whether
clmm() is currently able to fit random-slope models and, if so, whether he
could update the reference manual accordingly. He answered that it is
indeed possible to fit random-slope models with clmm(). To get an
up-to-date version, including a reference manual that reflects these
changes, one should install the ordinal package from R-Forge:
install.packages("ordinal",
                 repos = "http://r-forge.r-project.org",
                 type = "source")
Best,
Christian
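[Editor's note] Jarrod's caveat in the quoted message below, that with a truly continuous covariate the latent variable's standard deviation varies with the covariate, can be sketched in base R. All numbers here (threshold, fixed slope, variance components) are hypothetical, chosen only to illustrate the point, and are not estimates from the wine models:

```r
# Hedged sketch: with a random slope on a continuous covariate x, the
# latent variable's variance is a function of x, so the probit cdf must
# be evaluated with covariate-specific standard deviations.
theta <- 0.5                        # hypothetical threshold
beta  <- 1.0                        # hypothetical fixed slope
v0 <- 1.0; v1 <- 0.36; c01 <- 0.2   # hypothetical RE (co)variances
x  <- c(0, 1, 2)

# latent variance = residual (1 for probit) + intercept variance
#                   + 2 * x * covariance + x^2 * slope variance
lat_sd <- sqrt(1 + v0 + 2 * x * c01 + x^2 * v1)

# marginal cumulative probability P(Y <= 1 | x) under these assumptions;
# note it needs a different scale at every value of x:
p_marg <- pnorm((theta - beta * x) / lat_sd)
```

With these (made-up) variance components the latent SD grows with x, which is exactly why a single fixed link scale would not suffice.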
On Fri, Aug 29, 2014 at 06:51:07PM +0100, Jarrod Hadfield wrote:
Hi Christian,

I think clmm is dealing with the random 'slopes' correctly (or at least it
gives the same estimates as MCMCglmm) when temp is categorical (see
below). Without knowing much about clmm I can imagine it might fail when
temp is truly continuous, because the variance of each latent variable
varies as a function of the covariate, and so (in the probit case) the cdf
of the normal (i.e. the probit link) would need to be evaluated using
different standard deviations.

Cheers,

Jarrod

fm2 <- clmm(rating ~ temp + contact + (temp - 1 | judge),
            data = wine, link = "probit",
            Hess = TRUE)
# Note that I have changed temp|judge to temp-1|judge because I think it
# makes more sense with categorical variables. I have also used the probit
# link in order to compare it to MCMCglmm's "threshold" family.

prior2 <- list(R = list(V = 1, fix = 1),
               G = list(G1 = list(V = diag(2), nu = 2,
                                  alpha.V = diag(2) * 100,
                                  alpha.mu = c(0, 0))))

fm2.mcmc <- MCMCglmm(rating ~ temp + contact, random = ~us(temp):judge,
                     data = wine, family = "threshold", prior = prior2,
                     nitt = 13000 * 10, thin = 10 * 5, burnin = 3000 * 10,
                     verbose = FALSE)

par(mfrow = c(2, 2))
hist(fm2.mcmc$Sol[, 2], breaks = 50)
abline(v = coef(fm2)[5], col = "red")
hist(fm2.mcmc$Sol[, 3], breaks = 50)
abline(v = coef(fm2)[6], col = "red")
hist(fm2.mcmc$VCV[, 1], breaks = 50)
abline(v = VarCorr(fm2)$judge[1], col = "red")
hist(fm2.mcmc$VCV[, 4], breaks = 50)
abline(v = VarCorr(fm2)$judge[4], col = "red")

Quoting Christian Brauner <christianvanbrauner at gmail.com> on Fri, 29
Aug 2014 19:20:42 +0200:
Hi Malcolm,
Interesting. I just read the reference manual for the ordinal package (v.
July 2, 2014) and indeed at the beginning it states "random slopes (random
coefficient models) are not yet implemented" (p. 3). If this is indeed true
then I'm puzzled by some things:
(a) later on he explains the usage of the extractor function VarCorr() for
class "clmm" and states "[...] a model can contain a random intercept (one
column) or a random intercept and a random slope (two columns) for the
same grouping factor" (p. 52).
(b) at first glance this seems inconsistent with the output of clmm():
library(ordinal)
fm2 <- clmm(rating ~ temp + contact + (temp | judge),
            data = wine,
            Hess = TRUE)
Now look at
summary(fm2)
Random effects:
Groups Name Variance Std.Dev. Corr
judge (Intercept) 1.15608 1.0752
tempwarm 0.02801 0.1674 0.649
Number of groups: judge 9
and
ranef(fm2)
$judge
(Intercept) tempwarm
1 1.61166859 0.20664909
2 -0.49269459 -0.06317359
3 0.93254466 0.11957142
4 -0.08571746 -0.01099074
5 0.13893821 0.01781474
6 0.44710565 0.05732815
7 -1.77974875 -0.22820043
8 -0.26589478 -0.03409318
9 -0.51044493 -0.06544955
That looks like a random-intercept and random-slope model.
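[Editor's note] A quick base-R check on the conditional modes printed above: the intercept and slope columns are almost perfectly proportional across the nine judges, i.e. the per-judge slope effects are essentially a rescaled copy of the per-judge intercepts. That is worth keeping in mind when interpreting this output:

```r
# Conditional modes copied from the ranef(fm2) output above.
int <- c( 1.61166859, -0.49269459,  0.93254466, -0.08571746,  0.13893821,
          0.44710565, -1.77974875, -0.26589478, -0.51044493)
slp <- c( 0.20664909, -0.06317359,  0.11957142, -0.01099074,  0.01781474,
          0.05732815, -0.22820043, -0.03409318, -0.06544955)

cor(int, slp)     # essentially 1
range(slp / int)  # the ratio is nearly constant across all nine judges
```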
What I accidentally did in my models (illustrated here with his example)
was to replace "temp" not in the fixed part but in the random part:
fm2 <- clmm(rating ~ temp + contact + (temp | judge),
            data = wine,
            Hess = TRUE)
fm2a <- clmm(rating ~ temp + contact + (1 | judge),
             data = wine,
             Hess = TRUE)
anova(fm2a, fm2)
The anova output is fairly similar to mine; the p-value is extremely high
as well.
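[Editor's note] When the null hypothesis puts a variance component on the boundary of the parameter space, the naive chi-squared p-value from anova() is conservative; a common approximation for adding one variance plus one correlation parameter is an equal mixture of chi-squared distributions with 1 and 2 degrees of freedom (Self and Liang, 1987). A hedged base-R sketch, with a made-up LRT statistic rather than the one from the wine models:

```r
# Hypothetical likelihood-ratio statistic, i.e. a stand-in for
# 2 * (logLik(fm2) - logLik(fm2a)); not computed from the actual fits.
lrt_stat <- 0.02

# naive reference distribution: chi^2 with 2 df
p_naive <- pchisq(lrt_stat, df = 2, lower.tail = FALSE)

# boundary-corrected approximation: equal mixture of chi^2_1 and chi^2_2
p_mix <- 0.5 * pchisq(lrt_stat, df = 1, lower.tail = FALSE) +
         0.5 * pchisq(lrt_stat, df = 2, lower.tail = FALSE)
```

The mixture p-value is always smaller than the naive one, though with an LRT statistic this tiny the substantive conclusion would not change.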
All the standard stuff works without a warning, e.g.:
fm2b <- clmm(rating ~ temp + contact + (1 | judge) + (temp + 0 | judge),
             data = wine,
             Hess = TRUE)
So what is happening here? Anybody got an idea?
Best,
Christian
On Fri, Aug 29, 2014 at 09:41:26AM -0700, Malcolm Fairbrother wrote:
Hi Christian,

The ordinal package (which is otherwise very handy) does not allow for
random slopes (only random intercepts), so I don't think you can have
tested what you think you tested using that package. You could try the
MCMCglmm package instead, which allows for ordinal models *and* random
slopes.

Regarding random slopes more generally, Barr et al.'s (2013) paper "Random
effects structure for confirmatory hypothesis testing: Keep it maximal"
shows pretty definitively that not allowing for random slopes can often be
anticonservative. So if there's a gap between the p-values you get with and
without random slopes, I'd be more inclined to trust the value *with*
random slopes.

Hope that's useful.

Cheers,
Malcolm
Date: Fri, 29 Aug 2014 13:31:03 +0200
From: Christian Brauner <christianvanbrauner at gmail.com>
To: r-sig-mixed-models at r-project.org
Subject: [R-sig-ME] Random effects in clmm() of package ordinal

Hello,

When fitting linear mixed models it is often suggested that testing for
random effects is not the best idea, mainly because the values of the
random-effects parameters lie at the boundary of the parameter space.
Hence, it is preferred not to test for random effects and rather to judge
the inclusion of a random effect by the design of the experiment, or, if
one really wants to test, to use computationally intensive methods like
parametric bootstraps. I have adopted this strategy of not testing for
random effects with linear mixed models.

Now I'm in a situation where I need to analyse ordinal data in a
repeated-measures design. The package I decided would best suit this
purpose is the ordinal package (suggestions of alternatives are of course
welcome). And this got me wondering about random effects again. I was
testing a random effect (in fact by accident, as I did a faulty automated
regexp substitution) and it got a p-value of 0.99. More precisely, I was
testing for the significance of a random slope in contrast to only
including a random intercept. As the boundary-of-parameter-space argument
is about maximum likelihood estimation in general, it also applies to the
proportional-odds cumulative mixed model.

But, and here is where I'm unsure what to do in this particular case, the
inclusion of a random slope in the clmm will turn a p-value of 0.004 into
0.1 for my main effect. In contrast, all other methods (e.g. treating my
response not as an ordered factor but as a continuous variable and using a
repeated-measures anova) give me a p-value of 0.004. This is the only
reason I'm concerned: the difference worries me and I'm unsure what to do.
Is it advisable to test for a random effect here?

Best,
Christian