
Error message in the ordinal package when running the cumulative link mixed models function clmm()

Dear Sverre,

You seem to be using clmm as it was intended to be used, and my best
guess is that the problems you observe are related to data structure
and model identifiability. I have a few ideas that might help, but for
a full diagnosis I would need to see the data.

1) My guess is that the clmm did not converge and that the last
warning is about exactly that. You probably cannot see it unless you
suppress the warnings from the random-effects update in the inner
loop. You can do this by appending control =
clmm.control(innerCtrl = "noWarn") to your clmm call.
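As a sketch, with a placeholder formula and data standing in for yours, the call would look something like:

```r
library(ordinal)

## Suppress inner-loop warnings so a final convergence warning is visible.
## (Response, Condition, Subject and dataset are placeholders for your
## own variables, not taken from your message.)
fm <- clmm(Response ~ Condition + (1 | Subject), data = dataset,
           control = clmm.control(innerCtrl = "noWarn"))
```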

2) The condition number of the Hessian in the fixed-effects clm
indicates that the likelihood function is ill-defined and possibly
has a (near-) ridge. Looking at the threshold estimates, the
first two and the last two are very close and have rather peculiar
codings - is that intended? I would expect the likelihood to be
better defined if you collapse the first two and the last two response
categories, and this might be enough to make the clmm converge to a
well-defined optimum. Collapsing categories does not change the nature
of the model interpretations, so it is safe to do so - especially in a
case like this where the thresholds are indistinguishable for all
practical purposes.
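As a hypothetical sketch (I am assuming a 7-level ordered response with made-up labels; adapt the level vector to your own factor), assigning duplicated labels to levels() merges those categories:

```r
## Collapse the first two and the last two categories of an ordered
## factor by assigning duplicated level labels (labels are invented):
dataset$Resp2 <- dataset$Response
levels(dataset$Resp2) <- c("1-2", "1-2", "3", "4", "5", "6-7", "6-7")

## Then refit the clm/clmm with Resp2 as the response.
```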

2.2) Starting with the "full" model does not always work in mixed
effects models: You could try to fit the clmm without fixed effects at
all. If the convergence problem persists, this means that the
thresholds are most likely the problem. Given the significance of your
fixed effects in the clm, I don't think the fixed effects are the
problem (though removing Trial could help). You could take a look at
the correlation between parameter estimates from
summary(FullModel.clm, correlation = TRUE) to get a better feeling for
which parameters are hard to distinguish.
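That is (FullModel.clm being the fitted clm object from your message):

```r
## Print the correlation matrix of the parameter estimates; entries
## close to +/-1 flag pairs of parameters the data can hardly separate.
summary(FullModel.clm, correlation = TRUE)
```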

3) Sometimes the likelihood function as defined by the Laplace
approximation is not sufficiently well-behaved on the way to the
optimum for the default optimizer in clmm (ucminf) to converge
smoothly. This can often be rectified by using the more accurate
adaptive Gauss-Hermite quadrature (AGQ) approximation. If you append
nAGQ = 10 to your clmm call you will ask for 10 quadrature nodes, which
usually gives reasonable precision.
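Again with a placeholder formula standing in for yours:

```r
## Request adaptive Gauss-Hermite quadrature with 10 nodes instead of
## the Laplace approximation (formula and data are placeholders):
fm <- clmm(Response ~ Condition + (1 | Subject), data = dataset,
           nAGQ = 10)
```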

4) In hard-to-optimize situations it should help to allow more
iterations in the inner loop (random effect update), and you can do
that by setting, e.g., control = clmm.control(maxIter = 200,
maxLineIter = 200). Decreasing gradTol to 1e-4 or 1e-5 might also
help. That being said, I have never seen a case where this was
necessary for a well-defined clmm, so I suggest that you consider the
points above first.
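Combined in one call (again a sketch with a placeholder formula):

```r
## Allow more inner-loop iterations and tighten the gradient tolerance:
fm <- clmm(Response ~ Condition + (1 | Subject), data = dataset,
           control = clmm.control(maxIter = 200, maxLineIter = 200,
                                  gradTol = 1e-5))
```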

In CLMs, model identifiability is a much more frequent issue than in
GLMs, LMs etc., and this is magnified when mixed effects versions are
in play. To provide more ideas, I would need more information; in
particular, the results of str(dataset) and sessionInfo() would be
valuable. Ultimately I would need the data - if you feel
uncomfortable sharing them with the list, you can send them to me
privately.

Cheers,
Rune

PS: You will not be able to have two random effect terms (Subject and
Pair) in clmm in its current implementation.
On 12 January 2011 03:49, Sverre Stausland <johnsen at fas.harvard.edu> wrote: