
Error message in running CLMM models from Ordinal package: "optimizer nlminb failed to converge"

Dear Rune, 

Thank you very much for your thorough answer; it is of great help. I
realised that the random effect structure specified in my models is very
complicated (model fm5 here) and could be the cause of all my problems.
I think my reliance on such structures comes from not fully
understanding the thinking behind random effects. In the literature on
linear mixed models, Barr, Levy, Scheepers & Tily (2013) and, to some
degree, Matuschek, Kliegl, Vasishth, Baayen & Bates (2017) argue that
intercept-only models inflate the Type I error rate, and that random
slopes should therefore be considered when building linear mixed models
to avoid that inflation. This was the approach I took with my data. Of
course those authors discuss linear mixed models, while I am using the
CLMM framework, and additionally I investigate interactions in my fixed
effects, which tend not to be used as examples in mixed models (I assume
due to the complexity?).

My approach to choosing the random effect structure for my models starts
with assuming the maximal complexity of random effects that would make
sense for the design. In this experiment I investigate aspects of
confirmation bias, and my hypothesis is that participants behave
differently based on their pre-existing beliefs and their engagement
with those beliefs. Therefore I started with the random effect structure
(preexisting.belief * engagement | id) + (engagement | item); however, I
would also have been happy with the random effect structure
(preexisting.belief | id). I then compare the fm5 model, using the
anova() function, to the simplest alternative, the random intercept
model you suggested as fm1:



fm1 <- clmm(value.statement ~ preexisting.belief * engagement +
              (1 | id) + (1 | item),
            data = ucl.ordered)
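For completeness, this is how I would write the maximal model and the
comparison (a sketch only: fm5's exact call isn't quoted in this message,
so this is my reconstruction from the structure described above, on the
same ucl.ordered data):

```r
library(ordinal)

# Maximal random effect structure described above (reconstruction of fm5):
# by-participant slopes for the preexisting.belief-by-engagement
# interaction, and by-item slopes for engagement
fm5 <- clmm(value.statement ~ preexisting.belief * engagement +
              (preexisting.belief * engagement | id) +
              (engagement | item),
            data = ucl.ordered)

# Likelihood ratio test of the random intercept model against the
# maximal model
anova(fm1, fm5)
```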



This gives me a p-value of < 2.2e-16, which I understood to mean a better
fit for the random slope model. I then move on to more complex
alternative models such as fm3 and fm4 and do the same comparison. In
all cases fm5 shows the better fit: lower AIC, higher log-likelihood and
a small p-value. I took this as an indication that, in terms of model
selection, there is enough evidence to prefer the random effect
structure in fm5. Now it could be that, due to the complexity of the
random structure in fm5 as well as my fixed effects, this might not be
the best approach and could actually give me misleading results. Is my
approach flawed in assuming such a complex random effect structure for
this design (which is itself complex) and following the steps described?
Is there a rule of thumb for how complex one should go with these
models?

Finally, I am a bit confused about the difference between specifying
(1 | item:engagement) and (engagement | item) as a random effect. If I
assume that the results of the experiment are affected by how engagement
interacts with particular items (e.g. for some items engagement will
vary more than for others, and that will affect the items' behaviour),
wouldn't the (engagement | item) specification be more appropriate?
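To make the contrast concrete, the two specifications would look like
this side by side (again a sketch on the same data; model names fmA and
fmB are just labels I've made up here):

```r
# (1 | item:engagement): one random intercept per item-by-engagement
# combination -- a single variance component, no correlations, and the
# item effects at different engagement levels are treated as independent
fmA <- clmm(value.statement ~ preexisting.belief * engagement +
              (1 | id) + (1 | item:engagement),
            data = ucl.ordered)

# (engagement | item): item-specific intercepts AND item-specific
# engagement slopes, plus an estimated intercept-slope correlation --
# so item effects at different engagement levels are allowed to covary
fmB <- clmm(value.statement ~ preexisting.belief * engagement +
              (1 | id) + (engagement | item),
            data = ucl.ordered)
```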


Once more thank you for your help and sorry for the long questions.
 
Best regards,
Davis
On 27/02/18 10:15, "Rune Haubo" <rune.haubo at gmail.com> wrote: