
Slight differences in fitted coefficients in lme4_1.0-6 compared to lme4_0.999999-2

One thing I noticed is that the intercepts of my models seem quite different from what I got before. Is the new version no longer using dummy coding by default, or where else could that come from? @Jake: how do I actually specify the bobyqa optimizer?

Cheers,
Tom

-----Original Message-----
From: r-sig-mixed-models-bounces at r-project.org [mailto:r-sig-mixed-models-bounces at r-project.org] On Behalf Of Jake Westfall
Sent: 07 February 2014 20:10
To: r-sig-mixed-models at r-project.org
Subject: Re: [R-sig-ME] Slight differences in fitted coefficients in lme4_1.0-6 compared to lme4_0.999999-2

Not sure if this thread is the time/place for me to bring this up, but here goes... I *routinely* find that the new Nelder-Mead optimizer in lme4 >= 1.0 provides worse solutions than the old bobyqa optimizer -- "worse" in the sense that, comparing the same model fitted to the same dataset using NM vs. bobyqa, the coefficients are noticeably different and the deviance for the former model is noticeably higher. When I switch to bobyqa I pretty much reproduce the results of my models fitted under lme4 < 1.0... and bobyqa is faster too! At this point, I've gotten to where I just always instruct lme4 to use bobyqa and don't even check anymore to see what Nelder-Mead comes up with.

One very important thing to mention here is that the overwhelming majority of models that I fit involve crossed random effects. So maybe the new Nelder-Mead optimizer fairly consistently outperforms bobyqa for nested random effects models, and this is the motivation for making it the new lme4 default, but in my experience, for the kind of models that I fit, bobyqa pretty much always does better.
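For anyone following along, here is a minimal sketch of how switching optimizers looks in lme4 >= 1.0, using the `sleepstudy` data that ships with the package (my own models use crossed random effects, but the control syntax is the same):

```r
library(lme4)

# Default fit (Nelder-Mead is the default optimizer in lme4 1.0.x)
m_nm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# Same model, explicitly requesting bobyqa via lmerControl()
m_bq <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy,
             control = lmerControl(optimizer = "bobyqa"))

# If Nelder-Mead has stalled short of the optimum, its deviance
# will be noticeably higher than bobyqa's
deviance(m_nm)
deviance(m_bq)
```

For GLMMs the analogous argument is `control = glmerControl(optimizer = "bobyqa")` in `glmer()`.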

Jake
_______________________________________________
R-sig-mixed-models at r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models