
Slight differences in fitted coefficients in lme4_1.0-6 compared to lme4_0.999999-2

8 messages · Ben Bolker, Jake Westfall, Tom Wenseleers, Steve Walker, Ulf Köther

#
I think you should be able to install lme4.0 from
http://lme4.r-forge.r-project.org/repos/  to reproduce previous outputs.
 You *might* be able to reproduce previous results by setting
control=lmerControl(optimizer="optimx",optCtrl=list(method="nlminb")),
but I don't think we could guarantee that -- too much of the internal
machinery has changed too radically.
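
Concretely, a minimal sketch of both suggestions (the sleepstudy data
shipped with lme4 stands in for a real model; the optimx package has
to be installed separately):

install.packages("lme4.0",
                 repos = "http://lme4.r-forge.r-project.org/repos")

library(lme4)
library(optimx)
fm_nlminb <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
                  control = lmerControl(optimizer = "optimx",
                                        optCtrl = list(method = "nlminb")))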

  Ben Bolker
#
One thing I noticed is that the intercepts of my models seem quite different from what I got before. Is the new version not using dummy coding by default or something? Or where else could that come from? @Jake: how do I specify the bobyqa optimizer, actually?

Cheers,
Tom

-----Original Message-----
From: r-sig-mixed-models-bounces@r-project.org [mailto:r-sig-mixed-models-bounces@r-project.org] On Behalf Of Jake Westfall
Sent: 07 February 2014 20:10
To: r-sig-mixed-models@r-project.org
Subject: Re: [R-sig-ME] Slight differences in fitted coefficients in lme4_1.0-6 compared to lme4_0.999999-2

Not sure if this thread is the time/place for me to bring this up, but here goes... I *routinely* find that the new Nelder-Mead optimizer in lme4 >= 1.0 provides worse solutions than the old bobyqa optimizer -- "worse" in the sense that, comparing the same model fitted to the same dataset using NM vs. bobyqa, the coefficients are noticeably different and the deviance for the former model is noticeably higher. When I switch to bobyqa I pretty much reproduce the results of my models fitted under lme4 < 1.0... and bobyqa is faster too! At this point, I've gotten to where I just always instruct lme4 to use bobyqa and don't even check anymore to see what Nelder-Mead comes up with.

One very important thing to mention here is that the overwhelming majority of models that I fit involve crossed random effects. So maybe the new Nelder-Mead optimizer fairly consistently outperforms bobyqa for nested random effects models, and this is the motivation for making it the new lme4 default, but in my experience, for the kind of models that I fit, bobyqa pretty much always does better.

Jake
_______________________________________________
R-sig-mixed-models@r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
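
A minimal sketch of the kind of side-by-side check Jake describes,
with the built-in sleepstudy data standing in for a real
crossed-random-effects dataset (fitting with REML = FALSE so the
deviances are directly comparable):

library(lme4)
fm_nm  <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
               REML = FALSE,
               control = lmerControl(optimizer = "Nelder_Mead"))
fm_bob <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
               REML = FALSE,
               control = lmerControl(optimizer = "bobyqa"))
deviance(fm_nm) - deviance(fm_bob)   ## positive => Nelder-Mead fit is worse
cbind(NM = fixef(fm_nm), bobyqa = fixef(fm_bob))  ## compare coefficients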
#
The development version of lme4 on GitHub now has the default optimizer
switched back to bobyqa. To explicitly set the optimizer, use something
like:

fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
             control = lmerControl(optimizer = "bobyqa"))
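
For GLMMs the analogous control goes through glmerControl(); a minimal
sketch using the cbpp data shipped with lme4:

gm1 <- glmer(cbind(incidence, size - incidence) ~ period + (1 | herd),
             data = cbpp, family = binomial,
             control = glmerControl(optimizer = "bobyqa"))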


Cheers,
Steve
#
Hi Jake,
So is setting control=lmerControl(optimizer="bobyqa") enough then?

Cheers,
Tom

#
@Jake: I am not in any sense a statistician or a programmer, and hence
no authority, but I can back up this observation. I often have noisy
data which in 90% of cases cannot be fitted using NM (non-convergence,
even with many iterations) but which are consistently handled by
bobyqa (with the same number of iterations)...
And I do not have the impression that bobyqa's results are inaccurate;
rather, NM simply gets stuck in many situations.

Ulf
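
A minimal sketch of giving both optimizers the same, larger budget of
function evaluations (y, x, g and dat are placeholders; maxfun is the
cap used by both built-in optimizers):

library(lme4)
fm_nm <- lmer(y ~ x + (1 | g), data = dat,
              control = lmerControl(optimizer = "Nelder_Mead",
                                    optCtrl = list(maxfun = 1e5)))
fm_bq <- lmer(y ~ x + (1 | g), data = dat,
              control = lmerControl(optimizer = "bobyqa",
                                    optCtrl = list(maxfun = 1e5)))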

#
This is all very helpful, and reinforces our decision (mentioned by
Steve Walker) to switch to a bobyqa default in an imminent release. We
had hoped to do more systematic testing, but anecdotal evidence from
many users is better than anecdotal evidence just from the problems the
authors have stumbled across.  If there are users out there who have
encountered the *opposite* scenario (Nelder-Mead works better than
bobyqa) we'd love to hear it, but we know this is harder to detect
(because N-M is the default, it is more likely that people will notice
problems with N-M and switch to bobyqa than the opposite).

Tom: default contrasts haven't changed, so I don't know what's up with
your intercept terms.  Maybe options(contrasts=...) was set formerly?
You could check attributes(old_lme4_model@frame) (in lme4.0) or
attributes(model.frame(new_lme4_model)) (in lme4) to compare ...
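
A minimal sketch of that comparison, assuming a model fm1 fitted with
the new lme4 (and recalling that R's default is still dummy/treatment
coding):

getOption("contrasts")  ## default: c("contr.treatment", "contr.poly")
lapply(model.frame(fm1), attributes)  ## per-column contrasts, if any were set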

If you are an experienced user, it would be great if you could try the
most recent development version (via
devtools::install_github("lme4","lme4")) and report unusual or
interesting results to the list ... we're particularly interested in (1)
bobyqa glitches and (2) obvious false-positive warnings from the new
convergence-testing code -- especially examples of singular fits that
report large gradients (more generally, any last-minute comments or
pleas for bug fixes/minor features should be reported to us soon). We
don't currently consider any of the issues at
https://github.com/lme4/lme4/issues?state=open release-critical, except
https://github.com/lme4/lme4/issues/120, which should be closed as soon
as we can convince ourselves there aren't too many false positives.
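
A minimal sketch of pulling any convergence-check messages out of a
fitted model, assuming the optinfo slot layout of the current
development version:

library(lme4)
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
fm1@optinfo$conv$lme4$messages  ## NULL if no convergence warnings were raised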