
upcoming changes to lme4 default optimizers (lmer only)

2 messages · Ben Bolker, Alday, Phillip

#
tl;dr If you want your results to stay identical in the upcoming
release of lme4 (1.1-20), you'll need to use
lmerControl(optimizer="bobyqa") to specify the current (but changing)
default optimizer.
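For example, a minimal sketch using the sleepstudy data shipped with lme4 (the formula here is just the standard example model, not anything specific to this change):

```r
library(lme4)

## Pin the pre-1.1-20 default optimizer explicitly so results
## stay identical across the upcoming release.
fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
           control = lmerControl(optimizer = "bobyqa"))
summary(fm)
```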

  As of 1.1-20 (headed to CRAN soon) the default optimizer for lmer
changes from "bobyqa" (Powell's BOBYQA method as implemented in the
minqa package) to "nloptwrap" (the same method as implemented in the
nloptr package).  We have found only minor changes (often for the
better) in our own tests, and we haven't seen any problems with
downstream package tests, but this modification *will* change results
from lmer; generally only by a small amount, but in some unstable cases
could lead to quite different answers. For example, here:

https://stats.stackexchange.com/questions/384528/lme-and-lmer-giving-conflicting-results/384539#384539

bobyqa gives the wrong answer (it finds a mode away from zero) while
nloptwrap gets it right. In this case the difference is a clear
improvement, but nonlinear optimization being what it is we can't
guarantee that some changes won't be for the worse.

  We have not changed the glmer default (time window is too short for
this release, which should go to CRAN by 27 Jan), but probably will in
the future.

  If you can test the development version (on GitHub) and report [here
or on the lme4 issues list] any significant problems you encounter in
the next few days, that would be great.
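If it helps, one way to get the development version is to install it straight from GitHub (this assumes the remotes package and the usual build tools are available; devtools::install_github() works equally well):

```r
## Install the development version of lme4 from its GitHub repository.
install.packages("remotes")
remotes::install_github("lme4/lme4")
```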

  cheers
    Ben Bolker
#
For psycholinguistics datasets (especially EEG and eye movements), I
have noticed that nloptwrap sometimes reports convergence failure when
"bobyqa" does not. I think the "problem" is with xtol_abs: changing
maxeval has no impact, but allowing for smaller x-step changes (i.e. a
smaller xtol_abs) does.
This is in itself perhaps indicative of other problems with the model
and data, but could once again lead to previously "converged" models not
converging in a new lme4 version. The non-converged nloptwrap models
are definitely faster to compute, though.
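To illustrate the kind of adjustment I mean (the tolerance values below are arbitrary examples for the sketch, not recommendations; xtol_abs and ftol_abs are the NLopt stopping tolerances passed through optCtrl):

```r
library(lme4)

## Tighten nloptwrap's stopping tolerances so the optimizer keeps
## taking smaller x-steps before declaring convergence.
ctrl <- lmerControl(optimizer = "nloptwrap",
                    optCtrl = list(xtol_abs = 1e-8, ftol_abs = 1e-8))
fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
           control = ctrl)
```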

I haven't tested this systematically (neither in terms of comparing fits
of bobyqa vs non-converged vs converged nloptwrap nor in terms of
datasets) and there is no free lunch when it comes to optimizers, but it
might nonetheless be convenient to list some of these optimizer-specific
parameters in ?convergence or ?lmerControl beyond their use in the
examples. If not, then this message on a public mailing list might still
be a useful hint for posterity. :)

So far I haven't noticed any issues with local optima like the one in
the CrossValidated question.

Best,

Phillip
On 22/1/19 6:57 pm, Ben Bolker wrote: