False convergence when testing random effects
On Fri, Aug 12, 2011 at 10:01 AM, David Duffy <davidD at qimr.edu.au> wrote:
On Thu, 11 Aug 2011, Finlayson, Ian wrote:
Hello, I've been building models with lmer() and keep running into false-convergence warnings. I keep encountering cases where adding one parameter seems to trigger the warning, but the subsequent addition of another remedies it.
[...]
What's particularly alarming is that I ran the same analyses about 8 or 9 months ago without any problems.
Some of the developers will hopefully pipe up, but I believe there has been a certain amount of fiddling around with the code, changing the maximizer etc. Are your example data shareable? It is probably a good idea to run these types of model in a couple of different packages anyway. Just 2c, David Duffy.
The warning about false convergence comes from the nlminb optimizer that is used in the lme4 package. We have done some modifications in the still-not-officially-released lme4a package to use a different optimizer, from the minqa package, that seems to behave better. However, there are other problems with the compilation and testing of the lme4a package that mean it is still in the testing stages.

In general it is difficult to get a reliable optimum in such cases, exactly as you (Ian) described, because of the very low value of the intercept. If you know what the logistic curve looks like, you will realize that it loses all sensitivity to the value of the parameter when it is that small. Hence determining the optimum is a bit hit-and-miss. The optimizer used in lme4 (the R function nlminb) is conservative about declaring an optimum and will give this false-convergence message in this case of insensitivity of the deviance criterion to the value of a parameter.

For the purposes of model building, however, you do know that the deviance reported at the end of the iterations is an upper bound on the deviance for the fitted model, and probably a tight upper bound in this case.
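The insensitivity of the deviance to a strongly negative intercept can be checked numerically. The quick sketch below (in Python rather than R, purely for illustration; the specific parameter values are made up) evaluates the logistic inverse link at a few very negative linear-predictor values and shows that a two-unit change in the parameter barely moves the deviance contribution of a single non-event observation, which is exactly the flatness that makes nlminb report false convergence.

```python
import math

def logistic(eta):
    """Inverse logit link: maps a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-eta))

def dev_nonevent(eta):
    """Deviance contribution, -2*log(1 - p), of one observation with y = 0."""
    return -2.0 * math.log(1.0 - logistic(eta))

# Hypothetical linear-predictor values far out in the flat left tail.
for eta in (-8.0, -10.0, -12.0):
    print(f"eta = {eta:6.1f}   p = {logistic(eta):.3e}   deviance = {dev_nonevent(eta):.3e}")

# Moving the parameter by two whole units changes the deviance by
# less than 1e-4, so the criterion cannot pin the optimum down.
print(f"deviance change: {dev_nonevent(-10.0) - dev_nonevent(-12.0):.3e}")
```

In other words, any intercept below roughly -10 fits these data essentially equally well, so the optimizer has nothing to work with there, even though the reported deviance is still a trustworthy (upper) bound.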