
Wrong convergence warnings with glmer in lme4 version 1.1-6??

2 messages · UG, Ben Bolker

UG
Tom Davis <tomd792 at ...> writes:
Hi,

Sorry for the length of the message below, but I need some help and advice.

I am having similar issues (warnings about convergence, maybe more) to
other people on this subject, but I have been totally lost among all the
suggestions and their meanings.

I am testing the effect of various covariates, including inbreeding (0 or 1),
on the number of occurrences of a given behaviour. As it is count data, I use
a Poisson family.

The output of my model is:

Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge with max|grad| = 1.26172 (tol = 0.001)
2: In if (resHess$code != 0) { :
  the condition has length > 1 and only the first element will be used
3: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model is nearly unidentifiable: very large eigenvalue
 - Rescale variables?;Model is nearly unidentifiable: large eigenvalue ratio
 - Rescale variables?
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) ['glmerMod']
 Family: poisson ( log )
Formula: behaviour ~ inb + inbopp + sexratio + oppaggre + oppmass + duration +  
    inb:sexratio + inbopp:oppmass + (1 | family/IDfocal)
   Data: datamal

     AIC      BIC   logLik deviance df.resid 
  1469.3   1507.5   -723.6   1447.3      228 

Scaled residuals: 
   Min     1Q Median     3Q    Max 
-3.211 -1.223 -0.279  0.847  4.491 

Random effects:
 Groups    Name        Variance Std.Dev.
 ID:family (Intercept) 0.25751  0.5075  
 family    (Intercept) 0.04598  0.2144  
Number of obs: 239, groups: ID:family, 63; family, 19

Fixed effects:
                  Estimate Std. Error z value Pr(>|z|)    
(Intercept)      0.4300096  0.3798771   1.132  0.25765    
inb             -0.6387831  0.4348942  -1.469  0.14188    
inbopp           1.2304585  0.4218720   2.917  0.00354 ** 
sexratio        -0.7260132  0.4096143  -1.772  0.07632 .  
oppaggre         0.0017798  0.0004272   4.166  3.1e-05 ***
oppmass          0.0010126  0.0078721   0.129  0.89764    
duration         0.0065125  0.0003442  18.922  < 2e-16 ***
inb:sexratio     1.7354628  0.6444371   2.693  0.00708 ** 
inbopp:oppmass  -0.0499209  0.0175064  -2.852  0.00435 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
               (Intr) inb    inbopp sexrat oppaggr oppmass dura  inb:sx
inb            -0.598                                                 
inbopp         -0.241 -0.004                                          
sexratio       -0.761  0.686 -0.007                                   
oppaggre       -0.121  0.021 -0.116  0.013                            
oppmass        -0.545  0.048  0.482  0.052 -0.101                     
duration       -0.079 -0.053 -0.013 -0.019 -0.331 -0.095              
inb:sexrati     0.499 -0.900  0.004 -0.665 -0.039 -0.048  0.100       
inbopp:oppmass  0.198  0.012 -0.987  0.012  0.154 -0.436 -0.001 -0.012
Analysis of Deviance Table (Type III Wald chisquare tests)

Response: behaviour
                   Chisq Df Pr(>Chisq)    
(Intercept)       1.2814  1   0.257647    
inb               2.1574  1   0.141880    
inbopp            8.5069  1   0.003538 ** 
sexratio          3.1415  1   0.076323 .  
oppaggre         17.3554  1    3.1e-05 ***
oppmass           0.0165  1   0.897644    
duration        358.0516  1  < 2.2e-16 ***
inb:sexratio      7.2522  1   0.007081 ** 
inbopp:oppmass    8.1315  1   0.004350 ** 

As usual, I looked at the distribution of residuals: nicely normal.

After that, following the suggestions, I checked this:
relgrad <- with(test@optinfo$derivs, solve(Hessian, gradient))
max(abs(relgrad))
[1] 0.0002681561
-> if this is < 0.001 or 0.002, is that enough to assume that the model is OK?
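[As a side note for later readers: the check above can be wrapped into a small helper. A sketch only; `relgrad_check` is a made-up name, not an lme4 function -- it just applies solve(Hessian, gradient) and compares against a tolerance.]

```r
## Sketch of the scaled-gradient convergence check; `relgrad_check` is a
## hypothetical helper, not part of lme4. Pass it fit@optinfo$derivs.
relgrad_check <- function(derivs, tol = 0.001) {
  relgrad <- solve(derivs$Hessian, derivs$gradient)  # gradient scaled by curvature
  max(abs(relgrad)) < tol                            # TRUE if small enough
}
## usage with a fitted glmer model `test`:
## relgrad_check(test@optinfo$derivs)
```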

In addition, I tested other optimizers, even though I do not really understand
what that means.
I started with:
testbobyqa <- update(test, control=glmerControl(optimizer="bobyqa"))
[1] 0.01506629
testnelder <- update(test, control=glmerControl(optimizer="Nelder_Mead"))
[1] 0.001706096
Both showed the same warnings and similar results to the first model.

I continued with:
library(optimx)
testnlminb <- update(test, control=glmerControl(optimizer="optimx",
                                                optCtrl=list(method="nlminb")))
testBFGS <- update(test, control=glmerControl(optimizer="optimx",
                                              optCtrl=list(method="L-BFGS-B")))

-> in both cases, an error:
testnlminb: 
Error in ans.ret[meth, ] <- c(ans$par, ans$value, ans$fevals, ans$gevals,  : 
  number of items to replace is not a multiple of replacement length
In addition: Warning messages:
1: In optimx.check(par, optcfg$ufn, optcfg$ugr, optcfg$uhess, lower,  :
  Parameters or bounds appear to have different scalings.
  This can cause poor performance in optimization. 
  It is important for derivative free methods like BOBYQA, UOBYQA, NEWUOA.
2: In pwrssUpdate(pp, resp, tolPwrss, GQmat, compDev, fac, verbose) :
  Cholmod warning 'not positive definite' at
file:../Cholesky/t_cholmod_rowfac.c, line 431
3: In pwrssUpdate(pp, resp, tolPwrss, GQmat, compDev, fac, verbose) :
  Cholmod warning 'not positive definite' at
file:../Cholesky/t_cholmod_rowfac.c, line 431

testBFGS:
Error in eval(expr, envir, enclos) : 
  (maxstephalfit) PIRLS step-halvings failed to reduce deviance in pwrssUpdate
In addition: Warning messages:
1: In optimx.check(par, optcfg$ufn, optcfg$ugr, optcfg$uhess, lower,  :
  Parameters or bounds appear to have different scalings.
  This can cause poor performance in optimization. 
  It is important for derivative free methods like BOBYQA, UOBYQA, NEWUOA.
2: In pwrssUpdate(pp, resp, tolPwrss, GQmat, compDev, fac, verbose) :
  Cholmod warning 'not positive definite' at
file:../Cholesky/t_cholmod_rowfac.c, line 431
3: In pwrssUpdate(pp, resp, tolPwrss, GQmat, compDev, fac, verbose) :
  Cholmod warning 'not positive definite' at
file:../Cholesky/t_cholmod_rowfac.c, line 431
4: In optwrap(optimizer, devfun, start, rho$lower, control = control,  :
  convergence code 9999 from optimx

I then used:
library(nloptr)
## from https://github.com/lme4/lme4/issues/98:
defaultControl <- list(algorithm="NLOPT_LN_BOBYQA",xtol_rel=1e-6,maxeval=1e5)
nloptwrap2 <- function(fn,par,lower,upper,control=list(),...) {
  for (n in names(defaultControl)) 
    if (is.null(control[[n]])) control[[n]] <- defaultControl[[n]]
  res <- nloptr(x0=par,eval_f=fn,lb=lower,ub=upper,opts=control,...)
  with(res,list(par=solution,
                fval=objective,
                feval=iterations,
                conv=if (status>0) 0 else status,
                message=message))
}
test.bobyqa2 <- update(g0.bobyqa, control=glmerControl(optimizer=nloptwrap2))
test.NM2 <- update(g0.bobyqa,
                   control=glmerControl(optimizer=nloptwrap2,
                                        optCtrl=list(algorithm="NLOPT_LN_NELDERMEAD")))
-> similar results to test, testbobyqa and testnelder, and warnings for both:
Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge with max|grad| = 1.88422 (tol = 0.001)
2: In if (resHess$code != 0) { :
  the condition has length > 1 and only the first element will be used
3: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model is nearly unidentifiable: very large eigenvalue
 - Rescale variables?;Model is nearly unidentifiable: large eigenvalue ratio
 - Rescale variables?
[1] 1.845328e-05
[1] 1.925339e-05

I also tried this:
getpar <- function(x) c(getME(x, c("theta")), fixef(x))
modList <- list(bobyqa=testbobyqa, NM=testnelder,
                bobyqa2=test.bobyqa2, NM2=test.NM2)
ctab <- sapply(modList, getpar)
library(reshape2)
mtab <- melt(ctab)
library(ggplot2)
theme_set(theme_bw())
ggplot(mtab, aes(x=Var2, y=value, colour=Var2)) +
  geom_point() + facet_wrap(~Var1, scale="free")
-> I don't really understand what this graph means; could you explain it to me?

I also looked at this:
likList <- sapply(modList,logLik)
round(log10(max(likList)-likList),1)
 bobyqa      NM bobyqa2     NM2 
   -2.3    -2.3    -9.2    -Inf 
-> What do those values mean?

lbound <- getME(test,"lower")
theta <- getME(test,"theta")
any(lbound==0 & theta<1e-8)
[1] FALSE
-> does this mean the model is OK?
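[For context: the check above asks whether any variance parameter (theta) that has a lower bound of 0 has collapsed to (nearly) 0, i.e. whether the fit is singular. A minimal sketch, with a made-up helper name:]

```r
## Sketch: TRUE if any boundary-constrained theta is (near) zero, i.e. the
## fit is singular. `is_near_singular` is a hypothetical name, not an lme4
## function.
is_near_singular <- function(lower, theta, tol = 1e-8) {
  any(lower == 0 & theta < tol)
}
## usage with a fitted glmer model `test`:
## is_near_singular(getME(test, "lower"), getME(test, "theta"))
```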

Finally, I tried rescaling my covariates to see whether they could be the
source of the warnings, using:
datamal$variable.scaled <- datamal$variable/sd(datamal$variable)
and when I introduced the scaled variables into the model, I received this
message:
Error in FUN(X[[1L]], ...) : 
  Invalid grouping factor specification, noms:famille
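[Side note for later readers: base R's scale() both centers and scales, which is what the "Rescale variables?" hint usually suggests. A sketch, reusing the covariate names from the output above; note that scale() returns a one-column matrix, so wrapping it in as.numeric() avoids surprises in the model frame:]

```r
## Sketch: center and scale large-magnitude continuous covariates before
## refitting. `oppaggre` and `duration` are the covariate names from the
## summary above; adjust to your data.
datamal$oppaggre.sc <- as.numeric(scale(datamal$oppaggre))  # (x - mean(x)) / sd(x)
datamal$duration.sc <- as.numeric(scale(datamal$duration))
## then refit, e.g.:
## test.sc <- update(test, . ~ . - oppaggre - duration + oppaggre.sc + duration.sc)
```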

I have tried many things and I am totally lost, so if you could help me
clarify all this, it would be much appreciated.

I hope I have written down everything you need to understand the problem.

Thanks in advance for your help, and sorry again for the length of the message.
UG <uriel.gelin at ...> writes:
[major snippage]
Re-iterating what I may have said earlier:

 * These warnings are a result of changes in the warning behaviour, not a
change in the fitting algorithm. The only major recent change in the
optimizer has been to move in 1.1-6 from Nelder-Mead to bobyqa for
lmer fits <http://cran.r-project.org/web/packages/lme4/news.html>; this
isn't relevant to glmer fits and we believe it should _improve_
results in almost all cases.
 * You should probably ignore the convergence warnings; we now believe
that they are (almost entirely) the result of checking the raw gradients
rather than the scaled gradients (i.e. the results of solve(Hessian, gradient)).
 * Version 1.1-7, now on Github (available via devtools::install_github)
and with a macos binary on http://lme4.r-forge.r-project.org/repos ,
tests the scaled gradients and should make the problem go away.
 * If you want to double-check, either:
   * test with lme4.0 (although we have had reports
<http://stackoverflow.com/questions/23662589/r-reverting-to-lme4-0-and-still-getting-inconsistent-results>
of 'Downdated XtX' problems -- still chasing that down ...), or
   * use the code at
https://github.com/lme4/lme4/blob/master/misc/issues/allFit.R
to test with a wide range of optimizers.
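[Editor's note for later readers: in recent lme4 versions the allFit.R script linked above has been folded into an exported allFit() function; whether your installed version has it is an assumption to check via ?allFit. A sketch:]

```r
## Sketch, assuming a recent lme4 where allFit() is exported:
library(lme4)
aa <- allFit(test)   # refit the model `test` with every available optimizer
ss <- summary(aa)
ss$which.OK          # which optimizers finished without error
ss$fixef             # fixed-effect estimates, one row per optimizer
```

If the estimates agree across optimizers to within a tolerance you care about, the convergence warnings are very likely false positives.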

  Ben Bolker