glmer does not converge, how inaccurate is using nAGQ = 0?

6 messages · Paolo Fraccaro, Ben Bolker, Ken Beath

#
Hi 

I have a dataset of ~200k pieces of hardware tested yearly for 10 years or
until failure (~15k failures), so the overall dataset size is ~2,000k rows.
I'm trying to fit a mixed-effects logistic model with glmer, but the model
does not converge with the default settings. I tried increasing the
maximum number of iterations allowed (from 20 to 100), but it still does
not converge. I then set nAGQ = 0 and obtained the less accurate estimate
of the model.
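For context, a minimal sketch of the two fits described above; the response, predictors, grouping factor, and data frame (`failed`, `age`, `usage`, `unit`, `hw`) are placeholders, not the actual model:

```r
library(lme4)

## Default accuracy (nAGQ = 1, Laplace approximation), with a larger
## iteration budget for the optimizer than the default:
m1 <- glmer(failed ~ age + usage + (1 | unit),
            data = hw, family = binomial,
            control = glmerControl(optCtrl = list(maxfun = 1e5)))

## Faster but less accurate: nAGQ = 0 avoids optimizing the fixed
## effects jointly with the random effects in the final step.
m0 <- glmer(failed ~ age + usage + (1 | unit),
            data = hw, family = binomial, nAGQ = 0)
```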

My questions would be:
Do you have any idea which parameters I could modify to try to make the
model converge?
How inaccurate is using nAGQ = 0?

Many thanks.

Paolo
#
You could use a higher value of nAGQ; start with 5 and work up.

How good the approximation is depends. If you are having convergence
problems, it probably isn't good.
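That "work up" step can be sketched as below, assuming `m1` is a previously fitted glmer model. Note that glmer only supports nAGQ > 1 for models with a single scalar random effect:

```r
## Refit with progressively more adaptive Gauss-Hermite quadrature
## points and watch whether the estimates stabilize:
for (q in c(5, 10, 25)) {
  mq <- update(m1, nAGQ = q)
  print(fixef(mq))
}
```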
On 2 April 2015 at 01:23, Paolo Fraccaro <paolo.f.genova at gmail.com> wrote:

#
On 15-04-01 06:06 PM, Ken Beath wrote:
What's the magnitude of the max scaled gradient (i.e., the number
in the warning)?  We are *still* struggling with the proper way to
scale the desired gradient as a function of sample size ...

  cheers
    Ben Bolker
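The number Ben is asking about can be pulled out of a fitted merMod object directly; a sketch, assuming `m` is the fitted model (slot layout as in recent lme4 versions):

```r
## Derivatives recorded at the optimizer's solution:
derivs <- m@optinfo$derivs

## Max absolute scaled gradient -- the number reported in the
## "Model failed to converge with max|grad| = ..." warning:
max(abs(with(derivs, solve(Hessian, gradient))))
```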
#
Hi,

thanks for your suggestions. I left it running overnight, still with
nAGQ = 1, and this time I got these warnings:

Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge with max|grad| = 0.00191069 (tol = 0.001,
component 4)
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model is nearly unidentifiable: very large eigenvalue
 - Rescale variables?

Is the solution of increasing nAGQ still the best thing to do?

Many thanks,

Paolo
On 1 April 2015 at 23:06, Ken Beath <ken.beath at mq.edu.au> wrote:

#
I think you have other problems, although sometimes this can be a
breakdown in the approximations, and increasing nAGQ can work. Either your
model is not identifiable, which means that it is overparameterised or
some of your coefficients have become excessively negative, or you may
have a very high random-effect variance. The last will be helped by
increasing the number of quadrature points; the second may be. Without
seeing the output it is hard to tell.
On 2 April 2015 at 20:36, Paolo Fraccaro <paolo.f.genova at gmail.com> wrote:

1 day later
#
Paolo sent me the output, and there are a couple of problems:
1. The intercept is very large, about 94, which may cause some problems.
This can be fixed by centering the continuous variables (subtracting the
mean or some other suitable value).
2. The standard deviation for the random effect is large. This usually
requires more quadrature points.
3. As you have a huge number of observations and groups, the
approximations don't really matter that much. Almost anything will be
significant. I would just try centering the continuous variables.
4. You can also try another optimiser, as in a post from Ben, which
will probably remove the convergence error.
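The centering and optimizer advice combined might look like the sketch below; variable and data names (`age`, `usage`, `failed`, `unit`, `hw`) are again placeholders:

```r
library(lme4)

## Center (and scale to unit variance) the continuous predictors:
hw$age_c   <- scale(hw$age)
hw$usage_c <- scale(hw$usage)

## Refit with an alternative optimizer (bobyqa) and a generous
## iteration budget:
m2 <- glmer(failed ~ age_c + usage_c + (1 | unit),
            data = hw, family = binomial,
            control = glmerControl(optimizer = "bobyqa",
                                   optCtrl = list(maxfun = 1e5)))
```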
On 2 April 2015 at 21:10, Ken Beath <ken.beath at mq.edu.au> wrote: