Is there a way to deal with errors such as this?

14 messages · Jonathan Judge, Daniel Lüdecke, Dimitris Rizopoulos +4 more

#
By "this" I mean as demonstrated in the following code.  The file 
testData.txt is attached.

X <- dget("testData.txt")
library(lme4)
fit <- glmer(cbind(Dead,Alive) ~ (0+Trt)/Dose + (Dose | Rep),
              data=X,family=binomial(link="probit"))

The foregoing falls over with the (rather complex) error message:
I note that mixed_model() from GLMMadaptive seems to be able to deal 
with these data and this model:

library(GLMMadaptive)
fit <- mixed_model(fixed=cbind(Dead,Alive) ~ (0+Trt)/Dose,
                    random=~Dose | Rep,
                    data=X,family=binomial(link="probit"))

The foregoing runs without complaint.

I am applying the glmer() model in the context of doing some fairly 
elaborate simulations (in which "X" gets randomly generated) and the 
error causes the simulations to crash unpleasantly.  So I would *like* a 
magic incantation that I can apply in an automated way to prevent the
error from occurring.

I can of course wrap function calls up in try() and if there is an error
generate a new data set and go again.  However I'm a little apprehensive
that this might bias the results of the simulations in some way.
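That strategy can be automated with a small retry wrapper. A minimal sketch (the names `fit_with_retry`, `simulate_data` and `fit_fun` are hypothetical; in practice `fit_fun` would wrap the glmer() call above):

```r
## Retry wrapper: simulate a data set, attempt the fit, and regenerate the
## data on error.  'simulate_data' and 'fit_fun' are placeholders for your
## own simulation and model-fitting code.
fit_with_retry <- function(simulate_data, fit_fun, max_tries = 100) {
  for (i in seq_len(max_tries)) {
    X <- simulate_data()
    fit <- tryCatch(fit_fun(X), error = function(e) NULL)
    if (!is.null(fit)) return(list(fit = fit, tries = i))
  }
  stop("all ", max_tries, " simulated data sets failed to fit")
}
```

Recording `tries` lets you report how often data sets were discarded, which at least helps gauge the size of the potential selection bias.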

I could also switch to using mixed_model(), but would prefer to stick 
with the devil I know (i.e. glmer()) for the sake of consistency with 
other work that I have done.  (And who knows?  Maybe in the course of 
the simulations mixed_model() might fall over too, from time to time.)

I'd appreciate any avuncular (or materteral) advice that anyone might be 
inclined to offer.

cheers,

Rolf
#
Rolf:

Sadly, that error has always been the kiss of death for me, no matter what I tried. As parameterized, the model probably just won't optimize in lme4. If you want to stick with pseudo-frequentist inference, you could try glmmTMB, which is developed by some of the same folks (well, one in particular) and may work with the same notation you are used to. But that error message usually means you need to try a different package.

Jonathan 

#
I would sadly agree with this assessment as well.  It would be good
to collect a set of models that fail in this way and use them to
debug/figure out what the problem is: the PIRLS loop is implemented
entirely in C++, which makes it relatively painful to debug. (It might
also be possible to make some progress by using the "pureR"
implementation that Steve Walker wrote a few years ago ...)

 One little clue (as I vaguely remembered/suspected): the proximal
problem is that iterations that result in NaN are not dealt with well.
Next step: figuring out why the NaNs are generated in the first
place ...

*** pwrssUpdate step 1
pdev=182574; delu_min: -158.148; delu_max: 310.47; delb_min: -7.05005;
delb_max: 60.6557

pwrssUpdate: Entering step halving loop
step-halving iteration 0:  pdev=45331.6; delu_min: -79.2949; delu_max:
155.422; delb_min: -3.50962; delb_max: 30.2776
step-halving iteration 1:  pdev=11437.5; delu_min: -39.8683; delu_max:
77.8977; delb_min: -1.73941; delb_max: 15.0885
*** pwrssUpdate step 2
pdev=nan; delu_min: nan; delu_max: nan; delb_min: nan; delb_max: nan

pwrssUpdate: Entering step halving loop
step-halving iteration 0:  pdev=nan; delu_min: nan; delu_max: nan;
delb_min: nan; delb_max: nan
step-halving iteration 1:  pdev=nan; delu_min: nan; delu_max: nan;
delb_min: nan; delb_max: nan
step-halving iteration 2:  pdev=nan; delu_min: nan; delu_max: nan;
delb_min: nan; delb_max: nan
step-halving iteration 3:  pdev=nan; delu_min: nan; delu_max: nan;
delb_min: nan; delb_max: nan
step-halving iteration 4:  pdev=nan; delu_min: nan; delu_max: nan;
delb_min: nan; delb_max: nan
step-halving iteration 5:  pdev=nan; delu_min: nan; delu_max: nan;
delb_min: nan; delb_max: nan
step-halving iteration 6:  pdev=nan; delu_min: nan; delu_max: nan;
delb_min: nan; delb_max: nan
step-halving iteration 7:  pdev=nan; delu_min: nan; delu_max: nan;
delb_min: nan; delb_max: nan
step-halving iteration 8:  pdev=nan; delu_min: nan; delu_max: nan;
delb_min: nan; delb_max: nan
step-halving iteration 9:  pdev=nan; delu_min: nan; delu_max: nan;
delb_min: nan; delb_max: nan
Error in pwrssUpdate(pp, resp, tol = tolPwrss, GQmat = GHrule(0L),
compDev = compDev,  :
  (maxstephalfit) PIRLS step-halvings failed to reduce deviance in pwrssUpdate
On Sat, Dec 14, 2019 at 8:50 PM Jonathan Judge <bachlaw01 at outlook.com> wrote:
#
PS that output was obtained by setting verbose=1000 when running glmer ...
On Sat, Dec 14, 2019 at 9:11 PM Ben Bolker <bbolker at gmail.com> wrote:
#
Thanks to Jonathan Judge and Ben Bolker for their replies.  It appears 
that the short answer to my question is "No."

That being so, I will go with the strategy of wrapping the calls in 
try() and simulating new data if the error occurs.  If this risks 
biasing the simulation results, well, at least I have a good excuse to 
tell the referees! :-)

Perhaps later I will re-run the simulations using mixed_model() from
GLMMadaptive and compare the results.

Thanks again.

cheers,

Rolf
#
The model seems to fit without error when you use glmmTMB. Unlike GLMMadaptive, which uses a more time-consuming adaptive Gaussian quadrature rule, glmmTMB might be faster to run the models (probably even faster than glmer()).

library(glmmTMB)

fit <- glmmTMB(
  cbind(Dead, Alive) ~ (0 + Trt) / Dose + (Dose | Rep),
  data = X,
  family = binomial(link = "probit")
)

parameters::model_parameters(fit)
#> Parameter             | Coefficient |   SE |         95% CI |     z |  df |      p
#> ----------------------------------------------------------------------------------
#> Trt16hour10deg        |       -0.74 | 0.33 | [-1.38, -0.10] | -2.26 | 109 | 0.024
#> Trt16hour20deg        |        0.13 | 0.33 | [-0.52,  0.78] |  0.40 | 109 | 0.691
#> Trt16hour5deg         |       -0.86 | 0.32 | [-1.48, -0.24] | -2.71 | 109 | 0.007
#> Trt8hour10deg         |       -0.12 | 0.37 | [-0.84,  0.60] | -0.32 | 109 | 0.749
#> Trt8hour20deg         |       -0.87 | 0.31 | [-1.49, -0.26] | -2.79 | 109 | 0.005
#> Trt8hour5deg          |       -0.77 | 0.31 | [-1.38, -0.17] | -2.50 | 109 | 0.012
#> Trt16hour10deg : Dose |        0.16 | 0.01 | [ 0.14,  0.19] | 13.50 | 109 | < .001
#> Trt16hour20deg : Dose |        0.23 | 0.02 | [ 0.18,  0.28] |  9.35 | 109 | < .001
#> Trt16hour5deg : Dose  |        0.08 | 0.01 | [ 0.07,  0.09] | 13.42 | 109 | < .001
#> Trt8hour10deg : Dose  |        0.08 | 0.01 | [ 0.07,  0.10] | 10.84 | 109 | < .001
#> Trt8hour20deg : Dose  |        0.17 | 0.01 | [ 0.15,  0.19] | 16.59 | 109 | < .001
#> Trt8hour5deg : Dose   |        0.05 | 0.00 | [ 0.04,  0.06] | 14.52 | 109 | < .001

Best

Daniel

 

#
On 15/12/19 11:20 pm, Daniel Lüdecke wrote:

<SNIP>
Thanks for this advice.  For some reason I had it in my head that 
GLMMadaptive worked better than glmmTMB.  I don't know why I got that 
idea.  I will do some experimentation and timing assessments later on.
But for the moment I'm sticking with glmer() and the strategy of 
discarding the simulated data set and generating a new one if glmer() 
throws an error.

Thanks again.

cheers,

Rolf
#
In the datasets for which glmer() fails you could try fitting the model with GLMMadaptive or glmmTMB. The model you're fitting is always the same; only the optimization and numerical integration algorithms change between the packages.

Moreover, if the main aim of the simulation is to assess some properties of the model and not of the optimization algorithm, you could help the optimization procedure by supplying, as starting values for the model parameters, the true parameter values from which you simulate the data.
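The first suggestion can be sketched as a generic fallback chain; the function name and list structure here are illustrative, not from any package. In practice `fitters` would hold wrappers around glmer(), glmmTMB::glmmTMB() and GLMMadaptive::mixed_model() for the same model:

```r
## Try a list of fitting functions in order and return the first one that
## succeeds, together with its name, so you can later check how often the
## fallback was actually used.
fit_with_fallback <- function(X, fitters) {
  for (nm in names(fitters)) {
    fit <- tryCatch(fitters[[nm]](X), error = function(e) NULL)
    if (!is.null(fit)) return(list(fit = fit, fitter = nm))
  }
  stop("all fitters failed on this data set")
}
```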

Best,
Dimitris

--
Dimitris Rizopoulos
Professor of Biostatistics
Erasmus University Medical Center
The Netherlands

#
On 16/12/19 10:08 am, D. Rizopoulos wrote:
<SNIP>
That is a very appealing idea, but I'm afraid that I find the help for 
glmer() to be utterly opaque in terms of the "start" argument.  May I 
impose upon you for some help to get me, uh, started? ( :-) )

Explicitly, suppose that I have a data frame "protoX", and I fit a model:

protoFit <- glmer(cbind(Dead,Alive) ~ (0+Trt)/Dose + (Dose | Rep),
              data=protoX,family=binomial(link="probit"))

Suppose that I then simulate data from protoFit:

X <- protoX
X[, c("Dead","Alive")] <- simulate(protoFit)[[1]]

The "true" values of the parameters in respect of fitting a model to X, 
will be the fitted parameters contained in protoFit.  How do I specify 
these as starting values in a call to glmer()?

I would do something like:

fit <- glmer(cbind(Dead,Alive) ~ (0+Trt)/Dose + (Dose | Rep),
              data=X,family=binomial(link="probit"),start=???)

What do I use for "???" ?  The help seems to indicate that it should be 
a list with components "fixef" and "theta".  I would conjecture that I'd 
get the "fixef" component as fixef(protoFit).  However it is totally 
mysterious to me how I would get "theta" (or even what "theta" *is*).

Can you (or someone) point me in the right direction?  (An *example* of 
the use of "start" in the help file would have been nice. :-( )

cheers,

Rolf
#
Hi Rolf,

'theta' is the vector of random-effects covariance parameters, obtained by getME(model, 'theta'). I'm sure someone more knowledgeable than I can tell you their exact definition - I believe they are the entries of the lower Cholesky factor of the relative covariance matrix of the random effects (scaled by the residual standard deviation in Gaussian models), but I may have gotten a few things mixed up here. 'fixef' is indeed just fixef(model).
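Putting that together with the earlier simulation setup, the "start" list might look like this (a sketch; it assumes protoFit and X as defined earlier in the thread, and requires lme4):

```r
library(lme4)

## Starting values taken from the prototype fit: 'theta' is the vector of
## random-effects covariance parameters, 'fixef' the fixed-effect estimates.
start_vals <- list(theta = getME(protoFit, "theta"),
                   fixef = fixef(protoFit))

fit <- glmer(cbind(Dead, Alive) ~ (0 + Trt)/Dose + (Dose | Rep),
             data = X, family = binomial(link = "probit"),
             start = start_vals)
```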

HTH,
Cesko

#
On 17/12/19 1:49 am, Voeten, C.C. wrote:
That's great.  Thanks hugely.  The "getME(model,'theta')" bit was what I 
really needed to know (and no clue how to find).

Thanks again.

cheers,

Rolf
#
For completeness, you can also use getME(model, "beta") to get the 
fixed effects estimates as a simple, unnamed vector. :)
On 17/12/19 4:40 am, Rolf Turner wrote:
#
On 24/12/19 3:20 am, Phillip Alday wrote:
Thank you.  That is useful to know.

cheers,

Rolf

P. S. Someday I must try to teach the world how to write help 
files/software documentation!!! :-)

R. T.
#
Comments welcome at https://github.com/lme4/lme4/issues/552

  In the grand tradition of R documentation, none of the existing
package documentation is actually *incorrect* (that I know of ...) but
some is very sketchy.
On 2019-12-23 4:07 p.m., Rolf Turner wrote: