
lme4

2 messages · Ebi Safaie, Ben Bolker

Dear Ben Bolker,
Thank you very much for your informative reply.
Yes, I followed Barr et al. (2013).

I did what you kindly sent me. I'm not sure I've done it correctly, but the check came out FALSE.

It would be a good idea to check for a singular fit, i.e.

  t <- getME(mod.15,"theta")
  lwr <- getME(mod.15,"lower")
  any(t[lwr==0]< 1e-6)



> t <- getME(mod.15,"theta")
> lwr <- getME(mod.15,"lower")
> any(t[lwr==0] < 1e-6)
[1] FALSE


I increased the number of iterations as you suggested

summary(mod.15 <- glmer(ErrorRate ~ 1 + cgroup*cgrammaticality*cHeadNoun*cVerbType +
                          (1|itemF) + (1+grammaticality*HeadNoun*VerbType|participantF),
                        data = e3, family = "binomial", na.action = na.exclude,
                        control = glmerControl(optCtrl = list(maxfun = 1e6))))

but it produced the following messages:

Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, ... :
  Model failed to converge with max|grad| = 0.113924 (tol = 0.001, component 29)
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, ... :
  Model failed to converge: degenerate Hessian with 1 negative eigenvalues
Actually, the following two interactions are important for me, because they represent my two hypotheses.

2-way interaction:

cgroup*cgrammaticality

4-way interaction:

cgroup*cgrammaticality*cHeadNoun*cVerbType

Earlier, I used odds ratios to calculate the effect sizes (table below), and I was able to
dissociate between these two interactions (i.e., the two hypotheses) via their effect sizes.
Because the 95% CI for the 4-way interaction spans a much wider range between its lower and
upper limits, I supported the 2-way interaction. Am I on the right track?
Given that I want to use the newer version of lme4 (as you recommended),
I would really appreciate your help in letting me know what to do with this really
complex design.


Thanks for your help in advance
Kind regards,
Ebrahim




  
Table 9. Experiment 1a: fixed effects from the mixed-effects logistic regression model fit to data from both the NSs and the NNSs for S-V agreement (main analysis).

Fixed effects                                                        Estimate  Std. Error  z value  Pr(>|z|)        OR     95% CI (LL, UL)
(Intercept)                                                          -1.9745     0.1274   -15.494   < 2e-16  ***    0.14   0.11, 0.18
Group (NNSs)                                                          1.5843     0.1789     8.854   < 2e-16  ***    4.88   3.43, 6.92
Grammaticality (Ungrammatical)                                        0.5245     0.2182     2.404    0.0162  *      1.69   1.10, 2.59
Head Noun (SG)                                                       -0.2720     0.1853    -1.468    0.1422         0.76   0.53, 1.10
Verb Type (THEMA)                                                     0.7591     0.2326     3.263    0.0011  **     2.14   1.35, 3.37
Group (NNSs) × Grammaticality (Ungrammatical)                         1.5796     0.3586     4.404  1.06e-05  ***    4.85   2.40, 9.80
Group (NNSs) × Head Noun (SG)                                         0.0475     0.3537     0.134    0.8932         1.05   0.52, 2.10
Grammaticality (Ungrammatical) × Head Noun (SG)                       0.5368     0.4338     1.237    0.2159         1.71   0.73, 4.00
Group (NNSs) × Verb Type (THEMA)                                     -0.2441     0.3472    -0.703    0.4821         0.78   0.40, 1.55
Grammaticality (Ungrammatical) × Verb Type (THEMA)                   -0.4861     0.4185    -1.162    0.2454         0.61   0.27, 1.40
Head Noun (SG) × Verb Type (THEMA)                                   -0.1563     0.3969    -0.394    0.6936         0.86   0.39, 1.86
Group (NNSs) × Grammaticality (Ungrammatical) × Head Noun (SG)        0.2659     0.7161     0.371    0.7104         1.30   0.32, 5.31
Group (NNSs) × Grammaticality (Ungrammatical) × Verb Type (THEMA)    -0.4691     0.6945    -0.675    0.4994         0.63   0.16, 2.44
Group (NNSs) × Head Noun (SG) × Verb Type (THEMA)                     0.7661     0.6916     1.108    0.2679         2.15   0.55, 8.34
Grammaticality (Ungrammatical) × Head Noun (SG) × Verb Type (THEMA)   0.9104     0.9147     0.995    0.3196         2.49   0.41, 14.93
Group (NNSs) × Grammaticality (Ungrammatical) × Head Noun (SG)
  × Verb Type (THEMA)                                                 3.1326     1.3994     2.239    0.0252  *     22.93   1.48, 356.16
summary(mod.15 <- glmer(ErrorRate ~ 1 + cgroup*cgrammaticality*cHeadNoun*cVerbType +
                          (1|itemF) + (1+grammaticality*HeadNoun*VerbType|participantF),
                        data = e3, family = "binomial", na.action = na.exclude))
On Wednesday, October 15, 2014 8:30 PM, Ben Bolker <bbolker at gmail.com> wrote:

            
Note that this is a very large (15*15) random-effects
variance-covariance matrix to estimate: I know that this is
recommended by Barr et al. (2013), but see recent discussion
on this list, e.g.

  It would be a good idea to check for a singular fit, i.e.

  t <- getME(mod.15,"theta")
  lwr <- getME(mod.15,"lower")
  any(t[lwr==0]< 1e-6)
[excerpted rows of the summary() output; mangled in transit to the mailing list]
These estimated effects look only very slightly different to me
than the ones below (i.e., only a few percent differences in point
estimates, always much smaller than the estimated standard error, and
no qualitative differences in Z/P values).  Can you specify whether
there are any differences that particularly concern you?
You definitely need to increase the number of iterations: see
?glmerControl, specifically the "optCtrl" setting, e.g.

  control = glmerControl(optCtrl = list(maxfun = 1e6))
failed to converge with max|grad| = 0.0928109 (tol = 0.001, component 28)
These are convergence *warnings*.  They do not indicate that your fit
is actually any worse than previously, just that we have increased the
sensitivity of the tests.  Can you specify what version you are using?
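
(A quick way to report that, using standard base-R utilities; nothing here is lme4-specific:)

```r
## print the installed lme4 package version and session details
packageVersion("lme4")
sessionInfo()
```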

   I wouldn't recommend moving back to an earlier version of lme4,
but you could check out https://github.com/lme4/lme4/blob/master/README.md
for instructions on how to install the lme4.0 package if you really
want ...
[another excerpted block of summary() output, plus a fragment of the quoted
message saying the installer automatically installs the most recent version;
mangled in transit]
If you want to install 1.0-4 you can either get the tarball from here:
http://cran.r-project.org/src/contrib/Archive/lme4/lme4_1.0-4.tar.gz

but you will either need to be able to install it from source (i.e.
have compilers etc. installed) or modify the DESCRIPTION file to
make yourself the maintainer and ship it off to
ftp://win-builder.r-project.org.

*OR* (possibly a better idea) you can retrieve a binary/.zip file from

http://lme4.r-forge.r-project.org/repos/bin/windows/contrib/3.0/lme4_1.0-4.zip

and install it.

  (You didn't specify your actual error messages from the attempted
lme4 installation.)
_______________________________________________
R-sig-mixed-models at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
2 days later
Ebi Safaie <safaie124 at ...> writes:
That's good -- that means that your model is at least
bounded away from zero for constrained parameters.
These warnings do suggest that your model is at the very least
unstably fitted. You could try some of the strategies listed at

http://rpubs.com/bbolker/lme4trouble1

to reassure yourself that the model fit is in fact OK.
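
To make that concrete, here is a minimal sketch of two of the strategies described there, assuming the fitted object is your mod.15 (the names mod.15b and mod.15c are just placeholders):

```r
## (1) restart the fit from its own estimates, with a generous
##     iteration limit
ss <- getME(mod.15, c("theta", "fixef"))
mod.15b <- update(mod.15, start = ss,
                  control = glmerControl(optCtrl = list(maxfun = 1e6)))

## (2) refit from the same starting values with a different
##     optimizer (here bobyqa) and compare the estimates
mod.15c <- update(mod.15, start = ss,
                  control = glmerControl(optimizer = "bobyqa",
                                         optCtrl = list(maxfun = 1e6)))
```

If the estimates agree across optimizers to well within a standard error, the warnings are probably false positives.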

I want to emphasize again that your model is **not** actually
fitting worse than it did before/with previous versions; rather,
the default warning level has been turned up so that you're
getting more warnings than before.
Comparing previous results just for these terms --

previous
                                 est     stderr       Z        P
cgroup:cgrammaticality        1.5796     0.3586   4.404 1.06e-05 *** 
cgroup:cgrammaticality:       3.1326     1.3994   2.239   0.0252 *
  cHeadNoun:cVerbType

current

cgroup:cgrammaticality        1.57010    0.36695  4.279 1.88e-05 ***
cgroup:cgrammaticality:       3.15344    1.42351  2.215   0.0267 *
   cHeadNoun:cVerbType

As I said before, the new and old results
look the same to me for all practical
purposes.
Don't know what you mean here.  Are you trying to distinguish
which one has a larger effect?  Assuming all your predictors
are categorical (so that you don't have to worry about standardizing
units), the two-way interaction has a smaller _effect_ but also
smaller uncertainty, so it is more statistically significant.
Your table got somewhat mangled in transition to the mailing list,
but appears to be a slightly modified version of the summary() output,
with odds ratios and Wald confidence intervals on odds ratios (i.e.
based on exp(est +/- 1.96*std. err) appended).
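
For what it's worth, a table like that can be reconstructed directly from the summary() output; a minimal sketch, assuming the fitted object is mod.15:

```r
## odds ratios and 95% Wald CIs from the fixed-effect estimates
cc  <- summary(mod.15)$coefficients
est <- cc[, "Estimate"]
se  <- cc[, "Std. Error"]
or_tab <- data.frame(OR = exp(est),
                     LL = exp(est - 1.96 * se),
                     UL = exp(est + 1.96 * se))
round(or_tab, 2)
```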

   The questions about warning messages from lme4 and what to do
about them are on-topic for this list, but these questions about how to
interpret the fixed effects are pretty generic (e.g. they would
apply pretty much equivalently to a regular linear or generalized linear
model), and would be more appropriate for a more general statistics Q&A
venue such as CrossValidated <http://stats.stackexchange.com>.

  sincerely
    Ben Bolker