statistical significance of group level intercepts in glmer

5 messages · Ben Bolker, Maaz Gardezi, Ben Pelzer

#
Hello,

I am using glmer to model a mixed effects logistic regression. The output
below shows the random intercepts and slope (zcwcexp) for 22 groups. I was
wondering if there is any way of finding whether these are statistically
significant?
$subject
   (Intercept)      zcwcexp
1   0.08913319 -0.007245956
2  -0.06997405  0.009194926
3  -0.77408045  0.247743815
4   0.07087457  0.048671025
5   0.16529502 -0.025846944
6   0.21978190  0.020381977
7   0.22628857 -0.060421227
8   0.67790616 -0.183998656
9   0.25113625 -0.109811064
10 -0.14569194 -0.039863882
11  0.13872081 -0.074646309
12 -0.23401794  0.069596589
13  0.96193273 -0.195829039
14 -0.46834542  0.092182158
15 -0.59465656  0.158177344
16 -0.06963680 -0.019127508
17 -0.25015416  0.067321201
18  0.32001648 -0.100452172
19 -0.40396733  0.120731775
20  0.07413768 -0.061477287
21 -0.08520228  0.044519621
22 -0.09427640  0.002166830

Thanks!

Maaz
#
Unfortunately, you *can't* test the significance of the conditional modes
(the values that ranef() returns); this is the price you pay for treating
them as random variables. However, you *can* extract the 'conditional
variances' of the conditional modes (see p. 28 of the vignette that
comes with lme4).  You can visualize these conditional variances via
`lattice::dotplot(ranef(fitted_model, condVar=TRUE))`.  The development
version of lme4 (on GitHub) has an as.data.frame method that makes it
easier to extract these values ...
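For concreteness, a minimal sketch of the workflow described above, using lme4's built-in sleepstudy data as a stand-in for your own model (the model and data here are illustrative, not yours):

```r
library(lme4)

## illustrative fit; substitute your own glmer/lmer model
fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

## conditional modes with their conditional variances attached
rr <- ranef(fm, condVar = TRUE)

## as.data.frame() returns the conditional modes (condval) and their
## conditional standard deviations (condsd) in long format
dd <- as.data.frame(rr)

## approximate 95% intervals for each conditional mode; an interval
## excluding zero suggests a group clearly deviating from the average,
## but this is a descriptive device, not a formal significance test
dd$lwr <- dd$condval - 1.96 * dd$condsd
dd$upr <- dd$condval + 1.96 * dd$condsd
head(dd)

## caterpillar-style plot with +/- 1.96 SD error bars
lattice::dotplot(rr)
```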
On Tue, May 30, 2017 at 12:16 AM, Maaz Gardezi <maaz.gardezi at gmail.com> wrote:
#
Thanks for the clarification.

Maaz
On May 30, 2017 9:00 AM, "Ben Bolker" <bbolker at gmail.com> wrote:

#
Dear list,

With glmmPQL from package MASS,  I ran a logistic regression model with 
an intercept only, which is random across 33 countries:

m1 <- glmmPQL(MATH_Top1 ~ 1,
               random = list(country = ~ 1),
               family=binomial, data=pisas)
summary(m1)
sqrt(vcov(m1))

There is a total of N=22997 students in data.frame pisas.

The summary(m1) function shows:

Fixed effects: MATH_Top1 ~ 1
                 Value Std.Error    DF  t-value p-value
(Intercept) -2.176564 0.1140811 22964 -19.0791       0


whereas the sqrt(vcov(m1)) function shows:

             (Intercept)
(Intercept)   0.1140786


My question is why the two std. error estimates 0.1140811 and 0.1140786 
of the fixed part of the intercept differ slightly. The same holds for 
std. errors of the fixed effects of predictor variables, which I did not 
add above for reasons of simplicity. Thanks for any help!

Ben.
#
A reproducible example would be nice.  On the other hand, we can
reproduce this with the example in ?glmmPQL:


library(MASS)
g1 <- glmmPQL(y ~ trt + I(week > 2), random = ~ 1 | ID,
                      family = binomial, data = bacteria)
s1 <- summary(g1)$tTable[,"Std.Error"]
s2 <- sqrt(diag(vcov(g1)))
all.equal(s1,s2)
## [1] "Mean relative difference: 0.009132611"

Let's dig in to see what's happening. As a starting point, class(g1)
is c("glmmPQL", "lme"), and if we look at methods("vcov") and
methods("summary"), we can see there are no glmmPQL-specific methods,
so these computations are being handled by nlme:::summary.lme and
nlme:::vcov.lme.

vcov.lme simply returns

object$varFix

while summary.lme computes

stdFixed <- sqrt(diag(as.matrix(object$varFix)))
if (adjustSigma && object$method == "ML")
    stdFixed <- stdFixed *
        sqrt(object$dims$N / (object$dims$N - length(stdFixed)))

So what's happening is that vcov() is giving you the uncorrected
(maximum likelihood) estimate of the variance-covariance matrix, while
summary is giving you the bias-corrected version.  Since your sample
size is large, the difference is tiny.
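We can verify this numerically with the ?glmmPQL example from above: applying the finite-sample correction factor sqrt(N/(N-p)) from the summary.lme code to the uncorrected vcov() standard errors should recover the summary() values exactly (a quick sketch; N and p are read off the fitted object):

```r
library(MASS)
library(nlme)

## the example model from ?glmmPQL, as fitted earlier in the thread
g1 <- glmmPQL(y ~ trt + I(week > 2), random = ~ 1 | ID,
              family = binomial, data = bacteria)

s1 <- summary(g1)$tTable[, "Std.Error"]  # bias-corrected SEs from summary()
s2 <- sqrt(diag(vcov(g1)))               # uncorrected (ML) SEs from vcov()

N <- g1$dims$N   # number of observations
p <- length(s2)  # number of fixed-effect coefficients

## multiplying the uncorrected SEs by summary.lme's correction factor
## reconciles the two sets of standard errors
all.equal(unname(s1), unname(s2 * sqrt(N / (N - p))))
## [1] TRUE
```

With N = 220 and p = 4 here, the factor sqrt(220/216) is about 1.009, which accounts for the "mean relative difference" of roughly 0.009 reported earlier.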
On Wed, May 31, 2017 at 4:37 AM, Ben Pelzer <b.pelzer at maw.ru.nl> wrote: