
extracting p values for main effects of binomial glmm

An issue that has not been raised so far is whether the binomial or 
other GLM type variance takes account of all relevant observation 
level sources of variation.  This is pertinent for all generalised linear
mixed models.

Over-dispersed binomial or Poisson variation is, in my experience, much
more common (certainly in areas such as ecology) than strictly binomial
or Poisson variation.  The effect on the variance can be huge,
multiplying it by a factor of 3 or 4 or more.  More generally, the
multiplier may not be a constant factor.  The issue is most serious for
comparisons where the relevant variances are the observation-level
variances.
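To illustrate with simulated data (a beta-binomial setup of my own devising, not from any real study): a quasibinomial fit in base R estimates the dispersion multiplier directly, and fairly modest between-observation variation in the success probability is enough to push it into the 3-4 range mentioned above.

```r
## Simulated beta-binomial data: each observation has its own success
## probability, giving extra-binomial (over-dispersed) variation.
set.seed(1)
ngrp <- 200                        # number of binomial observations
m    <- 20                         # binomial denominator for each
a <- 2; b <- a * (1 - 0.3) / 0.3   # beta parameters with mean 0.3
p <- rbeta(ngrp, a, b)             # observation-level probabilities
y <- rbinom(ngrp, m, p)

## A quasibinomial fit estimates a constant variance multiplier
fit <- glm(cbind(y, m - y) ~ 1, family = quasibinomial)
summary(fit)$dispersion  # roughly 1 + (m - 1)/(a + b + 1), i.e. around 3.5
```

The theoretical inflation factor for this beta-binomial setup is 1 + (m - 1)rho, with rho = 1/(a + b + 1); halving the spread of p would roughly halve the excess over 1.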

With glmer(), one way to handle this is to fit an observation-level
random effect; the multiplier is then not a constant factor.  glmer()
did at one time allow a constant multiplier; I think it unfortunate that
this was removed, as it restricted the options available.  Using
negative binomial errors is in principle another possibility, but such
models can be difficult to get to converge.  If the model is wrong, it
may well not converge, which is an obstacle to getting to the point
where one has something half-sensible that does converge; this is the
case for glm.nb() as well as for glmer.nb().
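As a sketch of the negative-binomial route on simulated data (using MASS::glm.nb(), which ships with R; the data and parameter values are my own illustrative choices, and the observation-level random-effect alternative, which needs lme4, is indicated in comments only):

```r
library(MASS)   # recommended package shipped with R; provides glm.nb()

set.seed(42)
n  <- 500
x  <- rnorm(n)
mu <- exp(0.5 + 0.8 * x)
## Negative binomial counts: over-dispersed relative to Poisson.
## Smaller size (theta) means more over-dispersion.
y <- rnbinom(n, mu = mu, size = 2)

fit.nb <- glm.nb(y ~ x)
coef(fit.nb)    # should land reasonably near (0.5, 0.8)
fit.nb$theta    # estimated dispersion parameter, near the true value 2

## The observation-level random-effect alternative (requires lme4; not run):
## dat <- data.frame(y, x, obs = factor(seq_len(n)))
## lme4::glmer(y ~ x + (1 | obs), family = poisson, data = dat)
```

On clean simulated data like this, glm.nb() converges without fuss; the convergence trouble described above tends to show up when the negative binomial form is a poor description of the data.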

With predicted probabilities that are close to 0 or 1, or Poisson means
that are close to 0, there is the Hauck-Donner effect to worry about:
the standard errors for Wald statistics become nonsense, and
z-statistics can become smaller as the distance between the means that
are being compared increases.
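A small base-R illustration (artificial two-group data of my own construction): as the groups separate, the fitted probabilities approach 0 and 1, the Wald standard error inflates, and |z| eventually falls even though the evidence for a difference keeps growing.

```r
## Hauck-Donner sketch: two groups of n Bernoulli trials, with the
## split made progressively more extreme.
n <- 50
z.for.split <- function(k) {
  ## group A: k successes out of n; group B: n - k successes out of n
  y <- c(rep(1, k), rep(0, n - k), rep(1, n - k), rep(0, k))
  g <- rep(c("A", "B"), each = n)
  fit <- glm(y ~ g, family = binomial)
  summary(fit)$coefficients["gB", "z value"]
}

## |z| rises as the groups separate, then falls again near the boundary
abs(sapply(c(35, 45, 49), z.for.split))
```

A likelihood-ratio test does not suffer from this non-monotonicity, which is one reason to prefer it to Wald-based p-values when fitted probabilities are extreme.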

I guess the message is that this is treacherous territory, and it really
is necessary to know what one is doing!

I'd like to echo Jonathon Baron's comments on the "main effects in
the presence of interactions" issue.

John Maindonald             email: john.maindonald at anu.edu.au
Wellington, NZ
On 6/03/2015, at 0:25, Henrik Singmann <henrik.singmann at psychologie.uni-freiburg.de> wrote: