generalized linear mixed models: large differences when using glmmPQL or lmer with laplace approximation

Due to some travel and the need to attend to other projects, I haven't
been keeping up as closely with this list as I normally do.  Regarding
the comparison between the PQL and Laplace methods for fitting
generalized linear mixed models, I believe that the estimates produced
by the Laplace method are more reliable than those from the PQL
method.  The objective function optimized by the Laplace method is a
direct approximation, and generally a very good approximation, to the
log-likelihood for the model being fit.  The PQL method is indirect
(the "QL" part of the name stands for "quasi-likelihood") and, because
it involves alternating conditional optimization, can alternate
back-and-forth between two potential solutions, neither of which is
optimal.  (To be fair, such alternating occurs more frequently in the
analogous method for nonlinear mixed-models, in which I was one of the
co-conspirators, than in the PQL method for GLMMs.)
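To make the comparison concrete, here is a minimal sketch (in Python with
numpy, not lmer's internal code) of what the Laplace method does for one
group of a Poisson GLMM with a random intercept b ~ N(0, s2): it finds the
mode of the log integrand and replaces the integral over b with a Gaussian
integral at that mode.  The data y and the parameters beta and s2 below are
made up for illustration.

```python
import numpy as np
from math import lgamma

# Hypothetical one-group Poisson GLMM with random intercept b ~ N(0, s2).
# Marginal likelihood contribution: L = integral over b of
#   prod_j Pois(y_j | exp(beta + b)) * N(b; 0, s2)
y = np.array([3, 5, 4, 6, 2])   # made-up counts for one group
beta, s2 = 1.0, 0.5             # made-up fixed effect and variance

def h(b):
    """Log of the integrand (joint log-density of y and b)."""
    eta = beta + b
    loglik = np.sum(y * eta - np.exp(eta) - [lgamma(yi + 1) for yi in y])
    return loglik - b**2 / (2 * s2) - 0.5 * np.log(2 * np.pi * s2)

# Newton's method for the mode b_hat of h (h is concave in b)
b_hat = 0.0
for _ in range(50):
    grad = y.sum() - len(y) * np.exp(beta + b_hat) - b_hat / s2
    hess = -len(y) * np.exp(beta + b_hat) - 1 / s2
    b_hat -= grad / hess

# Laplace approximation: L ~ exp(h(b_hat)) * sqrt(2*pi / -h''(b_hat))
hess = -len(y) * np.exp(beta + b_hat) - 1 / s2
laplace = np.exp(h(b_hat)) * np.sqrt(2 * np.pi / -hess)

# Brute-force check: numerical integration on a fine grid
grid = np.linspace(b_hat - 6.0, b_hat + 6.0, 4001)
vals = np.exp([h(b) for b in grid])
exact = np.sum(vals) * (grid[1] - grid[0])
print(laplace, exact)
```

Because the integrand here is log-concave and close to Gaussian near its
mode, the two numbers agree closely; that directness is why the Laplace
objective is a genuine approximation to the log-likelihood rather than an
alternating scheme like PQL.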

It may be that the problem you are encountering has more to do with
the use of the quasipoisson family than with the Laplace
approximation.  I am not sure that the derivation of the standard
errors in lmer when using the quasipoisson family is correct, in part
because I don't really understand the quasipoisson and quasibinomial
families.  As far as I know, they don't correspond to probability
distributions so the theory is a bit iffy.

Do you need to use the quasipoisson family or could you use the
poisson family?  Generally the motivation for the quasipoisson family
is to accommodate overdispersion.  Often in a generalized linear mixed
model the problem is underdispersion rather than overdispersion.
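The over/underdispersion distinction can be checked directly.  A common
diagnostic (sketched below in Python with numpy on simulated data, not
taken from lmer or glmmPQL) is the Pearson statistic divided by the
residual degrees of freedom: near 1 under a correctly specified Poisson
model, well above 1 under overdispersion, well below 1 under
underdispersion.  The mean of 4 and the gamma mixing distribution are
arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = 4.0
# Equidispersed: plain Poisson draws, variance equals the mean
y_pois = rng.poisson(mu, size=2000)
# Overdispersed: Poisson with gamma-distributed rates (i.e. negative
# binomial), so the marginal variance exceeds the mean
y_over = rng.poisson(rng.gamma(2.0, mu / 2.0, size=2000))

def dispersion(y, mu_hat):
    """Pearson statistic / residual df for an intercept-only fit."""
    pearson_sq = (y - mu_hat) ** 2 / mu_hat   # squared Pearson residuals
    return pearson_sq.sum() / (len(y) - 1)

print(dispersion(y_pois, y_pois.mean()))  # close to 1
print(dispersion(y_over, y_over.mean()))  # well above 1
```

If a diagnostic like this sits near (or below) 1 once the random effects
are in the model, the plain poisson family is the simpler and safer
choice.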

In one of Ben's replies in this thread he discusses the degrees of
freedom attributed to certain t-statistics.  Regular readers of this
list are aware that degrees of freedom is one of my least favorite
topics.  If one has a reasonably large number of observations and a
reasonably large number of groups then the issue is unimportant.
(Uncertainty in degrees of freedom is important only when the value of
the degrees of freedom is small.  In fact, when I first started
studying statistics we used the standard normal in place of the
t-distribution whenever the degrees of freedom exceeded 30.)
Considering that the quasi-Poisson doesn't correspond to a probability
distribution in the first place (readers should feel free to correct
me if I am wrong about this), I find the question of how many degrees
of freedom should be attributed to the distribution of a quantity
calculated from a non-existent distribution to be somewhat beside the
point.

I think the problem is more likely that the standard errors are not
being calculated correctly.  Is that what you concluded from your
simulations, Ben?

On Tue, Oct 7, 2008 at 8:21 AM, Martijn Vandegehuchte
<martijn.vandegehuchte at ugent.be> wrote: