Quantifying uncertainty for predictions from lme4 (predict.merMod)

1 message · Ben Bolker

Tommaso Jucker <tommasojucker at ...> writes:
The reason this feature is missing is that it's difficult, without
doing full-on MCMC or parametric bootstrapping, to generate standard errors
that appropriately incorporate all sources of error (in particular
the uncertainty in the random-effects parameters). Furthermore, which
components of the error are included depends in a complicated way on
which components of the model are conditioned on in generating the
prediction.  For example, if predictions are generated at the population
level, setting random effects to zero, then (maybe?) the random effects
variance should be incorporated in the prediction error.  Getting all
of this right is quite tricky.
Congratulations, you found a big bug in use.u=TRUE. If you reinstall
lme4 from GitHub and use use.u=TRUE, you can get results whose mean
agrees with predict().  If you use use.u=FALSE and take the standard
deviations, you can get the uncertainty of population-level predictions
due to random effects and residual error.  This approach also
generalizes to GLMMs.
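As a minimal sketch of that recipe (using the built-in sleepstudy example; the model and object names are mine, not from the original question):

```r
library(lme4)

## example fit; substitute your own model
fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

## use.u = FALSE re-draws the random effects for each simulation, so the
## spread of the simulated responses reflects random-effects + residual
## variability (with use.u = TRUE, the mean should agree with predict())
ss <- simulate(fm, nsim = 200, use.u = FALSE)

## per-observation mean and standard deviation across simulations
sim.mean <- rowMeans(ss)
sim.sd   <- apply(ss, 1, sd)
```
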

  This approach will get you predictions that incorporate uncertainty
due *only* to stochasticity (not to parameter uncertainty), which is
probably not what you want.

  Another approach is given at http://glmm.wikidot.com/faq ...
this gets variation due to the fixed-effect uncertainty only --
although for LMMs you can add the residual error fairly easily.
You need to vary bootMer's FUN argument.

  If you pass FUN=predict, you should get results that incorporate
uncertainty in all model components.

  If you pass FUN=simulate, you should get prediction errors ...
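A sketch of the two FUN choices with bootMer (a parametric-bootstrap approach; nsim and the wrapper functions are my choices, not prescribed by the original post):

```r
library(lme4)

fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

## FUN based on predict: replicates are fitted values from each
## bootstrap refit, so their spread reflects parameter uncertainty
b.conf <- bootMer(fm, FUN = function(x) predict(x, re.form = NA),
                  nsim = 200)

## FUN based on simulate: replicates also include new residual (and,
## with use.u = FALSE, random-effects) noise -> prediction intervals
b.pred <- bootMer(fm, FUN = function(x) unlist(simulate(x)),
                  nsim = 200, use.u = FALSE)

## e.g. 95% intervals for each observation
ci.conf <- apply(b.conf$t, 2, quantile, c(0.025, 0.975))
ci.pred <- apply(b.pred$t, 2, quantile, c(0.025, 0.975))
```
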

  Ben Bolker