predicted mean from GLMM lower than mean from GAM
3 messages · benton, Ben Bolker, Jarrod Hadfield

benton <benton at ...> writes:

Hi: I fit a Poisson GLMM with only the intercept and two random effects, and the predicted mean was 1.14. When I fit a generalized additive model (GAM) with only the intercept, the predicted mean was 1.6. Does anyone know why this is happening? I'm looking for a theoretical response, as I've checked my code and there are no errors. Thanks!

Katie Benton
A little more information / a reproducible example would be helpful; it would be possible for me to invent a reproducible example for myself, but it would be easier (for me!) and more likely to answer your specific question if you provide the example.

Have you compared the Poisson GLMM prediction with a Poisson GLM (no random effects) prediction, to make sure there's not some funky/surprising difference between the GAM (presumably you're using mgcv::gam()) and the GLMM (presumably you're using lme4::glmer())? How are you deriving the predictions? Are you definitely using the same family and link function for both models?

In general there can be important differences between the marginal (no-random-effects) and conditional (including-random-effects) predictions, but off the top of my head that should not apply to intercept-only models ...

See http://tinyurl.com/reproducible-000 for more info on reproducible examples ...

Ben Bolker
Hi,

If you take exp(log(1.14) + 0.5*v), where v is the sum of the estimated variances, do the two estimates then coincide? 1.14 in the GLMM is the predicted modal count (i.e. when the two random effects are zero), and exp(log(1.14) + 0.5*v) is the predicted mean count (i.e. averaged over the random effects).

Cheers,
Jarrod

Quoting Ben Bolker <bbolker at gmail.com> on Tue, 17 Jul 2012 20:57:55 +0000 (UTC):
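Jarrod's correction can be checked numerically. The thread does not report the fitted random-effect variances, so the sketch below (in Python rather than R, purely as a language-agnostic Monte Carlo check) assumes the GAM's 1.6 is the marginal mean and backs out the implied total variance v = 2*log(1.6/1.14); the specific v is an assumption, not a value from the thread.

```python
# Monte Carlo check of the lognormal-mean correction for a Poisson GLMM
# with log link: the conditional mean at random effects = 0 is exp(mu),
# while the marginal mean, averaged over normal random effects with total
# variance v, is exp(mu + v/2).
import math
import random

random.seed(1)

mu = math.log(1.14)               # GLMM intercept on the log scale (from the thread)
v = 2 * math.log(1.6 / 1.14)      # implied total variance IF 1.6 is the marginal mean

conditional_mean = math.exp(mu)            # predicted modal count (~1.14)
predicted_marginal = math.exp(mu + v / 2)  # Jarrod's corrected mean count

# Average exp(mu + b) over many draws of b ~ Normal(0, v)
n = 200_000
marginal_mean = sum(
    math.exp(mu + random.gauss(0.0, math.sqrt(v))) for _ in range(n)
) / n

print(conditional_mean)   # close to 1.14
print(predicted_marginal) # close to 1.6 under the assumed v
print(marginal_mean)      # Monte Carlo estimate, close to predicted_marginal
```

With this assumed v the simulated marginal mean lands near 1.6 while the conditional (modal) prediction stays at 1.14, illustrating why an intercept-only GLMM and GAM can legitimately disagree.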
_______________________________________________ R-sig-mixed-models at r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models