glmm AIC/LogLik reliability
Perhaps the question was not clear enough (I helped Duncan try to articulate this....). Let's assume that we maintain the random-effects structure in all models, but we have a large multiple-regression problem in the fixed effects (say, 8 variables potentially affecting reproduction in a population). Can we assume that the logLik calculations work in this instance? If we can say yes to this, then we can assume that some calculation of AIC is possible. The adjustment of the logLik by the number of parameters can be manipulated by the researcher, deciding on what df means to him or her, etc. The crux of the question is not whether inference is correct, but whether the bits/mechanics of getting an AIC value for a set of nested models with the same random effects are internally consistent. Andrew
On 28 Jan 2009, at 19:11, Ben Bolker wrote:
I would argue that there's very little we *can* trust in the realm of GLMM inference, with the exception of randomization/parametric bootstrapping (and possibly Bayesian) approaches. I think AIC is no worse than anything else in this regard, except that it hasn't been explored as carefully as some of the alternatives: thus we suspect by analogy that there are problems similar to those of the LRT, but we don't know for sure. Vaida and Blanchard (2005), Greven (2008), and Burnham and White (2002) are good references.

There are two basic issues: (1) if you choose to include models that differ in their random-effects components, how do you count "effective" degrees of freedom? (2) how big a sample does it take to reach the "asymptopia" of AIC? If you're not there, what is the best strategy for finite-size correction? If you use AICc, what should you put in for the effective residual degrees of freedom?

Ben Bolker

D O S Gillespie wrote:
Dear R-Sig-ME - Let's assume that I am going to use a model-averaging, AIC-based approach to evaluate nested GLMMs. I would like to assume that the estimation of AIC and logLik in the GLMMs of lmer is consistent enough (precise, if not accurate) to use in this framework. I realize that we don't trust anova(m1, m2), mainly due to df and test-statistic issues. I realize that some of you may suggest that this is not the correct framework. If so, can you distinguish arguments about the philosophy of AIC model averaging from the practical implementation - i.e., is the output consistent enough to use even if you don't believe the answer? Perhaps they are too intertwined. Thanks, Duncan Gillespie
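[Editor's note: a minimal sketch in R of the mechanics being discussed, assuming lme4's lmer. The models and the built-in sleepstudy data are illustrative only, not Duncan's analysis; the choice of effective sample size n for AICc is deliberately left to the user, as the thread emphasizes.]

```r
library(lme4)

## Fit nested fixed-effect structures with the SAME random effects.
## Use ML (REML = FALSE): REML log-likelihoods are not comparable
## across models that differ in their fixed effects.
m1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, REML = FALSE)
m0 <- lmer(Reaction ~ 1    + (Days | Subject), sleepstudy, REML = FALSE)

## lmer's own parameter count (fixed effects + variance components)
## is stored as the "df" attribute of the logLik object:
k1 <- attr(logLik(m1), "df")

## AICc with a user-supplied effective sample size n -- what counts
## as n (observations? groups?) is exactly the unresolved question.
AICc <- function(model, n) {
  k <- attr(logLik(model), "df")
  AIC(model) + 2 * k * (k + 1) / (n - k - 1)
}

## Akaike weights for model averaging over the candidate set:
aics  <- c(m0 = AIC(m0), m1 = AIC(m1))
delta <- aics - min(aics)
w     <- exp(-delta / 2) / sum(exp(-delta / 2))
```

The point of the sketch is only that the logLik/AIC values lmer reports are extracted and combined in a mechanically consistent way across a candidate set; whether the resulting numbers support valid inference is the separate question raised above.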
_______________________________________________ R-sig-mixed-models at r-project.org mailing list https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
-- Ben Bolker Associate professor, Biology Dep't, Univ. of Florida bolker at ufl.edu / www.zoology.ufl.edu/bolker GPG key: www.zoology.ufl.edu/bolker/benbolker-publickey.asc