
logLik.lm()

Dear Prof. Ripley:

	  I gather you disagree with the observation in Burnham and Anderson 
(2002, ch. 2) that the "complexity penalty" in the Akaike Information 
Criterion is a bias correction, and that, with this correction, one can 
use "density = exp(-AIC/2)" to compute approximate posterior 
probabilities comparing even different distributions?

	  They use this even to compare discrete and continuous distributions, 
which makes no sense to me.  However, with a common dominating measure, 
it seems sensible to me.  They cite a growing literature on "Bayesian 
model averaging".  What I've seen of this claims that Bayesian model 
averaging produces better predictions than predictions based on any 
single model, even using these approximate posteriors ("Akaike weights") 
in place of full Bayesian posteriors.
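For what it's worth, the "Akaike weight" rescaling of exp(-AIC/2) can be 
sketched as follows (a minimal Python illustration rather than R; the AIC 
values and the function name are made up for the example, and differencing 
against the minimum AIC is just a standard trick to avoid underflow):

```python
import math

def akaike_weights(aic_values):
    """Turn a list of AIC values into approximate posterior model
    probabilities ("Akaike weights"): exp(-AIC/2) rescaled to sum to 1.
    Subtracting min(AIC) first leaves the weights unchanged but keeps
    the exponentials numerically well-behaved."""
    delta = [a - min(aic_values) for a in aic_values]
    rel_lik = [math.exp(-d / 2.0) for d in delta]
    total = sum(rel_lik)
    return [r / total for r in rel_lik]

# Hypothetical AIC values for three candidate models
weights = akaike_weights([100.0, 102.0, 110.0])
```

The weights sum to one and decrease as AIC increases, which is what lets 
them stand in for posterior model probabilities in model averaging.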

	  I don't have much experience with this, but so far, I seem to have 
gotten great, informative answers to my clients' questions.  If there 
are serious deficiencies with this kind of procedure, I'd like to know.

Comments?
Best Wishes,
Spencer Graves
###############
REFERENCE:  Burnham and Anderson (2002) Model Selection and Multimodel 
Inference, 2nd ed. (Springer)
Prof Brian Ripley wrote: