
AIC / BIC vs P-Values / MAM

On 10-08-04 10:55 AM, Chris Mcowen wrote:
OK, you asked for it ...
In my opinion, model selection via AIC shares most of the 
disadvantages of p-value based model selection. All-subsets selection is 
slightly better than stepwise approaches because it is less susceptible 
to getting stuck in some weird local branch, but whether you select 
models via p-values or AIC *should* be based on whether you are trying to 
test hypotheses or make predictions -- and you should seriously question 
whether you should be doing model selection in the first place. You 
should *not* select a model and then make inferences about the 
'significance' of what remains in the model ...

   AIC is great but it's not a panacea.

    Now -- on to the "p vs AIC" question.
My best guess as to what's going on here is that you have a good 
deal of correlation among your factors (in this case, with
discrete factors, that means that some combinations of factors are 
under- or overrepresented in the data set), which means that quite
different combinations of factors can fit/explain the data approximately 
equally well.
    It's really hard to say without going through the data in detail.
    My advice would be to (a) read [or skim] Frank Harrell's book, 
Regression Modeling Strategies, particularly the sections on the
dangers of model reduction; (b) if you're interested in **testing 
hypotheses about which factors are important**, simply fit
the full model and base your inference on the estimates and confidence 
intervals from the full model.

   good luck,
     Ben Bolker