
AIC / BIC vs P-Values / MAM

My approach was to rank the models by the relative AIC difference, ΔAIC = AIC (model of interest) − AICmin (the AIC of the best model), and then apply model averaging only to the set of models with ΔAIC in the 0-2 range (Burnham & Anderson, 2002).
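The ΔAIC ranking and the 0-2 confidence-set rule described above can be sketched as follows. This is an illustrative Python translation, not the author's actual R/lmer workflow; the AIC values are hypothetical, and the Akaike weights (exp(−Δ/2), normalized) are the quantities packages like MuMIn use for model averaging.

```python
import math

def akaike_table(aics):
    """Given a list of AIC values, return (deltas, weights).

    delta_i  = AIC_i - min(AIC)
    weight_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)
    """
    amin = min(aics)
    deltas = [a - amin for a in aics]
    raw = [math.exp(-d / 2.0) for d in deltas]
    total = sum(raw)
    weights = [r / total for r in raw]
    return deltas, weights

# Hypothetical AIC values for three candidate models
aics = [100.0, 101.5, 110.0]
deltas, weights = akaike_table(aics)

# Confidence set for model averaging: models with delta AIC <= 2
# (the 0-2 rule of Burnham & Anderson, 2002)
candidate_set = [i for i, d in enumerate(deltas) if d <= 2.0]
```

With these numbers the third model (ΔAIC = 10) drops out of the candidate set, and the first two models would be averaged with weights proportional to exp(−Δ/2).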
Sorry, I was trying to say that I then need to think of a way of validating the goodness of fit, as I want to use my training data to predict my test data, and I have never used a model to predict unknown values. But I am sure I will come to it if I read around!

Thanks for all your help, it is greatly appreciated
On 4 Aug 2010, at 20:09, Ben Bolker wrote:

On 10-08-04 01:13 PM, Chris Mcowen wrote:
If you are *really* trying to predict (rather than test hypotheses), and you really use model averaging, then I would be fine with this approach -- but then you wouldn't be spending any time worrying about which models were weighted how strongly (although I do admit that wondering why p-values and AIC gave different rankings is worth thinking about -- I'm just not sure there's a short answer without looking through all of the data).

 You should take a look at the AICcmodavg and MuMIn packages on CRAN -- one or the other may (?) be able to handle lmer fits.
Often, but not necessarily.  Zuur et al. have a recent paper in Methods in Ecology and Evolution you might want to look at.
I don't quite understand.

 Ben