I think that's a different (though not unrelated) issue -- namely,
model selection. Asymptotically, AIC is equivalent to leave-one-out
cross-validation, Mallows's Cp, and some other model selection
methods. However, I don't see using a model selection method as
equivalent to validating the predictive ability of a model.
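To make the distinction concrete, here is a minimal sketch (stdlib only, made-up data) computing both AIC and the exact leave-one-out CV error for a simple linear regression. The AIC formula uses one common Gaussian convention (`n*log(RSS/n) + 2p`, with `p` counting intercept, slope, and error variance); conventions differ by additive constants, which is why AIC is only useful for *comparing* models, while the LOO-CV MSE is directly interpretable as prediction error:

```python
# Sketch: AIC vs. leave-one-out CV for y = a + b*x, stdlib only.
# The data below are synthetic, purely for illustration.
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [1.2, 1.9, 3.2, 3.8, 5.1, 5.9, 7.2, 7.8]
n = len(x)

# Ordinary least squares fit.
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar

resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
rss = sum(e ** 2 for e in resid)

# AIC under one common Gaussian convention (p = 3: intercept,
# slope, error variance). Useful only relative to other models.
p = 3
aic = n * math.log(rss / n) + 2 * p

# Exact LOO residual for OLS is e_i / (1 - h_ii), h_ii = leverage.
h = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]
loo_mse = sum((e / (1 - hi)) ** 2 for e, hi in zip(resid, h)) / n

print(f"AIC = {aic:.2f}, LOO-CV MSE = {loo_mse:.4f}")
```

Both numbers come from the same fit, but only the LOO-CV MSE is a direct estimate of out-of-sample error.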
As far as how to show predictive ability -- I think that's
context-dependent. Along with various quantitative measures, I've
found plotting to be useful. For example, for each fold of a k-fold
cross-validation, plot the observed vs. predicted values in a scatter
plot, using color to identify an important categorical variable (e.g.
sex, species, region) and pch to identify another. Or, if it's
spatial data, actually map the RMSEs of the cross-validation folds to
get an idea of where the model is performing well or poorly.
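The bookkeeping behind those plots is simple: run the k folds, keep every (fold, observed, predicted) triple, and compute a per-fold RMSE. A minimal stdlib-only sketch, with synthetic data and a mean-only "model" standing in for whatever model you're actually validating:

```python
# k-fold CV bookkeeping: collect per-fold (observed, predicted)
# pairs for scatter plotting and a per-fold RMSE. Data are synthetic;
# the mean-only predictor is a stand-in for a real model fit.
import math
import random

random.seed(0)
data = [(xi, 2.0 * xi + random.gauss(0, 0.5)) for xi in range(20)]
k = 5

random.shuffle(data)
folds = [data[i::k] for i in range(k)]  # k roughly equal folds

fold_rmse = []
pairs = []  # (fold, observed, predicted) rows, ready for plotting
for i in range(k):
    test = folds[i]
    train = [row for j in range(k) if j != i for row in folds[j]]
    # "Model": predict the training mean of y (swap in a real fit).
    yhat = sum(yi for _, yi in train) / len(train)
    se = [(yi - yhat) ** 2 for _, yi in test]
    fold_rmse.append(math.sqrt(sum(se) / len(se)))
    pairs.extend((i, yi, yhat) for _, yi in test)

print([round(r, 2) for r in fold_rmse])
```

The `pairs` rows are what you'd feed to the observed-vs-predicted scatter plot (coloring by your categorical variable), and for spatial data the `fold_rmse` values would be joined back to fold locations for mapping.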
Conditional plots and parallel coordinate plots can be good tools for
these types of 'validation' as well. One thing to remember -- if these
methods are used as part of the model selection process, there should
be a final hold-out dataset that was never used in any way in making
modeling decisions. This is a luxury, but if there's enough data it
can provide strong evidence of the model's predictive ability.
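The key mechanical point is that the hold-out split happens *before* any model selection begins. A tiny sketch, assuming nothing beyond the stdlib (the 20% split size and the synthetic data are illustrative choices):

```python
# Reserve a final hold-out set BEFORE any model selection starts.
# The 20% split and the synthetic "dataset" are arbitrary examples.
import random

random.seed(1)
data = list(range(100))  # stand-in for your full dataset
random.shuffle(data)

n_holdout = len(data) // 5          # e.g. 20% locked away
holdout = data[:n_holdout]          # never touched during selection
working = data[n_holdout:]          # all CV / comparisons use this

# ... run all cross-validation and model comparison on `working` ...
# Only after the final model is fixed, evaluate it once on `holdout`.
print(len(working), len(holdout))
```

Evaluating on `holdout` exactly once, after every modeling decision is final, is what makes that last error estimate trustworthy.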