AFAIK, the better solutions proposed for comparing the relative
importance of variables use measures such as SSEs or partial
correlations computed over all possible orderings of the model; but I
believe that in the face of multicollinearity you are still faced with
problems in interpreting 'importance'. It's just a tough problem...
It is indeed tough, but I don't think partial correlations/SSEs are a
good route. What methods are you referring to in particular? I can't
see how this would help except in the simplest linear models.
My aim wasn't to hold up those metrics as improved measures of
importance, but rather to mention the idea of calculating a metric over
all possible orderings of the model. E.g., see the Grömping paper I
cited earlier in the thread, or for more seminal work:
@article{kruskal1987,
  title     = {Relative Importance by Averaging Over Orderings},
  author    = {Kruskal, William},
  journal   = {The American Statistician},
  volume    = {41},
  number    = {1},
  pages     = {6--10},
  year      = {1987},
  publisher = {American Statistical Association},
  abstract  = {Many ways have been suggested for explicating the
    ambiguous concept of relative importance for independent variables in
    a multiple regression setting. There are drawbacks to all the
    explications, but a relatively acceptable one is available when the
    independent variables have a relevant, known ordering: consider the
    proportion of variance of the dependent variable linearly accounted
    for by the first independent variable; then consider the proportion of
    remaining variance linearly accounted for by the second independent
    variable; and so on. When, however, the independent variables do not
    have a relevant ordering, that approach fails. The primary suggestion
    of this article is to rescue the idea by averaging relative importance
    over all orderings of the independent variables. Variations and
    extensions of the idea are described.}
}
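
For concreteness, here is a minimal Python sketch of the
averaging-over-orderings idea from Kruskal's abstract (the same
quantity the LMG method in Grömping's relaimpo package computes for
R): each predictor is credited with its average incremental R^2 over
all p! orderings of the independent variables. The function names and
simulated data are my own illustration, and the brute-force
enumeration of orderings is only feasible for a handful of predictors:

import itertools
import numpy as np

def r_squared(X, y, cols):
    # R^2 of an OLS fit of y on the given predictor columns (plus intercept)
    Z = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def lmg(X, y):
    # Kruskal's proposal: average each predictor's incremental R^2
    # over all orderings of the independent variables
    p = X.shape[1]
    shares = np.zeros(p)
    perms = list(itertools.permutations(range(p)))
    for order in perms:
        entered, r2_prev = [], 0.0
        for j in order:
            entered.append(j)
            r2_new = r_squared(X, y, entered)
            shares[j] += r2_new - r2_prev
            r2_prev = r2_new
    return shares / len(perms)

# toy example with two correlated predictors
rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)  # collinear with x1
x3 = rng.normal(size=n)
y = x1 + 0.5 * x2 + 0.5 * x3 + rng.normal(size=n)
print(lmg(np.column_stack([x1, x2, x3]), y))  # shares sum to the full-model R^2

Note that the shares always sum to the full-model R^2; with x1 and x2
strongly correlated, the method splits their shared variance between
them, which is exactly the interpretive ambiguity under
multicollinearity raised above.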