
[RsR] [R-SIG-Finance] Outliers in the market model that's used to estimate `beta' of a stock

I haven't read "Fooled by Randomness", but I did start reading "The 
Black Swan", and although in general I like provocative books that 
challenge my point of view, I found his main thesis too thin to 
warrant so many words... I took it that his main argument was with 
those who misinterpret and misuse statistics (particularly when they 
do it for their own benefit), not with statistics itself, which always 
rests on stated assumptions.
That is a fair point, and it applies to science in general: "theories 
work until they are proven wrong", the whole "falsifiability" argument 
(cf. Popper vs. Kuhn vs. Feyerabend vs. ...). I believe robust 
statistics can help you determine when your model (theory) has 
stopped working.

In any case, with respect to the old "data cleaning versus robust 
estimators" discussion, I would point the interested reader to the 
first chapter of Maronna, Martin and Yohai's book 
(http://books.google.com/books?id=YD--AAAACAAJ&dq=martin+maronna+yohai), 
and, for some more specific inference implications, to the first 
chapter of my PhD dissertation. Essentially, the two main issues are: 
(a) detecting outliers using non-robust estimators does not work well 
in general (and even when it does, see my next point); (b) if you 
remove (or alter) observations, all subsequent probabilistic 
statements (p-values, standard errors, etc.) are conditional on the 
very non-linear cleaning operation you performed, and are thus both 
wrong at face value and not easy to correct. Robust estimators 
incorporate the down-weighting and its effect on the corresponding 
inference at once, and are therefore, IMHO, to be preferred.
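To make point (a) and the down-weighting concrete in the market-model 
setting of the subject line, here is an illustrative sketch (mine, not 
from the thread, and in Python/numpy rather than R purely for 
self-containment; in R one would reach for MASS::rlm or 
robustbase::lmrob). It simulates daily returns with a true beta of 
1.2, plants one gross outlier on a high-leverage day, and compares OLS 
against a simple Huber-type IRLS fit that down-weights the bad point. 
All data and tuning constants are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 250                                   # roughly one year of daily returns
rm = rng.normal(0.0, 0.01, n)             # market returns
beta_true = 1.2
rs = beta_true * rm + rng.normal(0.0, 0.005, n)   # stock returns
rm[10], rs[10] = -0.08, 0.25              # one gross outlier (e.g. a bad tick)

def ols_beta(x, y):
    """Ordinary least squares slope (intercept included)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def huber_beta(x, y, k=1.345, iters=50):
    """Huber M-estimate of the slope via iteratively reweighted
    least squares; residual scale from the (rescaled) MAD."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(y)
    for _ in range(iters):
        sw = np.sqrt(w)
        coef = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        r = y - X @ coef
        s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale
        u = np.abs(r) / (s + 1e-12)
        w = np.where(u <= k, 1.0, k / u)  # Huber weights: 1 near 0, decaying
    return coef[1]

b_ols = ols_beta(rm, rs)
b_rob = huber_beta(rm, rs)
print(f"OLS beta:    {b_ols:.3f}")   # dragged well away from 1.2
print(f"robust beta: {b_rob:.3f}")   # stays close to 1.2
```

The point of the sketch is exactly (b) above in miniature: the robust 
fit never deletes the observation, it just assigns it a small weight, 
so the weighting is part of the estimator and the usual robust 
inference machinery applies to it, rather than being conditional on an 
ad-hoc cleaning step.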

Matias
markleeds at verizon.net wrote: