
Are likelihood approaches frequentist?

Paulo Inácio de Knegt López de Prado wrote:
Paulo,
The likelihood function is the central concept of statistical inference, 
so working with the likelihood you can have Bayesian, frequentist 
(better called sampling-distribution inference), or likelihoodist 
inference, depending on what you do with your likelihoods. In 
Bayesian inference the likelihood function updates prior opinion by 
bringing the data into the inference; in sampling-distribution 
(a.k.a. frequentist) inference it allows the building of better 
confidence intervals, by considering over the sample space the 
likelihood values that would have occurred if data similar to the data 
you have had been obtained; and in the direct-likelihood approach the 
likelihood is used directly to compare two hypotheses, or equivalently 
to build direct-likelihood intervals. For example, the likelihood ratio 
test (not to be confused with the pure likelihood ratio, or difference 
in support) based on a limiting chi-square distribution is a 
likelihood-based frequentist method. Frequentist statisticians evaluate 
the likelihood from the sample, and then proceed to evaluate the 
likelihood for other potential samples, thus building their confidence 
intervals and p-values. Bayesian and likelihoodist statisticians, on 
the other hand, use only the likelihood evaluated at the sample that 
was actually obtained. From that point of view one can say that 
Bayesians and likelihoodists are closer to each other than to 
frequentists; however, both Bayesians and frequentists base their 
inference on probabilities (posterior probabilities or error rates), 
whereas likelihoodists base their inference on, well, likelihood only.
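To make the contrast concrete, here is a toy numerical illustration (my own made-up binomial example, not from the original discussion) of how the same likelihood serves both schools: the likelihoodist reports the pure likelihood ratio directly, while the frequentist refers twice its logarithm to a limiting chi-square distribution to get a p-value.

```python
import math

# Hypothetical data: 60 successes in 100 Bernoulli trials.
# Compare H0: p = 0.5 against the MLE p-hat = 0.6.
n, k = 100, 60
p0, p1 = 0.5, k / n

def binom_loglik(p, n, k):
    # Binomial log-likelihood up to an additive constant
    # (the binomial coefficient cancels in any ratio).
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Pure likelihood ratio: the likelihoodist's measure of evidence.
log_lr = binom_loglik(p1, n, k) - binom_loglik(p0, n, k)
lr = math.exp(log_lr)

# Frequentist likelihood ratio test: refer 2 * log(LR) to a
# chi-square(1) limiting distribution; for df = 1 the survival
# function is erfc(sqrt(x / 2)).
lrt_stat = 2 * log_lr
p_value = math.erfc(math.sqrt(lrt_stat / 2))

print(f"LR = {lr:.2f}, LRT statistic = {lrt_stat:.2f}, p = {p_value:.3f}")
```

Note that both quantities are computed from the identical likelihood function; what differs is the inferential use made of it, exactly as described above.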
Royall's points are very convincing indeed; at least they were for me 
too. Royall's concept of the evidence in the sample about competing 
hypotheses, his work on approximate likelihoods for problems with 
nuisance parameters, Edwards' mathematical proofs of the properties of 
the support function, and Jim Lindsey's arguments about Akaike's index 
in model selection together provide a complete theory of statistical 
inference, based exclusively on the likelihood, IMHO.
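As a small illustration of Akaike's index in model selection (again a made-up example, not one from Lindsey's work), AIC = 2k - 2 log L penalizes the maximized log-likelihood by the number of fitted parameters, and the model with the smaller AIC is preferred:

```python
import math

def binom_loglik(p, n, k):
    # Binomial log-likelihood up to an additive constant.
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Made-up data: two groups of 50 trials, with 30 and 20 successes.
n1, k1 = 50, 30
n2, k2 = 50, 20

# Model A: a single shared success probability (1 parameter).
p_shared = (k1 + k2) / (n1 + n2)
loglik_a = binom_loglik(p_shared, n1, k1) + binom_loglik(p_shared, n2, k2)

# Model B: a separate probability per group (2 parameters).
loglik_b = binom_loglik(k1 / n1, n1, k1) + binom_loglik(k2 / n2, n2, k2)

# AIC = 2k - 2 log L; smaller is better.
aic_a = 2 * 1 - 2 * loglik_a
aic_b = 2 * 2 - 2 * loglik_b
print(f"AIC shared = {aic_a:.2f}, AIC separate = {aic_b:.2f}")
```

Here the two-parameter model earns its extra parameter: its log-likelihood gain more than offsets the AIC penalty, so it is selected.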
"There is nothing more practical than a good theory". I'm not sure who 
was the original author of that quote (a book I read long ago 
attributed it to Einstein), but it applies here. Likelihoodist, 
frequentist, and Bayesian inferences are not compatible, especially 
likelihoodist and Bayesian versus frequentist, so the pragmatist who 
changes allegiance is making an error at some point.
Two other great statisticians who subscribe to the likelihoodist school 
of inference are Jim Lindsey and John Nelder.
At least once a year I hear someone at a meeting say that there are two 
modes of inference: frequentist and Bayesian. That this sort of 
nonsense should be so regularly propagated shows how much we have to 
do. To begin with there is a flourishing school of likelihood 
inference, to which I belong.
I tend to think there is a place for Bayesian inference in prediction.

Rubén