
statistical significance of accuracy increase in classification

On 26 Feb 2009, at 14:14, Max Kuhn wrote:

You might also want to take a look at this survey article on kappa and  
its alternatives:

	Artstein, Ron and Poesio, Massimo (2008). Survey article: Inter-coder  
agreement for computational linguistics. Computational Linguistics,  
34(4), 555–596.

which you can download from

	http://www.aclweb.org/anthology-new/J/J08/

Alternatives to the standard Fleiss-Cohen asymptotic confidence  
intervals in the elementary 2x2 case are discussed in

	Lee, J.J. and Tu, Z.N. (1994). A better confidence interval for kappa  
on measuring agreement between two raters with binary outcomes.  
Journal of Computational and Graphical Statistics, 3(3), 301–321.

which is available from JSTOR:

	http://www.jstor.org/stable/1390914

An S implementation of their approximations can be downloaded here:

	http://lib.stat.cmu.edu/S/kappa
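For the elementary 2x2 case, the quantities involved are easy to write
down. Here is a minimal sketch in Python (not a port of the S code or of
the Lee-Tu approximations): it computes Cohen's kappa and a simple
large-sample Wald-type interval using the common approximation
SE ~ sqrt(p_o(1-p_o) / (n(1-p_e)^2)), which is exactly the kind of
interval the paper above argues can be improved upon for small samples:

```python
import math

def kappa_ci(table):
    """Cohen's kappa for a 2x2 agreement table, with a simple
    asymptotic 95% confidence interval.

    `table` is [[a, b], [c, d]]: rows = rater 1, columns = rater 2,
    so a and d count the agreements.  The standard error is the
    common large-sample approximation, not the full Fleiss-Cohen
    variance and not the Lee-Tu alternatives.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    p_o = (a + d) / n                                      # observed agreement
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    se = math.sqrt(p_o * (1 - p_o) / (n * (1 - p_e) ** 2))
    z = 1.959963984540054                                  # N(0,1) 97.5% quantile
    return kappa, (kappa - z * se, kappa + z * se)

# Hypothetical 2x2 table: 40 + 45 agreements out of 100 ratings.
k, (lo, hi) = kappa_ci([[40, 9], [6, 45]])
print(round(k, 3), round(lo, 3), round(hi, 3))
```

With skewed marginal totals this interval can fall outside [-1, 1] or
undercover badly, which is the motivation for the alternatives above.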

Some time ago I started to evaluate the accuracy of these  
approximations in simulation experiments, but I haven't found the time  
to follow up on it.

Hope this helps,
Stefan