Talks

Evaluating inter-rater agreement within the framework of signal detection theory

Yen-Ling Kuo
2012-06-08  13:00 - 14:40
Room 103, Mathematics Research Center Building (originally the New Math. Bldg.)



Evaluating the extent of agreement between two raters is common in the social, behavioral, and medical sciences. In this talk, we first introduce two statistics for measuring inter-rater agreement: Cohen's kappa coefficient (1960) and Gwet's AC1 statistic (2008). Next, we present three different modeling frameworks: a random rating model, a mixture of random rating and certain rating, and a signal detection theory model. We conclude that Cohen's kappa coefficient performs as expected under the random rating model but poorly under the other two models. In contrast, Gwet's AC1 statistic performs much better than Cohen's kappa coefficient under all three models. We will also present simulation results for Cohen's kappa coefficient and Gwet's AC1 statistic under these three models.
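
For readers unfamiliar with the two statistics compared in the abstract, the sketch below (not the speaker's code) computes both for two raters on a nominal scale, using the standard definitions: Cohen's kappa takes chance agreement as the product of the two raters' marginal proportions, while Gwet's AC1 takes it as 1/(K-1) times the sum over categories of pi_k(1 - pi_k), where pi_k averages the two raters' marginals. The example data at the end are illustrative only.

    from collections import Counter

    def cohen_kappa(ratings_a, ratings_b):
        """Cohen's kappa (1960): (p_o - p_e) / (1 - p_e), with p_e built from
        the product of each rater's marginal category proportions."""
        n = len(ratings_a)
        categories = set(ratings_a) | set(ratings_b)
        p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
        marg_a, marg_b = Counter(ratings_a), Counter(ratings_b)
        p_e = sum((marg_a[k] / n) * (marg_b[k] / n) for k in categories)
        return (p_o - p_e) / (1 - p_e)

    def gwet_ac1(ratings_a, ratings_b):
        """Gwet's AC1 (2008): same form, but chance agreement is
        1/(K-1) * sum_k pi_k * (1 - pi_k), with pi_k the average of the
        two raters' marginal proportions for category k."""
        n = len(ratings_a)
        categories = set(ratings_a) | set(ratings_b)
        K = len(categories)
        p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
        marg_a, marg_b = Counter(ratings_a), Counter(ratings_b)
        pi = {k: (marg_a[k] / n + marg_b[k] / n) / 2 for k in categories}
        p_e = sum(pi[k] * (1 - pi[k]) for k in categories) / (K - 1)
        return (p_o - p_e) / (1 - p_e)

    # Illustrative example: two raters classifying 10 items as "yes"/"no".
    a = ["yes", "yes", "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]
    b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "yes", "yes"]
    print(cohen_kappa(a, b))  # about 0.47
    print(gwet_ac1(a, b))     # about 0.68

Both statistics correct the observed agreement p_o for agreement expected by chance; they differ only in how that chance term is estimated, which is precisely where their behavior diverges under the three models discussed in the talk.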