Beyond kappa: A review of interrater agreement measures
In 1960, Cohen introduced the kappa coefficient to measure chance‐corrected nominal scale agreement between two raters. Since then, numerous extensions and generalizations of this interrater agreement measure have been proposed in the literature. This paper reviews and critiques various approaches to the study of interrater agreement, for which the relevant data comprise either nominal or ordinal categorical ratings from multiple raters. It presents a comprehensive compilation of the main statistical approaches to this problem, descriptions and characterizations of the underlying models, and discussions of related statistical methodologies for estimation and confidence‐interval construction. The emphasis is on various practical scenarios and designs that underlie the development of these measures, and the interrelationships between them.
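To make the notion of chance-corrected agreement concrete, here is a minimal sketch (not taken from the paper) of Cohen's kappa for two raters and nominal categories, using the standard formula kappa = (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is the agreement expected by chance from the raters' marginal frequencies. The rating data are illustrative only.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected nominal agreement between two raters:
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(ratings_a)
    # Observed agreement: proportion of items both raters label identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement under independence of the two raters' marginals.
    counts_a = Counter(ratings_a)
    counts_b = Counter(ratings_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of six items by two raters.
a = ["yes", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

Here p_o = 5/6 and p_e = 1/2, so kappa ≈ 0.667: agreement well above chance but short of perfect. Kappa is 1 for perfect agreement and 0 when agreement equals chance expectation.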
The Canadian Journal of Statistics
Mathematics Faculty Publications, 60.
Banerjee, M., Capozzoli, M., McSweeney, L., & Sinha, D. (1999). "Beyond kappa: A review of interrater agreement measures," The Canadian Journal of Statistics, 27(1), 3–23. https://doi.org/10.2307/3315487