Weighted Hω and New Paradox of κ

  • Kwon, Na Young (Department of Biostatistics, The Catholic University of Korea) ;
  • Kim, Jin Gon (Department of Biostatistics, The Catholic University of Korea) ;
  • Park, Yong Gou (Department of Biostatistics, The Catholic University of Korea)
  • Published: 2009.10.31

Abstract

For an $R{\times}R$ table in which two raters classify each subject into one of R ordinal response categories, we propose a weighted measure of agreement, $H_{\omega}$, whose weights reflect the degree of disagreement, and derive its maximum likelihood estimator and asymptotic variance. For $2{\times}2$ tables, we redefine the last paradox of ${\kappa}$ raised by Feinstein and Cicchetti (1990) and prove its relationship to the marginal distributions. We also introduce a new paradox of ${\kappa}$ and summarize the general relationships between ${\kappa}$ and the marginal distributions.
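The exact definition of $H_{\omega}$ appears in the paper body, not in this abstract. As a minimal sketch of the weighted-agreement idea, the snippet below computes kappa with Cicchetti–Allison linear weights $w_{ij} = 1 - |i-j|/(R-1)$ (Cohen, 1968; Cicchetti and Allison, 1971), which reduces to ordinary $\kappa$ for a $2{\times}2$ table, and reproduces the Feinstein–Cicchetti "high agreement but low kappa" phenomenon; the function name and the example table are illustrative, not taken from the paper.

```python
import numpy as np

def kappa_stats(table):
    """Kappa for an R x R rater-by-rater table, using
    Cicchetti-Allison linear weights w_ij = 1 - |i-j|/(R-1),
    so near-diagonal (mild) disagreements earn partial credit.
    For R = 2 this equals Cohen's unweighted kappa."""
    p = np.asarray(table, dtype=float)
    p /= p.sum()                            # joint proportions
    row, col = p.sum(axis=1), p.sum(axis=0) # marginal distributions
    R = p.shape[0]
    i, j = np.indices((R, R))
    w = 1.0 - np.abs(i - j) / (R - 1)       # linear agreement weights
    po = (w * p).sum()                      # weighted observed agreement
    pe = (w * np.outer(row, col)).sum()     # weighted chance agreement
    return (po - pe) / (1.0 - pe)

# Paradox: 85% raw agreement, yet kappa is only about 0.32,
# because the dominant marginal category inflates chance agreement.
paradox = [[80, 10], [5, 5]]
print(round(kappa_stats(paradox), 3))  # → 0.318
```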

References

  1. Kim, J. G., Park, M. H. and Park, Y. G. (2009). Agreement measure H in m ${\times}$ m tables, Communications of the Korean Statistical Society, 16, 753-762 https://doi.org/10.5351/CKSS.2009.16.5.753
  2. Park, M. H. and Park, Y. G. (2007). A new measure of agreement to resolve the two paradoxes of Cohen's kappa, The Korean Journal of Applied Statistics, 20, 117-132 https://doi.org/10.5351/KJAS.2007.20.1.117
  3. Agresti, A. (2002). Categorical Data Analysis, Wiley, New York
  4. Cicchetti, D. V. and Allison, T. (1971). A new procedure for assessing reliability of scoring EEG sleep recordings, The American Journal of EEG Technology, 11, 101-109
  5. Cohen, J. (1960). A coefficient of agreement for nominal scales, Educational and Psychological Measurement, 20, 37-46 https://doi.org/10.1177/001316446002000104
  6. Cohen, J. (1968). Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit, Psychological Bulletin, 70, 213-220 https://doi.org/10.1037/h0026256
  7. Conger, A. J. (1980). Integration and generalization of kappa for multiple raters, Psychological Bulletin, 88, 322-328 https://doi.org/10.1037/0033-2909.88.2.322
  8. Feinstein, A. R. and Cicchetti, D. V. (1990). High agreement but low kappa: 1. The problems of two paradoxes, Journal of Clinical Epidemiology, 43, 543-549 https://doi.org/10.1016/0895-4356(90)90158-L
  9. Ferger, W. F. (1931). The nature and use of the harmonic mean, Journal of the American Statistical Association, 26, 36-40
  10. Fleiss, J. L. and Cohen, J. (1973). The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability, Educational and Psychological Measurement, 33, 613-619 https://doi.org/10.1177/001316447303300309
  11. Gwet, K. (2001). Handbook of Inter-Rater Reliability, STATAXIS Publishing Company, Gaithersburg
  12. Holley, J. W. and Guilford, J. P. (1964). A note on the G index of agreement, Educational and Psychological Measurement, 24, 749-753 https://doi.org/10.1177/001316446402400402
  13. Kendall, M. G. and Smith, B. B. (1939). The problem of m rankings, The Annals of Mathematical Statistics, 10, 275-287 https://doi.org/10.1214/aoms/1177732186
  14. Kendall, M. G. and Stuart, A. (1963). The Advanced Theory of Statistics, Hafner, New York
  15. Scott, W. A. (1955). Reliability of content analysis: The case of nominal scale coding, Public Opinion Quarterly, 19, 321-325 https://doi.org/10.1086/266577