1. Holley, J. W. and Guilford, J. P. (1964). A note on the G index of agreement, Educational and Psychological Measurement, 24, 749-753.
2. Kendall, M. G. and Smith, B. B. (1939). The problem of m rankings, The Annals of Mathematical Statistics, 10, 275-287.
3. Kendall, M. G. and Stuart, A. (1963). The Advanced Theory of Statistics, Hafner, New York.
4. Scott, W. A. (1955). Reliability of content analysis: The case of nominal scale coding, Public Opinion Quarterly, 19, 321-325.
5. Fleiss, J. L. and Cohen, J. (1973). The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability, Educational and Psychological Measurement, 33, 613-619.
6. Gwet, K. (2001). Handbook of Inter-Rater Reliability, STATAXIS Publishing Company, Gaithersburg.
7. Kim, J. G., Park, M. H. and Park, Y. G. (2009). The agreement measure H in m × m contingency tables, Communications of the Korean Statistical Society, 16, 753-762.
8. Park, M. H. and Park, Y. G. (2007). A proposal of a new agreement measure to resolve the two paradoxes of Cohen's kappa, The Korean Journal of Applied Statistics, 20, 117-132.
9. Agresti, A. (2002). Categorical Data Analysis, Wiley, New York.
10. Cicchetti, D. V. and Allison, T. (1971). A new procedure for assessing reliability of scoring EEG sleep recordings, The American Journal of EEG Technology, 11, 101-109.
11. Cohen, J. (1960). A coefficient of agreement for nominal scales, Educational and Psychological Measurement, 20, 37-46.
12. Cohen, J. (1968). Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit, Psychological Bulletin, 70, 213-220.
13. Conger, A. J. (1980). Integration and generalization of kappas for multiple raters, Psychological Bulletin, 88, 322-328.
14. Feinstein, A. R. and Cicchetti, D. V. (1990). High agreement but low kappa: I. The problems of two paradoxes, Journal of Clinical Epidemiology, 43, 543-549.
15. Ferger, W. F. (1931). The nature and use of the harmonic mean, Journal of the American Statistical Association, 26, 36-40.