• Title/Summary/Keyword: sets of raters

Search results: 3

A Measure of Agreement for Multivariate Interval Observations by Different Sets of Raters

  • Um, Yong-Hwan
    • Journal of the Korean Data and Information Science Society
    • /
    • v.15 no.4
    • /
    • pp.957-963
    • /
    • 2004
  • A new agreement measure for multivariate interval data rated by different sets of raters is proposed. The proposed approach builds on Um's multivariate extension of Cohen's kappa. The proposed measure is compared with corresponding earlier measures based on Berry and Mielke's approach and on Janson and Olsson's approach, respectively. Application of the proposed measure is exemplified using a hypothetical data set.
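For orientation, here is a minimal Python sketch of plain two-rater Cohen's kappa, the base statistic that Um's multivariate extension generalizes. This is not the multivariate interval measure itself, and the ratings below are invented:

```python
import numpy as np

def cohens_kappa(r1, r2, categories):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e): observed agreement
    corrected for the agreement expected by chance from the marginals."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_o = np.mean(r1 == r2)                        # observed agreement
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c)  # chance agreement
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Invented ratings: two raters classify 10 items into categories 0, 1, 2.
rater1 = [0, 1, 2, 1, 0, 2, 1, 0, 2, 1]
rater2 = [0, 1, 2, 0, 0, 2, 1, 1, 2, 1]
print(cohens_kappa(rater1, rater2, categories=[0, 1, 2]))  # ~0.697
```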


Development of English Speech Recognizer for Pronunciation Evaluation (발성 평가를 위한 영어 음성인식기의 개발)

  • Park Jeon Gue; Lee June-Jo; Kim Young-Chang; Hur Yongsoo; Rhee Seok-Chae; Lee Jong-Hyun
    • Proceedings of the KSPS conference
    • /
    • 2003.10a
    • /
    • pp.37-40
    • /
    • 2003
  • This paper presents preliminary results on automatic pronunciation scoring for non-native English speakers and describes the development of an English speech recognizer for educational and evaluation purposes. The proposed recognizer features two refined acoustic model sets and implements noise-robust data compensation, phonetic alignment, highly reliable rejection, keyword and phrase detection, an easy-to-use language modeling toolkit, and more. The developed recognizer achieves an average correlation of 0.725 between human rater scores and machine scores, using the YOUTH speech database for training and K-SEC for testing.
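To illustrate the evaluation metric reported above, a small sketch of the human-machine score agreement computed as a Pearson correlation. The score arrays are invented placeholders, not the YOUTH or K-SEC data:

```python
import numpy as np

# Invented scores for 8 utterances: averaged human-rater
# pronunciation scores vs. machine scores.
human   = np.array([4.5, 3.0, 2.5, 4.0, 3.5, 1.5, 5.0, 2.0])
machine = np.array([4.2, 3.1, 2.8, 3.7, 3.9, 1.8, 4.6, 2.4])

# Pearson correlation between the two score series; the paper
# reports an average correlation of 0.725 on its own data.
r = np.corrcoef(human, machine)[0, 1]
print(f"human-machine correlation: {r:.3f}")
```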


Permutation p-values for specific-category kappa measure of agreement (특정 범주에 대한 평가자간 카파 일치도의 퍼뮤테이션 p값)

  • Um, Yonghwan
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.4
    • /
    • pp.899-910
    • /
    • 2016
  • Asymptotic tests are often not suitable for the analysis of sparse ordered contingency tables, as asymptotic p-values may either overestimate or underestimate the true p-values. In this paper, we describe permutation procedures in which we compute exact or resampling p-values for a weighted specific-category agreement in ordered k×k contingency tables. We use the weighted specific-category kappa proposed by Kvålseth to measure the extent to which two independent raters agree on specific categories. We carried out comparison studies between exact, resampling, and asymptotic p-values using 3×3 contingency data (real and artificial data sets) and 4×4 artificial contingency data.
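A minimal sketch of the resampling side of such a test, using plain Cohen's kappa as a stand-in for Kvålseth's weighted specific-category kappa: permute one rater's labels to break the pairing, recompute the statistic, and take the proportion of permuted values at least as large as the observed one. The data and permutation count are invented:

```python
import numpy as np

def kappa(r1, r2):
    """Plain Cohen's kappa, used here as a stand-in for Kvalseth's
    weighted specific-category kappa that the paper actually tests."""
    cats = np.union1d(r1, r2)
    p_o = np.mean(r1 == r2)
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (p_o - p_e) / (1 - p_e)

def resampling_pvalue(r1, r2, n_perm=10_000, seed=0):
    """Permute one rater's labels to break the pairing, recompute the
    statistic, and count permuted values >= the observed one."""
    rng = np.random.default_rng(seed)
    r1, r2 = np.asarray(r1), np.asarray(r2)
    observed = kappa(r1, r2)
    hits = sum(kappa(r1, rng.permutation(r2)) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)  # add-one rule avoids p = 0

# Invented ordinal ratings from two raters on 12 items.
rater1 = [1, 2, 3, 1, 2, 3, 1, 2, 3, 2, 1, 3]
rater2 = [1, 2, 3, 1, 2, 2, 1, 3, 3, 2, 1, 3]
print(resampling_pvalue(rater1, rater2))
```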