• Title/Abstract/Keywords: Raters

Search results: 168 (processing time: 0.028 s)

논문형 고사 평가에서 평가치 조정과 평가원의 신뢰도 향상에 유효한 CDM 모형의 응용 (Application of the Categorical Data Model for Enhancing the Reliability of the Raters' Ratings and Score Adjustment of the Essay Type Test)

  • 홍석강
    • 한국수학교육학회지시리즈A:수학교육 / Vol. 38, No. 2 / pp.165-172 / 1999
  • $\sigma_e^2$, that resulted from those three sources of such imperfection. In particular, to eliminate differences in severity among many raters, the randomization procedure for the rater sample was very effective in enhancing the reliability of ratings with comparatively small groups of examinees and raters. We also introduced new rating methods, i.e., a 2-step diagnostic procedure to check the reliability and stability of raters, and a score adjustment method to compute the optimal mean values in rating the examinees.
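The severity-adjustment idea in the abstract above can be sketched generically. The snippet below is a minimal illustration with hypothetical scores, not the paper's CDM model: it simply shifts each rater's scores so that all raters share the same overall mean.

```python
# Minimal rater-severity adjustment sketch (hypothetical data, not the
# paper's CDM model): shift each rater's scores by that rater's mean
# deviation from the grand mean, so all raters end up equally "severe".

scores = {                     # rater -> scores given to a common essay set
    "R1": [78, 85, 62, 90],    # comparatively lenient rater
    "R2": [70, 80, 55, 84],    # comparatively severe rater
}

grand_mean = sum(s for v in scores.values() for s in v) / sum(
    len(v) for v in scores.values())

adjusted = {
    r: [s - (sum(v) / len(v) - grand_mean) for s in v]
    for r, v in scores.items()
}

# After adjustment, every rater's mean equals the grand mean.
for r, v in adjusted.items():
    print(r, round(sum(v) / len(v), 2))
```

Real models (the paper's CDM, or a Rasch-family model) estimate severity jointly with examinee ability rather than by simple mean-centering; this sketch shows only the adjustment direction.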


A FACETS Analysis of Rater Characteristics and Rater Bias in Measuring L2 Writing Performance

  • Shin, You-Sun
    • 영어어문교육 / Vol. 16, No. 1 / pp.123-142 / 2009
  • The present study used multi-faceted Rasch measurement to explore the characteristics and bias patterns of non-native raters when they scored L2 writing tasks. Three raters scored 254 writing tasks written by Korean university students on two topics adapted from the TOEFL Test of Written English (TWE). The written products were assessed with a five-category rating scale (Content, Organization, Language in Use, Grammar, and Mechanics). The raters differed in severity across rating categories but not across task types; overall, they scored Grammar most harshly and Organization most leniently. The results also indicated several bias patterns with regard to the rating categories and task types. In rater-task bias interactions, each rater showed recurring bias patterns between the two writing tasks. Analysis of rater-category bias interactions showed that all three raters exhibited biased patterns across the rating categories, though they were relatively consistent in their ratings. The study has implications for the importance of rater training and task selection in L2 writing assessment.


한국어능력시험(TOPIK) 쓰기 평가의 채점 특성 연구 (A Study on the Features of Writing Rater in TOPIK Writing Assessment)

  • 안수현;김정숙
    • 한국어교육 / Vol. 28, No. 1 / pp.173-196 / 2017
  • Writing is a subjective, performative activity, and writing ability is multifaceted and complex. To assess examinees' writing ability accurately and assign valid scores, raters must first be competent in assessment; this study therefore serves as fundamental research on rater characteristics in the TOPIK writing assessment. 150 scripts from examinees of the 47th TOPIK were selected randomly and rated independently by 20 raters. The many-facet Rasch model was used to generate individualized feedback reports on each rater's relative severity and consistency with respect to particular categories of the rating scale; analyses were conducted with the FACETS ver. 3.71.4 program. Overfit and misfit raters had considerable difficulty distinguishing between assessment factors and interpreting the criteria. Writing raters appeared confused when interpreting the assessment criteria, and overfit and misfit raters in particular interpreted the criteria arbitrarily; the main source of overfit and misfit was confusion about assessment factors and criteria when seeking a basis for scoring. More rater training and further research grounded in these scoring characteristics are therefore needed. This study is significant in that it comprehensively examined the scoring characteristics of writing raters and visually confirmed patterns of rater error in writing assessment.

An evaluation of Korean students' pronunciation of an English passage by a speech recognition application and two human raters

  • Yang, Byunggon
    • 말소리와 음성과학 / Vol. 12, No. 4 / pp.19-25 / 2020
  • This study examined thirty-one Korean students' pronunciation of an English passage using a speech recognition application, Speechnotes, and two Canadian raters' evaluations of their speech according to the International English Language Testing System (IELTS) band criteria, to assess the possibility of using the application as a teaching aid for pronunciation education. The grand average percentage of correctly recognized words was 77.7%; from this moderate recognition rate, the participants' pronunciation level was construed as intermediate or higher. The recognition rate varied with the composition of content words and function words in each sentence. Frequency counts of unrecognized words by group level and word type revealed the participants' typical pronunciation problems, including fricatives and nasals. The IELTS bands the two native raters assigned to the rainbow passage correlated moderately highly with each other. A moderate correlation was found between the number of correctly recognized content words and the raters' bands, while an almost negligible correlation was found between the function words and the raters' bands. The author concludes that the speech recognition application could serve as a partial aid for diagnosing an individual's or a group's pronunciation problems, but that further studies are needed to match it against human raters.
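The two quantities this abstract relates, a per-passage word recognition rate and its Pearson correlation with human band scores, can be sketched as follows. The passage fragment, rates, and bands below are hypothetical illustrations, not the study's data.

```python
# Sketch of the study's two quantities on hypothetical data: word
# recognition rate per passage, and the Pearson correlation between
# recognition rates and human band scores.

def recognition_rate(reference, recognized):
    """Percent of reference words that appear in the recognizer output."""
    ref = reference.lower().split()
    hyp = set(recognized.lower().split())
    return 100.0 * sum(w in hyp for w in ref) / len(ref)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

rate = recognition_rate(
    "when the sunlight strikes raindrops in the air",
    "when the sunlight strike raindrops in the air")  # 'strikes' missed

# Hypothetical per-student recognition rates and IELTS-style bands.
rates = [62.5, 77.7, 85.0, 91.2]
bands = [5.0, 6.0, 6.5, 7.5]
r = pearson(rates, bands)
```

A bag-of-words hit rate like this ignores word order and repeated words in the hypothesis; production studies typically use word error rate over an alignment instead.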

A study on evaluator factors affecting physician-patient interaction scores in clinical performance examinations: a single medical school experience

  • Park, Young Soon;Chun, Kyung Hee;Lee, Kyeong Soo;Lee, Young Hwan
    • Journal of Yeungnam Medical Science / Vol. 38, No. 2 / pp.118-126 / 2021
  • Background: This study analyzed evaluator factors affecting physician-patient interaction (PPI) scores in the clinical performance examination (CPX), with the aim of identifying ways to increase the reliability of CPX evaluation. Methods: The six-item Yeungnam University Scale (YUS), the four-item analytic global rating scale (AGRS), and the one-item holistic rating scale (HRS) were used to evaluate student performance in PPI. A total of 72 fourth-year students from Yeungnam University College of Medicine in Korea participated in the evaluation, with 32 faculty and 16 standardized patient (SP) raters. The study examined differences in scores by type of scale, rater group (SP vs. faculty), faculty specialty, evaluation experience, and level of fatigue as time passed. Results: There were significant differences between faculty and SP scores on all three scales, and a significant correlation among raters' scores. Scores given by raters on items related to their own specialty were lower than scores given on items outside their specialty. On the YUS and AGRS, there were significant differences by the faculty's evaluation experience: scores from raters with three to ten previous evaluation experiences were lower than the others'. There were also significant differences among SP raters on all scales. The correlation between the YUS and the AGRS/HRS declined significantly with the length of evaluation time. Conclusion: In the CPX, PPI score reliability was significantly affected by evaluator factors as well as by the type of scale.

중학생 과학탐구활동 수행평가 시 총체적 채점에서 나타나는 채점자간 불일치 유형 분석 (An Analysis on Rater Error in Holistic Scoring for Performance Assessments of Middle School Students' Science Investigation Activities)

  • 김형준;유준희
    • 한국과학교육학회지 / Vol. 32, No. 1 / pp.160-181 / 2012
  • The purpose of this study was to understand the degree and types of inter-rater disagreement in order to improve the reliability of holistic scoring in performance assessments of science investigation activities. A science-inquiry performance assessment was administered to 60 middle school students, and four trained raters scored the responses. Analysis of variance showed that rater-related variance components explained 25% of the total variance; of the four raters, two tended to be lenient and two tended to be severe. Of the 240 scoring cases in total, all four raters agreed in 51. For the 189 cases with inter-rater disagreement, rater discussions identified three types: Type 1 (38%), disagreement arising from differences in what raters considered important; Type 2 (25%), disagreement arising from rater leniency or severity; and Type 3 (31%), disagreement caused by mistakes, such as overlooking parts a rater did not consider important. Type 1 disagreements occurred when raters valued different task elements or evaluation elements; raters who emphasized contextual meaning tended to be lenient, while raters who emphasized specific elements and interpreted responses analytically tended to be severe. Type 2 disagreements appeared mostly for student responses at the boundaries of the scoring rubric, and raters were observed to score such responses analytically, for example by counting the number of correct statements. Type 3 disagreements arose from rater mistakes, mainly overlooking parts of a response that met the scoring criteria because the rater did not consider them important. To control such disagreement, raters need to discuss, before and during scoring, which task elements and evaluation elements they consider important. Even in holistic scoring, criteria for discriminating borderline responses should be provided along with the rubric for each level. Raters can reduce disagreement by recognizing whether their own scoring tends to be severe or lenient and by judging borderline responses carefully; cross-scoring by multiple raters is needed to reduce errors caused by mistakes. Further research is needed on how and why raters interpret the same scoring criteria differently.
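The rater variance share reported in the abstract above comes from an ANOVA-style decomposition. A minimal sketch on hypothetical holistic scores (not the study's data) reports each facet's share of the total sum of squares:

```python
# Two-way (examinee x rater) sum-of-squares decomposition on hypothetical
# holistic scores; returns the share of total variation attributable to
# examinees, raters, and the residual. Not the study's data or software.

def variance_shares(x):
    """x: rows = examinees, cols = raters (one score per cell)."""
    n, k = len(x), len(x[0])
    grand = sum(v for row in x for v in row) / (n * k)
    ss_p = k * sum((sum(row) / k - grand) ** 2 for row in x)
    ss_r = n * sum((sum(row[j] for row in x) / n - grand) ** 2
                   for j in range(k))
    ss_t = sum((v - grand) ** 2 for row in x for v in row)
    ss_e = ss_t - ss_p - ss_r
    return ss_p / ss_t, ss_r / ss_t, ss_e / ss_t

scores = [          # 4 examinees x 4 raters; raters 1-2 lenient, 3-4 severe
    [4, 4, 3, 3],
    [3, 3, 2, 2],
    [5, 4, 4, 3],
    [2, 2, 1, 1],
]
person, rater, resid = variance_shares(scores)
```

A full generalizability study converts such sums of squares into estimated variance components via expected mean squares; the raw shares here only illustrate the direction of the decomposition.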

A Joint Agreement Measure Between Multiple Raters and One Standard

  • Um, Yong-Hwan
    • Journal of the Korean Data and Information Science Society / Vol. 16, No. 3 / pp.621-628 / 2005
  • This article addresses the problem of measuring joint agreement between multiple raters and a standard set of responses. A new agreement measure based on Um's approach is proposed; it is applicable to multivariate interval responses. The proposed measure is compared with corresponding agreement measures using a hypothetical data set.
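For context, the two-rater baseline that agreement measures like this one extend is Cohen's kappa, which corrects observed agreement for chance agreement. A minimal sketch with hypothetical nominal labels:

```python
# Cohen's kappa for two raters labeling the same items (hypothetical
# labels). kappa = (p_observed - p_chance) / (1 - p_chance).

from collections import Counter

def cohens_kappa(a, b):
    assert len(a) == len(b)
    n = len(a)
    po = sum(1 for x, y in zip(a, b) if x == y) / n        # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)   # chance agreement
    return (po - pe) / (1 - pe)

rater1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(round(cohens_kappa(rater1, rater2), 3))
```

Extensions like the article's handle multiple raters, a standard, and multivariate interval data, which plain kappa does not.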


A Measure of Agreement for Multivariate Interval Observations by Different Sets of Raters

  • Um, Yong-Hwan
    • Journal of the Korean Data and Information Science Society / Vol. 15, No. 4 / pp.957-963 / 2004
  • A new agreement measure for multivariate interval data rated by different sets of raters is proposed. The proposed approach builds on Um's multivariate extension of Cohen's kappa. The measure is compared with corresponding earlier measures based on Berry and Mielke's approach and on Janson and Olsson's approach, respectively. Its application is exemplified using a hypothetical data set.


한국인 영어 발음의 좋음과 나쁨 인지 평가에 영향을 미치는 초분절 매개변수 연구 (A study on the Suprasegmental Parameters Exerting an Effect on the Judgment of Goodness or Badness on Korean-spoken English)

  • 강석한;이석재
    • 말소리와 음성과학 / Vol. 3, No. 2 / pp.3-10 / 2011
  • This study investigates the role of suprasegmental features in the intelligibility of Korean-spoken English as judged good or bad by Korean and English raters. It was hypothesized that Korean raters would evaluate differently from native English raters and that the effect might vary with the type of suprasegmental factor. Four Korean and four native English raters took part in evaluating 14 Korean subjects' English speech; the subjects read a given paragraph. The results show that the two groups evaluated 'intelligibility' differently and that the difference stems from their perception of L2 English suprasegmentals.


The Reliability of a Pediatric Balance Scale Based on the Raters' Clinical Work Experience and Test Experience

  • Kim, Gi-Won;Ko, Joo-Yeon;Baek, Soon-Gi
    • The Journal of Korean Physical Therapy / Vol. 22, No. 6 / pp.35-42 / 2010
  • Purpose: To investigate the rater reliability of the Pediatric Balance Scale (PBS) for children with cerebral palsy, and to investigate possible differences among raters according to their clinical work experience and testing experience. Methods: Participants were 18 children with spastic cerebral palsy who could walk. They were assessed by four pediatric physical therapists: two with ten years of clinical work experience and two with less than one year. The children's performance of the PBS items was videotaped, and the raters watched the tapes and evaluated each child twice. Rater reliability was analyzed using the intraclass correlation coefficient (ICC), and differences between experienced and novice raters were analyzed using a paired t-test; the significance level was set at 0.05. Results: Total PBS scores averaged 45.78~48.00 on the first test and 45.72~47.67 on the second. Intra-rater reliability was very high (ICC=0.89~0.99), and repeated measurements agreed closely (p>0.05). Inter-rater reliability was also high (ICC=0.83~0.84), but there was some difference in agreement (p<0.05). The experienced raters' reliability and agreement were higher than the novices', with significant differences between the two groups (p<0.05). Conclusion: Inter-rater and intra-rater reliability of the PBS is very high; however, rater reliability differed with clinical work experience and testing experience. When testing pediatric patients with the PBS, the rater's clinical and test experience may affect the results.
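ICC values like those above can be computed for any subjects-by-raters score matrix; a common choice for absolute agreement between interchangeable raters is ICC(2,1). The sketch below uses hypothetical PBS-like totals, not the study's data:

```python
# Generic ICC(2,1): two-way random effects, absolute agreement, single
# rater, for a subjects-by-raters matrix. Hypothetical PBS-like totals.

def icc2_1(x):
    n, k = len(x), len(x[0])            # subjects, raters
    row_means = [sum(r) / k for r in x]
    col_means = [sum(r[j] for r in x) / n for j in range(k)]
    grand = sum(row_means) / n
    ss_total = sum((v - grand) ** 2 for r in x for v in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters' total PBS scores for six children (hypothetical).
data = [[46, 45], [48, 47], [40, 41], [44, 44], [50, 49], [38, 39]]
print(round(icc2_1(data), 2))
```

Other ICC forms (e.g., consistency rather than absolute agreement, or averaged raters) use the same mean squares combined differently, so the form should always be reported alongside the value.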