• Title/Summary/Keyword: Raters


Application of the Categorical Data Model for Enhancing the Reliability of the Raters' Ratings and Score Adjustment of the Essay Type Test (논문형 고사 평가에서 평가치 조정과 평가원의 신뢰도 향상에 유효한 CDM 모형의 응용)

  • 홍석강
    • The Mathematical Education
    • /
    • v.38 no.2
    • /
    • pp.165-172
    • /
    • 1999
  • … ${}_e^2$, which resulted from those three sources of imperfection. In particular, to eliminate differences in severity among the many raters, the randomization procedure for the rater sample was very effective in enhancing the reliability of ratings with comparatively small groups of examinees and raters. We also introduced new rating methods: a 2-step diagnostic procedure to check the size and stability of rater reliability, and a score adjustment method to estimate the optimal mean values in rating the examinees. (A small illustrative sketch of such a severity adjustment follows this entry.)

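The abstract above mentions eliminating differences in rater severity and a score adjustment method that estimates optimal mean values for the examinees. The sketch below is a minimal illustration of one common form of severity adjustment, centering each rater's scores on the grand mean; the function name and the mean-deviation estimate of severity are assumptions for illustration, not the paper's CDM procedure.

```python
import numpy as np

def severity_adjusted_scores(scores: np.ndarray) -> np.ndarray:
    """Adjust an examinee-by-rater score matrix for rater severity.

    scores[i, j] is the raw score rater j gave examinee i.
    A rater's severity is estimated here as the deviation of that rater's
    mean from the grand mean; it is subtracted from the rater's column so
    that all raters are centered on a common scale.
    """
    grand_mean = scores.mean()
    rater_severity = scores.mean(axis=0) - grand_mean  # one value per rater
    return scores - rater_severity                     # broadcast over examinees

# Toy example: 4 examinees rated by 3 raters of unequal severity.
raw = np.array([
    [78.0, 72.0, 81.0],
    [85.0, 80.0, 88.0],
    [69.0, 63.0, 72.0],
    [90.0, 84.0, 93.0],
])
adjusted = severity_adjusted_scores(raw)
print(adjusted.mean(axis=0))  # rater means now coincide with the grand mean
```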

A FACETS Analysis of Rater Characteristics and Rater Bias in Measuring L2 Writing Performance

  • Shin, You-Sun
    • English Language & Literature Teaching
    • /
    • v.16 no.1
    • /
    • pp.123-142
    • /
    • 2009
  • The present study used multi-faceted Rasch measurement to explore the characteristics and bias patterns of non-native raters when they scored L2 writing tasks. Three raters scored 254 writing tasks written by Korean university students on two topics adapted from the TOEFL Test of Written English (TWE). The written products were assessed using a five-category rating scale (Content, Organization, Language in Use, Grammar, and Mechanics). The raters showed differences in severity only with regard to the rating categories, not the task types. Overall, the raters scored Grammar most harshly and Organization most leniently. The results also indicated several bias patterns with regard to the rating categories and task types. In the rater-task bias interactions, each rater showed recurring bias patterns between the two writing tasks. Analysis of the rater-category bias interactions showed that the three raters exhibited biased patterns across all the rating categories, though they were relatively consistent in their ratings. The study has implications for the importance of rater training and task selection in L2 writing assessment. (The form of the many-facet Rasch model is sketched after this entry.)

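Multi-faceted (many-facet) Rasch measurement, used in the study above, models the log-odds of success as a sum of facet parameters, typically examinee ability minus task difficulty minus rater severity. The dichotomous sketch below only illustrates that functional form; actual FACETS analyses use polytomous rating-scale or partial-credit formulations and estimate the parameters from the data.

```python
import math

def mfrm_probability(ability: float, task_difficulty: float, rater_severity: float) -> float:
    """Probability of success under a simple many-facet Rasch model.

    logit = ability - task difficulty - rater severity (all in logits).
    """
    logit = ability - task_difficulty - rater_severity
    return 1.0 / (1.0 + math.exp(-logit))

# A severe rater (severity +0.8) lowers the expected score relative to
# a lenient rater (severity -0.8) for the same examinee and task.
print(mfrm_probability(ability=1.0, task_difficulty=0.2, rater_severity=0.8))
print(mfrm_probability(ability=1.0, task_difficulty=0.2, rater_severity=-0.8))
```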

A Study on the Features of Writing Rater in TOPIK Writing Assessment (한국어능력시험(TOPIK) 쓰기 평가의 채점 특성 연구)

  • Ahn, Su-hyun;Kim, Chung-sook
    • Journal of Korean language education
    • /
    • v.28 no.1
    • /
    • pp.173-196
    • /
    • 2017
  • Writing is a subjective and performative activity, and writing ability is multi-faceted and composite. To understand examinees' writing ability accurately and provide meaningful writing scores, raters must first have assessment competency. This study is therefore significant as fundamental research on rater characteristics in the TOPIK writing assessment. 150 scripts from examinees of the 47th TOPIK were selected randomly and rated independently by 20 raters. The many-facet Rasch model was used to generate individualized feedback reports on each rater's relative severity and consistency with respect to particular categories of the rating scale; the analysis was carried out with the FACETS ver. 3.71.4 program. Overfit and misfit raters had considerable difficulty distinguishing between assessment factors and interpreting the criteria. Writing raters appear to be quite confused when interpreting the assessment criteria, and overfit and misfit raters in particular interpret the criteria arbitrarily. The main cause of overfit and misfit is confusion about assessment factors and criteria when finding a basis for scoring. Therefore, more rater training and research based on these writing assessment characteristics is needed. This study is significant in that it comprehensively examined the assessment characteristics of writing raters and visually confirmed the patterns of assessment error in writing assessment.
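
The overfit and misfit raters mentioned above are usually flagged with infit/outfit mean-square fit statistics computed from the residuals between observed and model-expected scores. The sketch below shows those formulas for the simple dichotomous case, assuming the model-expected probabilities are already available; the polytomous formulation used by FACETS is analogous but more involved.

```python
import numpy as np

def rasch_fit_statistics(observed: np.ndarray, expected: np.ndarray):
    """Outfit and infit mean squares for dichotomous Rasch residuals.

    observed: 0/1 responses attributed to one rater.
    expected: model-expected probabilities for the same responses.
    """
    variance = expected * (1.0 - expected)                 # Bernoulli variance per response
    z_squared = (observed - expected) ** 2 / variance      # squared standardized residuals
    outfit = z_squared.mean()                              # unweighted mean square
    infit = ((observed - expected) ** 2).sum() / variance.sum()  # information-weighted
    return outfit, infit

obs = np.array([1, 0, 1, 1, 0, 1])
exp = np.array([0.7, 0.4, 0.8, 0.6, 0.3, 0.9])
print(rasch_fit_statistics(obs, exp))  # values near 1.0 indicate acceptable fit
```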

An evaluation of Korean students' pronunciation of an English passage by a speech recognition application and two human raters

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.12 no.4
    • /
    • pp.19-25
    • /
    • 2020
  • This study examined thirty-one Korean students' pronunciation of an English passage using a speech recognition application, Speechnotes, and two Canadian raters' evaluations of their speech according to the International English Language Testing System (IELTS) band criteria, to assess the possibility of using the application as a teaching aid for pronunciation education. The results showed that the grand average percentage of correctly recognized words was 77.7%. From this moderate recognition rate, the participants' pronunciation level was construed as intermediate or higher. The recognition rate varied depending on the composition of content words and function words in each given sentence. Frequency counts of unrecognized words by group level and word type revealed the participants' typical pronunciation problems, including fricatives and nasals. The IELTS bands chosen by the two native raters for the rainbow passage had a moderately high correlation with each other. A moderate correlation was found between the number of correctly recognized content words and the raters' bands, while an almost negligible correlation was found between the function words and the raters' bands. From these results, the author concludes that the speech recognition application could serve as a partial aid for diagnosing an individual's or a group's pronunciation problems, but further studies are still needed to bring its output into line with human raters' judgments.
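
The recognition rates above are percentages of passage words that the recognizer transcribed correctly. The sketch below shows one naive way to obtain such a rate; the whitespace tokenization and bag-of-words matching are simplifying assumptions, since the paper does not describe its exact alignment procedure, and the example sentences are made up.

```python
def recognition_rate(reference: str, recognized: str) -> float:
    """Share of reference words that also appear in the recognized transcript.

    This naive bag-of-words comparison ignores word order; a real study
    would align the two transcripts word by word.
    """
    ref_words = reference.lower().split()
    remaining = recognized.lower().split()
    hits = 0
    for word in ref_words:
        if word in remaining:
            hits += 1
            remaining.remove(word)  # each recognized token can match only once
    return 100.0 * hits / len(ref_words)

reference = "when the sunlight strikes raindrops in the air they act as a prism"
recognized = "when the sunlight strikes rain drops in the air they act as prism"
print(f"{recognition_rate(reference, recognized):.1f}% of words recognized")
```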

A study on evaluator factors affecting physician-patient interaction scores in clinical performance examinations: a single medical school experience

  • Park, Young Soon;Chun, Kyung Hee;Lee, Kyeong Soo;Lee, Young Hwan
    • Journal of Yeungnam Medical Science
    • /
    • v.38 no.2
    • /
    • pp.118-126
    • /
    • 2021
  • Background: This study is an analysis of evaluator factors affecting physician-patient interaction (PPI) scores in the clinical performance examination (CPX). The purpose of this study was to investigate possible ways to increase the reliability of the CPX evaluation. Methods: The six-item Yeungnam University Scale (YUS), the four-item analytic global rating scale (AGRS), and a one-item holistic rating scale (HRS) were used to evaluate student performance in PPI. A total of 72 fourth-year students from Yeungnam University College of Medicine in Korea participated in the evaluation, with 32 faculty and 16 standardized patient (SP) raters. The study then examined differences in scores by type of scale, rater type (SP vs. faculty), faculty specialty, evaluation experience, and level of fatigue as time passed. Results: There were significant differences between faculty and SP scores on all three scales and a significant correlation among raters' scores. Scores given by raters on items related to their own specialty were lower than scores given by raters on items outside their specialty. On the YUS and AGRS, there were significant differences according to the faculty raters' evaluation experience; scores from raters with three to ten previous evaluation experiences were lower than the others' scores. There were also significant differences among SP raters on all scales. The correlation between the YUS and the AGRS/HRS declined significantly with the length of evaluation time. Conclusion: In the CPX, PPI score reliability was found to be significantly affected by evaluator factors as well as by the type of scale.
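
One comparison reported above, faculty versus standardized-patient (SP) scores, can be illustrated with a paired test when each encounter receives one score from each rater type. The sketch below applies scipy's paired t-test to made-up PPI scores purely as an illustration; the actual design (32 faculty and 16 SP raters across 72 students and several scales) is more complex than a single paired sample.

```python
import numpy as np
from scipy import stats

# Hypothetical PPI scores for the same ten encounters,
# one column per rater type (faculty vs. standardized patient).
faculty_scores = np.array([4.2, 3.8, 4.5, 3.9, 4.1, 4.4, 3.6, 4.0, 4.3, 3.7])
sp_scores      = np.array([4.6, 4.1, 4.7, 4.3, 4.5, 4.6, 4.0, 4.2, 4.8, 4.1])

t_stat, p_value = stats.ttest_rel(faculty_scores, sp_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```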

An Analysis on Rater Error in Holistic Scoring for Performance Assessments of Middle School Students' Science Investigation Activities (중학생 과학탐구활동 수행평가 시 총체적 채점에서 나타나는 채점자간 불일치 유형 분석)

  • Kim, Hyung-Jun;Yoo, June-Hee
    • Journal of The Korean Association For Science Education
    • /
    • v.32 no.1
    • /
    • pp.160-181
    • /
    • 2012
  • The purpose of this study is to understand raters' errors in rating performance assessments of science inquiry. Sixty middle school students performed a scientific inquiry activity about sound propagation, and four trained raters rated their activity sheets. In the generalizability analysis for the person × task × rater design, the variance components for rater, rater-by-person, and rater-by-task together accounted for about 25% of the total variance. Two of the four raters were more severe than the other two, and their severities were stable. All four raters' ratings agreed with one another in 51 of the 240 cases. Through raters' conferences, the rater error types for the 189 disagreed cases were identified as one of three types: different salience, severity, and overlooking. Error type 1, different salience, accounted for 38% of the disagreed cases; the salient tasks and salient assessment components differed among the raters. Error type 2, severity, accounted for 25%, and error type 3, overlooking, for 31%. Error type 2 seemed to occur when students' responses were on the border between two levels. Error type 3 seemed to occur when a rater overlooked some important part of a student's response because the rater was immersed in his or her own salience. To reduce these rater errors, raters' conferences on the salience of tasks and assessment components are needed before holistic scoring of complex tasks. Raters also need to recognize their own severity and make an effort to keep it consistent, and multiple raters are needed to prevent important responses from being overlooked. Further studies on raters' tendencies and on the sources of their different interpretations of the rubric are suggested.
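
The generalizability analysis above attributes roughly 25% of the score variance to rater-related components (rater, rater-by-person, rater-by-task). Given estimated variance components for a fully crossed person × task × rater design, the share of each facet is simple arithmetic, sketched below with made-up component values rather than the study's estimates.

```python
# Hypothetical variance component estimates for a p x t x r design.
components = {
    "person": 1.20,
    "task": 0.30,
    "rater": 0.25,
    "person_x_task": 0.40,
    "person_x_rater": 0.35,
    "task_x_rater": 0.10,
    "residual": 0.60,
}

total = sum(components.values())
rater_related = ("rater", "person_x_rater", "task_x_rater")
rater_share = sum(components[name] for name in rater_related) / total

for name, value in components.items():
    print(f"{name:15s} {100 * value / total:5.1f}%")
print(f"rater-related facets together: {100 * rater_share:.1f}%")
```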

A Joint Agreement Measure Between Multiple Raters and One Standard

  • Um, Yong-Hwan
    • Journal of the Korean Data and Information Science Society
    • /
    • v.16 no.3
    • /
    • pp.621-628
    • /
    • 2005
  • This article addresses the problem of measuring joint agreement between multiple raters and a standard set of responses. A new agreement measure based on Um's approach is proposed; it is applicable to multivariate interval responses. The proposed measure is compared with other corresponding agreement measures using a hypothetical data set. (The classical two-rater kappa on which such chance-corrected measures build is sketched after this entry.)

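As a point of reference for the agreement-measure paper above, the sketch below computes the classical two-rater Cohen's kappa, which compares observed agreement against the agreement expected by chance. It is only the familiar baseline that Um's multivariate extensions build on, not the proposed measure itself, and the ratings are made up.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Classical two-rater Cohen's kappa for categorical ratings."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2
    return (observed - expected) / (1.0 - expected)

rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.3f}")
```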

A Measure of Agreement for Multivariate Interval Observations by Different Sets of Raters

  • Um, Yong-Hwan
    • Journal of the Korean Data and Information Science Society
    • /
    • v.15 no.4
    • /
    • pp.957-963
    • /
    • 2004
  • A new agreement measure for multivariate interval data rated by different sets of raters is proposed. The proposed approach builds on Um's multivariate extension of Cohen's kappa. The proposed measure is compared with corresponding earlier measures based on Berry and Mielke's approach and on Janson and Olsson's approach, respectively. Application of the proposed measure is exemplified using a hypothetical data set.


A study on the Suprasegmental Parameters Exerting an Effect on the Judgment of Goodness or Badness on Korean-spoken English (한국인 영어 발음의 좋음과 나쁨 인지 평가에 영향을 미치는 초분절 매개변수 연구)

  • Kang, Seok-Han;Rhee, Seok-Chae
    • Phonetics and Speech Sciences
    • /
    • v.3 no.2
    • /
    • pp.3-10
    • /
    • 2011
  • This study investigates the role of suprasegmental features in the intelligibility of Korean-spoken English as judged good or bad by Korean and English raters. It was hypothesized that Korean raters would evaluate the speech differently from native English raters and that the effect might vary depending on the type of suprasegmental factor. Four Korean and four native English raters took part in the evaluation of 14 Korean subjects' spoken English; the subjects read a given paragraph. The results show that the two groups evaluate 'intelligibility' differently and that the difference comes from their perception of L2 English suprasegmentals.


The Reliability of a Pediatric Balance Scale Based on the Raters' Clinical Work Experience and Test Experience

  • Kim, Gi-Won;Ko, Joo-Yeon;Baek, Soon-Gi
    • The Journal of Korean Physical Therapy
    • /
    • v.22 no.6
    • /
    • pp.35-42
    • /
    • 2010
  • Purpose: To investigate the rater reliability of the Pediatric Balance Scale (PBS) for children with cerebral palsy, and to investigate possible differences among raters according to their clinical work experience and testing experience. Methods: Study participants were 18 children with spastic cerebral palsy who could walk. They were instructed by pediatric physical therapists, two of whom had ten years of clinical work experience and two of whom had less than one year. The children's performance of the PBS items was videotaped, and the raters watched the tapes and evaluated each child twice. Rater reliability was analyzed using the intraclass correlation coefficient (ICC), and differences between experienced and novice raters were analyzed using a paired t-test, with the significance level set at 0.05. Results: The total PBS scores averaged 45.78~48.00 on the first test and 45.72~47.67 on the second. Intra-rater reliability was very high (ICC=0.89~0.99), and the agreement between repeated measurements was high (p>0.05). Inter-rater reliability was also high (ICC=0.83~0.84), but there was some difference in agreement (p<0.05). The experienced raters' reliability and agreement were higher than those of the novices, and there were differences in reliability and agreement between experienced and novice raters (p<0.05). Conclusion: Inter-rater and intra-rater reliability are very high; however, rater reliability differed depending on clinical work experience and testing experience. When testing pediatric patients with the PBS, the rater's clinical and testing experience may affect the test results.
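
The PBS study above reports reliability as intraclass correlation coefficients. A common choice for a fully crossed children × raters layout is ICC(2,1), computed from two-way ANOVA mean squares; the sketch below implements that textbook formula on made-up scores, and the specific ICC form is an assumption since the abstract does not state which one was used.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random, single-measure ICC(2,1) for a subjects x raters matrix."""
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)

    ss_subjects = k * ((subject_means - grand_mean) ** 2).sum()
    ss_raters = n * ((rater_means - grand_mean) ** 2).sum()
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_subjects - ss_raters

    ms_subjects = ss_subjects / (n - 1)
    ms_raters = ss_raters / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_subjects - ms_error) / (
        ms_subjects + (k - 1) * ms_error + k * (ms_raters - ms_error) / n
    )

# Hypothetical PBS total scores for 6 children rated by 4 raters.
pbs = np.array([
    [48, 47, 48, 46],
    [42, 41, 43, 41],
    [50, 49, 50, 48],
    [38, 37, 39, 36],
    [45, 44, 46, 44],
    [52, 51, 52, 50],
], dtype=float)
print(f"ICC(2,1) = {icc_2_1(pbs):.3f}")
```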