• Title/Summary/Keyword: 채점자 교육 (rater training)


Development and Application of an Online Scoring System for Constructed Response Items (서답형 문항 온라인 채점 시스템의 개발과 적용)

  • Cho, Jimin; Kim, Kyunghoon
    • The Journal of Korean Association of Computer Education, v.17 no.2, pp.39-51, 2014
  • In high-stakes tests for large groups, how efficiently students' responses are distributed to raters and how systematically the scoring procedures are managed are important to the overall success of the testing program. In scoring constructed response items, establishing measures of rater reliability requires knowing whether each rater judges responses consistently and whether those judgments are similar across raters. The purpose of this study was to design, develop, and pilot-test an online scoring system for constructed response items administered in a paper-and-pencil test to large groups, and to verify the system's reliability. We show that, compared with conventional scoring methods, the online system provided information on the scoring process of individual raters, including intra-rater and inter-rater consistency, and we found it especially effective for obtaining reliable and valid scores for constructed response items. (A minimal sketch of such consistency checks follows below.)

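The intra-rater and inter-rater consistency mentioned in the abstract can be illustrated with a small, self-contained sketch; the rater names, scores, and the choice of exact agreement plus Pearson correlation below are illustrative assumptions, not details from the paper.

```python
from itertools import combinations
from statistics import correlation  # Pearson correlation, Python 3.10+

def exact_agreement(a, b):
    """Proportion of responses to which two score lists assign the same score."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical scores: each rater scored the same 8 responses twice
# (the second pass is used for intra-rater consistency).
scores = {
    "rater_A": {"pass1": [2, 3, 1, 0, 2, 3, 1, 2], "pass2": [2, 3, 1, 0, 2, 2, 1, 2]},
    "rater_B": {"pass1": [2, 2, 1, 0, 2, 3, 0, 2], "pass2": [2, 2, 1, 1, 2, 3, 0, 2]},
}

# Intra-rater consistency: agreement of each rater with her/his own rescoring.
for name, passes in scores.items():
    print(name, "intra-rater agreement:", exact_agreement(passes["pass1"], passes["pass2"]))

# Inter-rater consistency: agreement and correlation between rater pairs (first pass).
for (n1, s1), (n2, s2) in combinations(((n, p["pass1"]) for n, p in scores.items()), 2):
    print(n1, "vs", n2,
          "agreement:", exact_agreement(s1, s2),
          "correlation:", round(correlation(s1, s2), 3))
```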

A Study on design of The Internet-based scoring system for constructed responses (서답형 문항의 인터넷 기반 채점시스템 설계 연구)

  • Cho, Ji-Min; Kim, Kyung-Hoon
    • The Journal of Korean Association of Computer Education, v.10 no.2, pp.89-100, 2007
  • Scoring constructed responses in large-scale assessments requires great effort and time to reduce the various types of error that arise in paper-based training and scoring. To eliminate these complexities and problems, many countries, including the U.S.A. and England, have already adopted online scoring systems; in Korea, however, there have been few studies on developing a scoring system for constructed response items. The purpose of this study is to develop a basic design for an Internet-based scoring system for constructed responses. The study proposes algorithms for assigning scorers to responses, methods for monitoring reliability, and related components. Such a system can support reliable, fast scoring by monitoring scorer consistency through ongoing reliability checks and by assessing the quality of scorer decisions through frequent checking procedures. (A sketch of one possible assignment-and-checking scheme appears below.)

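The abstract mentions algorithms for assigning scorers to responses and ongoing reliability checks. The sketch below shows one plausible scheme, not the paper's actual design: responses are dealt out round-robin and a fraction of them is quietly routed to a second scorer so agreement can be monitored; the names and the 20% double-scoring rate are assumptions.

```python
import random
from collections import defaultdict

def assign_responses(response_ids, scorers, double_rate=0.2, seed=0):
    """Round-robin assignment, with a share of responses double-scored
    by a second scorer for ongoing reliability checks (illustrative only)."""
    rng = random.Random(seed)
    queues = defaultdict(list)
    for i, rid in enumerate(response_ids):
        primary = scorers[i % len(scorers)]
        queues[primary].append(rid)
        if rng.random() < double_rate:  # seed a reliability-check copy
            second = rng.choice([s for s in scorers if s != primary])
            queues[second].append(rid)
    return queues

queues = assign_responses([f"resp_{n:03d}" for n in range(20)],
                          ["scorer_1", "scorer_2", "scorer_3"])
for scorer, batch in queues.items():
    print(scorer, len(batch), batch[:5])
```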

A Study on Validity, Reliability and Practicality of a Concept Map as an Assessment Tool of Biology Concept Understandings (생물 개념 이해의 평가 도구로서 개념도의 타당도, 신뢰도 그리고 현실 적용 가능성에 대한 연구)

  • Cho, Jung-II; Kim, Jung
    • Journal of The Korean Association For Science Education, v.22 no.2, pp.398-409, 2002
  • The purpose of this study was to investigate the validity, reliability, and practicality of concept maps as an assessment tool in the context of biology concept learning. Forty undergraduate students produced concept maps, which were scored by preservice science teachers using one of three scoring methods, developed by Burry-Stock, Novak & Gowin, and McClure & Bell, respectively, with two scorers assigned to each method. Regarding validity, two of the three methods were found to be highly valid, while Burry-Stock's method showed little validity. Regarding internal consistency, every pair of scorers showed considerably high consistency, as judged from the high correlation coefficients between the two scorers for each method. On average, scoring a map took between 1.13 and 3.70 minutes, indicating that concept mapping can be used in school classrooms with limited time and personnel. These findings suggest that concept mapping can be an appropriate tool for assessing understanding of biology concepts. (A sketch of the commonly cited Novak & Gowin scoring scheme follows below.)

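The Novak & Gowin scheme named above is commonly described as awarding points for valid propositions, hierarchy levels, cross-links, and examples. The sketch below encodes that common description with the usually quoted weights; both the weights and the counts are illustrative and are not taken from this paper.

```python
from dataclasses import dataclass

@dataclass
class ConceptMapCounts:
    valid_propositions: int
    hierarchy_levels: int
    valid_cross_links: int
    examples: int

def novak_gowin_score(c: ConceptMapCounts,
                      w_prop=1, w_level=5, w_cross=10, w_example=1) -> int:
    """Weighted sum of map features; the weights are the commonly cited
    Novak & Gowin values and can be adjusted by the rater."""
    return (w_prop * c.valid_propositions
            + w_level * c.hierarchy_levels
            + w_cross * c.valid_cross_links
            + w_example * c.examples)

print(novak_gowin_score(ConceptMapCounts(12, 3, 2, 4)))  # 12 + 15 + 20 + 4 = 51
```
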
Developing Scoring Rubric and the Reliability of Elementary Science Portfolio Assessment (초등 과학과 포트폴리오의 채점기준 개발과 신뢰도 검증)

  • Kim, Chan-Jong; Choi, Mi-Aee
    • Journal of The Korean Association For Science Education, v.22 no.1, pp.176-189, 2002
  • The purpose of the study is to develop the major types of scoring rubrics for a portfolio system and to estimate the reliability of the rubrics developed. The portfolio system was developed by the Science Education Laboratory of Chongju National University of Education in the summer of 2000 and is based on Unit 2, The Layer and Fossil, and Unit 4, Heat and Change of Objects, at the fourth-grade level. Four types of scoring rubrics were developed: holistic-general, holistic-specific, analytical-general, and analytical-specific. Students' portfolios were scored, and inter-rater and intra-rater reliability were calculated. To estimate inter-rater reliability, three elementary teachers per rubric (12 in total) scored 12 students' portfolios; the teachers who used the analytical-specific rubric scored only six portfolios because it took much more time than the other rubrics. To estimate intra-rater reliability, a second scoring was carried out by two raters per rubric after two and a half months. The results show that the holistic-general rubric had high inter-rater and moderate intra-rater reliability, the holistic-specific rubric moderate inter- and intra-rater reliability, the analytical-general rubric high inter-rater and moderate intra-rater reliability, and the analytical-specific rubric high inter- and intra-rater reliability. The raters felt that the general rubrics were practical but not sufficiently clear, whereas the specific rubrics provided clearer scoring guidelines but required more time and effort to develop. The analytical-specific rubric required more than twice as much time per portfolio and proved highly reliable but less practical. (A sketch of one common inter-rater reliability statistic for ordinal rubric levels follows below.)

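One standard way to quantify the inter-rater reliability reported for such rubrics is a quadratically weighted Cohen's kappa over the ordinal rubric levels. The sketch below is a generic implementation with made-up scores; the paper does not state which statistic it used, so this is only an illustration.

```python
import numpy as np

def weighted_kappa(r1, r2, n_levels, weights="quadratic"):
    """Cohen's kappa with quadratic disagreement weights for ordinal rubric scores."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    observed = np.zeros((n_levels, n_levels))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    observed /= observed.sum()
    # Expected joint proportions under independence of the two raters' marginals.
    expected = np.outer(np.bincount(r1, minlength=n_levels),
                        np.bincount(r2, minlength=n_levels)) / len(r1) ** 2
    i, j = np.indices((n_levels, n_levels))
    w = ((i - j) ** 2) / (n_levels - 1) ** 2 if weights == "quadratic" else (i != j).astype(float)
    return 1 - (w * observed).sum() / (w * expected).sum()

# Hypothetical rubric levels (0-3) given by two raters to 12 portfolios.
rater1 = [3, 2, 2, 1, 0, 3, 2, 1, 1, 2, 3, 0]
rater2 = [3, 2, 1, 1, 0, 3, 2, 2, 1, 2, 2, 0]
print(round(weighted_kappa(rater1, rater2, n_levels=4), 3))
```
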
An Analysis on Rater Error in Holistic Scoring for Performance Assessments of Middle School Students' Science Investigation Activities (중학생 과학탐구활동 수행평가 시 총체적 채점에서 나타나는 채점자간 불일치 유형 분석)

  • Kim, Hyung-Jun; Yoo, June-Hee
    • Journal of The Korean Association For Science Education, v.32 no.1, pp.160-181, 2012
  • The purpose of this study is to understand raters' errors when rating performance assessments of science inquiry. Sixty middle school students carried out a scientific inquiry about sound propagation, and four trained raters rated their activity sheets. In the generalizability analysis for the person × task × rater design, the variance components involving raters (rater, rater-by-person, and rater-by-task) together accounted for about 25% of the total variance. Two of the four raters were more severe than the other two, and their severities were stable. All four raters agreed with one another in only 51 of the 240 cases. Through rater conferences, the rater errors in the 189 disagreed cases were identified as one of three types: different salience, severity, and overlooking. Type 1, different salience, accounted for 38% of the disagreed cases; the tasks and assessment components that raters found salient differed among them. Type 2, severity, accounted for 25% and seemed to occur when students' responses fell on the border between two levels. Type 3, overlooking, accounted for 31% and seemed to occur when a rater missed an important part of a student's response because she or he was immersed in her or his own salient features. To reduce these errors, raters should confer on the salience of tasks and assessment components before holistic scoring of complex tasks, and each rater needs to recognize and maintain a consistent severity; multiple raters are needed to prevent overlooking errors. Further studies on raters' tendencies and on the sources of different interpretations of the rubric are suggested. (A sketch of the generalizability coefficient computed from such variance components follows below.)

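For a fully crossed person × task × rater G-study such as the one described, the relative generalizability coefficient follows directly from the estimated variance components: G = σ²_p / (σ²_p + σ²_pt/n_t + σ²_pr/n_r + σ²_ptr,e/(n_t·n_r)). The component values below are hypothetical, not the study's estimates; the sketch only shows how rater-related components and the number of raters affect G.

```python
def g_coefficient(var, n_tasks, n_raters):
    """Relative G coefficient for a crossed person x task x rater design.
    `var` holds estimated variance components keyed by effect."""
    rel_error = (var["pt"] / n_tasks
                 + var["pr"] / n_raters
                 + var["ptr,e"] / (n_tasks * n_raters))
    return var["p"] / (var["p"] + rel_error)

# Hypothetical variance components (not the study's estimates).
components = {"p": 0.40, "t": 0.05, "r": 0.10, "pt": 0.08, "pr": 0.10,
              "tr": 0.02, "ptr,e": 0.15}
for n_raters in (1, 2, 4):
    print(n_raters, "raters ->",
          round(g_coefficient(components, n_tasks=3, n_raters=n_raters), 3))
```
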
A Study on the Features of Writing Rater in TOPIK Writing Assessment (한국어능력시험(TOPIK) 쓰기 평가의 채점 특성 연구)

  • Ahn, Su-hyun; Kim, Chung-sook
    • Journal of Korean language education, v.28 no.1, pp.173-196, 2017
  • Writing is a subjective, performance-based activity, and writing ability is multifaceted and compound. To assess examinees' writing ability accurately and provide meaningful writing scores, raters must first be competent in assessment. This study is therefore significant as fundamental research on rater characteristics in the TOPIK writing assessment. 150 scripts from examinees of the 47th TOPIK were selected randomly and rated independently by 20 raters. The many-facet Rasch model was used to generate individualized feedback reports on each rater's relative severity and consistency with respect to particular categories of the rating scale; the analysis was performed with the FACETS ver. 3.71.4 program. Overfit and misfit raters had difficulty distinguishing between assessment factors and interpreting the criteria; writing raters in general showed considerable confusion when interpreting the assessment criteria, and overfit and misfit raters in particular interpreted them arbitrarily. The main cause of overfit and misfit was confusion about assessment factors and criteria when looking for a basis for scoring. More rater training and research grounded in these characteristics of writing assessment are therefore needed. The study is significant in that it comprehensively examined the assessment characteristics of writing raters and visually confirmed the patterns of assessment error. (A sketch of the many-facet Rasch model underlying FACETS follows below.)

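The many-facet Rasch model behind FACETS models the log-odds of adjacent rating categories as person ability minus item difficulty minus rater severity minus a category threshold. The sketch below computes category probabilities from that rating-scale formulation; all logit values are invented for illustration and are not estimates from the study.

```python
import math

def mfrm_category_probs(ability, item_difficulty, rater_severity, thresholds):
    """Category probabilities under a many-facet Rasch (rating scale) model:
    log-odds of category k over k-1 = ability - difficulty - severity - threshold_k."""
    # Cumulative sums of (B - D - C - F_k); category 0 has cumulative sum 0.
    steps = [ability - item_difficulty - rater_severity - f for f in thresholds]
    cumulative = [0.0]
    for s in steps:
        cumulative.append(cumulative[-1] + s)
    numerators = [math.exp(c) for c in cumulative]
    total = sum(numerators)
    return [n / total for n in numerators]

# Hypothetical logit values: a lenient vs. a severe rater scoring the same script.
thresholds = [-1.5, 0.0, 1.5]  # thresholds for categories 1..3 (0 is the baseline)
for severity in (-0.5, 0.8):
    probs = mfrm_category_probs(ability=0.6, item_difficulty=0.0,
                                rater_severity=severity, thresholds=thresholds)
    print("severity", severity, [round(p, 3) for p in probs])
```
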
A Case Study on Rater Training for Pre-service Korean Language Teacher of Native Speakers and Chinese Speakers (한국인과 중국인 예비 한국어 교사 대상 채점자 교육 사례)

  • Lee, Duyong
    • Journal of Korean language education, v.29 no.1, pp.85-108, 2018
  • This study points out that many novice Korean language teachers who lack rater training are nonetheless scoring learners' writing. It carried out and analyzed a case in which pre-service teachers were trained as raters, in order to explore the possibility of incorporating rater training into a Korean language teacher training course. Pre-service teachers majoring in Korean language education at a graduate school scored TOPIK compositions, received feedback produced with the FACETS program, and discussed it at rater meetings. Across three scoring rounds, the raters scored while conscious of their own rating patterns and showed either positive change or overcorrection caused by excessive self-consciousness. Consequently, ongoing training can improve rating ability, and given that professional rater training is difficult to arrange, the approach combining FACETS analysis with rater meetings showed positive effects. Meanwhile, rater training that included native Korean and non-native (Chinese) speakers together showed no significant differences by mother tongue, only individual differences. This can be read as a positive implication for the rating reliability of non-native speakers with advanced Korean proficiency, although it must be confirmed through extended research. (A sketch of the fit statistics used to flag overfit and misfit raters follows below.)

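The overfit and misfit labels produced by FACETS come from infit and outfit mean-square statistics built from standardized residuals. The sketch below applies the standard formulas, assuming the model's expected scores and variances are already available; the numbers are made up for illustration.

```python
def rater_fit(observed, expected, variance):
    """Infit and outfit mean-squares for one rater's observations.
    observed: raw scores; expected, variance: model expectations per observation."""
    sq_resid = [(x - e) ** 2 for x, e in zip(observed, expected)]
    outfit = sum(r / w for r, w in zip(sq_resid, variance)) / len(observed)
    infit = sum(sq_resid) / sum(variance)
    return infit, outfit

# Hypothetical values for one rater over six scored scripts.
observed = [3, 2, 4, 1, 3, 2]
expected = [2.6, 2.1, 3.4, 1.5, 2.9, 2.4]
variance = [0.8, 0.7, 0.6, 0.5, 0.8, 0.7]
infit, outfit = rater_fit(observed, expected, variance)
print(f"infit {infit:.2f}, outfit {outfit:.2f}")  # ~1.0 fits; <1 overfit, >1 misfit
```
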
Research on Subjective-type Grading System Using Syntactic-Semantic Tree Comparator (구문의미트리 비교기를 이용한 주관식 문항 채점 시스템에 대한 연구)

  • Kang, WonSeog
    • The Journal of Korean Association of Computer Education, v.21 no.6, pp.83-92, 2018
  • Constructed-response (subjective) items are appropriate for evaluating deep thinking, but they are not easy to score: graders can produce different scores even when they apply the same scoring criteria, so an objective automatic scoring system is needed. Such a system, however, must solve the problem of analyzing and comparing Korean text. This paper proposes a Korean syntactic analysis and a subjective-item grading system that uses a syntactic-semantic tree comparator. The system is a hybrid of word-based and syntactic-semantic-tree-based grading and scores answers to subjective items with the comparator, with good results. It can be applied to Korean syntactic-semantic analysis, subjective-item grading, and document classification. (A toy sketch of a hybrid word/tree comparison appears below.)

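The paper's syntactic-semantic tree comparator is not specified in the abstract; the sketch below only illustrates the general idea of a hybrid score that mixes word overlap with a recursive comparison of simplified trees. The node structure, the 50/50 weighting, and the example sentences are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                       # e.g. a head word or semantic role
    children: list = field(default_factory=list)

def tree_similarity(a: Node, b: Node) -> float:
    """Recursive overlap of labels between two trees (a toy comparator)."""
    root = 1.0 if a.label == b.label else 0.0
    if not a.children and not b.children:
        return root
    child_scores = [max((tree_similarity(ca, cb) for cb in b.children), default=0.0)
                    for ca in a.children]
    child_part = sum(child_scores) / max(len(a.children), len(b.children), 1)
    return 0.5 * root + 0.5 * child_part

def word_similarity(answer: str, response: str) -> float:
    a, r = set(answer.split()), set(response.split())
    return len(a & r) / len(a | r) if a | r else 0.0

def hybrid_score(answer_text, response_text, answer_tree, response_tree, alpha=0.5):
    return (alpha * word_similarity(answer_text, response_text)
            + (1 - alpha) * tree_similarity(answer_tree, response_tree))

answer = Node("propagate", [Node("sound"), Node("medium")])
response = Node("propagate", [Node("sound"), Node("air")])
print(round(hybrid_score("sound propagates through a medium",
                         "sound travels through air", answer, response), 3))
```
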
A Grading System of Word Processor Practical Skill Using HWPML (HWPML을 이용한 워드프로세서 실기 채점 시스템)

  • Ha, Jin-Seok; Jin, Min
    • Journal of The Korean Association of Information Education, v.7 no.1, pp.37-47, 2003
  • A grading system for practical word processor skills is designed and implemented using HWPML (Hangul Word Processor Markup Language), a format from Hangul and Computer Co., Ltd. Because HWPML exposes the markup tag structure of a Hangul file, Hangul files can be processed by other application programs. Authorized users can create questions, but only the manager may register the answers to those questions, in order to keep the grading correct. Test results are stored in the database, and statistics on passes and failures can be viewed interactively. The number of test attempts and the scores of each user are also stored in the database and can be accessed whenever the user wants. The manager provides comments on the test results so that learners can work on their weak points. (An illustrative sketch of grading against a markup answer key follows below.)

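HWPML is an XML-style representation of Hangul documents, so grading can in principle be done by parsing the submission and checking its formatting against an answer key. The element and attribute names below are placeholders rather than the actual HWPML schema, and the checks and point values are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

# Illustrative HWPML-like fragment; the real schema's element and attribute
# names may differ, so treat these tags purely as placeholders.
submission = """<HWPML>
  <PARA align="center"><TEXT bold="true">Report Title</TEXT></PARA>
  <PARA align="left"><TEXT>Body text typed by the examinee.</TEXT></PARA>
</HWPML>"""

def grade(xml_text, checks):
    """Award points for each (description, predicate, points) check that passes."""
    root = ET.fromstring(xml_text)
    paras = root.findall("PARA")
    earned = 0
    for description, predicate, points in checks:
        if predicate(paras):
            earned += points
    return earned

answer_key = [
    ("first paragraph centred",     lambda p: p[0].get("align") == "center", 10),
    ("title text in bold",          lambda p: p[0].find("TEXT").get("bold") == "true", 10),
    ("body paragraph left-aligned", lambda p: p[1].get("align") == "left", 5),
]

print(grade(submission, answer_key), "/", sum(pts for *_, pts in answer_key))
```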

Difference in Results according to Scorer and Test Date in Clinical Practice Test (진료수행 시험에서 채점자 및 시험 일자에 따른 결과 차이)

  • Kwon, So-Hee; Kim, Young-Jon
    • The Journal of the Korea Contents Association, v.18 no.8, pp.345-352, 2018
  • The purpose of this study is to clarify how scoring results differ by scorer type (doctors versus standardized patients) and by examination date. A total of 101 fourth-year medical students participated in four clinical practice tests. Students were randomly assigned to day-1 or day-2, each of which used either a standardized-patient scoring set or a physician scoring set. Station checklists covered history taking, physical examination, patient education, the physician-patient relationship, and clinical courtesy. The achievement scores for each case and each domain were converted to standard scores, and the differences between groups were compared. Female students' achievement scores were significantly higher than male students' in all domains. There was no significant difference between the means of the standardized-patient group and the doctor group. The day-2 group scored significantly higher than the day-1 group in both the history-taking and physical-examination domains. If the checklist principles are clearly defined, scorer status (physician or standardized patient) does not determine differences in students' practice test scores. (A sketch of the standard-score conversion and group comparison appears below.)
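
A sketch of the score handling the abstract describes: raw checklist scores are converted to standard scores (T-scores here) and the day-1 and day-2 groups are compared. The numbers and the use of Welch's t-test are illustrative assumptions, not the study's data or exact procedure.

```python
from statistics import mean, stdev
from scipy.stats import ttest_ind  # two-sample t-test (Welch's with equal_var=False)

def to_standard_scores(raw, target_mean=50, target_sd=10):
    """Convert raw scores to T-scores: 50 + 10 * z."""
    m, s = mean(raw), stdev(raw)
    return [target_mean + target_sd * (x - m) / s for x in raw]

# Hypothetical history-taking domain scores for the two test dates.
day1 = [62, 58, 71, 66, 60, 64, 69, 57]
day2 = [70, 68, 74, 72, 66, 75, 71, 69]

all_scores = to_standard_scores(day1 + day2)
day1_std, day2_std = all_scores[:len(day1)], all_scores[len(day1):]

t, p = ttest_ind(day2_std, day1_std, equal_var=False)
print(f"day-2 mean {mean(day2_std):.1f} vs day-1 mean {mean(day1_std):.1f}, "
      f"t={t:.2f}, p={p:.3f}")
```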