• Title/Summary/Keyword: 채점자간 신뢰도 (inter-rater reliability)

20 search results

Developing Scoring Rubric and the Reliability of Elementary Science Portfolio Assessment (초등 과학과 포트폴리오의 채점기준 개발과 신뢰도 검증)

  • Kim, Chan-Jong; Choi, Mi-Aee
    • Journal of The Korean Association For Science Education, v.22 no.1, pp.176-189, 2002
  • The purpose of the study is to develop major types of scoring rubrics for a portfolio system and to estimate the reliability of the rubrics developed. The portfolio system was developed by the Science Education Laboratory, Chongju National University of Education, in summer 2000. The portfolio is based on Unit 2, The Layer and Fossil, and Unit 4, Heat and Change of Objects, at the fourth-grade level. Four types of scoring rubrics were developed: holistic-general, holistic-specific, analytical-general, and analytical-specific. Students' portfolios were scored, and inter-rater and intra-rater reliability were calculated. To estimate inter-rater reliability, three elementary teachers per rubric (twelve in total) scored 12 students' portfolios. Teachers who used the analytical-specific rubric scored only six portfolios because it took much more time than the other rubrics. To estimate intra-rater reliability, a second scoring was administered by two raters per rubric after two and a half months. The results show that the holistic-general rubric has high inter-rater and moderate intra-rater reliability. The holistic-specific rubric shows moderate inter- and intra-rater reliability. The analytical-general rubric has high inter-rater and moderate intra-rater reliability. The analytical-specific rubric shows high inter- and intra-rater reliability. The raters felt that the general rubrics were practical but not clear. The specific rubrics provide clearer guidelines for scoring but require more time and effort to develop. The analytical-specific rubric requires more than twice as much time to score each portfolio; it proved highly reliable but less practical.
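The inter-rater and intra-rater reliabilities described above are often estimated with Pearson correlations between two sets of rubric scores. A minimal sketch, using hypothetical scores rather than the study's data:

```python
# Hedged sketch: Pearson correlation as a reliability estimate for
# rubric scores. All scores below are illustrative, not the study's data.

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Inter-rater: two raters scoring the same 12 portfolios (hypothetical)
rater1 = [3, 4, 2, 5, 4, 3, 2, 4, 5, 3, 4, 2]
rater2 = [3, 4, 3, 5, 4, 3, 2, 4, 4, 3, 4, 2]
inter = pearson_r(rater1, rater2)

# Intra-rater: same rater's first and second scorings, weeks apart (hypothetical)
first  = [3, 4, 2, 5, 4, 3, 2, 4, 5, 3, 4, 2]
second = [3, 3, 2, 5, 4, 4, 2, 4, 5, 3, 4, 3]
intra = pearson_r(first, second)
```

High coefficients (close to 1) indicate that raters, or repeated scorings by one rater, rank portfolios consistently.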

Developing a Scoring Rubric for Students' Mind Maps and Its Reliability (마인드 맵의 채점 기준 개발 및 신뢰도 검증)

  • Lee, Su-Jung; Kim, Chan-Jong
    • Journal of the Korean earth science society, v.23 no.8, pp.632-639, 2002
  • The purpose of the study is to develop a scoring rubric for students' mind maps. The participants were students in two fourth-grade classes selected from an elementary school in Pyungtaek-shi. After receiving basic training, students developed mind maps four times while learning two science units. A scoring rubric was developed to score the mind maps. To estimate its reliability, selected mind maps were marked by three teachers and correlation coefficients were calculated with SPSS. As a result of the study, a scoring rubric consisting of three domains (central circle, branches, and expression) was developed. The reliability of the rubric proved to be high to very high.

An Analysis on Reliabilities of Scoring Methods and Rubric Ratings Number for Performance Assessments of Middle School Students' Science Investigation Activities (중학생 과학탐구활동 수행평가 시 채점 방식 및 척도의 수에 따른 신뢰도 분석)

  • Kim, Hyung-Jun; Yoo, June-Hee
    • Journal of The Korean Association For Science Education, v.30 no.2, pp.275-290, 2010
  • In this study, the reliabilities of a holistic scoring method and an analytic scoring method were analyzed in performance assessments of middle school students' science investigation activities. Reliabilities of 2-, 3-, and 4~7-level rubric ratings for the analytic scoring method were compared to find the optimal number of rubric levels. Two trained raters rated four activity sheets from 60 students using the two scoring methods and the three kinds of rubric ratings. Internal consistency reliabilities of the holistic scoring method were higher than those of the analytic scoring method, while intra-rater reliabilities of analytic scoring were higher than those of holistic scoring. Internal consistency and intra-rater reliabilities of the 3-level rubric rating showed patterns similar to those of the 4~7-level ratings, and students' discrimination, item difficulties, and item-response curves showed that the 3-level rubric rating was reliable. These results suggest that the holistic scoring method could be adopted to increase internal consistency reliability, with intra-rater reliability improved through raters' conferences. Also, a 3-level rubric rating would be enough for good reliability when the analytic scoring method is adopted.
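The internal consistency reliabilities compared above are commonly indexed with Cronbach's alpha across rubric items. A minimal sketch, with illustrative scores rather than the study's data:

```python
# Hedged sketch: Cronbach's alpha as one common internal consistency
# index for analytic rubric items. Scores below are illustrative.

def cronbach_alpha(items):
    """items: one list of scores per rubric item, all over the same students."""
    k = len(items)            # number of rubric items
    n = len(items[0])         # number of students

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]  # per-student total
    return k / (k - 1) * (1 - item_var / variance(totals))

# Four rubric items scored on a 3-level scale for six students (hypothetical)
scores = [
    [3, 2, 3, 1, 2, 3],
    [3, 2, 2, 1, 2, 3],
    [2, 2, 3, 1, 1, 3],
    [3, 1, 3, 2, 2, 2],
]
alpha = cronbach_alpha(scores)
```

Alpha near 1 means the rubric items rank students consistently; values above roughly .7 are conventionally taken as acceptable.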

A Study on Validity, Reliability and Practicality of a Concept Map as an Assessment Tool of Biology Concept Understandings (생물 개념 이해의 평가 도구로서 개념도의 타당도, 신뢰도 그리고 현실 적용 가능성에 대한 연구)

  • Cho, Jung-Il; Kim, Jung
    • Journal of The Korean Association For Science Education, v.22 no.2, pp.398-409, 2002
  • The purpose of this study was to investigate the validity, reliability, and practicality of concept maps as an assessment tool in the context of biology concept learning. Forty undergraduate students participated in concept mapping, and the maps were scored by preservice science teachers using one of three scoring methods: those developed by Burry-Stock, Novak & Gowin, and McClure & Bell. Two scorers were assigned to each scoring method. As far as validity was concerned, two of the three methods were found to be very valid, while Burry-Stock's scoring method showed little validity. As for internal consistency, considerably high consistency was shown between every pair of scorers, judging from the high correlation coefficients between the two scorers for each scoring method. Scoring a map took from 1.13 to 3.70 minutes on average, showing that concept mapping could be used in school classrooms with limited time and personnel. These findings suggest that concept mapping can be an appropriate tool for assessing biology concept understanding.

An Analysis on Rater Error in Holistic Scoring for Performance Assessments of Middle School Students' Science Investigation Activities (중학생 과학탐구활동 수행평가 시 총체적 채점에서 나타나는 채점자간 불일치 유형 분석)

  • Kim, Hyung-Jun; Yoo, June-Hee
    • Journal of The Korean Association For Science Education, v.32 no.1, pp.160-181, 2012
  • The purpose of this study is to understand raters' errors in rating performance assessments of science inquiry. Sixty middle school students performed a scientific inquiry about sound propagation, and four trained raters rated their activity sheets. In the generalizability analysis for the person × task × rater design, the variance components for rater, rater-by-person, and rater-by-task together account for about 25%. Among the four raters, two raters' severities were higher than the other two raters', and their severities were stable. The four raters' ratings agreed with one another in 51 of the 240 cases. Through raters' conferences, the rater errors in the 189 disagreed cases were identified as one of three types: different salience, severity, and overlooking. Error type 1, different salience, accounted for 38% of the disagreed cases; salient tasks and salient assessment components differed among the raters. Error type 2, severity, accounted for 25%, and error type 3, overlooking, for 31%. Error type 2 seemed to happen when students' responses were on the border between two levels. Error type 3 seemed to happen when raters overlooked some important part of students' responses because they were immersed in their own salience. To reduce these rater errors, raters' conferences on the salience of tasks and assessment components are needed before holistic scoring of complex tasks. Raters also need to recognize their own severity and make an effort to keep it constant, and multiple raters are needed to prevent overlooking errors. Further studies on raters' tendencies and the sources of different interpretations of the rubric are suggested.
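The generalizability analysis above partitions score variance among persons, tasks, and raters. A simplified sketch for a fully crossed person × rater design (a single task), estimating variance components from two-way ANOVA expected mean squares; the scores are illustrative, not the study's data:

```python
# Hedged sketch: variance components and a relative G coefficient for a
# fully crossed person x rater design (simplified from the study's
# person x task x rater design). Data are illustrative.

def variance_components(scores):
    """scores[p][r]: score given to person p by rater r."""
    n_p, n_r = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n_p * n_r)
    p_means = [sum(row) / n_r for row in scores]
    r_means = [sum(scores[p][r] for p in range(n_p)) / n_p for r in range(n_r)]

    ss_p = n_r * sum((m - grand) ** 2 for m in p_means)
    ss_r = n_p * sum((m - grand) ** 2 for m in r_means)
    ss_tot = sum((scores[p][r] - grand) ** 2
                 for p in range(n_p) for r in range(n_r))
    ss_pr = ss_tot - ss_p - ss_r          # interaction + error residual

    ms_p = ss_p / (n_p - 1)
    ms_r = ss_r / (n_r - 1)
    ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

    var_pr = ms_pr
    var_p = max((ms_p - ms_pr) / n_r, 0.0)
    var_r = max((ms_r - ms_pr) / n_p, 0.0)
    return var_p, var_r, var_pr

# Four persons scored by four raters (hypothetical)
scores = [[4, 3, 4, 3], [2, 2, 3, 2], [5, 4, 5, 4], [3, 2, 3, 2]]
var_p, var_r, var_pr = variance_components(scores)

# Relative G coefficient for the mean over n_r raters
n_r = 4
g_coef = var_p / (var_p + var_pr / n_r)
```

A large rater component relative to the person component, as in the study's roughly 25% share, signals that who rates matters too much for the intended interpretation.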

Item Analysis, Standard Scores, and Test Equating for University-Administered Entrance Examinations (대학별고사를 위한 문항분석, 표준점수, 검사동등화)

  • 성태제
    • Communications for Statistical Applications and Methods, v.1 no.1, pp.206-214, 1994
  • This paper introduces basic concepts of educational measurement theory in order to point out problems with item analysis, standard scores, and test equating in the university-administered entrance examinations reinstated in 1994. To guarantee the validity and reliability of these examinations, high-quality item development must come first, and for item analysis it is preferable to use item response theory rather than the previously used classical test theory, because item characteristics estimated under item response theory do not vary with the characteristics of the examinee group. When items are essay-type, inter-rater and intra-rater reliability must not be overlooked. Universities offering diverse elective subjects use raw scores, standard scores, or test equating for admission decisions, but this violates educational measurement theory: abilities in different subjects cannot be compared with one another, and standard scores and test equating are methods for the relative comparison of the same ability. In particular, test equating presupposes identical constructs, equity, population invariance, and symmetry. Comparing examinees' different abilities through standard scores is possible only because those different abilities are expressed as numbers, and analyzing what such scores actually mean also violates the basic philosophy of educational evaluation.
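The standard scores the abstract criticizes are typically z-scores or T-scores computed within each subject, which is exactly why they cannot bridge different subjects. A minimal sketch with hypothetical raw scores:

```python
# Hedged sketch: within-subject standard score transformations (z and T).
# The raw scores are hypothetical; the point is that standardization is
# relative to one subject's score distribution only.

def z_scores(raw):
    """Standardize raw scores: mean 0, standard deviation 1 (population SD)."""
    n = len(raw)
    mean = sum(raw) / n
    sd = (sum((x - mean) ** 2 for x in raw) / n) ** 0.5
    return [(x - mean) / sd for x in raw]

def t_scores(raw):
    """Rescale z-scores to the conventional T scale: mean 50, SD 10."""
    return [50 + 10 * z for z in z_scores(raw)]

physics = [72, 65, 80, 58, 75]   # hypothetical raw scores in one elective
zs = z_scores(physics)
ts = t_scores(physics)
```

A T-score of 60 in physics and a T-score of 60 in biology each mean "one SD above that subject's examinee group," which is the relative-comparison-of-the-same-ability limitation the abstract describes.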


The Reliability and Validity of Clock Drawing Test as a Screening Tool for Cognitive Impairment in Clients after Cerebrovascular Accident (뇌졸중 클라이언트의 인지 손상 선별 도구로서 CDT의 신뢰도 및 타당도)

  • Lee, Sang-Heon
    • Journal of the Korea Academia-Industrial cooperation Society, v.13 no.10, pp.4612-4618, 2012
  • The purpose of this study was to analyze the reliability and validity of the CDT as a screening tool for cognitive impairment in stroke clients living in the local community. Two evaluators assessed 51 clients' cognitive function using the CDT and the K-MMSE from October 2010 to August 2011. The researcher analyzed test-retest reliability, inter-rater reliability, construct validity, and concurrent validity. The test-retest and inter-rater reliabilities were higher than .54 (p<.01), and the construct and concurrent validity were statistically significant (p<.01). Thus, the CDT, using the productive method and scoring system of Freedman et al., may be applied to screen for cognitive impairment in clients with stroke.

The Development of Assessment Tools to Measure Scientific Creative Problem Solving Ability for Middle School Students (중학생의 과학 창의적 문제 해결 능력을 측정하기 위한 도구 개발)

  • Park, In-Suk; Kang, Soon-Hee
    • Journal of The Korean Association For Science Education, v.32 no.2, pp.210-235, 2012
  • The purpose of this study was to develop a valid and reliable assessment tool for measuring scientific creative problem solving ability for middle school students. To achieve this aim, an assessment framework, four assessment items, and detailed rubrics for scientific creative problem solving were developed. The assessment framework had three dimensions (i.e. science contents, inquiry process, and thinking skills) and sub-elements for each dimension. The assessment items were tested with 320 middle school students in order to determine reliability, difficulty, and item discrimination. Science teachers and experts in science education checked the validity of the items and the rubrics. The results proved that the assessment tool was reliable enough to evaluate students' scientific creative problem solving skills.

Reliability of Standardized Patients as Raters in Objective Structured Clinical Examination (객관 구조화 절차 기술 평가에서 채점자로서의 표준화환자의 신뢰도)

  • Son, Hee-Jeong; Moon, Joong-Bum; Lee, Hyang-Ah; Roh, Hye-Rin
    • Journal of the Korea Academia-Industrial cooperation Society, v.12 no.1, pp.318-326, 2011
  • The purpose of this study is to investigate whether standardized patients (SPs) can serve as reliable examiners in an Objective Structured Clinical Examination (OSCE). Four SPs and four faculty members with more than two years of OSCE scoring experience were selected; for each assignment, two faculty members and two SPs were designated as raters. The SPs were trained for eight hours (four hours per day, one topic per day) to assess two technical skills: male Foley catheter insertion and wound dressing. The training covered the definition, method, cautions, and complications of each procedural skill, employing theoretical lectures, video learning, faculty demonstration, and practical training on mannequins. The eight raters were standardized for an hour on the day before the OSCE, using simulated scoring of previous videos. Each assessment consisted of 14 checklist items and one global rating; the allotted time was 5 minutes per assignment, with 2 minutes of evaluation time per student. The evaluations by the faculty and the SPs were compared and analyzed with the GENOVA program. The overall generalizability coefficient (G coefficient) was 0.839 from two cases of OASTS, and the reliability of the raters was high at 0.946. The inter-rater agreement between the faculty group and the SP group was 0.949 for the checklist and 0.908 for the global rating. Therefore, SPs can act as raters in OSCEs of procedural skills if they are given appropriate training.
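One simple way to look at agreement between rater groups on a binary checklist, like the 14-item one above, is the exact-agreement rate between a faculty rater and an SP. A minimal sketch; the marks are hypothetical (1 = performed, 0 = not performed):

```python
# Hedged sketch: exact-agreement rate on a binary OSCE checklist between
# one faculty rater and one SP rater. Marks below are illustrative.

def agreement_rate(a, b):
    """Proportion of checklist items on which two raters gave the same mark."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

faculty = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
sp      = [1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0]
rate = agreement_rate(faculty, sp)
```

Note this raw rate ignores chance agreement; the study's generalizability analysis with GENOVA is the more rigorous route for multi-rater, multi-case designs.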

The Development of Performance Scoring Rubrics for the Inquiry-Based General Chemistry Experiments (탐구적 일반화학실험 수행 평가 준거 개발)

  • Kang, Soon-Hee; Kim, Yang-Hyun; Park, Jong-Yoon
    • Journal of The Korean Association For Science Education, v.19 no.4, pp.507-515, 1999
  • This study develops performance scoring rubrics for the inquiry-based experiments of a general chemistry course in a college of education. Two types of analytic scoring rubrics were developed for nine different experiments. The first assesses scientific process skills from the written experimental reports; these rubrics cover seven process skills selected from Lawson's 'creative and critical thinking skills' and other known process skills. The second assesses individual manipulative skills and experimental attitudes through direct observation by the teacher. The content validity of all scoring rubrics was verified by six science educators, and the inter-scorer reliability of the analytic rubrics applied to the students' experimental reports was examined. The correlation between the scores from the experiments and those of the written test of theoretical knowledge was r=.663 (p<.01). From the shared variance ($r^2$=.440), we can say indirectly that about 56% of the variance in this experimental assessment does not overlap with the theoretical knowledge test, and thus assesses students' science process skills, manipulative skills, and attitudes.
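The variance-overlap figure in the abstract follows directly from squaring the reported correlation:

```python
# Worked arithmetic from the abstract: the squared correlation gives the
# proportion of variance the experimental assessment shares with the
# written theoretical-knowledge test.

r = 0.663
shared = r ** 2        # ~.44: variance in common with the written test
unshared = 1 - shared  # ~.56: variance unique to the experimental assessment
```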
