• Title/Summary/Keyword: rubric

Search Results: 125

A Study on the Effects and Evaluation of Movies Education through Application of Rubric (루브릭 적용을 통한 영화교육 평가 및 효과 연구)

  • Sung, Chang-Hwan
    • The Journal of the Convergence on Culture Technology / v.8 no.6 / pp.471-478 / 2022
  • In a good class, the elements that make up the class are organically related as a system. Unilateral assessment without sufficient explanation of, or agreement on, the assessment criteria, subjective assessment that does not guarantee the reliability of the assessment process, and evaluation disconnected from the learning process can all threaten a good class and a healthy learning ecosystem. This study analyzed evaluation through rubrics and its effects in order to address such problems of educational evaluation. A rubric is a descriptive evaluation tool that details the criteria for evaluating performance tasks based on class goals and describes the quality of performance at several levels. The rubric applied for movie literacy evaluation was an analytic rubric covering movie comprehension literacy, movie production literacy, and movie utilization literacy. Learners recognized the rubric as a valid and very useful tool for reflecting on their learning.

An Analysis on Reliabilities of Scoring Methods and Rubric Ratings Number for Performance Assessments of Middle School Students' Science Investigation Activities (중학생 과학탐구활동 수행평가 시 채점 방식 및 척도의 수에 따른 신뢰도 분석)

  • Kim, Hyung-Jun;Yoo, June-Hee
    • Journal of The Korean Association For Science Education / v.30 no.2 / pp.275-290 / 2010
  • In this study, the reliabilities of a holistic scoring method and an analytic scoring method were analyzed for performance assessments of middle school students' science investigation activities. Reliabilities of 2-, 3-, and 4~7-level rubric ratings for the analytic scoring method were compared to determine an optimal number of rubric rating levels. Two trained raters rated four activity sheets from 60 students using the two scoring methods and the three kinds of rubric ratings. Internal consistency reliabilities of the holistic scoring method were higher than those of the analytic scoring method, while intra-rater reliabilities of analytic scoring were higher than those of holistic scoring. Internal consistency and intra-rater reliabilities of the 3-level rubric rating showed patterns similar to those of the 4~7-level ratings, and student discrimination, item difficulties, and item-response curves showed that the 3-level rubric rating was reliable. These results suggest that the holistic scoring method could be adopted to increase internal consistency reliability, with intra-rater reliability improved through rater conferences, and that a 3-level rubric rating is sufficient for good reliability when the analytic scoring method is adopted. (A short sketch of these reliability coefficients follows this entry.)
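
The reliability comparison above turns on two quantities: internal consistency across the four activity sheets and the agreement of a rater with their own earlier scoring. The abstract does not reproduce the study's actual computations, so the following is only a minimal sketch, using hypothetical score data, of how Cronbach's alpha and a simple intra-rater correlation can be obtained.

```python
# Minimal sketch (not the study's actual analysis): Cronbach's alpha and a
# simple intra-rater correlation for rubric scores. All data are hypothetical.
import numpy as np
from scipy import stats

def cronbach_alpha(scores):
    """Cronbach's alpha for a (students x items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(2.0, 0.6, size=60)                 # 60 hypothetical students
# Four activity sheets scored on a 3-level rubric (1-3), correlated via ability
sheets = np.clip(np.round(ability[:, None] + rng.normal(0, 0.4, size=(60, 4))), 1, 3)

print("internal consistency (alpha):", round(cronbach_alpha(sheets), 3))

# Intra-rater agreement, illustrated as the correlation between a rater's
# first scoring and a hypothetical re-scoring of the same sheets
first_pass = sheets.sum(axis=1)
second_pass = first_pass + rng.integers(-1, 2, size=60)
r, p = stats.pearsonr(first_pass, second_pass)
print("intra-rater correlation:", round(float(r), 3))
```

The study's own scoring design and formulas may differ; this only illustrates the type of coefficient being compared.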

The Development of Rubrics to Assess Scientific Argumentation (과학적 논증과정 평가를 위한 루브릭 개발)

  • Yang, Il-Ho;Lee, Hyo-Jeong;Lee, Hyo-Nyong;Cho, Hyun-Jun
    • Journal of The Korean Association For Science Education / v.29 no.2 / pp.203-220 / 2009
  • The purpose of this study was to develop a rubric for assessing students' scientific argumentation. Through an analysis of the literature on argument in science education, the development procedure and the assessment categories for the rubric were derived. Following the general procedure for developing a rubric, the standards for evaluating argumentation were organized into three categories: form, content, and attitude. The form category was segmented into the sub-elements of overall composition, claim, ground, and conclusion; the content category into understanding, credibility, and inference; and the attitude category into level of participation and openness. The standards for evaluating the sub-elements in each category were specified in detail across five levels. The rubric developed from the literature was first reviewed in regular seminars with an expert in science education and fellow researchers, and the initial version was then revised after four experts in science education examined it for problems and possible improvements. The final rubric showed a high content validity index of 0.96, as verified by the four experts. The developed rubric can increase students' understanding of argumentation and can serve as criteria for developing argumentation programs and for evaluating argumentation activities. (A sketch of how a content validity index is computed follows this entry.)
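
The content validity index of 0.96 reported above is, in the usual procedure, obtained by having each expert rate the relevance of each rubric element on a 4-point scale and counting ratings of 3 or 4 as "relevant". The study's actual rating sheet is not given in the abstract, so the snippet below is only an illustrative sketch with made-up expert ratings for the nine sub-elements named above.

```python
# Illustrative sketch of a content validity index (CVI) calculation;
# the expert ratings below are hypothetical, not the study's data.
import numpy as np

# rows = rubric sub-elements, columns = 4 experts, relevance rated 1-4
ratings = np.array([
    [4, 4, 3, 4],   # composition
    [4, 3, 4, 4],   # claim
    [3, 4, 4, 4],   # ground
    [4, 4, 4, 3],   # conclusion
    [4, 4, 3, 4],   # understanding
    [3, 4, 4, 4],   # credibility
    [4, 4, 4, 4],   # inference
    [4, 3, 4, 4],   # participation
    [4, 4, 4, 2],   # openness
])

# Item-level CVI: proportion of experts rating the element 3 or 4
i_cvi = (ratings >= 3).mean(axis=1)
# Scale-level CVI (average approach): mean of the item-level CVIs
s_cvi_ave = i_cvi.mean()

print("I-CVI per element:", i_cvi)
print("S-CVI/Ave:", round(s_cvi_ave, 2))
```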

Filling the understanding gap of the misplacement of ESL learner's writing placement test (ESL 학습자의 쓰기배치고사상의 오배치에 따른 이해도 차이 연구)

  • Kim, Jung-Tae
    • English Language & Literature Teaching / v.12 no.3 / pp.147-166 / 2006
  • This study investigates the effect of misplacement in a written Computer-delivered ESL Placement Test (CEPT) context. It addresses two research questions: a) which scoring rubric features cause the misplacement of ESL learners' writing scores? and b) which scoring rubric features improve ESL learners' writing scores? Thirty-four international examinees took the test and participated in surveys at the University of Illinois, and twelve of them attended the CEPT workshop test. In the workshop, they evaluated their own first essays using a scoring rubric and compared their results with those of expert raters. After the workshop, the examinees responded to a survey and an interview. For the first research question, the survey and interview results indicated that the majority disagreed with the raters' rating results, and the self-evaluation results indicated that their misunderstanding of the organization feature caused the misplacement. For the second question, the CEPT workshop scores improved mainly because of gains in the organization feature, while the other features contributed little to the total scores. Most of the examinees pointed out that a lesson on the scoring rubric enhanced their understanding of the writing features in the rubric, so their placement scores generally improved.


Applying Clinical Judgment Rubric for Evaluation of Simulation Practice for Nursing Students : A Non-Randomized Controlled Trial

  • Kim, Hyun-Ju
    • International Journal of Contents / v.14 no.2 / pp.35-40 / 2018
  • The purpose of this study is to investigate the effects of debriefing with Lasater's Clinical Judgment Rubric on nursing students' academic self-efficacy, clinical performance, and clinical judgment. The experimental group received debriefing based on the Clinical Judgment Rubric, while the control group received general debriefing. The results are as follows: clinical judgment scores improved after debriefing in both groups and were significantly higher for students in the experimental group than in the control group; however, there was no significant difference between the two groups in academic self-efficacy or clinical performance. In conclusion, the debriefing based on the Clinical Judgment Rubric used in this study proved effective in improving the clinical judgment of nursing students.

Rubric Development for Performance Evaluation of Middle School Home Economics - Focusing on Experiment and Practice Methods - (중학교 가정교과 수행평가를 위한 루브릭(rubric) 개발 - 실험.실습법에 적용 -)

  • Bum, Sun-Hwa;Chae, Jung-Hyun
    • Journal of Korean Home Economics Education Association / v.20 no.3 / pp.85-105 / 2008
  • The purpose of this study was to develop a narrative analytic scoring rubric, through teacher-student negotiation, for assessing tasks that use experiment and practice methods in middle school home economics (HE). The analytic rubric was developed in three stages. In the first stage, everything needed for rubric development was defined and prepared: tasks for rubric application were selected through a questionnaire survey, detailed directions on methods, procedures, and required items were provided, a class for rubric negotiation was selected, and the development schedule was set; the method suggested by Ainsworth and Christinson (1998) in Student Generated Rubrics was used. In the second stage, performance criteria for the tasks were developed in terms of knowledge, skills, and attitude, and a scoring framework and scales were set for each assessment area. Referring to the selected scoring framework and assessment criteria, observable and assessable behaviors were used to write rubric descriptors on an A, B, and C scale, and a primary rubric was then developed through teacher-student negotiation using the rubrics made by each group. In the last stage, the primary rubric was reviewed by an expert in HE education to test its validity. In addition, the suitability of the final rubric as an assessment tool was analyzed using 46 questionnaires collected from in-service home economics teachers, sampled mainly from teachers enrolled in or having completed a master's degree program at one university. As a result, the average suitability of all the rubrics was over 4.0 on the 5-point scale.


Development of Rubric for Assessing Computational Thinking Concepts and Programming Ability (컴퓨팅 사고 개념 학습과 프로그래밍 역량 평가를 위한 루브릭 개발)

  • Kim, Jae-Kyung
    • The Journal of Korean Association of Computer Education / v.20 no.6 / pp.27-36 / 2017
  • Today, computational thinking courses are being introduced into elementary, secondary, and higher education curriculums, and it is important to foster creative talent built on the convergence of computational thinking with various major fields. However, analysis and evaluation of computational thinking assessment tools in higher education are currently insufficient. In this study, we developed a rubric to evaluate computational thinking skills in university classes from two perspectives: conceptual learning and practical programming training. Learning achievement and the relevance between theory and practice were also assessed. The proposed rubric is based on Computational Thinking Practices for assessing the higher education curriculum and is defined as a two-level structure consisting of four categories and eight items. The rubric was applied to a liberal arts class at a university, and the results were discussed to guide future improvements. (One possible code representation of such a two-level rubric is sketched below.)
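
The abstract describes the rubric only at a high level (a two-level structure of four categories and eight items, applied to both conceptual learning and programming practice) and does not list the actual categories or descriptors. The sketch below shows one hypothetical way such a two-level analytic rubric could be represented and totalled in code; the category and item names are placeholders, not those defined in the paper.

```python
# Hypothetical representation of a two-level analytic rubric
# (categories -> items -> level descriptors). Names are placeholders;
# only two of the paper's four categories are sketched here.
from dataclasses import dataclass, field

@dataclass
class RubricItem:
    name: str
    levels: dict[int, str]          # score level -> descriptor

@dataclass
class RubricCategory:
    name: str
    items: list[RubricItem] = field(default_factory=list)

rubric = [
    RubricCategory("Abstraction", [
        RubricItem("Problem decomposition",
                   {1: "lists steps only", 2: "groups related steps", 3: "defines reusable sub-problems"}),
        RubricItem("Data modeling",
                   {1: "ad hoc values", 2: "basic structures", 3: "well-chosen structures"}),
    ]),
    RubricCategory("Algorithms", [
        RubricItem("Control flow",
                   {1: "straight-line code", 2: "uses branches/loops", 3: "combines them correctly"}),
        RubricItem("Correctness",
                   {1: "often wrong output", 2: "mostly correct", 3: "handles edge cases"}),
    ]),
]

def total_score(awarded: dict[str, int]) -> int:
    """Sum the awarded level for every item in every category."""
    return sum(awarded[item.name] for cat in rubric for item in cat.items)

print(total_score({"Problem decomposition": 3, "Data modeling": 2,
                   "Control flow": 3, "Correctness": 2}))   # -> 10
```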

Evaluation of Lasater Clinical Judgment Rubric to Measure Nursing Student' Performance of Emergency Management Simulation of Hypoglycemia (간호대학생의 저혈당 응급관리 시뮬레이션 실습 수행 평가를 위한 임상판단 루브릭 적용)

  • Hur, Hea Kung;Park, So Mi;Kim, Ki Kyong;Jung, Ji Soo;Shin, Yoon Hee;Choi, Hyang Ok
    • Journal of Korean Critical Care Nursing / v.5 no.2 / pp.15-27 / 2012
  • Purpose: To evaluate the applicability of the Lasater Clinical Judgment Rubric (LCJR) as an evaluation tool for a hypoglycemia simulation practicum with Korean nursing students. Methods: A methodological study was conducted to evaluate the reliability and validity of the LCJR. Based on the four-level grading rubric derived from Benner, the ten items of the LCJR were evaluated for inter-rater reliability and internal consistency. Content validity was tested by eight experts, and concurrent validity was tested against Clark's (2006) clinical simulation grading rubric. Fifty-five video-taped cases of senior nursing students at Y University were used to examine the reliability and concurrent validity of the LCJR. Results: The inter-rater reliability was r = .90 (p < .001) and Kendall's tau-b = .87 (p < .001), and Cronbach's alpha was .90. The item content validity index of the LCJR was .97, and the correlation coefficient between the LCJR and Clark's instrument was .90 (p < .001). The mean (±SD) of the nursing students' clinical judgment was 2.04 (±.50). Conclusion: The LCJR is a useful tool for evaluating simulation performance and improving competency among nursing students. The results indicate that the LCJR can provide valuable information on nursing students' clinical judgment, and its use is suggested in developing simulation-based education programs. (A brief computational sketch of such agreement statistics follows this entry.)

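The agreement statistics reported for the LCJR (Pearson r, Kendall's tau-b, Cronbach's alpha) are standard measures. As a rough illustration only, the sketch below computes two of them for a pair of hypothetical raters; it is not the study's analysis and the numbers are made up.

```python
# Hypothetical illustration of inter-rater agreement statistics of the kind
# reported for the LCJR; the ratings below are simulated, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two raters score 55 video-taped cases on a hypothetical total-score scale
true_level = rng.normal(30, 5, size=55)
rater_a = true_level + rng.normal(0, 2, size=55)
rater_b = true_level + rng.normal(0, 2, size=55)

pearson_r, p1 = stats.pearsonr(rater_a, rater_b)    # linear agreement
tau_b, p2 = stats.kendalltau(rater_a, rater_b)      # rank agreement (tau-b)

print(f"Pearson r     = {pearson_r:.2f} (p = {p1:.3g})")
print(f"Kendall tau-b = {tau_b:.2f} (p = {p2:.3g})")
```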

The Assessment Rubric Development of Mathematical Communication Ability (수학적 의사소통 능력의 평가 기준 개발)

  • 이종희;김선희;채미애
    • Journal of Educational Research in Mathematics / v.11 no.1 / pp.207-221 / 2001
  • The purpose of this study is to develop an assessment rubric for mathematical communication ability for each communication type: listening, speaking, reading, writing, and graphic representation. The rubric's content validity and reliability were examined through professional educators' evaluations and correlation coefficients. Sixteen mathematics educators judged that the rubric covers the outcomes of learning mathematical communication, the outcomes of feasible instruction, and the content that teachers score when assessing mathematical communication. One hundred seventy middle school students were tested with assessment tasks corresponding to the types of mathematical communication. After two researchers and two teachers scored the tasks, correlation coefficients between the evaluators were calculated; the coefficients were judged to be high, exceeding 0.70.


Engaging pre-service English teachers in the rubric development and the evaluation of a creative English poetry (예비 영어교사 주도에 의한 영미시 평가표 제작 및 평가 수행에 관한 연구)

  • Lee, Ho;Jun, So-Yeon
    • English Language & Literature Teaching / v.17 no.4 / pp.339-356 / 2011
  • This study explored pre-service English teachers' participation in developing a rubric and their evaluation of their own English poetry. The study investigated: 1) the pre-service English teachers' perceptions as rubric developers and self-evaluators, 2) the number of analytic areas the participants included in their rubrics and the scoring schemes they designed, and 3) the inter-rater differences between self-assessment and expert assessment across analytic areas. Twenty-four EFL learners participated in the study. The researchers analyzed the learners' English poems, their field notes describing the writing process, their rubrics, their self-assessment scores, and expert raters' scores. The results revealed that the learners responded positively to learner-directed assessment, that 'content' was regarded as the most important area, and that inter-rater differences were small across all analytic areas.
