• Title/Summary/Keyword: 선다형 (multiple-choice)


Error Analysis through Multiple-Choice Test Items (선다형 평가문항을 통한 오류분석)

  • Choe, Yeong-Gi;Hong, Gap-Ju;Do, Jong-Hun;Kim, Min-Jeong
• Communications of Mathematical Education, v.14, pp.151-162, 2001
• Educational assessment should not only serve a selection function but also support learners' learning by diagnosing their learning states and error tendencies. In this respect, error analysis is one of the most important functions an assessment should have. The widespread availability of the Internet, together with its temporal, spatial, and technical efficiency, opens up new possibilities for the assessment environment. Test items in the Internet environment are based on the multiple-choice format, which has the limitation that learners' errors cannot be observed directly through the items. Error analysis through multiple-choice items is not impossible, however. The purpose of this study is to explore concrete methods and procedures for error analysis with multiple-choice test items.


A Relative Effectiveness of Item Types for Estimating Science Ability in TIMSS-R (문항 유형에 따른 과학 능력 추정의 효율성 비교)

  • Park, Chung;Hong, Mi-Young
• Journal of The Korean Association For Science Education, v.22 no.1, pp.122-131, 2002
• Recently, performance assessment, which makes growing use of free response items in large scale assessments, has been emphasized. This study is an empirical examination of the effectiveness of free response items in comparison with multiple choice items. Using the information function in the Item Response Theory (IRT) framework, item information of free response items and multiple choice items from the Third International Mathematics and Science Study-Repeat (TIMSS-R) was obtained. Test information for the whole science area as well as for each science content area was computed. On average, free response items yielded more information than multiple choice items, especially in earth science, physics, chemistry, and life science. This study also showed that free response items were appropriate for students of high science ability. Free response items also estimated students' science ability more accurately than multiple choice items, even with a smaller number of items.
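The item-information comparison the abstract describes can be sketched with the standard 3PL Fisher information formula. This is a minimal illustration, not the TIMSS-R analysis itself: the item parameters below are invented, and the modeling choice (guessing parameter c near 1/k for multiple choice, c = 0 for free response) is a common convention, not something the abstract states.

```python
import math

def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta:
    I(theta) = a^2 * ((p - c)/(1 - c))^2 * q/p,
    where p is the probability of a correct response and q = 1 - p."""
    p = c + (1 - c) / (1 + math.exp(-a * (theta - b)))
    q = 1 - p
    return a**2 * ((p - c) / (1 - c)) ** 2 * (q / p)

# Hypothetical item: a free-response version (c = 0, no guessing)
# versus a 4-option multiple-choice version (c = 0.25).
theta = 1.0  # an above-average examinee
free_response = info_3pl(theta, a=1.2, b=0.5, c=0.0)
multiple_choice = info_3pl(theta, a=1.2, b=0.5, c=0.25)

# Removing the guessing floor raises the information the item carries.
assert free_response > multiple_choice
```

With guessing possible, a correct response is less diagnostic of ability, which is one way to read the abstract's finding that free response items yield more information on average.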

A Computerized Testing system that Reduces Backward Reasoning in Multiple-choice Items (선다형 문항에서 역행추리를 줄이는 컴퓨터화 검사 방식)

  • Park, Joo-Yong
• Korean Journal of Cognitive Science, v.20 no.3, pp.275-289, 2009
• A new computerized testing system, called the Computerized Multiple-choice Testing (CMMT) system, is introduced. In this system, the questions of multiple choice (MC) items are presented first without options, so that students must generate answers for themselves. They can click for the options when they are ready, and can respond within a brief, specified time period. The present study was performed to examine whether this system is effective in reducing backward reasoning, i.e., using the options of MC items as cues to find the correct answer. One hundred and seventy-seven 6th grade students (12 year olds) were divided into two groups so that mean scores from a prior test were equal: the experimental group took an intervening computerized test in the new format, and the control group in the MC format. Five days after the computerized intervening test, a short-answer paper-and-pencil final test was given. The testing effect was greater in the new system than in the MC system. Analysis of the final test responses in relation to the intervening test responses showed that i) students retained the correct answer more in the new system than in the MC testing system, and that ii) students corrected their previous failures in the intervening CMMT format more than those in the MC format. These results suggest that the new system is effective in reducing backward reasoning.


Over-Efficacy in Problem Solving and Overconfidence of Knowledge on Photosynthesis: A Study of Comparison Between Multiple-Choice and Supply-Type Test Formats (광합성 문제 해결에 대한 과잉 효능감과 과잉확신: 선다형과 서답형의 비교 연구)

  • Ha, Minsu;Lee, Jun-Ki
• Journal of The Korean Association For Science Education, v.34 no.1, pp.1-9, 2014
  • This study aimed to explore the over-efficacy in problem solving and overconfidence of knowledge of students performing assessments in two different test formats: multiple-choice and supply-type. Two hundred and four female middle school students participated in this study. Multiple-choice and supply-type formats of tests on photosynthesis were used, and each item contained scales indicating one's self-efficacy on problem-solving and confidence of knowledge. The results showed that the correlation coefficients of performance between the two different assessment formats were less than 0.5 and the correlation coefficients between efficacy/confidence and actual performance were less than 0.45. Moreover, students tended to exhibit more over-efficacy and overconfidence in multiple-choice formats. The percentage of over-efficacy and overconfidence was higher in the group that completed the multiple-choice test first followed by the supply-type assessment than in the group that started with the supply-type followed by the multiple-choice assessment. From this study, it can be suggested that more use of supply-type assessment is required in science education. If test administrators require the combination of both multiple-choice and supply-type in an assessment, the supply-type assessment format should come first so that students can maintain the appropriate level of efficacy and confidence. In addition, science educators need to develop new learning programs to enhance students' self-monitoring skills of their problem-solving ability and knowledge.

The Effect of Penalizing Wrong Answers Upon the Omission Response in the Computerized Modified Multiple-choice Testing (컴퓨터화 변형 선다형 시험 방식에서 감점제가 시험 점수와 반응 포기에 미치는 영향)

  • Song, Min Hae;Park, Jooyong
• Korean Journal of Cognitive Science, v.28 no.4, pp.315-328, 2017
• Even though assessment using information and communication technology will most likely lead the future of educational assessment, there is little domestic research on this topic. Computerized assessment will not only cut costs but also measure students' performance in ways not possible before. In this context, this study introduces a tool which can overcome the problems of multiple-choice tests, the most widely used type of assessment in the current Korean educational setting. Multiple-choice tests, in which options are presented with the questions, are efficient in that grading can be automated; however, they allow students who do not know the answer to find the correct answer from the options. Park (2005) developed a modified multiple-choice testing system (CMMT) using the interactivity of computers, which presents questions first, and options later for a short time when the student requests them. The present study was conducted to find out whether penalizing wrong answers could lower the possibility of students choosing an answer among the options when they do not know the correct answer. 116 students were tested with the directions that they would be penalized for wrong answers, but not for no response. There were four experimental conditions: two conditions of a high or low percentage of penalizing, each in the traditional multiple-choice or CMMT format. The results were analyzed using a two-way ANOVA for the number of no responses, the test score, and the self-report score. The analysis showed that the number of no responses was significantly higher for the CMMT format and that test scores were significantly lower when the penalizing percentage was high. The possibility of applying CMMT format tests while penalizing wrong answers in actual testing settings was addressed. In addition, the need for further research in the cognitive sciences to develop computerized assessment tools was discussed.
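The incentive structure behind penalizing wrong answers but not omissions can be made concrete with a small expected-value sketch. The numbers below are illustrative only, not the penalty percentages used in the study: a wrong answer costs `penalty` points, an omission costs nothing, and a correct answer earns one point.

```python
def expected_gain(k, penalty, eliminated=0):
    """Expected score change from guessing on a k-option item after
    eliminating `eliminated` options, when a wrong answer costs
    `penalty` points and an omission scores zero."""
    remaining = k - eliminated
    p_correct = 1 / remaining
    return p_correct * 1 - (1 - p_correct) * penalty

# The classic formula-scoring penalty 1/(k-1) makes blind guessing
# exactly break even with omitting on a 5-option item:
assert abs(expected_gain(k=5, penalty=0.25)) < 1e-12

# A harsher penalty makes blind guessing strictly worse than omitting ...
assert expected_gain(k=5, penalty=0.5) < 0

# ... unless the examinee has partial knowledge (here, 3 options ruled out).
assert expected_gain(k=5, penalty=0.5, eliminated=3) > 0
```

Under such a rule, omitting becomes the rational response to a completely unknown item, which is consistent with the study's finding of more "no response" answers, and lower scores, under a high penalty.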

Applying Information and Communication Technology for Advancing Educational Assessment (교육 평가의 혁신을 위한 테크놀러지의 활용)

  • Park, Joo-Yong
• Proceedings of the Korean Society for Cognitive Science Conference, 2005.05a, pp.112-121, 2005
• Assessment, together with instruction, plays a key role in achieving educational goals. Compared with the pace of development in teaching methods, however, assessment has advanced quite slowly, and it is viewed negatively by teachers as well as students. This paper emphasizes that tests not only motivate students but are themselves important learning experiences, and introduces technology-based assessment methods for promoting assessment for learning. Technology-based assessment techniques that have been developed and studied to some degree are grouped into performance assessment and improvements on the multiple-choice format. For performance assessment, essay scoring, knowledge mapping, and assessment using learner models are described; as improvements on the multiple-choice format, computerized adaptive testing, multiple assessment, and the modified multiple-choice format are detailed. The conclusion emphasizes that, to maximize the educational effect of assessment, it is important both to change the attitudes of teachers and students toward assessment and to develop and disseminate computer-based assessment techniques that can easily be used in actual classrooms.


Effect of Guessing on the Correct Answer in a Multiple Choice (객관식 선다형문항에서 추측이 정답에 미치는 영향)

  • Kwon, Boseob
• The Journal of Korean Association of Computer Education, v.23 no.1, pp.29-36, 2020
• Various item types have been used as evaluation tools to identify students' abilities accurately and confirm the completion of learning. Among them, the multiple choice item has the advantages of high objectivity and reliability in scoring, but it cannot remove the factor of guessing. In this paper, multiple choice items are classified into two types according to the relationship between the question and the choices. One is the type used in classical test theory, with a guessing probability of 1/k for k choices; the other is a newly proposed type which introduces the concept of partial knowledge. In the proposed type, the probability of guessing the correct answer when the amount of partial knowledge is i is (i+1)/k for k choices. Based on the assumptions of previous theories about multiple choice items, we derive the guessing parameter for the proposed type, and we analyze the effect of guessing on the correct answer in both the existing type and the proposed type. The analysis shows that guessing contributes more to the correct answer in the proposed type than in the existing type.
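The two guessing models the abstract contrasts can be written down directly. The formulas 1/k and (i+1)/k are taken from the abstract; the interpretation of i as a discrete amount of partial knowledge (0 meaning none) is a reading of the abstract, not a detail it spells out.

```python
def p_guess_classical(k):
    """Classical test theory: a blind guess among k choices."""
    return 1 / k

def p_guess_partial(k, i):
    """Proposed type: with partial knowledge i, the probability of
    guessing the correct answer is (i+1)/k for k choices."""
    assert 0 <= i < k
    return (i + 1) / k

k = 5
for i in range(k):
    # With no partial knowledge (i = 0) the two models agree;
    # any partial knowledge raises the guessing probability,
    # matching the abstract's conclusion that guessing contributes
    # more to the correct answer under the proposed type.
    assert p_guess_partial(k, i) >= p_guess_classical(k)
```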

Making Good Multiple Choice Problems at College Mathematics Classes (대학수학에서 바람직한 선다형문제 만들기)

  • Kim, Byung-Moo
• Communications of Mathematical Education, v.22 no.4, pp.489-503, 2008
• It is not an easy matter to develop problems which help students understand mathematical concepts correctly and precisely. The aim of this paper is to review the merits and demerits of three problem types (i.e., one-answer problems, multiple choice problems, and proof problems) and to suggest some points that should be taken into consideration in problem making. First, we presented the merits and demerits of the three types of problems by examining actual examples. Second, we discussed some examples of misleading problems and ways to make desirable ones. Finally, on the basis of our examination and discussion, we suggested some points that should be kept in mind in problem making. The major suggestions are as follows: i) in making one-answer problems, we should consider the possibility of reaching the correct answer by a wrong process; ii) in formulating multiple choice tests, which are favored for their ease of grading, we should take into account the importance of checking whether the students fully understand the concepts; iii) we may rely on the previous research result that multiple choice tests for proof problems can be helpful for students who have an insufficient math background. Besides those suggestions, we made an overall proposal that we should endeavor to find ways to remedy the demerits of each problem type and to develop instructive problems that can help students' understanding of math.


A Usability Test of a New Computerized Open-ended Math Testing System for Elementary School Students (초등학생용 컴퓨터화 개방형 수학 시험 방식의 사용가능성 검증)

  • Park, Joo-Yong;Kim, Yong-Guk
• Korean Journal of Cognitive Science, v.21 no.2, pp.283-307, 2010
• In this study, a new open-ended format math testing system for elementary school students is proposed. This system is an application to math testing of the recently proposed Constructive Multiple-choice Testing (CMT) system, in which the examinee responds to each item twice, first in an open-ended format and then in the multiple choice format. The advantages of this system are that process information can easily be obtained and that the examinee can receive feedback immediately after the test, based on his/her multiple choice responses. The open-ended format math testing system includes a manager mode, which allows the generation of test items and student account management, and a testing mode, which allows the students to input their solution process using the menu bar and the keyboard. When two groups, one tested using the CMT system and the other using a paper-and-pencil test, were compared, there was no significant difference in average scores, although the testing time was longer for the CMT group. This result suggests that the open-ended format math testing system proposed in this study can be used effectively in the actual classroom setting.


A Difference of Identifying Variable Skills Assessment between Performance and Multiple Choice Items (수행평가와 선다형 지필 평가에 의한 변인확인 능력 평가의 차이)

  • Lee, Hang-Ro
• Journal of The Korean Association For Science Education, v.19 no.1, pp.146-158, 1999
• Since the 1960s, the aims of science education have shifted from the attainment of scientific concepts, principles, and laws to the improvement of science process (or inquiry) skills. In line with this philosophy of science education, Korea has adopted the improvement and evaluation of science process (or inquiry) skills in science education. The purpose of this study was to examine the performance of high school students on four types of multiple choice items used to assess students' ability to identify independent and dependent variables. Stimulus materials were either a question focusing on the relationship between two variables, a hypothesis, a description of an experiment, or a description of the results of an experiment. Student performance on these item types was compared with their performance on a standard Piagetian interview task of variable identification. The results of the study included: (1) the "hypothesis" type was the most difficult, while the "question" type appeared to be the easiest; (2) the "procedure" item type had a higher correlation with the total interview than any other item type. Among the conclusions reached in this study was that although all four item types operated similarly, they did not correlate very highly with the performance assessment by interview.
