• Title/Summary/Keyword: automated scoring model (자동 채점 모델)

Automated Scoring of Argumentation Levels and Analysis of Argumentation Patterns Using Machine Learning (기계 학습을 활용한 논증 수준 자동 채점 및 논증 패턴 분석)

  • Lee, Manhyoung; Ryu, Suna
    • Journal of The Korean Association For Science Education / v.41 no.3 / pp.203-220 / 2021
  • We explored methods for improving the performance of automated scoring for scientific argumentation and analyzed argumentation patterns using the automated scoring models. For this purpose, we assessed the level of argumentation in students' scientific discourse in classrooms. The dataset consists of four units of argumentation features and argumentation levels for episodes. We utilized argumentation clusters and n-grams to enhance automated scoring accuracy. We used three supervised learning algorithms, resulting in 33 automated scoring models. The automated scoring achieved an accuracy of 77.59% on average and up to 85.37%. In this process, we found that argumentation cluster patterns could enhance automated scoring accuracy. We then analyzed argumentation patterns using the decision tree and random forest models. Our results were consistent with previous research in which justification, in coordination with claim and evidence, determines the quality of scientific argumentation. Our method suggests a novel approach for analyzing the quality of scientific argumentation in classrooms.
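For orientation, the following is a minimal sketch (not the authors' code) of how n-gram features could feed one of the named supervised algorithms, a random forest; the episodes, level labels, and hyperparameters are hypothetical placeholders.

```python
# Minimal sketch: scoring argumentation levels from n-gram features with a
# random forest. All data below are toy placeholders, not classroom transcripts.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

episodes = [
    "claim evidence justification",            # hypothetical per-episode features
    "claim only",
    "claim evidence",
    "claim evidence justification rebuttal",
]
levels = [3, 1, 2, 4]                          # hypothetical argumentation levels

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),       # unigram + bigram (n-gram) features
    RandomForestClassifier(n_estimators=100, random_state=0),
)
model.fit(episodes, levels)
print(model.predict(["claim evidence justification"]))
```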

Automated Scoring of Scientific Argumentation Using Expert Morpheme Classification Approaches (전문가의 형태소 분류를 활용한 과학 논증 자동 채점)

  • Lee, Manhyoung; Ryu, Suna
    • Journal of The Korean Association For Science Education / v.40 no.3 / pp.321-336 / 2020
  • We explore automated scoring models of scientific argumentation and consider how a new analytical approach using machine learning techniques may enhance the understanding of spoken argumentation in the classroom. We sampled 2,605 utterances that occurred during a high school science class on molecular structure and classified the utterances into five argumentative elements. Next, we performed text preprocessing on the classified utterances. As machine learning techniques, we applied support vector machines, decision trees, random forests, and artificial neural networks. To enhance the identification of rebuttal elements, we used a heuristic feature-engineering method that applies experts' classification of the morphemes of scientific argumentation.
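One way to read the heuristic feature-engineering idea is as an extra binary feature derived from an expert morpheme list, appended to ordinary lexical features. The sketch below assumes exactly that; the morpheme list, utterances, and labels are toy placeholders (English stands in for the Korean morphemes).

```python
# Minimal sketch: an expert-derived "rebuttal morpheme" flag appended to
# bag-of-words features before training an SVM, one of the four algorithms named.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

REBUTTAL_MORPHEMES = {"but", "however", "disagree"}   # hypothetical expert list

utterances = [
    "i disagree because the model ignores polarity",
    "the bond angle of water is 104.5 degrees",
    "but that claim ignores the lone pairs",
    "water is a polar molecule",
]
labels = ["rebuttal", "claim", "rebuttal", "claim"]

vec = CountVectorizer()
X_bow = vec.fit_transform(utterances).toarray()
# Heuristic feature: does the utterance contain an expert-classified morpheme?
flags = np.array([[any(m in u.split() for m in REBUTTAL_MORPHEMES)]
                  for u in utterances], dtype=float)
X = np.hstack([X_bow, flags])

clf = SVC(kernel="linear").fit(X, labels)
```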

The Automated Scoring of Kinematics Graph Answers through the Design and Application of a Convolutional Neural Network-Based Scoring Model (합성곱 신경망 기반 채점 모델 설계 및 적용을 통한 운동학 그래프 답안 자동 채점)

  • Jae-Sang Han; Hyun-Joo Kim
    • Journal of The Korean Association For Science Education / v.43 no.3 / pp.237-251 / 2023
  • This study explores the possibility of automated scoring of scientific graph answers by designing an automated scoring model using convolutional neural networks and applying it to students' kinematics graph answers. The researchers prepared 2,200 answers, divided into 2,000 training data and 200 validation data; additionally, 202 student answers were divided into 100 training data and 102 test data. First, in designing the automated scoring model and validating its performance, the model was optimized for graph image classification using the answer dataset prepared by the researchers. Next, the model was trained on various types of training datasets and used to score the student test dataset. The performance of the automated scoring model improved as the training data grew in amount and diversity. Finally, compared to human scoring, the accuracy was 97.06%, the kappa coefficient was 0.957, and the weighted kappa coefficient was 0.968. On the other hand, for answer types not included in the training data, scoring was almost identical among human scorers, whereas the automated scoring model performed inaccurately.
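As a rough illustration of a CNN-based graph-answer classifier, here is a minimal PyTorch sketch. The paper's actual architecture, image size, and class count are not given above, so the 64x64 grayscale input, the 5 answer categories, and the `GraphScorer` name are all assumptions.

```python
# Minimal sketch of a CNN that classifies graph-answer images into categories.
import torch
import torch.nn as nn

class GraphScorer(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                      # x: (batch, 1, 64, 64)
        h = self.features(x)
        return self.classifier(h.flatten(1))   # logits per answer category

logits = GraphScorer()(torch.randn(4, 1, 64, 64))   # four dummy graph images
```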

Design and Implementation of an Automatic Scoring Model Using a Voting Method for Descriptive Answers (투표 기반 서술형 주관식 답안 자동 채점 모델의 설계 및 구현)

  • Heo, Jeongman; Park, So-Young
    • Journal of the Korea Society of Computer and Information / v.18 no.8 / pp.17-25 / 2013
  • In this paper, we propose a model that automatically scores a student's answer to a descriptive problem by using a voting method. Considering the model construction cost, the proposed model does not construct a separate automatic scoring model per problem type. In order to utilize features useful for automatically scoring descriptive answers, the proposed model extracts feature values from the results generated by comparing the student's answer with the answer sheet. To improve the precision of the scoring result, the proposed model collects the scoring results of several machine learning based classifiers and selects a scoring result as final only when the classifiers are unanimous. Experimental results show that the single machine learning based classifier C4.5 achieves 83.00% precision, while the proposed model improves precision to 90.57% by using the three classifiers C4.5, ME, and SVM.
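A minimal sketch of the unanimous-voting idea follows: a score is adopted only when all classifiers agree, otherwise the answer is routed to a human grader. C4.5 and ME (maximum entropy) are approximated here by scikit-learn's CART decision tree and logistic regression, since scikit-learn ships neither under those names; `unanimous_score` is a hypothetical helper.

```python
# Minimal sketch of unanimous voting across three classifiers.
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def unanimous_score(X_train, y_train, X_new):
    clfs = [DecisionTreeClassifier(random_state=0),   # stand-in for C4.5
            LogisticRegression(max_iter=1000),        # stand-in for ME
            SVC()]
    preds = [c.fit(X_train, y_train).predict(X_new) for c in clfs]
    # None marks a disagreement, i.e., an answer left for manual scoring.
    return [p[0] if p[0] == p[1] == p[2] else None for p in zip(*preds)]
```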

Scoring Korean Written Responses Using English-Based Automated Computer Scoring Models and Machine Translation: A Case of Natural Selection Concept Test (영어기반 컴퓨터자동채점모델과 기계번역을 활용한 서술형 한국어 응답 채점 -자연선택개념평가 사례-)

  • Ha, Minsu
    • Journal of The Korean Association For Science Education / v.36 no.3 / pp.389-397 / 2016
  • This study tests the efficacy of English-based automated computer scoring models and machine translation for scoring Korean college students' written responses to natural selection concept items. To this end, I collected 128 pre-service biology teachers' written responses to a four-item instrument (512 written responses in total). Machine translation software (i.e., Google Translate) translated both the original responses and spell-corrected responses. The presence/absence of five scientific ideas and three naïve ideas in both sets of translated responses was judged by the automated computer scoring models (i.e., EvoGrader). The computer-scored results (4,096 predictions) were compared with expert-scored results. No significant differences were found between computer-scored and expert-scored results, either in average scores or in statistical analyses based on those scores. The Pearson correlation coefficients of composite scores for each student between computer and expert scoring were 0.848 for scientific ideas and 0.776 for naïve ideas. The inter-rater reliability indices (Cohen's kappa) between computer and expert scoring for linguistically simple concepts (e.g., variation, competition, and limited resources) were over 0.8. These findings suggest that English-based automated computer scoring models combined with machine translation can be a promising method for scoring Korean college students' written responses on natural selection concept items.
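The agreement statistics reported above can be reproduced with standard library calls; this minimal sketch computes Cohen's kappa on per-idea presence/absence judgments and Pearson r on composite scores, with hypothetical score arrays standing in for the study's data.

```python
# Minimal sketch of the computer-vs-expert agreement checks.
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

computer_idea = [1, 0, 1, 1, 0, 1]   # presence/absence of one scientific idea
expert_idea   = [1, 0, 1, 0, 0, 1]
print(cohen_kappa_score(computer_idea, expert_idea))

computer_total = [3, 1, 4, 2]        # per-student composite scores
expert_total   = [3, 2, 4, 2]
r, _ = pearsonr(computer_total, expert_total)
print(r)
```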

Validity Analysis of Python Automatic Scoring Exercise-Problems using Machine Learning Models (머신러닝 모델을 이용한 파이썬 자동채점 연습문제의 타당성 분석)

  • Kyeong Hur
    • Journal of Practical Engineering Education / v.15 no.1 / pp.193-198 / 2023
  • This paper analyzes the validity of the exercise problems for each unit in Python programming education. Practice questions for each unit are presented through an online learning system; each student uploads answer code, which is automatically graded. Data such as students' midterm exam scores, final exam scores, and per-unit practice scores were collected over a one-semester Python course. With the collected data, the per-unit exercise problems can be improved by analyzing the validity of the automatically scored exercises. In this paper, the Orange machine learning tool was used for this analysis. The data collected in the Python course were analyzed and compared comprehensively across the total, top, and bottom groups. The validity of each unit's automatically scored exercises was analyzed from the prediction accuracy of a machine learning model that predicts a student's final grade from the unit-by-unit practice problem scores.
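The validity check amounts to asking how accurately per-unit scores predict the final grade. Orange is a GUI tool, so scikit-learn stands in for it in this minimal sketch; the score matrix and grades are hypothetical.

```python
# Minimal sketch: cross-validated grade prediction from per-unit exercise scores.
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

unit_scores = [[90, 80, 70], [50, 60, 40], [85, 90, 95], [30, 40, 50],
               [70, 75, 65], [95, 85, 90], [45, 50, 55], [60, 70, 80]]
final_grade = ["A", "B", "A", "D", "B", "A", "D", "B"]

acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                      unit_scores, final_grade, cv=2)
print(acc.mean())   # higher accuracy suggests the exercises are valid predictors
```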

An English Essay Scoring System Based on Grammaticality and Lexical Cohesion (문법성과 어휘 응집성 기반의 영어 작문 평가 시스템)

  • Kim, Dong-Sung; Kim, Sang-Chul; Chae, Hee-Rahk
    • Korean Journal of Cognitive Science / v.19 no.3 / pp.223-255 / 2008
  • In this paper, we introduce an automatic system for scoring English essays. The system comprises three main components: a spelling checker, a grammar checker, and a lexical cohesion checker. We used resources such as WordNet, the Link Grammar parser, and Roget's thesaurus for these components. The usefulness of an automatic scoring system depends on its reliability. To measure reliability, we compared the results of automatic scoring with those of manual scoring, on the basis of Kappa statistics and the Multi-facet Rasch Model. The statistical data obtained from the comparison showed that the scoring system is as reliable as professional human graders. The system deals with textual units rather than sentential units and checks not only the formal properties of a text but also its contents.
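To make the lexical-cohesion component concrete, here is a minimal sketch of one cohesion signal in its spirit: average WordNet path similarity between words of adjacent sentences. This is an illustration, not the authors' component; the `cohesion` helper is hypothetical and requires NLTK's WordNet data (`nltk.download('wordnet')`).

```python
# Minimal sketch of a WordNet-based lexical cohesion score for adjacent sentences.
from itertools import product
from nltk.corpus import wordnet as wn

def cohesion(sent_a, sent_b):
    sims = []
    for w1, w2 in product(sent_a.split(), sent_b.split()):
        s1, s2 = wn.synsets(w1), wn.synsets(w2)
        if s1 and s2:
            sim = s1[0].path_similarity(s2[0])   # may be None across POS
            if sim is not None:
                sims.append(sim)
    return sum(sims) / len(sims) if sims else 0.0

print(cohesion("the cat sat quietly", "a dog slept nearby"))
```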

Subjective Tests Sub-System Applied with Generalized Vector Space Model (일반화된 벡터 공간 모델을 적용한 주관식 문제 채점 보조 시스템)

  • Oh, Jung-Seok; Chu, Seung-Woo; Kim, Yu-Seop; Lee, Jae-Young
    • Proceedings of the Korea Information Processing Society Conference / 2004.05a / pp.965-968 / 2004
  • Previous scoring-support systems for subjective (free-text) questions, owing to the difficulty of natural language processing, could not automate scoring and merely forwarded answers to graders, for instance by e-mail. To address this problem, this paper defines the problem space as a vector space and applies a method that considers the correlations between the features composing each vector. First, on the assumption that learners use synonyms when writing answers, the question author writes several model answers; these are added to a corpus, and index terms are extracted with a morphological analyzer. Index terms are likewise extracted from each learner's answer, and vectors are built with these index terms as features. Using these vectors, the similarity between answers is measured, and the proposed system automatically classifies answers as correct or incorrect according to similarity ranges. Experiments on 170 subjective questions with the proposed method showed improved performance and reliability over the existing model.
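The generalized vector space model (GVSM) weighs document similarity by a term-term correlation matrix estimated from a corpus, so a student answer can match a model answer through related terms. The sketch below assumes a simple co-occurrence estimate of that matrix; the tiny corpus, the answers, and the `gvsm_sim` helper are hypothetical.

```python
# Minimal sketch of GVSM similarity between a student answer and a model answer.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["photosynthesis makes glucose",
          "plants make sugar by photosynthesis",
          "glucose is a sugar"]
vec = CountVectorizer()
D = vec.fit_transform(corpus).toarray().astype(float)
G = D.T @ D                                   # term-term correlation matrix

def gvsm_sim(a, b):
    x, y = (vec.transform([t]).toarray().ravel() for t in (a, b))
    den = np.sqrt(x @ G @ x) * np.sqrt(y @ G @ y)
    return (x @ G @ y) / den if den else 0.0

print(gvsm_sim("plants make glucose", "photosynthesis makes sugar"))
```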

Semi-Automatic Scoring for Short Korean Free-Text Responses Using Semi-Supervised Learning (준지도학습 방법을 이용한 한국어 서답형 문항 반자동 채점)

  • Cheon, Min-Ah; Seo, Hyeong-Won; Kim, Jae-Hoon; Noh, Eun-Hee; Sung, Kyung-Hee; Lim, EunYoung
    • Korean Journal of Cognitive Science / v.26 no.2 / pp.147-165 / 2015
  • Short-answer questions can reveal the depth of students' understanding and their higher-order thinking skills, but scoring them can take a long time and can raise consistency issues. To alleviate these problems, automated scoring systems are widely used in Europe and America, but research in Korea is still at an early stage. In this paper, we propose a semi-automatic scoring system for short Korean free-text responses using semi-supervised learning. First, based on the similarity between students' answers and model answers, the system grades the students' answers; answers scored with high reliability are added to the model-answer pool after thorough testing. This process repeats until all answers are scored. The proposed system was used experimentally on Korean language and social studies items in the Nationwide Scholastic Achievement Test. We confirmed promising improvements in processing time and grading consistency. Before applying the system in schools, various assessment methods should be developed and comparative studies performed.
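The semi-supervised loop described above can be sketched as self-training: answers whose best similarity to a model answer clears a reliability threshold are auto-scored and added to the model-answer pool, and leftovers go to human graders. In this minimal sketch the similarity measure, the threshold, and the `semi_auto_score` helper are placeholders for the paper's own components.

```python
# Minimal sketch of iterative semi-automatic scoring against a growing answer pool.
from difflib import SequenceMatcher

def similarity(a, b):                       # stand-in similarity measure
    return SequenceMatcher(None, a, b).ratio()

def semi_auto_score(student_answers, model_answers, threshold=0.8):
    scored, pending = {}, list(student_answers)
    while pending:
        progressed = False
        for ans in pending[:]:
            best_text, best_score = max(
                model_answers, key=lambda m: similarity(ans, m[0]))
            if similarity(ans, best_text) >= threshold:
                scored[ans] = best_score
                model_answers.append((ans, best_score))   # grow the answer pool
                pending.remove(ans)
                progressed = True
        if not progressed:                  # remaining answers need manual scoring
            break
    return scored, pending
```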

An Automated Essay Scoring Pipeline Model based on Deep Neural Networks Reflecting Argumentation Structure Information (논증 구조 정보를 반영한 심층 신경망 기반 에세이 자동 평가 파이프라인 모델)

  • Yejin Lee; Youngjin Jang; Tae-il Kim; Sung-Won Choi; Harksoo Kim
    • Annual Conference on Human and Language Technology / 2022.10a / pp.354-359 / 2022
  • Automated essay scoring is the task of reading a given essay and evaluating it automatically. For an effective automated essay scoring model, this paper proposes a method that uses Argument Mining to build essay representations reflecting the essay's argumentation structure, and that learns a separate representation for each scoring rubric item. Experiments show that the proposed essay representation outperforms representations obtained from a pre-trained language model, and that learning a distinct representation per rubric item is more effective for essay evaluation. The final proposed model improves performance from 0.543 to 0.627 in QWK, agreeing substantially with human evaluation.
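The reported metric, quadratic weighted kappa (QWK), is directly available in scikit-learn; this minimal sketch computes it between hypothetical model and human score lists.

```python
# Minimal sketch: QWK agreement between model scores and human scores.
from sklearn.metrics import cohen_kappa_score

human = [2, 3, 4, 1, 3, 2]
model = [2, 3, 3, 1, 4, 2]
print(cohen_kappa_score(human, model, weights="quadratic"))
```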
