• Title/Summary/Keyword: Automatic Scoring Model

Design and Implementation of an Automatic Scoring Model Using a Voting Method for Descriptive Answers (투표 기반 서술형 주관식 답안 자동 채점 모델의 설계 및 구현)

  • Heo, Jeongman;Park, So-Young
    • Journal of the Korea Society of Computer and Information, v.18 no.8, pp.17-25, 2013
  • In this paper, we propose a model that automatically scores a student's answer to a descriptive problem by using a voting method. Considering the model construction cost, the proposed model does not construct a separate automatic scoring model per problem type. In order to utilize features useful for automatically scoring descriptive answers, the proposed model extracts feature values from the results generated by comparing the student's answer with the answer sheet. To improve the precision of the scoring result, the proposed model collects the scoring results produced by several machine-learning-based classifiers and adopts a result as final only when the classifiers agree unanimously. Experimental results show that the single classifier C4.5 achieves 83.00% precision, while the proposed model improves precision to 90.57% by combining three classifiers: C4.5, ME, and SVM.
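The unanimous-voting idea in the abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: synthetic data stands in for the answer-comparison features, and scikit-learn's `DecisionTreeClassifier`, `LogisticRegression`, and `SVC` stand in for C4.5, ME (maximum entropy), and SVM.

```python
# Sketch of unanimous voting over three classifiers: a scoring result is
# accepted only when all three classifiers agree. Data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier      # stand-in for C4.5
from sklearn.linear_model import LogisticRegression  # stand-in for ME
from sklearn.svm import SVC                          # SVM

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clfs = [DecisionTreeClassifier(random_state=0),
        LogisticRegression(max_iter=1000),
        SVC()]
preds = np.array([c.fit(X_tr, y_tr).predict(X_te) for c in clfs])

# Keep only the answers on which all three classifiers agree.
unanimous = (preds[0] == preds[1]) & (preds[1] == preds[2])
precision = (preds[0][unanimous] == y_te[unanimous]).mean()
coverage = unanimous.mean()
```

Requiring unanimity trades coverage for precision: answers without agreement would fall back to manual scoring, which is the mechanism behind the reported precision gain.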

An English Essay Scoring System Based on Grammaticality and Lexical Cohesion (문법성과 어휘 응집성 기반의 영어 작문 평가 시스템)

  • Kim, Dong-Sung;Kim, Sang-Chul;Chae, Hee-Rahk
    • Korean Journal of Cognitive Science, v.19 no.3, pp.223-255, 2008
  • In this paper, we introduce an automatic system of scoring English essays. The system is comprised of three main components: a spelling checker, a grammar checker and a lexical cohesion checker. We have used such resources as WordNet, Link Grammar/parser and Roget's thesaurus for these components. The usefulness of an automatic scoring system depends on its reliability. To measure reliability, we compared the results of automatic scoring with those of manual scoring, on the basis of the Kappa statistics and the Multi-facet Rasch Model. The statistical data obtained from the comparison showed that the scoring system is as reliable as professional human graders. This system deals with textual units rather than sentential units and checks not only formal properties of a text but also its contents.
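The reliability check described above, comparing automatic with manual scoring via the Kappa statistic, can be illustrated briefly. The scores below are invented for demonstration, and scikit-learn's `cohen_kappa_score` is substituted for the authors' statistical setup (the Multi-facet Rasch Model part is not reproduced).

```python
# Illustrative reliability check: agreement between human and automatic
# essay scores beyond chance, measured by Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

human_scores = [3, 4, 2, 5, 3, 4, 2, 5, 1, 4]   # invented grader scores
auto_scores  = [3, 4, 2, 4, 3, 4, 2, 5, 1, 3]   # invented system scores

kappa = cohen_kappa_score(human_scores, auto_scores)
# kappa close to 1 indicates the system agrees with the human grader
# well beyond chance agreement
```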

A BERT-Based Automatic Scoring Model of Korean Language Learners' Essay

  • Lee, Jung Hee;Park, Ji Su;Shon, Jin Gon
    • Journal of Information Processing Systems, v.18 no.2, pp.282-291, 2022
  • This research applies a pre-trained bidirectional encoder representations from transformers (BERT) model to predict foreign Korean-language learners' writing scores. A corpus of 586 answers to midterm and final exams written by foreign learners at the Intermediate 1 level was acquired and used for pre-training, resulting in consistent performance even with small datasets. The test data were pre-processed, the model was fine-tuned, and the results were produced in the form of score predictions; the difference between each predicted and actual score was then calculated. An accuracy of 95.8% was demonstrated, indicating that the prediction results were strong overall; hence, the tool is suitable for the automatic scoring of Korean written test answers, including those with grammatical errors, written by foreigners. These results are particularly meaningful in that the data comprised written text produced by foreign learners, not native speakers.

Automatic scoring of mathematics descriptive assessment using random forest algorithm (랜덤 포레스트 알고리즘을 활용한 수학 서술형 자동 채점)

  • Inyong Choi;Hwa Kyung Kim;In Woo Chung;Min Ho Song
    • The Mathematical Education, v.63 no.2, pp.165-186, 2024
  • Despite growing attention to artificial-intelligence-based automated scoring technology as a way to support the introduction of descriptive items in school environments and large-scale assessments, there is a noticeable lack of foundational research in mathematics compared to other subjects. This study developed an automated scoring model for two descriptive items in first-year middle school mathematics using the Random Forest algorithm, evaluated its performance, and explored ways to enhance that performance. The accuracy of the final models for the two items was 0.95-1.00 and 0.73-0.89, respectively, which is relatively high compared to automated scoring models in other subjects. We found that strategically selecting the number of evaluation categories, taking the amount of data into account, is crucial for the effective development and performance of automated scoring models. Additionally, text preprocessing by mathematics education experts proved effective in improving both the performance and interpretability of the model. Selecting a vectorization method that matches the characteristics of the items and data was identified as another way to enhance performance. Furthermore, we confirmed that oversampling is a useful way to supplement performance when practical limitations hinder balanced data collection. To enhance educational utility, further research is needed on how to use the feature importance derived from the Random Forest-based model to generate information useful for teaching and learning, such as feedback. This study is significant as foundational research on the automatic scoring of descriptive mathematics assessment, and various follow-up studies through close collaboration between AI experts and mathematics education experts are needed.

Sleep Stage Scoring using Neural Network (신경 회로망을 사용한 수면 단계 분석)

  • Han, J.M.;Park, H.J.;Park, K.S.;Jeong, D.U.
    • Proceedings of the KOSOMBE Conference, v.1997 no.05, pp.395-397, 1997
  • We have applied the neural network method to the automatic scoring of sleep stages. Seventeen features are extracted from the recorded EEG, EOG and EMG signals. These features are input to a multilayer perceptron model, and the network is trained with the error back-propagation method. The results are compared with the manual scoring of experts and show the feasibility of automatic sleep stage scoring.
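The pipeline described (17 features from EEG/EOG/EMG fed to a multilayer perceptron trained by back-propagation) can be sketched as below. Random numbers stand in for the real polysomnography features and expert stage labels.

```python
# Toy sketch of MLP-based sleep staging: 17 features per epoch, a small
# hidden layer, back-propagation training, and agreement with "manual"
# labels. All data here is random and purely illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 17))           # 17 extracted features per epoch
stages = rng.integers(0, 5, size=300)    # 5 sleep stages (toy expert labels)

mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=1)
mlp.fit(X, stages)                       # trained by back-propagation
agreement = (mlp.predict(X) == stages).mean()  # vs. "manual scoring"
```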

Building a Sentential Model for Automatic Prosody Evaluation

  • Yoon, Kyu-Chul
    • Phonetics and Speech Sciences, v.1 no.4, pp.47-59, 2009
  • The purpose of this paper is to propose an automatic evaluation technique for the prosodic aspect of an English sentence uttered by Korean speakers learning English. The underlying hypothesis is that the consistency of the manual prosody scoring is reflected in an imaginary space of prosody evaluation model constructed out of the three physical properties of the prosody considered in this paper, namely: the fundamental frequency (F0) contour, the intensity contour, and the segmental durations. The evaluation proceeds first by building a prosody evaluation model for the sentence. For the creation of the model, utterances from native speakers of English and Korean learners for the target sentence are manually scored by either native teachers of English or Korean phoneticians in terms of their prosody. Multiple native utterances from the manual scoring are selected as the "model" native utterances against which all the other Korean learners' utterances as well as the model utterances themselves can be semi-automatically evaluated by comparison in terms of the three prosodic aspects [7]. Each learner utterance, when compared to the multiple model native utterances, produces multiple coordinates in a three-dimensional space of prosody evaluation, each axis of which corresponds to the three prosodic aspects. The 3D coordinates from all the comparisons form a prosody evaluation model for the particular sentence and the associated manual scores can display regions of particular scores. The model can then be used as a predictive model against which other Korean utterances of the target sentence can be evaluated. The model from a Korean phonetician appears to support the hypothesis.

Validity Analysis of Python Automatic Scoring Exercise-Problems using Machine Learning Models (머신러닝 모델을 이용한 파이썬 자동채점 연습문제의 타당성 분석)

  • Kyeong Hur
    • Journal of Practical Engineering Education, v.15 no.1, pp.193-198, 2023
  • This paper analyzed the validity of automatic scoring exercise problems for each unit in Python programming education. Practice questions for each unit are presented through an online learning system; each student uploads answer code, which is graded automatically. Data such as students' midterm exam scores, final exam scores, and per-unit practice question scores were collected over a semester-long Python course. With the collected data, the per-unit exercise problems can be improved by analyzing the validity of the automatic scoring exercises. In this paper, the Orange machine learning tool was used for the validity analysis. The data collected in the Python course were analyzed and compared comprehensively by total, top, and bottom groups. From the prediction accuracy of a machine learning model that predicts a student's final grade from the per-unit practice problem scores, the validity of the automatic scoring exercises for each unit was analyzed.
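The core of the validity analysis can be sketched as follows: if per-unit exercise scores predict the final grade well, the exercises are plausibly measuring what the exams measure. The paper uses the Orange tool; scikit-learn is substituted here, and all scores are fabricated for illustration.

```python
# Sketch of validity-as-predictiveness: cross-validated accuracy of a
# model predicting final pass/fail from per-unit exercise scores.
# The scores and the pass rule below are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
unit_scores = rng.uniform(0, 10, size=(120, 8))           # 8 units per student
final_grade = (unit_scores.mean(axis=1) > 5).astype(int)  # toy pass/fail label

acc = cross_val_score(LogisticRegression(max_iter=1000),
                      unit_scores, final_grade, cv=5).mean()
# high accuracy would suggest the per-unit exercises are valid predictors
```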

Scoring Methods for Improvement of Speech Recognizer Detecting Mispronunciation of Foreign Language (외국어 발화오류 검출 음성인식기의 성능 개선을 위한 스코어링 기법)

  • Kang Hyo-Won;Kwon Chul-Hong
    • MALSORI, no.49, pp.95-105, 2004
  • An automatic pronunciation correction system provides learners with correction guidelines for each mispronunciation. For this purpose, we develop a speech recognizer that automatically classifies pronunciation errors when Koreans speak a foreign language. To develop methods for the automatic assessment of pronunciation quality, we propose a language-model-based score as the machine score in the speech recognizer. Experimental results show that the language-model-based score has a higher correlation with human scores than the conventional log-likelihood-based score.

Generalized Partially Linear Additive Models for Credit Scoring

  • Shim, Ju-Hyun;Lee, Young-K.
    • The Korean Journal of Applied Statistics, v.24 no.4, pp.587-595, 2011
  • Credit scoring is an objective and automatic system for assessing the credit risk of each customer. The logistic regression model is one of the popular credit scoring methods for predicting the default probability; however, despite its advantages of interpretability and low computational cost, it may fail to detect nonlinear effects of the predictors. In this paper, we propose using a generalized partially linear model as an alternative to logistic regression. We also introduce modern ensemble techniques such as bagging, boosting and random forests. We compare these methods via a simulation study and illustrate them on a German credit dataset.
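A sketch of the kind of comparison the abstract reports: plain logistic regression vs. an ensemble method (random forest) on a binary default task. Synthetic data with a deliberately nonlinear predictor effect replaces the German credit dataset, so the numbers are illustrative only.

```python
# Logistic regression vs. random forest on a toy credit-default task
# where the first predictor acts nonlinearly (a squared effect that a
# plain linear-logit model cannot represent).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))
# default probability with a nonlinear effect of the first predictor
p = 1 / (1 + np.exp(-(X[:, 0] ** 2 - 1 + X[:, 1])))
y = (rng.uniform(size=1000) < p).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
acc_lr = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
acc_rf = RandomForestClassifier(random_state=3).fit(X_tr, y_tr).score(X_te, y_te)
```

On data like this the ensemble can pick up the squared effect that the linear logit misses, which is the motivation the abstract gives for moving beyond plain logistic regression.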

Credit Scoring Using Splines (스플라인을 이용한 신용 평점화)

  • Koo Ja-Yong;Choi Daewoo;Choi Min-Sung
    • The Korean Journal of Applied Statistics, v.18 no.3, pp.543-553, 2005
  • Linear logistic regression is one of the most widely used methods for credit scoring in credit risk management. This paper deals with credit scoring using splines based on logistic regression. Linear splines and an automatic basis selection algorithm are adopted. The final model is an example of the generalized additive model. An experiment on a real data set illustrates the performance of the spline method.
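The linear-spline construction mentioned above can be sketched directly: expand a predictor into a truncated linear (hinge) basis and fit a logistic regression on the expanded design matrix. The knot locations and the simulated data are illustrative assumptions; the paper's automatic basis selection is not reproduced here.

```python
# Minimal linear-spline logistic credit scoring sketch: one predictor,
# fixed knots, hinge basis, logistic fit on the expanded design matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
income = rng.uniform(0, 10, 500)                 # toy predictor
# simulated default probability decreasing smoothly with income
p_default = 1 / (1 + np.exp(income - 4))
default = (rng.uniform(size=500) < p_default).astype(int)

def linear_spline_basis(x, knots):
    # columns: x itself plus hinge terms max(0, x - k) for each knot
    return np.column_stack([x] + [np.maximum(0.0, x - k) for k in knots])

X = linear_spline_basis(income, knots=[2.5, 5.0, 7.5])
model = LogisticRegression(max_iter=1000).fit(X, default)
acc = model.score(X, default)
```

Because each hinge column is linear in the coefficients, the fitted model stays a logistic regression while the score function in `income` becomes piecewise linear, i.e. an instance of a generalized additive model.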