• Title/Summary/Keyword: automatic scoring

Search Results: 73

A Study on an Automatic Scoring System for Short-Answer Remote Exams for Grade Management at Cyber Universities (사이버대학 성적관리를 위한 원격시험 주관식 자동채점 시스템 연구)

  • Park, Ki-Hong;Jang, Hae-Sook
    • Proceedings of the Korean Society of Computer Information Conference / 2014.01a / pp.117-118 / 2014
  • We studied an automatic scoring system for short-answer questions on remote exams, for managing student grades at cyber universities, where classes between instructors and learners take place in a virtual space (cyber-space) formed using information and communication technology, multimedia technology, and related software.

  • PDF

Implementation of OMR Answer Paper Scoring Method Using Image Processing Method (영상처리기법을 활용한 OMR 답안지 채점방법의 구현)

  • Kwon, Hiok-Han;Hwang, Gi-Hyun
    • Journal of the Institute of Convergence Signal Processing / v.12 no.3 / pp.169-175 / 2011
  • In this paper, an automatic scoring system for OMR answer sheets is implemented using gray-scale conversion and image segmentation. The proposed method extracts the OMR data for the multiple-choice answers from a captured image. In addition, an on-line scoring system is developed to mark the short-answer questions on the reverse side, so teachers can grade short-answer questions anytime and anywhere within the available time. The method has the advantage of grading multiple-choice answer sheets without an additional OMR reader. In the future, grading of short-answer questions will be more efficient if it is also performed by an automatic scoring system based on image processing.
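The gray-scale approach described in this abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: an OMR answer region is modeled as a grid of cell intensities (0 = black, 255 = white), and a cell counts as "marked" when its intensity falls below an assumed fill threshold.

```python
THRESHOLD = 128  # assumed binarization cutoff; a real system would calibrate this


def read_omr_row(cells):
    """Return the index of the marked choice in one question row,
    or None if no cell is dark enough to count as filled."""
    marked = [i for i, intensity in enumerate(cells) if intensity < THRESHOLD]
    return marked[0] if marked else None


def score_sheet(rows, answer_key):
    """Compare detected answers against the key; one point per match."""
    detected = [read_omr_row(row) for row in rows]
    return sum(1 for got, want in zip(detected, answer_key) if got == want)
```

A row such as `[250, 30, 240, 245]` would be read as choice 1 (the dark cell); a real pipeline would first segment these cell regions out of the captured image.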

Validity Analysis of Python Automatic Scoring Exercise-Problems using Machine Learning Models (머신러닝 모델을 이용한 파이썬 자동채점 연습문제의 타당성 분석)

  • Kyeong Hur
    • Journal of Practical Engineering Education / v.15 no.1 / pp.193-198 / 2023
  • This paper analyzed the validity of the exercise problems for each unit in Python programming education. The practice questions for each unit are presented through an online learning system; each student uploads an answer code, which is automatically graded. Data such as students' midterm exam scores, final exam scores, and per-unit practice scores were collected over a one-semester Python course. With the collected data, the per-unit exercise problems can be improved by analyzing the validity of the automatically scored exercises. In this paper, the Orange machine learning tool was used for the validity analysis. The collected data were analyzed and compared comprehensively across the total, top, and bottom groups. From the prediction accuracy of a machine learning model that predicts a student's final grade from the per-unit practice scores, the validity of the automatically scored exercises for each unit was analyzed.
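One simple form of the validity check this abstract describes can be sketched by correlating each unit's exercise scores with the final-exam scores: units whose automatic scores track the final grade are better validity candidates. The unit names and data below are illustrative, not from the paper, and this item-total correlation stands in for the paper's model-based analysis.

```python
from math import sqrt


def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


def unit_validity(unit_scores, final_scores):
    """Map each unit name to the correlation of its exercise scores
    with the final-exam scores."""
    return {unit: pearson(scores, final_scores)
            for unit, scores in unit_scores.items()}
```

A unit whose correlation is near zero (or negative) would be a candidate for revision under this reading.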

Automatic scoring of mathematics descriptive assessment using random forest algorithm (랜덤 포레스트 알고리즘을 활용한 수학 서술형 자동 채점)

  • Inyong Choi;Hwa Kyung Kim;In Woo Chung;Min Ho Song
    • The Mathematical Education / v.63 no.2 / pp.165-186 / 2024
  • Despite the growing attention on artificial intelligence-based automated scoring technology as a support method for the introduction of descriptive items in school environments and large-scale assessments, there is a noticeable lack of foundational research in mathematics compared to other subjects. This study developed an automated scoring model for two descriptive items in first-year middle school mathematics using the Random Forest algorithm, evaluated its performance, and explored ways to enhance that performance. The accuracy of the final models for the two items was between 0.95 and 1.00 and between 0.73 and 0.89, respectively, which is relatively high compared to automated scoring models in other subjects. We found that strategic selection of the number of evaluation categories, taking into account the amount of data, is crucial for the effective development and performance of automated scoring models. Additionally, text preprocessing by mathematics education experts proved effective in improving both the performance and interpretability of the automated scoring model. Selecting a vectorization method that matches the characteristics of the items and data was identified as another way to enhance model performance. Furthermore, we confirmed that oversampling is a useful method to supplement performance in situations where practical limitations hinder balanced data collection. To enhance educational utility, further research is needed on how to utilize feature importance derived from the Random Forest-based automated scoring model to generate useful information for teaching and learning, such as feedback. This study is significant as foundational research in the field of automatic scoring of mathematics descriptive assessment, and various follow-up studies through close collaboration between AI experts and mathematics education experts are needed.
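The oversampling step this abstract mentions can be sketched as simple duplication of minority score categories until each reaches the size of the largest one. This is an illustration under that assumption, not the authors' pipeline; the Random Forest training itself is omitted.

```python
from collections import defaultdict


def oversample(samples):
    """samples: list of (answer_text, score_label) pairs.
    Returns a list in which every label appears as often as the
    majority label, by repeating minority-class items round-robin."""
    by_label = defaultdict(list)
    for text, label in samples:
        by_label[label].append((text, label))
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        for i in range(target):
            balanced.append(group[i % len(group)])
    return balanced
```

Duplication is the crudest balancing scheme; it matches the abstract's motivation (scarce answers in some score categories) without claiming it is the exact method used.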

Machine scoring method for speech recognizer detection mispronunciation of foreign language (외국어 발화오류 검출 음성인식기를 위한 스코어링 기법)

  • Kang, Hyo-Won;Bae, Min-Young;Lee, Jae-Kang;Kwon, Chul-Hong
    • Proceedings of the KSPS conference / 2004.05a / pp.239-242 / 2004
  • An automatic pronunciation correction system provides users with correction guidelines for each pronunciation error. For this purpose, we propose a speech recognition system which automatically classifies pronunciation errors when Koreans speak a foreign language. In this paper, we also propose machine scoring methods for automatic assessment of pronunciation quality by the speech recognizer. Scores obtained from an expert human listener are used as the reference to evaluate the different machine scores and to provide targets when training some of the algorithms. We use a log-likelihood score and a normalized log-likelihood score as machine scoring methods. Experimental results show that the normalized log-likelihood score had a higher correlation with human scores than the log-likelihood score.

  • PDF
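The distinction between the two machine scores in the abstract above can be sketched as follows: the raw score sums per-frame acoustic log-likelihoods, while the normalized score divides by the frame count so utterance length does not dominate. The frame values below are invented for illustration.

```python
def log_likelihood_score(frame_loglikes):
    """Raw machine score: total log-likelihood over all frames."""
    return sum(frame_loglikes)


def normalized_score(frame_loglikes):
    """Duration-normalized machine score: mean log-likelihood per frame,
    so a long utterance is not penalized merely for having more frames."""
    return sum(frame_loglikes) / len(frame_loglikes)
```

Two utterances with identical per-frame quality get the same normalized score regardless of duration, which is one plausible reason the normalized variant correlated better with human judgments.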

Exploring automatic scoring of mathematical descriptive assessment using prompt engineering with the GPT-4 model: Focused on permutations and combinations (프롬프트 엔지니어링을 통한 GPT-4 모델의 수학 서술형 평가 자동 채점 탐색: 순열과 조합을 중심으로)

  • Byoungchul Shin;Junsu Lee;Yunjoo Yoo
    • The Mathematical Education / v.63 no.2 / pp.187-207 / 2024
  • In this study, we explored the feasibility of automatically scoring descriptive assessment items using GPT-4 based ChatGPT by comparing and analyzing the scoring results of teachers and of GPT-4 based ChatGPT. For this purpose, three descriptive items from the permutation and combination unit for first-year high school students were selected from the KICE (Korea Institute for Curriculum and Evaluation) website. Items 1 and 2 had only one problem-solving strategy, while Item 3 had two or more strategies. Two teachers, each with over eight years of educational experience, graded answers from 204 students and compared these with the results from GPT-4 based ChatGPT. Various techniques such as Few-Shot-CoT, SC, structured, and iterative prompts were utilized to construct scoring prompts, which were then input into GPT-4 based ChatGPT for scoring. The scoring results for Items 1 and 2 showed a strong correlation between the teachers' and GPT-4's scoring. For Item 3, which involved multiple problem-solving strategies, the student answers were first classified according to their strategies using prompts input into GPT-4 based ChatGPT. Following this classification, scoring prompts tailored to each type were applied, and these results also showed a strong correlation with the teachers' scoring. Through this, the potential for GPT-4 models utilizing prompt engineering to assist in teachers' scoring was confirmed, and the limitations of this study and directions for future research were presented.
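The two-stage design in this abstract (classify the solution strategy first, then score with a strategy-specific few-shot prompt) can be sketched as below. All prompt wording, function names, and the routing rule are hypothetical; the model call is passed in as a plain function rather than a real GPT-4 API call.

```python
def build_scoring_prompt(rubric, examples, answer):
    """Assemble a few-shot chain-of-thought scoring prompt: rubric,
    worked examples with reasoning and score, then the answer to grade."""
    parts = ["You are a math grader. Rubric:", rubric]
    for ex_answer, ex_reasoning, ex_score in examples:
        parts += [f"Answer: {ex_answer}",
                  f"Reasoning: {ex_reasoning}",
                  f"Score: {ex_score}"]
    parts += [f"Answer: {answer}", "Reasoning:"]
    return "\n".join(parts)


def score_with_routing(answer, classify, score_by_strategy):
    """Two-stage pipeline for multi-strategy items: classify the
    solution strategy first, then apply that strategy's scorer."""
    strategy = classify(answer)
    return score_by_strategy[strategy](answer)
```

In a real setting, `classify` and each scorer would wrap a model call using a classification prompt and a strategy-tailored scoring prompt, respectively.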

A Study of Auto Questions and Scoring System in Mobile Application (모바일 시험 자동출제 및 채점 시스템 연구)

  • Park, Jong-Youel;Park, Dea-Woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.10a / pp.173-176 / 2013
  • This paper studies an automatic question-setting and scoring system based on HTML and XML, in which questions can be conveniently registered offline and uploaded automatically, the question bank is easy to manage, and questions merged from a PC and a mobile device can be taken as a test anywhere. The server system supports real-time question registration, merging of questions, and difficulty adjustment as required for the test. Clients take the exam through a viewer application, communicating with the server from a mobile device or PC, and responses are sent to the server for processing.

  • PDF
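Since the paper above describes an XML-based question bank, a minimal sketch of loading and scoring such a bank might look like the following. The element and attribute names are invented for illustration, not taken from the authors' system.

```python
import xml.etree.ElementTree as ET

# Hypothetical question-bank format; the real system's schema is not given.
BANK = """<questions>
  <question id="q1" answer="3" points="5">What is 1 + 2?</question>
  <question id="q2" answer="B" points="5">Pick option B.</question>
</questions>"""


def load_bank(xml_text):
    """Parse the XML bank into {question_id: (answer, points)}."""
    root = ET.fromstring(xml_text)
    return {q.get("id"): (q.get("answer"), int(q.get("points")))
            for q in root.findall("question")}


def grade(responses, bank):
    """Sum points for responses matching the stored answers."""
    return sum(points for qid, (answer, points) in bank.items()
               if responses.get(qid) == answer)
```

The server side described in the abstract would additionally merge banks registered from PC and mobile clients and adjust item difficulty.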

Mobile Auto questions and scoring system (모바일 시험 자동출제 및 채점 시스템 연구)

  • Park, Jong-Youel;Park, Dea-Woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.370-372 / 2014
  • This study presents an automatic question-setting and scoring system based on HTML and XML, in which questions can be conveniently registered offline and uploaded automatically, the question bank is easy to manage, and questions merged from a PC and a mobile device can be taken as a test anywhere. The server system supports real-time question registration, merging of questions, and difficulty adjustment as required for the test. Clients take the exam through a viewer application, communicating with the server from a mobile device or PC, and responses are sent to the server for processing.

  • PDF

Scoring Methods for Improvement of Speech Recognizer Detecting Mispronunciation of Foreign Language (외국어 발화오류 검출 음성인식기의 성능 개선을 위한 스코어링 기법)

  • Kang Hyo-Won;Kwon Chul-Hong
    • MALSORI / no.49 / pp.95-105 / 2004
  • An automatic pronunciation correction system provides learners with correction guidelines for each mispronunciation. For this purpose, we develop a speech recognizer which automatically classifies pronunciation errors when Koreans speak a foreign language. To develop methods for automatic assessment of pronunciation quality, we propose a language-model-based score as the machine score in the speech recognizer. Experimental results show that the language-model-based score had a higher correlation with human scores than the conventional log-likelihood-based score.

  • PDF

Automated Scoring of Argumentation Levels and Analysis of Argumentation Patterns Using Machine Learning (기계 학습을 활용한 논증 수준 자동 채점 및 논증 패턴 분석)

  • Lee, Manhyoung;Ryu, Suna
    • Journal of The Korean Association For Science Education / v.41 no.3 / pp.203-220 / 2021
  • We explored methods of improving the performance of automated scoring for scientific argumentation and analyzed argumentation patterns using the automated scoring models. For this purpose, we assessed the level of argumentation in students' scientific discourse in classrooms. The dataset consists of four units of argumentation features and argumentation levels for episodes. We utilized argumentation clusters and n-grams to enhance automated scoring accuracy, and used three supervised learning algorithms, resulting in 33 automated scoring models. The automated scoring achieved a good accuracy of 77.59% on average and up to 85.37%. In this process, we found that argumentation cluster patterns can enhance automated scoring accuracy. We then analyzed argumentation patterns using decision tree and random forest models. Our results were consistent with previous research in which justification coordinating claim and evidence determines the quality of scientific argumentation. Our method suggests a novel approach for analyzing the quality of scientific argumentation in classrooms.
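The n-gram features this abstract mentions can be sketched as follows: word n-grams are counted per discourse episode and could feed a supervised scorer alongside the cluster features. This is an illustration, not the authors' feature set.

```python
from collections import Counter


def ngram_features(tokens, n=2):
    """Count word n-grams in a tokenized utterance; the resulting
    Counter can serve as a sparse feature vector for a classifier."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
```

For example, `ngram_features("the claim supports the claim".split())` counts the bigram `("the", "claim")` twice; a scorer would consume such counts over a fixed vocabulary of n-grams.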