• Title/Summary/Keyword: 영작문 자동 채점 시스템 (automated English writing scoring system)

Building an Automated Scoring System for Single English Sentences (단문형의 영작문 자동 채점 시스템 구축)

  • Kim, Jee-Eun; Lee, Kong-Joo; Jin, Kyung-Ae
    • The KIPS Transactions: Part B, v.14B no.3 s.113, pp.223-230, 2007
  • The purpose of developing an automated scoring system for English composition is to score tests of written English sentences and to give feedback on them without human effort. This paper presents an automated system for scoring English composition whose input is a single sentence, not an essay. Taking a single sentence as input has advantages: the input can be compared directly with the answers given by human teachers, and detailed feedback can be returned to the test takers. The system was developed and tested with real test data collected from English tests given to third-grade students in junior high school. Scoring a single sentence requires two steps. The first is analyzing the input sentence to detect possible errors, such as spelling and syntactic errors. The second is comparing the input sentence with the given answer and identifying the differences as errors. The results produced by the system were then compared with those provided by human raters.
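
Below is a minimal Python sketch of the two-step process this abstract describes, assuming a toy lexicon and a teacher-provided answer list; the names (LEXICON, ANSWERS) and the penalty weights are illustrative, not the paper's.

    # Step 1: analyze the input for errors; Step 2: diff against the answer.
    LEXICON = {"she", "plays", "the", "piano", "every", "day"}
    ANSWERS = [["she", "plays", "the", "piano", "every", "day"]]

    def detect_errors(tokens):
        """Step 1: flag tokens unknown to the lexicon as spelling errors."""
        return [t for t in tokens if t not in LEXICON]

    def compare_with_answer(tokens):
        """Step 2: count word-level differences against the closest answer."""
        def diffs(ans):
            return sum(a != b for a, b in zip(tokens, ans)) + abs(len(tokens) - len(ans))
        return min(diffs(ans) for ans in ANSWERS)

    def score(sentence, full_marks=10):
        tokens = sentence.lower().split()
        penalties = len(detect_errors(tokens)) + compare_with_answer(tokens)
        return max(full_marks - 2 * penalties, 0)

    print(score("She plays the pino every day"))  # misspelling penalized -> 6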

Accuracy Improvement of an Automated Scoring System through Removing Duplicately Reported Errors (영작문 자동 채점 시스템에서의 중복 보고 오류 제거를 통한 성능 향상)

  • Lee, Hyun-Ah; Kim, Jee-Eun; Lee, Kong-Joo
    • The KIPS Transactions: Part B, v.16B no.2, pp.173-180, 2009
  • The purpose of developing an automated scoring system for English composition is to score English writing tests and to give diagnostic feedback to the test takers without human effort. The system developed through our research detects grammatical errors in a single sentence at the morphological, syntactic, and semantic stages, and those errors are aggregated into the final score. The error-detecting stages are independent of one another, so the same error can be reported more than once, with different labels at different stages. These duplicated errors distort the calculated score. This paper presents a method for detecting the duplicated errors and improving the accuracy of the final score by eliminating all but one report of each.
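
A minimal sketch of the duplicate-removal idea follows, assuming each stage reports errors as (start, end, stage, label) spans over the same sentence; the representation and the stage priority are assumptions for illustration, not the paper's format.

    STAGE_PRIORITY = {"morphological": 0, "syntactic": 1, "semantic": 2}

    def deduplicate(errors):
        """Keep one error per overlapping span, preferring the earliest stage."""
        kept = []
        for err in sorted(errors, key=lambda e: STAGE_PRIORITY[e[2]]):
            start, end = err[0], err[1]
            if not any(start < k_end and k_start < end for k_start, k_end, _, _ in kept):
                kept.append(err)
        return kept

    reported = [
        (2, 3, "morphological", "bad-inflection"),
        (2, 3, "syntactic", "agreement"),   # same tokens reported twice
        (5, 6, "semantic", "word-choice"),
    ]
    print(deduplicate(reported))  # the duplicated syntactic report is dropped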

Implementing Automated English Error Detecting and Scoring System for Junior High School Students (중학생 영작문 실력 향상을 위한 자동 문법 채점 시스템 구축)

  • Kim, Jee-Eun; Lee, Kong-Joo
    • The Journal of the Korea Contents Association, v.7 no.5, pp.36-46, 2007
  • This paper presents an automated English scoring system designed to help non-native speakers of English, Korean-speaking learners in particular. The system is developed to help third-grade students in junior high school improve their English grammar skills. Without human effort, the system identifies grammar errors in English sentences, provides feedback on the detected errors, and scores the sentences. Detecting grammar errors requires a special type of rule in addition to the rules that parse grammatical sentences: error production rules, which analyze ungrammatical sentences and recognize syntactic errors. The rules are collected from junior high school textbooks and real student test data. When such a rule fires, the error is detected, the corresponding error flag is set, and the system continues parsing without failure. As the final step, the system scores the student sentences based on the errors detected. The system is evaluated with real English test data produced by the students and the answers provided by human teachers.
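
A minimal sketch of error production rules, assuming flat part-of-speech patterns rather than the paper's parser-level rules; the tags and rules here are invented examples.

    ERROR_RULES = [
        # (pattern over POS tags, error flag to set when the rule fires)
        (("PRON_3SG", "VERB_BASE"), "subject-verb-agreement"),
        (("DET_A", "NOUN_PLURAL"), "article-number-mismatch"),
    ]

    def detect_with_error_rules(pos_tags):
        flags = []
        for i in range(len(pos_tags)):
            for pattern, flag in ERROR_RULES:
                if tuple(pos_tags[i:i + len(pattern)]) == pattern:
                    flags.append((i, flag))  # set the flag, keep analyzing
        return flags

    # "He play ..." tagged as 3rd-person pronoun + base-form verb:
    print(detect_with_error_rules(["PRON_3SG", "VERB_BASE", "NOUN"]))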

Swear Word Detection and Unknown Word Classification for Automatic English Writing Assessment (영작문 자동평가를 위한 비속어 검출과 미등록어 분류)

  • Lee, Gyoung; Kim, Sung Gwon; Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering, v.3 no.9, pp.381-388, 2014
  • In this paper, we deal with implementation issues of an unknown-word classifier for a middle-school-level English writing test. We define the types of unknown words that occur in English text and discuss the process for detecting them. We also define the types of swear words that occur in students' English writing and suggest how to handle them. We implement an unknown-word classifier with a swear-word detection module as part of an automatic English writing scoring system. Through experiments with actual test data, we evaluate the accuracy of the unknown-word classifier as well as the swear-word detection module.
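
A minimal sketch of an unknown-word classifier with a swear-word check, assuming a plain dictionary lookup, a placeholder blacklist, and a capitalization heuristic; all word lists below are stand-ins.

    DICTIONARY = {"i", "like", "playing", "soccer", "with", "friends"}
    SWEAR_LIST = {"darn", "heck"}  # placeholder entries only

    def classify(token):
        t = token.lower()
        if t in DICTIONARY:
            return "known"
        if t in SWEAR_LIST:
            return "swear-word"      # handled separately by the scorer
        if token[0].isupper():
            return "proper-noun?"    # heuristic: may be a name, not an error
        return "spelling-error?"     # likely misspelling or out-of-vocabulary

    for w in "I like playing socer with Minsu darn".split():
        print(w, "->", classify(w))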

Effect of Application of Ensemble Method on Machine Learning with Insufficient Training Set in Developing Automated English Essay Scoring System (영작문 자동채점 시스템 개발에서 학습데이터 부족 문제 해결을 위한 앙상블 기법 적용의 효과)

  • Lee, Gyoung Ho; Lee, Kong Joo
    • Journal of KIISE, v.42 no.9, pp.1124-1132, 2015
  • Training a supervised machine learning algorithm requires unbiased labels and a sufficient amount of training data, both of which are difficult to collect when developing an automatic English composition scoring system. In addition, English writing assessment is a multi-faceted evaluation of the overall level of an answer, which makes it difficult to choose a single appropriate machine learning algorithm for the task. In this paper, we show that these problems can be alleviated through ensemble learning. The experimental results indicate that the ensemble technique performed better overall than the individual algorithms.
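
A minimal sketch of the ensemble idea using scikit-learn's VotingClassifier; the features, labels, and member models below are illustrative stand-ins for the paper's setup, with deliberately little training data.

    import numpy as np
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 5))             # 60 answers, 5 features: tiny set
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in score labels

    # Soft voting averages the members' class probabilities, which tends to
    # be more stable than any single model when labeled data is scarce.
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression()),
                    ("nb", GaussianNB()),
                    ("dt", DecisionTreeClassifier(max_depth=3))],
        voting="soft",
    )
    ensemble.fit(X, y)
    print(ensemble.score(X, y))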

Development of automated scoring system for English writing (영작문 자동 채점 시스템 개발 연구)

  • Jin, Kyung-Ae
    • English Language & Literature Teaching, v.13 no.1, pp.235-259, 2007
  • The purpose of the present study is to develop a prototype automated scoring system for English writing, aimed at the writing of Korean middle school students. The development proceeded as follows. First, established automated essay scoring systems from other countries were reviewed and analyzed, providing guidance for a new sentence-level automated scoring system for Korean EFL students. Second, a knowledge base for natural language processing (lexicon, grammar, and WordNet) and an error corpus of English writing by Korean middle school students were built; the error corpus was collected through a paper-and-pencil test with 589 third-year middle school students. This study offers suggestions for the successful introduction of automated scoring in Korea. The system developed here should be continuously upgraded to improve its scoring accuracy, and it should eventually be extended to evaluate full English essays rather than single sentences. Although its precision still needs improvement, the system is a successful introduction of sentence-level automated scoring for English writing in Korea.
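
One knowledge-base component named above, WordNet, can be queried through NLTK; the sketch below assumes the WordNet data has been downloaded (nltk.download("wordnet")) and is an illustration, not the study's code.

    from nltk.corpus import wordnet as wn

    def in_wordnet(word):
        """True if the word has at least one WordNet sense."""
        return bool(wn.synsets(word))

    for w in ["school", "freind"]:
        print(w, "->", "known" if in_wordnet(w) else "not in WordNet")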

An English Essay Scoring System Based on Grammaticality and Lexical Cohesion (문법성과 어휘 응집성 기반의 영어 작문 평가 시스템)

  • Kim, Dong-Sung; Kim, Sang-Chul; Chae, Hee-Rahk
    • Korean Journal of Cognitive Science, v.19 no.3, pp.223-255, 2008
  • In this paper, we introduce an automatic system for scoring English essays. The system comprises three main components: a spelling checker, a grammar checker, and a lexical cohesion checker, built on resources such as WordNet, the Link Grammar parser, and Roget's thesaurus. The usefulness of an automatic scoring system depends on its reliability. To measure reliability, we compared the results of automatic scoring with those of manual scoring, using the Kappa statistic and the Multi-facet Rasch Model. The statistics obtained from this comparison showed that the scoring system is as reliable as professional human graders. The system deals with textual units rather than sentential units and checks not only the formal properties of a text but also its contents.
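
The kappa-based reliability check can be reproduced with scikit-learn's cohen_kappa_score; the score vectors below are made-up examples, not data from the paper.

    from sklearn.metrics import cohen_kappa_score

    human  = [3, 4, 2, 5, 3, 4, 1, 2, 5, 4]
    system = [3, 4, 2, 4, 3, 4, 1, 2, 5, 5]

    # Kappa corrects raw agreement for agreement expected by chance;
    # values near 1 mean the system scores like the human rater.
    print(cohen_kappa_score(human, system))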

Context-sensitive Word Error Detection and Correction for Automatic Scoring System of English Writing (영작문 자동 채점 시스템을 위한 문맥 고려 단어 오류 검사기)

  • Choi, Yong Seok; Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering, v.4 no.1, pp.45-56, 2015
  • In this paper, we present a method that detects context-sensitive word errors and generates correction candidates. Spelling error detection is a well-studied topic; the approach proposed here, however, is tailored to an automated English scoring system. A common strategy in context-sensitive word error detection is to use a pre-defined confusion set to generate correction candidates. We generate the confusion set automatically, in order to reflect the characteristics of sentences written by second-language learners. We also define a class of word errors that a conventional grammar checker cannot detect because of part-of-speech ambiguity, and propose how to detect such errors and generate correction candidates for them. An experiment is performed on English writing composed by junior high school students whose mother tongue is Korean. The F1 score of the proposed method is 70.48%, which shows that it is promising compared with the current state of the art.
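
A minimal sketch of confusion-set-based detection, assuming a hand-built confusion set and bigram counts from a toy corpus; the paper derives its confusion set automatically from learner writing.

    from collections import Counter

    CONFUSION_SETS = [{"their", "there", "they're"}, {"quiet", "quite"}]
    CORPUS = "there are many books their covers are quite old".split()
    bigrams = Counter(zip(CORPUS, CORPUS[1:]))

    def best_candidate(word, next_word):
        """Pick the confusion-set member that best fits the right context."""
        for cset in CONFUSION_SETS:
            if word in cset:
                return max(sorted(cset), key=lambda c: bigrams[(c, next_word)])
        return word

    print(best_candidate("their", "are"))  # -> "there": likely misuse flagged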