• Title/Summary/Keyword: automatic writing assessment

Search Results: 3

Swear Word Detection and Unknown Word Classification for Automatic English Writing Assessment (영작문 자동평가를 위한 비속어 검출과 미등록어 분류)

  • Lee, Gyoung; Kim, Sung Gwon; Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering / v.3 no.9 / pp.381-388 / 2014
  • In this paper, we deal with implementation issues of an unknown word classifier for a middle-school-level English writing test. We define the types of unknown words that occur in English text and discuss the process for detecting them. We also define the types of swear words that appear in students' English writings and suggest how to handle them. We implement an unknown word classifier with a swear word detection module for use in an automatic English writing scoring system. In experiments with actual test data, we evaluate the accuracy of both the unknown word classifier and the swear word detection module. (A small illustrative sketch of this kind of classification follows below.)
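
The paper does not publish its detection code, but a minimal Python sketch of the general idea (dictionary lookup, a swear-word lexicon, and rough sub-typing of unknown words) might look like the following. The word lists, category names, and heuristics here are illustrative assumptions, not the authors' actual resources.

```python
# Minimal sketch (not the paper's implementation): label tokens of a student's
# English composition as known, swear, or unknown, with rough sub-typing of
# unknown words. Lexicons and categories below are purely illustrative.
from difflib import get_close_matches

DICTIONARY = {"the", "cat", "sat", "on", "mat", "teacher", "school"}  # hypothetical word list
SWEAR_LEXICON = {"damn", "crap"}                                      # hypothetical swear-word list

def classify_token(token: str) -> str:
    """Return a coarse label for a single token."""
    lower = token.lower()
    if lower in SWEAR_LEXICON:
        return "SWEAR"                      # flag and exclude from scoring features
    if lower in DICTIONARY:
        return "KNOWN"
    # Unknown-word sub-typing (illustrative heuristics only)
    if token[0].isupper():
        return "UNKNOWN/PROPER_NOUN"        # likely a name
    if get_close_matches(lower, DICTIONARY, n=1, cutoff=0.8):
        return "UNKNOWN/MISSPELLING"        # close to an existing dictionary entry
    return "UNKNOWN/OTHER"

if __name__ == "__main__":
    for tok in "The teachr sat on the mat damn".split():
        print(tok, "->", classify_token(tok))
```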

Automatic Adverb Error Correction in Korean Learners' EFL Writing

  • Kim, Jee-Eun
    • International Journal of Contents / v.5 no.3 / pp.65-70 / 2009
  • This paper describes ongoing work on the correction of adverb errors committed by Korean learners studying English as a foreign language (EFL), using an automated English writing assessment system. Adverb errors are commonly found in learners' writings, but handling them rarely draws attention in natural language processing because of the complicated characteristics of adverbs. To detect the errors correctly, adverbs are classified according to their grammatical functions, meanings, and positions within a sentence. Adverb errors are collected from learners' sentences and classified into five categories following traditional error analysis. This error classification, in conjunction with the adverb categorization, is implemented as a set of mal-rules that automatically identify the errors. When an error is detected, the system corrects it and suggests error-specific feedback, which includes the type of error, a corrected string, and a brief description of the error. This work suggests how to improve adverb error correction methods as well as how to provide richer diagnostic feedback to learners. (A toy mal-rule sketch follows below.)
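
As a rough illustration of the mal-rule approach described above, the sketch below hard-codes two toy rules for common adverb errors (a misplaced degree adverb and an adjective used where an -ly adverb is required) and returns feedback in the spirit of the paper: error type, corrected string, and a brief description. The patterns, categories, and fixes are invented for illustration and are not the system's actual rules.

```python
# Minimal sketch (assumptions, not the system described above): a few
# hand-written "mal-rules" that match adverb misuse patterns and produce
# diagnostic feedback.
import re
from dataclasses import dataclass

@dataclass
class Feedback:
    error_type: str
    corrected: str
    description: str

# Each mal-rule: (pattern, error category, fixer, brief explanation)
MAL_RULES = [
    # Degree adverb wrongly placed between verb and object: "like very much pizza"
    (re.compile(r"\b(like|love|enjoy)s? (very much) (\w+)"),
     "POSITION",
     lambda m: f"{m.group(1)} {m.group(3)} {m.group(2)}",
     "An adverbial of degree usually follows the object, not the verb."),
    # Adjective used where an adverb is required: "runs quick"
    (re.compile(r"\b(runs?|speaks?|works?) (quick|slow|careful)\b"),
     "FORM",
     lambda m: f"{m.group(1)} {m.group(2)}ly",
     "Use the -ly adverb form to modify a verb."),
]

def check_sentence(sentence: str) -> list[Feedback]:
    """Apply each mal-rule and collect feedback for every match."""
    found = []
    for pattern, category, fix, note in MAL_RULES:
        m = pattern.search(sentence)
        if m:
            corrected = sentence[:m.start()] + fix(m) + sentence[m.end():]
            found.append(Feedback(category, corrected, note))
    return found

if __name__ == "__main__":
    for fb in check_sentence("I like very much pizza"):
        print(fb)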

Effect of Application of Ensemble Method on Machine Learning with Insufficient Training Set in Developing Automated English Essay Scoring System (영작문 자동채점 시스템 개발에서 학습데이터 부족 문제 해결을 위한 앙상블 기법 적용의 효과)

  • Lee, Gyoung Ho; Lee, Kong Joo
    • Journal of KIISE / v.42 no.9 / pp.1124-1132 / 2015
  • Training a supervised machine learning algorithm requires unbiased labels and a sufficient amount of training data, but both are difficult to collect when developing an automatic English composition scoring system. In addition, English writing assessment involves a multi-faceted evaluation of the overall quality of an answer, which makes it hard to choose a single appropriate machine learning algorithm for the task. In this paper, we show that these problems can be alleviated through ensemble learning. The experimental results indicate that the ensemble technique achieved better overall performance than the other algorithms. (A minimal voting-ensemble sketch follows below.)
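
The paper does not specify its base learners or features, but a generic voting-ensemble sketch in Python (scikit-learn) conveys the idea of combining heterogeneous models to stabilize scoring when labeled essays are scarce. The toy essays, grade labels, features, and models below are all illustrative assumptions, not the authors' setup.

```python
# Minimal sketch, not the authors' system: combine several base scorers by
# hard voting to reduce the variance a single model shows on a small,
# possibly biased training set.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

essays = [                      # tiny toy corpus standing in for graded essays
    "I go to school every day and I like my teacher very much.",
    "My hobby is play soccer with friend.",
    "Yesterday I eated pizza it was delicious and I was happy.",
    "English is fun because I can talk to many people around the world.",
]
grades = ["B", "C", "C", "A"]   # hypothetical rubric labels

# Three heterogeneous base learners over simple n-gram features; hard voting
# takes the majority label among their predictions.
ensemble = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("nb", MultinomialNB()),
            ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ],
        voting="hard",
    ),
)

ensemble.fit(essays, grades)
print(ensemble.predict(["I like play games with my friend every weekend."]))
```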