• Title/Summary/Keyword: automated English essay scoring

Development of automated scoring system for English writing (영작문 자동 채점 시스템 개발 연구)

  • Jin, Kyung-Ae
    • English Language & Literature Teaching / v.13 no.1 / pp.235-259 / 2007
  • The purpose of the present study is to develop a prototype automated scoring system for English writing. The system was developed to score the writing of Korean middle school students. The following procedures were applied in developing the system. First, established automated essay scoring systems in other countries were reviewed and analyzed, which provided guidance for developing a new sentence-level automated scoring system for Korean EFL students. Second, a knowledge base (a lexicon, a grammar, and WordNet) for natural language processing was built, along with an error corpus of English writing by Korean middle school students; the error corpus was collected through a paper-and-pencil test administered to 589 third-year middle school students. This study offers suggestions for the successful introduction of an automated scoring system in Korea. The system developed in this study should be continuously upgraded to improve its scoring accuracy, and it is also suggested that a future system be able to evaluate full English essays, not only single sentences. Although the system still needs improved precision, it represents a successful introduction of a sentence-level automated scoring system for English writing in Korea.

Integration of Computerized Feedback to Improve Interactive Use of Written Feedback in English Writing Class

  • CHOI, Jaeho
    • Educational Technology International / v.12 no.2 / pp.71-94 / 2011
  • How can an automated essay scoring (AES) program, which provides feedback for essays, be a formative tool for improving ESL writing? In spite of the increasing demands for English writing proficiency, English writing instruction has not been effective for teaching and learning because of a lack of timely and accurate feedback. In this context, AES as a possible solution has been gaining the attention of educators and scholars in ESL/EFL writing education because it can provide consistent and prompt feedback for student writers. This experimental study examined the impact of different types of feedback for a college ESL writing program using the Criterion AES system. The results reveal the positive impact of AES in a college-level ESL course and differences between the teacher's feedback and the AES feedback. The findings suggest that AES can be effectively integrated into ESL writing instruction as a formative assessment tool.

Building an Automated Scoring System for a Single English Sentence (단문형의 영작문 자동 채점 시스템 구축)

  • Kim, Jee-Eun;Lee, Kong-Joo;Jin, Kyung-Ae
    • The KIPS Transactions:PartB / v.14B no.3 s.113 / pp.223-230 / 2007
  • The purpose of developing an automated scoring system for English composition is to score English sentence-writing tests and to give feedback on them without human effort. This paper presents an automated system that scores English composition whose input is a single sentence, not an essay. Taking a single sentence as input has advantages in comparing the input with the answers given by human teachers and in giving detailed feedback to test takers. The system was developed and tested with real test data collected from English tests given to third-grade students in junior high school. Scoring a single sentence requires two steps. The first is analyzing the input sentence to detect possible errors, such as spelling errors and syntactic errors. The second is comparing the input sentence with the given answer and identifying the differences as errors. The results produced by the system were then compared with those provided by human raters.
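The two-step process described in the abstract (detect errors in the input sentence, then diff it against a model answer) can be sketched roughly as follows; the toy lexicon, the helper names, and the error rules are illustrative assumptions, not the authors' actual implementation:

```python
import difflib

# Toy lexicon for step 1; a real system would use a full dictionary.
KNOWN_WORDS = {"she", "goes", "go", "to", "school", "every", "day"}

def detect_errors(sentence):
    """Step 1: flag tokens not found in the (toy) lexicon as spelling errors."""
    return [tok for tok in sentence.lower().split() if tok not in KNOWN_WORDS]

def diff_against_answer(sentence, answer):
    """Step 2: align the input with a model answer; non-matching spans are errors."""
    src, dst = sentence.split(), answer.split()
    matcher = difflib.SequenceMatcher(None,
                                      [t.lower() for t in src],
                                      [t.lower() for t in dst])
    return [(op, src[i1:i2], dst[j1:j2])
            for op, i1, i2, j1, j2 in matcher.get_opcodes()
            if op != "equal"]

student = "She go to school every day"
answer = "She goes to school every day"
print(detect_errors(student))                    # → []
print(diff_against_answer(student, answer))      # → [('replace', ['go'], ['goes'])]
```

The diff step mirrors the paper's idea that a single-sentence input allows direct, fine-grained comparison with teacher-provided answers.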

Automatic Detection of Off-topic Documents using ConceptNet and Essay Prompt in Automated English Essay Scoring (영어 작문 자동채점에서 ConceptNet과 작문 프롬프트를 이용한 주제-이탈 문서의 자동 검출)

  • Lee, Kong Joo;Lee, Gyoung Ho
    • Journal of KIISE / v.42 no.12 / pp.1522-1534 / 2015
  • This work presents a new method that can predict, without the use of training data, whether an input essay is written on a given topic. ConceptNet is a common-sense knowledge base generated automatically from sentences extracted from a variety of document types, and an essay prompt is the topic that an essay should be written about. The method proposed in this paper uses ConceptNet and an essay prompt to decide whether or not an input essay is off-topic. We introduce a way to find the shortest path between two nodes in ConceptNet, as well as a way to calculate the semantic similarity between two nodes. Both an essay prompt and a student's essay can be represented by concept nodes in ConceptNet, and the semantic similarity between the concepts representing the prompt and those representing the essay can be used to rank "on-topicness"; if a low ranking is derived, the essay is regarded as off-topic. We used eight different essay prompts and a student-essay collection for the performance evaluation, and the proposed method outperformed previous studies. As ConceptNet enables simple text inference, the new method looks very promising for essay prompts that require simple inference.
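A minimal sketch of the shortest-path idea over a concept graph, assuming a toy edge list and a simple 1/(1 + distance) similarity; the real ConceptNet graph and the paper's exact similarity formula may differ:

```python
from collections import deque

# Toy undirected concept graph standing in for ConceptNet.
EDGES = {
    "essay":   ["writing", "school"],
    "writing": ["essay", "pen", "topic"],
    "topic":   ["writing", "prompt"],
    "prompt":  ["topic"],
    "pen":     ["writing"],
    "school":  ["essay", "student"],
    "student": ["school"],
}

def shortest_path_len(src, dst):
    """BFS over the concept graph; returns hop count, or None if unreachable."""
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        for nxt in EDGES.get(node, []):
            if nxt == dst:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def similarity(a, b):
    """Closer concepts score higher; unreachable pairs score 0."""
    d = shortest_path_len(a, b)
    return 0.0 if d is None else 1.0 / (1 + d)

print(similarity("essay", "prompt"))   # essay→writing→topic→prompt: 3 hops → 0.25
```

In the paper's setting, similarities like this between prompt concepts and essay concepts would be aggregated into an "on-topicness" score.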

Assessment of Writing Fluency For Automated English Essay Scoring (영어 논술 자동 평가를 위한 언어 유창성 측정 방법)

  • Yang, Min-Chul;Kim, Min-Jeong;Rim, Hae-Chang
    • Annual Conference on Human and Language Technology / 2011.10a / pp.25-29 / 2011
  • An automated English essay scoring system is a real-time system in which examinees' essays are scored automatically on the web, instead of being read and evaluated directly by expert raters. For non-native examinees, however, fluency in expressing ideas in English can be a bigger problem than logical or compositional ability, and previous studies have paid insufficient attention to this aspect. To evaluate non-native examinees' English essays more accurately, this study proposes additional machine-learning features focused on language fluency, assessing vocabulary, diversity of sentence structure, and sentence perplexity. Experimental results show that the proposed method outperforms existing methods in terms of 1) correlation with expert raters' scores and 2) accuracy.
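The fluency-oriented features mentioned above can be approximated with common proxies; the definitions below (type-token ratio for vocabulary, sentence-length spread for structural variety) are illustrative assumptions, not the authors' exact feature set:

```python
import statistics

def fluency_features(essay):
    """Two toy fluency proxies: lexical diversity and sentence-length variation."""
    sentences = [s.strip() for s in essay.split(".") if s.strip()]
    tokens = essay.lower().replace(".", " ").split()
    ttr = len(set(tokens)) / len(tokens)          # type-token ratio (vocabulary)
    lengths = [len(s.split()) for s in sentences]
    length_sd = statistics.pstdev(lengths)        # spread of sentence lengths
    return {"type_token_ratio": round(ttr, 3),
            "sentence_length_sd": round(length_sd, 3)}

essay = "I like apples. I like apples. I really like green apples."
print(fluency_features(essay))
# → {'type_token_ratio': 0.455, 'sentence_length_sd': 0.943}
```

Features like these would be appended to a standard machine-learning feature vector, as the abstract describes, rather than used as a score on their own.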

Effect of Application of Ensemble Method on Machine Learning with Insufficient Training Set in Developing Automated English Essay Scoring System (영작문 자동채점 시스템 개발에서 학습데이터 부족 문제 해결을 위한 앙상블 기법 적용의 효과)

  • Lee, Gyoung Ho;Lee, Kong Joo
    • Journal of KIISE / v.42 no.9 / pp.1124-1132 / 2015
  • Training a supervised machine learning algorithm requires unbiased labels and a sufficient amount of training data, but both are difficult to collect when developing an automated English composition scoring system. In addition, English writing is assessed through a multi-faceted evaluation of the overall quality of the answer, which makes it difficult to choose an appropriate machine learning algorithm for the task. In this paper, we show that these problems can be alleviated through ensemble learning. The experimental results indicate that the ensemble technique exhibited overall performance better than that of the other algorithms.
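The ensemble idea, several weak models trained on bootstrap resamples of a small labeled set and combined by vote, can be sketched as follows; the data points and the simple threshold learner are invented for illustration:

```python
import random

random.seed(0)

# Tiny labeled set: (feature, label), label 1 = "good" essay.
DATA = [(0.2, 0), (0.3, 0), (0.4, 0), (0.6, 1), (0.7, 1), (0.8, 1)]

def fit_threshold(sample):
    """Weak learner: pick the threshold that best separates labels in a resample."""
    best_t, best_acc = 0.0, -1.0
    for t in [x for x, _ in sample]:
        acc = sum((x >= t) == bool(y) for x, y in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def bagged_ensemble(data, n_models=15):
    """Train each weak learner on a bootstrap resample of the small dataset."""
    thresholds = []
    for _ in range(n_models):
        sample = [random.choice(data) for _ in data]   # bootstrap resample
        thresholds.append(fit_threshold(sample))
    return thresholds

def predict(thresholds, x):
    """Majority vote across the ensemble's thresholds."""
    votes = sum(x >= t for t in thresholds)
    return 1 if votes * 2 >= len(thresholds) else 0

ens = bagged_ensemble(DATA)
print(predict(ens, 0.9), predict(ens, 0.1))   # → 1 0
```

Averaging many resample-trained models is a standard way to stabilize predictions when the training set is too small for any single model to be reliable, which is the problem the abstract describes.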