http://dx.doi.org/10.3745/KIPSTB.2007.14-B.3.223

Building an Automated Scoring System for a Single English Sentence

Kim, Jee-Eun (Hankuk University of Foreign Studies, Department of English)
Lee, Kong-Joo (Chungnam National University, Division of Electrical, Information and Communication Engineering)
Jin, Kyung-Ae (Korea Institute for Curriculum and Evaluation)
Abstract
The purpose of developing an automated scoring system for English composition is to score tests of English sentence writing and to give feedback on them without human effort. This paper presents an automated system for scoring English composition whose input is a single sentence rather than an essay. Taking a single sentence as input has advantages: the input can be compared directly with answers given by human teachers, and detailed feedback can be returned to the test takers. The system was developed and tested with real test data collected from English tests given to third-grade junior high school students. Scoring a single sentence requires two steps. The first is analyzing the input sentence to detect possible errors, such as spelling and syntactic errors. The second is comparing the input sentence with the given answer and identifying the differences as errors. The results produced by the system were then compared with those provided by human raters.
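The two-step process described above can be illustrated with a minimal sketch. This is not the paper's implementation (the actual system uses error production rules and fuller linguistic analysis); the toy lexicon, the model answer, and all function names here are illustrative assumptions. Step 1 flags intra-sentential (spelling) errors against a dictionary; step 2 finds inter-sentential errors by diffing the input against the closest model answer.

```python
import difflib

# Toy lexicon standing in for a real spelling dictionary (assumption).
# Note that "play" is listed: a wrong inflection is a grammar error,
# caught in step 2, not a spelling error.
LEXICON = {"she", "plays", "play", "the", "piano", "every", "day"}

def intra_sentential_errors(sentence):
    """Step 1: flag tokens absent from the lexicon as spelling errors."""
    return [tok for tok in sentence.lower().split() if tok not in LEXICON]

def inter_sentential_errors(sentence, answers):
    """Step 2: pick the model answer closest to the input, then report
    word-level differences from it as errors."""
    tokens = sentence.lower().split()
    best = max(answers, key=lambda a: difflib.SequenceMatcher(
        None, tokens, a.lower().split()).ratio())
    sm = difflib.SequenceMatcher(None, tokens, best.lower().split())
    diffs = [(op, sentence.split()[i1:i2], best.split()[j1:j2])
             for op, i1, i2, j1, j2 in sm.get_opcodes() if op != "equal"]
    return best, diffs

answers = ["She plays the piano every day"]
student = "She play the piano evry day"
print(intra_sentential_errors(student))               # ['evry']
print(inter_sentential_errors(student, answers)[1])
# [('replace', ['play'], ['plays']), ('replace', ['evry'], ['every'])]
```

Each non-`equal` opcode maps naturally onto feedback for the test taker: a `replace` suggests the expected word, an `insert` a missing word, and a `delete` an extraneous one.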
Keywords
Scoring System; English Composition; Human Rater; Error Production Rules; Intra-Sentential Error; Inter-Sentential Error