• Title/Summary/Keyword: Grammatical Accuracy

A Comparison of Grammatical Error Detection Techniques for an Automated English Scoring System

  • Lee, Songwook;Lee, Kong Joo
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.37 no.7
    • /
    • pp.760-770
    • /
    • 2013
  • Detecting grammatical errors in text is a long-standing application. In this paper, we compare the performance of two grammatical error detection techniques, both implemented as sub-modules of an automated English scoring system. One uses a full syntactic parser, equipped with extra-grammatical rules in addition to grammatical ones, to detect syntactic errors during parsing. The other uses a finite state machine that can identify errors covering a small span of the input. To compare the two approaches, grammatical errors are divided into three groups: errors that both approaches can handle, errors that only a full parser can handle, and errors that only a finite state machine can handle. This division reveals the strengths and weaknesses of each approach. The evaluation results show that the full parsing approach detects more errors than the finite state machine, while its accuracy is lower. We conclude that a full parser is suited to detecting grammatical errors involving long-distance dependencies, whereas a finite state machine works well on sentences containing multiple grammatical errors. A toy sketch of the finite-state side follows below.
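
The finite-state side of this comparison can be illustrated with a minimal sketch, assuming hypothetical regex-encoded rules over raw text; the paper's actual rule inventory and scoring pipeline are not reproduced here.

```python
import re

# Hypothetical finite-state-style rules: each matches a short, local error
# pattern. Long-distance dependencies (e.g., agreement across clauses) are
# out of reach for this kind of matcher, as the abstract notes.
RULES = [
    # "a" followed by a word starting with a vowel letter, e.g., "a apple"
    (re.compile(r"\ba (?=[aeiou])", re.IGNORECASE),
     "article: use 'an' before a vowel sound"),
    # an immediately repeated word, e.g., "the the"
    (re.compile(r"\b(\w+) \1\b", re.IGNORECASE), "repeated word"),
]

def detect_errors(sentence):
    """Return (character_span, message) pairs for every rule that fires."""
    hits = []
    for pattern, message in RULES:
        for m in pattern.finditer(sentence):
            hits.append(((m.start(), m.end()), message))
    return hits

print(detect_errors("She ate a apple and the the bread."))
```

Because each rule inspects only a small window of the input, such a checker stays precise on locally detectable errors, matching the accuracy advantage the abstract reports for the finite-state approach.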

Evaluating Corrective Feedback Generated by an AI-Powered Online Grammar Checker

  • Moon, Dosik
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.4
    • /
    • pp.22-29
    • /
    • 2021
  • This study evaluates the accuracy of corrective feedback from Grammarly, an online grammar checker, on essays written by cyber university learners, in terms of detected errors, suggested replacement forms, and false alarms. The results indicate that Grammarly has a high overall error detection rate of over 65% and is particularly strong at catching errors related to articles and prepositions. In addition, for the detected errors, Grammarly mostly provides accurate replacement forms and very rarely raises false alarms. These findings suggest that Grammarly has high potential as an educational tool that complements the drawbacks of teacher feedback and helps learners improve grammatical accuracy in their written work. However, it is still premature to conclude that Grammarly can completely replace teacher feedback, because it fails to detect roughly 35% of errors and has limitations in certain error categories. Since the feedback from Grammarly is not entirely reliable, caution is needed for its successful integration into English writing classes. Teachers should make judicious decisions on when and how to use it, based on a keen awareness of its strengths and limitations. The evaluation arithmetic is sketched below.
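
The reported rates can be made concrete with a small sketch of the underlying arithmetic; the counts below are hypothetical, chosen only to land near the abstract's ballpark figures.

```python
# Hypothetical evaluation counts (the paper's raw numbers are not given here).
annotated_errors = 200   # errors marked in the essays by human annotators
detected_errors = 132    # of those, flagged by the checker
false_alarms = 4         # flags raised on text that was actually correct

# Detection rate: share of annotated errors the checker caught.
detection_rate = detected_errors / annotated_errors
# False alarm rate: share of all flags that were spurious.
false_alarm_rate = false_alarms / (detected_errors + false_alarms)

print(f"detection rate:   {detection_rate:.1%}")   # 66.0%, i.e., "over 65%"
print(f"false alarm rate: {false_alarm_rate:.1%}") # ~2.9%, "very rarely"
```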

The Effect of Overseas Language Training on the Development of Foreign Language Accuracy (해외어학연수의 외국어 정확성 향상에 대한 효과)

  • Cha, Mi-Yang
    • Journal of Industrial Convergence
    • /
    • v.18 no.4
    • /
    • pp.93-99
    • /
    • 2020
  • To explore the effect of overseas language training on the development of foreign language accuracy, this study investigates the errors in English compositions produced by 27 Korean university students who received overseas language training for 15 weeks. For data collection, the students took two tests, a pretest and a posttest, a semester apart. The differences in composition elements and errors between the two tests were examined and statistical analyses were performed. The results showed that while the average length of the compositions and of the sentences increased, the number of sentences decreased in the posttest. More errors were also found in the posttest, where the students tried to construct more complex sentence structures. The students' ability to generate sentences was found to have improved, while their competence in using grammatical elements accurately within sentences did not improve greatly. This implies that, for these students, a 15-week period of overseas language training was not effective in developing grammatical accuracy in a foreign language. The pretest/posttest comparison is sketched below.
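
A pretest/posttest design of this kind is commonly analyzed with a paired test. A minimal sketch, assuming hypothetical per-student error counts and scipy's paired t-test (the study's actual data and choice of statistic are not given here):

```python
from scipy import stats

# Hypothetical per-student error counts on the two compositions.
pretest_errors = [12, 9, 15, 7, 11, 14, 8, 10, 13, 9]
posttest_errors = [13, 11, 16, 8, 10, 15, 9, 12, 14, 11]

# Paired t-test: each student is compared with themselves across tests.
t_stat, p_value = stats.ttest_rel(pretest_errors, posttest_errors)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```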

Cascaded Parsing Korean Sentences Using Grammatical Relations (문법관계 정보를 이용한 단계적 한국어 구문 분석)

  • Lee, Song-Wook
    • The KIPS Transactions: Part B
    • /
    • v.15B no.1
    • /
    • pp.69-72
    • /
    • 2008
  • This study aims to identify dependency structures in Korean sentences with cascaded chunking. In the first stage of the cascade, we find NP chunks and guess grammatical relations (GRs) using Support Vector Machine (SVM) classifiers for all possible modifier-head pairs of chunks, in terms of GR categories such as subject, object, complement, and adverbial. In the following stages, we filter out incorrect modifier-head relations in each cascade for its corresponding GR, using the SVM classifiers and characteristics of the Korean language such as the distance between relations, the no-crossing constraint, and case properties. In an experiment with a parsed and GR-tagged corpus for training the proposed parser, we achieved an overall accuracy of 85.7%. The pair-classification step is sketched below.
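
The first-stage classification can be sketched as follows; the features (modifier case marker, head POS, distance) and the training pairs are hypothetical stand-ins for the paper's actual feature set.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC

# Hypothetical modifier-head chunk pairs with their GR labels.
train_pairs = [
    ({"mod_case": "i/ga", "head_pos": "VP", "distance": 1}, "subject"),
    ({"mod_case": "eul/reul", "head_pos": "VP", "distance": 1}, "object"),
    ({"mod_case": "e", "head_pos": "VP", "distance": 2}, "adverbial"),
    ({"mod_case": "i/ga", "head_pos": "VP", "distance": 3}, "subject"),
]

vec = DictVectorizer()
X = vec.fit_transform([features for features, _ in train_pairs])
y = [label for _, label in train_pairs]

clf = SVC(kernel="linear").fit(X, y)

test = vec.transform([{"mod_case": "eul/reul", "head_pos": "VP", "distance": 2}])
print(clf.predict(test))  # likely ['object'] given these toy examples
```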

A Korean Grammar Checker based on the Trees Resulted from a Full Parser (전체 문장 분석에 기반한 한국어 문법 검사기)

  • 이공주;황선영;김지은
    • Journal of KIISE: Software and Applications
    • /
    • v.30 no.10
    • /
    • pp.992-999
    • /
    • 2003
  • The purpose of a grammar checker is to find grammatically erroneous expressions in a sentence and to provide appropriate suggestions for them. To find those errors, a grammar checker should parse the whole input sentence, which is a highly time-consuming job. For this reason, most Korean grammar checkers adopt a partial parser that can analyze a fragment of a sentence without ambiguity. This paper presents a Korean grammar checker that uses a full parser to find grammatical errors. This approach allows the grammar checker to critique errors between two words in a long-distance relationship within a sentence. As a result, it improves the accuracy of error correction, but possibly at the expense of decreased speed. The Korean grammar checker described in this paper is implemented with 65 rules for checking and correcting grammatical errors, and shows a checking accuracy of 96.49% against a test corpus of 7 million words. A toy illustration of tree-based checking follows below.
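
A toy illustration of rule-based checking over a full parse tree, with a hypothetical tree encoding and a single crude agreement rule (not one of the paper's 65 rules), shows why the whole-sentence view helps with long-distance errors:

```python
# A node is (label, children) for phrases or (label, word) for leaves.
tree = ("S",
        [("NP", [("N", "dogs")]),              # plural subject
         ("VP", [("ADVP", [("ADV", "often")]), # intervening material
                 ("V", "barks")])])            # singular verb

def leaves(node):
    label, body = node
    if isinstance(body, str):
        yield label, body
    else:
        for child in body:
            yield from leaves(child)

def check_agreement(sentence_tree):
    """Flag a plural subject paired with a singular verb, even when other
    material separates them in the surface string (a crude toy heuristic)."""
    words = dict(leaves(sentence_tree))
    subject, verb = words.get("N", ""), words.get("V", "")
    if subject.endswith("s") and verb.endswith("s"):
        return [f"agreement: plural '{subject}' with singular '{verb}'"]
    return []

print(check_agreement(tree))
```

A partial parser that only sees the fragment "often barks" could not connect the verb back to its subject; the full tree makes the pairing explicit, at the cost of parsing the entire sentence.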

Shallow Parsing on Grammatical Relations in Korean Sentences (한국어 문법관계에 대한 부분구문 분석)

  • Lee, Song-Wook;Seo, Jung-Yun
    • Journal of KIISE: Software and Applications
    • /
    • v.32 no.10
    • /
    • pp.984-989
    • /
    • 2005
  • This study aims to identify grammatical relations (GRs) in Korean sentences. The key task is to find the GRs in sentences in terms of such GR categories as subject, object, and adverbial. In solving this problem, we face many ambiguities. We propose a statistical model that first resolves the grammatical relational ambiguity and then finds the correct noun phrase (NP) arguments of given verb phrases (VPs) by using the probabilities of the GRs given the NPs and VPs in sentences. The proposed model uses characteristics of the Korean language such as distance, the no-crossing constraint, and case properties. We estimate the probability of a GR given an NP and a VP with Support Vector Machine (SVM) classifiers. In an experiment with a tree- and GR-tagged corpus for training the model, we achieved overall accuracies of 84.8%, 94.1%, and 84.8% in identifying subject, object, and adverbial relations, respectively. The argument-selection step is sketched below.
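
The argument-selection step can be sketched as follows, with hypothetical probability estimates standing in for the SVM outputs and a simple proximity tie-break as a proxy for the distance cue:

```python
# Hypothetical candidates: (np_index, vp_index, relation, estimated P(GR|NP,VP)).
candidates = [
    (0, 3, "subject", 0.91),
    (1, 3, "subject", 0.22),
    (2, 3, "object", 0.84),
]

def select_arguments(cands, threshold=0.5):
    """Keep one winner per (VP, GR): highest probability first, shorter
    NP-VP distance as the tie-breaker."""
    best = {}
    for np_i, vp_i, gr, prob in cands:
        if prob < threshold:
            continue  # drop low-confidence relations outright
        score = (prob, -(vp_i - np_i))
        key = (vp_i, gr)
        if key not in best or score > best[key][1]:
            best[key] = ((np_i, vp_i, gr), score)
    return [chosen for chosen, _ in best.values()]

print(select_arguments(candidates))  # [(0, 3, 'subject'), (2, 3, 'object')]
```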

Comparing Perceptions of Evaluative Criteria in EFL Writing Between Learner and Instructor Group

  • Shin, You-Sun
    • English Language & Literature Teaching
    • /
    • v.17 no.1
    • /
    • pp.191-208
    • /
    • 2011
  • This quantitative study investigated perceptions of evaluative criteria in L2 writing between two groups in Korea: learners (N=212) and instructors (N=52). Specifically, the purpose of the study is (1) to examine learners' and instructors' perceptions of evaluative criteria in L2 writing and to provide empirical evidence concerning how they respond to a list of such criteria, and (2) ultimately to devise appropriate rating criteria applicable to an EFL context like Korea. Analyses of the evaluative criteria were conducted using factor analysis and yielded the following results: the learner and instructor groups perceived the evaluative criteria differently and weighted them in different ways. For the learner group, the combined elements of grammar and language in use were identified as Factor 1 and mechanics as Factor 2. These results may imply that learners' response patterns are primarily linked to their instructors' writing practice in class, which may largely focus on grammatical knowledge based on lexical use and mechanical accuracy. Similarly, the instructor group identified grammatical knowledge as Factor 1 and lexical use as Factor 2. The first two factors found in both groups indicate that in an EFL context like Korea, the form-then-content way of teaching and learning is still considered more effective in L2 writing than any other method. Taking these perceptual similarities and differences between learners and instructors into consideration, the categories of evaluative criteria in writing include content and organization, grammar, mechanics, language in use, and flow of the essay. The factor-analytic step is sketched below.
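
The factor-analytic step can be sketched with scikit-learn on hypothetical survey responses; the study's actual instrument and data are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical ratings: rows are respondents, columns are three criteria.
rng = np.random.default_rng(0)
n = 100
grammar_factor = rng.normal(3.5, 1.0, n)
responses = np.column_stack([
    grammar_factor + rng.normal(0, 0.3, n),  # "grammar" item
    grammar_factor + rng.normal(0, 0.3, n),  # "language in use" item (correlated)
    rng.normal(3.0, 1.0, n),                 # "mechanics" item (independent)
])

# Two latent factors; the loadings show which items move together.
fa = FactorAnalysis(n_components=2, random_state=0).fit(responses)
print(np.round(fa.components_, 2))  # rows = factors, columns = items
```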

Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa;Nain, Neeta;Nehra, Maninder
    • Journal of Multimedia Information System
    • /
    • v.5 no.3
    • /
    • pp.147-154
    • /
    • 2018
  • Natural language processing (NLP) is an emerging research area in which we study how machines can be used to perceive and manipulate text written in natural languages. We can perform different tasks on natural languages by analyzing them through various annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis. These annotation tasks depend on the morphological structure of a particular natural language. The focus of this work is part-of-speech (POS) tagging for the Hindi language. Part-of-speech tagging, also known as grammatical tagging, is the process of assigning a grammatical category to each word of a given text. These grammatical categories can be noun, verb, time, date, number, etc. Hindi is the most widely used and official language of India, and is among the top five most spoken languages of the world. For English and other languages, a diverse range of POS taggers is available, but these taggers cannot be applied to Hindi, as Hindi is one of the most morphologically rich languages and its morphological structure differs significantly from theirs. Thus, in this work, a POS tagger is presented for the Hindi language. For Hindi POS tagging, a hybrid approach is presented that combines probability-based and rule-based methods. For tagging known words, a unigram probability model is used, whereas for tagging unknown words, various lexical and contextual features are used. Various finite state automata are constructed to express the rules, which are then implemented as regular expressions. A tagset containing 29 standard part-of-speech tags is also prepared for this task, including two unique tags, a date tag and a time tag, which support all common formats. Regular expressions implement all pattern-based tags, such as time, date, number, and special symbols. The aim of the presented approach is to increase the correctness of automatic Hindi POS tagging while bounding the requirement for a large human-made corpus: the probability-based model increases automatic tagging coverage, and the rule-based model bounds the need for an already-trained corpus. The approach is based on a very small labeled training set (around 9,000 words) and yields a best precision of 96.54% and an average precision of 95.08%, with a best accuracy of 91.39% and an average accuracy of 88.15%. The hybrid known/unknown-word split is sketched below.
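
The hybrid known/unknown-word split can be sketched as follows; the toy tagged corpus and tag names are hypothetical stand-ins for the paper's 29-tag tagset.

```python
import re
from collections import Counter, defaultdict

# Toy labeled corpus for the probability-based (unigram) path.
tagged_corpus = [("ram", "NOUN"), ("khata", "VERB"), ("ram", "NOUN"),
                 ("acha", "ADJ"), ("khata", "VERB")]

counts = defaultdict(Counter)
for word, tag_label in tagged_corpus:
    counts[word][tag_label] += 1
# Unigram model: assign each known word its most frequent tag.
unigram = {w: c.most_common(1)[0][0] for w, c in counts.items()}

# Rule-based path: regular expressions for pattern-based tags.
PATTERNS = [
    (re.compile(r"^\d{1,2}[/-]\d{1,2}[/-]\d{2,4}$"), "DATE"),
    (re.compile(r"^\d{1,2}:\d{2}$"), "TIME"),
    (re.compile(r"^\d+$"), "NUM"),
]

def tag(word):
    if word in unigram:          # known word: probability-based tagging
        return unigram[word]
    for pattern, t in PATTERNS:  # unknown word: fall back to the rules
        if pattern.match(word):
            return t
    return "UNK"

print([tag(w) for w in ["ram", "12/05/2018", "10:30", "42", "naya"]])
# -> ['NOUN', 'DATE', 'TIME', 'NUM', 'UNK']
```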

The Grammatical Structure of Protein Sequences

  • Bystroff, Chris
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2000.11a
    • /
    • pp.28-31
    • /
    • 2000
  • We describe a hidden Markov model, HMMSTR, for general protein sequences based on the I-sites library of sequence-structure motifs. Unlike the linear HMMs used to model individual protein families, HMMSTR has a highly branched topology and captures recurrent local features of protein sequences and structures that transcend protein family boundaries. The model extends the I-sites library by describing the adjacencies of different sequence-structure motifs as observed in the database, and achieves a great reduction in parameters by representing overlapping motifs in a much more compact form. The HMM attributes a considerably higher probability to coding sequence than an equivalent dipeptide model does, predicts secondary structure with an accuracy of 74.6% and backbone torsion angles better than any previously reported method, and predicts the structural context of beta strands and turns with an accuracy that should be useful for tertiary structure prediction. HMMSTR has been incorporated into a public, fully automated protein structure prediction server. The underlying forward computation is sketched below.
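
The core computation behind scoring a sequence with such a model is the forward algorithm. A minimal sketch with a hypothetical two-state, three-symbol HMM (vastly smaller than HMMSTR's branched, motif-based topology):

```python
import numpy as np

# Hypothetical HMM parameters: 2 hidden states, 3 observable symbols.
start = np.array([0.6, 0.4])            # P(initial state)
trans = np.array([[0.7, 0.3],
                  [0.4, 0.6]])          # P(next state | current state)
emit = np.array([[0.5, 0.4, 0.1],
                 [0.1, 0.3, 0.6]])      # P(symbol | state)

def sequence_log_prob(observations):
    """Forward algorithm with per-step rescaling to avoid underflow."""
    alpha = start * emit[:, observations[0]]
    log_prob = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for obs in observations[1:]:
        alpha = (alpha @ trans) * emit[:, obs]
        log_prob += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_prob

print(sequence_log_prob([0, 2, 1, 0]))  # log P(sequence | model)
```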

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning sentiment analysis of English texts, the natural language sentences in training and test datasets are usually converted into sequences of word vectors before being fed into the models. Here, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors; one is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data, which have been widely used in sentiment analysis of reviews from fields such as restaurants, movies, laptops, and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme is the smallest meaningful unit of a language, and a word consists of one or more morphemes; for example, the word '예쁘고' consists of the morphemes '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use morpheme vectors as input to a deep learning model rather than the word vectors mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions arise at this point. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it appropriate to apply a typical word vector model, which relies primarily on the form of words, to Korean, with its high ratio of homonyms? Will text preprocessing, such as correcting spelling or spacing errors, affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews full of grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize these issues in three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can deep learning reach a satisfactory level of classification accuracy in Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting them and then compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive morpheme vectors, we use data both from the same domain as the target and from another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, the latter arguably corresponding to Google's news data. The six primary sets of morpheme vectors constructed in this study differ along three criteria. First, they come from two data sources: Naver News, of high grammatical correctness, and Naver Shopping's cosmetics product reviews, of low grammatical correctness. Second, they differ in the degree of preprocessing: either sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived with a CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. The results suggest that using same-domain text even with lower grammatical correctness, performing spelling and spacing corrections in addition to sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for including a morpheme appear to have no definite influence on classification accuracy. A minimal sketch of the CBOW derivation step follows.
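
The morpheme-vector derivation can be sketched with gensim, using the study's stated CBOW settings (window 5, dimension 300) but hypothetical morpheme-split sentences in place of the review and news corpora:

```python
from gensim.models import Word2Vec

# Hypothetical sentences already split into morphemes; in the study these
# come from a Korean morphological analysis of reviews or news text.
morpheme_sentences = [
    ["이", "제품", "은", "정말", "예쁘", "고", "좋", "다"],
    ["배송", "이", "빠르", "고", "가격", "도", "싸", "다"],
]

model = Word2Vec(
    sentences=morpheme_sentences,
    vector_size=300,  # the study's vector dimension
    window=5,         # the study's context window
    sg=0,             # sg=0 selects CBOW rather than skip-gram
    min_count=1,      # keep every morpheme in this toy corpus
)

vector = model.wv["예쁘"]  # a 300-dimensional morpheme vector
print(vector.shape)
```

The resulting vectors would then initialize the embedding layer of the non-static CNN, which continues to update them during sentiment classification training.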