• Title/Summary/Keyword: same form-morpheme

Search Results: 3

A Comparative Study of the Trisyllabic Words with same form-morpheme and same meaning in Modern Chinese and the Trisyllabic Korean Words Written in Chinese Characters with same form-morpheme and same meaning (현대 중국어의 삼음사(三音詞)와 현용 한국 삼음절(三音節) 한자어(漢字語)의 동형(同形) 동소어(同素語) 비교 연구)

  • Choe, Geum Dan
    • Cross-Cultural Studies
    • /
    • v.25
    • /
    • pp.743-773
    • /
    • 2011
  • In this research, the writer has done a comparative analysis of 4,791 trisyllabic modern Chinese words from "a dictionary for trisyllabic modern Chinese word" and the corresponding Korean words written in Chinese characters among the 170,000 entries collected in "new age new Korean dictionary". As a result, we have a total of 407 corresponding pairs of the following 3 types: 1) Chinese : Korean 3(2) : 3 syllable Chinese characters with completely the same form-morpheme and the same meaning, use, and class (376 pairs, 92.38% of 407); 2) Chinese : Korean 3 : 3 syllable Chinese characters with completely the same form-morpheme and partly the same meaning, use, and class (18 pairs, 4.42% of 407); 3) Chinese : Korean 3 : 3 syllable Chinese characters with completely the same form-morpheme but different meaning, use, and class (13 pairs, 3.19% of 407).
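The reported breakdown can be verified with a few lines of arithmetic; the short labels below are paraphrases of the three correspondence types, not the paper's own wording:

```python
# Sanity-check the reported proportions of the 407 corresponding pairs.
total = 407
counts = {
    "same form-morpheme, same meaning/use/class": 376,
    "same form-morpheme, partly same meaning/use/class": 18,
    "same form-morpheme, different meaning/use/class": 13,
}
assert sum(counts.values()) == total  # the three types partition the 407 pairs
for label, n in counts.items():
    print(f"{label}: {n} pairs ({100 * n / total:.2f}%)")
# → 92.38%, 4.42%, and 3.19%, matching the figures in the abstract
```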

A Comparative Study of New HSK and Entry-Level of TOPIK Written in Sino-Korean in the same form and morpheme of vocabularies (신(新)HSK와 초급용(初級用) TOPIK 어휘 중의 중한(中韓) 동형(同形) 동소(同素) 한자(漢字) 어휘의 비교 연구)

  • Choe, Geum Dan
    • Cross-Cultural Studies
    • /
    • v.30
    • /
    • pp.187-222
    • /
    • 2013
  • In this study, 702 Sino-Korean words were selected from the 1,560 entry-level TOPIK standard vocabulary items, accounting for 45% of the whole TOPIK vocabulary. In addition, the same form-morpheme words in Sino-Korean were sorted out by comparing them with the 5,000 NEW HSK vocabulary words in terms of Sino-Korean morpheme, array position of morphemes, meaning, and usage. These are categorized into three types: completely the same form-morpheme and the same meaning, use, and class (189 pairs); completely the same form-morpheme and partly the same meaning, use, and class (28 pairs); and completely the same form-morpheme but different meaning, use, and class (10 pairs). The first type, which accounts for 83.26% of the matched pairs, is used in exactly the same way in both Chinese and Korean. Through an accurate understanding of these vocabularies, either Chinese-speaking Korean learners or Korean-speaking Chinese learners can apply the words of their mother tongue to the acquisition of the target language and gain more effective learning methods for language proficiency tests.
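The two headline percentages follow directly from the counts given in the abstract (83.26% is the share of fully identical pairs among the 189 + 28 + 10 = 227 matched pairs):

```python
# Check the reported figures from the counts in the abstract.
sino_korean, topik_total = 702, 1560
pairs_by_type = {"fully identical": 189, "partly same": 28, "different": 10}
matched_pairs = sum(pairs_by_type.values())  # 227 matched pairs in total

print(f"Sino-Korean share of TOPIK vocabulary: {100 * sino_korean / topik_total:.0f}%")  # 45%
print(f"fully identical share of matched pairs: {100 * pairs_by_type['fully identical'] / matched_pairs:.2f}%")  # 83.26%
```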

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis to grasp the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In the sentiment analysis of English texts by deep learning, natural language sentences included in training and test datasets are usually converted into sequences of word vectors before being entered into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting sentences on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. They have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, cameras, etc. Unlike in English, morphemes play an essential role in sentiment analysis and sentence-structure analysis in Korean, a typical agglutinative language with highly developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use a 'morpheme vector' as the input to a deep learning model rather than the 'word vector' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word-vector derivation mechanism to sentences divided into their constituent morphemes. This raises several questions. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model?
Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying various deep learning models to Korean texts. As a starting point, we summarized these issues as three central research questions. First, which is more effective as the initial input to a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we get a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? As an approach to these research questions, we generate various types of morpheme vectors reflecting the research questions and then compare the classification accuracy through a non-static CNN (Convolutional Neural Network) model that takes in the morpheme vectors. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of the following three criteria.
First, they come from two data sources: Naver News, with high grammatical correctness, and Naver Shopping cosmetics product reviews, with low grammatical correctness. Second, they are distinguished by the degree of data preprocessing, namely, only splitting sentences, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: whether the morphemes themselves are entered, or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the considered range of POS tags, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. It appears that utilizing same-domain text even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. The POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum-frequency standard for including a morpheme seem to have no definite influence on the classification accuracy.
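As an illustration of the setup this abstract describes, the sketch below shows how CBOW-style (context, target) training pairs are formed from a morpheme-segmented sentence. The function name, the toy window size in the demo call, and the example segmentation are hypothetical; the actual study uses a full Word2Vec CBOW implementation with a window of 5 and dimension 300 as stated above.

```python
# Minimal sketch: building CBOW (context, target) pairs from a morpheme
# sequence, as a word-vector model would see them. This only constructs the
# training pairs; learning the 300-dimensional vectors is left to a real
# Word2Vec implementation.

def cbow_pairs(morphemes, window=5):
    """Return (context, target) pairs, where context is the list of
    morphemes within `window` positions on either side of the target."""
    pairs = []
    for i, target in enumerate(morphemes):
        context = morphemes[max(0, i - window):i] + morphemes[i + 1:i + 1 + window]
        pairs.append((context, target))
    return pairs

# "예쁘고 좋아요" segmented into morphemes (illustrative segmentation:
# adjective stem + connective ending, adjective stem + final ending).
morphemes = ["예쁘", "고", "좋", "아요"]
for context, target in cbow_pairs(morphemes, window=2):
    print(target, "<-", context)
```

In the POS-tag-attached variant the abstract describes, each input token would instead carry its tag, e.g. '예쁘/VA' (a Sejong-style adjective tag), so that homonymous morphemes with different parts of speech receive distinct vectors.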