• Title/Summary/Keyword: Korean word-spacing

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep-learning-based sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed to the models. Word vectors here generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors; one is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data, which have been widely used in sentiment analysis of reviews from fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence-structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme is the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, the word '예쁘고' consists of the morphemes '예쁘' (adjective stem) and '고' (connective ending). Given the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word-vector derivation method to sentences divided into their constituent morphemes. Several questions then arise. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors so as to improve the classification accuracy of a deep learning model?
Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, with its high ratio of homonyms? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when deriving morpheme vectors from Korean product reviews full of grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are among the first encountered when applying deep learning models to Korean texts. As a starting point, we summarize them in three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme-vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can deep learning achieve a satisfactory level of classification accuracy in Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting them and compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive the morpheme vectors, we use data from the same domain as the target and from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of the following three criteria.
First, they come from two data sources: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing: either sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed to the word vector model: either the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency for a morpheme to be included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model, with a context window of 5 and a vector dimension of 300. The results suggest that using text from the same domain even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. POS-tag attachment, devised for the high proportion of homonyms in Korean, and the minimum-frequency threshold for including a morpheme appear to have no definite influence on classification accuracy.
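The two input forms the study compares, bare morphemes versus morphemes with POS tags attached, can be illustrated with a minimal sketch. The analyzed morpheme pairs are hard-coded here for illustration; in practice a Korean morphological analyzer would produce them, and the tag names (VA, EC) are assumed Sejong-style labels, not taken from the paper.

```python
# Sketch of the two token forms fed to a word-vector model: bare
# morphemes vs. morphemes with POS tags attached (the latter devised
# to keep homonyms apart).

def to_tokens(analyzed, attach_pos=False):
    """Turn (morpheme, POS) pairs into tokens for a word-vector model."""
    if attach_pos:
        return [f"{m}/{p}" for m, p in analyzed]
    return [m for m, _ in analyzed]

# '예쁘고' = '예쁘' (adjective stem) + '고' (connective ending)
analyzed = [("예쁘", "VA"), ("고", "EC")]
bare = to_tokens(analyzed)                     # ['예쁘', '고']
tagged = to_tokens(analyzed, attach_pos=True)  # ['예쁘/VA', '고/EC']
print(bare, tagged)
```

Either token sequence can then be handed to a CBOW-style model; the study's finding is that the tagged form gave no definite accuracy gain.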

Korean Word Spacing System Using Syllable N-Gram and Word Statistic Information (음절 N-Gram과 어절 통계 정보를 이용한 한국어 띄어쓰기 시스템)

  • Choi, Sung-Ja;Kang, Mi-Young;Heo, Hee-Keun;Kwon, Hyuk-Chul
    • Annual Conference on Human and Language Technology
    • /
    • 2003.10d
    • /
    • pp.47-53
    • /
    • 2003
  • This paper proposes an automatic Korean word-spacing system that uses syllable n-grams and eojeol statistics obtained from a refined large-scale corpus. The optimal spacing positions within a sentence are determined by the Viterbi algorithm. To mitigate the data-sparseness and training-corpus dependency problems inherent in statistics-based approaches, we expand the corpus, use parameters obtained through experiments, and find the longest-matching viable prefixes to add to the eojeol list. The training corpus used in this study consists of 33,641,511 eojeols and covers both spoken and written language.

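The Viterbi decoding described above can be sketched in miniature: tag each boundary between adjacent syllables as space or no-space. This is not the paper's model, whose statistics come from a 33-million-eojeol corpus; the bigram probabilities and uniform transitions below are invented, so the example shows only the decoding mechanics.

```python
import math

# Toy Viterbi word spacer: tag each inter-syllable boundary as SPACE (1)
# or NO_SPACE (0). Emission scores come from an invented
# P(space | syllable bigram); transitions are uniform here.

def viterbi_space(syllables, p_space, p_trans):
    n = len(syllables) - 1                      # number of boundaries
    if n <= 0:
        return "".join(syllables)

    def emit(i, tag):                           # log-score for boundary i
        p = p_space.get((syllables[i], syllables[i + 1]), 0.1)
        return math.log(p if tag == 1 else 1.0 - p)

    score = {t: emit(0, t) for t in (0, 1)}     # best score ending in tag t
    back = []                                   # backpointers per boundary
    for i in range(1, n):
        new, ptr = {}, {}
        for t in (0, 1):
            prev = max((0, 1), key=lambda s: score[s] + math.log(p_trans[(s, t)]))
            ptr[t] = prev
            new[t] = score[prev] + math.log(p_trans[(prev, t)]) + emit(i, t)
        score, back = new, back + [ptr]

    tags = [max((0, 1), key=lambda t: score[t])]
    for ptr in reversed(back):                  # backtrack the best path
        tags.append(ptr[tags[-1]])
    tags.reverse()

    out = [syllables[0]]
    for tag, syl in zip(tags, syllables[1:]):
        out.append(" " + syl if tag else syl)
    return "".join(out)

p_space = {("는", "학"): 0.9, ("에", "간"): 0.9}           # invented statistics
p_trans = {(s, t): 0.5 for s in (0, 1) for t in (0, 1)}  # uniform transitions
result = viterbi_space(list("나는학교에간다"), p_space, p_trans)
print(result)  # 나는 학교에 간다
```

With non-uniform transition statistics the dynamic program genuinely matters; with the uniform toy transitions it reduces to a per-boundary argmax.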
Automatic Korean Word Spacing using Structural SVM (Structural SVM을 이용한 한국어 자동 띄어쓰기)

  • Lee, Chang-Ki;Kim, Hyun-Ki
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2012.06b
    • /
    • pp.270-272
    • /
    • 2012
  • This paper proposes a Korean word-spacing method using a structural SVM for sentences whose spacing has been entirely lost. The structural SVM extends the conventional binary-classification SVM to problems such as sequence labeling, and shows performance comparable to or higher than CRFs, which are known to excel in this area. We used about 26 million eojeols of raw Sejong corpus text as training data and about 290,000 eojeols of the ETRI POS-tagged corpus as evaluation data. The evaluation showed an accuracy of 99.01% at the syllable level and 95.47% at the eojeol level.
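The sequence-labeling formulation that a structural SVM (or CRF) operates on can be sketched independently of the learner: label each syllable B if it begins an eojeol and I if it continues one. This B/I encoding is a common convention assumed here, not a detail taken from the paper.

```python
# Encode a correctly spaced sentence into per-syllable B/I labels (the
# supervision a sequence labeler trains on), and decode labels back.

def encode(spaced):
    """Spaced sentence -> (syllables, labels)."""
    syllables, labels = [], []
    for eojeol in spaced.split():
        for j, syl in enumerate(eojeol):
            syllables.append(syl)
            labels.append("B" if j == 0 else "I")
    return syllables, labels

def decode(syllables, labels):
    """(syllables, labels) -> spaced sentence."""
    parts = []
    for syl, lab in zip(syllables, labels):
        if lab == "B" and parts:
            parts.append(" ")
        parts.append(syl)
    return "".join(parts)

syls, labs = encode("아버지가 방에 들어가신다")
print(labs)                    # per-syllable B/I tags
print(decode(syls, labs))      # round-trips to the original sentence
```

Restoring spacing then amounts to predicting the label sequence for an unspaced syllable sequence and running `decode`.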

Improving Korean Word-Spacing System Using Stochastic Information (통계 정보를 이용한 한국어 자동 띄어쓰기 시스템의 성능 개선)

  • 최성자;강미영;권혁철
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.04b
    • /
    • pp.883-885
    • /
    • 2004
  • This paper proposes a method for improving the performance of a Korean automatic word-spacing system built from eojeol unigram and syllable bigram statistics extracted from a large corpus. When Korean documents are processed with techniques that rely mainly on eojeol statistics, a data-sparseness problem arises from the agglutinative character of Korean. To overcome this, this paper proposes estimating additional candidate eojeols by using the probability of a space between syllable bigrams. Evaluating the improved system on various test data yielded an average eojeol-level accuracy of 93.76%.

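The syllable-bigram spacing probability referred to above can be estimated from spaced text by counting, for each adjacent syllable pair, how often a space sits between them. A minimal sketch on an invented two-sentence corpus (real systems would use millions of eojeols and smoothing):

```python
from collections import Counter

def boundaries(sent):
    """Unspaced syllables plus the boundary indices that carried a space."""
    syls, spaced = [], set()
    for ch in sent:
        if ch == " ":
            spaced.add(len(syls) - 1)   # space follows the last syllable
        else:
            syls.append(ch)
    return syls, spaced

def bigram_space_probs(corpus):
    """Relative frequency of a space between each adjacent syllable pair."""
    space, total = Counter(), Counter()
    for sent in corpus:
        syls, spaced = boundaries(sent)
        for k in range(len(syls) - 1):
            pair = (syls[k], syls[k + 1])
            total[pair] += 1
            if k in spaced:
                space[pair] += 1
    return {p: space[p] / total[p] for p in total}

probs = bigram_space_probs(["나는 간다", "나는 학교에 간다"])
print(probs[("나", "는")])   # 0.0 -- never spaced in the toy corpus
print(probs[("는", "간")])   # 1.0 -- always spaced in the toy corpus
```

Such a table is exactly the kind of statistic a Viterbi spacer or an eojeol-candidate generator can consume.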
Automatic Word Spacer based on Syllable Bi-gram Model using Word Spacing Information of an Input Sentence (입력 문장의 띄어쓰기를 고려한 음절 바이그램 띄어쓰기 모델)

  • Cho, Han-Cheol;Lee, Do-Gil;Rim, Hae-Chang
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2006.06a
    • /
    • pp.67-71
    • /
    • 2006
  • Most of the automatic word-spacing correction models proposed so far remove all spaces from the input sentence before performing correction. When the input sentence is already well spaced, this approach can produce a corrected sentence worse than the input. To solve this problem, this paper proposes an automatic word-spacing correction model that takes the spacing of the input sentence into account. The model showed about an 8% performance improvement when 5% of the input's syllable-level spacing was erroneous, and about a 5% improvement at a 10% error rate.

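The idea above, keeping the input's own spacing as evidence rather than stripping it, can be sketched as a simple interpolation between the observed spacing and a bigram model. The weight `lam`, the default probability, and the bigram values are all invented for illustration; the paper's actual model is more involved than this.

```python
# Blend the input sentence's existing spaces with a toy syllable-bigram
# spacing probability instead of discarding them.

def respace(sent, p_space, lam=0.4):
    syls, had_space = [], []
    for ch in sent:
        if ch == " ":
            had_space[-1] = 1.0          # space observed after last syllable
        else:
            syls.append(ch)
            had_space.append(0.0)
    out = [syls[0]]
    for k in range(len(syls) - 1):
        model = p_space.get((syls[k], syls[k + 1]), 0.2)
        blended = lam * had_space[k] + (1 - lam) * model
        out.append(" " + syls[k + 1] if blended > 0.5 else syls[k + 1])
    return "".join(out)

# The input keeps its correct space after '나는' and gains the missing
# space before '간다' suggested by the model.
fixed = respace("나는 학교에간다", {("에", "간"): 0.9})
print(fixed)  # 나는 학교에 간다
```

The interpolation is what prevents a well-spaced input from being degraded: observed spaces contribute positive evidence instead of being thrown away.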
Improving Word Spacing Correction Methods for Efficient Text Processing (효율적인 문서처리를 위한 띄어쓰기 교정 기법 개선)

  • 강미영;권혁철
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04c
    • /
    • pp.486-488
    • /
    • 2003
  • The word-spacing errors that appear most frequently in Korean documents cause semantic and syntactic ambiguities or errors. This paper aims to extend and improve the techniques, implemented in the Korean spelling and grammar checker (2.2) developed by the AI Lab at Pusan National University on the basis of partial parsing, for correcting a single spacing error within an eojeol and spacing errors between eojeols, and to develop a technique for correcting multiple spacing errors within an eojeol.

Noun and affix extraction using conjunctive information (결합정보를 이용한 명사 및 접사 추출)

  • 서창덕;박인칠
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.5
    • /
    • pp.71-81
    • /
    • 1997
  • This paper proposes noun and affix extraction methods that use conjunctive information to build an automatic indexing system through morphological and syntactic analysis. Korean has a peculiar word-spacing rule, different from that of other languages, and the conjunctive information extracted from this rule can reduce the number of multiple part-of-speech candidates at minimal cost. The proposed algorithms also solve the problem of a single word being split by a newline character. We show the efficiency of the proposed algorithms through the morphological analysis process.

Recognition of Word Spacing Errors Using Syllable Bigram Features (음절 bigram 특성을 이용한 띄어쓰기 오류의 인식)

  • 강승식
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2000.06a
    • /
    • pp.85-88
    • /
    • 2000
  • We extracted co-occurrence frequency information for adjacent syllables from a large corpus and investigated the bigram syllable characteristics of Korean. These characteristics are expected to be useful in various applications: automatic spacing of documents whose spacing has been ignored, judging whether an eojeol is a spacing-error word, and correcting spelling-error words in a spell checker. In this paper, we experimented with applying the syllable bigram characteristics to automatic word spacing and to judging whether an input eojeol is a spacing-error word. The experimental results confirmed that the bigram syllable characteristics can be used very effectively.

Recognizing Unknown Words and Correcting Spelling Errors as Preprocessing for Korean Information Processing Systems (한국어 정보처리 시스템의 전처리를 위한 미등록어 추정 및 철자 오류의 자동 교정)

  • Park, Bong-Rae;Rim, Hae-Chang
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.10
    • /
    • pp.2591-2599
    • /
    • 1998
  • In this paper, we propose a method of recognizing unknown words and correcting spelling errors (including spacing errors) to increase the performance of Korean information processing systems. Unknown words are recognized through comparative analysis of two or more morphologically similar eojeols (spacing units in Korean) containing the same unknown-word candidate. Spacing and spelling errors are corrected using lexicalized rules automatically extracted from a very large raw corpus. The extraction of the lexicalized rules is based on morphological and contextual similarities between error eojeols and their correction eojeols confirmed to be used in the corpus. Experimental results show that our system can recognize unknown words with an accuracy of 98.9%, and can correct spacing errors and spelling errors with accuracies of 98.1% and 97.1%, respectively.

Automatic Error Correction System for Erroneous SMS Strings (SMS 변형된 문자열의 자동 오류 교정 시스템)

  • Kang, Seung-Shik;Chang, Du-Seong
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.6
    • /
    • pp.386-391
    • /
    • 2008
  • Spoken-style word errors that violate grammatical or writing rules occur frequently in communication environments such as mobile phones and messengers. These unexpected errors cause problems for language processing systems in applications such as speech recognition and text-to-speech conversion. In this paper, we propose and implement an automatic correction system for the ill-formed words and word-spacing errors in SMS sentences that have been the major cause of poor accuracy. We experimented with three methods of constructing the word-correction dictionary and evaluated their results: (1) manual construction of error words from the vocabulary list of ill-formed communication language, (2) automatic construction of the error dictionary from a manually constructed corpus, and (3) a context-dependent method of automatic construction of the error dictionary.
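Method (1) above, a manually built dictionary of ill-formed variants, can be sketched as a token-level lookup. The dictionary entries are invented examples of Korean SMS shorthand, not data from the paper, and a real system would also handle the spacing errors the abstract mentions.

```python
# Toy error-word dictionary mapping ill-formed SMS forms to standard
# Korean, applied token by token; unknown tokens pass through unchanged.

SMS_DICT = {"ㄱㅅ": "감사", "방가": "반가워", "셤": "시험"}

def correct_sms(sent):
    return " ".join(SMS_DICT.get(tok, tok) for tok in sent.split())

print(correct_sms("셤 끝나서 방가"))  # 시험 끝나서 반가워
```

The context-dependent variant (method 3) would key the lookup on surrounding tokens as well, so that a variant maps to different standard forms in different contexts.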