• Title/Summary/Keyword: Korean emotion features (한국어 감정 자질)


Using CNN-LSTM for Effective Application of Dialogue Context to Emotion Classification (CNN-LSTM을 이용한 대화 문맥 반영과 감정 분류)

  • Shin, Dong-Won; Lee, Yeon-Soo; Jang, Jung-Sun; Rim, Hae-Chang
    • Annual Conference on Human and Language Technology / 2016.10a / pp.141-146 / 2016
  • In a dialogue system, classifying the emotion embedded in a user's utterance is essential for the system to provide appropriate responses and services. For emotion classification within dialogue, this study proposes a CNN-LSTM deep neural network architecture that automatically learns emotion features expressed both directly and indirectly and effectively reflects the dialogue context in which an emotion persists. In addition, pre-training on a large colloquial corpus alleviates the data scarcity problem. Experimental results show that the proposed method improves overall performance compared with an SVM and with simple RNN and CNN architectures, and that it classifies emotional utterances particularly well.

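The abstract above describes a CNN over utterance tokens feeding an LSTM over the utterance sequence. Below is a minimal, hypothetical sketch of that kind of architecture in PyTorch; the layer sizes, label set, and toy input are illustrative assumptions, and the paper's pre-training on a colloquial corpus is not reproduced.

```python
# A minimal sketch of a CNN-LSTM utterance-level emotion classifier, assuming
# PyTorch; hyperparameters (vocab size, dimensions, label count) are hypothetical.
import torch
import torch.nn as nn

class CNNLSTMEmotionClassifier(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, conv_filters=64,
                 hidden_dim=128, num_labels=7):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # CNN extracts local n-gram emotion features from each utterance.
        self.conv = nn.Conv1d(emb_dim, conv_filters, kernel_size=3, padding=1)
        # LSTM carries the emotion state across the dialogue context.
        self.lstm = nn.LSTM(conv_filters, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def forward(self, dialogue):
        # dialogue: (batch, num_utterances, utterance_len) tensor of token ids
        b, u, t = dialogue.shape
        x = self.embedding(dialogue.view(b * u, t))       # (b*u, t, emb)
        x = torch.relu(self.conv(x.transpose(1, 2)))      # (b*u, filters, t)
        x = torch.max(x, dim=2).values.view(b, u, -1)     # max-pool per utterance
        context, _ = self.lstm(x)                         # (b, u, hidden)
        return self.classifier(context)                   # logits per utterance

# Usage: classify every utterance in a toy two-utterance dialogue batch.
model = CNNLSTMEmotionClassifier()
toy_batch = torch.randint(1, 10000, (1, 2, 12))
print(model(toy_batch).shape)  # torch.Size([1, 2, 7])
```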

Music Recommender System based on Lyrics Information (가사정보를 이용한 음악 추천 시스템)

  • Chang, Geun-Tak; Seo, Jung-Yun
    • Annual Conference on Human and Language Technology / 2010.10a / pp.42-45 / 2010
  • This study analyzes the lyrics of Korean popular songs at the morpheme level and proposes a system that classifies the emotion of each song based on this information and recommends songs accordingly. To build the system, the lyrics of the collected songs are morphologically analyzed, each morpheme is used as a feature, and the classifier is trained with a maximum entropy (ME) model. The performance of the trained classifier is analyzed according to the number of features, and the recommendation system built on the classifier is evaluated on how accurately it recommends songs for randomly generated data sets.

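As a rough illustration of the approach in the abstract above (morphemes as features, a maximum entropy classifier), here is a minimal sketch using scikit-learn's multinomial logistic regression as the ME model; the whitespace-tokenized toy lyrics and labels are stand-ins for real morphologically analyzed lyrics.

```python
# A minimal maximum-entropy lyric emotion classifier, assuming scikit-learn.
# Morpheme analysis is replaced here by whitespace tokenization; the toy
# lyrics and labels are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

lyrics = ["눈물 이 흐르다 슬프다", "행복 하다 웃다 사랑",
          "이별 아프다 그립다", "기쁘다 설레다 좋다"]
labels = ["sad", "happy", "sad", "happy"]

# Each token (standing in for a morpheme) becomes one binary feature.
model = make_pipeline(
    CountVectorizer(tokenizer=str.split, token_pattern=None, binary=True),
    LogisticRegression(max_iter=1000),  # MaxEnt = multinomial logistic regression
)
model.fit(lyrics, labels)
print(model.predict(["그립다 눈물 슬프다"]))  # expected: ['sad'] on this toy data
```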

A Comparative Study on Joy in Russian and Korean (기쁨의 의미연구 - 러시아어와 한국어의 비교를 중심으로 -)

  • Kim, Jung-Il
    • Cross-Cultural Studies / v.41 / pp.113-140 / 2015
  • This paper explains how the basic and instinctive emotion "joy" is verbally expressed in Russian and Korean. In particular, the main concern of this paper is the cultural context with which the emotion "joy" is associated and the ways in which expressions of "joy" acquire a wide range of uses. The semantic and pragmatic characteristics of these uses can be explained through the cultural and historical backgrounds of both languages. In Russian, joy has two variants, radost' and udovol'stvie. It is difficult to draw a sharp distinction between them; however, the former is mainly connected with mental, spiritual, cultural, and religious contexts, whereas the latter is mainly related to concrete, instantaneous contexts and daily life. The former conveys a broader, more spiritual, and macroscopic attitude toward a situation, whereas the latter conveys a microscopic and instantaneous attitude. The Korean equivalents, 기쁨 and 즐거움, show an opposition very similar to that of the Russian pair. The former is based on a logical and causal relation between an anticipation or desire and the current situation, whereas the latter is based on the speaker's participation in a situation and has a very instantaneous character, like udovol'stvie in Russian. Thus, it can reasonably be argued that Russian udovol'stvie and Korean 즐거움 share many semantic properties. In brief, both languages have two distinct variants that form a privative opposition in expressing the emotional concept of joy.

Sentiment Categorization of Korean Customer Reviews using CRFs (CRFs를 이용한 한국어 상품평의 감정 분류)

  • Shin, Junsoo; Lee, Juhoo; Kim, Harksoo
    • Annual Conference on Human and Language Technology / 2008.10a / pp.58-62 / 2008
  • Customer reviews are one of the things shoppers consider when purchasing products online, but reading through them individually takes considerable time. To reduce this burden, this paper proposes a system that classifies opinions in online customer reviews as positive, negative, or neutral. The proposed system is based on a CRF machine learning model and uses connective endings, morpheme unigrams, and sliding-window morpheme bigrams as features. For the experiments, 561 reviews were collected from the monitor category of a price comparison site; 465 reviews were used for training and 96 for testing. The proposed system achieved an accuracy of about 79%. As an additional experiment, a kappa test was conducted to see how closely the system's performance matched that of humans: the kappa coefficient between humans was 0.6415, and the average kappa coefficient between the proposed system and humans was 0.5976. In conclusion, the proposed system performs somewhat below human level but at a comparable degree.

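The features named in the abstract above (morpheme unigrams and sliding-window morpheme bigrams) and the kappa agreement test can be sketched as follows. This is an illustrative fragment only: the CRF model itself, the connective-ending features, and the actual review corpus are omitted, and the label sequences below are hypothetical.

```python
# A minimal sketch of the feature ideas and kappa test described above,
# assuming scikit-learn; the morpheme sequence and labels are toy examples.
from sklearn.metrics import cohen_kappa_score

def sliding_bigrams(morphemes, window=3):
    """Pair each morpheme with every other morpheme inside a small window."""
    feats = []
    for i, m in enumerate(morphemes):
        for j in range(i + 1, min(i + 1 + window, len(morphemes))):
            feats.append((m, morphemes[j]))
    return feats

def features(morphemes):
    # Morpheme unigram features plus windowed bigram features, as string keys.
    return [f"uni={m}" for m in morphemes] + \
           [f"bi={a}|{b}" for a, b in sliding_bigrams(morphemes)]

print(features(["화질", "이", "좋", "지만", "비싸", "다"])[:5])

# Cohen's kappa between two hypothetical label sequences (human vs. system).
human  = ["pos", "neg", "pos", "neu", "neg", "pos"]
system = ["pos", "neg", "neu", "neu", "neg", "pos"]
print(cohen_kappa_score(human, system))
```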

A Vocabulary Analysis and Improvement Plan of Korean textbooks for Chinese learners: focusing on Korean "symbol adverb+predicate" (중국인 학습자를 위한 한국어 교재의 어휘 분석 및 개선 방안 한국어 '상징부사+용언'을 중심으로)

  • Zong, Yi
    • Korean Educational Research Journal / v.42 no.1 / pp.35-72 / 2021
  • This study develops an effective teaching method centered on the Korean "symbol adverb + predicate" pattern, helping Chinese learners of Korean communicate more accurately when expressing detailed, complex feelings and various emotions. Many foreign language learners try to memorize individual words when acquiring new vocabulary, but because they neglect how words combine, they may be unable to use Korean vocabulary accurately and naturally. Since symbolic adverbs are not used in isolation and frequently co-occur with particular vocabulary, it is more effective to teach them in the "symbol adverb + predicate" form rather than as individual words. Accordingly, this research considers the particular vocabulary, or vocabulary groups with common semantic qualities, that frequently follows symbolic adverbs and could be introduced to learners. Seven Korean language textbooks used by universities in Korea and China are compared and analyzed to reveal differences in the predicates used after symbolic adverbs. Finally, based on the textbook analysis, a plan is proposed to improve the teaching of the Korean "symbol adverb + predicate" pattern for Chinese learners. A limitation of this study is that it does not present specific educational measures for Chinese learners for the "symbol adverb + predicate" pattern; subsequent studies are expected to complement this limitation and lead to more specific discussions.


A Study of 'Emotion Trigger' by Text Mining Techniques (텍스트 마이닝을 이용한 감정 유발 요인 'Emotion Trigger'에 관한 연구)

  • An, Juyoung; Bae, Junghwan; Han, Namgi; Song, Min
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.69-92 / 2015
  • The explosion of social media data has led to text-mining techniques being applied to analyze big social media data more rigorously. Even though social media text analysis algorithms have improved, previous approaches have limitations. In the field of sentiment analysis of Korean social media, there are two typical approaches. One is the linguistic approach using machine learning, which is the most common; some studies add grammatical factors to the feature sets used to train a classification model. The other approach applies semantic analysis to sentiment analysis, but it is mainly used for English texts. To overcome these limitations, this study applies the Word2Vec algorithm, an extension of neural network algorithms, to handle the more extensive semantic features that have been underestimated in existing sentiment analysis. The result of adopting the Word2Vec algorithm is compared with the result of co-occurrence analysis to identify the difference between the two approaches. The results show that the Word2Vec algorithm extracts about three times as many related words expressing emotion about the keyword as co-occurrence analysis does. The difference comes from Word2Vec's vectorization of semantic features; the Word2Vec algorithm can therefore capture hidden related words that traditional analysis does not find. In addition, part-of-speech (POS) tagging for Korean is used to detect adjectives as "emotional words." The emotion words extracted from the text are converted into word vectors by the Word2Vec algorithm to find related words, and among these related words, nouns are selected because each of them may have a causal relationship with the emotional word in the sentence. The process of extracting these trigger factors of emotional words is named "Emotion Trigger" in this study. As a case study, the datasets were collected by searching with three keywords: professor, prosecutor, and doctor, since these keywords attract rich public emotion and opinion. Secondary keywords were then selected for data gathering, as follows: professor (sexual assault, misappropriation of research money, recruitment irregularities, polifessor), doctor (Shin Hae-chul sky hospital, drinking and plastic surgery, rebate), prosecutor (lewd behavior, sponsor). The text data comprise about 100,000 documents (professor: 25,720; doctor: 35,110; prosecutor: 43,225), gathered from news, blogs, and Twitter to reflect various levels of public emotion. Gephi (http://gephi.github.io) was used for visualization, and all programs for text processing and analysis were written in Java. The contributions of this study are as follows: first, different approaches to sentiment analysis are integrated to overcome the limitations of existing approaches; second, Emotion Trigger can detect hidden connections to public emotion that existing methods cannot; finally, the approach could be generalized to other types of text data.
A limitation of this study is that it is hard to claim that a word extracted by the Emotion Trigger process has a significant causal relationship with the emotional word in a sentence. Future work will clarify this causal relationship by comparing the extracted relationships with manually tagged ones. Furthermore, the text data used for Emotion Trigger include Twitter posts, which have a number of distinct characteristics not dealt with in this study; these will be considered in further work.
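
A minimal sketch of the Emotion Trigger idea described above, assuming gensim's Word2Vec: train word vectors, take the nearest neighbours of an emotional word, and keep the noun neighbours as candidate triggers. The toy corpus, the chosen emotion word, and the hand-listed noun set replace the paper's POS-tagged social media data.

```python
# A minimal "Emotion Trigger" sketch, assuming gensim; the corpus, emotion word,
# and noun list below are toy assumptions, not the study's actual data.
from gensim.models import Word2Vec

sentences = [
    ["교수", "연구비", "유용", "화나다"],
    ["교수", "채용", "비리", "화나다"],
    ["의사", "리베이트", "실망", "스럽다"],
    ["검사", "스폰서", "화나다"],
] * 50   # repeat the toy corpus so the tiny model has something to learn from

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=20)

emotion_word = "화나다"  # stands in for an adjective detected via POS tagging
nouns = {"교수", "연구비", "채용", "비리", "의사", "리베이트", "검사", "스폰서"}

# Nouns among the nearest neighbours are treated as candidate emotion triggers.
neighbours = model.wv.most_similar(emotion_word, topn=10)
triggers = [(w, round(s, 3)) for w, s in neighbours if w in nouns]
print(triggers)
```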

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min; Na, Chul-Won; Choi, Min-Seong; Lee, Da-Hee; On, Byung-Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.219-240 / 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Recently, sentiment analysis methods have been widely used in many fields: for example, data-driven surveys analyze the subjectivity of text posted by users, and market research analyzes users' review posts to quantify the reputation of a target product. The basic method of sentiment analysis is to use a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of many sentiment words differs across domains; for example, the sentiment word 'sad' carries a negative meaning in most domains but not necessarily in the movie domain. To perform accurate sentiment analysis, we need to build a sentiment dictionary for each target domain. However, building such a lexicon from scratch is time-consuming, and many sentiment vocabularies are missed unless a general-purpose sentiment lexicon is used as a base. To address this problem, several studies have constructed domain-specific sentiment lexicons based on the general-purpose lexicons 'OPEN HANGUL' and 'SentiWordNet'. However, OPEN HANGUL is no longer in service, and SentiWordNet does not work well because of the language mismatch introduced when converting Korean words into English. These restrictions limit the use of such general-purpose sentiment lexicons as seed data for building a domain-specific sentiment lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons. The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built to allow the sentiment dictionary for a target domain to be constructed quickly. In particular, it constructs sentiment vocabularies by analyzing the glosses in the Standard Korean Language Dictionary (SKLD) through the following procedure: first, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM); second, the proposed deep learning model automatically classifies each gloss as having positive or negative meaning; third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from the glosses classified as negative. Our experimental results show that the proposed sentiment classification model achieves an average accuracy of up to 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603, and we add sentiment information about frequently used coined words and emoticons that appear mainly on the Web. The KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the perceived importance of developing sentiment dictionaries has gradually declined.
However, a recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, resulting in higher sentiment analysis accuracy (Teng, Z., 2016). This indicates that the sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features for improving the accuracy of deep learning models. The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as a source of features for deep learning models; it is also useful for automatically and quickly building large training sets for deep learning models.
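
A minimal sketch of the gloss-classification step described above, assuming PyTorch: a Bi-LSTM scores each dictionary gloss as positive or negative, and the headwords of high- or low-scoring glosses would be collected into the lexicon. The vocabulary size, dimensions, and random toy input are illustrative assumptions, not the paper's settings.

```python
# A minimal Bi-LSTM gloss sentiment classifier, assuming PyTorch; sizes and
# the toy batch are hypothetical, not taken from the KNU-KSL paper.
import torch
import torch.nn as nn

class BiLSTMGlossClassifier(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, 1)  # positive vs. negative gloss

    def forward(self, gloss_ids):
        # gloss_ids: (batch, gloss_len) token ids of one dictionary gloss each
        emb = self.embedding(gloss_ids)
        states, _ = self.bilstm(emb)            # (batch, len, 2*hidden)
        pooled = states.mean(dim=1)             # average over gloss tokens
        return torch.sigmoid(self.out(pooled))  # P(gloss has positive meaning)

# Usage: score a toy batch of two glosses; words whose glosses score high would
# become positive lexicon entries, low scores negative entries.
model = BiLSTMGlossClassifier()
toy_glosses = torch.randint(1, 20000, (2, 15))
print(model(toy_glosses))  # two probabilities in (0, 1)
```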