• Title/Summary/Keyword: Aspect-Based Sentiment Classification (속성 기반 감성 분류)

Search Results: 18

Performance and Limitations of a Korean Sentiment Lexicon Built on the English SentiWordNet (영어 SentiWordNet을 이용하여 구축한 한국어 감성어휘사전의 성능 평가와 한계 연구)

  • Shin, Donghyok; Kim, Sairom; Cho, Donghee; Nguyen, Minh Dieu; Park, Soongang; Eo, Keonjoo; Nam, Jeesun
    • Annual Conference on Human and Language Technology / 2016.10a / pp.189-194 / 2016
  • This study, conducted as part of MUSE, a project to build multilingual sentiment lexicons and sentiment-annotated corpora, examines the value and the limitations of constructing a Korean sentiment lexicon from SentiWordNet, a representative English sentiment lexicon. From the 117,659 entries in the English SentiWordNet, words with a positive or negative score of 0.5 or higher were first extracted and automatically translated with Google Translate. After removing items that failed to translate or were duplicates and having linguists classify the remainder by hand, 3,665 sentiment words were obtained. Even so, the result contained many entries, such as disease names, that can hardly be regarded as genuine sentiment words; when the lexicon was applied to corpora for automatic sentiment-word detection, recall was only 47.4% (positive) and 37.7% (negative) on a restaurant-review corpus and 55.2% and 32.4% on an IT corpus. F-measure was likewise low: 62.3% (positive) and 38.5% (negative) on the restaurant corpus and 65.5% and 44.6% on the IT corpus, showing that a SentiWordNet-based lexicon is not sufficient to serve as a Korean sentiment lexicon. Based on these results, the study argues that building a Korean sentiment lexicon requires a systematic approach that reflects the linguistic properties of Korean, and introduces SELEX, a sentiment lexicon currently being supplemented and extended on the basis of the Korean electronic dictionary DECO.

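As a rough illustration of the extraction step described in the entry above, the sketch below filters a SentiWordNet-style tab-separated file (POS, ID, PosScore, NegScore, SynsetTerms, Gloss) for entries whose positive or negative score is at least 0.5; the file name is a hypothetical stand-in, and the translation and manual-filtering steps of the paper are not shown.

```python
# Keep SentiWordNet-style entries whose positive or negative score is >= 0.5.
candidates = []
with open("SentiWordNet_3.0.0.txt", encoding="utf-8") as f:  # hypothetical path
    for line in f:
        if line.startswith("#") or not line.strip():
            continue                                  # skip comments and blank lines
        pos, _id, pos_score, neg_score, terms, *_ = line.rstrip("\n").split("\t")
        if float(pos_score) >= 0.5 or float(neg_score) >= 0.5:
            # SynsetTerms look like "happy#1 felicitous#2"; strip the sense number
            candidates += [t.split("#")[0] for t in terms.split()]

print(len(candidates), candidates[:10])
```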

E-commerce data based Sentiment Analysis Model Implementation using Natural Language Processing Model (자연어처리 모델을 이용한 이커머스 데이터 기반 감성 분석 모델 구축)

  • Choi, Jun-Young; Lim, Heui-Seok
    • Journal of the Korea Convergence Society / v.11 no.11 / pp.33-39 / 2020
  • In the field of natural language processing, various tasks such as translation, POS tagging, question answering, and sentiment analysis are being studied worldwide. For sentiment analysis, pretrained sentence embedding models show high classification performance on English single-domain datasets. In this paper, classification performance is compared on a Korean e-commerce dataset with diverse domain attributes, and six neural-network models are built: BOW (bag of words), LSTM[1], Attention, CNN[2], ELMo[3], and BERT (KoBERT)[4]. The results confirm that pretrained sentence embedding models outperform word-embedding models. In addition, a practical neural-network model composition is proposed after comparing classification performance on a dataset with 17 categories. As future work, the paper discusses compressing the sentence embedding model, weighing inference time against model capacity for real-time service.
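
To make the sentence-encoder approach above concrete, here is a minimal sketch of a BERT-style sentiment classifier using the Hugging Face transformers library; "bert-base-multilingual-cased" is only a stand-in for the KoBERT checkpoint used in the paper, the sample reviews are invented, and the classification head is freshly initialized, so it would still need fine-tuning on labeled e-commerce reviews.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"            # stand-in for KoBERT
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

reviews = ["배송이 빠르고 품질이 좋아요", "생각보다 너무 별로예요"]  # hypothetical reviews
batch = tokenizer(reviews, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits                     # [batch, 2] class scores
print(torch.softmax(logits, dim=-1))                   # untrained head: fine-tune before use
```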

Building Korean Multi-word Expression Lexicons and Grammars Represented by Finite-State Graphs for FbSA of Cosmetic Reviews (화장품 후기글의 자질기반 감성분석을 위한 다단어 표현의 유한그래프 사전 및 문법 구축)

  • Hwang, Chang-Hoe; Yoo, Gwang-Hoon; Choi, Seong-Yong; Shin, Dong-Heouk; Nam, Jee-Sun
    • Annual Conference on Human and Language Technology / 2018.10a / pp.400-405 / 2018
  • This study presents a methodology for building finite-state graph dictionaries and grammars of the multi-word expressions (MWEs) that matter for feature-based sentiment analysis of a Korean cosmetics review corpus, and evaluates the performance of the resources actually built. The lexico-syntactic properties of MWEs, long an important topic in natural language processing (NLP), are formalized as local grammar graphs (LGGs). The DECO Korean electronic dictionary was applied to the cosmetics review corpus to obtain lexical frequency statistics, and linguistic analysis of these statistics yielded a total of four subcategories across polarity MWEs (Polarity-MWE) and topic MWEs (Topic-MWE). The resources were then expanded through double propagation, which repeatedly applies the lexico-syntactic properties of the relations between the modules. Testing the large-scale MWE finite-graph dictionary DECO-MWE built through this process yielded harmonic means of 0.844 (Pol-MWE) and 0.742 (Top-MWE). These results confirm that the proposed methodology for building MWE language resources can be applied to a variety of domains and will be an important resource for feature-based sentiment analysis.

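As a simplified stand-in for the finite-state LGG patterns described above, the sketch below encodes a few polarity multi-word expressions as Python regular expressions; the patterns and the sample review are illustrative only and are not entries from the DECO-MWE resource.

```python
import re

# Each pattern encodes one polarity MWE with a small amount of allowed variation.
POLARITY_MWE = {
    r"촉촉함이\s*오래\s*가(요|네요|고)": "POS",            # "moisture lasts long"
    r"자극(이|은)?\s*(전혀)?\s*없(어요|네요|고)": "POS",    # "no irritation at all"
    r"발림성(이|은)?\s*별로": "NEG",                       # "spreadability is poor"
}

review = "자극이 전혀 없어요. 다만 발림성이 별로였어요."
for pattern, polarity in POLARITY_MWE.items():
    for match in re.finditer(pattern, review):
        print(polarity, match.group(0))
```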

The Sensitivity Analysis for Customer Feedback on Social Media (소셜 미디어 상 고객피드백을 위한 감성분석)

  • Song, Eun-Jee
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.4 / pp.780-786 / 2015
  • Social media such as social network services contain many spontaneous customer opinions, so companies now collect and analyze customer feedback using systems that analyze big data from social media in order to operate their businesses efficiently. However, data collected from online sites are difficult to analyze accurately with existing morphological analyzers because they contain spacing and spelling errors. In addition, many online sentences are short and carry too little information for term selection, so established term-selection methods such as mutual information and the chi-square statistic cannot perform sentiment classification well. To address these problems, this paper proposes a module that corrects words using an initial-consonant/vowel and phrase-pattern dictionary, together with a term-selection method based on part-of-speech priority within a sentence. Based on the parts of speech extracted by the morphological analyzer, the proposed mechanism separates and analyzes predicates and substantives, builds a property database subordinate to the relevant part of speech, and extracts positive/negative sentiment using the accumulated property database.
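
A minimal sketch of the part-of-speech-priority idea from the entry above: a toy polarity dictionary is applied to a pre-tagged sentence, with predicates weighted above substantives. The tagged morphemes, dictionary, and weights are all invented for illustration and are not the paper's resources.

```python
# Toy POS-priority polarity scoring over a pre-tagged sentence.
POLARITY = {"좋": +1, "빠르": +1, "별로": -1, "느리": -1}
POS_PRIORITY = {"VA": 2.0, "VV": 2.0, "NNG": 1.0}   # weight predicates above substantives

# (morpheme, POS tag) pairs as a morphological analyzer might return them.
tagged = [("배송", "NNG"), ("이", "JKS"), ("빠르", "VA"), ("다", "EF")]

score = sum(POLARITY.get(m, 0) * POS_PRIORITY.get(t, 0.5) for m, t in tagged)
print("positive" if score > 0 else "negative" if score < 0 else "neutral", score)
```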

Subjectivity Study for Digital Game Players: Based on Game Classification Factors (디지털 게임 플레이어의 주관성 연구: 게임 분류 속성을 중심으로)

  • Lee, Hyejung; Min, Aehong
    • The Journal of the Korea Contents Association / v.19 no.3 / pp.275-287 / 2019
  • As game players have become more diverse and new features of digital games have emerged in recent years, it is important to understand how today's players recognize and classify digital games. Thirty game players sorted twenty-nine Q-statements extracted from previous studies on game typology, and analysis with the QUANL program revealed three distinct types. Regarding game classification, the 'Physical Environment Centric Players' type highly values external game elements seen from an outside perspective; the 'Contents Centric Players' type considers internal game elements the most important criterion; and the 'Emotional Experience Centric Players' type values the player's own subjective feelings and thoughts. This study is expected to contribute to developing a framework of game players based on their perspectives on game classification.

Investigating Opinion Mining Performance by Combining Feature Selection Methods with Word Embedding and BOW (Bag-of-Words) (속성선택방법과 워드임베딩 및 BOW (Bag-of-Words)를 결합한 오피니언 마이닝 성과에 관한 연구)

  • Eo, Kyun Sun; Lee, Kun Chang
    • Journal of Digital Convergence / v.17 no.2 / pp.163-170 / 2019
  • Over the past decade, the growth of the Web has explosively increased the amount of available data, and feature selection is an important step in extracting valuable information from such large volumes of data. This study proposes a novel opinion mining model that combines feature selection (FS) methods with word embeddings (Word2vec) and BOW (bag-of-words) features. The FS methods adopted for this study are CFS (correlation-based FS) and IG (information gain). To select an optimal FS method, a number of classifiers were compared, ranging from LR (logistic regression), NN (neural network), and NBN (naive Bayesian network) to RF (random forest), RS (random subspace), and ST (stacking). Empirical results on electronics and kitchen datasets showed that the LR and ST classifiers combined with IG applied to BOW features yield the best opinion mining performance. Results on laptop and restaurant datasets revealed that the RF classifier using IG applied to Word2vec features performs best.
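
A small scikit-learn sketch of one configuration from the entry above, combining BOW features, information-gain-style feature selection, and a logistic regression classifier; mutual_info_classif is used here as a stand-in for information gain, and the toy corpus and labels are invented.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["great battery and screen", "battery died quickly", "screen is sharp", "awful keyboard"]
labels = [1, 0, 1, 0]                                    # 1 = positive, 0 = negative (toy labels)

model = make_pipeline(
    CountVectorizer(),                                   # BOW features
    SelectKBest(mutual_info_classif, k=5),               # keep the 5 most informative terms
    LogisticRegression(max_iter=1000),
)
model.fit(docs, labels)
print(model.predict(["the screen is great"]))
```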

A Study of Concepts on the Brand Love (브랜드 사랑 구성개념에 대한 연구)

  • Min, Guihong; Park, Pumsoon
    • The Journal of the Korea Contents Association / v.20 no.8 / pp.315-326 / 2020
  • Corporate efforts to build strong brands have made consumers interested in brand love. In the field of brand love, however, there is a lack of systematic research on the multidimensionality of the concept and on scale development to measure it. Thus, based on the methodological research designs of Churchill (1979) and DeVellis (1991), this study explored the properties of brand love, classified them into two levels, 'emotion' and 'relationship', and generated corresponding measurement items. The research was conducted in a total of eight stages, including preliminary studies such as a literature review, open surveys, and in-depth interviews, as well as a main study in which the factors were analyzed step by step. As a result, the emotion level comprised five subcomponents (self-esteem, warmth, interest, responsibility, pleasure) with 19 items, and the relationship level comprised three subcomponents (unchanging, sharing/supporting, understanding) with 11 items, yielding a total of 30 measurement items for brand love with reliability, convergent and discriminant validity, and nomological validity. Additionally, the study expands the scope of brand love research by presenting a model of organic interaction between the concepts that constitute brand love and by proposing '4 categories of brand love strength' based on it.

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae; Park, Eunbi; Han, Kiwoong; Lee, Junghyun; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lack distinctive regional features, and many researchers therefore propose CNN-based architectures tailored to emotion images each year. Studies on the relationship between color and human emotion have also shown that different colors induce different emotions, and deep learning studies have applied color information to image sentiment classification: using an image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion; both methods modify the result value based on statistics over the picture's colors. The first method finds the most frequent two-color combinations over all training data and, at test time, finds the most frequent two-color combination of each test image and corrects the result values according to the color-combination distribution. The second method weights the result value obtained after the model classifies an image's emotion, using an expression based on logarithmic and exponential functions. Emotion6, labeled with six emotions, and ArtPhoto, labeled with eight categories, were used as the image data; DenseNet169, MnasNet, ResNet101, ResNet152, and VGG19 were used as CNN architectures, and performance was compared before and after applying the two-stage learning to each CNN model.
Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when building a model that classifies an image's sentiment. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn clustering, the seven colors most distributed in each image are identified, and their RGB coordinates are compared with those of the 16 reference colors so that each is converted to the closest reference color. If combinations of three or more colors are selected, too many combinations occur and the distribution becomes scattered, so each combination has little influence on the result value; therefore, two-color combinations were found and weighted into the model. Before training, the most distributed color combinations were found for all training data images, and the distribution of color combinations for each class was stored in a Python dictionary to be used during testing. During testing, the most distributed two-color combination of each test image is found, its distribution in the training data is checked, and the result is corrected; several equations were devised to weight the result value from the model based on the extracted colors.
The data set was randomly divided 80:20, and the model was verified using 20% of the data as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, and the model was trained five times using different validation sets; finally, performance was checked on the previously separated test set. Adam was used as the optimizer, and the learning rate was set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five epochs, the experiment was stopped; early stopping was set to load the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN than when only the CNN architecture was used.
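
A rough sketch of the color step described above: pixels are clustered with K-means and each cluster center is snapped to the nearest color in a fixed reference palette. The three reference colors are a truncated stand-in for the 16 colors listed in the abstract, and the image path is hypothetical.

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

REFERENCE = {"red": (255, 0, 0), "green": (0, 128, 0), "blue": (0, 0, 255)}  # subset of the 16 colors

pixels = np.asarray(Image.open("example.jpg").convert("RGB")).reshape(-1, 3)  # hypothetical image
centers = KMeans(n_clusters=7, n_init=10).fit(pixels).cluster_centers_        # 7 dominant colors

def nearest_reference(rgb):
    # Euclidean distance in RGB space to each reference color.
    return min(REFERENCE, key=lambda name: np.linalg.norm(np.array(REFERENCE[name]) - rgb))

dominant = [nearest_reference(c) for c in centers]
print(dominant)   # the most frequent names would form the image's two-color combination
```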