• Title/Abstract/Keyword: Emotion Classification


사용자의 정서 단어 분류에 기반한 정서 분류와 선택 방법 (A Classification and Selection Method of Emotion Based on Classifying Emotion Terms by Users)

  • 이신영;함준석;고일주
    • 감성과학 / Vol. 15, No. 1 / pp. 97-104 / 2012
  • With the recent surge of user-generated text data, opinion mining, which analyzes users' information and opinions, has gained importance. Within opinion mining, emotion analysis in particular examines personal opinions or feelings about products, social issues, or politicians, and identifies emotions such as positive/negative or happiness/sadness. Emotion analysis commonly uses the two-dimensional valence-arousal space of dimensional emotion theory and maps emotions to regions defined in that space. Previous work, however, set these emotion regions arbitrarily. To address this problem, this paper conducts a user survey based on a Korean emotion-word list and constructs the distribution of 12 emotions in the two-dimensional space. In addition, when a point in the space overlaps several emotions, we propose a roulette-wheel method that selects a single emotion using the probability of belonging to each one. With the proposed method, emotion words can be extracted from text and the text classified by emotion.

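The roulette-wheel selection described in the abstract can be sketched in a few lines: when a point in the valence-arousal plane is covered by several overlapping emotion regions, one emotion is drawn with probability proportional to its membership weight. The emotion names and weights below are illustrative stand-ins, not the paper's survey data.

```python
import random

def select_emotion(memberships, rng=None):
    """Roulette-wheel draw: memberships maps emotion -> non-negative weight."""
    rng = rng or random.Random()
    total = sum(memberships.values())
    pick = rng.uniform(0.0, total)
    acc = 0.0
    for emotion, weight in memberships.items():
        acc += weight
        if pick <= acc:
            return emotion
    return emotion  # floating-point fallback: return the last emotion

# A point overlapped by two emotion regions with 70%/30% membership
print(select_emotion({"joy": 0.7, "relief": 0.3}, random.Random(7)))
```

Over many draws the proportions of selected emotions approach the membership weights, which is the point of the roulette wheel over a hard argmax.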

RBF 커널과 다중 클래스 SVM을 이용한 생리적 반응 기반 감정 인식 기술 (Physiological Responses-Based Emotion Recognition Using Multi-Class SVM with RBF Kernel)

  • 마카라 완니;고광은;박승민;심귀보
    • 제어로봇시스템학회논문지 / Vol. 19, No. 4 / pp. 364-371 / 2013
  • Emotion recognition is an important component of human-human and human-computer interaction. In this paper, we focus on the performance of a multi-class SVM (Support Vector Machine) with a Gaussian RBF (Radial Basis Function) kernel, used to recognize emotion from physiological signals and to improve recognition accuracy. In the experimental paradigm for data acquisition, visual stimuli from the IAPS (International Affective Picture System) are used to induce emotional states such as fear, disgust, joy, and neutral in each subject. The raw signals are split into trials from each session for pre-processing, and the mean and standard deviation are extracted as features for the subsequent classification step. The experimental results show that the proposed multi-class SVM with a Gaussian RBF kernel and the OVO (One-Versus-One) method achieves successful classification accuracy over these four emotions.
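The feature-extraction step described above reduces each physiological trial to the mean and standard deviation of its raw samples per channel; the resulting vector is what would feed the multi-class SVM (the SVM itself is not reproduced here). The channel names in this sketch are hypothetical.

```python
import statistics

def extract_features(trial):
    """trial: dict mapping channel name -> list of raw samples.
    Returns [mean, std] per channel, in sorted channel order."""
    features = []
    for channel in sorted(trial):
        samples = trial[channel]
        features.append(statistics.mean(samples))
        features.append(statistics.pstdev(samples))  # population std over the trial
    return features

trial = {"skin_conductance": [0.1, 0.2, 0.3], "heart_rate": [72.0, 74.0, 70.0]}
print(extract_features(trial))
```

Sorting the channel names keeps the feature order stable across trials, which matters when the vectors are stacked into a training matrix.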

한글 글꼴 추천시스템을 위한 크라우드 방식의 감성 속성 적용 및 분석 (Application and Analysis of Emotional Attributes using Crowdsourced Method for Hangul Font Recommendation System)

  • 김현영;임순범
    • 한국멀티미디어학회논문지 / Vol. 20, No. 4 / pp. 704-712 / 2017
  • Various studies on content sensibility are under way alongside the development of digital content, and emotional research on fonts is also being conducted in various fields. There is a need for fonts whose emotion harmonizes with the textual sensibility of the content. However, selecting an appropriate font emotion in Korea is difficult because each of the more than 6,000 Hangul fonts carries its own emotion. In this paper, we analyzed emotional classification attributes and built a Hangul font recommendation system. We also verified the reliability and validity of the attributes themselves in order to apply them to Hangul fonts, and then tested whether general users can find an appropriate font in a commercial font set through this emotional recommendation system. As a result, when users want to express the emotion of their sentences more visually, they can receive a recommendation for a Hangul font with the desired emotion by using the font emotion attribute values collected through the crowdsourced method.
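The recommendation idea above can be sketched as a nearest-neighbor lookup: each Hangul font carries a vector of crowdsourced emotion-attribute scores, and the system returns the fonts closest to the user's desired emotion vector. The font names, attribute names, and scores below are invented for illustration.

```python
import math

def recommend_fonts(query, font_attributes, k=2):
    """query and each value of font_attributes: dict attribute -> score in [0, 1]."""
    def distance(a, b):
        return math.sqrt(sum((a[attr] - b[attr]) ** 2 for attr in a))
    # Rank fonts by Euclidean distance to the desired emotion vector
    return sorted(font_attributes, key=lambda f: distance(query, font_attributes[f]))[:k]

fonts = {
    "FontA": {"warm": 0.9, "formal": 0.2},
    "FontB": {"warm": 0.1, "formal": 0.9},
    "FontC": {"warm": 0.8, "formal": 0.4},
}
print(recommend_fonts({"warm": 1.0, "formal": 0.3}, fonts))  # → ['FontA', 'FontC']
```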

음성의 감성요소 추출을 통한 감성 인식 시스템 (The Emotion Recognition System through The Extraction of Emotional Components from Speech)

  • 박창현;심귀보
    • 제어로봇시스템학회논문지 / Vol. 10, No. 9 / pp. 763-770 / 2004
  • The key issues in emotion recognition from speech are feature extraction and pattern classification. Features should carry the information essential for classifying emotions, so feature selection is needed to decompose the components of speech and analyze the relation between features and emotions. In particular, the pitch of speech carries much emotional information. Accordingly, this paper investigates the relation of emotion to features such as loudness and pitch, and classifies emotions using statistics of the collected data. The most important emotional component of sound is tone, and the inferential ability of the brain also plays a part in emotion recognition. This paper empirically identifies the emotional components of speech, conducts emotion recognition experiments, and proposes a recognition method that uses these emotional components together with transition probabilities.
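The transition probabilities mentioned above can be estimated directly from an observed sequence of emotion states: count how often each emotion follows another and normalize by the outgoing counts. The state sequence in this sketch is made up for illustration; the paper's own estimation procedure may differ.

```python
from collections import Counter

def transition_probs(states):
    """Estimate P(next | current) from an observed emotion-state sequence."""
    pair_counts = Counter(zip(states, states[1:]))  # consecutive (current, next) pairs
    from_counts = Counter(states[:-1])              # how often each state is left
    return {(a, b): n / from_counts[a] for (a, b), n in pair_counts.items()}

seq = ["neutral", "neutral", "angry", "neutral", "angry", "angry"]
print(transition_probs(seq))
```

A recognizer can multiply its per-frame emotion scores by these probabilities to favor transitions that were common in the training data.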

감정 인식을 위한 음성의 특징 파라메터 비교 (The Comparison of Speech Feature Parameters for Emotion Recognition)

  • 김원구
    • 한국지능시스템학회 학술대회논문집 / 한국퍼지및지능시스템학회 2004 Spring Conference Proceedings, Vol. 14, No. 1 / pp. 470-473 / 2004
  • In this paper, speech feature parameters are compared for emotion recognition using the speech signal. For this purpose, a corpus of emotional speech data, recorded and classified by emotion through subjective evaluation, was used to compute statistical feature vectors such as the average, standard deviation, and maximum of pitch and energy. MFCC parameters and their derivatives, with and without cepstral mean subtraction, were also used to evaluate the performance of conventional pattern-matching algorithms. Pitch and energy parameters served as prosodic information and MFCC parameters as phonetic information. In the experiments, a vector-quantization-based emotion recognition system was used for speaker- and context-independent emotion recognition. Experimental results showed that the vector-quantization-based recognizer using MFCC parameters performed better than the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.

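The vector-quantization recognizer described above can be pared down to its decision rule: each emotion has a codebook of code vectors trained from its speech features (MFCCs in the paper), and an utterance is assigned to the emotion whose codebook yields the lowest average quantization distortion. The codebooks and frames below are toy 2-D values, not trained MFCC data.

```python
import math

def distortion(frame, codebook):
    """Distance from a feature frame to its nearest code vector."""
    return min(math.dist(frame, code) for code in codebook)

def classify(frames, codebooks):
    """Pick the emotion whose codebook quantizes the frames with least error."""
    scores = {emotion: sum(distortion(f, cb) for f in frames) / len(frames)
              for emotion, cb in codebooks.items()}
    return min(scores, key=scores.get)

codebooks = {
    "angry": [(5.0, 5.0), (6.0, 4.0)],
    "sad": [(0.0, 0.0), (1.0, 1.0)],
}
print(classify([(0.2, 0.1), (0.9, 1.2)], codebooks))  # → sad
```

In practice the codebooks would be trained per emotion with k-means over many utterances; this sketch only shows the classification side.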

얼굴 특징 변화에 따른 휴먼 감성 인식 (Human Emotion Recognition based on Variance of Facial Features)

  • 이용환;김영섭
    • 반도체디스플레이기술학회지 / Vol. 16, No. 4 / pp. 79-85 / 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems, and the most expressive and valuable way to extract and recognize human emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas using a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion using the Hausdorff distance. To estimate the performance of the implemented system, we evaluated the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying and distinguishing facial expression and emotion.

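Step (3) above hinges on the Hausdorff distance between two point sets, e.g. the sample points of a detected mouth curve and those of a reference expression template. A compact sketch, with illustrative point sets:

```python
import math

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two finite point sets."""
    def directed(xs, ys):
        # Worst-case distance from any point in xs to its nearest point in ys
        return max(min(math.dist(x, y) for y in ys) for x in xs)
    return max(directed(points_a, points_b), directed(points_b, points_a))

mouth = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.0)]
smile_template = [(0.0, 0.0), (1.0, 0.8), (2.0, 0.0)]
print(hausdorff(mouth, smile_template))  # ≈ 0.3
```

Classification then amounts to picking the expression template with the smallest Hausdorff distance to the detected curves.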

Word2Vec과 LSTM을 활용한 이별 가사 감정 분류 (Parting Lyrics Emotion Classification using Word2Vec and LSTM)

  • 임명진;박원호;신주현
    • 스마트미디어저널 / Vol. 9, No. 3 / pp. 90-97 / 2020
  • With the development of the Internet and smartphones, digital music has become easily accessible, and interest in music search and recommendation is growing accordingly. For music recommendation, studies have used melodic elements such as pitch, tempo, and beat to classify genre or emotion. However, since lyrics are one means of expressing human emotion and their role in music is growing, emotion classification research based on lyrics is needed. In this paper, we analyze the emotions of parting lyrics in order to subdivide the emotion of parting based on lyrics. We propose a parting-lyrics emotion classification method using Word2Vec and LSTM: an emotion dictionary is built by vectorizing the similarity between words appearing in parting lyrics through Word2Vec training, and an LSTM is then trained on the lyrics to classify them into similar emotions.
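The emotion-dictionary stage described above can be illustrated with cosine similarity between word vectors and per-emotion seed vectors; a lyric is then scored by how closely its words match each emotion. The hand-made 2-D vectors here stand in for trained Word2Vec embeddings, the LSTM classification stage is not reproduced, and all words and emotion labels are invented.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def classify_lyric(words, word_vectors, emotion_seeds):
    """Assign the lyric to the emotion whose seed vector is most similar,
    averaged over the lyric's words that have vectors."""
    scores = {}
    for emotion, seed in emotion_seeds.items():
        sims = [cosine(word_vectors[w], seed) for w in words if w in word_vectors]
        scores[emotion] = sum(sims) / len(sims)
    return max(scores, key=scores.get)

word_vectors = {"tears": (0.9, 0.1), "miss": (0.7, 0.3), "free": (0.1, 0.9)}
emotion_seeds = {"sorrow": (1.0, 0.0), "relief": (0.0, 1.0)}
print(classify_lyric(["tears", "miss"], word_vectors, emotion_seeds))  # → sorrow
```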

SYMMER: A Systematic Approach to Multiple Musical Emotion Recognition

  • Lee, Jae-Sung;Jo, Jin-Hyuk;Lee, Jae-Joon;Kim, Dae-Won
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 11, No. 2 / pp. 124-128 / 2011
  • Music emotion recognition is currently one of the most attractive research areas in music information retrieval. In order to use emotion as a clue when searching for a particular piece of music, music-based emotion recognition systems are fundamental, and recognition accuracy is very important for maximizing user satisfaction. In this paper, we develop a new music emotion recognition system that employs a multilabel feature selector and a multilabel classifier. The performance of the proposed system is demonstrated on novel musical emotion data.
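The multilabel setting above means a song may carry several emotions at once. A minimal way to sketch this is binary relevance: one per-emotion binary classifier, with the song's label set being every emotion whose classifier fires. The feature names and threshold rules below are invented; the paper's actual feature selector and classifier are not reproduced.

```python
def predict_labels(features, label_classifiers):
    """features: dict feature -> value; label_classifiers: emotion -> predicate.
    Returns the set of emotions whose classifier accepts the features."""
    return {emotion for emotion, clf in label_classifiers.items() if clf(features)}

classifiers = {
    "happy": lambda f: f["tempo"] > 120 and f["mode"] == "major",
    "calm": lambda f: f["tempo"] < 90,
    "sad": lambda f: f["mode"] == "minor",
}
song = {"tempo": 75, "mode": "minor"}
print(predict_labels(song, classifiers))  # both "calm" and "sad" fire
```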

얼굴 감정을 이용한 시청자 감정 패턴 분석 및 흥미도 예측 연구 (A Study on Sentiment Pattern Analysis of Video Viewers and Predicting Interest in Video using Facial Emotion Recognition)

  • 조인구;공연우;전소이;조서영;이도훈
    • 한국멀티미디어학회논문지 / Vol. 25, No. 2 / pp. 215-220 / 2022
  • Emotion recognition is one of the most important and challenging areas of computer vision. Many studies on emotion recognition have been conducted and model performance continues to improve, but more research is needed on emotion recognition and sentiment analysis of video viewers. In this paper, we propose an emotion analysis system that includes a sentiment analysis model and an interest prediction model. We analyzed the emotional patterns of people watching popular and unpopular videos and predicted the level of interest using the emotion analysis system. Experimental results showed that certain emotions were strongly related to the popularity of videos, and that the interest prediction model predicted the level of interest with high accuracy.

Music Similarity Search Based on Music Emotion Classification

  • Kim, Hyoung-Gook;Kim, Jang-Heon
    • The Journal of the Acoustical Society of Korea / Vol. 26, No. 3E / pp. 69-73 / 2007
    • 2007
  • This paper presents an efficient algorithm to retrieve similar music files from a large archive of digital music database. Users are able to navigate and discover new music files which sound similar to a given query music file by searching for the archive. Since most of the methods for finding similar music files from a large database requires on computing the distance between a given query music file and every music file in the database, they are very time-consuming procedures. By measuring the acoustic distance between the pre-classified music files with the same type of emotion, the proposed method significantly speeds up the search process and increases the precision in comparison with the brute-force method.