• Title/Abstract/Keyword: emotion recognition

Search results: 647 items (processing time: 0.029 s)

FFT와 MFB Spectral Entropy를 이용한 GMM 기반의 감정인식 (Speech Emotion Recognition Based on GMM Using FFT and MFB Spectral Entropy)

  • 이우석;노용완;홍광석
    • The Korean Institute of Electrical Engineers: Conference Proceedings / Proceedings of the KIEE 2008 Symposium, Information and Control Section / pp.99-100 / 2008
  • This paper proposes a Gaussian Mixture Model (GMM)-based speech emotion recognition method using four feature parameters: 1) Fast Fourier Transform (FFT) spectral entropy, 2) delta FFT spectral entropy, 3) Mel-frequency Filter Bank (MFB) spectral entropy, and 4) delta MFB spectral entropy. We use a speech database covering four emotions (anger, sadness, happiness, and neutrality) and perform recognition experiments for each emotion and gender. The experimental results show that the proposed recognition using FFT- and MFB-based spectral entropy outperforms existing GMM-based recognition using energy, Zero Crossing Rate (ZCR), Linear Prediction Coefficient (LPC), and pitch parameters. We attained a maximum recognition rate of 75.1% using MFB spectral entropy and delta MFB spectral entropy.
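The spectral-entropy feature described above can be sketched as follows (a minimal illustration, not the authors' code; the frame length, FFT size, and test signals are assumptions):

```python
import numpy as np

def spectral_entropy(frame, n_fft=512, eps=1e-12):
    """Entropy of the normalized FFT power spectrum of one frame."""
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2
    p = power / (power.sum() + eps)          # spectrum as a probability mass
    return float(-np.sum(p * np.log2(p + eps)))

def delta(feature_track):
    """First-order difference ('delta') of a per-frame feature sequence."""
    return np.diff(feature_track, prepend=feature_track[0])

# Toy frames: white noise (flat spectrum) vs. a pure 50 Hz tone at 8 kHz
rng = np.random.default_rng(0)
noise_frames = rng.standard_normal((10, 400))
tone_frames = np.sin(2*np.pi*50*np.arange(400)/8000)[None, :].repeat(10, axis=0)

h_noise = np.array([spectral_entropy(f) for f in noise_frames])
h_tone = np.array([spectral_entropy(f) for f in tone_frames])
# A pure tone concentrates energy in few bins, so its entropy is lower.
```

In the paper's setup, per-frame entropy and delta-entropy tracks like these would be modeled by one GMM per emotion, with classification by maximum likelihood.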


감성 인식을 위한 강화학습 기반 상호작용에 의한 특징선택 방법 개발 (Reinforcement Learning Method Based Interactive Feature Selection(IFS) Method for Emotion Recognition)

  • 박창현;심귀보
    • Journal of Institute of Control, Robotics and Systems / Vol. 12, No. 7 / pp.666-670 / 2006
  • This paper presents a novel feature selection method for emotion recognition, a task that may involve a large number of original features; here, the target is speech signals carrying emotion. Feature selection benefits pattern recognition performance and mitigates the 'curse of dimensionality'. We implemented a simulator called IFS and applied its results to an emotion recognition system (ERS), also implemented for this research. The method is based on reinforcement learning and, since it needs responses from a human user, is called Interactive Feature Selection. Running IFS yielded the three best features, which were applied to the ERS and outperformed a randomly selected feature set.
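The interactive, reinforcement-learning flavor of IFS can be sketched as a bandit-style loop that keeps a value estimate per feature and updates it from a reward. In the paper the reward comes from a human user; the stand-in reward function below is a hypothetical placeholder for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 10
true_quality = rng.random(n_features)   # hidden "usefulness" (stand-in for human feedback)
values = np.zeros(n_features)           # learned value estimate per feature
counts = np.zeros(n_features)

for step in range(2000):
    # epsilon-greedy: mostly exploit current estimates, sometimes explore
    if rng.random() < 0.1:
        f = int(rng.integers(n_features))
    else:
        f = int(np.argmax(values))
    reward = true_quality[f] + 0.1 * rng.standard_normal()  # noisy feedback
    counts[f] += 1
    values[f] += (reward - values[f]) / counts[f]           # incremental mean

best3 = np.argsort(values)[-3:]         # the 3 best features, as in the paper
```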

추론 능력에 기반한 음성으로부터의 감성 인식 (Inference Ability Based Emotion Recognition From Speech)

  • 박창현;심귀보
    • The Korean Institute of Electrical Engineers: Conference Proceedings / Proceedings of the KIEE 2004 Symposium, Information and Control Section / pp.123-125 / 2004
  • Interest in user-friendly machines has grown recently, and emotion is one of the most important conditions for a machine to feel familiar to people. A machine uses sound or images to express or recognize emotion; this paper deals with recognizing emotion from sound. The most important emotional component of sound is tone, and the brain's inference ability also takes part in emotion recognition. This paper empirically identifies the emotional components of speech, runs emotion recognition experiments, and proposes a recognition method that uses these emotional components together with transition probabilities.
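The transition-probability idea above can be sketched as a simple Markov model over emotional states; the state set and counting scheme below are our assumptions, not the paper's:

```python
import numpy as np

STATES = ["neutral", "happy", "sad", "angry"]

def transition_matrix(sequence, n_states=len(STATES)):
    """Row-stochastic matrix of transition frequencies in a label sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(sequence[:-1], sequence[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1                  # avoid dividing empty rows
    return counts / rows

# Toy sequence of recognized emotional states over time
seq = [0, 0, 1, 0, 2, 2, 3, 0, 1, 1]
T = transition_matrix(seq)               # T[i, j] = P(next=j | current=i)
```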


SYMMER: A Systematic Approach to Multiple Musical Emotion Recognition

  • Lee, Jae-Sung;Jo, Jin-Hyuk;Lee, Jae-Joon;Kim, Dae-Won
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 11, No. 2 / pp.124-128 / 2011
  • Music emotion recognition is currently one of the most attractive research areas in music information retrieval. To use emotion as a clue when searching for a particular piece of music, emotion recognition systems built on the music itself are essential, and recognition accuracy is critical for user satisfaction. In this paper, we develop a new music emotion recognition system that employs a multilabel feature selector and a multilabel classifier. The performance of the proposed system is demonstrated on novel musical emotion data.
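A binary-relevance sketch illustrates the multilabel setting (one independent binary decision per emotion label, so a clip can carry several emotions at once). The nearest-centroid rule and synthetic data below are illustrative assumptions, not the SYMMER method itself:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((60, 5))          # 60 clips x 5 audio features (synthetic)
Y = (X[:, :3] > 0).astype(int)            # 3 emotion labels per clip (synthetic)

def fit_binary_relevance(X, Y):
    """For each label, store centroids of its positive and negative examples."""
    models = []
    for j in range(Y.shape[1]):
        pos = X[Y[:, j] == 1].mean(axis=0)
        neg = X[Y[:, j] == 0].mean(axis=0)
        models.append((pos, neg))
    return models

def predict(models, x):
    """Assign every label whose positive centroid is closer to x."""
    return np.array([int(np.linalg.norm(x - pos) < np.linalg.norm(x - neg))
                     for pos, neg in models])

models = fit_binary_relevance(X, Y)
pred = np.array([predict(models, x) for x in X])
accuracy = (pred == Y).mean()
```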

얼굴 감정을 이용한 시청자 감정 패턴 분석 및 흥미도 예측 연구 (A Study on Sentiment Pattern Analysis of Video Viewers and Predicting Interest in Video using Facial Emotion Recognition)

  • 조인구;공연우;전소이;조서영;이도훈
    • Journal of Korea Multimedia Society / Vol. 25, No. 2 / pp.215-220 / 2022
  • Emotion recognition is one of the most important and challenging areas of computer vision. Many studies on emotion recognition have been conducted and model performance keeps improving, but more research is needed on emotion recognition and sentiment analysis of video viewers. In this paper, we propose an emotion analysis system that includes a sentiment analysis model and an interest prediction model. We analyzed the emotional patterns of people watching popular and unpopular videos and predicted their level of interest using the system. Experimental results showed that certain emotions were strongly related to video popularity and that the interest prediction model predicted the level of interest with high accuracy.
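One plausible way to turn per-frame facial-emotion scores into a viewer-level sentiment pattern is to aggregate them over time; the statistics and emotion set below are our assumptions, since the abstract does not specify them:

```python
import numpy as np

EMOTIONS = ["anger", "sadness", "happiness", "neutral"]

def sentiment_pattern(frame_probs):
    """Summarize a (frames x emotions) probability matrix as per-emotion
    mean and standard deviation -> a fixed-length pattern vector that an
    interest-prediction model could consume."""
    frame_probs = np.asarray(frame_probs)
    return np.concatenate([frame_probs.mean(axis=0), frame_probs.std(axis=0)])

rng = np.random.default_rng(3)
raw = rng.random((120, len(EMOTIONS)))        # 120 frames of raw emotion scores
probs = raw / raw.sum(axis=1, keepdims=True)  # normalize each frame to sum to 1
pattern = sentiment_pattern(probs)            # 8-dim viewer sentiment pattern
```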

인터넷 폰을 이용한 감성인식 시스템 구현 (Implementation of Emotion Recognition System using Internet Phone)

  • 권병헌;서범석
    • Journal of Digital Contents Society / Vol. 8, No. 1 / pp.35-40 / 2007
  • This paper introduces emotion recognition and character animation that expresses emotion. We propose a method for finding the feature parameters that express a user's emotion and a method for inferring emotion through a pattern matching algorithm. We also implemented a platform that recognizes the speaker's emotion over an Internet phone and displays character animation, and used it to test the performance of the emotion recognition system.


GMM을 이용한 화자 및 문장 독립적 감정 인식 시스템 구현 (Speaker and Context Independent Emotion Recognition System using Gaussian Mixture Model)

  • 강면구;김원구
    • The Institute of Electronics Engineers of Korea: Conference Proceedings / Proceedings of the IEEK 2003 Summer Conference, Vol. IV / pp.2463-2466 / 2003
  • This paper studied pattern recognition algorithms and feature parameters for emotion recognition. KNN was used as the pattern matching technique for comparison, and VQ and GMM were used for speaker- and context-independent recognition. The speech features were pitch, energy, MFCCs, and their first and second derivatives. Experimental results showed that a recognizer using MFCCs and their derivatives performed better than one using pitch and energy parameters, and that the GMM-based recognizer was superior to the KNN- and VQ-based recognizers.
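The feature stacking used above (a base feature track plus its first and second differences, the "delta" and "delta-delta") can be sketched with synthetic data; MFCC extraction itself is omitted, and the simple 1-NN matcher stands in for the KNN baseline:

```python
import numpy as np

def add_deltas(feats):
    """Stack a (frames x dims) feature track with its first and second
    time differences, tripling the feature dimension."""
    d1 = np.diff(feats, axis=0, prepend=feats[:1])
    d2 = np.diff(d1, axis=0, prepend=d1[:1])
    return np.hstack([feats, d1, d2])

def knn_predict(train_X, train_y, x):
    """1-nearest-neighbour label by Euclidean distance."""
    dists = np.linalg.norm(train_X - x, axis=1)
    return train_y[int(np.argmin(dists))]

rng = np.random.default_rng(4)
mfcc = rng.standard_normal((50, 13))   # 50 frames x 13 coefficients (synthetic)
stacked = add_deltas(mfcc)             # -> 50 x 39, as is typical for MFCC+deltas
```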


제스처와 EEG 신호를 이용한 감정인식 방법 (Emotion Recognition Method using Gestures and EEG Signals)

  • 김호덕;정태민;양현창;심귀보
    • Journal of Institute of Control, Robotics and Systems / Vol. 13, No. 9 / pp.832-837 / 2007
  • Electroencephalography (EEG) has been used for many years in psychology to record human brain activity. As technology develops, the neural basis of the functional areas of emotion processing is gradually being revealed, so we use EEG to measure the fundamental brain areas that control human emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language in communication, and their recognition matters as a useful communication medium between humans and computers; gesture recognition research commonly relies on computer vision. Existing studies use either EEG signals or gestures for emotion recognition. In this paper, we use EEG signals and gestures together for human emotion recognition, with driver emotion as the specific target. Experimental results show that using both EEG signals and gestures achieves higher recognition rates than using either alone. Features from both modalities are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.
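Feature-level fusion of the two modalities can be sketched as concatenating EEG band powers with gesture descriptors before classification; the band choices, sampling rate, and gesture features below are assumptions for illustration:

```python
import numpy as np

def band_power(eeg, lo, hi, fs=128):
    """Mean power of a single EEG channel in the [lo, hi) Hz band."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0/fs)
    band = (freqs >= lo) & (freqs < hi)
    return float(spectrum[band].mean())

rng = np.random.default_rng(5)
eeg = rng.standard_normal(512)              # 4 s of one channel at 128 Hz (synthetic)
eeg_feats = np.array([band_power(eeg, lo, hi)
                      for lo, hi in [(4, 8), (8, 13), (13, 30)]])  # theta/alpha/beta
gesture_feats = np.array([0.7, 0.1])        # e.g. nod rate, shake rate (hypothetical)
fused = np.concatenate([eeg_feats, gesture_feats])  # one vector for the classifier
```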

정서 재인 방법 고찰을 통한 통합적 모델 모색에 관한 연구 (Toward an integrated model of emotion recognition methods based on reviews of previous work)

  • 박미숙;박지은;손진훈
    • Science of Emotion and Sensibility / Vol. 14, No. 1 / pp.101-116 / 2011
  • The field of affective computing has developed computer systems that recognize users' emotions to make human-computer interaction more effective. The purpose of this study is to review emotion recognition research grounded in psychological theory and to propose a more advanced recognition method. The main body reviews three representative approaches: first, facial-expression-based recognition grounded in Darwin's theory; second, physiological-signal-based recognition grounded in James's theory; and third, multimodal recognition grounded in an integration of the two. Each is discussed in terms of its theoretical background and the findings to date. In conclusion, summarizing the limitations of prior work, we propose: first, including diverse physiological responses (e.g., brain activity, facial temperature) as an alternative to the limited physiological signals currently used; second, a recognition method based on the dimensional concept of emotion, so that ambiguous emotions can be distinguished; and third, including the cognitive factors that influence emotion elicitation. The proposed method is an integrated model that incorporates diverse physiological signals, is based on the dimensional concept of emotion, and considers cognitive factors.


Recognition of Emotion and Emotional Speech Based on Prosodic Processing

  • Kim, Sung-Ill
    • The Journal of the Acoustical Society of Korea / Vol. 23, No. 3E / pp.85-90 / 2004
  • This paper presents two new approaches: one concerns recognition of emotional speech (anger, happiness, normal, sadness, or surprise), and the other concerns recognizing emotion in speech. For the proposed speech recognition system handling speech with emotional states, nine kinds of prosodic features were extracted and given to a prosodic identifier; in evaluation, recognition rates on emotional speech increased markedly over the existing speech recognizer. For emotion recognition, on the other hand, four prosodic parameters (pitch, energy, and their derivatives) were trained with discrete-duration continuous hidden Markov models (DDCHMM). The emotion models were adapted to a specific speaker's speech using maximum a posteriori (MAP) estimation; in evaluation, recognition rates on vocal emotional states gradually increased with the number of adaptation samples.
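The MAP adaptation step can be sketched for a single Gaussian mean: the speaker-independent prior mean is blended with the adaptation-sample mean, weighted by a relevance factor. This is a deliberate simplification of the paper's DDCHMM setup, with the relevance factor and data dimensions chosen arbitrarily:

```python
import numpy as np

def map_adapt_mean(prior_mean, adapt_data, tau=10.0):
    """MAP mean estimate: blend of the prior mean and the adaptation-sample
    mean, with tau controlling how strongly the prior resists the data."""
    n = len(adapt_data)
    sample_mean = np.mean(adapt_data, axis=0)
    return (tau * prior_mean + n * sample_mean) / (tau + n)

prior = np.array([0.0, 0.0])                       # speaker-independent mean
rng = np.random.default_rng(6)
samples = rng.standard_normal((50, 2)) + np.array([2.0, -1.0])  # speaker data
adapted = map_adapt_mean(prior, samples)
# With more adaptation samples, the estimate moves closer to the speaker data,
# consistent with the reported trend of rates improving with sample count.
```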