• Title/Summary/Keyword: Speech Emotion Recognition

Emotion Robust Speech Recognition using Speech Transformation (음성 변환을 사용한 감정 변화에 강인한 음성 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.5 / pp.683-687 / 2010
  • This paper studies methods that use frequency warping, a speech transformation technique, to build a speech recognition system robust to emotional variation. The effect of emotional variation on the speech signal was first examined using a speech database containing various emotions; the results show that emotion alters the speech spectrum, and that this is one reason the performance of speech recognition systems degrades. A new training method that applies frequency warping during training is then presented to reduce this effect, and a recognition system based on vocal tract length normalization (VTLN) is built for comparison. Isolated-word recognition experiments with HMMs show that the new training method lowers the error rate of the conventional recognition system on speech containing various emotions. (A minimal sketch of the warping step follows this entry.)
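The warping step can be pictured with a short sketch: a piecewise-linear frequency warp in the spirit of VTLN, controlled by a single warping factor `alpha`. The exact warping function and training pipeline are assumptions here, since the abstract does not specify them.

```python
# Minimal sketch: warp the frequency axis of a magnitude spectrum by a
# factor alpha (VTLN-style). The paper's exact warping function is an
# assumption here.
import numpy as np

def warp_spectrum(mag_spec: np.ndarray, alpha: float) -> np.ndarray:
    """Resample the spectrum at warped frequency positions."""
    n_bins = mag_spec.shape[0]
    bins = np.arange(n_bins)
    src = np.clip(bins * alpha, 0, n_bins - 1)  # warped source position per bin
    return np.interp(src, bins, mag_spec)

# During training, each utterance could be augmented with several warping
# factors so the acoustic model sees emotion-like spectral shifts.
spec = np.abs(np.fft.rfft(np.random.randn(512)))
augmented = [warp_spectrum(spec, a) for a in (0.92, 1.0, 1.08)]
```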

Multimodal Parametric Fusion for Emotion Recognition

  • Kim, Jonghwa
    • International Journal of Advanced Smart Convergence / v.9 no.1 / pp.193-201 / 2020
  • The main objective of this study is to investigate the impact of additional modalities on the performance of emotion recognition using speech, facial expressions, and physiological measurements. To compare the different approaches, we designed a feature-based recognition system as a benchmark, carrying out linear supervised classification followed by leave-one-out cross-validation. For the classification of four emotions, bimodal fusion improved the recognition accuracy of the unimodal approaches, while the performance of trimodal fusion varied strongly across individuals. Furthermore, we observed extremely high disparity between single-class recognition rates, and no single modality performed best across our experiments. Based on these observations, we developed a novel fusion method, called parametric decision fusion (PDF), which builds emotion-specific classifiers and exploits a parametrized decision process. The PDF scheme improved accuracy by 16% for subject-dependent recognition and 10% for subject-independent recognition compared with the best unimodal results. (A rough sketch of the fusion idea follows this entry.)
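As a rough illustration of the decision-fusion idea, the sketch below combines per-modality class probabilities with fusion weights. The classifiers, synthetic data, and weighting scheme are illustrative assumptions, not the paper's exact PDF formulation.

```python
# Sketch: per-modality classifiers produce class probabilities that are
# combined with fusion weights; in the paper these weights would be
# parametrized per emotion class.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_scores(scores_by_modality, weights):
    """Weighted sum of per-modality class-probability matrices."""
    fused = sum(w * s for w, s in zip(weights, scores_by_modality))
    return fused.argmax(axis=1)

rng = np.random.default_rng(0)
X_speech, X_face = rng.normal(size=(200, 10)), rng.normal(size=(200, 8))
y = rng.integers(0, 4, size=200)  # four emotions

clf_speech = LogisticRegression(max_iter=1000).fit(X_speech, y)
clf_face = LogisticRegression(max_iter=1000).fit(X_face, y)

scores = [clf_speech.predict_proba(X_speech), clf_face.predict_proba(X_face)]
pred = fuse_scores(scores, weights=[0.6, 0.4])  # weights tuned on validation data
```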

Acoustic parameters for induced emotion categorizing and dimensional approach (자연스러운 정서 반응의 범주 및 차원 분류에 적합한 음성 파라미터)

  • Park, Ji-Eun; Park, Jeong-Sik; Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.16 no.1 / pp.117-124 / 2013
  • This study examined how precisely the MFCC, LPC, energy, and pitch parameters of speech data, which have mainly been used in voice recognition systems, can predict vocal emotion categories as well as the dimensions of vocal emotion. 110 college students participated in the experiment; to elicit more realistic emotional responses, well-defined emotion-inducing stimuli were used. Since a dimensional approach is more useful for realistic emotion classification, the study analyzed the relationship between the MFCC, LPC, energy, and pitch parameters and four emotional dimensions (valence, arousal, intensity, and potency), and identified the best vocal cue parameters for predicting each dimension by stepwise multiple regression analysis. Emotion categorization accuracy analyzed by LDA was 62.7%, and all four dimensional regression models were statistically significant (p < .001). These results suggest that the parameters can also be applied to spontaneous vocal emotion recognition. (A feature-extraction sketch follows this entry.)
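A minimal sketch of the feature set and the LDA categorization, assuming librosa for MFCC/energy/pitch extraction; the full parameter list and the stepwise regression step from the paper are not reproduced, and parameter choices below are assumptions.

```python
# Sketch: utterance-level MFCC/energy/pitch features, then LDA for the
# categorical task. Parameter choices (n_mfcc, pitch range) are assumptions.
import numpy as np
import librosa
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def utterance_features(y: np.ndarray, sr: int) -> np.ndarray:
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    energy = librosa.feature.rms(y=y).mean()
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)
    return np.concatenate([mfcc, [energy, np.nanmean(f0), np.nanstd(f0)]])

sr = 22050
y = np.sin(2 * np.pi * 220 * np.arange(sr) / sr).astype(np.float32)  # 1 s tone
feat = utterance_features(y, sr)

# With a labeled corpus, LDA gives the categorical predictions; the four
# dimensional ratings would instead feed stepwise regression models.
# X = np.stack([utterance_features(y, sr) for y, sr in corpus])
# lda = LinearDiscriminantAnalysis().fit(X, labels)
```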

Emotional Speaker Recognition using Emotional Adaptation (감정 적응을 이용한 감정 화자 인식)

  • Kim, Weon-Goo
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.7 / pp.1105-1110 / 2017
  • Speech with various emotions degrades the performance of speaker recognition systems. This paper proposes a speaker recognition method using emotional adaptation to improve performance on affective speech. For emotional adaptation, an emotional speaker model is generated from an emotion-free speaker model using a small number of affective training utterances and a speaker adaptation method; since obtaining sufficient affective speech from a speaker is not easy, using only a few affective utterances is practical in real situations. The proposed method was evaluated on a Korean database containing four emotions. Experimental results show better performance than conventional methods in both speaker verification and identification. (A hedged sketch of GMM-style adaptation follows this entry.)
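One common way to adapt a speaker model with little data is MAP adaptation of GMM means; the sketch below follows that standard recipe under a relevance factor `r`, which may differ in detail from the adaptation method the paper actually used.

```python
# Sketch: standard GMM-MAP mean adaptation with relevance factor r; the
# paper's exact adaptation rule is not specified in the abstract.
import numpy as np
from sklearn.mixture import GaussianMixture

def map_adapt_means(gmm: GaussianMixture, X: np.ndarray, r: float = 16.0):
    """Shift GMM means toward the statistics of adaptation data X."""
    post = gmm.predict_proba(X)          # responsibilities (n_frames, n_comp)
    n_k = post.sum(axis=0) + 1e-10       # soft counts per component
    ex_k = post.T @ X / n_k[:, None]     # per-component data means
    alpha = (n_k / (n_k + r))[:, None]   # data-dependent adaptation weight
    gmm.means_ = (1 - alpha) * gmm.means_ + alpha * ex_k
    return gmm

rng = np.random.default_rng(0)
neutral = rng.normal(size=(500, 5))                # neutral training features
speaker_model = GaussianMixture(n_components=4, random_state=0).fit(neutral)
emotional = rng.normal(loc=0.5, size=(30, 5))      # few affective utterances
emotional_model = map_adapt_means(speaker_model, emotional)
```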

On the Implementation of a Facial Animation Using the Emotional Expression Techniques (FAES : 감성 표현 기법을 이용한 얼굴 애니메이션 구현)

  • Kim, Sang-Kil; Min, Yong-Sik
    • The Journal of the Korea Contents Association / v.5 no.2 / pp.147-155 / 2005
  • In this paper, we present FAES (Facial Animation with Emotion and Speech), a system for speech-driven face animation with emotions. Face cartoons are animated not only from the input speech but also from emotions derived from the speech signal, and the system ensures smooth transitions and exact representation in the animation. After collecting training data, we built a database and used an SVM (Support Vector Machine) to recognize four categories of emotion: neutral, dislike, fear, and surprise, yielding a system for speech-driven animation with emotions. The system was trained on young Korean speakers and focuses on Korean emotional facial expressions. Experimental results show that the range of expressible emotional regions expanded, and that the accuracies of emotion recognition and continuous speech recognition increased by 7% and 5%, respectively, compared with the previous method. (A minimal classifier sketch follows this entry.)
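A minimal sketch of the SVM emotion classifier that drives the animation stage, with placeholder features and labels; the paper's actual acoustic features and training data are not specified in the abstract.

```python
# Sketch: four-class SVM over acoustic feature vectors; the predicted
# emotion would then select matching facial key-frames for animation.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

EMOTIONS = ["neutral", "dislike", "fear", "surprise"]

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))            # placeholder acoustic features
y = rng.integers(0, len(EMOTIONS), 400)   # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(EMOTIONS[int(clf.predict(X[:1])[0])])
```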

Multi-Emotion Regression Model for Recognizing Inherent Emotions in Speech Data (음성 데이터의 내재된 감정인식을 위한 다중 감정 회귀 모델)

  • Moung Ho Yi; Myung Jin Lim; Ju Hyun Shin
    • Smart Media Journal / v.12 no.9 / pp.81-88 / 2023
  • Recently, online communication has increased with the spread of non-face-to-face services during COVID-19. In non-face-to-face situations, the other person's opinions and emotions are recognized through modalities such as text, speech, and images, and research on multimodal emotion recognition that combines these modalities is actively underway. Among these, emotion recognition from speech data is attracting attention as a means of understanding emotion through sound and language information, but emotions are usually recognized from a single speech feature value as a single label. Because a variety of emotions coexist in a conversation in complex ways, a method for recognizing multiple emotions is needed. This paper therefore proposes a multi-emotion regression model that preprocesses speech data, extracts feature vectors, and recognizes the complex, inherent emotions while taking the passage of time into account. (A sketch of the multi-output idea follows this entry.)
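The multi-emotion regression idea can be sketched as a multi-output regressor that predicts a continuous intensity per emotion rather than a single label; the feature extractor and model below are illustrative assumptions, not the paper's architecture.

```python
# Sketch: one continuous intensity per emotion, so several emotions can
# be active at once (e.g. high "happy" with mild "sad").
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.ensemble import GradientBoostingRegressor

EMOTIONS = ["happy", "sad", "angry", "neutral"]

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))               # placeholder speech feature vectors
Y = rng.random(size=(300, len(EMOTIONS)))    # per-emotion intensities in [0, 1]

reg = MultiOutputRegressor(GradientBoostingRegressor()).fit(X, Y)
intensities = reg.predict(X[:1])[0]
print(dict(zip(EMOTIONS, intensities.round(2))))
```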

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi; Joo, Jong-Tae; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.20-26 / 2008
  • As intelligent robots and computers become more common, interaction between them and humans grows more important, and emotion recognition and expression are indispensable for that interaction. In this paper, we first extract emotional features from the speech signal and the facial image. We then apply both BL (Bayesian Learning) and PCA (Principal Component Analysis) and classify five emotion patterns (normal, happy, anger, surprise, and sad). To enhance the recognition rate, we experiment with both decision fusion and feature fusion: the decision fusion method applies fuzzy membership functions to the outputs of the individual recognition systems, while the feature fusion method selects superior features through SFS (Sequential Forward Selection) and feeds them to an MLP (Multi-Layer Perceptron) neural network for classifying the five emotion patterns. The recognized result is then applied to a 2D facial model to express the emotion. (A sketch of the feature-fusion path follows this entry.)
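A minimal sketch of the feature-fusion path described above: concatenated speech and facial features, SFS feature selection, then an MLP over the selected subset. Feature dimensions and model sizes are illustrative assumptions.

```python
# Sketch: SFS picks a subset of the concatenated features, then an MLP
# classifies five emotion patterns. Sizes here are illustrative.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = np.hstack([rng.normal(size=(150, 10)),    # speech features
               rng.normal(size=(150, 10))])   # facial-image features
y = rng.integers(0, 5, 150)                   # five emotion patterns

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
sfs = SequentialFeatureSelector(mlp, n_features_to_select=6, direction="forward")
sfs.fit(X, y)
clf = mlp.fit(sfs.transform(X), y)            # MLP over the selected features
```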

Emotion Recognition using Prosodic Feature Vector and Gaussian Mixture Model (운율 특성 벡터와 가우시안 혼합 모델을 이용한 감정인식)

  • Kwak, Hyun-Suk; Kim, Soo-Hyun; Kwak, Yoon-Keun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2002.11b / pp.762-766 / 2002
  • This paper describes an emotion recognition algorithm based on the HMM (Hidden Markov Model) method. The relation between machines and humans has so far been one-sided, which is one reason people are reluctant to become familiar with today's multi-service robots; if the capability of emotion recognition is granted to a robot system, the role of the machine can change considerably. Pitch and energy extracted from human speech, known as prosodic features, are good and important factors for classifying each emotion (neutral, happy, sad, angry, etc.). The HMM is a powerful and effective way to construct a statistical model from feature vectors built from a mixture of prosodic features. (A hedged per-emotion model sketch follows this entry.)
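The title names a Gaussian mixture model while the abstract describes an HMM; below is a hedged sketch of the per-emotion GMM variant named in the title, with placeholder prosodic frames. An utterance is assigned to the model with the highest log-likelihood.

```python
# Sketch: one GMM per emotion over (pitch, energy) frames; classification
# by maximum average log-likelihood. Data shapes are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
EMOTIONS = ["neutral", "happy", "sad", "angry"]

models = {}
for i, emo in enumerate(EMOTIONS):
    frames = rng.normal(loc=i, size=(500, 2))   # placeholder (pitch, energy)
    models[emo] = GaussianMixture(n_components=4, random_state=0).fit(frames)

def classify(utterance_frames: np.ndarray) -> str:
    scores = {e: m.score(utterance_frames) for e, m in models.items()}
    return max(scores, key=scores.get)

print(classify(rng.normal(loc=2, size=(80, 2))))  # likely "sad" here
```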

A Study on the Automatic Monitoring System for the Contact Center Using Emotion Recognition and Keyword Spotting Method (감성인식과 핵심어인식 기술을 이용한 고객센터 자동 모니터링 시스템에 대한 연구)

  • Yoon, Won-Jung; Kim, Tae-Hong; Park, Kyu-Sik
    • Journal of Internet Computing and Services / v.13 no.3 / pp.107-114 / 2012
  • In this paper, we propose an automatic monitoring system for contact centers that manages customers' complaints and agents' service quality. The proposed system enables more accurate monitoring by combining emotion recognition for neutral/anger voice emotion with a keyword spotting method, and can provide professional consultation and management for customers who use abusive language, such as verbal abuse and sexual harassment. We developed a method for building a robust algorithm over a heterogeneous speech DB of many unspecified customers. Experimental results on real contact center speech data confirm stable and improved performance. (A minimal sketch of the flagging logic follows this entry.)
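A minimal sketch of the monitoring logic, assuming an upstream anger scorer and keyword transcripts; the keyword list, threshold, and scorer below are placeholders, not the paper's actual system.

```python
# Sketch: flag a call when the anger score crosses a threshold or a
# watch-list keyword is spotted; flagged calls go to a supervisor.
WATCH_KEYWORDS = {"lawsuit", "idiot", "refund now"}  # illustrative list

def flag_call(transcript: str, anger_score: float,
              threshold: float = 0.7) -> bool:
    text = transcript.lower()
    keyword_hit = any(k in text for k in WATCH_KEYWORDS)
    return anger_score >= threshold or keyword_hit

print(flag_call("I want a refund now", anger_score=0.35))  # True via keyword
```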

Emotion recognition in speech using hidden Markov model (은닉 마르코프 모델을 이용한 음성에서의 감정인식)

  • 김성일; 정현열
    • Journal of the Institute of Convergence Signal Processing / v.3 no.3 / pp.21-26 / 2002
  • This paper presents a new approach to identifying human emotional states such as anger, happiness, neutrality, sadness, and surprise using discrete-duration continuous hidden Markov models (DDCHMM). Emotional feature parameters are first defined from the input speech signals; in this study we used prosodic parameters such as the pitch signal, energy, and their derivatives, which were then used to train HMMs for recognition. Speaker-adapted emotional models based on maximum a posteriori (MAP) estimation were also considered for speaker adaptation. Simulation results show that the vocal emotion recognition rate gradually increased with the number of adaptation samples. (A sketch of the prosodic front end follows this entry.)
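A hedged sketch of the prosodic front end: frame-level pitch, energy, and their derivatives, which would then train one model per emotion. Parameter values are illustrative assumptions, and the DDCHMM itself is not reproduced.

```python
# Sketch: pitch + energy + deltas per frame, using librosa. One HMM per
# emotion would be trained on such sequences; a test utterance is then
# assigned to the model with the highest likelihood.
import numpy as np
import librosa

def prosodic_features(y: np.ndarray, sr: int) -> np.ndarray:
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)   # frame-level pitch
    energy = librosa.feature.rms(y=y)[0]            # frame-level energy
    n = min(len(f0), len(energy))
    base = np.vstack([f0[:n], energy[:n]])
    delta = librosa.feature.delta(base)             # first derivatives
    return np.vstack([base, delta]).T               # (n_frames, 4)

sr = 22050
y = np.sin(2 * np.pi * 180 * np.arange(sr) / sr).astype(np.float32)  # 1 s tone
feats = prosodic_features(y, sr)
```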
