Title/Summary/Keyword: speech emotion recognition


Speech Parameters for the Robust Emotional Speech Recognition (감정에 강인한 음성 인식을 위한 음성 파라메터)

  • Kim, Weon-Goo
    • Journal of Institute of Control, Robotics and Systems, v.16 no.12, pp.1137-1142, 2010
  • This paper studies speech parameters that are less affected by human emotion, for the development of a robust speech recognition system. For this purpose, the effect of emotion on speech recognition and emotion-robust speech parameters were investigated using a speech database containing various emotions. Mel-cepstral coefficients, delta-cepstral coefficients, RASTA mel-cepstral coefficients and frequency-warped mel-cepstral coefficients were used as feature parameters, and CMS (Cepstral Mean Subtraction) was used as a signal bias removal technique. Experimental results showed that an HMM-based speaker-independent word recognizer using vocal-tract-length-normalized mel-cepstral coefficients, their derivatives and CMS for signal bias removal achieved the best performance, a 0.78% word error rate. This corresponds to about a 50% word error reduction compared to the baseline system using mel-cepstral coefficients, their derivatives and CMS.
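
CMS itself is a one-line per-utterance normalization; below is a minimal NumPy sketch (function and variable names are illustrative, not from the paper):

    import numpy as np

    def cepstral_mean_subtraction(cepstra):
        # cepstra: (num_frames, num_coeffs) mel-cepstral coefficients.
        # Subtracting the per-utterance mean removes a stationary
        # channel/bias term from every frame.
        return cepstra - cepstra.mean(axis=0, keepdims=True)

    # Example: 200 frames of 13 coefficients with a constant bias of 5.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 13)) + 5.0
    print(cepstral_mean_subtraction(feats).mean(axis=0))  # ~0 everywhere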

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae;Jang, In-Hun;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.13 no.8, pp.754-759, 2007
  • In this paper, we propose a Bi-Modal Sensor Fusion Algorithm, an emotion recognition method that can classify four emotions (happy, sad, angry, surprise) using facial images and speech signals together. We extract feature vectors from the speech signal using acoustic features without language features and classify emotional patterns using a neural network. We also select features of the mouth, eyes and eyebrows from the facial image and apply Principal Component Analysis (PCA) to the extracted feature vectors to obtain low-dimensional feature vectors. Finally, we propose a method that fuses the emotion recognition results obtained from the facial image and the speech.
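
The PCA step for compressing the facial feature vectors is standard; a minimal sketch with scikit-learn (the dimensions are made up for illustration):

    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical 40-dim facial feature vectors (mouth, eyes, eyebrows),
    # one row per training image.
    rng = np.random.default_rng(0)
    face_features = rng.normal(size=(300, 40))

    # Project onto the first 8 principal components to obtain the
    # low-dimensional vectors fed to the emotion classifier.
    pca = PCA(n_components=8)
    low_dim = pca.fit_transform(face_features)
    print(low_dim.shape, pca.explained_variance_ratio_.sum())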

Speech emotion recognition using attention mechanism-based deep neural networks (주목 메커니즘 기반의 심층신경망을 이용한 음성 감정인식)

  • Ko, Sang-Sun;Cho, Hye-Seung;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea, v.36 no.6, pp.407-412, 2017
  • In this paper, we propose a speech emotion recognition method using a deep neural network based on the attention mechanism. The proposed method combines CNN (Convolutional Neural Networks), GRU (Gated Recurrent Unit), DNN (Deep Neural Networks) and an attention mechanism. Since the spectrogram of a speech signal contains characteristic patterns according to the emotion, we modeled these patterns by applying tuned Gabor filters as the convolutional filters of a typical CNN. In addition, we applied an attention mechanism with the CNN and an FC (Fully-Connected) layer to obtain attention weights that take the context information of the extracted features into account, and used them for emotion recognition. To verify the proposed method, we conducted emotion recognition experiments on six emotions. The experimental results show that the proposed method achieves higher performance in speech emotion recognition than conventional methods.
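
A minimal sketch of attention pooling over frame-level features, in the spirit of the pipeline described (PyTorch; layer sizes and names are illustrative, not the authors' architecture):

    import torch
    import torch.nn as nn

    class AttentionPooling(nn.Module):
        # Score each time step, softmax over time, pool a weighted sum.
        def __init__(self, feat_dim):
            super().__init__()
            self.score = nn.Linear(feat_dim, 1)

        def forward(self, frames):
            # frames: (batch, time, feat_dim), e.g. GRU outputs over a
            # spectrogram; weights sum to 1 over the time axis.
            weights = torch.softmax(self.score(frames), dim=1)
            return (weights * frames).sum(dim=1)  # (batch, feat_dim)

    # Example: pool 100 frames of 64-dim features, classify 6 emotions.
    pool, classifier = AttentionPooling(64), nn.Linear(64, 6)
    logits = classifier(pool(torch.randn(8, 100, 64)))
    print(logits.shape)  # torch.Size([8, 6])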

Emotion Recognition Using Output Data of Image and Speech (영상과 음성의 출력 데이터를 이용한 감성 인식)

  • Joo, Young-Hoon;Oh, Jae-Heung;Park, Chang-Hyun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.13 no.3, pp.275-280, 2003
  • In this paper, we propose a method for recognizing human emotion using the output data of image and speech recognizers. The proposed method is based on the recognition rates of the image and speech modalities. When only one modality, image or speech, is used, a wrong recognition easily leads to an incorrect result. To solve this problem, we propose a new method that reduces the effect of wrong recognition by multiplying the emotion status from the modality with the higher recognition rate by a higher weight. To evaluate the proposed method, we suggest a simple recognizer using image and speech. Finally, we show its potential through experiments.
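
The weighting rule is simple enough to show directly; a minimal NumPy sketch (recognition rates and scores are made-up values):

    import numpy as np

    EMOTIONS = ["happy", "sad", "angry", "surprise"]

    def fuse(image_scores, speech_scores, image_rate, speech_rate):
        # Weight each modality's emotion scores by its recognition rate,
        # so the more reliable modality dominates the decision.
        combined = (image_rate * np.asarray(image_scores)
                    + speech_rate * np.asarray(speech_scores))
        return EMOTIONS[int(np.argmax(combined))]

    # Speech is the more reliable modality here, so its choice wins.
    print(fuse([0.4, 0.3, 0.2, 0.1], [0.1, 0.6, 0.2, 0.1],
               image_rate=0.70, speech_rate=0.85))  # -> "sad"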

Speech and Textual Data Fusion for Emotion Detection: A Multimodal Deep Learning Approach (감정 인지를 위한 음성 및 텍스트 데이터 퓨전: 다중 모달 딥 러닝 접근법)

  • Edward Dwijayanto Cahyadi;Mi-Hwa Song
    • Proceedings of the Korea Information Processing Society Conference, 2023.11a, pp.526-527, 2023
  • Speech emotion recognition (SER) is one of the interesting topics in the machine learning field, and building a multi-modal SER system brings numerous benefits. This paper explains how to fuse BERT as the text recognizer and a CNN as the speech recognizer to build a multi-modal SER system.
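
One common realization of such fusion is to concatenate the two encoders' embeddings before a classifier head; a minimal PyTorch sketch (the random tensors stand in for BERT and CNN outputs, which the paper does not detail):

    import torch
    import torch.nn as nn

    class LateFusionSER(nn.Module):
        # Concatenate a text embedding (e.g. a BERT [CLS] vector) with a
        # pooled speech embedding (e.g. CNN features) and classify.
        def __init__(self, text_dim=768, speech_dim=128, num_emotions=4):
            super().__init__()
            self.head = nn.Sequential(
                nn.Linear(text_dim + speech_dim, 256),
                nn.ReLU(),
                nn.Linear(256, num_emotions),
            )

        def forward(self, text_emb, speech_emb):
            return self.head(torch.cat([text_emb, speech_emb], dim=-1))

    # Stand-in embeddings for a batch of 8 utterances.
    model = LateFusionSER()
    print(model(torch.randn(8, 768), torch.randn(8, 128)).shape)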

Improved speech emotion recognition using histogram equalization and data augmentation techniques (히스토그램 등화와 데이터 증강 기법을 이용한 개선된 음성 감정 인식)

  • Heo, Woon-Haeng;Kwon, Oh-Wook
    • Phonetics and Speech Sciences, v.9 no.2, pp.77-83, 2017
  • We propose a new method to reduce emotion recognition errors caused by variation in speaker characteristics and speech rate. Firstly, to reduce variation in speaker characteristics, we adjust the features of a test speaker to fit the distribution of all training data using the histogram equalization (HE) algorithm. Secondly, to deal with variation in speech rate, we augment the training data with speech generated at various speech rates. In computer experiments using EMO-DB, KRN-DB and eNTERFACE-DB, the proposed method improves weighted accuracy by a relative 34.7%, 23.7% and 28.1%, respectively.
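
A minimal sketch of the histogram-equalization idea, mapping each test-feature dimension onto the training distribution through matched quantiles (NumPy; a simplification of the HE algorithm, with illustrative names):

    import numpy as np

    def histogram_equalize(test_feat, train_feat, n_quantiles=100):
        # Per dimension, remap test values so their empirical
        # distribution matches that of the training data.
        qs = np.linspace(0.0, 1.0, n_quantiles)
        out = np.empty_like(test_feat)
        for d in range(test_feat.shape[1]):
            src = np.quantile(test_feat[:, d], qs)   # test quantiles
            dst = np.quantile(train_feat[:, d], qs)  # training quantiles
            out[:, d] = np.interp(test_feat[:, d], src, dst)
        return out

    # A test speaker whose features are shifted and scaled vs. training.
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, size=(5000, 13))
    test = rng.normal(2.0, 3.0, size=(300, 13))
    eq = histogram_equalize(test, train)
    print(round(eq.mean(), 2), round(eq.std(), 2))  # roughly 0.0 and 1.0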

A Training Method for Emotion Recognition using Emotional Adaptation (감정 적응을 이용한 감정 인식 학습 방법)

  • Kim, Weon-Goo
    • Journal of IKEEE, v.24 no.4, pp.998-1003, 2020
  • In this paper, an emotion recognition training method using emotional adaptation is proposed to improve the performance of an existing emotion recognition system. For emotion adaptation, an emotional speech model is created from an emotion-free speech model using a small number of emotional training utterances and emotion adaptation methods. The proposed method shows superior performance even when using fewer emotional utterances than the existing method. Since it is not easy to obtain enough emotional speech for training, using a small number of emotional utterances is very practical in real situations. In experiments using a Korean database containing four emotions, the proposed method using emotional adaptation showed better performance than the existing method.
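
The abstract does not name the adaptation technique; MAP (maximum a posteriori) adaptation of Gaussian means is one classic way to adapt a speech model from little data, sketched here purely for illustration:

    import numpy as np

    def map_adapt_mean(prior_mean, emotional_frames, tau=10.0):
        # Interpolate between the emotion-free prior mean and the sample
        # mean of the few emotional frames; tau sets the prior's weight.
        n = len(emotional_frames)
        data_mean = np.mean(emotional_frames, axis=0)
        return (tau * prior_mean + n * data_mean) / (tau + n)

    # Adapt a 13-dim Gaussian mean with only 20 emotional frames.
    rng = np.random.default_rng(0)
    adapted = map_adapt_mean(np.zeros(13), rng.normal(1.0, 1.0, (20, 13)))
    print(adapted.round(2))  # pulled partway from 0 toward ~1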

Speech Emotion Recognition Based on Deep Networks: A Review (딥네트워크 기반 음성 감정인식 기술 동향)

  • Mustaqeem, Mustaqeem;Kwon, Soonil
    • Proceedings of the Korea Information Processing Society Conference, 2021.05a, pp.331-334, 2021
  • In recent years, a significant amount of research and development has been devoted to the use of Deep Learning (DL) for speech emotion recognition (SER) based on Convolutional Neural Networks (CNN). These techniques usually focus on utilizing CNNs for applications associated with emotion recognition. Numerous DL-based mechanisms have also been considered, which are important in SER-based human-computer interaction (HCI) applications. Compared with other methods, DL-based methods present quite promising results in many fields, including automatic speech recognition, and therefore attract many studies and investigations. This article reviews and evaluates the improvements made in the SER domain and discusses the existing studies on DL- and CNN-based SER.

Reinforcement Learning Method Based Interactive Feature Selection(IFS) Method for Emotion Recognition (감성 인식을 위한 강화학습 기반 상호작용에 의한 특징선택 방법 개발)

  • Park Chang-Hyun;Sim Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems, v.12 no.7, pp.666-670, 2006
  • This paper presents a novel feature selection method for emotion recognition, where the original feature set may be large; here, emotion recognition is performed on emotional speech signals. Feature selection benefits pattern recognition performance and mitigates the curse of dimensionality. We implemented a simulator called IFS and applied its results to an emotion recognition system (ERS), which was also implemented for this research. The proposed feature selection method is based on reinforcement learning and, since it needs responses from a human user, is called Interactive Feature Selection. By performing IFS, we obtained the three best features and applied them to the ERS; they performed better than a randomly selected feature set.
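
The abstract gives few algorithmic details; below is a minimal reinforcement-style sketch in which simulated user feedback serves as the reward that updates per-feature value estimates (NumPy; entirely illustrative, not the authors' IFS):

    import numpy as np

    rng = np.random.default_rng(0)
    num_features = 20
    values = np.zeros(num_features)  # estimated usefulness per feature
    alpha, epsilon = 0.1, 0.3        # learning rate, exploration rate

    def user_feedback(idx):
        # Stand-in for the human response: features 3, 7 and 11 are
        # secretly the useful ones in this toy setup.
        return 1.0 if idx in (3, 7, 11) else rng.uniform(0.0, 0.3)

    for _ in range(1000):
        # Epsilon-greedy: usually probe the current best, sometimes explore.
        idx = int(rng.integers(num_features)) if rng.random() < epsilon \
              else int(np.argmax(values))
        values[idx] += alpha * (user_feedback(idx) - values[idx])

    print(np.argsort(values)[-3:])  # the three highest-valued features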

Inference Ability Based Emotion Recognition From Speech (추론 능력에 기반한 음성으로부터의 감성 인식)

  • Park, Chang-Hyun;Sim, Kwee-Bo
    • Proceedings of the KIEE Conference, 2004.05a, pp.123-125, 2004
  • Recently, interest in user-friendly machines has been growing, and emotion is one of the most important factors in making a machine feel familiar to people. A machine can use sound or images to express or recognize emotion. This paper deals with a method of recognizing emotion from sound, whose most important emotional component is tone; the inference ability of the brain also takes part in emotion recognition. This paper empirically identifies the emotional components of speech, conducts emotion recognition experiments, and proposes a recognition method using these emotional components and transition probabilities.
