• Title/Summary/Keyword: Emotion recognition


Emotion recognition from speech using Gammatone auditory filterbank

  • Le, Ba-Vui;Lee, Young-Koo;Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference / 2011.06a / pp.255-258 / 2011
  • This paper describes an application of the Gammatone auditory filterbank to emotion recognition from speech. The Gammatone filterbank, a bank of Gammatone filters, is used as a preprocessing stage before feature extraction to obtain the features most relevant to emotion recognition. In the feature extraction step, the energy of each filter's output signal is computed, and the energies of all filters are combined into a feature vector for the learning step. Each feature vector is estimated over a short time window of the input speech signal to exploit its time-domain dependence. Finally, in the learning step, a Hidden Markov Model (HMM) is trained for each emotion class and used to recognize the emotion of an input utterance. In the experiments, feature extraction based on the Gammatone filterbank (GTF) outperforms features based on Mel-Frequency Cepstral Coefficients (MFCC), a well-known feature extraction method for both speech recognition and emotion recognition from speech.
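
To illustrate the front end described above, here is a minimal NumPy sketch: each band is an FIR approximation of a 4th-order Gammatone filter with an ERB-scaled bandwidth, and the per-frame log-energies of the band outputs form the feature vectors that would be fed to the per-class HMMs. The band count, frame sizes, and centre-frequency spacing are illustrative choices, not the paper's settings.

```python
import numpy as np

def erb(f):
    """Equivalent rectangular bandwidth (Glasberg & Moore) at frequency f (Hz)."""
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.05, order=4):
    """FIR approximation of a 4th-order gammatone impulse response at centre fc."""
    t = np.arange(0, duration, 1.0 / fs)
    b = 1.019 * erb(fc)
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.sqrt(np.sum(g ** 2))  # unit-energy normalisation

def gtf_features(signal, fs, n_bands=32, frame_len=400, hop=160):
    """Per-frame log-energy of each gammatone band (one vector per frame)."""
    # centre frequencies log-spaced between 100 Hz and 0.9 * Nyquist (illustrative)
    fcs = np.geomspace(100, fs / 2 * 0.9, n_bands)
    bands = [np.convolve(signal, gammatone_ir(fc, fs), mode="same") for fc in fcs]
    n_frames = 1 + (len(signal) - frame_len) // hop
    feats = np.empty((n_frames, n_bands))
    for i in range(n_frames):
        seg = slice(i * hop, i * hop + frame_len)
        feats[i] = [np.log(np.sum(band[seg] ** 2) + 1e-10) for band in bands]
    return feats

fs = 16000
x = np.random.randn(fs)  # stand-in for one second of emotional speech
print(gtf_features(x, fs).shape)  # (frames, bands)
```

Per the abstract, each emotion class would then get its own HMM trained on these frame sequences, with the class of a test utterance chosen by maximum likelihood.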

Interactive Feature Selection Algorithm for Emotion Recognition (감정 인식을 위한 Interactive Feature Selection(IFS) 알고리즘)

  • Yang, Hyun-Chang;Kim, Ho-Duck;Park, Chang-Hyun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.6 / pp.647-652 / 2006
  • This paper presents a novel feature selection method for emotion recognition, a task that may involve a large number of original features; here, the target is emotional speech signals. Feature selection benefits pattern recognition performance and mitigates the 'curse of dimensionality'. We implemented a simulator called 'IFS' and applied its results to an emotion recognition system (ERS), which was also implemented for this research. The proposed method is based on reinforcement learning and, because it requires responses from a human user, is called 'Interactive Feature Selection'. Running the IFS yielded the three best features, which were then applied to the ERS; these three features outperformed a randomly selected feature set.
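
The abstract gives few algorithmic details, but the idea of reinforcement-style selection driven by human feedback can be sketched as follows: a value estimate is kept per feature, subsets are chosen epsilon-greedily, and a (here simulated) user reward updates the estimates. The reward function, learning rate, and subset size of 3 are illustrative stand-ins for the interactive step.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, subset_size, episodes, eps, lr = 10, 3, 200, 0.2, 0.1
q = np.zeros(n_features)  # estimated usefulness of each feature

def user_feedback(subset):
    """Stand-in for the interactive step: a human rates the recognizer
    trained on `subset`. Here features 1, 4, 7 are secretly the good ones."""
    return len(set(int(i) for i in subset) & {1, 4, 7}) / subset_size

for _ in range(episodes):
    if rng.random() < eps:                     # explore a random subset
        subset = rng.choice(n_features, subset_size, replace=False)
    else:                                      # exploit current estimates
        subset = np.argsort(q)[-subset_size:]
    r = user_feedback(subset)                  # reward from the human user
    q[subset] += lr * (r - q[subset])          # incremental value update

print("selected features:", sorted(int(i) for i in np.argsort(q)[-subset_size:]))
```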

A Study on Emotion Recognition Systems based on the Probabilistic Relational Model Between Facial Expressions and Physiological Responses (생리적 내재반응 및 얼굴표정 간 확률 관계 모델 기반의 감정인식 시스템에 관한 연구)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.19 no.6 / pp.513-519 / 2013
  • Current vision-based approaches to emotion recognition, such as facial expression analysis, have many technical limitations in real circumstances and are not suitable as the sole modality in practical applications. In this paper, we propose an approach to emotion recognition that combines the extrinsic representations and intrinsic activities found among the natural responses of humans given specific stimuli for inducing emotional states. The intrinsic activities can compensate for the uncertainty of the extrinsic representations of emotional states. The combination is performed using Probabilistic Relational Models (PRMs), an extension of Bayesian networks, which are learned by greedy-search and expectation-maximization algorithms. Facial-expression-based extrinsic emotion features and physiological-signal-based intrinsic emotion features from previous research are combined as attributes of the PRMs in the emotion recognition domain. Maximum likelihood estimation with the given dependency structure and the estimated parameter set is used to classify the target emotional states.
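
A full PRM is beyond a short example, but the core fusion idea, maximum likelihood classification over extrinsic (facial) and intrinsic (physiological) attributes, can be approximated with a simple Bayesian model. The codebook indices, toy data, and conditional-independence assumption below are hypothetical simplifications, not the paper's model.

```python
import numpy as np

# Toy discrete observations: facial-expression code (extrinsic) and
# physiological-response code (intrinsic) for each training sample.
emotions = ["happy", "sad", "angry"]
face   = np.array([0, 0, 1, 1, 2, 2, 0, 2])   # hypothetical codebook indices
physio = np.array([0, 1, 1, 1, 2, 2, 0, 2])
label  = np.array([0, 0, 1, 1, 2, 2, 0, 2])

def fit_cpt(x, y, n_states, n_classes, alpha=1.0):
    """Conditional probability table P(x | class) with Laplace smoothing."""
    cpt = np.full((n_classes, n_states), alpha)
    for xi, yi in zip(x, y):
        cpt[yi, xi] += 1
    return cpt / cpt.sum(axis=1, keepdims=True)

p_face = fit_cpt(face, label, 3, 3)
p_phys = fit_cpt(physio, label, 3, 3)
prior = np.bincount(label, minlength=3) / len(label)

def classify(face_code, physio_code):
    """Maximum-likelihood label given both modalities (independence assumed)."""
    loglik = (np.log(prior) + np.log(p_face[:, face_code])
              + np.log(p_phys[:, physio_code]))
    return emotions[int(np.argmax(loglik))]

print(classify(1, 1))  # -> "sad" on this toy data
```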

Speech emotion recognition based on genetic algorithm-decision tree fusion of deep and acoustic features

  • Sun, Linhui;Li, Qiu;Fu, Sheng;Li, Pingan
    • ETRI Journal / v.44 no.3 / pp.462-475 / 2022
  • Although researchers have proposed numerous techniques for speech emotion recognition, its performance remains unsatisfactory in many application scenarios. In this study, we propose a speech emotion recognition model based on a genetic algorithm (GA)-decision tree (DT) fusion of deep and acoustic features. To express speech emotional information more comprehensively, frame-level deep and acoustic features are first extracted from the speech signal. Next, five kinds of statistics of these features are calculated to obtain utterance-level features. The Fisher feature selection criterion is employed to select high-performing features and remove redundant information. In the feature fusion stage, the GA adaptively searches for the best feature fusion weight. Finally, using the fused features, the proposed speech emotion recognition model based on a DT support vector machine is realized. Experimental results on the Berlin speech emotion database and the Chinese emotional speech database indicate that the proposed model outperforms an average-weight fusion method.
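
Two of the named steps, Fisher-criterion feature selection and a GA search for a fusion weight, can be sketched compactly. The toy data, the single scalar weight, and the threshold "recognizer" standing in for the DT-SVM fitness are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def fisher_score(X, y):
    """Fisher criterion per feature: between-class over within-class variance."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = sum((y == c).sum() * (X[y == c].mean(axis=0) - mu) ** 2 for c in classes)
    den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
    return num / (den + 1e-12)

# Toy utterance-level features: deep (cols 0-3) and acoustic (cols 4-7).
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, 200)
X[:, [1, 5]] += y[:, None] * 2.0              # make two features informative
keep = np.argsort(fisher_score(X, y))[-4:]    # keep the 4 highest-scoring features
print("kept features:", sorted(int(i) for i in keep))

def fitness(w):
    """Stand-in for recognizer accuracy under fusion weight w (deep vs acoustic)."""
    fused = w * X[:, 1] + (1 - w) * X[:, 5]
    return ((fused > fused.mean()) == y).mean()

# Minimal real-coded GA over the scalar fusion weight in [0, 1].
pop = rng.random(20)
for _ in range(30):
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(fit)[-10:]]                  # selection
    children = (parents + rng.permutation(parents)) / 2   # crossover
    children += rng.normal(0, 0.05, children.size)        # mutation
    pop = np.clip(np.concatenate([parents, children]), 0, 1)
print("best fusion weight ~", pop[np.argmax([fitness(w) for w in pop])])
```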

A Quantification Algorithm for Representative Emotion Size Using Color Emotion Attributes of Images (이미지의 색채 감성속성을 이용한 대표감성크기 정량화 알고리즘)

  • Lee, Yean-Ran
    • Cartoon and Animation Studies / s.39 / pp.393-412 / 2015
  • The emotion a person perceives in an image varies with the environment and personal disposition. Accordingly, many studies have sought to handle image emotion recognition computationally through affective computing. However, existing emotional computing models lack clearly quantified, objective measurement conditions, so a study of quantifiable image emotion recognition with an objective evaluation scheme is required. In this paper, emotion is represented as a quantified numerical size computed from image emotion recognition. The principal color emotion attributes of the image are applied as configuration parameters, and the research focuses on calculating color emotion through digital computing. The approach reflects the color attributes of hue, brightness, and saturation, weighted according to their importance, in the emotion scores. Pleasure (X-axis) and tension (Y-axis) values are then calculated numerically from the emotion formula, and the image is positioned at the point where the pleasure and tension coordinates cross. Applying Russell's Core Affect model, the image's emotion coordinates are mapped to 16 main representative emotions. The computed emotion sizes are compared with human emotion recognition of the images, demonstrating that the recognized emotion changes with the magnitude of the emotion score. For this comparison, the top five recognized emotions of each image were matched to the 16 representative emotions, and the concentrated emotion sizes of the representatives were analyzed. Future studies are needed so that the emotion computing calculation more closely matches human emotion recognition.
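
The pipeline suggested by the abstract, weighted color attributes mapped to a pleasure/tension coordinate and then to the nearest of Russell's representative emotions, might look like the following sketch. The attribute weights, the warm/cool hue mapping, and the four anchor emotions (a subset of the paper's 16) are hypothetical placeholders, not the paper's actual formula.

```python
import numpy as np

# Hypothetical importance weights for the three colour attributes.
W_HUE, W_VAL, W_SAT = 0.5, 0.3, 0.2

# A few core-affect anchors in (pleasure, tension) coordinates
# (subset of the paper's 16 representative emotions; positions illustrative).
ANCHORS = {"excited": (0.6, 0.7), "calm": (0.6, -0.6),
           "tense": (-0.6, 0.7), "depressed": (-0.6, -0.6)}

def emotion_point(hue, value, saturation):
    """Map mean HSV attributes (all in [0, 1]) to an emotion coordinate.
    Warm hues and brightness push pleasure up; saturation pushes tension up
    (a hypothetical mapping for illustration)."""
    warmth = np.cos(2 * np.pi * hue)              # +1 red/warm, -1 cyan/cool
    pleasure = W_HUE * warmth + W_VAL * (2 * value - 1)
    tension = W_SAT * (2 * saturation - 1) + W_HUE * warmth * 0.2
    return pleasure, tension

def representative_emotion(hue, value, saturation):
    p, t = emotion_point(hue, value, saturation)
    name = min(ANCHORS, key=lambda k: (ANCHORS[k][0] - p)**2 + (ANCHORS[k][1] - t)**2)
    size = np.hypot(p, t)      # "emotion size": distance from the neutral origin
    return name, round(float(size), 3)

print(representative_emotion(hue=0.02, value=0.8, saturation=0.9))  # warm, bright, vivid
```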

Human Emotion Recognition based on Variance of Facial Features (얼굴 특징 변화에 따른 휴먼 감성 인식)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.16 no.4 / pp.79-85 / 2017
  • Understanding human emotion is highly important for interaction between humans and machine communication systems. The most expressive and valuable way to extract and recognize a human's emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas using a skin-color method and feature maps, (2) creation of Bezier curves on the eye map and mouth map, and (3) classification of the emotion characteristics using the Hausdorff distance. To estimate the performance of the implemented system, we evaluate the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying and distinguishing facial expression and emotion.
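
Step (3) of the scheme, matching a Bezier-curve contour against emotion templates by Hausdorff distance, can be illustrated in a few lines. The cubic mouth-contour templates below are invented for the example; the paper's curves come from the detected eye map and mouth map.

```python
import numpy as np

def bezier(ctrl, n=50):
    """Sample a cubic Bezier curve from 4 control points (e.g. a mouth contour)."""
    t = np.linspace(0, 1, n)[:, None]
    c = np.asarray(ctrl, float)
    return (1-t)**3*c[0] + 3*(1-t)**2*t*c[1] + 3*(1-t)*t**2*c[2] + t**3*c[3]

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two sampled point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Hypothetical mouth-contour templates (control points) per emotion.
templates = {
    "happy":   [(0, 0), (1, 1.0), (2, 1.0), (3, 0)],    # upward curve
    "sad":     [(0, 0), (1, -1.0), (2, -1.0), (3, 0)],  # downward curve
    "neutral": [(0, 0), (1, 0.1), (2, 0.1), (3, 0)],
}

observed = bezier([(0, 0), (1, 0.8), (2, 0.9), (3, 0)])  # detected mouth curve
scores = {e: hausdorff(observed, bezier(c)) for e, c in templates.items()}
print(min(scores, key=scores.get))  # -> "happy": the nearest template wins
```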

Development of Context Awareness and Service Reasoning Technique for Handicapped People (멀티 모달 감정인식 시스템 기반 상황인식 서비스 추론 기술 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.1 / pp.34-39 / 2009
  • As a subjective recognition effect, human emotion has an impulsive character and unconsciously expresses intentions and needs. It is rich in contextual information about the users of ubiquitous computing environments and intelligent robot systems. Indicators from which a user's emotion can be inferred include facial images, voice signals, and biological signal spectra. In this paper, we generate facial and voice emotion recognition results from facial images and voice to increase the convenience and efficiency of emotion recognition. We also extract the best-fitting features from image and sound to raise the emotion recognition rate, and implement a multi-modal emotion recognition system based on feature fusion. Finally, using the emotion recognition results, we propose a ubiquitous computing service reasoning method based on a Bayesian network and a ubiquitous context scenario for the ubiquitous computing environment.
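
A minimal sketch of the two stages described, fusing facial and voice recognition results and then reasoning about a service from the fused emotion, follows. The posteriors, reliability weights, and the service table standing in for the paper's Bayesian network are illustrative numbers only.

```python
import numpy as np

emotions = ["joy", "anger", "sadness"]

# Posteriors from two unimodal recognizers (illustrative numbers).
p_face  = np.array([0.55, 0.30, 0.15])
p_voice = np.array([0.40, 0.45, 0.15])

# Product-rule fusion with modality reliability weights (hypothetical values).
w_face, w_voice = 0.6, 0.4
fused = (p_face ** w_face) * (p_voice ** w_voice)
fused /= fused.sum()
emotion = emotions[int(np.argmax(fused))]

# Toy service-reasoning step: P(service | emotion) as a conditional table,
# a stand-in for the paper's Bayesian-network context model.
services = ["play music", "call caregiver", "dim lights"]
p_service = {"joy": [0.7, 0.1, 0.2], "anger": [0.2, 0.6, 0.2],
             "sadness": [0.3, 0.4, 0.3]}
print(emotion, "->", services[int(np.argmax(p_service[emotion]))])
```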

A Proposal of an Interactive Simulation Game using SER (Speech Emotion Recognition) Technology (SER 기술을 이용한 대화형 시뮬레이션 게임 제안)

  • Lee, Kang-Hee;Jeon, Seo-Hyun
    • Proceedings of the Korean Society of Computer Information Conference / 2019.07a / pp.445-446 / 2019
  • This paper aims to advance modern artificial intelligence, which has so far been limited to retrieving needed information, into a form that converses directly with the user by means of SER (Speech Emotion Recognition) technology. Extracting emotion from the user's spoken language helps AI systems and chatbots interpret conversations more effectively. Applying this to an interactive simulation game allows dialogue in colloquial speech rather than simple multiple-choice exchanges, giving the user a much higher sense of immersion.

Incomplete Cholesky Decomposition based Kernel Cross Modal Factor Analysis for Audiovisual Continuous Dimensional Emotion Recognition

  • Li, Xia;Lu, Guanming;Yan, Jingjie;Li, Haibo;Zhang, Zhengyan;Sun, Ning;Xie, Shipeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.810-831 / 2019
  • Recently, continuous dimensional emotion recognition from audiovisual cues has attracted increasing attention in both theory and practice. The large amount of data involved in recognition decreases the efficiency of most bimodal information fusion algorithms. In this paper, a novel algorithm, incomplete Cholesky decomposition based kernel cross-modal factor analysis (ICDKCFA), is presented and employed for continuous dimensional audiovisual emotion recognition. After the ICDKCFA feature transformation, two basic fusion strategies, feature-level fusion and decision-level fusion, are explored to combine the transformed visual and audio features for emotion recognition. Finally, extensive experiments are conducted to evaluate the ICDKCFA approach on the AVEC 2016 Multimodal Affect Recognition Sub-Challenge dataset. The experimental results show that ICDKCFA is faster than the original kernel cross-modal factor analysis with comparable performance, and achieves better performance than other common information fusion methods, such as canonical correlation analysis, kernel canonical correlation analysis, and cross-modal factor analysis based fusion.
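
The efficiency gain comes from the incomplete Cholesky step: a pivoted low-rank factorization of the kernel (Gram) matrix that evaluates only a few of its columns. Below is a generic sketch of that decomposition for an RBF kernel; the resulting factor G would replace the full kernel matrix inside the cross-modal factor analysis. The data sizes and tolerance are arbitrary.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """RBF (Gaussian) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def incomplete_cholesky(X, kernel, tol=1e-3, max_rank=None):
    """Pivoted incomplete Cholesky: returns G (n x r) with K ~= G @ G.T,
    evaluating only r columns of the full kernel matrix."""
    n = X.shape[0]
    r_max = max_rank or n
    # residual diagonal of K; for an RBF kernel it starts at all ones
    diag = np.array([kernel(X[i:i+1], X[i:i+1])[0, 0] for i in range(n)])
    G = np.zeros((n, r_max))
    for j in range(r_max):
        i = int(np.argmax(diag))        # pivot: largest residual diagonal
        if diag[i] < tol:               # remaining error is below tolerance
            return G[:, :j]
        col = kernel(X, X[i:i+1])[:, 0]
        G[:, j] = (col - G[:, :j] @ G[i, :j]) / np.sqrt(diag[i])
        diag -= G[:, j] ** 2            # update the residual diagonal
    return G

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))           # stand-in audio (or visual) features
G = incomplete_cholesky(X, rbf)
err = np.abs(rbf(X, X) - G @ G.T).max()
print(G.shape, float(err))              # low rank, small reconstruction error
```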

Children's Emotion Recognition, Emotion Expression, and Social Interactions According to Attachment Styles (애착 유형에 따른 아동의 정서인식, 정서표현 및 상호작용)

  • Choi, Eun-Sil;Bost, Kelly
    • Korean Journal of Child Studies / v.33 no.2 / pp.55-68 / 2012
  • The goals of this study were to examine how children's recognition of various emotions, emotion expression, and social interactions among peers differed according to their attachment styles. A total of 65 three- to five-year-old children completed both attachment story-stem doll plays and a standard emotion recognition task. Trained observers documented the valence of the children's emotion expression and their social interactions among peers in the classroom. Consistent with attachment theory, children categorized as secure in the doll play were more likely to express positive emotions than children categorized as avoidant. Children categorized as avoidant were more likely to express neutral emotions among their peers than children categorized as secure or anxious. The findings contribute to the general attachment literature by documenting the crucial role attachment security plays in having positive emotions in ordinary situations, and by demonstrating how different attachment styles are associated with qualitatively different patterns of emotion processing in children, especially in their expression of emotions.