Title/Summary/Keyword: information of emotion

Speech emotion recognition based on genetic algorithm-decision tree fusion of deep and acoustic features

  • Sun, Linhui;Li, Qiu;Fu, Sheng;Li, Pingan
    • ETRI Journal / v.44 no.3 / pp.462-475 / 2022
  • Although researchers have proposed numerous techniques for speech emotion recognition, its performance remains unsatisfactory in many application scenarios. In this study, we propose a speech emotion recognition model based on a genetic algorithm (GA)-decision tree (DT) fusion of deep and acoustic features. To express speech emotional information more comprehensively, frame-level deep and acoustic features are first extracted from the speech signal. Next, five statistics of these features are calculated to obtain utterance-level features. The Fisher feature selection criterion is employed to select high-performance features and remove redundant information. In the feature fusion stage, the GA is used to adaptively search for the best feature fusion weight. Finally, using the fused features, the proposed speech emotion recognition model based on a DT support vector machine is realized. Experimental results on the Berlin speech emotion database and the Chinese emotion speech database indicate that the proposed model outperforms an average weight fusion method.
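
A minimal sketch of the Fisher-criterion selection stage the abstract describes, assuming an utterance-level feature matrix `X` and integer emotion labels `y`; the variable names and the top-k cutoff are illustrative, not taken from the paper:

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher criterion per feature: scatter of the class means around
    the grand mean, over the pooled within-class variance."""
    classes = np.unique(y)
    grand_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - grand_mean) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)

def select_top_k(X, y, k):
    """Keep the k highest-scoring features, discarding redundant ones."""
    idx = np.argsort(fisher_scores(X, y))[::-1][:k]
    return X[:, idx], idx
```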

Research on Designing Korean Emotional Dictionary using Intelligent Natural Language Crawling System in SNS (SNS대상의 지능형 자연어 수집, 처리 시스템 구현을 통한 한국형 감성사전 구축에 관한 연구)

  • Lee, Jong-Hwa
    • The Journal of Information Systems / v.29 no.3 / pp.237-251 / 2020
  • Purpose This research constructed a hierarchical Hangul emotion index by organizing the emotions that SNS users express. As a preliminary study, Plutchik's (1980) English-based emotion taxonomy was reinterpreted in Korean, and hashtags with implicit meaning on SNS were studied. To build a multidimensional emotion dictionary and classify three-dimensional emotions, an emotion seed was selected for each of seven emotion sets, and an emotion word dictionary was constructed by collecting the SNS hashtags derived from each seed. We also explore the priority of each Hangul emotion index. Design/methodology/approach In transforming sentences into a word-vector matrix, weights were extracted using TF-IDF (term frequency-inverse document frequency), and the dimensionality of the matrix for each emotion set was reduced with the NMF (nonnegative matrix factorization) algorithm. The emotional dimensions were resolved using the characteristic values of the emotion words, and cosine distance was used to measure the similarity between emotion-word vectors within each emotion set. Findings Customer needs analysis is the ability to read changes in emotion, and Korean emotion-word research addresses that need. In addition, the ranking of emotion words within an emotion set provides a criterion for reading the depth of an emotion. By providing companies with effective information for emotional marketing, the sentiment index developed here can open up and add value to new business opportunities. Furthermore, if the emotion dictionary is eventually connected to the emotional profile of a product, it becomes possible to define an "emotional DNA", the set of emotions a product should evoke.
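
As a rough illustration of the pipeline this abstract describes (TF-IDF weighting, NMF dimension reduction, cosine similarity within an emotion set), a scikit-learn sketch; the tiny corpus and the component count are placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

# placeholder SNS posts; the paper crawls hashtags derived from emotion seeds
posts = ["happy sunny festival day", "angry stuck in traffic",
         "joyful crowd and music", "furious about the delay"]

X = TfidfVectorizer().fit_transform(posts)        # TF-IDF term weights
W = NMF(n_components=2, init="nndsvda",
        random_state=0).fit_transform(X)          # reduced emotion space
sims = cosine_similarity(W)                       # similarity between posts
```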

Emotion Perception and Multisensory Integration in Autism Spectrum Disorder: A Review of Behavioral and Cognitive Neuroscience Studies (자폐 스펙트럼 장애의 다중감각 통합과 정서인식: 행동연구와 인지 신경 과학 연구에 대한 개관)

  • Cho, Hee-Joung;Kim, So-Yeon
    • Science of Emotion and Sensibility / v.21 no.4 / pp.77-90 / 2018
  • Behavioral studies of emotion recognition in autism spectrum disorder (ASD) have yielded mixed results. Most studies examined emotion recognition in ASD using stimuli of a single sensory modality, making it difficult to characterize the difficulties of real-life emotion perception in ASD. Herein, we review recent behavioral and cognitive neuroscience studies on emotion recognition in ASD, covering both unisensory and multisensory emotional information, to elucidate the possible difficulties in emotion recognition in ASD. We find that people with ASD have problems integrating emotional information during emotion recognition tasks. Four points are discussed: (1) the limitations of previous studies, (2) deficits in emotion recognition in ASD, especially in recognizing multisensory information, (3) possible compensatory mechanisms for emotion recognition in ASD, and (4) the possible roles of attention and language functions in emotion recognition in ASD. The compensatory mechanisms proposed herein could contribute to therapeutic approaches for improving emotion recognition in ASD.

Development of real-time reactive emotion image contents player system to induce the user's emotion (사용자의 감성을 유도하는 실시간 반응형 감성 이미지 콘텐츠 플레이어 시스템 개발)

  • Lee, Haena;Kim, Dong Keun
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.1 / pp.155-161 / 2014
  • This study presents a real-time emotion image contents player that efficiently induces the user's emotion. The player was designed to induce emotion by changing the color, brightness, and saturation of image contents to correspond to the user's emotional state. The emotion recognition module used physiological signals of pulse, skin temperature, and skin resistance, which reflect autonomic nervous system activity. The emotional image contents were drawn from the nine emotion areas classified in the International Affective Picture System (IAPS). Experimental results showed that, with the emotion image contents player, the user's emotion matched the image's emotion 10% more accurately. The player is therefore expected to strengthen the match between the user's emotion and the contents' emotion through real-time emotion reflection.
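
The color/brightness/saturation adjustment could look like the following Pillow sketch; the mapping from the recognized emotional state to enhancement factors is an assumption for illustration, not the paper's actual mapping:

```python
from PIL import Image, ImageEnhance

def adapt_image(img: Image.Image, arousal: float, valence: float) -> Image.Image:
    """Brighten with arousal and saturate with valence (both in [-1, 1]);
    the 0.3 gains are arbitrary illustration values."""
    img = ImageEnhance.Brightness(img).enhance(1.0 + 0.3 * arousal)
    return ImageEnhance.Color(img).enhance(1.0 + 0.3 * valence)
```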

Validity analysis of the social emotion model based on relation types in SNS (SNS 사용자의 관계유형에 따른 사회감성 모델의 타당화 분석)

  • Cha, Ye-Sool;Kim, Ji-Hye;Kim, Jong-Hwa;Kim, Song-Yi;Kim, Dong-Keun;Whang, Min-Cheol
    • Science of Emotion and Sensibility / v.15 no.2 / pp.283-296 / 2012
  • The goal of this study is to determine social emotion models for emotion-sharing and information-sharing relationships, based on users' relations in social networking services. Twenty-six social emotions were extracted by verifying the suitability of 92 emotion words collected from a literature survey. A survey then rated the similarity of the 26 emotion words to the two social relation types on a 7-point Likert scale. Principal component analysis of the survey data identified 12 representative social emotions for the emotion-sharing relation and 13 for the information-sharing relation. Multidimensional scaling produced two-dimensional social emotion models of the emotion-sharing and information-sharing relations in the online communication environment. Statistically insignificant factors in the proposed models were removed through structural equation modeling. The validity analysis demonstrated the fitness of the social emotion model for emotion-sharing relationships (CFI: .887, TLI: .885, RMSEA: .094) and for information-sharing relationships (CFI: .917, TLI: .900, RMSEA: .050). In conclusion, this study presents two social emotion models based on two relation types. The findings provide both a reference for evaluating social emotions in designing social networking services and a direction for improving them.
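
A sketch of the multidimensional scaling step on a precomputed dissimilarity matrix, as used to lay out the two-dimensional models; the random ratings below merely stand in for the paper's 7-point similarity survey:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)                 # stand-in for survey data
ratings = rng.uniform(1, 7, size=(26, 26))     # 26 social emotion words
ratings = (ratings + ratings.T) / 2            # symmetrize
np.fill_diagonal(ratings, 7)                   # a word is most similar to itself
dissim = 7 - ratings                           # high similarity -> small distance

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)             # 2-D emotion model coordinates
```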


The Emotion Recognition System through the Extraction of Emotional Components from Speech (음성의 감성요소 추출을 통한 감성 인식 시스템)

  • Park, Chang-Hyun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.10 no.9 / pp.763-770 / 2004
  • The important issues in emotion recognition from speech are feature extraction and pattern classification. Features should carry the essential information for classifying emotions, and feature selection is needed to decompose the components of speech and analyze the relation between features and emotions. In particular, the pitch component of speech carries much emotional information. Accordingly, this paper examines the relation of emotion to features such as loudness and pitch, and classifies emotions using statistics of the collected data. The most important emotional component of sound is tone, and the inferential ability of the brain also takes part in emotion recognition. This paper empirically identifies the emotional components of speech, experiments on emotion recognition, and proposes a recognition method that uses these emotional components and the transition probability.
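
Extracting the pitch and loudness features the abstract points to might look like this librosa sketch; the file name and pitch search range are placeholders:

```python
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=None)          # placeholder file
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"), sr=sr)
pitch = f0[voiced]                                      # voiced frames only
rms = librosa.feature.rms(y=y)[0]                       # loudness proxy

stats = {"pitch_mean": np.nanmean(pitch), "pitch_std": np.nanstd(pitch),
         "rms_mean": float(rms.mean()), "rms_max": float(rms.max())}
```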

The Comparison of Speech Feature Parameters for Emotion Recognition (감정 인식을 위한 음성의 특징 파라메터 비교)

  • Kim, Won-Gu
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2004.04a / pp.470-473 / 2004
  • In this paper, speech feature parameters are compared for emotion recognition using the speech signal. For this purpose, a corpus of emotional speech was recorded and classified according to emotion by subjective evaluation, and statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy were derived. MFCC parameters and their derivatives, with or without cepstral mean subtraction, were also used to evaluate the performance of conventional pattern matching algorithms. Pitch and energy parameters served as prosodic information and MFCC parameters as phonetic information. In the experiments, a vector quantization based emotion recognition system was used for speaker- and context-independent emotion recognition. Experimental results showed that the vector quantization based recognizer using MFCC parameters performed better than the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.
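
A minimal sketch of vector quantization based classification as the abstract describes: one codebook per emotion trained on MFCC frames, with the lowest mean distortion deciding the class (the codebook size and data layout are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def train_codebooks(mfcc_by_emotion, size=16):
    """One k-means codebook per emotion; values are (frames x dims) arrays."""
    return {emo: KMeans(n_clusters=size, n_init=10, random_state=0).fit(frames)
            for emo, frames in mfcc_by_emotion.items()}

def classify(frames, codebooks):
    """Return the emotion whose codebook quantizes the frames with the
    lowest mean squared distortion."""
    def distortion(cb):
        d = ((frames[:, None, :] - cb.cluster_centers_[None]) ** 2).sum(-1)
        return d.min(axis=1).mean()
    return min(codebooks, key=lambda emo: distortion(codebooks[emo]))
```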


Development and Application of Artificial Emotion for Improving Satisfaction in Scrolling-Shooter Games (비행슈팅게임의 만족도 향상을 위한 인공감정의 개발과 적용)

  • Ham, Jun Seok;Park, Jun Hyoung;Ko, Il Ju
    • Journal of Korea Society of Digital Industry and Information Management / v.4 no.1 / pp.55-63 / 2008
  • Previous scrolling-shooter games cannot represent the various emotions of characters according to personality and situation, so they represent only simple, momentary emotions. The purpose of this paper is to develop and apply an artificial emotion that gives a character emotional responses and actions according to its personality and situation. We propose an artificial emotion model that receives emotional stimuli, analyzes which emotion should be generated, controls the resulting emotions according to personality and time, and outputs a current emotion. To visualize and test the artificial emotion, we made a scrolling-shooter game with two characters, each with a different personality, applied the artificial emotion to the game, and modified the characters to change status according to their emotions. To estimate the satisfaction gained from artificial emotion in a scrolling-shooter game, we administered a questionnaire to two groups: one of people who like scrolling-shooter games and one of people who do not.
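
A toy version of such an artificial emotion, with personality-scaled stimuli and time decay as the abstract outlines; the emotion set, gains, and decay rate are invented for illustration:

```python
from dataclasses import dataclass, field

EMOTIONS = ["joy", "anger", "fear", "sadness"]

@dataclass
class ArtificialEmotion:
    personality: dict            # per-emotion sensitivity, e.g. {"anger": 1.5}
    decay: float = 0.95          # all intensities relax toward neutral
    state: dict = field(default_factory=lambda: {e: 0.0 for e in EMOTIONS})

    def stimulate(self, emotion: str, strength: float):
        """Scale an incoming stimulus by the character's personality."""
        gain = self.personality.get(emotion, 1.0)
        self.state[emotion] = min(1.0, self.state[emotion] + gain * strength)

    def tick(self):
        """Advance time one step: emotions fade unless re-stimulated."""
        for e in self.state:
            self.state[e] *= self.decay

    def current(self) -> str:
        return max(self.state, key=self.state.get)
```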

Emotion prediction neural network to understand how emotion is predicted by using heart rate variability measurements

  • Park, Sung Soo;Lee, Kun Chang
    • Journal of the Korea Society of Computer and Information / v.22 no.7 / pp.75-82 / 2017
  • Correct prediction of emotion is essential for developing advanced health devices, and neural networks have been used successfully for this purpose. However, interpreting how a certain emotion is predicted by an emotion prediction neural network is very difficult. If a mechanism for interpreting how the network predicts emotion can be developed, it can be effectively embedded into highly advanced health-care devices. In this sense, this study proposes a novel approach to interpreting how an emotion prediction neural network yields emotion. The proposed mechanism is based on HRV (heart rate variability) measurements, physiological data calculated from ECG (electrocardiogram) recordings. An experimental dataset from 23 qualified participants was used to obtain seven HRV measurements: Mean RR, SDNN, RMSSD, VLF, LF, HF, and LF/HF. An emotion prediction neural network was then modeled on the HRV dataset. By applying the proposed mechanism, a set of explicit, clearly interpretable mathematical functions could be derived. The proposed mechanism was compared with a conventional neural network to show its validity.
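
The time-domain HRV measurements named above have standard definitions; a sketch over RR intervals in milliseconds (the frequency-domain measures VLF, LF, HF, and LF/HF would additionally require spectral analysis of the RR series):

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """Mean RR, SDNN, and RMSSD from successive RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {"mean_rr": rr.mean(),                  # average interval
            "sdnn": rr.std(ddof=1),                # overall variability
            "rmssd": np.sqrt((diff ** 2).mean())}  # beat-to-beat variability

print(time_domain_hrv([812, 790, 835, 801, 820]))  # made-up intervals
```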

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.2 / pp.105-110 / 2008
  • Human beings recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need technologies that recognize emotion as humans do from such combined information. In this paper, we recognize five emotions (neutral, happiness, anger, surprise, sadness) from the speech signal and the facial image, and we propose a multimodal method that fuses the individual results into a single emotion recognition result. Emotion recognition from the speech signal and the facial image each uses the principal component analysis (PCA) method, and the multimodal stage fuses the results by applying a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion offers a better emotion recognition rate than either the facial image or the speech signal alone.
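
The S-type membership function mentioned here is commonly Zadeh's S-function; a sketch of decision fusion built on it, where averaging the two modalities' memberships is an assumed fusion rule, not necessarily the paper's:

```python
import numpy as np

def s_membership(x, a=0.2, c=0.8):
    """Zadeh's S-function: rises smoothly from 0 at a to 1 at c."""
    b = (a + c) / 2.0
    x = np.asarray(x, dtype=float)
    return np.where(x <= a, 0.0,
           np.where(x <= b, 2 * ((x - a) / (c - a)) ** 2,
           np.where(x <= c, 1 - 2 * ((x - c) / (c - a)) ** 2, 1.0)))

def fuse(speech_scores, face_scores):
    """Map each modality's per-class scores to memberships, average them,
    and return the index of the winning emotion class."""
    mu = (s_membership(speech_scores) + s_membership(face_scores)) / 2
    return int(np.argmax(mu))

# five classes: neutral, happiness, anger, surprise, sadness (made-up scores)
print(fuse([0.7, 0.2, 0.1, 0.4, 0.3], [0.5, 0.6, 0.2, 0.3, 0.1]))
```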