• Title/Summary/Keyword: Emotion recognition

651 search results

Emotion Recognition Using The Color Image Scale in Clothing Images (의류 영상에서 컬러 영상 척도를 이용한 감성 인식)

  • Lee, Seul-Gi;Woo, Hyo-Jeong;Ryu, Sung-Pil;Kim, Dong-Woo;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.1-6 / 2014
  • Emotion recognition means having machines automatically recognize human emotions. Because human emotion is highly subjective, it cannot be measured objectively; the goal of emotion recognition is therefore to obtain a measure that as many people as possible agree with. Emotion recognition in an image is implemented by matching human emotions to various features of the image. In this paper, we propose an emotion recognition system that uses the color features of clothing images, based on Kobayashi's color image scale. The proposed system stores the colors of the image scale in a database, and the major colors extracted from an input clothing image are compared against that database. The system returns at most three emotions. To evaluate its performance, 70 observers were tested, and the results show that the emotions recognized by the proposed system closely match the observers' emotions.
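
The matching step can be pictured with a short sketch, assuming dominant colors are found by clustering pixels and compared to the database by nearest RGB distance; the emotion words and RGB values below are illustrative placeholders, not Kobayashi's actual scale:

```python
# Sketch only: dominant-color extraction + nearest-color emotion lookup.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical color-emotion database (placeholder RGB -> emotion word).
COLOR_EMOTION_DB = {
    (230, 200, 210): "romantic",
    (40, 60, 120):   "formal",
    (250, 220, 80):  "cheerful",
    (60, 130, 70):   "natural",
}

def dominant_colors(pixels: np.ndarray, k: int = 3) -> np.ndarray:
    """Cluster pixels (N x 3 RGB) and return the k cluster-center colors."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    return km.cluster_centers_

def recognize_emotions(pixels: np.ndarray, max_emotions: int = 3) -> list:
    """Return up to max_emotions words whose DB colors are nearest
    (Euclidean distance in RGB) to the image's dominant colors."""
    centers = dominant_colors(pixels, k=max_emotions)
    db_colors = np.array(list(COLOR_EMOTION_DB.keys()), dtype=float)
    db_words = list(COLOR_EMOTION_DB.values())
    emotions = []
    for c in centers:
        idx = int(np.argmin(np.linalg.norm(db_colors - c, axis=1)))
        if db_words[idx] not in emotions:
            emotions.append(db_words[idx])
    return emotions[:max_emotions]

# Demo on synthetic "clothing" pixels near one database color.
rng = np.random.default_rng(0)
img = rng.normal(loc=(230, 200, 210), scale=10, size=(500, 3)).clip(0, 255)
print(recognize_emotions(img))  # e.g. ['romantic']
```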

Research of Real-Time Emotion Recognition Interface Using Multiple Physiological Signals of EEG and ECG (뇌파 및 심전도 복합 생체신호를 이용한 실시간 감정인식 인터페이스 연구)

  • Shin, Dong-Min;Shin, Dong-Il;Shin, Dong-Kyoo
    • Journal of Korea Game Society / v.15 no.2 / pp.105-114 / 2015
  • We propose a real-time user interface that recognizes emotions from physiological signals. To address the low accuracy of emotion recognition based on the EEG (electroencephalogram) alone, we developed a physiological-signal-based emotion recognition system that combines the relative power spectrum values of the theta/alpha/beta/gamma EEG bands with the autonomic nervous system ratio derived from the ECG (electrocardiogram). We propose both a data map and a weight modification algorithm to recognize six emotions: happiness, fear, sadness, joy, anger, and hatred. The data map stores user-specific probability values, and the algorithm updates the weight of each EEG channel to improve recognition accuracy. Compared with single-modality data consisting of EEG alone, the combined EEG/ECG physiological data raised accuracy by 23.77%. The proposed high-accuracy interface system can serve as a useful interface for controlling game spaces and smart spaces.
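
As a sketch of the EEG side of the feature extraction, the following computes relative theta/alpha/beta/gamma band power with Welch's method; the band edges and window length are conventional assumptions, not values taken from the paper:

```python
# Sketch only: relative EEG band power as an emotion-recognition feature.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def relative_band_power(eeg: np.ndarray, fs: float) -> dict:
    """Power in each band divided by the total power over all bands."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 512))
    band_power = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        band_power[name] = trapezoid(psd[mask], freqs[mask])
    total = sum(band_power.values())
    return {name: p / total for name, p in band_power.items()}

# Demo: 4 s of synthetic 256 Hz EEG dominated by a 10 Hz (alpha) rhythm.
fs = 256.0
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
print(relative_band_power(eeg, fs))  # the alpha share should dominate
```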

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi;Joo, Jong-Tae;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.20-26 / 2008
  • As intelligent robots and computers become more common, interaction between them and humans grows more important, and emotion recognition and expression are indispensable to that interaction. In this paper, we first extract emotional features from speech signals and facial images. We then apply Bayesian Learning (BL) and Principal Component Analysis (PCA) and classify five emotion patterns (neutral, happy, angry, surprised, and sad). To raise the emotion recognition rate, we experiment with both decision fusion and feature fusion. The decision fusion method applies a fuzzy membership function to the output values of each recognition system, while the feature fusion method selects superior features with Sequential Forward Selection (SFS) and feeds them to a neural network based on a Multi-Layer Perceptron (MLP) to classify the five emotion patterns. Finally, the recognized result is applied to a 2D facial model to express the emotion.
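
A minimal sketch of the decision-fusion idea, with a plain softmax standing in for the paper's fuzzy membership function and all scores invented for illustration:

```python
# Sketch only: decision-level fusion of speech and face classifier outputs.
import numpy as np

EMOTIONS = ["neutral", "happy", "anger", "surprise", "sad"]

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_decisions(speech_scores, face_scores, w_speech=0.5, w_face=0.5):
    """Normalize each modality's raw scores, then take a weighted sum."""
    p = w_speech * softmax(np.asarray(speech_scores, float)) \
      + w_face * softmax(np.asarray(face_scores, float))
    return EMOTIONS[int(np.argmax(p))], p

label, p = fuse_decisions([0.1, 2.0, 0.3, 0.2, 0.1],   # speech favors "happy"
                          [0.2, 1.2, 0.1, 1.0, 0.3])   # face is less certain
print(label, np.round(p, 3))
```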

Emotion Recognition Method using Physiological Signals and Gestures (생체 신호와 몸짓을 이용한 감정인식 방법)

  • Kim, Ho-Duck;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.3 / pp.322-327 / 2007
  • Researchers in psychology have used electroencephalography (EEG) to record human brain activity for many years. As technology develops, the neural basis of the functional areas involved in emotion processing is gradually being revealed, so we use EEG to measure the fundamental areas of the human brain that control emotion. Hand gestures such as shaking and head gestures such as nodding are often used as body language in human communication, and their recognition matters because gesture is a useful communication medium between humans and computers; gesture recognition is typically studied with computer vision methods. Most existing work recognizes emotion from either physiological signals or gestures alone. In this paper, we use physiological signals and gestures together for human emotion recognition, taking driver emotion as the specific target. The experimental results show that using both physiological signals and gestures yields higher recognition rates than using either alone. For both modalities, features are selected with Interactive Feature Selection (IFS), a method based on reinforcement learning.
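
The abstract does not detail the IFS algorithm, so the following is only a loosely inspired sketch of reward-driven (greedy) feature selection with cross-validation accuracy as the reward; it is an assumption, not the paper's method:

```python
# Sketch only: greedy feature selection where the "reward" is CV accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=12,
                           n_informative=4, random_state=0)

selected = []                      # indices of chosen features
remaining = list(range(X.shape[1]))
best_reward = 0.0
while remaining:
    # Try adding each remaining feature; keep the one with the best reward.
    rewards = {f: cross_val_score(LogisticRegression(max_iter=1000),
                                  X[:, selected + [f]], y, cv=3).mean()
               for f in remaining}
    f_best, r_best = max(rewards.items(), key=lambda kv: kv[1])
    if r_best <= best_reward:      # stop when no feature improves the reward
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_reward = r_best

print("selected features:", selected, "cv accuracy:", round(best_reward, 3))
```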

Maximum Entropy-based Emotion Recognition Model using Individual Average Difference (개인별 평균차를 이용한 최대 엔트로피 기반 감성 인식 모델)

  • Park, So-Young;Kim, Dong-Keun;Whang, Min-Cheol
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.7 / pp.1557-1564 / 2010
  • In this paper, we propose a maximum entropy-based emotion recognition model that uses individual average differences of emotional signals, because emotional signal patterns vary from person to person. To recognize a user's emotion accurately, the proposed model uses the difference between the average of the input emotional signals and the average of each emotional state's signals (such as positive and negative emotional signals), rather than the raw input signal alone. So that the emotion recognition model can be built without specialist knowledge of emotion recognition, it uses a maximum entropy model, one of the best-performing and best-known machine learning techniques. Because it is difficult to obtain enough numerical signal training data for machine learning, the proposed model replaces every average difference value with one of two simple symbols, + (positive) or - (negative), and computes the average of the emotional signals per second rather than over the total emotional response time (10 seconds).
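
A small sketch of the symbolic feature idea, using logistic regression as a stand-in maximum entropy classifier (the two coincide in this setting); the stored state averages and all signal values are synthetic placeholders:

```python
# Sketch only: sign-of-average-difference features fed to a maxent model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
POS_AVG, NEG_AVG = 0.6, 0.4          # hypothetical stored state averages

def sign_features(per_second_avgs: np.ndarray) -> np.ndarray:
    """For each per-second average, emit +1/-1 vs. each state's average."""
    feats = []
    for v in per_second_avgs:
        feats += [np.sign(v - POS_AVG), np.sign(v - NEG_AVG)]
    return np.array(feats)

# Synthetic training set: 10 per-second averages per sample.
def make_sample(label):
    base = POS_AVG if label == 1 else NEG_AVG
    return sign_features(base + 0.05 * rng.normal(size=10))

X = np.array([make_sample(l) for l in ([1] * 50 + [0] * 50)])
y = np.array([1] * 50 + [0] * 50)

clf = LogisticRegression().fit(X, y)
print("train accuracy:", clf.score(X, y))
```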

Development of Emotion Recognition Model based on Multi Layer Perceptron (MLP에 기반한 감정인식 모델 개발)

  • Lee Dong-Hoon;Sim Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.3 / pp.372-377 / 2006
  • In this paper, we propose an emotion recognition model that recognizes a user's emotional state from brain waves (EEG). Two problems are important here: acquiring quantitative EEG data that carries physiological or emotional information, and applying pattern recognition techniques to infer the user's current emotional state from newly acquired brain waves. We use the Multi-Layer Perceptron (MLP) as the pattern recognition technique for recognizing the user's emotional state from EEG. For the experiments, we measured the emotional brain waves of several subjects in a controlled environment and built an emotion database from the measured EEG, labeled with meanings such as concentration or stability. After training on this database, the proposed model recognizes a new user's emotional state from that user's brain waves. Finally, we evaluate the model's performance by measuring how the recognition rate changes with the number of subjects and the number of hidden nodes.
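
A minimal sketch of an MLP emotion recognizer and of sweeping the hidden-node count, which is the evaluation axis reported above; the data is synthetic, not the paper's EEG database:

```python
# Sketch only: MLP classifier with a sweep over the number of hidden nodes.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for EEG-derived features labeled e.g. "concentration"/"stability".
X, y = make_classification(n_samples=400, n_features=16, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for hidden in (4, 8, 16, 32):
    mlp = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
    print(f"hidden nodes={hidden:2d}  test accuracy={mlp.score(X_te, y_te):.3f}")
```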

Speaker-Dependent Emotion Recognition For Audio Document Indexing

  • Hung LE Xuan;QUENOT Georges;CASTELLI Eric
    • Proceedings of the IEEK Conference / summer / pp.92-96 / 2004
  • Emotion is currently of great interest in speech processing as well as in the human-machine interaction domain. In recent years, more and more research on emotion synthesis and emotion recognition has been carried out for different purposes, each approach using its own methods and its own parameters measured on the speech signal. In this paper, we propose using a short-time parameter, the Mel-Frequency Cepstrum Coefficients (MFCCs), together with a simple but efficient classification method, Vector Quantization (VQ), for speaker-dependent emotion recognition. Many other features (energy, pitch, zero crossing rate, phonetic rate, LPC, and their derivatives) are also tested and combined with the MFCC coefficients to find the best combination. Other models, GMM and discrete and continuous HMM (Hidden Markov Models), are studied as well, in the hope that continuous distributions and the temporal behaviour of this feature set will improve the quality of emotion recognition. The maximum accuracy in recognizing five different emotions exceeds 88% using only MFCC coefficients with the VQ model. This simple but efficient approach even outperforms human evaluation on the same database, where listeners judged sentences without replaying or comparing them [8], and the result compares favorably with the other approaches.
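
A sketch of the MFCC + VQ scheme, assuming librosa for MFCC extraction and KMeans centroids as the codebooks; the file paths, emotion list, and codebook size are hypothetical:

```python
# Sketch only: per-emotion VQ codebooks over MFCC frames; classification
# picks the emotion whose codebook quantizes the utterance with the
# lowest average distortion.
import numpy as np
import librosa
from sklearn.cluster import KMeans

def mfcc_frames(path: str, n_mfcc: int = 13) -> np.ndarray:
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, 13)

def train_codebooks(files_by_emotion: dict, codebook_size: int = 32) -> dict:
    """Fit one VQ codebook (KMeans centroids) per emotion."""
    books = {}
    for emotion, files in files_by_emotion.items():
        frames = np.vstack([mfcc_frames(f) for f in files])
        books[emotion] = KMeans(n_clusters=codebook_size, n_init=4,
                                random_state=0).fit(frames)
    return books

def classify(path: str, books: dict) -> str:
    frames = mfcc_frames(path)
    def distortion(km):  # mean distance from each frame to its nearest centroid
        d = np.linalg.norm(frames[:, None, :] - km.cluster_centers_[None], axis=2)
        return d.min(axis=1).mean()
    return min(books, key=lambda e: distortion(books[e]))

# Usage (hypothetical file layout):
# books = train_codebooks({"anger": ["anger_01.wav"], "joy": ["joy_01.wav"]})
# print(classify("test.wav", books))
```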


Deep Learning-Based Speech Emotion Recognition Technology Using Voice Feature Filters (음성 특징 필터를 이용한 딥러닝 기반 음성 감정 인식 기술)

  • Shin Hyun Sam;Jun-Ki Hong
    • The Journal of Bigdata / v.8 no.2 / pp.223-231 / 2023
  • In this study, we propose a model that extracts and analyzes features from speech signals using deep learning, generates filters, and uses these filters to recognize emotions in speech signals; we then evaluate its emotion recognition accuracy. In our simulations, the average emotion recognition accuracy of the DNN and the RNN was very similar, at 84.59% and 84.52% respectively, but the DNN's simulation time was approximately 44.5% shorter than the RNN's, enabling quicker emotion prediction.
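
A sketch of a small DNN over precomputed voice features; the layer sizes, feature dimension, and emotion count are chosen for illustration, and the paper's filter-generation step is not reproduced here:

```python
# Sketch only: DNN classifier over already-extracted voice feature vectors.
import numpy as np
import tensorflow as tf

NUM_FEATURES, NUM_EMOTIONS = 40, 6   # illustrative dimensions

model = tf.keras.Sequential([
    tf.keras.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data; real inputs would be filtered voice features.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, NUM_FEATURES)).astype("float32")
y = rng.integers(0, NUM_EMOTIONS, size=256)
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```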

Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.1076-1094 / 2022
  • Emotion recognition technology is an essential part of human personality analysis. Existing methods for defining human personality characteristics rely on surveys, yet communication often cannot take place without considering emotions, so emotion recognition technology is an essential element of communication and has also been adopted in many other fields. A person's emotions are revealed in various ways, typically through facial expressions, speech, and biometric responses, so emotions can be recognized from images, voice signals, or physiological signals. Physiological signals are measured with biological sensors and analyzed to identify emotions; this study employed two sensor types. We subdivided the existing binary (High/Low) arousal-valence scheme into four levels per axis to classify emotions in more detail. Signal characteristics were then extracted with a 1-D Convolutional Neural Network (CNN) and classified into sixteen emotions. Although CNNs are commonly used to learn 2-D images, this paper uses 1-D sensor data as the input. Finally, the proposed emotion recognition system was evaluated with measurements from actual sensors.
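
A sketch of a 1-D CNN over raw sensor windows ending in sixteen classes, as in the 4x4 arousal-valence grid described above; the window length, channel count, and layer choices are assumptions:

```python
# Sketch only: 1-D CNN over physiological-sensor windows, 16 emotion classes.
import numpy as np
import tensorflow as tf

WINDOW, CHANNELS, CLASSES = 512, 2, 16   # 2 sensor channels, 16 emotions

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic sensor windows standing in for real recordings.
rng = np.random.default_rng(0)
X = rng.normal(size=(128, WINDOW, CHANNELS)).astype("float32")
y = rng.integers(0, CLASSES, size=128)
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
```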

Speech Emotion Recognition Based on Deep Networks: A Review (딥네트워크 기반 음성 감정인식 기술 동향)

  • Mustaqeem, Mustaqeem;Kwon, Soonil
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.331-334 / 2021
  • In recent years, a significant amount of development and research has been devoted to the use of Deep Learning (DL) for speech emotion recognition (SER) based on Convolutional Neural Networks (CNNs). These techniques usually focus on applying CNNs to emotion recognition tasks, and numerous DL-based mechanisms have been considered that are important for SER-based human-computer interaction (HCI) applications. Compared with other methods, DL-based methods produce quite promising results in many fields, including automatic speech recognition, and therefore attract many studies and investigations. This article reviews and evaluates the improvements that have occurred in the SER domain, while also discussing the existing DL- and CNN-based SER studies.
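
As a sketch of the kind of CNN-based SER pipeline such reviews survey, a small 2-D CNN over log-mel spectrogram patches; all shapes and layer sizes are illustrative and drawn from no particular surveyed paper:

```python
# Sketch only: 2-D CNN over log-mel spectrogram inputs for SER.
import tensorflow as tf

MELS, FRAMES, CLASSES = 64, 128, 7   # illustrative spectrogram/emotion sizes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MELS, FRAMES, 1)),
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```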