• Title/Summary/Keyword: Facial Emotions

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
• The Journal of Korea Robotics Society / v.8 no.4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in real applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm. These inaccurate feature positions degrade the performance of emotion recognition. In this paper, we propose a real-time facial feature extraction and tracking framework that combines ASM with LK optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failures caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
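
The pipeline described above alternates between ASM fitting and frame-to-frame tracking. The following is a minimal sketch of that tracking loop in Python with OpenCV, assuming an ASM fitter is available; `fit_asm`, the re-initialization interval, and the lost-point threshold are hypothetical placeholders, not the authors' implementation.

```python
import cv2

LK_PARAMS = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

def fit_asm(gray_frame):
    """Hypothetical placeholder: return facial feature points as a
    float32 array of shape (N, 1, 2), as expected by calcOpticalFlowPyrLK."""
    raise NotImplementedError

def track_features(video_path, reinit_every=30, max_lost_ratio=0.3):
    cap = cv2.VideoCapture(video_path)
    prev_gray, points = None, None
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if points is None or frame_idx % reinit_every == 0:
            points = fit_asm(gray)                     # periodic ASM (re)fit
        else:
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
                prev_gray, gray, points, None, **LK_PARAMS)
            lost_ratio = 1.0 - float(status.mean())
            if lost_ratio > max_lost_ratio:            # e.g. partial occlusion
                points = fit_asm(gray)                 # fall back to ASM fitting
            else:
                points = new_pts
        prev_gray = gray
        frame_idx += 1
        yield points
    cap.release()
```

The idea is that pyramidal Lucas-Kanade tracking carries the ASM points cheaply between frames, and the tracker falls back to a full ASM fit whenever too many points are lost, for example under partial occlusion.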

Improved Two-Phase Framework for Facial Emotion Recognition

  • Yoon, Hyunjin;Park, Sangwook;Lee, Yongkwi;Han, Mikyong;Jang, Jong-Hyun
• ETRI Journal / v.37 no.6 / pp.1199-1210 / 2015
  • Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received huge attention in the last decade due to its wide variety of applications. Current computer-based automated two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in the multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework, where the presence of AUs is detected by group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our emotion recognition framework consists of three major components - multiple AU detection, AU detection fusion, and AU-to-emotion mapping. The experimental results on two real-world face databases demonstrate an improved performance over the previous two-phase method using a single AU detector in terms of both AU detection accuracy and correct emotion recognition rate.
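
As a rough illustration of the two ideas named in the abstract, the sketch below fuses several AU detectors by majority vote and then maps the fused AU vector to an emotion with a simple overlap score. The detector outputs and the AU-to-emotion table are illustrative assumptions (commonly cited FACS combinations), not the framework's actual detectors or mapping.

```python
# Minimal sketch of group-decision AU fusion followed by a rule-based
# AU-to-emotion mapping; everything below is illustrative, not the paper's method.
import numpy as np

def fuse_au_decisions(detector_outputs):
    """Majority vote over binary AU decisions.

    detector_outputs: array of shape (n_detectors, n_aus) with 0/1 entries.
    Returns a fused 0/1 vector of length n_aus.
    """
    votes = np.asarray(detector_outputs)
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Commonly cited FACS combinations, used here only as an example table.
EMOTION_RULES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},
    "surprise":  {1, 2, 5, 26},
    "anger":     {4, 5, 7, 23},
}

def infer_emotion(fused_aus, au_ids):
    """Pick the emotion whose rule set overlaps most with the detected AUs."""
    detected = {au for au, present in zip(au_ids, fused_aus) if present}
    scores = {emo: len(detected & aus) / len(aus) for emo, aus in EMOTION_RULES.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    au_ids = [1, 2, 4, 5, 6, 7, 12, 15, 23, 26]
    outputs = [[0, 0, 0, 0, 1, 0, 1, 0, 0, 0],   # detector A
               [0, 0, 0, 0, 1, 0, 1, 0, 0, 1],   # detector B
               [0, 0, 1, 0, 1, 0, 1, 0, 0, 0]]   # detector C
    fused = fuse_au_decisions(outputs)
    print(infer_emotion(fused, au_ids))           # -> "happiness"
```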

The effects of the usability of products on user's emotions - with emphasis on suggestion of methods for measuring user's emotions expressed while using a product -

  • Jeong, Sang-Hoon
• Archives of Design Research / v.20 no.2 s.70 / pp.5-16 / 2007
  • The main objective of our research is to analyze users' emotional changes while using a product, in order to reveal the influence of usability on human emotions. In this study we extracted, through three methods, emotional words that can arise during user interaction with a product and that reveal emotional changes. In the end we obtained 88 emotional words for measuring the emotions users express while using products, and we categorized these 88 words into 6 groups using factor analysis. The 6 categories extracted in this study were found to represent users' representative emotions expressed while using products. We expect the emotional words and representative emotions extracted here to serve as the subjective evaluation data required to measure users' emotional changes while using a product. We also propose effective methods for measuring the emotions users express while using a product in an environment that is natural and accessible for the field of design, using the emotion mouse and the Eyegaze. An examinee performs several tasks with the emotion mouse through a mobile phone simulator on a computer monitor connected to the Eyegaze. During testing, the emotion mouse senses the user's EDA and PPG and transmits the data to the computer, the Eyegaze observes changes in pupil size, and a video camera records the user's facial expressions. After each test, the user performs a subjective evaluation of the emotional changes he or she expressed, using the emotional words extracted in the study above. We aim to evaluate the satisfaction level of the product's usability and compare it with the actual experimental results. Through continuing studies based on this research, we hope to supply a basic framework for the development of interfaces that take the user's emotions into consideration.

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
• Journal of the Ergonomics Society of Korea / v.31 no.3 / pp.427-435 / 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion from facial thermal images. Background: Facial thermal images have two advantages over visual images. First, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting conditions. Second, facial thermal images change not only with facial expression but also with emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli inducing anger, fear, boredom, and a neutral state were presented to participants and their facial temperatures were measured by an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and emotion states were analyzed. The eyes, mouth, and glabella were selected as facial expression features, and the forehead, nose, and cheeks were selected as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by the kind of emotion. Linear discriminant analysis for emotion recognition showed a correct classification percentage of 62.7% over the four emotions when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly to 56.7% when using only facial expression features, and to 40.2% when using only emotional state features. Conclusion: Facial expression features are essential for emotion recognition, but emotional state features are also important for classifying emotions. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
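
The classification step reported above is a linear discriminant analysis over baseline-to-emotion temperature differences from several facial regions. A minimal sketch with scikit-learn follows; the feature layout and the synthetic data are assumptions for illustration only.

```python
# Sketch of a linear discriminant analysis over per-region temperature changes,
# using scikit-learn. The data below are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

REGIONS = ["eyes", "mouth", "glabella", "forehead", "nose", "cheeks"]
EMOTIONS = ["anger", "fear", "boredom", "neutral"]

rng = np.random.default_rng(0)
n_subjects = 231
# One row per subject: temperature change (emotion minus baseline) for each region.
X = rng.normal(loc=-0.2, scale=0.4, size=(n_subjects, len(REGIONS)))
y = rng.integers(0, len(EMOTIONS), size=n_subjects)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```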

Crossmodal Perception of Mismatched Emotional Expressions by Embodied Agents (에이전트의 표정과 목소리 정서의 교차양상지각)

  • Cho, Yu-Suk;Suk, Ji-He;Han, Kwang-Hee
• Science of Emotion and Sensibility / v.12 no.3 / pp.267-278 / 2009
  • Today, embodied agents generate a large amount of interest because of their vital role in human-human and human-computer interactions in virtual worlds. A number of researchers have found that we can recognize and distinguish between various emotions expressed by an embodied agent, and many studies have found that we respond to simulated emotions in a way similar to how we respond to human emotions. This study investigates the interpretation of mismatched emotions expressed by an embodied agent (e.g. a happy face with a sad voice): whether audio-visual channel integration occurs or one channel dominates when participants judge the emotion. The study employed a 4 (visual: happy, sad, warm, cold) $\times$ 4 (audio: happy, sad, warm, cold) within-subjects repeated-measures design. The results suggest that people perceive emotions by relying on both channels rather than on just one. Additionally, facial expression (happy face vs. sad face) changes the relative influence of the two channels; the audio channel has more influence on the interpretation of emotions when the facial expression is happy. Participants were also able to perceive, from mismatched expressions, emotions that were not expressed by either the face or the voice, so it may be possible to convey varied and delicate emotions with an embodied agent using only a few kinds of emotional expressions.

Emotional Experiencing and Expression of Deaf Adolescents (농인 청소년의 감정 경험 및 표현 특성)

  • Park, Ji-Eun;Kim, Eun-Ye;Jang, Un-Jung;Cheong, E-Nae;Eum, Young-Ji;Sohn, Jin-Hun
• Science of Emotion and Sensibility / v.19 no.3 / pp.51-58 / 2016
  • This study examined differences between deaf and hearing adolescents in the emotions they experience and in the intensity with which they express them. Three different video clips were used to induce pleasure, anger, and sadness. Participants' facial expressions were recorded while they watched the clips. The experienced emotions were measured by a self-report method, and third-person raters scored the expressed emotions from the recorded facial images. The two groups (deaf and hearing) were compared on whether they experienced the same emotions and on whether the third-person ratings corresponded with the self-rated scores. There was no significant difference in the emotions experienced or in their intensity levels. However, hearing adolescents showed more intense expressions of pleasure than they reported, while deaf adolescents showed less intense expressions of happiness than they reported. Thus, hearing people might not be able to detect and fully comprehend how deaf people feel in everyday circumstances. This further indicates that deaf adolescents may not receive enough support from hearing people when they express their feelings and, consequently, that misunderstandings, conflicts, or even broken relationships may result.

Facial expression recognition based on pleasure and arousal dimensions (쾌 및 각성차원 기반 얼굴 표정인식)

  • 신영숙;최광남
• Korean Journal of Cognitive Science / v.14 no.4 / pp.33-42 / 2003
  • This paper presents a new system for facial expression recognition based on a dimension model of internal states. Facial expression information is extracted in three steps. In the first step, a Gabor wavelet representation extracts the edges of face components. In the second step, sparse features of facial expressions are extracted from neutral faces using the fuzzy C-means (FCM) clustering algorithm, and in the third step they are extracted from expression images using the Dynamic Model (DM). Finally, we show the recognition of facial expressions based on the dimension model of internal states using a multi-layer perceptron. The two-dimensional structure of emotion shows that it is possible to recognize not only facial expressions related to basic emotions but also expressions of various other emotions.
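
A rough sketch of the Gabor front end and a perceptron over the pleasure/arousal dimensions is shown below, assuming OpenCV and scikit-learn. The filter-bank parameters, the pooling into mean responses, and the regression targets are illustrative choices, not the paper's configuration (the FCM clustering and Dynamic Model steps are omitted here).

```python
# Illustrative Gabor filter bank plus a multi-layer perceptron predicting
# (pleasure, arousal) values; parameters below are assumptions, not the paper's.
import cv2
import numpy as np
from sklearn.neural_network import MLPRegressor

def gabor_features(gray_face, ksize=17, sigmas=(2.0, 4.0), n_orientations=4):
    """Concatenate mean absolute responses of a small Gabor filter bank."""
    feats = []
    for sigma in sigmas:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, 10.0, 0.5)
            response = cv2.filter2D(gray_face, cv2.CV_32F, kernel)
            feats.append(float(np.abs(response).mean()))
    return np.array(feats)

def train_dimension_model(face_images, pleasure_arousal):
    """face_images: grayscale face crops; pleasure_arousal: (n, 2) targets in [-1, 1]."""
    X = np.stack([gabor_features(img) for img in face_images])
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X, pleasure_arousal)
    return model
```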

A Local Feature-Based Robust Approach for Facial Expression Recognition from Depth Video

  • Uddin, Md. Zia;Kim, Jaehyoun
• KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.3 / pp.1390-1403 / 2016
  • Facial expression recognition (FER) plays a very significant role in computer vision, pattern recognition, and image processing applications such as human-computer interaction, as it provides rich information about people's emotions. For video-based facial expression recognition, depth cameras can be better candidates than RGB cameras: a person's face cannot be easily recognized from distance-based depth videos, so depth cameras also resolve some of the privacy issues that arise with RGB faces. A good FER system relies heavily on the extraction of robust features as well as on the recognition engine. In this work, an efficient novel approach is proposed to recognize facial expressions from time-sequential depth videos. First, Local Binary Pattern (LBP) features are obtained from the time-sequential depth faces and further processed by Generalized Discriminant Analysis (GDA) to make them more robust; finally, the LBP-GDA features are fed into Hidden Markov Models (HMMs) to train and recognize the different facial expressions. The proposed depth-based facial expression recognition approach is compared with conventional approaches such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Linear Discriminant Analysis (LDA), and it outperforms them by obtaining better recognition rates.
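
The sketch below covers the LBP-feature and per-class HMM parts of such a pipeline, assuming scikit-image and hmmlearn; the GDA projection from the paper is omitted, and the uniform-LBP settings are illustrative.

```python
# Per-frame LBP histograms fed to one Gaussian HMM per expression class;
# the GDA step is skipped and the parameters below are assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from hmmlearn import hmm

P, R = 8, 1
N_BINS = P + 2  # rotation-invariant uniform LBP with 8 neighbours has P + 2 codes

def lbp_histogram(depth_face):
    codes = local_binary_pattern(depth_face, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

def train_expression_hmms(sequences_by_label, n_states=4):
    """sequences_by_label: {expression label: list of depth-frame sequences}."""
    models = {}
    for label, sequences in sequences_by_label.items():
        feats = [np.stack([lbp_histogram(f) for f in seq]) for seq in sequences]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                n_iter=50, random_state=0)
        model.fit(np.concatenate(feats), lengths=[len(f) for f in feats])
        models[label] = model
    return models

def classify(models, depth_sequence):
    feats = np.stack([lbp_histogram(f) for f in depth_sequence])
    return max(models, key=lambda label: models[label].score(feats))
```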

Video Expression Recognition Method Based on Spatiotemporal Recurrent Neural Network and Feature Fusion

  • Zhou, Xuan
• Journal of Information Processing Systems / v.17 no.2 / pp.337-351 / 2021
  • Automatically recognizing facial expressions in video sequences is a challenging task because there is little direct correlation between facial features and subjective emotions in video. To overcome this problem, a video facial expression recognition method using a spatiotemporal recurrent neural network and feature fusion is proposed. First, the video is preprocessed, and a double-layer cascade structure is used to detect the face in each video image. Two deep convolutional neural networks are then used to extract the temporal-domain and spatial-domain facial features from the video: a spatial convolutional neural network extracts spatial information features from each frame of the static expression images, and a temporal convolutional neural network extracts dynamic information features from the optical flow computed over multiple frames of expression images. Multiplicative fusion is performed on the spatiotemporal features learned by the two deep convolutional neural networks. Finally, the fused features are input to a support vector machine to perform the facial expression classification task. Experimental results on the eNTERFACE, RML, and AFEW6.0 datasets show that the recognition rates obtained by the proposed method reach 88.67%, 70.32%, and 63.84%, respectively. Comparative experiments show that the proposed method obtains higher recognition accuracy than other recently reported methods.
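
A minimal sketch of the multiplicative fusion and SVM classification stages follows, assuming scikit-learn; how the spatial and temporal CNN feature vectors are produced is outside this snippet, and equal dimensionality of the two streams is assumed.

```python
# Element-wise (multiplicative) fusion of two per-video feature streams
# followed by an SVM classifier; shapes and hyperparameters are assumptions.
from sklearn.svm import SVC
from sklearn.preprocessing import normalize

def fuse_multiplicative(spatial_feats, temporal_feats):
    """Element-wise product of L2-normalized per-video feature vectors."""
    s = normalize(spatial_feats)   # (n_videos, d)
    t = normalize(temporal_feats)  # (n_videos, d), same dimensionality assumed
    return s * t

def train_classifier(spatial_feats, temporal_feats, labels):
    fused = fuse_multiplicative(spatial_feats, temporal_feats)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(fused, labels)
    return clf

def predict(clf, spatial_feats, temporal_feats):
    return clf.predict(fuse_multiplicative(spatial_feats, temporal_feats))
```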

Experience of Bell's Palsy Patients on Facial Qigong Exercise and Efficient Educational Program: a Qualitative Study (특발성 안면신경마비 환자의 안면 기공요법 경험과 안면 기공요법의 효과적인 교육 방안 : 질적 연구)

  • Yoon, Kang Hyun;Lee, Seungmin;Lim, Ji Seok;Cho, Ye Eun;Lee, Hwa Jin;Kim, Ji Hye;Kang, Jung Won;Lee, Sang Hoon
• Journal of Acupuncture Research / v.32 no.1 / pp.67-78 / 2015
  • Objectives : This study was designed to explore the experience of patients who received facial Qigong exercise education, to identify whether facial Qigong exercise is effective for Bell's palsy patients, and to aid in developing a more efficient exercise program. Methods : Eight patients who visited the Facial Palsy Center at Kyung Hee University Korean Medicine Hospital between September 2014 and December 2014 for treatment of Bell's palsy were interviewed. All patients had received facial Qigong exercise education at least twice. Each interview lasted 20 to 30 minutes and covered the experience of facial palsy, facial Qigong exercise, and suggestions for a more efficient facial Qigong program. Results : The patients reported experiencing emotions such as anxiety, depression, and confusion when they discovered the symptoms of facial palsy. They also reported that they had regained mental stability and recovered physically through facial Qigong exercise, and they suggested possible improvements to the facial Qigong exercise program in terms of time, place, and educator. Conclusions : The interview results suggest that facial Qigong exercise might be an effective treatment for Bell's palsy patients from both psychological and physical perspectives. A more patient-centered environment should be established to ensure efficient treatment.