• Title/Summary/Keyword: Facial Emotions

159 search results

Micro-Expression Recognition Based on Optical Flow Features and Improved MobileNetV2

  • Xu, Wei;Zheng, Hao;Yang, Zhongxue;Yang, Yingjie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.1981-1995 / 2021
  • When a person tries to conceal emotions, the real emotions manifest themselves in the form of micro-expressions. Facial micro-expression recognition remains extremely challenging in the field of pattern recognition, because it is difficult to find a feature extraction method that copes with the small changes and short duration of micro-expressions. Most methods rely on hand-crafted features to extract subtle facial movements. In this study, we introduce a method that combines optical flow and deep learning. First, we take the onset frame and the apex frame from each video sequence. Then, the motion features between these two frames are extracted using the optical flow method. Finally, the features are fed into an improved MobileNetV2 model, and an SVM is applied to classify the expressions. To evaluate the effectiveness of the method, we conduct experiments on the public spontaneous micro-expression database CASME II. Under leave-one-subject-out cross-validation, the recognition accuracy reaches 53.01% and the F-score reaches 0.5231. The results show that the proposed method significantly improves micro-expression recognition performance.
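
As a rough illustration of the motion-feature step, the sketch below estimates a single (dx, dy) displacement between an onset frame and an apex frame with a basic Lucas-Kanade least-squares fit. The paper uses a dense optical flow method; this toy NumPy version (the function name and parameters are illustrative assumptions, not the authors' implementation) only conveys the idea of extracting motion between the two frames.

```python
import numpy as np

def lucas_kanade_patch(onset, apex):
    """Estimate one (dx, dy) displacement between two frames.

    Toy stand-in for the dense optical flow used in the paper:
    solves the brightness-constancy equation Ix*u + Iy*v = -It
    in a least-squares sense over the whole patch.
    """
    onset = onset.astype(float)
    apex = apex.astype(float)
    Ix = np.gradient(onset, axis=1)       # spatial gradient, x direction
    Iy = np.gradient(onset, axis=0)       # spatial gradient, y direction
    It = apex - onset                     # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy
```

A full pipeline would compute the flow densely (e.g., per pixel) and feed the resulting motion field to the CNN; this patch-level estimate is only the core equation.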

Causal Relationships between Emotional Labor and Emotions and Communication Skills in the Foodservice Industry (외식산업 종사자의 감정노동과 감정노동자의 정서, 커뮤니케이션 스킬간의 인과관계)

  • Kim, Min-Joo;Kim, Doo-Ra
    • Culinary Science and Hospitality Research / v.14 no.2 / pp.73-85 / 2008
  • This study empirically examines the consequences of emotional labor in the foodservice industry. It analyzed the effects of emotional labor on the emotions of emotional laborers and on their communication skills. Data were collected through a questionnaire administered to a varied sample including employees of family restaurants and Korean, Chinese, and Japanese restaurants. The analysis indicated that, among the emotional-labor factors, only the effort toward emotional expression influenced affirmative emotion (p-value = 0.042). It also showed that the effort toward emotional expression had a positive effect on both verbal and non-verbal communication skills (p-value = 0.000). The study is valuable in that emotions and communication skills were selected for the first time as dependent variables of emotional labor, and causality between emotional labor and these variables was verified. However, it is limited by its small sample size and its reliance on convenience sampling.

Extreme Learning Machine Ensemble Using Bagging for Facial Expression Recognition

  • Ghimire, Deepak;Lee, Joonwhoan
    • Journal of Information Processing Systems / v.10 no.3 / pp.443-458 / 2014
  • An extreme learning machine (ELM) is a recently proposed learning algorithm for a single-layer feedforward neural network. In this paper we study an ensemble of ELMs built with a bagging algorithm for facial expression recognition (FER). Facial expression analysis is widely used in the behavioral interpretation of emotions, in cognitive science, and in social interaction. This paper presents an FER method based on histogram of oriented gradients (HOG) features and an ELM ensemble. First, HOG features are extracted from the face image by dividing it into a number of small cells. A bagging algorithm is then used to construct many different bags of training data, each of which trains a separate ELM. To recognize the expression of an input face image, its HOG features are fed to each trained ELM and the results are combined by majority voting. The ELM ensemble with bagging significantly improves the generalization capability of the network. Two available facial expression datasets (JAFFE and CK+) were used to evaluate the proposed classification system. Even though the performance of an individual ELM was lower, the ELM ensemble using bagging improved recognition performance significantly.
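
The core of the method, an ELM trained in closed form plus bagging with majority voting, can be sketched in a few lines of NumPy. The hidden-layer size, bag count, and toy data below are illustrative assumptions; the paper's actual configuration and HOG feature extraction are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, n_hidden=20, n_classes=2):
    """One ELM: random fixed hidden layer, closed-form output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer activations
    T = np.eye(n_classes)[y]          # one-hot targets
    beta = np.linalg.pinv(H) @ T      # least-squares output weights
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

def bagged_elm(X, y, n_models=7, **kw):
    """Bagging: one bootstrap sample and one ELM per bag."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))   # bootstrap resample
        models.append(train_elm(X[idx], y[idx], **kw))
    return models

def vote(models, X):
    """Majority vote over the per-ELM predictions."""
    votes = np.stack([predict_elm(m, X) for m in models])
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
```

Because each ELM's output weights are a single pseudo-inverse, training the whole ensemble stays cheap, which is what makes bagging many ELMs practical.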

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.2 / pp.105-110 / 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals, and computers need technologies that recognize emotion the same way, from combined information. In this paper, we recognize five emotions (neutral, happiness, anger, surprise, sadness) from speech signals and facial images, and propose a multimodal method that fuses the two recognition results. Emotion recognition from both the speech signal and the facial image uses Principal Component Analysis (PCA), and the multimodal fusion applies a fuzzy membership function to the two results. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision-fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion offers a better recognition rate than either the facial image or the speech signal alone.
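
The decision-fusion idea can be sketched as follows: per-emotion confidence scores from each modality are mapped through an S-shaped membership function and combined with a weighted sum. The membership breakpoints and the modality weights below are illustrative assumptions; the abstract does not give the paper's actual parameters.

```python
import numpy as np

def s_membership(x, a, b):
    """Standard S-shaped membership function rising from 0 at a to 1 at b."""
    x = np.asarray(x, dtype=float)
    m = (a + b) / 2.0
    return np.where(x <= a, 0.0,
           np.where(x <= m, 2 * ((x - a) / (b - a)) ** 2,
           np.where(x <= b, 1 - 2 * ((x - b) / (b - a)) ** 2, 1.0)))

def fuse(speech_scores, face_scores, w_speech=0.55, w_face=0.45):
    """Decision fusion: map each modality's per-emotion scores through the
    S-type membership function, then take the weighted-sum argmax.
    Weights reflect the stronger speech modality and are illustrative."""
    mu_s = s_membership(speech_scores, 0.0, 1.0)
    mu_f = s_membership(face_scores, 0.0, 1.0)
    return int(np.argmax(w_speech * mu_s + w_face * mu_f))
```

The S-curve compresses low-confidence scores toward 0 and high-confidence scores toward 1 before fusion, so a weakly confident modality contributes less to the final decision.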

A Multimodal Emotion Recognition Using the Facial Image and Speech Signal

  • Go, Hyoun-Joo;Kim, Yong-Tae;Chun, Myung-Geun
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.1 / pp.1-6 / 2005
  • In this paper, we propose an emotion recognition method using facial images and speech signals. Six basic emotions are investigated: happiness, sadness, anger, surprise, fear, and dislike. Facial expression recognition is performed using multi-resolution analysis based on the discrete wavelet transform, with feature vectors obtained through Independent Component Analysis (ICA). Emotion recognition from the speech signal, on the other hand, runs the recognition algorithm independently on each wavelet sub-band and obtains the final result from a multi-decision-making scheme. After merging the facial and speech emotion recognition results, we obtained better performance than previous methods.
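
The multi-resolution step can be illustrated with one level of a 2-D Haar decomposition, the simplest discrete wavelet: the image splits into an approximation sub-band (LL) and three detail sub-bands (LH, HL, HH). This is a minimal sketch; the paper does not specify the wavelet family or decomposition depth, and the ICA stage is omitted.

```python
import numpy as np

def haar2d(img):
    """One level of 2-D Haar decomposition into LL, LH, HL, HH sub-bands.

    Averages/differences adjacent rows, then adjacent columns, halving
    each dimension (input sides must be even).
    """
    img = img.astype(float)
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2      # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2      # diagonal detail
    return LL, LH, HL, HH
```

Recursing on LL yields the multi-resolution pyramid from which sub-band features would then be extracted.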

Emotion Recognition of Facial Expression using the Hybrid Feature Extraction (혼합형 특징점 추출을 이용한 얼굴 표정의 감성 인식)

  • Byun, Kwang-Sub;Park, Chang-Hyun;Sim, Kwee-Bo
    • Proceedings of the KIEE Conference / 2004.05a / pp.132-134 / 2004
  • Emotion recognition between humans is performed compositely, using various cues such as the face, voice, and gesture. Among these, the face reveals emotional expression most clearly. Humans express and recognize emotion using the complex and varied features of the face. This paper proposes a hybrid feature extraction method for emotion recognition from facial expressions. The hybrid feature extraction imitates the human emotion recognition system by combining geometric-feature-based extraction with a color-distribution histogram; by extracting many features of the facial expression, it can perform emotion recognition robustly.
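
The color-distribution half of the hybrid feature can be sketched as a per-channel histogram concatenated into one normalized feature vector. The bin count and normalization are illustrative assumptions; the geometric-feature half is not shown.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel color histogram over an H x W x C uint8 image,
    each channel normalized to sum to 1, concatenated into one vector."""
    feats = []
    for c in range(img.shape[2]):
        h, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())
    return np.concatenate(feats)
```

In a hybrid scheme this vector would simply be concatenated with the geometric features (e.g., landmark distances) before classification.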

Research on Micro-Movement Responses of Facial Muscles by Intimacy, Empathy, Valence (친밀도, 공감도, 긍정도에 따른 얼굴 근육의 미세움직임 반응 차이)

  • Cho, Ji Eun;Park, Sang-In;Won, Myoung Ju;Park, Min Ji;Whang, Min-Cheol
    • The Journal of the Korea Contents Association / v.17 no.2 / pp.439-448 / 2017
  • Facial expression is an important factor in social interaction, and facial muscle movement provides emotion information for developing social relationships. However, facial movement has rarely been used to recognize social emotion. This study analyzes facial micro-movements to recognize social emotions such as intimacy, empathy, and valence. Seventy-six university students were presented with stimuli for social emotions while their facial expressions were recorded with a camera. Facial micro-movements showed significant differences across social emotions. After extracting the amount of movement of 3 unconscious muscles and 18 conscious muscles, the dominant frequency band was identified. Muscles around the nose and cheek showed significant differences for intimacy, those around the mouth for empathy, and those around the jaw for valence. The results suggest new facial movements for expressing social emotion in virtual avatars and for recognizing social emotion.
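
Finding a dominant frequency band from a muscle micro-movement signal reduces to locating the peak of the signal's spectrum. The sketch below shows that step for a single 1-D movement trace; the sampling rate and signal are illustrative, and the paper's band definitions are not reproduced.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Frequency (Hz) with the largest spectral magnitude, ignoring DC.

    signal: 1-D movement-amount trace; fs: sampling rate in Hz.
    """
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]
```

Applying this per muscle region gives the dominant band that the study compares across the intimacy, empathy, and valence conditions.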

Realtime Facial Expression Recognition from Video Sequences Using Optical Flow and Expression HMM (광류와 표정 HMM에 의한 동영상으로부터의 실시간 얼굴표정 인식)

  • Chun, Jun-Chul;Shin, Gi-Han
    • Journal of Internet Computing and Services / v.10 no.4 / pp.55-70 / 2009
  • Vision-based human-computer interaction is an emerging field of science and industry that aims to provide a natural way for humans and computers to communicate. In that sense, inferring a person's emotional state from facial expression recognition is an important issue. In this paper, we present a novel approach to recognizing facial expressions from a sequence of input images using emotion-specific HMMs (Hidden Markov Models) and facial motion tracking based on optical flow. Conventionally, in an HMM consisting of basic emotional states, transitions between emotions are assumed to pass through the neutral state. In this work, however, we propose an enhanced transition framework that adds direct transitions between emotional states, without passing through the neutral state, to the traditional transition model. To localize facial features in the video sequence, we exploit template matching and optical flow. The facial feature displacements traced by the optical flow serve as input parameters to the HMM for facial expression recognition. The experiments show that the proposed framework can effectively recognize facial expressions in real time.
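
The contrast between the two transition models can be sketched as a pair of transition matrices. The emotion set, the self-transition probability `stay`, and the uniform spread over targets below are illustrative assumptions, not the paper's actual HMM parameters.

```python
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "surprise", "anger"]

def transition_matrix(direct=True, stay=0.6):
    """Row-stochastic transition matrix over EMOTIONS (neutral is index 0).

    direct=False: the traditional model, where a non-neutral emotion can
    only return to neutral. direct=True: the enhanced model, where every
    emotion can also transition directly to every other emotion.
    """
    n = len(EMOTIONS)
    A = np.zeros((n, n))
    for i in range(n):
        if direct or i == 0:
            targets = [j for j in range(n) if j != i]
        else:
            targets = [0]                       # only back to neutral
        A[i, i] = stay
        for j in targets:
            A[i, j] = (1 - stay) / len(targets)
    return A
```

In the direct model a happy-to-sad transition has nonzero probability; in the neutral-gated model it is zero, which is exactly the restriction the paper removes.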

Non-verbal Emotional Expressions for Social Presence of Chatbot Interface (챗봇의 사회적 현존감을 위한 비언어적 감정 표현 방식)

  • Kang, Minjeong
    • The Journal of the Korea Contents Association / v.21 no.1 / pp.1-11 / 2021
  • The users of a chatbot messenger can engage better in the conversation if they feel intimacy with the chatbot, which can be achieved by the chatbot expressing human emotions effectively. Thus motivated, this study aims to identify the emotional expressions of a chatbot that make people feel its social presence. Background research showed that facial expression is the most effective channel for emotion and that movement is important for relational immersion. In a survey, we prepared moving text, moving gestures, and still emoticons representing five emotions: happiness, sadness, surprise, fear, and anger. We then asked which form best made respondents feel social presence with a chatbot for each emotion. We found that for an aroused, pleasant emotion such as 'happiness', people most prefer moving gestures and text, while for unpleasant emotions such as 'sadness' and 'anger', people prefer emoticons. Lastly, for neutral emotions such as 'surprise' and 'fear', people tend to select moving text that delivers a clear meaning. We expect these results to be useful for developing emotional chatbots that enable more effective conversations with users.

Dynamics of Facial Subcutaneous Blood Flow Recovery in Post-stress Period

  • Sohn, Jin-Hun;Estate M. Sokhadze;Lee, Kyung-Hwa;Lee, Jong-Mi;Park, Mi-Kyung;Park, Ji-Yeon
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2000.11a / pp.62-68 / 2000
  • The aim of the study was to compare the effects of music and white noise on the recovery of facial blood flow parameters after stressful visual stimulation. Twenty-nine subjects participated in the experiment. Three visual stimulation sessions with aversive slides (the IAPS, disgust category) were followed by subjectively "pleasant" music (in the first session), "sad" music (in the second), and white noise (in the third); the order of sessions was counterbalanced. Blood flow parameters (peak blood flow, blood flow velocity, blood volume) were recorded by a Laser Doppler single-crystal system (LASERFLO BPM 403A) interfaced through a BIOPAC 100WS with AcqKnowledge software (v.3.5) and analyzed in off-line mode. Aversive visual stimulation itself decreased blood flow and velocity in all three sessions. Both "pleasant" and "sad" music restored baseline levels in all blood flow parameters, while noise did not enhance the recovery process. Music had a significant effect on post-stress recovery in peak blood flow and blood flow velocity, but not in blood volume, and pleasant music had larger effects on peak blood flow and flow velocity than white noise. This reveals that music exerted positive modulatory effects on facial vascular activity during recovery from the negative emotional state elicited by stressful slides. The results partially support the undoing hypothesis of Levenson (1994), which states that positive emotions may facilitate recovery from negative emotions.
