• Title/Summary/Keyword: Facial Action Unit


Improved Two-Phase Framework for Facial Emotion Recognition

  • Yoon, Hyunjin; Park, Sangwook; Lee, Yongkwi; Han, Mikyong; Jang, Jong-Hyun
    • ETRI Journal, v.37 no.6, pp.1199-1210, 2015
  • Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received considerable attention over the last decade owing to its wide range of applications. Current computer-based two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation inherent in such a multiphase scheme. Motivated by the key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework in which the presence of AUs is determined by the group decisions of multiple AU detectors and a target emotion is inferred from the fused AU detection decisions. The framework consists of three major components: multiple AU detection, AU detection fusion, and AU-to-emotion mapping. Experimental results on two real-world face databases demonstrate improved performance over the previous two-phase method using a single AU detector, in terms of both AU detection accuracy and correct emotion recognition rate.
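
A minimal sketch of the two-phase idea described in this abstract: several AU detectors vote on each AU, and the fused decisions are mapped to an emotion. The detector interface, the majority-vote fusion rule, and the `EMOTION_RULES` table are illustrative assumptions, not the authors' exact components.

```python
EMOTION_RULES = {            # hypothetical FACS-style AU-to-emotion prototypes
    "happiness": {6, 12},
    "surprise":  {1, 2, 5, 26},
    "sadness":   {1, 4, 15},
}

def detect_aus(detectors, image, candidate_aus):
    """Phase 1: fuse per-AU decisions from multiple detectors by majority vote."""
    fused = set()
    for au in candidate_aus:
        votes = sum(1 for d in detectors if d.predict(image, au))  # each detector returns True/False
        if votes > len(detectors) / 2:
            fused.add(au)
    return fused

def map_aus_to_emotion(detected_aus):
    """Phase 2: pick the emotion whose AU prototype best overlaps the detected AUs."""
    scores = {emo: len(detected_aus & proto) / len(proto)
              for emo, proto in EMOTION_RULES.items()}
    return max(scores, key=scores.get)
```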

Facial Expression Recognition using Face Alignment and AdaBoost (얼굴정렬과 AdaBoost를 이용한 얼굴 표정 인식)

  • Jeong, Kyungjoong; Choi, Jaesik; Jang, Gil-Jin
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.11, pp.193-201, 2014
  • This paper presents a facial expression recognition system consisting of face detection, face alignment, facial-unit extraction, and training and testing based on AdaBoost classifiers. First, the face region is located by a face detector, and a face alignment algorithm then extracts feature points from it. The facial units are a subset of action units generated by combining the extracted feature points. The facial units are generally more effective for smaller databases, represent facial expressions more efficiently, and reduce computation time, so the method can be applied to real-time scenarios. Experimental results in real scenarios show that the proposed system achieves recognition rates above 90%.
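
A rough sketch of the training and testing stage described above, assuming the facial units have already been extracted as fixed-length feature vectors. Using scikit-learn's `AdaBoostClassifier` is an illustrative substitution, not the authors' implementation.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

def train_expression_classifier(facial_unit_features, labels):
    """facial_unit_features: (n_samples, n_units) array; labels: expression ids."""
    X_train, X_test, y_train, y_test = train_test_split(
        facial_unit_features, labels, test_size=0.2, random_state=0)
    clf = AdaBoostClassifier(n_estimators=200)   # boosted weak learners on facial-unit features
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)        # held-out recognition rate
```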

The Effects of the Emotion Regulation Strategy to the Disgust Stimulus on Facial Expression and Emotional Experience (혐오자극에 대한 정서조절전략이 얼굴표정 및 정서경험에 미치는 영향)

  • Jang, Sung-Lee; Lee, Jang-Han
    • Korean Journal of Health Psychology, v.15 no.3, pp.483-498, 2010
  • This study examines the effects of emotion regulation strategies on facial expressions and emotional experiences, comparing groups using antecedent-focused and response-focused regulation. Fifty female undergraduate students were instructed to use different emotion regulation strategies while viewing a disgust-inducing film, during which their facial expressions and emotional experiences were measured. Participants in the EG (expression group) showed the highest frequency of disgust-related action units, followed in order by the DG (expressive dissonance group), CG (cognitive reappraisal group), and SG (expressive suppression group). The upper region of the face reflected genuine emotion; in this region, the frequency of disgust-related action units was lower in the CG than in the EG or DG. The PANAS results indicated the largest decrease in positive emotions in the DG, but an increase in positive emotions in the CG. These findings suggest that cognitive reappraisal of an event is a more functional emotion regulation strategy, with respect to both facial expression and emotional experience, than the other strategies examined.

Comic Emotional Expression for Effective Sign-Language Communications (효율적인 수화 통신을 위한 코믹한 감정 표현)

  • ;;Shin Tanahashi; Yoshinao Aoki
    • Proceedings of the IEEK Conference, 1999.06a, pp.651-654, 1999
  • In this paper, we propose an emotional expression method that uses a comic model and special marks for effective sign-language communication. Previous work has aimed at producing increasingly realistic facial and emotional expressions; when only emotion needs to be conveyed, however, a comic expression can be more effective than a realistic rendering of a face. The comic face is a comic-style expression model in which everything except essential parts such as the eyebrows, eyes, nose, and mouth is discarded. In the comic model, special marks can be used to emphasize various emotions. We represent emotional expressions using Action Units (AUs) of the Facial Action Coding System (FACS) and define Special Units (SUs) for emphasizing the emotions. Experimental results show the possibility that the proposed method could be used effectively for sign-language image communication.
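
A toy illustration of the combination this abstract describes: each emotion is expressed through a set of FACS Action Units plus extra "Special Unit" marks on a comic face. The specific AU numbers, SU names, and the renderer interface are assumptions for illustration only.

```python
COMIC_EXPRESSIONS = {
    "anger":    {"aus": [4, 5, 7, 23], "sus": ["vein_mark"]},
    "surprise": {"aus": [1, 2, 5, 26], "sus": ["exclamation_mark"]},
    "sadness":  {"aus": [1, 4, 15],    "sus": ["tear_drop"]},
}

def render_comic_face(emotion, renderer):
    """Apply the AU poses, then overlay the emphasizing special marks."""
    spec = COMIC_EXPRESSIONS[emotion]
    for au in spec["aus"]:
        renderer.apply_action_unit(au)   # hypothetical comic-face renderer API
    for su in spec["sus"]:
        renderer.overlay_mark(su)
```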


Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.19 no.4, pp.562-567, 2009
  • Facial expression, a key mechanism of human emotional interaction, is a powerful tool in human-robot interfaces (HRI) and human-computer interfaces (HCI). Using facial expressions, a system can produce reactions appropriate to the user's emotional state and can infer which services agents such as intelligent robots should provide. This article addresses expressive face modeling using an advanced active appearance model (AAM) for facial emotion recognition, considering the six universal emotion categories defined by Ekman. In the human face, emotions are expressed most prominently through the eyes and mouth, so recognizing emotion from a facial image requires extracting feature points such as Ekman's Action Units (AUs). The AAM is one of the most commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends heavily on the initial parameter settings of the model, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian network. First, we obtain the reconstructive parameters of a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the image, and compute the initial AAM parameters from the reconstructed facial model. The distance error between the model and the target contour is then reduced by adjusting the model parameters. Finally, after several iterations, the model matched to the facial feature outline is obtained, and its parameters are used to recognize the facial emotion with a Bayesian network.
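
A condensed sketch of the pipeline in this abstract: initialize an AAM from sample-based reconstruction, refine it iteratively to shrink the contour error, then infer the emotion from the fitted features with a probabilistic classifier. `GaussianNB` stands in here for the paper's Bayesian network, and the `aam` object interface is an assumption.

```python
from sklearn.naive_bayes import GaussianNB

def fit_aam(aam, image, max_iters=50, tol=1e-3):
    """Iteratively adjust model parameters to reduce the contour distance error."""
    params = aam.initial_params(image)        # from sample-based reconstruction
    for _ in range(max_iters):
        error, grad = aam.contour_error(image, params)
        if error < tol:
            break
        params = params - 0.1 * grad          # simple gradient-style parameter update
    return aam.extract_au_features(params)

def train_emotion_model(feature_matrix, emotion_labels):
    """Learn P(features | emotion) for the six Ekman categories."""
    model = GaussianNB()
    model.fit(feature_matrix, emotion_labels)
    return model
```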

Development of FACS-based Android Head for Emotional Expressions (감정표현을 위한 FACS 기반의 안드로이드 헤드의 개발)

  • Choi, Dongwoon; Lee, Duk-Yeon; Lee, Dong-Wook
    • Journal of Broadcast Engineering, v.25 no.4, pp.537-544, 2020
  • This paper proposes an android robot head based on the Facial Action Coding System (FACS) and the generation of emotional expressions using FACS. The term android robot refers to robots with a human-like appearance; such robots have artificial skin and muscles. To express emotions, the location and number of artificial muscles had to be determined, which required anatomically analyzing the motions of the human face with FACS. In FACS, expressions are composed of action units (AUs), which serve as the basis for determining the location and number of artificial muscles in the robot. The android head developed in this study has servo motors and wires corresponding to 30 artificial muscles and is covered with artificial skin so that it can make facial expressions. Spherical joints and springs were used to build the micro-eyeball structures, and the arrangement of the 30 servo motors was based on efficient wire routing. The head has 30 degrees of freedom and can express 13 basic emotions, and the recognition rate of these basic emotional expressions was evaluated by spectators at an exhibition.
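
A minimal sketch of driving such an android head from FACS codes: each AU maps to one or more of the 30 wire-pulling servos with a gain. The AU-to-servo table and the servo-bus interface below are illustrative assumptions, not the authors' design.

```python
AU_TO_SERVOS = {
    1:  [(0, 0.8), (1, 0.8)],    # inner brow raiser -> two brow servos
    12: [(14, 1.0), (15, 1.0)],  # lip corner puller -> cheek/mouth servos
    26: [(28, 0.6)],             # jaw drop
}

def express(au_intensities, servo_bus):
    """au_intensities: {au_number: 0..1}; command each mapped servo proportionally."""
    for au, intensity in au_intensities.items():
        for servo_id, gain in AU_TO_SERVOS.get(au, []):
            servo_bus.set_position(servo_id, gain * intensity)  # hypothetical servo API
```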

Motion Synthesis in a 3D Facial Model Using Features and Motion Parameters Estimated through Optical Flow (Optical flow를 이용한 얼굴요소 및 얼굴의 움직임 측정값에 따른 3차원 얼굴모델의 움직임 합성)

  • 박도영; 변혜란
    • Proceedings of the Korean Information Science Society Conference, 1998.10c, pp.408-410, 1998
  • Understanding facial motion in video is an important problem in human-computer interaction. In this paper, parameterized motion vectors are extracted through optical flow to measure the motion of the face and its features in 2D video. These vectors are then combined into a small number of parameters that describe the facial motion. Different vector models are used depending on the characteristics of the face and its individual features. The parameterized motion vectors are updated every frame to locate the face and facial features in each frame. The motion information obtained from the combined parameters of the updated vectors is passed to a 3D face model and linked to the model's Action Units, so that the facial motion observed in the 2D video can be synthesized on the 3D model.
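
A sketch of the measurement step described above: dense optical flow inside a facial-feature region is summarized by a few affine motion parameters that could then drive Action Units on a 3D model. OpenCV's Farneback flow and the affine fit are illustrative choices, not the authors' exact vector models.

```python
import cv2
import numpy as np

def region_motion_params(prev_gray, cur_gray, region):
    """region = (x, y, w, h); returns 6 affine parameters describing the region's motion."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x, y, w, h = region
    ys, xs = np.mgrid[y:y + h, x:x + w]
    pts = np.column_stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # [x, y, 1]
    u = flow[y:y + h, x:x + w, 0].ravel()
    v = flow[y:y + h, x:x + w, 1].ravel()
    ax, _, _, _ = np.linalg.lstsq(pts, u, rcond=None)  # u ~ a0*x + a1*y + a2
    ay, _, _, _ = np.linalg.lstsq(pts, v, rcond=None)  # v ~ a3*x + a4*y + a5
    return np.concatenate([ax, ay])
```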


Realistic 3D Facial Expression Animation Based on Muscle Model (근육 모델 기반의 자연스러운 3차원 얼굴 표정 애니메이션)

  • Lee, Hye-Jin; Chung, Hyun-Sook; Lee, Yill-Byung
    • Proceedings of the Korea Information Processing Society Conference, 2002.04a, pp.265-268, 2002
  • The face varies with gender, age, and ethnicity, which makes individuals easy to distinguish, and it is regarded as an important means of revealing a person's internal state. This paper introduces an effective method for facial expression animation that uses muscle-based modeling grounded in anatomical structure, including the skin tissue and facial muscles of a real face. The proposed system consists of three stages: constructing the facial wireframe and subdividing the polygon mesh, attaching the muscles required for the face, and generating facial expressions from muscle movements. In the wireframe construction and mesh subdivision stage, the face model is based on the face proposed by Waters [1], and each polygon mesh is subdivided into four parts to produce a smooth 3D face model. In the next stage, 30 muscles required for expression generation are created and attached to the regions most frequently used when making expressions. Finally, in the expression generation stage, Action Units proposed by FACS are combined and the intensity of the required muscles is adjusted according to the target expression, producing more natural and realistic facial expression animation.
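
A toy version of a Waters-style linear muscle, assuming vertices within the muscle's influence zone are pulled toward its attachment point with a cosine falloff scaled by the intensity of the Action Unit being performed. The falloff shape and parameters are simplified assumptions, not the paper's implementation.

```python
import numpy as np

def apply_linear_muscle(vertices, head, tail, intensity, influence_radius):
    """vertices: (n, 3) mesh points; head/tail: (3,) attachment and insertion points."""
    direction = head - tail                       # muscles contract toward the attachment point
    direction = direction / np.linalg.norm(direction)
    dists = np.linalg.norm(vertices - tail, axis=1)
    falloff = np.cos(np.clip(dists / influence_radius, 0, 1) * np.pi / 2)  # 1 at tail, 0 at zone edge
    return vertices + intensity * falloff[:, None] * direction
```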


A Comic Facial Expression Method for Intelligent Avatar Communications in the Internet Cyberspace (인터넷 가상공간에서 지적 아바타 통신을 위한 코믹한 얼굴 표정의 생성법)

  • 이용후; 김상운; 청목유직
    • Journal of the Institute of Electronics Engineers of Korea CI, v.40 no.1, pp.59-73, 2003
  • As a means of overcoming the language barrier on the Internet, a sign-language communication system using CG animation techniques has been developed and proposed. In that system, the joint angles of the arms and hands corresponding to gestures are considered as a non-verbal communication tool. Emotional expression, however, can also play an important role in communication. In particular, a comic expression can be more effective than a realistic facial expression, and the movements of the cheeks and jaw are more important AUs than those of the eyebrows, eyes, mouth, and so on. In this paper, we therefore design a 3D emotion editor using a 2D model and extract the AUs (called PAUs here) that play a principal role in expressing emotions. We also propose a method of generating universal emotional expressions with avatar models that have different vertex structures, in which the AU movements are dynamically adjusted according to emotional intensity. The proposed system is implemented with Visual C++ and Open Inventor on Windows platforms. Experimental results show the possibility that the system could be used as a non-verbal means of communication to overcome the language barrier.
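
A sketch of intensity-scaled AU blending as described above: each principal AU (PAU) contributes a per-vertex displacement field for a given avatar, and the emotional intensity scales how far each AU moves. The displacement fields, weights, and PAU set are illustrative assumptions rather than the paper's editor.

```python
import numpy as np

def synthesize_expression(neutral_vertices, pau_displacements, pau_weights, intensity):
    """neutral_vertices: (n, 3); pau_displacements: {pau: (n, 3)}; pau_weights: {pau: 0..1}."""
    out = neutral_vertices.copy()
    for pau, weight in pau_weights.items():
        out += intensity * weight * pau_displacements[pau]  # blend each PAU's displacement field
    return out
```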