• Title/Summary/Keyword: Facial Action Unit


A Generation Methodology of Facial Expressions for Avatar Communications (아바타 통신에서의 얼굴 표정의 생성 방법)

  • Kim Jin-Yong; Yoo Jae-Hwi
    • Journal of the Korea Society of Computer and Information / v.10 no.3 s.35 / pp.55-64 / 2005
  • The avatar can be used as an auxiliary means of text and image communication in cyberspace. An intelligent communication method can also be utilized to achieve real-time communication, in which intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real or compressed pictures. In this paper, to supplement arm and leg gestures, a method of generating facial expressions that represent the sender's emotions is provided. Facial expressions can be represented by Action Units (AUs); we suggest a methodology for finding appropriate AUs in avatar models of various shapes and structures. To maximize the efficiency of emotional expression, a comic-style facial model having only eyebrows, eyes, a nose, and a mouth is employed. The generation of facial emotion animation with these parameters is also investigated. (A minimal sketch of driving such a comic-style face from AU intensities follows this entry.)

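Below is a minimal, hypothetical sketch of the idea described in this entry: driving a comic-style avatar face (eyebrows, eyes, nose, mouth only) from AU intensities by displacing a few control points. The control points, AU set, and displacement directions are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: a comic-style face driven by AU intensities.
import numpy as np

# Control points of the comic face in a normalized coordinate frame.
FACE = {
    "brow_l": np.array([-0.3, 0.4]), "brow_r": np.array([0.3, 0.4]),
    "mouth_top": np.array([0.0, -0.3]), "mouth_bottom": np.array([0.0, -0.4]),
}

# Each AU moves specific control points by a fixed direction * intensity.
AU_EFFECTS = {
    "AU1":  [("brow_l", (0.0, 0.08)), ("brow_r", (0.0, 0.08))],   # inner brow raise
    "AU4":  [("brow_l", (0.0, -0.06)), ("brow_r", (0.0, -0.06))], # brow lower
    "AU26": [("mouth_bottom", (0.0, -0.12))],                     # jaw drop
}

def apply_aus(face, au_intensities):
    """Return new control points after applying AU intensities in [0, 1]."""
    out = {name: point.copy() for name, point in face.items()}
    for au, strength in au_intensities.items():
        for point_name, (dx, dy) in AU_EFFECTS.get(au, []):
            out[point_name] += strength * np.array([dx, dy])
    return out

# Example: a surprised face (brows raised, jaw dropped).
surprised = apply_aus(FACE, {"AU1": 1.0, "AU26": 0.8})
```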

An Action Unit co-occurrence constraint 3DCNN based Action Unit recognition approach

  • Jia, Xibin; Li, Weiting; Wang, Yuechen; Hong, SungChan; Su, Xing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.924-942 / 2020
  • Facial expressions are diverse and vary among persons due to psychological factors, whereas facial actions are comparatively stable because of the fixed anatomical structure of the face. Improving action unit recognition will therefore facilitate facial expression recognition and provide a sound basis for mental state analysis. However, it remains a challenging task with limited recognition accuracy, because the muscle movements on the face are tiny and the corresponding facial actions are subtle. Taking into account that muscle movements affect each other when a person expresses emotion, we propose to make full use of the co-occurrence relationships among action units (AUs). Considering the dynamic characteristics of AUs as well, we adopt a 3D Convolutional Neural Network (3DCNN) as the base framework and recognize multiple action units around the brows, nose, and mouth, which contribute most to emotional expression, with their co-occurrence relationships applied as a constraint. Experiments were conducted on the public CASME dataset and its variant, the CASME2 dataset. The results show that our AU co-occurrence constraint 3DCNN based AU recognition approach outperforms current approaches and demonstrates the effectiveness of exploiting AU relationships in AU recognition. (A minimal sketch of a 3DCNN with a co-occurrence penalty follows this entry.)
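
A minimal sketch (PyTorch) of the approach this entry names: a small 3D CNN producing per-AU logits, trained with multi-label BCE plus a penalty that discourages jointly activating AU pairs with low entries in a given co-occurrence matrix. The architecture, penalty form, and hyperparameters are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AU3DCNN(nn.Module):
    """Tiny 3D CNN over grayscale clips, one logit per AU."""
    def __init__(self, num_aus: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_aus)

    def forward(self, clips):                  # clips: (B, 1, T, H, W)
        h = self.features(clips).flatten(1)    # (B, 32)
        return self.classifier(h)              # per-AU logits (B, num_aus)

def cooccurrence_loss(logits, labels, co_matrix, alpha=0.1):
    """Multi-label BCE plus a penalty when two AUs that rarely co-occur
    (low co_matrix entry) are both predicted with high probability."""
    bce = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    p = torch.sigmoid(logits)                      # (B, num_aus)
    pair = p.unsqueeze(2) * p.unsqueeze(1)         # (B, num_aus, num_aus)
    penalty = (pair * (1.0 - co_matrix)).mean()    # discourage unlikely pairs
    return bce + alpha * penalty
```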

Facial Action Unit Detection with Multilayer Fused Multi-Task and Multi-Label Deep Learning Network

  • He, Jun; Li, Dongliang; Bo, Sun; Yu, Lejun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5546-5559 / 2019
  • Facial action units (AUs) have recently drawn increased attention because they can be used to recognize facial expressions. A variety of methods have been designed for frontal-view AU detection, but few can handle multi-view face images. In this paper, we propose a method for multi-view facial AU detection using a fused multilayer, multi-task, and multi-label deep learning network. The network completes two tasks: AU detection, a multi-label problem, and facial view detection, a single-label problem. A residual network and multilayer fusion are applied to obtain more representative features. The method performs well: the F1 score on FERA 2017 is 13.1% higher than the baseline, and the facial view recognition accuracy is 0.991, showing that the multi-task, multi-label model achieves good performance on both tasks. (A minimal sketch of such a shared-backbone, two-head network follows this entry.)
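
A minimal sketch (PyTorch) of the multi-task layout this entry describes: one shared backbone, a multi-label AU head, and a single-label view head trained with a combined loss. Layer sizes, the number of AUs, and the number of views are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskAUNet(nn.Module):
    """Shared backbone with an AU head (multi-label) and a view head (single-label)."""
    def __init__(self, num_aus=10, num_views=9):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.au_head = nn.Linear(64, num_aus)      # multi-label task
        self.view_head = nn.Linear(64, num_views)  # single-label task

    def forward(self, x):                          # x: (B, 3, H, W)
        h = self.backbone(x)
        return self.au_head(h), self.view_head(h)

def multitask_loss(au_logits, view_logits, au_labels, view_labels, w=0.5):
    # BCE for the multi-label AUs, cross-entropy for the single-label view.
    au_loss = nn.functional.binary_cross_entropy_with_logits(au_logits, au_labels)
    view_loss = nn.functional.cross_entropy(view_logits, view_labels)
    return au_loss + w * view_loss
```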

Development of Facial Emotion Recognition System Based on Optimization of HMM Structure by using Harmony Search Algorithm (Harmony Search 알고리즘 기반 HMM 구조 최적화에 의한 얼굴 정서 인식 시스템 개발)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.3 / pp.395-400 / 2011
  • In this paper, we propose a facial emotion recognition study that considers the dynamic variation of emotional states in facial image sequences. The proposed system consists of two main steps: facial-image-based emotional feature extraction and emotional state classification/recognition. First, we propose a method for extracting and analyzing the emotional feature region using a combination of the Active Shape Model (ASM) and Facial Action Units (FAUs). Second, we propose an emotional state classification and recognition method based on the Hidden Markov Model (HMM), a type of dynamic Bayesian network. We also adopt a heuristic optimization procedure based on the Harmony Search (HS) algorithm for HMM parameter learning in order to classify emotional states more accurately. Using these methods, we construct an emotion recognition system based on variations in dynamic facial image sequences and attempt to improve its recognition performance. (A minimal sketch of the Harmony Search loop follows this entry.)
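
A minimal sketch of the core Harmony Search loop, applied here to a generic parameter vector standing in for the HMM parameters the paper tunes. The HMS, HMCR, PAR, and bandwidth values are illustrative assumptions, not the paper's settings.

```python
import random

def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iters=1000):
    """Minimize `objective` over a dim-dimensional box [lo, hi]^dim."""
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                 # pick from harmony memory
                x = random.choice(memory)[d]
                if random.random() < par:              # pitch adjustment
                    x += random.uniform(-bandwidth, bandwidth)
            else:                                      # random re-draw
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=lambda i: scores[i])
        s = objective(new)
        if s < scores[worst]:                          # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Example: minimize a toy quadratic standing in for an HMM fitness measure.
best, score = harmony_search(lambda v: sum(x * x for x in v), dim=4, bounds=(-1, 1))
```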

3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young; Shim, Youn-Sook; Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.27 no.6 / pp.618-631 / 2000
  • In this paper, we suggest a method of 3D facial synthesis using the motion of 2D facial images. We use an optical flow-based method to estimate motion: parameterized motion vectors are extracted from the optical flow between adjacent images in the sequence in order to estimate the facial features and facial motion in 2D image sequences. We then combine the parameters of the parameterized motion vectors to estimate facial motion information, using parameterized vector models tailored to the facial features: an eye-area model, a lip-eyebrow-area model, and a face-area model. By combining the 2D facial motion information with the action units of a 3D facial model, we synthesize and animate the 3D facial model. (A minimal sketch of region-wise optical flow estimation follows this entry.)

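A minimal sketch (OpenCV) of region-wise optical flow: dense flow between consecutive frames, averaged over assumed facial-region boxes to yield one motion vector per region, in the spirit of the paper's parameterized motion vectors. The frame files and region boxes are stand-ins.

```python
import cv2
import numpy as np

def region_motion(prev_gray, next_gray, regions):
    """regions: dict name -> (x, y, w, h); returns mean (dx, dy) per region."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    vectors = {}
    for name, (x, y, w, h) in regions.items():
        patch = flow[y:y + h, x:x + w]          # (h, w, 2) dx/dy field
        vectors[name] = patch.reshape(-1, 2).mean(axis=0)
    return vectors

# Example with two assumed frame files and two assumed region boxes.
prev_gray = cv2.cvtColor(cv2.imread("frame0.png"), cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(cv2.imread("frame1.png"), cv2.COLOR_BGR2GRAY)
print(region_motion(prev_gray, next_gray,
                    {"eyes": (60, 40, 120, 40), "mouth": (80, 140, 80, 40)}))
```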

A Study on the Relationship between Facial Expression and Action Unit (Manifold Learning을 통한 표정과 Action Unit 간의 상관성에 관한 연구)

  • Kim, Sunbin; Kim, Hyeoncheol
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.763-766 / 2018
  • Facial expressions are a powerful non-verbal means of conveying emotion between people, and facial expression recognition is one of the most important areas of machine learning. The machine learning models used for expression recognition show human-level performance; despite this, they provide no grounds or explanation for their recognition results. To use the Action Units (AUs) of the Facial Action Coding System as a basis for explaining expression recognition, this study trains a Convolutional Neural Network (CNN) for expression recognition on the CK+ dataset, visualizes the features it extracts using t-distributed stochastic neighbor embedding (t-SNE), and then examines how the distribution of the recognized expressions relates to the AUs. (A minimal t-SNE sketch follows this entry.)
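
A minimal sketch (scikit-learn) of the visualization step this entry describes: embedding CNN features with t-SNE and coloring points by AU label. Random arrays stand in for the CNN features and AU labels.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 128))       # stand-in for CNN penultimate features
au_labels = rng.integers(0, 5, size=300)     # stand-in for dominant AU ids

# Project the 128-D features to 2-D for inspection.
embedded = TSNE(n_components=2, perplexity=30, init="pca",
                random_state=0).fit_transform(features)

plt.scatter(embedded[:, 0], embedded[:, 1], c=au_labels, cmap="tab10", s=8)
plt.title("t-SNE of CNN features, colored by AU")
plt.savefig("tsne_au.png")
```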

Action Unit Based Facial Features for Subject-independent Facial Expression Recognition (인물에 독립적인 표정인식을 위한 Action Unit 기반 얼굴특징에 관한 연구)

  • Lee, Seung Ho; Kim, Hyung-Il; Park, Sung Yeong; Ro, Yong Man
    • Proceedings of the Korea Information Processing Society Conference / 2015.04a / pp.881-883 / 2015
  • In practical expression recognition applications, performance often degrades because the subjects that appear at test time are frequently absent from the training data. This paper proposes facial features for subject-independent expression recognition. The proposed method uses, as expression features, geometric information based on facial muscle movements (Action Units (AUs)) that are common across subjects; the influence of a subject's unique identity is thereby reduced while expression-related information is emphasized. In subject-independent expression recognition experiments, the method achieved a high recognition rate of 86% and a very fast classification speed of 3.5 ms per test video sequence (measured in Matlab). (A minimal sketch of AU-inspired geometric features follows this entry.)
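
A minimal sketch of AU-inspired geometric features: distances tied to muscle movements (brow raise, mouth opening) normalized by face scale so that identity cues are suppressed. The landmark indices assume the common 68-point convention and are illustrative, not the paper's exact feature set.

```python
import numpy as np

def au_geometric_features(landmarks):
    """landmarks: (68, 2) array of facial landmark coordinates."""
    def dist(i, j):
        return float(np.linalg.norm(landmarks[i] - landmarks[j]))

    face_scale = dist(0, 16)                 # jaw-end width, for normalization
    return np.array([
        dist(19, 37) / face_scale,           # left brow to eye  (AU1/AU2-like)
        dist(24, 44) / face_scale,           # right brow to eye
        dist(51, 57) / face_scale,           # mouth opening     (AU25/AU26-like)
        dist(48, 54) / face_scale,           # mouth width       (AU12/AU20-like)
    ])
```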

Implementation of Facial Robot 3D Simulator For Dynamic Facial Expression (동적 표정 구현이 가능한 얼굴 로봇 3D 시뮬레이터 구현)

  • Kang, Byung-Kon; Kang, Hyo-Seok; Kim, Eun-Tai; Park, Mig-Non
    • Proceedings of the IEEK Conference / 2008.06a / pp.1121-1122 / 2008
  • Using FACS (Facial Action Coding System) and linear interpolation, a 3D facial robot simulator is developed in this paper. The simulator is based on a real facial robot and synchronizes with it through a unified protocol. Using the AUs (Action Units) of each of the five basic expressions together with linear interpolation produces a wider variety of dynamic facial expressions. (A minimal sketch of AU-vector interpolation follows this entry.)

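A minimal sketch of the interpolation step this entry describes: linearly blending between AU activation vectors of two expressions to generate intermediate, dynamic frames. The AU choices and intensity values are made-up illustrations.

```python
import numpy as np

# AU activation vectors: [AU1, AU2, AU4, AU12, AU25] (values assumed).
NEUTRAL  = np.zeros(5)
HAPPY    = np.array([0.0, 0.0, 0.0, 0.9, 0.4])  # lip corner pull + lips part
SURPRISE = np.array([0.8, 0.7, 0.0, 0.0, 0.9])  # brows raised + lips part

def interpolate(start, end, steps):
    """Yield linearly interpolated AU vectors from start to end."""
    for t in np.linspace(0.0, 1.0, steps):
        yield (1.0 - t) * start + t * end

for frame in interpolate(NEUTRAL, HAPPY, steps=5):
    print(np.round(frame, 2))   # each frame would be sent to the robot/simulator
```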

Prompt Tuning for Facial Action Unit Detection in the Wild

  • Vu Ngoc Tu; Huynh Van Thong; Aera Kim; Soo-Hyung Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.732-734 / 2023
  • The facial action unit (AU) detection problem focuses on identifying the various fine-grained movement units appearing on the human face, as defined by the Facial Action Coding System; it constitutes a fine-grained classification problem and a challenging task in computer vision. In this study, we propose a Prompt Tuning approach to address the problem, involving a 2-step training process. Our method demonstrates its effectiveness on the Affective in the Wild dataset, surpassing other existing methods in terms of both accuracy and efficiency. (A minimal prompt-tuning sketch follows this entry.)
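
A minimal sketch (PyTorch) of the general prompt-tuning idea: a frozen pretrained encoder with a small set of learnable prompt tokens prepended to its input, where only the prompts and the AU head are trained. The encoder, dimensions, and readout here are stand-ins, not the paper's model or its 2-step procedure.

```python
import torch
import torch.nn as nn

class PromptTunedAUDetector(nn.Module):
    def __init__(self, encoder, dim=256, num_prompts=8, num_aus=12):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():    # freeze the pretrained weights
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.head = nn.Linear(dim, num_aus)

    def forward(self, tokens):                 # tokens: (B, T, dim)
        b = tokens.size(0)
        prompts = self.prompts.unsqueeze(0).expand(b, -1, -1)
        h = self.encoder(torch.cat([prompts, tokens], dim=1))
        return self.head(h[:, 0])              # read out from first prompt token

# A small transformer stands in for the pretrained backbone.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True),
    num_layers=2)
model = PromptTunedAUDetector(encoder)
logits = model(torch.randn(2, 16, 256))        # (2, num_aus) AU logits
```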

The Accuracy of Recognizing Emotion From Korean Standard Facial Expression (한국인 표준 얼굴 표정 이미지의 감성 인식 정확률)

  • Lee, Woo-Ri; Whang, Min-Cheol
    • The Journal of the Korea Contents Association / v.14 no.9 / pp.476-483 / 2014
  • The purpose of this study was to produce suitable images of Korean emotional expressions. KSFI (Korean Standard Facial Image) AUs were produced from the standard Korean appearance and FACS (Facial Action Coding System) AUs. To ensure the objectivity of the KSFI, a survey examined the emotion recognition rate and the contribution of individual facial elements to emotion recognition for images of the six basic emotional expressions (sadness, happiness, disgust, fear, anger, and surprise). The images of happiness, surprise, sadness, and anger showed higher accuracy, and the emotion recognition rate was determined mainly by the eyes and the mouth. Based on these results, KSFI content that can be composed from AU images is proposed; in the future, the KSFI should help improve emotion recognition rates.