• Title/Abstract/Keywords: facial action unit detection

Search results: 4 (processing time: 0.022 s)

Facial Action Unit Detection with Multilayer Fused Multi-Task and Multi-Label Deep Learning Network

  • He, Jun; Li, Dongliang; Sun, Bo; Yu, Lejun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 11 / pp.5546-5559 / 2019
  • Facial action units (AUs) have recently drawn increased attention because they can be used to recognize facial expressions. A variety of methods have been designed for frontal-view AU detection, but few can handle multi-view face images. In this paper we propose a method for multi-view facial AU detection using a fused multilayer, multi-task, and multi-label deep learning network. The network performs two tasks: AU detection, which is a multi-label problem, and facial view detection, which is a single-label problem. A residual network and multilayer fusion are applied to obtain more representative features. Experiments show the method is effective: the F1 score on FERA 2017 is 13.1% higher than the baseline, and facial view recognition accuracy reaches 0.991, indicating that the multi-task, multi-label model performs well on both tasks.
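The multi-task objective described above pairs a multi-label AU head (independent sigmoids, since several AUs can be active at once) with a single-label view head (softmax over mutually exclusive views). A minimal NumPy sketch of how such a combined loss could look, with toy logits and label counts that are illustrative assumptions rather than the paper's configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multi_label_bce(logits, targets):
    # AU head: one independent sigmoid per AU (labels are not exclusive)
    p = sigmoid(logits)
    return float(-np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p)))

def single_label_ce(logits, target_idx):
    # View head: softmax cross-entropy over mutually exclusive facial views
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[target_idx])

# Toy logits for 3 hypothetical AUs and 3 hypothetical views
au_logits = np.array([2.0, -1.0, 0.5])
au_labels = np.array([1.0, 0.0, 1.0])
view_logits = np.array([0.2, 3.1, -0.5])
view_label = 1  # e.g. frontal

# Multi-task loss: sum of the per-task losses (equal weighting assumed)
total_loss = multi_label_bce(au_logits, au_labels) + single_label_ce(view_logits, view_label)
print(total_loss > 0)
```

In a real network both heads would share the fused residual features, so gradients from each task shape the common representation.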

Improved Two-Phase Framework for Facial Emotion Recognition

  • Yoon, Hyunjin; Park, Sangwook; Lee, Yongkwi; Han, Mikyong; Jang, Jong-Hyun
    • ETRI Journal / Vol. 37, No. 6 / pp.1199-1210 / 2015
  • Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received considerable attention in the last decade due to its wide variety of applications. Current computer-based automated two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in the multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework, in which the presence of AUs is detected by group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our emotion recognition framework consists of three major components: multiple AU detection, AU detection fusion, and AU-to-emotion mapping. Experimental results on two real-world face databases demonstrate improved performance over the previous two-phase method using a single AU detector, in terms of both AU detection accuracy and correct emotion recognition rate.
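The framework's last two components can be sketched as majority voting across several AU detectors followed by a rule-based AU-to-emotion mapping. The sketch below is a hypothetical simplification: the AU sets in `EMOTION_RULES` are illustrative FACS-style rules, not the paper's actual fusion or mapping method:

```python
# Stage 1: fuse per-AU decisions from multiple detectors by majority vote
def fuse_by_vote(detector_outputs):
    # detector_outputs: list of dicts {AU name: bool}, one dict per detector
    n = len(detector_outputs)
    return {au: sum(d[au] for d in detector_outputs) > n / 2
            for au in detector_outputs[0]}

EMOTION_RULES = {                       # simplified, illustrative AU sets
    "happiness": {"AU6", "AU12"},
    "surprise":  {"AU1", "AU2", "AU5", "AU26"},
}

# Stage 2: map the fused set of active AUs to the best-covered emotion
def aus_to_emotion(active_aus):
    best, best_score = "neutral", 0.0
    for emotion, required in EMOTION_RULES.items():
        score = len(required & active_aus) / len(required)
        if score > best_score:
            best, best_score = emotion, score
    return best

votes = [
    {"AU6": True,  "AU12": True,  "AU1": False},
    {"AU6": True,  "AU12": False, "AU1": False},
    {"AU6": False, "AU12": True,  "AU1": True},
]
fused = fuse_by_vote(votes)             # AU6 and AU12 win 2-of-3 votes
active = {au for au, on in fused.items() if on}
print(aus_to_emotion(active))           # prints "happiness"
```

The group decision damps the error of any single detector on AUs it handles poorly, which is the paper's stated motivation.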

Prompt Tuning for Facial Action Unit Detection in the Wild

  • ;;Kim, Aera; Kim, Soo-Hyung
    • Proceedings of the Korea Information Processing Society Conference / KIPS 2023 Spring Conference / pp.732-734 / 2023
  • Facial action unit (AU) detection aims to identify the fine-grained expression units on the human face defined by the Facial Action Coding System, and thus constitutes a fine-grained classification problem that is challenging in computer vision. In this study, we propose a Prompt Tuning approach to this problem, involving a 2-step training process. Our method demonstrates its effectiveness on the Affective in the Wild dataset, surpassing other existing methods in terms of both accuracy and efficiency.

얼굴정렬과 AdaBoost를 이용한 얼굴 표정 인식 (Facial Expression Recognition using Face Alignment and AdaBoost)

  • Jeong, Kyeong-Jung; Choi, Jae-Sik; Jang, Gil-Jin
    • Journal of the Institute of Electronics and Information Engineers / Vol. 51, No. 11 / pp.193-201 / 2014
  • In this paper, we propose a method for recognizing human facial expressions in face images using face detection, face alignment, facial unit extraction, and AdaBoost learning, together with an effective recognition procedure. Face detection is first performed to locate the face region in the input image; the detected face is then aligned to a trained face model (face alignment), and the facial units representing the expression are extracted. The facial units proposed in this paper are a subset of the basic action units (AUs) used to express facial expressions, divided into eyebrow, eye, nose, and mouth regions, and AdaBoost learning is performed on these units to recognize the expression. Because facial units represent facial expressions more efficiently and reduce both training and test time, they are well suited to real-time applications. Experimental results show that the proposed expression recognition system achieves over 90% accuracy in a real-time environment.
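AdaBoost, as used above, combines weak classifiers (here, ones trained on individual facial-unit regions such as eyebrows, eyes, nose, and mouth) by reweighting samples each round. A sketch of one standard discrete AdaBoost round with {-1, +1} labels; the tiny 4-sample dataset is illustrative only:

```python
import numpy as np

def adaboost_round(h_pred, y, w):
    """One boosting round: weak predictions h_pred, labels y in {-1,+1}, weights w."""
    err = w[h_pred != y].sum() / w.sum()      # weighted error of this weak learner
    alpha = 0.5 * np.log((1.0 - err) / err)   # learner's weight in the final vote
    w_new = w * np.exp(-alpha * y * h_pred)   # boost weights of misclassified samples
    return alpha, w_new / w_new.sum()

y = np.array([+1, +1, -1, -1])
h = np.array([+1, -1, -1, -1])                # this weak learner misclassifies sample 1
w = np.full(4, 0.25)                          # start with uniform sample weights
alpha, w = adaboost_round(h, y, w)
print(alpha > 0, w[1] > w[0])                 # prints "True True"
```

After the round, the misclassified sample carries more weight, so the next weak learner (trained on another facial unit, say) focuses on it; the final classifier is the alpha-weighted vote of all rounds.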