• Title/Abstract/Keyword: Facial Detection

Search results: 377 items

Transposed Convolutional Layer 기반 Stacked Hourglass Network를 이용한 얼굴 특징점 검출에 관한 연구 (Facial Landmark Detection by Stacked Hourglass Network with Transposed Convolutional Layer)

  • 구정수;강호철
    • 한국멀티미디어학회논문지 / Vol. 24, No. 8 / pp. 1020-1025 / 2021
  • Face alignment is an important task in many human-centered applications, and facial landmark detection is one of its instrumental components. We introduce a stacked hourglass network with transposed convolutional layers for facial landmark detection: our method replaces nearest-neighbor upsampling with transposed convolutional layers. Compared to the stacked hourglass network with nearest-neighbor upsampling, our method achieves better facial landmark detection accuracy.
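
The contrast in this abstract is between a fixed, non-learnable upsampling step and a learnable one inside the hourglass decoder. Below is a minimal PyTorch sketch of that swap; the layer sizes and surrounding structure are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class NNUpsampleBlock(nn.Module):
    """Baseline hourglass upsampling: fixed nearest-neighbor interpolation."""
    def __init__(self, channels):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(self.up(x))

class TransposedConvBlock(nn.Module):
    """Variant described in the abstract: a learnable transposed convolution
    doubles the spatial resolution instead of fixed interpolation."""
    def __init__(self, channels):
        super().__init__()
        self.up = nn.ConvTranspose2d(channels, channels,
                                     kernel_size=4, stride=2, padding=1)

    def forward(self, x):
        return self.up(x)

x = torch.randn(1, 256, 16, 16)           # a low-resolution hourglass feature map
print(TransposedConvBlock(256)(x).shape)  # torch.Size([1, 256, 32, 32])
```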

Facial Action Unit Detection with Multilayer Fused Multi-Task and Multi-Label Deep Learning Network

  • He, Jun;Li, Dongliang;Bo, Sun;Yu, Lejun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 11 / pp. 5546-5559 / 2019
  • Facial action units (AUs) have recently drawn increased attention because they can be used to recognize facial expressions. A variety of methods have been designed for frontal-view AU detection, but few can handle multi-view face images. In this paper, we propose a method for multi-view facial AU detection using a fused multilayer, multi-task, and multi-label deep learning network. The network completes two tasks: AU detection, a multi-label problem, and facial view detection, a single-label problem. A residual network and multilayer fusion are applied to obtain more representative features. The method performs well: the F1 score on FERA 2017 is 13.1% higher than the baseline, and the facial view recognition accuracy is 0.991, showing that our multi-task, multi-label model achieves good performance on both tasks.
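
As a sketch of the two-headed design described above: AU detection is a multi-label output trained with a per-label sigmoid/BCE loss, while view detection is a single-label softmax head on the same residual features. PyTorch, the resnet18 trunk, and the counts of 12 AUs and 9 views are all illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiTaskAUNet(nn.Module):
    """Shared residual trunk with two heads: multi-label AU detection
    and single-label facial-view classification."""
    def __init__(self, num_aus=12, num_views=9):
        super().__init__()
        backbone = resnet18(weights=None)
        self.trunk = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        self.au_head = nn.Linear(512, num_aus)      # one logit per AU
        self.view_head = nn.Linear(512, num_views)  # one logit per view

    def forward(self, x):
        f = self.trunk(x).flatten(1)
        return self.au_head(f), self.view_head(f)

model = MultiTaskAUNet()
au_logits, view_logits = model(torch.randn(4, 3, 224, 224))
au_loss = nn.BCEWithLogitsLoss()(au_logits, torch.randint(0, 2, (4, 12)).float())
view_loss = nn.CrossEntropyLoss()(view_logits, torch.randint(0, 9, (4,)))
loss = au_loss + view_loss  # joint multi-task objective
```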

운전자 피로 감지를 위한 얼굴 동작 인식 (Facial Behavior Recognition for Driver's Fatigue Detection)

  • 박호식;배철수
    • 한국통신학회논문지 / Vol. 35, No. 9C / pp. 756-760 / 2010
  • In this paper, we propose a method to effectively recognize facial behaviors for driver fatigue detection. Facial behavior is expressed through facial features such as facial expression, head pose, gaze, and wrinkles. However, clearly distinguishing a single behavioral state from facial features alone is very difficult, because human behavior is complex and the face expressing it is too ambiguous to provide sufficient information. The proposed facial behavior recognition system first detects facial features with an infrared camera, performing eye detection, head-pose estimation, head-movement estimation, face tracking, and wrinkle detection, and represents the acquired features as Action Units (AUs) of the Facial Action Coding System (FACS). Based on the acquired AUs, a dynamic Bayesian network infers the probability of each state.
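
The abstract does not give the structure or parameters of the dynamic Bayesian network, so the following is only a minimal sketch of the inference step it describes: a two-state hidden variable (alert/fatigued) updated by forward filtering from AU observations, with made-up transition and likelihood values.

```python
import numpy as np

# Hidden state: 0 = alert, 1 = fatigued (assumed two-state DBN).
transition = np.array([[0.9, 0.1],   # P(next state | alert)
                       [0.2, 0.8]])  # P(next state | fatigued)

# Hypothetical per-frame likelihoods P(observed AUs | state); in the paper
# these would come from detected FACS AUs such as eye closure.
def au_likelihood(aus, state):
    p = 1.0
    for au, present in aus.items():
        p_on = {"AU43_eye_closure": (0.05, 0.6),
                "AU26_jaw_drop":    (0.10, 0.4)}[au][state]
        p *= p_on if present else (1.0 - p_on)
    return p

belief = np.array([0.95, 0.05])  # prior: driver starts alert
frames = [{"AU43_eye_closure": False, "AU26_jaw_drop": False},
          {"AU43_eye_closure": True,  "AU26_jaw_drop": False},
          {"AU43_eye_closure": True,  "AU26_jaw_drop": True}]

for aus in frames:
    predicted = transition.T @ belief                        # time update
    evidence = np.array([au_likelihood(aus, s) for s in (0, 1)])
    belief = predicted * evidence
    belief /= belief.sum()                                   # normalize
    print(f"P(fatigued) = {belief[1]:.3f}")
```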

A Study on Detecting Glasses in Facial Image

  • Jung, Sung-Gi;Paik, Doo-Won;Choi, Hyung-Il
    • 한국컴퓨터정보학회논문지 / Vol. 20, No. 12 / pp. 21-28 / 2015
  • In this paper, we propose a method for detecting glasses in facial images. The method combines, with a weighted sum, the results of two detectors: one based on facial component detection and one based on a glasses-frame candidate region. The facial component detector defines the detection probability of glasses according to which facial components are detected, while the candidate-region detector defines features of the glasses frame within the candidate region. The final decision is the weighted combination of both results. By raising the detection performance for glasses and sunglasses, for example at ATMs, the proposed method is expected to improve a security system's handling of facial accessories.
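
The fusion rule described here is a plain weighted sum of the two detectors' scores. A minimal sketch follows; the weights and decision threshold are illustrative, since the abstract does not report the paper's values.

```python
def fuse_glasses_scores(component_score, frame_score,
                        w_component=0.5, w_frame=0.5, threshold=0.5):
    """Combine the facial-component score and the glasses-frame candidate
    score with a weighted sum; declare glasses present above a threshold."""
    fused = w_component * component_score + w_frame * frame_score
    return fused, fused >= threshold

score, wears_glasses = fuse_glasses_scores(0.3, 0.8)
print(score, wears_glasses)  # 0.55 True
```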

Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • 제어로봇시스템학회:학술대회논문집 / ICCAS 2005 / pp. 2373-2378 / 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: an image processing stage, a facial feature extraction stage, and an emotion detection stage. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and histogram analysis. Features for emotion detection are then extracted from the facial components. In the emotion detection stage, a fuzzy classifier recognizes emotion from the extracted features. Experimental results show that the proposed algorithm detects emotion well.
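
The abstract names a fuzzy color filter as the first extraction step but gives no membership functions. The sketch below assumes simple triangular memberships over the Cr/Cb chrominance channels as a stand-in for the paper's actual filter.

```python
import numpy as np
import cv2

def triangular(x, lo, peak, hi):
    """Triangular fuzzy membership function, evaluated elementwise."""
    return np.clip(np.minimum((x - lo) / (peak - lo),
                              (hi - x) / (hi - peak)), 0.0, 1.0)

def skin_membership(bgr_image):
    """Fuzzy skin-color degree per pixel (hypothetical Cr/Cb ranges)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    return triangular(cr, 133, 155, 173) * triangular(cb, 77, 105, 127)

# Pixels whose membership exceeds a cut level form the candidate face region:
# mask = skin_membership(frame) > 0.5
```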


Harris Corner Detection for Eyes Detection in Facial Images

  • Navastara, Dini Adni;Koo, Kyung-Mo;Park, Hyun-Jun;Cha, Eui-Young
    • 한국정보통신학회:학술대회논문집 / 2013 Spring Conference / pp. 373-376 / 2013
  • Nowadays, eye detection is required as a first and most important step in several applications, such as eye tracking, face identification and recognition, facial expression analysis, and iris detection. This paper presents eye detection in facial images using Harris corner detection. First, Haar-like features are used to detect the face region in an image. To separate the eye region from the whole face region, a projection function is applied. In the last step, Harris corner detection locates the eyes. In the experiments, eye locations in both grayscale and color facial images were detected accurately and effectively.
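
The pipeline above (Haar-cascade face detection, a projection function to isolate the eye band, then Harris corners) maps directly onto standard OpenCV calls. The following is a sketch under that mapping, not the authors' code; the row-wise darkness projection is one plausible reading of the projection function.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("face.jpg")                  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    face = gray[y:y + h, x:x + w]

    # Projection function: row-wise mean intensity of the upper face.
    # Eye rows are comparatively dark, so take the darkest band.
    upper = face[: h // 2]
    eye_row = int(np.argmin(upper.mean(axis=1)))
    band = face[max(0, eye_row - h // 10): eye_row + h // 10]

    # Harris corner response over the eye band; corners cluster at the eyes.
    response = cv2.cornerHarris(np.float32(band), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > 0.01 * response.max())
    print(f"{len(xs)} corner points detected in the eye band")
```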


Improved Two-Phase Framework for Facial Emotion Recognition

  • Yoon, Hyunjin;Park, Sangwook;Lee, Yongkwi;Han, Mikyong;Jang, Jong-Hyun
    • ETRI Journal / Vol. 37, No. 6 / pp. 1199-1210 / 2015
  • Automatic emotion recognition based on facial cues, such as facial action units (AUs), has received huge attention in the last decade due to its wide variety of applications. Current computer-based automated two-phase facial emotion recognition procedures first detect AUs from input images and then infer target emotions from the detected AUs. However, more robust AU detection and AU-to-emotion mapping methods are required to deal with the error accumulation problem inherent in the multiphase scheme. Motivated by our key observation that a single AU detector does not perform equally well for all AUs, we propose a novel two-phase facial emotion recognition framework, where the presence of AUs is detected by group decisions of multiple AU detectors and a target emotion is inferred from the combined AU detection decisions. Our framework consists of three major components: multiple AU detection, AU detection fusion, and AU-to-emotion mapping. Experimental results on two real-world face databases demonstrate improved performance over the previous two-phase method with a single AU detector, in terms of both AU detection accuracy and correct emotion recognition rate.
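
The middle component, AU detection fusion, is a group decision over several detectors. The sketch below uses simple majority voting over per-detector probabilities and a toy AU-to-emotion rule; both are placeholders for whatever learned fusion and mapping the paper actually uses.

```python
import numpy as np

# Rows: detectors, columns: AUs. Hypothetical per-AU probabilities
# produced by three different AU detectors on one face image.
detector_probs = np.array([[0.9, 0.2, 0.7, 0.1],
                           [0.8, 0.4, 0.6, 0.3],
                           [0.7, 0.1, 0.4, 0.2]])

# Group decision: each detector votes, the majority decides per AU.
votes = detector_probs > 0.5
au_present = votes.sum(axis=0) > detector_probs.shape[0] // 2

# Toy AU-to-emotion mapping (illustrative only): AU indices 0 and 2
# together stand in for, e.g., AU6 + AU12 signalling happiness.
emotion = "happiness" if au_present[0] and au_present[2] else "other"
print(au_present, emotion)
```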

색 정보와 기하학적 위치관계를 이용한 얼굴 특징점 검출 (Detection of Facial Features Using Color and Facial Geometry)

  • 정상현;문인혁
    • 대한전자공학회:학술대회논문집 / Proceedings of the 2002 Summer Conference (4) / pp. 57-60 / 2002
  • Facial features are often used for human-computer interaction (HCI). This paper proposes a method to detect facial features using color and facial geometry information. The face region is first extracted using color information, and then the pupils are detected by applying a separability filter together with facial geometry constraints. The mouth is also extracted from the Cr (coded red) component. Experimental results show that the proposed detection method is robust to a wide range of facial variation in position, scale, color, and gaze.
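
The mouth step relies on the Cr channel, in which lips respond more strongly than skin. A minimal OpenCV sketch of that one step follows (Otsu thresholding and the lower-face constraint are assumptions; the separability filter for the pupils is omitted).

```python
import cv2

def mouth_mask_from_cr(bgr_face):
    """Extract a mouth candidate mask from the Cr (coded red) component."""
    ycrcb = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[..., 1]
    # Lips have high Cr relative to skin; Otsu picks the split automatically.
    _, mask = cv2.threshold(cr, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Geometric constraint in the paper's spirit: keep the lower face only.
    mask[: mask.shape[0] * 2 // 3] = 0
    return mask
```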


영상정보를 활용한 소셜 미디어상에서의 가짜 뉴스 탐지: 유튜브를 중심으로 (Fake News Detection on Social Media using Video Information: Focused on YouTube)

  • 장윤호;최병구
    • 한국정보시스템학회지:정보시스템연구 / Vol. 32, No. 2 / pp. 87-108 / 2023
  • Purpose: The main purpose of this study is to improve fake news detection performance by using video information, to overcome the limitations of extant text- and image-oriented studies that do not reflect the latest news consumption trend. Design/methodology/approach: This study collected video clips and related information, including news scripts, speakers' facial expressions, and video metadata, from YouTube to develop a fake news detection model. Based on the collected data, seven combinations of related information (scripts, video metadata, and facial expressions, used individually and in combination) were used as input for training and evaluation. The input data was analyzed using six models, such as support vector machine and deep neural network. The area under the curve (AUC) was used to evaluate the performance of the classification models. Findings: The AUC and accuracy values of the three-feature combination (scripts, video metadata, and facial expressions) were the highest in the logistic regression, naïve Bayes, and deep neural network models. This implies that fake news detection can be improved by using video information (video metadata and facial expressions). The sample size of this study was relatively small; the generalizability of the results would be enhanced with a larger sample.
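
The evaluation design, every non-empty combination of the three feature groups scored by AUC across classifiers, is easy to sketch with scikit-learn. The random arrays below merely stand in for the real script, metadata, and facial-expression features; logistic regression represents one of the six models.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200                                     # placeholder sample size
features = {"script": rng.normal(size=(n, 50)),
            "metadata": rng.normal(size=(n, 10)),
            "face": rng.normal(size=(n, 20))}
y = rng.integers(0, 2, size=n)              # 1 = fake news, 0 = real

# Evaluate every non-empty combination of the three feature groups (7 total).
names = list(features)
for r in range(1, len(names) + 1):
    for combo in combinations(names, r):
        X = np.hstack([features[c] for c in combo])
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print(f"{'+'.join(combo):25s} AUC = {auc:.3f}")
```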

인간-로봇 상호작용을 위한 자세가 변하는 사용자 얼굴검출 및 얼굴요소 위치추정 (Face and Facial Feature Detection under Pose Variation of User Face for Human-Robot Interaction)

  • 박성기;박민용;이태근
    • 제어로봇시스템학회논문지 / Vol. 11, No. 1 / pp. 50-57 / 2005
  • We present a simple and effective method for detecting a face and its features under pose variation in a complex background, for human-robot interaction. The approach is flexible: it works on both color and gray facial images and can detect facial features in quasi real time. Based on the intensity characteristics of the neighborhoods of facial features, a new directional template for facial features is defined. Applying this template to the input image yields a novel edge-like blob map (EBM) with multiple intensity strengths. Regardless of the color information of the input image, we show that, using this map and conditions derived from facial characteristics, the locations of the face and its features (two eyes and a mouth) can be successfully estimated. Without information about the facial area boundary, the final candidate face region is determined by both the obtained feature locations and weighted correlation values with standard facial templates. Experimental results on many color images and well-known gray-level face database images confirm the usefulness of the proposed algorithm.
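
The edge-like blob map (EBM) is produced by applying a directional template tuned to the intensity pattern around facial features. The kernel below is a guess at such a dark-band-between-bright-bands template, since the paper's actual values are not given in the abstract.

```python
import cv2
import numpy as np

# Hypothetical directional template: facial features (eyes, mouth) appear
# as dark horizontal blobs with brighter skin above and below.
template = np.array([[ 1,  1,  1,  1,  1],
                     [-2, -2, -2, -2, -2],
                     [ 1,  1,  1,  1,  1]], dtype=np.float32)
template /= np.abs(template).sum()

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
ebm = cv2.filter2D(gray.astype(np.float32), ddepth=-1, kernel=template)

# Quantize the response into multiple intensity strengths, as the paper's
# EBM carries graded rather than binary edge evidence.
levels = np.digitize(ebm, bins=np.percentile(ebm, [50, 75, 90, 97]))
```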