• Title/Summary/Keyword: Facial Detection


A Facial Feature Detection using Light Compensation and Appearance-based Features (빛 보상과 외형 기반의 특징을 이용한 얼굴 특징 검출)

  • Kim Jin-Ok
    • Journal of Internet Computing and Services
    • /
    • v.7 no.3
    • /
    • pp.143-153
    • /
    • 2006
  • Facial feature detection is a basic technology in applications such as human-computer interfaces, face recognition, face tracking, and image database management. The speed of the feature detection algorithm is one of the main issues for facial feature detection in real-time environments. Factors such as lighting variation, location, rotation, and complex backgrounds reduce the detection ratio. A facial feature detection algorithm is proposed to improve both the detection ratio and the detection speed. The proposed algorithm detects skin regions over the entire image, enhanced by CLAHE, a light compensation algorithm for varying lighting conditions. To extract facial feature points from the detected skin regions, it uses appearance-based geometrical characteristics of the face. Since the method shows fast detection speed as well as an efficient face-detection ratio, it can be applied to real-time face tracking and face recognition.


The Clinical Study on Measurement of Foot Reflex Zone Acupoint Detection of Facial Paralysis Patients by Acupoints Detector (경혈탐측기를 이용한 말초성 안면신경마비환자의 족부반사구 변화에 대한 임상적 고찰)

  • Wang, Kai-Hsia;Lee, Eun-Sol;Hwang, Ji-Hoo;Kim, Yu-Jong;Kim, Kyung-Ho;Kim, Seung-Hyeon;Youn, In-Yae;Cho, Hyun-Seok
    • Journal of Acupuncture Research
    • /
    • v.29 no.1
    • /
    • pp.1-8
    • /
    • 2012
  • Objectives : We investigated the characteristics of the foot reflex zone acupoints of facial paralysis patients. Methods : To compare a facial nerve paralysis group with a non-paralysis group, we measured foot reflex zone acupoint detection in 18 patients diagnosed with facial nerve paralysis and 18 persons without it. Results : 1. Comparing the means of the foot reflex zone measurements, the facial nerve paralysis group differed significantly from the non-paralysis group (p < 0.05). 2. Detection measurements at the foot reflex zone acupoints hypophysis(垂體), nose(鼻), cerebrum(大腦), neck(頸項), trapezius muscle(僧帽筋), eye(眼), and ear(耳) differed significantly between the facial nerve paralysis group and the non-paralysis group (p < 0.05), whereas those at the trigeminal nerve(三叉神經), cerebellum(小腦), kidney(腎), ureter(輸尿管), and urinary bladder(膀胱) acupoints did not (p > 0.05). Conclusions : The results suggest that the foot reflex zone can be used in the diagnosis and treatment of facial nerve paralysis.

Detection of Face and Facial Features in Complex Background from Color Images (복잡한 배경의 칼라영상에서 Face and Facial Features 검출)

  • 김영구;노진우;고한석
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.69-72
    • /
    • 2002
  • Human face detection has many applications, such as face recognition, face or facial feature tracking, pose estimation, and expression recognition. We present a new method for automatic segmentation and face detection in color images. Skin color alone is usually not sufficient to detect faces, so we combine color segmentation with shape analysis. The algorithm consists of two stages. First, skin-color regions are segmented based on the chrominance component of the input image. Then, regions with an elliptical shape are selected as face hypotheses, which are verified by searching for facial features in their interior. Experimental results demonstrate successful detection over a wide variety of facial variations in scale, rotation, pose, and lighting conditions.

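The two-stage idea, chrominance-based skin segmentation followed by an elliptical-shape face hypothesis, can be sketched as follows. The RGB-to-YCrCb weights are the standard BT.601 ones; the skin thresholds and the aspect-ratio stand-in for the ellipse test are illustrative assumptions:

```python
import numpy as np

def rgb_to_crcb(rgb):
    """Chrominance components of YCrCb (ITU-R BT.601 weights)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128
    cb = (b - y) * 0.564 + 128
    return cr, cb

def face_hypothesis(rgb, ar_range=(0.6, 1.8)):
    """Sketch of the two stages: skin segmentation on chrominance, then
    an elliptical-shape check approximated here by the bounding-box
    aspect ratio of the skin region (thresholds are illustrative)."""
    cr, cb = rgb_to_crcb(rgb)
    skin = (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
    ys, xs = np.nonzero(skin)
    if xs.size == 0:
        return skin, False
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    return skin, ar_range[0] <= h / w <= ar_range[1]

# Demo: a skin-toned patch with a face-like height/width ratio
img = np.zeros((60, 60, 3), np.uint8)
img[10:50, 15:45] = (200, 150, 120)  # RGB skin tone
skin, is_face = face_hypothesis(img)
```

A real implementation would fit an ellipse to the region contour rather than use a bounding box, but the hypothesis-then-verify structure is the same.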

A Real-Time Facial Region Detection Algorithm Using the Major-Color Brightness Distribution of the Face in Video (동영상에서 얼굴의 주색상 밝기 분포를 이용한 실시간 얼굴영역 검출기법)

  • Choi, Mi-Young;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of Digital Contents Society
    • /
    • v.8 no.3
    • /
    • pp.329-339
    • /
    • 2007
  • In this paper, we present a facial region detection algorithm for real-time video with complex backgrounds and varying illumination, using both spatial and temporal methods. A human region is first detected by summing the edge-difference images between consecutive frames. The detected facial candidate region is then divided vertically into two objects, and non-facial regions, which lack the face's major color component, are eliminated by major-color-component analysis. The background is further removed using boundary information, and finally the facial region is detected through horizontal and vertical projections of the image. Experiments show that the proposed algorithm robustly detects facial regions in images with complex backgrounds and varying illumination.

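The temporal cue above, accumulating differences between consecutive frames to find the moving (human-candidate) region, can be sketched as follows. Plain frame differencing stands in for the paper's edge-difference images, and the threshold is illustrative:

```python
import numpy as np

def motion_mask(frames, thresh=15):
    """Sketch of the temporal step: sum absolute differences of
    consecutive grayscale frames and threshold the accumulated sum,
    yielding a rough moving-region mask. Threshold is illustrative."""
    frames = [f.astype(np.int16) for f in frames]
    acc = np.zeros_like(frames[0])
    for prev, cur in zip(frames, frames[1:]):
        acc += np.abs(cur - prev)
    return acc > thresh

# Synthetic sequence: a bright block shifting right by 2 px per frame
frames = []
for t in range(4):
    f = np.zeros((32, 32), np.uint8)
    f[8:24, 4 + 2 * t: 20 + 2 * t] = 200
    frames.append(f)
mask = motion_mask(frames)
```

Note that only the leading and trailing edges of the moving block register; the always-overlapping interior stays static, which is why the paper follows this step with color and projection analysis.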

Integral Regression Network for Facial Landmark Detection (얼굴 특징점 검출을 위한 적분 회귀 네트워크)

  • Kim, Do Yeop;Chang, Ju Yong
    • Journal of Broadcast Engineering
    • /
    • v.24 no.4
    • /
    • pp.564-572
    • /
    • 2019
  • With the development of deep learning, the performance of facial landmark detection methods has been greatly improved. The heat map regression method, which is a representative facial landmark detection method, is widely used as an efficient and robust method. However, the landmark coordinates cannot be directly obtained through a single network, and the accuracy is reduced in determining the landmark coordinates from the heat map. To solve these problems, we propose to combine integral regression with the existing heat map regression method. Through experiments using various datasets, we show that the proposed integral regression network significantly improves the performance of facial landmark detection.
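Integral regression replaces the non-differentiable argmax over the heat map with a softmax-weighted expectation of the pixel coordinates, so landmark coordinates come directly out of the network. A minimal sketch, with an illustrative sharpening factor beta:

```python
import numpy as np

def soft_argmax(heatmap, beta=10.0):
    """Integral (soft-argmax) regression: normalize the heat map with a
    softmax and take the expected (x, y) coordinate. The coordinate is a
    differentiable function of the heat map; beta is illustrative."""
    h, w = heatmap.shape
    p = np.exp(beta * (heatmap - heatmap.max()))  # stable softmax
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((p * xs).sum()), float((p * ys).sum())

# A Gaussian bump centred at (x=20, y=12) should regress back to ~(20, 12)
ys, xs = np.mgrid[0:32, 0:32]
hm = np.exp(-((xs - 20) ** 2 + (ys - 12) ** 2) / (2 * 2.0 ** 2))
x, y = soft_argmax(hm)
```

Unlike a hard argmax, the expectation also yields sub-pixel coordinates, which is the accuracy gain the abstract refers to.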

Human Emotion Recognition based on Variance of Facial Features (얼굴 특징 변화에 따른 휴먼 감성 인식)

  • Lee, Yong-Hwan;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology
    • /
    • v.16 no.4
    • /
    • pp.79-85
    • /
    • 2017
  • Understanding human emotion is highly important in interaction between humans and machine communication systems. The most expressive way to extract and recognize human emotion is facial expression analysis. This paper presents and implements an automatic scheme for extracting and recognizing facial expression and emotion from still images. The method has three main steps: (1) detection of facial areas with a skin-color method and feature maps; (2) creation of Bezier curves on the eye map and mouth map; and (3) classification of the emotion using the Hausdorff distance. To estimate the performance of the implemented system, we evaluate the success ratio on an emotional face image database commonly used in the field of facial analysis. The experimental results show an average success rate of 76.1% in classifying and distinguishing facial expression and emotion.

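Step (3) classifies expressions by comparing feature curves with the Hausdorff distance. For two point sets it can be computed as below, a minimal stand-in for the paper's classifier:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets (n, 2):
    the largest distance from any point in one set to its nearest
    neighbour in the other set."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Demo: a short curve and a copy shifted by 3 px vertically
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([0.0, 3.0])
d_same = hausdorff(a, a)  # identical sets
d_shift = hausdorff(a, b)  # uniform 3 px shift
```

In a classifier, the extracted Bezier-curve samples of a test face would be compared against template curves per emotion class, picking the class with the smallest distance.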

Detection of Facial Features in Color Images with Various Backgrounds and Face Poses

  • Park, Jae-Young;Kim, Nak-Bin
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.4
    • /
    • pp.594-600
    • /
    • 2003
  • In this paper, we propose a method for detecting facial features in color images with various backgrounds and face poses. The proposed method first extracts the face candidate region from images whose backgrounds contain skin-tone colors and complex objects, using the color and edge information of the face. Then, exploiting the elliptical shape of the face, we correct the rotation, scale, and tilt of the face region caused by various head poses. Finally, we verify the face using facial features and detect those features. Experimental results show that the detection accuracy is high and that the proposed method can be used effectively in pose-invariant face recognition systems.

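One standard way to implement the rotation correction described above is to estimate the tilt of the elliptical face region from the second-order central moments of its binary mask. This sketch is illustrative, not the paper's exact procedure:

```python
import numpy as np

def region_orientation(mask):
    """Orientation (radians) of a binary region from second-order
    central moments: the axis of the best-fitting ellipse. The region
    can then be rotated back upright before feature detection."""
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()
    mu20 = ((xs - xc) ** 2).mean()
    mu02 = ((ys - yc) ** 2).mean()
    mu11 = ((xs - xc) * (ys - yc)).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# Demo: an elongated band along the image diagonal (45 degrees)
ys, xs = np.mgrid[0:40, 0:40]
mask = np.abs(ys - xs) < 3
theta = region_orientation(mask)
```

The same moments also give the ellipse axis lengths (from the eigenvalues of the covariance), which covers the scale correction.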

Gaze Detection by Computing Facial Rotation and Translation (얼굴의 회전 및 이동 분석에 의한 응시 위치 파악)

  • Lee, Jeong-Jun;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.5
    • /
    • pp.535-543
    • /
    • 2002
  • In this paper, we propose a new gaze detection method using 2-D facial images captured by a camera on top of the monitor. We consider only facial rotation and translation, not eye movements. The proposed method computes the gaze point caused by facial rotation and the amount of facial translation separately, and combines the two to obtain the final gaze point on the monitor screen. The gaze point caused by facial rotation is detected by a neural network (a multi-layered perceptron) whose inputs are the 2-D geometric changes of the facial feature points, and the amount of facial translation is estimated by image processing algorithms in real time. Experimental results show an RMS error of about 2.11 inches between the computed and actual gaze positions when the distance between the user and a 19-inch monitor is about 50-70 cm. The processing time is about 0.7 seconds on a Pentium PC (233 MHz) with 320×240-pixel images.
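The paper maps 2-D geometric changes of the facial feature points to an on-screen gaze point with a multi-layered perceptron. As a purely illustrative stand-in for that learned mapping, the sketch below fits a linear least-squares regressor on synthetic feature displacements; all data and weights here are synthetic demo values:

```python
import numpy as np

# Synthetic "ground-truth" mapping from 3 feature displacements to a
# 2-D gaze point (hypothetical numbers for the demo only).
true_W = np.array([[12.0, 0.5],
                   [-0.3, 9.0],
                   [2.0, -1.0]])

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 3))  # feature-point displacement inputs
gaze = features @ true_W              # target gaze coordinates

# Fit the linear mapping by least squares and check reconstruction.
W, *_ = np.linalg.lstsq(features, gaze, rcond=None)
pred = features @ W
rms = float(np.sqrt(((pred - gaze) ** 2).mean()))
```

The paper's MLP plays the same role but can also capture the nonlinearity of perspective projection, which a linear map cannot.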

Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun;Park, Sung-Kee;Kim, Mun-Sang;Park, Mig-Non
Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2003.10a
    • /
    • pp.783-788
    • /
    • 2003
  • In this paper, we present a new approach to detecting and recognizing human faces in images from a vision camera mounted on a mobile robot platform. Due to the mobility of the camera platform, the obtained facial images are small and vary in pose, so the algorithm must cope with these constraints while detecting and recognizing faces in nearly real time. In the detection step, a 'coarse to fine' strategy is used. First, the region boundary including the face is roughly located by dual ellipse templates of facial color, and within this region the locations of the three main facial features, the two eyes and the mouth, are estimated. For this, simplified facial feature maps using characteristic chrominance are constructed, and candidate pixels are segmented into eye or mouth pixel groups. These candidate facial features are verified by checking whether the lengths and orientations of feature pairs are suitable for face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined that includes the feature triangle connecting the two eyes and the mouth. A random lattice line set is composed and laid over this convex hull area to represent its 2-D appearance. From these procedures, facial information of the detected face is obtained, and face DB images are processed in the same way for each person class. Based on the facial information of these areas, a distance measure of the match between lattice lines is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of the previous approach [15], make real-time face detection and recognition possible, and guarantee correct recognition regardless of some pose variation of the face. Their usefulness in mobile robot applications is demonstrated.

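The lattice-line appearance representation can be sketched as sampling gray values along line segments laid over the face region and comparing the resulting 1-D signatures between faces. The sampling scheme and distance below are illustrative assumptions, not the paper's exact measure:

```python
import numpy as np

def sample_line(gray, p0, p1, n=32):
    """Sample n gray values along the segment p0 -> p1 (points given as
    (row, col), nearest-neighbour sampling), producing a 1-D appearance
    signature for one lattice line."""
    ys = np.linspace(p0[0], p1[0], n).round().astype(int)
    xs = np.linspace(p0[1], p1[1], n).round().astype(int)
    return gray[ys, xs].astype(float)

def lattice_distance(g1, g2, lines):
    """Mean absolute difference between line signatures of two face
    images: a simple stand-in for the paper's lattice-match measure."""
    d = [np.abs(sample_line(g1, *ln) - sample_line(g2, *ln)).mean()
         for ln in lines]
    return float(np.mean(d))

# Demo: a horizontal-gradient "face" and a uniformly brighter copy
g1 = np.tile(np.arange(32, dtype=float), (32, 1))
g2 = g1 + 10
lines = [((0, 0), (31, 31)), ((0, 31), (31, 0))]
d_same = lattice_distance(g1, g1, lines)
d_diff = lattice_distance(g1, g2, lines)
```

Recognition then amounts to computing this distance against each person class in the DB and picking the minimum.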

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • Kim, Tae-Woo;Kang, Yong-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.3
    • /
    • pp.53-60
    • /
    • 2009
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is its primary technical issue. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to varying lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth, so that each facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach for real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.

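The locally-smooth-motion assumption that lets a feature be tracked with a Kalman filter can be sketched with a constant-velocity model over the state (x, y, vx, vy). The noise levels q and r below are illustrative choices, not the paper's:

```python
import numpy as np

def kalman_track(measurements, q=1e-2, r=0.1):
    """Constant-velocity Kalman filter for one facial feature point.
    State is (x, y, vx, vy); only the position is observed. q and r
    are illustrative process/measurement noise levels."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = 1.0                     # x += vx, y += vy
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                     # observe position only
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    track = []
    for z in measurements:
        x = F @ x                               # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                     # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)

# Feature moving at constant velocity (1, 0.5) px/frame with small noise
rng = np.random.default_rng(1)
true_pos = np.stack([np.arange(30) * 1.0, np.arange(30) * 0.5], axis=1)
meas = true_pos + rng.normal(scale=0.3, size=true_pos.shape)
est = kalman_track(meas)
err = float(np.linalg.norm(est[-1] - true_pos[-1]))
```

In the paper's setting, the filter's predicted position narrows the Gabor-space search window for each feature, which is where the speed and robustness gains come from.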