• Title/Summary/Keyword: Facial feature detection


Facial Region Detection using Neural Network and Geometrical Feature (신경회로망 및 기하학적 특징을 이용한 얼굴영역 검출)

  • 박상근;박영태
    • Proceedings of the Korean Information Science Society Conference
    • /
    • Korean Information Science Society 2003 Spring Conference Proceedings, Vol.30 No.1 (B)
    • /
    • pp.298-300
    • /
    • 2003
  • Various algorithms for detecting and recognizing human faces in video and still images have been introduced. In this paper, we use a method that extracts the face region by means of a neural network together with the eyes and mouth, which are among the geometrical features of the face. Although a neural network is a good method used in many fields including face recognition, it can produce many errors due to its inherent characteristics, so we propose a method that removes these errors by using the eyes and mouth, which are components of the face.

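The verification idea in the abstract above (neural-network face candidates checked against eye and mouth geometry) can be sketched as follows. This is a hedged illustration, not the authors' code: OpenCV Haar cascades stand in for the neural-network detector, and the geometric rule (at least two eyes in the upper half of the candidate) is an assumption.

```python
# Sketch: reject face candidates that lack plausible eye geometry.
# Haar cascades stand in for the paper's neural-network detector.
import cv2

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_verified_faces(gray):
    faces = []
    for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]
        eyes = eye_cc.detectMultiScale(roi, 1.1, 5)
        # Geometric check (assumption): at least two eyes in the upper half of the candidate.
        upper_eyes = [e for e in eyes if e[1] + e[3] / 2 < h / 2]
        if len(upper_eyes) >= 2:
            faces.append((x, y, w, h))
    return faces

img = cv2.imread("input.jpg")  # hypothetical input path
found = detect_verified_faces(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
print(found)
```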

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • Vol. 39, No. 3
    • /
    • pp.23-35
    • /
    • 2002
  • Gaze detection is the task of locating the position on a monitor screen at which a user is looking. In our work, we implement it with a computer vision system that places a single camera above the monitor while the user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we automatically locate the facial region and the facial features (both eyes, nostrils, and lip corners) in 2D camera images. From the feature points detected in the starting images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at a position on the monitor, the moved 3D positions of those features are computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D feature positions. In our experiments on a 19-inch monitor, the discrepancy between the computed gaze positions and the real ones is an RMS error of about 2.01 inches.
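The final step of the abstract above, computing the gaze point from the normal vector of the plane spanned by the moved 3D feature positions, reduces to a ray-plane intersection. The sketch below assumes the monitor lies in the plane z = 0 and uses made-up coordinates; it illustrates only the geometry, not the paper's calibration pipeline.

```python
import numpy as np

def gaze_point_on_monitor(p1, p2, p3, monitor_z=0.0):
    """Intersect the facial-plane normal (through its centroid) with the plane z = monitor_z."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)
    normal /= np.linalg.norm(normal)
    centroid = (p1 + p2 + p3) / 3.0
    t = (monitor_z - centroid[2]) / normal[2]   # parameter along the normal ray
    return centroid + t * normal                # 3D point on the monitor plane

# Hypothetical 3D positions (cm) of three tracked facial features after head movement.
print(gaze_point_on_monitor([-3.0, 1.0, 50.0], [3.0, 1.0, 52.0], [0.0, -4.0, 51.0]))
```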

Normalized Region Extraction of Facial Features by Using Hue-Based Attention Operator (색상기반 주목연산자를 이용한 정규화된 얼굴요소영역 추출)

  • 정의정;김종화;전준형;최흥문
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • Vol. 29, No. 6C
    • /
    • pp.815-823
    • /
    • 2004
  • A hue-based attention operator and a combinational integral projection function (CIPF) are proposed to extract normalized regions of the face and facial features robustly against illumination variation. Face candidate regions are efficiently detected by a skin color filter, and the eyes are located accurately and robustly against illumination variation by applying the proposed hue- and symmetry-based attention operator to the face candidate regions. The faces are then confirmed by verifying the eyes with a color-based eye variance filter. The proposed CIPF, which combines weighted hue and intensity, is applied to detect the exact vertical locations of the eyebrows and the mouth under illumination variations and in the presence of a mustache. The global face and its local feature regions are located and normalized based on this accurate geometrical information. Experimental results on the AR face database [8] show that the proposed eye detection method yields a detection rate about 39.3% higher than the conventional gray GST-based method. As a result, the normalized facial features can be extracted robustly and consistently based on the exact eye locations under illumination variations.
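An integral projection of the kind the CIPF builds on sums a weighted image along each row so that horizontal structures such as the eyebrows and mouth show up as extrema. The sketch below is a generic weighted vertical projection with an arbitrarily chosen hue/intensity weight `alpha`; it is not the authors' exact CIPF.

```python
import cv2
import numpy as np

def weighted_vertical_projection(bgr, alpha=0.5):
    """Row-wise projection of a weighted combination of hue and intensity."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.float32) / 179.0       # OpenCV hue range is 0..179
    intensity = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    combined = alpha * hue + (1.0 - alpha) * intensity
    return combined.sum(axis=1)                          # one value per image row

face = cv2.imread("face_region.png")                     # hypothetical cropped face region
proj = weighted_vertical_projection(face)
# Extrema of `proj` indicate candidate rows for the eyebrows and mouth.
print(int(np.argmin(proj)))
```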

Albedo Based Fake Face Detection (빛의 반사량 측정을 통한 가면 착용 위변조 얼굴 검출)

  • Kim, Young-Shin;Na, Jae-Keun;Yoon, Sung-Beak;Yi, June-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • Vol. 45, No. 6
    • /
    • pp.139-146
    • /
    • 2008
  • Masked fake face detection using ordinary visible images is a formidable task when the mask is accurately made with special makeup. Considering recent advances in special makeup technology, a reliable solution for detecting masked fake faces is essential to the development of a complete face recognition system. This research proposes a method for masked fake face detection that exploits the reflectance disparity due to object material and surface color. First, we show that the measurement of albedo can be simplified to a radiance measurement when a practical face recognition system is deployed in a user-cooperative environment; this allows the albedo to be obtained directly from grey values in the captured image. Second, we find that 850 nm infrared light is effective for discriminating between facial skin and mask material using this reflectance disparity, while 650 nm visible light is known to be suitable for distinguishing the facial skin colors of different ethnic groups. We use a 2D vector consisting of radiance measurements under 850 nm and 650 nm illumination as a feature vector. Facial skin and mask material show linearly separable distributions in this feature space. By employing FIB, we achieve 97.8% accuracy in fake face detection. Our method is applicable to faces of different skin colors and can easily be implemented in commercial face recognition systems.
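Since the abstract above states that facial skin and mask material form linearly separable clusters in the 2D (850 nm, 650 nm) radiance space, a simple linear classifier is enough to illustrate the decision step. The sketch below uses scikit-learn's LinearDiscriminantAnalysis on made-up measurements; both the data and the classifier choice are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical mean grey values measured under 850 nm and 650 nm illumination.
X = np.array([[120, 90], [125, 95], [118, 88],      # real skin
              [ 60, 92], [ 58, 96], [ 63, 90]])     # mask material
y = np.array([0, 0, 0, 1, 1, 1])                    # 0 = skin, 1 = mask

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict([[122, 91], [61, 93]]))            # -> [0 1]: skin, mask
```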

Facial Feature Detection Method within the Skewed Facial Images (기울어진 얼굴 영상에서 얼굴 구성 요소 추출 방법)

  • 김익환;송호근
    • Proceedings of the Korean Information Science Society Conference
    • /
    • Korean Information Science Society 2001 Fall Conference Proceedings, Vol.28 No.2 (2)
    • /
    • pp.436-438
    • /
    • 2001
  • In this paper, we propose a method for extracting facial components from skewed face images. The proposed method first extracts face candidate regions using skin color information. The YIQ color space is used, and to account for illumination changes the skin color range is partitioned into multiple levels; a color region is determined for each level and a hit ratio is computed to decide the face candidate region. In the second stage, the pupil regions, the most salient facial feature, are searched for by applying statistics of standard Korean faces, and the tilt of the face is estimated from the detected pupil coordinates. In the next stage, tilt correction is applied to the face candidate region, the facial components are located using horizontal and vertical projection values, and the minimal rectangle enclosing the face is defined. Finally, a gray-level standard template for the minimal face-enclosing rectangle is defined from a face image database, and a method for verifying the face region against the minimal enclosing rectangle found in the input image is proposed.

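The tilt estimation and correction step described above can be sketched in a few lines: the roll angle follows from the two pupil coordinates, and the image is rotated back about their midpoint. The pupil coordinates below are placeholders, and the skin-color and hit-ratio stages of the paper are not included.

```python
import cv2
import numpy as np

def deskew_face(img, left_eye, right_eye):
    """Rotate the image so the line through the pupils becomes horizontal."""
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))   # roll angle of the face
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    return cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))

img = cv2.imread("skewed_face.jpg")                    # hypothetical input path
corrected = deskew_face(img, left_eye=(110, 140), right_eye=(190, 120))
```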

Detection of Faces with Partial Occlusions using Statistical Face Model (통계적 얼굴 모델을 이용한 부분적으로 가려진 얼굴 검출)

  • Seo, Jeongin;Park, Hyeyoung
    • Journal of KIISE
    • /
    • Vol. 41, No. 11
    • /
    • pp.921-926
    • /
    • 2014
  • Face detection refers to the process of extracting facial regions from an input image; it can improve the speed and accuracy of recognition or authentication systems and has diverse applications. Because conventional works have tried to detect faces based on the whole shape of the face, their detection performance can be degraded by occlusions caused by accessories or parts of the body. In this paper, we propose a method combining local feature descriptors and probabilistic modeling in order to detect partially occluded faces effectively. In the training stage, we represent an image as a set of local feature descriptors and estimate a statistical model of normal faces. When a test image is given, we find the region that is most similar to a face according to the face model constructed in the training stage. Experimental results on a benchmark data set confirm the effectiveness of the proposed method for detecting partially occluded faces.
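A hedged sketch of the general idea, local descriptors pooled into a statistical model of unoccluded faces and used to score test regions, is given below. ORB descriptors and a single diagonal Gaussian are stand-ins; the paper's actual descriptor and density model may differ.

```python
import cv2
import numpy as np

orb = cv2.ORB_create()

def descriptors(gray):
    _, des = orb.detectAndCompute(gray, None)
    return np.float32(des) if des is not None else np.empty((0, 32), np.float32)

def fit_face_model(train_images):
    """Fit a diagonal Gaussian over local descriptors of unoccluded training faces."""
    all_des = np.vstack([descriptors(im) for im in train_images])
    return all_des.mean(axis=0), all_des.var(axis=0) + 1e-6

def region_score(gray_region, mean, var):
    """Average log-likelihood of the region's descriptors under the face model."""
    des = descriptors(gray_region)
    if len(des) == 0:
        return -np.inf
    ll = -0.5 * (((des - mean) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=1)
    return float(ll.mean())
```

A sliding-window search would then report the window with the highest `region_score` as the detected face, even when part of it is covered.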

A Study on Face Recognition Applying the spFACS ASM Algorithm to Image Objects (영상객체 spFACS ASM 알고리즘을 적용한 얼굴인식에 관한 연구)

  • Choi, Byungkwan
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • Vol. 12, No. 4
    • /
    • pp.1-12
    • /
    • 2016
  • Digital imaging technology has developed into a state-of-the-art IT convergence and composite industry that goes beyond the limits of the multimedia industry. In the field of smart object recognition in particular, various face-related application techniques have been actively studied in conjunction with mobile phones. Recently, face recognition has evolved, by way of object recognition, into intelligent video detection and recognition technology: the object detection and recognition process of image recognition is applied to IP cameras, and research that combines image object recognition with face recognition is active. In this paper, we first survey the technology trends of the required human-factor elements and then examine spFACS (Smile Progress Facial Action Coding System), a smile-detection scheme based on human object recognition in image recognition technology. The study 1) applies the ASM algorithm and suggests a way to effectively evaluate psychological research through image objects, and 2) demonstrates the effect of extracting feature points by applying the face recognition result to detect the tooth area according to the recognized facial expression.
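The smile-detection idea of checking whether teeth become visible inside the detected mouth region can be roughed out as below. The Haar smile cascade and the brightness threshold are assumptions for illustration only; the paper's pipeline uses ASM landmarks and spFACS coding rather than this shortcut.

```python
import cv2

smile_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

def tooth_ratio(gray_face):
    """Fraction of bright (tooth-like) pixels inside the first detected smile/mouth region."""
    mouths = smile_cc.detectMultiScale(gray_face, 1.7, 20)
    if len(mouths) == 0:
        return 0.0
    x, y, w, h = mouths[0]
    roi = gray_face[y:y + h, x:x + w]
    return float((roi > 200).mean())   # threshold 200 is an arbitrary assumption

# A ratio above some tuned value would be read as a visible-teeth smile.
```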

Parallel Multi-task Cascade Convolution Neural Network Optimization Algorithm for Real-time Dynamic Face Recognition

  • Jiang, Bin;Ren, Qiang;Dai, Fei;Zhou, Tian;Gui, Guan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 10
    • /
    • pp.4117-4135
    • /
    • 2020
  • Due to viewing angle, illumination, and scene diversity, real-time dynamic face detection and recognition is no small difficulty in unrestricted environments. In this study, we exploit the intrinsic correlation between detection and calibration, using a multi-task cascaded convolutional neural network (MTCNN) to improve the efficiency of face recognition. The output of each core network is mapped in parallel to a compact Euclidean space in which distance represents the similarity of facial features, so that the target face can be identified as quickly as possible without waiting for all network iterations to finish before producing recognition results. Even after the angle of the target face and the illumination change, good correlation between the recognition results is still obtained. In the actual application scenario, we use a multi-camera real-time monitoring system to perform face matching and recognition on successive frames acquired from different angles. The effectiveness of the method was verified by several real-time monitoring experiments, and good results were obtained.
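The matching step described above, mapping each detected face into a compact Euclidean space where distance measures similarity, amounts to a nearest-neighbor test on embeddings. In the sketch below the `embed` function is a trivial stand-in for the cascade network's output, and the distance threshold is arbitrary.

```python
import numpy as np

def embed(face_image):
    """Trivial stand-in embedding (flatten + L2-normalize).
    In the paper this vector is produced by the parallel MTCNN cascade;
    faces are assumed to be cropped to a common size."""
    v = np.asarray(face_image, dtype=np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def is_same_person(face_a, face_b, threshold=0.8):
    """Two faces match when their embeddings lie closer than an assumed threshold."""
    return float(np.linalg.norm(embed(face_a) - embed(face_b))) < threshold

def identify(probe, gallery):
    """Return the gallery identity whose embedding is nearest to the probe face."""
    e = embed(probe)
    return min(gallery, key=lambda name: np.linalg.norm(e - embed(gallery[name])))
```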

Face region detection algorithm of natural-image (자연 영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • Vol. 7, No. 1
    • /
    • pp.55-60
    • /
    • 2014
  • In this paper, we propose a method for face region extraction in natural images based on skin-color hue and saturation and on facial feature extraction. The proposed algorithm consists of a lighting correction step and a face detection step. The lighting correction step applies a correction function to compensate for lighting changes. The face detection step extracts skin-color areas by computing Euclidean distances between the input image and characteristic vectors of color and chroma obtained from 20 skin-color sample images. For the extracted candidate areas, eyes are detected using the C component of the CMY color model and the mouth is detected using the Q component of the YIQ color model. The face area is then determined from the candidate areas based on knowledge of the human face. In an experiment with 10 natural face images as input, the method showed a face detection rate of 100%.
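The color-component tests in the abstract above (the C component of CMY for the eyes, the Q component of YIQ for the mouth) can be computed directly from RGB with the standard conversion formulas, as sketched below; the thresholds applied afterwards are not specified here.

```python
import numpy as np

def cmy_c(rgb):
    """C component of CMY from an RGB image normalized to [0, 1]: C = 1 - R."""
    return 1.0 - rgb[..., 0]

def yiq_q(rgb):
    """Q component of YIQ from an RGB image normalized to [0, 1] (NTSC coefficients)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.211 * r - 0.523 * g + 0.312 * b

rgb = np.random.rand(120, 100, 3)        # placeholder for a face candidate region
c_map, q_map = cmy_c(rgb), yiq_q(rgb)    # eye / mouth evidence maps to be thresholded
```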

3D Facial Model Expression Creation with Head Motion (얼굴 움직임이 결합된 3차원 얼굴 모델의 표정 생성)

  • Kwon, Oh-Ryun;Chun, Jun-Chul;Min, Kyong-Pil
    • Proceedings of the HCI Society of Korea Conference
    • /
    • HCI Society of Korea 2007 Conference, Part 1
    • /
    • pp.1012-1018
    • /
    • 2007
  • In this paper, we propose a vision-based system for automatically generating expressions of a 3D face model. Existing research on 3D facial animation has focused on expression generation while excluding the motion estimation that represents head movement, and facial motion estimation and expression control have been studied independently. The proposed expression generation system consists of face detection, facial motion estimation, and expression control. Face detection comprises face candidate region detection and face region detection: the face candidate region is detected using an HT color model, and the face region is then detected from the candidate region through PCA transformation and template matching. Facial motion estimation and expression control are performed on the detected face region. The facial motion is estimated using the projection of a 3D cylinder model and the LK algorithm, and the estimated result is applied to the 3D face model; image correction additionally makes the motion estimation robust. To generate expressions of the face model, a feature-point-based method is applied, generating the expression from 12 facial feature points. Feature points around the eyebrows, eyes, and mouth are detected using structural information of the face and template matching, and are tracked with the LK algorithm. Because the positions of the tracked feature points combine facial motion information and expression information, a geometric transformation is used to obtain the animation parameters, that is, the displacements the feature points would have if the face were frontal. The control points of the face model are moved according to the animation parameters, and the surrounding vertices are deformed by RBF interpolation. Facial expressions are generated from the deformed face model, and by applying the motion estimation result to the model, expressions of the 3D face model combined with head motion are generated.

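Two building blocks named above, LK tracking of the facial feature points and RBF interpolation of the mesh deformation, have direct counterparts in OpenCV and SciPy. The sketch below shows each in isolation on placeholder data; it is not the paper's full animation pipeline.

```python
import cv2
import numpy as np
from scipy.interpolate import RBFInterpolator

def track_points(prev_gray, next_gray, pts):
    """Track 2D feature points between frames with pyramidal Lucas-Kanade."""
    pts = np.float32(pts).reshape(-1, 1, 2)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    return new_pts.reshape(-1, 2), status.ravel().astype(bool)

def deform_vertices(vertices, control_pts, control_disp):
    """Propagate control-point displacements to all mesh vertices via RBF interpolation."""
    rbf = RBFInterpolator(control_pts, control_disp)   # thin-plate spline by default
    return vertices + rbf(vertices)

# Placeholder 3D data: 12 control points with small displacements deform 500 vertices.
ctrl = np.random.rand(12, 3)
disp = 0.01 * np.random.randn(12, 3)
verts = np.random.rand(500, 3)
deformed = deform_vertices(verts, ctrl, disp)
```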