• Title/Summary/Keyword: Facial Feature

Real-Time Face Avatar Creation and Warping Algorithm Using Local Mean Method and Facial Feature Point Detection

  • Lee, Eung-Joo; Wei, Li
    • Journal of Korea Multimedia Society / v.11 no.6 / pp.777-786 / 2008
  • A human face avatar is important information nowadays, for example for representing real people in a virtual world. In this paper, we present a face avatar creation and warping algorithm based on facial feature analysis. To detect facial features, we use a local mean method based on facial feature appearance and facial geometric information, and detect facial candidates using their characteristics in the $YC_bC_r$ color space. We also define rules based on facial geometric information to limit the search range. To analyze the face, we describe it with facial feature points and analyze the geometric relationships among these points to create the face avatar. We carried out simulations on a PC and on embedded mobile devices such as a PDA and a mobile phone to evaluate the efficiency of the proposed algorithm. The simulation results confirm that the proposed algorithm performs well and that its execution speed is acceptable.
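
The candidate-detection step described in this abstract lends itself to a short illustration. The sketch below is an assumption of mine, not the authors' code: it marks skin-coloured pixels in the $YC_bC_r$ space using commonly cited chrominance bounds; the paper's local mean method and its actual thresholds are not reproduced.

```python
# Illustrative sketch: skin-colour candidate detection in YCbCr space.
# The threshold values are commonly cited ranges, not the paper's parameters.
import cv2
import numpy as np

def skin_candidates(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of skin-coloured pixels in a BGR image."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)  # OpenCV stores Y, Cr, Cb
    # Illustrative chrominance bounds for skin; luminance Y is left unconstrained.
    lower = np.array([0, 133, 77], dtype=np.uint8)    # (Y, Cr, Cb)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological cleanup so candidate regions form contiguous blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

# Example usage (hypothetical file name):
# candidates = skin_candidates(cv2.imread("face.jpg"))
```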

Learning Directional LBP Features and Discriminative Feature Regions for Facial Expression Recognition (얼굴 표정 인식을 위한 방향성 LBP 특징과 분별 영역 학습)

  • Kang, Hyunwoo; Lim, Kil-Taek; Won, Chulho
    • Journal of Korea Multimedia Society / v.20 no.5 / pp.748-757 / 2017
  • In order to recognize facial expressions, good features that can express them are essential, as is finding the characteristic regions where expressions appear discriminatively. In this study, we propose a directional LBP feature for facial expression recognition and a method of finding the directional LBP operation and feature regions for expression classification. The proposed directional LBP features, which characterize fine facial micro-patterns, are defined by LBP operation factors (direction and size of the operation mask) and by feature regions learned through AdaBoost. The facial expression classifier is implemented as an SVM classifier based on the learned discriminative regions and directional LBP operation factors. To verify the validity of the proposed method, facial expression recognition performance was measured in terms of accuracy, sensitivity, and specificity. Experimental results show that the proposed directional LBP and its learning method are useful for facial expression recognition.
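
As an illustration of the building block behind the proposed feature, the sketch below computes the standard 8-neighbour LBP code and a region histogram; the directional operation factors and the AdaBoost-selected regions that constitute the paper's contribution are not reproduced here.

```python
# Illustrative sketch: the basic 3x3 LBP operator and a region histogram.
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """Standard 3x3 LBP code for each interior pixel of a grayscale image."""
    g = gray.astype(np.int32)
    H, W = g.shape
    center = g[1:H - 1, 1:W - 1]
    # Eight neighbours ordered clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes.astype(np.uint8)

def lbp_histogram(region: np.ndarray) -> np.ndarray:
    """Normalised 256-bin LBP histogram used as the texture feature of a region."""
    hist, _ = np.histogram(lbp_image(region), bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)
```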

Detection of Facial Region and Features from Color Images Based on Skin Color and Deformable Model (스킨 컬러와 변형 모델에 기반한 컬러영상으로부터의 얼굴 및 얼굴 특성영역 추출)

  • 민경필; 전준철; 박구락
    • Journal of Internet Computing and Services / v.3 no.6 / pp.13-24 / 2002
  • This paper presents an automatic approach to detecting the face and facial features from face images based on color information and a deformable model. Skin color information has been widely used for face and facial feature detection since it is effective for object recognition and has a low computational burden. In this paper, we propose a method that compensates for varying lighting conditions and uses the transformed YCbCr color model to detect candidate regions of the face and facial features in color images. The detected face and facial feature areas are then used as the initial condition of an active contour model to extract optimal boundaries of the face and facial features, resolving the initial boundary problem that arises when an active contour is used. The experimental results show the efficiency of the proposed method. The extracted face and facial feature information can be used for face recognition and as a facial feature descriptor.
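
To make the active-contour step concrete, the following sketch initialises a circular snake from a skin-colour candidate mask and refines it with scikit-image's generic snake; the energy weights and initialisation are illustrative assumptions, not the authors' settings.

```python
# Illustrative sketch: initialise an active contour from a skin-colour blob.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def refine_face_boundary(gray: np.ndarray, face_mask: np.ndarray) -> np.ndarray:
    """Initialise a circular snake around the skin-colour blob and refine it."""
    rows, cols = np.nonzero(face_mask)
    center_r, center_c = rows.mean(), cols.mean()
    # A circle slightly larger than the candidate blob, so the snake shrinks onto the face edge.
    radius = 0.6 * max(rows.max() - rows.min(), cols.max() - cols.min())
    s = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([center_r + radius * np.sin(s),
                            center_c + radius * np.cos(s)])
    # Smooth the image first so the snake is attracted to strong face edges.
    snake = active_contour(gaussian(gray, 3, preserve_range=False),
                           init, alpha=0.015, beta=10.0, gamma=0.001)
    return snake  # (N, 2) array of (row, col) boundary points
```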

A 3D Face Reconstruction Method Robust to Errors of Automatic Facial Feature Point Extraction (얼굴 특징점 자동 추출 오류에 강인한 3차원 얼굴 복원 방법)

  • Lee, Youn-Joo; Lee, Sung-Joo; Park, Kang-Ryoung; Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.1 / pp.122-131 / 2011
  • The 3D morphable shape model, a widely used single-image-based 3D face reconstruction method, reconstructs an accurate 3D facial shape when 2D facial feature points are correctly extracted from an input face image. However, when a user's cooperation is not available, as in a real-time 3D face reconstruction system, this method can be vulnerable to errors in automatic facial feature point extraction. To solve this problem, we automatically classify the extracted facial feature points into two groups, erroneous and correct ones, and then reconstruct the 3D facial shape using only the correctly extracted points. The experimental results showed that the 3D reconstruction performance of the proposed method was remarkably improved compared to that of the previous method, which does not consider the errors of automatic facial feature point extraction.
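
The idea of separating erroneous from correct feature points can be illustrated with a much simplified 2D linear shape model (the paper itself fits a 3D morphable model): fit the model to all detected points, flag large-residual points as erroneous, and refit with the rest. The function and threshold below are hypothetical.

```python
# Simplified 2D illustration of discarding erroneous feature points before a
# shape-model fit; not the paper's 3D morphable model pipeline.
import numpy as np

def robust_shape_fit(points, mean_shape, basis, threshold=5.0):
    """points, mean_shape: (2N,) stacked x/y coords; basis: (2N, K) PCA shape modes."""
    def fit(active):
        # Least-squares shape coefficients using only the active coordinates.
        A, b = basis[active], (points - mean_shape)[active]
        coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
        return coeffs

    active = np.ones(points.shape[0], dtype=bool)
    coeffs = fit(active)                                  # first fit with all points
    residuals = np.abs(points - (mean_shape + basis @ coeffs))
    active = residuals < threshold                        # flag large residuals as erroneous
    coeffs = fit(active)                                  # refit with correct points only
    return mean_shape + basis @ coeffs, active
```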

Automatic 3D Facial Movement Detection from Mirror-reflected Multi-Image for Facial Expression Modeling (거울 투영 이미지를 이용한 3D 얼굴 표정 변화 자동 검출 및 모델링)

  • Kyung, Kyu-Min; Park, Mignon; Hyun, Chang-Ho
    • Proceedings of the KIEE Conference / 2005.05a / pp.113-115 / 2005
  • This paper presents a method for 3D modeling of facial expressions from frontal and mirror-reflected multi-images. Since the proposed system uses only one camera, two mirrors, and simple mirror properties, it is robust, accurate, and inexpensive. In addition, it avoids the problem of synchronizing data among different cameras. Mirrors located near the cheeks reflect the side views of markers on the face. To optimize the system, we select facial feature points closely associated with human emotions, referring to the FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). We place colored dot markers on the selected feature points to detect facial deformation while the subject makes various expressions. Before computing the 3D coordinates of the extracted facial feature points, we group the points according to their facial part, which makes the matching process automatic. We experimented on about twenty Korean subjects in their late twenties and early thirties. Finally, we verify the performance of the proposed method by simulating an animation of 3D facial expressions.
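
The mirror property the method relies on can be written down directly: a planar mirror acts as a Householder reflection, so the mirrored view behaves like a second, perfectly synchronised virtual camera. The plane parameters in this sketch are illustrative assumptions, not calibrated values from the paper.

```python
# Illustrative sketch: reflecting 3D points across a mirror plane.
import numpy as np

def reflect_across_plane(points: np.ndarray, n: np.ndarray, d: float) -> np.ndarray:
    """Reflect (N, 3) points across the plane n·x = d (n is normalised internally)."""
    n = n / np.linalg.norm(n)
    H = np.eye(3) - 2.0 * np.outer(n, n)          # Householder reflection matrix
    return points @ H.T + 2.0 * d * n             # reflect about the plane through the origin, then shift

# Example: a mirror angled beside the cheek (hypothetical plane and marker).
mirror_normal = np.array([1.0, 0.0, 1.0])
marker = np.array([[0.02, 0.05, 0.60]])           # a face marker position in metres
virtual_marker = reflect_across_plane(marker, mirror_normal, d=0.15)
```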

Facial Feature Extraction using Genetic Algorithm from Original Image (배경영상에서 유전자 알고리즘을 이용한 얼굴의 각 부위 추출)

  • 이형우; 이상진; 박석일; 민홍기; 홍승홍
    • Proceedings of the IEEK Conference / 2000.06d / pp.214-217 / 2000
  • Many studies have recently been performed on human recognition and coding schemes. In this context, we propose an automatic facial feature extraction algorithm with two main steps: evaluating the face region in an original background image, such as an office scene, and extracting the facial features from the evaluated face region. In the first step, a genetic algorithm is adopted to search for the face region in backgrounds such as offices and households; in the second step, template matching is used to extract the facial features. The proposed algorithm extracts facial features quickly and accurately.
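
A compact sketch of the first step is given below: a genetic algorithm searches candidate face boxes over a binary (0/1) skin mask, using the skin-pixel ratio inside each box as the fitness. The chromosome encoding, operators, and fitness function are illustrative assumptions, not the authors' design.

```python
# Illustrative sketch: GA search for a face bounding box over a binary skin mask.
import numpy as np

rng = np.random.default_rng(0)

def skin_ratio(mask: np.ndarray, box) -> float:
    """Fitness: fraction of skin pixels inside the candidate box (x, y, w, h)."""
    x, y, w, h = box
    region = mask[y:y + h, x:x + w]
    return float(region.mean()) if region.size else 0.0

def ga_face_search(mask: np.ndarray, pop_size=40, generations=50):
    H, W = mask.shape
    # Each chromosome is a candidate box (x, y, w, h).
    pop = np.column_stack([rng.integers(0, W // 2, pop_size),
                           rng.integers(0, H // 2, pop_size),
                           rng.integers(20, W // 2, pop_size),
                           rng.integers(20, H // 2, pop_size)])
    for _ in range(generations):
        fitness = np.array([skin_ratio(mask, box) for box in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]          # selection: keep the best half
        children = parents.copy()
        # One-point crossover: swap the (w, h) genes between paired parents.
        children[::2, 2:], children[1::2, 2:] = parents[1::2, 2:], parents[::2, 2:]
        # Mutation: jitter one random gene of each child.
        genes = rng.integers(0, 4, len(children))
        children[np.arange(len(children)), genes] += rng.integers(-10, 11, len(children))
        pop = np.clip(np.vstack([parents, children]), 1, max(H, W))
    fitness = np.array([skin_ratio(mask, box) for box in pop])
    return pop[int(np.argmax(fitness))]                              # best (x, y, w, h) box
```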

A Flexible Feature Matching for Automatic Facial Feature Points Detection (얼굴 특징점 자동 검출을 위한 탄력적 특징 정합)

  • Hwang, Suen-Ki; Bae, Cheol-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.3 no.2 / pp.12-17 / 2010
  • An automatic facial feature point (FFP) detection system is proposed. A face is represented as a graph whose nodes are placed at facial feature points labeled by their Gabor features and whose edges describe their spatial relations. An innovative flexible feature matching is proposed to perform feature correspondence between models and the input image. This matching model works like a random diffusion process in the image space by employing a locally competitive and globally cooperative mechanism. The system works well on face images with complicated backgrounds, pose variations, and distortions caused by facial accessories. We demonstrate the benefits of our approach through its implementation in the system.
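
The Gabor-labelled graph nodes can be illustrated as follows: filter responses at several orientations and scales are stacked into a "jet" at each feature point, and jets are compared with a normalised dot product. The flexible, diffusion-like matching itself is not reproduced, and the filter parameters are illustrative.

```python
# Illustrative sketch: Gabor "jet" node labels and a jet similarity measure.
import cv2
import numpy as np

def gabor_bank(scales=(4, 8), orientations=8):
    """Build a small bank of Gabor kernels (illustrative parameters)."""
    kernels = []
    for lam in scales:
        for k in range(orientations):
            theta = np.pi * k / orientations
            kernels.append(cv2.getGaborKernel((21, 21), sigma=lam / 2.0,
                                              theta=theta, lambd=lam,
                                              gamma=0.5, psi=0))
    return kernels

def gabor_jet(gray: np.ndarray, point, kernels) -> np.ndarray:
    """Stack filter-response magnitudes at one (x, y) feature point."""
    x, y = point
    responses = [cv2.filter2D(gray.astype(np.float32), -1, k)[y, x] for k in kernels]
    return np.abs(np.array(responses, dtype=np.float32))

def jet_similarity(jet_a: np.ndarray, jet_b: np.ndarray) -> float:
    """Normalised dot product; values near 1.0 indicate similar local texture."""
    return float(jet_a @ jet_b / (np.linalg.norm(jet_a) * np.linalg.norm(jet_b) + 1e-8))
```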

A Flexible Feature Matching for Automatic Face and Facial Feature Points Detection (얼굴과 얼굴 특징점 자동 검출을 위한 탄력적 특징 정합)

  • 박호식; 손형경; 정연길; 배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2002.05a / pp.608-612 / 2002
  • An automatic face and facial feature point (FFP) detection system is proposed. A face is represented as a graph whose nodes are placed at facial feature points labeled by their Gabor features and whose edges describe their spatial relations. An innovative flexible feature matching is proposed to perform feature correspondence between models and the input image. This matching model works like a random diffusion process in the image space by employing a locally competitive and globally cooperative mechanism. The system works well on face images with complicated backgrounds, pose variations, and distortions caused by facial accessories. We demonstrate the benefits of our approach through its implementation in a face identification system.

Recognition of Facial Expressions Using Muscle-based Feature Models (근육기반의 특징모델을 이용한 얼굴표정인식에 관한 연구)

  • 김동수; 남기환; 한준희; 박호식; 차영석; 최현수; 배철수; 권오홍; 나상동
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1999.11a / pp.416-419 / 1999
  • We present a technique for recognizing facial expressions from image sequences. The technique uses muscle-based feature models for tracking facial features. Since the feature models are constructed with a small number of parameters and are deformable within a limited range and directions, the search space for each feature can be limited. The technique estimates degrees of muscular contraction to classify the six principal facial expressions. The contractile vectors are obtained from the deformations of the facial muscle models. Similarities are defined between those vectors and representative vectors of the principal expressions and are used to determine the facial expression.
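
The final classification step reduces to comparing an estimated contraction vector against one representative vector per expression; the sketch below uses cosine similarity, and the prototype values are invented placeholders purely for illustration.

```python
# Illustrative sketch: choose the expression whose prototype contraction
# vector is most similar to the estimated one. Prototype values are invented.
import numpy as np

PROTOTYPES = {
    "happiness": np.array([0.8, 0.1, 0.7, 0.0, 0.2, 0.1]),
    "sadness":   np.array([0.1, 0.7, 0.0, 0.6, 0.1, 0.3]),
    "anger":     np.array([0.2, 0.8, 0.1, 0.1, 0.9, 0.4]),
    "fear":      np.array([0.3, 0.5, 0.2, 0.8, 0.3, 0.6]),
    "surprise":  np.array([0.9, 0.0, 0.3, 0.9, 0.0, 0.2]),
    "disgust":   np.array([0.1, 0.6, 0.1, 0.2, 0.7, 0.8]),
}

def classify_expression(contractile: np.ndarray) -> str:
    """Return the expression whose prototype is most similar (cosine similarity)."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    return max(PROTOTYPES, key=lambda name: cosine(contractile, PROTOTYPES[name]))

# Example: a contraction vector estimated from tracked muscle deformations.
print(classify_expression(np.array([0.75, 0.05, 0.65, 0.1, 0.15, 0.1])))
```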

Recognition of Facial Expressions of Animation Characters Using Dominant Colors and Feature Points (주색상과 특징점을 이용한 애니메이션 캐릭터의 표정인식)

  • Jang, Seok-Woo; Kim, Gye-Young; Na, Hyun-Suk
    • The KIPS Transactions:PartB / v.18B no.6 / pp.375-384 / 2011
  • This paper suggests a method to recognize facial expressions of animation characters by means of dominant colors and feature points. The proposed method defines a simplified mesh model adequate for animation characters and detects the face and facial components using dominant colors. It also extracts edge-based feature points for each facial component. It then classifies the feature points into corresponding AUs (action units) through a neural network and finally recognizes the character's facial expression with the suggested AU specification. Experimental results show that the suggested method can recognize the facial expressions of animation characters reliably.
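
The last two stages can be sketched as follows: a (deliberately untrained, placeholder) network maps feature-point measurements to AU activations, and a small AU-combination table maps the active AUs to an expression label. Both the network weights and the table entries are illustrative assumptions, not the paper's AU specification.

```python
# Illustrative sketch: feature-point measurements -> AU activations -> expression.
import numpy as np

rng = np.random.default_rng(0)

# Placeholder one-hidden-layer network; real weights would come from training.
W1, b1 = rng.normal(size=(16, 24)), np.zeros(16)   # expects a 24-dim feature vector
W2, b2 = rng.normal(size=(6, 16)), np.zeros(6)     # six example action units

def predict_aus(feature_vector: np.ndarray) -> np.ndarray:
    """Forward pass producing an activation in [0, 1] for each action unit."""
    h = np.tanh(W1 @ feature_vector + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))    # sigmoid outputs

# Illustrative mapping from sets of active AUs to expression labels.
EXPRESSION_TABLE = {
    frozenset({0, 2}): "smile",
    frozenset({1, 3}): "sad",
    frozenset({4, 5}): "surprise",
}

def recognize(feature_vector: np.ndarray, threshold=0.5) -> str:
    active = frozenset(int(i) for i in np.flatnonzero(predict_aus(feature_vector) > threshold))
    return EXPRESSION_TABLE.get(active, "neutral/unknown")
```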