• Title/Summary/Keyword: facial features extraction

92 search results

Facial Feature Extraction with Its Applications

  • Lee, Minkyu; Lee, Sangyoun
    • Journal of International Society for Simulation Surgery / v.2 no.1 / pp.7-9 / 2015
  • Purpose: In many face-related applications such as head pose estimation, 3D face modeling, and facial appearance manipulation, robust and fast facial feature extraction is necessary. We present a facial feature extraction method based on shape regression and feature selection for real-time use. Materials and Methods: The facial features are initialized by a statistical shape model, and the feature shape is then deformed iteratively according to texture patterns selected from a feature pool. Results: We obtain fast and robust facial feature extraction with an error below 4% and a processing time below 12 ms. The alignment error is measured as the average ratio of pixel difference to inter-ocular distance. Conclusion: The accuracy and processing time of the method are sufficient for facial-feature-based applications such as face beautification and 3D face modeling.
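
A minimal sketch of the inter-ocular-normalized alignment error described in the abstract above, in Python/NumPy; the eye-corner indices follow the common 68-point landmark convention and are an assumption, not necessarily the paper's setup:

```python
import numpy as np

def alignment_error(pred, gt, left_eye_idx=36, right_eye_idx=45):
    """Mean landmark error normalized by inter-ocular distance.

    pred, gt: (N, 2) arrays of predicted and ground-truth landmark
    coordinates in pixels. The eye-corner indices are an assumption
    based on the 68-point annotation scheme.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    inter_ocular = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    per_point = np.linalg.norm(pred - gt, axis=1)   # pixel difference per landmark
    return per_point.mean() / inter_ocular          # e.g. 0.04 corresponds to 4% error
```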

A Study on Facial Feature' Morphological Information Extraction and Classification for Avatar Generation (아바타 생성을 위한 이목구비 모양 특징정보 추출 및 분류에 관한 연구)

  • Park Yeon-Chool
    • Journal of the Korea Computer Industry Society / v.4 no.10 / pp.631-642 / 2003
  • We propose an approach that extracts facial features from a person's photograph and classifies them into predefined classes, using prepared classification standards, in order to generate that person's avatar. Feature extraction and classification are carried out separately for the eyes, nose, lips, and jaw, and we present the features and classification standards for each. The extracted features are compared against the features of facial component images drawn by professional designers, and the most similar component images are then mapped onto the avatar's vector face.
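
The mapping from extracted features to the most similar designer-drawn component can be illustrated with a simple nearest-neighbor match; the feature layout and the Euclidean distance below are illustrative assumptions, not the paper's actual similarity calculation:

```python
import numpy as np

def match_component(extracted_feature, designer_features):
    """Return the index of the designer-drawn component whose feature
    vector is closest to the extracted one (Euclidean distance).

    extracted_feature: (D,) vector describing e.g. an eye shape.
    designer_features: (K, D) matrix, one row per designer component image.
    """
    extracted = np.asarray(extracted_feature, dtype=float)
    candidates = np.asarray(designer_features, dtype=float)
    dists = np.linalg.norm(candidates - extracted, axis=1)
    return int(np.argmin(dists))   # index of the component to map onto the avatar
```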


Study of Emotion Recognition based on Facial Image for Emotional Rehabilitation Biofeedback (정서재활 바이오피드백을 위한 얼굴 영상 기반 정서인식 연구)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.10 / pp.957-962 / 2010
  • To recognize human emotion from a facial image, we first need to extract emotional features with a feature extraction algorithm and then classify the emotional state with a pattern classification method. The AAM (Active Appearance Model) is a well-known method for representing a non-rigid object such as a face and its expressions. A Bayesian network is a probability-based classifier that can represent the probabilistic relationships among a set of facial features. In this paper, our facial feature extraction combines AAM with FACS (Facial Action Coding System) to automatically model and extract facial emotional features. To recognize the facial emotion, we use DBNs (Dynamic Bayesian Networks) to model and understand the temporal phases of facial expressions in image sequences. The recognition result can be used for biofeedback-based rehabilitation of the emotionally disabled.
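
The paper models temporal expression phases with dynamic Bayesian networks; as a much-simplified static illustration of the probabilistic idea, the sketch below classifies an emotion from hypothetical binary FACS action-unit observations with a naive Bayes rule. All action units, probabilities, and emotion labels here are invented for illustration:

```python
import numpy as np

# Hypothetical action units (AUs), observed as binary on/off values.
AUS = ["AU4_brow_lowerer", "AU12_lip_corner_puller", "AU15_lip_corner_depressor"]
EMOTIONS = ["happy", "sad", "angry"]

# P(AU = 1 | emotion); illustrative numbers, not from the paper.
likelihood = np.array([
    [0.1, 0.9, 0.05],   # happy
    [0.4, 0.1, 0.80],   # sad
    [0.9, 0.1, 0.30],   # angry
])
prior = np.array([1 / 3, 1 / 3, 1 / 3])

def classify(observed_aus):
    """Posterior over emotions given binary AU observations (naive Bayes)."""
    obs = np.asarray(observed_aus)
    per_au = likelihood * obs + (1 - likelihood) * (1 - obs)  # P(obs_i | emotion)
    posterior = prior * per_au.prod(axis=1)
    return dict(zip(EMOTIONS, posterior / posterior.sum()))

print(classify([0, 1, 0]))   # smile-like AUs -> highest probability for "happy"
```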

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu; Park, Han-Hoon; Shin, Hong-Chang; Jin, Yoon-Jong; Park, Jong-Il
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.667-674 / 2007
  • Facial expressions provide significant clues about one's emotional state; however, it has always been a great challenge for machines to recognize facial expressions effectively and reliably. In this paper, we report a method of feature-based adaptive motion energy analysis for recognizing facial expression. Our method optimizes the information gain heuristics of an ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use the minimal reasonable set of facial features, suggested by the information gain heuristics of the ID3 tree, to represent the geometric face model. For feature extraction, features are first detected and then carefully "selected": feature "selection" distinguishes features with high variability from those with low variability so that each feature's motion pattern can be estimated effectively. Motion analysis is then performed adaptively for each facial feature; that is, each feature's motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1728 possible facial expressions) and test images from the JAFFE database. The proposed method overcomes the problems raised by previous methods. First, it is simple but effective: it reliably estimates the expressive facial features by differentiating features with high variability from those with low variability. Second, it is fast, avoiding complicated or time-consuming computations and instead exploiting only a few selected expressive features' motion energy values (acquired from an intensity-based threshold). Lastly, it gives reliable results, with an overall recognition rate of 77%. The effectiveness of the proposed method is demonstrated by the experimental results.
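
The ID3 information gain heuristic that drives feature selection here can be sketched in a few lines of Python; the toy feature values and labels are invented for illustration:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(samples, labels, feature_index):
    """Information gain of splitting `samples` on one discrete feature.

    samples: list of tuples of discrete feature values.
    labels:  list of class labels, same length as samples.
    """
    base = entropy(labels)
    groups = {}
    for sample, label in zip(samples, labels):
        groups.setdefault(sample[feature_index], []).append(label)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return base - remainder

# Toy example: the second feature separates the classes perfectly, so it wins.
X = [("low", "up"), ("low", "down"), ("high", "up"), ("high", "down")]
y = ["smile", "neutral", "smile", "neutral"]
print(information_gain(X, y, 0), information_gain(X, y, 1))   # 0.0 1.0
```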


Realistic Avatar Face Generation Using Shading Mechanism (음영합성 기법을 이용한 실사형 아바타 얼굴 생성)

  • Park Yeon-Chool
    • Journal of Internet Computing and Services / v.5 no.5 / pp.79-91 / 2004
  • This paper proposes an avatar face generation system that uses a shading mechanism and the facial feature extraction methods of face recognition. The proposed system automatically generates an avatar face similar to the person's face using features extracted from a photograph, and it composes shading with those features, which yields a more realistic avatar face. The paper proposes a new eye localization method, a facial feature extraction method, a classification method for minimizing retrieval time, an image retrieval method based on a similarity measure, and a realistic avatar face generation method that maps the facial features onto a shaded face plane.
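
A minimal sketch of composing a shading layer with a facial component image, assuming a simple multiplicative shading rule (the abstract does not specify the exact composition used):

```python
import numpy as np

def compose_shading(component_rgb, shade_map):
    """Modulate a facial-component image by a shading map.

    component_rgb: (H, W, 3) uint8 facial component (eyes, nose, ...).
    shade_map:     (H, W) float in [0, 1]; 1 = fully lit, 0 = dark.
    Multiplicative shading is an assumption made for illustration.
    """
    shaded = component_rgb.astype(float) * shade_map[..., None]
    return np.clip(shaded, 0, 255).astype(np.uint8)
```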


Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan; Joo, Young-Hoon; Park, Jin-Bae
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.2373-2378 / 2005
  • An emotion detection algorithm using a frontal facial image is presented in this paper. The algorithm is composed of three main stages: image processing, facial feature extraction, and emotion detection. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and histogram analysis. In the facial feature extraction stage, the features for emotion detection are extracted from the facial components. In the emotion detection stage, a fuzzy classifier recognizes emotion from the extracted features. Experimental results show that the proposed algorithm detects emotion well.
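
A rough sketch of a fuzzy color filter for the face-region step, using trapezoidal membership functions over hue and saturation; the ranges and the min t-norm are illustrative assumptions, not the filter tuned in the paper:

```python
import numpy as np
import cv2

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 below a, 1 on [b, c], 0 above d."""
    return np.clip(np.minimum((x - a) / max(b - a, 1e-6),
                              (d - x) / max(d - c, 1e-6)), 0.0, 1.0)

def skin_membership(bgr_image):
    """Per-pixel fuzzy skin-color membership in [0, 1].

    The hue/saturation ranges below are rough illustrative values.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV).astype(float)
    h, s = hsv[..., 0], hsv[..., 1]          # OpenCV hue range is 0..180
    mu_h = trapezoid(h, 0, 2, 20, 30)        # reddish hues typical of skin
    mu_s = trapezoid(s, 20, 40, 150, 200)    # moderate saturation
    return np.minimum(mu_h, mu_s)            # fuzzy AND (min t-norm)
```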


Emotion Recognition of Facial Expression using the Hybrid Feature Extraction (혼합형 특징점 추출을 이용한 얼굴 표정의 감성 인식)

  • Byun, Kwang-Sub; Park, Chang-Hyun; Sim, Kwee-Bo
    • Proceedings of the KIEE Conference / 2004.05a / pp.132-134 / 2004
  • Emotion recognition between humans relies on a composite of features such as the face, voice, and gestures. Among these, the face reveals emotional expression most clearly. Humans express and recognize emotion using complex and varied facial features. This paper proposes a hybrid feature extraction method for emotion recognition from facial expressions. The hybrid method imitates the human emotion recognition system by combining geometric-feature-based extraction with a color distribution histogram, so that emotion recognition can be performed robustly from many features of the facial expression.
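
A small sketch of a hybrid descriptor that concatenates geometric ratios with a color histogram, in the spirit of the combination described above; the landmark names and histogram size are illustrative assumptions:

```python
import numpy as np
import cv2

def hybrid_feature(landmarks, mouth_patch_bgr):
    """Concatenate geometric ratios with a normalized color histogram.

    landmarks: dict of named 2D points (hypothetical names for illustration).
    mouth_patch_bgr: cropped BGR image of the mouth region.
    """
    p = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    eye_dist = np.linalg.norm(p["left_eye"] - p["right_eye"])
    geometric = np.array([
        np.linalg.norm(p["mouth_left"] - p["mouth_right"]) / eye_dist,  # mouth width
        np.linalg.norm(p["upper_lip"] - p["lower_lip"]) / eye_dist,     # mouth opening
    ])
    hist = cv2.calcHist([mouth_patch_bgr], [0, 1, 2], None,
                        [8, 8, 8], [0, 256, 0, 256, 0, 256]).flatten()
    hist /= hist.sum() + 1e-9                  # normalize the color histogram
    return np.concatenate([geometric, hist])   # hybrid descriptor
```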


Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae; Kim, Eung-Hee; Sohn, Jin-Hun; Kweon, In So
    • The Journal of Korea Robotics Society / v.8 no.4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm, and these inaccurate positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework that uses ASM and LK optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
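
Tracking ASM-initialized landmarks between frames with pyramidal Lucas-Kanade optical flow can be sketched with OpenCV as below; the window size, pyramid level, and the keep-previous-position fallback for lost (possibly occluded) points are assumptions, not the paper's exact occlusion handling:

```python
import cv2
import numpy as np

def track_features(prev_gray, next_gray, prev_pts):
    """Track facial feature points from one frame to the next with
    pyramidal LK optical flow (cv2.calcOpticalFlowPyrLK).

    prev_pts: (N, 1, 2) float32 points, e.g. landmarks from an ASM fit
    on the previous frame. Points whose status flag is 0 (lost) are kept
    at their previous position as a simple fallback.
    """
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    lost = status.ravel() == 0
    next_pts[lost] = prev_pts[lost]   # crude stand-in for occlusion handling
    return next_pts
```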

Robust Extraction of Facial Features under Illumination Variations (조명 변화에 견고한 얼굴 특징 추출)

  • Jung Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.10 no.6 s.38 / pp.1-8 / 2005
  • Facial analysis is used in many applications such as face recognition systems, human-computer interfaces driven by head movements or facial expressions, model-based coding, and virtual reality. All of these applications require very precise extraction of facial feature points. In this paper we present a method for automatic extraction of facial feature points such as mouth corners, eye corners, and eyebrow corners. First, the face region is detected by an AdaBoost-based object detection algorithm. Then three kinds of feature energy are computed for the facial features: valley energy, intensity energy, and edge energy. Feature areas are detected by searching for horizontal rectangles with high feature energy, and finally a corner detection algorithm is applied to the end regions of each feature area. Because the three feature energies are integrated and the proposed estimation of valley and intensity energy adapts to illumination change, the feature extraction method is robust under various conditions.
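
A rough sketch of combining valley, intensity, and edge energy into a single feature-energy map; the morphological valley estimate, intensity inversion, and equal weights are illustrative stand-ins for the adaptive estimators proposed in the paper:

```python
import cv2
import numpy as np

def feature_energy(gray):
    """Combine valley, intensity, and edge energy into one map in [0, 1]."""
    g = gray.astype(np.float32) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    closed = cv2.morphologyEx(g, cv2.MORPH_CLOSE, kernel)
    valley = closed - g                          # dark pits (eyes, mouth) score high
    intensity = 1.0 - g                          # darker pixels score high
    edge = (np.abs(cv2.Sobel(g, cv2.CV_32F, 1, 0)) +
            np.abs(cv2.Sobel(g, cv2.CV_32F, 0, 1)))
    edge /= edge.max() + 1e-6
    energy = (valley + intensity + edge) / 3.0   # equal weights, illustrative only
    return np.clip(energy, 0.0, 1.0)
```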


A Study on Face Component Extraction for Automatic Generation of Personal Avatar (개인아바타 자동 생성을 위한 얼굴 구성요소의 추출에 관한 연구)

  • Choi Jae Young; Hwang Seung Ho; Yang Young Kyu; Whangbo Taeg Ken
    • Journal of Internet Computing and Services / v.6 no.4 / pp.93-102 / 2005
  • In recent times, netizens have frequently used virtual 'avatar' characters to present their identity, and there is a strong need for avatars that resemble the user. This paper proposes an extraction technique for the facial region and features that are used to generate an avatar automatically. For the extraction of facial feature components, the method uses an ACM (active contour model) and edge information. In extracting the facial region, the method also reduces the effect of lighting and poor image quality in low-resolution pictures; this is achieved by using the variation of facial area size as the external energy of the ACM. Our experiments show a success rate of 92% for extracting facial regions and an accuracy of 83.4% for extracting facial feature components. These results provide good evidence that the suggested method can extract facial regions and features accurately, and the technique can be used in the feature handling process of an automatic avatar generation system.
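
A compact sketch of one greedy ACM update that balances edge strength against deviation from a target facial area, loosely mirroring the use of facial area size as an external energy term; the neighborhood, weights, and energy terms are illustrative assumptions and omit the usual continuity and curvature terms:

```python
import numpy as np
import cv2

def greedy_acm_step(points, edge_map, target_area, w_edge=1.0, w_area=0.5):
    """One greedy active contour update.

    Each contour point moves to the 3x3 neighbor that maximizes edge
    strength while keeping the enclosed polygon area close to target_area.
    points:   (N, 2) integer contour points (x, y).
    edge_map: (H, W) float edge-strength image.
    """
    pts = points.astype(np.int32)
    h, w = edge_map.shape
    for i in range(len(pts)):
        best, best_score = pts[i].copy(), -np.inf
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                x = int(np.clip(pts[i][0] + dx, 0, w - 1))
                y = int(np.clip(pts[i][1] + dy, 0, h - 1))
                trial = pts.copy()
                trial[i] = (x, y)
                area = cv2.contourArea(trial.reshape(-1, 1, 2))
                score = (w_edge * edge_map[y, x]
                         - w_area * abs(area - target_area) / max(target_area, 1.0))
                if score > best_score:
                    best, best_score = np.array([x, y]), score
        pts[i] = best
    return pts
```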
