• Title/Summary/Keyword: Facial feature


Feature Extraction Based on GRFs for Facial Expression Recognition

  • Yoon, Myoong-Young / Journal of Korea Society of Industrial Information Systems / v.7 no.3 / pp.23-31 / 2002
  • In this paper we propose a new feature vector for facial expression recognition based on Gibbs distributions, which are well suited to representing spatial continuity. The extracted feature vectors are invariant under translation, rotation, and scaling of the facial expression image. The recognition algorithm has two parts: feature vector extraction and the recognition process. The feature vector consists of modified 2-D conditional moments based on a Gibbs distribution estimated from the facial image. In the recognition phase, we use a discrete left-right HMM, which is widely used in pattern recognition. To evaluate the performance of the proposed scheme, experiments on recognizing the four universal expressions (anger, fear, happiness, surprise) were conducted on facial image sequences on a workstation. The results show that the proposed scheme achieves a recognition rate of over 95%.
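
As a rough illustration of the moment-based feature idea, the sketch below computes translation- and scale-normalized central moments of a grayscale image. It is a simplified stand-in: the paper's modified 2-D conditional moments are derived from an estimated Gibbs distribution, which is not reproduced here, and full rotation invariance would additionally require Hu-style moment combinations.

```python
# Simplified stand-in: translation- and scale-normalized central moments of a
# grayscale image. The paper's Gibbs-distribution estimation step is omitted.
import numpy as np

def normalized_central_moments(img, orders=((2, 0), (0, 2), (1, 1), (3, 0), (0, 3))):
    img = img.astype(np.float64)
    total = img.sum()
    ys, xs = np.indices(img.shape)
    # Centering on the intensity centroid gives translation invariance.
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
    feats = []
    for p, q in orders:
        mu_pq = ((ys - cy) ** p * (xs - cx) ** q * img).sum()
        # Dividing by total**(1 + (p + q) / 2) adds scale invariance.
        feats.append(mu_pq / total ** (1 + (p + q) / 2.0))
    return np.array(feats)

face = np.random.rand(64, 64)  # placeholder for a facial expression image
print(normalized_central_moments(face))
```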

Robust Extraction of Facial Features under Illumination Variations (조명 변화에 견고한 얼굴 특징 추출)

  • Jung, Sung-Tae / Journal of the Korea Society of Computer and Information / v.10 no.6 s.38 / pp.1-8 / 2005
  • Facial analysis is used in many applications, such as face recognition systems, human-computer interfaces driven by head movements or facial expressions, model-based coding, and virtual reality. All of these applications require very precise extraction of facial feature points. In this paper we present a method for automatic extraction of facial feature points such as mouth corners, eye corners, and eyebrow corners. First, the face region is detected by an AdaBoost-based object detection algorithm. Then a combination of three kinds of feature energy is computed for the facial features: valley energy, intensity energy, and edge energy. Feature areas are detected by searching for horizontal rectangles with high feature energy. Finally, a corner detection algorithm is applied to the end regions of each feature area. Because we integrate three feature energies, and the proposed estimators for valley energy and intensity energy adapt to illumination changes, the proposed feature extraction method is robust under various conditions.
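
The sketch below illustrates one plausible way to compute and combine the three feature-energy maps named above (valley, intensity, edge) with OpenCV. The kernel size, the equal weighting, and the black-hat valley estimator are assumptions; the paper's illumination-adaptive estimators for valley and intensity energy are not reproduced.

```python
# Illustrative combination of valley, intensity, and edge energy maps.
import cv2
import numpy as np

def feature_energy(gray):
    gray = gray.astype(np.float32) / 255.0
    # Valley energy: a black-hat transform highlights dark valleys
    # (eyes, eyebrows, mouth) against the brighter surrounding skin.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    valley = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Intensity energy: darker pixels score higher.
    intensity = 1.0 - gray
    # Edge energy: gradient magnitude from Sobel derivatives.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edge = cv2.magnitude(gx, gy)
    # Equal weights are an arbitrary choice for this sketch.
    return (valley + intensity + edge) / 3.0

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input path
energy = feature_energy(gray)
# Feature areas would then be found as horizontal rectangles of high energy.
```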

Emotion Recognition and Expression Method using Bi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 감정인식 및 표현기법)

  • Joo, Jong-Tae; Jang, In-Hun; Yang, Hyun-Chang; Sim, Kwee-Bo / Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.754-759 / 2007
  • In this paper, we propose a Bi-Modal Sensor Fusion Algorithm, an emotion recognition method that can classify four emotions (Happy, Sad, Angry, Surprise) by using a facial image and a speech signal together. We extract feature vectors from the speech signal using acoustic features, without language features, and classify the emotional pattern with a neural network. From the facial image, we select features around the mouth, eyes, and eyebrows, and the extracted feature vectors are reduced to low-dimensional feature vectors with Principal Component Analysis (PCA). Finally, we propose a method that fuses the emotion recognition results obtained from the facial image and the speech signal.
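
A minimal sketch of the decision-level fusion step, assuming each modality already yields a probability vector over the four emotions; the weight w_face and the classifier outputs are illustrative, and the neural network and PCA front end are omitted.

```python
# Illustrative decision-level fusion of two emotion classifiers.
import numpy as np

EMOTIONS = ["Happy", "Sad", "Angry", "Surprise"]

def fuse(face_probs, speech_probs, w_face=0.6):
    # Both inputs are probability vectors over the four emotions;
    # w_face weights the facial modality against the speech modality.
    fused = w_face * np.asarray(face_probs) + (1.0 - w_face) * np.asarray(speech_probs)
    return EMOTIONS[int(np.argmax(fused))], fused

# Hypothetical classifier outputs for one facial frame and one utterance.
label, scores = fuse([0.5, 0.1, 0.3, 0.1], [0.2, 0.1, 0.6, 0.1])
print(label, scores)
```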

Full Face Recognition Using Features Extracted by Shape Analysis and the Back-Propagation Algorithm (형태분석에 의한 특징 추출과 BP알고리즘을 이용한 정면 얼굴 인식)

  • 최동선; 이주신 / Journal of the Korean Institute of Telematics and Electronics B / v.33B no.10 / pp.63-71 / 1996
  • This paper proposes a method that analyzes facial shape and extracts the positions of the eyes regardless of the tilt and size of the input image. With the feature parameters of the facial elements extracted by this method, frontal human faces are recognized by a neural network trained with the back-propagation (BP) algorithm. The input image is binarized and then labelled. The area, circumference, and circular degree of the labelled binary regions are obtained using chain codes and defined as the feature parameters of the face image. We first extract the two eyes from the similarity and distance of the feature parameters of each facial element, and then the input face image is corrected by normalizing on the two extracted eyes. After a mask is generated, a line histogram is applied to find the feature points of the facial elements. Distances and angles between the feature points are used as parameters to recognize the frontal face. To show the validity of the learning algorithm, we confirmed that the proposed algorithm achieves a 100% recognition rate on both learned and non-learned data for 20 persons.
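
The sketch below computes the three shape features named in the abstract (area, circumference, circular degree) from a binarized image, using OpenCV contour tracing as an approximation of the paper's chain codes; the input path and the Otsu thresholding are illustrative.

```python
# Area, circumference, and circular degree of labelled binary regions.
import cv2
import numpy as np

def shape_features(binary):
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    feats = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)  # closed contour
        if perimeter == 0:
            continue
        # Circular degree: 4*pi*A / P**2, which equals 1 for a perfect circle.
        feats.append((area, perimeter, 4.0 * np.pi * area / perimeter ** 2))
    return feats

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input path
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(shape_features(binary))
```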

Face and Facial Feature Detection under Pose Variation of User Face for Human-Robot Interaction (인간-로봇 상호작용을 위한 자세가 변하는 사용자 얼굴검출 및 얼굴요소 위치추정)

  • Park, Sung-Kee; Park, Mignon; Lee, Taigun / Journal of Institute of Control, Robotics and Systems / v.11 no.1 / pp.50-57 / 2005
  • We present a simple and effective method for detecting a face and its features under pose variation of the user's face in a complex background, for human-robot interaction. Our approach is a flexible method that works on both color and gray facial images and is feasible for detecting facial features in quasi real time. Based on the intensity characteristics of the neighborhood of facial features, a new directional template for facial features is defined. By applying this template to the input facial image, a novel edge-like blob map (EBM) with multiple intensity strengths is constructed. Regardless of the color information of the input image, using this map and conditions derived from facial characteristics, we show that the locations of the face and its features, i.e., two eyes and a mouth, can be successfully estimated. Without information about the facial area boundary, the final candidate face region is determined from both the obtained locations of the facial features and weighted correlation values with standard facial templates. Experimental results on many color images and well-known gray-level face database images confirm the usefulness of the proposed algorithm.
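
As a rough sketch of the directional-template idea, the code below convolves a grayscale image with a kernel that responds to a dark horizontal band with brighter regions above and below, which is where eyes and mouths tend to lie. The band sizes and weights are assumptions; the paper's actual template and EBM construction differ in detail.

```python
# Rough sketch of a directional-template response map.
import numpy as np
from scipy.ndimage import convolve

def edge_like_blob_map(gray, band=3, width=9):
    gray = gray.astype(np.float32)
    kernel = np.zeros((3 * band, width), dtype=np.float32)
    kernel[:band, :] = 1.0           # brighter skin above the feature
    kernel[band:2 * band, :] = -2.0  # dark horizontal feature band
    kernel[2 * band:, :] = 1.0       # brighter skin below the feature
    kernel /= kernel.size
    response = convolve(gray, kernel, mode="nearest")
    # Positive response: centre band darker than its surroundings.
    return np.maximum(response, 0)

gray = (np.random.rand(120, 120) * 255).astype(np.float32)  # placeholder image
ebm = edge_like_blob_map(gray)
```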

Facial Feature Localization from 3D Face Image using Adjacent Depth Differences (인접 부위의 깊이 차를 이용한 3차원 얼굴 영상의 특징 추출)

  • 김익동; 심재창 / Journal of KIISE: Software and Applications / v.31 no.5 / pp.617-624 / 2004
  • This paper describes a new facial feature localization method that uses Adjacent Depth Differences (ADD) on a 3D facial surface. In general, humans perceive how deep or shallow a region is relative to its surroundings by comparing the neighboring depth information among the regions of an object. The larger the depth difference between regions, the easier each region is to recognize. Using this principle, facial feature extraction becomes easier, more reliable, and faster. 3D range images are used as input. The ADD are obtained by differencing two range values separated by a fixed coordinate distance, in both the horizontal and vertical directions. The ADD and the input image are analyzed to extract facial features, and the nose region, the most prominent feature on a 3D facial surface, is then localized effectively and accurately.
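
The ADD computation itself is straightforward to sketch: range values separated by a fixed offset are differenced horizontally and vertically. The offset d and the placeholder range image below are illustrative.

```python
# Adjacent Depth Differences on a range image.
import numpy as np

def adjacent_depth_differences(depth, d=5):
    depth = depth.astype(np.float32)
    add_h = np.abs(depth[:, d:] - depth[:, :-d])  # horizontal differences
    add_v = np.abs(depth[d:, :] - depth[:-d, :])  # vertical differences
    return add_h, add_v

depth = np.random.rand(100, 100)  # placeholder 3D range image
add_h, add_v = adjacent_depth_differences(depth)
# The nose shows the largest depth differences on a face, so thresholding
# these maps is a natural way to localize the nose region.
```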

Facial Feature Recognition based on ASNMF Method

  • Zhou, Jing; Wang, Tianjiang / KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.12 / pp.6028-6042 / 2019
  • Since the Sparse Nonnegative Matrix Factorization (SNMF) method can control the sparsity of the decomposed matrices, it can be adopted to control sparsity in facial feature extraction and recognition. To improve the accuracy of the SNMF method for facial feature recognition, new additive iterative rules based on improved iterative step sizes are proposed, transforming the traditional multiplicative iterative rules of SNMF into additive ones. Meanwhile, to further increase the sparsity of the basis matrix decomposed by the improved SNMF method, a threshold-sparse constraint is adopted to turn the basis matrix into a zero-one matrix, which further improves the accuracy of facial feature recognition. The improved SNMF method based on the additive iterative rules and the threshold-sparse constraint is abbreviated ASNMF; applied to the ORL and CK+ facial datasets, it achieved recognition rates of 96% and 100%, respectively. Moreover, the contrast experiments show that the recognition rate achieved by the ASNMF method is clearly higher than that of basic NMF, traditional SNMF, convex nonnegative matrix factorization (CNMF), and Deep NMF.
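
For orientation, the sketch below runs a standard sparse NMF with multiplicative updates and an L1 penalty on the basis matrix, followed by a threshold step that binarizes it. The paper's additive iterative rules and improved step sizes are not reproduced, and beta, the iteration count, and the threshold t are assumptions.

```python
# Standard sparse NMF (multiplicative updates) plus threshold binarization.
import numpy as np

def snmf(V, r, beta=0.1, iters=200, eps=1e-9):
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # beta penalizes the L1 norm of W, encouraging sparse basis vectors.
        W *= (V @ H.T) / (W @ H @ H.T + beta + eps)
    return W, H

def threshold_binarize(W, t=0.5):
    # Threshold-sparse constraint: turn each basis column into a 0-1 vector.
    return (W >= t * W.max(axis=0, keepdims=True)).astype(np.float32)

V = np.abs(np.random.rand(64, 40))  # placeholder: 40 vectorized face images
W, H = snmf(V, r=10)
W_bin = threshold_binarize(W)
```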

Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun; Park, Seung-Min; Park, Jun-Heong; Sim, Kwee-Bo / Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.512-517 / 2011
  • In this paper, we propose an Active Shape Model (ASM) and Lucas-Kanade (LK) optical-flow-based feature extraction and analysis method for analyzing emotional features in facial images. Since the facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models based on combinations of landmarks and extract the LK optical flow vectors at each landmark, using the center pixels of the motion vector window. The facial emotion features are modelled by the combination of the optical flow vectors, and the emotional state of a facial image can be estimated by a probabilistic estimation technique such as a Bayesian classifier. We also extract the optimal emotional features, those with high correlation between feature points and emotional states, by using common spatial pattern (CSP) analysis, in order to improve the efficiency and accuracy of the emotional feature extraction process.
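
A minimal sketch of the LK tracking step at landmark positions, using OpenCV's pyramidal Lucas-Kanade implementation; the landmark coordinates, frame paths, and window size are illustrative, and fitting the ASM and the CSP-based feature selection are outside this snippet.

```python
# Pyramidal Lucas-Kanade tracking of ASM landmark positions between frames.
import cv2
import numpy as np

def track_landmarks(prev_gray, next_gray, landmarks, win=15):
    pts = landmarks.reshape(-1, 1, 2).astype(np.float32)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, pts, None, winSize=(win, win), maxLevel=3)
    flow = (next_pts - pts).reshape(-1, 2)  # per-landmark motion vectors
    return flow, status.ravel().astype(bool)

prev_gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
next_gray = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame
landmarks = np.array([[120.0, 80.0], [160.0, 80.0], [140.0, 130.0]])  # from ASM
flow, ok = track_landmarks(prev_gray, next_gray, landmarks)
```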

Facial Expression Recognition Using SIFT Descriptor (SIFT 기술자를 이용한 얼굴 표정인식)

  • Kim, Dong-Ju; Lee, Sang-Heon; Sohn, Myoung-Kyu / KIPS Transactions on Software and Data Engineering / v.5 no.2 / pp.89-94 / 2016
  • This paper proposes a facial expression recognition approach using SIFT features and an SVM classifier. SIFT has generally been employed as a feature descriptor at key points in object recognition. This paper instead applies the SIFT descriptor as a feature vector for facial expression recognition: facial features are extracted by applying the SIFT descriptor to each sub-block of the image, without a key-point detection step, and facial expression recognition is performed with an SVM classifier. The performance was evaluated against binary-pattern-feature-based approaches such as LBP and LDP on the CK and JAFFE facial expression databases. The experimental results show that the proposed SIFT-based method improves on the previous approaches by 6.06% on the CK database and 3.87% on the JAFFE database.
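
The sketch below illustrates the dense-SIFT idea: a SIFT descriptor is computed at the centre of every sub-block, with no key-point detection, and the concatenated vector feeds a linear SVM. The block size, SVM settings, and the random placeholder training data are assumptions.

```python
# Dense SIFT per sub-block, concatenated and fed to a linear SVM.
import cv2
import numpy as np
from sklearn.svm import SVC

def dense_sift(gray, block=16):
    sift = cv2.SIFT_create()
    h, w = gray.shape
    # One keypoint at the centre of each block, sized to cover the block.
    keypoints = [cv2.KeyPoint(x + block / 2.0, y + block / 2.0, float(block))
                 for y in range(0, h - block + 1, block)
                 for x in range(0, w - block + 1, block)]
    _, desc = sift.compute(gray, keypoints)
    return desc.flatten()  # one feature vector per face image

rng = np.random.default_rng(0)
train_images = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
train_labels = [0, 1] * 5  # placeholder expression labels
X = np.stack([dense_sift(img) for img in train_images])
clf = SVC(kernel="linear").fit(X, train_labels)
```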

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul; Kwon, Oryun / KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.2 / pp.120-133 / 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and we address both. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. Dynamic head pose estimation robustly estimates the 3D head pose from input video images: given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image, and by updating the template dynamically the head pose can be recovered despite light variations and self-occlusion. In the facial expression synthesis phase, the movements of the major facial feature points are tracked with optical flow and retargeted to the 3D face model, while an RBF (Radial Basis Function) deforms the local area of the face model around the major feature points. Consequently, facial expression synthesis directly tracks the variations of the major feature points and indirectly estimates the variations of the regional feature points. Experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
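
A minimal sketch of the RBF retargeting step, assuming Gaussian basis functions: displacements measured at a few tracked feature points are interpolated to the surrounding mesh vertices. The mesh, control points, displacements, and sigma are placeholders; the cylindrical-model pose tracking is omitted.

```python
# Gaussian RBF interpolation of feature-point displacements onto mesh vertices.
import numpy as np

def rbf_deform(vertices, ctrl_pts, ctrl_disp, sigma=20.0):
    # Solve for weights that reproduce the control-point displacements.
    d2 = ((ctrl_pts[:, None, :] - ctrl_pts[None, :, :]) ** 2).sum(-1)
    weights = np.linalg.solve(np.exp(-d2 / (2 * sigma ** 2)), ctrl_disp)
    # Evaluate the interpolant at every mesh vertex and displace it.
    d2v = ((vertices[:, None, :] - ctrl_pts[None, :, :]) ** 2).sum(-1)
    return vertices + np.exp(-d2v / (2 * sigma ** 2)) @ weights

vertices = np.random.rand(500, 3) * 100  # placeholder face-mesh vertices
ctrl_pts = vertices[:4]                  # tracked major feature points
ctrl_disp = np.random.randn(4, 3)        # their measured displacements
deformed = rbf_deform(vertices, ctrl_pts, ctrl_disp)
```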