• Title/Summary/Keyword: Facial Feature Extraction

Facial Feature Extraction using Multiple Active Appearance Model (Multiple Active Appearance Model을 이용한 얼굴 특징 추출 기법)

  • Park, Hyun-Jun; Kim, Kwang-Baek; Cha, Eui-Young
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.8 no.8 / pp.1201-1206 / 2013
  • The Active Appearance Model (AAM) is one of the standard facial feature extraction techniques. In this paper, we propose the Multiple Active Appearance Model (MAAM). The proposed method uses two AAMs, each trained with different training parameters so that each model has different strengths; one AAM compensates for the weaknesses of the other. We performed facial feature extraction on 100 images to verify the performance of MAAM. Experimental results show that MAAM produces more accurate results than a single AAM with fewer fitting iterations.
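
As a rough illustration of the complementary-model idea above, the sketch below fits two independently trained AAMs and keeps whichever result has the smaller fitting residual. The `fit_aam_a`/`fit_aam_b` functions are stand-ins for real fitters; the paper's actual training parameters and fitting procedure are not reproduced.

```python
import numpy as np

def fit_aam_a(image):
    """Stand-in for an AAM trained with one set of parameters.
    Returns (landmarks, fitting residual). Hypothetical, for illustration only."""
    landmarks = np.random.rand(68, 2) * [image.shape[1], image.shape[0]]
    return landmarks, np.random.rand()

def fit_aam_b(image):
    """Stand-in for a second AAM trained with different parameters."""
    landmarks = np.random.rand(68, 2) * [image.shape[1], image.shape[0]]
    return landmarks, np.random.rand()

def fit_maam(image):
    """Run both AAMs and keep the fit with the smaller residual,
    so each model can compensate for the other's weak cases."""
    lm_a, err_a = fit_aam_a(image)
    lm_b, err_b = fit_aam_b(image)
    return lm_a if err_a <= err_b else lm_b

if __name__ == "__main__":
    img = np.zeros((240, 320, 3), dtype=np.uint8)
    print(fit_maam(img).shape)   # (68, 2)
```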

Development of Emotional Feature Extraction Method based on Advanced AAM (Advanced AAM 기반 정서특징 검출 기법 개발)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.6 / pp.834-839 / 2009
  • Extracting emotional features from facial images is a key element in recognizing a person's emotional state. In this paper, we propose an Advanced AAM, an improved version of our previously proposed facial expression recognition system based on a Bayesian network using FACS and AAM. This study investigates the most efficient way to locate the optimal facial feature regions for emotion recognition of arbitrary users in a generalized HCI system environment. To this end, we apply statistical shape analysis to the normalized input image, using the Advanced AAM and FACS as the facial expression and emotion analysis framework, and we study automatic emotional feature extraction for arbitrary users.
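
The statistical shape analysis step mentioned above usually begins with landmark normalization. Below is a minimal Procrustes-style alignment sketch in NumPy (removing translation, scale, and rotation against a reference shape); it illustrates only that normalization step, not the authors' Advanced AAM or FACS mapping.

```python
import numpy as np

def procrustes_align(shape, reference):
    """Align a landmark shape (N x 2) to a reference shape by removing
    translation, scale, and rotation (ordinary Procrustes analysis)."""
    X = shape - shape.mean(axis=0)
    Y = reference - reference.mean(axis=0)
    X /= np.linalg.norm(X)
    Y /= np.linalg.norm(Y)
    # Optimal rotation via SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(Y.T @ X)
    R = U @ Vt
    return X @ R.T

if __name__ == "__main__":
    ref = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    rot = np.array([[0, -1], [1, 0]], dtype=float)        # 90-degree rotation
    shape = (ref @ rot.T) * 2.5 + np.array([3.0, -1.0])   # rotated, scaled, shifted
    aligned = procrustes_align(shape, ref)
    print(np.round(aligned, 3))   # matches the normalized reference up to numerical error
```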

Hybrid-Feature Extraction for the Facial Emotion Recognition

  • Byun, Kwang-Sub; Park, Chang-Hyun; Sim, Kwee-Bo; Jeong, In-Cheol; Ham, Ho-Sang
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.1281-1285 / 2004
  • Humans experience numerous emotions and express and recognize them through various channels, for example the eyes, nose, and mouth. In particular, emotion recognition from facial expressions can be very flexible and robust because it exploits these multiple channels. The hybrid-feature extraction algorithm is modeled on this human process: it combines geometric feature extraction with a color-distribution histogram, and the input emotion is then classified through independent, parallel learning of neural networks. In addition, for natural classification of emotions, an advanced two-dimensional emotion space is introduced and used in this paper; it enables flexible and smooth classification of emotions.
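
As a rough sketch of the hybrid-feature idea, the code below concatenates a handful of geometric landmark distances with a coarse per-channel color histogram into a single input vector. The landmark layout, histogram bin count, and the neural-network stage are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def geometric_features(landmarks):
    """Pairwise distances between a few facial landmarks (N x 2 array).
    Which points correspond to eyes/mouth is an illustrative assumption."""
    n = len(landmarks)
    feats = [np.linalg.norm(landmarks[i] - landmarks[j])
             for i in range(n) for j in range(i + 1, n)]
    return np.array(feats)

def color_histogram(face_rgb, bins=8):
    """Coarse per-channel color-distribution histogram of the face region."""
    hist = [np.histogram(face_rgb[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    return np.concatenate(hist).astype(float) / face_rgb[..., 0].size

def hybrid_feature(landmarks, face_rgb):
    """Concatenate the two feature families into one input vector."""
    return np.concatenate([geometric_features(landmarks),
                           color_histogram(face_rgb)])

if __name__ == "__main__":
    lm = np.array([[30, 40], [70, 40], [50, 80]], dtype=float)  # eyes, mouth (assumed)
    face = (np.random.rand(100, 100, 3) * 255).astype(np.uint8)
    print(hybrid_feature(lm, face).shape)   # 3 distances + 24 histogram bins
```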

Facial Feature Recognition based on ASNMF Method

  • Zhou, Jing; Wang, Tianjiang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.12 / pp.6028-6042 / 2019
  • Since the Sparse Nonnegative Matrix Factorization (SNMF) method can control the sparsity of the decomposed matrices, it can be adopted to control sparsity in facial feature extraction and recognition. To improve the accuracy of SNMF for facial feature recognition, new additive iterative rules based on improved iteration step sizes are proposed, transforming the traditional multiplicative iterative rules of SNMF into additive ones. To further increase the sparsity of the basis matrix decomposed by the improved SNMF method, a threshold-sparsity constraint is applied that drives the basis matrix toward a zero-one matrix, which further improves the accuracy of facial feature recognition. The improved SNMF method based on the additive iterative rules and the threshold-sparsity constraint, abbreviated ASNMF, is applied to the ORL and CK+ facial datasets and achieves recognition rates of 96% and 100%, respectively. The contrast experiments also show that the recognition rate of ASNMF is clearly higher than those of basic NMF, traditional SNMF, convex nonnegative matrix factorization (CNMF), and Deep NMF.
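
The sketch below shows the general flavor of an additive (projected-gradient) NMF update followed by a threshold step that binarizes the basis matrix. The learning rate, threshold, and stopping rule are placeholder choices; the paper's improved step sizes and exact sparsity constraint are not reproduced.

```python
import numpy as np

def additive_nmf(V, rank, iters=500, lr=1e-3, thresh=0.5):
    """Toy NMF with additive (projected-gradient) updates and a final
    threshold step that turns the basis matrix into a zero-one matrix."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        R = W @ H - V                               # residual
        W = np.maximum(W - lr * (R @ H.T), 0.0)     # additive step + projection
        R = W @ H - V                               # refresh residual with new W
        H = np.maximum(H - lr * (W.T @ R), 0.0)
    W_bin = (W >= thresh * W.max()).astype(float)   # zero-one basis (illustrative)
    return W, H, W_bin

if __name__ == "__main__":
    V = np.abs(np.random.default_rng(1).random((20, 15)))
    W, H, W_bin = additive_nmf(V, rank=4)
    print(np.linalg.norm(V - W @ H), int(W_bin.sum()))
```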

The Facial Area Extraction Using Multi-Channel Skin Color Model and The Facial Recognition Using Efficient Feature Vectors (Multi-Channel 피부색 모델을 이용한 얼굴영역추출과 효율적인 특징벡터를 이용한 얼굴 인식)

  • Choi, Gwang-Mi; Kim, Hyeong-Gyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.7 / pp.1513-1517 / 2005
  • In this paper, we model facial skin color more effectively for facial region extraction by using a multi-channel skin color model built from the Hue, Cb, and Cg channels, which are derived from the red, green, and blue channels and remove the brightness component, reflecting the characteristics of skin color. To obtain feature vectors from the facial region and from the edge image extracted with a Haar wavelet in the segmented facial region, we use an efficient higher-order local autocorrelation (HOLA) function with 26 feature vectors. The computed feature vectors are used as training data for face recognition through neural network learning. Simulation results demonstrate that the proposed algorithm improves both recognition rate and speed.
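
A minimal sketch of the multi-channel skin-color idea: Hue, Cb, and a green-chrominance Cg channel are computed from RGB and thresholded into a skin mask. The Cg formula (borrowed from YCoCg) and the threshold ranges are assumptions for illustration and may differ from the paper's model.

```python
import numpy as np

def skin_channels(rgb):
    """Compute Hue, Cb, and a green-chrominance Cg channel from an RGB image.
    Cb follows the JPEG/BT.601 convention; Cg uses the YCoCg definition here."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    diff = np.where(mx > mn, mx - mn, 1.0)
    hue = np.zeros_like(mx)
    hue = np.where(mx == r, (60 * (g - b) / diff) % 360, hue)
    hue = np.where(mx == g, 60 * (b - r) / diff + 120, hue)
    hue = np.where(mx == b, 60 * (r - g) / diff + 240, hue)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cg = 128 + 0.5 * g - 0.25 * r - 0.25 * b
    return hue, cb, cg

def skin_mask(rgb, hue_rng=(0, 50), cb_rng=(77, 127), cg_rng=(110, 135)):
    """Threshold the three chrominance channels; the ranges are illustrative
    guesses, not the thresholds used in the paper."""
    hue, cb, cg = skin_channels(rgb)
    return ((hue >= hue_rng[0]) & (hue <= hue_rng[1]) &
            (cb >= cb_rng[0]) & (cb <= cb_rng[1]) &
            (cg >= cg_rng[0]) & (cg <= cg_rng[1]))

if __name__ == "__main__":
    img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
    print(skin_mask(img).mean())   # fraction of pixels flagged as skin
```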

Facial Feature Extraction by using a Genetic Algorithm (유전자 알고리즘을 이용한 얼굴의 특징점 추출)

  • Kim, Sang-Kyoon; Oh, Seung-Ha; Lee, Myoung-Eun; Park, Soon-Young
    • Proceedings of the IEEK Conference / 1999.06a / pp.1053-1056 / 1999
  • In this paper we propose a facial feature extraction method based on a genetic algorithm. The method uses a facial feature template to model the locations of the eyes and mouth, and a genetic algorithm is employed to find the optimal solution of a fitness function built from invariant moments. Simulation results show that the proposed algorithm can effectively extract facial features from face images with variations in position, size, rotation, and expression.
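
A compact genetic-algorithm skeleton in the spirit of the abstract: candidate template placements (x, y, scale, angle) evolve by selection and mutation under a fitness function. The fitness here is a dummy placeholder; in the paper it would be built from invariant moments of the image region under the template.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params, image):
    """Placeholder fitness: the paper compares invariant moments of the image
    region under the template with a reference; here we use a dummy criterion."""
    x, y, scale, angle = params
    h, w = image.shape
    cx, cy = w / 2, h / 2
    return -((x - cx) ** 2 + (y - cy) ** 2) - 10.0 * (scale - 1.0) ** 2

def genetic_search(image, pop_size=40, gens=60, mut=0.1):
    """Basic GA loop: selection + Gaussian mutation (crossover omitted)."""
    h, w = image.shape
    pop = np.column_stack([rng.uniform(0, w, pop_size),       # x
                           rng.uniform(0, h, pop_size),       # y
                           rng.uniform(0.5, 2.0, pop_size),   # scale
                           rng.uniform(-30, 30, pop_size)])   # angle (deg)
    for _ in range(gens):
        scores = np.array([fitness(p, image) for p in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # selection
        kids = parents[rng.integers(0, len(parents), pop_size - len(parents))]
        kids = kids + rng.normal(0, mut, kids.shape) * [w, h, 1, 30]  # mutation
        pop = np.vstack([parents, kids])
    return pop[np.argmax([fitness(p, image) for p in pop])]

if __name__ == "__main__":
    img = np.zeros((120, 160))
    print(np.round(genetic_search(img), 2))   # best (x, y, scale, angle)
```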

Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun; Park, Seung-Min; Park, Jun-Heong; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.4 / pp.512-517 / 2011
  • In this paper, we propose an Active Shape Model (ASM) and Lucas-Kanade (LK) optical-flow-based method for extracting and analyzing emotional features from facial images. Since the facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models from combinations of landmarks and extract the LK optical flow vectors at each landmark, based on the centre pixels of the motion vector window. The facial emotion features are modelled by combinations of these optical flow vectors, and the emotional state of a facial image can be estimated by a probabilistic estimation technique such as a Bayesian classifier. We also extract the optimal emotional features, those with a high correlation between feature points and emotional states, using common spatial pattern (CSP) analysis in order to improve the efficiency and accuracy of the emotional feature extraction process.
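
Below is a minimal NumPy sketch of the Lucas-Kanade step described above: for a single landmark, spatial and temporal gradients inside a small window are used to solve the least-squares flow equations. The ASM fitting, FACS-based regions, and CSP analysis are outside the scope of this sketch.

```python
import numpy as np

def lk_flow_at_landmark(prev, curr, x, y, win=7):
    """Estimate the optical flow vector at landmark (x, y) between two
    grayscale frames using the classic Lucas-Kanade least-squares solution."""
    half = win // 2
    ys, xs = slice(y - half, y + half + 1), slice(x - half, x + half + 1)
    Iy, Ix = np.gradient(prev.astype(float))        # spatial gradients
    It = curr.astype(float) - prev.astype(float)    # temporal gradient
    A = np.column_stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()])
    b = -It[ys, xs].ravel()
    # Solve (A^T A) v = A^T b; lstsq handles the near-singular case gracefully.
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                                        # (dx, dy)

if __name__ == "__main__":
    yy, xx = np.mgrid[0:50, 0:50].astype(float)
    blob = lambda cx: np.exp(-((xx - cx) ** 2 + (yy - 25.0) ** 2) / 30.0)
    prev, curr = blob(25.0), blob(26.0)     # pattern moves 1 px to the right
    print(np.round(lk_flow_at_landmark(prev, curr, 25, 25), 2))   # approx. (1, 0)
```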

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae; Kim, Eung-Hee; Sohn, Jin-Hun; Kweon, In So
    • The Journal of Korea Robotics Society / v.8 no.4 / pp.266-272 / 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) applications such as face recognition, gaze estimation, and emotion recognition. The Active Shape Model (ASM) is one of the successful generative models for extracting facial features. However, applying ASM alone is not adequate for modeling a face in real applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm, and these inaccurate positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework using ASM and LK optical flow for emotion recognition. LK optical flow is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failures caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
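
As a small illustration of the final classification stage, the sketch below trains k-NN and SVM classifiers (scikit-learn) on synthetic feature vectors standing in for the tracked-landmark features; the data, feature dimension, and cluster layout are synthetic and purely illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tracked-landmark feature vectors: three "emotions",
# each drawn from a different Gaussian cluster (illustration only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 10)) for c in (0.0, 2.0, 4.0)])
y = np.repeat(["joy", "anger", "disgust"], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
```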

Reconstruction from Feature Points of Face through Fuzzy C-Means Clustering Algorithm with Gabor Wavelets (FCM 군집화 알고리즘에 의한 얼굴의 특징점에서 Gabor 웨이브렛을 이용한 복원)

  • 신영숙; 이수용; 이일병; 정찬섭
    • Korean Journal of Cognitive Science / v.11 no.2 / pp.53-58 / 2000
  • This paper reconstructs local regions of a facial expression image from feature points extracted with the FCM (Fuzzy C-Means) clustering algorithm, using Gabor wavelets. Feature extraction from a face is performed in two steps. In the first step, we extract the edges of the main facial components using the average value of the 2-D Gabor wavelet coefficient histogram of the image; in the second step, we extract the final feature points from the extracted edge information using the FCM clustering algorithm. This study shows that the principal components of facial expression images can be reconstructed from only a few feature points extracted by the FCM clustering algorithm. The approach can also be applied to object recognition as well as facial expression recognition.
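
A plain NumPy sketch of the Fuzzy C-Means step used above for picking final feature points: memberships and cluster centres are updated alternately. The Gabor-wavelet edge extraction that produces the candidate points is omitted, and the cluster count below is an arbitrary choice.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, eps=1e-8, seed=0):
    """Plain Fuzzy C-Means: alternate membership and centre updates.
    X is (n_samples, n_features); returns (centres, membership matrix)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centres = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centres[:, None, :], axis=2) + eps
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)
    return centres, U

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 3.0, 6.0)])
    centres, U = fuzzy_c_means(pts, c=3)
    print(np.round(centres, 2))   # three cluster centres near (0,0), (3,3), (6,6)
```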

Facial Feature Extraction Based on Private Energy Map in DCT Domain

  • Kim, Ki-Hyun; Chung, Yun-Su; Yoo, Jang-Hee; Ro, Yong-Man
    • ETRI Journal / v.29 no.2 / pp.243-245 / 2007
  • This letter presents a new feature extraction method based on the private energy map (PEM) technique, which exploits the energy characteristics of a facial image. Compared with a non-facial image, a facial image shows large energy congestion in specific regions of the discrete cosine transform (DCT) coefficients. The PEM is generated from the energy probability of the DCT coefficients of facial images. In experiments, face recognition rates of 100% on the ORL database and 98.8% on the ETRI database were achieved.
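
A rough reading of the PEM idea in code: average the squared 2-D DCT coefficients over a set of face images, normalize them into an energy-probability map, and keep only the highest-energy coefficient positions as features. The keep ratio and image size are illustrative assumptions, not the ETRI formulation.

```python
import numpy as np
from scipy.fft import dctn

def energy_probability_map(face_images, keep_ratio=0.05):
    """Build an energy-probability map over 2-D DCT coefficients from a set of
    equally sized grayscale face images, and return a boolean mask selecting
    the coefficients with the highest average energy."""
    energy = np.zeros(face_images[0].shape)
    for img in face_images:
        coeffs = dctn(img.astype(float), norm="ortho")
        energy += coeffs ** 2
    prob = energy / energy.sum()                     # energy probability
    k = max(1, int(keep_ratio * prob.size))
    thresh = np.sort(prob, axis=None)[-k]
    return prob >= thresh                            # PEM-style coefficient mask

def extract_features(img, mask):
    """Keep only the DCT coefficients selected by the energy map."""
    return dctn(img.astype(float), norm="ortho")[mask]

if __name__ == "__main__":
    faces = [np.random.rand(32, 32) for _ in range(10)]
    mask = energy_probability_map(faces)
    print(extract_features(faces[0], mask).shape)    # ~51 selected coefficients
```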
