• Title/Summary/Keyword: facial feature

517 search results

Real-Time Automatic Human Face Detection and Recognition System Using Skin Colors of Face, Face Feature Vectors and Facial Angle Informations (얼굴피부색, 얼굴특징벡터 및 안면각 정보를 이용한 실시간 자동얼굴검출 및 인식시스템)

  • Kim, Yeong-Il;Lee, Eung-Ju
    • The KIPS Transactions:PartB / v.9B no.4 / pp.491-500 / 2002
  • In this paper, we propose a real-time face detection and recognition system that uses skin color information, geometrical facial feature vectors, and facial angle information extracted from color face images. The proposed algorithm improves face region extraction by combining skin color information in the HSI color space with facial edge information, and improves face recognition by using geometrical feature vectors and facial angles computed from the extracted face region. In experiments, the proposed algorithm achieves better face region extraction and recognition performance than conventional methods.
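As a rough illustration of the skin-color stage described above, the sketch below segments a skin-tone band and combines it with edge information to pick a face-like region. It uses OpenCV's HSV conversion as a stand-in for the paper's HSI coordinates, and all thresholds and the scoring rule are illustrative assumptions rather than the authors' values.

```python
# Minimal sketch: skin-color segmentation plus edge information for face-region extraction.
# HSV is used as a stand-in for HSI; threshold values are assumptions, not the paper's.
import cv2
import numpy as np

def extract_face_region(bgr_image):
    """Return the bounding box (x, y, w, h) of the most face-like skin-colored blob, or None."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Rough skin-tone band in OpenCV's H in [0,179], S/V in [0,255] (assumed values).
    skin_mask = cv2.inRange(hsv, np.array((0, 40, 60)), np.array((25, 180, 255)))
    skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
    # Edge information helps reject flat skin-colored background regions.
    edges = cv2.Canny(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY), 80, 160)
    contours, _ = cv2.findContours(skin_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    def edge_density(c):
        x, y, w, h = cv2.boundingRect(c)
        return edges[y:y + h, x:x + w].mean()

    # Prefer large skin blobs that also contain many edge pixels (eyes, mouth, hairline).
    best = max(contours, key=lambda c: cv2.contourArea(c) * (1.0 + edge_density(c)))
    return cv2.boundingRect(best)
```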

Improvement of Active Shape Model for Detecting Face Features in iOS Platform (iOS 플랫폼에서 Active Shape Model 개선을 통한 얼굴 특징 검출)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology / v.15 no.2 / pp.61-65 / 2016
  • Facial feature detection is a fundamental task in computer vision applications such as security, biometrics, 3D modeling, and face recognition. Among the many algorithms for this task, the Active Shape Model (ASM) is one of the most popular local texture models. This paper addresses issues related to face detection and implements an efficient algorithm for extracting facial feature points on the iOS platform. We extend the original ASM algorithm and improve its performance with four modifications. First, to detect a face and initialize the shape model, we apply the face detection API provided by the iOS CoreImage framework. Second, we construct a weighted local structure model for landmarks in order to utilize the edge points of the face contour. Third, we build a modified model definition that fits more landmarks than the classical ASM. Finally, we extend the profile model to two dimensions for detecting faces within input images. The proposed algorithm is evaluated on a test set containing over 500 face images and successfully extracts facial feature points, clearly outperforming the original ASM.
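The local-search step that the paper weights can be illustrated with a minimal sketch of classical ASM landmark fitting: candidate positions along the contour normal are scored by an (optionally weighted) Mahalanobis distance to a trained profile model. The variable names and the scalar weight below are assumptions; the paper's two-dimensional profile model and iOS-specific initialization are not reproduced.

```python
# Simplified ASM local-search step: pick the candidate profile along the landmark's
# normal that minimizes a weighted Mahalanobis distance to the trained profile model.
import numpy as np

def search_landmark(profile_samples, mean_profile, inv_cov, weight=1.0):
    """profile_samples: (num_candidates, profile_len) gradient profiles sampled along the normal.
    mean_profile / inv_cov: trained mean and inverse covariance of the landmark's profile.
    Returns the index of the best candidate position."""
    diffs = profile_samples - mean_profile                      # (num_candidates, profile_len)
    costs = np.einsum('ij,jk,ik->i', diffs, inv_cov, diffs)     # squared Mahalanobis distances
    return int(np.argmin(weight * costs))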

Face Recognition Method using Geometric Feature and PCA/LDA in Wavelet Domain (웨이브릿 영역에서 기하학적 특징과 PCA/LDA를 사용한 얼굴 인식 방법)

  • 송영준;김영길
    • The Journal of the Korea Contents Association / v.4 no.3 / pp.107-113 / 2004
  • This paper improves the performance of a face recognition system by combining a PCA/LDA hybrid method with facial geometric features in the wavelet domain. Because previous PCA/LDA methods measure similarity only in terms of statistical dispersion, they cannot reflect facial boundaries exactly. To remedy this defect, this paper proposes a method that uses the distance between the eyes and the mouth. If the difference between the distances measured on the query image and a training image exceeds a given threshold, the method reorders the candidate images according to energy feature vectors of the eyes, nose, and chin. To evaluate the performance of the proposed method, computer simulations were performed with 400 facial images from the ORL database. The results show that our method improves the recognition rate by about 4% over the previous PCA/LDA method.
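A rough sketch of the wavelet-domain PCA/LDA stage is given below: the low-frequency (LL) subband of each face image is used as the input feature, projected by PCA and then LDA. The library choices (PyWavelets, scikit-learn), the Haar wavelet, and the component count are illustrative assumptions; the eye-mouth distance check and candidate reordering described in the abstract are not coded here.

```python
# Sketch of PCA/LDA in the wavelet domain: features come from the LL subband.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ll_subband(face, level=2):
    """face: 2-D grayscale image -> flattened low-frequency (LL) subband."""
    coeffs = pywt.wavedec2(face, 'haar', level=level)
    return coeffs[0].ravel()

def train_pca_lda(faces, labels, n_pca=50):
    """faces: iterable of 2-D images; n_pca must not exceed the number of training images."""
    X = np.stack([ll_subband(f) for f in faces])
    pca = PCA(n_components=n_pca).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
    return pca, lda
```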


A Noisy-Robust Approach for Facial Expression Recognition

  • Tong, Ying;Shen, Yuehong;Gao, Bin;Sun, Fenggang;Chen, Rui;Xu, Yefeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.4 / pp.2124-2148 / 2017
  • Accurate facial expression recognition (FER) requires reliable signal filtering and effective feature extraction. Considering these requirements, this paper presents a novel approach for FER which is robust to noise. The main contributions of this work are: First, to preserve texture details in facial expression images and remove image noise, we improved the anisotropic diffusion filter by adjusting the diffusion coefficient according to two factors, namely, the gray value difference between the object and the background and the gradient magnitude of the object. The improved filter can effectively distinguish facial muscle deformation from facial noise in face images. Second, to further improve robustness, we propose a new feature descriptor based on a combination of the Histogram of Oriented Gradients with the Canny operator (Canny-HOG), which can represent the precise deformation of eyes, eyebrows, and lips for FER. Third, Canny-HOG's block and cell sizes are adjusted to reduce feature dimensionality and make the classifier less prone to overfitting. Our method was tested on images from the JAFFE and CK databases. Experimental results in L-O-Sam-O and L-O-Sub-O modes demonstrated the effectiveness of the proposed method. Moreover, the recognition rate of this method is not significantly affected in the presence of Gaussian or salt-and-pepper noise.
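The filtering and descriptor steps can be illustrated with the classical Perona-Malik anisotropic diffusion followed by a HOG descriptor computed on the Canny edge map (the "Canny-HOG" idea). Note that the paper's improved filter additionally adapts the diffusion coefficient to the object/background gray difference, which this plain sketch does not do; all parameters below are illustrative assumptions.

```python
# Classical Perona-Malik diffusion + HOG over the Canny edge map (a plain Canny-HOG sketch).
import cv2
import numpy as np
from skimage.feature import hog

def anisotropic_diffusion(gray, n_iter=15, kappa=30.0, lam=0.2):
    """Classical 4-neighbour Perona-Malik smoothing (np.roll wraps at borders; fine for a sketch)."""
    img = gray.astype(np.float64)
    for _ in range(n_iter):
        dN = np.roll(img, -1, axis=0) - img
        dS = np.roll(img, 1, axis=0) - img
        dE = np.roll(img, -1, axis=1) - img
        dW = np.roll(img, 1, axis=1) - img
        # Exponential conduction coefficients: small where gradients are large (edges preserved).
        cN, cS = np.exp(-(dN / kappa) ** 2), np.exp(-(dS / kappa) ** 2)
        cE, cW = np.exp(-(dE / kappa) ** 2), np.exp(-(dW / kappa) ** 2)
        img += lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return img

def canny_hog(gray):
    """HOG descriptor computed on the Canny edge map of the diffused image."""
    smoothed = np.clip(anisotropic_diffusion(gray), 0, 255).astype(np.uint8)
    edges = cv2.Canny(smoothed, 50, 150)
    return hog(edges, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
```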

Video-based Facial Emotion Recognition using Active Shape Models and Statistical Pattern Recognizers (Active Shape Model과 통계적 패턴인식기를 이용한 얼굴 영상 기반 감정인식)

  • Jang, Gil-Jin;Jo, Ahra;Park, Jeong-Sik;Seo, Yong-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.3 / pp.139-146 / 2014
  • This paper proposes an efficient method for automatically distinguishing various facial expressions. To recognize emotions from facial expressions, facial images are obtained with digital cameras and a number of feature points are extracted. The extracted feature points are then transformed into 49-dimensional feature vectors that are robust to scale and translation variations, and the facial emotions are recognized by statistical pattern classifiers such as Naive Bayes, MLP (multi-layer perceptron), and SVM (support vector machine). In experiments with 5-fold cross validation, SVM performed best among the classifiers, achieving 50.8% accuracy for six-emotion classification and 78.0% for three emotions.
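A minimal sketch of the classification stage follows: landmark coordinates are normalized for translation and scale and fed to an SVM. The centroid-and-RMS normalization and the RBF kernel below are assumptions; the paper's exact 49-dimensional construction is not reproduced.

```python
# Sketch: scale/translation-invariant landmark vectors classified by an SVM.
import numpy as np
from sklearn.svm import SVC

def normalize_landmarks(points):
    """points: (N, 2) facial landmark coordinates -> flattened, translation/scale-normalized vector."""
    centered = points - points.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())  # RMS distance from the centroid
    return (centered / scale).ravel()

def train_emotion_svm(landmark_sets, labels):
    """landmark_sets: iterable of (N, 2) arrays; labels: emotion class per sample."""
    X = np.stack([normalize_landmarks(p) for p in landmark_sets])
    return SVC(kernel='rbf').fit(X, labels)
```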

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.562-567 / 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human Robot Interface) and HCI (Human Computer Interaction). By using facial expressions, a system can produce reactions that correspond to the emotional state of the user, and service agents such as intelligent robots can infer which services are suitable to provide. This article addresses expressive face modeling using an advanced Active Appearance Model (AAM) for facial emotion recognition. We consider the six universal emotion categories defined by Ekman. In the human face, emotions are most widely expressed by the eyes and mouth, so recognizing emotion from a facial image requires extracting feature points such as Ekman's Action Units (AUs). The AAM is one of the most commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the setting of the model's initial parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian Network. First, we obtain the reconstructive parameters of a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the new image, and calculate the initial parameters of the AAM from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the model parameters. Finally, after several iterations, we obtain a model matched to the facial feature outline and use it to recognize the facial emotion with a Bayesian Network.
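The final inference step can be illustrated with a toy Bayesian network of naive-Bayes structure over binary Action Unit activations: the posterior over emotions is proportional to the prior times the product of per-AU conditional probabilities. The conditional probability table, AU set, and network structure here are placeholders, not the paper's learned model.

```python
# Toy naive-Bayes-structured network: P(emotion | AUs) ∝ P(emotion) * prod_i P(AU_i | emotion).
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def infer_emotion(au_active, prior, cpt):
    """au_active: (n_aus,) 0/1 vector of detected Action Units.
    prior: (6,) prior probabilities over emotions.
    cpt: (6, n_aus) table of P(AU_i = 1 | emotion)."""
    likelihood = np.where(au_active == 1, cpt, 1.0 - cpt)        # (6, n_aus) per-AU likelihoods
    log_post = np.log(prior) + np.log(likelihood).sum(axis=1)    # unnormalized log posterior
    return EMOTIONS[int(np.argmax(log_post))]
```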

Development of Facial Emotion Recognition System Based on Optimization of HMM Structure by using Harmony Search Algorithm (Harmony Search 알고리즘 기반 HMM 구조 최적화에 의한 얼굴 정서 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.3 / pp.395-400 / 2011
  • In this paper, we propose a study of facial emotion recognition that considers the dynamic variation of emotional state across facial image sequences. The proposed system consists of two main steps: emotional feature extraction from facial images, and emotional state classification/recognition. First, we propose a method for extracting and analyzing the emotional feature region using a combination of the Active Shape Model (ASM) and Facial Action Units (FAUs). We then propose an emotional state classification and recognition method based on the Hidden Markov Model (HMM), a type of dynamic Bayesian network. We also adopt a Harmony Search (HS) based heuristic optimization procedure for the parameter learning of the HMM in order to classify emotional states more accurately. Using these methods, we construct an emotion recognition system based on the variations in dynamic facial image sequences and attempt to improve recognition performance.
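The structure-optimization idea can be sketched as a generic Harmony Search loop over a candidate HMM structure (for example, the number of hidden states), where the fitness callback is assumed to train an HMM with that structure and return a validation log-likelihood (e.g., via a library such as hmmlearn). The HMCR/PAR settings and bounds below are illustrative, not the paper's.

```python
# Generic Harmony Search over an integer design variable (e.g., the number of HMM states).
# `fitness` is an assumed callback: it should train/evaluate an HMM and return a score to maximize.
import numpy as np

def harmony_search(fitness, low, high, hm_size=10, hmcr=0.9, par=0.3, iters=200, rng=None):
    rng = rng or np.random.default_rng(0)
    memory = rng.integers(low, high + 1, size=hm_size)           # harmony memory of candidates
    scores = np.array([fitness(int(x)) for x in memory])
    for _ in range(iters):
        if rng.random() < hmcr:                                   # memory consideration
            cand = int(rng.choice(memory))
            if rng.random() < par:                                # pitch adjustment
                cand = int(np.clip(cand + rng.choice([-1, 1]), low, high))
        else:                                                     # random selection
            cand = int(rng.integers(low, high + 1))
        score = fitness(cand)
        worst = int(np.argmin(scores))
        if score > scores[worst]:                                 # replace the worst harmony
            memory[worst], scores[worst] = cand, score
    return int(memory[int(np.argmax(scores))])                    # best structure found
```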

Feature-Point Extraction by Dynamic Linking Model based on Gabor Wavelets and Fuzzy C-Means Clustering Algorithm (Gabor 웨이브렛과 FCM 군집화 알고리즘에 기반한 동적 연결모형에 의한 얼굴표정에서 특징점 추출)

  • Sin, Yeong Suk
    • Korean Journal of Cognitive Science / v.14 no.1 / pp.10-10 / 2003
  • This paper extracts the edges of the main facial components from facial expression images using the Gabor wavelet transform. The FCM (Fuzzy C-Means) clustering algorithm then extracts low-dimensional representative feature points from the edges extracted in the neutral face. The feature points of the neutral face are used as a template to extract the feature points of facial expression images. The point-to-point matching of feature points on an expression face against the feature points on the neutral face uses a dynamic linking model and consists of two steps, called coarse mapping and fine mapping. This paper thus presents an automatic extraction of feature points by a dynamic linking model based on Gabor wavelets and the fuzzy C-means (FCM) algorithm. The results of this study were applied to extract features automatically in dimension-based facial expression recognition [1].
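A compact sketch of the feature-point stage is given below: a small Gabor filter bank highlights edges of the main facial components, and a short fuzzy C-means loop clusters the strong-response pixel coordinates into representative points. Filter parameters, the response threshold, and the number of clusters are assumptions; the dynamic-linking (coarse/fine mapping) stage is not reproduced.

```python
# Gabor filter bank for edge responses + a minimal fuzzy C-means loop over edge-pixel coordinates.
import cv2
import numpy as np

def gabor_edge_points(gray, thresh_quantile=0.98):
    """Return (x, y) coordinates of pixels with strong Gabor responses (4 orientations)."""
    response = np.zeros(gray.shape, dtype=np.float64)
    for theta in np.arange(0, np.pi, np.pi / 4):
        kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
        response = np.maximum(response, cv2.filter2D(gray.astype(np.float64), -1, kern))
    ys, xs = np.where(response > np.quantile(response, thresh_quantile))
    return np.stack([xs, ys], axis=1).astype(np.float64)

def fuzzy_c_means(points, c=20, m=2.0, iters=50):
    """Cluster (N, 2) points into c representative feature points (requires N >= c)."""
    rng = np.random.default_rng(0)
    centers = points[rng.choice(len(points), c, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-9  # (N, c)
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)                 # fuzzy memberships
        w = u ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]  # weighted cluster centers
    return centers                                         # representative feature points
```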

Real Time Face Detection and Recognition using Rectangular Feature based Classifier and Class Matching Algorithm (사각형 특징 기반 분류기와 클래스 매칭을 이용한 실시간 얼굴 검출 및 인식)

  • Kim, Jong-Min;Kang, Myung-A
    • The Journal of the Korea Contents Association / v.10 no.1 / pp.19-26 / 2010
  • This paper proposes a classifier based on rectangular features to detect faces in real time. The goal is a robust detection algorithm that satisfies both computational efficiency and detection performance. The proposed algorithm consists of three stages: feature creation, classifier learning, and real-time facial region detection. Feature creation organizes a feature set from the proposed five rectangular features and computes the feature values efficiently using SAT (Summed-Area Tables). Classifier learning builds classifiers hierarchically using the AdaBoost algorithm and achieves excellent detection performance by repeatedly applying important face patterns at the next level. Real-time facial region detection finds facial regions rapidly and efficiently with the classifiers built on the rectangular features. In addition, the recognition rate is improved by using the detected face region as the input image and by applying PCA and KNN with a class-to-class rather than the existing point-to-point matching technique.
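The SAT (summed-area table) trick at the core of the feature-creation stage can be sketched as follows: after one pass to build the integral image, any rectangle sum costs four lookups, so a rectangular feature is a handful of additions. The two-rectangle feature shown is just an example; the paper's five feature types and the AdaBoost cascade are not reproduced.

```python
# Summed-area table (integral image) and constant-time rectangle sums.
import numpy as np

def integral_image(gray):
    """SAT with a zero-padded first row/column so rectangle lookups need no bounds checks."""
    sat = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    sat[1:, 1:] = gray.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return sat

def rect_sum(sat, x, y, w, h):
    """Sum of pixels in the w x h rectangle whose top-left corner is (x, y): four lookups."""
    return int(sat[y + h, x + w] - sat[y, x + w] - sat[y + h, x] + sat[y, x])

def two_rect_feature(sat, x, y, w, h):
    """Example rectangular feature: left half minus right half (responds to vertical edges)."""
    half = w // 2
    return rect_sum(sat, x, y, half, h) - rect_sum(sat, x + half, y, half, h)
```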

Facial Region Extraction in an Infrared Image (적외선 영상에서의 얼굴 영역 자동 추적)

  • Shin, S.W.;Kim, K.S.;Yoon, T.H.;Han, M.H.;Kim, I.Y.
    • Proceedings of the KIEE Conference / 2005.05a / pp.57-59 / 2005
  • In our study, an automatic tracking algorithm for the human face is proposed that utilizes the thermal properties and second-moment geometrical features of an infrared image. First, facial candidates are estimated by restricting the thermal values to a certain range, and then a spurious-blob cleaning algorithm is applied to track the refined facial region in the infrared image.
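A brief sketch of the described pipeline: restrict the thermal image to a plausible skin-temperature band, clean small spurious blobs, and keep the component whose area and second-moment (elongation) statistics look face-like. The temperature band, area threshold, and scoring rule are assumptions, not the authors' settings.

```python
# Thermal-band thresholding + spurious-blob cleaning + second-moment selection of the face region.
import cv2
import numpy as np

def track_face_region(thermal, t_low=30.0, t_high=38.0, min_area=200):
    """thermal: 2-D array of temperature values. Returns (x, y, w, h) of the best blob, or None."""
    mask = ((thermal >= t_low) & (thermal <= t_high)).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    best, best_score = None, -1.0
    for i in range(1, n):                             # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area:     # spurious-blob cleaning
            continue
        m = cv2.moments((labels == i).astype(np.uint8), binaryImage=True)
        # Second central moments give an elongation ratio; faces are only mildly elongated.
        elongation = (m['mu20'] + 1e-9) / (m['mu02'] + 1e-9)
        score = stats[i, cv2.CC_STAT_AREA] * np.exp(-abs(np.log(elongation)))
        if score > best_score:
            best_score = score
            best = (stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP],
                    stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT])
    return best
```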
