• Title/Summary/Keyword: face feature extraction

249 search results

PCA-based Feature Extraction using Class Information (클래스 정보를 이용한 PCA 기반의 특징 추출)

  • Park, Myoung-Soo; Na, Jin-Hee; Choi, Jin-Young
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.4 / pp.492-497 / 2005
  • Feature extraction is important for classifying data of large dimensionality, such as image data. Representative feature extraction methods include PCA, ICA, LDA, and MLP. These algorithms fall into two groups: unsupervised algorithms such as PCA and ICA, and supervised algorithms such as LDA and MLP. Of the two groups, supervised algorithms are more suitable for extracting features for classification because they use the class information of the input data. In this paper we propose a new feature extraction algorithm, PCA-FX, which uses class information together with PCA to extract features for classification. We test our algorithm on the Yale face database and compare the performance of the proposed algorithm with that of other algorithms.
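
As a rough illustration of the comparison being made, the minimal scikit-learn sketch below contrasts unsupervised PCA features with a supervised PCA+LDA combination on random stand-in data (the Yale faces are not used here); it does not reproduce the class-weighted PCA-FX projection itself.

```python
# Baseline comparison only: unsupervised PCA features vs. a supervised
# PCA + LDA projection, each followed by a 1-NN classifier. Data are random
# stand-ins for flattened face images; PCA-FX itself is not implemented here.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_train = rng.normal(size=(120, 64 * 64))      # 120 fake 64x64 face images
y_train = np.repeat(np.arange(15), 8)          # 15 subjects, 8 images each

unsupervised = make_pipeline(PCA(n_components=30), KNeighborsClassifier(1))
supervised = make_pipeline(PCA(n_components=30),
                           LinearDiscriminantAnalysis(),
                           KNeighborsClassifier(1))
for name, model in (("PCA", unsupervised), ("PCA+LDA", supervised)):
    model.fit(X_train, y_train)
    print(name, "training accuracy:", model.score(X_train, y_train))
```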

Reconstruction from Feature Points of Face through Fuzzy C-Means Clustering Algorithm with Gabor Wavelets (FCM 군집화 알고리즘에 의한 얼굴의 특징점에서 Gabor 웨이브렛을 이용한 복원)

  • 신영숙; 이수용; 이일병; 정찬섭
    • Korean Journal of Cognitive Science / v.11 no.2 / pp.53-58 / 2000
  • This paper reconstructs local regions of a facial expression image from feature points extracted with the FCM (Fuzzy C-Means) clustering algorithm and Gabor wavelets. Feature extraction from the face proceeds in two steps. In the first step, we extract the edges of the main components of the face using the average value of the 2-D Gabor wavelet coefficient histogram of the image; in the next step, we extract the final feature points from the extracted edge information using the FCM clustering algorithm. This study shows that the principal components of facial expression images can be reconstructed from only a few feature points extracted by the FCM clustering algorithm. The approach can also be applied to object recognition as well as facial expression recognition.

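A rough sketch of this two-stage idea, using only NumPy and SciPy: a single Gabor filter highlights edge-like responses, and a small fuzzy C-means implementation condenses the strong responses into a handful of feature points. The filter parameters, threshold, and cluster count are arbitrary illustrative choices, and a random image stands in for a face.

```python
# Stage 1: Gabor filtering to highlight facial edges.
# Stage 2: fuzzy C-means over the strong responses to obtain feature points.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def fuzzy_c_means(points, c=10, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(points), c))               # random initial memberships
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ points) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / (d ** (2 / (m - 1)))             # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers

image = np.random.rand(64, 64)                     # stand-in for a face image
response = np.abs(convolve(image, gabor_kernel(0.2, np.pi / 4)))
edge_points = np.argwhere(response > response.mean())   # crude response threshold
feature_points = fuzzy_c_means(edge_points.astype(float))
print(feature_points.round(1))
```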

Global Feature Extraction and Recognition from Matrices of Gabor Feature Faces

  • Odoyo, Wilfred O.; Cho, Beom-Joon
    • Journal of information and communication convergence engineering / v.9 no.2 / pp.207-211 / 2011
  • This paper presents a method for facial feature representation and recognition based on the covariance matrices of Gabor-filtered images. Gabor filters are a very powerful tool for processing images: they respond to different local orientations and wave numbers around points of interest, especially the local features of the face. This is a very useful attribute for extracting special features around facial components such as the eyebrows, eyes, mouth, and nose. The covariance matrices computed on Gabor-filtered faces are adopted as the feature representation for face recognition. The geodesic distance is used as the matching measure and is preferred for its global consistency over other measures; it takes into account the position of the data points in addition to the geometric structure of the given face images. The proposed method is invariant and robust under rotation, pose change, and boundary distortion. Tests run on random images and on the publicly available JAFFE and FRAV3D face recognition databases yield impressively high recognition rates.
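
The descriptor-and-metric idea can be sketched compactly as below, assuming NumPy, SciPy, and scikit-image; the Gabor bank, regularisation constant, and image size are illustrative choices rather than the paper's exact setup.

```python
# Covariance descriptor over Gabor responses, compared with the
# affine-invariant (geodesic) distance between SPD matrices.
import numpy as np
from scipy.linalg import eigvalsh
from skimage.filters import gabor

def covariance_descriptor(image):
    # Per-pixel feature vector: intensity plus magnitudes of a small Gabor bank.
    feats = [image.ravel()]
    for freq in (0.1, 0.2):
        for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(image, frequency=freq, theta=theta)
            feats.append(np.hypot(real, imag).ravel())
    F = np.stack(feats, axis=1)                       # (n_pixels, n_features)
    return np.cov(F, rowvar=False) + 1e-6 * np.eye(F.shape[1])

def geodesic_distance(A, B):
    # Affine-invariant Riemannian distance between SPD matrices:
    # sqrt(sum of squared logs of the generalized eigenvalues of (B, A)).
    lam = eigvalsh(B, A)
    return np.sqrt(np.sum(np.log(lam) ** 2))

face1 = np.random.rand(48, 48)   # stand-ins for cropped face images
face2 = np.random.rand(48, 48)
print(geodesic_distance(covariance_descriptor(face1), covariance_descriptor(face2)))
```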

Face Recognition Using Feature Information and Neural Network

  • Chung, Jae-Mo; Bae, Hyeon; Kim, Sung-Shin
    • Proceedings of the Institute of Control, Robotics and Systems Conference (제어로봇시스템학회 학술대회논문집) / 2001.10a / pp.55.2-55 / 2001
  • Statistical feature extraction and neural networks are proposed to recognize a human face. In the preprocessing step, a normalized skin color map based on Gaussian functions is employed to extract the face candidate region, and the feature information in this region is used to detect the face region. In the recognition step, as a test, 360 images of 30 persons are trained with the backpropagation algorithm. The images of each person are obtained under various directions, poses, and facial expressions. The input variables of the neural network are the feature information obtained from the eigenface space. The simulation results on the 30 persons show that the proposed method yields high recognition rates.

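A minimal sketch of the recognition stage only, assuming scikit-learn: face images are projected onto an eigenface (PCA) space and the projections train a backpropagation network (an MLP). The skin-colour preprocessing stage is not reproduced, and random data stand in for the 360 images.

```python
# Eigenface projection followed by a backpropagation-trained MLP classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
faces = rng.random((360, 32 * 32))          # stand-in for 360 face images
labels = np.repeat(np.arange(30), 12)       # 30 persons, 12 images each

pca = PCA(n_components=40).fit(faces)       # eigenface basis
codes = pca.transform(faces)                # eigenface coefficients as NN inputs
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(codes, labels)
print("training accuracy:", net.score(codes, labels))
```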

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung; Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.159-170 / 2009
  • It is very important to capture a face and extract its expression data from video for online 3D face animation. Recently, there has been much research on vision-based approaches that capture the expression of an actor in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and traces a face and its expression data from real-time video input. Our system consists of three steps: face detection, facial feature extraction, and feature tracing. In face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas according to the FAPs defined in MPEG-4. We then trace the displacement of the extracted features across consecutive frames using a color probabilistic distribution model. Experiments showed that our system can trace expression data at about 8 fps.
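
The first two stages can be sketched roughly with OpenCV as below. The YCrCb thresholds are common textbook heuristics rather than the paper's values, the input file name is hypothetical, and the FAP-based feature tracking stage is omitted.

```python
# Skin-colour masking in YCrCb followed by Haar-cascade face verification.
import cv2
import numpy as np

frame = cv2.imread("frame.png")               # hypothetical captured video frame
if frame is None:                             # fall back to a blank stand-in frame
    frame = np.zeros((240, 320, 3), dtype=np.uint8)

ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray):
    # Accept the detection only if it overlaps enough skin-coloured pixels.
    if skin_mask[y:y + h, x:x + w].mean() > 0.4 * 255:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.png", frame)
```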

Implementation of Face Recognition System Using Neural Network

  • Gi, Jung-Hun; Yong, Kuc-Tae
    • Proceedings of the Institute of Control, Robotics and Systems Conference (제어로봇시스템학회 학술대회논문집) / 2001.10a / pp.169.2-169 / 2001
  • In this paper, we propose a face recognition system using a neural network. A difficult step in constructing a complete recognition system is feature extraction from the face image, and a key point is the design of the matching function that relates the set of feature values to the appropriate face candidates. We use length and angle values, extracted from the face image and normalized to the range [0, 1], as feature values. These feature values are applied to the input layer of the neural network, and the multilayer perceptron learns and produces the output result. By using the neural network we do not need to design the matching function, which may be considerably nonlinear and would be ...

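A small sketch of the feature encoding described above, with hypothetical landmark coordinates: pairwise lengths and angles between facial points are computed and min-max normalised to [0, 1] before being fed to the network's input layer.

```python
# Length/angle feature values from hypothetical facial landmarks, scaled to [0, 1].
import numpy as np

landmarks = {                                   # hypothetical 2-D facial points
    "left_eye": (30.0, 40.0), "right_eye": (70.0, 40.0),
    "nose": (50.0, 60.0), "mouth": (50.0, 80.0),
}
pts = np.array(list(landmarks.values()))

lengths = [np.linalg.norm(pts[j] - pts[i])
           for i in range(len(pts)) for j in range(i + 1, len(pts))]
angles = [np.arctan2(*(pts[j] - pts[i])[::-1])          # angle of segment i -> j
          for i in range(len(pts)) for j in range(i + 1, len(pts))]
features = np.array(lengths + angles)
features = (features - features.min()) / (features.max() - features.min())
print(features)   # values in [0, 1], ready for the network's input layer
```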

Face Extraction using Genetic Algorithm, Stochastic Variable and Geometrical Model (유전 알고리즘, 통계적 변수, 기하학적 모델에 의한 얼굴 영역 추출)

  • 이상진; 홍준표; 이종실; 홍승홍
    • Proceedings of the IEEK Conference / 1998.10a / pp.891-894 / 1998
  • This paper introduces an automatic face region extraction method. The method consists of two parts: face region extraction and extraction of the facial organs, namely the eyes, eyebrows, nose, and mouth. In the first stage, we use genetic algorithms (GAs) to find the face region in a complex background. In the second stage, we use a geometrical face model to extract the eyes, eyebrows, nose, and mouth. In both stages, a stochastic variable is used to deal with problems caused by bad lighting conditions; according to this value, the amount of blurring is determined. The average computation time is less than 1 second, and with this method we can extract facial features efficiently from images taken under different lighting conditions.

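A toy sketch of the first (GA) stage, assuming a binary skin-likelihood map: candidate face rectangles are evolved by selection, uniform crossover, and mutation, scored by the fraction of skin-like pixels they enclose. The fitness function, operators, and parameters are simplified placeholders, not the paper's formulation.

```python
# Toy genetic algorithm searching for a face rectangle in a skin-likelihood map.
import numpy as np

rng = np.random.default_rng(0)
skin = rng.random((120, 160)) > 0.7          # stand-in binary skin map
H, W = skin.shape

def fitness(rect):
    x, y, w, h = rect
    patch = skin[y:y + h, x:x + w]
    return patch.mean() if patch.size else 0.0

def random_rect():
    w, h = rng.integers(20, 60), rng.integers(20, 60)
    return np.array([rng.integers(0, W - w), rng.integers(0, H - h), w, h])

population = [random_rect() for _ in range(40)]
for _ in range(30):                                          # generations
    parents = sorted(population, key=fitness, reverse=True)[:10]
    children = []
    for _ in range(30):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(4) < 0.5, a, b)          # uniform crossover
        child += rng.integers(-5, 6, size=4)                 # mutation
        child = np.clip(child, [0, 0, 10, 10], [W - 11, H - 11, 60, 60])
        children.append(child)
    population = parents + children

print("best face region (x, y, w, h):", max(population, key=fitness))
```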

Face Pose Estimation using Stereo Image (스테레오 영상을 이용한 얼굴 포즈 추정)

  • So, In-Mi; Kang, Sun-Kyung; Kim, Young-Un; Lee, Chi-Geun; Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.11 no.3 / pp.151-159 / 2006
  • In this paper, we present a method for estimating the face pose from two camera images. First, it finds corresponding facial feature points of the eyebrows, eyes, and lips in the two images. After that, it computes the three-dimensional locations of the facial feature points using the triangulation method of stereo vision. Next, it forms a triangle from the extracted facial feature points and computes the surface normal vector of the triangle, which represents the direction of the face. We applied the computed face pose to drive a 3D face model. The experimental results show that the proposed method extracts the correct face pose.

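A minimal sketch of the pose computation, assuming OpenCV: three corresponding feature points are triangulated from an idealised rectified stereo pair, and the normal of the resulting triangle is taken as the face direction. The camera matrices and point coordinates are made-up placeholders, not a real calibration.

```python
# Stereo triangulation of three facial points and the triangle's surface normal.
import cv2
import numpy as np

# Idealised rectified stereo pair: identity intrinsics, 0.1 m baseline.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float64)
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])]).astype(np.float64)

# Corresponding image points (eyebrow/eye/lip) in the left and right views.
pts_left = np.array([[0.10, 0.20, 0.15], [0.05, 0.05, 0.25]])   # 2 x 3
pts_right = np.array([[0.05, 0.15, 0.10], [0.05, 0.05, 0.25]])  # 2 x 3

hom = cv2.triangulatePoints(P1, P2, pts_left, pts_right)        # 4 x 3 homogeneous
points_3d = (hom[:3] / hom[3]).T                                # three 3-D points

# Face direction = normal of the triangle spanned by the three points.
normal = np.cross(points_3d[1] - points_3d[0], points_3d[2] - points_3d[0])
normal /= np.linalg.norm(normal)
print("estimated face direction:", normal)
```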

Face Recognition Using Knowledge-Based Feature Extraction and Back-Propagation Algorithm (지식에 기초한 특정추출과 역전파 알고리즘에 의한 얼굴인식)

  • 이상영; 함영국; 박래홍
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.7 / pp.119-128 / 1994
  • In this paper, we propose a facial feature extraction method and a recognition algorithm using neural networks. First, we extract the face part from the background image based on the knowledge that it is located at the center of the input image and that the background is homogeneous. Then, using vertical and horizontal projections, we extract features from the separated face image with a knowledge base of human faces. In the recognition step we use the backpropagation algorithm of neural networks, and in the learning step we vary the learning and momentum rates to reduce the computation time. Our technique correctly recognizes 6 women and 14 men.

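The projection-based separation step can be sketched as below, assuming a roughly centred dark face region on a bright homogeneous background: row and column intensity projections locate the face boundaries before features are extracted.

```python
# Locating the face bounding box from row/column intensity projections.
import numpy as np

image = np.full((100, 100), 0.9)             # bright homogeneous background
image[25:75, 30:70] = 0.3                    # darker stand-in face region

row_proj = image.mean(axis=1)                # mean intensity of each row
col_proj = image.mean(axis=0)                # mean intensity of each column

rows = np.where(row_proj < row_proj.mean())[0]   # rows darker than average
cols = np.where(col_proj < col_proj.mean())[0]   # columns darker than average
top, bottom = rows.min(), rows.max()
left, right = cols.min(), cols.max()
print("face bounding box:", (top, bottom, left, right))   # (25, 74, 30, 69)
```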

Sasang Constitution Classification System by Morphological Feature Extraction of Facial Images

  • Lee, Hye-Lim; Cho, Jin-Soo
    • Journal of the Korea Society of Computer and Information / v.20 no.8 / pp.15-21 / 2015
  • This study proposes a Sasang constitution classification system that can increase the objectivity and reliability of Sasang constitution diagnosis using frontal face images, in order to address the subjectivity of classifications based on Sasang constitution specialists' experience. For classification, characteristics describing the shapes of the eyes, nose, mouth, and chin were defined, and these characteristics were extracted through morphological statistical analysis of the face images. Sasang constitution was then classified with an SVM (Support Vector Machine) classifier using the extracted characteristics as input; in the experiments, the proposed system achieved a correct recognition rate of 93.33%. Unlike existing systems that require characteristic points to be designated manually, this system achieved a high recognition rate and is therefore expected to be useful as a more objective Sasang constitution classification system.
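
A minimal sketch of the classification stage, assuming scikit-learn: shape features measured from the face (random stand-ins here, with hypothetical column meanings) are fed to an SVM that predicts one of the four constitution types.

```python
# SVM classification of morphological face features into four constitution types.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical columns: eye width ratio, nose length ratio, mouth width ratio, chin angle.
X = rng.normal(size=(200, 4))
y = rng.integers(0, 4, size=200)             # Taeyang, Taeeum, Soyang, Soeum

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X[:150], y[:150])
print("held-out accuracy:", model.score(X[150:], y[150:]))
```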