• Title/Summary/Keyword: Training Face Image


Real-Time Face Recognition Based on Subspace and LVQ Classifier (부분공간과 LVQ 분류기에 기반한 실시간 얼굴 인식)

  • Kwon, Oh-Ryun; Min, Kyong-Pil; Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.8 no.3 / pp.19-32 / 2007
  • This paper presents a new face recognition method based on an LVQ neural network for building a real-time face recognition system. Previous approaches that combined PCA or LDA with a neural network usually require long training times, whereas the supervised LVQ network needs much less training time and can maximize the separability between classes. In the proposed method, the input face image is transformed sequentially by PCA and LDA into low-dimensional feature vectors, and the face is recognized with the LVQ network. To make the system robust to external lighting variation, light compensation is performed on the detected face by max-min normalization as a preprocessing step. PCA and LDA transformations are then applied to the normalized face image to produce low-dimensional feature vectors. To determine the initial centers of the LVQ network and speed up its convergence, the K-Means clustering algorithm is adopted, and the class representative vectors are produced by LVQ2 training from these initial centers. Face recognition is achieved using the Euclidean distance between the class center vectors and the feature vector of the input image. Experiments on still images from the ORL database and on image sequences show that the proposed method achieves a higher recognition ratio than conventional PCA or a hybrid of PCA and LDA.
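A minimal sketch of the PCA → LDA → prototype-matching pipeline described above, assuming scikit-learn and synthetic placeholder data; the per-class K-Means centers stand in for LVQ initialization, and the full LVQ2 refinement is only indicated in a comment.

```python
# Sketch of PCA -> LDA feature extraction followed by nearest-prototype matching.
# Data, dimensions, and class counts are illustrative assumptions, not the authors' setup.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

X = np.random.rand(200, 92 * 112)            # flattened, min-max normalized face images
y = np.repeat(np.arange(20), 10)             # person labels

pca = PCA(n_components=50).fit(X)            # first reduce dimensionality with PCA
lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)  # then maximize class separability
features = lda.transform(pca.transform(X))

# K-Means supplies the initial class prototypes; LVQ2 training would then refine them
prototypes = {c: KMeans(n_clusters=1, n_init=10).fit(features[y == c]).cluster_centers_[0]
              for c in np.unique(y)}

def recognize(face_vec):
    """Return the class whose prototype is closest in Euclidean distance."""
    f = lda.transform(pca.transform(face_vec.reshape(1, -1)))[0]
    return min(prototypes, key=lambda c: np.linalg.norm(f - prototypes[c]))
```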


Accurate Face Pose Estimation and Synthesis Using Linear Transform Among Face Models (얼굴 모델간 선형변환을 이용한 정밀한 얼굴 포즈추정 및 포즈합성)

  • Suvdaa, B.; Ko, J.
    • Journal of Korea Multimedia Society / v.15 no.4 / pp.508-515 / 2012
  • This paper presents a method that estimates the pose of a given face image and synthesizes face images at arbitrary poses using the Active Appearance Model (AAM). The AAM, which has been applied successfully to various applications, is an example-based learning model that learns the variations of its training examples. With a single model, however, it is difficult to handle large pose variations. This paper proposes building a separate model that covers only a small range of angles for each pose; with the appropriate model for a given face image, accurate pose estimation and synthesis can be achieved. When the model used for pose estimation was not trained at the angle to be synthesized, the problem is solved by learning the linear relationship between the models in advance. Experiments on the public Yale B face database show accurate pose estimation and pose synthesis results, and on our own face database with large pose variations, successful frontal pose synthesis is demonstrated.
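A hedged sketch of the "linear relationship between models" idea: a least-squares mapping is fitted between the parameter vectors of two pose-specific models. Array names, shapes, and the fitting procedure are assumptions for illustration, not the paper's implementation.

```python
# Learn a linear transform that maps parameters of a pose-A model to a pose-B model.
import numpy as np

# P_a, P_b: AAM parameter vectors of the same faces fitted by the model for pose A
# and the model for pose B (one row per training face); values here are placeholders.
P_a = np.random.rand(100, 30)
P_b = np.random.rand(100, 30)

# Solve P_b ~= P_a @ T in the least-squares sense (the assumed "linear transform among models")
T, *_ = np.linalg.lstsq(P_a, P_b, rcond=None)

def synthesize_params(params_pose_a):
    """Map parameters fitted under pose A's model into pose B's model space."""
    return params_pose_a @ T
```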

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong; Lee, Seul-Gi; Kim, Dong-Woo; Ryu, Sung-Pil; Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.7-15 / 2014
  • This paper proposes a recognition algorithm for human facial expressions using PCA and template matching. First, the face image is acquired from an input image using a Haar-like feature mask. The face image is divided into two parts: an upper image containing the eyes and eyebrows, and a lower image containing the mouth and jaw. Extraction of the facial components then begins with obtaining the eye and mouth images. An eigenface is produced by PCA training on the learning images, and an eigen-eye and an eigen-mouth are derived from it. The eye image is obtained by template matching of the upper image with the eigen-eye, and the mouth image by template matching of the lower image with the eigen-mouth. Expression recognition then uses geometrical properties of the eyes and mouth. Simulation results show that the proposed method has a higher extraction ratio than previous methods; in particular, the extraction ratio for the mouth image reaches 99%. The expression recognition system based on the proposed method achieves a recognition ratio greater than 80% for three facial expressions: fright, anger, and happiness.
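An illustrative sketch of the template-matching step described above, assuming OpenCV's normalized cross-correlation; the arrays stand in for the real upper-face image and the PCA-derived eigen-eye template.

```python
# Locate the eye region in the upper face image by normalized cross-correlation.
import cv2
import numpy as np

upper_face = np.random.randint(0, 255, (60, 120), dtype=np.uint8)  # placeholder upper half of face
eigen_eye = np.random.randint(0, 255, (20, 40), dtype=np.uint8)    # placeholder eigen-eye template

result = cv2.matchTemplate(upper_face, eigen_eye, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)                     # best-matching position

x, y = max_loc
eye_region = upper_face[y:y + eigen_eye.shape[0], x:x + eigen_eye.shape[1]]
```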

Low Resolution Rate Face Recognition Based on Multi-scale CNN

  • Wang, Ji-Yuan; Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.21 no.12 / pp.1467-1472 / 2018
  • To address the problem that face images in surveillance video cannot be accurately identified because of their low resolution, this paper proposes a low-resolution face recognition solution based on a convolutional neural network (CNN) model with multi-scale input. The model improves on the existing "two-step method": low-resolution images are up-sampled with simple bicubic interpolation, and the up-sampled images are mixed with high-resolution images as training samples. The CNN learns a common feature space for high- and low-resolution images, measures feature similarity by cosine distance, and outputs the recognition result. Experiments on the CMU PIE and Extended Yale B datasets show that the model is more accurate than the comparison methods; compared with CMDA_BGE, the comparison algorithm with the highest recognition rate, the accuracy is higher by 2.5% to 9.9%.
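A minimal sketch of the matching stage described above: the low-resolution probe is up-sampled with bicubic interpolation, both images are embedded in a shared feature space, and similarity is measured by cosine distance. The `embed` function is a placeholder for the trained multi-scale CNN, not a real model.

```python
# Up-sample a low-resolution probe and compare it with a gallery image by cosine similarity.
import numpy as np
import cv2

def embed(img):
    # Stand-in for the CNN feature extractor; returns a fixed-length feature vector.
    return img.astype(np.float32).ravel()

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

low_res = np.random.randint(0, 255, (16, 16), dtype=np.uint8)      # placeholder surveillance crop
gallery = np.random.randint(0, 255, (64, 64), dtype=np.uint8)      # placeholder enrolled image

probe = cv2.resize(low_res, (64, 64), interpolation=cv2.INTER_CUBIC)  # bicubic up-sampling
score = cosine_similarity(embed(probe), embed(gallery))
```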

Face Recognition Based on the Fusion of Face Image and Estimated Thermal Infrared Texture (얼굴영상과 예측한 열 적외선 텍스처의 융합에 의한 얼굴 인식)

  • Kong, Seong G.
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.5 / pp.437-443 / 2015
  • This paper presents face recognition based on the fusion of a visible image and the thermal infrared (IR) texture estimated from that visible-spectrum face image. The proposed scheme uses a multi-layer neural network to estimate thermal texture from visible imagery. In the training process, a set of paired visible and thermal IR images is used to determine the parameters of the neural network, which learns a complex mapping from a visible image to its thermal texture in a low-dimensional feature space. The trained network estimates the principal components of the thermal texture corresponding to the input visible image. Extensive face recognition experiments were performed with two popular algorithms, Eigenfaces and Fisherfaces, on the NIST/Equinox database for benchmarking. The fusion of the visible image and the estimated thermal IR texture improved face recognition accuracy over conventional face recognition in terms of receiver operating characteristics (ROC) as well as first-match performance.
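A hedged sketch of the visible-to-thermal mapping described above: a small regressor predicts the principal components of the thermal texture from the visible image, and the texture is reconstructed from those coefficients. Data arrays, layer sizes, and component counts are illustrative assumptions.

```python
# Regress thermal-texture PCA coefficients from visible-spectrum face images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

visible = np.random.rand(200, 32 * 32)   # placeholder flattened visible face images
thermal = np.random.rand(200, 32 * 32)   # placeholder paired thermal IR face images

pca_thermal = PCA(n_components=40).fit(thermal)
targets = pca_thermal.transform(thermal)              # low-dimensional thermal coefficients

mlp = MLPRegressor(hidden_layer_sizes=(128,), max_iter=2000).fit(visible, targets)

def estimate_thermal_texture(visible_face):
    """Predict thermal PCA coefficients and reconstruct the thermal texture."""
    coeffs = mlp.predict(visible_face.reshape(1, -1))
    return pca_thermal.inverse_transform(coeffs)
```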

Performance Analysis of Face Recognition by Face Image Resolution Using CNN without Backpropagation and LDA (역전파가 제거된 CNN과 LDA를 이용한 얼굴 영상 해상도별 얼굴 인식률 분석)

  • Moon, Hae-Min; Park, Jin-Won; Pan, Sung Bum
    • Smart Media Journal / v.5 no.1 / pp.24-29 / 2016
  • To meet the needs of a high-level intelligent surveillance system, the system must be able to extract and classify objects so that precise information about them can be identified. The representative method for identifying a person is face recognition, whose recognition rate varies with environmental factors such as illumination, background, and camera angle. In this paper, we analyze how robust face recognition is as the distance to the subject changes, through a variety of experiments using real face images captured at distances of 1 m to 5 m. Face recognition based on Linear Discriminant Analysis shows the best performance, 75.4% on average, when a large number of face images per person is used for training, whereas face recognition based on a Convolutional Neural Network shows the best performance, 69.8% on average, when fewer than five face images per person are available. In addition, the recognition rate for low-resolution faces drops rapidly when the face image is smaller than 15×15 pixels.
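A small sketch of the kind of resolution experiment described above, assuming scikit-learn's LDA and synthetic images; it only illustrates the procedure of testing recognition at decreasing face resolutions, not the paper's reported numbers.

```python
# Measure LDA-based recognition accuracy at several simulated face resolutions.
import numpy as np
import cv2
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

faces = np.random.randint(0, 255, (200, 64, 64), dtype=np.uint8)   # placeholder face images
labels = np.repeat(np.arange(20), 10)

for size in (60, 30, 15):                      # simulate decreasing face resolution
    resized = np.array([cv2.resize(f, (size, size)).ravel() for f in faces])
    X_tr, X_te, y_tr, y_te = train_test_split(resized, labels, test_size=0.3,
                                              stratify=labels, random_state=0)
    acc = LinearDiscriminantAnalysis().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{size}x{size}: accuracy {acc:.2f}")
```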

Structuring Program to Improve Unbalance of Woman's Face (여성 얼굴의 불균형 개선을 위한 프로그램 구축)

  • Kim, Ae-Kyung; Lee, Kyung-Hee
    • Fashion & Textile Research Journal / v.13 no.3 / pp.398-408 / 2011
  • This study builds a facial-image improvement program, developed through an experimental study on reducing imbalance in women's faces, with the aim of raising individual self-satisfaction and making social life more effective and successful in the field of image making. The experiment was conducted with three subjects over 12 weeks, and facial imbalance was assessed by measurement and visual analysis, infrared thermography, and expert evaluation. Subject 1, who had a severely distorted chin line and mouth area, showed an effect in about four weeks; the facial outline became softer, making the overall image softer and more feminine. Subject 2 had severe distortion in the location and size of the eyes and nose; the skin improved first, then the eyes became clearer and their left-right positions evened out. Subject 3 had a twisted nose and lower chin; after two weeks the eye area and skin improved and the left and right chin widths became similar. Based on these results, a program for effectively improving the facial image by resolving facial imbalance was structured and presented. The program consists of training in breathing, face washing, and facial muscle exercises.

Vehicle Face Re-identification Based on Nonnegative Matrix Factorization with Time Difference Constraint

  • Ma, Na; Wen, Tingxin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2098-2114 / 2021
  • Light intensity variation is one of the key factors affecting the accuracy of vehicle face re-identification. To improve the robustness of vehicle face features to light intensity variation, a Nonnegative Matrix Factorization model with a constraint on the image acquisition time difference is proposed. First, the original feature vectors of all positive sample pairs used for training are placed in two feature matrices, where the same column of each matrix represents the same vehicle. The features obtained after decomposition are then divided proportionally into stable and variable features; constraints of intra-class similarity and inter-class difference are imposed on the stable features, and the constraint of image acquisition time difference is imposed on the variable features. Finally, vehicle face matching is achieved by computing the cosine distance of the stable features. Experimental results show that the average False Reject Rate and average False Accept Rate of the proposed algorithm can be reduced to 0.14 and 0.11, respectively, on five different datasets, and even under large differences in light intensity the vehicle face image can still be recognized accurately, which confirms that the extracted features are robust to light variation.
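An illustrative sketch of the matching stage: NMF codes are split into a "stable" block that is compared by cosine distance. The split ratio, threshold, and the time-difference constraint itself are assumptions left out of this sketch.

```python
# Factorize nonnegative vehicle-face features with NMF and match on a "stable" sub-block.
import numpy as np
from sklearn.decomposition import NMF
from scipy.spatial.distance import cosine

features = np.abs(np.random.rand(500, 128))        # placeholder nonnegative feature vectors
nmf = NMF(n_components=60, init="nndsvda", max_iter=500).fit(features)
codes = nmf.transform(features)

stable = codes[:, :40]                             # assumed "stable" portion of each code

def match(i, j, threshold=0.3):
    """Accept the pair if the cosine distance of the stable features is small enough."""
    return cosine(stable[i], stable[j]) < threshold
```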

A Facial Feature Area Extraction Method for Improving Face Recognition Rate in Camera Image (일반 카메라 영상에서의 얼굴 인식률 향상을 위한 얼굴 특징 영역 추출 방법)

  • Kim, Seong-Hoon; Han, Gi-Tae
    • KIPS Transactions on Software and Data Engineering / v.5 no.5 / pp.251-260 / 2016
  • Face recognition is a technology that extracts features from a facial image, learns those features through various algorithms, and recognizes a person by comparing the learned data with the features of a new facial image. Various processing methods are required to improve the recognition rate. In the training stage, features must be extracted from the facial image, and linear discriminant analysis (LDA) is the method mainly used for this. LDA represents each facial image as a point in a high-dimensional space determined by its pixel values and extracts discriminative facial features by analyzing class information and the distribution of the points. Because a point's position is determined by the pixel values, if unnecessary or frequently changing areas are included in the facial image, incorrect features may be extracted. In particular, when a camera image is used, the size of the face varies with the distance between the face and the camera, which degrades the recognition rate. To solve these problems, this paper detects the facial area from a camera image, removes unnecessary areas using the facial feature area computed with a Gabor filter, and normalizes the size of the facial area. Facial features are extracted from the normalized image using LDA and learned with an artificial neural network for recognition. As a result, the face recognition rate improves by approximately 13% compared with the existing method that includes unnecessary areas.
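A hedged sketch of the Gabor-filter step used to locate feature areas: a small filter bank is applied and the strongest responses are kept as candidate feature regions. Kernel parameters and the thresholding rule are illustrative assumptions, not the paper's values.

```python
# Build a small Gabor filter bank and keep pixels with the strongest responses.
import numpy as np
import cv2

face = np.random.randint(0, 255, (100, 100), dtype=np.uint8)   # placeholder detected face

responses = []
for theta in np.arange(0, np.pi, np.pi / 4):                    # four orientations
    kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0)
    responses.append(cv2.filter2D(face, cv2.CV_32F, kernel))

energy = np.max(np.stack(responses), axis=0)         # per-pixel peak Gabor response
feature_mask = energy > np.percentile(energy, 90)    # keep the strongest 10% as feature area
```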

A Study on Mouth Features Detection in Face using HMM (HMM을 이용한 얼굴에서 입 특징점 검출에 관한 연구)

  • Kim, Hea-Chel; Jung, Chan-Ju; Kwag, Jong-Se; Kim, Mun-Hwan; Bae, Chul-Soo; Ra, Snag-Dong
    • Proceedings of the Korea Information Processing Society Conference / 2002.04a / pp.647-650 / 2002
  • Unlike most general objects, human faces do not have highly distinct features. The features usually defined are the eyes, nose, and mouth, which are the first parts people recognize when looking at a face, and these features differ from person to person. In this paper, we propose a face recognition algorithm using the hidden Markov model (HMM). In the preprocessing stage, we find the edges of a face using a locally adaptive threshold scheme, extract features based on generic knowledge of faces, and construct a database from the extracted features. In the training stage, we estimate HMM parameters for each person using the forward-backward algorithm. In the recognition stage, we apply the probability values calculated by the HMM to the input data, and the input face is recognized using the Euclidean distance between face feature vectors and the cross-correlation between the input image and the database image. Computer simulations show that the proposed HMM algorithm gives a higher recognition rate than conventional face recognition algorithms.
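A minimal sketch of per-person HMM training and likelihood-based recognition in the spirit of the abstract, using hmmlearn's Baum-Welch (forward-backward) EM and synthetic feature sequences; the edge-based preprocessing and the cross-correlation check are omitted.

```python
# Train one Gaussian HMM per person and recognize by highest log-likelihood.
import numpy as np
from hmmlearn import hmm

def train_person_hmm(sequences, n_states=5):
    """Fit a Gaussian HMM from one person's feature sequences (list of (T, d) arrays)."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    return model.fit(X, lengths)

# Placeholder feature sequences: 5 sequences of 30 frames x 16 features per person.
models = {pid: train_person_hmm([np.random.rand(30, 16) for _ in range(5)])
          for pid in range(3)}

def recognize(sequence):
    """Return the person whose HMM assigns the highest log-likelihood to the sequence."""
    return max(models, key=lambda pid: models[pid].score(sequence))
```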
