• Title/Summary/Keyword: Facial Component

A Study on Face Image Recognition Using Feature Vectors (특징벡터를 사용한 얼굴 영상 인식 연구)

  • Kim Jin-Sook;Kang Jin-Sook;Cha Eui-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.4 / pp.897-904 / 2005
  • Face recognition has been an active research area because face image data are easy to acquire and applicable to a wide range of real-world problems. Due to the high dimensionality of the face image space, however, face images are not easy to process. In this paper, we propose a method to reduce the dimensionality of facial data and extract features from holistic face images. The proposed algorithm consists of two parts. The first applies principal component analysis (PCA) to transform three-dimensional color facial images into one-dimensional gray facial images; here PCA also enhances image contrast, which raises the recognition rate. The second is integrated linear discriminant analysis (PCA+LDA), which combines PCA for dimensionality reduction with LDA for discrimination of facial vectors in a single step. This yields a concise algorithmic expression and prevents the information loss that can occur when the two steps are performed separately. To validate the proposed method, the algorithm was implemented and tested on well-controlled face databases.
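
The PCA-for-reduction, LDA-for-discrimination pipeline the abstract describes can be sketched with NumPy on toy data. This is a minimal illustration of the general technique, not the authors' implementation; the random vectors, class counts, and dimensions are all hypothetical stand-ins for flattened face images.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "face" data: 3 classes, 20 samples each, 100-dim flattened images.
X = np.vstack([rng.normal(c, 1.0, (20, 100)) for c in range(3)])
y = np.repeat(np.arange(3), 20)

# PCA: project onto the top-k principal components for dimensionality reduction.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
Z = Xc @ Vt[:k].T          # (60, 10) reduced features

# LDA in the reduced space: maximize between-class over within-class scatter.
means = np.array([Z[y == c].mean(axis=0) for c in range(3)])
mu = Z.mean(axis=0)
Sw = sum((Z[y == c] - means[c]).T @ (Z[y == c] - means[c]) for c in range(3))
Sb = sum(20 * np.outer(means[c] - mu, means[c] - mu) for c in range(3))
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
W = np.real(evecs[:, np.argsort(-np.real(evals))[:2]])  # top-2 discriminants
features = Z @ W           # final discriminative features, shape (60, 2)
print(features.shape)
```

The "integrated" formulation in the paper folds both projections into one expression; the two-matrix-product form above is the conceptually equivalent separated version.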

Face Recognition using Extended Center-Symmetric Pattern and 2D-PCA (Extended Center-Symmetric Pattern과 2D-PCA를 이용한 얼굴인식)

  • Lee, Hyeon Gu;Kim, Dong Ju
    • Journal of Korea Society of Digital Industry and Information Management / v.9 no.2 / pp.111-119 / 2013
  • Face recognition has recently become one of the most popular research areas in the fields of computer vision, machine learning, and pattern recognition because it spans numerous applications, such as access control, surveillance, security, credit-card verification, and criminal identification. In this paper, we propose a simple descriptor called an ECSP(Extended Center-Symmetric Pattern) for illumination-robust face recognition. The ECSP operator encodes the texture information of a local face region by emphasizing diagonal components of a previous CS-LBP(Center-Symmetric Local Binary Pattern). Here, the diagonal components are emphasized because facial textures along the diagonal direction contain much more information than those of other directions. The facial texture information of the ECSP operator is then used as the input image of an image covariance-based feature extraction algorithm such as 2D-PCA(Two-Dimensional Principal Component Analysis). Performance evaluation of the proposed approach was carried out using various binary pattern operators and recognition algorithms on the Yale B database. The experimental results demonstrated that the proposed approach achieved better recognition accuracy than other approaches, and we confirmed that the proposed approach is effective against illumination variation.
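
The CS-LBP coding that ECSP extends can be sketched as follows. Only the baseline center-symmetric comparison is shown; the ECSP-specific diagonal-emphasis weighting is not reproduced here, and the threshold value and toy image are assumptions.

```python
import numpy as np

def cs_lbp(img, T=0.01):
    """Center-Symmetric LBP over interior pixels of a grayscale image.

    Each of the four center-symmetric neighbor pairs contributes one bit,
    giving 4-bit codes in the range 0..15."""
    p = img.astype(float)
    pairs = [
        (p[:-2, :-2], p[2:, 2:]),    # NW vs SE (a diagonal pair ECSP emphasizes)
        (p[:-2, 1:-1], p[2:, 1:-1]), # N  vs S
        (p[:-2, 2:], p[2:, :-2]),    # NE vs SW (the other diagonal pair)
        (p[1:-1, 2:], p[1:-1, :-2]), # E  vs W
    ]
    code = np.zeros(p[1:-1, 1:-1].shape, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > T).astype(np.uint8) << bit
    return code

img = np.random.default_rng(1).random((8, 8))
codes = cs_lbp(img)
print(codes.shape)  # (6, 6); codes lie in 0..15
```

In the paper, histograms of such codes over local face regions form the texture image that is then fed to 2D-PCA.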

Design of Two-Dimensional Robust Face Recognition System Realized with the Aid of Facial Symmetry with Illumination Variation (얼굴의 대칭성을 이용하여 조명 변화에 강인한 2차원 얼굴 인식 시스템 설계)

  • Kim, Jong-Bum;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.64 no.7 / pp.1104-1113 / 2015
  • In this paper, we propose a two-dimensional face recognition system that exploits facial symmetry to remain robust under illumination variation. A preprocessing step produces a mirror image, a new image rearranged using the difference in brightness between the left and right halves of the face about the vertical axis of the original image. After preprocessing, the high-dimensional image data are reduced to low-dimensional feature data through two-directional, two-dimensional Principal Component Analysis ((2D)2PCA), a dimensionality reduction technique. A polynomial-based Radial Basis Function Neural Network pattern classifier is used for face recognition: FCM clustering is applied in the hidden layer, and the connection weights are defined as linear polynomial functions whose coefficients are learned through Weighted Least Squares Estimation (WLSE). The structural and parametric factors of the proposed classifier are optimized using Particle Swarm Optimization (PSO). In the experiments, the Yale B dataset is employed to confirm the advantages of the proposed methodology under diverse illumination variations.
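
The (2D)2PCA reduction step can be sketched with NumPy: one image covariance matrix is built over columns and one over rows, and each image is projected from both sides. The toy image sizes and the numbers of retained eigenvectors are arbitrary assumptions; the mirror-image preprocessing and the RBF classifier are not shown.

```python
import numpy as np

rng = np.random.default_rng(2)
imgs = rng.random((30, 16, 12))          # 30 toy "face" images, 16x12 pixels
mean = imgs.mean(axis=0)
D = imgs - mean

# Column-direction 2D-PCA: image covariance accumulated over columns.
Gc = sum(d.T @ d for d in D) / len(D)    # (12, 12)
# Row-direction 2D-PCA: covariance accumulated over rows.
Gr = sum(d @ d.T for d in D) / len(D)    # (16, 16)

def top_eigvecs(G, k):
    w, V = np.linalg.eigh(G)             # eigenvalues in ascending order
    return V[:, ::-1][:, :k]             # keep the k largest

X = top_eigvecs(Gc, 4)                   # right projection matrix (12, 4)
Z = top_eigvecs(Gr, 5)                   # left projection matrix  (16, 5)
feats = np.array([Z.T @ a @ X for a in imgs])  # (30, 5, 4) reduced features
print(feats.shape)
```

Projecting from both sides shrinks a 16x12 image to a 5x4 feature matrix, which is why (2D)2PCA compresses much more than single-direction 2D-PCA.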

Face Detection Based on Distribution Map (분포맵에 기반한 얼굴 영역 검출)

  • Cho Han-Soo
    • Journal of Korea Multimedia Society / v.9 no.1 / pp.11-22 / 2006
  • Face detection has recently been researched actively due to its wide range of applications, such as personal identification and security systems. In this paper, a new face detection method based on distribution maps is proposed. Face-like regions are first extracted by applying a frequency-weighted skin color map to a color image, and possible eye regions are then determined within the face-like regions using a pupil color distribution map, which reduces the search space for facial features. Eye candidates are detected by a template matching method with a weighted window that uses the correlation values of the luminance and chrominance components as feature vectors. Finally, a cost function for mouth detection and the positional relationships between the facial features are applied to each pair of eye candidates for face detection. Experimental results show that the proposed method achieves high performance.
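
A skin-color map of the kind used in the first stage can be sketched as follows. The paper builds its maps from observed color distributions; the fixed Cb/Cr thresholds below are a widely used stand-in, and the test pixel values are invented for illustration.

```python
import numpy as np

def skin_map(rgb):
    """Binary skin-likelihood map from fixed Cb/Cr thresholds.

    A common heuristic stand-in; the paper's actual skin and pupil
    distribution maps are estimated from data, not hard-coded."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = -0.169 * r - 0.331 * g + 0.5 * b + 128    # chrominance (blue)
    cr = 0.5 * r - 0.419 * g - 0.081 * b + 128     # chrominance (red)
    return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0], img[..., 1], img[..., 2] = 200, 140, 120   # a skin-like RGB tone
mask = skin_map(img)
print(mask.all())
```

Connected regions of the mask become face candidates; the pupil map then restricts the eye search to within them.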

Comparative Analysis of the Responses to Intruders with Anxiety-Related Behaviors of Mouse

  • Kim, Sang-Hyeon;Kang, Eun-Chai;Park, Chan-Kyu
    • Animal cells and systems / v.8 no.4 / pp.301-306 / 2004
  • Anxiety in mice can be measured by behavioral reactivity to social or non-social stressors. These behaviors were compared by performing the resident-intruder test (social) as well as the light-dark transition and open-field tests (non-social) on the FVB, C57BL/6, and BALB/c mouse lines. The three inbred lines showed significant differences in their responses to intruder mice. Three factors, accounting for about 68% of the total variance, were extracted from the scores obtained in the three behavioral tests. The first two major factors are primarily associated with anxiety-related behaviors: one includes anxiety behaviors with a locomotive basis, while the other includes defecation measured in both anxiety tests. The third factor explains the three social behaviors observed in the resident-intruder test, facial investigation, ano-genital investigation, and following, although facial investigation is also moderately associated with the second factor. The results indicate that the behavioral responses to an intruder share a component distinct from anxiety-related behaviors.

Non-Contact Heart Rate Monitoring from Face Video Utilizing Color Intensity

  • Sahin, Sarker Md;Deng, Qikang;Castelo, Jose;Lee, DoHoon
    • Journal of Multimedia Information System / v.8 no.1 / pp.1-10 / 2021
  • Heart rate is a crucial physiological parameter that provides basic information about the state of the cardiovascular system, and it is widely used in medical diagnostics and fitness assessment. It has been demonstrated that heart rate can be retrieved remotely from the photoplethysmographic signal in facial video captured with a low-cost RGB camera. Traditional heart rate measurement mostly requires direct contact with the human body, which can be inconvenient for long-term measurement due to the discomfort it causes the subject. In this paper, we propose a non-contact remote heart rate measurement approach based on the color intensity variation of the subject's facial skin. The proposed method is applied to two regions of the subject's face, the forehead and the cheeks, and three different algorithms are used to measure the heart rate: Fast Fourier Transform (FFT), Independent Component Analysis (ICA), and Principal Component Analysis (PCA). The average accuracy of the three algorithms with the proposed method was 89.25% over both regions; notably, the FastICA algorithm showed an average accuracy above 92% in both regions. The proposed method obtained 1.94% higher average accuracy than the traditional method based on average color value.
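
The FFT branch of such a pipeline can be sketched with NumPy: take the per-frame mean color intensity of a facial region, find the dominant frequency in the plausible heart-rate band, and convert it to beats per minute. The synthetic trace below (frame rate, pulse amplitude, noise level) is entirely invented for illustration.

```python
import numpy as np

fps = 30.0
t = np.arange(0, 10, 1 / fps)            # 10 s of video at 30 fps
true_bpm = 72.0
# Synthetic mean-intensity trace for a skin region: weak pulse plus noise.
signal = 0.02 * np.sin(2 * np.pi * (true_bpm / 60) * t)
signal += 0.005 * np.random.default_rng(3).normal(size=t.size)

x = signal - signal.mean()               # remove the DC component
freqs = np.fft.rfftfreq(x.size, d=1 / fps)
power = np.abs(np.fft.rfft(x)) ** 2
band = (freqs >= 0.7) & (freqs <= 4.0)   # plausible heart rates: 42-240 bpm
est_bpm = 60 * freqs[band][np.argmax(power[band])]
print(round(est_bpm, 1))
```

The ICA and PCA variants in the paper instead unmix the three RGB channel traces before the spectral peak search; the band-limited peak-picking step is the same.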

The effect of CR-CO discrepancy on cephalometric measurements in Class III malocclusion patients (골격성 III급 부정교합자에서 중심위 변위가 두부 방사선 계측치에 미치는 영향)

  • Park, Yang-Soo;Kim, Jong-Chul;Hwang, Hyeon-Shik
    • The korean journal of orthodontics / v.26 no.3 / pp.255-265 / 1996
  • The purpose of this study was to investigate whether there is a significant difference between cephalometric measurements of mandibular position derived from a centric occlusion (CO) tracing and those from a converted centric relation (CR) tracing in Class III malocclusion. The sample consisted of 25 Class III malocclusion subjects and 25 normal occlusion subjects who had received no orthodontic treatment. The records included lateral cephalograms in centric occlusion, CR and CO bite registrations, and diagnostic casts mounted in CR on a SAM II articulator. The amount of CR-CO condylar discrepancy was recorded using a Mandibular Position Indicator (MPI 200®, Great Lakes Orthodontics, USA), and the CO cephalogram was converted to CR using the MPI readings on a conversion worksheet. Measures of mandibular position were chosen for the purpose of this study, and the differences between the CO and CR cephalometric measurements were compared between the normal occlusion and Class III malocclusion groups. The results were as follows: 1. Regarding the CR-CO discrepancy of the condyle, the condyle was displaced posteriorly and inferiorly when the teeth were in centric occlusion. The horizontal component (ΔX) in the Class III malocclusion group was greater than the vertical component (ΔZ) and also greater than the horizontal component (ΔX) in the normal occlusion group. There was no statistically significant correlation between the MPI measurements and membership in the normal occlusion or Class III malocclusion group. 2. In the comparison of cephalometric measurements within each group, the normal occlusion group showed significant differences in measurements such as ANB, facial angle, facial convexity, and ODI. The Class III malocclusion group showed significant differences in ANB, facial angle, facial convexity, ODI, SNB, APDI, and L1-FP, with greater significance than the normal occlusion group. 3. The values of the cephalometric measurements were significantly different between CO and CR, but there were no differences between the normal occlusion and Class III malocclusion groups. These results suggest that when the discrepancy is greater than the normal displacement from a clinically captured centric relation, centric relation should be considered the starting point for proper diagnosis and treatment planning.

Automatic Extraction of the Facial Feature Points Using Moving Color (색상 움직임을 이용한 얼굴 특징점 자동 추출)

  • Kim, Nam-Ho;Kim, Hyoung-Gon;Ko, Sung-Jea
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.8 / pp.55-67 / 1998
  • This paper presents an automatic facial feature point extraction algorithm for sequential color images. To extract the facial region in a video sequence, a moving color detection technique is proposed that emphasizes moving skin-color regions by applying a motion detection algorithm to skin-color-transformed images. The threshold value for pixel difference detection is decided according to the transformed pixel value, which represents the probability of the desired color information. Eye candidate regions are selected using both the black/white color information inside the skin-color region and the valley information of the moving skin region detected with morphological operators. The eye region is finally decided from the geometrical relationship of the eyes and the color histogram. To locate the exact feature points, PCA (Principal Component Analysis) is applied to each eye and mouth region. Experimental results show that the feature points of the eyes and mouth can be obtained correctly irrespective of background and of the direction and size of the face.
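
The moving-color idea, frame-differencing skin-probability maps with a threshold that depends on the transformed pixel value, can be sketched as follows. The exact threshold scaling and the toy probability maps are assumptions; the paper's morphological valley detection is not shown.

```python
import numpy as np

def moving_skin(prev_p, cur_p, base_thresh=0.1):
    """Moving-color detection sketch on skin-probability maps.

    The difference threshold is relaxed where the current skin
    probability is high, so likely-skin pixels need less motion to
    fire (the precise scaling here is an illustrative assumption)."""
    diff = np.abs(cur_p - prev_p)
    thresh = base_thresh * (1.0 - 0.5 * cur_p)   # lower threshold on likely skin
    return diff > thresh

rng = np.random.default_rng(4)
prev = rng.random((6, 6)) * 0.1          # mostly non-skin background
cur = prev.copy()
cur[2:4, 2:4] += 0.8                     # a moving high-probability patch
cur = np.clip(cur, 0.0, 1.0)
mask = moving_skin(prev, cur)
print(mask[2:4, 2:4].all(), mask[0, 0])
```

The resulting mask isolates moving skin, after which eye and mouth candidates are searched only inside it.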

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi;Joo, Jong-Tae;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.20-26 / 2008
  • As intelligent robots and computers become more common, interaction between them and humans is becoming increasingly important, and emotion recognition and expression are indispensable for such interaction. In this paper, we first extract emotional features from speech signals and facial images. We then apply both Bayesian Learning (BL) and Principal Component Analysis (PCA), and finally classify five emotion patterns (normal, happy, anger, surprise, and sad). We also experiment with decision fusion and feature fusion to enhance the emotion recognition rate. In the decision fusion method, the output values of each recognition system are combined using fuzzy membership functions. In the feature fusion method, superior features are selected through Sequential Forward Selection (SFS) and fed to a neural network based on a Multi-Layer Perceptron (MLP) to classify the five emotion patterns. The recognized result is then applied to a 2D facial shape to express the emotion.
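
The decision-fusion step can be sketched as combining the per-emotion score vectors of the two modality classifiers. The paper combines them through fuzzy membership functions; the fixed weighted sum and the example scores below are simplified stand-ins.

```python
import numpy as np

EMOTIONS = ["normal", "happy", "anger", "surprise", "sad"]

def decision_fusion(speech_scores, face_scores, w_speech=0.4):
    """Weighted-sum fusion of two modality score vectors.

    A simplified stand-in for the paper's fuzzy-membership combination:
    normalize each modality's scores, mix them, and pick the argmax."""
    s = np.asarray(speech_scores, dtype=float)
    f = np.asarray(face_scores, dtype=float)
    s, f = s / s.sum(), f / f.sum()          # pseudo-probabilities per modality
    fused = w_speech * s + (1 - w_speech) * f
    return EMOTIONS[int(np.argmax(fused))], fused

label, fused = decision_fusion([0.1, 0.6, 0.1, 0.1, 0.1],   # speech classifier
                               [0.2, 0.5, 0.1, 0.1, 0.1])   # face classifier
print(label)
```

Feature fusion, by contrast, concatenates selected features from both modalities before a single classifier, which is what the SFS+MLP branch of the paper does.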

A STUDY ON THE FACIAL MORPHOLOGY AND GROWTH CHANGES IN UNILATERAL CLEFT LIP AND PALATE PATIENTS ACCORDING TO THE AGES (연령에 따른 편측성 순구개열자의 안모형태 변화에 관한 연구)

  • Kim, Young-Mi;Park, Soo-Byung;Rhee, Byung-Tae
    • The korean journal of orthodontics / v.22 no.3 s.38 / pp.657-673 / 1992
  • Orthodontic treatment of cleft patients is difficult because their growth differs from that of normal subjects, so it is very important to know the characteristic features of craniofacial morphology and the growth pattern in unilateral cleft lip and palate patients. The materials for this study consisted of 55 normal males and 50 unilateral cleft lip and palate subjects who had previously received cheiloplasty and palatoplasty. The cleft subjects were divided into four groups according to age to find out the growth pattern of hard and soft tissue and to compare their features with those of normal subjects. Each cephalogram was analyzed by the McNamara method and others. The obtained results were as follows: 1. In the unilateral cleft lip and palate subjects, forward growth of the maxilla was smaller than in normal subjects from 9 years of age, so the maxilla was retruded, and the maxillary incisors were severely retruded in all age groups. 2. The overall mandibular length and its anteroposterior position did not show any significant differences between the two groups, but the ramus height was very short, and after 12 years of age the mandible in cleft subjects showed a vertical growth tendency to compensate for the undergrowth of the maxilla. 3. Horizontal growth of the soft tissue in the middle face was smaller than in any other facial region from 9 years of age, and the vertical growth rate of the upper lip decreased with age. 4. In cleft subjects, the upper and lower facial component angles and the facial convexity angle were large, so their facial profile changed to straight or concave with age.
