• Title/Summary/Keyword: Facial Component

Search Result 181

Face recognition by using independent component analysis (독립 성분 분석을 이용한 얼굴인식)

  • 김종규;장주석;김영일
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.10 / pp.48-58 / 1998
  • We present a method that can recognize face images using independent component analysis (ICA), a technique used mainly for blind source separation in signal processing. We assumed that a face image can be expressed as the sum of a set of statistically independent feature images, which were obtained by independent component analysis. Face recognition was performed by projecting the input image onto the feature-image space and then comparing its projection components with those of stored reference images. We carried out face recognition experiments with a database of varied face images (400 facial images in total from 10 persons) and compared the performance of our method with that of the eigenface method based on principal component analysis. The presented method gave a better recognition rate than the eigenface method and showed robustness to random noise added to the input facial images.

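The projection-and-compare scheme this abstract describes can be sketched as follows. This is a minimal illustration only: the ICA-derived feature images are replaced by a random orthonormal placeholder basis, and the gallery images are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for a basis of statistically independent "feature images"
# (in the paper this basis comes from ICA; a random orthonormal basis
# is used here purely for illustration).
n_pixels, n_features = 64, 8
basis, _ = np.linalg.qr(rng.standard_normal((n_pixels, n_features)))

def project(img, basis):
    """Project a flattened face image onto the feature-image space."""
    return basis.T @ img

# Reference gallery: one stored projection per enrolled face.
gallery = rng.standard_normal((5, n_pixels))
gallery_codes = np.array([project(g, basis) for g in gallery])

def recognize(img, gallery_codes, basis):
    """Return the index of the stored face whose projection is nearest."""
    dists = np.linalg.norm(gallery_codes - project(img, basis), axis=1)
    return int(np.argmin(dists))

# A noisy copy of gallery face 2 should still match entry 2,
# mirroring the noise-robustness claim in the abstract.
probe = gallery[2] + 0.05 * rng.standard_normal(n_pixels)
match = recognize(probe, gallery_codes, basis)
```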

Development of a Recognition System of Smile Facial Expression for Smile Treatment Training (웃음 치료 훈련을 위한 웃음 표정 인식 시스템 개발)

  • Li, Yu-Jie;Kang, Sun-Kyung;Kim, Young-Un;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.15 no.4 / pp.47-55 / 2010
  • In this paper, we propose a recognition system for smile facial expressions for smile treatment training. The proposed system detects face candidate regions in camera images by using Haar-like features. It then verifies whether a detected candidate region is a face by SVM (Support Vector Machine) classification. For the detected face image, it applies illumination normalization based on histogram matching in order to minimize the effect of illumination changes. In the facial expression recognition step, it computes a facial feature vector by PCA (Principal Component Analysis) and recognizes the smile expression with a multilayer perceptron neural network. The proposed system lets the user train smile expressions by recognizing the user's smile in real time and displaying the amount of smile expression. Experimental results show that the proposed system improves the correct recognition rate by using face region verification based on SVM and illumination normalization based on histogram matching.
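The illumination-normalization step based on histogram matching can be sketched as below. This is a generic gray-level CDF-matching routine, not the authors' code; the "reference" image here is a hypothetical well-lit crop.

```python
import numpy as np

def match_histogram(src, ref):
    """Map src's gray-level distribution onto ref's (the illumination-
    normalization step described in the abstract)."""
    src_vals, src_counts = np.unique(src.ravel(), return_counts=True)
    ref_vals, ref_counts = np.unique(ref.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / ref.size
    # Map each source CDF height to the reference gray level at that
    # CDF height (linear interpolation between reference levels).
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    lut = dict(zip(src_vals, mapped))
    return np.vectorize(lut.get)(src)

rng = np.random.default_rng(1)
dark = rng.integers(0, 100, (32, 32))    # under-exposed face crop
ref = rng.integers(80, 256, (32, 32))    # well-lit reference crop
normalized = match_histogram(dark, ref)
```

After matching, the dark crop's gray levels follow the reference's brighter distribution, reducing the effect of the lighting change.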

Facial Regions Detection Using the Color and Shape Information in Color Still Images (컬러 정지 영상에서 색상과 모양 정보를 이용한 얼굴 영역 검출)

  • 김영길;한재혁;안재형
    • Journal of Korea Multimedia Society / v.4 no.1 / pp.67-74 / 2001
  • In this paper, we propose a face detection algorithm using color and shape information in color still images. The proposed algorithm operates only on the chrominance components (Cb and Cr) of the YCbCr color space in order to reduce the effect of varying lighting conditions. The input image is segmented into pixels with skin-tone color, and the segmented image then undergoes morphological filtering and geometric correction to eliminate noise and simplify the facial candidate regions. Multiple facial regions in an input image can be isolated by connected-component labeling. Moreover, tilted facial regions can be detected by extracting ellipse features based on second moments.

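The chrominance-only skin segmentation step can be sketched as follows. The Cb/Cr thresholds below are commonly cited textbook ranges, not necessarily the values used in the paper, and the test image is synthetic.

```python
import numpy as np

def rgb_to_cbcr(rgb):
    """Chrominance (Cb, Cr) of an RGB image, ITU-R BT.601 full range."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Binary mask of skin-tone pixels using Cb/Cr ranges only,
    ignoring luminance as the abstract describes."""
    cb, cr = rgb_to_cbcr(rgb)
    return ((cb_range[0] <= cb) & (cb <= cb_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]))

# Synthetic test image: a skin-colored patch on a blue background.
img = np.zeros((20, 20, 3))
img[:, :] = (0, 0, 200)              # blue background
img[5:15, 5:15] = (210, 150, 120)    # skin-like RGB patch
mask = skin_mask(img)
```

In the full pipeline, this mask would then be cleaned by morphological filtering and split into candidate regions by connected-component labeling.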

The analysis of physical features and affective words on facial types of Korean females in twenties (얼굴의 물리적 특징 분석 및 얼굴 관련 감성 어휘 분석 - 20대 한국인 여성 얼굴을 대상으로 -)

  • 박수진;한재현;정찬섭
    • Korean Journal of Cognitive Science / v.13 no.3 / pp.1-10 / 2002
  • This study was performed to analyze the physical attributes of faces and the affective words used for faces. For analyzing the physical attributes inside a face, 36 facial features were selected, most of them length or distance values. For analyzing the facial contour, 14 points were selected, and the lengths from the nose end to each of them were measured. Because face size differs from person to person, the values of these features, except ratio values, were normalized by the facial vertical or horizontal length. Principal component analysis (PCA) was performed and four major factors were extracted: a 'facial contour' component, a 'vertical length of eye' component, a 'facial width' component, and an 'eyebrow region' component. We assumed a five-dimensional imaginary space of faces using the factor scores of the PCA and selected representative faces evenly in this space. The affective words for faces were collected from magazines and through surveys. Factor analysis and multidimensional scaling were performed, and two orthogonal dimensions for the affections on faces were suggested: babyish-mature and sharp-soft.

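The size-normalization and factor-extraction procedure can be sketched as below. The measurements are synthetic stand-ins: the 36 features, normalization by facial length, and four-factor PCA follow the abstract, but all data and ranges are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical raw measurements: 50 faces x 36 length/distance features
# (in pixels), plus each face's vertical length used for normalization.
n_faces, n_feats = 50, 36
face_height = rng.uniform(180, 240, n_faces)
raw = face_height[:, None] * rng.uniform(0.05, 0.6, (n_faces, n_feats))

# Normalize out absolute face size, as described in the abstract.
norm = raw / face_height[:, None]

# PCA via SVD on the centered data; principal components = "factors".
centered = norm - norm.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = (S ** 2) / np.sum(S ** 2)   # variance ratio per component
factor_scores = centered @ Vt[:4].T     # scores on the first four factors
```

The factor scores per face are what the study uses as coordinates when sampling representative faces evenly from the space.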

A Study on Local Micro Pattern for Facial Expression Recognition (얼굴 표정 인식을 위한 지역 미세 패턴 기술에 관한 연구)

  • Jung, Woong Kyung;Cho, Young Tak;Ahn, Yong Hak;Chae, Ok Sam
    • Convergence Security Journal / v.14 no.5 / pp.17-24 / 2014
  • This study proposes LDP (Local Directional Pattern), a new local micro pattern for facial expression recognition, to solve the noise sensitivity problem of LBP (Local Binary Pattern). The proposed method extracts 8 directional components using m×m masks, chooses the k largest components, and marks each chosen component with a bit value of 1 (otherwise 0). Finally, it generates a pattern code from the bit sequence of the 8 directional components. The results show better robustness to rotation and noise. Based on the proposed method, a new local facial feature can also be developed to represent both PFFs (Permanent Facial Features) and TFFs (Transient Facial Features).
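A common formulation of LDP computes the eight directional responses with 3×3 Kirsch masks and sets a bit for the k strongest responses; the sketch below follows that common formulation under the assumption that it matches the paper's operator.

```python
import numpy as np

# Eight 3x3 Kirsch directional masks (E, NE, N, NW, W, SW, S, SE).
KIRSCH = [np.array(m) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def ldp_code(patch, k=3):
    """LDP code of a 3x3 patch: set one bit for each of the k strongest
    absolute directional responses."""
    responses = np.array([abs(np.sum(patch * m)) for m in KIRSCH])
    top = np.argsort(responses)[-k:]   # indices of the k largest
    code = 0
    for i in top:
        code |= 1 << int(i)
    return code

patch = np.array([[10, 10, 90],
                  [10, 50, 90],
                  [10, 10, 90]], dtype=float)  # strong vertical edge
code = ldp_code(patch)
```

Because the code depends on the ranking of directional responses rather than raw pixel comparisons, small pixel noise that does not reorder the top-k responses leaves the code unchanged, which is the claimed advantage over LBP.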

Comparison of Computer and Human Face Recognition According to Facial Components

  • Nam, Hyun-Ha;Kang, Byung-Jun;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society / v.15 no.1 / pp.40-50 / 2012
  • Face recognition is a biometric technology used to identify individuals based on facial feature information. Previous studies of face recognition used features including the eyes, mouth, and nose; however, there have been few studies on the effects of other facial components, such as the eyebrows and chin, on recognition performance. We measured the recognition accuracy affected by these facial components and compared the differences between computer-based and human-based facial recognition methods. This research is novel in the following four ways compared to previous works. First, we measured the effect of components such as the eyebrows and chin, and the accuracy of computer-based face recognition was compared to that of human-based face recognition according to facial components. Second, for computer-based recognition, facial components were automatically detected using the Adaboost algorithm and an active appearance model (AAM), and user authentication was achieved with a face recognition algorithm based on principal component analysis (PCA). Third, we experimentally showed that the number of facial features included (eyebrows, eyes, nose, mouth, and chin) had a greater impact on the accuracy of human-based face recognition, whereas the consistent inclusion of certain features such as the chin area had more influence on the accuracy of computer-based face recognition, because a computer uses the pixel values of facial images in classifying faces. Fourth, we experimentally showed that the eyebrow feature enhanced the accuracy of computer-based face recognition; however, the problem of occlusion by hair should be solved in order to use the eyebrow feature for face recognition.

Realtime Facial Expression Control of 3D Avatar by PCA Projection of Motion Data (모션 데이터의 PCA투영에 의한 3차원 아바타의 실시간 표정 제어)

  • Kim Sung-Ho
    • Journal of Korea Multimedia Society / v.7 no.10 / pp.1478-1484 / 2004
  • This paper presents a method for controlling the facial expression of a 3D avatar in real time by having the user select a sequence of facial expressions in an expression space. The space of expressions is created from about 2,400 frames of facial expressions. To represent the state of each expression, we use a distance matrix that holds the distances between pairs of feature points on the face. The set of distance matrices is used as the space of expressions. The facial expression of the 3D avatar is controlled in real time as the user navigates the space. To support this process, we visualize the space of expressions in 2D using a Principal Component Analysis (PCA) projection. To evaluate the system's effectiveness, we had users control the facial expressions of a 3D avatar with it, and this paper reports the results.

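The distance-matrix representation of an expression state and its 2D PCA projection can be sketched as follows, with random stand-in feature points in place of the captured motion data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical capture: 100 frames, 10 2-D facial feature points each.
frames = rng.standard_normal((100, 10, 2))

def distance_vector(points):
    """Upper-triangular pairwise distances between feature points --
    the distance-matrix representation of one expression state."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(points), k=1)
    return d[iu]

X = np.array([distance_vector(f) for f in frames])  # (100, 45)

# Project the expression states to 2-D with PCA so the user can
# navigate the space on screen.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
coords2d = Xc @ Vt[:2].T
```

Each row of `coords2d` is one frame's position in the visualized 2D expression space; navigating near a point would recall that frame's expression.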

Face Image Analysis using Adaboost Learning and Non-Square Differential LBP (아다부스트 학습과 비정방형 Differential LBP를 이용한 얼굴영상 특징분석)

  • Lim, Kil-Taek;Won, Chulho
    • Journal of Korea Multimedia Society / v.19 no.6 / pp.1014-1023 / 2016
  • In this study, we present a non-square Differential LBP operator that can describe micro patterns well in both the horizontal and vertical components. We propose a way to construct an LBP operator with various directional components as well as the diagonal component. To verify the validity of the proposed operator, Differential LBP was evaluated in terms of accuracy, sensitivity, and specificity for facial expression classification. In the accuracy comparison, the proposed LBP operator obtains better results than the square LBP and LBP-CS operators. The proposed Differential LBP also outperforms the previous two methods on the sensitivity and specificity indicators for 'Neutral', 'Happiness', 'Surprise', and 'Anger', confirming its effectiveness.
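As a loose illustration of the non-square neighborhood idea only (plain LBP thresholding; the paper's Differential variant is not reproduced here), an LBP code can be computed over a rectangular window as well as the classic square one:

```python
import numpy as np

def lbp_code(window):
    """Basic LBP of a window: threshold each border pixel against the
    center pixel and pack the bits (clockwise from top-left)."""
    h, w = window.shape
    center = window[h // 2, w // 2]
    border = np.concatenate([
        window[0, :],        # top row, left to right
        window[1:-1, -1],    # right column, downwards
        window[-1, ::-1],    # bottom row, right to left
        window[-2:0:-1, 0],  # left column, upwards
    ])
    bits = (border >= center).astype(int)
    return int("".join(map(str, bits)), 2)

# A square 3x3 window gives the classic 8-bit code; a non-square 3x5
# window yields a longer code covering more horizontal context.
sq = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
rect = np.arange(15).reshape(3, 5)
code_sq, code_rect = lbp_code(sq), lbp_code(rect)
```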

Greedy Learning of Sparse Eigenfaces for Face Recognition and Tracking

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.3 / pp.162-170 / 2014
  • Appearance-based subspace models such as eigenfaces have been widely recognized as one of the most successful approaches to face recognition and tracking. The success of eigenfaces mainly has its origins in the benefits offered by principal component analysis (PCA), the representational power of the underlying generative process for high-dimensional noisy facial image data. The sparse extension of PCA (SPCA) has recently received significant attention in the research community. SPCA functions by imposing sparseness constraints on the eigenvectors, a technique that has been shown to yield more robust solutions in many applications. However, when SPCA is applied to facial images, the time and space complexity of PCA learning becomes a critical issue (e.g., real-time tracking). In this paper, we propose a very fast and scalable greedy forward selection algorithm for SPCA. Unlike a recent semidefinite program-relaxation method that suffers from complex optimization, our approach can process several thousands of data dimensions in reasonable time with little accuracy loss. The effectiveness of our proposed method was demonstrated on real-world face recognition and tracking datasets.
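One simple form of greedy forward selection for sparse PCA adds, at each step, the variable that most increases the top eigenvalue of the covariance submatrix on the selected support. The sketch below is this generic scheme on synthetic data, not the authors' algorithm.

```python
import numpy as np

def greedy_sparse_pc(cov, k):
    """Greedy forward selection for a cardinality-k sparse first
    principal component: at each step add the variable that most
    increases the largest eigenvalue of the covariance submatrix."""
    selected = []
    best_val = -np.inf
    for _ in range(k):
        best, best_val = None, -np.inf
        for j in range(cov.shape[0]):
            if j in selected:
                continue
            idx = selected + [j]
            sub = cov[np.ix_(idx, idx)]
            val = np.linalg.eigvalsh(sub)[-1]  # largest eigenvalue
            if val > best_val:
                best, best_val = j, val
        selected.append(best)
    return sorted(selected), best_val

rng = np.random.default_rng(4)
# Data where only the first 3 of 10 dimensions share a strong component.
X = rng.standard_normal((200, 10))
X[:, :3] += 3.0 * rng.standard_normal((200, 1))
cov = np.cov(X, rowvar=False)
support, variance = greedy_sparse_pc(cov, k=3)
```

Each step costs one small eigendecomposition per candidate variable, which is what makes this kind of forward selection far cheaper than semidefinite-program relaxations on high-dimensional image data.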

Skin Condition Analysis of Facial Image using Smart Device: Based on Acne, Pigmentation, Flush and Blemish

  • Park, Ki-Hong;Kim, Yoon-Ho
    • Journal of Advanced Information Technology and Convergence / v.8 no.2 / pp.47-58 / 2018
  • In this paper, we propose a method for analyzing skin condition from facial images using the camera module embedded in a smartphone, without a separate skin diagnosis device. The skin conditions detected in facial images taken by smartphone are acne, pigmentation, blemishes, and flushing. Facial features and regions were detected using Haar features, and skin regions were detected using the YCbCr and HSV color models. Acne and flushing were extracted by setting a hue range in the component image, and pigmentation was detected by computing a factor between the minimum and maximum values of the corresponding skin pixels in the R component image. Blemishes were detected using adaptive thresholds on gray-scale images. Experimental results show that the proposed skin condition analysis effectively detects acne, pigmentation, blemishes, and flushing.
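The hue-range extraction step for the reddish conditions (acne/flush) can be sketched as follows. The RGB-to-hue conversion is the standard HSV formula, but the band limits and the test image are illustrative, not the paper's values.

```python
import numpy as np

def rgb_to_hue(rgb):
    """Hue channel (degrees, 0-360) of an RGB image with values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = np.max(rgb, axis=-1)
    mn = np.min(rgb, axis=-1)
    delta = np.where(mx - mn == 0, 1e-12, mx - mn)
    h = np.where(mx == r, (g - b) / delta % 6,
        np.where(mx == g, (b - r) / delta + 2, (r - g) / delta + 4))
    return 60.0 * h

def redness_mask(rgb, lo=345.0, hi=15.0):
    """Pixels whose hue falls in a red band straddling 0 degrees --
    a rough stand-in for the acne/flush hue range in the abstract."""
    hue = rgb_to_hue(rgb)
    return (hue >= lo) | (hue <= hi)

# Synthetic face crop: skin-toned background with a small reddish lesion.
img = np.zeros((10, 10, 3))
img[:, :] = (0.8, 0.7, 0.6)      # skin-like, hue around 30 degrees
img[2:4, 2:4] = (0.9, 0.3, 0.3)  # reddish lesion, hue 0 degrees
mask = redness_mask(img)
```

In the full method, this mask would be restricted to the skin region detected beforehand with the YCbCr/HSV models, so only reddish pixels on skin count as candidate lesions.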