• Title/Summary/Keyword: facial component features


Face Tracking System Using Updated Skin Color (업데이트된 피부색을 이용한 얼굴 추적 시스템)

  • Ahn, Kyung-Hee;Kim, Jong-Ho
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.5
    • /
    • pp.610-619
    • /
    • 2015
  • In this paper, we propose a real-time face tracking system using an adaptive face detector and a tracking algorithm. An image is divided into background and face-candidate regions by a skin color model that is updated in real time, so that facial features can be detected accurately. Facial characteristics are extracted using five types of simple Haar-like features. The extracted features are reinterpreted by Principal Component Analysis (PCA), and the resulting principal components are classified into facial and non-facial areas by a Support Vector Machine (SVM). The movement of the face is traced by a Kalman filter and Mean shift, which use the static information of the detected faces and the differences between the previous and current frames. The proposed system identifies the initial skin color and updates it through a real-time color detection system, which allows background regions of similar color to be removed. Removing the background in this way improves performance by up to 20% compared with extracting features from the entire region, and the Kalman filter and Mean shift increase the detection rate and speed.
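
A minimal sketch of the adaptive tracking loop this abstract describes, using OpenCV's Mean shift on a skin-color back-projection with the color model re-estimated every frame. The initial face box, histogram size, and the 0.9/0.1 blending factor are illustrative assumptions; the paper's Haar/PCA/SVM detector and Kalman stage are omitted.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)              # any video source
ok, frame = cap.read()
track_win = (200, 150, 100, 120)       # hypothetical initial face box (x, y, w, h)
x, y, w, h = track_win

# Initial skin-color model: hue histogram of the detected face region.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv[y:y+h, x:x+w]], [0], None, [32], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, track_win = cv2.meanShift(back, track_win, crit)
    x, y, w, h = track_win
    # Update the skin-color model from the newly tracked region so the
    # tracker adapts to lighting changes ("updated skin color").
    new_hist = cv2.calcHist([hsv[y:y+h, x:x+w]], [0], None, [32], [0, 180])
    cv2.normalize(new_hist, new_hist, 0, 255, cv2.NORM_MINMAX)
    hist = (0.9 * hist + 0.1 * new_hist).astype(np.float32)
```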

Skin Condition Analysis of Facial Image using Smart Device: Based on Acne, Pigmentation, Flush and Blemish

  • Park, Ki-Hong;Kim, Yoon-Ho
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.8 no.2
    • /
    • pp.47-58
    • /
    • 2018
  • In this paper, we propose a method for analyzing skin condition with the camera module embedded in a smartphone, without a separate skin diagnosis device. The types of skin condition detected in facial images taken by the smartphone are acne, pigmentation, blemish, and flush. Facial features and regions were detected using Haar features, and skin regions were detected using the YCbCr and HSV color models. Acne and flush were extracted by setting a range on the hue component image, and pigmentation was quantified by computing a factor between the minimum and maximum values of the corresponding skin pixels in the R component image. Blemishes were detected using adaptive thresholds on gray-scale images. Experimental results show that the proposed analysis effectively detects acne, pigmentation, blemish, and flush.
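
A rough sketch of the color-model steps just described: a skin mask in YCrCb, a hue-range mask for reddish lesions (acne/flush candidates), and an adaptive threshold for blemishes. The numeric ranges below are common defaults, not the paper's calibrated values, and `face.jpg` is a hypothetical input.

```python
import cv2

img = cv2.imread("face.jpg")                       # hypothetical input image
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Skin mask: widely used YCrCb chrominance bounds.
skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

# Reddish regions (acne/flush candidates): low-hue, saturated pixels.
red = cv2.inRange(hsv, (0, 60, 60), (10, 255, 255))
red = cv2.bitwise_and(red, skin)

# Blemish candidates: locally dark spots via adaptive thresholding.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blemish = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                cv2.THRESH_BINARY_INV, 31, 7)
blemish = cv2.bitwise_and(blemish, skin)
```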

Robust Facial Expression Recognition Based on Local Directional Pattern

  • Jabid, Taskeed;Kabir, Md. Hasanul;Chae, Oksam
    • ETRI Journal
    • /
    • v.32 no.5
    • /
    • pp.784-794
    • /
    • 2010
  • Automatic facial expression recognition has many potential applications in different areas of human-computer interaction. However, they are not yet fully realized due to the lack of an effective facial feature descriptor. In this paper, we present a new appearance-based feature descriptor, the local directional pattern (LDP), to represent facial geometry and analyze its performance in expression recognition. An LDP feature is obtained by computing the edge response values in eight directions at each pixel and encoding them into an 8-bit binary number using the relative strength of these edge responses. The LDP descriptor, a distribution of LDP codes within an image or image patch, is used to describe each expression image. The effectiveness of dimensionality reduction techniques, such as principal component analysis and AdaBoost, is also analyzed in terms of computational cost saving and classification accuracy. Two well-known machine learning methods, template matching and support vector machine, are used for classification on the Cohn-Kanade and Japanese female facial expression databases. The better classification accuracy shows the superiority of the LDP descriptor over other appearance-based feature descriptors.
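
A compact NumPy sketch of the encoding the abstract describes, using the eight Kirsch masks as the directional edge operators and setting bits for the k strongest absolute responses. LDP is usually described with Kirsch masks and k = 3; treat both as assumptions here rather than the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import convolve

# Eight Kirsch masks (East response, rotated in 45-degree steps).
KIRSCH = [np.array(m) for m in (
    [[-3, -3, 5], [-3, 0, 5], [-3, -3, 5]],
    [[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]],
    [[5, 5, 5], [-3, 0, -3], [-3, -3, -3]],
    [[5, 5, -3], [5, 0, -3], [-3, -3, -3]],
    [[5, -3, -3], [5, 0, -3], [5, -3, -3]],
    [[-3, -3, -3], [5, 0, -3], [5, 5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [5, 5, 5]],
    [[-3, -3, -3], [-3, 0, 5], [-3, 5, 5]],
)]

def ldp_code(gray, k=3):
    """Return the 8-bit LDP code image of a 2-D grayscale array."""
    resp = np.stack([np.abs(convolve(gray.astype(np.float64), m))
                     for m in KIRSCH])            # shape (8, H, W)
    kth = np.sort(resp, axis=0)[-k]               # k-th largest per pixel
    bits = (resp >= kth).astype(np.uint8)         # top-k directions -> 1
    weights = (2 ** np.arange(8)).reshape(8, 1, 1)
    return (bits * weights).sum(axis=0).astype(np.uint8)

def ldp_histogram(gray):
    """Descriptor: histogram of LDP codes over an image or patch."""
    return np.bincount(ldp_code(gray).ravel(), minlength=256)
```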

The analysis of physical features and affective words on facial types of Korean females in twenties (얼굴의 물리적 특징 분석 및 얼굴 관련 감성 어휘 분석 - 20대 한국인 여성 얼굴을 대상으로 -)

  • 박수진;한재현;정찬섭
    • Korean Journal of Cognitive Science
    • /
    • v.13 no.3
    • /
    • pp.1-10
    • /
    • 2002
  • This study analyzed the physical attributes of faces and the affective words associated with faces. To analyze the physical attributes inside a face, 36 facial features were selected, most of them length or distance values. To analyze the facial contour, 14 points were selected and the distances from the nose-end to each of them were measured. Because face size differs across individuals, the values of these features, except ratio values, were normalized by facial vertical or horizontal length. Principal component analysis (PCA) was performed and four major factors were extracted: a 'facial contour' component, a 'vertical eye length' component, a 'facial width' component, and an 'eyebrow region' component. We constructed a five-dimensional imaginary face space using the PCA factor scores and selected representative faces evenly from this space. In addition, affective words on faces were collected from magazines and through surveys. Factor analysis and multidimensional scaling suggested two orthogonal dimensions for facial affect: babyish-mature and sharp-soft.
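
A minimal sketch of the normalization-plus-PCA step described above: length features divided by the face's vertical length to remove scale, then PCA to extract the dominant factors. The feature matrix is random placeholder data, and the assumption that column 0 holds the vertical face length is purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

n_faces, n_features = 80, 36
rng = np.random.default_rng(0)
raw = rng.uniform(10, 100, size=(n_faces, n_features))  # placeholder lengths

face_height = raw[:, [0]]          # assumed: column 0 is vertical face length
normalized = raw / face_height     # scale-free ratios, as in the study

pca = PCA(n_components=4)
scores = pca.fit_transform(normalized)      # factor scores spanning the face space
print(pca.explained_variance_ratio_)        # relative weight of each factor
```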

Comparison of Computer and Human Face Recognition According to Facial Components

  • Nam, Hyun-Ha;Kang, Byung-Jun;Park, Kang-Ryoung
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.1
    • /
    • pp.40-50
    • /
    • 2012
  • Face recognition is a biometric technology used to identify individuals based on facial feature information. Previous studies of face recognition used features including the eyes, mouth, and nose; however, there have been few studies on the effects of other facial components, such as the eyebrows and chin, on recognition performance. We measured the recognition accuracy affected by these facial components and compared the differences between computer-based and human-based facial recognition methods. This research is novel in the following four ways compared to previous works. First, we measured the effect of components such as the eyebrows and chin, and compared the accuracy of computer-based face recognition with that of human-based face recognition according to the facial components used. Second, for computer-based recognition, facial components were automatically detected using the AdaBoost algorithm and an active appearance model (AAM), and user authentication was achieved with a face recognition algorithm based on principal component analysis (PCA). Third, we experimentally showed that the number of facial features (when including the eyebrows, eyes, nose, mouth, and chin) had a greater impact on the accuracy of human-based face recognition, whereas consistent inclusion of certain features such as the chin area had more influence on the accuracy of computer-based face recognition, because a computer classifies faces using the pixel values of facial images. Fourth, we experimentally showed that the eyebrow feature enhanced the accuracy of computer-based face recognition, although the problem of occlusion by hair must be solved before the eyebrow feature can be used for face recognition.
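
An eigenface-style stand-in for the PCA recognition stage mentioned above. The AdaBoost/AAM component detection is omitted, and the LFW dataset is used only as conveniently fetchable, pre-cropped input; none of this reproduces the paper's protocol.

```python
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

faces = fetch_lfw_people(min_faces_per_person=50)   # pre-cropped face images
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, random_state=0)

# Project raw pixel vectors onto the principal components ("eigenfaces"),
# then authenticate by nearest neighbor in the reduced space.
pca = PCA(n_components=100, whiten=True).fit(X_tr)
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_tr), y_tr)
print("accuracy:", clf.score(pca.transform(X_te), y_te))
```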

A Study on the Ratio of Human and Dog Facial Components based on Principal Component Analysis (주성분 분석기반 인간과 개의 얼굴 비율 연구)

  • Lee, Young-suk;Ki, Dae Wook
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.10
    • /
    • pp.1339-1347
    • /
    • 2020
  • This study is a preliminary study toward designing a character automation system that considers the facial characteristics of mammals. The experimental data were collected from dogs (various breeds) and humans, both of which appear in many kinds of content. First, data were extracted from 100 dog samples and 100 human samples. Second, criteria for measuring the ratios of important parts of dog and human faces were proposed, and a comparative analysis of dog and human faces was conducted. Lastly, principal component analysis (PCA) was used to identify the most characteristic elements in the faces of dogs and humans. As a result, it was confirmed that face length, eye size, and the length of the glabella in relation to other parts are important. The features of the dog's face that differ from the human face are expected to contribute to animal character automation.
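
A sketch of how ratio features like those above can be derived from 68-point facial landmarks on human faces (dog faces would need a species-specific detector). The landmark indices follow the common 68-point annotation scheme, and the specific ratios chosen are illustrative assumptions, not the paper's measurement criteria.

```python
import numpy as np

def face_ratios(pts):
    """pts: (68, 2) array of landmark coordinates -> scale-free ratios."""
    face_len = np.linalg.norm(pts[8] - pts[27])    # chin tip to nose bridge
    eye_w = np.linalg.norm(pts[39] - pts[36])      # one eye's corner-to-corner width
    glabella = np.linalg.norm(pts[21] - pts[22])   # gap between inner eyebrow ends
    face_w = np.linalg.norm(pts[16] - pts[0])      # jaw extreme to jaw extreme
    return {
        "eye_to_face": eye_w / face_len,
        "glabella_to_face": glabella / face_len,
        "width_to_length": face_w / face_len,
    }

# Usage: stack face_ratios() outputs for 100 faces into a matrix,
# then run PCA on it exactly as in the normalization sketch above.
```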

Automatic Facial Expression Recognition using Tree Structures for Human Computer Interaction (HCI를 위한 트리 구조 기반의 자동 얼굴 표정 인식)

  • Shin, Yun-Hee;Ju, Jin-Sun;Kim, Eun-Yi;Kurata, Takeshi;Jain, Anil K.;Park, Se-Hyun;Jung, Kee-Chul
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.12 no.3
    • /
    • pp.60-68
    • /
    • 2007
  • In this paper, we propose an automatic facial expression recognition system that analyzes facial expressions (happiness, disgust, surprise, and neutral) using tree structures based on heuristic rules. The facial region is first obtained using a skin-color model and connected-component analysis (CCA). The positions of the user's eyes are then localized using a neural network (NN)-based texture classifier, after which the facial features are localized using heuristics. Once the facial features are detected, facial expression recognition is performed using a decision tree. To assess the validity of the proposed system, we tested it on 180 facial images from the MMI, JAFFE, and VAK databases. The results show that our system achieves an accuracy of 93%.
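
A toy sketch of the final classification stage: a decision tree over a handful of geometric facial features. The feature names and training values are illustrative placeholders, not the paper's heuristic rules.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Assumed features per face: [mouth_openness, mouth_corner_lift, brow_raise]
X = np.array([[0.1,  0.5,  0.1],    # happiness
              [0.0, -0.4, -0.2],    # disgust
              [0.8,  0.0,  0.9],    # surprise
              [0.1,  0.0,  0.0]])   # neutral
y = ["happiness", "disgust", "surprise", "neutral"]

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(tree.predict([[0.7, 0.1, 0.8]]))   # open mouth + raised brows -> "surprise"
```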

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi;Joo, Jong-Tae;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.1
    • /
    • pp.20-26
    • /
    • 2008
  • As intelligent robots and computers become more common, interaction between them and humans is becoming increasingly important, and emotion recognition and expression are indispensable for that interaction. In this paper, we first extract emotional features from speech signals and facial images. Second, we apply both Bayesian Learning (BL) and Principal Component Analysis (PCA), and classify five emotion patterns (neutral, happiness, anger, surprise, and sadness). We also experiment with decision fusion and feature fusion to improve the emotion recognition rate. In the decision fusion method, the output values of each recognition system are combined using fuzzy membership functions. The feature fusion method selects superior features through Sequential Forward Selection (SFS) and feeds them to a Multi-Layer Perceptron (MLP) neural network that classifies the five emotion patterns. Finally, the recognized result is applied to a 2D facial shape to express the emotion.
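
A feature-fusion sketch in the spirit of the pipeline above: concatenate speech and facial feature vectors, pick a subset with sequential forward selection, and classify five emotions with an MLP. All data here is synthetic, and scikit-learn's selector is a stand-in for the paper's SFS procedure.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
speech = rng.normal(size=(200, 12))     # placeholder speech features
face = rng.normal(size=(200, 20))       # placeholder facial features
X = np.hstack([speech, face])           # feature-level fusion
y = rng.integers(0, 5, size=200)        # labels for 5 emotion patterns

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
sfs = SequentialFeatureSelector(mlp, n_features_to_select=10,
                                direction="forward", cv=3).fit(X, y)
clf = mlp.fit(sfs.transform(X), y)      # train on the selected subset
```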

Stylized Facial Illustration (스타일화된 얼굴 일러스트레이션)

  • Son, Min-Jung;Cho, Sung-Hyun;Lee, Seung-Wook;Koo, Bon-Ki;Lee, Seung-Yong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.14 no.2
    • /
    • pp.27-33
    • /
    • 2008
  • We propose a stylized facial illustration method that expresses the important features of a human facial picture in a highly abstract yet effective way. Our method first detects facial components, such as the eyes and their associated regions, from an input image, and then uses the detected results to render a stylized portrait. The illustration method consists of two key components and additional ones: a tonal illustration component that draws simple tones, a line illustration component that draws a set of lines, and additional illustration components for hair, clothes, etc. The goal is to illustrate the features of a target effectively in a highly abstracted way, like hand-drawn paintings. To achieve this, our method adopts an oriental black-ink painting style, which expresses objects effectively through empty space and simple expressions such as abstracted lines.
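
A crude approximation of the two-component (tonal plus line) rendering idea: posterize luminance into a few ink tones and overlay dark edge lines. This only gestures at the black-ink style; the tone boundaries, Canny thresholds, and `portrait.jpg` input are all assumptions, not the paper's renderer.

```python
import cv2
import numpy as np

img = cv2.imread("portrait.jpg")               # hypothetical input portrait
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Tonal component: quantize luminance into three ink tones.
tones = np.digitize(gray, [80, 170])                       # 0=dark, 1=mid, 2=light
tonal = np.choose(tones, [40, 150, 255]).astype(np.uint8)  # map to ink values

# Line component: smoothed edges drawn in black over the tones.
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 60, 140)
result = tonal.copy()
result[edges > 0] = 0
cv2.imwrite("illustration.png", result)
```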

Real-time Recognition System of Facial Expressions Using Principal Component of Gabor-wavelet Features (표정별 가버 웨이블릿 주성분특징을 이용한 실시간 표정 인식 시스템)

  • Yoon, Hyun-Sup;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.6
    • /
    • pp.821-827
    • /
    • 2009
  • Human emotions are reflected in facial expressions, so recognizing facial expressions is a good way to understand people's emotions. Conventional facial expression recognition systems select interest points and then extract features without analyzing their physical meaning; finding the interest points takes a long time, and estimating their exact positions is difficult. Moreover, to implement facial expression recognition on a real-time embedded system, the algorithm must be simplified and its resource usage reduced. In this paper, we propose a real-time facial expression recognition algorithm that projects grid points onto an expression space based on Gabor wavelet features. A facial expression is described simply by feature vectors in the expression space and is classified by a neural network whose resource usage is dramatically reduced. The proposed system handles five expressions: anger, happiness, neutral, sadness, and surprise. In experiments, the average execution time was 10.251 ms and the recognition rate was 87-93%.
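
A sketch of the feature stage the abstract describes: responses of a Gabor filter bank sampled at grid points, then projected to a low-dimensional expression space with PCA. The bank's orientations, wavelengths, grid step, and component count are assumptions, not the paper's settings.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def gabor_bank(ksize=21):
    """Build a small Gabor filter bank: 8 orientations x 3 wavelengths."""
    kernels = []
    for theta in np.arange(0, np.pi, np.pi / 8):
        for lam in (4, 8, 16):
            kernels.append(cv2.getGaborKernel((ksize, ksize), 4.0,
                                              theta, lam, 0.5))
    return kernels

def grid_features(gray, step=16):
    """Sample each filter response on a coarse grid and concatenate."""
    feats = []
    for k in gabor_bank():
        resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, k)
        feats.append(resp[::step, ::step].ravel())
    return np.concatenate(feats)

# Usage (faces: a list of equal-size grayscale arrays, assumed available):
#   X = np.stack([grid_features(f) for f in faces])
#   coords = PCA(n_components=20).fit_transform(X)   # expression-space coordinates
```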