• Title/Summary/Keyword: facial feature

The Facial Area Extraction Using Multi-Channel Skin Color Model and The Facial Recognition Using Efficient Feature Vectors (Multi-Channel 피부색 모델을 이용한 얼굴영역추출과 효율적인 특징벡터를 이용한 얼굴 인식)

  • Choi Gwang-Mi;Kim Hyeong-Gyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.7 / pp.1513-1517 / 2005
  • In this paper, a multi-channel skin color model built from Hue, Cb, and Cg, which uses the Red, Green, and Blue channels together and removes the brightness component, is used to model facial skin color more effectively and to extract the facial area, taking the characteristics of skin color into account. An efficient HOLA (higher-order local autocorrelation function) with 26 feature vectors is used to obtain the feature vectors of the facial area and of the edge image extracted with a Haar wavelet from the segmented facial region. The calculated feature vectors serve as data for facial recognition through neural network learning. Simulation demonstrates that the proposed algorithm improves both recognition rate and speed.
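
As a rough illustration of the skin-color stage, the sketch below combines Hue, Cb, and a computed Cg channel into one mask. The threshold ranges and the Cg coefficients are assumptions for illustration, not the values used in the paper.

```python
import cv2
import numpy as np

def skin_mask(bgr, h_range=(0, 25), cb_range=(77, 127), cg_range=(110, 135)):
    """Combine Hue, Cb and Cg constraints into a single skin mask.

    The threshold ranges are illustrative placeholders, not the paper's values.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    h = hsv[:, :, 0]                 # Hue (0..179 in OpenCV)
    cb = ycrcb[:, :, 2]              # OpenCV orders the channels Y, Cr, Cb

    # Cg channel computed directly from R, G, B (approximate YCgCr form)
    b, g, r = cv2.split(bgr.astype(np.float32))
    cg = 128.0 - 0.318 * r + 0.4392 * g - 0.1212 * b

    mask = ((h >= h_range[0]) & (h <= h_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cg >= cg_range[0]) & (cg <= cg_range[1]))
    return mask.astype(np.uint8) * 255
```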

Analysis and Synthesis of Facial Images for Age Change (나이변화를 위한 얼굴영상의 분석과 합성)

  • 박철하;최창석;최갑석
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.9 / pp.101-111 / 1994
  • The human face can provide a great deal of information about a person's race, age, sex, personality, feelings, psychology, mental state, health condition, and so on. If we pay close attention to the aging process, we can find recognizable phenomena such as eyelid drooping, cheek drooping, forehead furrowing, hair loss, hair graying, and so on. This paper proposes a method to estimate age by analyzing these feature components of a facial image, and also introduces a method of facial image synthesis according to the change of age. The feature components associated with aging are obtained by dividing the facial image into the 3-dimensional shape of the face and the texture of the face, and then analyzing the principal components of each using a 3-dimensional model. We estimate the age of a facial image by comparing it with the extracted feature components, and synthesize the resulting image by adding or subtracting the feature components to or from the facial image. Simulation results show that high-quality age-changed facial images are obtained.
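
A minimal sketch of the add/subtract idea, assuming face textures are stored as vectorized rows of a matrix: one principal axis stands in for the "age" component, and moving a face along that axis synthesizes the change. The function names and the choice of the first component are illustrative, not the paper's formulation.

```python
import numpy as np

def fit_age_axis(faces):
    """faces: (n_samples, n_pixels) matrix of vectorized face textures."""
    mean = faces.mean(axis=0)
    # principal components of the centered data via SVD
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[0]          # first component used as a stand-in "age" axis

def synthesize(face, axis, delta):
    """Add (delta > 0) or subtract (delta < 0) the feature component."""
    return face + delta * axis
```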

A Facial Feature Detection using Light Compensation and Appearance-based Features (빛 보상과 외형 기반의 특징을 이용한 얼굴 특징 검출)

  • Kim Jin-Ok
    • Journal of Internet Computing and Services / v.7 no.3 / pp.143-153 / 2006
  • Facial feature detection is a basic technology in applications such as human-computer interfaces, face recognition, face tracking, and image database management. The speed of the detection algorithm is one of the main issues for facial feature detection in real-time environments, and factors such as lighting variation, location, rotation, and complex backgrounds lower the detection ratio. A facial feature detection algorithm is proposed to improve both the detection ratio and the detection speed. The proposed algorithm detects skin regions over the entire image after it has been improved by CLAHE, an algorithm for light compensation against varying lighting conditions. To extract facial feature points in the detected skin regions, it uses appearance-based geometrical characteristics of a face. Since the method achieves fast detection speed as well as an efficient face-detection ratio, it can be applied in real-time applications such as face tracking and face recognition.
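
A brief sketch of the light-compensation step followed by a simple skin threshold; the CLAHE parameters and the YCrCb thresholds are assumptions for illustration, not the paper's settings.

```python
import cv2

def compensate_and_detect_skin(bgr):
    """Equalize the luminance channel with CLAHE, then threshold skin in YCrCb."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    y_eq = clahe.apply(y)                                   # light compensation
    compensated = cv2.merge([y_eq, cr, cb])
    skin = cv2.inRange(compensated, (0, 133, 77), (255, 173, 127))
    return cv2.cvtColor(compensated, cv2.COLOR_YCrCb2BGR), skin
```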

Facial Phrenology Analysis and Automatic Face Avatar Drawing System Based on Internet Using Facial Feature Information (얼굴특징자 정보를 이용한 인터넷 기반 얼굴관상 해석 및 얼굴아바타 자동생성시스템)

  • Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.9 no.8 / pp.982-999 / 2006
  • In this paper, we propose an internet-based automatic facial phrenology analysis and avatar drawing system that uses multiple color information and face geometry. In the proposed system, the face is detected using the logical product of Cr and I, which are components of the YCbCr and YIQ color models, respectively. Facial features are then extracted using face geometry, and the user's facial phrenology is analyzed by classifying each facial feature. The proposed system can also draw an avatar automatically from the extracted and classified facial features. Experimental results show that the proposed algorithm can analyze facial phrenology as well as detect and recognize the user's face in real time.
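
A small sketch of the face-candidate step as described: a Cr mask (from YCbCr) is combined with an I mask (from YIQ) by a logical product. The threshold ranges are illustrative assumptions.

```python
import cv2
import numpy as np

def face_candidate_mask(bgr, cr_range=(133, 173), i_range=(20, 90)):
    """Logical product (AND) of a Cr mask and an I mask."""
    cr = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)[:, :, 1]
    b, g, r = cv2.split(bgr.astype(np.float32))
    i = 0.596 * r - 0.274 * g - 0.322 * b        # I channel of the YIQ model
    cr_mask = cv2.inRange(cr, cr_range[0], cr_range[1])
    i_mask = ((i >= i_range[0]) & (i <= i_range[1])).astype(np.uint8) * 255
    return cv2.bitwise_and(cr_mask, i_mask)
```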

New Rectangle Feature Type Selection for Real-time Facial Expression Recognition (실시간 얼굴 표정 인식을 위한 새로운 사각 특징 형태 선택기법)

  • Kim Do Hyoung;An Kwang Ho;Chung Myung Jin;Jung Sung Uk
    • Journal of Institute of Control, Robotics and Systems / v.12 no.2 / pp.130-137 / 2006
  • In this paper, we propose a method of selecting new types of rectangle features that are suitable for facial expression recognition. The basic concept is similar to Viola's approach, which is used for face detection. Instead of the previous Haar-like features, we choose rectangle features for facial expression recognition among all possible rectangle types in a 3×3 matrix form using the AdaBoost algorithm. The facial expression recognition system built with the proposed rectangle features is also compared with one using the previous rectangle features in terms of performance. Simulation and experimental results show that the proposed approach performs better in facial expression recognition.
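
As an illustration of what a rectangle feature defined on a 3×3 cell grid could look like, the sketch below evaluates one such feature on an integral image; the sign-matrix encoding is an assumption for illustration, not the paper's exact definition.

```python
import numpy as np

def integral(img):
    """Integral image: cumulative sums along both axes."""
    return np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size (w, h)."""
    a = ii[y - 1, x - 1] if x > 0 and y > 0 else 0.0
    b = ii[y - 1, x + w - 1] if y > 0 else 0.0
    c = ii[y + h - 1, x - 1] if x > 0 else 0.0
    return ii[y + h - 1, x + w - 1] - b - c + a

def feature_value(ii, signs, x, y, cw, ch):
    """One rectangle feature on a 3x3 cell grid: each cell weighted by +1/-1/0."""
    total = 0.0
    for row in range(3):
        for col in range(3):
            if signs[row][col]:
                total += signs[row][col] * rect_sum(ii, x + col * cw, y + row * ch, cw, ch)
    return total
```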

Emotion Detection Algorithm Using Frontal Face Image

  • Kim, Moon-Hwan;Joo, Young-Hoon;Park, Jin-Bae
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.2373-2378 / 2005
  • An emotion detection algorithm using frontal facial images is presented in this paper. The algorithm is composed of three main stages: image processing, facial feature extraction, and emotion detection. In the image processing stage, the face region and facial components are extracted using a fuzzy color filter, a virtual face model, and histogram analysis. The features for emotion detection are extracted from the facial components in the facial feature extraction stage. In the emotion detection stage, a fuzzy classifier is adopted to recognize emotion from the extracted features. Experimental results show that the proposed algorithm detects emotion well.
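
A toy sketch of a fuzzy classifier over geometric features; the two features, the triangular membership parameters, and the rules are purely illustrative assumptions, not the paper's design.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_emotion(mouth_open, eyebrow_raise):
    """Pick the emotion whose fuzzy rule fires most strongly (illustrative rules)."""
    rules = {
        "surprise":  min(tri(mouth_open, 0.4, 0.7, 1.0), tri(eyebrow_raise, 0.4, 0.7, 1.0)),
        "happiness": min(tri(mouth_open, 0.2, 0.5, 0.8), tri(eyebrow_raise, 0.0, 0.2, 0.5)),
        "neutral":   min(tri(mouth_open, 0.0, 0.1, 0.3), tri(eyebrow_raise, 0.0, 0.1, 0.3)),
    }
    return max(rules, key=rules.get)
```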

Study of Model Based 3D Facial Modeling for Virtual Reality (가상현실에 적용을 위한 모델에 근거한 3차원 얼굴 모델링에 관한 연구)

  • 한희철;권중장
    • Proceedings of the IEEK Conference / 2000.11c / pp.193-196 / 2000
  • In this paper, we present a model-based 3D facial modeling method for virtual reality applications that uses only one frontal face photograph. Facial features are extracted from the photograph, and the mesh of the basic 3D model is modified according to these features; texture mapping is then applied for greater similarity. Experiments show that this modeling technique is useful for movies, virtual reality applications, games, the clothing industry, and 3D video conferencing.
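
A minimal sketch of aligning a generic 3D mesh to 2D landmarks detected in a single frontal photo, assuming a simple scale-and-translate fit in the image plane; the function names and the fitting procedure are illustrative, not the paper's method.

```python
import numpy as np

def fit_generic_model(vertices, landmark_idx, image_landmarks):
    """Align a generic 3D face mesh (N x 3) to detected 2D landmarks.

    A scale + translation is estimated from the model's landmark vertices
    and applied to the whole mesh; depth is scaled consistently.
    """
    src = vertices[landmark_idx, :2]                         # model landmarks (x, y)
    dst = np.asarray(image_landmarks, dtype=np.float64)      # detected landmarks (x, y)
    s = np.linalg.norm(dst - dst.mean(0)) / np.linalg.norm(src - src.mean(0))
    t = dst.mean(0) - s * src.mean(0)
    fitted = vertices.astype(np.float64).copy()
    fitted[:, :2] = s * fitted[:, :2] + t
    fitted[:, 2] *= s
    return fitted
```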

Systematic Review on Researches of Sasang Constitution Diagnosis Using Facial Feature (안면형상을 활용한 사상체질 진단 연구에 관한 체계적 고찰)

  • Lee, Seon-Young;Koh, Byung-Hee;Lee, Eui-Ju;Lee, Jun-Hee;Hwang, Min-Woo
    • Journal of Sasang Constitutional Medicine / v.24 no.4 / pp.17-27 / 2012
  • Objectives : This study aims at developing a Sasang medical diagnosis program using facial form, in order to increase the objectivity of Sasang constitution diagnosis and to put the diagnosis program into practical use. The authors review existing research on Sasang constitution diagnosis that uses facial feature analysis and suggest an agenda for further research. Methods : Papers on the use of facial form for constitution diagnosis published up to September 2012 were collected from databases such as RISS4U, OASIS, KISTI, and Korean TK. The final 33 papers were classified into two categories, basic or clinical research, and then analyzed. Results : 9 of the 33 papers were basic research and 24 were clinical research. 1) The reviewed references showed a uniform tendency in facial form according to Sasang constitution. 2) In practical use, certain facial elements are used repeatedly, and the facial elements of greatest importance differ by constitution. 3) Standard faces per Sasang constitution were derived from 2-dimensional research. 4) 3-dimensional research focused on improving the accuracy and reliability of 3D-AFRA, and there has been an attempt to develop a prototype for identification. Conclusions : For the practical use of facial features in Sasang constitution diagnosis, 1) standardization of diagnosis through an established Sasang medical clinical protocol must come first; after standardization, the practical purpose and direction of facial form in general may be decided. 2) High-quality facial form information from constitutional and conditional patients must be collected to form an extensive database. 3) Subdivided symptomatology, as well as Sasang constitution, must be considered so that the diagnosis technique acquires clinical practicality.

A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi;Chun, Junchul
    • Journal of Internet Computing and Services / v.14 no.6 / pp.117-124 / 2013
  • In this paper, we present a new method that efficiently estimates face direction from a sequence of input video images in real time. The proposed method first detects the facial region and major facial features such as both eyes, the nose, and the mouth using Haar-like features, which are relatively insensitive to lighting variation. The feature points are then tracked in every frame using optical flow in real time, and the direction of the face is determined from the tracked points. Further, to prevent false feature positions from being accepted when the coordinates of the features are lost during optical-flow tracking, the proposed method validates the feature locations in real time by template matching against the detected facial features. Depending on the correlation score from this template matching, the estimation process either re-detects the facial features or keeps tracking them while determining the face direction. In the facial feature detection phase, the template matching step stores the locations of four facial features (the left and right eyes, the tip of the nose, and the mouth) and re-evaluates this information by detecting new facial features from the input image whenever the similarity between the stored information and the information traced by optical flow crosses a certain threshold. The proposed approach automatically alternates between the feature-detection and feature-tracking phases and estimates face pose stably in real time. Experiments show that the proposed method efficiently estimates face direction.
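
A compact sketch of the track-then-validate loop described above, using OpenCV's pyramidal Lucas-Kanade optical flow and normalized template matching; the threshold and the surrounding bookkeeping are illustrative assumptions.

```python
import cv2
import numpy as np

def track_points(prev_gray, gray, points):
    """Track feature points between frames with pyramidal Lucas-Kanade.

    points must be a float32 array of shape (N, 1, 2).
    """
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    return new_pts, status

def valid_by_template(gray, point, template, threshold=0.7):
    """Compare the patch around a tracked point with the template stored at
    detection time; a low correlation suggests the feature should be re-detected."""
    h, w = template.shape
    x, y = int(point[0] - w // 2), int(point[1] - h // 2)
    if x < 0 or y < 0 or x + w > gray.shape[1] or y + h > gray.shape[0]:
        return False
    patch = gray[y:y + h, x:x + w]
    score = cv2.matchTemplate(patch, template, cv2.TM_CCOEFF_NORMED)[0, 0]
    return score >= threshold
```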

Model based Facial Expression Recognition using New Feature Space (새로운 얼굴 특징공간을 이용한 모델 기반 얼굴 표정 인식)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.17B no.4 / pp.309-316 / 2010
  • This paper introduces a new model-based method for facial expression recognition that uses facial grid angles as the feature space. To recognize the six main facial expressions, the proposed method takes a grid approach and establishes a new feature space based on the angles formed by the grid's edges and vertices. The approach is robust to affine transformations such as translation, rotation, and scaling, which in other approaches significantly harm the overall accuracy of a facial expression recognition algorithm. The paper also demonstrates how the feature space is created from angles and how a feature-subset selection within this space is applied with a wrapper approach. Selected features are classified with SVM and 3-NN classifiers, and the classification results are validated with two-tier cross-validation. The proposed method achieves a 94% classification result, and the feature selection algorithm improves results by up to 10% over the full feature set.
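
A short sketch of angle features computed at grid vertices, which are unchanged by translation, rotation, and uniform scaling; the choice of vertex triplets and the use of scikit-learn's SVC are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def grid_angles(points, triplets):
    """Angle at the middle vertex of each (i, j, k) triplet of grid points."""
    feats = []
    for i, j, k in triplets:
        v1, v2 = points[i] - points[j], points[k] - points[j]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        feats.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(feats)

# Illustrative use: X rows are angle vectors per face image, y the expression labels.
# clf = SVC(kernel="rbf").fit(X, y)
```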