• Title/Summary/Keyword: facial features

Search results: 633

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services
    • /
    • v.7 no.2
    • /
    • pp.23-35
    • /
    • 2006
  • This work presents a novel method that automatically extracts the facial region and facial features from motion pictures and controls the expression of a 3D face in real time. To extract the facial region and facial feature points from each color frame, a new nonparametric skin color model is proposed in place of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as a Gaussian, lack robustness under varying lighting conditions and therefore need additional work to extract the exact facial region from face images. To overcome this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. Experiments show that the proposed approach efficiently detects facial feature points and naturally controls the expression of the 3D face model.

  • PDF
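The linear skin-chrominance model described above can be sketched as follows. The slope, intercept, and tolerance below are hypothetical placeholders (the paper fits them to training pixels), and the hue and tint channels are assumed to be precomputed:

```python
import numpy as np

# Sketch of the paper's nonparametric skin model: instead of fitting a
# Gaussian, skin chrominance is modeled as a linear function of hue.
# A, B, and TOL are hypothetical placeholders, not the paper's values.
A, B, TOL = 0.5, 10.0, 8.0

def skin_mask(hue, tint):
    """Label a pixel as skin if its tint lies near the line tint = A*hue + B."""
    hue = np.asarray(hue, dtype=float)
    tint = np.asarray(tint, dtype=float)
    return np.abs(tint - (A * hue + B)) <= TOL

# Example: a pixel on the line is skin; one far from it is not.
on_line = skin_mask(20.0, 0.5 * 20.0 + 10.0)   # distance 0 from the line
off_line = skin_mask(20.0, 100.0)              # distance 80 from the line
```

Because the decision is a distance to a line rather than a Gaussian likelihood, the threshold is a single tolerance band, which is what gives the model its robustness claim under lighting changes.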

Real-time Recognition System of Facial Expressions Using Principal Component of Gabor-wavelet Features (표정별 가버 웨이블릿 주성분특징을 이용한 실시간 표정 인식 시스템)

  • Yoon, Hyun-Sup;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.6
    • /
    • pp.821-827
    • /
    • 2009
  • Human emotions are reflected in facial expressions, so recognizing facial expressions is a good way to understand people's emotions. Conventional facial expression recognition systems select interest points and then extract features without analyzing their physical meaning; they take a long time to find the interest points, and it is hard to estimate the accurate positions of these feature points. Moreover, to implement facial expression recognition on a real-time embedded system, the algorithm must be simplified and its resource usage reduced. In this paper, we propose a real-time facial expression recognition algorithm that projects grid points onto an expression space based on Gabor wavelet features. A facial expression is simply described by feature vectors in the expression space and is classified by a neural network whose resource usage is dramatically reduced. The proposed system handles five expressions: anger, happiness, neutral, sadness, and surprise. In experiments, the average execution time is 10.251 ms and the recognition rate is 87~93%.
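The Gabor-feature-plus-PCA pipeline above can be sketched as follows. The kernel parameters and the random stand-in features are illustrative only; the paper computes Gabor responses at facial grid points and projects them per expression:

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0):
    """Real part of a Gabor filter: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def pca_basis(features, k):
    """Top-k principal components of row-vector features, via SVD."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]

# Gabor responses at grid points would be stacked into rows;
# random stand-ins are used here.
rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 16))                    # 20 samples, 16 responses
basis = pca_basis(feats, k=4)                        # expression-space axes
projected = (feats - feats.mean(axis=0)) @ basis.T   # 4-D expression codes
```

The low-dimensional `projected` vectors are what a small neural network would then classify, which is how the abstract's resource reduction is achieved.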

Face Recognition Using Adaboost Learning (Adaboost 학습을 이용한 얼굴 인식)

  • 정종률;최병욱
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.2016-2019
    • /
    • 2003
  • In this paper, we extract features for face recognition from face images using a simple type of template. The extracted features are used in AdaBoost learning for face recognition. By carefully choosing one of these features, we can build a weak face classifier, and by iterating AdaBoost learning with several such weak classifiers, we obtain a strong face classifier. AdaBoost learning selects features that are not easily affected by changes in illumination or facial expression across several images of one person, and a face recognition system is constructed from them. The face classifier built in this way is therefore robust to both facial expression and illumination variation, and it recognizes faces quickly thanks to the simplicity of the features.

  • PDF
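The weak-to-strong classifier construction described above can be sketched with threshold "stumps" over template responses. This is a generic AdaBoost sketch, not the paper's exact feature set:

```python
import numpy as np

def adaboost(X, y, rounds=5):
    """Tiny AdaBoost: each weak learner thresholds one feature column.

    X: (n, d) feature responses (e.g. template matches); y: labels in {-1, +1}.
    Returns a list of (feature index, threshold, polarity, weight alpha).
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                       # sample weights
    model = []
    for _ in range(rounds):
        best = None
        for j in range(d):                        # search all stumps
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, pol, pred)
        err, j, t, pol, pred = best
        err = max(err, 1e-10)                     # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)     # weak-learner weight
        w *= np.exp(-alpha * y * pred)            # re-weight hard samples
        w /= w.sum()
        model.append((j, t, pol, alpha))
    return model

def predict(model, X):
    score = sum(a * np.where(p * (X[:, j] - t) >= 0, 1, -1)
                for j, t, p, a in model)
    return np.where(score >= 0, 1, -1)

# One-feature toy problem: the boosted classifier separates it exactly.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
model = adaboost(X, y, rounds=3)
```

Each round re-weights the samples the current stump misclassified, which is how boosting ends up favoring features that stay discriminative across illumination and expression changes.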

Face Recognition Network using gradCAM (gradCam을 사용한 얼굴인식 신경망)

  • Chan Hyung Baek;Kwon Jihun;Ho Yub Jung
    • Smart Media Journal
    • /
    • v.12 no.2
    • /
    • pp.9-14
    • /
    • 2023
  • In this paper, we propose a face recognition network that attempts to use more facial features while using a smaller number of training sets. When combining neural networks for face recognition, we want networks that use different parts of the facial features; however, training chooses randomly where these facial features are obtained. On the other hand, the judgment basis of a network model can be expressed as a saliency map through Grad-CAM. Therefore, we use Grad-CAM to visualize where the trained face recognition model makes its observations and recognition judgments, so that the network combination can be constructed from the different facial features used. Using this approach, we trained a network for a small face recognition problem. In a simple toy face recognition example, the recognition network used in this paper improves accuracy by 1.79% and reduces the equal error rate (EER) by 0.01788 compared to the conventional approach.
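The Grad-CAM saliency computation the abstract relies on can be sketched directly from its definition: weight each convolutional channel by its spatially averaged gradient, sum, and rectify. The toy activations below are stand-ins for a real network's tensors:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM saliency map from one conv layer's activations and gradients.

    activations, gradients: (channels, H, W) arrays for a single input.
    Each channel is weighted by its spatially averaged gradient (alpha_k),
    summed, and rectified, highlighting regions that drove the decision.
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k per channel
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k
    return np.maximum(cam, 0.0)                       # ReLU

# Toy example: only channel 0 has a nonzero gradient, so the saliency
# map follows channel 0's activation pattern exactly.
acts = np.stack([np.eye(3), np.ones((3, 3))])
grads = np.stack([np.ones((3, 3)), np.zeros((3, 3))])
cam = grad_cam(acts, grads)
```

Comparing such maps across candidate networks is what lets the ensemble be assembled from models that attend to different facial regions.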

Improvement of Face Recognition Algorithm for Residential Area Surveillance System Based on Graph Convolution Network (그래프 컨벌루션 네트워크 기반 주거지역 감시시스템의 얼굴인식 알고리즘 개선)

  • Tan Heyi;Byung-Won Min
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.2
    • /
    • pp.1-15
    • /
    • 2024
  • The construction of smart communities is a new method and an important measure for ensuring the security of residential areas. To address the low face recognition accuracy caused by facial features being distorted by surveillance camera angles and other external factors, this paper proposes the following optimization strategies in designing a face recognition network. First, a global graph convolution module is designed to encode facial features as graph nodes, and a multi-scale feature enhancement residual module is designed to extract facial keypoint features in conjunction with it. Second, the obtained facial keypoints are assembled into a directed graph structure, and graph attention mechanisms are used to enhance the representational power of the graph features. Finally, tensor computations are performed on the graph features of two faces, and the aggregated features are extracted and discriminated by a fully connected layer to determine whether the two identities are the same. Across various experimental tests, the proposed network achieves an AUC of 85.65% for facial keypoint localization on the 300W public dataset and 88.92% on a self-built dataset. In terms of face recognition accuracy, it achieves 83.41% on the IBUG public dataset and 96.74% on a self-built dataset. The experimental results demonstrate that the network exhibits high detection and recognition accuracy for faces in surveillance video.
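The graph-convolution step over facial keypoints can be sketched with the standard symmetrically normalized propagation rule. The adjacency, features, and weights below are toy stand-ins for the paper's learned modules:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step over facial-keypoint nodes.

    A: (n, n) keypoint adjacency; H: (n, f) node features (e.g. local
    appearance around each keypoint); W: (f, f_out) learned weights.
    Uses the common symmetric normalization with self-loops:
    ReLU(D^-1/2 (A + I) D^-1/2 H W).
    """
    A_hat = A + np.eye(A.shape[0])                # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))        # degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Three keypoints in a chain, 2-D features, identity weights as a
# stand-in for trained parameters.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
H = np.array([[1, 0], [0, 1], [1, 1]], float)
out = gcn_layer(A, H, np.eye(2))
```

Stacking such layers lets each keypoint's feature absorb information from its graph neighbors, which is what makes the representation more robust to per-keypoint distortion from camera angle.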

Skin Color Based Facial Features Extraction

  • Alom, Md. Zahangir;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2011.11a
    • /
    • pp.351-354
    • /
    • 2011
  • This paper discusses facial feature extraction based on a proposed skin color model. Different parts of the face are segmented from the input image based on the skin color model, and the paper also presents a concept for detecting the eye and mouth positions on the face. A height-to-width ratio (δ = 1.1618) based technique is also proposed for accurate detection of the face region from the segmented image. Finally, the desired part of the face is cropped; this precisely extracted face part is useful for face recognition and detection, facial feature analysis, and expression analysis. Experimental results show that the proposed method is robust and accurate.
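The δ = 1.1618 ratio check can be sketched as follows: take the bounding box of the skin mask and enforce the expected face proportion on its height. How the paper resolves boxes that disagree with the ratio is not stated, so this is only one plausible reading:

```python
import numpy as np

DELTA = 1.1618  # the paper's face height-to-width ratio

def crop_face(mask):
    """Bounding box of a skin mask with its height set to DELTA * width,
    enforcing the expected face proportion on the crop."""
    ys, xs = np.nonzero(mask)
    top, left = ys.min(), xs.min()
    width = xs.max() - left + 1
    height = int(round(DELTA * width))   # enforce the face proportion
    return top, left, height, width

mask = np.zeros((40, 40), bool)
mask[5:25, 10:30] = True                 # a 20x20 skin blob
top, left, h, w = crop_face(mask)        # h becomes round(1.1618 * 20) = 23
```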

Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun;Park, Sung-Kee;Kim, Mun-Sang;Park, Mig-Non
Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.783-788
    • /
    • 2003
  • In this paper, we present a new approach to detecting and recognizing human faces in images from a vision camera mounted on a mobile robot platform. Because the camera platform is mobile, the obtained facial images are small and vary in pose, so the algorithm must cope with these constraints while detecting and recognizing faces in nearly real time. In the detection step, a coarse-to-fine strategy is used. First, a region boundary containing the face is roughly located by dual ellipse templates of facial color, and within this region the locations of the three main facial features, the two eyes and the mouth, are estimated. To this end, simplified facial feature maps using characteristic chrominance are constructed, and candidate pixels are segmented into eye or mouth pixel groups. These candidate facial features are then verified by checking whether the length and orientation of the feature pairs are suitable for face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined that includes the feature triangle connecting the two eyes and mouth. A random lattice line set is composed and laid over this convex hull area, and the 2D appearance of the area is represented from it. Through these procedures, facial information is obtained for each detected face, and face DB images are processed in the same way for each person class. Based on the facial information of these areas, a distance measure over matched lattice lines is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of a previous approach [15], make real-time face detection and recognition possible, and guarantee correct recognition regardless of some pose variation of the face. Their usefulness in mobile robot applications is demonstrated.

  • PDF
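The geometric verification of candidate feature pairs mentioned above can be sketched as simple distance and orientation limits. The thresholds below are illustrative, not the paper's values:

```python
import numpy as np

def plausible_eye_pair(left_eye, right_eye, max_tilt_deg=20.0,
                       min_dist=10.0, max_dist=100.0):
    """Check a candidate eye pair against simple face-geometry limits:
    the inter-eye distance must fall in a range and the line joining the
    eyes must be roughly horizontal. All thresholds are illustrative."""
    dx, dy = np.subtract(right_eye, left_eye).astype(float)
    dist = np.hypot(dx, dy)                            # inter-eye distance
    tilt = abs(np.degrees(np.arctan2(dy, dx)))         # tilt from horizontal
    return bool(min_dist <= dist <= max_dist and tilt <= max_tilt_deg)

ok = plausible_eye_pair((20, 50), (60, 52))    # nearly level, 40 px apart
bad = plausible_eye_pair((20, 50), (25, 90))   # nearly vertical pair
```

A mouth candidate would be screened the same way, against the midpoint and orientation of the accepted eye pair.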

Detection of Facial Feature Regions by Manipulation of DCT's Coefficients (DCT 계수를 이용한 얼굴 특징 영역의 검출)

  • Lee, Boo-Hyung;Ryu, Jang-Ryeol
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.8 no.2
    • /
    • pp.267-272
    • /
    • 2007
  • This paper proposes a new approach for detecting facial feature regions using the property of the DCT (discrete cosine transform) that it concentrates the energy of an image into the lower-frequency coefficients. Since facial features correspond to relatively high frequencies in a face image, applying the inverse DCT after removing the DCT coefficients corresponding to the lower frequencies generates an image in which the facial feature regions are emphasized. The facial regions can thus be easily segmented from the inverse-transformed image using any differential operator, and within the segmented region, facial features can be found using a face template. The proposed algorithm has been tested on images from MIT's CBCL database and the Yale Face Database B, and the experimental results show superior performance under variations of image size and lighting conditions.

  • PDF
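The remove-low-frequencies-then-invert idea can be sketched with an orthonormal DCT-II matrix. The size of the zeroed low-frequency block (`cutoff`) is an assumption; the paper does not fix it here:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)                 # DC row normalization
    return m

def emphasize_features(img, cutoff=2):
    """Zero the lowest-frequency DCT coefficients and invert, so smooth
    skin regions vanish and high-frequency feature regions remain."""
    n = img.shape[0]
    D = dct_matrix(n)
    coeffs = D @ img @ D.T               # 2D DCT
    coeffs[:cutoff, :cutoff] = 0.0       # drop the low-frequency block
    return D.T @ coeffs @ D              # inverse of an orthonormal transform

# A constant (flat) patch is pure DC energy, so it is removed entirely.
flat = np.full((8, 8), 5.0)
out = emphasize_features(flat)
```

Any edge (differential) operator applied to `out` then responds only where high-frequency content, i.e. eyes, nostrils, and mouth, survives.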

Clinical Studies on the General Features and the Obesity-Skinniness of Patients with Bell's Palsy (구안괘사(口眼喎斜)환자의 일반적 특성 및 비수(肥瘦)에 따른 임상적 고찰)

  • Choi, Gyu-Ho;Jang, Sao-Young;Shin, Hyeon-Cheol
    • The Journal of Internal Korean Medicine
    • /
    • v.30 no.1
    • /
    • pp.129-143
    • /
    • 2009
  • Objective: This study aimed to investigate the general features of patients with Bell's palsy and the differences between obese and skinny patients. Methods: We recorded the sex, age, BMI, pulse diagnosis, and HBGS (House-Brackmann Grading System) grade of 234 patients diagnosed with Bell's palsy. Results and Conclusions: The statistically significant results were as follows. (1) In the age distribution, patients in their 40s were the most numerous at 30.8%. (2) The improvement period in facial palsy patients with sub-paralysis was shorter than with whole paralysis, and in some cases the more the patients were treated, the shorter the improvement period was. (3) In the distribution of body-fat rate among facial palsy patients, obesity was the most common at 61.37% and low weight accounted for 15.88%, so the fatter the patient, the higher the onset rate. (4) In the pulse-diagnosis distribution of obese facial palsy patients, the ratio of Xu mai (虛脈) was 67.06% and Shi mai (實脈) 32.94%; since Xu mai is similar to Qi xu (氣虛), obese facial palsy patients were more Qi xu than low-weight patients. In the pulse-diagnosis distribution of skinny patients, there was no Chi mai (遲脈), and Shuo mai (數脈) was the most common. (5) In the regional distribution of obese facial palsy patients with Xu mai, the ratio of the left side was 45.10% and the right side 54.90%, but this result was not statistically significant.

  • PDF

Facial Phrenology Analysis and Automatic Face Avatar Drawing System Based on Internet Using Facial Feature Information (얼굴특징자 정보를 이용한 인터넷 기반 얼굴관상 해석 및 얼굴아바타 자동생성시스템)

  • Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.8
    • /
    • pp.982-999
    • /
    • 2006
  • In this paper, we propose an automatic facial phrenology analysis and avatar drawing system based on the internet, using multiple color components and face geometry. The proposed system detects the face using the logical product of Cr and I, which are components of the YCbCr and YIQ color models, respectively. It then extracts facial features using face geometry and analyzes the user's facial phrenology by classifying each facial feature. The proposed system can also draw an avatar automatically using the extracted and classified facial features. Experimental results show that the proposed algorithm can analyze facial phrenology as well as detect and recognize the user's face in real time.

  • PDF
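The logical product of the Cr (YCbCr) and I (YIQ) components can be sketched as two thresholded chrominance maps ANDed together. The threshold ranges below are common skin-tone values, not the paper's exact numbers:

```python
import numpy as np

def face_mask(rgb, cr_range=(135, 175), i_range=(15, 90)):
    """Face-candidate mask from the logical product (AND) of a Cr threshold
    (YCbCr) and an I threshold (YIQ). Threshold ranges are illustrative."""
    r, g, b = [rgb[..., c].astype(float) for c in range(3)]
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b   # BT.601 Cr
    i = 0.595716 * r - 0.274453 * g - 0.321263 * b     # YIQ I component
    cr_ok = (cr >= cr_range[0]) & (cr <= cr_range[1])
    i_ok = (i >= i_range[0]) & (i <= i_range[1])
    return cr_ok & i_ok                                # logical product

# A skin-like pixel passes both tests; a pure blue pixel fails.
img = np.array([[[200, 140, 120], [0, 0, 255]]], dtype=np.uint8)
mask = face_mask(img)
```

Requiring agreement between two different chrominance spaces rejects pixels that happen to pass only one skin test, which is the point of taking the logical product.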