• Title/Summary/Keyword: Facial Feature


A 3D Face Reconstruction Based on the Symmetrical Characteristics of Side View 2D Face Images (측면 2차원 얼굴 영상들의 대칭성을 이용한 3차원 얼굴 복원)

  • Lee, Sung-Joo;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.1
    • /
    • pp.103-110
    • /
    • 2011
  • A widely used 3D face reconstruction method, structure from motion (SfM), performs robustly when frontal, left, and right face images are available. However, it cannot correctly reconstruct a self-occluded facial part when only face images from one side are used, because only partial facial feature points are available in this case. To solve this problem, the proposed method exploits the bilateral symmetry of human faces as a constraint to generate the mirrored facial feature points, and uses both the input facial feature points and the generated ones to reconstruct a 3D face. For quantitative evaluation of the proposed method, 3D faces were obtained from a 3D face scanner and compared with the reconstructed 3D faces. The experimental results show that the proposed 3D face reconstruction method based on both sets of facial feature points outperforms the previous method based on only partial facial feature points.
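The symmetry constraint described in the abstract can be sketched as follows: feature points visible on one side of the face are reflected across the facial symmetry plane to stand in for the occluded side. The choice of x = 0 as the symmetry plane and the point values are illustrative assumptions, not the paper's data.

```python
import numpy as np

def mirror_feature_points(points):
    """Reflect 3D facial feature points across the bilateral symmetry
    plane x = 0 (a hypothetical choice of plane for illustration)."""
    mirrored = points.copy()
    mirrored[:, 0] = -mirrored[:, 0]  # negate x to mirror left/right
    return mirrored

# One visible side of the face: a few hand-made 3D feature points.
side_points = np.array([
    [0.5, 1.0, 0.2],   # e.g. right eye corner
    [0.3, 0.0, 0.4],   # e.g. right nostril
])

# Reconstruction would use the union of input and mirrored points.
full_set = np.vstack([side_points, mirror_feature_points(side_points)])
print(full_set.shape)  # (4, 3): input points plus mirrored counterparts
```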

A study on the implementation of identification system using facial multi-feature (얼굴의 다중특징을 이용한 인증 시스템 구현)

  • 정택준;문용선;박병석
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2002.05a
    • /
    • pp.448-451
    • /
    • 2002
  • This study offers multi-feature recognition instead of single-feature recognition to improve accuracy. Each feature is computed as follows. For the face, the feature is calculated by principal component analysis with wavelet multiresolution. For the lips, a filter is first used to find an equation describing the lip edges; a further feature is then calculated from the distance ratios of facial parameters. A backpropagation neural network was trained and tested with the inputs described above, and the advantages and efficiency of the approach are discussed based on the experimental results.
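The face feature described above is computed by principal component analysis; a minimal PCA sketch via SVD is shown below. The wavelet multiresolution step the paper combines with PCA is omitted, and the random input matrix is a stand-in for real face feature vectors.

```python
import numpy as np

def pca_features(X, n_components=2):
    """Project rows of X onto the top principal components.
    A generic PCA sketch; the paper pairs this with wavelet
    multiresolution preprocessing, which is not shown here."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 6))        # 10 hypothetical face feature vectors
Z = pca_features(X, n_components=2)
print(Z.shape)  # (10, 2)
```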

Extraction of Facial Feature Parameters by Pixel Labeling (화소 라벨링에 의한 얼굴 특징 인수 추출)

  • 김승업;이우범;김욱현;강병욱
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.2
    • /
    • pp.47-54
    • /
    • 2001
  • The main purpose of this study is to propose an algorithm for extracting facial features. First, the study produces a binary image from the input color image and calculates region areas after pixel labeling with variable block units. Second, the circumference of each region is calculated by contour following, and from the area and circumference values the degrees of resemblance in area, circumference, circularity, and shape are computed. Third, parameters describing the features of the eyes, nose, and mouth are extracted using these degrees of resemblance together with the general structure and characteristics (symmetrical distances) of the face, yielding the feature parameters of the frontal face. In this study, twelve facial feature parameters were extracted from 297 test images taken of 100 people, with an extraction rate of 92.93%.
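The "degree of circularity" computed from a labeled region's area and circumference can be sketched with the standard shape descriptor 4πA/P²; the abstract does not give the paper's exact formula, so this is an assumed, conventional definition, with toy shapes in place of labeled pixel regions.

```python
import math

def circularity(area, perimeter):
    """Standard circularity descriptor 4*pi*A / P^2: 1.0 for a perfect
    circle, smaller for elongated shapes. The paper's exact formula is
    not stated in the abstract; this is the conventional choice."""
    return 4.0 * math.pi * area / (perimeter ** 2)

# A circle of radius 10 vs. a 40x5 rectangle (in practice, area and
# perimeter would come from pixel labeling and contour following).
circle = circularity(math.pi * 10 ** 2, 2 * math.pi * 10)
rect = circularity(40 * 5, 2 * (40 + 5))
print(round(circle, 3), round(rect, 3))  # 1.0 for the circle
```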

A neural network model for recognizing facial expressions based on perceptual hierarchy of facial feature points (얼굴 특징점의 지각적 위계구조에 기초한 표정인식 신경망 모형)

  • 반세범;정찬섭
    • Korean Journal of Cognitive Science
    • /
    • v.12 no.1_2
    • /
    • pp.77-89
    • /
    • 2001
  • Applying the perceptual hierarchy of facial feature points, a neural network model for recognizing facial expressions was designed. The input data were convolution values of 150 facial expression pictures with Gabor filters of 5 different sizes and 8 different orientations at each of the 39 mesh points defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). A set of multiple regression analyses was performed between the affective-state ratings for each facial expression and the Gabor-filtered values of the 39 feature points. The results show that the pleasure-displeasure dimension of affective states is mainly related to the feature points around the mouth and the eyebrows, while the arousal-sleep dimension is closely related to the feature points around the eyes. Regarding filter size, the affective states were found to be mostly related to low spatial frequencies, and regarding filter orientation, to the oblique orientations. On the basis of these results, an optimized neural network model was designed by reducing the original 1560 (39x5x8) input elements to 400 (25x2x8). The optimized model predicted human affective ratings up to a correlation of 0.886 for pleasure-displeasure and 0.631 for arousal-sleep. Mapping the outputs of the optimized model to the six basic emotional categories (happy, sad, fear, angry, surprised, disgusted) fit 74% of human responses. These results imply that, by using human principles of facial expression recognition, a system for recognizing facial expressions can be optimized even with a relatively small amount of information.
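The 5-size x 8-orientation Gabor filter bank described above can be sketched as follows; the specific sizes, wavelengths, and sigma are illustrative assumptions, not the paper's parameters. Note that 40 filters over 39 mesh points reproduce the 1560 input elements mentioned in the abstract.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma=None):
    """Real part of a Gabor filter: a sinusoid windowed by a Gaussian.
    Parameter defaults are illustrative, not the paper's values."""
    sigma = sigma or size / 4.0
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotated coordinate
    gauss = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return gauss * np.cos(2 * np.pi * xr / wavelength)

# 5 sizes x 8 orientations, matching the bank described in the abstract.
sizes = [7, 11, 15, 19, 23]
thetas = [k * np.pi / 8 for k in range(8)]
bank = [gabor_kernel(s, wavelength=s / 2, theta=t)
        for s in sizes for t in thetas]
print(len(bank), 39 * len(bank))  # 40 filters -> 1560 features at 39 points
```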

Homogeneous and Non-homogeneous Polynomial Based Eigenspaces to Extract the Features on Facial Images

  • Muntasa, Arif
    • Journal of Information Processing Systems
    • /
    • v.12 no.4
    • /
    • pp.591-611
    • /
    • 2016
  • High-dimensional space is the biggest problem when the classification process is carried out, because computation takes longer and the associated costs are therefore high. In this research, facial spaces generated from homogeneous and non-homogeneous polynomials are proposed to extract facial image features. The homogeneous and non-homogeneous polynomial-based eigenspaces offer an alternative appearance-based feature extraction method for handling non-linear features. The kernel trick is used to carry out the matrix computation on the homogeneous and non-homogeneous polynomials. The weights and projections of the new feature space of the proposed method were evaluated on three face image databases: YALE, ORL, and UoB. The experiments produced the highest recognition rates of 94.44%, 97.5%, and 94% for YALE, ORL, and UoB, respectively. The results show that the proposed method achieves higher recognition rates than other methods such as Eigenface, Fisherface, Laplacianfaces, and O-Laplacianfaces.
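The kernel trick on homogeneous and non-homogeneous polynomials can be sketched with the polynomial Gram matrix; the degree, the constant term, and the random input vectors are illustrative assumptions, and the eigendecomposition step that builds the actual eigenspace is omitted.

```python
import numpy as np

def polynomial_kernel(X, degree=2, c=1.0):
    """Gram matrix K[i, j] = (x_i . x_j + c) ** degree.
    c = 0 gives the homogeneous polynomial kernel, c > 0 the
    non-homogeneous one; eigendecomposing the (centered) K would
    yield a kernel-PCA style eigenspace."""
    return (X @ X.T + c) ** degree

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 8))  # 5 hypothetical face image feature vectors
K_hom = polynomial_kernel(X, degree=2, c=0.0)     # homogeneous
K_nonhom = polynomial_kernel(X, degree=2, c=1.0)  # non-homogeneous
print(K_hom.shape, np.allclose(K_hom, K_hom.T))   # (5, 5) True
```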

Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun;Park, Sung-Kee;Kim, Mun-Sang;Park, Mig-Non
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.783-788
    • /
    • 2003
  • In this paper, we present a new approach to detecting and recognizing human faces in images from a vision camera mounted on a mobile robot platform. Because the camera platform moves, the obtained facial images are small and vary in pose, so the algorithm must cope with these constraints and detect and recognize faces in nearly real time. In the detection step, a 'coarse to fine' detection strategy is used. First, a region boundary containing the face is roughly located by dual ellipse templates of facial color, and within this region the locations of the three main facial features, the two eyes and the mouth, are estimated. For this, simplified facial feature maps using characteristic chrominance are constructed, and candidate pixels are segmented into eye or mouth pixel groups. These candidate facial features are verified by checking whether the length and orientation of feature pairs are suitable for face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined that includes the feature triangle connecting the two eyes and the mouth. A random lattice line set is composed and laid over this convex hull area, and the 2D appearance of the area is represented. From these procedures, facial information for the detected face is obtained, and the face DB images are processed in the same way for each person class. Based on the facial information of these areas, a distance measure over the matched lattice lines is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of a previous approach [15], make real-time face detection and recognition possible, and guarantee correct recognition regardless of some pose variation of the face. Their usefulness in mobile robot applications is demonstrated.

Relationship Between Morphologic measurement of Facial Feature and Eating Behavior During a Meal (얼굴생김새와 식사행동과의 관련성)

  • Kim, Gyeong-Eup;Kim, Seok-Young
    • Journal of the Korean Society of Food Culture
    • /
    • v.16 no.2
    • /
    • pp.109-117
    • /
    • 2001
  • Judging from the studies carried out by Dr. Jo, Yong Jin on Korean faces, Koreans are divided into two constitutions according to their facial features and heritage. One population is the Northern lineage, whose ancestors migrated from Siberia during the ice age. To survive in the cold climate, they developed a high level of metabolic heat production; cold adaptation for preventing heat loss resulted in a reduced facial surface area with small eyes, nose, and lips. The other population is the Southern lineage, descended from natives of the Korean peninsula; they have big eyes with double-edged eyelids, broad noses, and thick lips. It is generally believed that both genetic and environmental factors influence eating behaviors. Although we cannot directly identify the heritage that may contribute to metabolism and eating behavior, we commonly recognize physiological heritage from facial features. To investigate the relationships among the size and shape of facial features, eating behavior, and anthropometric measurements in female college students, eating behavior was measured during an instant-noodle lunch eaten in a laboratory setting at an ambient temperature of 23°C. The anterior surface area of the left eye and the length of the right eye were positively correlated with the difference between the peak postprandial and meal-start core temperatures. The surface area of the lower lip was negatively correlated with the meal-start core temperature and meal duration. In addition, the total lip area was positively correlated with the difference between the peak postprandial and meal-start core temperatures and negatively correlated with meal duration. However, anthropometric measurements were not related to the size of the facial features.

Face classification and analysis based on geometrical feature of face (얼굴의 기하학적 특징정보 기반의 얼굴 특징자 분류 및 해석 시스템)

  • Jeong, Kwang-Min;Kim, Jung-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.7
    • /
    • pp.1495-1504
    • /
    • 2012
  • This paper proposes an algorithm to classify and analyze facial features such as the eyebrows, eyes, mouth, and chin based on the geometric features of the face. As a preprocessing step, the algorithm extracts the facial features such as the eyebrows, eyes, nose, mouth, and chin. From the extracted features, it detects shape and form information and the ratios of distances between the features, and formulates these into evaluation functions that classify 12 eyebrow types, 3 eye types, 9 mouth types, and 4 chin types. Using these facial features, it then analyzes a face. The face analysis algorithm uses information about the pixel distribution and gradient of each feature; in other words, it analyzes a face by comparing such information across the features.

A Study on Appearance-Based Facial Expression Recognition Using Active Shape Model (Active Shape Model을 이용한 외형기반 얼굴표정인식에 관한 연구)

  • Kim, Dong-Ju;Shin, Jeong-Hoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.1
    • /
    • pp.43-50
    • /
    • 2016
  • This paper introduces an appearance-based facial expression recognition method that uses ASM landmarks to acquire a detailed face region. In particular, an EHMM-based algorithm and an SVM classifier with histogram features are employed for appearance-based facial expression recognition, and the performance of the proposed method was evaluated on the CK and JAFFE facial expression databases. In addition, the method was compared with a distance-based face normalization method and with a geometric-feature-based facial expression approach that employs the geometrical features of ASM landmarks and an SVM algorithm. As a result, the proposed method using ASM-based face normalization showed performance improvements of 6.39% and 7.98% over the previous distance-based face normalization method on the CK and JAFFE databases, respectively. The proposed method also outperformed the geometric-feature-based facial expression approach, confirming its effectiveness.

Detection of Facial Direction for Automatic Image Arrangement (이미지 자동배치를 위한 얼굴 방향성 검출)

  • 동지연;박지숙;이환용
    • Journal of Information Technology Applications and Management
    • /
    • v.10 no.4
    • /
    • pp.135-147
    • /
    • 2003
  • With the development of multimedia and optical technologies, application systems using facial features have recently attracted increasing interest from researchers. Previous research efforts in face processing mainly use frontal images to recognize human faces visually and to extract facial expressions. However, applications such as image database systems that support queries based on facial direction, and image arrangement systems that place facial images automatically in digital albums, must deal with the directional characteristics of a face. In this paper, we propose a method to detect facial direction using facial features. In the proposed method, the facial trapezoid is defined by detecting points for the eyes and the lower lip. Then the facial direction formula, which calculates the right and left facial direction, is defined from statistical data about the ratio of the right and left areas of facial trapezoids. The proposed method gives an accurate estimate of the horizontal rotation of a face within an error tolerance of ±1.31 degrees and takes an average execution time of 3.16 seconds.
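The left/right area ratio of the facial trapezoid can be sketched with a simple signed ratio; the paper's actual direction formula is fitted from statistical data, so this function and the area values below are illustrative stand-ins only.

```python
def direction_ratio(left_area, right_area):
    """Signed ratio of the left and right halves of the facial
    trapezoid: 0 for a frontal face, nonzero when the head turns.
    A simplified stand-in for the paper's statistically fitted
    direction formula."""
    return (right_area - left_area) / (right_area + left_area)

# Facing the camera: both halves roughly equal. Turned to one side:
# one half of the trapezoid appears larger (hypothetical numbers).
print(round(direction_ratio(100.0, 100.0), 3))  # 0.0
print(round(direction_ratio(80.0, 120.0), 3))   # 0.2
```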
