• Title/Summary/Keyword: face feature points

Search Result 127, Processing Time 0.031 seconds

The Study of Face Model and Face Type (사상인 용모분석을 위한 얼굴표준 및 얼굴유형에 대한 연구현황)

  • Pyeon, Young-Beom;Kwak, Chang-Kyu;Yoo, Jung-Hee;Kim, Jong-Won;Kim, Kyu-Kon;Kho, Byung-Hee;Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine / v.18 no.2 / pp.25-33 / 2006
  • 1. Objectives: Recently there have been attempts to extract the characteristics of Sasangin faces. Three-dimensional modeling is essential for characterizing the Sasangin face, so studies of standard face models and face types are necessary. 2. Methods: We reviewed domestic and international research on standard facial modeling and facial types. 3. Results and Conclusions: The Facial Definition Parameters are a complex set of parameters defined by MPEG-4; the standard defines a set of 84 feature points and 68 Facial Animation Parameters. Face types have been studied by dividing faces into male and female, Western and Oriental, or the Sasangin types (Taeyangin, Taeeumin, Soyangin, Soeumin).
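MPEG-4 addresses its feature points as "group.index" pairs (e.g. the outer-lip contour is group 8). A minimal sketch of such a feature-point record, with an assumed group/index assignment for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FeaturePoint:
    group: int   # MPEG-4 groups related feature points (lips, eyes, ...)
    index: int   # index within the group; points are cited as "group.index"
    x: float     # 3D position in the face model's local coordinates
    y: float
    z: float

# One of the 84 feature points; the 68 FAPs animate such points over time.
mouth_corner = FeaturePoint(group=8, index=3, x=0.03, y=-0.04, z=0.01)
```

The full FDP set is just 84 such records; a FAP stream then displaces them frame by frame.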

Automatic Generation of Rule-based Caricature Image (규칙 기반 캐리커쳐 자동 생성 기법)

  • Lee, Eun-Jung;Kwon, Ji-Yong;Lee, In-Kwon
    • Journal of the Korea Computer Graphics Society / v.12 no.4 / pp.17-22 / 2006
  • We present a technique that automatically generates caricatures from input face images. We compute the mean shape of the training images and extract the input image's feature points using an AAM (Active Appearance Model). Drawing on the literature of caricature artists, we define exaggeration rules; applying these rules to the input feature points yields exaggerated feature points. To turn the result into a cartoon-like image, we apply a cartoon-stylizing method to the input image and combine it with a facial sketch. The input image is warped to the exaggerated feature points for the final result. Our method generates a caricature automatically while minimizing user interaction.
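The core of rule-based exaggeration is amplifying a face's deviation from the mean shape. A minimal sketch of that step (the paper's actual rules are artist-derived and more nuanced than a single scale factor):

```python
import numpy as np

def exaggerate(points, mean_shape, k=1.5):
    """Scale a shape's deviation from the mean shape by factor k.

    points, mean_shape: (N, 2) arrays of corresponding feature points.
    k > 1 amplifies whatever makes this face distinctive, which is the
    classic caricature rule: "exaggerate the difference from the mean".
    """
    return mean_shape + k * (points - mean_shape)

mean = np.array([[0.0, 0.0], [1.0, 0.0]])
face = np.array([[0.1, 0.0], [1.2, 0.1]])
caricature = exaggerate(face, mean, k=2.0)  # deviations doubled
```

The exaggerated points then drive an image warp, as the abstract describes.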

Pose Invariant 3D Face Recognition (포즈 변화에 강인한 3차원 얼굴인식)

  • 송환종;양욱일;이용욱;손광훈
    • Proceedings of the IEEK Conference / 2003.07e / pp.2000-2003 / 2003
  • This paper presents a three-dimensional (3D) head pose estimation algorithm for robust face recognition. Given a 3D input image, we automatically extract several important 3D facial feature points based on facial geometry. To estimate the 3D head pose accurately, we propose an Error Compensated SVD (EC-SVD) algorithm: we estimate the initial 3D head pose of an input image using the Singular Value Decomposition (SVD) method, and then perform a pose refinement procedure in the normalized face space to compensate for the error on each axis. Experimental results show that the proposed method estimates pose accurately and is therefore suitable for 3D face recognition.
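The SVD-based initial pose estimate is the standard Kabsch/Procrustes alignment of corresponding 3D point sets; a sketch of that step follows (the paper's per-axis error compensation and refinement are not reproduced here):

```python
import numpy as np

def estimate_rotation(src, dst):
    """Rotation aligning two corresponding 3D point sets (Kabsch method).

    src, dst: (N, 3) arrays of matched feature points; returns R such
    that dst ~= src @ R.T after centering both sets.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Recover a 30-degree rotation about z from noiseless correspondences
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
pts = np.random.default_rng(0).normal(size=(6, 3))
R_est = estimate_rotation(pts, pts @ R_true.T)
```

With noisy feature points the recovered rotation carries residual error per axis, which is what EC-SVD's refinement step addresses.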

Detection of Facial Direction for Automatic Image Arrangement (이미지 자동배치를 위한 얼굴 방향성 검출)

  • 동지연;박지숙;이환용
    • Journal of Information Technology Applications and Management / v.10 no.4 / pp.135-147 / 2003
  • With the development of multimedia and optical technologies, application systems based on facial features have recently attracted growing research interest. Previous work on face processing mainly uses frontal images to recognize human faces visually and to extract facial expressions. However, applications such as image database systems that support queries based on facial direction, and image arrangement systems that automatically place facial images in digital albums, must deal with the directional characteristics of a face. In this paper, we propose a method to detect facial direction using facial features. In the proposed method, a facial trapezoid is defined by detecting points for the eyes and the lower lip. A facial direction formula, which computes the right and left facial direction, is then derived from statistical data on the ratio of the right and left areas of the facial trapezoid. The proposed method estimates the horizontal rotation of a face within an error tolerance of ±1.31 degrees and takes an average execution time of 3.16 seconds.

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, we combine a nonparametric HT skin color model with template matching to detect the facial region efficiently from each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template: given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked with the optical flow method. For facial expression cloning we use a feature-based method: the major facial feature points are detected from geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are obtained from a geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are moved using Radial Basis Functions (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from an input video image.
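The final RBF step propagates the control-point displacements to the surrounding mesh vertices. A minimal sketch with a Gaussian kernel (the paper does not specify its kernel or bandwidth, so both are assumptions here):

```python
import numpy as np

def rbf_deform(controls, displacements, points, sigma=1.0):
    """Interpolate control-point displacements to arbitrary points.

    controls: (C, d) control (feature) point positions.
    displacements: (C, d) how far each control point moved.
    points: (P, d) non-feature points to deform.
    Solves for Gaussian-RBF weights so the interpolant reproduces the
    control displacements exactly, then evaluates it at `points`.
    """
    d_cc = np.linalg.norm(controls[:, None] - controls[None, :], axis=-1)
    Phi = np.exp(-(d_cc / sigma) ** 2)
    weights = np.linalg.solve(Phi, displacements)      # (C, d)
    d_pc = np.linalg.norm(points[:, None] - controls[None, :], axis=-1)
    return np.exp(-(d_pc / sigma) ** 2) @ weights

controls = np.array([[0.0, 0.0], [2.0, 0.0]])
disp = np.array([[0.0, 0.5], [0.0, 0.0]])   # lift only the first control point
moved = rbf_deform(controls, disp, np.array([[0.0, 0.0]]))
```

Because the interpolant is exact at the controls, feature points stay where the animation parameters put them while nearby vertices follow smoothly.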

Behavior-classification of Human Using Fuzzy-classifier (퍼지분류기를 이용한 인간의 행동분류)

  • Kim, Jin-Kyu;Joo, Young-Hoon
    • The Transactions of The Korean Institute of Electrical Engineers / v.59 no.12 / pp.2314-2318 / 2010
  • For human-robot interaction, a robot should recognize the meaning of human behavior. In the case of static behavior such as facial expressions and sign language, the information contained in a single image is sufficient to convey the meaning to the robot. In the case of dynamic behavior such as gestures, however, information from sequential images is required. This paper proposes behavior classification using a fuzzy classifier to convey the meaning of dynamic behavior to the robot. The proposed method extracts feature points from input images with a skeleton model, generates a vector space from the differential image of the extracted feature points, and uses this information as training data for the fuzzy classifier. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
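A fuzzy classifier scores each class with membership functions over the features and picks the strongest. A toy sketch over a single feature (mean skeleton-point speed); the class names, membership ranges, and single-feature setup are all hypothetical, not the paper's rules:

```python
import numpy as np

def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peaking at 1 when x == b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def classify_speed(mean_speed):
    """Pick the behavior whose fuzzy membership is highest for this feature."""
    rules = {
        "standing": tri_membership(mean_speed, -0.1, 0.0, 0.3),
        "walking":  tri_membership(mean_speed, 0.1, 0.5, 0.9),
        "running":  tri_membership(mean_speed, 0.7, 1.2, 1.7),
    }
    return max(rules, key=rules.get)
```

A real system would combine memberships over many differential features (one per skeleton point) with fuzzy AND/OR rules before defuzzifying.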

Studies on the Modeling of the Three-dimensional Standard Face and Deriving of Facial Characteristics Depending on the Taeeumin and Soyangin (소양인, 태음인의 표준 3차원 얼굴 모델링 개발 및 그 특성에 관한 연구)

  • Lee, Seon-Young;Hwang, Min-Woo
    • Journal of Sasang Constitutional Medicine / v.26 no.4 / pp.350-364 / 2014
  • Objectives: This study aimed to find significant features of face form in Taeeumin and Soyangin by analyzing three-dimensional face data, and to build a standard face for each of the two types. Methods: We collected three-dimensional face data of patients aged 20 to 45 who were diagnosed by a specialist in Sasang constitutional medicine. The data were collected using a 3D scanner, Morpheus 3D (Morpheus Corporation, Korea). A total of 64 facial feature points were extracted, and 332 variables (heights, angles, ratios, etc.) between the feature points were defined. ANOVA tests were used to compare the characteristics of subjects between Taeeumin and Soyangin. Results: Without considering gender, Taeeumin and Soyangin differed on 18 items (3 for the ear, 9 for the eye, 1 for the nose, 1 for the mouth, 4 for the jaw). Considering gender, Taeeumin and Soyangin men differed on 6 items (1 for the ear, 2 for the nose, 3 for the face), and Taeeumin and Soyangin women differed on 17 items (1 for the ear, 10 for the eye, 2 for the nose, 1 for the mouth, 3 for the face). Conclusions: These results show that the Taeeumin face (both men and women) is wider than it is long. Compared to Soyangin men, Taeeumin men have wider nostril wings; compared to Soyangin women, Taeeumin women have longer eyes. The Soyangin face (both men and women) is longer than it is wide. Compared to Taeeumin men, Soyangin men have narrower nostril wings; compared to Taeeumin women, Soyangin women have more pronounced contours at the top and bottom of the lateral face. By accumulating three-dimensional face data, this study also modeled the standard facial features of Taeeumin and Soyangin. These results may help in developing Sasang constitutional diagnostics that use the characteristics of facial form.
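The per-variable comparison above is a one-way ANOVA on each of the 332 measurements. A self-contained sketch of the two-group F statistic on one such variable; the measurement values below are hypothetical placeholders, not data from the study:

```python
import numpy as np

def one_way_f(group_a, group_b):
    """F statistic for a two-group one-way ANOVA.

    Compares between-group variance to within-group variance; a large F
    means the measurement separates the two constitution types well.
    """
    groups = [np.asarray(group_a, dtype=float), np.asarray(group_b, dtype=float)]
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical face-width measurements (mm) for the two types
taeeumin = [152.0, 149.5, 151.2, 153.3]
soyangin = [144.1, 146.0, 143.2, 145.5]
f_stat = one_way_f(taeeumin, soyangin)
```

In practice the F statistic is converted to a p-value and, with 332 tests, would also call for a multiple-comparison correction.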

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services / v.7 no.2 / pp.23-35 / 2006
  • This work presents a novel method that automatically extracts the facial region and features from motion pictures and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame of a motion picture, a new nonparametric skin color model is proposed instead of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness under varying lighting conditions and therefore need additional work to extract the exact facial region from face images. To overcome this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of the muscles to the variation of the facial features of the 3D face. The experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.
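A linear chrominance model classifies a pixel as skin when its chrominance lies in a band around a line fitted to skin samples. The sketch below is only illustrative: it uses HSV saturation as a stand-in for the paper's Tint component, and the line coefficients and tolerance are hypothetical placeholders, not fitted values:

```python
import colorsys

def is_skin(r, g, b, slope=-0.1, intercept=0.35, tol=0.15):
    """Classify a pixel as skin via a linear boundary in a hue-chrominance plane.

    The pixel is 'skin' when its saturation is within `tol` of the line
    (slope * hue + intercept).  slope/intercept would normally be fitted
    to skin samples; these defaults are placeholders for illustration.
    """
    h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return abs(s - (slope * h + intercept)) < tol
```

Compared with a Gaussian model, a linear band like this keeps its shape as overall brightness changes, which is the robustness argument the abstract makes.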

A Study on Costume Feature of Italian Masque Commedia Dell'arte and Korean Masque (이태리 가면희극 코메디아 델라르테(commedia dell'arte)와 한국 가면극의 복식특성 연구)

  • Kim, Hee-Jung
    • Journal of the Korean Home Economics Association / v.47 no.2 / pp.15-26 / 2009
  • The purpose of this study is to trace the development of commedia dell'arte and the Korean masque, which have similar forms, to identify their similarities and differences, and to find the meaning of mask and costume in both theatrical arts. The Italian commedia dell'arte and the Korean masque are both performed wearing standardized masks and costumes determined by the role. As common points, first, the characters have unique names and distinctive personalities, costumes, masks, and playing styles. Through these features the audience can understand each actor's role, and the actors can devote themselves to their roles by wearing the masks and costumes. Second, although the setting plays an important role in commedia dell'arte, the role of costume is more important: because the masque speaks indirectly for the hardships of common people, their everyday costumes were used as they were. As for differences, first, most Korean masks cover the entire face, restricting the actor's speech, whereas the masks of commedia dell'arte cover only the upper part of the face and expose the actor's mouth and chin, enabling actors to express various emotions depending on the character. Second, in commedia dell'arte priority is given to the character's personality and region of origin, and the silhouette, material, and color of the costume changed from century to century, whereas the Korean masque adopted the silhouette, material, and color of the Joseon Dynasty period.

Using a Multi-Faced Technique SPFACS Video Object Design Analysis of The AAM Algorithm Applies Smile Detection (다면기법 SPFACS 영상객체를 이용한 AAM 알고리즘 적용 미소검출 설계 분석)

  • Choi, Byungkwan
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.3 / pp.99-112 / 2015
  • Digital imaging technology has advanced beyond the limits of the multimedia industry through IT convergence and developed into a complex industry. In the field of object recognition in particular, various smart-phone applications related to face recognition are being actively researched. Recently, face recognition has been evolving into intelligent object recognition through image recognition, detection, and processing techniques, and face recognition through 3D image object recognition applied to IP cameras has been actively studied. In this paper, we first examine the essential human and technical factors and the trends in human object recognition, and then design and analyze a smile detection method based on SPFACS (Smile Progress Facial Action Coding System) that recognizes multi-faceted objects. Study methods: 1) a 3D object imaging system was designed to analyze the necessary human cognitive factors; 2) parameter identification and an optimal measurement method for 3D object and face detection using the AAM algorithm are proposed; and 3) the result is applied to face recognition, and the effect of expression recognition is demonstrated by detecting the person's teeth area and extracting feature points.
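Once AAM supplies mouth feature points, a smile can be scored from simple geometry: a smile widens the mouth and lifts the corners above the lip midline (and exposes the teeth, which the paper detects separately). A crude sketch of such a cue; the scoring formula and point layout are illustrative assumptions, not the paper's trained detector:

```python
import numpy as np

def smile_score(left_corner, right_corner, upper_lip, lower_lip):
    """Crude smile cue from four mouth feature points in image coordinates
    (y grows downward): mouth width relative to its opening, plus how far
    the corners sit above the lip midline.
    """
    width = np.linalg.norm(np.subtract(right_corner, left_corner))
    height = np.linalg.norm(np.subtract(lower_lip, upper_lip)) + 1e-6
    mid_y = (upper_lip[1] + lower_lip[1]) / 2.0
    corner_lift = mid_y - (left_corner[1] + right_corner[1]) / 2.0
    return width / height + corner_lift

# Neutral vs. smiling mouth shapes (hypothetical pixel coordinates)
neutral = smile_score((10, 50), (30, 50), (20, 48), (20, 52))
smiling = smile_score((6, 46), (34, 46), (20, 48), (20, 54))
```

A deployed detector would learn the threshold and weights from labeled frames rather than hand-tuning them.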