• Title/Summary/Keyword: Facial Model

A Study on Creation of 3D Facial Model Using Facial Image (임의의 얼굴 이미지를 이용한 3D 얼굴모델 생성에 관한 연구)

  • Lee, Hea-Jung;Joung, Suck-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.2 s.46
    • /
    • pp.21-28
    • /
    • 2007
  • Facial modeling and animation technology has long been studied in the computer graphics field. Facial modeling is widely used in virtual-reality research such as MPEG-4, as well as in movies, advertising, and the game industry. The development of a 3D facial model that can interact with humans is therefore essential for a more realistic interface. We developed a realistic and convenient 3D facial modeling system that uses only an arbitrary facial image. The system fits the Korean standard facial model (a generic model) to an arbitrary facial image: after the control points on the generic model's wireframe are fitted to the image, the user generates a 3D facial model intuitively by adjusting the control points elastically. The resulting 3D facial model can be verified and modified through translation, magnification, reduction, and rotation. We experimented with 30 facial images of 630×630 pixels to verify the usefulness of the developed system.

  • PDF
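The fitting step the abstract describes, aligning a generic model's control points to an arbitrary facial image before elastic refinement, can be sketched as a simple similarity-transform alignment. The function and point names below are hypothetical; a real system would fit a full affine or elastic warp:

```python
def fit_generic_model(generic_pts, image_pts):
    """Align generic-model control points to image landmarks with a
    similarity transform (uniform scale + translation; no rotation).
    A minimal sketch of the initial fitting stage."""
    n = len(generic_pts)
    # Centroids of both point sets.
    gcx = sum(x for x, _ in generic_pts) / n
    gcy = sum(y for _, y in generic_pts) / n
    icx = sum(x for x, _ in image_pts) / n
    icy = sum(y for _, y in image_pts) / n
    # Ratio of average spreads around the centroids gives a scale estimate.
    g_spread = sum(abs(x - gcx) + abs(y - gcy) for x, y in generic_pts)
    i_spread = sum(abs(x - icx) + abs(y - icy) for x, y in image_pts)
    s = i_spread / g_spread
    # Scale about the generic centroid, then move onto the image centroid.
    return [((x - gcx) * s + icx, (y - gcy) * s + icy) for x, y in generic_pts]
```

After this coarse alignment, each control point would be dragged individually to refine the fit, as the abstract's elastic adjustment suggests.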

Emotion Recognition based on Tracking Facial Keypoints (얼굴 특징점 추적을 통한 사용자 감성 인식)

  • Lee, Yong-Hwan;Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology
    • /
    • v.18 no.1
    • /
    • pp.97-101
    • /
    • 2019
  • Understanding and classifying human emotion plays an important role in human-machine communication systems. This paper proposes an emotion recognition method that extracts facial keypoints with an Active Appearance Model (AAM) and classifies the emotion with a proposed classification model of the facial features. The appearance model captures the variation of an expression, which the proposed classification model evaluates as the facial expression changes. The method classifies four basic emotions (normal, happy, sad, and angry). To evaluate performance, we measured the success rate on common datasets, achieving a best-case accuracy of 93% and an average of 82.2% in facial emotion recognition. The results show that the proposed method performs well on emotion recognition compared to existing schemes.
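As a rough illustration of classifying basic emotions from keypoint geometry (not the paper's AAM-based classifier), one can threshold simple features such as mouth curvature and eyebrow height. All keypoint names and thresholds below are invented for the sketch:

```python
def classify_emotion(kp):
    """Classify four basic emotions from 2D facial keypoints.
    kp: dict of (x, y) points in image coordinates (y grows downward).
    Feature choices and thresholds are illustrative only."""
    # Mouth curvature: corners above the lip center suggest a smile.
    mouth_curve = (kp["mouth_left"][1] + kp["mouth_right"][1]) / 2 \
        - kp["mouth_center"][1]
    # Brow height relative to the eye: a small gap suggests lowered brows.
    brow_drop = kp["brow_inner"][1] - kp["eye_center"][1]
    if mouth_curve < -2:     # corners pulled up
        return "happy"
    if brow_drop > -5:       # brows pulled down toward the eyes
        return "angry"
    if mouth_curve > 2:      # corners pulled down
        return "sad"
    return "normal"
```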

A Design and Implementation of 3D Facial Expressions Production System based on Muscle Model (근육 모델 기반 3D 얼굴 표정 생성 시스템 설계 및 구현)

  • Lee, Hyae-Jung;Joung, Suck-Tae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.5
    • /
    • pp.932-938
    • /
    • 2012
  • Facial expression is significant in mutual communication: it conveys countless inner feelings better than any of the diverse languages humans use. This paper suggests a muscle-model-based 3D facial expression generation system that produces easy and natural facial expressions. Based on Waters' muscle model, it adds the muscles needed to produce natural expressions. Among the many elements involved in producing an expression, it focuses on the core feature elements of the face (eyebrows, eyes, nose, mouth, and cheeks) and uses facial muscles and muscle vectors to group the muscles that are anatomically connected. By simplifying and reconstructing the AUs (Action Units), the basic units of facial expression change, it generates easy and natural facial expressions.
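Waters' linear muscle model, which this system builds on, displaces each mesh vertex toward the muscle's bone attachment with angular and radial falloff. A simplified 2D sketch (parameter values are illustrative, not the paper's):

```python
import math

def muscle_displace(p, v1, v2, k=0.5, falloff=10.0, half_angle=math.pi / 4):
    """Pull vertex p toward the muscle attachment v1 with angular and
    radial falloff (a simplified Waters-style linear muscle).
    v1: static bone attachment; v2: skin insertion (sets the direction)."""
    px, py = p[0] - v1[0], p[1] - v1[1]
    mx, my = v2[0] - v1[0], v2[1] - v1[1]
    d = math.hypot(px, py)
    if d == 0 or d >= falloff:
        return p                              # outside radial influence
    cos_ang = (px * mx + py * my) / (d * math.hypot(mx, my))
    ang = math.acos(max(-1.0, min(1.0, cos_ang)))
    if ang > half_angle:
        return p                              # outside the cone of influence
    # Contraction factor: strongest on-axis and near the attachment.
    s = k * math.cos(ang) * math.cos(d / falloff * math.pi / 2)
    return (p[0] - s * px, p[1] - s * py)     # contract toward v1
```

Grouping anatomically connected muscles, as the abstract describes, would then amount to applying several such displacements per AU.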

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services
    • /
    • v.7 no.2
    • /
    • pp.23-35
    • /
    • 2006
  • This work presents a novel method that automatically extracts the facial region and features from motion pictures and controls a 3D facial expression in real time. To extract the facial region and facial feature points from each color frame, a nonparametric skin color model is proposed instead of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness under varying lighting conditions and thus require additional work to extract the exact facial region from face images. To overcome this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. Experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.

  • PDF
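The key idea above, modeling the skin chrominance distribution as a linear function of Hue-Tint components rather than a Gaussian, might be sketched as a band test around a fitted line. The coefficients and the tint approximation below are placeholders, not the paper's fitted values:

```python
def rgb_to_hue(r, g, b):
    """Standard RGB-to-hue conversion, in degrees [0, 360)."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return 0.0
    if mx == r:
        h = ((g - b) / (mx - mn)) % 6
    elif mx == g:
        h = (b - r) / (mx - mn) + 2
    else:
        h = (r - g) / (mx - mn) + 4
    return h * 60.0

def is_skin(r, g, b, a=0.4, c=30.0, tol=15.0):
    """Classify a pixel as skin if its tint lies within a band around a
    linear function of hue. Coefficients a, c, tol are illustrative; the
    tint value is crudely approximated from the normalized red channel."""
    hue = rgb_to_hue(r, g, b)
    tint = 100.0 * r / max(1, r + g + b)
    return abs(tint - (a * hue + c)) < tol
```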

Study of Model Based 3D Facial Modeling for Virtual Reality (가상현실에 적용을 위한 모델에 근거한 3차원 얼굴 모델링에 관한 연구)

  • 한희철;권중장
    • Proceedings of the IEEK Conference
    • /
    • 2000.11c
    • /
    • pp.193-196
    • /
    • 2000
  • In this paper, we present a model-based 3D facial modeling method for virtual reality applications that uses only a single frontal face photograph. We extract facial features from the photograph and modify the mesh of a basic 3D model according to those features. We then apply texture mapping for greater similarity. Experiments show that the modeling technique is useful for movies, virtual reality applications, games, the clothing industry, and 3D video conferencing.

  • PDF

3D Facial Modeling and Synthesis System for Realistic Facial Expression (자연스러운 표정 합성을 위한 3차원 얼굴 모델링 및 합성 시스템)

  • 심연숙;김선욱;한재현;변혜란;정창섭
    • Korean Journal of Cognitive Science
    • /
    • v.11 no.2
    • /
    • pp.1-10
    • /
    • 2000
  • Research on realistic facial animation, in which humans and computers communicate through the face, has grown recently. The human face is the part of the body we use to recognize individuals and an important communication channel for understanding inner states such as emotion. To provide an intelligent interface, computer facial animation should look human in talking and expressing itself; facial modeling and animation research has therefore recently focused on realism. In this article, we suggest a method of facial modeling and animation for realistic facial synthesis. A 3D facial model for an arbitrary face can be made from a generic facial model; for a more accurate and realistic face, we built a Korean generic facial model. We can also drive facial synthesis based on the physical characteristics of real facial muscle and skin. Many applications can be developed, such as teleconferencing, education, and movies.

  • PDF

3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.6
    • /
    • pp.618-631
    • /
    • 2000
  • In this paper, we suggest a method of 3D facial synthesis using the motion of 2D facial images. We use an optical-flow-based method for motion estimation: parameterized motion vectors are extracted from the optical flow between adjacent frames in order to estimate the facial features and facial motion in the 2D image sequence. We then combine the parameters of the motion vectors to estimate facial motion information, using parameterized vector models for the facial features: an eye area, a lip-eyebrow area, and a face area. Combining the 2D facial motion information with the action units of the 3D facial model, we synthesize the 3D facial model.

  • PDF
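Optical-flow estimation between adjacent frames, as used above, can be illustrated with the classic Lucas-Kanade 2×2 system solved over a small window (a generic sketch, not the paper's parameterized vector models):

```python
def lucas_kanade(frame1, frame2, x, y, win=2):
    """Estimate the optical-flow vector at (x, y) between two grayscale
    frames (lists of rows) by accumulating gradients over a small window
    and solving the 2x2 Lucas-Kanade normal equations."""
    sxx = sxy = syy = sxt = syt = 0.0
    for j in range(y - win, y + win + 1):
        for i in range(x - win, x + win + 1):
            ix = (frame1[j][i + 1] - frame1[j][i - 1]) / 2.0  # spatial grad x
            iy = (frame1[j + 1][i] - frame1[j - 1][i]) / 2.0  # spatial grad y
            it = float(frame2[j][i] - frame1[j][i])           # temporal grad
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-9:
        return (0.0, 0.0)        # aperture problem: flow undetermined here
    u = -(syy * sxt - sxy * syt) / det
    v = -(sxx * syt - sxy * sxt) / det
    return (u, v)
```

Running this at each tracked feature point yields the per-region motion vectors that a parameterized model would then summarize.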

Extraction and Implementation of MPEG-4 Facial Animation Parameter for Web Application (웹 응용을 위한 MPEG-4 얼굴 애니메이션 파라미터 추출 및 구현)

  • 박경숙;허영남;김응곤
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.8
    • /
    • pp.1310-1318
    • /
    • 2002
  • In this study, we developed a 3D facial modeler and animator that does not rely on a 3D scanner or camera. Without expensive image-input equipment, 3D models can be created easily from only front and side images. The system animates 3D facial models through an animation server on the WWW, independent of specific platforms and software, and was implemented with the Java 3D API. The facial modeler detects MPEG-4 FDP (Facial Definition Parameter) feature points from the 2D input images and creates a 3D facial model by modifying a generic facial model with those points. The animator animates and renders the 3D facial model according to MPEG-4 FAPs (Facial Animation Parameters). The system can be used to generate avatars on the WWW.
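Applying an MPEG-4 FAP amounts to displacing a feature point of the mesh along a defined axis, scaled by a face-specific FAPU (Facial Animation Parameter Unit), which the standard defines as a small fraction (e.g., 1/1024) of a measured face distance. A minimal sketch; a real animator also deforms the vertices surrounding each feature point:

```python
def apply_fap(vertices, feature_index, fap_value, fapu, axis):
    """Displace one feature point of a face mesh by an MPEG-4-style FAP.
    vertices: list of (x, y, z); fapu: the parameter unit (already a
    fraction of the face distance); axis: unit displacement direction."""
    scale = fap_value * fapu
    x, y, z = vertices[feature_index]
    dx, dy, dz = axis
    vertices[feature_index] = (x + dx * scale, y + dy * scale, z + dz * scale)
    return vertices
```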

Image-based Realistic Facial Expression Animation

  • Yang, Hyun-S.;Han, Tae-Woo;Lee, Ju-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06a
    • /
    • pp.133-140
    • /
    • 1999
  • In this paper, we propose an image-based three-dimensional modeling method for realistic facial expression. Real human facial images are used to deform a generic three-dimensional mesh model, and the deformed model is animated to generate facial expression animation. First, we take several pictures of the same person from several view angles. We then project the three-dimensional face model onto the plane of each facial image and match the projected model with each image; the results are combined to generate a deformed three-dimensional model. Feature-based image metamorphosis is used to match the projected models with the images. We then create a synthetic image from the two-dimensional images of a specific person's face and texture-map it onto the cylindrical projection of the three-dimensional model. We also propose a muscle-based animation technique that generates realistic facial expression animations and facilitates control of the animation. Lastly, we show the animation results for the six representative facial expressions.
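The cylindrical projection used for texture mapping assigns each head vertex a texture coordinate from its angle around the vertical axis and its height. A minimal sketch:

```python
import math

def cylindrical_uv(x, y, z, y_min, y_max):
    """Map a 3D head vertex to cylindrical texture coordinates:
    u wraps around the vertical (y) axis, v runs over the height range."""
    u = (math.atan2(x, z) + math.pi) / (2 * math.pi)   # angle -> [0, 1]
    v = (y - y_min) / (y_max - y_min)                  # height -> [0, 1]
    return (u, v)
```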

Development of Facial Emotion Recognition System Based on Optimization of HMM Structure by using Harmony Search Algorithm (Harmony Search 알고리즘 기반 HMM 구조 최적화에 의한 얼굴 정서 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.3
    • /
    • pp.395-400
    • /
    • 2011
  • In this paper, we propose a facial emotion recognition approach that considers the dynamic variation of emotional state across facial image sequences. The proposed system consists of two main steps: facial-image-based emotional feature extraction and emotional state classification/recognition. First, we propose a method for extracting and analyzing the emotional feature region using a combination of an Active Shape Model (ASM) and Facial Action Units (FAUs). We then propose an emotional state classification and recognition method based on a Hidden Markov Model (HMM), a type of dynamic Bayesian network. We also adopt a Harmony Search (HS) heuristic optimization procedure for the HMM parameter learning in order to classify the emotional state more accurately. With these methods, we construct an emotion recognition system based on the variation of dynamic facial image sequences and attempt to improve its recognition performance.
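Harmony Search, used above to tune the HMM parameters, keeps a memory of candidate solutions and improvises new ones by memory consideration, pitch adjustment, or random selection. A generic minimization sketch applied to an arbitrary cost function, not an HMM likelihood:

```python
import random

def harmony_search(cost, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.1, iters=2000, seed=0):
    """Minimal Harmony Search: improvise a new vector each iteration and
    replace the worst member of the harmony memory if it improves."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:            # memory consideration
                v = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                              # random selection
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        worst = max(memory, key=cost)
        if cost(new) < cost(worst):
            memory[memory.index(worst)] = new
    return min(memory, key=cost)
```

For the paper's setting, `cost` would evaluate (negative) HMM likelihood on training sequences for a candidate parameter vector; the sphere function in the test below is only a stand-in.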