• Title/Summary/Keyword: 3-D facial model


A Study on Creation of 3D Facial Model Using Facial Image (임의의 얼굴 이미지를 이용한 3D 얼굴모델 생성에 관한 연구)

  • Lee, Hea-Jung;Joung, Suck-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.2 s.46
    • /
    • pp.21-28
    • /
    • 2007
  • Facial modeling and animation have long been studied in the computer graphics field. Facial modeling technology is widely used in virtual reality research such as MPEG-4, as well as in movies, advertising, and the game industry. The development of a 3D facial model that can interact with humans is therefore essential for a more realistic interface. We developed a realistic and convenient 3D facial modeling system that uses only an arbitrary facial image. The system is easily fitted to an arbitrary facial image by means of the Korean standard facial model (generic model): after the control points on the generic model's wireframe are fitted to the image, the 3D facial model is generated intuitively by adjusting the control points elastically. The resulting model can be inspected and modified through translation, magnification, reduction, and rotation. We experimented with 30 facial images of $630{\times}630$ pixels to verify the usefulness of the developed system.
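
The control-point fitting described above can be sketched as a radial-basis-function (RBF) deformation: displacements prescribed at the control points are interpolated smoothly over the remaining model vertices. A minimal 2D sketch (the function name and the Gaussian kernel are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def rbf_fit_generic_model(vertices, ctrl_src, ctrl_dst, sigma=0.25):
    """Deform model vertices so that control points move from ctrl_src
    to ctrl_dst, interpolating the displacements smoothly elsewhere
    with a Gaussian radial basis function."""
    # kernel between control points (n_ctrl x n_ctrl)
    d2 = ((ctrl_src[:, None, :] - ctrl_src[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # per-control-point displacement weights
    w = np.linalg.solve(K, ctrl_dst - ctrl_src)
    # evaluate the interpolant at every model vertex
    d2v = ((vertices[:, None, :] - ctrl_src[None, :, :]) ** 2).sum(-1)
    return vertices + np.exp(-d2v / (2.0 * sigma ** 2)) @ w
```

By construction the interpolant reproduces the prescribed displacements exactly at the control points, while nearby vertices follow them with a smooth falloff.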


Extraction and Implementation of MPEG-4 Facial Animation Parameter for Web Application (웹 응용을 위한 MPEG-4 얼굴 애니메이션 파라미터 추출 및 구현)

  • 박경숙;허영남;김응곤
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.8
    • /
    • pp.1310-1318
    • /
    • 2002
  • In this study, we developed a 3D facial modeler and animator that does not rely on existing methods based on 3D scanners or cameras. Without expensive image-input equipment, 3D models can easily be created using only front and side images. The system can animate 3D facial models through an animation server on the WWW, independent of any specific platform or software, and was implemented using the Java 3D API. The facial modeler detects MPEG-4 FDP (Facial Definition Parameter) feature points in the 2D input images and creates a 3D facial model by modifying a generic facial model with those points. The animator animates and renders the 3D facial model according to MPEG-4 FAP (Facial Animation Parameters). This system can be used to generate avatars on the WWW.

A Design and Implementation of 3D Facial Expressions Production System based on Muscle Model (근육 모델 기반 3D 얼굴 표정 생성 시스템 설계 및 구현)

  • Lee, Hyae-Jung;Joung, Suck-Tae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.5
    • /
    • pp.932-938
    • /
    • 2012
  • Facial expression is significant in mutual communication: it is the only means that expresses humans' countless inner feelings better than the diverse languages they use. This paper proposes a muscle model-based 3D facial expression generation system that produces easy and natural facial expressions. Based on Waters' muscle model, it adds the muscles needed to produce natural expressions. Among the complex elements involved in producing expressions, it focuses on the core feature elements of a face, such as the eyebrows, eyes, nose, mouth, and cheeks, and uses facial muscles and muscle vectors to group the anatomically connected facial muscles. By simplifying and reconstructing AUs, the basic units of facial expression change, it generates easy and natural facial expressions.
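
Waters' linear muscle model, on which the paper builds, pulls vertices lying inside a cone of influence toward the muscle's bony attachment point, attenuated by angular and radial falloff. A simplified sketch (the parameter names and falloff shape are assumptions; the original model further distinguishes inner and outer influence zones):

```python
import numpy as np

def waters_linear_muscle(verts, head, tail, contraction,
                         falloff_start=0.6, falloff_end=1.0,
                         max_angle=np.pi / 4):
    """Simplified Waters-style linear muscle: vertices inside a cone
    around the muscle vector (head -> tail) are pulled toward the
    head, scaled by angular and radial falloff."""
    axis = tail - head
    length = np.linalg.norm(axis)
    axis = axis / length
    out = verts.copy()
    for i, v in enumerate(verts):
        d = v - head
        r = np.linalg.norm(d)
        if r == 0 or r > length * falloff_end:
            continue                      # outside the radial zone
        cosang = np.dot(d / r, axis)
        if cosang < np.cos(max_angle):
            continue                      # outside the cone of influence
        if r < length * falloff_start:
            radial = 1.0                  # full pull near the attachment
        else:                             # cosine fade toward the boundary
            radial = np.cos((r - length * falloff_start) /
                            (length * (falloff_end - falloff_start)) * np.pi / 2)
        out[i] = v - contraction * cosang * radial * d  # pull toward head
    return out
```

Grouping muscles as in the paper would amount to applying several such muscle functions, one per muscle vector, to the shared vertex set.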

Study of Model Based 3D Facial Modeling for Virtual Reality (가상현실에 적용을 위한 모델에 근거한 3차원 얼굴 모델링에 관한 연구)

  • 한희철;권중장
    • Proceedings of the IEEK Conference
    • /
    • 2000.11c
    • /
    • pp.193-196
    • /
    • 2000
  • In this paper, we present a model-based 3D facial modeling method for virtual reality applications that uses only one frontal face photograph. We extract facial features from the photograph and modify the mesh of a basic 3D model according to those features. We then apply texture mapping for greater similarity. Experiments show that this modeling technique is useful for movies, virtual reality applications, games, the clothing industry, and 3D video conferencing.


3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.6
    • /
    • pp.618-631
    • /
    • 2000
  • In this paper, we propose a method of 3D facial synthesis based on the motion of 2D facial images, using an optical flow-based method for motion estimation. We extract parameterized motion vectors from the optical flow between adjacent images in a sequence in order to estimate the facial features and facial motion in the 2D image sequence. We then combine the parameters of these motion vectors to estimate the facial motion information. Parameterized vector models are defined per facial region: the eye area, the lip-eyebrow area, and the face area. By combining the 2D facial motion information with the action units of a 3D facial model, we synthesize the animated 3D facial model.
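
The optical-flow step can be illustrated with a basic Lucas-Kanade estimate, which solves a least-squares system built from spatial and temporal image gradients. A minimal single-window sketch (the paper's parameterized region models are more elaborate):

```python
import numpy as np

def lucas_kanade_flow(prev, curr):
    """Estimate one (dx, dy) motion vector for an image patch by
    solving the Lucas-Kanade least-squares system
    Ix*dx + Iy*dy = -It over all pixels of the patch."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    Iy, Ix = np.gradient(prev)          # spatial gradients (axis 0 = y)
    It = curr - prev                    # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    sol, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return sol[0], sol[1]               # (dx, dy)
```

Running this per feature region (eyes, lips, eyebrows) and stacking the resulting vectors would give the kind of parameterized motion description the abstract refers to.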


Validity of Three-dimensional Facial Scan Taken with Facial Scanner and Digital Photo Wrapping on the Cone-beam Computed Tomography: Comparison of Soft Tissue Parameters

  • Aljawad, Hussein;Lee, Kyungmin Clara
    • Journal of Korean Dental Science
    • /
    • v.15 no.1
    • /
    • pp.19-30
    • /
    • 2022
  • Purpose: The purpose of the study was to assess the validity of three-dimensional (3D) facial scans taken with a facial scanner and of digital photo wrapping on cone-beam computed tomography (CBCT). Materials and Methods: Twenty-five patients had a CBCT scan, two-dimensional (2D) standardized frontal photographs, and a 3D facial scan obtained on the same day. The facial scans were taken with a facial scanner in an upright position. The 2D standardized frontal photographs were taken at a fixed distance from the patients using a camera attached to a cephalometric apparatus. The 2D integrated facial models were created by digital photo wrapping of the frontal photographs onto the corresponding CBCT images; the 3D integrated facial models were created by integrating the 3D facial scans with the CBCT images. On the integrated facial models, sixteen soft tissue landmarks were identified, and the vertical, horizontal, oblique, and angular distances between landmarks were compared among the 2D facial models, the 3D facial models, and the CBCT images. Result: There were no significant differences in linear and angular measurements among the CBCT images and the 2D and 3D facial models, except for the Se-Sn vertical linear measurement, which differed significantly for the 3D facial models. The Bland-Altman plots showed that all measurements were within the limits of agreement. For the 3D facial model, all Bland-Altman plots showed a systematic bias of less than 2.0 mm and 2.0°, except for the Se-Sn vertical linear measurement. For the 2D facial model, the Bland-Altman plots of 6 of the 11 angular measurements showed a systematic bias of more than 2.0°. Conclusion: Facial scans taken with a facial scanner showed clinically acceptable performance; digital 2D photo wrapping has limitations in clinical use compared with 3D facial scans.
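
The Bland-Altman analysis used here reduces to computing the systematic bias (mean difference between the two measurement methods) and the 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch:

```python
import numpy as np

def bland_altman(measure_a, measure_b):
    """Bland-Altman agreement statistics between two measurement
    methods: the systematic bias (mean difference) and the 95%
    limits of agreement (bias +/- 1.96 * SD of the differences)."""
    a = np.asarray(measure_a, dtype=float)
    b = np.asarray(measure_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)               # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A measurement pair such as a CBCT distance versus the same distance on the facial scan would be fed in as the two arrays; a bias above the clinical cutoff (2.0 mm or 2.0° in this study) flags poor agreement.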

Realistic individual 3D face modeling (사실적인 3D 얼굴 모델링 시스템)

  • Kim, Sang-Hoon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.8 no.8
    • /
    • pp.1187-1193
    • /
    • 2013
  • In this paper, we present realistic 3D head modeling and facial expression systems. For 3D head modeling, we fit a generic model to produce the individual head shape and perform texture mapping. To calculate the deformation function for the generic model fitting, we determine correspondences between the individual head and the generic model, and then reconstruct the feature points in 3D from simultaneously captured images taken with a calibrated stereo camera. For texture mapping, we project the fitted generic model onto the image and map the texture of each predefined triangle mesh onto the generic model. To avoid extracting the wrong texture, we propose a simple method using a modified interpolation function. For generating 3D facial expressions, we use a vector muscle-based algorithm; for more realistic expressions, we add skin deformation driven by jaw rotation to the basic vector muscle model and apply a mass-spring model. Several 3D facial expression results are shown at the end of the paper.

Dynamic Facial Expression of Fuzzy Modeling Using Probability of Emotion (감정확률을 이용한 동적 얼굴표정의 퍼지 모델링)

  • Kang, Hyo-Seok;Baek, Jae-Ho;Kim, Eun-Tai;Park, Mignon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.1-5
    • /
    • 2009
  • This paper applies a mirror-reflected 2D emotion recognition database to a 3D application and models facial expressions with fuzzy logic using emotion probabilities. The suggested facial expression function applies fuzzy theory to three basic movements for facial expressions. The method maps the feature vectors for emotion recognition from the 2D mirror-reflected multi-image to the 3D application, yielding a fuzzy nonlinear facial expression model of a real face based on a 2D model. We use the average probabilities of the six basic expressions: happiness, sadness, disgust, anger, surprise, and fear. Dynamic facial expressions are then generated via fuzzy modeling. The paper compares and analyzes the feature vectors of the real model with those of a 3D human-like avatar.
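
At its core, combining basic expressions by their emotion probabilities is a weighted blend of expression displacements; the paper's fuzzy model refines this with nonlinear membership functions. A simplified linear-blend sketch (function and variable names are illustrative assumptions):

```python
import numpy as np

def blend_expressions(neutral, expression_deltas, emotion_probs):
    """Blend-shape style mixing: add the per-expression vertex
    displacements to the neutral face, each weighted by its
    (normalized) emotion probability."""
    probs = np.asarray(emotion_probs, dtype=float)
    probs = probs / probs.sum()          # normalize weights to sum to 1
    return neutral + sum(p * d for p, d in zip(probs, expression_deltas))
```

With six displacement fields, one per basic emotion, feeding in the recognized emotion probabilities over time produces a smoothly varying, dynamic expression.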

A 3D Face Reconstruction Method Robust to Errors of Automatic Facial Feature Point Extraction (얼굴 특징점 자동 추출 오류에 강인한 3차원 얼굴 복원 방법)

  • Lee, Youn-Joo;Lee, Sung-Joo;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.1
    • /
    • pp.122-131
    • /
    • 2011
  • A widely used single-image-based 3D face reconstruction method, the 3D morphable shape model, reconstructs an accurate 3D facial shape when the 2D facial feature points are correctly extracted from the input face image. However, when a user's cooperation is not available, as in a real-time 3D face reconstruction system, the method is vulnerable to errors in automatic facial feature point extraction. To solve this problem, we automatically classify the extracted facial feature points into two groups, erroneous and correct, and then reconstruct the 3D facial shape using only the correctly extracted points. Experimental results show that the 3D reconstruction performance of the proposed method is remarkably improved compared with the previous method, which does not account for the errors of automatic facial feature point extraction.
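
One common way to separate erroneous from correct feature points is a robust-fitting loop: fit a global transform from reference (mean-shape) landmarks to the detected points and iteratively discard points with outsized residuals. A hypothetical sketch (the affine fit and median-based threshold are assumptions, not the paper's classifier):

```python
import numpy as np

def filter_feature_points(detected, model_ref, thresh=3.0, iters=3):
    """Iteratively classify detected feature points as correct or
    erroneous: fit an affine map from the reference landmarks to the
    currently kept points, then keep only points whose residual is at
    most `thresh` times the median residual of the kept set."""
    detected = np.asarray(detected, dtype=float)
    model_ref = np.asarray(model_ref, dtype=float)
    src = np.hstack([model_ref, np.ones((len(model_ref), 1))])
    keep = np.ones(len(detected), dtype=bool)
    for _ in range(iters):
        # least-squares affine fit on the currently kept points
        A, *_ = np.linalg.lstsq(src[keep], detected[keep], rcond=None)
        resid = np.linalg.norm(src @ A - detected, axis=1)
        med = np.median(resid[keep])
        keep = resid <= thresh * max(med, 1e-9)
    return keep
```

The surviving points would then be the input to the morphable-model shape fit, mirroring the paper's strategy of reconstructing only from correctly extracted features.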

A Three-Dimensional Facial Modeling and Prediction System (3차원 얼굴 모델링과 예측 시스템)

  • Gu, Bon-Gwan;Jeong, Cheol-Hui;Cho, Sun-Young;Lee, Myeong-Won
    • Journal of the Korea Computer Graphics Society
    • /
    • v.17 no.1
    • /
    • pp.9-16
    • /
    • 2011
  • In this paper, we describe the development of a system for generating a 3-dimensional human face and predicting its appearance as it ages over subsequent years, using 3D scanned facial data and photo images. It is composed of 3-dimensional texture mapping functions, a facial definition parameter input tool, and 3-dimensional facial prediction algorithms. With the texture mapping functions, we can generate a new model of a given face at a specified age from a scanned facial model and photo images. The texture mapping uses three photo images: a front and two side images of the face. The facial definition parameter input tool is the user interface required for texture mapping; it matches facial feature points between the photo images and a 3D scanned facial model in order to obtain material values at high resolution. We calculated material values for future facial models and predicted future facial models in high resolution with a statistical analysis using 100 scanned facial models.