• Title/Summary/Keyword: 3-D facial model

A 3D Face Generation Method using Single Frontal Face Image for Game Users (단일 정면 얼굴 영상을 이용한 게임 사용자의 3차원 얼굴 생성 방법)

  • Jeong, Min-Yi;Lee, Sung-Joo;Park, Kang-Ryong;Kim, Jai-Hie
    • Proceedings of the IEEK Conference / 2008.06a / pp.1013-1014 / 2008
  • In this paper, we propose a new method of generating a 3D face from a single frontal face image and a 3D generic face model. Using an active appearance model (AAM), the control points among the facial feature points are localized in the 2D input face image. The transform parameters of the 3D generic face model are then found by minimizing the error between the 2D control points and the corresponding 2D points projected from the 3D facial model. Finally, the 3D face is generated using the obtained model parameters. We applied this 3D face to a 3D game framework and found that the proposed method can produce a realistic 3D face of the game user.
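The fitting step described in this abstract, aligning the projected control points of a generic 3D face model with AAM landmarks in a single frontal image, can be illustrated with a small least-squares sketch. The weak-perspective parameterization and the array names below are assumptions for illustration only, not the authors' implementation.

```python
# Illustrative only: estimate weak-perspective transform parameters (scale,
# rotation, 2D translation) of a generic 3D face model so that its projected
# control vertices match 2D AAM landmarks. Array contents are hypothetical.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, X3d, x2d):
    """params = [scale, rx, ry, rz, tx, ty]; returns projection errors."""
    s, rvec, t = params[0], params[1:4], params[4:6]
    R = Rotation.from_rotvec(rvec).as_matrix()
    proj = s * (X3d @ R.T)[:, :2] + t          # rotate, drop depth, scale, shift
    return (proj - x2d).ravel()

def fit_generic_model(X3d, x2d):
    """X3d: (N,3) control vertices of the generic model; x2d: (N,2) landmarks."""
    x0 = np.array([1.0, 0, 0, 0, 0, 0], dtype=float)
    return least_squares(residuals, x0, args=(X3d, x2d)).x
```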

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking, yet head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the facial region is detected efficiently from each video frame using the non-parametric HT skin color model and template matching. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning we utilize a feature-based method. The major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, facial expression cloning is done by a two-step fitting process: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from the input video images.
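The final deformation step mentioned in this abstract, spreading the control-point displacements to the surrounding non-feature vertices with a Radial Basis Function, is a standard scattered-data interpolation. The sketch below assumes a Gaussian kernel with an arbitrary width; the paper's actual kernel choice is not given in the abstract.

```python
import numpy as np

def rbf_deform(vertices, controls, control_disp, sigma=0.05):
    """Propagate control-point displacements to nearby non-feature vertices.

    vertices:     (V,3) positions of the non-feature vertices
    controls:     (C,3) positions of the feature (control) points
    control_disp: (C,3) displacements applied to the control points
    sigma:        assumed Gaussian kernel width (model units)
    """
    gauss = lambda d2: np.exp(-d2 / (2.0 * sigma ** 2))

    # Weights that reproduce the control displacements exactly at the controls.
    d2_cc = ((controls[:, None] - controls[None]) ** 2).sum(-1)
    W = np.linalg.solve(gauss(d2_cc), control_disp)            # (C,3)

    # Evaluate the interpolant at the remaining vertices and displace them.
    d2_vc = ((vertices[:, None] - controls[None]) ** 2).sum(-1)
    return vertices + gauss(d2_vc) @ W
```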

Real-time Markerless Facial Motion Capture of Personalized 3D Real Human Research

  • Hou, Zheng-Dong;Kim, Ki-Hong;Lee, David-Junesok;Zhang, Gao-He
    • International Journal of Internet, Broadcasting and Communication / v.14 no.1 / pp.129-135 / 2022
  • Real human digital models appear more and more frequently in VR/AR application scenarios, in which real-time markerless facial capture animation of personalized virtual human faces is an important research topic. The traditional way to achieve personalized real-human facial animation requires several experienced animation staff, and in practice the complex process and difficult technology may pose obstacles to inexperienced users. This paper proposes a new workflow for this kind of work, which has the advantages of lower cost and less time than the traditional production method. For a personalized real-human face model obtained by 3D reconstruction technology, the model is first retopologized with R3ds Wrap, Avatary is then used to produce the 52 blendshape model files suitable for ARKit, and finally real-time markerless facial motion capture of the 3D real human is realized on the UE4 platform. This study makes rational use of the advantages of each software package and proposes a more efficient workflow for real-time markerless facial motion capture of personalized 3D real-human models; the process proposed in this paper can be helpful to other scholars studying this kind of work.
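As a rough illustration of the last stage of this pipeline, the sketch below applies one frame of 52 ARKit-style blendshape coefficients to a face mesh as a linear combination of morph-target deltas. The mesh sizes and arrays are hypothetical placeholders; in the workflow described above the real-time retargeting is handled inside UE4 rather than in user code.

```python
import numpy as np

NUM_SHAPES = 52                                   # ARKit-style blendshape count
neutral = np.zeros((5000, 3))                     # hypothetical neutral mesh (V,3)
deltas = np.zeros((NUM_SHAPES, 5000, 3))          # per-blendshape vertex offsets

def apply_blendshapes(weights):
    """weights: (52,) per-frame capture coefficients in [0, 1]."""
    w = np.clip(np.asarray(weights, dtype=float), 0.0, 1.0)
    return neutral + np.tensordot(w, deltas, axes=1)   # (V,3) deformed mesh
```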

A Study on Effective Facial Expression of 3D Character through Variation of Emotions (Model using Facial Anatomy) (감정변화에 따른 3D캐릭터의 표정연출에 관한 연구 (해부학적 구조 중심으로))

  • Kim, Ji-Ae
    • Journal of Korea Multimedia Society / v.9 no.7 / pp.894-903 / 2006
  • The rapid growth of hardware technology has brought about the development and expansion of various digital moving-picture media, including 3D. 3D digital techniques are used in diverse fields such as animation, virtual reality, movies, advertising, and games. 3D characters in digital motion pictures play a core role in communicating emotions and information to users through sound, facial expression, and characteristic motion. Interest in 3D motion and facial expression is growing as the frequency and range of use of 3D character design extend. In this study, facial expression is examined as an effective means of conveying implicit emotions: the facial expressions and muscle movements of 3D characters are studied on the basis of human anatomy in order to find effective methods of facial expression. Finally, the differences and distinctions between 2D and 3D characters are also examined in light of the author's preceding research.

Designing and Implementing 3D Virtual Face Aesthetic Surgery System Based on Korean Standard Facial Data (한국 표준 얼굴 데이터를 적용한 3D 가상 얼굴 성형 제작 시스템 설계 및 구현)

  • Lee, Cheol-Woong;Kim, II-Min;Cho, Sae-Hong
    • Journal of Korea Multimedia Society / v.12 no.5 / pp.737-744 / 2009
  • This paper studies and implements a 3D Virtual Face Aesthetic Surgery System, which provides greater satisfaction by letting the user compare the face before and after plastic surgery on a 3D face model. For this study, we implemented a 3D Face Model Generating System that produces a model resembling the user's 2D image, based on the 3D Korean standard face model and the user's 2D pictures. The proposed 3D Virtual Face Aesthetic Surgery System consists of the 3D Face Model Generating System, a 3D Skin Texture Mapping System, and a Detailed Adjustment System for reflecting the fine details of the face. Compared with other existing systems, the proposed system provides greater satisfaction to medical users and more stability in surgery.
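The abstract does not detail how the 3D Skin Texture Mapping System works; a crude, assumption-laden illustration of one common approach is to project the fitted mesh orthographically onto the user's frontal photograph to obtain per-vertex texture coordinates, as sketched below.

```python
import numpy as np

def frontal_photo_uvs(vertices):
    """Assign texture coordinates by orthographic projection onto a frontal photo.

    vertices: (V,3) fitted face mesh, assumed axis-aligned with the photo
              (+x right, +y up, camera looking along -z).
    Returns (V,2) UVs in [0,1], with v measured downward in image convention.
    """
    xy = vertices[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    norm = (xy - lo) / (hi - lo)             # fit the face bounding box to the image
    return np.stack([norm[:, 0], 1.0 - norm[:, 1]], axis=1)
```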

Convergence Study on the Three-dimensional Educational Model of the Functional Anatomy of Facial Muscles Based on Cadaveric Data (카데바 자료를 이용한 얼굴근육의 해부학적 기능 학습을 위한 삼차원 교육 콘텐츠 제작과 관련된 융합 연구)

  • Lee, Jae-Gi
    • Journal of the Korea Convergence Society / v.12 no.9 / pp.57-63 / 2021
  • This study dissected and three-dimensionally (3D) scanned the facial muscles of Korean adult cadavers, created a 3D model with realistic facial muscle shapes, and reproduced facial expressions to provide educational materials that allow 3D observation of the complex movements of cadaveric facial muscles. Using the cadavers' anatomical photo data, 3D modeling of the facial muscles was performed, and models describing four different expressions, namely sad, happy, surprised, and angry, were produced. We confirmed the complex action of the 3D cadaveric facial muscles when the various facial expressions are made. Although the results of this study cannot quantify the individual functions of the facial muscles, we were able to observe the realistic shape of the cadavers' facial muscles and produce models that show different expressions depending on the actions performed. The data from this study may be used as educational materials for studying the anatomy of the facial muscles.

A Study on the Fabrication of Facial Blend Shape of 3D Character - Focusing on the Facial Capture of the Unreal Engine (3D 캐릭터의 얼굴 블렌드쉐입(blendshape)의 제작연구 -언리얼 엔진의 페이셜 캡처를 중심으로)

  • Lou, Yi-Si;Choi, Dong-Hyuk
    • The Journal of the Korea Contents Association / v.22 no.8 / pp.73-80 / 2022
  • Facial expression is an important means of portraying character in movies and animation, and facial capture technology can support the production of facial animation for 3D characters more quickly and effectively. Blendshape techniques are the most widely used methods for producing high-quality 3D facial animation, but traditional blendshape production often takes a long time. The purpose of this study is therefore to shorten the production period of blendshapes while achieving results that are not far behind the effectiveness of traditional production. In this paper, a method that uses a cross-model to transfer blendshapes is compared with the traditional method of making blendshapes, and the validity of the new method is verified. This study used the kit boy character developed by Unreal Engine as the experimental subject, conducted facial capture tests with the two blendshape production techniques, and compared and analyzed the facial effects driven by the blendshapes.
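When two heads share topology (for example after one has been wrapped onto the other), a blendshape can be carried across models by copying per-vertex deltas, which is the basic idea behind cross-model transfer. The sketch below is a minimal illustration under that shared-topology assumption, with a purely heuristic size compensation; it is not the procedure used in the paper.

```python
import numpy as np

def transfer_blendshape(src_neutral, src_shape, dst_neutral, scale=None):
    """Carry one blendshape from a source head to a target head with the same topology.

    src_neutral, src_shape, dst_neutral: (V,3) arrays with matching vertex order.
    scale: optional global factor for the size difference between the two heads;
           the bounding-box ratio used as the default is a heuristic assumption.
    """
    delta = src_shape - src_neutral                    # expression offsets per vertex
    if scale is None:
        scale = (np.linalg.norm(np.ptp(dst_neutral, axis=0)) /
                 np.linalg.norm(np.ptp(src_neutral, axis=0)))
    return dst_neutral + scale * delta
```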

Spectrum-Based Color Reproduction Algorithm for Makeup Simulation of 3D Facial Avatar

  • Jang, In-Su;Kim, Jae Woo;You, Ju-Yeon;Kim, Jin Seo
    • ETRI Journal / v.35 no.6 / pp.969-979 / 2013
  • Various simulation applications for hair, clothing, and makeup of a 3D avatar can provide more useful information to users before they select a hairstyle, clothes, or cosmetics. To enhance their realism, the shapes, textures, and colors of the avatars should be similar to those found in the real world. For a more realistic 3D avatar color reproduction, this paper proposes a spectrum-based color reproduction algorithm and a color management process for implementing the algorithm. First, a makeup color reproduction model is estimated by analyzing the measured spectral reflectance of skin samples before and after applying the makeup. To implement the model in a makeup simulation system, the color management process controls all color information of the 3D facial avatar during the 3D scanning, modeling, and rendering stages. During 3D scanning with a multi-camera system, spectrum-based camera calibration and characterization are performed to estimate the spectrum data. During the virtual makeup process, the spectrum data of the 3D facial avatar is modified based on the makeup color reproduction model. Finally, during 3D rendering, the estimated spectrum is converted into RGB data through gamut mapping and display characterization.
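The last step described above, turning an estimated spectrum into display RGB, follows standard colorimetry: integrate the stimulus against the CIE color-matching functions, normalize by the illuminant, and encode with the display transfer curve. The sketch below uses the sRGB matrix, a crude gamut clip, and the sRGB gamma as stand-ins for the paper's gamut mapping and display characterization, which the abstract does not spell out.

```python
import numpy as np

# Linear XYZ -> linear sRGB matrix (D65 white point).
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def spectrum_to_srgb(reflectance, illuminant, cmf):
    """reflectance, illuminant: (N,) samples on a shared wavelength grid;
    cmf: (N,3) CIE 1931 color-matching functions on the same grid."""
    stimulus = reflectance * illuminant
    xyz = stimulus @ cmf                         # integrate against the x, y, z curves
    xyz /= illuminant @ cmf[:, 1]                # normalize: perfect white gives Y = 1
    rgb_lin = np.clip(XYZ_TO_SRGB @ xyz, 0.0, 1.0)      # crude gamut clip
    return np.where(rgb_lin <= 0.0031308,
                    12.92 * rgb_lin,
                    1.055 * rgb_lin ** (1.0 / 2.4) - 0.055)   # sRGB encoding
```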

Dynamic Emotion Model in 3D Affect Space for a Mascot-Type Facial Robot (3차원 정서 공간에서 마스코트 형 얼굴 로봇에 적용 가능한 동적 감정 모델)

  • Park, Jeong-Woo;Lee, Hui-Sung;Jo, Su-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.2 no.3 / pp.282-287 / 2007
  • Humanoid and android robots are emerging as the trend shifts from industrial robots to personal robots, so human-robot interaction will increase. The ultimate objective of humanoid and android robots is a robot that is like a human; in this respect, implementing facial expressions on the robot is necessary for making a human-like robot. This paper proposes a dynamic emotion model for a mascot-type facial robot that lets it display facial expressions that are closer to human ones and more recognizable.
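The abstract does not specify the dynamics, but one simple way to make an emotion model "dynamic" is to let the robot's state relax toward a stimulus-driven target point in the 3D affect space. The first-order filter below is such a sketch; the axis names and time constant are assumptions, not taken from the paper.

```python
import numpy as np

class DynamicEmotion3D:
    """Toy dynamic emotion state in a 3D affect space (axes assumed here to be
    arousal, valence, stance). The state relaxes toward each new stimulus."""

    def __init__(self, time_constant=2.0):
        self.state = np.zeros(3)           # current point in affect space
        self.tau = time_constant           # seconds; larger = slower emotional change

    def update(self, stimulus, dt):
        """First-order lag toward the stimulus point; returns the new state."""
        alpha = 1.0 - np.exp(-dt / self.tau)
        self.state += alpha * (np.asarray(stimulus, dtype=float) - self.state)
        return self.state                  # would be mapped to facial actuator commands
```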

Use of 3D Printing Model for the Management of Fibrous Dysplasia: Preliminary Case Study

  • Choi, Jong-Woo;Jeong, Woo Shik
    • Journal of International Society for Simulation Surgery / v.3 no.1 / pp.36-38 / 2016
  • Fibrous dysplasia is a relatively rare disease, but its management can be quite challenging. Because it is not a malignant tumor, preservation of the facial contour and of the various functions is important in treatment planning. Until now, facial bone reconstruction with autogenous bone has been the standard. Although autogenous bone is the ideal material for facial bone reconstruction, donor-site morbidity is an unavoidable problem in many cases. Meanwhile, various types of allogenic and alloplastic materials have also been used; however, facial bone reconstruction with many alloplastic materials has produced no fewer complications, including infection, exposure, and delayed wound healing. The 3D printing technique has evolved so rapidly that 3D-printed titanium implants have recently become possible. The aim of this trial was to restore the original maxillary anatomy as closely as possible using a 3D printing model based on mirrored three-dimensional CT images and computer simulation. Preoperative computed tomography (CT) data were processed for the patient and a rapid prototyping (RP) model was produced. At the same time, the uninjured side was mirrored and superimposed onto the traumatized side to create a mirror-image RP model, and a titanium mesh was molded on it to reconstruct the three-dimensional maxillary structure during the operation. This prefabricated titanium-mesh implant was then inserted onto the defective maxilla and fixed. The three-dimensional printing technique with titanium material based on computer simulation turned out to be successful in this patient. An individualized approach for each patient could be an ideal way to restore the facial bones.
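The mirroring step, reflecting the uninjured side across the midsagittal plane so it can be superimposed on the defect, can be sketched on CT-derived surface points as below. In practice this is done in medical imaging software and followed by registration; the plane definition here is an assumed input.

```python
import numpy as np

def mirror_across_plane(points, plane_point, plane_normal):
    """Reflect CT-derived surface points across an (assumed) midsagittal plane.

    points:       (N,3) vertices segmented from the uninjured side
    plane_point:  a point lying on the mirroring plane
    plane_normal: the plane normal (need not be unit length)
    """
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    d = (points - plane_point) @ n             # signed distance to the plane
    return points - 2.0 * d[:, None] * n       # mirrored template for the defect side
```

A subsequent rigid registration (for example ICP) would align this mirrored template to the traumatized side before the titanium mesh is molded on the RP model.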