• Title/Summary/Keyword: realistic facial expressions

Generation of Robot Facial Gestures based on Facial Actions and Animation Principles

  • Park, Jeong Woo;Kim, Woo Hyun;Lee, Won Hyong;Lee, Hui Sung;Chung, Myung Jin
    • Journal of Institute of Control, Robotics and Systems / v.20 no.5 / pp.495-502 / 2014
  • This paper proposes a method to generate diverse robot facial expressions and facial gestures in order to support long-term HRI. First, nine basic dynamics for diverse robot facial expressions are determined from the dynamics of human facial expressions and the principles of animation, so that even identical emotions can be expressed in varied ways. In the second stage, facial actions are added to express facial gestures, such as sniffling or wailing loudly for sadness, and laughing aloud or smiling for happiness. To evaluate the effectiveness of our approach, we compared the facial expressions of the developed robot with and without the proposed method. The survey results showed that the proposed method can help robots generate more realistic facial expressions.
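
A minimal sketch in Python of the timing idea above: the same target emotion can be given different dynamics by changing the easing curve used to interpolate from a neutral pose, in the spirit of the slow-in/slow-out animation principle. The function names, actuator layout, and parameter values are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def ease_in_out(t, sharpness=2.0):
        """Slow-in/slow-out timing curve mapping t in [0, 1] to [0, 1]."""
        return t**sharpness / (t**sharpness + (1.0 - t)**sharpness)

    def expression_trajectory(neutral, target, duration_s, fps=30, sharpness=2.0):
        """Interpolate actuator positions from a neutral pose to a target expression."""
        ts = np.linspace(0.0, 1.0, int(duration_s * fps))
        weights = ease_in_out(ts, sharpness)
        # One pose per frame: rows are frames, columns are facial actuators.
        return neutral + np.outer(weights, target - neutral)

    neutral_pose = np.zeros(6)                       # e.g., 6 facial motor positions
    happy_pose = np.array([0.8, 0.8, 0.2, 0.0, 0.5, 0.5])
    # Different sharpness values give different "dynamics" for the same emotion.
    frames = expression_trajectory(neutral_pose, happy_pose, duration_s=1.0, sharpness=3.0)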

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok;Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.8 / pp.3473-3487 / 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two sets of example models, called key visemes and key expressions, are used for lip synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent the basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on this transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During the synthesis process, an importance-based scheme is introduced to combine both lip synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation with high accuracy (over 90%) in speech recognition.
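
A minimal sketch in Python of the two blending stages described above: linear interpolation between key viseme meshes for lip sync, plus an importance-weighted expression offset. The paper's importance scheme is region-based; the single scalar weight here, like all names and array shapes, is a simplifying assumption.

    import numpy as np

    def interpolate_visemes(viseme_a, viseme_b, t):
        """Blend two key viseme meshes (N x 3 vertex arrays) at time t in [0, 1]."""
        return (1.0 - t) * viseme_a + t * viseme_b

    def combine_with_expression(speech_mesh, neutral, expression, importance):
        """Add an expression offset, attenuated where lip sync is more important."""
        return speech_mesh + (1.0 - importance) * (expression - neutral)

    neutral = np.zeros((100, 3))                          # toy 100-vertex face mesh
    viseme_aa = neutral + np.random.rand(100, 3) * 0.01   # key viseme "aa"
    viseme_oo = neutral + np.random.rand(100, 3) * 0.01   # key viseme "oo"
    happy = neutral + np.random.rand(100, 3) * 0.02       # key expression

    mouth = interpolate_visemes(viseme_aa, viseme_oo, t=0.3)
    frame = combine_with_expression(mouth, neutral, happy, importance=0.7)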

Convergence Study on the Three-dimensional Educational Model of the Functional Anatomy of Facial Muscles Based on Cadaveric Data

  • Lee, Jae-Gi
    • Journal of the Korea Convergence Society / v.12 no.9 / pp.57-63 / 2021
  • In this study, the facial muscles of Korean adult cadavers were dissected and three-dimensionally (3D) scanned, a 3D model with realistic facial muscle shapes was created, and facial expressions were reproduced, providing educational materials that allow 3D observation of the complex movements of cadaveric facial muscles. Using the cadavers' anatomical photographic data, 3D modeling of the facial muscles was performed, and models describing four different expressions, namely sad, happy, surprised, and angry, were produced. We confirmed the complex actions of the 3D cadaver facial muscles when making various facial expressions. Although the results of this study cannot quantitatively confirm the individual functions of the facial muscles, we were able to observe the realistic shape of the cadavers' facial muscles and to produce models showing different expressions depending on the actions performed. The data from this study may be used as educational material for studying the anatomy of the facial muscles.

Synthesis of Realistic Facial Expression using a Nonlinear Model for Skin Color Change

  • Lee, Jeong-Ho;Park, Hyun;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.3 s.309 / pp.67-75 / 2006
  • Facial expressions exhibit not only facial feature motions but also subtle changes in illumination and appearance. Since it is difficult to generate realistic facial expressions using only geometric deformations, detailed features such as textures should also be deformed to achieve more realistic expressions. Existing methods such as the expression ratio image have the drawback that detailed changes of complexion under lighting cannot be generated properly. In this paper, we propose a nonlinear model for skin color change and a model-based synthesis method for facial expression that can apply realistic expression details under different lighting conditions. The proposed method is composed of three steps: automatic extraction of facial features using an active appearance model and geometric deformation of the expression using warping; generation of the facial expression using the nonlinear model for skin color change; and synthesis of the original face with the generated expression using a blending ratio computed by the Euclidean distance transform. Experimental results show that the proposed method generates realistic facial expressions under various lighting conditions.
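
A minimal sketch in Python of the final blending step: weights derived from the Euclidean distance transform fade the synthesized expression region smoothly into the original face. The mask, images, and falloff value are toy stand-ins rather than the paper's pipeline; scipy's distance_transform_edt supplies the transform.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def blend_with_distance_weights(original, synthesized, mask, falloff=20.0):
        """Composite a synthesized region (mask == 1) onto the original image."""
        dist = distance_transform_edt(mask)        # pixel distance to region border
        alpha = np.clip(dist / falloff, 0.0, 1.0)  # 0 at the border, 1 deep inside
        alpha = alpha[..., None]                   # broadcast over RGB channels
        return alpha * synthesized + (1.0 - alpha) * original

    original = np.zeros((64, 64, 3))               # toy "original face" image
    synthesized = np.ones((64, 64, 3))             # toy "expression detail" image
    mask = np.zeros((64, 64))
    mask[16:48, 16:48] = 1                         # synthesized expression region
    result = blend_with_distance_weights(original, synthesized, mask)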

Facial Expression Explorer for Realistic Character Animation

  • Ko, Hee-Dong;Park, Moon-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.16.1-164 / 1998
  • This paper describes Facial Expression Explorer, a tool for searching for the components of a facial expression and mapping the expression onto otherwise expressionless figures such as a robot, a frog, a teapot, or a rabbit. In general, creating a facial expression manually is a time-consuming and laborious job, especially when the expression must personify a well-known public figure or an actor. To extract a blending ratio from facial images automatically, Facial Expression Explorer uses a Networked Genetic Algorithm (NGA), which speeds up the convergence of a conventional genetic algorithm. Animators often use such blending ratios to create facial expressions through shape blending. With Facial Expression Explorer, a realistic facial expression can be modeled more efficiently.
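
A minimal sketch in Python of the underlying idea: a genetic algorithm evolving a blending-ratio vector so that a shape-blended face matches a target. This is a plain GA, not the networked (NGA) variant the paper proposes, and every name, shape, and hyperparameter is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(0)

    def blend(keys, w):
        """Shape-blend key expression meshes (K x N x 3) with normalized weights."""
        w = np.clip(w, 0.0, 1.0)
        w = w / (w.sum() + 1e-9)
        return np.tensordot(w, keys, axes=1)

    def fitness(keys, w, target):
        return -np.mean((blend(keys, w) - target) ** 2)   # higher is better

    def ga_blend_ratio(keys, target, pop=40, gens=100, mut=0.05):
        population = rng.random((pop, keys.shape[0]))
        for _ in range(gens):
            scores = np.array([fitness(keys, w, target) for w in population])
            parents = population[np.argsort(scores)[-pop // 2:]]        # selection
            children = parents[rng.integers(0, len(parents), pop - len(parents))]
            children = children + rng.normal(0.0, mut, children.shape)  # mutation
            population = np.vstack([parents, children])
        scores = np.array([fitness(keys, w, target) for w in population])
        return population[np.argmax(scores)]

    keys = rng.random((4, 50, 3))                  # 4 key expressions, toy meshes
    target = blend(keys, np.array([0.6, 0.1, 0.2, 0.1]))
    best_w = ga_blend_ratio(keys, target)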

3D Emotional Avatar Creation and Animation using Facial Expression Recognition

  • Cho, Taehoon;Jeong, Joong-Pill;Choi, Soo-Mi
    • Journal of Korea Multimedia Society / v.17 no.9 / pp.1076-1083 / 2014
  • We propose an emotional facial avatar that portrays the user's facial expressions with an emotional emphasis while achieving visual and behavioral realism. This is achieved by unifying automatic analysis of facial expressions with animation of realistic 3D faces that include details such as facial hair and hairstyles. To augment facial appearance according to the user's emotions, we use emotional templates representing typical emotions in an artistic way, which can be easily combined with the skin texture of the 3D face at runtime. Hence, our interface gives the user vision-based control over the facial animation of the emotional avatar, allowing its mood to be changed easily.
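
A minimal sketch in Python of runtime template combination as alpha compositing: an RGBA emotional template is layered over the avatar's skin texture, with its alpha channel marking where the artistic emotion cues appear. The array layout and strength parameter are assumptions for illustration, not the paper's data format.

    import numpy as np

    def apply_emotion_template(skin_rgb, template_rgba, strength=1.0):
        """Composite an RGBA emotion template over an RGB skin texture."""
        template_rgb = template_rgba[..., :3]
        alpha = template_rgba[..., 3:4] * strength   # per-pixel blend weight
        return alpha * template_rgb + (1.0 - alpha) * skin_rgb

    skin = np.full((256, 256, 3), 0.8)               # toy uniform skin texture
    template = np.zeros((256, 256, 4))
    template[60:90, 40:100] = [0.9, 0.3, 0.3, 0.6]   # red "anger" patch with alpha
    angry_skin = apply_emotion_template(skin, template, strength=0.8)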

Image-based Realistic Facial Expression Animation

  • Yang, Hyun-S.;Han, Tae-Woo;Lee, Ju-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.133-140 / 1999
  • In this paper, we propose an image-based three-dimensional modeling method for realistic facial expression. In the proposed method, real human facial images are used to deform a generic three-dimensional mesh model, and the deformed model is animated to generate facial expression animation. First, we take several pictures of the same person from several viewing angles. Then we project a three-dimensional face model onto the plane of each facial image and match the projected model with each image; the results are combined to generate a deformed three-dimensional model. We use feature-based image metamorphosis to match the projected models with the images. We then create a synthetic image from the two-dimensional images of a specific person's face, and this synthetic image is texture-mapped onto the cylindrical projection of the three-dimensional model. We also propose a muscle-based animation technique to generate realistic facial expression animations, which facilitates control of the animation. Lastly, we show the animation results for the six representative facial expressions.
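
A minimal sketch in Python of a muscle-based deformer in the spirit of classic linear facial muscle models: vertices within a muscle's zone of influence are pulled toward its attachment point with a radial falloff. This is a generic textbook-style formulation, not the paper's exact model; all coordinates and parameters are illustrative.

    import numpy as np

    def contract_linear_muscle(vertices, attachment, insertion, contraction, radius):
        """Pull mesh vertices near the insertion toward the muscle attachment."""
        direction = attachment - insertion
        direction = direction / np.linalg.norm(direction)
        dist = np.linalg.norm(vertices - insertion, axis=1)
        falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)  # 1 at insertion, 0 at edge
        return vertices + contraction * falloff[:, None] * direction

    verts = np.random.rand(200, 3)                # toy face mesh vertices
    zygomatic_attach = np.array([0.8, 0.9, 0.2])  # illustrative "smile" muscle ends
    zygomatic_insert = np.array([0.5, 0.4, 0.3])
    smiling = contract_linear_muscle(verts, zygomatic_attach, zygomatic_insert,
                                     contraction=0.1, radius=0.5)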

Comparative Analysis of Facial Animation Production by Digital Actors - Keyframe Animation and Mobile Capture Animation

  • Choi, Chul Young
    • International journal of advanced smart convergence / v.13 no.3 / pp.176-182 / 2024
  • Looking at the recent game market, classic games released in the past are being re-released with high-quality visuals, and users have generally responded favorably. Realistic digital actors that could not be achieved in the past are now becoming a reality. Epic Games launched the MetaHuman Creator website in September 2021, allowing anyone to easily create realistic human characters, and the number of animations created using MetaHumans has been increasing since then. As characters become more realistic, the movement and expression animations expected by the audience must also be convincingly realized. Until recently, traditional methods were the primary approach for producing realistic character animation. For facial animation, Epic Games introduced an improved method in the Live Link app in 2023, which provides the highest quality among mobile-based techniques. In this context, this paper compares the results of animation produced using keyframe animation and mobile-based facial capture. After creating an emotional expression animation of four sentences, the results were compared in Unreal Engine. While the facial capture method is more natural and easier to use, the precise and exaggerated expressions possible with the keyframe method cannot be overlooked, suggesting that a hybrid approach using both methods will likely continue for the foreseeable future.

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.2 / pp.120-133 / 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and we deal with both problems here. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation robustly estimates a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image; by updating the template dynamically, the head pose can be recovered regardless of lighting variations and self-occlusion. In the facial expression cloning phase, the variations of the major facial feature points in the face images are tracked using optical flow, and the variations are retargeted to the 3D face model. At the same time, radial basis functions (RBFs) are used to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
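
A minimal sketch in Python of the two ingredients above: tracking feature points with pyramidal Lucas-Kanade optical flow (OpenCV's calcOpticalFlowPyrLK) and propagating their displacements to nearby regional points with an RBF interpolator (scipy). The frames, point sets, and 2D mesh are toy stand-ins, and the sketch assumes enough points track successfully.

    import numpy as np
    import cv2
    from scipy.interpolate import RBFInterpolator

    def track_features(prev_gray, next_gray, prev_pts):
        """Track 2D feature points between two grayscale frames."""
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, next_gray, prev_pts.astype(np.float32), None)
        ok = status.ravel() == 1                       # keep successfully tracked points
        return prev_pts[ok].reshape(-1, 2), next_pts[ok].reshape(-1, 2)

    def retarget_with_rbf(feature_pts, feature_disp, mesh_pts):
        """Estimate displacements of regional mesh points from tracked features."""
        rbf = RBFInterpolator(feature_pts, feature_disp, kernel='thin_plate_spline')
        return mesh_pts + rbf(mesh_pts)

    prev_frame = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
    next_frame = np.roll(prev_frame, 2, axis=1)        # fake 2-pixel horizontal motion
    pts = np.array([[100.0, 100.0], [150.0, 120.0],
                    [200.0, 140.0], [120.0, 180.0]]).reshape(-1, 1, 2)

    p0, p1 = track_features(prev_frame, next_frame, pts)
    mesh = np.random.rand(50, 2) * [320.0, 240.0]      # toy regional feature points
    deformed = retarget_with_rbf(p0, p1 - p0, mesh)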

Putting Your Best Face Forward: Development of a Database for Digitized Human Facial Expression Animation

  • Lee, Ning-Sung;Alia Reid Zhang Yu;Edmond C. Prakash;Tony K.Y. Chan;Edmund M-K. Lai
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2001.10a / pp.153.6-153 / 2001
  • Three-dimensional (3D) digitization of the human is a technology that is still relatively new. Present uses include radiotherapy, identification systems, and commercial applications, and further applications are anticipated. In this paper, we analyzed and experimented to determine the easiest and most efficient method that would give us the most accurate results. We also constructed a database of realistic expressions and high-quality human heads. We scanned people's heads and facial expressions in 3D using a Minolta Vivid 700 scanner, then edited the resulting models on a Silicon Graphics workstation. Research was done into the present and potential uses of 3D digitized models of the human head, and we develop ideas for ...
