• Title/Summary/Keyword: Face Animation

119 search results

Implementation of Face Animation For MPEG-4 SNHC

  • Lee, Ju-Sang;Yoo, Ji-Sang;Ahn, Chie-Teuk
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.06a / pp.141-144 / 1999
  • The MPEG-4 SNHC FBA (face and body animation) group is standardizing an MPEG-4 system for low-bit-rate communication through the implementation and animation of the human face and body in a virtual environment. In the first version of the MPEG-4 standard, only the face object is implemented and animated, using the FDP (face definition parameters) and FAP (facial animation parameters), which are abstract parameterizations of the human face for low-bit-rate coding. In this paper, the MPEG-4 SNHC face object and its animation were implemented using computer graphics tools such as VRML and OpenGL.
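As a hedged illustration of the FAP idea (not the paper's code), a FAP is commonly interpreted as a displacement of a facial feature point, scaled by a face-specific FAPU unit; a minimal sketch, with all names and values invented for illustration:

```python
# Minimal sketch of applying MPEG-4-style FAPs to feature points.
# All names and values here are illustrative, not from the paper.

def apply_faps(feature_points, faps, fapu):
    """Displace 3D feature points by FAP values scaled by a FAPU unit.

    feature_points: dict name -> (x, y, z)
    faps: dict name -> (axis, amount)  # amount expressed in FAPU units
    fapu: face-specific scale factor (real decoders derive several such
          units, e.g. from mouth width; a single scalar is a simplification)
    """
    animated = {}
    for name, (x, y, z) in feature_points.items():
        p = [x, y, z]
        if name in faps:
            axis, amount = faps[name]
            p[axis] += amount * fapu  # displace along the given axis
        animated[name] = p
    return animated

# Example: lower the chin point to open the jaw.
points = {"chin": (0.0, -5.0, 0.0), "nose_tip": (0.0, 0.0, 1.0)}
moved = apply_faps(points, {"chin": (1, -3.0)}, fapu=0.5)
```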

Online Face Avatar Motion Control based on Face Tracking

  • Wei, Li;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.12 no.6 / pp.804-814 / 2009
  • In this paper, a novel system for controlling an avatar's motion by tracking the face is presented. The system is composed of three main parts: first, a face feature detection algorithm based on the LCS (Local Cluster Searching) method; second, an HMM-based feature point recognition algorithm; and finally, an avatar control and animation generation algorithm. In the LCS method, the face region is divided into many small regions in the horizontal and vertical directions, and the method judges whether each cross point is an object point, an edge point, or a background point. The HMM then distinguishes the mouth, eyes, nose, etc. among these feature points. Based on the detected facial feature points, the 3D avatar is controlled in two ways: orientation and animation. The avatar's orientation is obtained by analyzing facial geometric information, and the avatar's animation is generated smoothly from the face feature points. Finally, to evaluate the developed system, we implemented it on Windows XP; the results show that the system performs well.
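The grid-based classification step can be sketched as follows; this is a hedged reading of the LCS idea, with thresholds and the intensity test chosen purely for illustration:

```python
# Hedged sketch of the grid-based classification behind LCS: sample a
# grayscale face region at grid cross points and label each point as
# object, edge, or background. Thresholds are illustrative only.

def classify_cross_points(image, step, obj_thresh=80, edge_band=30):
    """image: 2D list of grayscale values (0-255).
    Returns {(row, col): 'object' | 'edge' | 'background'}."""
    labels = {}
    for r in range(0, len(image), step):
        for c in range(0, len(image[0]), step):
            v = image[r][c]
            if v < obj_thresh:
                labels[(r, c)] = "object"      # dark facial feature
            elif v < obj_thresh + edge_band:
                labels[(r, c)] = "edge"        # transition zone
            else:
                labels[(r, c)] = "background"  # bright skin or backdrop
    return labels

# Toy 2x2 "image": bright top row, one dark and one mid-gray pixel below.
labels = classify_cross_points([[200, 200], [50, 100]], step=1)
```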


Estimation of 3D Rotation Information of Animation Character Face (애니메이션 캐릭터 얼굴의 3차원 회전정보 측정)

  • Jang, Seok-Woo;Weon, Sun-Hee;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.16 no.8 / pp.49-56 / 2011
  • Recently, animation content has become widely available along with the development of the cultural industry. In this paper, we propose a method to analyze the face of an animation character and extract 3D rotational information of the face. The suggested method first generates a dominant color model of the face by learning face images of the animation character. Our system then detects the face and its components with the model and establishes two coordinate systems: a base coordinate system and a target coordinate system. It estimates the three-dimensional rotational information of the character's face using the geometric relationship between the two coordinate systems. Finally, to visually represent the extracted 3D information, a 3D face model reflecting the rotation information is displayed. Experiments show that our method can reliably extract the 3D rotation information of a character's face.
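One simple way rotation can be read off facial geometry is by comparing feature positions against a frontal "base" configuration. The sketch below is an illustration of that general idea, not the paper's method: it assumes image coordinates with y growing downward and a frontal nose position half an eye-distance below the eye line.

```python
import math

# Illustrative sketch (not the paper's exact method): estimate yaw and
# pitch of a character face from 2D feature positions by comparing the
# nose against the eye-line midpoint of an assumed frontal base pose.

def estimate_rotation(left_eye, right_eye, nose):
    """Return (yaw, pitch) in degrees from 2D (x, y) feature points."""
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    mid_y = (left_eye[1] + right_eye[1]) / 2.0
    eye_dist = math.hypot(right_eye[0] - left_eye[0],
                          right_eye[1] - left_eye[1])
    # Normalized horizontal offset of the nose -> yaw.
    yaw = math.degrees(math.asin(
        max(-1.0, min(1.0, (nose[0] - mid_x) / eye_dist))))
    # Vertical offset relative to the assumed frontal offset (0.5) -> pitch.
    pitch = math.degrees(math.asin(
        max(-1.0, min(1.0, (nose[1] - mid_y) / eye_dist - 0.5))))
    return yaw, pitch

# A perfectly frontal face: nose centered, half an eye-distance below.
yaw, pitch = estimate_rotation((0, 0), (10, 0), (5, 5))
```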

A Study on the Realization of Virtual Simulation Face Based on Artificial Intelligence

  • Zheng-Dong Hou;Ki-Hong Kim;Gao-He Zhang;Peng-Hui Li
    • Journal of information and communication convergence engineering / v.21 no.2 / pp.152-158 / 2023
  • In recent years, as computer-generated imagery has been applied to more industries, realistic facial animation has become an important research topic. The current solution is to create realistically rendered 3D characters, but 3D characters created by traditional methods always differ from the actual person and require high costs in staff and time. Deepfake technology can achieve realistic faces and replicate facial animation: facial details and animation are produced automatically by the computer once the AI model is trained, and the model can be reused, reducing the human and time costs of realistic face animation. In addition, this study summarizes how human face information is captured, proposes a new workflow for video-to-image conversion, and demonstrates that the new scheme obtains higher-quality images and face-swap results, as evaluated by no-reference image quality assessment.
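The abstract evaluates frames with no-reference image quality assessment. One common and very simple NR-IQA proxy, which is not necessarily the metric the authors used, is the variance of the Laplacian response: sharper frames score higher. A minimal sketch:

```python
# Simple no-reference sharpness proxy: variance of the 4-neighbour
# Laplacian over the image interior. Illustrative only; the paper's
# actual NR-IQA metric may differ.

def laplacian_variance(image):
    """image: 2D list of grayscale values. Higher result = sharper."""
    h, w = len(image), len(image[0])
    responses = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            lap = (image[r - 1][c] + image[r + 1][c] +
                   image[r][c - 1] + image[r][c + 1] - 4 * image[r][c])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((x - mean) ** 2 for x in responses) / len(responses)

# A perfectly flat image has zero Laplacian variance.
flat = laplacian_variance([[5, 5, 5], [5, 5, 5], [5, 5, 5]])
```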

Facial Expression Animation Using 3D Face Modelling of Anatomy Base (해부학 기반의 3차원 얼굴 모델링을 이용한 얼굴 표정 애니메이션)

  • 김형균;오무송
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.2 / pp.328-333 / 2003
  • This paper performs facial animation with 18 anatomically grounded muscle pairs that influence facial expression changes, mixing the motions of the muscles. After fitting and deforming a mesh to build a standard model from an individual's image, the individual's front and side images are texture-mapped onto the mesh to increase realism. The muscle model that drives the animation is a modified version of Waters' muscle model for generating facial expressions. Using this method, a textured, deformable face was created, and the six facial expressions proposed by Ekman were animated.
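The abstract builds on Waters' muscle model; a simplified 2D sketch of a Waters-style linear muscle is shown below. The paper modifies Waters' model, so the falloff functions and parameters here are illustrative, not the authors' formulation: skin vertices inside the influence zone are pulled toward the muscle's bony attachment, attenuated by angular and radial falloff.

```python
import math

# Simplified sketch of a Waters-style linear muscle. Falloff terms and
# parameters are illustrative; the paper uses a modified model.

def contract(vertices, attach, insert, k, influence_radius):
    """Pull 2D vertices toward `attach` along the muscle direction.

    attach: fixed end on the bone; insert: end embedded in the skin;
    k: contraction strength in [0, 1]."""
    moved = []
    ax, ay = attach
    mx, my = insert[0] - ax, insert[1] - ay
    m_len = math.hypot(mx, my)
    for (x, y) in vertices:
        dx, dy = x - ax, y - ay
        d = math.hypot(dx, dy)
        if d == 0 or d > influence_radius:
            moved.append((x, y))          # outside the influence zone
            continue
        # Angular falloff: vertices off the muscle axis move less.
        cos_a = max(0.0, (dx * mx + dy * my) / (d * m_len))
        # Radial falloff: displacement fades to zero at the zone border.
        falloff = math.cos(math.pi * d / (2.0 * influence_radius))
        f = k * cos_a * falloff
        moved.append((x - f * dx, y - f * dy))
    return moved

# Vertex on the muscle axis is pulled toward the attachment;
# a vertex outside the influence radius stays fixed.
moved = contract([(10.0, 0.0), (30.0, 0.0)],
                 attach=(0.0, 0.0), insert=(10.0, 0.0),
                 k=0.5, influence_radius=20.0)
```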

Synthesizing Faces of Animation Characters Using a 3D Model (3차원 모델을 사용한 애니메이션 캐릭터 얼굴의 합성)

  • Jang, Seok-Woo;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.17 no.8 / pp.31-40 / 2012
  • In this paper, we propose a method of synthesizing the faces of a user and an animation character using a 3D face model. The suggested method first receives two orthogonal 2D face images and extracts the major features of the face with a template snake. It then generates a user-customized 3D face model by adjusting a generalized face model with the extracted facial features and by mapping texture maps obtained from the two input images onto the 3D model. Finally, it generates a user-customized animation character by compositing the generated 3D model onto the animation character, reflecting the character's position, size, facial expression, and rotational information. Experimental results verify the performance of the suggested algorithm. We expect our method to be useful in various applications such as games and animated movies.
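The final compositing step, matching the character's position, size, and in-plane rotation, amounts to a similarity transform on the projected model. A hedged sketch, with all parameters invented for illustration:

```python
import math

# Hedged sketch of the compositing step: place projected face-model
# vertices onto the character by matching position, size, and in-plane
# rotation via a 2D similarity transform. Parameters are illustrative.

def place_face(vertices, scale, angle_deg, tx, ty):
    """Scale, rotate, then translate (x, y) vertex projections."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    out = []
    for (x, y) in vertices:
        rx = scale * (x * cos_a - y * sin_a)
        ry = scale * (x * sin_a + y * cos_a)
        out.append((rx + tx, ry + ty))
    return out

# Double the size, rotate 90 degrees, and move to the character's face.
placed = place_face([(1.0, 0.0)], scale=2.0, angle_deg=90, tx=5.0, ty=5.0)
```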

Face and Its Components Extraction of Animation Characters Based on Dominant Colors (주색상 기반의 애니메이션 캐릭터 얼굴과 구성요소 검출)

  • Jang, Seok-Woo;Shin, Hyun-Min;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.16 no.10 / pp.93-100 / 2011
  • The need for research on extracting information about the face and facial components of animation characters has been increasing, since these effectively express a character's emotion and personality. In this paper, we introduce a method to extract the face and facial components of animation characters by defining a mesh model adequate for characters and by using dominant colors. The suggested algorithm first generates a mesh model for animation characters and extracts dominant colors for the face and facial components by adapting the mesh model to the face of a model character. Then, using the dominant colors, we extract candidate areas of the face and facial components from input images and verify whether the extracted areas are an actual face or facial components by means of a color similarity measure. Experimental results show that our method can reliably detect the face and facial components of animation characters.
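The verification step can be sketched as a threshold on color distance: a candidate region is accepted when its mean color is close enough to the learned dominant color. The similarity measure and threshold below are illustrative, not the paper's exact choices:

```python
# Hedged sketch of dominant-color verification: keep candidate regions
# whose mean color lies within a Euclidean RGB distance of the learned
# dominant color. Colors and threshold are illustrative.

def color_distance(c1, c2):
    """Euclidean distance between two RGB triples."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def verify_candidates(candidates, dominant_color, max_dist=40.0):
    """candidates: list of (region_id, mean_rgb) pairs.
    Returns the region ids whose mean color passes the similarity test."""
    return [rid for rid, rgb in candidates
            if color_distance(rgb, dominant_color) <= max_dist]

# A skin-toned region passes; a dark hair region is rejected.
kept = verify_candidates(
    [("face", (205, 170, 150)), ("hair", (30, 25, 20))],
    dominant_color=(210, 180, 160))
```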

Comparing Learning Outcome of e-Learning with Face-to-Face Lecture of a Food Processing Technology Course in Korean Agricultural High School

  • PARK, Sung Youl;LEE, Hyeon-ah
    • Educational Technology International / v.8 no.2 / pp.53-71 / 2007
  • This study identified the effectiveness of e-learning by comparing learning outcomes of a conventional face-to-face lecture with selected e-learning methods. Two e-learning contents (animation-based and video-based) were developed based on the rapid prototyping model and loaded onto the learning management system (LMS), http://www.enaged.co.kr. Fifty-four Korean agricultural high school students were randomly assigned to three groups (face-to-face lecture, animation-based e-learning, and video-based e-learning). The students in the e-learning groups logged on to the LMS in the school computer lab and completed the e-learning; all students took a pretest and posttest before and after learning under the direction of the subject teacher. A one-way analysis of covariance was administered to verify whether there was any difference between the face-to-face lecture and e-learning in students' learning outcomes after controlling for the covariate, the pretest score. According to the results, no differences were identified between animation-based and video-based e-learning, nor between face-to-face learning and e-learning. The findings suggest that well-designed e-learning can be worthwhile even in agricultural education, which stresses hands-on experience and lab activities, if it is used appropriately in combination with conventional learning. Further research is suggested on preferences for e-learning content types and their relationship with learning outcomes.
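The core of the ANCOVA used here is the comparison of posttest means after adjusting for the pretest covariate. The sketch below computes only the covariate-adjusted group means (not the full F test), using made-up data: the pooled within-group slope of posttest on pretest is estimated, then each group mean is shifted toward the grand pretest mean.

```python
# Sketch of ANCOVA-adjusted group means: posttest means corrected for
# the pretest covariate via the pooled within-group slope.
# Data and group names below are made up, not from the study.

def adjusted_means(groups):
    """groups: dict name -> list of (pretest, posttest) pairs."""
    all_pairs = [p for pairs in groups.values() for p in pairs]
    grand_x = sum(x for x, _ in all_pairs) / len(all_pairs)
    # Pooled within-group regression slope of posttest on pretest.
    num = den = 0.0
    for pairs in groups.values():
        mx = sum(x for x, _ in pairs) / len(pairs)
        my = sum(y for _, y in pairs) / len(pairs)
        num += sum((x - mx) * (y - my) for x, y in pairs)
        den += sum((x - mx) ** 2 for x, _ in pairs)
    b = num / den
    # Adjusted mean = group posttest mean, shifted as if the group's
    # pretest mean equalled the grand pretest mean.
    return {name: (sum(y for _, y in pairs) / len(pairs)
                   - b * (sum(x for x, _ in pairs) / len(pairs) - grand_x))
            for name, pairs in groups.items()}

# Two toy groups whose raw posttest means differ only because their
# pretest levels differ; adjustment equalizes them.
result = adjusted_means({"lecture": [(1, 11), (3, 13)],
                         "elearning": [(5, 15), (7, 17)]})
```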

A Study on 3D Face Modelling based on Dynamic Muscle Model for Face Animation (얼굴 애니메이션을 위한 동적인 근육모델에 기반한 3차원 얼굴 모델링에 관한 연구)

  • 김형균;오무송
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.2 / pp.322-327 / 2003
  • This paper proposes a 3D face modelling technique based on a dynamic muscle model for efficient face animation. Facial muscles are composed of face lines connecting 256 points under the dynamic muscle model, and a wireframe is constructed from them. After building a standard model from the wireframe, texture mapping with front and side 2D pictures creates a 3D individual face model. Feature points of the front and side views are used for correct mapping: a face with texture coordinates is first built from the 2D coordinates of the front image and its feature points, and then completed using the 2D coordinates of the side image and its feature points.
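Assigning texture coordinates from two orthogonal views can be sketched as a projection choice per vertex: front-facing vertices take (u, v) from the front image's (x, y) plane, profile vertices from the side image's (z, y) plane. The image sizes and the front/side decision below are illustrative assumptions, not the paper's exact scheme:

```python
# Hedged sketch of texture-coordinate assignment from orthogonal views.
# Normalization by image size and the use_front flag are illustrative.

def texture_coords(vertex, front_size, side_size, use_front):
    """vertex: (x, y, z); sizes: (width, height) of each image.
    Returns (u, v) in [0, 1] from the chosen view's projection."""
    x, y, z = vertex
    if use_front:
        w, h = front_size
        return (x / w, y / h)   # front view projects onto the x-y plane
    w, h = side_size
    return (z / w, y / h)       # side view projects onto the z-y plane

# A vertex textured from a 128x128 front image.
uv = texture_coords((64.0, 32.0, 16.0), (128, 128), (64, 128), use_front=True)
```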

HEEAS: On the Implementation and an Animation Algorithm of an Emotional Expression (HEEAS: 감정표현 애니메이션 알고리즘과 구현에 관한 연구)

  • Kim Sang-Kil;Min Yong-Sik
    • The Journal of the Korea Contents Association / v.6 no.3 / pp.125-134 / 2006
  • The purpose of this paper is to construct HEEAS (Human Emotional Expression Animation System), an animation system that shows both face and body motion from input speech for four types of emotion: fear, dislike, surprise, and normal. To implement the system, we chose a Korean man in his twenties who could display the target emotions most accurately. We also focused on reducing the processing time of generating the animation when producing face and body emotion codes from the input voice signal: search time is reduced by applying a binary search to the face and body motion databases. In experiments, we obtained 99.9% accuracy of real emotional expression in the cartoon animation.
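The reported speed-up comes from binary search over the motion databases. A minimal sketch of that lookup, with emotion codes and clip names invented for illustration:

```python
import bisect

# Sketch of the O(log n) lookup: keep the face/body motion database
# sorted by emotion code and binary-search it instead of scanning.
# Codes and clip names below are made up.

def find_motion(db, code):
    """db: list of (emotion_code, motion_clip) sorted by code.
    Returns the clip for `code`, or None if absent."""
    codes = [c for c, _ in db]
    i = bisect.bisect_left(codes, code)
    if i < len(db) and db[i][0] == code:
        return db[i][1]
    return None

db = sorted([(3, "surprise_clip"), (1, "normal_clip"),
             (2, "fear_clip"), (4, "dislike_clip")])
clip = find_motion(db, 2)
```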
