• Title/Summary/Keyword: face animation

Synthesis of Expressive Talking Heads from Speech with Recurrent Neural Network (RNN을 이용한 Expressive Talking Head from Speech의 합성)

  • Sakurai, Ryuhei; Shimba, Taiki; Yamazoe, Hirotake; Lee, Joo-Ho
    • The Journal of Korea Robotics Society / v.13 no.1 / pp.16-25 / 2018
  • A talking head (TH) is a speaking-face animation generated from text and voice input. In this paper, we propose a method for generating a TH with facial expression and intonation from speech input alone. The problem of generating a TH from speech can be regarded as a regression problem from an acoustic feature sequence to a facial code sequence, a low-dimensional vector representation that can efficiently encode and decode a face image. This regression was modeled with a bidirectional RNN and trained on the SAVEE database of frontal utterance face videos. The proposed method can generate a TH with facial expression and intonation from acoustic features such as MFCCs, the dynamic (delta) elements of the MFCCs, energy, and F0. In the experiments, the configuration with BLSTM layers as the first and second layers of the bidirectional RNN predicted the facial codes best. For evaluation, a questionnaire survey was conducted with 62 people who watched TH animations generated by the proposed method and a previous method. As a result, 77% of the respondents answered that the TH generated by the proposed method matched the speech well.
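
As an illustration of the regression described above, a minimal PyTorch sketch follows. The layer widths, the 28-dimensional acoustic input (13 MFCCs, 13 deltas, energy, F0), and the 30-dimensional facial code are assumptions for illustration; the abstract only specifies that the two BLSTM layers come first.

```python
# Hypothetical sketch: a bidirectional LSTM regresses per-frame acoustic
# features to low-dimensional facial codes. All sizes are assumptions.
import torch
import torch.nn as nn

class SpeechToFaceCode(nn.Module):
    def __init__(self, acoustic_dim=28, face_code_dim=30, hidden=256):
        super().__init__()
        # Two stacked BLSTM layers, echoing the best configuration
        # reported in the paper (exact widths are not given there).
        self.blstm = nn.LSTM(acoustic_dim, hidden, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, face_code_dim)

    def forward(self, acoustic_seq):      # (batch, frames, acoustic_dim)
        h, _ = self.blstm(acoustic_seq)   # (batch, frames, 2 * hidden)
        return self.proj(h)               # (batch, frames, face_code_dim)

model = SpeechToFaceCode()
frames = torch.randn(1, 100, 28)          # 100 frames of acoustic features
face_codes = model(frames)                # predicted facial code sequence
```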

Class Design Applying Flipped Learning Combined with Project-Based Learning: Focusing on Digital Painting Tool for Class (플립러닝형 프로젝트 기반학습을 적용한 수업 설계: Digital Painting Tool 수업을 중심으로)

  • Sung, Rea; Kong, Hyunhee
    • Journal of Information Technology Applications and Management / v.29 no.1 / pp.29-45 / 2022
  • The era of the Fourth Industrial Revolution requires people to combine integrated thinking, critical thinking, sensitivity, and creativity. Teaching methods are therefore expected to follow this trend: the current teacher-led approach should move toward self-motivated learning built on programs in which students participate voluntarily. For this reason, this study proposes the FPBL instructional model, which combines project-based learning with flipped learning, based on an analysis of preceding research, and applies it to the design of a digital painting tool class. When the designed model was applied to the class, satisfaction, effectiveness, and interaction were all evaluated positively. Problems were also found, such as the limitations of project classes conducted in a non-face-to-face setting, a large amount of pre-class learning, and reduced concentration during class. Therefore, when the FPBL model is used in a non-face-to-face setting, it will be necessary to further strengthen the role of the instructor, provide lecture videos summarizing the core content, and improve concentration by encouraging active participation and enjoyment through various digital tools. The study is significant in confirming that the FPBL model can be applied not only in design education but also in other educational settings.

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB / v.9B no.5 / pp.563-570 / 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. Facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object was designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
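
The factorization step the abstract refers to can be sketched as follows. This is the generic rank-3 SVD factorization in the spirit of Tomasi-Kanade / Poelman-Kanade, not the paper's exact paraperspective formulation, and it omits the metric upgrade that resolves the affine ambiguity.

```python
# Illustrative sketch: recover affine motion and shape from a measurement
# matrix of tracked feature points via a rank-3 SVD factorization.
import numpy as np

def factorize(W):
    """W: (2F, P) matrix stacking x/y coordinates of P points over F frames."""
    # Register each coordinate row to the centroid of the tracked points.
    W_centered = W - W.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    # Keep the rank-3 part: motion (2F, 3) and shape (3, P), up to an
    # affine ambiguity that metric constraints would later resolve.
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S

W = np.random.rand(2 * 10, 23)   # e.g. 23 facial features over 10 frames
M, S = factorize(W)
```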

A Facial Animation System Using 3D Scanned Data (3D 스캔 데이터를 이용한 얼굴 애니메이션 시스템)

  • Gu, Bon-Gwan; Jung, Chul-Hee; Lee, Jae-Yun; Cho, Sun-Young; Lee, Myeong-Won
    • The KIPS Transactions:PartA / v.17A no.6 / pp.281-288 / 2010
  • In this paper, we describe the development of a system for generating a 3-dimensional human face from 3D scanned facial data and photo images, and for producing morphing animation. The system comprises a facial feature input tool, a 3-dimensional texture mapping interface, and a 3-dimensional facial morphing interface. The facial feature input tool supports texture mapping and morphing animation: facial morphing areas between two facial models are defined by interactively inputting facial feature points. Texture mapping is done first, by means of three photo images of a face model: a front image and two side images. The morphing interface then generates a morphing animation between corresponding areas of two texture-mapped facial models. This system allows users to interactively generate morphing animations between two facial models, without programming, using 3D scanned facial data and photo images.
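
The morphing animation between two texture-mapped models reduces, per frame, to interpolating corresponding geometry. A minimal sketch follows, assuming the two meshes have already been put into vertex-wise correspondence (which the feature-point input step is meant to establish); the vertex counts are placeholders.

```python
# Minimal sketch: linear morphing between two corresponding face meshes.
import numpy as np

def morph(src_vertices, dst_vertices, t):
    """Blend two (N, 3) vertex arrays; t=0 gives src, t=1 gives dst."""
    return (1.0 - t) * src_vertices + t * dst_vertices

src = np.random.rand(500, 3)             # vertices of the first face model
dst = np.random.rand(500, 3)             # vertices of the second face model
animation = [morph(src, dst, t) for t in np.linspace(0.0, 1.0, 30)]
```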

3D Emotional Avatar Creation and Animation using Facial Expression Recognition (표정 인식을 이용한 3D 감정 아바타 생성 및 애니메이션)

  • Cho, Taehoon; Jeong, Joong-Pill; Choi, Soo-Mi
    • Journal of Korea Multimedia Society / v.17 no.9 / pp.1076-1083 / 2014
  • We propose an emotional facial avatar that portrays the user's facial expressions with an emotional emphasis while achieving visual and behavioral realism. This is achieved by unifying automatic analysis of facial expressions with animation of realistic 3D faces that include details such as facial hair and hairstyles. To augment facial appearance according to the user's emotions, we use emotional templates that represent typical emotions in an artistic way and that can easily be combined with the skin texture of the 3D face at runtime. Hence, our interface gives the user vision-based control over the facial animation of the emotional avatar, easily changing its mood.
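
The runtime combination of an emotional template with the skin texture can be read as a per-pixel compositing operation. The sketch below uses straightforward alpha blending; the abstract does not state the exact operator, so treat the blend, the image sizes, and the "angry" template as assumptions.

```python
# Hypothetical sketch: alpha-blend an RGBA emotional template over the
# avatar's RGB skin texture at runtime.
import numpy as np

def apply_emotion(skin_rgb, template_rgba):
    """skin_rgb: (H, W, 3) floats in [0, 1]; template_rgba: (H, W, 4)."""
    alpha = template_rgba[..., 3:4]        # per-pixel template opacity
    return (1.0 - alpha) * skin_rgb + alpha * template_rgba[..., :3]

skin = np.full((256, 256, 3), 0.8)         # placeholder skin texture
angry = np.zeros((256, 256, 4))            # illustrative "angry" template:
angry[..., 0] = 1.0                        # red tint...
angry[..., 3] = 0.3                        # ...at 30% opacity
blended = apply_emotion(skin, angry)
```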

Recognition of Facial Expressions of Animation Characters Using Dominant Colors and Feature Points (주색상과 특징점을 이용한 애니메이션 캐릭터의 표정인식)

  • Jang, Seok-Woo; Kim, Gye-Young; Na, Hyun-Suk
    • The KIPS Transactions:PartB / v.18B no.6 / pp.375-384 / 2011
  • This paper suggests a method for recognizing the facial expressions of animation characters by means of dominant colors and feature points. The proposed method defines a simplified mesh model suited to animation characters and detects the face and facial components using dominant colors. It also extracts edge-based feature points for each facial component. It then classifies the feature points into corresponding AUs (action units) through a neural network, and finally recognizes character facial expressions with the suggested AU specification. Experimental results show that the suggested method can recognize the facial expressions of animation characters reliably.
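
The last two stages (AU classification and the AU specification lookup) might look like the sketch below. The feature dimensionality, network weights, and the AU-to-expression table are all illustrative assumptions, not the paper's specification.

```python
# Hypothetical sketch: a small neural network maps an edge-based feature
# vector to AU activations; an AU table then names the expression.
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, x @ W1 + b1)             # hidden layer (ReLU)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # per-AU scores in [0, 1]

EXPRESSIONS = {                                  # assumed AU specification
    frozenset({"AU6", "AU12"}): "happiness",
    frozenset({"AU1", "AU4", "AU15"}): "sadness",
    frozenset({"AU4", "AU5", "AU7"}): "anger",
}

def recognize(active_aus):
    return EXPRESSIONS.get(frozenset(active_aus), "neutral")

rng = np.random.default_rng(0)
x = rng.normal(size=16)                          # edge-based feature vector
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)
scores = mlp_forward(x, W1, b1, W2, b2)
active = {au for au, s in zip(["AU6", "AU12", "AU4"], scores) if s > 0.5}
print(recognize(active))
```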

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul; Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.2 no.2 / pp.120-133 / 2008
  • This paper presents a novel approach to facial motion tracking and facial expression cloning for creating realistic facial animation of a 3D avatar. Exact head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation, and we address both problems here. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. Dynamic head pose estimation can robustly estimate a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. By updating the template dynamically, the head pose can be recovered regardless of illumination variations and self-occlusion. In the facial expression synthesis phase, the variations of the major facial feature points in the face images are tracked using optical flow and retargeted to the 3D face model. At the same time, we exploit an RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
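
The RBF deformation of the local area around the tracked feature points can be sketched as standard scattered-data interpolation: solve for coefficients that reproduce the measured displacements at the feature points, then evaluate the interpolant at every mesh vertex. The Gaussian kernel and its width are assumptions; the abstract does not specify the basis.

```python
# Minimal sketch: Gaussian RBF interpolation of feature-point
# displacements over the face mesh.
import numpy as np

def rbf_deform(vertices, centers, displacements, sigma=0.05):
    """vertices: (N, 3); centers: (K, 3) feature points; displacements: (K, 3)."""
    phi = lambda d: np.exp(-(d ** 2) / (2.0 * sigma ** 2))
    # Solve for coefficients so each center moves exactly by its displacement.
    A = phi(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
    coeffs = np.linalg.solve(A + 1e-9 * np.eye(len(centers)), displacements)
    # Evaluate the interpolant at all mesh vertices.
    B = phi(np.linalg.norm(vertices[:, None] - centers[None, :], axis=-1))
    return vertices + B @ coeffs

verts = np.random.rand(1000, 3)          # face mesh vertices (placeholder)
feat = verts[:5].copy()                  # 5 tracked major feature points
delta = np.random.randn(5, 3) * 0.01     # displacements from optical flow
deformed = rbf_deform(verts, feat, delta)
```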

The Study of Face Model and Face Type (사상인 용모분석을 위한 얼굴표준 및 얼굴유형에 대한 연구현황)

  • Pyeon, Young-Beom; Kwak, Chang-Kyu; Yoo, Jung-Hee; Kim, Jong-Won; Kim, Kyu-Kon; Kho, Byung-Hee; Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine / v.18 no.2 / pp.25-33 / 2006
  • 1. Objectives: Recently, there have been attempts to extract the characteristics of the Sasangin face. Three-dimensional modeling is essential for this, so studies of standard face models and face types are necessary. 2. Methods: We reviewed domestic and international research on standard facial modeling and facial types. 3. Results and Conclusions: The Facial Definition Parameters form a very complex set of parameters defined by MPEG-4, comprising 84 feature points and 68 Facial Animation Parameters. Face types have been studied by dividing faces into male and female, Western and Asian, or the Sasangin types (Taeyangin, Taeumin, Soyangin, Soeumin).

A Study on Non-face-to-face Educational Methods which can be used in Practical Subject of Game Production (게임제작 실습 교과목에서 활용할 수 있는 비대면 교육방법 연구)

  • Park, Sunha
    • Journal of Korea Multimedia Society / v.24 no.1 / pp.125-133 / 2021
  • Due to COVID-19, the non-contact culture has affected society as a whole, and educational methods conducted offline have been greatly affected. In private education for university entrance, civil service examinations, and certification, online education has shown positive effects, while private classes and school classes that had been offered offline produced various problems, such as a decline in educational quality, as they coped with rapid change. Because of the nature of design classes, hands-on practice is important, and since interactive feedback between students and educators matters more than one-way delivery of knowledge when a class is conducted online, educators face a challenge in preparing such classes. This study deals with online education methods for practical university education, especially non-face-to-face methods for teaching game animation production. Based on this study, I propose an effective non-face-to-face educational method that satisfies students and increases their knowledge beyond what face-to-face classes provide.

A Study on the Expression of Philosophy Agenda through Animation Contents - Focusing on Korea's Animation film "Padak(2012)" - (애니메이션 콘텐츠를 통한 철학적 의제표현 연구 - 한국 애니메이션 영화 "파닥파닥(2012)"을 중심으로 -)

  • Kim, Ye Eun; Lee, Tae Hoon
    • Journal of Digital Convergence / v.18 no.8 / pp.391-399 / 2020
  • Even though the animation industry has grown since 2011, it is still limited by the public stereotype that animation can only address young audiences and is not a proper art genre for conveying a social or philosophical agenda. However, Padak (2012) describes the philosophical agendas of 'social class' and 'life and death' within a confined space by expressing the characteristics and background of fish in its own way, showing how animation can go beyond these limits. Straying from the traditional happy ending, it criticizes present social problems by showing that, despite their efforts, the fish cannot escape structural contradictions. The drawing technique used in the musical sequences expresses the characters' ideologies and attitudes, making viewers think about how we should behave in the face of life and death. Therefore, the purpose of this paper is to analyze director Lee Dae-hee's animation and present the genre expandability of animation.