• Title/Summary/Keyword: Facial Expression Animation

Speech Animation with Multilevel Control (다중 제어 레벨을 갖는 입모양 중심의 표정 생성)

  • Moon, Bo-Hee;Lee, Son-Ou;Wohn, Kwang-yun
    • Korean Journal of Cognitive Science / v.6 no.2 / pp.47-79 / 1995
  • Since the early days of computer graphics, facial animation has been applied to various fields, and it has since found several novel applications such as virtual reality (for representing virtual agents), teleconferencing, and man-machine interfaces. When facial animation is applied to a system with multiple participants connected via a network, it is hard to animate facial expressions as desired in real time because of the amount of information that must be exchanged to maintain efficient communication. This paper's major contribution is to adapt 'Level-of-Detail' to facial animation in order to solve this problem. Level-of-Detail has been studied in computer graphics to represent the appearance of complicated objects in an efficient and adaptive way, but until now no attempt had been made in the field of facial animation. In this paper, we present a systematic scheme that enables this kind of adaptive control using Level-of-Detail. The implemented system can generate speech-synchronized facial expressions from various types of user input such as text, voice, GUI, and head motion.
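
The Level-of-Detail idea above can be pictured as a budget-driven selection of facial animation parameters. The sketch below is a minimal illustration, not the paper's scheme; the parameter groups, their costs, and the bandwidth budgets are assumed for the example.

```python
# Minimal Level-of-Detail sketch: as the per-participant bandwidth budget
# shrinks, fewer facial animation parameter groups are transmitted, with the
# speech-relevant mouth parameters kept first.
FACIAL_PARAM_GROUPS = [
    ("lips", 10),            # needed for speech-synchronized mouth shapes
    ("jaw", 2),
    ("eyes_brows", 8),
    ("cheeks_forehead", 12),
]

def select_lod(budget_params: int):
    """Pick parameter groups, most speech-relevant first, within the budget."""
    chosen, used = [], 0
    for name, cost in FACIAL_PARAM_GROUPS:
        if used + cost <= budget_params:
            chosen.append(name)
            used += cost
    return chosen

for budget in (32, 20, 12):
    print(budget, select_lod(budget))
```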

Hierarchical Visualization of the Space of Facial Expressions (얼굴 표정공간의 계층적 가시화)

  • Kim Sung-Ho;Jung Moon-Ryul
    • Journal of KIISE: Computer Systems and Theory / v.31 no.12 / pp.726-734 / 2004
  • This paper presents a facial animation method that enables the user to select a sequence of facial frames from the facial expression space, whose level of detail the user can select hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. To represent the state of each expression, we use a distance matrix that records the distance between each pair of feature points on the face. The shortest trajectories are found by dynamic programming. The space of facial expressions is multidimensional; to navigate it, we visualize the space in 2D using multidimensional scaling (MDS). Because there are too many facial expressions to select from, the user has difficulty navigating the space, so we visualize it hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. At the top level, the system creates about 10 clusters from the space of 2,400 facial expressions; every time the level increases, the system doubles the number of clusters. The cluster centers are displayed on the 2D screen and are used as candidate key frames for key-frame animation. The user selects new key frames along the navigation path of the previous level and completes the key-frame specification at the maximum level. We let animators use the system to create example animations and evaluate the system based on the results.
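
As a rough illustration of the pipeline described in this abstract, the sketch below builds pairwise feature-point distance descriptors, projects them to 2D with MDS, and forms a cluster hierarchy whose size doubles per level. Library choices, data shapes, and the use of k-means in place of the paper's fuzzy clustering are assumptions for the example.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

def frame_descriptor(points):
    """Flatten the upper triangle of the pairwise feature-point distance matrix."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return d[np.triu_indices(len(points), k=1)]

# Stand-in for ~2,400 captured frames, each with 20 3D feature points.
frames = np.random.rand(2400, 20, 3)
descriptors = np.stack([frame_descriptor(f) for f in frames])

# Project the expression space onto a 2D screen for navigation
# (MDS is slow for thousands of frames; subsample if needed).
embedding = MDS(n_components=2).fit_transform(descriptors)

# Build the hierarchy: ~10 clusters at the top level, doubling per level.
# (k-means stands in for the paper's fuzzy clustering.)
hierarchy = {}
for level in range(3):
    k = 10 * 2 ** level
    km = KMeans(n_clusters=k, n_init=10).fit(embedding)
    hierarchy[level] = km.cluster_centers_   # candidate key frames per level
```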

A Study on Character's Emotional Appearance in Distinction Focused on 3D Animation "Inside Out" (3D 애니메이션 "인사이드 아웃" 분석을 통한 감성별 캐릭터 외형특징 연구)

  • Ahn, Duck-ki;Chung, Jean-Hun
    • Journal of Digital Convergence / v.15 no.2 / pp.361-368 / 2017
  • This study analyzes characters' distinctive appearance traits associated with emotional changes, focusing on the visual expression of psychology in character development in the 3D animation industry. To this end, the study examines the five emotion characters from Pixar's animation Inside Out to show how psychological traits shape a character's visual appearance. Building on previous research, the study analyzes visual representations of both emotional facial expression and emotional color expression using Paul Ekman's and Robert Plutchik's research on basic human emotions. The purpose of this study is to present visual guidelines for the appearance of emotion characters, drawn from a range of human expressions, for differentiated character development in animation production.

Synthesis of Expressive Talking Heads from Speech with Recurrent Neural Network (RNN을 이용한 Expressive Talking Head from Speech의 합성)

  • Sakurai, Ryuhei;Shimba, Taiki;Yamazoe, Hirotake;Lee, Joo-Ho
    • The Journal of Korea Robotics Society / v.13 no.1 / pp.16-25 / 2018
  • A talking head (TH) is an utterance face animation generated from text and voice input. In this paper, we propose a method for generating a TH with facial expression and intonation from speech input only. The problem of generating a TH from speech can be regarded as a regression problem from the acoustic feature sequence to the facial code sequence, a low-dimensional vector representation that can efficiently encode and decode a face image. This regression was modeled with a bidirectional RNN and trained on the SAVEE database of frontal utterance face animations. The proposed method is able to generate a TH with facial expression and intonation using acoustic features such as MFCCs, dynamic elements of MFCCs, energy, and F0. According to the experiments, a configuration with BLSTM layers as the first and second layers of the bidirectional RNN predicted the face code best. For evaluation, a questionnaire survey was conducted with 62 people who watched TH animations generated by the proposed method and a previous method. As a result, 77% of the respondents answered that the proposed method generated a TH that matches the speech well.
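
A minimal sketch of the regression framing in this abstract is shown below: a two-layer bidirectional LSTM maps per-frame acoustic features to a low-dimensional facial code sequence. The feature widths, hidden size, and training loop are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SpeechToFaceCode(nn.Module):
    def __init__(self, acoustic_dim=28, face_code_dim=30, hidden=128):
        super().__init__()
        self.blstm = nn.LSTM(acoustic_dim, hidden, num_layers=2,
                             bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, face_code_dim)

    def forward(self, acoustic_seq):           # (batch, time, acoustic_dim)
        h, _ = self.blstm(acoustic_seq)
        return self.proj(h)                    # (batch, time, face_code_dim)

model = SpeechToFaceCode()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# One illustrative training step on random stand-in data.
acoustic = torch.randn(8, 100, 28)             # 8 utterances, 100 frames each
face_codes = torch.randn(8, 100, 30)
loss = criterion(model(acoustic), face_codes)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```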

A Comic Facial Expression Using Cheeks and Jaws Movements for Intelligent Avatar Communications (지적 아바타 통신에서 볼과 턱 움직임을 사용한 코믹한 얼굴 표정)

  • ;;Yoshinao Aoki
    • Proceedings of the IEEK Conference / 2001.06c / pp.121-124 / 2001
  • In this paper, a method of generating facial gesture CG animation on different avatar models is presented. First, to edit emotional expressions efficiently, the comic expression is regenerated on different polygonal mesh models, where the movements of the cheeks and jaws are reproduced using numerical methods. Experimental results show that the method could be used for intelligent avatar communication between Korea and Japan.

The facial expression generation of vector graphic character using the simplified principle component vector (간소화된 주성분 벡터를 이용한 벡터 그래픽 캐릭터의 얼굴표정 생성)

  • Park, Tae-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.9 / pp.1547-1553 / 2008
  • This paper presents a method that generates various facial expressions for a vector graphic character by using simplified principal component vectors. First, we apply principal component analysis to nine facial expressions (astonished, delighted, etc.) redefined based on Russell's model of internal emotional states. From this, we find the principal component vectors that have the largest effect on the character's facial features and expressions, and we use them to generate the facial expressions. We also create natural intermediate characters and expressions by interpolating the weights applied to the character's features and expressions. The method saves considerable memory and creates intermediate expressions with little computation, so the performance of a character generation system can be improved considerably for web, mobile, and game services where real-time control is required.
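
The sketch below illustrates the general idea: PCA over a small set of expression vectors, with intermediate expressions produced by interpolating principal-component weights. The data layout, component count, and expression dimensionality are assumptions for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the nine predefined vector-graphic expressions.
expressions = np.random.rand(9, 80)            # (n_expressions, flattened control points)

pca = PCA(n_components=4)                      # keep only the strongest components
weights = pca.fit_transform(expressions)       # per-expression component weights

def blend(i, j, t):
    """Interpolate PC weights between expressions i and j (t in [0, 1])."""
    w = (1.0 - t) * weights[i] + t * weights[j]
    return pca.inverse_transform(w.reshape(1, -1))[0]   # intermediate expression

intermediate = blend(0, 1, 0.5)
```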

Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.117-124 / 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we define a representation of facial expression states based on facial motion data. By distributing the facial expressions in an intuitive space using the LLE algorithm, animations can be created and expressions controlled in real time from this expression space through a user interface. In this paper, approximately 2,400 facial expression frames are used to generate the expression space. By navigating the expression space projected onto a 2D plane and selecting a series of expressions from it, animations can be created and the expressions of 3D avatars controlled in real time. To distribute the approximately 2,400 facial expression frames in an intuitive space, the state of each expression must be represented; for this, we use a distance matrix that records the distances between pairs of feature points on the face. The LLE algorithm is then used to visualize this data in the 2D plane. Animators used the system's user interface to control facial expressions and create animations, and this paper evaluates the results of that experiment.
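
A minimal sketch of the projection step follows, assuming the per-frame distance descriptors are already computed and using scikit-learn's Locally Linear Embedding; the descriptor width and neighbor count are illustrative.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Stand-in for the ~2,400 per-frame pairwise feature-point distance descriptors.
descriptors = np.random.rand(2400, 190)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
expression_space_2d = lle.fit_transform(descriptors)    # (2400, 2) navigable points
```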

On the Implementation of a Facial Animation Using the Emotional Expression Techniques (FAES : 감성 표현 기법을 이용한 얼굴 애니메이션 구현)

  • Kim Sang-Kil;Min Yong-Sik
    • The Journal of the Korea Contents Association / v.5 no.2 / pp.147-155 / 2005
  • In this paper, we present FAES (a Facial Animation with Emotion and Speech), a system for speech-driven face animation with emotions. We animate cartoon faces not only from the input speech but also from emotions derived from the speech signal, and the system ensures smooth transitions and accurate representation in the animation. After collecting training data, we built a database and trained an SVM (Support Vector Machine) to recognize four categories of emotion: neutral, dislike, fear, and surprise, which makes speech-driven animation with emotions possible. The system was trained on young Korean speakers and focuses only on Korean emotional facial expressions. Experimental results show that the covered emotional range is expanded and that the accuracies of emotion recognition and continuous speech recognition increase by 7% and 5%, respectively, compared with the previous method.
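
The emotion-recognition step can be sketched as a standard SVM classifier over per-utterance acoustic feature vectors, as below; the feature extraction, feature width, and SVM settings here are stand-ins, not the paper's trained model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["neutral", "dislike", "fear", "surprise"]

# Hypothetical training data: one acoustic feature vector per utterance.
X_train = np.random.rand(200, 39)                    # e.g. MFCC-based statistics
y_train = np.random.randint(0, len(EMOTIONS), 200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)

utterance_features = np.random.rand(1, 39)
print(EMOTIONS[clf.predict(utterance_features)[0]])
```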

A Study on Lip Sync and Facial Expression Development in Low Polygon Character Animation (로우폴리곤 캐릭터 애니메이션에서 립싱크 및 표정 개발 연구)

  • Ji-Won Seo;Hyun-Soo Lee;Min-Ha Kim;Jung-Yi Kim
    • The Journal of the Convergence on Culture Technology / v.9 no.4 / pp.409-414 / 2023
  • We describe how to implement the character facial expressions and animations that play an important role in expressing emotion and personality in low-polygon character animation. With the development of the video industry, character facial expressions and mouth-shape lip-syncing in animation can achieve natural movement close to real life, but such expert-level techniques are difficult for non-experts to use. We therefore aim to provide a guide that lets low-budget low-polygon character animators and non-experts create natural mouth-shape lip-syncing using accessible, highly usable features. A total of eight mouth shapes were developed for lip-sync animation: 'ㅏ', 'ㅔ', 'ㅣ', 'ㅗ', 'ㅜ', 'ㅡ', 'ㅓ', and a mouth shape for labial consonants. For facial expression animation, a total of nine animations were produced by adding the frequently used states of interest, boredom, and pain to the six basic human emotions classified by Paul Ekman: surprise, fear, disgust, anger, happiness, and sadness. This study is meaningful in that it makes it easy to produce natural animation using features built into the modeling program, without complex technologies or programs.
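
The mouth-shape inventory above suggests a simple viseme lookup such as the sketch below; the shape names and the jamo-to-shape mapping are assumptions for illustration, not the paper's rig.

```python
# Map a single Korean jamo to one of the eight mouth shapes listed above.
LABIALS = set("ㅁㅂㅃㅍ")                  # labial consonants share one closed-mouth shape
VOWEL_SHAPES = {"ㅏ": "A", "ㅔ": "E", "ㅣ": "I", "ㅗ": "O",
                "ㅜ": "U", "ㅡ": "EU", "ㅓ": "EO"}

def mouth_shape(jamo: str) -> str:
    """Return the mouth-shape key to keyframe for a single jamo."""
    if jamo in LABIALS:
        return "M"                         # the labial-consonant mouth shape
    return VOWEL_SHAPES.get(jamo, "EU")    # fall back to a nearly closed shape

print([mouth_shape(j) for j in "ㅏㅁㅣㅜㅓ"])   # rough per-jamo shape sequence
```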

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.159-170 / 2009
  • Extracting expression data from face images captured in video is very important for online 3D face animation. Recently, there have been many studies on vision-based approaches that capture an actor's expression in video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and tracks a face and its expression data from real-time video input. Our system works in three steps: face detection, facial feature extraction, and feature tracking. For face detection, we detect skin pixels using a YCbCr skin color model and verify the face area with a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas based on the FAPs defined in MPEG-4. We then track the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can track expression data at about 8 fps.
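
The detection step can be sketched with OpenCV as below: YCbCr (YCrCb in OpenCV) skin segmentation followed by Haar-cascade verification. The threshold values and cascade file are common defaults used for illustration, not the paper's exact parameters.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame_bgr):
    # Skin-color segmentation in YCrCb space with rough Cr/Cb bounds.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin_only = cv2.bitwise_and(frame_bgr, frame_bgr, mask=skin_mask)

    # Verify the candidate skin region with a Haar-based face classifier.
    gray = cv2.cvtColor(skin_only, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

frame = np.zeros((480, 640, 3), dtype=np.uint8)     # stand-in for a video frame
print(detect_face(frame))
```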