• Title/Summary/Keyword: real-time facial animation

A Vision-based Approach for Facial Expression Cloning by Facial Motion Tracking

  • Chun, Jun-Chul;Kwon, Oryun
    • KSII Transactions on Internet and Information Systems (TIIS), v.2 no.2, pp.120-133, 2008
  • This paper presents a novel approach for facial motion tracking and facial expression cloning to create a realistic facial animation of a 3D avatar. Accurate head pose estimation and facial expression tracking are critical issues that must be solved when developing vision-based computer animation. In this paper, we deal with these two problems. The proposed approach consists of two phases: dynamic head pose estimation and facial expression cloning. The dynamic head pose estimation can robustly estimate a 3D head pose from input video images. Given an initial reference template of a face image and the corresponding 3D head pose, the full head motion is recovered by projecting a cylindrical head model onto the face image. It is possible to recover the head pose regardless of light variations and self-occlusion by updating the template dynamically. In the phase of synthesizing the facial expression, the variations of the major facial feature points of the face images are tracked by using optical flow and the variations are retargeted to the 3D face model. At the same time, we exploit the RBF (Radial Basis Function) to deform the local area of the face model around the major feature points. Consequently, facial expression synthesis is done by directly tracking the variations of the major feature points and indirectly estimating the variations of the regional feature points. The experiments show that the proposed vision-based facial expression cloning method automatically estimates the 3D head pose and produces realistic 3D facial expressions in real time.
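
The feature-point retargeting and RBF-based local deformation described above can be illustrated with a minimal sketch: the tracked displacements of the major feature points are interpolated over nearby mesh vertices with radial basis functions. This is a rough illustration under assumptions (Gaussian kernel, hand-picked kernel width, toy data), not the authors' implementation.

```python
import numpy as np

def rbf_deform(vertices, controls, displacements, sigma=0.05):
    """Deform mesh vertices from control-point displacements via Gaussian RBFs.

    vertices      : (V, 3) rest positions of the face-model vertices
    controls      : (C, 3) rest positions of the tracked feature points
    displacements : (C, 3) measured displacements of those feature points
    sigma         : kernel width (assumed value; tune to the mesh scale)
    """
    phi = lambda r: np.exp(-(r / sigma) ** 2)  # Gaussian kernel (an assumption)

    # Solve for RBF weights so the deformation interpolates the control points exactly.
    dists_cc = np.linalg.norm(controls[:, None] - controls[None, :], axis=-1)
    weights = np.linalg.solve(phi(dists_cc), displacements)          # (C, 3)

    # Apply the interpolated displacement field to every vertex.
    dists_vc = np.linalg.norm(vertices[:, None] - controls[None, :], axis=-1)
    return vertices + phi(dists_vc) @ weights

# Toy usage: three tracked feature points pull nearby vertices of a small patch.
verts = np.random.rand(100, 3) * 0.1
ctrls = verts[:3]
disp = np.array([[0.0, 0.01, 0.0], [0.0, 0.02, 0.0], [0.0, 0.0, 0.01]])
deformed = rbf_deform(verts, ctrls, disp)
```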

Interactive Realtime Facial Animation with Motion Data (모션 데이터를 사용한 대화식 실시간 얼굴 애니메이션)

  • 김성호
    • Journal of the Korea Computer Industry Society, v.4 no.4, pp.569-578, 2003
  • This paper presents a method in which the user produces real-time facial animation by navigating a space of facial expressions built from a large number of captured expressions. The core of the method is how to define the distance between facial expressions, how to use that distance to distribute them in a suitable intuitive space, and a user interface for generating real-time facial expression animation in this space. We created the search space from about 2,400 captured facial expression frames. When the user navigates freely through the space, the facial expressions located on the path are displayed in sequence. To distribute the roughly 2,400 captured facial expressions visually in the space, we need to calculate the distance between frames. We use Floyd's algorithm to obtain the all-pairs shortest paths between frames and derive the manifold distance from them. The frames are then distributed in a 2D intuitive space by applying multi-dimensional scaling to the manifold distances, preserving the original distances between facial expression frames as much as possible. A major advantage of the presented method is that the user can navigate freely and is not limited within the intuitive space when generating facial expression animation, because there are always facial expression frames along the navigated path. It is also efficient in that, through an easy-to-use interface, the user can check the real-time results and regenerate the facial expression animation they want. (A rough sketch of the distance-and-embedding pipeline appears after this entry.)

  • PDF
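
A rough sketch of the distance pipeline described in this entry, under assumptions not stated in the abstract (a per-frame feature vector, a k-nearest-neighbour graph, and a connected graph): Euclidean distances between neighbouring frames, Floyd's algorithm for all-pairs shortest (manifold) paths, and classical multi-dimensional scaling down to a 2D intuitive space.

```python
import numpy as np
from scipy.spatial.distance import cdist

def expression_space_2d(frames, k=8):
    """Embed facial-expression frames into a 2D navigation space.

    frames : (N, D) one feature vector per captured expression frame
    k      : neighbourhood size for the manifold graph (assumed value)
    """
    n = len(frames)
    eucl = cdist(frames, frames)

    # Keep only edges to the k nearest neighbours; everything else starts "infinite".
    graph = np.full((n, n), np.inf)
    np.fill_diagonal(graph, 0.0)
    for i in range(n):
        nbrs = np.argsort(eucl[i])[1:k + 1]
        graph[i, nbrs] = eucl[i, nbrs]
        graph[nbrs, i] = eucl[i, nbrs]

    # Floyd's algorithm: all-pairs shortest paths over the neighbourhood graph
    # give the manifold distances (assumes the graph is connected).
    for m in range(n):
        graph = np.minimum(graph, graph[:, m:m + 1] + graph[m:m + 1, :])

    # Classical MDS: double-centre the squared distances, keep the top two eigenvectors.
    d2 = graph ** 2
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ d2 @ j
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:2]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Toy usage with random stand-in feature vectors instead of captured frames.
coords_2d = expression_space_2d(np.random.rand(300, 30))
```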

The facial expression generation of vector graphic character using the simplified principle component vector (간소화된 주성분 벡터를 이용한 벡터 그래픽 캐릭터의 얼굴표정 생성)

  • Park, Tae-Hee
    • Journal of the Korea Institute of Information and Communication Engineering, v.12 no.9, pp.1547-1553, 2008
  • This paper presents a method that generates various facial expressions of a vector graphic character by using simplified principal component vectors. First, we apply principal component analysis to nine facial expressions (astonished, delighted, etc.) redefined based on Russell's model of internal emotional states. From this, we find the principal component vectors that have the biggest effect on the character's facial features and expressions and use them to generate facial expressions. We also create natural intermediate characters and expressions by interpolating the weighting values applied to the character's features and expressions. The method saves considerable memory space and creates intermediate expressions with little computation. Hence the performance of a character generation system can be considerably improved for web, mobile services, and games where real-time control is required.
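
The principal-component idea can be sketched as follows: fit PCA to flattened control-point vectors of the basis expressions, keep the few components with the largest effect, and interpolate weights between two expressions in that reduced space. The data layout, component count, and toy data are assumptions, not the paper's parameters.

```python
import numpy as np

def fit_expression_basis(expressions, n_components=4):
    """PCA over the basis expressions (each a flattened vector of control points)."""
    mean = expressions.mean(axis=0)
    centered = expressions - mean
    # SVD gives the principal component vectors sorted by explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                       # (n_components, D)
    weights = centered @ basis.T                    # each expression in reduced space
    return mean, basis, weights

def blend(mean, basis, weights, i, j, t):
    """Intermediate expression between basis expressions i and j (0 <= t <= 1)."""
    w = (1.0 - t) * weights[i] + t * weights[j]     # interpolate in PC space
    return mean + w @ basis                         # back to control-point space

# Toy usage with nine random "expressions" of 30 control-point coordinates each.
exprs = np.random.rand(9, 30)
mean, basis, weights = fit_expression_basis(exprs)
halfway = blend(mean, basis, weights, 0, 1, 0.5)
```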

A 3D Audio-Visual Animated Agent for Expressive Conversational Question Answering

  • Martin, J.C.;Jacquemin, C.;Pointal, L.;Katz, B.
    • 한국정보컨버전스학회:학술대회논문집, 2008.06a, pp.53-56, 2008
  • This paper reports on the ACQA (Animated agent for Conversational Question Answering) project conducted at LIMSI. The aim is to design an expressive animated conversational agent (ACA) for conducting research along two main lines: 1) perceptual experiments (e.g. perception of expressivity and 3D movements in both the audio and visual channels); 2) design of human-computer interfaces requiring head models at different resolutions and the integration of the talking head in virtual scenes. The target application of this expressive ACA is a real-time, speech-based question and answer system developed at LIMSI (RITEL). The architecture of the system is based on distributed modules exchanging messages through a network protocol. The main components of the system are: RITEL, a question and answer system searching raw text, which produces a text (the answer) and attitudinal information; the attitudinal information is then processed to deliver expressive tags; and the text is converted into phoneme, viseme, and prosodic descriptions. Audio speech is generated by the LIMSI selection-concatenation text-to-speech engine. Visual speech uses MPEG-4 keypoint-based animation and is rendered in real time by Virtual Choreographer (VirChor), a GPU-based 3D engine. Finally, visual and audio speech is played in a 3D audio and visual scene. The project also puts considerable effort into realistic visual and audio 3D rendering: a new model of phoneme-dependent human radiation patterns is included in the speech synthesis system, so that the ACA can move in the virtual scene with realistic 3D visual and audio rendering. (A hypothetical sketch of the kind of message such modules might exchange follows this entry.)

  • PDF
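
The abstract describes the architecture only at a high level (distributed modules exchanging messages over a network protocol), so the following is a purely hypothetical sketch of the kind of payload the question-answering module might hand to the speech and animation modules. The field names, the `AnswerMessage` type, and the JSON transport are illustrative assumptions, not the RITEL/VirChor protocol.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AnswerMessage:
    """Hypothetical payload passed from the QA module toward the talking head."""
    text: str                                             # the answer produced by the QA module
    attitude: str                                         # attitudinal information, e.g. "confident"
    expressive_tags: list = field(default_factory=list)   # derived expressivity markers
    phonemes: list = field(default_factory=list)          # filled in by the TTS front-end
    visemes: list = field(default_factory=list)           # filled in for MPEG-4 keypoint animation

msg = AnswerMessage(text="The Eiffel Tower is 330 metres tall.", attitude="confident")
msg.expressive_tags = ["emphasis"]
payload = json.dumps(asdict(msg))   # what one module would put on the wire in this sketch
```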

Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association, v.7 no.2, pp.117-124, 2007
  • This paper describes a methodology that enables animators to create facial expression animations and to control facial expressions in real time by reusing motion capture data. To achieve this, we define a facial expression state representation that expresses facial states based on facial motion data. In addition, by distributing facial expressions into an intuitive space using the LLE algorithm, it is possible to create animations or control expressions in real time from the facial expression space through a user interface. In this paper, approximately 2,400 facial expression frames are used to generate the facial expression space. By navigating the facial expression space projected onto a 2-dimensional plane, animations can be created or the expressions of 3-dimensional avatars controlled in real time by selecting a series of expressions from the space. In order to distribute the approximately 2,400 facial expression data points in the intuitive space, the state of each expression frame must be represented; for this, a distance matrix that holds the distances between pairs of feature points on the face is used. To distribute the data, the LLE algorithm is used for visualization in the 2-dimensional plane. Animators can control facial expressions or create animations through the system's user interface. The paper evaluates the results of the experiments.
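
A brief sketch of the projection step described above, using scikit-learn's LocallyLinearEmbedding: each frame is represented by the pairwise distances between its feature points, and the resulting state vectors are embedded in a 2D plane. The neighbour count and the use of the upper-triangular distance entries as the per-frame state are assumptions.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def frame_state_vectors(feature_points):
    """Describe each frame by the pairwise distances between its facial feature points.

    feature_points : (N_frames, P, 3) tracked feature-point positions per frame
    returns        : (N_frames, P*(P-1)/2) upper-triangular distance-matrix entries
    """
    _, p, _ = feature_points.shape
    iu = np.triu_indices(p, k=1)
    dists = np.linalg.norm(
        feature_points[:, :, None, :] - feature_points[:, None, :, :], axis=-1)
    return dists[:, iu[0], iu[1]]

# ~2,400 frames of 20 feature points each (random stand-in data for the sketch).
frames = np.random.rand(2400, 20, 3)
states = frame_state_vectors(frames)

# Project the expression states onto a 2D plane for interactive navigation.
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=12)  # neighbour count assumed
space_2d = lle.fit_transform(states)
```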

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services, v.7 no.2, pp.23-35, 2006
  • This work presents a novel method which extracts the facial region and facial features from motion pictures automatically and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame of a motion picture, a new nonparametric skin color model is proposed rather than a parametric skin color model. Conventionally used parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness to varying lighting conditions and thus require additional work to extract the exact facial region from face images. To resolve this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted by using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. The experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model. (A rough sketch of the linear skin-chrominance model appears after this entry.)

  • PDF
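
The linear skin-chrominance idea might be sketched as follows. The paper's exact Hue-Tint formulation is not reproduced here, so the sketch substitutes OpenCV's HSV hue and a normalized-red channel as a tint stand-in, fits the linear model to labelled skin pixels, and keeps pixels within a band around the fitted line; these substitutions and the band width are assumptions.

```python
import cv2
import numpy as np

def hue_tint(bgr_image):
    """Hue and a tint-like normalized-red channel for every pixel (sketch definitions)."""
    hue = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)[..., 0].astype(np.float32)
    bgr = bgr_image.astype(np.float32) + 1e-6
    tint = 100.0 * bgr[..., 2] / bgr.sum(axis=-1)   # normalized red as a tint stand-in
    return hue, tint

def fit_skin_line(skin_samples_bgr):
    """Fit the linear hue-tint skin model from a patch of labelled skin pixels."""
    hue, tint = hue_tint(skin_samples_bgr)
    a, b = np.polyfit(tint.ravel(), hue.ravel(), deg=1)
    return a, b

def skin_mask(bgr_image, a, b, band=10.0):
    """Pixels whose hue lies within `band` of the fitted line are labelled skin."""
    hue, tint = hue_tint(bgr_image)
    return (np.abs(hue - (a * tint + b)) < band).astype(np.uint8) * 255

# Usage: a, b = fit_skin_line(labelled_skin_patch); mask = skin_mask(frame, a, b)
# The largest connected component of the mask would then give the face region.
```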

Interactive Animation by Action Recognition (동작 인식을 통한 인터랙티브 애니메이션)

  • Hwang, Ji-Yeon;Lim, Yang-Mi;Park, Jin-Wan;Jahng, Surng-Gahb
    • The Journal of the Korea Contents Association, v.6 no.12, pp.269-277, 2006
  • In this paper, we propose an interactive system that generates emotional expressions from arm gestures. By extracting relevant features from key frames, we can infer emotions from the gestures. Real-time animation requires very high frame rates, so we process the facial emotion expression in a 3D application to minimize animation time, and we propose a method for matching frames to actions. Because the system matches image sequences of the participants' exaggerated arm gestures, participants feel that they are communicating directly with the portraits.

  • PDF

A Generation Methodology of Facial Expressions for Avatar Communications (아바타 통신에서의 얼굴 표정의 생성 방법)

  • Kim Jin-Yong;Yoo Jae-Hwi
    • Journal of the Korea Society of Computer and Information, v.10 no.3 s.35, pp.55-64, 2005
  • The avatar can be used as an auxiliary means of text and image communication in cyberspace. An intelligent communication method can also be utilized to achieve real-time communication, where intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real or compressed pictures. In this paper, to complement arm and leg gestures, a method of generating facial expressions that can represent the sender's emotions is provided. Facial expressions can be represented by Action Units (AUs); we suggest a methodology for finding appropriate AUs in avatar models that have various shapes and structures. To maximize the efficiency of emotional expression, a comic-style facial model having only eyebrows, eyes, nose, and mouth is employed. The generation of facial emotion animation with these parameters is also investigated. (An illustrative AU-to-expression sketch appears after this entry.)

  • PDF
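
An illustrative sketch of driving a minimal comic-style face from Action Units: each emotion maps to a combination of FACS-style AUs, and each AU contributes a small displacement to the eyebrow, eye, and mouth control points. The emotion-to-AU weights and the per-AU displacements are illustrative values, not the paper's parameters.

```python
# Illustrative mapping from emotions to FACS-style action units (weights assumed).
EMOTION_TO_AUS = {
    "happiness": {6: 1.0, 12: 1.0},           # cheek raiser, lip corner puller
    "surprise":  {1: 1.0, 2: 1.0, 26: 0.7},   # brow raisers, jaw drop
    "sadness":   {1: 0.6, 4: 0.8, 15: 1.0},   # inner brow raiser, brow lowerer, lip depressor
}

# Hypothetical displacement of each comic-face control point per unit of AU activation.
AU_DISPLACEMENTS = {
    1:  {"brow_l": (0.0, 0.02), "brow_r": (0.0, 0.02)},
    2:  {"brow_l": (-0.01, 0.02), "brow_r": (0.01, 0.02)},
    4:  {"brow_l": (0.0, -0.02), "brow_r": (0.0, -0.02)},
    6:  {"eye_l": (0.0, -0.005), "eye_r": (0.0, -0.005)},
    12: {"mouth_l": (-0.01, 0.01), "mouth_r": (0.01, 0.01)},
    15: {"mouth_l": (0.0, -0.01), "mouth_r": (0.0, -0.01)},
    26: {"mouth_c": (0.0, -0.03)},
}

def expression_offsets(emotion, intensity=1.0):
    """Accumulate 2D control-point offsets for an emotion from its action units."""
    offsets = {}
    for au, weight in EMOTION_TO_AUS[emotion].items():
        for point, (dx, dy) in AU_DISPLACEMENTS.get(au, {}).items():
            ox, oy = offsets.get(point, (0.0, 0.0))
            offsets[point] = (ox + intensity * weight * dx,
                              oy + intensity * weight * dy)
    return offsets

print(expression_offsets("happiness", intensity=0.8))
```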

Real-Time Face Extraction using Color Information based Region Segment and Symmetry Technique (실시간 얼굴 특징 점 추출을 위한 색 정보 기반의 영역분할 및 영역 대칭 기법)

  • 최승혁;김재경;박준;최윤철
    • Proceedings of the Korean Information Science Society Conference, 2004.10b, pp.721-723, 2004
  • Recently, as the use of avatars in virtual environments has increased rapidly, research on avatar animation has been actively pursued. In particular, natural, human-like facial animation of an avatar gives the user a sense of life-likeness and believability, so the avatar can serve as a more familiar interface. Techniques for extracting facial feature points to generate such facial animation have been studied continuously. However, research so far has been insufficient on optimized algorithms that generate motion from a human face in real time, apply it directly to a 3D face model, and build a motion library. This paper proposes a technique for applying animation and building a library through real-time feature point recognition from a real human face. In the proposed technique, color information is processed to extract the facial region for fast and accurate feature point extraction, the region is segmented to extract the required feature points, and, for natural motion generation, a recovery algorithm using symmetric points is developed for cases where errors occur. Using this color-information-based region segmentation and region symmetry technique, a seamless and natural facial motion library is generated and applied in real time. (A small sketch of the symmetry-based recovery idea appears after this entry.)

  • PDF
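
The symmetry-based recovery idea can be sketched simply: when a feature point on one side of the face fails to track, estimate it by reflecting its counterpart across the face's vertical symmetry axis. The landmark names and the way the axis is estimated (midpoint of the two eye centres) are assumptions for illustration.

```python
# Hypothetical left/right landmark pairing for a tracked face (names are assumptions).
SYMMETRIC_PAIRS = {
    "eye_outer_l": "eye_outer_r", "eye_outer_r": "eye_outer_l",
    "mouth_corner_l": "mouth_corner_r", "mouth_corner_r": "mouth_corner_l",
    "brow_l": "brow_r", "brow_r": "brow_l",
}

def recover_lost_points(points):
    """Fill in landmarks that failed to track (value None) from their mirror twins.

    points : dict name -> (x, y) or None; must contain both eye centres so the
             vertical symmetry axis of the face can be estimated.
    """
    axis_x = (points["eye_center_l"][0] + points["eye_center_r"][0]) / 2.0
    recovered = dict(points)
    for name, value in points.items():
        twin = SYMMETRIC_PAIRS.get(name)
        if value is None and twin and points.get(twin):
            mx, my = points[twin]
            recovered[name] = (2.0 * axis_x - mx, my)   # reflect across the symmetry axis
    return recovered

# Usage: the left mouth corner was lost in this frame and is rebuilt from the right one.
frame = {
    "eye_center_l": (120.0, 140.0), "eye_center_r": (180.0, 140.0),
    "mouth_corner_l": None, "mouth_corner_r": (175.0, 210.0),
}
print(recover_lost_points(frame))
```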

A Mapping Algorithm for Real Time Animation Based on Facial Features (얼굴 구성 정보 기반의 실시간 애니메이션을 위한 매핑 알고리즘)

  • Yi, Jung-Hoon;Lee, Chan;Rhee, Phill-Kyu
    • Proceedings of the Korea Information Processing Society Conference, 2000.10b, pp.919-922, 2000
  • This paper proposes real-time vision-based facial animation that can be used as a general-purpose virtual interface. To this end, facial components are extracted in real time and quantified. To map the quantified values naturally onto a 3D model, a mapping method based on a matching function is proposed. In general, 3D facial animation is performed around a base model for a specific user only, but in this paper 3D animation is performed for arbitrary general users. We experimented with 3D facial animation driven by facial component extraction for multiple users, and the results showed satisfactory performance for the facial animation of multiple users.

  • PDF