• Title/Summary/Keyword: 3D Facial Animation

Multiresolution 3D Facial Model Compression (다해상도 3D 얼굴 모델의 압축)

  • 박동희;이종석;이영식;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2002.05a / pp.602-607 / 2002
  • In this paper, we propose an approach to efficiently compress and transmit multiresolution 3D facial models for multimedia and very low bit rate applications. A personal facial model is obtained with a 3D laser digitizer and then re-quantized at several resolutions according to the scope of the application, such as animation, video games, or video conferencing. By deforming 2D templates to match and re-quantize the 3D digitized facial model, we obtain its compressed model. In the present study, we create hierarchical 2D facial wireframe templates that are adapted to facial feature points using the proposed piecewise chainlet affine transformation (PACT) method. After re-quantization, the 3D digitized model is reduced significantly without perceptual loss. Moreover, the proposed multiresolution facial models, organized in a hierarchical data structure, lend themselves to progressive transmission and display across the Internet.
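
The PACT method itself is not spelled out in the abstract; as a minimal sketch of the underlying re-quantization idea, the following assumes the model is simply an (N, 3) array of vertex coordinates snapped to coarser or finer uniform grids depending on the target application:

```python
import numpy as np

def requantize(vertices: np.ndarray, bits: int) -> np.ndarray:
    """Uniformly re-quantize vertex coordinates to the given bit depth.

    vertices: (N, 3) float array of mesh coordinates.
    Returns coordinates snapped to a 2**bits grid over the bounding box.
    """
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    levels = (1 << bits) - 1
    # Map to [0, levels] and round to integers (the compact representation)...
    q = np.round((vertices - lo) / (hi - lo) * levels)
    # ...then map back to model space for display.
    return lo + q / levels * (hi - lo)

# One captured model, several resolutions for different applications.
model = np.random.rand(1000, 3).astype(np.float32)
for application, bits in [("video conference", 6), ("video game", 8), ("animation", 10)]:
    approx = requantize(model, bits)
    err = np.abs(approx - model).max()
    print(f"{application}: {bits} bits/coord, max error {err:.4f}")
```

Fewer bits per coordinate mean a smaller transmitted model; a hierarchy of bit depths like this is what makes progressive transmission possible.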

Model-Independent Facial Animation Tool (모델 독립적 얼굴 표정 애니메이션 도구)

  • 이지형;김상원;박찬종
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1999.11a / pp.193-196 / 1999
  • Generating human facial expressions is one of the classic topics in computer graphics. Although much related research exists, most studies provide facial expression animation only for a specific model, because facial expressions require facial muscle information and other auxiliary data, and such data are tied to a particular 3D face model. This paper proposes a tool that lets the user define muscles on a generic 3D face model and edit the other required information, enabling expression animation on a variety of 3D face models.
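
The abstract does not detail the muscle model used; a common choice for such tools is a Waters-style linear muscle, sketched below under the assumption that a muscle is given by a skin attachment point, a bone attachment point, and a radius of influence (all hypothetical parameters):

```python
import numpy as np

def apply_linear_muscle(verts, head, tail, contraction, radius):
    """Pull vertices near the muscle head toward the bone attachment,
    a simplified Waters-style linear muscle.

    verts: (N, 3) vertex positions of any face mesh.
    head, tail: 3-vectors; skin attachment and bone attachment.
    contraction: 0..1, how strongly the muscle contracts.
    radius: influence falloff distance around the head point.
    """
    head, tail = np.asarray(head, float), np.asarray(tail, float)
    direction = tail - head
    # Falloff: vertices near the head move most, fading to zero at `radius`.
    dist = np.linalg.norm(verts - head, axis=1)
    weight = np.clip(1.0 - dist / radius, 0.0, 1.0)
    return verts + contraction * weight[:, None] * direction
```

Because the muscle definitions are stored separately from the mesh, the same contraction parameters can be re-applied to any face model they are fitted to, which is the model-independence idea the paper describes.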

The Multi-marker Tracking for Facial Animation (Facial Animation을 위한 다중 마커의 추적)

  • 이문희;김철기;김경석
    • Proceedings of the Korea Multimedia Society Conference / 2001.06a / pp.553-557 / 2001
  • Animating facial expressions is recognized as one of the most difficult areas of computer animation because of the complexity of facial structure and the subtlety of facial surface movements. Recently, motion capture systems have been used in 3D animation, film special effects, and game production to measure actual human motion and facial expressions numerically and apply them directly to animation, dramatically reducing production time, labor, and cost. However, existing motion capture systems are expensive because they use high-speed cameras, and they also suffer from several motion-tracking problems. This paper proposes an economical and efficient facial motion tracking technique, suitable for a facial animation motion capture system, that uses ordinary low-cost cameras together with a neural network and image processing techniques.
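
Neither the neural network nor the image processing stage is specified in the abstract; the sketch below shows only a conventional image processing half, detecting colored face markers with OpenCV under the assumption that the markers are bright green:

```python
import cv2
import numpy as np

def detect_markers(frame_bgr, lower_hsv=(40, 80, 80), upper_hsv=(80, 255, 255)):
    """Find centroids of brightly colored (here: green) markers in one frame.

    Returns a list of (x, y) marker centers, largest blobs first.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        m = cv2.moments(c)
        if m["m00"] > 0:  # skip degenerate contours
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers
```

Per-frame centroids like these would then feed whatever correspondence or learning stage resolves which marker is which across frames.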

Putting Your Best Face Forward: Development of a Database for Digitized Human Facial Expression Animation

  • Lee, Ning-Sung;Alia Reid Zhang Yu;Edmond C. Prakash;Tony K.Y Chan;Edmund M-K. Lai
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.153.6-153 / 2001
  • 3-Dimensional (3D) digitization of the human body is a relatively new technology, with present uses such as radiotherapy, identification systems, and commercial applications, as well as potential future applications. In this paper, we analyzed and experimented to determine the easiest and most efficient method that would give us the most accurate results. We also constructed a database of realistic expressions and high-quality human heads. We scanned people's heads and facial expressions in 3D using a Minolta Vivid 700 scanner, then edited the resulting models on a Silicon Graphics workstation. Research was done into the present and potential uses of 3D digitized models of the human head, and we develop ideas for ...

Emotion Based Gesture Animation Generation Mobile System (감정 기반 모바일 손제스쳐 애니메이션 제작 시스템)

  • Lee, Jung-Suk;Byun, Hae-Won
    • The HCI Society of Korea: Conference Proceedings / 2009.02a / pp.129-134 / 2009
  • Recently, the percentage of people who use SMS services has been increasing. However, it is difficult to express one's complicated emotions with the text and emoticons of existing SMS services. This paper focuses on that point and uses character animation to convey emotion and nuance accurately and engagingly. It proposes an emotion-based gesture animation generation system that uses a character's facial expressions and gestures to deliver emotion more vividly and clearly than speech alone. Michel [1] analyzed interview videos of a person whose gesturing style was to be animated and proposed a gesture generation graph for stylized gesture animation. Extending that work, we analyze and abstract the emotional gestures of Disney animation characters and model these gestures in 3D. To express a person's emotions, we propose an emotion gesture generation graph that reflects an emotion flow graph, representing the flow of emotion probabilistically. We investigated user reactions to assess the suitability of the proposed system.
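
The emotion gesture generation graph is described only at a high level; a minimal sketch, assuming a hypothetical hand-built transition table and gesture pool (stand-ins for the paper's Disney-derived clips), is a probabilistic walk over emotions that emits one gesture per step:

```python
import random

# Hypothetical emotion flow graph: transition probabilities between emotions.
EMOTION_FLOW = {
    "neutral": {"joy": 0.5, "sadness": 0.2, "neutral": 0.3},
    "joy":     {"joy": 0.6, "neutral": 0.4},
    "sadness": {"sadness": 0.5, "neutral": 0.5},
}
# Hypothetical pool of modeled 3D gesture clips per emotion.
GESTURES = {
    "neutral": ["idle_sway"],
    "joy": ["arms_up", "clap"],
    "sadness": ["head_drop", "slump"],
}

def generate_gesture_sequence(start="neutral", length=5):
    """Walk the emotion flow graph probabilistically, emitting one gesture
    clip per visited emotion."""
    emotion, sequence = start, []
    for _ in range(length):
        sequence.append((emotion, random.choice(GESTURES[emotion])))
        transitions = EMOTION_FLOW[emotion]
        emotion = random.choices(list(transitions),
                                 weights=list(transitions.values()))[0]
    return sequence

print(generate_gesture_sequence())
```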

Interactive Animation by Action Recognition (동작 인식을 통한 인터랙티브 애니메이션)

  • Hwang, Ji-Yeon;Lim, Yang-Mi;Park, Jin-Wan;Jahng, Surng-Gahb
    • The Journal of the Korea Contents Association / v.6 no.12 / pp.269-277 / 2006
  • In this paper, we propose an interactive system that generates emotional expressions from arm gestures. By extracting relevant features from key frames, we can infer emotions from arm gestures. Real-time animation demands high frame rates, so we process the facial emotion expression in a 3D application to minimize animation time, and we propose a method for matching frames to actions. When the system matches image sequences of participants' exaggerated arm gestures, participants feel that they are communicating directly with the portraits.
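
The abstract does not give the matching method; one simple reading of matching frames to actions is nearest-neighbor comparison of key-frame feature vectors, sketched here with hypothetical arm-pose features:

```python
import numpy as np

# Hypothetical key-frame features, e.g. (left arm angle, right arm angle,
# normalized hand height), each labeled with the acted emotion.
TEMPLATES = {
    "joy":   np.array([160.0, 160.0, 0.9]),
    "anger": np.array([ 90.0,  90.0, 0.6]),
    "calm":  np.array([ 20.0,  20.0, 0.2]),
}

def recognize_action(features: np.ndarray) -> str:
    """Nearest-neighbor match of an observed feature vector to the templates."""
    return min(TEMPLATES, key=lambda k: np.linalg.norm(TEMPLATES[k] - features))

print(recognize_action(np.array([150.0, 155.0, 0.85])))  # -> "joy"
```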

The Study on the Effects of Elasticity on the Animation Characters (탄성이 애니메이션 캐릭터에 미치는 영향에 관한 연구)

  • Kyung Byung-Pyo;Ryu Seuc-Ho;Lee Nam-Kook
    • The Journal of the Korea Contents Association / v.6 no.3 / pp.135-142 / 2006
  • The elasticity principle can be applied to an animation character to strengthen its actions and expressions; in animation this principle is called squash and stretch. Applying it gives an animation character the illusion of weight and volume, and frees its actions from stiffness and rigidity, making them smooth and soft. If the action of a human or object in an animation is reproduced exactly as in the real world, it looks unnatural: any action without squash and stretch appears rigid, uninteresting, and lifeless. The principle can be applied to the movement of all objects, to characters' actions, dialogue, and facial expressions, following the basic rules of mass, volume, and gravity. Heavier characters squash and stretch more than thinner ones, and no action is well expressed without this principle. For good animated movement, it should be applied widely in all types of animation, not only 2D animation.
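
Squash and stretch is commonly implemented as a volume-preserving non-uniform scale, which matches the constant mass and volume rule the abstract mentions; a minimal sketch (the axis choice and factors are illustrative, not the paper's):

```python
def squash_stretch(stretch: float) -> tuple:
    """Volume-preserving squash & stretch scale factors.

    stretch > 1 elongates along the motion axis; stretch < 1 squashes.
    The cross-section scales by 1/sqrt(stretch) so sx*sy*sz == 1,
    i.e. the character's volume stays constant.
    """
    cross = 1.0 / stretch ** 0.5
    return (cross, stretch, cross)  # (sx, sy, sz), motion along y

# A bouncing ball: stretched in flight, squashed on impact.
print(squash_stretch(1.4))  # in flight
print(squash_stretch(0.6))  # on impact
```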

A 3D Audio-Visual Animated Agent for Expressive Conversational Question Answering

  • Martin, J.C.;Jacquemin, C.;Pointal, L.;Katz, B.
    • Korea Information Convergence Society: Conference Proceedings / 2008.06a / pp.53-56 / 2008
  • This paper reports on the ACQA (Animated agent for Conversational Question Answering) project conducted at LIMSI. The aim is to design an expressive animated conversational agent (ACA) for conducting research along two main lines: (1) perceptual experiments (e.g., perception of expressivity and 3D movements in both the audio and visual channels); (2) design of human-computer interfaces requiring head models at different resolutions and the integration of the talking head into virtual scenes. The target application of this expressive ACA is RITEL, a real-time speech-based question answering system developed at LIMSI. The architecture of the system is based on distributed modules exchanging messages through a network protocol. The main components of the system are: RITEL, a question answering system searching raw text, which produces a text (the answer) and attitudinal information; the attitudinal information is then processed to deliver expressive tags, and the text is converted into phoneme, viseme, and prosodic descriptions. Audio speech is generated by the LIMSI selection-concatenation text-to-speech engine. Visual speech uses MPEG-4 keypoint-based animation and is rendered in real time by Virtual Choreographer (VirChor), a GPU-based 3D engine. Finally, visual and audio speech is played in a 3D audio and visual scene. The project also puts considerable effort into realistic 3D visual and audio rendering: a new model of phoneme-dependent human radiation patterns is included in the speech synthesis system, so that the ACA can move through the virtual scene with realistic 3D visual and audio rendering.
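
The viseme stage of such a pipeline maps each phoneme from the TTS front end to a mouth shape. The sketch below uses a simplified subset of the MPEG-4 viseme table; the project's actual tables and message protocol are not given in the abstract:

```python
# Simplified subset of the MPEG-4 viseme table: phoneme -> viseme index.
PHONEME_TO_VISEME = {
    "p": 1, "b": 1, "m": 1,        # bilabials
    "f": 2, "v": 2,                # labiodentals
    "t": 4, "d": 4,
    "k": 5, "g": 5,
    "s": 7, "z": 7,
    "n": 8, "l": 8,
    "r": 9,
    "A": 10, "e": 11, "I": 12, "O": 13, "U": 14,
}

def phonemes_to_visemes(phonemes):
    """Convert a phoneme sequence (from the TTS front end) into the viseme
    track that drives keypoint-based face animation (0 = neutral/unknown)."""
    return [(p, PHONEME_TO_VISEME.get(p, 0)) for p in phonemes]

print(phonemes_to_visemes(["b", "O", "n", "Z", "U", "r"]))  # "bonjour"
```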

Realtime Facial Expression Data Tracking System using Color Information (컬러 정보를 이용한 실시간 표정 데이터 추적 시스템)

  • Lee, Yun-Jung;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.7 / pp.159-170 / 2009
  • Extracting expression data and capturing a face image from video is essential for online 3D face animation. Recently, there has been much research on vision-based approaches that capture an actor's expression in a video and apply it to a 3D face model. In this paper, we propose an automatic data extraction system that extracts and traces a face and its expression data from real-time video input. Our system works in three steps: face detection, facial feature extraction, and feature tracing. For face detection, we detect skin pixels using a YCbCr skin color model and verify the face area using a Haar-based classifier. We use brightness and color information to extract the eye and lip data related to facial expression, extracting 10 feature points from the eye and lip areas in accordance with the FAPs defined in MPEG-4. We then trace the displacement of the extracted features across consecutive frames using a color probability distribution model. Experiments showed that our system can trace the expression data at about 8 fps.
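
A sketch of the first two stages (skin masking in YCrCb, then Haar-based verification) using OpenCV; the thresholds below are a commonly used skin range and a stock cascade, not necessarily the paper's:

```python
import cv2
import numpy as np

# Stock Haar cascade shipped with OpenCV; the paper uses its own classifier.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame_bgr):
    """Skin-color masking in YCrCb, then Haar-based verification of the
    face region. Returns (x, y, w, h) rectangles of verified faces."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Widely used Cr/Cb skin range; the paper's exact thresholds may differ.
    skin = cv2.inRange(ycrcb, np.array((0, 133, 77)), np.array((255, 173, 127)))
    masked = cv2.bitwise_and(frame_bgr, frame_bgr, mask=skin)
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
```

The verified face rectangle then bounds the search for the 10 eye and lip feature points, whose frame-to-frame displacement drives the animation.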

Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho;Jung, Moon-Ryul
    • The KIPS Transactions: Part A / v.11A no.4 / pp.277-284 / 2004
  • This paper presents a facial expression control method for a 3D avatar that lets the user select a sequence of facial frames from a facial expression space whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. Because there are too many facial expressions to select from directly, the user has difficulty navigating the space, so we visualize it hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. Initially, the system creates about 11 clusters from the space of 2,400 facial expressions. The cluster centers are displayed on a 2D screen and serve as candidate key frames for key frame animation. When the user zooms in (zoom is discrete), it means the user wants to see more detail, so the system creates more clusters for the new zoom level; each time the zoom level increases, the system doubles the number of clusters. The user selects new key frames along the navigation path of the previous level and completes the facial expression control specification at the maximum zoom-in. At any point, the user can return to a previous level by zooming out and update the navigation path. We had users control the facial expressions of a 3D avatar with the system and evaluated the system based on the results.
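
Fuzzy clustering here plausibly means fuzzy c-means; the compact sketch below clusters stand-in frame features and doubles the cluster count per zoom level as the abstract describes (the feature dimensionality and iteration counts are placeholders):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns (cluster centers, membership matrix U).

    X: (n, d) facial-frame feature vectors; c: number of clusters;
    m: fuzziness exponent (m > 1).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distances of every point to every center, with a small epsilon
        # to avoid division by zero for points sitting on a center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Hierarchy of detail: double the cluster count at each zoom level,
# e.g. 11 candidate key frames at the top level, then 22, then 44.
frames = np.random.rand(2400, 5)                 # stand-in for captured frames
for level, c in enumerate([11, 22, 44]):
    centers, _ = fuzzy_c_means(frames, c, iters=20)
    print(f"zoom level {level}: {len(centers)} candidate key frames")
```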