• Title/Summary/Keyword: Facial Animation


Gesture Communications Between Different Avatar Models Using A FBML (FBML을 이용한 서로 다른 아바타 모델간의 제스처 통신)

  • 이용후;김상운;아오끼요시나오
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.5 / pp.41-49 / 2004
  • As a means of overcoming the language barrier between different languages in Internet cyberspace, a sign-language communication system has been proposed. However, that system supports only avatars sharing the same model structure, which makes communication between different avatar models difficult. In this paper, we therefore propose a new gesture communication system in which different avatar models can communicate with each other by using an FBML (Facial Body Markup Language). Using the FBML, we define a standard document format for the messages transferred between models, where a document includes the action units of facial expressions and the joint angles of gesture animation. The proposed system is implemented with Visual C++ and Open Inventor on Windows platforms. The experimental results demonstrate that the method could serve as an efficient means of overcoming the language barrier.
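
As a rough illustration of the kind of standard document the abstract describes (the paper's FBML schema is not reproduced here), the sketch below builds a minimal FBML-like XML message carrying facial action units and gesture joint angles. All tag and attribute names are invented for illustration.

```python
# Hypothetical FBML-like message: tag/attribute names are illustrative only.
import xml.etree.ElementTree as ET

def build_fbml_message(action_units, joint_angles):
    """Serialize facial action units and gesture joint angles into one document."""
    root = ET.Element("fbml")
    face = ET.SubElement(root, "face")
    for au_id, intensity in action_units.items():
        ET.SubElement(face, "au", id=str(au_id), intensity=f"{intensity:.2f}")
    body = ET.SubElement(root, "body")
    for joint, (rx, ry, rz) in joint_angles.items():
        ET.SubElement(body, "joint", name=joint, rx=str(rx), ry=str(ry), rz=str(rz))
    return ET.tostring(root, encoding="unicode")

# A receiver with a different avatar model would parse this message and map the
# AU ids and joint names onto its own geometry.
print(build_fbml_message({1: 0.7, 12: 1.0}, {"r_shoulder": (0, 45, 10)}))
```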

Realtime Facial Expression Control and Projection of Facial Motion Data using Locally Linear Embedding (LLE 알고리즘을 사용한 얼굴 모션 데이터의 투영 및 실시간 표정제어)

  • Kim, Sung-Ho
    • The Journal of the Korea Contents Association / v.7 no.2 / pp.117-124 / 2007
  • This paper describes a methodology that enables animators to create facial expression animations and control facial expressions in real time by reusing motion capture data. To achieve this, we define a representation of facial states based on facial motion data. By distributing the facial expressions in an intuitive space using the LLE algorithm, animations can be created and expressions controlled in real time from the facial expression space through a user interface. Approximately 2,400 facial expression frames are used to generate the facial expression space. By navigating this space projected onto a 2-dimensional plane, animators can create animations or control the expressions of 3-dimensional avatars in real time by selecting a series of expressions from it. To distribute the roughly 2,400 facial expression frames in an intuitive space, the state of each expression frame must be represented; for this, we use a distance matrix holding the distances between pairs of feature points on the face. The LLE algorithm then projects these data onto the 2-dimensional plane for visualization. Animators used the system's user interface to control facial expressions and create animations, and this paper evaluates the results of that experiment.
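
A minimal sketch of the projection step described above, with scikit-learn's LocallyLinearEmbedding standing in for the paper's LLE implementation; the frame count matches the abstract, but the feature-point count and the data itself are placeholders.

```python
# Hedged sketch: pairwise feature-point distances per frame, embedded to 2D.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
n_frames, n_points = 2400, 20                 # ~2,400 frames; 20 points assumed
frames = rng.random((n_frames, n_points, 3))  # placeholder captured marker positions

# Feature vector per frame: distances between all pairs of feature points.
iu = np.triu_indices(n_points, k=1)
features = np.linalg.norm(
    frames[:, :, None, :] - frames[:, None, :, :], axis=-1
)[:, iu[0], iu[1]]                            # (2400, 190)

# Embed the distance features into a 2D "facial expression space" for the UI.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
space2d = lle.fit_transform(features)         # (2400, 2) navigable points
print(space2d.shape)
```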

A Generation Methodology of Facial Expressions for Avatar Communications (아바타 통신에서의 얼굴 표정의 생성 방법)

  • Kim Jin-Yong;Yoo Jae-Hwi
    • Journal of the Korea Society of Computer and Information / v.10 no.3 s.35 / pp.55-64 / 2005
  • The avatar can be used as an auxiliary means of text and image communication in cyberspace. An intelligent communication method can also be employed to achieve real-time communication, in which intelligently coded data (joint angles for arm gestures and action units for facial emotions) are transmitted instead of real or compressed pictures. In this paper, to complement arm and leg gestures, we provide a method of generating facial expressions that can represent the sender's emotions. Facial expressions can be represented by Action Units (AUs); we suggest a methodology for finding appropriate AUs in avatar models of various shapes and structures. To maximize the efficiency of emotional expression, a comic-style facial model having only eyebrows, eyes, a nose, and a mouth is employed. The generation of facial emotion animation from these parameters is also investigated.
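
For illustration, a hedged sketch of the "intelligently coded" payload idea: emotions mapped to FACS Action Unit sets. The AU combinations below follow common FACS conventions and are not necessarily the exact sets the paper derives.

```python
# Illustrative emotion-to-AU mapping; AU sets follow common FACS usage,
# not necessarily the paper's. Weights are in [0, 1].
EMOTION_TO_AUS = {
    "happiness": {6: 1.0, 12: 1.0},          # cheek raiser, lip corner puller
    "sadness":   {1: 0.8, 4: 0.6, 15: 0.9},  # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1: 1.0, 2: 1.0, 5: 0.7, 26: 0.8},
}

def encode_emotion(emotion, strength=1.0):
    """Return the coded AU parameters to transmit instead of picture data."""
    aus = EMOTION_TO_AUS[emotion]
    return {au: min(1.0, w * strength) for au, w in aus.items()}

# A few bytes of AU parameters replace a real or compressed picture.
print(encode_emotion("happiness", 0.8))
```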


A Three-Dimensional Facial Modeling and Prediction System (3차원 얼굴 모델링과 예측 시스템)

  • Gu, Bon-Gwan;Jeong, Cheol-Hui;Cho, Sun-Young;Lee, Myeong-Won
    • Journal of the Korea Computer Graphics Society / v.17 no.1 / pp.9-16 / 2011
  • In this paper, we describe the development of a system for generating a 3-dimensional human face and predicting its appearance as it ages over subsequent years, using 3D-scanned facial data and photo images. The system is composed of 3-dimensional texture mapping functions, a facial definition parameter input tool, and 3-dimensional facial prediction algorithms. With the texture mapping functions, we can generate a new model of a given face at a specified age using a scanned facial model and photo images. The texture mapping uses three photo images: a front image and two side images of a face. The facial definition parameter input tool is a user interface necessary for texture mapping; it is used to match facial feature points between the photo images and the 3D-scanned facial model in order to obtain high-resolution material values. We calculated material values for future facial models and predicted future facial models in high resolution with a statistical analysis of 100 scanned facial models.
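
The abstract does not detail the statistical analysis, so the following is only a plausible sketch: a per-vertex linear aging trend fitted across a population of scans and used to extrapolate a face to a target age. The shapes, counts, and the linear model itself are all assumptions.

```python
# Hedged sketch of age-based shape prediction via a per-vertex linear trend.
import numpy as np

rng = np.random.default_rng(1)
n_scans, n_vertices = 100, 500
ages = rng.uniform(20, 70, n_scans)              # age of each scanned subject
shapes = rng.random((n_scans, n_vertices * 3))   # flattened (x, y, z) per vertex

# Least-squares fit of shape = slope * age + intercept, per coordinate.
A = np.stack([ages, np.ones_like(ages)], axis=1)  # (100, 2) design matrix
coef, *_ = np.linalg.lstsq(A, shapes, rcond=None) # (2, n_vertices * 3)

def predict_face(current_shape, current_age, target_age):
    """Advance a face model along the fitted population aging trend."""
    return current_shape + coef[0] * (target_age - current_age)

aged = predict_face(shapes[0], ages[0], ages[0] + 20)
print(aged.shape)
```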

Interactive Animation by Action Recognition (동작 인식을 통한 인터랙티브 애니메이션)

  • Hwang, Ji-Yeon;Lim, Yang-Mi;Park, Jin-Wan;Jahng, Surng-Gahb
    • The Journal of the Korea Contents Association / v.6 no.12 / pp.269-277 / 2006
  • In this paper, we propose an interactive system that generates emotional expressions from arm gestures. By extracting relevant features from key frames, we can infer emotions from arm gestures. Real-time animation demands high frame rates, so we process the facial emotion expression in a 3D application to minimize animation time, and we propose a method for matching frames to actions. By matching image sequences of participants' exaggerated arm gestures, participants feel that they are communicating directly with the portraits.
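
As a hedged sketch of inferring emotion from key-frame arm features, nearest-template matching is shown below; the feature vectors and emotion templates are invented placeholders, not the paper's classifier.

```python
# Hedged sketch: match observed key-frame arm features to the nearest emotion
# template. Features here are two arm elevation angles, purely illustrative.
import numpy as np

TEMPLATES = {
    "joy":   np.array([160.0, 150.0]),   # both arms raised
    "anger": np.array([90.0, 90.0]),     # arms thrust forward
    "calm":  np.array([20.0, 25.0]),     # arms lowered
}

def recognize(key_frame_features):
    """Return the emotion whose template is closest to the observed features."""
    return min(TEMPLATES, key=lambda e: np.linalg.norm(TEMPLATES[e] - key_frame_features))

print(recognize(np.array([155.0, 148.0])))  # -> "joy"
```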


Speech Animation by Visualizing the Organs of Articulation (조음 기관의 시각화를 이용한 음성 동기화 애니메이션)

  • Lee, Sung-Jin;Kim, Ig-Jae;Ko, Hyeong-Seok
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.843-851 / 2006
  • This paper presents a method for visualizing the movements of the organs of articulation (tongue, vocal folds, etc.) in order to realistically express speech-synchronized facial animation. To this end, we build a corpus for speech-driven facial animation, perform phoneme alignment on the corpus, and then generate the articulator movements for each phoneme. For generating these movements we use basis-model blend shape interpolation, a technique widely used in facial animation, and on top of it we built a frame/key-frame based motion-authoring user interface. Through this interface, a speech-language pathologist directly creates accurate motion data of the articulators for each phoneme. Based on the acquired motion data, we model 3-dimensional basis shapes of the articulators for each phoneme and generate synchronized 3-dimensional articulator movements for a newly input phoneme sequence. Applied to natural 3-dimensional facial animation, this produces articulator movements synchronized with the face.
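
The blend shape interpolation the paper relies on can be sketched briefly: a deformed mesh is the neutral mesh plus weighted deltas of basis shapes. The meshes, phoneme names, and weights below are placeholders.

```python
# Minimal blend-shape interpolation sketch: vertices = neutral + sum_i w_i * delta_i.
import numpy as np

neutral = np.zeros((300, 3))                        # neutral articulator mesh
bases = {                                           # per-phoneme basis deltas
    "aa": np.random.default_rng(2).random((300, 3)) * 0.01,
    "m":  np.random.default_rng(3).random((300, 3)) * 0.01,
}

def blend(weights):
    """Interpolate the mesh from weighted basis-shape deltas."""
    out = neutral.copy()
    for name, w in weights.items():
        out += w * bases[name]
    return out

# Crossfade between two phonemes over time for synchronized animation.
for t in np.linspace(0.0, 1.0, 5):
    frame = blend({"aa": 1.0 - t, "m": t})
```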


Realtime Face Animation using High-Speed Texture Mapping Algorithm (고속 텍스처 매핑 알고리즘을 이용한 실시간 얼굴 애니메이션)

  • 최창석;김지성;최운영;전준현
    • Proceedings of the IEEK Conference / 1999.11a / pp.544-547 / 1999
  • This paper proposes a high-speed texture mapping algorithm and applies it to real-time face animation. The mapping process is divided into pixel correspondence, Z-buffering, and pixel value interpolation. Pixel correspondence and Z-buffering are calculated exactly by the algorithm, while pixel value interpolation is approximated without additional calculations. The algorithm dramatically reduces the operations needed for texture mapping: only three additions are needed to calculate a pixel value. We simulated a 256×240-pixel facial image with a face width of about 100 pixels. Simulation results show frame generation speeds of about 60, 44, and 21 frames/second on Pentium PCs at 550 MHz, 400 MHz, and 200 MHz, respectively.
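
The "three additions per pixel" claim suggests incremental (forward-differencing) interpolation along scanlines; the sketch below shows that general trick, with one addition per pixel per interpolated quantity (three quantities, e.g. RGB or texture coordinates plus intensity, would cost three additions). This is an assumption about the flavor of approximation, not the paper's exact code.

```python
# Hedged sketch of addition-only scanline interpolation (forward differencing).
def scanline_interpolate(v_left, v_right, length):
    """Interpolate a value across a scanline using one addition per pixel."""
    step = (v_right - v_left) / max(length - 1, 1)  # one division per scanline
    value = v_left
    out = []
    for _ in range(length):
        out.append(value)
        value += step          # a single addition per pixel, no multiplies
    return out

print(scanline_interpolate(0.0, 10.0, 6))
```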


Facial Expression Control of 3D Avatar by Hierarchical Visualization of Motion Data (모션 데이터의 계층적 가시화에 의한 3차원 아바타의 표정 제어)

  • Kim, Sung-Ho;Jung, Moon-Ryul
    • The KIPS Transactions: Part A / v.11A no.4 / pp.277-284 / 2004
  • This paper presents a facial expression control method for a 3D avatar that lets the user select a sequence of facial frames from a facial expression space whose level of detail the user can choose hierarchically. Our system creates the facial expression space from about 2,400 captured facial frames. Because there are too many facial expressions to select from directly, the user has difficulty navigating the space, so we visualize it hierarchically. To partition the space into a hierarchy of subspaces, we use fuzzy clustering. Initially, the system creates about 11 clusters from the space of 2,400 facial expressions. The cluster centers are displayed on a 2D screen and used as candidate key frames for key-frame animation. When the user zooms in (zoom is discrete), the system creates more clusters for the new zoom level; each time the zoom level increases, the system doubles the number of clusters. The user selects new key frames along the navigation path of the previous level. At the maximum zoom-in, the user completes the facial expression control specification; the user can also return to a previous level by zooming out and update the navigation path. We let users control the facial expression of a 3D avatar with the system and evaluate it based on the results.
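
A hedged sketch of the zoom hierarchy: the number of cluster centers doubles with each zoom level. KMeans stands in here for the paper's fuzzy clustering, and the embedded data is synthetic.

```python
# Hedged sketch: candidate key frames per zoom level via clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
expressions_2d = rng.random((2400, 2))   # ~2,400 frames already embedded in 2D

def centers_for_zoom(level, base_clusters=11):
    """Level 0 shows ~11 candidate key frames; each zoom-in doubles the count."""
    k = base_clusters * (2 ** level)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(expressions_2d)
    return km.cluster_centers_           # displayed as selectable key frames

for level in range(3):
    print(level, centers_for_zoom(level).shape)  # 11, 22, 44 centers
```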

Putting Your Best Face Forward: Development of a Database for Digitized Human Facial Expression Animation

  • Lee, Ning-Sung;Alia Reid Zhang Yu;Edmond C. Prakash;Tony K.Y Chan;Edmund M-K. Lai
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2001.10a / pp.153.6-153 / 2001
  • Three-dimensional (3D) digitization of the human is a technology that is still relatively new. It has present uses such as radiotherapy, identification systems, and commercial applications, as well as potential future ones. In this paper, we analyzed and experimented to determine the easiest and most efficient method that would give us the most accurate results. We also constructed a database of realistic expressions and high-quality human heads. We scanned people's heads and facial expressions in 3D using a Minolta Vivid 700 scanner, then edited the models obtained on a Silicon Graphics workstation. Research was done into the present and potential uses of 3D digitized models of the human head, and we develop ideas for ...


Study on Effective Facial Rigging Process for Facial Expression of 3D Animation Character (3D 애니메이션 캐릭터의 표정연출을 위한 효율적인 페이셜 리깅 공정 연구)

  • Yu, Jiseon
    • Proceedings of the Korea Contents Association Conference / 2014.11a / pp.169-170 / 2014
  • With the development of computer graphics, 3D animation conveys to the audience, through visual realism and striking imagery, the fun of animation's characteristically unrealistic situations and fictional characters. In particular, a character's facial expressions require detailed acting, as they carry important information for emotional communication with the audience. 3D animation characters therefore demand a variety of facial capabilities, and beyond ordinary blend shapes and clusters, various techniques are used for cartoon-style expression. In the existing pipeline, all of these functions are combined in a single facial rig, making the facial rigging process complex and demanding. This study investigates an efficient facial rigging process based on a layered approach in which blend shapes, previously used only in a limited way, target the various functions.
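
As a loose illustration of the layered blend-shape targeting idea (the paper's actual rig lives in a DCC tool, and every name and number below is hypothetical), a layer stack can be evaluated by accumulating weighted deltas in order:

```python
# Hedged sketch of a layered facial rig: each layer contributes blend-shape
# deltas, so broad expressions and cartoon effects stack without one monolithic rig.
import numpy as np

n_vertices = 100
layers = [  # (layer name, {target: delta mesh}, {target: weight})
    ("base_expressions", {"smile": np.ones((n_vertices, 3)) * 0.01}, {"smile": 0.8}),
    ("cartoon_effects",  {"eye_pop": np.ones((n_vertices, 3)) * 0.05}, {"eye_pop": 0.3}),
]

def evaluate_rig(neutral):
    """Accumulate weighted target deltas layer by layer over the neutral mesh."""
    mesh = neutral.copy()
    for _name, targets, weights in layers:
        for target, delta in targets.items():
            mesh += weights.get(target, 0.0) * delta
    return mesh

print(evaluate_rig(np.zeros((n_vertices, 3))).mean())
```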
