• Title/Summary/Keyword: face animation

A Study on Production of Low Storage Capacity of Character Animation for 3D Mobile Games (3D 모바일 게임용 저용량 3D캐릭터 애니메이션 제작에 관한 연구)

  • Lee Ji-Won;Kim Tae-Yul;Kyung Byung-Pyo
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.5
    • /
    • pp.107-114
    • /
    • 2005
  • The next-generation 3D mobile game market is becoming increasingly active as a result of improvements in phone hardware, including faster CPUs, embedded 3D engines, and larger memory capacity. In response to this trend, popular PS2 (PlayStation 2) titles and popular online games are being launched as mobile games. However, mobile devices have different hardware characteristics from other platforms such as the PC or game consoles. Therefore, mobile versions of popular PC games face limitations in many aspects, such as battery capacity, display size, game storage size, and other user interface issues. Among these limitations, research on reducing the storage requirements of a game is becoming important. In addition, realistic animation of the 3D character on the small screen of a mobile device matters more than anything else. The purpose of this study is to find a solution for providing realistic 3D character animation while decreasing the storage capacity of character animation for 3D mobile games.

  • PDF

Anatomy-Based Face Animation for Virtual Reality (가상현실을 위한 해부학에 기반한 얼굴 애니메이션)

  • 김형균;오무송;고석만;김장형
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.04c
    • /
    • pp.280-282
    • /
    • 2003
  • In this paper, to animate human models in a virtual reality environment, facial expression animation is driven by combining the movements of 18 anatomically based muscle-group pairs that influence changes in facial expression. A standard model is created by deforming a mesh to match an individual's image, and to increase realism, two images of the individual's face (front and side) are mapped onto the mesh. The muscle model that drives the generation of facial expressions is a modified version of Waters' muscle model.

  • PDF

Realistic Facial Expression Animation and 3D Face Synthesis (실감 있는 얼굴 표정 애니메이션 및 3차원 얼굴 합성)

  • 한태우;이주호;양현승
    • Science of Emotion and Sensibility
    • /
    • v.1 no.1
    • /
    • pp.25-31
    • /
    • 1998
  • Advances in computer hardware and multimedia technology have created a need for advanced interfaces that use multimedia input/output devices. To provide a friendly user interface, demand for realistic facial animation is growing. In this paper, we animate facial expressions, which express a person's internal state well, using a 3D model. To add realism to the animation, we deform the 3D face model using real face images and perform texture mapping with face images obtained from several directions. To animate facial expressions with the deformed 3D model, we use a modified version of Waters' anatomy-based muscle model, and we synthesize the six representative expressions proposed by Ekman.

  • PDF
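The modified Waters muscle model mentioned in the two abstracts above drives expression by pulling skin vertices toward a muscle's attachment point, attenuated by angular and radial falloff. A minimal NumPy sketch of this idea follows (a simplified variant for illustration, not the papers' exact formulation):

```python
import numpy as np

def waters_linear_muscle(vertices, head, tail, contraction, falloff_start=0.5):
    """Displace mesh vertices with a simplified Waters-style linear muscle.

    head: fixed attachment point (e.g. on the skull); tail: insertion point
    in the skin. Vertices inside the cone of influence are pulled toward
    the head, attenuated by angular and radial falloff.
    """
    head, tail = np.asarray(head, float), np.asarray(tail, float)
    axis = tail - head
    length = np.linalg.norm(axis)
    axis = axis / length
    out = np.array(vertices, dtype=float)
    for i, v in enumerate(out):
        d = v - head
        r = np.linalg.norm(d)
        if r == 0 or r > length:
            continue  # outside the muscle's zone of influence
        cos_a = np.dot(d / r, axis)
        if cos_a <= 0:
            continue  # behind the attachment point
        # radial falloff: full strength near the head, fading to 0 at the tail
        radial = 1.0 if r < falloff_start * length else \
            (length - r) / (length - falloff_start * length)
        out[i] = v + contraction * cos_a * radial * (head - v)
    return out
```

Contraction values in [0, 1] blend between the rest pose and the fully contracted pose; an expression is then a set of contraction values over all muscle pairs.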

The Design of Realtime Character Animation System (실시간 캐릭터 애니메이션 시스템의 설계)

  • 이지형;김상원;박찬종
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 1999.11a
    • /
    • pp.189-192
    • /
    • 1999
  • Recently, many films and computer animations feature animated humanoid 3D characters. Such human-body animation includes body movement, finger motion, and facial expressions. In most cases, motion capture is used to track natural body movement, but fingers and facial expressions are excluded, so additional work is required for them. This paper introduces a system that integrates motion capture equipment, CyberGloves, and a Face Tracker, and uses this system to enable real-time character animation.

  • PDF

Face Animation Editor for the Korean Lip_Sync and Face Expression (한글 입술 움직임과 얼굴 표정 동기화를 위한 얼굴 애니메이션 편집기)

  • 송미영;조형제
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2000.11a
    • /
    • pp.451-454
    • /
    • 2000
  • In this paper, we built a 3D facial animation editor that synchronizes lip movements and facial expressions, automatically generating lip movements matching the Korean pronunciation of a word together with a facial expression appropriate to that word. In the editor, facial expressions are generated by adjusting weights on the muscles of each facial region, defined using a muscle-based model, while lip movements operate in a text-driven manner by presenting, in sequence, the mouth shapes defined for each phoneme. The generated facial expressions are stored and managed. The 3D facial animation editor can thus automatically generate the six basic facial expressions and can also edit the muscle movements of each facial region to suit an input word. The generated expressions can be stored and managed in a database, and during computer dialogue the lip movements and facial expression appropriate to an input word are automatically synchronized to produce natural 3D facial animation.

  • PDF
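The text-driven lip movement described above maps each phoneme of a Korean word to a predefined mouth shape. This can be sketched using the standard Unicode jamo decomposition for Hangul syllables; the phoneme-to-viseme table itself is an illustrative assumption, not the paper's:

```python
# Standard Unicode ordering of Hangul initial consonants and vowels.
CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNGSEONG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"

BILABIAL = set("ㅁㅂㅃㅍ")          # lips close for these consonants
VOWEL_SHAPE = {v: "open" for v in "ㅏㅑㅓㅕ"}
VOWEL_SHAPE.update({v: "round" for v in "ㅗㅛㅜㅠㅘㅝ"})
VOWEL_SHAPE.update({v: "spread" for v in "ㅣㅐㅒㅔㅖㅚㅟㅙㅞㅢ"})
VOWEL_SHAPE["ㅡ"] = "neutral"

def viseme_sequence(word):
    """Return a mouth-shape keyframe list for a Hangul word."""
    shapes = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if not (0 <= code <= 11171):
            continue  # skip non-Hangul-syllable characters
        initial = CHOSEONG[code // 588]
        vowel = JUNGSEONG[(code % 588) // 28]
        if initial in BILABIAL:
            shapes.append("closed")
        shapes.append(VOWEL_SHAPE.get(vowel, "neutral"))
    return shapes
```

For example, `viseme_sequence("바보")` yields a closed-lips keyframe before each vowel shape because ㅂ is bilabial; a full editor would interpolate muscle weights between these keyframes.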

Avatar's Lip Synchronization in Talking Involved Virtual Reality (대화형 가상 현실에서 아바타의 립싱크)

  • Lee, Jae Hyun;Park, Kyoungju
    • Journal of the Korea Computer Graphics Society
    • /
    • v.26 no.4
    • /
    • pp.9-15
    • /
    • 2020
  • Having a virtual talking face along with a virtual body increases immersion in VR applications. As virtual reality (VR) techniques develop, various applications are emerging, including multi-user social networking and education applications that involve talking avatars. Due to a lack of sensory information for full face and body motion capture in consumer-grade VR, most VR applications do not show a synced talking face and body. We propose a novel method, targeted at VR applications, for a talking face synced with audio combined with upper-body inverse kinematics. Our system presents a mirrored avatar of the user in single-user applications. We implement the mirroring in a single-user environment and visualize a synced conversational partner in a multi-user environment. We found that a realistic talking-face avatar is more influential than an un-synced talking avatar or an invisible avatar.

Data-driven Facial Expression Reconstruction for Simultaneous Motion Capture of Body and Face (동작 및 표정 동시 포착을 위한 데이터 기반 표정 복원에 관한 연구)

  • Park, Sang Il
    • Journal of the Korea Computer Graphics Society
    • /
    • v.18 no.3
    • /
    • pp.9-16
    • /
    • 2012
  • In this paper, we present a new method for reconstructing a detailed facial expression from roughly captured data with a small number of markers. Because of the difference in the required capture resolution between full-body capture and facial expression capture, the two have rarely been performed simultaneously. However, simultaneous capture of body and face is essential for generating natural animation. For this purpose, we provide a method for capturing detailed facial expressions with only a small number of markers. Our basic idea is to build a database of facial expressions and apply principal component analysis to reduce the dimensionality. The dimensionality reduction enables us to estimate the full data from a part of the data. We demonstrate the viability of the method by applying it to dynamic scenes.
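The reconstruction idea in this abstract (PCA over an expression database, then estimating the full marker set from a few observed markers) can be sketched as a least-squares fit in the reduced space; function names and data shapes here are assumptions:

```python
import numpy as np

def fit_pca(database, k):
    """database: (n_samples, d) matrix of full marker configurations."""
    mean = database.mean(axis=0)
    # principal directions via SVD of the centered data
    _, _, vt = np.linalg.svd(database - mean, full_matrices=False)
    return mean, vt[:k]          # mean: (d,), basis: (k, d)

def reconstruct(mean, basis, observed_idx, observed_vals):
    """Estimate the full d-vector from a subset of its entries.

    Solves least squares for the k PCA coefficients using only the
    observed coordinates, then expands back to the full dimension.
    """
    A = basis[:, observed_idx].T                 # (m, k), m observed markers
    b = observed_vals - mean[observed_idx]       # (m,)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + coeffs @ basis
```

This works because an expression database is highly redundant: if k principal components explain the data, any m ≥ k well-placed markers determine the remaining coordinates up to the residual variance.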

Generation of Stage Tour Contents with Deep Learning Style Transfer (딥러닝 스타일 전이 기반의 무대 탐방 콘텐츠 생성 기법)

  • Kim, Dong-Min;Kim, Hyeon-Sik;Bong, Dae-Hyeon;Choi, Jong-Yun;Jeong, Jin-Woo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.11
    • /
    • pp.1403-1410
    • /
    • 2020
  • Recently, as interest in non-face-to-face experiences and services increases, demand for web video content that can be easily consumed on mobile devices such as smartphones or tablets is rapidly increasing. To cope with these requirements, in this paper we propose a technique for efficiently producing video content that provides the experience of visiting famous places featured in animations or movies (i.e., a stage tour). To this end, an image dataset was established by collecting images of stage areas using the Google Maps and Google Street View APIs. Afterwards, we present a deep learning-based style transfer method that applies the unique style of animation videos to the collected street view images and generates video content from the style-transferred images. Finally, we show through various experiments that the proposed method can produce more interesting stage-tour video content.
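Deep-learning style transfer of the kind this paper applies typically matches a Gram-matrix "style" statistic computed from convolutional feature maps (the Gatys et al. formulation; whether this paper uses exactly that loss is an assumption). The core statistic, in framework-agnostic NumPy:

```python
import numpy as np

def gram_matrix(features):
    """Channel-correlation 'style' statistic of a (channels, H, W) tensor.

    Flattening the spatial axes and taking inner products between channels
    discards spatial layout, keeping only texture/style information.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)   # normalize by tensor size

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    g1 = gram_matrix(gen_features)
    g2 = gram_matrix(style_features)
    return float(np.mean((g1 - g2) ** 2))
```

In an actual transfer pipeline this loss is summed over several layers of a pretrained network and minimized, together with a content loss, by gradient descent on the generated image.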

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services
    • /
    • v.7 no.2
    • /
    • pp.23-35
    • /
    • 2006
  • This work presents a novel method which automatically extracts the facial region and features from motion pictures and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame of a motion picture, a new nonparametric skin color model is proposed rather than a parametric one. Conventional parametric skin color models, which represent the facial color distribution as a Gaussian, lack robustness under varying lighting conditions, so additional work is needed to extract the exact facial region from face images. To resolve this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region. Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information of the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of Waters' muscles to the variation of the facial features of the 3D face. The experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.

  • PDF
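The nonparametric skin model above replaces a Gaussian likelihood with a linear decision function over Hue-Tint chrominance. The paper's exact transform and coefficients are not reproduced here, so this sketch substitutes HSV hue and normalized red as stand-in chrominance components, with illustrative weights:

```python
import colorsys

def chrominance(r, g, b):
    """Map an RGB pixel (0-255) to two stand-in chrominance components."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    denom = (r + g + b) or 1          # avoid division by zero on black
    return h, r / denom               # (hue in [0, 1), normalized red)

def is_skin(r, g, b, w=(-1.2, 2.5), bias=-0.55):
    """Linear decision function: w . chroma + bias > 0  =>  skin pixel.

    The weights and bias are illustrative placeholders; in practice they
    would be fitted to labeled skin/non-skin pixel samples.
    """
    h, nr = chrominance(r, g, b)
    return w[0] * h + w[1] * nr + bias > 0
```

The appeal of a linear boundary is robustness: unlike a fitted Gaussian, it makes no assumption about the shape of the skin-color cluster, only about which side of a line it lies on.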

A study on the expression of animation character's personality according to the five elements thoughts (오행사상을 적용한 애니메이션 캐릭터 성격 표현 연구)

  • Yi, si xiang;Lee, dong hun
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2009.05a
    • /
    • pp.1020-1025
    • /
    • 2009
  • The five elements theory is an objective view of the world that people arrived at through long-term observation, and it holds the key to the world and everything in it. Its core content is that all things in the world take the form of five materials: wood, fire, earth, metal, and water. In this theory, a character's appearance and features correspond to the five elements, and a scheme is formed in which a character's appearance, personality characteristics, and abstract nature respond to one another. Based on an analysis of the designs of a number of world-famous cartoon characters, this study compares their appearance and character design with the five-elements account of human nature found in art theory, and examines how the two correspond. By analyzing the systematic correlation between five-elements theory, art, and human nature, the results can be applied to future animation character design.

  • PDF