• Title/Summary/Keyword: Facial animation

An Empirical Study on Real-time Generation of Facial Expressions in Avatar Communications (지적 아바타 통신을 위한 얼굴 표정의 실시간 생성에 관한 검토)

  • 이용후;김상운;일본명
    • Proceedings of the IEEK Conference
    • /
    • 2003.07d
    • /
    • pp.1673-1676
    • /
    • 2003
  • As a means of overcoming the linguistic barrier in Internet cyberspace, several recent studies have examined intelligent avatar communication between avatars of different languages, such as Japanese and Korean. In this paper, we measure the generation time of facial expressions on different avatar models in order to identify models usable in a real-time system. We also give a short overview of a DTD (Document Type Definition) for delivering facial and gesture animation parameters between avatars as XML data; a sketch of such a message follows below.

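The abstract above delivers facial animation parameters between avatars as XML. A minimal sketch of what such a message might look like, assuming hypothetical element and attribute names (the paper's actual DTD is not reproduced here):

```python
# Minimal sketch: serializing facial animation parameters (FAPs) as XML
# for avatar-to-avatar delivery. Element and attribute names are
# hypothetical; the paper defines its own DTD for this purpose.
import xml.etree.ElementTree as ET

def build_fap_message(frame: int, faps: dict) -> bytes:
    """Wrap a dict of FAP name -> value into a small XML document."""
    root = ET.Element("AvatarAnimation", frame=str(frame))
    face = ET.SubElement(root, "FacialAnimation")
    for name, value in faps.items():
        ET.SubElement(face, "FAP", name=name, value=str(value))
    return ET.tostring(root, encoding="utf-8")

msg = build_fap_message(42, {"open_jaw": 128, "stretch_l_cornerlip": -30})
print(msg.decode("utf-8"))
```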

Data-driven Facial Expression Reconstruction for Simultaneous Motion Capture of Body and Face (동작 및 표정 동시 포착을 위한 데이터 기반 표정 복원에 관한 연구)

  • Park, Sang Il
    • Journal of the Korea Computer Graphics Society
    • /
    • v.18 no.3
    • /
    • pp.9-16
    • /
    • 2012
  • In this paper, we present a new method for reconstructing detailed facial expressions from roughly captured data with a small number of markers. Because full-body capture and facial-expression capture require different capture resolutions, the two have rarely been performed simultaneously; however, simultaneous capture of body and face is essential for generating natural animation. For this purpose, we provide a method for capturing detailed facial expressions with only a small number of markers. Our basic idea is to build a database of facial expressions and apply principal component analysis to reduce its dimensionality; the reduced dimensionality lets us estimate the full data from a part of the data, as sketched below. We demonstrate the viability of the method by applying it to dynamic scenes.
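
A minimal sketch of the completion idea described above, assuming illustrative matrix shapes and toy data: build a database of full marker configurations, reduce its dimensionality with PCA, then recover a full configuration from a sparse subset by solving for the principal-component coefficients.

```python
# Sketch: estimate a full marker set from a sparse subset via PCA.
# D holds N example expressions, each a flattened vector of all 3M
# marker coordinates (N x 3M). All sizes here are illustrative.
import numpy as np

def fit_pca(D: np.ndarray, k: int):
    mean = D.mean(axis=0)
    U, S, Vt = np.linalg.svd(D - mean, full_matrices=False)
    return mean, Vt[:k]              # k principal components (k x 3M)

def reconstruct(mean, basis, observed_idx, observed_vals):
    # Least-squares coefficients c from the observed dimensions only:
    #   basis[:, observed_idx].T @ c ~= observed_vals - mean[observed_idx]
    A = basis[:, observed_idx].T
    b = observed_vals - mean[observed_idx]
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + c @ basis          # full-dimensional estimate

# Toy usage: 50 expressions, 30 markers (90 dims), 12 observed dims.
rng = np.random.default_rng(0)
D = rng.normal(size=(50, 90))
mean, basis = fit_pca(D, k=5)
idx = np.arange(12)
full = reconstruct(mean, basis, idx, D[0, idx])
```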

A Study on Lip Sync and Facial Expression Development in Low Polygon Character Animation (로우폴리곤 캐릭터 애니메이션에서 립싱크 및 표정 개발 연구)

  • Ji-Won Seo;Hyun-Soo Lee;Min-Ha Kim;Jung-Yi Kim
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.4
    • /
    • pp.409-414
    • /
    • 2023
  • We describe how to implement the character expressions and animations that play an important role in conveying emotion and personality in low-polygon character animation. With the development of the video industry, character expressions and mouth-shape lip-syncing in animation can achieve natural movement at a level close to real life, but such expert-level techniques are difficult for non-experts to use. We therefore present a guide that helps low-budget low-polygon character animators and non-experts create mouth-shape lip-syncing more naturally using accessible, highly usable features. A total of eight mouth shapes were developed for lip-sync animation: 'ㅏ', 'ㅔ', 'ㅣ', 'ㅗ', 'ㅜ', 'ㅡ', 'ㅓ', and a shape expressing a labial consonant (see the mapping sketched below). For facial-expression animation, nine animations were produced by adding the highly useful interest, boredom, and pain to the six basic human emotions classified by Paul Ekman: surprise, fear, disgust, anger, happiness, and sadness. This study is meaningful in that it makes natural animation easy to produce with features built into the modeling program, without complex technologies or programs.
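
A minimal sketch of the viseme mapping described above, assuming hypothetical blendshape names (the paper builds these shapes inside a modeling program rather than in code):

```python
# Sketch: map phonemes to the eight mouth shapes listed in the abstract.
# Blendshape names are placeholder assumptions.
MOUTH_SHAPES = {
    "ㅏ": "mouth_A", "ㅔ": "mouth_E", "ㅣ": "mouth_I",
    "ㅗ": "mouth_O", "ㅜ": "mouth_U", "ㅡ": "mouth_EU",
    "ㅓ": "mouth_EO", "labial": "mouth_MBP",   # ㅁ/ㅂ/ㅍ lip closure
}

def viseme_for(phoneme: str) -> str:
    if phoneme in ("ㅁ", "ㅂ", "ㅍ"):           # labial consonants
        return MOUTH_SHAPES["labial"]
    return MOUTH_SHAPES.get(phoneme, "mouth_neutral")

print(viseme_for("ㅏ"), viseme_for("ㅂ"))      # mouth_A mouth_MBP
```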

A Realtime Expression Control for Realistic 3D Facial Animation (현실감 있는 3차원 얼굴 애니메이션을 위한 실시간 표정 제어)

  • Kim Jung-Gi;Min Kyong-Pil;Chun Jun-Chul;Choi Yong-Gil
    • Journal of Internet Computing and Services
    • /
    • v.7 no.2
    • /
    • pp.23-35
    • /
    • 2006
  • This work presents a novel method that automatically extracts the facial region and facial features from motion pictures and controls the 3D facial expression in real time. To extract the facial region and facial feature points from each color frame, a new nonparametric skin color model is proposed in place of a parametric one. Conventional parametric skin color models, which represent the facial color distribution as Gaussian, lack robustness under varying lighting conditions, so additional work is needed to extract the exact facial region from face images. To overcome this limitation, we exploit the Hue-Tint chrominance components and represent the skin chrominance distribution as a linear function, which reduces the error in detecting the facial region (sketched below). Moreover, the minimal facial feature positions detected by the proposed skin model are adjusted using edge information from the detected facial region along with the proportions of the face. To produce realistic facial expressions, we adopt Waters' linear muscle model and apply an extended version of his muscles to the variation of the facial features of the 3D face. Experiments show that the proposed approach efficiently detects facial feature points and naturally controls the facial expression of the 3D face model.

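A minimal sketch of the skin-model idea above: represent the skin chrominance distribution as a linear function in the Hue-Tint plane and label pixels by their distance to that line. The line parameters and threshold below are placeholder assumptions, not the paper's fitted values.

```python
# Sketch: classify skin pixels by distance to a line in the Hue-Tint
# chrominance plane. Parameters a, b and threshold tau are placeholders;
# the paper fits the linear function from training skin samples.
import numpy as np

def skin_mask(hue: np.ndarray, tint: np.ndarray,
              a: float = 0.8, b: float = 10.0, tau: float = 6.0):
    """Pixels whose tint is within tau of a*hue + b are labeled skin."""
    residual = np.abs(tint - (a * hue + b))
    return residual < tau

hue = np.array([[20.0, 120.0], [25.0, 30.0]])
tint = np.array([[26.0, 40.0], [30.0, 120.0]])
print(skin_mask(hue, tint))   # [[ True False] [ True False]]
```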

Implementation of Face Animation For MPEG-4 SNHC

  • Lee, Ju-Sang;Yoo, Ji-Sang;Ahn, Chie-Teuk
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06a
    • /
    • pp.141-144
    • /
    • 1999
  • The MPEG-4 SNHC FBA (face and body animation) group is standardizing an MPEG-4 system for low-bit-rate communication based on implementing and animating the human body and face in a virtual environment. In the first version of the MPEG-4 standard, only the face object is implemented and animated, using the FDP (face definition parameters) and FAP (facial animation parameters), which are abstract parameters of the human face for low-bit-rate coding. In this paper, the MPEG-4 SNHC face object and its animation were implemented with computer graphics tools such as VRML and OpenGL; a simplified FAP sketch follows below.
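
A minimal sketch of the FDP/FAP split described above: the FDP fixes the face geometry, and each FAP frame displaces designated feature vertices. The vertex indices, FAP names, and displacement rule are illustrative assumptions, not the MPEG-4 tables.

```python
# Sketch: apply FAP-style displacements to a face mesh each frame.
# The FAP -> vertex mapping is hypothetical; MPEG-4 defines the real
# feature points and expresses amplitudes in face-relative units (FAPUs).
import numpy as np

vertices = np.zeros((100, 3))            # face mesh from an FDP-like model
FAP_TARGETS = {"open_jaw": (17, np.array([0.0, -1.0, 0.0]))}

def apply_faps(verts: np.ndarray, faps: dict, fapu: float = 0.001):
    out = verts.copy()
    for name, amplitude in faps.items():
        idx, direction = FAP_TARGETS[name]
        out[idx] += amplitude * fapu * direction
    return out

frame = apply_faps(vertices, {"open_jaw": 200})
print(frame[17])   # jaw vertex moved down by 0.2 units
```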

Analysis of Squash & Stretch Principle for Animation Action (애니메이션 동작을 위한 Squash & Stretch 원칙의 분석)

  • Lee Nam-Kook;Kyung Byung-Pyo;Ryu Seuc-Ho
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2005.11a
    • /
    • pp.111-114
    • /
    • 2005
  • The Squash & Stretch principle is essential to animation action. Applying it gives an animated character the illusion of weight and volume, and lets an action become smooth and soft instead of stiff and rigid. If the action of a human or object in animation is reproduced exactly as in the real world, it looks unnatural: any action without Squash & Stretch appears rigid, uninteresting, and lifeless. The principle applies to the movement of all objects and to characters' actions, dialogue, and facial expressions, governed by the basic rules of mass, volume, and gravity; no action is well expressed without it. To achieve good animation action, it should be applied thoroughly in 3D animation as well as in 2D. A systematic analysis of the Squash & Stretch principle is therefore required; a volume-preserving sketch follows below.

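The abstract above ties Squash & Stretch to mass and volume. A common way to honor that in 3D is volume-preserving scaling: stretch by a factor s along the motion axis and scale the perpendicular axes by 1/sqrt(s), so the product of the scales stays 1. A minimal sketch of that standard rule (not a formula taken from this paper):

```python
# Sketch: volume-preserving squash & stretch scale factors.
# s * (1/sqrt(s)) * (1/sqrt(s)) == 1, so volume is conserved.
import math

def squash_stretch_scale(s: float):
    perp = 1.0 / math.sqrt(s)
    return (s, perp, perp)    # (along motion, perpendicular, perpendicular)

print(squash_stretch_scale(1.5))   # stretched: longer and thinner
print(squash_stretch_scale(0.6))   # squashed: shorter and fatter
```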

Direct Retargeting Method from Facial Capture Data to Facial Rig (페이셜 리그에 대한 페이셜 캡처 데이터의 다이렉트 리타겟팅 방법)

  • Cho, Hyunjoo;Lee, Jeeho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.2
    • /
    • pp.11-19
    • /
    • 2016
  • This paper proposes a method to retarget facial motion capture data directly to a facial rig. The facial rig is an essential tool in the production pipeline that helps artists create facial animation. Mapping motion capture data directly onto the rig is highly convenient, because artists are already familiar with rigs and the mapping results are ready for the artist's follow-up editing. However, the mapping is not a trivial task: facial rigs vary widely in structure, so it is hard to devise a generalized mapping method for all of them. In this paper, we propose a data-driven approach to robust mapping from motion capture data to an arbitrary facial rig, sketched below. The results show that our method is intuitive and increases productivity in the creation of facial animation, and that it can successfully retarget expressions to non-human characters whose face shapes differ greatly from a human's.
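
One simple data-driven way to realize the direct mapping described above, assuming a linear model (the paper's actual method is not reproduced here): collect example pairs of capture-marker vectors and artist-set rig control values, fit a map by least squares, and apply it to new frames.

```python
# Sketch: learn a linear map W from capture features X to rig controls Y
# from example pairs, then retarget new frames with X_new @ W.
# Toy data only; a production method would likely be nonlinear.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 60))           # 200 frames, 60 marker coords
W_true = rng.normal(size=(60, 12))       # hidden toy ground truth
Y = X @ W_true                           # 12 rig control values per frame

W, *_ = np.linalg.lstsq(X, Y, rcond=None)    # fit the mapping
controls = X[:1] @ W                         # retarget one new frame
print(np.allclose(controls, Y[:1]))          # True on this toy data
```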

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.9B no.5
    • /
    • pp.563-570
    • /
    • 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face-model encoding is described. Facial regions are detected in the image sequence with a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (face definition parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted features are used to recover the 3D shape and global motion of the object with a paraperspective camera model and SVD (singular value decomposition) factorization, as sketched below. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the MPEG-4 FAP (face animation parameters) to synchronize a generic face model with a real face.
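
A minimal sketch of the factorization step described above, in the classic Tomasi-Kanade style: stack tracked 2D feature points into a measurement matrix, center it, and split it into rank-3 motion and shape with an SVD. The paraperspective-specific normalization and the metric upgrade are omitted here.

```python
# Sketch: SVD factorization of a 2F x P measurement matrix into motion
# (2F x 3) and shape (3 x P), up to an affine ambiguity. Paraperspective
# normalization and the metric constraints are omitted.
import numpy as np

def factorize(W: np.ndarray):
    W0 = W - W.mean(axis=1, keepdims=True)    # subtract per-row centroid
    U, S, Vt = np.linalg.svd(W0, full_matrices=False)
    M = U[:, :3] * np.sqrt(S[:3])             # motion, 2F x 3
    X = np.sqrt(S[:3])[:, None] * Vt[:3]      # shape,  3 x P
    return M, X

# Toy data: 5 frames (10 rows) observing 23 feature points.
rng = np.random.default_rng(2)
W = rng.normal(size=(10, 3)) @ rng.normal(size=(3, 23))
M, X = factorize(W)
print(np.allclose(M @ X, W - W.mean(axis=1, keepdims=True)))   # True
```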

A Study on Multi-patch Surface in Improving Efficiency of 3D Facial Modeling (Multi-patch Surface를 이용한 3D Facial Model 제작 효율 향상에 관한 연구)

  • 진영애;김종기;김치용
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2003.05b
    • /
    • pp.492-498
    • /
    • 2003
  • In this paper, we study muscle-based, anatomically grounded facial modeling for producing photorealistic 3D facial models. A standard Korean face was selected as the subject; after analyzing the proportions of the facial muscles, the model was built with the Multi-patch Surface Modeling method. The result was verified with an L27(3^13) three-level orthogonal array, a statistical experimental design. Through this study we propose a modeling method that achieves maximum visual fidelity with the minimum number of UV spans, that is, one that preserves the original shape as closely as possible while shortening working and rendering time and reducing data size; a toy version of this trade-off is sketched below. We expect the method to benefit future production of, and research on, natural facial animation.

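A minimal sketch of the trade-off the abstract describes: score candidate span configurations by weighting shape error against rendering time and data size, and keep the cheapest one that still preserves the original form. The factor values and weights below are invented placeholders; the paper measures the real quantities over an L27(3^13) orthogonal-array experiment.

```python
# Sketch: pick the UV-span configuration with the lowest weighted cost.
# All numbers are placeholders standing in for measured experiment data.
candidates = [
    {"spans": 16, "shape_err": 0.90, "render_s": 2.1, "data_mb": 1.2},
    {"spans": 24, "shape_err": 0.35, "render_s": 3.4, "data_mb": 2.0},
    {"spans": 36, "shape_err": 0.30, "render_s": 5.8, "data_mb": 3.6},
]

def cost(c, w_err=10.0, w_time=1.0, w_size=1.0):
    return w_err * c["shape_err"] + w_time * c["render_s"] + w_size * c["data_mb"]

best = min(candidates, key=cost)
print(best["spans"])   # 24: large error drop for modest time/size growth
```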

3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.6
    • /
    • pp.618-631
    • /
    • 2000
  • In this paper, we suggest a method of synthesizing a 3D face from the motion of 2D facial images, using optical flow for motion estimation. We extract parameterized motion vectors from the optical flow between adjacent frames of an image sequence in order to estimate the facial features and facial motion in the 2D images, and then combine the parameters of these motion vectors to estimate the facial motion information; the flow step is sketched below. The parameterized vector models follow the facial features: an eye-area model, a lip-eyebrow-area model, and a face-area model. Combining the 2D facial motion information with the action units of a 3D facial model, we synthesize the animated 3D facial model.

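A minimal sketch of the motion-estimation step described above, using OpenCV's dense Farnebäck optical flow between adjacent frames and averaging it over one feature region. The region box and the reduction to a single vector are assumptions; the paper parameterizes separate eye, lip-eyebrow, and face-area models.

```python
# Sketch: dense optical flow between adjacent frames, averaged over a
# facial feature region to yield one motion vector for that region.
# Requires OpenCV (cv2); the region box is a placeholder.
import cv2
import numpy as np

def region_motion(prev_gray: np.ndarray, next_gray: np.ndarray, box):
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    x, y, w, h = box
    return flow[y:y+h, x:x+w].reshape(-1, 2).mean(axis=0)   # (dx, dy)

# Toy frames: a smooth texture shifted 2 pixels to the right.
xx, yy = np.meshgrid(np.arange(160), np.arange(120))
prev_f = ((np.sin(xx / 8.0) + np.cos(yy / 8.0)) * 60 + 128).astype(np.uint8)
next_f = np.roll(prev_f, 2, axis=1)
print(region_motion(prev_f, next_f, (40, 30, 60, 40)))   # roughly (2, 0)
```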