Data-based animation


Technology Trends for Motion Synthesis and Control of 3D Character

  • Choi, Jong-In
    • Journal of the Korea Society of Computer and Information, v.24 no.4, pp.19-26, 2019
  • In this study, we survey techniques for synthesizing and controlling the motion of 3D animated characters and discuss the direction in which the technology is developing. Character animation has evolved along two lines: data-based methods and physics-based methods. Keyframe-based animation generation became practical with advances in hardware, motion capture devices came into use, and various techniques for effectively editing the captured motion data have appeared. In parallel, physics-based animation techniques have emerged that generate realistic character motion through physically grounded numerical optimization. Recently, animation techniques using machine learning have opened new possibilities for characters that the user can control in real time, and further development is expected.

Natural 3D Lip-Synch Animation Based on Korean Phonemic Data

  • Jung, Il-Hong; Kim, Eun-Ji
    • Journal of Digital Contents Society, v.9 no.2, pp.331-339, 2008
  • This paper presents a highly efficient and accurate system for producing animation key data for 3D lip-synch animation. The system automatically extracts Korean phonemes from sound and text data and then computes animation key data from the segmented phonemes. This key data drives the 3D lip-synch animation system developed herein as well as commercial 3D facial animation systems. Conventional 3D lip-synch animation systems segment the sound data into phonemes of the English phonemic system and produce lip-synch animation key data from those segments. One drawback of this method is that it produces unnatural animation for Korean content; another is that it requires supplementary manual work. In this paper, we propose a 3D lip-synch animation system that automatically segments sound and text data into phonemes of the Korean phonemic system and produces natural lip-synch animation from the segmented phonemes.
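The phoneme-to-key-data step described above can be pictured as a small mapping pass: each segmented phoneme, with its timing, selects a lip shape and emits keyframes. A minimal Python sketch follows; the phoneme symbols, viseme names, and mapping table are illustrative assumptions, not the paper's actual data.

```python
# Hypothetical phoneme-to-viseme mapping; the paper derives its key data
# from the full Korean phonemic system, which is richer than this table.
PHONEME_TO_VISEME = {
    'ㅏ': 'open_wide',   # vowels select mouth openness/rounding
    'ㅗ': 'round',
    'ㅜ': 'round_narrow',
    'ㅁ': 'closed',      # bilabial consonants close the lips
    'ㅂ': 'closed',
    'ㅅ': 'narrow',
}

def phonemes_to_keys(segments):
    """segments: list of (start_sec, end_sec, phoneme) from the segmenter.

    Returns a list of animation keys, one lip shape held per phoneme span.
    """
    keys = []
    for start, end, phoneme in segments:
        shape = PHONEME_TO_VISEME.get(phoneme, 'neutral')
        keys.append({'time': start, 'shape': shape})
        keys.append({'time': end, 'shape': shape})
    return keys

print(phonemes_to_keys([(0.00, 0.12, 'ㅁ'), (0.12, 0.30, 'ㅏ')]))
```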


An Interactive Character Animation and Data Management Tool

  • Lee, Min-Geun; Lee, Myeong-Won
    • The KIPS Transactions: Part A, v.8A no.1, pp.63-69, 2001
  • In this paper, we present an interactive 3D character modeling and animation tool that includes data management facilities for editing animation. It provides an animation editor that changes animation sequences according to modifications of the 3D object's structure made in the object structure editor. The tool can produce motion data independently of any modeling tool, including our own. Unlike conventional 3D graphics tools, which model objects from geometrically calculated data, our tool builds 3D geometry and animation data by interactively approximating the real object from 2D images. Some applications do not require a precise representation, but rather an easy way to obtain an approximate model that looks similar to the real object; our tool is appropriate for such applications. This paper focuses on data management that enhances automation and convenience when editing a motion or mapping a motion onto another character.


Study of Script Conversion for Data Extraction of Constrained Objects

  • Choi, Chul Young
    • International Journal of Internet, Broadcasting and Communication, v.14 no.3, pp.155-160, 2022
  • In recent years, Unreal Engine has increasingly been included in the animation pipelines of studios. Such a pipeline has more than one main software package, so transferring data accurately between that software and Unreal Engine is very important. For animation data, not only the character's animation but also the animation of objects interacting with the character must be produced and transferred individually. Most objects that interact with a character are constrained to some part of the character. In this paper, I specify a production process for extracting the animation data of constrained objects and analyze why users find it difficult to follow because of the complexity of its rules. Based on a flowchart laid out for user convenience, I then implemented a Python script to demonstrate that convenience. Finally, comparing the results produced by following the manual flowchart with the results produced by the script command showed that the data were consistent.
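The extraction step the paper automates can be approximated with a short Maya Python sketch: bake the constrained object's channels to plain keyframes so the motion survives export (e.g., to FBX for Unreal Engine) without the constraint. This is a minimal illustration of the idea, not the paper's actual script; the object name is a placeholder, and the code must run inside Autodesk Maya.

```python
# Bake a constrained object's animation so it can be exported standalone.
import maya.cmds as cmds

def bake_constrained_object(node, start=None, end=None):
    """Bake translate/rotate keys onto a constrained object."""
    if start is None:
        start = cmds.playbackOptions(q=True, minTime=True)
    if end is None:
        end = cmds.playbackOptions(q=True, maxTime=True)
    # Sample every frame; simulation=True evaluates the constraint per frame.
    cmds.bakeResults(node,
                     attribute=['tx', 'ty', 'tz', 'rx', 'ry', 'rz'],
                     time=(start, end),
                     simulation=True)
    # The motion is now in keyframes, so the constraint nodes can go.
    for constraint in cmds.listRelatives(node, type='constraint') or []:
        cmds.delete(constraint)

bake_constrained_object('prop_sword')  # placeholder object name
```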

Template-Based Reconstruction of Surface Mesh Animation from Point Cloud Animation

  • Park, Sang Il; Lim, Seong-Jae
    • ETRI Journal, v.36 no.6, pp.1008-1015, 2014
  • In this paper, we present a method for reconstructing a surface mesh animation sequence from point cloud animation data. We focus mainly on articulated subjects, whose motion can be roughly described by an internal skeletal structure. The point cloud data is assumed to be captured independently, without any inter-frame correspondence information. Given a template model that resembles the subject, our basic idea for reconstructing the mesh animation is to deform the template to fit the point cloud frame by frame while maintaining inter-frame coherence. We first estimate the skeletal motion from the point cloud data. After applying the skeletal motion to the template surface, we refine the surface to fit the point cloud data. We demonstrate the viability of the method by using it to reconstruct a fast dancing motion.
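The per-frame refinement stage can be viewed as an ICP-style loop that pulls template vertices toward their nearest captured points. The sketch below is a bare-bones version under that reading: it assumes numpy arrays for the geometry and omits the skeletal deformation, smoothness, and inter-frame coherence terms the paper relies on.

```python
import numpy as np
from scipy.spatial import cKDTree

def refine_to_point_cloud(template_verts, point_cloud, iters=10, step=0.5):
    """Pull template vertices toward their nearest cloud points.

    template_verts: (V, 3) array, already posed by the skeletal motion
    point_cloud:    (P, 3) array for the current frame
    """
    verts = template_verts.copy()
    tree = cKDTree(point_cloud)  # nearest-neighbor index over this frame's points
    for _ in range(iters):
        _, idx = tree.query(verts)                  # closest captured point per vertex
        verts += step * (point_cloud[idx] - verts)  # partial step toward it
    return verts
```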

Subdivided Facial Muscle Modeling Based on MPEG-4 FAP

  • 이인서; 박운기; 전병우
    • Proceedings of the IEEK Conference, 2000.09a, pp.631-634, 2000
  • In this paper, we propose a method for implementing a system that decodes parameter data based on the Facial Animation Parameters (FAP) developed by the MPEG-4 Synthetic/Natural Hybrid Coding (SNHC) subcommittee. The decoded data is displayed according to the FAPs by a human muscle model animation engine. The proposed model has the basic properties of human skin, specified by an energy functional, for realistic facial animation.
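At its simplest, decoding FAP data means scaling each incoming parameter by the face's FAPUs (Facial Animation Parameter Units) and routing the result to a muscle activation. The FAP ids below (3 = open_jaw, 6 = stretch_l_cornerlip) follow the MPEG-4 specification, but the FAPU values and the FAP-to-muscle mapping are placeholder assumptions, not the paper's engine.

```python
# Illustrative FAP decoding: parameter value x FAPU -> muscle activation.
FAPU = {'MNS': 0.0006, 'MW': 0.003}  # example per-face unit sizes

FAP_TO_MUSCLE = {
    3: ('jaw_open', 'MNS'),      # FAP 3: open_jaw
    6: ('lip_stretch_l', 'MW'),  # FAP 6: stretch_l_cornerlip
}

def decode_fap_frame(fap_values):
    """fap_values: dict of FAP id -> integer value for one frame."""
    activations = {}
    for fap_id, value in fap_values.items():
        muscle, unit = FAP_TO_MUSCLE.get(fap_id, (None, None))
        if muscle is not None:
            activations[muscle] = value * FAPU[unit]
    return activations

print(decode_fap_frame({3: 200, 6: -50}))
```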


Jitter Correction of the Face Motion Capture Data for 3D Animation

  • Lee, Junsang; Han, Soowhan; Lee, Imgeun
    • Journal of the Korea Society of Computer and Information, v.20 no.9, pp.39-45, 2015
  • Along with the advance of digital technology, various methods have been adopted for capturing 3D animation data. In the 3D animation production market in particular, motion capture systems are widely used in making films, games, and animation content. The technique quickly tracks an actor's movements and translates the data for use as an animated character's motion, so characters can mimic natural motion and gestures, and even facial expressions. However, conventional motion capture systems demand strict conditions regarding space, lighting, the number of cameras, and so on. Furthermore, the data acquired from a motion capture system is frequently corrupted by noise, drift, and the surrounding environment. In this paper, we introduce post-production techniques for stabilizing the jitter in face motion capture data from a low-cost handy system based on Kinect.
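Post-production stabilization of this kind typically low-pass filters each captured channel over time. The sketch below uses a Savitzky-Golay filter as one plausible choice; the paper's actual correction method is not detailed here, so treat this as an assumption about the general approach.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth_capture(data, window=9, order=2):
    """Smooth each animation channel over time to suppress per-frame jitter.

    data: (frames, channels) array of captured values, e.g. facial markers.
    """
    return savgol_filter(data, window_length=window, polyorder=order, axis=0)

# Example: 300 frames of 20 noisy channels.
noisy = np.cumsum(np.random.randn(300, 20) * 0.01, axis=0)
clean = smooth_capture(noisy)
```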

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok; Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.8, pp.3473-3487, 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two kinds of example models, key visemes and key expressions, are used for lip synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent the basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on this transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes. Given an input parameter vector, the key expressions are blended by scattered data interpolation. During synthesis, an importance-based scheme combines lip synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation, with high accuracy (over 90%) in speech recognition.
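The synthesis step, interpolating key visemes along the phonetic transcript and layering an emotion pose on top, can be sketched as below. The pose-vector representation, the transcript format, and the plain linear blend are simplifying assumptions; the paper's importance-based combination scheme is richer than this.

```python
import numpy as np

def synthesize_pose(t, timed_visemes, key_visemes, expression, weight):
    """Blend the lip shape at time t with an emotion pose.

    timed_visemes: sorted list of (time_sec, viseme_id) from the transcript
    key_visemes:   dict of viseme_id -> pose vector (np.ndarray)
    expression:    pose vector for the current emotion
    weight:        0..1 strength of the expression layer
    """
    times = [tv[0] for tv in timed_visemes]
    i = max(0, int(np.searchsorted(times, t)) - 1)   # left bracketing viseme
    j = min(i + 1, len(timed_visemes) - 1)           # right bracketing viseme
    (t0, v0), (t1, v1) = timed_visemes[i], timed_visemes[j]
    a = 0.0 if t1 == t0 else float(np.clip((t - t0) / (t1 - t0), 0.0, 1.0))
    lips = (1 - a) * key_visemes[v0] + a * key_visemes[v1]
    return (1 - weight) * lips + weight * expression  # simple blend, not importance-based
```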

Data-driven Facial Animation Using Sketch Interface

  • Ju, Eun-Jung; Ahn, Soh-Min; Lee, Je-Hee
    • Journal of the Korea Computer Graphics Society, v.13 no.3, pp.11-18, 2007
  • Creating stylistic facial animation is one of the most important problems in character animation. Traditionally, facial animation is created manually by animators or captured using motion capture systems, but this process is difficult and labor-intensive. In this work, we present an intuitive, easy-to-use, sketch-based user interface that facilitates the creation of facial animation, together with a key-frame interpolation method that uses facial capture data. Our system allows the user to create expressive facial speech animation easily and rapidly.


Case Study of Animation Production using 'MetaHuman'

  • Choi, Chul Young
    • International Journal of Advanced Smart Convergence, v.11 no.3, pp.150-156, 2022
  • Recently, the use of Unreal Engine for animation production has been increasing, and Unreal Engine's 'MetaHuman Creator' makes it easier to apply realistic characters to animation. We therefore produced animations using 'MetaHuman' to verify its effectiveness and its differences from a production process using only Maya software. To increase the efficiency of the production process, the animation stage was done in Maya. We then imported the animation data into Unreal Engine, went through the process of completing the animations, and checked for problems. We also compared animation made with realistic 'MetaHuman' characters against work using cartoon-style characters. Using the same camera lenses for the realistic character animation and the cartoon character animation, both produced from the same scenario, was judged to be the cause of the realistic animation's unconvincing screen composition. The analysis revealed that choosing camera lenses for realistic animation requires a different approach from existing animation practice.