• Title/Abstract/Keyword: Data-based animation

Search results: 223 items (processing time: 0.022 s)

Technology Trends for Motion Synthesis and Control of 3D Character

  • Choi, Jong-In
    • 한국컴퓨터정보학회논문지 / Vol. 24, No. 4 / pp. 19-26 / 2019
  • In this study, we survey techniques for motion synthesis and control in 3D character animation and discuss directions for the technology's development. Character animation has evolved along two lines: data-driven methods and physics-based methods. Keyframe-based animation became practical with advances in hardware, and motion capture devices came into use, followed by various techniques for effectively editing the captured motion data. At the same time, physics-based animation techniques emerged, which generate realistic character motion through physically optimized numerical computation. Recently, animation techniques using machine learning have shown new possibilities for characters that users can control in real time, and further development is expected.

한국어 음소를 이용한 자연스러운 3D 립싱크 애니메이션 (Natural 3D Lip-Synch Animation Based on Korean Phonemic Data)

  • 정일홍;김은지
    • 디지털콘텐츠학회 논문지 / Vol. 9, No. 2 / pp. 331-339 / 2008
  • In this paper, we propose an efficient and accurate system for generating the key data required for 3D lip-synch animation. The system extracts Korean phonemes from uttered speech data and text information, and uses the segmented phonemes to compute accurate and natural lip-animation key data. This key data can be used not only in the 3D lip-synch animation system developed in this paper but also in commercial 3D facial animation systems. Conventional 3D lip-synch animation systems segment speech data into English phonemes and generate lip-synch animation key data from the segmented phonemes. The drawback of this approach is that it produces unnatural animation for Korean content, which then requires additional manual work. In this paper, we propose a 3D lip-synch animation system that extracts Korean phonemes from speech data and text information and uses the segmented phonemes to generate natural lip-synch animation.

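The phoneme extraction such a system performs starts from decomposing Hangul syllables into their initial, medial, and final jamo. A minimal sketch of that decomposition step, using the standard Unicode arithmetic for precomposed Hangul (the paper's alignment of phonemes with the recorded speech is not shown here):

```python
# Decompose a precomposed Hangul syllable (U+AC00..U+D7A3) into its
# initial (choseong), medial (jungseong), and final (jongseong) jamo.
CHO = ["ㄱ","ㄲ","ㄴ","ㄷ","ㄸ","ㄹ","ㅁ","ㅂ","ㅃ","ㅅ","ㅆ","ㅇ","ㅈ","ㅉ","ㅊ","ㅋ","ㅌ","ㅍ","ㅎ"]
JUNG = ["ㅏ","ㅐ","ㅑ","ㅒ","ㅓ","ㅔ","ㅕ","ㅖ","ㅗ","ㅘ","ㅙ","ㅚ","ㅛ","ㅜ","ㅝ","ㅞ","ㅟ","ㅠ","ㅡ","ㅢ","ㅣ"]
JONG = ["","ㄱ","ㄲ","ㄳ","ㄴ","ㄵ","ㄶ","ㄷ","ㄹ","ㄺ","ㄻ","ㄼ","ㄽ","ㄾ","ㄿ","ㅀ","ㅁ","ㅂ","ㅄ","ㅅ","ㅆ","ㅇ","ㅈ","ㅊ","ㅋ","ㅌ","ㅍ","ㅎ"]

def decompose(syllable):
    """Split one Hangul syllable into (initial, medial, final) jamo;
    the final is "" when the syllable has no jongseong."""
    code = ord(syllable) - 0xAC00            # offset into the syllable block
    cho, rest = divmod(code, 21 * 28)        # 21 medials x 28 finals per initial
    jung, jong = divmod(rest, 28)
    return CHO[cho], JUNG[jung], JONG[jong]

print(decompose("말"))  # ('ㅁ', 'ㅏ', 'ㄹ')
```

Each extracted jamo can then be mapped to a lip-shape key; the mapping itself and the timing alignment against the speech data are the system's own contribution and are not reproduced here.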

대화형 캐릭터 애니메이션 생성과 데이터 관리 도구 (An Interactive Character Animation and Data Management Tool)

  • 이민근;이명원
    • 정보처리학회논문지A / Vol. 8A, No. 1 / pp. 63-69 / 2001
  • In this paper, we present an interactive 3D character modeling and animation system, including a data management tool for editing the animation. It includes an animation editor for changing animation sequences according to the structure of the 3D object modified in the object structure editor. The animation tool can produce motion data independently of any modeling tool, including our own. Unlike conventional 3D graphics tools that model objects from geometrically calculated data, our tool models 3D geometry and animation data by interactively approximating the real object from 2D images. Some applications do not need a precise representation but rather an easy way to obtain an approximate model that looks similar to the real object; our tool is appropriate for such applications. This paper focuses on data management for enhancing automation and convenience when editing a motion or when mapping a motion to another character.


Study of Script Conversion for Data Extraction of Constrained Objects

  • Choi, Chul Young
    • International Journal of Internet, Broadcasting and Communication / Vol. 14, No. 3 / pp. 155-160 / 2022
  • In recent years, Unreal Engine has increasingly been included in studio animation pipelines. In such cases, more than one main software package is involved, and it is very important to transfer data accurately between that software and Unreal Engine. For animation data, not only the character's animation but also the animation of objects interacting with the character must be produced and transferred individually. Most objects that interact with the character are constrained to a part of the character. In this paper, I formalize the production process for extracting the animation data of constrained objects and analyze why users experience difficulties with the complexity of the rules when carrying out that process. Based on a flowchart defined for user convenience, I then created a program using a Python script to demonstrate that convenience. Finally, by comparing the results generated by following the manual flowchart with the results generated through the script command, the data were found to be consistent.
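The core of such a transfer is a bake step: sampling the constrained object's transform at every frame so that no constraint is needed after export. A minimal pure-Python sketch of that step, assuming toy 2D translate data and an illustrative function name (not the paper's actual Maya script):

```python
# Bake world-space keys for an object constrained to a moving parent.
# parent_keys: {frame: (x, y)} animation of the constraining part;
# offset: the constant constraint offset of the object from that part.

def bake_constrained(parent_keys, offset):
    """For each frame, compose the parent's transform with the constraint
    offset to get the object's own world-space key, so the constraint can
    be discarded before transferring the data to another application."""
    baked = {}
    for frame, (px, py) in sorted(parent_keys.items()):
        baked[frame] = (px + offset[0], py + offset[1])
    return baked

# A prop held in the character's hand, offset by (1.0, 0.5):
hand_keys = {1: (0.0, 0.0), 2: (2.0, 1.0), 3: (4.0, 2.0)}
prop_keys = bake_constrained(hand_keys, (1.0, 0.5))
print(prop_keys[3])  # (5.0, 2.5)
```

A real pipeline would compose full 4x4 matrices and write the result to an interchange format such as FBX, but the per-frame sampling logic is the same.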

Template-Based Reconstruction of Surface Mesh Animation from Point Cloud Animation

  • Park, Sang Il;Lim, Seong-Jae
    • ETRI Journal / Vol. 36, No. 6 / pp. 1008-1015 / 2014
  • In this paper, we present a method for reconstructing a surface mesh animation sequence from point cloud animation data. We mainly focus on articulated subjects, whose motion can be roughly described by an internal skeletal structure. The point cloud data is assumed to be captured independently, without any inter-frame correspondence information. Using a template model that resembles the given subject, our basic idea for reconstructing the mesh animation is to deform the template model to fit the point cloud on a frame-by-frame basis while maintaining inter-frame coherence. We first estimate the skeletal motion from the point cloud data. After applying the skeletal motion to the template surface, we refine the surface to fit the point cloud data. We demonstrate the viability of the method by applying it to reconstruct a fast dancing motion.
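The per-frame refinement step, pulling the posed template toward the point cloud, can be sketched in a much-simplified form: 2D points, plain nearest-point pulls, and none of the regularization or inter-frame coherence terms the paper relies on (all names and parameters here are illustrative):

```python
# Toy refine step: iteratively move each template vertex a fraction of the
# way toward its closest point in the captured cloud for this frame.

def refine_to_cloud(template, cloud, step=0.5, iters=10):
    """template: list of (x, y) vertices after skeletal posing;
    cloud: list of (x, y) captured points for the current frame."""
    verts = [list(v) for v in template]
    for _ in range(iters):
        for v in verts:
            # closest cloud point to this vertex (brute force)
            cx, cy = min(cloud, key=lambda p: (p[0]-v[0])**2 + (p[1]-v[1])**2)
            v[0] += step * (cx - v[0])
            v[1] += step * (cy - v[1])
    return [tuple(v) for v in verts]
```

With a damping step below 1.0 each vertex converges toward its nearest sample; the paper's actual fitting additionally keeps the surface smooth and coherent across frames, which this sketch omits.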

MPEG-4 FAP 기반 세분화된 얼굴 근육 모델 구현 (Subdivided Facial Muscle Modeling based on MPEG-4 FAP)

  • 이인서;박운기;전병우
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 제13회 신호처리 합동 학술대회 논문집 / pp. 631-634 / 2000
  • In this paper, we propose a method for implementing a system that decodes parameter data based on the Facial Animation Parameter (FAP) developed by the MPEG-4 Synthetic/Natural Hybrid Coding (SNHC) subcommittee. The data is displayed according to the FAP by a human muscle model animation engine. The proposed model has the basic properties of human skin, specified by the energy functional, for realistic facial animation.


Jitter Correction of the Face Motion Capture Data for 3D Animation

  • Lee, Junsang;Han, Soowhan;Lee, Imgeun
    • 한국컴퓨터정보학회논문지 / Vol. 20, No. 9 / pp. 39-45 / 2015
  • Along with the advance of digital technology, various methods have been adopted for capturing 3D animation data. In the 3D animation production market in particular, motion capture systems are widely used to make films, games, and animation content. The technique quickly tracks the movements of an actor and translates the data for use as the animated character's motion, so characters can mimic natural motion and gesture, even facial expression. However, conventional motion capture systems require demanding conditions, such as space, lighting, and the number of cameras. Furthermore, the data acquired from a motion capture system is frequently corrupted by noise, drift, and the surrounding environment. In this paper, we introduce post-production techniques for stabilizing the jitter in motion capture data from a low-cost handy system based on Kinect.
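One common post-production choice for stabilizing such jitter is low-pass filtering each captured channel. A minimal sketch with a moving-average filter (the paper's exact stabilization method is not specified here, and the window size is illustrative):

```python
# Smooth one scalar channel of face-marker motion capture data with a
# centered moving average; the window shrinks at the sequence boundaries.

def smooth(channel, window=5):
    """channel: list of per-frame float samples for one marker coordinate."""
    half = window // 2
    out = []
    for i in range(len(channel)):
        lo, hi = max(0, i - half), min(len(channel), i + half + 1)
        out.append(sum(channel[lo:hi]) / (hi - lo))
    return out
```

A moving average suppresses frame-to-frame jitter at the cost of softening fast legitimate motion; production tools typically expose the window size (or a cutoff frequency) so the artist can trade the two off per channel.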

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok;Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 8 / pp. 3473-3487 / 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two kinds of example models, called key visemes and key expressions, are used for lip-synchronization and facial expressions, respectively. The key visemes represent the lip shapes of phonemes such as vowels and consonants, while the key expressions represent the basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on the phonetic transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During the synthesis process, an importance-based scheme is introduced to combine both lip-synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation, with high accuracy (over 90%) in speech recognition.
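The synthesis step described above, interpolating the sequence of key visemes along the phonetic transcript, can be sketched as follows (the two-vertex lip shapes and viseme names are toy data; the scattered data interpolation of key expressions and the importance-based combination are not shown):

```python
# Produce one animation frame by linearly interpolating between the key
# visemes adjacent to time t in the phonetic transcript.

def interp_frame(t, timeline, visemes):
    """timeline: sorted list of (time, viseme_name) from the transcript;
    visemes: {name: list of vertex coordinates}. Returns blended vertices."""
    for (t0, a), (t1, b) in zip(timeline, timeline[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)            # normalized position in segment
            return [(1 - w) * x + w * y
                    for x, y in zip(visemes[a], visemes[b])]
    return visemes[timeline[-1][1]][:]          # hold the last viseme

visemes = {"A": [0.0, 1.0], "M": [1.0, 0.0]}    # toy 2-vertex lip shapes
timeline = [(0.0, "A"), (1.0, "M")]
print(interp_frame(0.5, timeline, visemes))     # [0.5, 0.5]
```

The paper interpolates full face models rather than two scalars, and blends the result with emotion expressions per frame, but the per-segment weighting is the same idea.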

스케치 인터페이스를 이용한 데이터 기반 얼굴 애니메이션 (Data-driven Facial Animation Using Sketch Interface)

  • 주은정;안소민;이제희
    • 한국컴퓨터그래픽스학회논문지 / Vol. 13, No. 3 / pp. 11-18 / 2007
  • Generating natural facial animation is an important problem in character animation. Until now, facial animation has been created either through the manual work of professional animators using 3D modeling programs or by directly motion-capturing the required movement data. These approaches, however, are not easily accessible to ordinary users and demand considerable time and cost. In this work, to create realistic and natural facial animation, we employ an intuitive sketch interface that anyone can use easily. We build a system that generates key frames through this interface and propose a method that interpolates the key frames using transition information between facial expressions extracted from facial capture data. Our system allows ordinary users, not only professional animators, to quickly and easily create talking facial animation that expresses a variety of emotions.


Case Study of Animation Production using 'MetaHuman'

  • Choi, Chul Young
    • International journal of advanced smart convergence / Vol. 11, No. 3 / pp. 150-156 / 2022
  • Recently, the use of Unreal Engine for animation production has been increasing. In this context, Unreal Engine's 'MetaHuman Creator' makes it easier to apply realistic characters to animation. We therefore produced animations using 'MetaHuman' and examined how effective the workflow is and how it differs from a production process using only Maya software. To increase the efficiency of the production process, the animation stage was done in Maya. We then imported the animation data into Unreal Engine, carried out the remaining animation work there, and checked for any problems along the way. We also compared animation made with realistic 'MetaHuman' characters against work using cartoon-style characters. Using the same camera lenses in the realistic and cartoon character animations, both produced from the same scenario, was judged to be a cause of the unconvincing screen composition in the realistic animation. The analysis revealed that selecting camera lenses for realistic animation production requires a different approach from conventional animation lens selection.