• Title/Summary/Keyword: Animation Techniques (애니메이션 기법)

Editing Graphical Objects using Noise Editing (노이즈 편집을 이용한 그래픽스 객체 편집)

  • Yoon Jong-Chul; Lee In-Kwon; Choi Jung-Ju
    • Journal of KIISE: Computer Systems and Theory, v.32 no.11_12, pp.675-681, 2005
  • Noise adds randomness to graphical applications and is used to create procedural textures and shapes as well as realistic animations that mimic natural phenomena. In this paper, we suggest a method to edit noise values so that they satisfy user-specified constraints while maintaining the inherent statistical features of the noise function. Noise editing uses optimization to minimize the difference between the statistical characteristics of the ideal and edited versions of a noise source. Using our editing method, detailed control of animation and shape data that include noise becomes possible.
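
To make the optimization concrete, here is a minimal Python sketch of constraint-driven noise editing, assuming the preserved statistics are simply the mean and variance of the samples; the feature set, weights, and solver are illustrative stand-ins for the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 64)           # original 1D noise samples
demands = {10: 1.5, 40: -0.8}              # user constraints: index -> value

def objective(edited):
    # Penalize drift of the edited statistics from the original ones,
    # plus a small term keeping the signal close to the source noise.
    d_mean = (edited.mean() - noise.mean()) ** 2
    d_var = (edited.var() - noise.var()) ** 2
    d_sig = np.mean((edited - noise) ** 2)
    return d_mean + d_var + 0.1 * d_sig

# Hard equality constraints encode the user's demands exactly.
cons = [{"type": "eq", "fun": lambda x, i=i, v=v: x[i] - v}
        for i, v in demands.items()]

edited = minimize(objective, noise.copy(), constraints=cons).x
```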

A Case Study of Making Logo Animation Using Particles (파티클을 이용한 로고 애니메이션 제작 사례 연구)

  • Jung Jai-Min; Suk Hae-Jung; Oh Gyu-Hwan
    • Journal of Game and Entertainment, v.2 no.3, pp.15-23, 2006
  • In this paper, we present a case study of making logo animation with particles in MAYA, a 3D computer graphics package from Autodesk Inc. We composite visuals that reproduce effects similar to those in an informational movie of the Torino 2006 Winter Olympic Games. After analyzing the movie, we model a human body part as a set of cubes and animate the cubes to produce dynamic visuals resembling those of the movie. The whole system is implemented with MAYA MEL scripts and generates a variety of visual effects by adjusting options exposed in the designed UI.
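
As a language-neutral illustration of the cube-swarm idea (the paper drives Maya cubes through MEL instead), the Python sketch below treats each cube as a particle pulled by a damped spring toward its slot in the logo layout; the grid, gains, and frame count are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
targets = np.array([[x, y, 0.0] for x in range(8) for y in range(4)],
                   dtype=float)            # logo layout as a grid of cube slots
pos = rng.uniform(-10.0, 10.0, targets.shape)  # scattered start positions
vel = np.zeros_like(pos)

for frame in range(120):                   # simple per-frame integration
    vel += 0.05 * (targets - pos)          # spring pull toward the logo
    vel *= 0.9                             # damping for a smooth settle
    pos += vel                             # cubes gradually form the logo
```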

On-line Motion Synthesis Using Analytically Differentiable System Dynamics (분석적으로 미분 가능한 시스템 동역학을 이용한 온라인 동작 합성 기법)

  • Han, Daseong; Noh, Junyong; Shin, Joseph S.
    • Journal of the Korea Computer Graphics Society, v.25 no.3, pp.133-142, 2019
  • In physics-based character animation, trajectory optimization has been widely adopted for automatic motion synthesis, predicting an optimal sequence of future states of the character from its system dynamics model. In general, the system dynamics model is neither in closed form nor differentiable when it handles the contact dynamics between a character and the environment with rigid body collisions. Employing smoothed contact dynamics, researchers have suggested efficient trajectory optimization techniques based on numerical differentiation of the resulting system dynamics. However, the numerical derivative of the system dynamics model can be inaccurate, unlike its analytical counterpart, which may affect the stability of trajectory optimization. In this paper, we propose a novel method to derive the closed-form derivative of the system dynamics by properly approximating the contact model. Based on the resulting derivatives, we also present a model predictive control (MPC)-based motion synthesis framework that robustly controls the motion of a biped character according to on-line user input without any example motion data.
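
As a minimal illustration of why smoothing makes contact analytically differentiable, the sketch below replaces a hard contact force with a softplus penalty on the signed gap; the actual contact approximation and constants in the paper may differ.

```python
import numpy as np

K, BETA = 500.0, 20.0                      # stiffness and smoothing sharpness

def contact_force(d):
    # Smooth approximation of K * max(0, -d): the normal force turns on
    # gradually as the signed gap d goes negative (penetration).
    return K * np.logaddexp(0.0, -BETA * d) / BETA

def contact_force_grad(d):
    # Closed-form derivative of the force above, usable directly in
    # gradient-based trajectory optimization instead of finite differences.
    return -K / (1.0 + np.exp(BETA * d))
```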

An Implementation of virtual traffic lamp system using VR authoring Tool (가상현실 저작툴을 이용한 가상 신호등 시스템 설계 및 구현)

  • 김외열
    • Proceedings of the Korea Multimedia Society Conference, 2001.11a, pp.531-535, 2001
  • The adoption of virtual reality techniques on the Internet continues to grow because, combined with the Internet's broad reach, users can access them easily and obtain a wide variety of information. VRML has established itself as the standard language for building virtual worlds on the Internet. The goal of this study is to design and implement a virtual traffic light system using ISA (Internet Scene Assembler) and ISB (Internet Scene Builder), VRML authoring tools that are currently in wide use. To simulate the virtual traffic light system, animation techniques are combined with logic functions such as Sensor nodes and Time_Bool_Converter, and the simulation is assembled by connecting them in a routing diagram.
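
As a rough stand-in for the Sensor node and Time_Bool_Converter routing, the Python sketch below expresses the same traffic-light timing as a plain state machine; the phase durations are illustrative.

```python
PHASES = [("green", 5.0), ("yellow", 2.0), ("red", 5.0)]
CYCLE = sum(duration for _, duration in PHASES)   # full signal period

def lamp_state(t):
    """Return the lamp that is lit at simulation time t (seconds)."""
    t %= CYCLE
    for state, duration in PHASES:
        if t < duration:
            return state                   # would be routed to the lamp geometry
        t -= duration

for t in range(13):                        # one full cycle plus the wrap-around
    print(t, lamp_state(t))
```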

A Study on the Morphologic Features of Characters in Animation CF (CF에 등장하는 애니메이션 캐릭터의 조형적 특성에 관한 연구)

  • Park, Chan-Ik
    • Proceedings of the KAIS Fall Conference, 2010.11b, pp.773-776, 2010
  • A recent trend in CFs (TV commercials) is that character models, rather than popular stars, appear in large numbers and capture viewers' attention. Characters are used in CFs because of the distinctiveness of the expression technique and its unlimited expressive possibilities: they deliver information efficiently and meet the need for differentiation, helping an ad stand out even slightly amid a flood of similar advertising. This study examines how characters are currently rendered with various techniques in domestic CFs and suggests expression methods that can maximize advertising effectiveness.

A Study on the Dynamic Painterly Stroke Generation for 3D Animation (3차원 애니메이션을 위한 회화적 스트로크의 동적 관리 기법)

  • Lee, Hyo-Keun; Ryoo, Seung-Taek; Yoon, Kyung-Hyun
    • Journal of Korea Multimedia Society, v.8 no.4, pp.554-568, 2005
  • We suggest a dynamic stroke generation algorithm that provides frame-to-frame coherence in 3D non-photorealistic animations. We use a 3D particle system to eliminate the visual popping effect in the animated scene. Since the particles are located on the 3D object's surface, coherence is maintained when the object or the camera moves in the scene, including when the camera zooms in or out. However, the brush strokes on the surface then zoom in and out as well, and strokes that become too large or too small cannot represent hand-crafted brush strokes. To remove this problem, we suggest a stroke generation algorithm that dynamically maintains the number and size of brush strokes during camera zoom.
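
The Python sketch below isolates the zoom-adaptive idea, assuming each stroke is a particle whose screen-space size is its world-space size divided by the camera distance; the split/cull thresholds are illustrative and the surface-placement details are omitted.

```python
import random

MIN_PX, MAX_PX = 4.0, 16.0                 # acceptable screen-space stroke sizes

def update_strokes(sizes, cam_dist):
    """sizes: world-space stroke sizes; returns the list adapted to the zoom."""
    adapted = []
    for size in sizes:
        px = size / cam_dist               # projected (screen-space) size
        if px > MAX_PX:                    # zoom in: split oversized strokes
            adapted += [size * 0.5, size * 0.5]
        elif px < MIN_PX and random.random() < 0.5:
            continue                       # zoom out: cull some tiny strokes
        else:
            adapted.append(size)
    return adapted
```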

Interactive Facial Expression Animation of Motion Data using CCA (CCA 투영기법을 사용한 모션 데이터의 대화식 얼굴 표정 애니메이션)

  • Kim Sung-Ho
    • Journal of Internet Computing and Services, v.6 no.1, pp.85-93, 2005
  • This paper describes how to distribute a large volume of high-dimensional facial expression data over a suitable space and produce facial expression animations by selecting expressions while the animator navigates this space in real time. We constructed the facial expression space from about 2,400 captured facial expression frames, by calculating the shortest distance between every pair of expressions. Each expression is represented by a state vector derived from the matrix of distances between facial markers; two expressions are considered adjacent when the linear distance between their state vectors is below a chosen threshold, and that linear distance is taken as their manifold distance. Once the adjacent distances are fixed, Floyd's algorithm chains them to yield the shortest (manifold) distance between any two expressions. We then use CCA (Curvilinear Component Analysis) to project the multi-dimensional expression space into two dimensions. While navigating this two-dimensional space through the user interface, animators produce facial animations in real time.
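
The adjacency-plus-Floyd construction can be sketched in a few lines of Python, using random vectors as stand-ins for the expression state vectors and a percentile-based adjacency threshold; the final CCA projection to 2D is omitted.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import floyd_warshall

frames = np.random.default_rng(0).normal(size=(200, 30))  # stand-in state vectors
linear = cdist(frames, frames)             # pairwise linear distances
eps = np.percentile(linear, 5)             # adjacency threshold
graph = np.where(linear < eps, linear, np.inf)  # keep only adjacent pairs
manifold = floyd_warshall(graph)           # shortest (manifold) distances;
                                           # unreachable pairs stay infinite
```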

Auto Setup Method of Best Expression Transfer Path at the Space of Facial Expressions (얼굴 표정공간에서 최적의 표정전이경로 자동 설정 방법)

  • Kim, Sung-Ho
    • The KIPS Transactions: Part A, v.14A no.2, pp.85-90, 2007
  • This paper presents a facial animation and expression control method that enables the animator to select any facial frames from the facial expression space, whose expression transfer paths the system sets up automatically. Our system creates the facial expression space from approximately 2,500 captured facial frames. To create the space, we measure the distances between pairs of feature points on the face and visualize the space of expressions in 2D using multidimensional scaling (MDS). To set up the most suitable transfer paths, we classify the expression space into four fields around any given expression state. The system finds the nearest expression state in each field and transfers from the current expression to the nearest of these states, then continues with the second, third, and fourth nearest states until the path is complete. When the animator selects key frames from the expression space, the system sets up the transfer paths automatically. We let animators use the system to create example animations and to control facial expressions, and we evaluate the system based on the results.
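
A minimal Python sketch of the transfer-path idea, simplifying the four-field classification into a greedy hop to the nearest unvisited state biased toward the selected key frame; the points, indices, and bias rule are illustrative.

```python
import numpy as np

def transfer_path(points, start, goal):
    """Greedy walk over 2D expression points from start index to goal index."""
    path, current, visited = [start], start, {start}
    goal_dist = np.linalg.norm(points - points[goal], axis=1)
    while current != goal:
        step = np.linalg.norm(points - points[current], axis=1)
        step[list(visited)] = np.inf       # never revisit an expression state
        current = int(np.argmin(step + goal_dist))  # near hop, biased to goal
        path.append(current)
        visited.add(current)
    return path

pts = np.random.default_rng(1).uniform(size=(50, 2))  # stand-in MDS points
print(transfer_path(pts, start=0, goal=7))
```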