Physically Based Character Animation


Technology Trends for Motion Synthesis and Control of 3D Character

  • Choi, Jong-In
    • Journal of the Korea Society of Computer and Information / v.24 no.4 / pp.19-26 / 2019
  • In this study, we review techniques for generating and controlling the motion of 3D animated characters and discuss directions for the technology's development. Character animation has developed along two lines: data-based methods and physics-based methods. Keyframe-based animation became practical through advances in hardware, motion-capture devices came into use, and various techniques for effectively editing the captured motion data followed. In parallel, physics-based animation techniques emerged that generate realistic character motion through physically grounded numerical optimization. Recently, animation techniques using machine learning have shown new possibilities for characters that users can control in real time, and further development is expected.

Animating Reactive Motions for Physics-Based Character Animation

  • Jee, Hyun-Ho;Han, Jung-Hyun
    • Proceedings of the Korean HCI Society Conference / 2008.02a / pp.420-425 / 2008
  • The technique of synthesizing reactive motion in real time is important in many applications, such as computer games and virtual reality. This paper presents a dynamic motion-control technique for creating reactive motions in a physically based character animation system. The leg to move in the next step is chosen from the direction of the external disturbance force and the state of the human figure, and is then lifted through joint PD control. The target position of the foot is determined so as to balance the body without crossing the legs. Finally, a control mechanism is used to generate the reactive motion. The advantage of our method is that reactive animations can be generated without example motions.
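The joint PD control mentioned in the abstract can be sketched as follows; the gains, inertia, and target lift angle below are illustrative assumptions, not values from the paper:

```python
def pd_torque(theta, theta_dot, theta_target, kp=300.0, kd=30.0):
    """Proportional-derivative torque pulling a joint toward a target angle."""
    return kp * (theta_target - theta) - kd * theta_dot

# Semi-implicit Euler integration of a single hinge joint lifting to a target.
inertia = 0.5        # assumed moment of inertia of the leg segment
target = 0.6         # assumed lift angle (would be chosen from the disturbance direction)
dt = 1.0 / 240.0
theta, theta_dot = 0.0, 0.0
for _ in range(2000):
    tau = pd_torque(theta, theta_dot, target)
    theta_dot += (tau / inertia) * dt
    theta += theta_dot * dt
```

With overdamped gains like these, the joint settles at the target without oscillation, which is why PD servos are a common building block in physics-based character controllers.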


Transferring Skin Weights to 3D Scanned Clothes

  • Yoon, Seung-Hyun;Kim, Taejoon;Kim, Ho-Won;Lee, Jieun
    • ETRI Journal / v.38 no.6 / pp.1095-1103 / 2016
  • We present a method for transferring deformation weights of a human character to three-dimensional (3D) scanned clothes. First, clothing vertices are projected onto a character skin. Their deformation weights are determined from the barycentric coordinates of the projection points. For more complicated parts, such as shoulders and armpits, continuously moving planes are constructed and employed as projection reference planes. Clothing vertices on a plane are projected onto the intersection curve of the plane with a character skin to achieve a smooth weight transfer. The proposed method produces an initial deformation for physically based clothing simulations. We demonstrated the effectiveness of our method through several deformation results for 3D scanned clothes.
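The barycentric weight interpolation described above can be sketched as follows. This is a minimal version: the projection onto the skin and the moving reference planes for shoulders and armpits are omitted, and the triangle/weight layout is an assumption for illustration:

```python
import numpy as np

def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def transfer_weights(proj_point, tri_verts, tri_bone_weights):
    """Blend the skin weights of a triangle's vertices at the projected point.

    tri_bone_weights has shape (3, n_bones): one weight row per triangle vertex.
    """
    bary = barycentric_coords(proj_point, *tri_verts)
    return bary @ tri_bone_weights
```

Because barycentric coordinates sum to one, the interpolated weights remain a convex combination of the vertex weights, so they stay valid skinning weights.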

Interactive Animation of Articulated Bodies using a Procedural Method

  • Bae, Hui-Jeong;Baek, Nak-Hun;Lee, Jong-Won;Yu, Gwan-U
    • Journal of KIISE:Computer Systems and Theory / v.28 no.12 / pp.620-631 / 2001
  • In interactive environments such as computer games and virtual reality applications, there is an increasing need for interactive control of articulated-body motions. Physically based methods, including constrained-dynamics techniques, have been introduced to this area to produce more realistic animation sequences. However, their heavy computations make real-time control of articulated bodies hard to achieve. In this paper, we present a procedural method for interactive animation of articulated bodies. In our method, each body of the constrained system is first moved according to its physical properties and the external forces, without considering any constraints. Then, the positions of the bodies are adjusted to satisfy the given constraints. By adopting this two-stage approach, we avoid solving large linear systems of equations and achieve interactive animation of articulated bodies. We also present example animation sequences generated interactively on PC platforms. The method can easily be applied to character animation in virtual environments.
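The two-stage idea (unconstrained integration followed by constraint adjustment) can be sketched in the style of position-based distance-constraint projection; this is an illustrative reading of the approach, not the paper's exact formulation:

```python
import numpy as np

def step(positions, velocities, links, rest_lengths, dt,
         gravity=np.array([0.0, -9.8, 0.0]), iters=10):
    """Two-stage update: unconstrained integration, then constraint projection."""
    # Stage 1: move each body by its physical properties and external forces,
    # ignoring the joint constraints.
    velocities = velocities + gravity * dt
    predicted = positions + velocities * dt
    # Stage 2: iteratively adjust positions so each link keeps its rest length.
    for _ in range(iters):
        for (i, j), rest in zip(links, rest_lengths):
            d = predicted[j] - predicted[i]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            corr = 0.5 * (dist - rest) * d / dist   # split the error evenly
            predicted[i] += corr
            predicted[j] -= corr
    velocities = (predicted - positions) / dt        # velocities from positions
    return predicted, velocities
```

No global linear system is assembled here: each constraint is corrected locally, which is what makes this kind of scheme cheap enough for interactive use.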


Punching Motion Generation using Reinforcement Learning and Trajectory Search Method

  • Park, Hyun-Jun;Choi, WeDong;Jang, Seung-Ho;Hong, Jeong-Mo
    • Journal of Korea Multimedia Society / v.21 no.8 / pp.969-981 / 2018
  • Recent advances in machine-learning approaches such as deep neural networks and reinforcement learning offer significant performance improvements in generating detailed and varied motions in physically simulated virtual environments. These optimization methods are attractive because they require less understanding of the underlying physics or mechanisms, even for high-dimensional, subtle control problems. In this paper, we propose an efficient learning method for a stochastic policy represented as a deep neural network, so that an agent can generate various energetic motions that adapt to changes of task and state without losing interactivity and robustness. This is realized by a novel trajectory-search method motivated by trust region policy optimization. Our value-based trajectory-smoothing technique finds stably learnable trajectories without consulting the neural network's responses directly. This policy is used as a trust region for the artificial neural network, so that it can learn the desired motion quickly.
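The trust-region idea this abstract builds on can be illustrated with a clipped probability-ratio surrogate (a PPO-style simplification of the TRPO constraint, not the paper's value-based trajectory-search method):

```python
import math

def clipped_surrogate(logp_new, logp_old, advantages, eps=0.2):
    """Clipped-ratio policy objective approximating a trust region.

    Limits how far each sample can push the policy away from the one that
    collected the data, the core idea behind trust-region policy methods.
    """
    total = 0.0
    for ln, lo, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(ln - lo)                      # pi_new(a|s) / pi_old(a|s)
        clipped = min(max(ratio, 1.0 - eps), 1.0 + eps)
        total += min(ratio * adv, clipped * adv)       # pessimistic bound
    return total / len(advantages)
```

When the new policy equals the old one the ratio is 1 and the objective is just the mean advantage; large ratios are clipped, so a single high-advantage sample cannot drag the policy outside the trust region.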

Avoiding Inter-Leg Collision for Data-Driven Control

  • Lee, Yoonsang
    • Journal of the Korea Computer Graphics Society / v.23 no.2 / pp.23-27 / 2017
  • We propose an inter-leg collision avoidance method that compensates for a disadvantage of the data-driven biped control method. The data-driven biped control technique proposed by Lee et al. [1] sometimes generates motions in which the two legs intersect during walking, which cannot occur in the walking of a real person or a biped robot. The proposed method changes the angle of the swing hip so that the swing foot can move inward only after passing the stance foot. This adds an angle-adjustment algorithm, which avoids collisions with the stance leg, to the original feedback rule of the stance hip. It produces stable walking simulations without any inter-leg collisions, with only minimal changes and additional computation added to the existing controller.
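The inward-motion restriction can be sketched as a lateral clamp on the swing-foot target before it passes the stance foot. The coordinate and sign conventions below are assumptions, and mapping the clamped foot position back to a hip angle (e.g. via inverse kinematics) is left out:

```python
def clamp_swing_foot_lateral(swing_foot, stance_foot, side, margin=0.05):
    """Keep the swing foot on its own side of the stance foot until it passes.

    Coordinates are (lateral, forward) on the ground plane; side = +1 when the
    swing leg is the right leg and -1 when it is the left (assumed convention).
    Before the swing foot overtakes the stance foot in the walking direction,
    its lateral position is clamped so it cannot cross toward the stance leg.
    """
    lateral, forward = swing_foot
    st_lateral, st_forward = stance_foot
    if forward <= st_forward:                 # swing foot has not yet passed
        limit = st_lateral + side * margin    # closest allowed lateral line
        if side * (lateral - limit) < 0.0:    # crossed toward the stance leg
            lateral = limit
    return (lateral, forward)
```

Once the swing foot is ahead of the stance foot the clamp is released, matching the rule that inward motion is allowed only after passing.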