• Title/Summary/Keyword: Physics-based character control

9 search results

Technology Trends for Motion Synthesis and Control of 3D Character

  • Choi, Jong-In
    • Journal of the Korea Society of Computer and Information, v.24 no.4, pp.19-26, 2019
  • This study surveys techniques for synthesizing and controlling the motion of 3D character animation and discusses directions for future development. Character animation has developed along two lines: data-based methods and physics-based methods. Keyframe-based animation generation became practical with advances in hardware, and motion capture devices came into use; various techniques for effectively editing motion data have since appeared. In parallel, physics-based animation techniques have emerged that generate realistic character motion through physically grounded numerical computation. Recently, animation techniques using machine learning have shown new possibilities for characters that a user can control in real time, and further development is expected.

Luxo character control using deep reinforcement learning

  • Lee, Jeongmin;Lee, Yoonsang
    • Journal of the Korea Computer Graphics Society, v.26 no.4, pp.1-8, 2020
  • Motion synthesis using physics-based controllers can generate character animation that interacts naturally with the given environment and with other characters. Recently, various methods using deep neural networks have improved the quality of motions generated by physics-based controllers. In this paper, we present a control policy learned by deep reinforcement learning (DRL) that enables Luxo, the mascot character of Pixar Animation Studios, to run toward a random goal location while imitating a reference motion and maintaining its balance. Instead of directly training our DRL network to make Luxo reach a goal location, we use a reference motion generated to preserve the jumping style of the Luxo animation. The reference motion is produced by linearly interpolating predetermined poses, each defined by the joint angles of the Luxo character. With this method, we obtain a better Luxo policy than one trained without any reference motion.
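
The reference-motion construction described above, linear interpolation between predetermined joint-angle poses, can be sketched as follows. The joint count and pose values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def make_reference_motion(keyposes, frames_between):
    """Build a reference motion by linearly interpolating key poses.

    keyposes: list of 1-D arrays of joint angles (radians), one per key pose.
    frames_between: number of frames generated per consecutive key-pose pair.
    """
    motion = []
    for a, b in zip(keyposes[:-1], keyposes[1:]):
        for t in np.linspace(0.0, 1.0, frames_between, endpoint=False):
            motion.append((1.0 - t) * a + t * b)  # linear blend of joint angles
    motion.append(keyposes[-1])  # include the final key pose exactly
    return np.stack(motion)

# Hypothetical 2-joint Luxo-like poses: crouch -> extended -> crouch.
crouch = np.array([0.8, -1.2])
extend = np.array([0.1, -0.2])
ref = make_reference_motion([crouch, extend, crouch], frames_between=10)
```

Tracking such a densely sampled reference gives the DRL reward a per-frame target pose even though only a few key poses were authored by hand.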

On-line Motion Synthesis Using Analytically Differentiable System Dynamics

  • Han, Daseong;Noh, Junyong;Shin, Joseph S.
    • Journal of the Korea Computer Graphics Society, v.25 no.3, pp.133-142, 2019
  • In physics-based character animation, trajectory optimization has been widely adopted for automatic motion synthesis, through the prediction of an optimal sequence of future states of the character based on its system dynamics model. In general, the system dynamics model is neither in a closed form nor differentiable when it handles the contact dynamics between a character and the environment with rigid body collisions. Employing smoothed contact dynamics, researchers have suggested efficient trajectory optimization techniques based on numerical differentiation of the resulting system dynamics. However, the numerical derivative of the system dynamics model could be inaccurate unlike its analytical counterpart, which may affect the stability of trajectory optimization. In this paper, we propose a novel method to derive the closed-form derivative for the system dynamics by properly approximating the contact model. Based on the resulting derivatives of the system dynamics model, we also present a model predictive control (MPC)-based motion synthesis framework to robustly control the motion of a biped character according to on-line user input without any example motion data.
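
The paper's specific contact approximation is not given here; as a sketch of the general idea, a hard unilateral contact force can be smoothed (here with a softplus, an assumed choice) so that its derivative exists in closed form and no numerical differencing is needed:

```python
import numpy as np

def contact_force(d, k=1000.0, s=0.01):
    """Smoothed normal contact force as a function of signed distance d.

    Softplus smoothing (scale s) replaces the hard max(0, -d) penetration
    model, making the force differentiable everywhere.
    """
    return k * s * np.log1p(np.exp(-d / s))

def contact_force_grad(d, k=1000.0, s=0.01):
    """Closed-form (analytical) derivative of the smoothed force w.r.t. d."""
    return -k / (1.0 + np.exp(d / s))

# The analytical derivative agrees with a central-difference estimate,
# but carries no step-size error and costs one evaluation instead of two.
d, eps = 0.005, 1e-6
numeric = (contact_force(d + eps) - contact_force(d - eps)) / (2 * eps)
```

With such closed-form derivatives available for every contact term, the full system-dynamics Jacobian needed by trajectory optimization can be assembled analytically.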

Animating Reactive Motions for Physics-Based Character Animation

  • Jee, Hyun-Ho;Han, Jung-Hyun
    • Proceedings of the Korean HCI Society Conference, 2008.02a, pp.420-425, 2008
  • Techniques for synthesizing reactive motion in real time are important in many applications such as computer games and virtual reality. This paper presents a dynamic motion control technique for creating reactive motions in a physics-based character animation system. The leg to move in the next step is chosen using the direction of the external disturbance force and the state of the human figure, and is then lifted through joint PD control. We determine the target position of the foot so as to balance the body without the legs crossing. Finally, a balance control mechanism is used to generate the reactive motion. The advantage of our method is that reactive animations can be generated without example motions.
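
Joint PD control, as used above to lift the swing leg, computes a torque from the error between the current and target joint angles. A minimal single-hinge sketch (unit inertia, semi-implicit Euler integration, gains chosen for illustration):

```python
def pd_torque(theta, theta_dot, theta_target, kp=300.0, kd=30.0):
    """Joint PD control: torque driving a joint toward a target angle,
    damped by the joint's angular velocity."""
    return kp * (theta_target - theta) - kd * theta_dot

def step(theta, theta_dot, theta_target, dt=0.01):
    """One semi-implicit Euler step of a unit-inertia hinge joint."""
    tau = pd_torque(theta, theta_dot, theta_target)
    theta_dot += tau * dt
    theta += theta_dot * dt
    return theta, theta_dot

# Drive the joint from rest at 0 rad toward a 0.6 rad target.
theta, theta_dot = 0.0, 0.0
for _ in range(500):
    theta, theta_dot = step(theta, theta_dot, theta_target=0.6)
```

In a full character, one such controller runs per actuated joint, with the target angles supplied by the balance logic described in the abstract.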


A Supervised Learning Framework for Physics-based Controllers Using Stochastic Model Predictive Control

  • Han, Daseong
    • Journal of the Korea Computer Graphics Society, v.27 no.1, pp.9-17, 2021
  • In this paper, we present a simple and fast supervised learning framework based on model predictive control for learning motion controllers that make a physics-based character track given example motions. The proposed framework is composed of two components: training data generation and offline learning. Given an example motion, the former component stochastically controls the character's motion with an optimal controller while repeatedly updating that controller through model predictive control over a time window extending from the character's current state to a near-future state. The repeated controller updates and the stochastic control together make it possible to effectively explore the various states the character may encounter while mimicking the example motion, and to collect useful training data for supervised learning. Once all the training data has been generated, the latter component normalizes the data to remove disparities in magnitude and units inherent in it, and trains an artificial neural network with a simple architecture as the controller. Experimental results for walking and running motions demonstrate how effectively and quickly the proposed framework produces physics-based motion controllers.
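
The normalization step mentioned above, removing disparities in magnitude and units before network training, is commonly done per feature dimension; a z-score sketch (the exact scheme used in the paper is not stated here, and the feature values are illustrative):

```python
import numpy as np

def normalize_features(X, eps=1e-8):
    """Z-score normalization per feature column, so that dimensions with
    different units (radians, centimeters, ...) contribute comparably
    to the supervised-learning loss."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / (std + eps), mean, std

# Mixed-unit state samples, e.g. a joint angle (rad) and root height (cm).
X = np.array([[0.1, 95.0], [0.2, 100.0], [0.3, 105.0]])
Xn, mu, sigma = normalize_features(X)
```

The stored `mu` and `sigma` must be applied to inputs at run time as well, so the trained controller sees data in the same normalized space.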

On-line Trajectory Optimization Based on Automatic Time Warping

  • Han, Daseong;Noh, Junyong;Shin, Joseph S.
    • Journal of the Korea Computer Graphics Society, v.23 no.3, pp.105-113, 2017
  • This paper presents a novel on-line trajectory optimization framework based on automatic time warping, which performs the time warping of a reference motion while optimizing character motion control. Unlike existing physics-based character animation methods where sampling times for a reference motion are uniform or fixed during optimization in general, our method considers the change of sampling times on top of the dynamics of character motion in the same optimization, which allows the character to effectively respond to external pushes with optimal time warping. In order to do so, we formulate an optimal control problem which takes into account both the full-body dynamics and the change of sampling time for a reference motion, and present a model predictive control framework that produces an optimal control policy for character motion and sampling time by repeatedly solving the problem for a fixed-span time window while shifting it along the time axis. Our experimental results show the robustness of our framework to external perturbations and the effectiveness on rhythmic motion synthesis in accordance with a given piece of background music.
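
Optimizing the sampling times is the paper's contribution; what "sampling a reference motion at warped times" means can be sketched as linear interpolation between frames at non-integer, variable-rate time indices (toy 1-DOF motion, values assumed for illustration):

```python
import numpy as np

def sample_warped(ref, t):
    """Sample a reference motion (frames x dofs) at a non-integer,
    possibly warped frame time t by linear interpolation."""
    i = int(np.clip(np.floor(t), 0, len(ref) - 2))
    a = t - i
    return (1.0 - a) * ref[i] + a * ref[i + 1]

# Playing the motion at double speed = feeding uniformly warped times.
ref = np.linspace(0.0, 1.0, 11)[:, None]  # toy 1-dof motion, 11 frames
warped = [sample_warped(ref, 2.0 * k) for k in np.linspace(0.0, 5.0, 6)]
```

In the framework above, the optimizer adjusts these sampling times per window, slowing down or speeding up the reference so the pushed character can still track it.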

Punching Motion Generation using Reinforcement Learning and Trajectory Search Method

  • Park, Hyun-Jun;Choi, WeDong;Jang, Seung-Ho;Hong, Jeong-Mo
    • Journal of Korea Multimedia Society, v.21 no.8, pp.969-981, 2018
  • Recent advances in machine learning approaches such as deep neural networks and reinforcement learning offer significant performance improvements in generating detailed and varied motions in physically simulated virtual environments. These optimization methods are attractive because they require less understanding of the underlying physics or mechanisms, even for high-dimensional, subtle control problems. In this paper, we propose an efficient learning method for a stochastic policy represented as a deep neural network, so that an agent can generate various energetic motions that adapt to changes in tasks and states without losing interactivity and robustness. This strategy is realized by a novel trajectory search method motivated by trust region policy optimization. Our value-based trajectory smoothing technique finds stably learnable trajectories without consulting neural network responses directly. The resulting trajectory is set as a trust region for the artificial neural network, so that it can learn the desired motion quickly.

Comparison of learning performance of character controller based on deep reinforcement learning according to state representation

  • Sohn, Chaejun;Kwon, Taesoo;Lee, Yoonsang
    • Journal of the Korea Computer Graphics Society, v.27 no.5, pp.55-61, 2021
  • Research on character motion control based on physics simulation using reinforcement learning continues to be actively carried out. To solve a problem with reinforcement learning, the network structure, hyperparameters, state, action, and reward must be set appropriately for the problem. In many studies, various combinations of states, actions, and rewards have been defined and successfully applied. Since many such combinations are possible, numerous studies analyze the effect of each element to find the optimal combination that improves learning performance. In this work, we analyzed the effect of the state representation on reinforcement learning performance, which has not been examined so far. First, we defined three coordinate systems: the root-attached frame, the root-aligned frame, and the projected aligned frame, and analyzed the effect of representing the state in each of them on reinforcement learning. Second, we analyzed how various combinations of joint positions and angles used as the state affect learning performance.
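
A root-aligned state representation like the one compared above expresses joint positions relative to the root's position and heading, so the state is invariant to where the character stands and which way it faces. A 2-D top-down sketch (the coordinate conventions here are assumptions, not the paper's):

```python
import numpy as np

def to_root_aligned(joint_pos, root_pos, root_yaw):
    """Express world-space joint positions in a root-aligned frame:
    translated by the root position and rotated by -root_yaw, so the
    root's facing direction maps to +x."""
    c, s = np.cos(-root_yaw), np.sin(-root_yaw)
    R = np.array([[c, -s], [s, c]])  # 2-D rotation about the vertical axis
    return (joint_pos - root_pos) @ R.T

# A joint 1 m ahead of a root facing 90 degrees maps to (1, 0): "ahead" = +x.
p = to_root_aligned(np.array([[0.0, 1.0]]), np.array([0.0, 0.0]), np.pi / 2)
```

Because two characters in the same pose but at different world locations produce identical root-aligned states, the policy network does not have to relearn the same pose for every position and heading.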

Generating a Ball Sport Scene in a Virtual Environment

  • Choi, Jongin;Kim, Sookyun;Kim, Sunjeong;Kang, Shinjin
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.11, pp.5512-5526, 2019
  • In sports video games, especially ball games, motion capture techniques are used to reproduce ball-handling performances. The amount of motion data needed to create the different situations in which athletes exchange balls is bound to increase exponentially with resolution. This paper proposes how avatars in virtual worlds can not only imitate professional athletes in ball games but also create and edit their actions effectively. First, various ball-handling movements are recorded using motion sensors; the user does not have to control an actual ball, as imitating the motions is enough. Next, a motion is created by specifying a target to pass the ball to and then performing the ball-handling motion in front of the motion sensor. The character holding the ball then passes it to the user-specified target through a motion that imitates the user's, and the process is repeated. The proposed method can serve as a convenient user interface for motion-based games in which players handle balls.