• Title/Summary/Keyword: Physics-based character control (물리 기반 캐릭터 제어)


Luxo character control using deep reinforcement learning (심층 강화 학습을 이용한 Luxo 캐릭터의 제어)

  • Lee, Jeongmin;Lee, Yoonsang
    • Journal of the Korea Computer Graphics Society / v.26 no.4 / pp.1-8 / 2020
  • Motion synthesis using physics-based controllers can generate character animation that interacts naturally with the given environment and other characters. Recently, various methods using deep neural networks have improved the quality of motions generated by physics-based controllers. In this paper, we present a control policy learned by deep reinforcement learning (DRL) that enables Luxo, the mascot character of Pixar Animation Studios, to run towards a random goal location while imitating a reference motion and maintaining its balance. Instead of directly training our DRL network to make Luxo reach a goal location, we use a reference motion generated to preserve the jumping style of the Luxo animation. The reference motion is generated by linearly interpolating predetermined poses, each defined by the joint angles of the Luxo character. With our method, we obtained a better policy for Luxo than one trained without any reference motion.
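The reference-motion construction described in the abstract (linear interpolation between predetermined joint-angle poses) can be sketched as follows; the pose values and frame count are illustrative, not taken from the paper.

```python
import numpy as np

def make_reference_motion(key_poses, frames_between):
    """Build a reference motion by linearly interpolating predetermined
    key poses, each given as a vector of joint angles (radians).
    Illustrative sketch of the interpolation step the abstract describes."""
    key_poses = np.asarray(key_poses, dtype=float)
    motion = []
    for a, b in zip(key_poses[:-1], key_poses[1:]):
        for t in np.linspace(0.0, 1.0, frames_between, endpoint=False):
            motion.append((1.0 - t) * a + t * b)  # lerp per joint angle
    motion.append(key_poses[-1])
    return np.stack(motion)

# Two hypothetical 3-joint Luxo poses, 4 in-between frames -> 5 frames total
ref = make_reference_motion([[0.0, 0.5, -0.2], [0.4, 0.1, 0.3]], 4)
```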

A Supervised Learning Framework for Physics-based Controllers Using Stochastic Model Predictive Control (확률적 모델예측제어를 이용한 물리기반 제어기 지도 학습 프레임워크)

  • Han, Daseong
    • Journal of the Korea Computer Graphics Society / v.27 no.1 / pp.9-17 / 2021
  • In this paper, we present a simple and fast supervised learning framework based on model predictive control for learning motion controllers that make a physics-based character track given example motions. The proposed framework is composed of two components: training data generation and offline learning. Given an example motion, the former component stochastically controls the character motion with an optimal controller while repeatedly updating the controller, via model predictive control over a time window from the current state of the character to a near-future state, to track the example motion. The repeated updates of the optimal controller and the stochastic control make it possible to effectively explore the various states the character may reach while mimicking the example motion, and to collect useful training data for supervised learning. Once all the training data are generated, the latter component normalizes the data to remove the disparity in magnitude and units inherent in them, and trains an artificial neural network with a simple architecture as a controller. The experimental results for walking and running motions demonstrate how effectively and quickly the proposed framework produces physics-based motion controllers.
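The normalization step in the offline-learning component, which removes the disparity in magnitude and units across features, commonly amounts to per-feature standardization; the sketch below assumes that scheme, which may differ in detail from the paper's.

```python
import numpy as np

def normalize_features(X, eps=1e-8):
    """Standardize each feature (column) of the training data to zero mean
    and unit variance, so that quantities with very different magnitudes
    and units (e.g. positions vs. torques) contribute comparably during
    supervised training. Generic sketch, not the paper's exact scheme."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / (std + eps), mean, std

# Hypothetical data: one large-magnitude and one small-magnitude feature
X = np.array([[1000.0, 0.01], [2000.0, 0.03], [3000.0, 0.02]])
Xn, mu, sigma = normalize_features(X)
```

The returned `mu` and `sigma` must be stored so that states can be normalized the same way at run time.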

Comparison of Learning Performance of Deep Reinforcement Learning-based Character Controllers According to State Representation (상태 표현 방식에 따른 심층 강화 학습 기반 캐릭터 제어기의 학습 성능 비교)

  • Son, Chae-Jun;Lee, Yun-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.14-15 / 2021
  • Research on solving physics-simulation-based character motion control with reinforcement learning is ongoing, along with studies of the factors that affect learning when the problem is solved with reinforcement learning. We analyzed the effect of the state representation on reinforcement learning, which had not been studied before. First, we defined three coordinate frames, the root attached frame, the root aligned frame, and the projected aligned frame, and analyzed how states expressed in each frame affect reinforcement learning. Second, we analyzed how the joint positions and angles that represent the character's dynamic state affect learning.


On-line Trajectory Optimization Based on Automatic Time Warping (자동 타임 워핑에 기반한 온라인 궤적 최적화)

  • Han, Daseong;Noh, Junyong;Shin, Joseph S.
    • Journal of the Korea Computer Graphics Society / v.23 no.3 / pp.105-113 / 2017
  • This paper presents a novel on-line trajectory optimization framework based on automatic time warping, which performs the time warping of a reference motion while optimizing character motion control. Unlike existing physics-based character animation methods, where sampling times for a reference motion are in general uniform or fixed during optimization, our method considers the change of sampling times on top of the dynamics of character motion in the same optimization, which allows the character to effectively respond to external pushes with optimal time warping. To do so, we formulate an optimal control problem that takes into account both the full-body dynamics and the change of sampling time for a reference motion, and present a model predictive control framework that produces an optimal control policy for character motion and sampling time by repeatedly solving the problem for a fixed-span time window while shifting it along the time axis. Our experimental results show the robustness of our framework to external perturbations and its effectiveness for rhythmic motion synthesis in accordance with a given piece of background music.
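The repeated solve-and-shift structure described in the abstract (optimize over a fixed-span window, apply the first control, shift the window along the time axis) is the standard receding-horizon loop of model predictive control, sketched generically below; `solve_window` and `step` are placeholders for the paper's trajectory optimizer and character simulator.

```python
def mpc_loop(x0, horizon, n_steps, solve_window, step):
    """Generic receding-horizon control loop. At every step, controls are
    optimized over a fixed-span window starting from the current state,
    only the first control is applied, and the window shifts forward."""
    x = x0
    trajectory = [x]
    for _ in range(n_steps):
        controls = solve_window(x, horizon)  # optimize over the window
        x = step(x, controls[0])             # apply the first control only
        trajectory.append(x)
    return trajectory

# Dummy optimizer/simulator: a scalar state integrated toward 10
solve = lambda x, h: [1.0 if x < 10 else 0.0] * h
sim = lambda x, u: x + u
traj = mpc_loop(0.0, 5, 3, solve, sim)  # [0.0, 1.0, 2.0, 3.0]
```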

On-line Motion Synthesis Using Analytically Differentiable System Dynamics (분석적으로 미분 가능한 시스템 동역학을 이용한 온라인 동작 합성 기법)

  • Han, Daseong;Noh, Junyong;Shin, Joseph S.
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.133-142 / 2019
  • In physics-based character animation, trajectory optimization has been widely adopted for automatic motion synthesis, through the prediction of an optimal sequence of future states of the character based on its system dynamics model. In general, the system dynamics model is neither in a closed form nor differentiable when it handles the contact dynamics between a character and the environment with rigid body collisions. Employing smoothed contact dynamics, researchers have suggested efficient trajectory optimization techniques based on numerical differentiation of the resulting system dynamics. However, the numerical derivative of the system dynamics model could be inaccurate unlike its analytical counterpart, which may affect the stability of trajectory optimization. In this paper, we propose a novel method to derive the closed-form derivative for the system dynamics by properly approximating the contact model. Based on the resulting derivatives of the system dynamics model, we also present a model predictive control (MPC)-based motion synthesis framework to robustly control the motion of a biped character according to on-line user input without any example motion data.
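A common way to obtain such closed-form derivatives is to replace the hard, non-differentiable contact force with a smooth surrogate, for example a softplus in the penetration depth; the sketch below illustrates this general idea and is not the paper's specific contact model.

```python
import math

def smooth_contact_force(d, k=1000.0, alpha=50.0):
    """Smoothed normal contact force. The hard model max(0, -k*d), with d
    the signed gap (negative when penetrating), is non-differentiable at
    d = 0; a softplus surrogate is smooth everywhere and its derivative
    is available in closed form. k and alpha are illustrative."""
    f = (k / alpha) * math.log1p(math.exp(-alpha * d))  # softplus surrogate
    dfdd = -k / (1.0 + math.exp(alpha * d))             # analytic df/dd
    return f, dfdd
```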

Virtual Marionette Simulation Using Haptic Interfaces (햅틱 인터페이스 기반의 가상 마리오넷 시뮬레이션)

  • Kim, Su-Jeong;Zhang, Xin-Yu;Kim, Young-J.
    • Journal of the Korea Computer Graphics Society / v.11 no.4 / pp.39-44 / 2005
  • In interactive computer games and computer animation, intuitively controlling the motion of an articulated body is recognized as a difficult problem. In these fields, the character being animated is usually connected by many joints, and it is hard to design an interface that lets a user easily manipulate each joint as intended. In this paper, we propose a marionette system that applies the manipulation techniques long used in puppetry [5] to control the motion of characters with many degrees of freedom (DOF). We implemented the virtual marionette system based on physics-based modeling and haptic interfaces, and with this system we could easily generate complex motions of articulated characters with high degrees of freedom. In addition, haptic force feedback enables the user to manipulate the marionette more precisely. Applying this system to general articulated bodies would make it possible to generate various motions quickly and easily.


Efficient Path Tracking of Non-Player Character with Controlling NavMesh Based on Smoothed Heaviside Step Function (부드러운 헤비사이드 계단 함수 기반의 NavMesh 제어 기법을 이용한 효율적인 NPC의 경로 추적)

  • Kim, Jong-Hyun;Kim, Soo Kyun
    • Proceedings of the Korean Society of Computer Information Conference / 2022.01a / pp.339-340 / 2022
  • In this paper, we present a NavMesh control technique that computes a weight map from a smoothed Heaviside step function and various physical attributes of the user (velocity, viewpoint, etc.), and uses it to efficiently control the paths of non-player characters (NPCs). In virtual environments such as games, NPCs typically move using a navigation mesh (NavMesh). However, because a NavMesh is static, it must be designed by the user. Techniques that automatically update the NavMesh have been studied to mitigate this problem, but they only automate mesh reconstruction and can hardly be called actual NPC behavior control. In this paper, while keeping the dynamic navigation framework, we propose a method that controls NPC paths efficiently and accurately through the user's viewpoint and physical properties, relaxing NPC movement that depended only on the shape of the NavMesh and producing more realistic path control.
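A smoothed Heaviside step function of the kind named in the title turns a signed quantity into a continuous 0-to-1 value, which is what makes a smooth weight map possible; the sketch below uses the standard level-set-style smoothing, with an illustrative transition width.

```python
import math

def smoothed_heaviside(x, eps=1.0):
    """Smoothed Heaviside step function: 0 for x < -eps, 1 for x > eps,
    and a smooth monotone ramp in between (the classical level-set
    smoothing). eps controls the transition width and is illustrative."""
    if x < -eps:
        return 0.0
    if x > eps:
        return 1.0
    return 0.5 * (1.0 + x / eps + math.sin(math.pi * x / eps) / math.pi)
```

Evaluating such a function on, say, the dot product of an NPC-to-cell direction with the user's view direction yields a continuous per-cell weight instead of a hard in/out decision.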


Animating Reactive Motions for Physics-Based Character Animation (물리기반 캐릭터 애니메이션을 위한 반응 모션 생성 기법)

  • Jee, Hyun-Ho;Han, Jung-Hyun
    • Proceedings of HCI Korea / 2008.02a / pp.420-425 / 2008
  • The technique of synthesizing reactive motions in real time is important in many applications such as computer games and virtual reality. This paper presents a dynamic motion control technique for creating reactive motions in a physics-based character animation system. The leg to move in the next step is chosen using the direction of the external disturbance force and the state of the human figure, and is then lifted through joint PD control. We decide the target position of the foot so as to balance the body without crossing the legs. Finally, a control mechanism is used to generate the reactive motion. The advantage of our method is that it can generate reactive animations without example motions.
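The joint PD control used to lift the stepping leg computes a torque from the error between the current and target joint angles plus a velocity-damping term; a minimal sketch, with illustrative gains:

```python
def pd_torque(q, qd, q_target, kp=300.0, kd=30.0):
    """Proportional-derivative joint control: the torque drives the joint
    angle q toward q_target while damping the joint velocity qd. The
    gains kp and kd are illustrative; in practice they are tuned per
    joint against the simulated body's mass and the simulation timestep."""
    return kp * (q_target - q) - kd * qd
```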


Comparison of learning performance of character controller based on deep reinforcement learning according to state representation (상태 표현 방식에 따른 심층 강화 학습 기반 캐릭터 제어기의 학습 성능 비교)

  • Sohn, Chaejun;Kwon, Taesoo;Lee, Yoonsang
    • Journal of the Korea Computer Graphics Society / v.27 no.5 / pp.55-61 / 2021
  • Research on character motion control based on physics simulation using reinforcement learning continues to be carried out. To solve a problem with reinforcement learning, the network structure, hyperparameters, state, action, and reward must be set properly for the problem. In many studies, various combinations of states, actions, and rewards have been defined and successfully applied. Since there are many possible combinations when defining the state, action, and reward, studies have analyzed the effect of each element to find the optimal combination that improves learning performance. In this work, we analyzed the effect of the state representation on reinforcement learning performance, which had not been studied so far. First, we defined three coordinate systems: the root attached frame, the root aligned frame, and the projected aligned frame, and analyzed how states represented in each coordinate system affect reinforcement learning. Second, we analyzed how various combinations of joint positions and angles used for the state affect learning performance.
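A "root aligned frame" of the kind compared in the paper is commonly built by placing the origin at the root projected onto the ground and removing only the root's heading (yaw) rotation, so the state becomes invariant to the character's global position and facing direction. The sketch below assumes a z-up convention and is illustrative rather than the paper's exact definition.

```python
import numpy as np

def to_root_aligned_frame(points_world, root_pos, root_yaw):
    """Express world-space points relative to the root projected onto the
    ground plane, with the root's yaw (rotation about the z-up axis)
    removed. Pitch and roll of the root are deliberately kept, which is
    what distinguishes an 'aligned' frame from a fully root-attached one."""
    c, s = np.cos(-root_yaw), np.sin(-root_yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])                     # rotate by -yaw about z
    origin = np.array([root_pos[0], root_pos[1], 0.0])  # drop the root's height
    return (np.asarray(points_world, float) - origin) @ R.T
```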

Inductive Inverse Kinematics Algorithm for the Natural Posture Control (자연스러운 자세 제어를 위한 귀납적 역운동학 알고리즘)

  • Lee, Bum-Ro;Chung, Chin-Hyun
    • Journal of KIISE: Computing Practices and Letters / v.8 no.4 / pp.367-375 / 2002
  • Inverse kinematics is a very useful method for controlling the posture of an articulated body. In most inverse kinematics processes, the major concern is not the posture of the articulated body itself but the position and direction of the end effector. In some applications such as 3D character animation, however, it is more important to generate an overall natural posture for the character than to place the end effector in the exact position. Indeed, when an animator wants to modify the posture of a human-like 3D character with many physical constraints, considerable trial and error is needed to produce a realistic posture. In this paper, the Inductive Inverse Kinematics (IIK) algorithm using a Uniform Posture Map (UPM) is proposed to control the posture of a human-like 3D character. The proposed algorithm quantizes human behaviors without distortion to generate a UPM, and then generates a natural posture by searching the UPM. If necessary, the resulting posture can be refined with traditional Cyclic Coordinate Descent (CCD). The proposed method can be applied to key-frame-based 3D character animation, 3D games, and virtual reality.
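The Cyclic Coordinate Descent (CCD) refinement mentioned in the abstract rotates each joint in turn so the end effector swings toward the target; a textbook 2D version, independent of the paper's UPM pipeline, looks like this:

```python
import numpy as np

def ccd_ik_2d(lengths, angles, target, iters=50, tol=1e-4):
    """Classical CCD inverse kinematics for a planar chain. `lengths` are
    link lengths, `angles` relative joint angles (radians). Each inner
    step rotates one joint so the end effector moves toward `target`."""
    angles = list(angles)
    n = len(lengths)

    def fk():  # forward kinematics: joint positions plus end effector
        pts = [np.zeros(2)]
        a = 0.0
        for L, th in zip(lengths, angles):
            a += th
            pts.append(pts[-1] + L * np.array([np.cos(a), np.sin(a)]))
        return pts

    target = np.asarray(target, float)
    for _ in range(iters):
        for i in reversed(range(n)):  # from the last joint to the root
            pts = fk()
            to_end = pts[-1] - pts[i]
            to_tgt = target - pts[i]
            # rotate joint i so the end effector swings toward the target
            da = np.arctan2(to_tgt[1], to_tgt[0]) - np.arctan2(to_end[1], to_end[0])
            angles[i] += da
        if np.linalg.norm(fk()[-1] - target) < tol:
            break
    return angles
```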