http://dx.doi.org/10.15701/kcgs.2021.27.3.13

Motion Generation of a Single Rigid Body Character Using Deep Reinforcement Learning  

Ahn, Jewon (Dept. of Intelligence Convergence, Hanyang University)
Gu, Taehong (Dept. of Computer and Software, Hanyang University)
Kwon, Taesoo (Dept. of Computer and Software, Hanyang University)
Abstract
In this paper, we propose a framework that generates the trajectory of a single rigid body from its COM configuration and contact pose. Because this input is lower-dimensional than a full-body state, the training time for reinforcement learning is reduced. Even with a 68% reduction in training time (approximately two hours), the character trained by our network is more robust to external perturbations, tolerating an external force of 1,500 N, which is about 7.5 times larger than the maximum magnitude reported for a previous approach. In this framework, we use centroidal dynamics to compute the next COM configuration, and reinforcement learning to obtain a policy whose outputs parameterize the contact positions and forces.
Keywords
deep reinforcement learning; centroidal dynamics models; single rigid body; physics-based model; center of mass;
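The centroidal-dynamics step the abstract describes can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function name `step_centroidal`, the tuple-based state, and the semi-implicit Euler integrator are all assumptions; the contact positions and forces are taken as given (in the paper they come from the learned policy).

```python
def _cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def step_centroidal(mass, com, com_vel, ang_mom, contacts, dt,
                    g=(0.0, -9.81, 0.0)):
    """Advance the single-rigid-body (centroidal) state one time step.

    contacts: list of (position, force) pairs, each a 3-tuple of floats.
    Newton:  m * a   = sum(f_i) + m * g
    Euler:   dL/dt   = sum((p_i - com) x f_i)
    """
    total_f = [mass * gi for gi in g]     # start from gravity
    torque = [0.0, 0.0, 0.0]
    for p, f in contacts:
        r = tuple(pi - ci for pi, ci in zip(p, com))   # moment arm about COM
        tau = _cross(r, f)
        total_f = [a + b for a, b in zip(total_f, f)]
        torque = [a + b for a, b in zip(torque, tau)]
    acc = [fi / mass for fi in total_f]
    # Semi-implicit Euler: update velocity first, then position with it.
    new_vel = tuple(v + a * dt for v, a in zip(com_vel, acc))
    new_com = tuple(c + v * dt for c, v in zip(com, new_vel))
    new_L = tuple(l + t * dt for l, t in zip(ang_mom, torque))
    return new_com, new_vel, new_L
```

With an empty contact list the COM is in free fall; a single contact force exactly canceling gravity leaves the COM velocity and angular momentum unchanged, which is a quick sanity check on the two balance equations.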
Citations & Related Records
  • References
1 Duan, Yan, et al. "Benchmarking deep reinforcement learning for continuous control." International Conference on Machine Learning. PMLR, pp. 1329-1338, 2016.
2 Xie, Zhaoming, et al. "GLiDE: Generalizable Quadrupedal Locomotion in Diverse Environments with a Centroidal Model." arXiv preprint arXiv:2104.09771, 2021.
3 Schulman, John, et al. "Proximal policy optimization algorithms." arXiv preprint arXiv:1707.06347, 2017.
4 da Silva, Marco, Yeuhi Abe, and Jovan Popovic. "Interactive simulation of stylized human locomotion." ACM SIGGRAPH 2008 papers, pp. 1-10, 2008.
5 Kwon, Taesoo, and Jessica K. Hodgins. "Momentum-mapped inverted pendulum models for controlling dynamic human motions." ACM Transactions on Graphics (TOG), vol. 36, no. 1, pp. 1-14, 2017.
6 Lee, Yongjoon, et al. "Motion fields for interactive character locomotion." ACM SIGGRAPH Asia 2010 papers, pp. 1-8, 2010.
7 Wang, Jack M., et al. "Optimizing locomotion controllers using biologically-based actuators and objectives." ACM Transactions on Graphics (TOG), vol. 31, no. 4, pp. 1-11, 2012.
8 Lee, Yoonsang, et al. "Locomotion control for many-muscle humanoids." ACM Transactions on Graphics (TOG), vol. 33, no. 6, pp. 1-11, 2014.
9 Wampler, Kevin, Zoran Popovic, and Jovan Popovic. "Generalizing locomotion style to new animals with inverse optimal regression." ACM Transactions on Graphics (TOG), vol. 33, no. 4, pp. 1-11, 2014.
10 Hamalainen, Perttu, Joose Rajamaki, and C. Karen Liu. "Online control of simulated humanoids using particle belief propagation." ACM Transactions on Graphics (TOG), vol. 34, no. 4, pp. 1-13, 2015.
11 Tassa, Yuval, Tom Erez, and Emanuel Todorov. "Synthesis and stabilization of complex behaviors through online trajectory optimization." 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, pp. 4906-4913, 2012.
12 Coros, Stelian, Philippe Beaudoin, and Michiel Van de Panne. "Generalized biped walking control." ACM Transactions on Graphics (TOG), vol. 29, no. 4, pp. 1-9, 2010.
13 Ye, Yuting, and C. Karen Liu. "Optimal feedback control for character animation using an abstract model." ACM SIGGRAPH 2010 papers, pp. 1-9, 2010.
14 Brockman, Greg, et al. "OpenAI Gym." arXiv preprint arXiv:1606.01540, 2016.
15 Mordatch, Igor, Emanuel Todorov, and Zoran Popovic. "Discovery of complex behaviors through contact-invariant optimization." ACM Transactions on Graphics (TOG), vol. 31, no. 4, pp. 1-8, 2012.
16 Rajeswaran, Aravind, et al. "Learning complex dexterous manipulation with deep reinforcement learning and demonstrations." arXiv preprint arXiv:1709.10087, 2017.
17 Dai, Hongkai, Andres Valenzuela, and Russ Tedrake. "Whole-body motion planning with centroidal dynamics and full kinematics." 2014 IEEE-RAS International Conference on Humanoid Robots. IEEE, pp. 295-302, 2014.
18 Winkler, Alexander W., et al. "Gait and trajectory optimization for legged systems through phase-based end-effector parameterization." IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 1560-1567, 2018.
19 Levine, Sergey, et al. "Continuous character control with low-dimensional embeddings." ACM Transactions on Graphics (TOG), vol. 31, no. 4, pp. 1-10, 2012.
20 Peng, Xue Bin, Glen Berseth, and Michiel Van de Panne. "Dynamic terrain traversal skills using reinforcement learning." ACM Transactions on Graphics (TOG), vol. 34, no. 4, pp. 1-11, 2015.
21 Liu, Libin, and Jessica Hodgins. "Learning to schedule control fragments for physics-based characters using deep q-learning." ACM Transactions on Graphics (TOG), vol. 36, no. 3, pp. 1-14, 2017.
22 Teh, Yee Whye, et al. "Distral: Robust multitask reinforcement learning." arXiv preprint arXiv:1707.04175, 2017.
23 Orin, David E., Ambarish Goswami, and Sung-Hee Lee. "Centroidal dynamics of a humanoid robot." Autonomous Robots, vol. 35, no. 2, pp. 161-176, 2013.
24 Kwon, Taesoo, Yoonsang Lee, and Michiel Van De Panne. "Fast and flexible multilegged locomotion using learned centroidal dynamics." ACM Transactions on Graphics (TOG), vol. 39, no. 4, pp. 1-46, 2020.
25 Abe, Yeuhi, Marco Da Silva, and Jovan Popovic. "Multiobjective control with frictional contacts." Proceedings of the 2007 ACM SIGGRAPH/Eurographics symposium on Computer animation, pp. 249-258, 2007.
26 Coros, Stelian, Philippe Beaudoin, and Michiel Van de Panne. "Robust task-based control policies for physics-based characters." ACM SIGGRAPH Asia 2009 papers, pp. 1-9, 2009.
27 Peng, Xue Bin, et al. "DeepMimic: Example-guided deep reinforcement learning of physics-based character skills." ACM Transactions on Graphics (TOG), vol. 37, no. 4, pp. 1-14, 2018.
28 Ha, Sehoon, and C. Karen Liu. "Iterative training of dynamic skills inspired by human coaching techniques." ACM Transactions on Graphics (TOG), vol. 34, no. 1, pp. 1-11, 2014.
29 Agrawal, Shailen, Shuo Shen, and Michiel van de Panne. "Diverse motion variations for physics-based character animation." Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pp. 37-44, 2013.
30 Da Silva, Marco, Yeuhi Abe, and Jovan Popovic. "Simulation of human motion data using short-horizon model-predictive control." Computer Graphics Forum, vol. 27, no. 2, pp. 371-380, 2008.
31 Ellis, Jane, et al. "CDM: Taking stock and looking forward." Energy Policy, vol. 35, no. 1, pp. 15-28, 2007.
32 Yin, KangKang, Kevin Loken, and Michiel Van de Panne. "Simbicon: Simple biped locomotion control." ACM Transactions on Graphics (TOG), vol. 26, no. 3, pp. 105-es, 2007.
33 Peng, Xue Bin, Glen Berseth, and Michiel Van de Panne. "Terrain-adaptive locomotion skills using deep reinforcement learning." ACM Transactions on Graphics (TOG), vol. 35, no. 4, pp. 1-12, 2016.
34 Lee, Yoonsang, Sungeun Kim, and Jehee Lee. "Data-driven biped control." ACM SIGGRAPH 2010 papers, pp. 1-8, 2010.