http://dx.doi.org/10.15701/kcgs.2021.27.5.73

A Comparison of Deep Neural Network Structures for Learning Various Motions  

Park, Soohwan (Seoul National University)
Lee, Jehee (Seoul National University)
Abstract
Recently, in the field of computer animation, methods that generate motion with deep learning have been studied as alternatives to conventional finite-state-machine or graph-based approaches. The network capacity required to learn motions is influenced more by the diversity of the motions than by their total length. This study aims to find an efficient network structure when the motions to be learned are diverse. In this paper, we train and compare four types of networks: a basic fully-connected network; a mixture-of-experts structure that uses multiple fully-connected layers in parallel; a recurrent neural network, which is widely used for sequence-to-sequence processing; and a transformer structure, which is used for sequence data in the natural language processing field.
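The four network types compared in the paper can be sketched as follows. This is a minimal illustration in PyTorch only; the layer sizes, number of experts, and attention settings (POSE_DIM, HIDDEN, NUM_EXPERTS, heads, layers) are assumptions chosen for readability, not the configurations used in the paper.

    # Minimal sketches of the four compared network types (illustrative only).
    import torch
    import torch.nn as nn

    POSE_DIM, HIDDEN, NUM_EXPERTS = 96, 256, 4   # assumed dimensions, not from the paper

    # 1) Basic fully-connected network: pose/control features -> next-pose prediction.
    fully_connected = nn.Sequential(
        nn.Linear(POSE_DIM, HIDDEN), nn.ReLU(),
        nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
        nn.Linear(HIDDEN, POSE_DIM),
    )

    # 2) Mixture of experts: several fully-connected experts run in parallel,
    #    blended by weights from a gating network.
    class MixtureOfExperts(nn.Module):
        def __init__(self, dim=POSE_DIM, hidden=HIDDEN, n_experts=NUM_EXPERTS):
            super().__init__()
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
                for _ in range(n_experts)
            ])
            self.gate = nn.Sequential(nn.Linear(dim, n_experts), nn.Softmax(dim=-1))

        def forward(self, x):                      # x: (batch, dim)
            w = self.gate(x)                       # (batch, n_experts)
            outs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, n_experts, dim)
            return (w.unsqueeze(-1) * outs).sum(dim=1)

    # 3) Recurrent network (LSTM) over a motion sequence.
    class RecurrentNet(nn.Module):
        def __init__(self, dim=POSE_DIM, hidden=HIDDEN):
            super().__init__()
            self.lstm = nn.LSTM(dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, dim)

        def forward(self, seq):                    # seq: (batch, time, dim)
            h, _ = self.lstm(seq)
            return self.out(h)

    # 4) Transformer encoder over a motion sequence.
    class TransformerNet(nn.Module):
        def __init__(self, dim=POSE_DIM, hidden=HIDDEN, heads=4, layers=2):
            super().__init__()
            self.inp = nn.Linear(dim, hidden)
            enc_layer = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
            self.out = nn.Linear(hidden, dim)

        def forward(self, seq):                    # seq: (batch, time, dim)
            return self.out(self.encoder(self.inp(seq)))

Each sketch maps pose features of dimension POSE_DIM to a prediction of the same dimension; the fully-connected and mixture-of-experts models operate frame by frame, while the recurrent and transformer models operate on whole motion sequences.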
Keywords
Deep Learning; Motion Generation; Computer Animation; Simulation