http://dx.doi.org/10.15701/kcgs.2021.27.5.33

Motion Style Transfer using Variational Autoencoder  

Ahn, Jewon (Dept. of Intelligence Convergence, Hanyang University)
Kwon, Taesoo (Dept. of Computer and Software, Hanyang University)
Abstract
In this paper, we propose a framework that transfers the style of a style motion to a content motion, based on a variational autoencoder network combined with a style encoding in the latent space. Because the style is transferred to content motions sampled from the variational autoencoder, the framework can increase the diversity of existing motion data. In addition, it mitigates the unnatural motions that arise when decoding a new latent variable produced by style transfer; this improvement is achieved by additionally using the velocity information of the motions when generating the next frames.
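The pipeline the abstract describes can be illustrated with a minimal sketch: encode content frames into the latent space, combine the latent code with a style code, decode per-frame velocities, and integrate them so consecutive frames stay continuous. Everything below is hypothetical: the linear `encode`/`decode_velocity` maps stand in for the paper's trained networks, and the latent blend in `transfer_style` is a simple stand-in for the learned style encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

POSE_DIM, LATENT_DIM = 6, 4

# Hypothetical linear weights standing in for the trained VAE networks.
W_mu = rng.normal(size=(POSE_DIM, LATENT_DIM))
W_logvar = rng.normal(size=(POSE_DIM, LATENT_DIM)) * 0.1
W_dec = rng.normal(size=(LATENT_DIM, POSE_DIM))

def encode(pose):
    """Map a pose to the mean/log-variance of its latent distribution."""
    return pose @ W_mu, pose @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

def transfer_style(z_content, z_style, alpha=0.5):
    """Toy latent-space style transfer: blend content and style codes."""
    return (1.0 - alpha) * z_content + alpha * z_style

def decode_velocity(z):
    """Decode a latent code into a per-frame pose velocity."""
    return z @ W_dec

# Stylize a short content clip frame by frame, integrating decoded
# velocities so each generated frame follows from the previous one.
content = rng.normal(size=(5, POSE_DIM))   # 5-frame content motion
style_pose = rng.normal(size=POSE_DIM)     # one style exemplar pose
z_style = reparameterize(*encode(style_pose))

poses = [content[0]]
for frame in content[1:]:
    z_c = reparameterize(*encode(frame))
    z = transfer_style(z_c, z_style)
    # Next frame = previous frame + decoded velocity (dt = 1 frame).
    poses.append(poses[-1] + decode_velocity(z))
stylized = np.stack(poses)
print(stylized.shape)  # (5, 6)
```

Integrating velocities rather than decoding poses directly is what the abstract credits for reducing unnatural, discontinuous frames after style transfer.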
Keywords
variational autoencoder; style autoencoder; motion style transfer; generative models; velocity integration