
Data-driven Facial Expression Reconstruction for Simultaneous Motion Capture of Body and Face  

Park, Sang Il (Department of Digital Contents, Sejong University)
Abstract
In this paper, we present a new method for reconstructing detailed facial expressions from coarsely captured data with a small number of markers. Because full-body capture and facial expression capture require different capture resolutions, they have rarely been performed simultaneously. However, capturing the body and the face together is essential for generating natural animation. To this end, we provide a method for capturing detailed facial expressions with only a small number of markers. Our basic idea is to build a database of facial expressions and to apply principal component analysis to reduce its dimensionality. The dimensionality reduction enables us to estimate the full data from only a part of it. We demonstrate the viability of our method by applying it to dynamic scenes.
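
The abstract outlines the core idea: learn a low-dimensional PCA model of facial marker configurations from a database, then estimate the full marker set from the few markers that can be tracked alongside a body capture. The following Python sketch illustrates that general approach under assumed names and dimensions; the database layout, component count, and least-squares fitting step are illustrative assumptions, not the paper's exact implementation.

# Minimal sketch: PCA-based reconstruction of a full facial marker set
# from a sparse subset of observed markers. Dimensions and names are
# illustrative assumptions.
import numpy as np

def build_pca_model(X, num_components=20):
    """X: (num_frames, 3 * num_markers) matrix of database expressions."""
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal components.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:num_components]                      # (k, 3 * num_markers)
    return mean, basis

def reconstruct(observed, observed_idx, mean, basis):
    """Estimate all marker coordinates from a few observed ones.

    observed:     (m,) vector of captured coordinates
    observed_idx: indices of those coordinates in the full vector
    """
    # Solve, in the least-squares sense, for the PCA coefficients that
    # best explain the observed subset of coordinates.
    A = basis[:, observed_idx].T                     # (m, k)
    b = observed - mean[observed_idx]                # (m,)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Project the coefficients back to the full marker space.
    return mean + basis.T @ coeffs

# Toy usage: 200 database frames, 60 markers (180 coordinates),
# reconstructed from 8 observed markers (24 coordinates).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 180))
mean, basis = build_pca_model(X, num_components=20)
observed_idx = np.arange(24)
full = reconstruct(X[0, observed_idx], observed_idx, mean, basis)

In practice the least-squares fit could be regularized by the eigenvalues of the PCA model so that sparse or noisy observations do not produce implausible expressions; the unweighted fit above is kept for brevity.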
Keywords
3D Character Animation; Motion Capture; Facial Expression Capture; Facial Expression Reconstruction;