http://dx.doi.org/10.3745/KIPSTB.2010.17B.5.355

Robust AAM-based Face Tracking with Occlusion Using SIFT Features  

Eom, Sung-Eun (Robotics Institute, Carnegie Mellon University)
Jang, Jun-Su (LG Electronics)
Abstract
Face tracking estimates the 3D motion of a non-rigid face together with a rigid head, and plays an important role in higher-level tasks such as face, facial expression, and emotion recognition. In this paper, we propose an AAM-based face tracking algorithm. AAMs have been widely used to segment and track deformable objects, but difficulties remain: in particular, fitting often diverges or converges to a local minimum when the target is self-occluded, partially occluded, or completely occluded. To address this problem, we employ the scale-invariant feature transform (SIFT). SIFT handles self- and partial occlusion well because it can establish correspondences between feature points even when some are lost, and its strong global matching lets the AAM resume tracking after complete occlusion without re-initialization. We also register SIFT features extracted from multi-view face images during tracking and use them to track the face reliably across large pose changes. The proposed algorithm is validated by comparison with other algorithms under the above three kinds of occlusion.
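The occlusion robustness claimed for SIFT rests on its matching step: each feature is matched to its nearest neighbour in descriptor space, and ambiguous matches are rejected with the distance-ratio test of Lowe (2004), so surviving features remain usable even when part of the face is lost. As a minimal sketch of that mechanism (a hypothetical pure-Python `match_descriptors` helper, not the authors' implementation, operating on plain descriptor vectors rather than real SIFT output):

```python
import math

def match_descriptors(query, train, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test.

    query, train: lists of descriptor vectors (lists of floats).
    Returns (query_index, train_index) pairs that pass the test.
    A match is kept only when the closest train descriptor is clearly
    better than the second closest, which discards ambiguous matches
    such as those caused by occluded or cluttered regions.
    """
    def dist(a, b):
        # Euclidean distance between two descriptors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    matches = []
    for qi, q in enumerate(query):
        # Sort all candidate distances; d[0] is the best, d[1] the runner-up.
        d = sorted((dist(q, t), ti) for ti, t in enumerate(train))
        if len(d) >= 2 and d[0][0] < ratio * d[1][0]:
            matches.append((qi, d[0][1]))
    return matches
```

In the paper's setting, a full-frame pass of this matching against the registered multi-view feature set is what would allow the tracker to relocalize the face after a complete occlusion.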
Keywords
Face tracking; AAM(Active Appearance Model); SIFT(Scale Invariant Feature Transform); Occlusion Problem; 3D Pose Estimation; Online Feature Registration;
References
1 CMU Graphics Lab Motion Capture, http://mocap.cs.cmu.edu.
2 Vicon, http://www.vicon.com
3 P. Mittrapiyanuruk, G. N. DeSouza, A. C. Kak, “Accurate 3D Tracking of Rigid Objects with Occlusion Using Active Appearance Models,” Proc. of the IEEE Workshop on Motion and Video Computing, pp.90-95, 2005.
4 J. Sung, T. Kanade, D. Kim, “Pose robust face tracking by combining active appearance models and cylinder head models,” Int. J. Comput. Vis., Vol.80, No.2, pp.260-274, 2008.
5 J. Xiao, T. Kanade and J. Cohn, “Robust full-motion recovery of head by dynamic templates and re-registration techniques,” Proc. International Conference on Automatic Face and Gesture Recognition, pp.156-162, 2002.
6 D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” IJCV, Vol.60, No.2, pp.91-110, 2004.
7 J. Jang and T. Kanade, “Robust 3D Head Tracking by Online Feature Registration,” Proc. International Conference on Automatic Face and Gesture Recognition, 2008.
8 M. Black and Y. Yacoob, “Recognizing facial expressions in image sequences using local parameterized models of image motion,” IJCV, Vol.25, No.1, pp.23-48, 1997.
9 S. Basu, I. Essa and A. Pentland, “Motion regularization for model-based head tracking,” in ICPR, pp. 611-616, 1996.
10 M. La Cascia, S. Sclaroff and V. Athitsos, “Fast, reliable head tracking under varying illumination: An approach based on robust registration of texture-mapped 3D models,” IEEE Trans. PAMI, 2000.
11 L. Lu, X.-T. Dai, G. Hager, “A particle filter without dynamics for robust 3D face tracking,” in CVPRW, pp.70, 2004.
12 X. Li, C. Chang, S. Chang, “Face Alive Icons,” Journal of Visual Languages and Computing, Vol.18, No.4, pp.440-453, 2007.
13 R. M. Murray, Z. Li, and S. S. Sastry, A Mathematical introduction to robotic manipulation, CRC Press, 1994.
14 G. Aggarwal, A. Veeraraghavan, and R. Chellappa. “3D facial pose tracking in uncalibrated videos,” in PRMI, pp. 515-520, 2005.
15 Y. Du, X. Lin, “Mapping emotional status to facial expressions,” Proceedings of 16th International Conference on Pattern Recognition, pp.524-527, August 2002.
16 C. H. Lee, J. Wetzel, C. Y. Jang, Y. T. Shen, T. H. Chen, T. Selker, “Attention Meter: A Vision-based Input Toolkit for Interaction Designers,” Conference on Human Factors in Computing Systems (CHI), pp.1007-1012, Montreal, Quebec, Canada, 2006.
17 P. Ekman, “Facial expressions of emotion: an old controversy and new findings,” Philosophical Transactions: Biological Sciences, Vol.335, No.1273, pp.63-69, 1992.
18 P. Ekman, W. Friesen, and J. Hager, “Facial Action Coding System,” Tech. Report, Research Nexus, Network Research Information, Salt Lake City, UT, 2002.
19 J. Cohn, T. Kanade, T. Moriyama, Z. Ambadar, J. Xiao, J. Gao, and H. Imamura, “A Comparative Study of Alternative FACS Coding Algorithms,” Tech. Report CMU-RI-TR-02-06, Robotics Institute, Carnegie Mellon University, Nov. 2001.
20 T. F. Cootes, G. J. Edwards, and C. J. Taylor, “Active appearance models,” IEEE Trans. Pattern Anal. Mach. Intell., Vol.23, No.6, pp.681-685, Jun. 2001.
21 I. Matthews and S. Baker, “Active appearance models revisited,” Int. J. Comput. Vis., Vol.60, No.2, pp.135-164, 2004.
22 J. Xiao, S. Baker, I. Matthews, and T. Kanade, “Real-time combined 2D+3D active appearance models,” CVPR, 2004.
23 B. Fasel and J. Luettin, “Automatic Facial Expression Analysis: A Survey,” Pattern Recognition, Vol.36, pp.259-275, 2003.
24 R. Gross, I. Matthews, and S. Baker, “Constructing and fitting active appearance models with occlusion,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog. Workshops, Vol.5, pp.72, 2004.
25 B. Theobald, I. Matthews, and S. Baker, “Evaluating error functions for robust active appearance model,” Proc. International Conference on Automatic Face and Gesture Recognition, pp.149-154, 2006.
26 X. Gao, Y. Su, X. Li, and D. Tao, “A review of active appearance models,” IEEE Trans. Systems, Man, and Cybernetics, Part C, Vol.40, No.2, Mar. 2010.
27 A. Samal and P. Iyengar, “Automatic Recognition and Analysis of Human Faces and Facial Expression: A Survey,” Pattern Recognition, Vol.25, No.1, pp.65-77, 1992.