Segmentation of Pointed Objects for Service Robots

서비스 로봇을 위한 지시 물체 분할 방법 (Korean title: "Segmentation of Pointed Objects for Service Robots")

  • Received : 2009.02.27
  • Accepted : 2009.04.02
  • Published : 2009.05.29

Abstract

This paper describes how a robot segments an unknown object indicated by a person's pointing gesture during human-robot interaction. Using a stereo vision sensor, the proposed method proceeds in three stages: detecting the operator's face, estimating the pointing direction, and extracting the pointed object. The operator's face is detected using Haar-like features, and the 3D pointing direction is then estimated from the shoulder-to-hand line. Finally, the unknown object is segmented from the 3D point cloud within the estimated region of interest. Based on this method, we implemented an object registration system on our mobile robot and obtained reliable experimental results.
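The shoulder-to-hand step above can be sketched as a ray intersection: extend the line from the shoulder through the hand until it meets a reference surface, and search for the object near that point. The minimal sketch below assumes a flat ground plane at a known height as the search surface and metric 3D coordinates from the stereo sensor; the function name and coordinate convention (z up) are illustrative, not from the paper.

```python
import numpy as np

def pointing_target_on_ground(shoulder, hand, ground_z=0.0):
    """Extend the shoulder-to-hand ray until it meets the plane
    z = ground_z and return the 3D intersection point.

    Assumes metric 3D points with the z axis pointing up; the flat
    ground plane is an illustrative stand-in for the region of
    interest used in the paper.
    """
    shoulder = np.asarray(shoulder, dtype=float)
    hand = np.asarray(hand, dtype=float)
    direction = hand - shoulder
    if direction[2] >= 0:
        # Ray points level or upward and never reaches the ground.
        raise ValueError("pointing ray does not intersect the ground plane")
    t = (ground_z - shoulder[2]) / direction[2]
    return shoulder + t * direction

# Example: shoulder at 1.4 m, hand 0.3 m forward and 0.3 m lower
# yields a target 1.4 m in front of the operator on the floor.
target = pointing_target_on_ground([0.0, 0.0, 1.4], [0.3, 0.0, 1.1])
```

A real pipeline would then crop the stereo point cloud around `target` and segment the object from that region of interest, rather than relying on an idealized plane.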
