A Portable Mediate Interface 'Handybot' for the Rich Human-Robot Interaction

  • Jung-Hoon Hwang (Department of Mechanical Engineering, KAIST)
  • Dong-Soo Kwon (Department of Mechanical Engineering, KAIST)
  • Published : 2007.08.01

Abstract

The importance of a robot's interaction capability grows as robots extend into humans' daily lives. In this paper, a portable mediate interface, the Handybot, is developed with various interaction channels for use with an intelligent home service robot. The Handybot provides a task-oriented icon-language channel as well as a verbal interface. It also has an emotional interaction channel that recognizes the user's emotional state from facial expression and speech, transmits that state to the robot, and expresses the robot's emotional state back to the user. It is expected that the Handybot will reduce the spatial problems that can arise in human-robot interaction, offer a new interaction method, and help create rich and continuous interactions between human users and robots.
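The abstract describes the Handybot as a portable device that mediates several interaction channels between the user and the robot: a task-oriented icon language, a verbal channel, and a bidirectional emotional channel. The paper itself gives no code, so the following is only a minimal Python sketch of that channel architecture; all class, method, and message names (`MediateInterface`, `send_icons`, `Message`, etc.) are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical message passed from the handheld interface to the robot.
@dataclass
class Message:
    channel: str   # "icon", "speech", or "emotion"
    payload: str

class MediateInterface:
    """Toy model of the mediate-interface idea: user input arrives on one of
    several channels and is queued for the robot; the robot's emotional state
    can also be pushed back for display on the handheld device."""

    def __init__(self) -> None:
        self.outbox: List[Message] = []   # messages bound for the robot
        self.display: List[str] = []      # feedback shown to the user

    # Task-oriented channel: an icon sequence is translated into one command.
    def send_icons(self, icons: List[str]) -> None:
        self.outbox.append(Message("icon", "+".join(icons)))

    # Verbal channel: a recognized utterance is forwarded as-is.
    def send_speech(self, utterance: str) -> None:
        self.outbox.append(Message("speech", utterance))

    # Emotional channel, user -> robot: recognized emotional state.
    def send_emotion(self, state: str) -> None:
        self.outbox.append(Message("emotion", state))

    # Emotional channel, robot -> user: robot's state shown on the device.
    def show_robot_emotion(self, state: str) -> None:
        self.display.append(f"robot feels {state}")

ui = MediateInterface()
ui.send_icons(["fetch", "cup"])
ui.send_speech("bring it to the kitchen")
ui.send_emotion("happy")
ui.show_robot_emotion("curious")
print([m.channel for m in ui.outbox])   # -> ['icon', 'speech', 'emotion']
print(ui.display)                       # -> ['robot feels curious']
```

The point of the sketch is only the structure: several independent input channels converge on one outgoing queue, while the emotional channel is the only one that also carries information back to the user.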

References

  1. T. Fong, 'Collaborative control: A robot-centric model for vehicle teleoperation,' Ph.D. thesis, Robotics Institute, Carnegie Mellon University, 2001
  2. H. K. Keskinpala, J. A. Adams, and K. Kawamura, 'PDA-based human-robotic interface,' in the proceedings of IEEE International Conference on Systems, Man and Cybernetics, pp. 3931-3936, 2003
  3. D. Perzanowski, A. C. Schultz, W. Adams, E. Marsh, and M. Bugajska, 'Building a multimodal human-robot interface,' IEEE Intelligent Systems, vol. 16, pp. 16-21, 2001
  4. G. Chronis and M. Skubic, 'Sketch-based navigation for mobile robots,' in the proceedings of The 12th IEEE International Conference on Fuzzy Systems, St. Louis, pp. 284-289, 2003
  5. R. A. Bolt, 'Put-that-there: voice and gesture at the graphics interface,' in the proceedings of The 7th annual conference on Computer graphics and interactive techniques, Seattle, pp. 262-270, 1980
  6. R. Bischoff and V. Graefe, 'Dependable multimodal communication and interaction with robotic assistants,' in the proceedings of IEEE International Workshop on Robot and Human Interactive Communication, Berlin, pp. 300-305, 2002
  7. J. J. Pfeiffer, 'Altaira: A rule-based visual language for small mobile robots,' Journal of Visual Languages and Computing, vol.9, pp.127-150, 1998 https://doi.org/10.1006/jvlc.1998.0078
  8. P. T. Cox, C. C. Risley, and T. J. Smedley, 'Toward concrete representation in visual languages for robot control,' Journal of Visual Languages and Computing, vol. 9, pp. 211-239, 1998 https://doi.org/10.1006/jvlc.1998.0077
  9. L. Leifer, M. Van der Loos, and D. Lees, 'Visual language programming for robot command/control in unstructured environments,' in the proceedings of Fifth International Conference on Advanced Robotics, Pisa, Italy, pp. 31-36, 1991
  10. S. Harnad, 'The symbol grounding problem,' Physica D: Nonlinear Phenomena, vol. 42, pp. 335-346, 1990 https://doi.org/10.1016/0167-2789(90)90087-6
  11. T. Nomura, T. Kanda, and T. Suzuki, 'Experimental investigation into influence of negative attitudes toward robots on human-robot interaction,' in the proceedings of The 3rd Workshop on Social Intelligence Design, pp. 125-135, 2004
  12. T. Mitsui, T. Shibata, K. Wada, A. Touda, and K. Tanie, 'Psychophysiological effects by interaction with mental commit robot,' in the proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Maui, pp. 1189-1194, 2001
  13. H. Huttenrauch, A. Green, M. Norman, L. Oestreicher, and K. S. Eklundh, 'Involving users in the design of a mobile office robot,' IEEE Transactions on Systems, Man and Cybernetics, Part C, Applications and Reviews, vol. 30, pp. 113-124, 2004
  14. R. Gockley, J. Forlizzi, and R. Simmons, 'Interactions with a moody robot,' in the proceedings of Human-Robot Interaction, Salt Lake City, Utah, pp. 186-193, 2006
  15. J. D. Williams and S. Young, 'Partially observable Markov decision processes for spoken dialog systems,' Computer Speech and Language, vol. 21, pp. 393-422, 2007 https://doi.org/10.1016/j.csl.2006.06.008
  16. S. W. Lee, D.-J. Kim, Y. S. Kim, and Z. Bien, 'Training of feature extractor via new cluster validity-Application to adaptive facial expression recognition,' Lecture Notes in Computer Science, vol. 3684, pp. 542-548, 2005 https://doi.org/10.1007/11554028_75
  17. D.-S. Kwon, W.-C. Yoon, Z. Bien, Y. K. Kwak, K.-H. Hong, K.-S. Park, J.-Y. Yang, J.-H. Hwang, K.-B. Lee, J.-C. Park, Y.-C. Kim, Y.-J. Kwon, S.-W. Lee, and K.-H. Hyun, 'Multimodal cognitive and emotional interaction module for human-robot interaction,' Presented at The 5th Technical Workshop of Center for Intelligent Robotics, 2006
  18. E. H. Kim, K. H. Hyun, and Y. K. Kwak, 'Robust emotion recognition feature, frequency range of meaningful signal,' in the proceedings of IEEE International Workshop on Robot and Human Interactive Communication, Nashville, pp. 667-671, 2005
  19. D.-S. Kwon, J.-H. Hwang, S.-H. Park, Z. Bien, D.-J. Kim, S.-W. Lee, Y.-J. Kwon, K.-H. Hyun, Y. K. Kwak, G. H. Lee, and J.-H. Kim, 'Mediated interface technologies using multi-modality,' Presented at The Technical Workshop of Center for Intelligent Robotics, 2004