Design of HCI System of Museum Guide Robot Based on Visual Communication Skill

  • Qingqing Liang (School of Humanities, Art and Design, Guangxi University of Science and Technology)
  • Received : 2022.09.28
  • Accepted : 2023.06.06
  • Published : 2024.06.30

Abstract

Visual communication is widely used and continually refined in modern society, where demand for spiritual and cultural life is growing. Museum guide robots are among the many service robots that can take over human tasks such as exhibit display, interpretation, and dialogue. To improve museum guide robots, this paper proposes a human-robot interaction system based on visual communication skills. The system is built on a deep neural network and, guided by a theoretical analysis of computer vision, introduces a Tiny+CBAM network structure in the gesture recognition component, combining basic gestures and gesture states to design and evaluate gesture actions. Test results indicate that the improved Tiny+CBAM network raises mean average precision by 13.56% in static basic gesture recognition while sacrificing fewer than 3 frames per second. In tests of dynamic gesture performance, the system was more than 95% accurate on all actions except double click, and 100% accurate on the "display current page" action.
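The abstract does not include implementation details, but CBAM (Convolutional Block Attention Module) is a standard attention block, channel attention followed by spatial attention, that is typically spliced into a lightweight detector backbone such as YOLOv4-tiny. The sketch below is a minimal PyTorch rendering of such a block under the usual CBAM formulation; the class names, reduction ratio, and kernel size are illustrative defaults, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: pool spatial dims (avg and max), pass both
    through a shared bottleneck MLP, and gate the input channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return x * torch.sigmoid(avg + mx)

class SpatialAttention(nn.Module):
    """Spatial attention: pool over channels, convolve the 2-channel
    map, and gate spatial locations."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """CBAM block: channel attention, then spatial attention."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.channel = ChannelAttention(channels, reduction)
        self.spatial = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.spatial(self.channel(x))

# Example: gate a 64-channel feature map from a backbone stage.
feat = torch.randn(1, 64, 52, 52)
print(CBAM(64)(feat).shape)  # torch.Size([1, 64, 52, 52])
```

In a Tiny+CBAM arrangement, a block like this would typically be applied to the feature maps of selected backbone stages so the detector can reweight hand regions before prediction; because the block adds very few parameters, this is consistent with the small frame-rate loss the abstract reports.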

Keywords

References

  1. Y. Lou, J. Wei, and S. Song, "Design and optimization of a joint torque sensor for robot collision detection," IEEE Sensors Journal, vol. 19, no. 16, pp. 6618-6627, 2019. https://doi.org/10.1109/JSEN.2019.2912810 
  2. D. Brscic, T. Ikeda, and T. Kanda, "Do you need help? a robot providing information to people who behave atypically," IEEE Transactions on Robotics, vol. 33, no. 2, pp. 500-506, 2017. https://doi.org/10.1109/TRO.2016.2645206 
  3. G. Doisy, J. Meyer, and Y. Edan, "The impact of human-robot interface design on the use of a learning robot system," IEEE Transactions on Human-Machine Systems, vol. 44, no. 6, pp. 788-795, 2014. https://doi.org/10.1109/THMS.2014.2331618 
  4. S. Haghzad Klidbary, S. Bagheri Shouraki, and S. Sheikhpour Kourabbaslou, "Path planning of modular robots on various terrains using Q-learning versus optimization algorithms," Intelligent Service Robotics, vol. 10, pp. 121-136, 2017. https://doi.org/10.1007/s11370-017-0217-x 
  5. A. Sahai, E. Caspar, A. De Beir, O. Grynszpan, E. Pacherie, and B. Berberian, "Modulations of one's sense of agency during human-machine interactions: a behavioural study using a full humanoid robot," Quarterly Journal of Experimental Psychology, vol. 76, no. 3, pp. 606-620, 2023. https://doi.org/10.1177/17470218221095841 
  6. D. Ruhlmann, J. P. Fouassier, and F. Wieder, "Relations structure-proprietes dans les photoamorceurs de polymerisation-5. Effet de l'introduction d'un groupement thioether," European Polymer Journal, vol. 28, no. 12, pp. 1577-1582, 1992. https://doi.org/10.1016/0014-3057(92)90154-T 
  7. A. R. Habib, G. Crossland, H. Patel, E. Wong, K. Kong, H. Gunasekera, et al., "An artificial intelligence computer-vision algorithm to triage otoscopic images from Australian Aboriginal and Torres Strait Islander children," Otology & Neurotology, vol. 43, no. 4, pp. 481-488, 2022. https://doi.org/110.1097/MAO.0000000000003484 
  8. W. Fang, L. Ding, H. Luo, and P. E. Love, "Falls from heights: a computer vision-based approach for safety harness detection," Automation in Construction, vol. 91, pp. 53-61, 2018. https://doi.org/10.1016/j.autcon.2018.02.018 
  9. K. Brkic, T. Hrkac, and Z. Kalafatic, "Protecting the privacy of humans in video sequences using a computer vision-based de-identification pipeline," Expert Systems with Applications, vol. 87, pp. 41-55, 2017. https://doi.org/10.1016/j.eswa.2017.05.067 
  10. J. A. Garcia-Pulido, G. Pajares, S. Dormido, and J. M. de la Cruz, "Recognition of a landing platform for unmanned aerial vehicles by using computer vision-based techniques," Expert Systems with Applications, vol. 76, pp. 152-165, 2017. https://doi.org/10.1016/j.eswa.2017.01.017 
  11. W. Shuai and X. P. Chen, "KeJia: towards an autonomous service robot with tolerance of unexpected environmental changes," Frontiers of Information Technology & Electronic Engineering, vol. 20, no. 3, pp. 307-317, 2019. https://doi.org/10.1631/FITEE.1900096 
  12. Y. Wang, F. Zhou, Y. Zhao, M. Li, and L. Yin, "Iterative learning control for path tracking of service robot in perspective dynamic system with uncertainties," International Journal of Advanced Robotic Systems, vol. 17, no. 6, article no. 1729881420968528, 2020. https://doi.org/10.1177/1729881420968528 
  13. G. Sawadwuthikul, T. Tothong, T. Lodkaew, P. Soisudarat, S. Nutanong, P. Manoonpong, and N. Dilokthanakul, "Visual goal human-robot communication framework with few-shot learning: a case study in robot waiter system," IEEE Transactions on Industrial Informatics, vol. 18, no. 3, pp. 1883-1891, 2022. https://doi.org/10.1109/TII.2021.3049831 
  14. J. L. Xu, C. Riccioli, and D. W. Sun, "Comparison of hyperspectral imaging and computer vision for automatic differentiation of organically and conventionally farmed salmon," Journal of Food Engineering, vol. 196, pp. 170-182, 2017. https://doi.org/10.1016/j.jfoodeng.2016.10.021 
  15. T. Toulouse, L. Rossi, A. Campana, T. Celik, and M. A. Akhloufi, "Computer vision for wildfire research: an evolving image dataset for processing and analysis," Fire Safety Journal, vol. 92, pp. 188-194, 2017. https://doi.org/10.1016/j.firesaf.2017.06.012