
Estimating Interest Levels Based on Visitor Behavior Recognition Towards a Guide Robot


  • Ye Jun Lee (Human-Robot Interaction R&D Center, KIRO (Korea Institute of Robotics & Technology Convergence), Department of Future Automotive and IT Convergence, Kyungpook National University) ;
  • Juhyun Kim (Human-Robot Interaction R&D Center, KIRO (Korea Institute of Robotics & Technology Convergence)) ;
  • Eui-Jung Jung (Human-Robot Interaction R&D Center, KIRO (Korea Institute of Robotics & Technology Convergence)) ;
  • Min-Gyu Kim (Human-Robot Interaction R&D Center, KIRO (Korea Institute of Robotics & Technology Convergence))
  • Received : 2023.06.09
  • Accepted : 2023.07.22
  • Published : 2023.11.30

Abstract

This paper proposes a method to estimate the level of interest that visitors show toward a specific target, a guide robot, in spaces such as exhibition halls and museums where many visitors may take interest in a particular subject. To accomplish this, we apply deep learning-based behavior recognition and object tracking to multiple visitors and, from the results, derive each visitor's behavior analysis and interest level. A custom dataset tailored to the characteristics of exhibition hall and museum environments was created, and a deep learning model was built on it. Four scenarios of visitor behavior were classified, and predicted and experimental values were obtained from them, validating the interest estimation method proposed in this paper.
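As a rough illustration of the pipeline the abstract describes, the sketch below estimates a per-visitor interest score from tracked behavior labels. It is a minimal Python sketch under stated assumptions: the four behavior names, their interest weights, and the `VisitorTrack` helper are illustrative, not the paper's published scenario definitions; in the actual system the per-frame labels would come from the deep behavior-recognition model applied to each visitor track produced by a multi-object tracker.

```python
# Minimal sketch: per-visitor interest estimation from behavior labels.
# The behavior names, weights, and VisitorTrack helper are illustrative
# assumptions; they are not the paper's published scenario definitions.
from dataclasses import dataclass, field
from typing import List

# Four assumed visitor behaviors standing in for the paper's four scenarios.
INTEREST_WEIGHTS = {
    "approach": 1.0,       # walks toward the robot
    "gaze_at_robot": 0.7,  # stops and looks at the robot
    "pass_by": 0.2,        # crosses the area without stopping
    "walk_away": 0.0,      # moves away from the robot
}

@dataclass
class VisitorTrack:
    """One visitor, identified by a multi-object-tracking ID."""
    track_id: int
    behaviors: List[str] = field(default_factory=list)  # per-frame labels

    def interest_level(self) -> float:
        """Average the interest weights of the observed behaviors."""
        if not self.behaviors:
            return 0.0
        return sum(INTEREST_WEIGHTS[b] for b in self.behaviors) / len(self.behaviors)

if __name__ == "__main__":
    visitor = VisitorTrack(track_id=3)
    # In the real pipeline these labels would come from the behavior
    # recognition model run on each tracked visitor.
    visitor.behaviors = ["pass_by", "gaze_at_robot", "approach", "approach"]
    print(f"Visitor {visitor.track_id} interest: {visitor.interest_level():.2f}")
```

Averaging per-frame weights is only one plausible aggregation; a windowed or recency-weighted average would likewise fit the tracking-then-recognition structure described above.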


Acknowledgement

This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00842, Development of Cloud Robot Intelligence for Continual Adaptation to User Reactions in Real Service Environments).
