
A Robust Deep Learning based Human Tracking Framework in Crowded Environments

  • Oh, Kyungseok (School of Electronic Engineering, Kumoh National Institute of Technology) ;
  • Kim, Sunghyun (School of Electronic Engineering, Kumoh National Institute of Technology) ;
  • Kim, Jinseop (School of Electronic Engineering, Kumoh National Institute of Technology) ;
  • Lee, Seunghwan (School of Electronic Engineering, Kumoh National Institute of Technology)
  • Received: 2021.09.15
  • Reviewed: 2021.11.09
  • Published: 2021.11.30

Abstract

This paper presents a robust deep learning-based human tracking framework for crowded environments. For practical human tracking applications, a target must be tracked reliably even when it is undetected or surrounded by other people. The proposed framework consists of two parts: robust deep learning-based human detection, and tracking that recognizes the aforementioned situations. In the detection part, target candidates are detected using Detectron2, a powerful deep learning detection library, and a weight is computed and assigned to each candidate. The candidate with the highest weight is then extracted and used to track the target human with a Kalman filter. If the bounding box of the extracted candidate overlaps with that of another candidate, the situation is regarded as crowded; in this case, the center of the extracted candidate is compensated using the state estimated before the crowded situation began. When Detectron2 detects no candidates, the target is considered completely occluded, and its next state is estimated using the Kalman prediction step only. In two experiments, people wearing clothes of the same color and of similar height roamed around a given place while overlapping one another. The average error of the proposed framework was measured and compared with that of a conventional approach; the results demonstrate the robustness of the proposed framework in crowded environments.
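The tracking stage described above can be sketched as a constant-velocity Kalman filter over the bounding-box center. This is a minimal illustrative sketch, not the paper's implementation: the Detectron2 detection and candidate-weighting steps are assumed to happen elsewhere, supplying either the highest-weight candidate's center or `None` when the target is completely occluded, and all noise parameters are assumed values.

```python
import numpy as np

class CenterKalmanTracker:
    """Constant-velocity Kalman filter over a bounding-box center (cx, cy).

    State vector: [cx, cy, vx, vy]. Noise covariances below are
    illustrative assumptions, not values from the paper.
    """

    def __init__(self, cx, cy, dt=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])  # initial state
        self.P = np.eye(4) * 10.0              # state covariance (assumed)
        self.F = np.eye(4)                     # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.zeros((2, 4))              # we observe the center only
        self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * 0.01              # process noise (assumed)
        self.R = np.eye(2) * 1.0               # measurement noise (assumed)

    def predict(self):
        # Time update: propagate the state and covariance one frame.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # Measurement update with the detected center z = (cx, cy).
        z = np.asarray(z, dtype=float)
        y = z - self.H @ self.x                         # innovation
        S = self.H @ self.P @ self.H.T + self.R         # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)        # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

def track(tracker, center_or_none):
    """One frame: always predict; update only when a detection exists.

    Passing None models complete occlusion (prediction step only).
    The crowded-situation compensation in the paper could likewise be
    modeled by substituting the pre-crowding estimate for a corrupted
    measurement before calling update().
    """
    pred = tracker.predict()
    if center_or_none is None:
        return pred
    return tracker.update(center_or_none)
```

For example, feeding a target moving rightward and then dropping the detection shows the estimate coasting forward on the predicted velocity during the occlusion rather than freezing in place.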


Acknowledgments

This research was supported by Kumoh National Institute of Technology (2019-104-036).
