http://dx.doi.org/10.9717/kmms.2020.24.3.345

A Dangerous Situation Recognition System Using Human Behavior Analysis  

Park, Jun-Tae (Dept. of Computer Engineering, Kumoh National Institute of Technology)
Han, Kyu-Phil (Dept. of Computer Engineering, Kumoh National Institute of Technology)
Park, Yang-Woo (Dept. of Aeronautics & Software Engineering, Kyungwoon University)
Abstract
Recently, deep learning-based image recognition systems have been adopted in various surveillance environments, but most of them are still-image object recognition methods, which are insufficient for long-term temporal analysis and high-dimensional situation management. Therefore, we propose a method that recognizes specific dangerous situations caused by humans in real time, utilizing deep learning-based object analysis techniques. The proposed method uses deep learning-based object detection and tracking algorithms to recognize situations such as 'trespassing' and 'loitering'. In addition, human joint-pose data are extracted and analyzed for emergency-awareness functions such as 'falling down', enabling notification not only in security settings but also in emergency-response applications.
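To make the described pipeline concrete, below is a minimal sketch (not the authors' implementation) of the pose-based emergency-awareness step: a rule-based 'falling down' detector operating on 2D joint coordinates such as those produced by an OpenPose-style estimator. The joint names (`neck`, `mid_hip`), the torso-angle rule, and both thresholds are illustrative assumptions.

```python
"""Hedged sketch of a rule-based fall detector over 2D joint poses.
Assumes an upstream pose estimator supplies (x, y) pixel coordinates
per named joint; all names and thresholds here are illustrative."""
import math

# Hypothetical pose layout: a dict mapping joint name -> (x, y) pixels.
Pose = dict  # e.g., {"neck": (x, y), "mid_hip": (x, y), ...}

def torso_angle_deg(pose: Pose) -> float:
    """Angle of the neck -> mid-hip vector from vertical (0 = upright).
    Image y grows downward, so an upright torso gives dx ~ 0."""
    nx, ny = pose["neck"]
    hx, hy = pose["mid_hip"]
    dx, dy = hx - nx, hy - ny
    return abs(math.degrees(math.atan2(dx, dy)))

def detect_fall(poses, angle_thresh=60.0, min_frames=5):
    """Flag a fall when the torso stays near-horizontal for
    `min_frames` consecutive frames (assumed thresholds)."""
    run = 0
    for t, pose in enumerate(poses):
        run = run + 1 if torso_angle_deg(pose) > angle_thresh else 0
        if run >= min_frames:
            return t  # frame index where the fall is confirmed
    return None

if __name__ == "__main__":
    upright = {"neck": (100, 50), "mid_hip": (100, 150)}
    fallen = {"neck": (50, 200), "mid_hip": (150, 210)}
    frames = [upright] * 10 + [fallen] * 8
    print("fall confirmed at frame:", detect_fall(frames))
```

A deployed system would combine such a temporal rule with the detection and tracking outputs (e.g., per-track zone-entry tests for 'trespassing' and dwell-time counters for 'loitering'), but those rules follow the same pattern of accumulating evidence over consecutive frames.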
Keywords
Object detection; Object tracking; Deep learning; Pose estimation; Action recognition