Precision Evaluation of Expressway Incident Detection Based on Dash Cam

  • Sanggi Nam (Dept. of Urban Planning and Eng., Yeungnam Univ.)
  • Younshik Chung (Dept. of Urban Planning and Eng., Yeungnam Univ.)
  • Received : 2023.11.03
  • Accepted : 2023.12.12
  • Published : 2023.12.31

Abstract

With the development of computer vision technology, video sensors such as CCTV are increasingly used to detect traffic incidents. To date, however, most incident detection has relied on fixed imaging equipment, so incidents occurring in shaded areas beyond the coverage of fixed cameras have been difficult to detect. Recent advances in edge computing now make real-time analysis of mobile video possible. The purpose of this study is to evaluate the feasibility of detecting expressway incidents by applying computer vision technology to dash cams. To this end, annotation data were constructed from 4,388 still frames collected from dash cams mounted on Korea Expressway Corporation patrol vehicles and analyzed using the YOLO (You Only Look Once) algorithm. In the analysis, prediction precision exceeded 70% for every object class, and precision for traffic accidents was about 85%. The mAP (mean Average Precision) was 0.769; among the per-class AP (Average Precision) values, traffic accidents were highest at 0.904 and debris lowest at 0.629.
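The precision, AP, and mAP figures reported above are standard object-detection metrics. As a minimal sketch of how they are typically computed (Pascal-VOC-style all-point interpolation; the detections and counts below are illustrative examples, not data from the study):

```python
# Hedged sketch of per-class AP and mAP computation, as commonly used to
# evaluate YOLO detectors. Illustrative values only, not the study's data.

def average_precision(detections, num_gt):
    """AP for one object class.

    detections: list of (confidence, is_true_positive) pairs, where a true
    positive is a detection matched to an unmatched ground-truth box
    (typically at IoU >= 0.5). num_gt: number of ground-truth boxes.
    """
    detections = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    points = []  # (recall, precision) after each ranked detection
    for _, is_tp in detections:
        tp += 1 if is_tp else 0
        fp += 0 if is_tp else 1
        points.append((tp / num_gt, tp / (tp + fp)))
    # Enforce a monotonically non-increasing precision envelope over recall.
    for i in range(len(points) - 2, -1, -1):
        points[i] = (points[i][0], max(points[i][1], points[i + 1][1]))
    # AP is the area under the precision-recall envelope.
    ap, prev_recall = 0.0, 0.0
    for recall, precision in points:
        ap += (recall - prev_recall) * precision
        prev_recall = recall
    return ap

def mean_average_precision(per_class_aps):
    """mAP is the unweighted mean of the per-class AP values."""
    return sum(per_class_aps) / len(per_class_aps)

# Example: two ground-truth objects, three confidence-ranked detections.
ap = average_precision([(0.9, True), (0.8, False), (0.7, True)], num_gt=2)
print(round(ap, 3))  # 0.833
```

mAP is then the mean of such per-class AP values, which is why a high-AP class (traffic accidents, 0.904) and a low-AP class (debris, 0.629) can both sit around an overall mAP of 0.769.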

Acknowledgement

This research was supported by the 2023 Yeungnam University Research Grant, and this paper is a revised and supplemented version of a paper presented at the 2023 Fall Conference of the Korean ITS Society.
