Location Tracking and Visualization of Dynamic Objects using CCTV Images


  • Park, Sang-Jin (LX Spatial Information Research Institute) ;
  • Cho, Kuk (LX Spatial Information Research Institute) ;
  • Im, Junhyuck (LX Spatial Information Research Institute) ;
  • Kim, Minchan (LX Spatial Information Research Institute)
  • Received : 2021.04.23
  • Accepted : 2021.06.28
  • Published : 2021.06.30

Abstract

C-ITS (Cooperative Intelligent Transport System), which pursues traffic safety and convenience, uses various sensors to generate traffic information; improving sensor-related technology is therefore necessary to increase the efficiency and reliability of that information. Recently, the role of CCTV in collecting video information has become more important owing to advances in AI (Artificial Intelligence) technology. In this study, we propose a scheme for identifying and tracking dynamic objects (vehicles, people, etc.) in CCTV images and for analyzing and providing information about them for use in various environments. To this end, we identified and tracked dynamic objects using the YOLOv4 and DeepSORT algorithms, built a Kafka-based server for real-time multi-user support, defined transformation matrices between the image and spatial coordinate systems, and visualized the dynamic objects on a map. In addition, a positional consistency evaluation was performed to confirm the usefulness of the scheme. Through the proposed scheme, we confirmed that CCTV can serve, beyond a simple monitoring role, as an important road-infrastructure sensor that analyzes road conditions in real time and provides relevant information.
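
The transformation between image and spatial coordinates mentioned above is not detailed in the abstract; one common way to define such a matrix for a fixed CCTV viewing an approximately planar road surface is a homography estimated from ground control points. The sketch below illustrates this idea with OpenCV; the control-point values, coordinate system, and helper function are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): define a transformation matrix between
# image pixel coordinates and map (spatial) coordinates via a planar homography,
# assuming four or more surveyed ground control points are available.
import numpy as np
import cv2

# Hypothetical control points: pixel locations in the CCTV frame and their
# corresponding map coordinates (e.g., in a projected map coordinate system).
pixel_pts = np.array([[102, 540], [880, 512], [1203, 300], [404, 215]], dtype=np.float32)
map_pts = np.array([[211250.3, 433812.7], [211265.1, 433815.2],
                    [211270.8, 433840.6], [211248.9, 433842.1]], dtype=np.float32)

# Estimate the 3x3 homography H that maps pixel coordinates to map coordinates.
H, _ = cv2.findHomography(pixel_pts, map_pts)

def pixel_to_map(u, v, H):
    """Project a single image point (u, v) into map coordinates using H."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Example: project the bottom-center of a tracked vehicle's bounding box.
print(pixel_to_map(640.0, 480.0, H))
```

In practice, the ground-contact point of each tracked bounding box (for example, its bottom center) would be projected this way before being placed on the map.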

The various C-ITS road-infrastructure projects being carried out domestically and abroad combine a wide range of sensor technologies, and considerable effort is being devoted to improving sensor-related technology in order to increase the efficiency and reliability of the road infrastructure. Recently, with advances in artificial intelligence, the role of CCTV in collecting video information has become even more important. A large number of CCTVs are already installed and operated to monitor road states and conditions and for security purposes, but because they are mainly used for simple video monitoring, their utilization for autonomous driving falls short of that of other sensors. This study proposes a method for identifying and tracking moving objects (vehicles, people, etc.) in images from existing CCTVs, and for analyzing and providing their information so that it can be used in various environments. To this end, we identified and tracked moving objects using the YOLOv4 and DeepSORT algorithms, built a Kafka-based server for real-time multi-user support, defined transformation matrices between the image and spatial coordinate systems, and visualized the moving objects on maps such as HD (precision road) maps and aerial maps; a positional consistency evaluation was then performed to verify the usefulness of the approach. Through the proposed method, we confirmed that CCTV can go beyond a simple monitoring role and serve as an important road-infrastructure sensor that analyzes road conditions in real time and provides relevant information.
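
As a complement to the abstract above, the following is a minimal sketch of how tracked object positions could be published through a Kafka topic so that multiple map clients receive them in real time. It assumes the kafka-python client; the broker address, topic name, and message fields are illustrative assumptions rather than the paper's actual server design.

```python
# Minimal sketch, assuming the kafka-python client: publish tracked-object
# positions (already converted to map coordinates) to a Kafka topic for
# real-time consumption by multiple map clients.
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                      # assumed broker address
    value_serializer=lambda m: json.dumps(m).encode("utf-8"),
)

def publish_track(track_id, object_class, x, y):
    """Send one tracked-object position as a JSON message."""
    message = {
        "track_id": track_id,
        "class": object_class,   # e.g., "car", "person"
        "x": x,                  # map/projected coordinate
        "y": y,
        "timestamp": time.time(),
    }
    producer.send("cctv-dynamic-objects", value=message)     # hypothetical topic name

publish_track(17, "car", 211262.4, 433820.9)
producer.flush()
```

A map client would subscribe to the same topic with a KafkaConsumer and plot the incoming coordinates on the HD map or aerial map layer.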

Acknowledgement

This research was supported by the "Saemangeum Commercial Vehicle Autonomous Driving Testbed Construction Project" (Project No. P0013841) funded by the Ministry of Trade, Industry and Energy and the Korea Institute for Advancement of Technology.

References

  1. Kim BH. Design of Image Tracking System Using Location Determination Technology. The Society of Digital Policy & Management. p.143-148.
  2. Kim JT. 2018. A Study on the R&D of the Operation System and Transportation Infrastructure for Road Driving of Self-driving Cars. The Road Traffic Authority. 2018-12.
  3. Park JL, Kim HH. 2019. Autonomous Driving Technology. 2019(16).
  4. Seo HD, Kim EM. 2020. Estimation of Traffic Volume Using Deep Learning in Stereo CCTV Image. Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography. 38(3):269-279. https://doi.org/10.7848/KSGPC.2020.38.3.269
  5. Seung TY, Kwon GC, et al. 2016. An Estimation Method for Location Coordinate of Object in Image Using Single Camera and GPS. Journal of Korea Multimedia Society. 19(2):112-121. https://doi.org/10.9717/kmms.2016.19.2.112
  6. Yang IC, Jeon WH. 2019. Development of an Integrated Traffic Object Detection Framework for Traffic Data Collection. The Korea Institute of Intelligent Transport Systems. 18(6):191-201. https://doi.org/10.12815/kits.2019.18.6.191
  7. ACM. 2017. American Center For Mobility: https://www.acmwillowrun.org/.
  8. Kiela K, et al. 2020. Review of V2X-IoT Standards and Frameworks for ITS Applications. MDPI Applied Sciences. 10(12):2-23.
  9. Bezzina D, Sayer J. 2015. Safety Pilot Model Deployment:Test Conductor Team Report. NHTSA; [accessed 2015 Jun]. http://www.nhtsa.gov/.
  10. Bochkovskiy A, et al. 2020. YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.
  11. CV Pilot. 2017. Connected Vehicle Pilot Deployment Program: United States Department of Transportation(ITS): https://www.its.dot.gov/pilots/.
  12. Farag W, Saleh Z. 2018. Road lane-lines detection in real-time for advanced driving assistance systems. IEEE on 3ICT. p.1-8.
  13. Kotsi A. 2020. Overview of C-ITS Deployment Projects in Europe and USA: [accessed 2020 Oct 14]: https://arxiv.org/abs/2010.07299/.
  14. Liu W, et al. 2016. SSD: Single shot multibox detector. European conference on computer vision. p.21-37.
  15. Mehboob F, et al. 2017. Glyph-based video visualization on Google Map for surveillance in smart cities. EURASIP Journal on Image and Video Processing. 2017-28.
  16. Prince S.J.D, et al. 2002. Augmented reality camera tracking with homographies. IEEE Computer Graphics and Application. 22(6):39-45. https://doi.org/10.1109/MCG.2002.1046627
  17. Ren S, et al. 2015. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Neural Information Processing Systems (NIPS).
  18. Redmon J, et al. 2016. You only look once: Unified, real-time object detection. Proceedings of the IEEE conference on CVPR. p.779-788.
  19. Sankaranarayanan A, et al. 2008. Object Detection, Tracking and Recognition for Multiple Smart Cameras. Proceedings of the IEEE. 96(10):1606-1624. https://doi.org/10.1109/JPROC.2008.928758
  20. Wojke N, et al. 2017. Simple online and realtime tracking with a deep association metric. IEEE on ICIP. p.3645-3649.