Estimation of Traffic Volume Using Deep Learning in Stereo CCTV Image

  • Seo, Hong Deok (Department of Spatial Information Engineering, Namseoul University)
  • Kim, Eui Myoung (Department of Spatial Information Engineering, Namseoul University)
  • Received : 2019.12.30
  • Accepted : 2020.03.09
  • Published : 2020.06.30

Abstract

Traffic volume estimation mainly relies on survey equipment such as automatic vehicle classification systems, vehicle detection systems, and toll collection systems, combined with manual counts through CCTV (Closed Circuit TeleVision), but this requires considerable manpower and cost. In this study, we proposed a method of estimating traffic volume using deep learning and stereo CCTV to overcome the limitation that a single CCTV camera cannot detect every vehicle. The COCO (Common Objects in Context) dataset was used to train the deep learning model for vehicle detection, and vehicles were detected in the left and right CCTV images in real time. Vehicles missed in one image were then additionally detected by applying an affine transformation between the two views, improving the accuracy of the estimated traffic volume. Experiments were conducted separately for a normal road environment and for a foggy road environment. In the normal road environment, vehicle detection improved by 6.75% and 5.92% in the left and right images, respectively, compared with using a single CCTV image. In the foggy road environment, vehicle detection improved by 10.79% and 12.88% in the left and right images, respectively.
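
As a rough illustration of the cross-view step described in the abstract, below is a minimal Python sketch, assuming OpenCV and NumPy: it estimates a 2D affine transformation from a few manually matched road points in the left and right CCTV views and projects the centres of left-image detections into the right image to flag vehicles the right-image detector may have missed. The function names, matched points, and box coordinates are hypothetical examples, not the authors' implementation.

    import numpy as np
    import cv2  # opencv-python

    def estimate_affine(left_pts, right_pts):
        # Fit a 2x3 affine matrix mapping left-image points to right-image
        # points; at least three non-collinear correspondences are required.
        left = np.asarray(left_pts, dtype=np.float32)
        right = np.asarray(right_pts, dtype=np.float32)
        matrix, _inliers = cv2.estimateAffine2D(left, right, method=cv2.RANSAC)
        return matrix

    def project_left_detections(left_boxes, matrix):
        # Map the centres of left-image bounding boxes (x1, y1, x2, y2) into
        # the right image; the projected points mark candidate vehicles that
        # the right-image detector failed to report.
        centres = np.array([[(x1 + x2) / 2.0, (y1 + y2) / 2.0]
                            for x1, y1, x2, y2 in left_boxes], dtype=np.float32)
        projected = cv2.transform(centres.reshape(-1, 1, 2), matrix)
        return projected.reshape(-1, 2)

    # Hypothetical usage with three matched points and one left-only detection:
    # M = estimate_affine([(120, 340), (410, 355), (640, 330)],
    #                     [(95, 338), (385, 352), (615, 329)])
    # candidates = project_left_detections([(100, 320, 150, 360)], M)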

References

  1. Ammour, N., Alhichri, H., Bazi, Y., Benjdira, B., Alajlan, N., and Zuair, M. (2017), Deep learning approach for car detection in UAV imagery. Remote Sensing, Vol. 9, No. 4, pp. 312-326. https://doi.org/10.3390/rs9040312
  2. Anindra, F., Soeparno, H., and Napitupulu, T. A. (2018), CCTV traffic congestion analysis at Pejompongan using case based reasoning. In 2018 International Conference on Information and Communications Technology (ICOIACT), 6-7 March, Yogyakarta, Indonesia, pp. 861-865.
  3. Barthelemy, J., Verstaevel, N., Forehead, H., and Perez, P. (2019), Edge-computing video analytics for real-time traffic monitoring in a smart city. Sensors, Vol. 19, No. 9, pp. 2048-2078. https://doi.org/10.3390/s19092048
  4. Benjdira, B., Khursheed, T., Koubaa, A., Ammar, A., and Ouni, K. (2019), Car detection using unmanned aerial vehicles: Comparison between Faster R-CNN and YOLOv3. In 2019 1st International Conference on Unmanned Vehicle Systems-Oman (UVS), IEEE, 5-7 February, Muscat, Oman, pp. 1-6.
  5. Choi, I.K. and Yoo, J.S. (2017), Object detection in road environment CCTV images using deep learning. The Institute of Electronics and Information Engineers, 24-25 November, Incheon, Korea, pp. 627-629.
  6. Du, X., Ang, M.H., and Rus, D. (2017), Car detection for autonomous vehicle: LIDAR and vision fusion approach through deep learning framework. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 24-28 September, Vancouver, Canada, pp. 749-754.
  7. Han, S.H., Shin, Y.S., and Lee, J.Y. (2019), A study on the evaluation technique of intelligent security technology based on spatial information: multi-CCTV collaboration technology. The Journal of the Korea Academia-Industrial cooperation Society, Vol. 20, No. 7, pp. 111-118. (in Korean with English abstract)
  8. Hong, G.S., Eom, T.J., and Kim, B.G. (2011), Development of vision-based monitoring system technology for traffic analysis and surveillance. Journal of Information and Security, Vol. 11, No. 4, pp. 59-66.
  9. Huh, M.H., Shin, S.Y., and Lee, Y.W. (2013), Traffic measurement: moving vehicle method using CCTV. Journal of the Korea Institute of Information and Communication Engineering, Vol. 17, No. 11, pp. 2575-2580. (in Korean with English abstract) https://doi.org/10.6109/jkiice.2013.17.11.2575
  10. Jeong, D.H. and Jeong, W.T. (2019), Prediction of rolling noise based on machine learning technique using rail surface roughness data. Journal of the Korean Society for Railway, Vol. 22, No. 3, pp. 209-217. (in Korean with English abstract) https://doi.org/10.7782/jksr.2019.22.3.209
  11. Jo, S.H., Kim, C.G., Lim, H.Y., and Shin, Y.T. (2018), A study on the traffic flow analysis method based on change detection for traffic video data. Journal of Information Technology and Architecture, Vol. 15, No. 3, pp. 373-382. (in Korean with English abstract)
  12. Kim, J.H. and Choi, D.H. (2019), Implementation of a vehicle traffic and speed estimation system using faster R-CNN. The Journal of Korean Institute of Communications and Information Sciences, Vol. 44, No. 9, pp. 1754-1758. (in Korean with English abstract) https://doi.org/10.7840/kics.2019.44.9.1754
  13. Kim, S.S., Jung, J.H., Kim, E.M., Yoo, H.H., and Sohn, H.G. (2008), Geocoding of low altitude UAV imagery using affine transformation model. Journal of Korean Society for Geospatial Information Science, Vol. 16, No. 4, pp. 79-87. (in Korean with English abstract)
  14. Kim, Y.M., Lee, J.Y., Yoon, I.L., Han, T.J., and Kim, C.Y. (2018), CCTV object detection with background subtraction and convolutional neural network. The Korean Institute of Information Scientists and Engineers, Vol. 24, No. 3, pp. 151-156. (in Korean with English abstract)
  15. Lee, G.W. and Yom, J.H. (2018), Design and implementation of web-based automatic preprocessing system of remote sensing imagery for machine learning modeling. The Journal of Korean Society for Geospatial Information Science, Vol. 26, No. 1, pp. 61-67. (in Korean with English abstract) https://doi.org/10.7319/kogsis.2018.26.1.061
  16. Lee, T.H., Kim, K.J., Yun, K.S., Kim, K.J., and Choi, D.H. (2018), A method of counting vehicle and pedestrian using deep learning based on CCTV. The Korean Institute of Intelligent Systems, Vol. 28, No. 3, pp. 219-224. (in Korean with English abstract) https://doi.org/10.5391/JKIIS.2018.28.3.219
  17. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., and Zitnick, C.L. (2014), Microsoft COCO: Common objects in context. In European conference on computer vision, Springer, Cham, 6-12 September, Zurich, Switzerland, pp. 740-755.
  18. Mundhenk, T. N., Konjevod, G., Sakla, W. A., and Boakye, K. (2016), A large contextual dataset for classification, detection and counting of cars with deep learning. In European conference on computer vision, Springer, Cham, 8-16 October, Amsterdam, Netherlands, pp. 785-800.
  19. Park, G.M. and Bae, Y.C. (2019), Performance comparison of machine learning in the various kind of prediction. The Journal of the Korea Institute of Electronic Communication Sciences, Vol. 14, No. 1, pp. 169-178. (in Korean with English abstract) https://doi.org/10.13067/JKIECS.2019.14.1.169
  20. Park, S.Y., Lee, J.B., Park, Y.J., and Yu, K.Y. (2009), The study on coordinate transformation for updating of digital map from construction drawing data. Journal of Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, Vol. 27, No. 2, pp. 281-288. (in Korean with English abstract)
  21. Peppa, M.V., Bell, D., Komar, T., and Xiao, W. (2018), Urban traffic flow analysis based on deep learning car detection from CCTV image series. International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, Vol. 42, No. 4, pp. 499-506.
  22. Redmon, J. and Farhadi, A. (2017), YOLO9000: Better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition, 21-26 July, Honolulu, USA, pp. 7263-7271.
  23. Redmon, J. and Farhadi, A. (2018), YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767, pp. 1-6.
  24. Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016), You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 27-30 June, Las Vegas, USA, pp. 779-788.
  25. Sirirattanapol, C., Nagai, M., Witayangkurn, A., Pravinvongvuth, S., and Ekpanyapong, M. (2019), Bangkok CCTV image through a road environment extraction system using multi-label convolutional neural network classification. ISPRS International Journal of Geo-Information, Vol. 8, No. 3, pp. 128-143. https://doi.org/10.3390/ijgi8030128
  26. Traffic Monitoring System. (2018), Road traffic investigation, Ministry of Land, Infrastructure and Transport, URL: http://www.road.re.kr/ (last date accessed: 22 December 2019).
  27. Tung, C., Kelleher, M.R., Schlueter, R.J., Xu, B., Lu, Y.H., Thiruvathukal, G.K., Chen, Y.K., and Lu, Y. (2019), Large-scale object detection of images from network cameras in variable ambient lighting conditions. In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), IEEE, 28-30 March, California, USA, pp. 393-398.
  28. Xu, Y., Wu, L., Xie, Z., and Chen, Z. (2018), Building extraction in very high resolution remote sensing imagery using deep learning and guided filters. Remote Sensing, Vol. 10, No. 1, pp. 144-161. https://doi.org/10.3390/rs10010144
  29. Young, T., Hazarika, D., Poria, S., and Cambria, E. (2018), Recent trends in deep learning based natural language processing. IEEE Computational Intelligence Magazine, Vol. 13, No. 3, pp. 55-75. https://doi.org/10.1109/mci.2018.2840738
  30. Yu, J.H., Han, Y.J., and Hahn, H.S. (2019), Improving performance of YOLO network using multi-layer overlapped windows for detecting correct position of small dense objects. Journal of The Korea Society of Computer and Information, Vol. 24, No. 3, pp. 19-27. https://doi.org/10.9708/JKSCI.2019.24.03.019

Cited by

  1. A Method for Position Tracking and Visualization of Dynamic Objects Using CCTV Images vol.51, pp.1, 2020, https://doi.org/10.22640/lxsiri.2021.51.1.53