Updating Obstacle Information Using Object Detection in Street-View Images

  • Park, Seula (Institute of Engineering Research, Seoul National University)
  • Song, Ahram (School of Convergence & Fusion System Engineering, Kyungpook National University)
  • Received : 2021.12.19
  • Accepted : 2021.12.23
  • Published : 2021.12.31

Abstract

Street-view images, which are omnidirectional scenes centered on specific locations along a road, can provide rich information about the obstacles that pedestrians encounter. Pedestrian network data used for navigation services should reflect up-to-date obstacle information to ensure the mobility of all pedestrians, including people with disabilities. In this study, an object detection model was trained on street-view images with a deep learning algorithm to detect bollards, a major pedestrian obstacle throughout Seoul. In addition, a process was proposed for updating the presence and number of bollards as obstacle attributes of crosswalk nodes through spatial matching between the detected bollards and the pedestrian network nodes; missing crosswalk information can be updated concurrently by the same process. The proposed approach is well suited to crowdsourced data, as a model trained on street-view images can also be applied to photos taken with a smartphone while walking. With additional training on the various obstacles captured in street-view images, efficient updating of obstacle information on roads is expected.
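The spatial-matching step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the coordinates, the matching radius, and the attribute names (`has_bollard`, `bollard_count`) are assumptions made for the example. Each detected bollard is assigned to the nearest crosswalk node within a distance threshold, and the node's obstacle attributes are updated accordingly.

```python
from math import hypot

# Assumed matching threshold in metres (illustrative, not from the paper).
MATCH_RADIUS = 10.0

def update_obstacle_attributes(crosswalk_nodes, bollards, radius=MATCH_RADIUS):
    """Match detected bollard points to the nearest crosswalk node.

    crosswalk_nodes: {node_id: (x, y)} in a projected coordinate system.
    bollards: list of (x, y) bollard detections.
    Returns {node_id: {"has_bollard": bool, "bollard_count": int}}.
    """
    attrs = {nid: {"has_bollard": False, "bollard_count": 0}
             for nid in crosswalk_nodes}
    for bx, by in bollards:
        # Find the nearest crosswalk node within the matching radius.
        nearest, best = None, radius
        for nid, (nx, ny) in crosswalk_nodes.items():
            d = hypot(bx - nx, by - ny)
            if d <= best:
                nearest, best = nid, d
        # Detections with no node within the radius are left unmatched.
        if nearest is not None:
            attrs[nearest]["bollard_count"] += 1
            attrs[nearest]["has_bollard"] = True
    return attrs

# Hypothetical data: two crosswalk nodes and four bollard detections,
# one of which lies too far from any node to be matched.
nodes = {"CW1": (0.0, 0.0), "CW2": (50.0, 0.0)}
detections = [(1.0, 2.0), (-2.0, 1.0), (48.0, 3.0), (100.0, 100.0)]
result = update_obstacle_attributes(nodes, detections)
print(result)
```

In a real pipeline the node coordinates would come from the pedestrian network data and the bollard points from geolocated detections, and a spatial index (e.g. a k-d tree) would replace the linear scan; the nearest-node-within-radius logic stays the same.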

Acknowledgement

This research was conducted as a basic research project supported by the National Research Foundation of Korea (NRF), funded by the Korean government (Ministry of Education) in 2021 (NRF-2021R1A6A3A01086427).
