Quantitative Evaluations of Deep Learning Models for Rapid Building Damage Detection in Disaster Areas

  • Ser, Junho (Dept. of Geography, Dongguk University)
  • Yang, Byungyun (Dept. of Geography Education, Dongguk University)
  • Received : 2022.07.20
  • Accepted : 2022.09.13
  • Published : 2022.10.31

Abstract

This paper aims to identify which of the prevailing deep learning models, a branch of AI (Artificial Intelligence), is best suited to rapidly detecting damaged buildings in areas where disasters occur. The candidate models are SSD-512, RetinaNet, and YOLOv3, all of which have been widely used for object detection in recent years. They are one-stage detector networks, an architecture well suited to rapid object detection. Although their structural advantages and high speed have made them popular for general object detection, they have rarely been applied to damaged-building detection in disaster management. In this study, we first trained each model on the xBD dataset, which provides post-disaster imagery with damage-classification labels. We then evaluated the three models quantitatively using mAP (mean Average Precision) and FPS (Frames Per Second). YOLOv3 recorded an mAP of 34.39% at 46 FPS. RetinaNet recorded an mAP of 36.06%, 1.67 percentage points higher than YOLOv3, but its FPS was only one-third that of YOLOv3. SSD-512 scored markedly lower than YOLOv3 on both indicators. In a disaster situation, a rapid and precise survey of damaged buildings is essential for effective response. Accordingly, the results of this study are expected to support rapid response in disaster management.

This study compares deep learning models, which are among the most widely used AI techniques today, in order to select the model best suited to the rapid detection of buildings damaged by disasters. First, SSD-512, RetinaNet, and YOLOv3, leading deep learning models among the one-stage detectors appropriate for rapid object detection, were chosen as candidates. These models adopt the one-stage detector design and, thanks to the structure of their detection pipeline and their fast computation, are widely used in object recognition; their application to disaster management, however, remains at an early stage. To find the model best suited to damage detection, this study proceeded as follows. First, to detect the degree of building damage caused by disasters, very-high-resolution satellite imagery from the xBD dataset, which consists of buildings damaged by disasters, was used for training. Next, to compare and evaluate the models, their detection accuracy and image-processing speed were analyzed quantitatively. As a result of training, YOLOv3 recorded a detection accuracy of 34.39% and a processing speed of 46 images per second. RetinaNet recorded a detection accuracy of 36.06%, 1.67 percentage points higher than YOLOv3, but its processing speed was only one-third that of YOLOv3. SSD-512 scored lower than YOLOv3 on both indicators. Rapid and precise collection of information on the damage caused by large-scale disasters is essential for disaster response. Therefore, the results obtained through this study are expected to be used effectively in disaster management, where rapid acquisition of geographic information is required.
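The comparison above rests on two standard detection metrics. The following is a minimal sketch of how they are typically computed, not the authors' implementation: mAP averages, over classes, the area under the precision-recall curve of confidence-ranked detections at a fixed IoU threshold, and FPS is simply images processed per second of inference time. The IoU threshold of 0.5, the data layout, and the `detector` callable are illustrative assumptions.

```python
# Sketch of mAP (PASCAL VOC style, IoU >= 0.5) and FPS measurement. Not the authors' code.
import time
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(preds, gts, iou_thr=0.5):
    """AP for one class. preds: list of (image_id, score, box); gts: {image_id: [box, ...]}."""
    preds = sorted(preds, key=lambda p: -p[1])            # rank detections by confidence
    claimed = {img: np.zeros(len(boxes), dtype=bool) for img, boxes in gts.items()}
    tp, fp = np.zeros(len(preds)), np.zeros(len(preds))
    n_gt = sum(len(boxes) for boxes in gts.values())
    for i, (img, _, box) in enumerate(preds):
        overlaps = [iou(box, g) for g in gts.get(img, [])]
        best = int(np.argmax(overlaps)) if overlaps else -1
        if best >= 0 and overlaps[best] >= iou_thr and not claimed[img][best]:
            tp[i] = 1                                     # first match to an unclaimed ground truth
            claimed[img][best] = True
        else:
            fp[i] = 1                                     # low-IoU or duplicate detection
    recall = np.cumsum(tp) / max(n_gt, 1)
    precision = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
    # All-point interpolation: integrate the precision envelope over recall.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    mpre = np.maximum.accumulate(mpre[::-1])[::-1]
    step = np.where(mrec[1:] != mrec[:-1])[0]
    return float(np.sum((mrec[step + 1] - mrec[step]) * mpre[step + 1]))

def mean_average_precision(preds_by_class, gts_by_class, iou_thr=0.5):
    """mAP: unweighted mean of per-class AP (e.g. one class per damage level)."""
    aps = [average_precision(preds_by_class.get(c, []), gts_by_class[c], iou_thr)
           for c in gts_by_class]
    return sum(aps) / len(aps)

def frames_per_second(detector, images):
    """Throughput of a detector callable: images processed per second of wall-clock inference."""
    start = time.perf_counter()
    for img in images:
        detector(img)
    return len(images) / (time.perf_counter() - start)
```

Under this kind of protocol, mAP rewards detections that are both correctly localized and correctly classified regardless of speed, while FPS isolates throughput, which is consistent with RetinaNet leading slightly on the former while YOLOv3 leads decisively on the latter.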

Acknowledgement

This research was supported by the Global Core Human Resource Development Support Program of the Ministry of Science and ICT (MSIT) and the Institute of Information & Communications Technology Planning & Evaluation (IITP) (S-2022-A0496-00172).
