
Ensemble-based deep learning for autonomous bridge component and damage segmentation leveraging Nested Reg-UNet

  • Abhishek Subedi (Lyles School of Civil Engineering, Purdue University) ;
  • Wen Tang (Lyles School of Civil Engineering, Purdue University) ;
  • Tarutal Ghosh Mondal (Department of Civil, Architectural and Environmental Engineering, Missouri University of Science and Technology) ;
  • Rih-Teng Wu (Department of Civil Engineering, National Taiwan University) ;
  • Mohammad R. Jahanshahi (Lyles School of Civil Engineering, Purdue University)
  • Received : 2022.10.26
  • Accepted : 2023.02.02
  • Published : 2023.04.25

Abstract

Bridges constantly undergo deterioration and damage, the most common forms being concrete damage and exposed rebar. Periodic inspection of bridges to identify damage can aid in its timely remediation. Likewise, identifying components provides context for damage assessment and helps gauge a bridge's state of interaction with its surroundings. Current inspection techniques rely on manual site visits, which can be time-consuming and costly. More recently, robotic inspection assisted by autonomous data analytics based on Computer Vision (CV) and Artificial Intelligence (AI) has been viewed as a suitable alternative to manual inspection because of its efficiency and accuracy. To aid research in this direction, this study performs a comparative assessment of different architectures, loss functions, and ensembling strategies for the autonomous segmentation of bridge components and damage. The experiments lead to several interesting findings. The Nested Reg-UNet architecture, built by combining a Nested UNet style dense configuration with a pretrained RegNet encoder, is found to outperform five other state-of-the-art architectures in both the damage and component segmentation tasks. In terms of the mean Intersection over Union (mIoU) metric, the Nested Reg-UNet architecture provides an improvement of 2.86% on the damage segmentation task and 1.66% on the component segmentation task compared to the state-of-the-art UNet architecture. Furthermore, it is demonstrated that incorporating the Lovász-Softmax loss function to counter class imbalance boosts performance by 3.44% on the component segmentation task over the most commonly used alternative, weighted Cross Entropy (wCE). Finally, weighted softmax ensembling is found to be quite effective when used in conjunction with the Nested Reg-UNet architecture, providing an mIoU improvement of 0.74% on the component segmentation task and 1.14% on the damage segmentation task over a single-architecture baseline. Overall, the best mIoU of 92.50% for the component segmentation task and 84.19% for the damage segmentation task validates the feasibility of these techniques for autonomous bridge component and damage segmentation using RGB images.
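The abstract describes two procedural ingredients of the study: weighted softmax ensembling across segmentation architectures and evaluation with the mIoU metric. As a reading aid only, the following is a minimal PyTorch-style sketch of both; it is not the authors' implementation, and the function names, the model and weight arguments, and the ignore_index value are illustrative assumptions.

    # Minimal sketch (not the authors' code): weighted softmax ensembling of
    # several trained segmentation networks, followed by per-class IoU / mIoU.
    # Assumes PyTorch; all names below are illustrative.
    import torch
    import torch.nn.functional as F

    def ensemble_predict(models, weights, images):
        """Blend per-pixel class probabilities from several models.

        models  : list of segmentation networks mapping (N, 3, H, W) images
                  to (N, C, H, W) logits (e.g., Nested Reg-UNet, UNet, ...)
        weights : one non-negative float per model (e.g., proportional to
                  validation mIoU); normalized here so they sum to 1
        images  : input batch of shape (N, 3, H, W)
        """
        w = torch.tensor(weights, dtype=torch.float32)
        w = w / w.sum()
        blended = None
        with torch.no_grad():
            for model, wi in zip(models, w):
                p = F.softmax(model(images), dim=1)           # (N, C, H, W) probabilities
                blended = wi * p if blended is None else blended + wi * p
        return blended.argmax(dim=1)                          # (N, H, W) predicted labels

    def mean_iou(pred, target, num_classes, ignore_index=255):
        """Mean Intersection over Union over classes present in pred or target."""
        valid = target != ignore_index
        ious = []
        for c in range(num_classes):
            pred_c = (pred == c) & valid
            tgt_c = (target == c) & valid
            union = (pred_c | tgt_c).sum().item()
            if union == 0:
                continue                                      # class absent entirely
            ious.append((pred_c & tgt_c).sum().item() / union)
        return sum(ious) / len(ious)

Weighting the per-pixel softmax outputs before the argmax, rather than voting on hard labels, is what allows a stronger model such as Nested Reg-UNet to dominate the ensemble where it is confident while still benefiting from the other architectures elsewhere.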

References

  1. Berman, M., Triki, A.R. and Blaschko, M.B. (2018), "The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4413-4421.
  2. Braun, T., Spiliopoulos, S., Veltman, C., Hergesell, V., Passow, A., Tenderich, G., Borggrefe, M. and Koerner, M.M. (2020), "Detection of myocardial ischemia due to clinically asymptomatic coronary artery stenosis at rest using supervised artificial intelligence-enabled vectorcardiography-a five-fold cross validation of accuracy", J. Electrocardiol., 59, 100-105. https://doi.org/10.1016/j.jelectrocard.2019.12.018
  3. Chen, F.-C. and Jahanshahi, M.R. (2017), "NB-CNN: Deep learning-based crack detection using convolutional neural network and Naive Bayes data fusion", IEEE Transact. Industr. Electron., 65(5), 4392-4400. https://doi.org/10.1109/TIE.2017.2764844
  4. Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F. and Adam, H. (2018), "Encoder-decoder with atrous separable convolution for semantic image segmentation", Proceedings of the European Conference on Computer Vision (ECCV), pp. 801-818.
  5. Czerniawski, T. and Leite, F. (2020), "Automated segmentation of RGB-D images into a comprehensive set of building components using deep learning", Adv. Eng. Inform., 45, 101131. https://doi.org/10.1016/j.aei.2020.101131
  6. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K. and Fei-Fei, L. (2009), "ImageNet: A large-scale hierarchical image database", Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255.
  7. Ghosh Mondal, T., Jahanshahi, M.R., Wu, R.-T. and Wu, Z.Y. (2020), "Deep learning-based multi-class damage detection for autonomous post-disaster reconnaissance", Struct. Control Health Monitor., 27(4), e2507. https://doi.org/10.1002/stc.2507
  8. Gorka, J.G. and Armstrong, D.E. (2021), "Application of machine learning to estimate fireball characteristics and their uncertainty from infrared spectral data", Proceedings of Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imaging XXVII, Vol. 11727, pp. 155-166.
  9. Goyal, P., Dollar, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y. and He, K. (2017), "Accurate, large minibatch SGD: Training ImageNet in 1 hour", CoRR, abs/1706.02677. http://arxiv.org/abs/1706.02677
  10. He, K., Gkioxari, G., Dollar, P. and Girshick, R. (2017), "Mask R-CNN", Proceedings of the IEEE International Conference on Computer Vision, pp. 2961-2969.
  11. He, F., Liu, T. and Tao, D. (2019), "Control batch size and learning rate to generalize well: Theoretical and empirical evidence", In: H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, & R. Garnett (Eds.), Advances in Neural Information Processing Systems. Curran Associates, Inc. https://proceedings.neurips.cc/paper/2019/file/dc6a70712a252123c40d2adba6a11d84-Paper.pdf
  12. Hoskere, V., Narazaki, Y., Hoang, T.A. and Spencer Jr, B. (2020), "MaDnet: Multi-task semantic segmentation of multiple types of structural materials and damage in images of civil infrastructure", J. Civil Struct. Health Monitor., 10(5), 757-773. https://doi.org/10.1007/s13349-020-00409-0
  13. Hoskere, V., Amer, F., Friedel, D., Yang, W., Tang, Y., Narazaki, Y., Smith, M.D., Golparvar-Fard, M. and Spencer Jr, B.F. (2021), "InstaDam: Open-source platform for rapid semantic segmentation of structural damage", Appl. Sci., 11(2), 520. https://doi.org/10.3390/app11020520
  14. Hoskere, V., Narazaki, Y. and Spencer Jr, B.F. (2022), "Physics-based graphics models in 3D synthetic environments as autonomous vision-based inspection testbeds", Sensors, 22(2), 532. https://doi.org/10.3390/s22020532
  15. Hou, S., Dong, B., Wang, H. and Wu, G. (2020), "Inspection of surface defects on stay cables using a robot and transfer learning", Automat. Constr., 119, 103382. https://doi.org/10.1016/j.autcon.2020.103382
  16. Kingma, D.P. and Ba, J. (2014), "Adam: A method for stochastic optimization", arXiv preprint arXiv:1412.6980.
  17. Li, X., Xia, Y., Long, X., Li, Z. and Li, S. (2021), "Exploring text-transformers in AAAI 2021 shared task: Covid-19 fake news detection in English", In: International Workshop on Combating Online Hostile Posts in Regional Languages during Emergency Situation, pp. 106-115.
  18. Lin, G., Milan, A., Shen, C. and Reid, I. (2017), "RefineNet: Multi-path refinement networks for high-resolution semantic segmentation", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1925-1934.
  19. Mondal, T.G. and Jahanshahi, M.R. (2020), "Autonomous vision-based damage chronology for spatiotemporal condition assessment of civil infrastructure using unmanned aerial vehicle", Smart Struct. Syst., Int. J., 25(6), 733-749. https://doi.org/10.12989/sss.2020.25.6.733
  20. Narazaki, Y., Hoskere, V., Yoshida, K., Spencer, B.F. and Fujino, Y. (2021), "Synthetic environments for vision-based structural condition assessment of Japanese high-speed railway viaducts", Mech. Syst. Signal Process., 160, 107850. https://doi.org/10.1016/j.ymssp.2021.107850
  21. Pan, Y. and Zhang, L. (2022), "Dual attention deep learning network for automatic steel surface defect segmentation", Comput.-Aided Civil Infrastr. Eng., 37(11), 1468-1487. https://doi.org/10.1111/mice.12792
  22. Pozzer, S., Rezazadeh Azar, E., Dalla Rosa, F. and Chamberlain Pravia, Z.M. (2021), "Semantic segmentation of defects in infrared thermographic images of highly damaged concrete structures", J. Perform. Constr. Facil., 35(1), 04020131. https://doi.org/10.1061/(ASCE)CF.1943-5509.0001541
  23. Radosavovic, I., Kosaraju, R.P., Girshick, R., He, K. and Dollar, P. (2020), "Designing network design spaces", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10428-10436.
  24. Ronneberger, O., Fischer, P. and Brox, T. (2015), "U-Net: Convolutional networks for biomedical image segmentation", In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241. https://doi.org/10.1007/978-3-319-24574-4_28
  25. Rubio, J.J., Kashiwa, T., Laiteerapong, T., Deng, W., Nagai, K., Escalera, S., Nakayama, K., Matsuo, Y. and Prendinger, H. (2019), "Multi-class structural damage segmentation using fully convolutional networks", Comput. Indust., 112, 103121. https://doi.org/10.1016/j.compind.2019.08.002
  26. Satria, A., Sitompul, O.S. and Mawengkang, H. (2021), "5-fold cross validation on supporting k-nearest neighbour accuration of making consimilar symptoms disease classification", Proceedings of 2021 International Conference on Computer Science and Engineering (IC2SE), Padang, Indonesia, November, pp. 1-5.
  27. Smith, L.N. (2017), "Cyclical learning rates for training neural networks", Proceedings of 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 464-472.
  28. Smith, S.L., Kindermans, P. and Le, Q.V. (2018), "Don't decay the learning rate, increase the batch size", In: International Conference on Learning Representations (ICLR), abs/1711.00489. http://arxiv.org/abs/1711.00489
  29. Spencer Jr, B.F., Hoskere, V. and Narazaki, Y. (2019), "Advances in computer vision-based civil infrastructure inspection and monitoring", Engineering, 5(2), 199-222. https://doi.org/10.1016/j.eng.2018.11.030
  30. Tang, W., Wu, R.-T. and Jahanshahi, M.R. (2022), "Crack segmentation in high-resolution images using cascaded deep convolutional neural networks and Bayesian data fusion", Smart Struct. Syst., Int. J., 29(1), 221-235. https://doi.org/10.12989/sss.2022.29.1.221
  31. Wang, N., Zhao, X., Zou, Z., Zhao, P. and Qi, F. (2020), "Autonomous damage segmentation and measurement of glazed tiles in historic buildings via deep learning", Comput.-Aided Civil Infrastr. Eng., 35(3), 277-291. https://doi.org/10.1111/mice.12488
  32. Xia, T., Yang, J. and Chen, L. (2022), "Automated semantic segmentation of bridge point cloud based on local descriptor and machine learning", Automat. Constr., 133, 103992. https://doi.org/10.1016/j.autcon.2021.103992
  33. Yasuno, T., Michihiro, N. and Kazuhiro, N. (2020), "Per-pixel classification rebar exposures in bridge eye inspection", arXiv preprint arXiv:2004.12805.
  34. Zhou, S. and Song, W. (2021), "Crack segmentation through deep convolutional neural networks and heterogeneous image fusion", Automat. Constr., 125, 103605. https://doi.org/10.1016/j.autcon.2021.103605
  35. Zhou, Z., Rahman Siddiquee, M.M., Tajbakhsh, N. and Liang, J. (2018), "UNet++: A nested U-Net architecture for medical image segmentation", In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 3-11. https://doi.org/10.1007/978-3-030-00889-5_1