

A Study on Optimal Convolutional Neural Networks Backbone for Reinforced Concrete Damage Feature Extraction

  • Received: 2023.01.11
  • Accepted: 2023.05.24
  • Published: 2023.08.01

Abstract


Research on the integration of unmanned aerial vehicles and deep learning for reinforced concrete damage detection is actively underway. Convolutional neural networks serve as the backbone of image classification, detection, and segmentation models and strongly influence model performance. MobileNet, a pre-trained convolutional neural network, is an efficient backbone for unmanned-aerial-vehicle-based real-time damage detection because it achieves sufficient accuracy at low computational cost. A comparison of vanilla convolutional neural networks and MobileNet under various conditions showed that MobileNet achieved 6.0~9.0% higher validation accuracy while requiring only 15.9~22.9% of the computational cost of the vanilla networks. MobileNetV2, MobileNetV3Large, and MobileNetV3Small showed nearly identical maximum validation accuracy, and the optimal conditions for extracting reinforced concrete damage image features with MobileNet were found to be the RMSprop optimizer, no dropout, and average pooling. The maximum validation accuracy of 75.49% for seven-class damage detection based on MobileNetV2 derived in this study can be improved through image accumulation and continued training.
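The efficiency gap reported above stems from MobileNet's use of depthwise separable convolutions (Howard et al., 2017), which factor a standard convolution into a depthwise step and a pointwise (1x1) step. A minimal sketch of the per-layer multiply-add cost model follows; the kernel and channel sizes are illustrative assumptions, not values from this study:

```python
def conv_cost(dk, m, n, df):
    """Multiply-adds for a standard Dk x Dk convolution over an
    Df x Df feature map with M input and N output channels:
    Dk^2 * M * N * Df^2."""
    return dk * dk * m * n * df * df

def sep_conv_cost(dk, m, n, df):
    """Depthwise separable cost: depthwise (Dk^2 * M * Df^2)
    plus pointwise (M * N * Df^2)."""
    return dk * dk * m * df * df + m * n * df * df

# Illustrative layer: 3x3 kernel, 64 -> 128 channels, 56x56 feature map.
std = conv_cost(3, 64, 128, 56)
sep = sep_conv_cost(3, 64, 128, 56)
ratio = sep / std  # algebraically equals 1/N + 1/Dk^2
print(f"separable/standard cost ratio: {ratio:.3f}")
```

The ratio reduces to 1/N + 1/Dk^2, so a 3x3 separable layer with 128 output channels needs roughly 12% of the standard layer's multiply-adds; the 15.9~22.9% figure in the abstract is a whole-network measurement, which also includes layers that do not benefit from this factorization.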

Keywords
