
Automatic Classification of Bridge Component based on Deep Learning

  • Lee, Jae Hyuk (School of Civil Engineering, Chungbuk National University) ;
  • Park, Jeong Jun (Railroad Structure Research Team, Advanced Railroad Civil Engineering Division, Korea Railroad Research Institute) ;
  • Yoon, Hyung Chul (School of Civil Engineering, Chungbuk National University)
  • Received : 2019.11.20
  • Accepted : 2020.03.02
  • Published : 2020.04.01

Abstract


Recently, BIM (Building Information Modeling) has been widely adopted in the construction industry. However, most structures built in the past do not have BIM. For such structures, applying SfM (Structure from Motion) techniques to 2D images obtained from a camera makes it possible to generate 3D point cloud data and establish BIM. However, because the generated point cloud data contain no semantic information, the structural elements they represent must be classified manually. In this study, therefore, deep learning was applied to automate the classification of structural components. The deep learning network was built on Inception-ResNet-v2, a CNN (Convolutional Neural Network) architecture, and the components of bridge structures were learned through transfer learning. When the developed system was verified by classifying components in the collected data, the bridge components were classified with an accuracy of 96.13 %.
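The transfer-learning setup described in the abstract can be sketched conceptually. The snippet below is a minimal illustration, not the authors' code: a frozen "backbone" (here a fixed random projection) stands in for the pretrained Inception-ResNet-v2 feature extractor, and only a small softmax classification head is trained on labelled data, which is the essence of transfer learning. The class names, dimensions, and synthetic data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 4    # e.g. deck, pier, girder, abutment (illustrative labels)
FEATURE_DIM = 64   # embedding size of the frozen backbone
NUM_SAMPLES = 400

# Frozen "backbone": a fixed mapping from raw inputs to feature embeddings,
# standing in for a pretrained CNN whose weights are not updated.
backbone_W = rng.normal(size=(32, FEATURE_DIM))

def backbone(x):
    """Frozen feature extractor (weights never trained here)."""
    return np.tanh(x @ backbone_W)

# Synthetic labelled data: each component class clusters around its own centre.
centres = rng.normal(size=(NUM_CLASSES, 32))
labels = rng.integers(0, NUM_CLASSES, NUM_SAMPLES)
inputs = centres[labels] + 0.3 * rng.normal(size=(NUM_SAMPLES, 32))
features = backbone(inputs)            # shape (NUM_SAMPLES, FEATURE_DIM)

# Trainable softmax head: the only parameters updated during transfer learning.
head_W = np.zeros((FEATURE_DIM, NUM_CLASSES))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

one_hot = np.eye(NUM_CLASSES)[labels]
for _ in range(200):                       # plain gradient descent on the head
    probs = softmax(features @ head_W)
    grad = features.T @ (probs - one_hot) / NUM_SAMPLES
    head_W -= 0.5 * grad

accuracy = (softmax(features @ head_W).argmax(axis=1) == labels).mean()
print(f"training accuracy: {accuracy:.4f}")
```

In the actual study the backbone would be Inception-ResNet-v2 with ImageNet-pretrained weights, and the head would be retrained on bridge-component images; the sketch only conveys the freeze-backbone/train-head split.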

Keywords

References

  1. Bay, H., Tuytelaars, T. and Van Gool, L. (2006). "SURF: Speeded up robust features." In European Conference on Computer Vision, Springer, Berlin, Heidelberg, pp. 404-417.
  2. Dalal, N. and Triggs, B. (2005). "Histograms of oriented gradients for human detection." 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA.
  3. Hartley, R. and Zisserman, A. (2003). Multiple view geometry in computer vision, Cambridge university press, Cambridge, UK.
  4. He, K., Zhang, X., Ren, S. and Sun, J. (2016). "Deep residual learning for image recognition." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Las Vegas, NV, USA, pp. 770-778.
  5. Krizhevsky, A., Sutskever, I. and Hinton, G. E. (2012). "ImageNet classification with deep convolutional neural networks." In Advances in Neural Information Processing Systems, Lake Tahoe, Nevada, USA, pp. 1097-1105.
  6. Lee, I. (2015a). Bridge maintenance strategies for service life 100years, Korea Expressway Corporation, 2015-36-534, 9607 (in Korean).
  7. Lee, S. (2019). "Lessons from the collapse of the morandi bridge in Italy." Magazine of the Korea Institute for Structural Maintenance and Inspection, Vol. 23, No. 2, pp. 51-57 (in Korean).
  8. Lee, T. (2015b). Maintenance status and prospect of deteriorated bridges, Ssangyong Construction Technology Research Institute, Vol. 71, pp. 48-55 (in Korean).
  9. Lee, Y. I., Kim, B. H. and Cho, S. J. (2018). "Image-based spalling detection of concrete structures using deep learning." Journal of the Korea Concrete Institute, Vol. 30, No. 1, pp. 91-99. https://doi.org/10.4334/JKCI.2018.30.1.091
  10. Lowe, D. G. (2004). "Distinctive image features from scale-invariant keypoints." International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110. https://doi.org/10.1023/B:VISI.0000029664.99615.94
  11. Moon, H., Won, J. and Shin, J. (2018). BIM roadmap and activation strategies for public SOC projects, Korea Institute of Construction Technology, 2018-029 (in Korean).
  12. Simonyan, K. and Zisserman, A. (2014). "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556, International Conference on Learning Representations, ICLR, San Diego, CA.
  13. Szegedy, C., Ioffe, S., Vanhoucke, V. and Alemi, A. A. (2017). "Inception-v4, inception-resnet and the impact of residual connections on learning." In Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, California USA.
  14. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V. and Rabinovich, A. (2015). "Going deeper with convolutions." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Boston, MA, USA, pp. 1-9.
  15. Viola, P. and Jones, M. (2001). "Rapid object detection using a boosted cascade of simple features." In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, Vol. 1, pp. I-511-I-518.