Development of scenario-based real-time defect detection technology for water supply and sewer pipelines

Development of real-time defect detection technology for water distribution and sewerage networks

  • 박동채 (경상국립대학교 건설시스템공학과) ;
  • 최영환 (경상국립대학교 건설시스템공학과)
  • Park, Dong Chae (Department of Civil and Infrastructure Engineering, Gyeongsang National University) ;
  • Choi, Young Hwan (Department of Civil and Infrastructure Engineering, Gyeongsang National University)
  • Received : 2022.07.21
  • Accepted : 2022.09.22
  • Published : 2022.12.31

Abstract

Water supply and sewerage systems are infrastructure that provides people with safe and clean water, and because their pipelines are buried underground, detecting defects in these systems is very difficult. For this reason, pipeline diagnosis has been limited to post-hoc defect detection, in which cameras or drones record the pipe interior and the footage is reviewed afterwards; improving inspectors' work efficiency and the speed of diagnosis therefore requires real-time detection technology for pipelines. Facility-diagnosis technologies using advanced equipment and artificial intelligence have recently been developed, but AI-based defect detection requires diverse training data because the type, shape, and number of defect samples affect detection performance. In this study, various defect scenarios are therefore fabricated with 3D printing and used as training data, together with field-collected defect data, to improve detection performance for water supply and sewer pipelines. The collected images are pre-processed, including classification by risk level and object labeling, and real-time defect detection is then performed. By providing real-time feedback during defect detection in water supply and sewerage systems, the proposed technique minimizes the possibility of missed diagnoses by inspectors and improves the throughput of existing pipeline diagnosis work.
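The workflow described above amounts to a per-frame detection loop: video captured inside the pipe is passed to a detector trained on the labeled defect images, and any detections are drawn on the live feed so the inspector receives immediate feedback. The sketch below is a minimal illustration of such a loop, assuming a YOLO-family model served through the ultralytics Python package; the weight file "defects.pt", the defect class names, and the confidence threshold are hypothetical placeholders, not the authors' implementation.

# Illustrative sketch only: per-frame defect detection on pipeline inspection
# video. Assumes a YOLO-family model (ultralytics package) fine-tuned on the
# labeled defect images; "defects.pt" and the class names are hypothetical.
import cv2
from ultralytics import YOLO

CLASS_NAMES = {0: "crack", 1: "joint_displacement", 2: "deposit"}  # assumed labels
CONF_THRESHOLD = 0.5

model = YOLO("defects.pt")                 # hypothetical fine-tuned weights
cap = cv2.VideoCapture("inspection.mp4")   # or a live camera stream index

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Run the detector on the current frame.
    result = model(frame, verbose=False)[0]

    # Overlay each detection above the confidence threshold so defects are
    # visible to the operator while the camera is still moving through the pipe.
    for box in result.boxes:
        conf = float(box.conf[0])
        if conf < CONF_THRESHOLD:
            continue
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        label = CLASS_NAMES.get(int(box.cls[0]), "defect")
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
        cv2.putText(frame, f"{label} {conf:.2f}", (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)

    cv2.imshow("pipeline inspection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
        break

cap.release()
cv2.destroyAllWindows()

A single-stage detector is the natural choice here because, unlike two-stage region-proposal models, it can keep pace with the inspection video frame rate, which is what makes on-site, real-time feedback feasible.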

Keywords

Acknowledgements

This work was supported by a research grant from the National Research Foundation of Korea (NRF-2021R1G1A1003295).

References

  1. Cha, Y.J., Choi, W., and Buyukozturk, O. (2017). "Deep learning-based crack damage detection using convolutional neural networks." Computer-Aided Civil and Infrastructure Engineering, Vol. 32, No. 5, pp. 361-378. https://doi.org/10.1111/mice.12263
  2. Duran, O., Althoefer, K., and Seneviratne, L.D. (2007). "Automated pipe defect detection and categorization using camera/laser-based profiler and artificial neural network." IEEE Transactions on Automation Science and Engineering, Vol. 4, No. 1, pp. 118-126. https://doi.org/10.1109/TASE.2006.873225
  3. Gillins, M.N. (2016). Unmanned aircraft systems for bridge inspection: Testing and developing end-to-end operational workflow. Master's thesis, Oregon State University, Corvallis, OR, U.S.
  4. Girshick, R. (2015). "Fast R-CNN." 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, pp. 1440-1448. https://doi.org/10.1109/ICCV.2015.169
  5. Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014). "Rich feature hierarchies for accurate object detection and semantic segmentation." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, U.S., pp. 580-587.
  6. Hassan, S.I., Dang, L.M., Im, S.H., Min, K.B., Nam, J.Y., and Moon, H.J. (2018). "Damage detection and classification system for sewer inspection using convolutional neural networks based on deep learning." Journal of the Korea Institute of Information and Communication Engineering, Vol. 22, No. 3, pp. 451-457. https://doi.org/10.6109/JKIICE.2018.22.3.451
  7. He, K., Zhang, X., Ren, S., and Sun, J. (2016). "Deep residual learning for image recognition." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, U.S., pp. 770-778.
  8. Kim, H.Y., Choi, K.A., and Lee, I.P. (2018). "Drone image-based facility inspection-focusing on automatic process using reference images." Journal of the Korean Society for Geospatial Information Science, Vol. 26, No. 2, pp. 21-32. https://doi.org/10.7319/kogsis.2018.26.2.021
  9. Krizhevsky, A., Sutskever, I., and Hinton, G.E. (2017). "ImageNet classification with deep convolutional neural networks." Communications of the ACM, Vol. 60, No. 6, pp. 84-90. https://doi.org/10.1145/3065386
  10. K-water (2018). Application of deep-learning techniques to in-line inspection data.
  11. Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., and Berg, A.C. (2016). "SSD: Single shot multibox detector." European Conference on Computer Vision, Springer, Amsterdam, The Netherlands, pp. 21-37.
  12. Magalhaes, S.A., Castro, L., Moreira, G., Dos Santos, F.N., Cunha, M., Dias, J., and Moreira, A.P. (2021). "Evaluating the single-shot multibox detector and YOLO deep learning models for the detection of tomatoes in a greenhouse." Sensors, Vol. 21, No. 10, 3569.
  13. Mashford, J.S., Rahilly, M., and Davis, P. (2008). "An approach using mathematical morphology and support vector machines to detect features in pipe images." 2008 Digital Image Computing: Techniques and Applications, IEEE, Washington DC, U.S., pp. 84-89.
  14. Ministry of Land, Infrastructure and Transport (MOLIT) (2015). A study on institutionalization of social infrastructure maintenance.
  15. Moselhi, O., and Shehab-Eldeen, T. (1999). "Automated detection of defects in underground sewer and water pipes." Automation in Construction, Vol. 8, No. 5, pp. 581-588. https://doi.org/10.1016/S0926-5805(99)00007-2
  16. Nam, W.S., Kim, G.S., and Jung, H.J. (2018). "Trends of inspection technology for concrete structures based on AI (Artificial Intelligence)." Proceedings of the Korea Concrete Institute, Vol. 30, No. 1, pp. 771-772.
  17. Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016). "You only look once: Unified, real-time object detection." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, U.S., pp. 779-788.
  18. Ren, S., He, K., Girshick, R., and Sun, J. (2015). "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in Neural Information Processing Systems 28: Proceedings of the Annual Conference on Neural Information Processing Systems 2015, Quebec, Canada.
  19. Sajjanar, S., Mankani, S.K., Dongrekar, P.R., Kumar, N.S., and Aradhya, H.R. (2016). "Implementation of real time moving object detection and tracking on FPGA for video surveillance applications." 2016 IEEE Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER), Surathkal, India, pp. 289-295.
  20. Sinha, S.K., Fieguth, P.W., and Polak, M.A. (2003). "Computer vision techniques for automatic structural assessment of underground pipes." Computer-Aided Civil and Infrastructure Engineering, Vol. 18, No. 2, pp. 95-112. https://doi.org/10.1111/1467-8667.00302