http://dx.doi.org/10.12989/sss.2022.30.5.501

Deep learning approach to generate 3D civil infrastructure models using drone images  

Kwon, Ji-Hye (School of Civil and Environmental Engineering, Yonsei University)
Khudoyarov, Shekhroz (SISTech Co., LTD)
Kim, Namgyu (Research Strategic Planning Department, Korea Institute of Civil Engineering and Building Technology)
Heo, Jun-Haeng (School of Civil and Environmental Engineering, Yonsei University)
Publication Information
Smart Structures and Systems / v.30, no.5, 2022, pp. 501-511
Abstract
Three-dimensional (3D) models have become crucial for civil infrastructure analysis and serve various purposes, such as damage detection, risk estimation, resolution of potential safety issues, alarm detection, and structural health monitoring. 3D point cloud data are used not only to build visual models but also to analyze the condition of structures and to monitor them using semantic information. This study proposes a deep-learning-based approach that automates the generation of high-quality 3D point cloud data and removes noise. Large-format aerial images of civil infrastructure, such as cut slopes and dams, captured by drones were used to develop a workflow for automatically generating a 3D point cloud model. Point cloud generation was automated through image cropping, downscaling/upscaling, semantic segmentation, generation of segmentation masks, and region extraction algorithms. Compared with generating the point cloud model directly from raw images, the proposed method effectively improved model quality, removed noise, and reduced processing time. The results showed that the size of the 3D point cloud model created with the proposed method was significantly reduced: the number of points decreased by 20-50%, and distant points were recognized as noise and removed. The method can be applied to the automatic generation of high-quality 3D point cloud models of civil infrastructure from aerial imagery.
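The segmentation-masking step described in the abstract can be sketched as follows. This is an illustrative example, not the authors' implementation: a publicly available DeepLabv3 model from torchvision stands in for the paper's segmentation network, and the file name drone_frame.jpg and the target class index are assumptions for the example.

```python
# Illustrative sketch: mask an aerial image with a semantic-segmentation
# output so that only the structure region is kept before photogrammetric
# reconstruction. Model choice, file name, and class index are assumptions.
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# Pretrained segmentation model (stand-in for the paper's trained network)
model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("drone_frame.jpg").convert("RGB")  # hypothetical input
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))["out"]  # (1, C, H, W)
pred = logits.argmax(dim=1).squeeze(0).numpy()             # per-pixel class ids

TARGET_CLASS = 0  # assumed id of the structure class in the trained model
mask = (pred == TARGET_CLASS).astype(np.uint8)

# Zero out background pixels so they contribute no features to reconstruction
masked = np.array(image) * mask[..., None]
Image.fromarray(masked).save("drone_frame_masked.png")
```

In a workflow of this kind, the masked images (or the masks themselves, where the software accepts per-image masks) would then be passed to a photogrammetry tool such as Metashape, Pix4D, or COLMAP, so that background regions do not generate noise points in the reconstructed cloud.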
Keywords
automatic model generation; deep learning algorithm; noise reduction; point cloud; semantic segmentation;