http://dx.doi.org/10.12989/sss.2022.29.1.237

Synthetic data augmentation for pixel-wise steel fatigue crack identification using fully convolutional networks  

Zhai, Guanghao (Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign)
Narazaki, Yasutaka (Zhejiang University - University of Illinois at Urbana-Champaign Institute, Zhejiang University)
Wang, Shuo (Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign)
Shajihan, Shaik Althaf V. (Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign)
Spencer, Billie F. Jr. (Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign)
Publication Information
Smart Structures and Systems, v.29, no.1, 2022, pp. 237-250
Abstract
Structural health monitoring (SHM) plays an important role in ensuring the safety and functionality of critical civil infrastructure. In recent years, numerous researchers have conducted studies to develop computer vision and machine learning techniques for SHM purposes, offering the potential to reduce the laborious nature and improve the effectiveness of field inspections. However, high-quality vision data of various types of structural damage are relatively difficult to obtain because damaged structures occur rarely; the shortage is particularly acute for fatigue cracks in steel bridge girders. As a result, the lack of training data is one of the main issues hindering wider application of these powerful techniques for SHM. To address this problem, the use of synthetic data is proposed in this article to augment real-world datasets used for training neural networks that can identify fatigue cracks in steel structures. First, random textures representing the surface of steel structures with fatigue cracks are created and mapped onto a 3D graphics model. Subsequently, this model is used to generate synthetic images for various lighting conditions and camera angles. A fully convolutional network is then trained for two cases: (1) using only real-world data, and (2) using both synthetic and real-world data. By employing synthetic data augmentation in the training process, the crack identification performance of the neural network on the test dataset improves from 35% to 40% and from 49% to 62% for intersection over union (IoU) and precision, respectively, demonstrating the efficacy of the proposed approach.
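The rendering step summarized above (a textured 3D graphics model rendered under varied lighting and camera poses) can be scripted through Blender's Python API. The following is a minimal sketch under stated assumptions, not the authors' implementation: the lamp name "Sun", the pose ranges, the number of views, and the output path are illustrative, and the camera is assumed to track the specimen at the origin via a constraint set up in the .blend file.

import math
import random
import bpy

scene = bpy.context.scene
camera = scene.camera                      # active camera of the scene
sun = bpy.data.objects["Sun"]              # assumed name of a sun lamp in the scene

for i in range(20):                        # number of synthetic views (illustrative)
    # Random camera position on a hemisphere around the textured specimen at the origin.
    r = random.uniform(1.5, 3.0)
    theta = random.uniform(0.0, 2.0 * math.pi)
    phi = random.uniform(math.radians(20), math.radians(70))
    camera.location = (r * math.sin(phi) * math.cos(theta),
                       r * math.sin(phi) * math.sin(theta),
                       r * math.cos(phi))

    # Random lighting direction for the sun lamp.
    sun.rotation_euler = (random.uniform(0.0, math.radians(80)),
                          0.0,
                          random.uniform(0.0, 2.0 * math.pi))

    scene.render.filepath = "//synthetic/img_{:03d}.png".format(i)
    bpy.ops.render.render(write_still=True)

The evaluation metrics quoted in the abstract can likewise be written out explicitly. Below is a short sketch of pixel-wise IoU and precision for a predicted crack mask against a ground-truth mask; the function name, array arguments, and 0.5 threshold are assumptions for illustration.

import numpy as np

def crack_iou_and_precision(pred_prob, gt_mask, threshold=0.5):
    """Pixel-wise IoU and precision for binary (crack vs. background) segmentation.

    pred_prob : HxW array of predicted crack probabilities (illustrative input).
    gt_mask   : HxW 0/1 array of ground-truth crack pixels.
    threshold : probability cutoff for labeling a pixel as crack (assumed 0.5).
    """
    pred = pred_prob >= threshold
    gt = gt_mask.astype(bool)

    tp = np.logical_and(pred, gt).sum()    # crack pixels correctly identified
    fp = np.logical_and(pred, ~gt).sum()   # background pixels flagged as crack
    fn = np.logical_and(~pred, gt).sum()   # crack pixels missed

    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return iou, precision

Aggregated over a test set, metrics of this form correspond to the IoU (35% to 40%) and precision (49% to 62%) improvements reported when synthetic images are added to the training data.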
Keywords
fully convolutional networks; semantic segmentation; steel fatigue crack; synthetic data;