http://dx.doi.org/10.4313/JKEM.2020.33.3.239

Image Analysis by CNN Technique for Maintenance of Porcelain Insulator  

Choi, In-Hyuk (Korea Electric Power Corporation (KEPCO) Research Institute)
Shin, Koo-Yong (Korea Electric Power Corporation (KEPCO) Research Institute)
Koo, Ja-Bin (Korea Electric Power Corporation (KEPCO) Research Institute)
Son, Ju-Am (Korea Electric Power Corporation (KEPCO) Research Institute)
Lim, Dae-Yeon (Department of Safety Engineering, Incheon National University)
Oh, Tae-Keun (Department of Safety Engineering, Incheon National University)
Yoon, Young-Geun (Department of Safety Engineering, Incheon National University)
Publication Information
Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.33, no.3, 2020, pp. 239-244
Abstract
This study examines the feasibility of image deep learning with convolutional neural networks (CNNs) for the maintenance of porcelain insulators. Data augmentation is performed to prevent over-fitting, and classification performance is evaluated by training on the age, material, region, and pollution level of the insulators, using image data from which the background and labelling have been removed. The results show that the age was difficult to predict, whereas the material was classified with 76% accuracy, the pollution level with 60% accuracy, and the region with more than 90% accuracy. These results reveal both the potential and the limitations of CNN classification for the four groups currently considered. Nevertheless, the method was able to detect discoloration of the porcelain insulator resulting from physical, chemical, and climatic factors. On this basis, it should be possible to estimate corrosion of the cap and discoloration of the porcelain caused by environmental deterioration, abnormal voltage, and lightning.
Keywords
Convolution neural network; Image deep learning; Porcelain insulator; Maintenance; Augmentation
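The paper itself does not include code; the following is a minimal illustrative sketch of the kind of pipeline the abstract describes: augmenting background-removed insulator images and training a small CNN to classify a single attribute. It assumes TensorFlow/Keras, a hypothetical per-class folder layout ("insulator_images/material/train"), 128x128 inputs, and four material classes; none of these reflect the authors' actual data or architecture.

```python
# Minimal sketch (not the authors' code): data augmentation plus a small CNN
# classifier, illustrating the pipeline outlined in the abstract.
# Assumptions (hypothetical): images sorted into per-class folders under
# "insulator_images/material/train", 4 material classes, 128x128 RGB inputs.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (128, 128)
NUM_CLASSES = 4  # assumed number of material classes

# Load images from class-labelled folders (path is a placeholder).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "insulator_images/material/train",
    image_size=IMG_SIZE,
    batch_size=32,
)

# Data augmentation to reduce over-fitting, as described in the abstract.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# A small CNN classifier; the layer sizes here are illustrative only.
model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    augment,
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=20)
```

The same structure would apply to the other attributes (age, region, pollution level) by changing the label folders and the number of output classes.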