Image Analysis by CNN Technique for Maintenance of Porcelain Insulator

  • Choi, In-Hyuk (Korea Electric Power Corporation (KEPCO) Research Institute) ;
  • Shin, Koo-Yong (Korea Electric Power Corporation (KEPCO) Research Institute) ;
  • Koo, Ja-Bin (Korea Electric Power Corporation (KEPCO) Research Institute) ;
  • Son, Ju-Am (Korea Electric Power Corporation (KEPCO) Research Institute) ;
  • Lim, Dae-Yeon (Department of Safety Engineering, Incheon National University) ;
  • Oh, Tae-Keun (Department of Safety Engineering, Incheon National University) ;
  • Yoon, Young-Geun (Department of Safety Engineering, Incheon National University)
  • Received : 2020.02.06
  • Accepted : 2020.02.19
  • Published : 2020.05.01

Abstract

This study examines the feasibility of an image deep learning method based on convolutional neural networks (CNNs) for the maintenance of porcelain insulators. Data augmentation is performed to prevent over-fitting, and classification performance is evaluated by training on the age, material, region, and pollution level of the insulators, using image data from which the background and labeling have been removed. The results show that the age was difficult to predict, whereas classification accuracy reached 76% for material, 60% for pollution level, and more than 90% for region. These results reveal both the potential and the limitations of CNN classification for the four attribute groups currently considered. Nevertheless, the method was able to detect discoloration of the porcelain insulator resulting from physical, chemical, and climatic factors. On this basis, it should be possible to estimate corrosion of the cap and discoloration of the porcelain caused by environmental deterioration, abnormal voltage, and lightning.
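The workflow summarized above (augmenting insulator images and training a CNN classifier on a single attribute such as material) can be illustrated with a minimal sketch. The snippet below assumes TensorFlow/Keras; the directory path, image size, number of classes, and layer configuration are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch: augment insulator images and train a small CNN classifier
# for one attribute (e.g., material). Paths, sizes, and architecture are
# illustrative assumptions, not the authors' actual configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
NUM_CLASSES = 2  # e.g., two material classes (assumption)

# Load labeled images from a hypothetical folder layout:
# insulators/material/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "insulators/material", image_size=IMG_SIZE, batch_size=32)

# Data augmentation to mitigate over-fitting on a limited image set
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    augment,
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```

The same skeleton would be retrained separately for each attribute group (age, material, region, pollution level) by pointing it at the corresponding labeled image set.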
