http://dx.doi.org/10.12673/jant.2022.26.2.113

Highlighting Defect Pixels for Tire Band Texture Defect Classification  

Rakhmatov, Shohruh (Research Department, DeltaX)
Ko, Jaepil (Department of Computer Engineering, Kumoh National Institute of Technology)
Abstract
Motivated by the way people highlight important phrases while reading or taking notes, we propose a neural network training method that highlights defective pixel areas in order to classify defect types effectively in images with complex background textures. To verify the proposed method, we apply it to the problem of classifying defect types in tire band fabric images, which are highly difficult to classify. In addition, we propose a backlight highlighting technique tailored to tire band fabric images. Backlight highlighting images can be generated using both Grad-CAM and simple image processing. In our experiments, we demonstrate that the proposed highlighting method outperforms the traditional method in terms of both classification accuracy and training speed, achieving up to a 13.4% accuracy improvement over the conventional method. We also show that the backlight highlighting technique tailored to tire band fabric images is superior to a contour highlighting technique in terms of accuracy.
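The backlight highlighting idea described above can be sketched with simple array operations, taking the Grad-CAM heatmap as a precomputed input. The function name, threshold, and dimming factor below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def backlight_highlight(image, heatmap, threshold=0.5, dim_factor=0.3):
    """Illustrative sketch of backlight highlighting: pixels flagged by a
    high Grad-CAM activation keep their full intensity, while the complex
    background texture is dimmed, so the suspected defect stands out as
    if lit from behind.

    image   : (H, W) grayscale array with values in [0, 1]
    heatmap : (H, W) Grad-CAM map in [0, 1], upsampled to the image size
    """
    mask = (heatmap >= threshold).astype(image.dtype)  # 1 = defect region
    # defect pixels pass through unchanged; background is attenuated
    return image * mask + dim_factor * image * (1.0 - mask)

# toy example: a uniform 4x4 texture with a "defect" in the top-left corner
img = np.full((4, 4), 0.8)
cam = np.zeros((4, 4))
cam[:2, :2] = 1.0
out = backlight_highlight(img, cam)
```

In practice the heatmap would come from running Grad-CAM on a trained classifier and resizing its output to the input resolution; the highlighted images are then fed back as training inputs.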
Keywords
Defect detection; Grad-CAM; Highlighting learning strategy; Texture defect classification; Tire band texture;
Citations & Related Records
Times Cited By KSCI : 3  (Citation Analysis)
1 A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, Vol. 30, 2017.
2 Y. LeCun, K. Kavukcuoglu, and C. Farabet, "Convolutional networks and applications in vision," in Proceedings of IEEE Int'l Symposium on Circuits and Systems, Paris, pp. 253-256, 2010.
3 O. Badmos, A. Kopp, T. Bernthaler, and G. Schneider, "Image-based defect detection in lithium-ion battery electrode using convolutional neural networks," Journal of Intelligent Manufacturing, Vol. 31, No. 4, pp. 885-897, 2020.   DOI
4 S. Kim, Y.-K Noh, and F. Park, "Efficient neural network compression via transfer learning for machine vision inspection," Neurocomputing, Vol. 413, pp. 294-304, 2020.   DOI
5 R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization," in Proceedings of IEEE Int'l Conf. on Computer Vision (ICCV), Venice, pp. 618-626, 2017.
6 M. Lin, Q. Chen, and S. Yan, "Network In Network," arXiv: 1312.4400, 2014.
7 B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, "Learning Deep Features for Discriminative Localization," arXiv: 1512.04150, 2015.
8 D. Worrall, S. Garbin, D. Turmukhambetov, and G. Brostow, "Harmonic Networks: Deep Translation and Rotation Equivariance," arXiv:1612.04642, 2016.
9 A. Dosovitskiy, et al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale," arXiv:2010.11929, 2021.
10 E. Simo-Serra, S. Iizuka, K. Sasaki, and H. Ishikawa, "Learning to Simplify: Fully Convolutional Networks for Rough Sketch Cleanup," ACM Trans. on Graphics, Vol 35. No. 4, pp. 1-11, 2016.
11 P. Isola, J. Zhu, T. Zhou, and A. Efros, "Image-to-Image Translation with Conditional Adversarial Networks," arXiv:1611.07004, 2016.
12 J. Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici, "Beyond Short Snippets: Deep Networks for Video Classification," in Proceedings of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Boston, pp. 4694-4702, 2015.
13 K. Simonyan, and A. Zisserman, "Two-Stream Convolutional Networks for Action Recognition in Videos," Advances in Neural Information Processing Systems, Vol. 27, 2014.
14 Z. Ren, F. Fang, N. Yang and Y. Wu, "State of the Art in Defect detection based on machine vision," International Journal of Precision Engineering and Manufacturing-Green Technology, Vol. 9, pp. 661-691, 2022.   DOI
15 Y. Wu, and X. Zhang, "Automatic fabric defect detection using cascaded mixed feature pyramid with guided localization," Sensors, Vol. 20, No. 3, pp. 871-878, 2020.   DOI
16 E. Simo-Serra, S. Iizuka, K. Sasaki, and H. Ishikawa, "Learning to Simplify: Fully Convolutional Networks for Rough Sketch Cleanup," ACM Trans. on Graphics, Vol. 35, No. 4, pp. 1-11, 2016.
17 P. Ramachandran, N. Parmar, A. Vaswani, I. Bello, A. Levskaya, and J. Shlens, "Stand-Alone Self-Attention in Vision Models," Advances in Neural Information Processing Systems, Vol. 32, 2019.