Analyze weeds classification with visual explanation based on Convolutional Neural Networks

  • Vo, Hoang-Trong (Department of Electronics and Computer Engineering, Chonnam National University) ;
  • Yu, Gwang-Hyun (Department of Electronics and Computer Engineering, Chonnam National University) ;
  • Nguyen, Huy-Toan (Department of Electronics and Computer Engineering, Chonnam National University) ;
  • Lee, Ju-Hwan (Department of Electronics and Computer Engineering, Chonnam National University) ;
  • Dang, Thanh-Vu (Department of Electronics and Computer Engineering, Chonnam National University) ;
  • Kim, Jin-Young (Department of Electronics and Computer Engineering, Chonnam National University)
  • Received : 2019.06.07
  • Accepted : 2019.09.19
  • Published : 2019.09.30

Abstract

To understand how a Convolutional Neural Network (CNN) model captures the features of a pattern when deciding which class it belongs to, in this paper we use Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize and analyze how a CNN model behaves on the CNU weeds dataset. We apply this technique to a ResNet model and examine which features the model captures to decide on a specific class, what makes the model classify an image correctly or incorrectly, and how mislabeled images can negatively affect a CNN model during training. In the experiments, Grad-CAM highlights the important regions of weeds according to the patterns ResNet has learned, such as the lobe and limb of 미국가막사리 (Bidens frondosa), or the entire leaf surface of 단풍잎돼지풀 (Ambrosia trifida). In addition, Grad-CAM shows that a CNN model can localize the object even though it was trained only for the classification problem.
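The core of Grad-CAM is simple: global-average-pool the gradients of the class score with respect to a convolutional layer's feature maps to get per-channel weights, take the weighted sum of those feature maps, and apply a ReLU so only positive evidence for the class remains. The sketch below illustrates this computation in plain NumPy on synthetic activations and gradients; in practice these arrays would come from a forward pass and backward hook on a ResNet layer (the function name and toy shapes here are illustrative, not from the paper).

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from one conv layer's activations and gradients.

    feature_maps: (K, H, W) activations A^k of the chosen layer
    gradients:    (K, H, W) d(class score)/dA^k obtained by backprop
    """
    # alpha_k: global-average-pool the gradients over the spatial dims
    alphas = gradients.mean(axis=(1, 2))              # shape (K,)
    # weighted combination of feature maps, contracting the channel axis
    cam = np.tensordot(alphas, feature_maps, axes=1)  # shape (H, W)
    # ReLU: keep only features with a positive influence on the class
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return cam

# Toy example with 3 hypothetical 4x4 feature maps
rng = np.random.default_rng(0)
A = rng.random((3, 4, 4))    # stand-in for layer activations
dA = rng.random((3, 4, 4))   # stand-in for backpropagated gradients
heatmap = grad_cam(A, dA)
print(heatmap.shape)
```

To visualize the result on an input image, the low-resolution heatmap is upsampled to the image size and overlaid as a color map; thresholding the same heatmap is what allows the coarse object localization mentioned above, even though the network never saw bounding-box labels.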

Keywords

References

  1. S. Park and J.W. Kim, "Red Tide Algae Image Classification using Deep Learning based Open Source," Smart Media Journal, vol. 7, no. 2, pp. 34-39, 2018 https://doi.org/10.30693/SMJ.2018.7.2.34
  2. S.J. Kim, J.S. Lee, and H.S. Kim, "Deep learning-based Automatic Weed Detection on Onion Field," Smart Media Journal, vol. 7, no. 3, pp. 16-21, 2018 https://doi.org/10.30693/SMJ.2018.7.3.16
  3. R.R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, "Grad-cam: Visual explanations from deep networks via gradient-based localization," Proc. of the IEEE International Conference on Computer Vision, pp. 618-626, 2017
  4. M.D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," European conference on computer vision, pp. 818-833, 2014
  5. B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba, "Learning deep features for discriminative localization," Proc. of the IEEE conference on computer vision and pattern recognition, pp. 2921-2929, 2016
  6. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014
  7. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," Proc. of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016
  8. K. He, X. Zhang, S. Ren, and J. Sun, "Identity mappings in deep residual networks," European conference on computer vision, Springer, Cham, pp. 630-645, Oct. 2016
  9. ImageNet. http://www.image-net.org/search?q=weed (accessed Sept., 30, 2019).