Pest Control System using Deep Learning Image Classification Method

  • Moon, Backsan (Graduate School of Computer Science, Dankook University) ;
  • Kim, Daewon (Department of Applied Computer Engineering, Dankook University)
  • Received : 2018.09.28
  • Accepted : 2018.12.10
  • Published : 2019.01.31

Abstract

In this paper, we propose the layer structure of a CNN (Convolutional Neural Network) pest-image classifier, together with a background-removal image-processing algorithm that improves classification accuracy, as part of a smart monitoring system for pine wilt disease pest control. We collected image data of pine wilt disease vector insects, constructed and trained the CNN classifier model, and ran experiments to verify the classification accuracy of the model and the effect of the proposed algorithm. In the experiments, the proposed method accurately detected and preprocessed the object region in all test images, yielding a classification accuracy of about 98.91%; this shows that the proposed layer structure classifies the target pest images effectively in various environments. In a field test using the Smart Trap for capturing pine wilt disease vectors, the proposed classification algorithm remained effective in a real environment, achieving an accuracy of 88.25%, an improvement of about 8.12 percentage points attributable to the image-cropping preprocessing. Ultimately, we will proceed to apply these techniques and verify their functionality in field tests at various sites.
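The preprocessing pipeline described above (background removal followed by cropping to the subject's region) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name `crop_subject`, the difference threshold, and the toy image sizes are all assumptions made for the example.

```python
import numpy as np

def crop_subject(frame, background, thresh=30):
    """Separate the subject from a known background image and crop the
    original frame to the subject's bounding box.
    `thresh` is an illustrative value, not taken from the paper."""
    # Absolute difference against the empty-trap background image
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    mask = (diff > thresh).astype(np.uint8)   # binary foreground mask
    ys, xs = np.nonzero(mask)
    if ys.size == 0:                          # nothing detected
        return frame
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    return frame[y0:y1 + 1, x0:x1 + 1]        # cropped subject region

# Toy example: a bright 3x3 "insect" on a dark background
bg = np.zeros((10, 10), dtype=np.uint8)
img = bg.copy()
img[4:7, 4:7] = 200
print(crop_subject(img, bg).shape)  # (3, 3)
```

The cropped region, rather than the full frame, is what would be fed to the classifier, which is consistent with the accuracy gain the abstract attributes to cropping.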


Fig. 1. Pheromone trap for pine-wilt disease control[2]


Fig. 2. Smart trap for pine-wilt disease control


Fig. 3. Smart trap’s magnified lower detail


Fig. 4. Operation diagram of the Smart trap


Fig. 5. H/W parts used in Smart trap


Fig. 6. Operation flowchart of the Smart trap


Fig. 7. Process of the classification for pest monitoring


Fig. 8. Erosion operation using insect image


Fig. 9. Dilation operation using insect image
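Figures 8 and 9 illustrate the two basic morphological operations used for noise removal, which Fig. 13 applies as runs of consecutive dilations and erosions. A minimal NumPy sketch with a 3x3 square structuring element, on toy data rather than the paper's insect images:

```python
import numpy as np

def erode(mask):
    """Binary erosion, 3x3 structuring element: a pixel survives
    only if its entire 3x3 neighborhood is set."""
    p = np.pad(mask, 1)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + mask.shape[0],
                     1 + dx : 1 + dx + mask.shape[1]]
    return out

def dilate(mask):
    """Binary dilation: a pixel is set if any 3x3 neighbor is set."""
    p = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy : 1 + dy + mask.shape[0],
                     1 + dx : 1 + dx + mask.shape[1]]
    return out

# Dilation followed by erosion ("closing") fills small holes inside
# the subject while leaving its outline roughly intact.
m = np.zeros((7, 7), dtype=np.uint8)
m[2:5, 2:5] = 1      # 3x3 blob
m[3, 3] = 0          # one-pixel hole inside it
closed = erode(dilate(m))
print(closed[3, 3])  # 1 -- the hole is filled
```

Applying the operations in the opposite order ("opening") removes one-pixel specks instead, which matches the dilation/erosion sequences shown in Fig. 13.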


Fig. 10. Classification accuracy according to the number of training iterations


Fig. 11. Experimental image data ((a) Monochamus saltuarius, (b) General insects, (c) Leaves, etc.)


Fig. 12. Result of background removal ((a),(d): Foreground image, (b),(e): Background image, (c),(f): Separated binary image)


Fig. 13. Results of noise removal through morphological transformation ((a),(b): Result of two consecutive dilations, (c),(d): Result of two consecutive erosions, (e),(f): Result of two consecutive erosions, (g),(h): Result of two consecutive dilations)


Fig. 14. Result of subject area extraction algorithm ((a) Noise removal image, (b) Rectangle area information extraction, (c) Cropped original image)


Fig. 15. Results of pre-processed images taken in field test ((a) Pine wilt insect image, (b) Background image of (a), (c) Cropped image from (a), (d) General insect image, (e) Background image of (d), (f) Cropped image from (d))

Table 1. Foreground and background pixel separation algorithm


Table 2. Algorithm to find the region of an object from a binary image

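Table 2's algorithm is reproduced in the paper only as an image, so the following is a plausible sketch rather than the authors' code: it finds the bounding rectangle of the largest 4-connected foreground region in a binary image via breadth-first labeling. The function name `largest_region_rect` and the choice of 4-connectivity are assumptions made for illustration.

```python
from collections import deque
import numpy as np

def largest_region_rect(binary):
    """Return (y0, x0, y1, x1), the bounding rectangle of the largest
    4-connected foreground region in a binary image, or None if empty."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    best, best_size = None, 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                # BFS over one connected component
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                ys, xs = [sy], [sx]
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                            ys.append(ny)
                            xs.append(nx)
                if len(ys) > best_size:   # keep the largest component
                    best_size = len(ys)
                    best = (min(ys), min(xs), max(ys), max(xs))
    return best

b = np.zeros((8, 8), dtype=np.uint8)
b[1, 1] = 1        # one-pixel speck of residual noise
b[3:7, 2:6] = 1    # 4x4 subject blob
print(largest_region_rect(b))  # (3, 2, 6, 5)
```

Keeping only the largest component makes the rectangle robust to any noise pixels that survive the morphological filtering, which is the role Fig. 14 assigns to this step.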

Table 3. CNN Learning and Classifier Model Structure

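The actual layer structure of the classifier is given in Table 3 (available only as an image here). As a generic illustration of how one convolution / ReLU / max-pooling stage of such a CNN transforms feature-map shapes, here is a minimal single-channel NumPy sketch; the 28x28 input and 3x3 kernel sizes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution for a single channel: the basic CNN layer."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    """Elementwise non-linearity used between layers."""
    return np.maximum(x, 0)

def maxpool2(x):
    """2x2 max pooling with stride 2 (truncates odd edges)."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Shape trace for one conv -> ReLU -> pool stage on a 28x28 input:
# 28x28 --(3x3 valid conv)--> 26x26 --(2x2 pool)--> 13x13
x = np.random.rand(28, 28)
k = np.random.rand(3, 3)
y = maxpool2(relu(conv2d(x, k)))
print(y.shape)  # (13, 13)
```

Stacking several such stages and flattening into fully connected layers yields a classifier of the general kind Table 3 specifies; the paper's exact layer counts, filter sizes, and channel widths are in the table itself.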

Table 4. Acquired experimental data-set


Table 5. Classification accuracy evaluation according to various experiment groups


Table 6. Classification accuracy evaluation after cropping the subject area


Table 7. Comparison of classification accuracy before and after cropping the subject area


Table 8. Comparison of classification accuracy among the proposed method, k-NN and SVM.


References

  1. Korea Forest Service, "Pine Wilt Disease", http://www.forest.go.kr/newkfsweb/html/HtmlPage.do?pg=/conser/conser_020103.html
  2. Joongang Ilbo, "Pheromone tempted trap for Pine Wilt Disease mediators", http://news.joins.com/article/18533453
  3. Nongupin Newspaper, "Pine Wilt Insect...'Captured using pheromone tempted trap'", http://www.nongupin.co.kr/news/articleView.html?idxno=39211
  4. N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 20-25 June 2005.
  5. Scikits-Image, "Histogram of Oriented Gradients", http://scikit-image.org/docs/0.7.0/auto_examples/plot_hog.html
  6. David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110, November 2004. https://doi.org/10.1023/B:VISI.0000029664.99615.94
  7. Corinna Cortes and Vladimir Vapnik, "Support-Vector Networks", Machine Learning, Vol. 20, No. 3, pp. 273-297, September 1995. https://doi.org/10.1007/BF00994018
  8. J. R. Quinlan, "Induction of Decision Trees", Machine Learning, Vol. 1, No. 1, pp. 81-106, March 1986. https://doi.org/10.1007/BF00116251
  9. Dmitry Yu. and Andrey D., "Decision Stream: Cultivating Deep Decision Trees", IEEE-ICTAI, pp. 905-912, 2017.
  10. Leo Breiman, "Random Forests", Machine Learning, Vol. 45, No. 1, pp. 5-32, October 2001. https://doi.org/10.1023/A:1010933404324
  11. Leo Breiman. "Bagging predictors," Technical Report 421, Department of Statistics, University of California at Berkeley, 1994.
  12. William Koehrsen, "Random Forest Simple Explanation", http://medium.com/@williamkoehrsen/random-forest-simple-explanation-377895a60d2d, 2017.
  13. Limiao Deng and Renshi Yu, "Pest Recognition System Based on Bio-Inspired Filtering and LCP Features", International Computer Conference on Wavelet Active Media Technology and Information Processing, pp. 202-204, December 2015.
  14. Ronny Hansch and Olaf Hellwich, "Feature independent classification of hyperspectral images by projection-based random forests", Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, pp. 1-4, June 2015.
  15. Zhiqiang Qiao and Qinyan Zhang, "Application of SVM based on genetic algorithm in classification of cataract fundus images", IEEE International Conference on Imaging Systems and Techniques, pp. 1-5, October 2017.
  16. Tsung-Han Chan and Kui Jia, "PCANet: A Simple Deep Learning Baseline for Image Classification?", IEEE Transactions on Image Processing, Vol. 24, No. 12, pp. 5017-5032, December 2015. https://doi.org/10.1109/TIP.2015.2475625
  17. Sun-Wook Choi and Chong Ho Lee, "Hypergraph Model Based Scene Image Classification Method", Journal of Korean Institute of Intelligent Systems, Vol. 24, No. 2, pp. 166-172, April 2014. https://doi.org/10.5391/JKIIS.2014.24.2.166
  18. Raspberry Pi 3, http://www.raspberrypi.org
  19. Jean Serra, "Image Analysis and Mathematical Morphology, Volume 2: Theoretical Advances", Journal of Microscopy, Vol. 152, No. 2, pp. 597-597, November 1988. https://doi.org/10.1111/j.1365-2818.1988.tb01425.x
  20. J.R.R. Uijlings and K.E.A. van de Sande, "Selective Search for Object Recognition", International Journal of Computer Vision, Vol. 104, No. 2, pp. 154-171, September 2013. https://doi.org/10.1007/s11263-013-0620-5
  21. Pedro F. Felzenszwalb and Ross B. Girshick, "Object Detection with Discriminatively Trained Part-Based Models", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 9, pp. 1627-1645, September 2010. https://doi.org/10.1109/TPAMI.2009.167