Deep Learning for Weeds' Growth Point Detection Based on U-Net

Arsa, Dewa Made Sri (Division of Electronics and Information Engineering, Jeonbuk National University)
Lee, Jonghoon (Core Research Institute of Intelligent Robots, Jeonbuk National University)
Won, Okjae (Production Technology Research Division, Rural Development Administration, National Institute of Crop Science)
Kim, Hyongsuk (Division of Electronics and Information Engineering, Jeonbuk National University)
Publication Information
Smart Media Journal, Vol. 11, No. 7, pp. 94-103, 2022
Abstract
Weeds damage crops, so weed-control treatments that cause less pollution and contamination need to be developed, and artificial intelligence offers agriculture a path toward smart farming. This study presents automated weed growth-point detection using deep learning. It combines semantic graphics for generating data annotations with a U-Net that uses a pre-trained deep network as its backbone to locate the growth points of weeds in a given field scene. The dataset was collected from an actual field. We evaluated the method with intersection over union (IoU), F1-score, precision, and recall. MobileNetV2 was chosen as the backbone and compared with ResNet-34. The results show that the proposed method is accurate enough to detect growth points and to handle brightness variation. The best performance was achieved with MobileNetV2 as the backbone: IoU 96.81%, precision 97.77%, recall 98.97%, and F1-score 97.30%.
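The four evaluation metrics named in the abstract can all be derived from the pixel-wise true positives, false positives, and false negatives of a binary segmentation mask. The sketch below is an illustrative NumPy implementation, not the paper's code; the function name and the 0/1 mask encoding are assumptions.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute IoU, precision, recall, and F1-score for binary masks.

    pred, gt: NumPy arrays of the same shape, where nonzero pixels
    mark the predicted / ground-truth growth-point region.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # pixels correctly marked
    fp = np.logical_and(pred, ~gt).sum()   # spurious predictions
    fn = np.logical_and(~pred, gt).sum()   # missed ground-truth pixels
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, precision, recall, f1
```

In practice these counts would be accumulated over the whole test set before computing the ratios, so that images with few foreground pixels do not dominate the averages.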
Keywords
artificial intelligence; growth point; deep learning; semantic graphics; U-Net;
Citations & Related Records
Times Cited By KSCI: 6