
A Study on the Artificial Intelligence-Based Soybean Growth Analysis Method

  • 전문석 (Department of Agricultural Engineering, National Institute of Agricultural Sciences) ;
  • 김영태 (Department of Agricultural Biological Resources, National Institute of Agricultural Sciences) ;
  • 정유석 (Department of Agricultural Engineering, National Institute of Agricultural Sciences) ;
  • 배효준 (Department of Agricultural Engineering, National Institute of Agricultural Sciences) ;
  • 이채원 (Department of Central Area Crop Science, National Institute of Crop Science) ;
  • 김송림 (Department of Agricultural Biological Resources, National Institute of Agricultural Sciences) ;
  • 최인찬 (Department of Agricultural Engineering, National Institute of Agricultural Sciences)
  • Received : 2023.09.22
  • Accepted : 2023.10.24
  • Published : 2023.10.30

Abstract

Soybeans are one of the world's top five staple crops and a major source of plant-based protein. Because soybean grain production is highly sensitive to climate change, the National Institute of Agricultural Sciences is conducting crop-phenotype research through growth analysis of various soybean varieties. While the capture of growth-progression photographs is automated, verifying, recording, and analyzing growth stages is still done manually. In this paper, we designed and trained a YOLOv5s model to detect soybean leaf objects in image data of soybean plants, and a Convolutional Neural Network (CNN) model to judge whether each detected soybean leaf has unfolded. We combined the two models and implemented an algorithm that distinguishes node layers based on the coordinates of the detected leaves. The result is a program that takes time-series image data of soybeans as input and performs growth analysis; it can accurately determine soybean growth stages up to the second or third compound leaf.
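The layer-distinguishing step described above can be sketched as follows. This is a minimal illustration only: the function name, the `(x_center, y_center)` layout of the YOLOv5s detections, and the pixel `gap_threshold` are assumptions for the sketch, not the paper's actual implementation.

```python
def group_leaves_into_layers(detections, gap_threshold=50):
    """Group leaf detections into node layers by vertical position.

    detections: list of (x_center, y_center) pixel coordinates of
    detected leaf boxes (e.g. box centers from a YOLOv5s model).
    Leaves whose y-coordinates lie within gap_threshold pixels of
    the previous leaf are treated as the same layer.
    Returns a list of layers, bottom-most layer first.
    """
    if not detections:
        return []
    # Image y grows downward, so a larger y means lower on the plant;
    # sort from the bottom of the plant upward.
    ordered = sorted(detections, key=lambda d: d[1], reverse=True)
    layers = [[ordered[0]]]
    for det in ordered[1:]:
        if abs(det[1] - layers[-1][-1][1]) <= gap_threshold:
            layers[-1].append(det)   # same layer as the previous leaf
        else:
            layers.append([det])     # vertical gap: start a new layer
    return layers
```

Counting the resulting layers over a time series of images is one simple way to track how many compound-leaf stages the plant has reached.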


Acknowledgement

This work was supported by a research project of the Rural Development Administration (Project No. PJ01486501).
