
LeafNet: Plants Segmentation using CNN

  • 조정원 (School of Computer Information and Communication Engineering, Kunsan National University) ;
  • 이민혜 (School of Computer Information and Communication Engineering, Kunsan National University) ;
  • 이홍로 (School of Computer Information and Communication Engineering, Kunsan National University) ;
  • 정용석 (Major of Plant Resources and Environment, Jeju National University) ;
  • 백정호 (National Institute of Agricultural Sciences, Rural Development Administration) ;
  • 김경환 (National Institute of Agricultural Sciences, Rural Development Administration) ;
  • 이창우 (School of Computer Information and Communication Engineering, Kunsan National University)
  • Received : 2019.07.23
  • Accepted : 2019.08.12
  • Published : 2019.08.30

Abstract

Plant phenomics is a technique that observes the morphological features of plants and analyzes the acquired image big data in order to select plant varieties and genetic traits of superior quality. Conventional methods are difficult to apply to a phenomics system that handles such big data because the color threshold must be adjusted manually for each detection target. In this paper, we propose a convolutional neural network (CNN) architecture that automatically segments plants from the background for the phenomics system. LeafNet consists of nine convolution layers and a sigmoid activation function that determines the presence of a plant. After training, LeafNet achieved a precision of 98.0% and a recall of 90.3% on plant seedling images, confirming its applicability to the phenomics system.
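The abstract describes the architecture only at a high level: nine convolution layers followed by a sigmoid that decides plant versus background. The following is a minimal PyTorch sketch of that idea, not the authors' published implementation; the channel widths, 3x3 kernels, batch normalization, and the final 1x1 convolution are illustrative assumptions.

import torch
import torch.nn as nn

class LeafNetSketch(nn.Module):
    """Nine convolution layers ending in a per-pixel sigmoid plant/background score."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        # Assumed channel progression; the abstract only states "nine convolution layers".
        channels = [in_channels, 16, 32, 32, 64, 64, 64, 32, 16]
        blocks = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):  # eight 3x3 convolutions
            blocks += [
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            ]
        # Ninth convolution maps the features to a single-channel score map.
        blocks.append(nn.Conv2d(channels[-1], 1, kernel_size=1))
        self.features = nn.Sequential(*blocks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The sigmoid turns each pixel's score into a plant-presence probability.
        return torch.sigmoid(self.features(x))

if __name__ == "__main__":
    net = LeafNetSketch()
    rgb = torch.rand(1, 3, 256, 256)   # dummy RGB seedling image
    prob = net(rgb)                    # (1, 1, 256, 256) probabilities
    mask = (prob > 0.5).float()        # binary plant/background mask

For context, the reported figures follow the standard definitions precision = TP / (TP + FP) and recall = TP / (TP + FN); assuming a per-pixel evaluation, 98.0% precision and 90.3% recall mean that nearly all pixels predicted as plant are correct, while roughly one in ten true plant pixels is missed.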

