Acknowledgement
This work was supported by the Smart Farm Innovation Technology Development Program (Project No. 421031-04), supervised by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry (IPET) and the Korea Smart Farm R&D Foundation (KosFarm), and funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA), the Ministry of Science and ICT (MSIT), and the Rural Development Administration (RDA).
References
- J. Won et al., "Study on Traveling Characteristics of Straight Automatic Steering Devices for Drivable Agricultural Machinery," Journal of Drive and Control, vol. 19, no. 4, pp. 19-28, 2022.
- B. Seong et al., "Predicting the spray uniformity of pest control drone using multi-layer perceptron," Journal of Drive and Control, vol. 20, no. 3, pp. 25-34, 2023.
- T. Kim et al., "Estimation of tomato maturity as a continuous index using deep neural networks," Korean Journal of Agricultural Science, vol. 49, no. 4, pp. 785-793, 2022.
- S. W. Kang et al., "Localization of ripe tomato bunch using deep neural networks and class activation mapping," Korean Journal of Agricultural Science, vol. 50, no. 3, pp. 357-364, 2023.
- K. S. Kim, J. I. Lee, S. W. Gwak, W. Y. Kang, D. Y. Shin, and S. H. Hwang, "Construction of Database for Deep Learning-based Occlusion Area Detection in the Virtual Environment," Journal of Drive and Control, vol. 19, no. 3, pp. 9-15, 2022.
- A. Kuznetsova, T. Maleva, and V. Soloviev, "Using YOLOv3 algorithm with pre- and post-processing for apple detection in fruit-harvesting robot," Agronomy, vol. 10, no. 7, Art. no. 1016, 2020.
- W. Chen et al., "An apple detection method based on des-YOLO v4 algorithm for harvesting robots in complex environment," Mathematical Problems in Engineering, pp. 1-12, 2021.
- L. Fu et al., "Faster R-CNN-based apple detection in dense-foliage fruiting-wall trees using RGB and depth features for robotic harvesting," Biosystems Engineering, vol. 197, pp. 245-256, 2020.
- Y. Yu, K. Zhang, L. Yang, and D. Zhang, "Fruit detection for strawberry harvesting robot in non-structural environment based on Mask-RCNN," Computers and Electronics in Agriculture, vol. 163, Art. no. 104846, 2019.
- Y. Xiong et al., "An autonomous strawberry-harvesting robot: Design, development, integration, and field evaluation," Journal of Field Robotics, vol. 37, no. 2, pp. 202-224, 2020.
- H. Hu et al., "Recognition and localization of strawberries from 3D binocular cameras for a strawberry picking robot using coupled YOLO/Mask R-CNN," International Journal of Agricultural and Biological Engineering, vol. 15, no. 6, pp. 175-179, 2022.
- M. O. Lawal, "Tomato detection based on modified YOLOv3 framework," Scientific Reports, vol. 11, no. 1, pp. 1-11, 2021.
- A. Toshev and C. Szegedy, "DeepPose: Human pose estimation via deep neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1653-1660, 2014.
- Z. Cao et al., "Realtime multi-person 2D pose estimation using part affinity fields," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7291-7299, 2017.
- X. Li et al., "Deep cascaded convolutional models for cattle pose estimation," Computers and Electronics in Agriculture, vol. 164, Art. no. 104885, 2019.
- T. Kim et al., "2D pose estimation of multiple tomato fruit-bearing systems for robotic harvesting," Computers and Electronics in Agriculture, vol. 211, Art. no. 108004, 2023.
- Z. Wu et al., "A method for identifying grape stems using keypoints," Computers and Electronics in Agriculture, vol. 209, Art. no. 107825, 2023.
- B. C. Russell et al., "LabelMe: A database and web-based tool for image annotation," International Journal of Computer Vision, vol. 77, pp. 157-173, 2008.
- O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015: 18th International Conference, pp. 234-241, 2015.
- W. S. Kim et al., "Weakly supervised crop area segmentation for an autonomous combine harvester," Sensors, vol. 21, no. 14, Art. no. 4801, 2021.
- S. Cho et al., "Estimation of two-dimensional position of soybean crop for developing weeding robot," Journal of Drive and Control, vol. 20, no. 2, pp. 15-23, 2023.
- S. E. Wei et al., "Convolutional pose machines," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4724-4732, 2016.