Acknowledgement
This work was supported in part by National Research Foundation of Korea (NRF) grants funded by the Korean government (NRF-2019R1A2C1011297 and NRF-2019R1A6A1A09031717), and in part by the Crop and Weed Project administered through the Agricultural Science and Technology Development Cooperation Research Program (PJ01572002).
References
- S. A. Fennimore, D. C. Slaughter, M. C. Siemens, R. G. Leon, and M. N. Saber, "Technology for automation of weed control in specialty crops," Weed Technology, Vol. 30, No. 4, pp. 823-837, Feb. 2016. https://doi.org/10.1614/WT-D-16-00070.1
- D. L. Shaner and H. J. Beckie, "The future for weed control and technology," Pest Management Science, Vol. 70, No. 9, pp. 1329-1339, Sep. 2014. https://doi.org/10.1002/ps.3706
- P. Lottes, J. Behley, N. Chebrolu, A. Milioto, and C. Stachniss, "Joint stem detection and crop-weed classification for plant-specific treatment in precision farming," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, pp. 8233-8238, Oct. 2018.
- A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, Vol. 25, 2012.
- F. Schroff, D. Kalenichenko, and J. Philbin, "FaceNet: A unified embedding for face recognition and clustering," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815-823, Boston, USA, Jun. 2015.
- A. S. Al-Waisy, R. Qahwaji, S. Ipson, and S. Al-Fahdawi, "A multimodal deep learning framework using local feature representations for face recognition," Machine Vision and Applications, Vol. 29, No. 1, pp. 35-54, Jan. 2018. https://doi.org/10.1007/s00138-017-0870-2
- Z. Weng, F. Meng, S. Liu, Y. Zhang, Z. Zheng, and C. Gong, "Cattle face recognition based on a two-branch convolutional neural network," Computers and Electronics in Agriculture, Vol. 196, p. 106871, May 2022. https://doi.org/10.1016/j.compag.2022.106871
- T. Falk, D. Mai, R. Bensch, Ö. Çiçek, A. Abdulkadir, Y. Marrakchi, A. Böhm, J. Deubner, Z. Jäckel, K. Seiwald, et al., "U-Net: deep learning for cell counting, detection, and morphometry," Nature Methods, Vol. 16, No. 1, pp. 67-70, 2019. https://doi.org/10.1038/s41592-018-0261-2
- R. Jain, P. Nagrath, G. Kataria, V. S. Kaushik, and D. J. Hemanth, "Pneumonia detection in chest X-ray images using convolutional neural networks and transfer learning," Measurement, Vol. 165, p. 108046, Dec. 2020. https://doi.org/10.1016/j.measurement.2020.108046
- Y. Xu, A. Hosny, R. Zeleznik, C. Parmar, T. Coroller, I. Franco, R. H. Mak, and H. J. Aerts, "Deep learning predicts lung cancer treatment response from serial medical imaging," Clinical Cancer Research, Vol. 25, No. 11, pp. 3266-3275, Jun. 2019. https://doi.org/10.1158/1078-0432.CCR-18-2495
- J. Wang, Y. Wang, X. Tao, Q. Li, L. Sun, J. Chen, M. Zhou, M. Hu, and X. Zhou, "PCA-U-Net based breast cancer nest segmentation from microarray hyperspectral images," Fundamental Research, Vol. 1, No. 5, pp. 631-640, Sep. 2021. https://doi.org/10.1016/j.fmre.2021.06.013
- C. Gunavathi, K. Sivasubramanian, P. Keerthika, and C. Paramasivam, "A review on convolutional neural network based deep learning methods in gene expression data for disease diagnosis," Materials Today: Proceedings, Vol. 45, pp. 2282-2285, 2021. https://doi.org/10.1016/j.matpr.2020.10.263
- J. Ni, Y. Chen, Y. Chen, J. Zhu, D. Ali, and W. Cao, "A survey on theories and applications for self-driving cars based on deep learning methods," Applied Sciences, Vol. 10, No. 8, p. 2749, Apr. 2020. https://doi.org/10.3390/app10082749
- A. Fuentes, S. Yoon, S. C. Kim, and D. S. Park, "A robust deep-learning-based detector for real-time tomato plant diseases and pests recognition," Sensors, Vol. 17, No. 9, p. 2022, Sep. 2017. https://doi.org/10.3390/s17092022
- H. Huang, J. Deng, Y. Lan, A. Yang, X. Deng, and L. Zhang, "A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery," PLoS ONE, Vol. 13, No. 4, Apr. 2018.
- M. Agarwal, S. K. Gupta, and K. Biswas, "Development of efficient CNN model for tomato crop disease identification," Sustainable Computing: Informatics and Systems, Vol. 28, 2020.
- M. Xu, S. Yoon, A. Fuentes, J. Yang, and D. S. Park, "Style-consistent image translation: A novel data augmentation paradigm to improve plant disease recognition," Frontiers in Plant Science, Vol. 12, p. 773142, Feb. 2021.
- S. P. Mohanty, D. P. Hughes, and M. Salathé, "Using deep learning for image-based plant disease detection," Frontiers in Plant Science, Vol. 7, Sep. 2016.
- N. Teimouri, M. Dyrmann, P. R. Nielsen, S. K. Mathiassen, G. J. Somerville, and R. N. Jørgensen, "Weed growth stage estimator using deep convolutional neural networks," Sensors, Vol. 18, No. 5, May 2018.
- C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, "Rethinking the inception architecture for computer vision," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818-2826, Jun. 2016.
- O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, pp. 234-241, Nov. 2015.
- F. Liu and L. Wang, "Unet-based model for crack detection integrating visual explanations," Construction and Building Materials, Vol. 322, Mar. 2022.
- N. Cinar, A. Ozcan, and M. Kaya, "A hybrid DenseNet121-UNet model for brain tumor segmentation from MR images," Biomedical Signal Processing and Control, Vol. 76, Jul. 2022.
- S. P. Adhikari, H. Yang, and H. Kim, "Learning semantic graphics using convolutional encoder-decoder network for autonomous weeding in paddy," Frontiers in Plant Science, Oct. 2019.
- T. Ilyas and H. Kim, "A deep learning based approach for strawberry yield prediction via semantic graphics," in 2021 21st International Conference on Control, Automation and Systems (ICCAS), IEEE, pp. 1835-1841, Jeju, Korea, Oct. 2021.
- S. Minaee, Y. Y. Boykov, F. Porikli, A. J. Plaza, N. Kehtarnavaz, and D. Terzopoulos, "Image segmentation using deep learning: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 3523-3542, Feb. 2021.
- J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, 2015.
- K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
- V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A deep convolutional encoder-decoder architecture for image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 39, No. 12, pp. 2481-2495, Jan. 2017. https://doi.org/10.1109/TPAMI.2016.2644615
- L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 40, No. 4, pp. 834-848, Apr. 2018. https://doi.org/10.1109/TPAMI.2017.2699184
- W. Liu, A. Rabinovich, and A. C. Berg, "ParseNet: Looking wider to see better," arXiv preprint arXiv:1506.04579, Nov. 2015.
- H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid scene parsing network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2881-2890, Hawaii, USA, Jul. 2017.
- Y. Wang, Z. Xu, H. Shen, B. Cheng, and L. Yang, "CenterMask: single-shot instance segmentation with point representation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9313-9321, Jun. 2020.
- L.-C. Chen, G. Papandreou, F. Schroff, and H. Adam, "Rethinking atrous convolution for semantic image segmentation," arXiv preprint arXiv:1706.05587, Dec. 2017.