Acknowledgement
This work was supported by the Korea Environment Industry & Technology Institute (KEITI) through the Aquatic Ecosystem Conservation Research Program, funded by the Korea Ministry of Environment (MOE) (2020003030006), and by the Nakdonggang National Institute of Biological Resources (NNIBR), funded by the Ministry of Environment (MOE) of the Republic of Korea (NNIBR202101103).
References
- Codd, G. A., Morrison, L. F., and Metcalf, J. S. (2005). Cyanobacterial toxins: risk management for health protection, Toxicology and Applied Pharmacology, 203, 264-272. https://doi.org/10.1016/j.taap.2004.02.016
- Girshick, R. (2015). Fast R-CNN, Proceedings of the IEEE International Conference on Computer Vision, 1440-1448.
- Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 580-587.
- He, K., Gkioxari, G., Dollar, P., and Girshick, R. (2017). Mask R-CNN, Proceedings of the IEEE International Conference on Computer Vision, 2961-2969.
- He, K., Zhang, X., Ren, S., and Sun, J. (2015). Spatial pyramid pooling in deep convolutional networks for visual recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 37, 1904-1916. https://doi.org/10.1109/TPAMI.2015.2389824
- Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, 25, 1097-1105.
- LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning, Nature, 521, 436-444. https://doi.org/10.1038/nature14539
- Lin, T. Y., Goyal, P., Girshick, R., He, K., and Dollar, P. (2017). Focal loss for dense object detection, Proceedings of the IEEE International Conference on Computer Vision, 2980-2988.
- Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., and Berg, A. C. (2016). SSD: Single shot multibox detector, Proceedings of the European Conference on Computer Vision, 21-37.
- Ozenne, B., Subtil, F., and Maucort-Boulch, D. (2015). The precision-recall curve overcame the optimism of the receiver operating characteristic curve in rare diseases, Journal of Clinical Epidemiology, 68, 855-859. https://doi.org/10.1016/j.jclinepi.2015.02.010
- Paerl, H. W. and Otten, T. G. (2013). Harmful cyanobacterial blooms: causes, consequences, and controls, Microbial Ecology, 65, 995-1010. https://doi.org/10.1007/s00248-012-0159-y
- Pedraza, A., Bueno, G., Deniz, O., Ruiz-Santaquiteria, J., Sanchez, C., Blanco, S., Borrego-Ramos, M., Olenici, A., and Cristobal, G. (2018). Lights and pitfalls of convolutional neural networks for diatom identification, Proceedings of Optics, Photonics, and Digital Technologies for Imaging Applications V, 106790G.
- Redmon, J. and Farhadi, A. (2017). YOLO9000: Better, faster, stronger, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7263-7271.
- Redmon, J. and Farhadi, A. (2018). YOLOv3: An incremental improvement, arXiv preprint, arXiv:1804.02767.
- Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016). You only look once: Unified, real-time object detection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 779-788.
- Ren, S., He, K., Girshick, R., and Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks, arXiv preprint, arXiv:1506.01497.
- Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., and Bernstein, M. (2015). ImageNet large scale visual recognition challenge, International Journal of Computer Vision, 115, 211-252. https://doi.org/10.1007/s11263-015-0816-y
- Salido, J., Sanchez, C., Ruiz-Santaquiteria, J., Cristobal, G., Blanco, S., and Bueno, G. (2020). A low-cost automated digital microscopy platform for automatic identification of diatoms, Applied Sciences, 10, 6033. https://doi.org/10.3390/app10176033
- Sultana, F., Sufian, A., and Dutta, P. (2020). A review of object detection models based on convolutional neural network, Intelligent Computing: Image Processing Based Applications, 1-16.
- Tian, Y., Yang, G., Wang, Z., Wang, H., Li, E., and Liang, Z. (2019). Apple detection during different growth stages in orchards using the improved YOLO-V3 model, Computers and Electronics in Agriculture, 157, 417-426. https://doi.org/10.1016/j.compag.2019.01.012
- World Health Organization (WHO). (2004). Guidelines for drinking-water quality, Volume 1, World Health Organization, Geneva, Switzerland.
- Zhao, K. and Ren, X. (2019). Small aircraft detection in remote sensing images based on YOLOv3, Proceedings of IOP Conference Series: Materials Science and Engineering, 012056.
- Zhao, Z. Q., Zheng, P., Xu, S. T., and Wu, X. (2019). Object detection with deep learning: A review, IEEE Transactions on Neural Networks and Learning Systems, 30(11), 3212-3232. https://doi.org/10.1109/tnnls.2018.2876865