References
- Aggarwal, C. C. (2014). Data Classification: Algorithms and Applications, CRC Press.
- Cramer, A., Lostanlen, V., Farnsworth, A., Salamon, J. and Bello, J. P. (2020). Chirping up the Right Tree: Incorporating Biological Taxonomies into Deep Bioacoustic Classifiers. IEEE International Conference on Acoustics, Speech, and Signal Processing, May 04-08, Barcelona, Spain, pp. 901-905.
- Fernandez, A., Garcia, S., Galar, M., Prati, R. C., Krawczyk, B. and Herrera, F. (2018). Learning from Imbalanced Data Sets, Springer.
- Ganaie, M. A., Hu, M., Malik, A. K., Tanveer, M. and Suganthan, P. N. (2022). Ensemble Deep Learning: A Review. Engineering Applications of Artificial Intelligence, 115, https://doi.org/10.1016/j.engappai.2022.105151
- Gunawan, K. W., Hidayat, A. A., Cenggoro, T. W. and Pardamean, B. (2023). Repurposing Transfer Learning Strategy of Computer Vision for Owl Sound Classification. Procedia Computer Science, 216, 424-430, https://doi.org/10.1016/j.procs.2022.12.154
- Hidayat, A. A., Cenggoro, T. W. and Pardamean, B. (2021). Convolutional Neural Networks for Scops Owl Sound Classification. Procedia Computer Science, 179, https://doi.org/10.1016/j.procs.2020.12.010
- Huang, G., Liu, Z., van der Maaten, L. and Weinberger, K. Q. (2017). Densely Connected Convolutional Networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jul. 21-26, Honolulu, HI, USA, pp. 4700-4708.
- Incze, A., Jancso, H., Szilagyi, Z., Farkas, A. and Sulyok, C. (2018). Bird Sound Recognition Using a Convolutional Neural Network. IEEE 16th International Symposium on Intelligent Systems and Informatics, Sep. 13-15, Subotica, Serbia, pp. 295-300.
- Jeong, H., Go, J. and Shin, C. (2021). Abnormal Detection with Microscope through Deep Learning. Journal of Korea Society of Industrial Information Systems, 26(2), https://doi.org/10.9723/jksiis.2021.26.2.001
- Kahl, S., Wood, C., Eibl, M. and Klinck, H. (2021). BirdNET: A Deep Learning Solution for Avian Diversity Monitoring. Ecological Informatics, 61, https://doi.org/10.1016/j.ecoinf.2021.101236
- Kim, C., Cho, Y., Jung, S., Rew, J. and Hwang, E. (2020). Animal Sounds Classification Scheme based on Multi-Feature Network with Mixed Datasets. KSII Transactions on Internet and Information Systems, 14(8), 3384-3398, https://doi.org/10.3837/tiis.2020.08.013
- Kim, E., Moon, J., Shim, J. and Hwang, E. (2023). DualDiscWaveGAN-Based Data Augmentation Scheme for Animal Sound Classification. Sensors, 23(4), https://doi.org/10.3390/s23042024
- Kim, J., Seok, C., Kim, M. and Kim, S. (2022). A System for Recommending Audio Devices based on Frequency Band Analysis of Vocal Component in Sound Source. Journal of Korea Society of Industrial Information Systems, 27(6), 1-12, https://doi.org/10.9723/jksiis.2022.27.6.001
- Kim, J., Lee, Y., Kim, D. and Ko, H. (2020). Temporal Attention based Animal Sound Classification. The Journal of the Acoustical Society of Korea, 39(5), 406-413, https://doi.org/10.7776/ASK.2020.39.5.406
- Koh, C., Chang, J., Tai, C., Huang, D., Hsieh, H. and Liu, Y. (2019). Bird Sound Classification using Convolutional Neural Networks. Conference and Labs of the Evaluation Forum, Sep. 09-12, Lugano, Switzerland, Vol. 2380.
- Korea Forest Service (2023). Changes in Forests due to Climate Change, https://www.forest.go.kr/ (Accessed on Jan. 03, 2024)
- Lee, W., Kim, Y., Kim, J. and Lee, C. (2020). Forecasting of Iron Ore Prices using Machine Learning. Journal of Korea Society of Industrial Information Systems, 25(2), 57-72, https://doi.org/10.9723/jksiis.2020.25.2.057
- Lin, T., Goyal, P., Girshick, R., He, K. and Dollar, P. (2017). Focal Loss for Dense Object Detection. Proceedings of the IEEE/CVF International Conference on Computer Vision, Oct. 22-29, Venice, Italy, pp. 2980-2988.
- Martynov, E. and Uematsu, Y. (2022). Dealing with Class Imbalance in Bird Sound Classification. Conference and Labs of the Evaluation Forum, Sep. 5-8, Bologna, Italy, pp. 2151-2158.
- Mohammed, A. and Kora, R. (2023). A Comprehensive Review on Ensemble Deep Learning: Opportunities and Challenges. Journal of King Saud University - Computer and Information Sciences, 35(2), 757-774, https://doi.org/10.1016/j.jksuci.2023.01.014
- Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J. and Chintala, S. (2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. Proceedings of the Advances in Neural Information Processing Systems, Dec. 08-14, Vancouver, BC, Canada, pp. 8026-8037.
- Prusa, Z. and Holighaus, N. (2022). Phase Vocoder Done Right. arXiv, arXiv:2202.07382, https://doi.org/10.48550/arXiv.2202.07382
- Sun, Y., Maeda, T. M., Solis-Lemus, C., Pimentel-Alarcon, D. and Burivalova, Z. (2022). Classification of Animal Sounds in a Hyperdiverse Rainforest using Convolutional Neural Networks with Data Augmentation. Ecological Indicators, 145, https://doi.org/10.1016/j.ecolind.2022.109621
- Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. and Wojna, Z. (2016). Rethinking the Inception Architecture for Computer Vision. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun. 26-Jul. 01, Las Vegas, NV, USA, pp. 2818-2826.