Korean Traditional Music Genre Classification Using Sample and MIDI Phrases

  • Lee, JongSeol (Department of Computer Science and Engineering, Konkuk University) ;
  • Lee, MyeongChun (Smart Media R&D, Korea Electronics Technology Institute) ;
  • Jang, Dalwon (Smart Media R&D, Korea Electronics Technology Institute) ;
  • Yoon, Kyoungro (Department of Computer Science and Engineering, Konkuk University)
  • Received : 2017.09.27
  • Accepted : 2018.03.08
  • Published : 2018.04.30

Abstract

This paper proposes a MIDI- and audio-based music genre classification method for Korean traditional music. There are many traditional instruments in Korea, and most of the traditional songs played on these instruments share similar patterns and rhythms. Although music information processing tasks such as music genre classification and audio melody extraction have been widely studied, most studies have focused on pop, jazz, rock, and other mainstream genres, and there are few studies on Korean traditional music because of the lack of datasets. This paper analyzes raw audio and MIDI phrases of Korean traditional music performed on Korean traditional musical instruments. The samples and MIDI phrases classified by our system will be used to construct a database and to implement our Kontakt-based instrument library, so the classification system can serve as the basis of a management system for a Korean traditional music library. Appropriate feature sets for raw audio and MIDI phrases are proposed, and the classification results obtained with machine learning algorithms such as support vector machines, multi-layer perceptrons, decision trees, and random forests are outlined in this paper.
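The abstract does not list the exact feature sets or classifier configurations, but the pipeline it describes (frame-level audio features summarized per clip, then a conventional classifier) can be illustrated with a minimal sketch. The example below is built on assumptions, not the authors' implementation: it uses MFCC statistics extracted with librosa and SVM / random forest classifiers from scikit-learn, evaluated with 5-fold cross-validation.

```python
# Minimal sketch of audio genre classification with hand-crafted features.
# Assumptions (not from the paper): MFCC mean/std descriptors via librosa,
# SVM and random forest classifiers from scikit-learn, 5-fold cross-validation.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def extract_features(path, sr=22050, n_mfcc=13):
    """Summarize one audio clip as a fixed-length feature vector."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape (n_mfcc, frames)
    # Mean and standard deviation over time give a clip-level descriptor.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def evaluate(paths, labels):
    """Compare two classifiers on the extracted features with 5-fold CV."""
    X = np.vstack([extract_features(p) for p in paths])
    y = np.asarray(labels)
    classifiers = [
        ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))),
        ("Random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ]
    for name, clf in classifiers:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Here `paths` and `labels` stand for a hypothetical collection of Korean traditional music clips and their genre annotations. Features derived from MIDI phrases (for example note density, pitch range, or interval statistics) could be fed into the same classifiers in place of the audio descriptors.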

