http://dx.doi.org/10.5391/IJFIS.2015.15.3.159

Ranking Tag Pairs for Music Recommendation Using Acoustic Similarity  

Lee, Jaesung (School of Computer Science and Engineering, Chung-Ang University)
Kim, Dae-Won (School of Computer Science and Engineering, Chung-Ang University)
Publication Information
International Journal of Fuzzy Logic and Intelligent Systems, vol. 15, no. 3, 2015, pp. 159-165
Abstract
The need to recognize musical emotion has become apparent in many music information retrieval applications. Beyond the large pool of techniques already developed in machine learning and data mining, emerging applications continue to motivate newly proposed techniques. In the music information retrieval community, many studies and applications have focused on tag-based music recommendation. A key limitation of music emotion tags is ambiguity: a single tag can cover too many subcategories. To overcome this, multiple tags can be used simultaneously to specify music clips more precisely. In this paper, we propose a novel technique for ranking tag combinations based on the acoustic similarity of music clips.
Keywords
Music emotion annotation; Acoustic feature extraction; Music emotion recognition
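The abstract does not spell out the ranking procedure, but the core idea (score a tag pair by how acoustically coherent the clips carrying both tags are) can be sketched. The following is an illustrative approximation, not the authors' actual method: the function names (`rank_tag_pairs`, `cosine`), the use of cosine similarity, and the toy feature vectors are all assumptions for the sake of the example.

```python
from itertools import combinations
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two acoustic feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_tag_pairs(clips):
    """clips: list of (feature_vector, tag_set) pairs.

    Score each tag pair by the mean pairwise acoustic similarity of the
    clips annotated with both tags; a higher score suggests the pair
    picks out an acoustically coherent subcategory. Pairs shared by
    fewer than two clips are skipped, since no similarity is defined.
    """
    tags = sorted({t for _, ts in clips for t in ts})
    scores = {}
    for pair in combinations(tags, 2):
        group = [f for f, ts in clips if set(pair) <= ts]
        if len(group) < 2:
            continue
        sims = [cosine(a, b) for a, b in combinations(group, 2)]
        scores[pair] = sum(sims) / len(sims)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy example: two "happy + energetic" clips sound alike, while the
# two "happy + calm" clips are acoustically opposed (hypothetical data).
clips = [
    ((1.0, 0.0), {"happy", "energetic"}),
    ((0.9, 0.1), {"happy", "energetic"}),
    ((0.0, 1.0), {"happy", "calm"}),
    ((1.0, 0.0), {"happy", "calm"}),
]
ranked = rank_tag_pairs(clips)
```

On this toy input, ("energetic", "happy") ranks above ("calm", "happy"), reflecting that the former pair specifies a tighter acoustic region; in practice the feature vectors would come from an acoustic feature extractor rather than being hand-set.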
References
1 B. Zhu and K. Zhang, “Music Emotion Recognition System Based on Improved GA-BP,” in Proceedings of International Conference on Computer Design and Applications, Qinhuangdao, 2010, pp. 409-412.
2 K. Bischoff, C. Firan, W. Nejdl, and R. Paiu, “How Do You Feel about Dancing Queen? Deriving Mood and Theme Annotations from User Tags,” in Proceedings of ACM/IEEE-CS Joint Conference on Digital Libraries, Austin, 2009, pp. 285-294.
3 O. Lartillot and P. Toiviainen, “MIR in MATLAB (II): A Toolbox for Musical Feature Extraction from Audio,” in Proceedings of International Conference on Music Information Retrieval, Vienna, 2007, pp. 237-244.
4 M. Ruxanda, B. Chua, A. Nanopoulos, and C. Jensen, “Emotion-based Music Retrieval on a Well-reduced Audio Feature Space,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, 2009, pp. 181-184.
5 D. Turnbull, L. Barrington, D. Torres, and G. Lanckriet, “Semantic Annotation and Retrieval of Music and Sound Effects,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 16, no. 2, pp. 467-476, Feb. 2008.
6 M. Wang, N. Zhang, and H. Zhu, “User-adaptive Music Emotion Recognition,” in Proceedings of International Conference on Signal Processing, Beijing, 2004, pp. 1352-1355.
7 B. Rocha, R. Panda, and R. Paiva, “Music Emotion Recognition: The Importance of Melodic Features,” in Proceedings of International Workshop on Machine Learning and Music, Prague, 2013, pp. 1-4.
8 R. Agrawal and R. Srikant, “Fast Algorithms for Mining Association Rules,” in Proceedings of International Conference on Very Large Data Bases, Santiago, 1994, pp. 487-499.
9 J. Skowronek, M. McKinney, and S. Van De Par, “A Demonstrator for Automatic Music Mood Estimation,” in Proceedings of International Conference on Music Information Retrieval, Vienna, 2007, pp. 345-346.
10 K. Trohidis, G. Tsoumakas, G. Kalliris, and I. Vlahavas, “Multi-label Classification of Music into Emotions,” in Proceedings of International Conference on Music Information Retrieval, Philadelphia, 2008, pp. 325-330.
11 Y. Kim, E. Schmidt, R. Migneco, B. Morton, P. Richardson, J. Scott, J. Speck, and D. Turnbull, “Music Emotion Recognition: A State of the Art Review,” in Proceedings of International Conference on Music Information Retrieval, Utrecht, 2010, pp. 255-266.
12 E. Schmidt, D. Turnbull, and Y. Kim, “Feature Selection for Content-Based, Time-Varying Musical Emotion Regression,” in Proceedings of International Conference on Music Information Retrieval, Utrecht, 2010, pp. 267-274.
13 X. Zhu, Y. Shi, H. Kim, and K. Eom, “An Integrated Music Recommendation System,” IEEE Transactions on Consumer Electronics, vol. 52, no. 3, pp. 917-925, Aug. 2006.
14 T. Eerola, O. Lartillot, and P. Toiviainen, “Prediction of Multidimensional Emotional Ratings in Music from Audio Using Multivariate Regression Models,” in Proceedings of International Conference on Music Information Retrieval, Kobe, 2009, pp. 621-626.
15 Y. Yang, Y. Su, Y. Lin, and H. Chen, “Music Emotion Recognition: The Role of Individuality,” in Proceedings of the International Workshop on Human-centered Multimedia, Augsburg, 2007, pp. 13-22.
16 M. Soleymani, M. Caro, E. Schmidt, C. Sha, and Y. Yang, “1000 Songs for Emotional Analysis of Music,” in Proceedings of ACM International Workshop on Crowdsourcing for Multimedia, Barcelona, 2013, pp. 1-6.
17 Y. Yang, Y. Lin, Y. Su, and H. Chen, “A Regression Approach to Music Emotion Recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 16, no. 2, pp. 448-457, Feb. 2008.
18 Y. Yang and H. Chen, “Ranking-Based Emotion Recognition for Music Organization and Retrieval,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 4, pp. 762-774, Aug. 2010.
19 Y. Feng, Y. Zhuang, and Y. Pan, “Music Information Retrieval by Detecting Mood via Computational Media Aesthetics,” in Proceedings of the IEEE/WIC International Conference on Web Intelligence, Halifax, 2003, pp. 235-241.
20 T. Li and M. Ogihara, “Toward Intelligent Music Information Retrieval,” IEEE Transactions on Multimedia, vol. 8, no. 3, pp. 564-574, Jun. 2006.
21 M. Korhonen, D. Clausi, and M. Jernigan, “Modeling Emotional Content of Music Using System Identification,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 36, no. 3, pp. 588-599, Jun. 2006.
22 L. Lu, D. Liu, and H. Zhang, “Automatic Mood Detection and Tracking of Music Audio Signals,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, no. 1, pp. 5-18, Jan. 2006.