• Title/Summary/Keyword: music emotion recognition

SYMMER: A Systematic Approach to Multiple Musical Emotion Recognition

  • Lee, Jae-Sung;Jo, Jin-Hyuk;Lee, Jae-Joon;Kim, Dae-Won
    • International Journal of Fuzzy Logic and Intelligent Systems / v.11 no.2 / pp.124-128 / 2011
  • Music emotion recognition is currently one of the most attractive research areas in music information retrieval. To use emotion as a clue when searching for a particular piece of music, a system that recognizes emotion from the music itself is required, and its recognition accuracy is critical to user satisfaction. In this paper, we develop a new music emotion recognition system that employs a multilabel feature selector and a multilabel classifier. The performance of the proposed system is demonstrated on a novel musical emotion dataset.
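
The abstract names two components, a multilabel feature selector and a multilabel classifier. A minimal sketch of that pipeline in scikit-learn follows; the synthetic data, the mutual-information scoring, the number of kept features, and the binary-relevance classifier are illustrative assumptions, not the SYMMER implementation.

```python
# Sketch of a multilabel feature-selection + classification pipeline
# (assumed components, not the authors' SYMMER code).
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import hamming_loss

# Stand-in for acoustic features (X) and multiple emotion labels per clip (Y).
X, Y = make_multilabel_classification(n_samples=400, n_features=60,
                                      n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Multilabel feature selection: score each feature against each label with
# mutual information, then keep the features with the highest aggregate score.
scores = np.zeros(X_tr.shape[1])
for j in range(Y_tr.shape[1]):
    scores += mutual_info_classif(X_tr, Y_tr[:, j], random_state=0)
top = np.argsort(scores)[-20:]          # keep 20 features (assumed cutoff)

# Multilabel classifier via binary relevance (one classifier per emotion label).
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_tr[:, top], Y_tr)
print("Hamming loss:", hamming_loss(Y_te, clf.predict(X_te[:, top])))
```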

Ranking Tag Pairs for Music Recommendation Using Acoustic Similarity

  • Lee, Jaesung;Kim, Dae-Won
    • International Journal of Fuzzy Logic and Intelligent Systems / v.15 no.3 / pp.159-165 / 2015
  • The need for music emotion recognition has become apparent in many music information retrieval applications. In addition to the large pool of techniques already developed in machine learning and data mining, various emerging applications have led to a wealth of newly proposed techniques. In the music information retrieval community, many studies and applications have concentrated on tag-based music recommendation. The limitation of music emotion tags is the ambiguity caused by a single tag covering too many subcategories. To overcome this, multiple tags can be used simultaneously to specify music clips more precisely. In this paper, we propose a novel technique to rank the appropriate tag combinations based on the acoustic similarity of music clips.
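
One plausible reading of ranking tag combinations by acoustic similarity is sketched below: score a tag pair by how acoustically coherent the clips carrying both tags are. The toy features, the cosine measure, and the coherence score are assumptions for illustration, not the paper's method.

```python
# Rank tag pairs by the acoustic coherence of the clips tagged with both tags
# (mean pairwise cosine similarity); all data here is illustrative.
from itertools import combinations
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Toy acoustic feature vectors per clip, and the emotion tags on each clip.
features = {"clip1": [0.9, 0.1], "clip2": [0.8, 0.2],
            "clip3": [0.1, 0.9], "clip4": [0.2, 0.8]}
tags = {"clip1": {"happy", "calm"}, "clip2": {"happy", "calm"},
        "clip3": {"sad", "calm"},  "clip4": {"sad", "tender"}}

def pair_score(t1, t2):
    clips = [c for c in features if {t1, t2} <= tags[c]]
    if len(clips) < 2:
        return 0.0
    sim = cosine_similarity(np.array([features[c] for c in clips]))
    # Mean of the off-diagonal similarities between all clip pairs.
    n = len(clips)
    return (sim.sum() - n) / (n * (n - 1))

all_tags = sorted(set().union(*tags.values()))
ranking = sorted(((pair_score(a, b), a, b)
                  for a, b in combinations(all_tags, 2)), reverse=True)
for score, a, b in ranking:
    print(f"({a}, {b}): {score:.3f}")
```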

A Study on the Performance of Music Retrieval Based on the Emotion Recognition (감정 인식을 통한 음악 검색 성능 분석)

  • Seo, Jin Soo
    • The Journal of the Acoustical Society of Korea / v.34 no.3 / pp.247-255 / 2015
  • This paper presents a study on the performance of music search based on automatically recognized music-emotion labels. As with other media data, such as speech, image, and video, a song can evoke certain emotions in its listeners. When people look for songs to listen to, the emotions evoked by songs can be an important consideration. However, very little study has been done on the contribution of music-emotion labels to music search. In this paper, we utilize the three axes of human music perception (valence, activity, tension) and the five basic emotion labels (happiness, sadness, tenderness, anger, fear) in measuring music similarity for search. Experiments were conducted on both genre and singer datasets. The search accuracy of the proposed emotion-based music search reached up to 75% of that of the conventional feature-based search. By combining the proposed emotion-based method with the feature-based method, we achieved up to a 14% improvement in search accuracy.
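
The combination step lends itself to a short sketch: represent each song by an eight-dimensional emotion descriptor (the three axes plus the five labels), rank by distance, and mix that distance with a conventional acoustic-feature distance. The random data, the Euclidean metric, and the mixing weight alpha are assumptions, not the paper's exact measure.

```python
# Hybrid emotion/feature music search sketch (assumed representations).
import numpy as np

rng = np.random.default_rng(0)
emotion = rng.random((100, 8))    # [valence, activity, tension, 5 basic emotions]
acoustic = rng.random((100, 20))  # stand-in timbre/rhythm features

def search(query_idx, alpha=0.5, k=5):
    d_emo = np.linalg.norm(emotion - emotion[query_idx], axis=1)
    d_feat = np.linalg.norm(acoustic - acoustic[query_idx], axis=1)
    # Normalize each distance to [0, 1] before mixing so scales are comparable.
    d_emo /= d_emo.max()
    d_feat /= d_feat.max()
    combined = alpha * d_emo + (1 - alpha) * d_feat
    order = np.argsort(combined)
    return [i for i in order if i != query_idx][:k]

print("Top matches for song 0:", search(0))
```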

A Study on the Variation of Music Characteristics based on User Controlled Music Emotion (음악 감성의 사용자 조절에 따른 음악의 특성 변형에 관한 연구)

  • Nguyen, Van Loi;Xubin, Xubin;Kim, Donglim;Lim, Younghwan
    • The Journal of the Korea Contents Association / v.17 no.3 / pp.421-430 / 2017
  • In this paper, research results on changing music emotion are described. Our goal was to provide a method by which a human user can change the emotion of music, and then to transform the original music into music whose emotion matches the changed one. To this end, a method of changing the emotion of the playing music on a two-dimensional plane is described; the original music should then be transformed into music whose emotion equals the changed emotion. As a first step, a method of deciding which musical factors should be changed, and by how much, is presented. Finally, an experimental method of editing the sound with an audio editor to change the emotion is described. While there are many research results on recognizing music emotion, attempts to change it are rare, so this paper opens another direction of research in the music emotion field.
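
The step of deciding which musical factors to change, and by how much, can be illustrated as below. The mapping rules (arousal to tempo and loudness, valence to pitch and mode) and all coefficients are assumptions drawn from commonly cited associations, not the paper's actual rules.

```python
# Map a user's shift on the valence-arousal plane to edit parameters
# (illustrative proportional rules, not the paper's method).
def factor_changes(dv, da, tempo_bpm, gain_db):
    """dv, da: user's shift in valence/arousal, each in [-1, 1]."""
    return {
        "tempo_bpm": tempo_bpm * (1 + 0.3 * da),   # faster for higher arousal
        "gain_db": gain_db + 6 * da,               # louder for higher arousal
        "pitch_shift_semitones": round(2 * dv),    # brighter for higher valence
        "mode": "major" if dv > 0 else "minor",    # mode follows valence sign
    }

# Example: the user drags the point toward "excited" (valence +0.4, arousal +0.6).
print(factor_changes(dv=0.4, da=0.6, tempo_bpm=100, gain_db=-12))
```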

Enhancing Music Recommendation Systems Through Emotion Recognition and User Behavior Analysis

  • Qi Zhang
    • Journal of the Korea Society of Computer and Information / v.29 no.5 / pp.177-187 / 2024
  • Existing music recommendation systems do not sufficiently consider the discrepancy between the emotions intended by song lyrics and the emotions actually felt by users. In this study, we generate topic vectors for lyrics and user comments using the LDA model, and construct a user preference model by combining user behavior trajectories, reflecting time-decay effects and playback frequency, with statistical characteristics. Empirical analysis shows that the proposed model recommends music more accurately than existing models that rely solely on lyrics. This research presents a novel methodology for improving personalized music recommendation systems by integrating emotion recognition and user behavior analysis.
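
A minimal sketch of the two named ingredients follows: LDA topic vectors for lyrics versus user comments (whose gap proxies the intended-vs-felt discrepancy), and an exponentially time-decayed play-count preference. The toy corpora, the decay constant, and the gap measure are assumptions, not the study's model.

```python
# LDA topic vectors for lyrics vs. comments, plus a time-decayed preference
# weight (all data and constants are illustrative).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

lyrics = ["tears fall in the lonely rain", "dance all night under bright lights"]
comments = ["this song makes me cry", "so much fun, great party track"]

vec = CountVectorizer()
X = vec.fit_transform(lyrics + comments)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(X)            # rows: topic vectors per document
lyric_vecs, comment_vecs = topics[:2], topics[2:]
# Gap between intended (lyrics) and felt (comments) emotion for each song.
gap = np.linalg.norm(lyric_vecs - comment_vecs, axis=1)

# Time-decayed preference: recent, frequent plays count more (lambda assumed).
def preference(play_days_ago, play_counts, lam=0.1):
    w = np.exp(-lam * np.asarray(play_days_ago))
    return float(np.sum(w * np.asarray(play_counts)))

print("lyrics/comments topic gap per song:", gap)
print("user preference for song 0:", preference([1, 7, 30], [3, 2, 1]))
```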

Emotion-based music visualization using LED lighting control system (LED조명 시스템을 이용한 음악 감성 시각화에 대한 연구)

  • Nguyen, Van Loi;Kim, Donglim;Lim, Younghwan
    • Journal of Korea Game Society / v.17 no.3 / pp.45-52 / 2017
  • This paper proposes a new strategy for emotion-based music visualization: an emotional LED lighting control system intended to help audiences enhance the musical experience. In the system, emotion in music is recognized by a proposed algorithm using a dimensional approach. The algorithm uses music emotion variation detection to overcome some weaknesses of Thayer's model in detecting emotion in a one-second music segment. In addition, the IRI color model is combined with Thayer's model to determine LED light colors corresponding to 36 different music emotions, which are rendered on the LED lighting control system through colors and animations. The music emotion visualization achieved an accuracy of over 60%.
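
The mapping idea can be sketched as below: quantize the valence-arousal plane into a 6x6 grid (36 cells, matching the 36 emotions) and assign each cell a color, with hue following valence and brightness following arousal. The grid-size interpretation and the HSV constants are assumptions; the actual IRI-based color table is not reproduced here.

```python
# Valence-arousal grid to LED color (illustrative constants, not the
# paper's IRI color table).
import colorsys

def emotion_cell(valence, arousal, n=6):
    """valence, arousal in [-1, 1] -> grid cell indices in [0, n-1]."""
    col = min(int((valence + 1) / 2 * n), n - 1)
    row = min(int((arousal + 1) / 2 * n), n - 1)
    return row, col

def cell_to_rgb(row, col, n=6):
    hue = 0.66 * (1 - col / (n - 1))     # blue (low valence) -> red (high)
    value = 0.3 + 0.7 * row / (n - 1)    # dim (calm) -> bright (aroused)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, value)
    return int(r * 255), int(g * 255), int(b * 255)

row, col = emotion_cell(valence=0.7, arousal=0.5)
print("LED color for a happy, energetic segment:", cell_to_rgb(row, col))
```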

Multiple Regression-Based Music Emotion Classification Technique (다중 회귀 기반의 음악 감성 분류 기법)

  • Lee, Dong-Hyun;Park, Jung-Wook;Seo, Yeong-Seok
    • KIPS Transactions on Software and Data Engineering / v.7 no.6 / pp.239-248 / 2018
  • Many new technologies are being studied with the arrival of the fourth industrial revolution, and emotional intelligence is one of the popular issues. Researchers have focused on emotion analysis for music services based on artificial intelligence and pattern recognition, but they do not consider how to recommend suitable music according to the specific emotion of the user, which is a practical issue for music-related IoT applications. Thus, in this paper, we propose a probability-based music emotion classification technique that makes it possible to classify music with high precision over a range of emotions when developing music-related services. For user emotion recognition, one popular emotion model, Russell's model, is referenced. For the musical features, the average amplitude, peak-average, number of wavelengths, average wavelength, and beats per minute were extracted. Multiple regressions were derived from the collected data using regression analysis, and probability-based emotion classification was carried out. In two experiments, the emotion matching rate was 70.94% and 86.21% for the proposed technique, versus 66.83% and 76.85% for the survey participants. The proposed technique thus generates improved results for music classification.
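
A sketch of the pipeline follows, with assumed feature definitions (e.g., reading "number of wavelengths" as half the zero-crossing count and "peak-average" as the peak-minus-mean amplitude) and synthetic regression targets; the paper fits its regression coefficients to human-annotated data and maps them onto Russell's model.

```python
# Five named signal features -> multiple regression -> Russell quadrant
# (assumed feature definitions and synthetic labels).
import numpy as np
from sklearn.linear_model import LinearRegression

def signal_features(x, sr, bpm):
    """x: mono waveform; bpm supplied externally (beat tracking omitted)."""
    zero_cross = np.nonzero(np.diff(np.signbit(x).astype(np.int8)))[0]
    n_waves = len(zero_cross) / 2                   # "number of wavelengths"
    avg_wavelength = len(x) / n_waves / sr if n_waves else 0.0
    return np.array([
        np.mean(np.abs(x)),                         # average amplitude
        np.max(np.abs(x)) - np.mean(np.abs(x)),     # peak-average difference
        n_waves,
        avg_wavelength,
        bpm,
    ])

rng = np.random.default_rng(0)
sr = 22050
X = np.array([signal_features(rng.standard_normal(sr), sr, bpm)
              for bpm in rng.integers(60, 180, size=40)])
y = rng.uniform(-1, 1, size=(40, 2))     # synthetic (valence, arousal) labels

reg = LinearRegression().fit(X, y)       # one multiple regression per axis
v, a = reg.predict(X[:1])[0]
quadrant = {(True, True): "happy/excited", (False, True): "angry/tense",
            (False, False): "sad/depressed", (True, False): "calm/relaxed"}
print("Russell quadrant:", quadrant[(bool(v > 0), bool(a > 0))])
```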

A Design and Implementation of Music & Image Retrieval Recommendation System based on Emotion (감성기반 음악.이미지 검색 추천 시스템 설계 및 구현)

  • Kim, Tae-Yeun;Song, Byoung-Ho;Bae, Sang-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.73-79 / 2010
  • Emotional intelligence computing enables a system to process human emotion through learning and adaptation, and makes human-computer interaction more efficient. Music and images, appealing to hearing and sight, are consumed in a short time yet leave a lasting impression, so understanding and interpreting human emotion toward them is key to successful marketing. In this paper, we design a retrieval system that matches music and images to a user's emotion keyword (irritability, gloom, calmness, joy). The proposed system is defined over these four emotional states and uses an emotion ontology to retrieve normalized music and images. Sampling image feature information and measuring similarity yields the desired results, and paired correspondence analysis and factor analysis map the image emotion recognition information onto a single space. In experiments, the proposed system showed an 82.4% matching rate over the four emotional states.
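
The keyword-matching step can be illustrated with a toy index, shown below; the data structures and file names are assumptions, and the paper's emotion ontology and correspondence-analysis mapping are not reproduced.

```python
# Emotion-keyword retrieval of a matched music-image pair
# (toy index and features, not the paper's ontology).
import numpy as np

music_index = {"joy": "upbeat_pop.mp3", "gloom": "slow_ballad.mp3",
               "calmness": "ambient_piano.mp3", "irritability": "noise_rock.mp3"}
# Reference emotion vectors and per-image feature vectors (illustrative).
keyword_vecs = {"joy": [0.9, 0.8], "gloom": [0.1, 0.2],
                "calmness": [0.3, 0.9], "irritability": [0.8, 0.1]}
image_feats = {"sunny_beach.jpg": [0.85, 0.75], "rainy_street.jpg": [0.15, 0.25],
               "misty_lake.jpg": [0.35, 0.85], "traffic_jam.jpg": [0.75, 0.15]}

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(keyword):
    ref = keyword_vecs[keyword]
    image = max(image_feats, key=lambda img: cosine(image_feats[img], ref))
    return music_index[keyword], image

print(retrieve("joy"))   # -> ('upbeat_pop.mp3', 'sunny_beach.jpg')
```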

Music classification system through emotion recognition based on regression model of music signal and electroencephalogram features (음악신호와 뇌파 특징의 회귀 모델 기반 감정 인식을 통한 음악 분류 시스템)

  • Lee, Ju-Hwan;Kim, Jin-Young;Jeong, Dong-Ki;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.41 no.2 / pp.115-121 / 2022
  • In this paper, we propose a system that classifies music according to user emotion using electroencephalogram (EEG) features that appear when listening to music. In the proposed system, the relationship between the emotional EEG features extracted from EEG signals and the auditory features extracted from music signals is learned by a deep regression neural network. Based on this regression model, the system automatically generates EEG features mapped to the auditory characteristics of the input music and classifies the music by feeding these features to an attention-based deep neural network. The experimental results demonstrate the classification accuracy of the proposed automatic music classification framework.
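
The two-stage idea can be sketched compactly, as below; the layer sizes, the MLP regressor, and the single-head attention pooling are assumptions, not the paper's exact architecture.

```python
# Stage 1: regress auditory features -> EEG features; stage 2: attention-based
# pooling of the predicted EEG features for classification (assumed sizes).
import torch
import torch.nn as nn

AUDIO_DIM, EEG_DIM, N_CLASSES, T = 40, 16, 4, 30   # T: frames per clip

regressor = nn.Sequential(                  # auditory features -> EEG features
    nn.Linear(AUDIO_DIM, 64), nn.ReLU(), nn.Linear(64, EEG_DIM))

class AttnClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.attn = nn.Linear(EEG_DIM, 1)          # frame-level attention score
        self.head = nn.Linear(EEG_DIM, N_CLASSES)
    def forward(self, eeg_seq):                    # (batch, T, EEG_DIM)
        w = torch.softmax(self.attn(eeg_seq), dim=1)
        pooled = (w * eeg_seq).sum(dim=1)          # attention-weighted pooling
        return self.head(pooled)

audio = torch.randn(8, T, AUDIO_DIM)               # a batch of 8 music clips
eeg_pred = regressor(audio)                        # per-frame predicted EEG
logits = AttnClassifier()(eeg_pred)
print(logits.shape)                                # torch.Size([8, 4])
```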

Emotion Recognition in Children With Autism Spectrum Disorder: A Comparison of Musical and Visual Cues (음악 단서와 시각 단서 조건에 따른 학령기 자폐스펙트럼장애 아동과 일반아동의 정서 인식 비교)

  • Yoon, Yea-Un
    • Journal of Music and Human Behavior / v.19 no.1 / pp.1-20 / 2022
  • The purpose of this study was to evaluate how accurately children with autism spectrum disorder (ASD; n = 9) recognized four basic emotions (i.e., happiness, sadness, anger, and fear) following musical or visual cues. Their performance was compared to that of typically developing children (TD; n = 14). All of the participants were between the ages of 7 and 13 years. Four musical cues and four visual cues for each emotion were presented to evaluate the participants' ability to recognize the four basic emotions. The results indicated significant differences between the two groups for both the musical and visual cues. In particular, the ASD group recognized the four emotions significantly less accurately than the TD group. However, both groups recognized emotions more accurately following the musical cues than the visual cues. For both groups, recognition accuracy was greatest for happiness following the musical cues; for the visual cues, the ASD group exhibited the greatest recognition accuracy for anger. This initial study supports the idea that musical cues can facilitate emotion recognition in children with ASD. Further research is needed to improve our understanding of the mechanisms involved and the role sensory cues play in emotion recognition for children with ASD.