• Title/Summary/Keyword: Emotion-based music classification

Study of Music Classification Optimized Environment and Atmosphere for Intelligent Musical Fountain System (지능형 음악분수 시스템을 위한 환경 및 분위기에 최적화된 음악분류에 관한 연구)

  • Park, Jun-Heong;Park, Seung-Min;Lee, Young-Hwan;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.2 / pp.218-223 / 2011
  • Various studies have explored music classification by genre. Because audio professionals define genre boundaries differently, such classification rarely yields clear-cut results, and whenever a new genre emerges the classification criteria must be redefined. We therefore classify music by emotional adjectives rather than by genre. In a previous study, we classified music by light and shade. In this paper, we propose a music classification system based on emotional adjectives suited to searching by atmosphere, using three pairs of criteria: light/shade (from our previous study), intense/placid, and grandeur/trivial. Variance Considered Machines (VCM), an improved variant of the Support Vector Machine, was used as the classification algorithm, achieving 85% accuracy on a set of 525 songs.
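
As a rough illustration of this approach, the sketch below trains one binary classifier per adjective axis. VCM itself is not available in common libraries, so scikit-learn's SVC stands in for it, and the feature matrix and survey labels are placeholders rather than the paper's data.

```python
# Placeholder features and survey labels for 525 songs; one binary SVM
# (standing in for VCM) per emotional-adjective axis.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(525, 12))                 # hypothetical acoustic features
axes = ["light_shade", "intense_placid", "grandeur_trivial"]
labels = {a: rng.integers(0, 2, size=525) for a in axes}

for axis in axes:
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    acc = cross_val_score(clf, X, labels[axis], cv=5).mean()
    print(f"{axis}: cross-validated accuracy = {acc:.2f}")
```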

Detection of Music Mood for Context-aware Music Recommendation (상황인지 음악추천을 위한 음악 분위기 검출)

  • Lee, Jong-In;Yeo, Dong-Gyu;Kim, Byeong-Man
    • The KIPS Transactions: Part B / v.17B no.4 / pp.263-274 / 2010
  • To provide a context-aware music recommendation service, we first need to identify the music mood a user prefers in a given situation or context. Among the various characteristics of music, mood is closely related to human emotion. Based on this relationship, some researchers have studied music mood detection by manually selecting a representative segment of a piece and classifying its mood. Although such approaches perform well on music mood classification, the manual intervention makes them difficult to apply to new music, and detection is further complicated by the fact that mood usually varies over time. To cope with these problems, this paper presents an automatic method for classifying music mood. First, a whole piece is segmented into several groups with similar characteristics using structural information. Then the mood of each segment is detected, with each individual's mood preference modeled by regression based on Thayer's two-dimensional mood model. Experimental results show that the proposed method achieves 80% or higher accuracy.
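
A minimal sketch of the regression step, assuming precomputed segment features: ratings on Thayer's two dimensions are predicted with a linear model, and the mood is read off from the quadrant of the arousal-valence plane (the quadrant names are a common convention, not necessarily the paper's).

```python
# Regress segment features onto (arousal, valence), then name the mood
# by its quadrant in Thayer's two-dimensional model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 8))            # hypothetical segment features
y_train = rng.uniform(-1, 1, size=(200, 2))    # (arousal, valence) ratings
model = LinearRegression().fit(X_train, y_train)

def thayer_quadrant(features):
    """Map one segment's features to a quadrant of the arousal-valence plane."""
    arousal, valence = model.predict(features.reshape(1, -1))[0]
    if arousal >= 0:
        return "exuberance" if valence >= 0 else "anxiety"
    return "contentment" if valence >= 0 else "depression"

print(thayer_quadrant(rng.normal(size=8)))
```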

Automatic Emotion Classification of Music Signals Using MDCT-Driven Timbre and Tempo Features

  • Kim, Hyoung-Gook;Eom, Ki-Wan
    • The Journal of the Acoustical Society of Korea / v.25 no.2E / pp.74-78 / 2006
  • This paper proposes an effective method for classifying the emotions of music from its acoustic signals. Two feature sets, timbre and tempo, are extracted directly from the modified discrete cosine transform (MDCT) coefficients output by a partial MP3 (MPEG-1 Layer 3) decoder. The tempo feature extraction method is based on long-term modulation spectrum analysis. To effectively combine these two feature sets, which have different time resolutions, in an integrated system, a two-layer classifier based on the AdaBoost algorithm is used. The first layer employs the MDCT-driven timbre features; adding the MDCT-driven tempo features in the second layer improves classification precision dramatically.
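
The two-layer combination could look roughly like the following sketch, where the second AdaBoost layer stacks the tempo features with the first layer's class probabilities. The features here are placeholders; the paper extracts them from MDCT coefficients upstream.

```python
# First layer: AdaBoost on short-window timbre features. Second layer:
# AdaBoost on long-window tempo features stacked with layer-1 scores.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(2)
timbre = rng.normal(size=(300, 10))    # placeholder timbre features
tempo = rng.normal(size=(300, 4))      # placeholder modulation-spectrum tempo
y = rng.integers(0, 4, size=300)       # emotion class labels

layer1 = AdaBoostClassifier(n_estimators=50).fit(timbre, y)
stacked = np.hstack([tempo, layer1.predict_proba(timbre)])
layer2 = AdaBoostClassifier(n_estimators=50).fit(stacked, y)
print("training accuracy:", layer2.score(stacked, y))
```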

A Playlist Generation System based on Musical Preferences (사용자의 취향을 고려한 음악 재생 목록 생성 시스템)

  • Bang, Sun-Woo;Kim, Tae-Yeon;Jung, Hye-Wuk;Lee, Jee-Hyong;Kim, Yong-Se
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.3 / pp.337-342 / 2010
  • The growth of music resources has led to a parallel rise in the need to manage thousands of songs on user devices, so users tend to build playlists to organize their songs. However, manually selecting songs to create a playlist is a tedious task. This paper proposes an automatic playlist recommendation system that considers the user's context of use and preferences. The system consists of two subsystems: a mood and emotion classification system and a music recommendation system. Users choose just one seed song to reflect their context of use and preferences. The system then recommends songs before the current song ends in order to fill the user's playlist. Users can also remove unwanted songs from the recommended list, adapting the system's user preference model for the next recommendation process. The generated playlists exhibit well-defined music mood and emotion and contain songs that reflect the user's preferences.
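
A minimal sketch of the seed-song idea, assuming a precomputed per-song mood/emotion feature table: the playlist is filled with the seed's nearest neighbors, and removed songs are excluded from later recommendations.

```python
# Nearest-neighbour playlist around one seed song, skipping songs the
# user has removed; the feature table is a placeholder.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
song_features = rng.normal(size=(1000, 6))     # per-song mood/emotion vectors
index = NearestNeighbors(n_neighbors=20).fit(song_features)

def recommend(seed_id, removed, k=10):
    """Return up to k songs similar to the seed, excluding removed ones."""
    _, ids = index.kneighbors(song_features[seed_id:seed_id + 1])
    return [i for i in ids[0] if i != seed_id and i not in removed][:k]

removed = set()
playlist = recommend(seed_id=42, removed=removed)
removed.add(playlist[0])                       # the user rejects one song
print(recommend(seed_id=42, removed=removed))
```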

Salient Region Detection Algorithm for Music Video Browsing (뮤직비디오 브라우징을 위한 중요 구간 검출 알고리즘)

  • Kim, Hyoung-Gook;Shin, Dong
    • The Journal of the Acoustical Society of Korea / v.28 no.2 / pp.112-118 / 2009
  • This paper proposes a rapid salient-region detection algorithm for a music video browsing system that can be applied to mobile devices and digital video recorders (DVRs). The input music video is decomposed into music and video tracks. For the music track, the music highlight, including the chorus, is detected by structure analysis using energy-based peak position detection, and the music signal is automatically classified into one of the predefined emotional classes using emotional models trained with an SVM-AdaBoost learning algorithm. For the video track, face scenes showing the singer or actors are detected with a boosted cascade of simple features. Finally, the salient region is generated by aligning the boundaries of the music highlight and the face scene. Users first select music videos on their mobile device or DVR using the emotion information, and can then quickly browse the 30-second salient region produced by the proposed algorithm. A mean opinion score (MOS) test on a database of 200 music videos compares the detected salient region with a predefined manual segment. The results show that the salient region detected by the proposed method performs much better than the predefined manual segment obtained without audiovisual processing.
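
The energy-based highlight step might be sketched as below; face-scene detection, the emotion models, and boundary alignment are omitted, and the framing parameters are assumptions.

```python
# Smooth frame energies, take the strongest peak as the highlight centre,
# and cut a 30-second window around it.
import numpy as np
from scipy.signal import find_peaks

def highlight_window(samples, sr, win_s=30.0, frame_s=0.5):
    frame = int(sr * frame_s)
    n = len(samples) // frame
    energy = (samples[:n * frame].reshape(n, frame) ** 2).sum(axis=1)
    smooth = np.convolve(energy, np.ones(8) / 8, mode="same")
    peaks, _ = find_peaks(smooth)
    center = peaks[np.argmax(smooth[peaks])] * frame_s if len(peaks) else 0.0
    start = max(0.0, center - win_s / 2)
    return start, start + win_s                # highlight bounds in seconds

sr = 22050
audio = np.random.default_rng(4).normal(size=sr * 180)  # stand-in signal
print(highlight_window(audio, sr))
```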

A New Tempo Feature Extraction Based on Modulation Spectrum Analysis for Music Information Retrieval Tasks

  • Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.6 no.2 / pp.95-106 / 2007
  • This paper proposes an effective tempo feature extraction method for music information retrieval. Tempo information is modeled by narrow-band temporal modulation components, which are decomposed into a modulation spectrum via joint frequency analysis. In implementation, the tempo feature is extracted directly from the modified discrete cosine transform (MDCT) coefficients output by a partial MP3 (MPEG-1 Layer 3) decoder. Different features are then extracted from the amplitudes of the modulation spectrum and applied to different music information retrieval tasks. Logarithmic-scale modulation frequency coefficients are employed in automatic music emotion classification and music genre classification, significantly improving the classification precision of both systems. Bit vectors derived from the adaptive modulation spectrum are used in an audio fingerprinting task, where they prove highly robust. The experimental results across these tasks validate the effectiveness of the proposed tempo feature.
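
A minimal sketch of a modulation spectrum, with an STFT standing in for the paper's MDCT front end: each spectral band's temporal envelope is Fourier-analyzed a second time, and tempo appears as a peak at a few hertz.

```python
# Magnitude spectrogram (first FFT), then an FFT over time per band
# (second FFT) gives the modulation spectrum; tempo peaks near a few Hz.
import numpy as np

def modulation_spectrum(signal, sr, frame=1024, hop=512):
    n = (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop:i * hop + frame] for i in range(n)])
    spec = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1)).T
    mod = np.abs(np.fft.rfft(spec - spec.mean(axis=1, keepdims=True), axis=1))
    return mod, np.fft.rfftfreq(spec.shape[1], d=hop / sr)

sr = 22050
rng = np.random.default_rng(5)
t = np.arange(sr * 10) / sr
env = 1 + 0.8 * np.sin(2 * np.pi * 2.0 * t)    # 2 Hz amplitude envelope
mod, freqs = modulation_spectrum(rng.normal(size=sr * 10) * env, sr)
peak = np.argmax(mod[:, 1:].sum(axis=0)) + 1   # skip the DC bin
print(f"dominant modulation: {freqs[peak]:.1f} Hz")
```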

An Auto Playlist Generation System with One Seed Song

  • Bang, Sung-Woo;Jung, Hye-Wuk;Kim, Jae-Kwang;Lee, Jee-Hyong
    • International Journal of Fuzzy Logic and Intelligent Systems / v.10 no.1 / pp.19-24 / 2010
  • The growth of music resources has led to a parallel rise in the need to manage thousands of songs on user devices, so users tend to build playlists to organize their songs. However, manually selecting songs to create a playlist is troublesome. This paper proposes an automatic playlist generation system that considers the user's context of use and preferences. The system comprises two subsystems: 1) a mood and emotion classification system and 2) a music recommendation system. First, users choose just one seed song to reflect their context of use. The system then recommends a candidate song list before the current song ends in order to fill the user's playlist. Users can also remove unwanted songs from the recommended list, adapting the system's user preference model for the next song list. The generated playlists exhibit well-defined music mood and emotion and contain songs that reflect the current user's preferences.
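
The feedback step is not specified in detail here; one simple realization is a Rocchio-style update (an assumption, not the paper's exact model) that shifts the preference vector away from rejected songs before the next candidate list is built.

```python
# Rocchio-style negative feedback: shift the preference vector away
# from the mean of the rejected songs' mood vectors.
import numpy as np

def update_preference(pref, removed_vecs, gamma=0.25):
    """Move the preference vector away from rejected songs."""
    if not removed_vecs:
        return pref
    return pref - gamma * np.mean(removed_vecs, axis=0)

pref = np.array([0.6, 0.2, -0.1, 0.4])         # seed song's mood vector
rejected = [np.array([0.9, 0.8, 0.0, 0.1])]    # song the user removed
print(update_preference(pref, rejected))
```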

Development of Music Classification of Light and Shade using VCM and Beat Tracking (VCM과 Beat Tracking을 이용한 음악의 명암 분류 기법 개발)

  • Park, Seung-Min;Park, Jun-Heong;Lee, Young-Hwan;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.6 / pp.884-889 / 2010
  • Music genre classification has been studied actively in recent years. However, because experts use different classification criteria, it is difficult to derive accurate results, and whenever a new genre emerges the genre taxonomy has to be redefined. For search purposes, music should therefore be classified by emotional words rather than by genre. In this paper, we categorize music by the light and shade (brightness and darkness) people feel in it. The proposed system classifies the light and shade of music by applying VCM (Variance Considered Machines). Three kinds of musical attributes are used: beat, timbre, and note. Labels gathered through surveys, together with these attributes, were used to train the VCM, and the VCM's classifications were then compared with the survey results. Notes were extracted in MATLAB by sampling the music at regular intervals, analyzing each segment's frequencies with the FFT, and taking the per-segment average as the representative note element, quantifying the pitch-height distribution of the whole piece. Timbre was quantified from differences in the cumulative frequency distribution over the entire frequency range. Applying the VCM to these three attributes and comparing the experimental results with the survey showed that the two light/shade classes were separated with 95.4% probability.
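
The note-extraction procedure described above (in MATLAB) could be sketched in Python as follows, with the segment length as an assumed parameter: each fixed-length segment is FFT-analyzed and the mean dominant frequency serves as the representative pitch-height value.

```python
# Cut the signal into fixed-length segments, FFT each one, and average
# the dominant frequencies as a pitch-height ("note") feature.
import numpy as np

def note_feature(signal, sr, seg_s=0.1):
    seg = int(sr * seg_s)
    freqs = np.fft.rfftfreq(seg, d=1.0 / sr)
    dominant = []
    for i in range(len(signal) // seg):
        spectrum = np.abs(np.fft.rfft(signal[i * seg:(i + 1) * seg]))
        dominant.append(freqs[np.argmax(spectrum)])
    return float(np.mean(dominant))            # representative pitch height

sr = 22050
tone = np.sin(2 * np.pi * 440.0 * np.arange(sr * 2) / sr)  # A4 test tone
print(note_feature(tone, sr))                  # approximately 440 Hz
```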

Sound Visualization based on Emotional Analysis of Musical Parameters (음악 구성요소의 감정 구조 분석에 기반 한 시각화 연구)

  • Kim, Hey-Ran;Song, Eun-Sung
    • The Journal of the Korea Contents Association / v.21 no.6 / pp.104-112 / 2021
  • In this study, emotional analysis was conducted based on the basic attribute data of music and an emotional model from psychology, and the results were applied to visualization rules in the formative arts. Most existing studies using musical parameters have had more practical aims, namely classifying, searching, and recommending music. This study instead focuses on enabling sound data to serve as material for creating artworks and for aesthetic expression. To study music visualization as an art form, the method must be designed to incorporate human emotion, which is characteristic of the arts themselves. Accordingly, a well-structured basic classification of musical attributes and a classification system for emotions were provided. The musical elements were then visualized through the shape, color, and animation of visual elements, reflecting subdivided input parameters based on emotion. This study can serve as basic data for artists exploring music visualization, and the analysis method and resulting works for matching emotion-based musical components to visualizations can form a basis for future automated visualization by artificial intelligence.
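
One hypothetical reading of such visualization rules, purely as an illustration: a (valence, arousal) pair derived from musical attributes drives hue, saturation, and animation parameters. The specific mappings below are invented for the sketch, not taken from the paper.

```python
# Invented mapping for illustration: valence controls hue (warm vs. cool),
# arousal controls saturation, edge style, and animation speed.
import colorsys

def emotion_to_visual(valence, arousal):
    """Map valence/arousal in [-1, 1] to simple visual parameters."""
    hue = 0.08 + 0.5 * (1 - (valence + 1) / 2)
    saturation = 0.4 + 0.6 * (arousal + 1) / 2
    r, g, b = colorsys.hsv_to_rgb(hue, saturation, 0.9)
    return {
        "rgb": (round(r, 2), round(g, 2), round(b, 2)),
        "edges": "sharp" if arousal > 0 else "rounded",
        "speed": 0.2 + 0.8 * (arousal + 1) / 2,
    }

print(emotion_to_visual(valence=0.7, arousal=0.5))    # bright, fast, sharp
print(emotion_to_visual(valence=-0.6, arousal=-0.4))  # cool, slow, rounded
```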

Affective Representations of Basic Tastes and Intensity using Multivariate Analyses (다변량분석방법을 이용한 미각 자극의 기본 맛과 강도에 따른 정서표상)

  • Chaery Park;Inik Kim;Jongwan Kim
    • Science of Emotion and Sensibility / v.26 no.2 / pp.39-52 / 2023
  • According to the core affect theory, affect consists of two independent dimensions: valence and arousal. Previous studies have found that various types of stimuli, such as pictures, videos, and music, map onto the core affect space, but affect elicited by gustatory stimuli has not been explored sufficiently. This study investigated whether the affects elicited by tastes can be mapped onto the core affect space. Stimuli were selected based on two factors, taste type and intensity. Participants were presented with each stimulus, evaluated the taste, and rated their affective responses on taste and emotion scales. The data were analyzed with repeated-measures ANOVAs and multivariate analyses (multidimensional scaling and classification). The univariate analyses indicated that participants felt positive about sweet stimuli but negative about bitter and salty ones, and reported higher arousal at higher intensities. Multidimensional scaling revealed that taste stimuli are likewise represented on the core affect dimensions: on the first dimension, sweetness was represented as positive affect while bitter and salty tastes were represented as negative affect; on the second dimension, bitterness was represented as low arousal and sourness as high arousal. Classification analyses confirmed that tastes were identified consistently from the affective responses, both within and across participants. This study shows that the taste stimuli of daily life are also located on the core affect dimensions of valence and arousal.
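
The two multivariate analyses can be sketched as follows with placeholder data: metric MDS embeds the mean rating profile of each taste into a two-dimensional space, and a cross-validated classifier tests whether tastes are identifiable from affective responses alone.

```python
# MDS embeds each taste's mean rating profile into two dimensions;
# a cross-validated SVM tests decoding of taste from affective ratings.
import numpy as np
from sklearn.manifold import MDS
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(6)
tastes = ["sweet", "salty", "sour", "bitter"]
ratings = rng.normal(size=(120, 10))           # 30 trials x 10 affect scales
labels = np.repeat(np.arange(4), 30)

profiles = np.stack([ratings[labels == i].mean(axis=0) for i in range(4)])
coords = MDS(n_components=2, random_state=0).fit_transform(profiles)
for taste, (x, y) in zip(tastes, coords):
    print(f"{taste}: ({x:.2f}, {y:.2f})")      # position in affect-like space

print("decoding accuracy:",
      cross_val_score(SVC(), ratings, labels, cv=5).mean())
```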