• Title/Summary/Keyword: Music emotion classification


Development of Music Classification of Light and Shade using VCM and Beat Tracking (VCM과 Beat Tracking을 이용한 음악의 명암 분류 기법 개발)

  • Park, Seung-Min;Park, Jun-Heong;Lee, Young-Hwan;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.6 / pp.884-889 / 2010
  • Music genre classification has recently been studied, but because experts use different criteria, it is difficult to derive consistent results, and whenever a new genre emerges the genre taxonomy must be redefined. Rather than separating music by genre, this paper proposes classifying it by emotional terms: music is categorized by the light (bright) or shade (dark) feeling it evokes in listeners. The proposed system applies VCM (Variance Considered Machines) to classify the light and shade of music using three musical attributes: beat, timbre, and note. Listener survey responses were used as training labels for the VCM, and the trained classifier's output was compared against the survey results. Notes were extracted in MATLAB by sampling the music at regular intervals, applying FFT-based frequency analysis, and averaging each band; the pitch heights of the extracted notes were quantified over the whole distribution. Timbre was quantified from differences in the cumulative frequency distribution over the entire frequency range. Applying VCM to these three attributes and comparing the result with the survey, the system separated music into light and shade classes with 95.4% accuracy.
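The following is a minimal sketch, in Python, of the feature pipeline the abstract outlines: segment the audio at regular intervals, take an FFT per segment, average the spectra, and derive rough note, timbre, and beat measures. VCM is not a publicly available library, so a standard SVM stands in for the final light/shade classifier; the specific feature formulas are assumptions, not the authors'.

```python
# Sketch of the abstract's pipeline: FFT per segment, averaged spectrum,
# then three coarse features (note height, timbre spread, beat periodicity).
import numpy as np
from scipy.io import wavfile
from sklearn.svm import SVC

def extract_features(path, segment_sec=1.0):
    sr, audio = wavfile.read(path)
    if audio.ndim > 1:                         # mix stereo down to mono
        audio = audio.mean(axis=1)
    seg_len = int(sr * segment_sec)
    spectra = []
    for start in range(0, len(audio) - seg_len, seg_len):
        seg = audio[start:start + seg_len]
        spectra.append(np.abs(np.fft.rfft(seg)))   # FFT magnitude per segment
    mean_spec = np.mean(spectra, axis=0)           # segment-averaged spectrum
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / sr)

    # "note" feature: spectral centroid as a proxy for overall pitch height
    note_height = np.sum(freqs * mean_spec) / np.sum(mean_spec)
    # "timbre" feature: spread of the cumulative spectral distribution
    cum = np.cumsum(mean_spec) / np.sum(mean_spec)
    timbre = freqs[np.searchsorted(cum, 0.85)] - freqs[np.searchsorted(cum, 0.15)]
    # "beat" feature: dominant lag of the segment-energy autocorrelation
    energy = np.array([s.sum() for s in spectra])
    energy -= energy.mean()
    autocorr = np.correlate(energy, energy, mode='full')[len(energy):]
    beat_lag = int(np.argmax(autocorr)) + 1        # periodicity in segments
    return [note_height, timbre, beat_lag]

# X = [extract_features(f) for f in tracks]; y = survey labels (0 = shade, 1 = light)
# clf = SVC(kernel='rbf').fit(X, y)   # SVM used here in place of VCM
```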

The Study of Bio Emotion Cognition follow Stress Index Number by Multiplex SVM Algorithm (다중 SVM 알고리즘을 이용한 스트레스 지수에 따른 생체 감성 인식에 관한 연구)

  • Kim, Tae-Yeun;Seo, Dae-Woong;Bae, Sang-Hyun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.5 no.1 / pp.45-51 / 2012
  • This paper presents a system that recognizes a user's emotions from biometric information (pulse, blood pressure, blood sugar, etc.) gathered through wireless sensors, referenced against previously collected stress-index data, and classifies matching colors and music. The system collects the sensor inputs, stores them in a database, and classifies emotions according to the stress index using a multiplex SVM (Support Vector Machine) algorithm. An experiment with the multiplex SVM algorithm on 2,000 data sets achieved approximately 87.7% accuracy.
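Below is an illustrative sketch of a multi-class SVM that maps biometric readings to stress-based emotion classes, in the spirit of the paper. The feature columns, class thresholds, and synthetic data are assumptions for illustration, not the authors' setup.

```python
# Multi-class (one-vs-rest) SVM over synthetic pulse / blood pressure / blood sugar readings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# columns: pulse (bpm), systolic blood pressure (mmHg), blood sugar (mg/dL)
X = np.column_stack([
    rng.normal(75, 12, 2000),
    rng.normal(120, 15, 2000),
    rng.normal(100, 20, 2000),
])
# hypothetical stress-index classes derived from the readings: 0=calm, 1=neutral, 2=stressed
stress = (X[:, 0] - 75) / 12 + (X[:, 1] - 120) / 15
y = np.digitize(stress, [-0.5, 0.5])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', decision_function_shape='ovr'))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```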

Greeting, Function, and Music: How Users Chat with Voice Assistants

  • Wang, Ji;Zhang, Han;Zhang, Cen;Xiao, Junjun;Lee, Seung Hee
    • Science of Emotion and Sensibility / v.23 no.2 / pp.61-74 / 2020
  • Voice user interfaces have become a commercially viable and widespread interaction mechanism with the development of voice assistants. Despite their popularity, the academic community does not fully understand what, when, and how users chat with them. Chatting with a voice assistant is crucial, as it defines how a user will seek the assistant's help in the future. This study aims to cover the essence and constructs of conversational AI, to develop a classification method for user utterances, and, most importantly, to understand what, when, and how Chinese users chat with voice assistants. We collected user utterances from the real conversation database of a commercial voice assistant, NetEase Sing, in China. We also identified utterance categories on the basis of previous studies and real usage conditions and annotated the utterances with 17 labels. We found that the three top reasons Chinese users turn to voice assistants are (1) greeting, (2) function, and (3) music. Chinese users like to interact with voice assistants at night, from 7 PM to 10 PM, and they are polite toward the assistants. The overall share of negative-feedback utterances is less than 6%, which is considerably low. These findings are useful for voice interaction design in intelligent hardware.
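As a minimal sketch of the kind of utterance classifier such a labeling scheme implies, the snippet below trains a bag-of-words linear model that assigns one of a few categories to a user utterance. The example categories (greeting/function/music) follow the abstract; the tiny training set and the choice of TF-IDF plus logistic regression are purely illustrative assumptions.

```python
# Toy utterance classifier: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "hello there", "good morning", "hi, how are you",
    "set an alarm for 7 am", "what's the weather today", "turn on the lights",
    "play some jazz", "sing me a song", "recommend a pop playlist",
]
labels = ["greeting"] * 3 + ["function"] * 3 + ["music"] * 3

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(utterances, labels)
print(clf.predict(["could you play something relaxing"]))  # likely 'music' given the shared vocabulary
```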

Sound Visualization based on Emotional Analysis of Musical Parameters (음악 구성요소의 감정 구조 분석에 기반 한 시각화 연구)

  • Kim, Hey-Ran;Song, Eun-Sung
    • The Journal of the Korea Contents Association / v.21 no.6 / pp.104-112 / 2021
  • In this study, emotional analysis was conducted based on the basic attribute data of music and an emotion model from psychology, and the result was applied to visualization rules in the formative arts. Most existing studies that use musical parameters have practical aims such as classifying, searching, and recommending music. This study instead focuses on enabling sound data to serve as material for creating artworks and for aesthetic expression. To study music visualization as an art form, a method must be designed that can incorporate human emotion, the defining characteristic of the arts. Therefore, a well-structured basic classification of musical attributes and a classification system for emotions were provided. The musical elements were then visualized through the shape, color, and animation of visual elements, reflecting emotion-based subdivided input parameters. This study can serve as basic data for artists exploring music visualization, and the analysis method and results for matching emotion-based musical components to visualizations will be a basis for automated visualization by artificial intelligence in the future.
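The sketch below illustrates the general idea of such a mapping: musical attributes are first reduced to a valence/arousal estimate and then translated into simple visual parameters (hue, brightness, shape, motion). All thresholds and mapping rules here are assumptions for illustration, not the authors' rules.

```python
# Illustrative musical-attribute -> emotion -> visual-parameter mapping.
import colorsys

def music_to_visual(tempo_bpm, mode, loudness_db):
    """mode: 'major' or 'minor'; loudness_db: roughly -60 (quiet) to 0 (loud)."""
    arousal = min(max((tempo_bpm - 60) / 120, 0.0), 1.0)        # faster -> higher arousal
    valence = 0.75 if mode == "major" else 0.25                  # major -> more positive
    valence = min(max(valence + loudness_db / 120, 0.0), 1.0)    # very quiet dims valence slightly

    hue = 0.66 - 0.66 * valence           # blue (negative) toward red/yellow (positive)
    brightness = 0.4 + 0.6 * arousal      # calm -> dim, excited -> bright
    r, g, b = colorsys.hsv_to_rgb(hue, 0.8, brightness)
    return {
        "color_rgb": (round(r, 2), round(g, 2), round(b, 2)),
        "shape_edges": 3 if arousal > 0.7 else 12,    # sharp vs. rounded forms
        "animation_speed": round(0.5 + 2.0 * arousal, 2),
    }

print(music_to_visual(tempo_bpm=140, mode="major", loudness_db=-10))
```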

Affective Responses to ASMR Using Multidimensional Scaling and Classification (다차원척도법과 분류분석을 이용한 ASMR에 대한 정서표상)

  • Kim, Hyeonjung;Kim, Jongwan
    • Science of Emotion and Sensibility / v.25 no.3 / pp.47-62 / 2022
  • Previous emotion studies revealed the two core affective dimensions of valence and arousal using affect-eliciting stimuli such as pictures, music, and videos. Autonomous sensory meridian response (ASMR), a type of stimulus that has emerged recently, produces a sense of psychological stability and calmness. We explored whether ASMR could be represented on the core affect dimensions. In this study, we used ASMR of three affective types (negative, neutral, and positive) as stimuli. Auditory ASMR videos were used in Study 1, while auditory and audiovisual ASMR videos were used in Study 2. Participants rated how they felt about ten adjectives on five-point Likert scales. Multidimensional scaling (MDS) and classification analyses were performed. The MDS results showed that the distinctions between auditory and audiovisual ASMR videos were well represented in the valence dimension. Additionally, the classification results showed that affective conditions could be predicted within and across individuals, for both within- and cross-modality analyses. Thus, we confirmed that affective representations for individuals could be predicted and that affective representations were consistent between individuals. These results suggest that ASMR videos, like other affect-eliciting videos, are located in the core affect dimension space, supporting the core affect theory (Russell, 1980).
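A minimal sketch of the MDS step follows: averaged Likert ratings per ASMR condition are projected into two dimensions, which in studies of this kind are interpreted as valence and arousal. The rating matrix here is synthetic and only stands in for the real data.

```python
# MDS over synthetic condition-by-adjective rating profiles.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
conditions = ["negative", "neutral", "positive"]
# synthetic mean ratings: 3 conditions x 10 affect adjectives (1-5 Likert scale)
base = np.array([2.0, 3.0, 4.0])[:, None] * np.ones((3, 10))
ratings = np.clip(base + rng.normal(0, 0.3, (3, 10)), 1, 5)

mds = MDS(n_components=2, dissimilarity='euclidean', random_state=0)
coords = mds.fit_transform(ratings)        # one 2-D point per condition
for name, (x, y) in zip(conditions, coords):
    print(f"{name:>8}: ({x:+.2f}, {y:+.2f})")
```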

Affective Representations of Basic Tastes and Intensity using Multivariate Analyses (다변량분석방법을 이용한 미각 자극의 기본 맛과 강도에 따른 정서표상 )

  • Chaery Park;Inik Kim;Jongwan Kim
    • Science of Emotion and Sensibility / v.26 no.2 / pp.39-52 / 2023
  • According to the core affect theory, affect consists of two independent dimensions, valence and arousal. Previous studies have found that various types of stimuli, such as pictures, videos, and music, map onto the core affect space; however, affect elicited by gustatory stimuli has not been explored sufficiently. This study investigated whether the affects elicited by tastes could be mapped onto the core affect space. Stimuli were selected based on two factors (taste type and intensity). Participants were presented with each stimulus, evaluated the taste, and rated their affective responses on taste and emotion scales. The data were analyzed using repeated-measures ANOVAs and multivariate analyses (multidimensional scaling and classification). The univariate analyses indicated that participants felt positive toward sweet stimuli but negative toward bitter and salty stimuli, and reported higher arousal at higher intensities. Multidimensional scaling revealed that taste stimuli are also represented on the core affect dimensions: in the first dimension, sweetness was represented as positive affect while bitter and salty tastes were represented as negative affect; in the second dimension, bitterness was represented as low arousal and sourness as high arousal. Classification analyses confirmed that taste could be identified consistently from the affective responses within and across participants. This study showed that everyday taste stimuli are also located on the core affect dimensions of valence and arousal.
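The following is a sketch of the cross-participant classification step: predicting taste category from affective ratings with leave-one-participant-out cross-validation. The data shapes, the synthetic rating profiles, and the linear SVM are assumptions; the paper's exact classifier and design are not reproduced here.

```python
# Cross-participant decoding of taste from synthetic affect ratings.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_participants, n_tastes, n_scales = 20, 4, 10   # e.g., sweet/bitter/salty/sour
X, y, groups = [], [], []
for p in range(n_participants):
    for t in range(n_tastes):
        # synthetic affect ratings: each taste shifts the rating profile slightly
        X.append(3 + 0.4 * t + rng.normal(0, 0.5, n_scales))
        y.append(t)
        groups.append(p)
X, y, groups = np.array(X), np.array(y), np.array(groups)

# train on 19 participants, test on the held-out participant, repeat
scores = cross_val_score(SVC(kernel='linear'), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print("mean cross-participant accuracy:", scores.mean().round(3))
```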

Music Classification Based On Emotion Utilizing Data Mining (데이터마이닝 기법을 이용한 감정 기반 음악 분류)

  • Jo, Wooyeon;Shon, Taeshik
    • Proceedings of the Korea Information Processing Society Conference / 2015.04a / pp.941-944 / 2015
  • Rapid advances in storage devices have enabled cloud services for individual users that previously could not be offered. Among these, music streaming and sharing services require a systematic classification scheme to accommodate the wide variety of music. Existing classification schemes are determined unilaterally by the composer or the uploader, which does not suit user-centered cloud services. To address this problem, this paper proposes a new classification scheme based on emotion. Data mining techniques were applied for automatic classification: the raw audio files were cut into fixed-size segments and preprocessed through feature extraction. For feature selection, clustering was used to select features with meaningful importance, and an SVM (Support Vector Machine) was trained on the selected features, both to validate the importance of the features and to perform emotion-based classification, whose results are reported.
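Below is a sketch of the pipeline the abstract outlines: cut raw audio into fixed-size segments, extract per-segment features, keep one representative feature per cluster of similar feature columns (a simple stand-in for the clustering-based feature selection), then classify emotion with an SVM. Every detail beyond that outline (the specific features, cluster count, and file/label names) is an assumption.

```python
# Segment -> feature extraction -> cluster-based feature selection -> SVM.
import numpy as np
from scipy.io import wavfile
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def segment_features(path, segment_sec=5.0):
    sr, audio = wavfile.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)
    seg_len = int(sr * segment_sec)
    rows = []
    for start in range(0, len(audio) - seg_len, seg_len):
        seg = audio[start:start + seg_len].astype(float)
        mag = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(seg_len, 1.0 / sr)
        rows.append([
            np.sqrt(np.mean(seg ** 2)),                    # RMS energy
            np.sum(freqs * mag) / np.sum(mag),             # spectral centroid
            np.mean(np.abs(np.diff(np.sign(seg))) > 0),    # zero-crossing rate
            freqs[np.argmax(mag)],                         # dominant frequency
        ])
    return np.array(rows)

def select_by_clustering(X, n_keep=2):
    # cluster the standardized feature columns and keep the column nearest each centroid
    cols = StandardScaler().fit_transform(X).T             # one row per feature column
    km = KMeans(n_clusters=n_keep, n_init=10, random_state=0).fit(cols)
    keep = {int(np.argmin(np.linalg.norm(cols - c, axis=1))) for c in km.cluster_centers_}
    return sorted(keep)

# X = np.vstack([segment_features(f) for f in labeled_files])   # hypothetical file list
# y = per-segment emotion labels; cols = select_by_clustering(X)
# clf = SVC().fit(X[:, cols], y)
```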