• Title/Summary/Keyword: Music emotion classification

A Selection of Optimal EEG Channel for Emotion Analysis According to Music Listening using Stochastic Variables (확률변수를 이용한 음악에 따른 감정분석에의 최적 EEG 채널 선택)

  • Byun, Sung-Woo;Lee, So-Min;Lee, Seok-Pil
    • The Transactions of The Korean Institute of Electrical Engineers, v.62 no.11, pp.1598-1603, 2013
  • Recently, research on the relationship between emotional state and musical stimuli has been increasing. Many previous works use data from all extracted channels for pattern classification, which leads to high computational complexity and inaccuracy. This paper proposes a method for selecting the optimal EEG channels that most efficiently reflect the emotional state during music listening, by analyzing stochastic feature vectors. Reducing the number of channels to process makes EEG pattern classification considerably simpler.
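
The paper's exact stochastic selection criterion isn't given in the abstract; as a rough illustration of the idea of ranking channels so only the most discriminative ones are kept, here is a minimal sketch using a Fisher-style score on synthetic EEG data (the array shapes, the log-power feature, and the top-k cutoff are all assumptions):

```python
import numpy as np

# Hypothetical data: eeg is (trials, channels, samples), labels are binary
# emotion classes per trial. Real data would come from a music-listening study.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((40, 32, 512))
labels = rng.integers(0, 2, size=40)

# Per-trial, per-channel feature: log signal power.
power = np.log(np.mean(eeg ** 2, axis=2))        # (trials, channels)

# Fisher-style score per channel: between-class separation / within-class spread.
mu0, mu1 = power[labels == 0].mean(0), power[labels == 1].mean(0)
v0, v1 = power[labels == 0].var(0), power[labels == 1].var(0)
fisher = (mu0 - mu1) ** 2 / (v0 + v1 + 1e-12)

# Keep only the top-k channels, shrinking the dataset the classifier must process.
k = 4
best_channels = np.argsort(fisher)[::-1][:k]
print("selected channels:", best_channels)
```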

Parting Lyrics Emotion Classification using Word2Vec and LSTM (Word2Vec과 LSTM을 활용한 이별 가사 감정 분류)

  • Lim, Myung Jin;Park, Won Ho;Shin, Ju Hyun
    • Smart Media Journal, v.9 no.3, pp.90-97, 2020
  • With the development of the Internet and smartphones, digital sound sources are easily accessible, and interest in music search and recommendation is increasing accordingly. One approach to music recommendation classifies genre or emotion from melodic attributes such as pitch, tempo, and beat. However, lyrics have become one of the principal means of expressing human emotion in music, so emotion classification based on lyrics is also needed. This paper therefore analyzes the emotions of parting (farewell) lyrics in order to subdivide farewell emotions on the basis of the lyrics themselves. After constructing an emotion dictionary by vectorizing the similarity between words appearing in parting lyrics through Word2Vec training, we propose a method that classifies parting lyrics by similar emotions using an LSTM trained on the lyrics.
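
As a minimal sketch of the described pipeline, the following trains Word2Vec embeddings on a toy lyrics corpus and feeds the embedded sequences to a small LSTM classifier; the corpus, labels, and all hyperparameters are placeholders, not the paper's settings:

```python
import numpy as np
from gensim.models import Word2Vec
from tensorflow.keras import layers, models

# Toy corpus of tokenized lyric lines with hypothetical emotion labels.
lyrics = [["i", "miss", "you"], ["we", "said", "goodbye"], ["let", "you", "go"]]
labels = np.array([0, 1, 1])  # e.g., 0 = longing, 1 = acceptance

# 1) Learn word vectors from the lyrics corpus.
w2v = Word2Vec(sentences=lyrics, vector_size=50, window=3, min_count=1, epochs=50)

# 2) Turn each lyric line into a fixed-length sequence of word vectors.
max_len = 5
def embed(line):
    vecs = [w2v.wv[w] for w in line][:max_len]
    vecs += [np.zeros(50)] * (max_len - len(vecs))  # zero-pad short lines
    return np.stack(vecs)

X = np.stack([embed(l) for l in lyrics])            # (n, max_len, 50)

# 3) LSTM classifier over the embedded sequences.
model = models.Sequential([
    layers.Input(shape=(max_len, 50)),
    layers.LSTM(32),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, labels, epochs=10, verbose=0)
```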

Study of Music Classification Optimized Environment and Atmosphere for Intelligent Musical Fountain System (지능형 음악분수 시스템을 위한 환경 및 분위기에 최적화된 음악분류에 관한 연구)

  • Park, Jun-Heong;Park, Seung-Min;Lee, Young-Hwan;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems, v.21 no.2, pp.218-223, 2011
  • Various studies have explored music classification by genre. Because sound professionals define genre criteria differently from one another, such classification rarely yields clear-cut results, and whenever a new genre appears the criteria must be laboriously revised. We therefore classify music by emotional adjectives rather than by genre. In a previous study we classified music by light and shade; in this paper we propose a music classification system based on emotional adjectives suited to searching by atmosphere, with three adjective pairs as classification criteria: light and shade (from the previous study), intense and placid, and grandeur and trivial. Variance Considered Machines, an improved version of the Support Vector Machine, served as the classification algorithm and achieved 85% accuracy on a set of 525 songs.
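
Variance Considered Machines are not available in common libraries, so the sketch below substitutes a standard SVM for a single adjective-pair classifier; the feature matrix is a random placeholder:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One adjective-pair classifier (e.g., "light" vs. "shade"). Features stand in
# for whatever acoustic descriptors the real system extracts per song.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 20))   # hypothetical per-song features
y = rng.integers(0, 2, size=200)     # 0 = shade, 1 = light

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))

# The three pairs (light/shade, intense/placid, grandeur/trivial) would each
# be trained the same way as independent binary classifiers.
```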

Detection of Music Mood for Context-aware Music Recommendation (상황인지 음악추천을 위한 음악 분위기 검출)

  • Lee, Jong-In;Yeo, Dong-Gyu;Kim, Byeong-Man
    • The KIPS Transactions: Part B, v.17B no.4, pp.263-274, 2010
  • To provide a context-aware music recommendation service, we first need to determine the music mood a user prefers in a given situation or context. Among the various characteristics of music, mood is closely related to human emotion. Based on this relationship, some researchers have studied music mood detection by manually selecting a representative segment of a piece and classifying its mood. Although such approaches classify mood well, the manual intervention makes them difficult to apply to new music, and detection is further complicated because mood usually varies over time within a piece. To cope with these problems, this paper presents an automatic music mood classification method. First, a whole piece is segmented into several groups with similar characteristics using structural information. The mood of each segment is then detected, with each individual's mood preference modeled by regression on Thayer's two-dimensional mood model. Experimental results show that the proposed method achieves 80% or higher accuracy.
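
A minimal sketch of the regression step on Thayer's arousal-valence plane, with hypothetical per-segment features and annotations standing in for the paper's data:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Regress segment features onto (arousal, valence), then read off the quadrant.
# Training features and targets here are random placeholders.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((300, 12))        # per-segment audio features
y_train = rng.uniform(-1, 1, size=(300, 2))     # annotated (arousal, valence)

reg = Ridge().fit(X_train, y_train)

segment_features = rng.standard_normal((4, 12)) # segments of one new piece
av = reg.predict(segment_features)              # predicted (arousal, valence)

# Conventional quadrant names on Thayer's plane.
quadrants = {(True, True): "exuberant", (True, False): "anxious",
             (False, True): "content", (False, False): "depressed"}
for arousal, valence in av:
    print(quadrants[(arousal > 0, valence > 0)])
```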

Music player using emotion classification of facial expressions (얼굴표정을 통한 감정 분류 및 음악재생 프로그램)

  • Yoon, Kyung-Seob;Lee, SangWon
    • Proceedings of the Korean Society of Computer Information Conference, 2019.01a, pp.243-246, 2019
  • This paper proposes a facial-expression-based music player built around the themes of emotion, healing, and machine learning: it recognizes the user's facial expression through deep learning and plays music based on that expression. The program is a deep-learning-based music application in which a CNN model, a strong performer in image recognition, is trained to recognize facial expressions; the trained model then infers the user's emotion from webcam images of the user's face. The system then plays a song matched to the detected emotion so as to further amplify it, thereby helping to heal and soothe the user's emotional state.
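
A sketch of the inference loop the abstract describes, assuming a pre-trained expression CNN is available; the model file name, label order, input size, and emotion-to-playlist mapping are all hypothetical:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "happy", "neutral", "sad"]          # assumed label order
PLAYLISTS = {"angry": "calm.m3u", "happy": "upbeat.m3u",
             "neutral": "ambient.m3u", "sad": "soothing.m3u"}

model = load_model("expression_cnn.h5")                  # hypothetical model file

# Grab one webcam frame, classify the expression, pick the matching playlist.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face = cv2.resize(gray, (48, 48)).astype("float32") / 255.0
    probs = model.predict(face[None, :, :, None])        # (1, 48, 48, 1) input
    emotion = EMOTIONS[int(np.argmax(probs))]
    print("detected:", emotion, "->", PLAYLISTS[emotion])
```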

A New Tempo Feature Extraction Based on Modulation Spectrum Analysis for Music Information Retrieval Tasks

  • Kim, Hyoung-Gook
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.6 no.2, pp.95-106, 2007
  • This paper proposes an effective tempo feature extraction method for music information retrieval. Tempo information is modeled by narrow-band temporal modulation components, which are decomposed into a modulation spectrum via joint frequency analysis. In implementation, the tempo feature is extracted directly from the modified discrete cosine transform (MDCT) coefficients output by a partial MP3 (MPEG-1 Layer 3) decoder. Different features are then derived from the modulation-spectrum amplitudes and applied to different retrieval tasks: logarithmic-scale modulation frequency coefficients are employed in automatic music emotion classification and music genre classification, significantly improving precision in both systems, while bit vectors derived from the adaptive modulation spectrum are used for audio fingerprinting, where they prove highly robust. The experimental results across these tasks validate the effectiveness of the proposed tempo feature.
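
The paper derives the feature from partial-MP3 MDCT coefficients; the sketch below substitutes a plain STFT to show the core idea of reading tempo off the modulation spectrum, with illustrative window sizes:

```python
import numpy as np

def modulation_spectrum(x, sr, frame=1024, hop=512):
    # 1) Short-time magnitude spectrogram -> per-band temporal envelopes.
    n_frames = (len(x) - frame) // hop + 1
    spec = np.abs(np.stack([
        np.fft.rfft(x[i * hop:i * hop + frame] * np.hanning(frame))
        for i in range(n_frames)
    ]))                                      # (frames, bins)

    # 2) FFT along time in each band: energy vs. modulation frequency.
    env = spec - spec.mean(axis=0)           # remove per-band DC
    mod = np.abs(np.fft.rfft(env, axis=0))   # (mod_freqs, bins)

    # 3) Collapse across bands; tempo appears as low-Hz modulation peaks.
    mod_freqs = np.fft.rfftfreq(env.shape[0], d=hop / sr)
    return mod_freqs, np.log1p(mod.mean(axis=1))  # log-scale coefficients

# Synthetic test tone pulsed at 2 Hz, i.e., roughly a 120 BPM beat.
sr = 22050
t = np.arange(sr * 10) / sr
x = np.sin(2 * np.pi * 440 * t) * (1 + np.sign(np.sin(2 * np.pi * 2 * t)))
freqs, coeffs = modulation_spectrum(x, sr)
print("peak modulation freq (Hz):", freqs[1:][np.argmax(coeffs[1:])])
```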

A Playlist Generation System based on Musical Preferences (사용자의 취향을 고려한 음악 재생 목록 생성 시스템)

  • Bang, Sun-Woo;Kim, Tae-Yeon;Jung, Hye-Wuk;Lee, Jee-Hyong;Kim, Yong-Se
    • Journal of the Korean Institute of Intelligent Systems, v.20 no.3, pp.337-342, 2010
  • The rise of music resources has led to a parallel rise in the need to manage thousands of songs on user devices, so users tend to build playlists to manage their songs. Manually selecting songs to create a playlist, however, is a tedious task. This paper proposes an automatic playlist recommendation system that considers the user's context of use and preferences. The system comprises two separate subsystems: a mood and emotion classification system and a music recommendation system. Users need to choose only one seed song to reflect their context of use and preference. The system recommends songs before the current song ends in order to fill the playlist, and users can remove unsatisfactory songs from the recommended list, adapting the system's model of their preferences for the next recommendation pass. The generated playlists show well-defined music mood and emotion and contain songs that reflect the user's preferences.
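
A minimal sketch of the seed-song step, assuming songs are already mapped into a mood/emotion feature space by the classification subsystem; the features and the Euclidean distance are placeholders for the paper's models:

```python
import numpy as np

# Rank the library by distance to the seed song's mood/emotion features and
# queue the nearest songs before the current one ends.
rng = np.random.default_rng(0)
library = rng.standard_normal((500, 8))    # hypothetical per-song features
seed = 42                                  # the one song the user picks

dist = np.linalg.norm(library - library[seed], axis=1)
queue = [int(i) for i in np.argsort(dist) if i != seed][:10]
print("next up:", queue)
```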

An Auto Playlist Generation System with One Seed Song

  • Bang, Sung-Woo;Jung, Hye-Wuk;Kim, Jae-Kwang;Lee, Jee-Hyong
    • International Journal of Fuzzy Logic and Intelligent Systems, v.10 no.1, pp.19-24, 2010
  • The rise of music resources has led to a parallel rise in the need to manage thousands of songs on user devices, so users tend to build playlists to manage their songs. Manually selecting songs to create a playlist, however, is troublesome work. This paper proposes an automatic playlist generation system that considers the user's context of use and preferences. The system consists of two separate subsystems: 1) a mood and emotion classification system and 2) a music recommendation system. First, the user chooses one seed song to reflect their context of use. The system then recommends a candidate song list before the current song ends in order to fill the playlist. The user can also remove unsatisfactory songs from the recommended list, adapting the system's user preference model for the next song list. The generated playlists show well-defined music mood and emotion and contain songs that reflect the current user's preferences.
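
Complementing the seed-ranking sketch above, this one illustrates the feedback step: removing a song nudges a preference point away from it before re-ranking. The update rule and its step size are assumptions, not the paper's preference model:

```python
import numpy as np

rng = np.random.default_rng(1)
library = rng.standard_normal((500, 8))    # hypothetical per-song features
pref = library[42].copy()                  # preference starts at the seed song

rejected = 17                              # song the user removed from the list
pref -= 0.2 * (library[rejected] - pref)   # move preference away from it

dist = np.linalg.norm(library - pref, axis=1)
playlist = [int(i) for i in np.argsort(dist) if i not in (42, rejected)][:10]
print("adapted playlist:", playlist)
```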

Automatic Emotion Classification of Music Signals Using MDCT-Driven Timbre and Tempo Features

  • Kim, Hyoung-Gook;Eom, Ki-Wan
    • The Journal of the Acoustical Society of Korea, v.25 no.2E, pp.74-78, 2006
  • This paper proposes an effective method for classifying the emotion of music from its acoustic signal. Two feature sets, timbre and tempo, are extracted directly from the modified discrete cosine transform (MDCT) coefficients output by a partial MP3 (MPEG-1 Layer 3) decoder; the tempo feature extraction is based on long-term modulation spectrum analysis. To combine these two feature sets, which have different time resolutions, in an integrated system, a two-layer classifier based on the AdaBoost algorithm is used: the first layer employs the MDCT-driven timbre features, and adding the MDCT-driven tempo feature in the second layer improves classification precision dramatically.
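
A sketch of the two-layer arrangement using scikit-learn's AdaBoost, with random placeholders for the MDCT-driven timbre and tempo features; the paper's exact layering may differ:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
timbre = rng.standard_normal((400, 24))   # stand-in timbre features per song
tempo = rng.standard_normal((400, 1))     # stand-in modulation-spectrum tempo
y = rng.integers(0, 2, size=400)          # emotion class labels

# Layer 1: AdaBoost on timbre features alone.
layer1 = AdaBoostClassifier(n_estimators=100).fit(timbre[:300], y[:300])
score1 = layer1.predict_proba(timbre)[:, [1]]   # first-layer confidence

# Layer 2: add the tempo feature on top of the first-layer output.
X2 = np.hstack([score1, tempo])
layer2 = AdaBoostClassifier(n_estimators=100).fit(X2[:300], y[:300])
print("held-out accuracy:", layer2.score(X2[300:], y[300:]))
```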

Salient Region Detection Algorithm for Music Video Browsing (뮤직비디오 브라우징을 위한 중요 구간 검출 알고리즘)

  • Kim, Hyoung-Gook;Shin, Dong
    • The Journal of the Acoustical Society of Korea, v.28 no.2, pp.112-118, 2009
  • This paper proposes a rapid salient-region detection algorithm for a music video browsing system that can run on mobile devices and digital video recorders (DVRs). The input music video is decomposed into its music and video tracks. For the music track, the musical highlight, including the chorus, is detected by structural analysis using energy-based peak position detection, and the music is automatically classified into one of several predefined emotional classes using emotional models trained with an SVM-AdaBoost learning algorithm. For the video track, face scenes showing the singer or actors are detected with a boosted cascade of simple features. Finally, the salient region is generated by aligning the boundaries of the music highlight and the detected face scenes. Users first select favorite music videos on their mobile device or DVR using each video's emotion label, and can then quickly browse the 30-second salient region produced by the proposed algorithm. A mean opinion score (MOS) test on a database of 200 music videos, comparing the detected salient regions against manually predefined parts, shows that the regions detected by the proposed method perform much better than the predefined parts chosen without audiovisual processing.
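
A sketch of the energy-based highlight step alone, run on a synthetic track; face detection and the audio-visual boundary alignment are omitted, and only the 30-second window length follows the abstract:

```python
import numpy as np

def highlight_window(x, sr, win_sec=30.0, hop_sec=1.0):
    # Slide a 30 s window in 1 s steps and keep the one with the most energy,
    # a crude stand-in for the paper's energy-based peak position detection.
    hop, win = int(hop_sec * sr), int(win_sec * sr)
    energies = [np.sum(x[s:s + win] ** 2)
                for s in range(0, max(1, len(x) - win), hop)]
    start = int(np.argmax(energies)) * hop
    return start / sr, (start + win) / sr

sr = 22050
x = np.random.default_rng(0).standard_normal(sr * 180)  # stand-in 3 min track
x[sr * 60:sr * 90] *= 3.0                               # louder "chorus" region
print("salient region (s):", highlight_window(x, sr))
```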