• Title/Summary/Keyword: music identification

HMM-based Music Identification System for Copyright Protection (저작권 보호를 위한 HMM기반의 음악 식별 시스템)

  • Kim, Hee-Dong;Kim, Do-Hyun;Kim, Ji-Hwan
    • Phonetics and Speech Sciences / v.1 no.1 / pp.63-67 / 2009
  • In this paper, in order to protect music copyrights, we propose a music identification system which is scalable to the number of pieces of registered music and robust to signal-level variations of registered music. For its implementation, we define the new concepts of 'music word' and 'music phoneme' as recognition units to construct 'music acoustic models'. Then, with these concepts, we apply the HMM-based framework used in continuous speech recognition to identify the music. Each music file is transformed to a sequence of 39-dimensional vectors. This sequence of vectors is represented as ordered states with Gaussian mixtures. These ordered states are trained using the Baum-Welch re-estimation method. Music files with a suspicious copyright status are also transformed to a sequence of vectors. Then, the most probable music file is identified using the Viterbi algorithm through the music identification network. We implemented a music identification system for 1,000 MP3 music files and tested this system with variations in MP3 bit rate and music speed rate. Our proposed music identification system demonstrates robust performance under these signal variations. In addition, the system remains scalable as the number of registered music files grows, since it is based on the HMM method.

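The recognition pipeline described in the abstract above (39-dimensional feature vectors, ordered left-to-right states, Baum-Welch training, Viterbi identification) can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: it substitutes single-Gaussian states initialized by flat-start segmentation for the paper's Gaussian mixtures and full Baum-Welch re-estimation, and random vectors stand in for real audio features.

```python
import numpy as np

def flat_start_model(frames, n_states=5):
    """Fit one diagonal Gaussian per equal segment of a song's frames
    (flat-start initialization, standing in for Baum-Welch training)."""
    segs = np.array_split(frames, n_states)
    means = np.stack([s.mean(axis=0) for s in segs])
    variances = np.stack([s.var(axis=0) + 1e-3 for s in segs])
    return means, variances

def log_gauss(x, means, variances):
    """Per-state diagonal-Gaussian log density of one frame."""
    return -0.5 * np.sum(np.log(2 * np.pi * variances)
                         + (x - means) ** 2 / variances, axis=-1)

def viterbi_score(frames, means, variances, p_stay=0.6):
    """Best-path log-likelihood under a left-to-right HMM."""
    n = len(means)
    log_a = np.full((n, n), -np.inf)
    for s in range(n - 1):
        log_a[s, s] = np.log(p_stay)
        log_a[s, s + 1] = np.log(1 - p_stay)
    log_a[n - 1, n - 1] = 0.0          # final state absorbs
    delta = log_gauss(frames[0], means, variances)
    delta[1:] = -np.inf                # path must start in state 0
    for x in frames[1:]:
        delta = np.max(delta[:, None] + log_a, axis=0) \
                + log_gauss(x, means, variances)
    return delta[-1]

def identify(query, models):
    """Return the index of the registered song that best explains the query."""
    return int(np.argmax([viterbi_score(query, m, v) for m, v in models]))
```

Identification then amounts to scoring the suspect file against every registered model and taking the maximum, which is why adding more registered songs only adds independent model evaluations.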

Children's Music Cognition: Comparison of Identification, Classification, and Seriation in Music Tasks (아동의 음악 인지 : 음악의 동일성·유목화·서열화 인지 비교)

  • Kim, Keum Hee;Yi, Soon Hyung
    • Korean Journal of Child Studies / v.20 no.3 / pp.259-273 / 1999
  • This study investigated children's music identification, classification, and seriation cognitive task performance abilities by age and sex. The subjects were 120 six-, eight-, and ten-year-old school children. There were significant positive correlations among music cognition tasks and significant age and sex differences within each of the music tasks. Ten-year-old children were more likely to complete their music identification tasks than the younger children, and girls were more likely than boys to complete their music identification tasks. Eight- and ten-year-old children were more likely to complete their music classification tasks than the younger group. Piagetian stage theory was demonstrated in children's music classification task performance. There was an age-related increase in the performance of the music seriation tasks. Developmental sequential theory was demonstrated in music seriation performance.

Listeners' Perception of Intended Emotions in Music

  • Chong, Hyun Ju;Jeong, Eunju;Kim, Soo Ji
    • International Journal of Contents / v.9 no.4 / pp.78-85 / 2013
  • Music functions as a catalyst for various emotional experiences. Among the numerous genres of music, film music has been reported to induce strong emotional responses. However, the effectiveness of film music in evoking different types of emotions, and the question of which musical elements contribute to listeners' perception of the intended emotion, have rarely been investigated. The purpose of this study was to examine the congruence between the intended emotion and the perceived emotion of listeners in film music listening and to identify musical characteristics of film music that correspond with specific types of emotion. Additionally, the study aimed to investigate possible relationships between participants' identification responses and personal musical experience. A total of 147 college students listened to twelve 15-second music excerpts and identified the perceived emotion during music listening. The results showed a high degree of congruence between the intended emotion in film music and the participants' perceived emotion. Existence of tonality and modality were found to play an important role in listeners' perception of the intended emotion. The findings suggest that identification of perceived emotion in film music excerpts was congruent regardless of individual differences. Specific music components that led to high congruence are further discussed.

A relevance-based pairwise chromagram similarity for improving cover song retrieval accuracy (커버곡 검색 정확도 향상을 위한 적합도 기반 크로마그램 쌍별 유사도)

  • Jin Soo Seo
    • The Journal of the Acoustical Society of Korea / v.43 no.2 / pp.200-206 / 2024
  • Computing music similarity is an indispensable component in developing a music search service. This paper proposes a relevance weight for each chromagram vector in computing a music similarity function for cover song identification, in order to boost identification accuracy. We derive a music similarity function using relevance weights based on the probabilistic relevance model, where higher relevance weights are assigned to less frequently occurring, discriminant chromagram vectors and lower weights to more frequently occurring ones. Experimental results on two cover music datasets show that the proposed music similarity improves cover song identification performance.
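The weighting idea in the abstract above can be sketched in a few lines. This is a hedged numpy sketch, not the paper's method: the paper derives its weights from a probabilistic relevance model, whereas this stand-in uses an IDF-style weight (chroma patterns occurring in fewer songs count more), and the codebook, nearest-neighbor code assignment, and naive frame alignment are illustrative assumptions.

```python
import numpy as np

def chroma_codes(chroma, codebook):
    """Assign each 12-dim chroma frame to its nearest codebook entry."""
    d = np.linalg.norm(chroma[:, None, :] - codebook[None, :, :], axis=2)
    return d.argmin(axis=1)

def relevance_weights(code_seqs, n_codes):
    """IDF-style stand-in for probabilistic relevance weights: codes that
    occur in fewer songs of the collection receive higher weight."""
    df = np.zeros(n_codes)
    for codes in code_seqs:
        df[np.unique(codes)] += 1
    n = len(code_seqs)
    return np.log((n + 1) / (df + 1))

def weighted_similarity(codes_a, codes_b, w):
    """Weighted fraction of aligned frames whose chroma codes agree."""
    length = min(len(codes_a), len(codes_b))
    a, b = codes_a[:length], codes_b[:length]
    match = (a == b).astype(float)
    return float((w[a] * match).sum() / (w[a].sum() + 1e-12))
```

A cover pair then scores high because its rare, discriminant chroma codes agree, while agreements on ubiquitous codes contribute little.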

A music similarity function based on probabilistic linear discriminant analysis for cover song identification (커버곡 검색을 위한 확률적 선형 판별 분석 기반 음악 유사도)

  • Seo, Jin Soo;Kim, Junghyun;Kim, Hyemi
    • The Journal of the Acoustical Society of Korea / v.41 no.6 / pp.662-667 / 2022
  • Computing music similarity is an indispensable component in developing a music search service. This paper focuses on learning a music similarity function in order to boost cover song identification performance. Using probabilistic linear discriminant analysis, we construct a latent music space where the distances between cover song pairs decrease while the distances between non-cover song pairs increase. We derive a music similarity function by testing the hypothesis of whether two songs share the same latent variable, using probabilistic models under the assumption that observed music features are generated from the learned latent music space. Experimental results on two cover music datasets show that the proposed music similarity improves cover song identification performance.
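The hypothesis test described above can be illustrated with a minimal two-covariance PLDA score. This is a sketch under stated assumptions, not the paper's trained model: diagonal between-class (sb) and within-class (sw) variances are taken as known, and features are assumed centered; in the paper these quantities would be learned from cover and non-cover pairs.

```python
import numpy as np

def plda_llr(x1, x2, sb, sw):
    """Log-likelihood ratio of 'same latent variable' vs 'different latent
    variables' for two feature vectors, under a two-covariance PLDA model
    with per-dimension between-class variance sb and within-class variance sw."""
    st = sb + sw                            # total variance per dimension
    # joint covariance under H_same is [[st, sb], [sb, st]] per dimension
    det_same = st * st - sb * sb
    q_same = (st * (x1 ** 2 + x2 ** 2) - 2 * sb * x1 * x2) / det_same
    log_same = -0.5 * (np.log(4 * np.pi ** 2 * det_same) + q_same)
    log_diff = (-0.5 * (np.log(2 * np.pi * st) + x1 ** 2 / st)
                - 0.5 * (np.log(2 * np.pi * st) + x2 ** 2 / st))
    return float(np.sum(log_same - log_diff))
```

A positive score favors the cover hypothesis (shared latent variable); a negative score favors the non-cover hypothesis, so ranking candidates by this ratio yields the similarity function.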

Longitudinal music perception performance of postlingual deaf adults with cochlear implants using acoustic and/or electrical stimulation

  • Chang, Son A;Shin, Sujin;Kim, Sungkeong;Lee, Yeabitna;Lee, Eun Young;Kim, Hanee;Shin, You-Ree;Chun, Young-Myoung
    • Phonetics and Speech Sciences / v.13 no.2 / pp.103-109 / 2021
  • In this study, we investigated the longitudinal music perception of adult cochlear implant (CI) users and how acoustic stimulation combined with CI affects their music performance. A total of 163 participants' data were analyzed retrospectively: 96 participants were using acoustic stimulation with CI and 67 participants were using electrical stimulation only via CI. The music performance (melody identification, appreciation, and satisfaction) data were collected pre-implantation and at 1 year and 2 years post-implantation. Mixed repeated-measures ANOVA and pairwise analysis adjusted by Tukey were used for the statistics. As a result, in both groups, there were significant improvements in melody identification, music appreciation, and music satisfaction at 1 year and 2 years post-implantation compared with pre-implantation, but there was no significant difference between 1 and 2 years in any of the variables. Also, the group using acoustic stimulation with CI showed better melody identification than the CI-only group. However, no differences were found in music appreciation and satisfaction between the two groups, and possible explanations are discussed. In conclusion, acoustic and/or electrical hearing devices benefit recipients' music performance over time. Although acoustic stimulation accompanied by electrical stimulation can benefit recipients in terms of listening skills, those benefits may not extend to the subjective acceptance of music. These results suggest the need for improved sound processing mechanisms and music rehabilitation.

Effect of Music Training on Categorical Perception of Speech and Music

  • L., Yashaswini;Maruthy, Sandeep
    • Journal of Audiology & Otology / v.24 no.3 / pp.140-148 / 2020
  • Background and Objectives: The aim of this study is to evaluate the effect of music training on the characteristics of auditory perception of speech and music. The perception of speech and music stimuli was assessed across their respective stimulus continua and the resultant plots were compared between musicians and non-musicians. Subjects and Methods: Thirty musicians with formal music training and twenty-seven non-musicians participated in the study (age: 20 to 30 years). They were assessed for identification of consonant-vowel syllables (/da/ to /ga/), vowels (/u/ to /a/), a vocal music note (/ri/ to /ga/), and an instrumental music note (/ri/ to /ga/) across their respective stimulus continua. Each continuum contained 15 tokens with equal step size between adjacent tokens. The resultant identification scores were plotted against each token and were analyzed for the presence of a categorical boundary. If a categorical boundary was found, the plots were analyzed for six parameters of categorical perception: the point of 50% crossover, the lower edge of the categorical boundary, the upper edge of the categorical boundary, the phoneme boundary width, the slope, and the intercepts. Results: Overall, the results showed that both speech and music are perceived differently in musicians and non-musicians. In musicians, both speech and music are categorically perceived, while in non-musicians, only speech is perceived categorically. Conclusions: The findings of the present study indicate that music is perceived categorically by musicians, even if the stimulus is devoid of vocal tract features. The findings support the view that categorical perception is strongly influenced by training, and the results are discussed in light of the motor theory of speech perception.
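The boundary analysis described above can be sketched numerically. A minimal numpy illustration under assumptions not taken from the paper: the 25% and 75% levels stand in for the lower and upper boundary edges, crossings are found by linear interpolation between tokens rather than by fitting a full psychometric function, and the example identification curve is synthetic.

```python
import numpy as np

def crossover_points(tokens, scores, levels=(0.25, 0.5, 0.75)):
    """Linearly interpolate the token positions where an identification
    curve crosses the given proportion levels (lower edge, 50% crossover,
    upper edge of the categorical boundary)."""
    out = {}
    for level in levels:
        idx = np.where(np.diff(np.sign(scores - level)) != 0)[0]
        if len(idx) == 0:
            out[level] = None              # no categorical boundary found
            continue
        i = idx[0]
        x0, x1 = tokens[i], tokens[i + 1]
        y0, y1 = scores[i], scores[i + 1]
        out[level] = x0 + (level - y0) * (x1 - x0) / (y1 - y0)
    return out

def boundary_width(points):
    """Boundary width: distance between the 25% and 75% crossing points."""
    return points[0.75] - points[0.25]
```

A steep identification curve yields a narrow boundary width, the signature of categorical perception; a shallow curve (as for music in the non-musicians) yields a wide or undefined boundary.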

Investigation of Timbre-related Music Feature Learning using Separated Vocal Signals (분리된 보컬을 활용한 음색기반 음악 특성 탐색 연구)

  • Lee, Seungjin
    • Journal of Broadcast Engineering / v.24 no.6 / pp.1024-1034 / 2019
  • Preference for music is determined by a variety of factors, and identifying characteristics that reflect specific factors is important for music recommendation. In this paper, we propose a method to extract singing-voice-related music features reflecting various musical characteristics by using a model trained for singer identification. The model can be trained on music sources containing background accompaniment, but this may degrade singer identification performance. To mitigate this problem, this study first separates the background accompaniment and creates a data set composed of separated vocals, using a proven model structure from SiSEC, the Signal Separation and Evaluation Campaign. Finally, we use the separated vocals to discover singing-voice-related music features that reflect the singer's voice. We compare the effect of source separation against existing methods that use the music source without separation.

Identification of Coherent/Incoherent Noise Sources Using A Microphone Line Array (독립, 비독립 음원이 동시에 존재할 경우 선형 마이크로폰 어레이를 이용한 소음원 탐지 방법)

  • 김시문;김양한
    • Journal of KSNVE / v.6 no.6 / pp.835-842 / 1996
  • To identify the locations and strengths of acoustic sources, one may use a microphone line array. An apparent advantage of a source identification method utilizing a line array is that it requires fewer measurement points than the intensity method and holography. This method is based on the magnitude and phase differences between pressure signals at each microphone. Since those differences depend on the source model, a model such as a plane wave or a monopole must be assumed. In this paper, conventional source identification methods such as the beamforming method and the MUSIC method are briefly reviewed by modeling a source as plane and spherical waves; then a modified method is introduced. This can be applied to a sound field which may be either coherent or incoherent. Typical simulations and an experiment are performed to confirm this identification method.

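The two conventional methods reviewed in the abstract above can be sketched for the plane-wave case. A minimal numpy sketch, not the paper's modified method: it assumes a far-field plane wave at a single known frequency and a synthetic cross-spectral matrix; the array geometry, angle grid, and noise level are arbitrary choices for illustration.

```python
import numpy as np

def steering_vector(mic_pos, theta, freq, c=343.0):
    """Plane-wave steering vector for a line array; theta is the arrival
    angle from broadside, mic_pos the mic coordinates in meters."""
    delays = mic_pos * np.sin(theta) / c
    return np.exp(-2j * np.pi * freq * delays)

def bartlett_spectrum(R, mic_pos, thetas, freq):
    """Conventional (delay-and-sum) beamformer power over candidate
    angles, computed from the microphone cross-spectral matrix R."""
    power = []
    for theta in thetas:
        a = steering_vector(mic_pos, theta, freq)
        power.append(np.real(a.conj() @ R @ a) / len(a) ** 2)
    return np.array(power)

def music_spectrum(R, mic_pos, thetas, freq, n_sources=1):
    """MUSIC pseudo-spectrum: steering vectors orthogonal to the noise
    subspace of R produce sharp peaks at the source directions."""
    _, v = np.linalg.eigh(R)                 # eigenvalues ascending
    noise = v[:, : len(mic_pos) - n_sources] # noise-subspace eigenvectors
    power = []
    for theta in thetas:
        a = steering_vector(mic_pos, theta, freq)
        proj = np.real(a.conj() @ noise @ noise.conj().T @ a)
        power.append(1.0 / (proj + 1e-12))
    return np.array(power)
```

Both spectra peak at the source bearing; MUSIC trades robustness for much sharper peaks, which is part of what the paper's modified method addresses for mixed coherent/incoherent fields.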