• Title/Summary/Keyword: Music retrieval

Multiclass Music Classification Approach Based on Genre and Emotion

  • Jonghwa Kim
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.27-32 / 2024
  • Reliable and fine-grained musical metadata are required for efficient search of the rapidly growing number of music files. In particular, since the primary motives for listening to music are its emotional effect, diversion, and the memories it awakens, emotion classification alongside genre classification is crucial. In this paper, as an initial approach towards a "ground-truth" dataset for music emotion and genre classification, we carefully built a music corpus through labeling by a large number of ordinary listeners. To verify the suitability of the dataset through classification results, we extracted features according to the MPEG-7 audio standard and applied machine learning models based on both statistics and deep neural networks to classify the dataset automatically. Using standard hyperparameter settings, we reached an accuracy of 93% for genre classification and 80% for emotion classification, and we believe that our dataset can serve as a meaningful comparative dataset in this research field.
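
The abstract does not specify the exact MPEG-7 descriptors or models, so the following is only a minimal sketch of the pipeline it describes: librosa's spectral descriptors stand in for MPEG-7 features and an SVM stands in for one of the statistical models; `audio_paths` and `genre_labels` are hypothetical placeholders for the labeled corpus.

```python
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def extract_features(path, sr=22050):
    """Summarize frame-level spectral descriptors by their mean and std over a clip."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    feats = np.vstack([
        librosa.feature.spectral_centroid(y=y, sr=sr),   # 1 x T
        librosa.feature.spectral_flatness(y=y),          # 1 x T
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13),     # 13 x T
    ])
    return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])

# audio_paths and genre_labels are hypothetical stand-ins for the labeled corpus
X = np.array([extract_features(p) for p in audio_paths])
y = np.array(genre_labels)  # or emotion labels for the emotion task
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```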

Robust Music Categorization Method using Social Tags (소셜 태그를 이용한 강인한 음악 분류 기법)

  • Lee, Jaesung;Kim, Dae-Won
    • Proceedings of the Korean Society of Computer Information Conference / 2015.01a / pp.181-182 / 2015
  • In music retrieval, social tag information lets users quickly grasp the intrinsic meaning of a piece of music. Because a song's social tags are accumulated gradually by the users (listeners) of music recommendation systems, it is difficult to collect complete tag information in the early stages. In this paper, we propose a classification algorithm that can perform music information retrieval automatically even when some of a song's tags are missing.
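
A minimal sketch of the general idea (not the paper's algorithm): songs are represented as binary social-tag vectors in which missing tags simply appear as zeros, and a category is predicted from whatever tags are present. The tag vocabulary, tag matrix, and category labels below are invented for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

tags = ["rock", "calm", "piano", "dance", "vocal"]     # hypothetical tag vocabulary
X = np.array([                                          # 1 = tag applied, 0 = missing/absent
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [0, 1, 1, 0, 1],
])
y = np.array(["party", "relax", "party", "relax"])      # target category per song

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
query = np.array([[0, 1, 0, 0, 1]])                     # new song with only a few tags observed
print(clf.predict(query))                               # -> ['relax']
```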

Music Identification Using Its Pattern

  • Islam, Mohammad Khairul;Lee, Hyung-Jin;Paul, Anjan Kumar;Baek, Joong-Hwan
    • Proceedings of the IEEK Conference / 2007.07a / pp.419-420 / 2007
  • In this method, we extract peak periods from the energy content of each segment of music. The same feature extraction is applied to both the training and the query music. A similarity matching algorithm is then applied to the extracted feature values to identify the query music in the database. Our method achieves a retrieval accuracy of 95%, a promising result.
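
A rough sketch of the described pipeline under stated assumptions: frame energies are computed, peak positions of the energy contour are found, and the inter-peak periods form a fixed-length feature vector matched by nearest-neighbour search. Frame length, hop size, and the distance measure are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_period_feature(signal, frame_len=1024, hop=512, n_periods=16):
    """Inter-peak periods of the frame-energy contour, padded to a fixed length."""
    frames = np.lib.stride_tricks.sliding_window_view(signal, frame_len)[::hop]
    energy = (frames ** 2).sum(axis=1)
    peaks, _ = find_peaks(energy, distance=4)          # peak frames of the energy contour
    periods = np.diff(peaks).astype(float)[:n_periods]
    return np.pad(periods, (0, n_periods - len(periods)))

def identify(query_signal, db_signals):
    """Return the index of the database entry whose peak-period feature is closest."""
    q = peak_period_feature(query_signal)
    dists = [np.linalg.norm(q - peak_period_feature(s)) for s in db_signals]
    return int(np.argmin(dists))
```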

Improving Cover Song Search Accuracy by Extracting Salient Chromagram Components (강인한 크로마그램 성분 추출을 통한 커버곡 검색 성능 개선)

  • Seo, Jin Soo
    • Journal of Korea Multimedia Society / v.22 no.6 / pp.639-645 / 2019
  • This paper proposes a salient chromagram component extraction method based on the temporal discrete cosine transform of a chromagram block to improve cover song retrieval accuracy. The proposed salient chromagram emphasizes the tonal content of music, which is well preserved between an original song and its cover version, while reducing the effects of timbre differences. We apply the proposed salient chromagram extraction as a preprocessing step for Fourier-transform-based cover song matching. Experiments on two cover song datasets confirm that the proposed salient chromagram improves cover song search accuracy.
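
One plausible reading of the salient-chromagram idea, sketched under assumptions: a DCT is taken along time within each chromagram block and only the lowest-order (slowly varying) coefficients are kept, which emphasizes sustained tonal content. The block size and the number of retained coefficients are guesses, not the paper's settings.

```python
import numpy as np
import librosa
from scipy.fft import dct, idct

def salient_chromagram(y, sr, block=64, keep=8):
    """Temporal DCT per chroma bin within each block; fast-varying components dropped."""
    C = librosa.feature.chroma_stft(y=y, sr=sr)            # 12 x T chromagram
    out = np.zeros_like(C)
    for start in range(0, C.shape[1] - block + 1, block):
        blk = C[:, start:start + block]
        coeff = dct(blk, axis=1, norm="ortho")              # DCT along the time axis
        coeff[:, keep:] = 0.0                                # keep only low-order coefficients
        out[:, start:start + block] = idct(coeff, axis=1, norm="ortho")
    return out
```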

HummingBird: A Similar Music Retrieval System using Improved Scaled and Warped Matching (HummingBird: 향상된 스케일드앤워프트 매칭을 이용한 유사 음악 검색 시스템)

  • Lee, Hye-Hwan;Shim, Kyu-Seok;Park, Hyoung-Min
    • Journal of KIISE:Databases / v.34 no.5 / pp.409-419 / 2007
  • The database community has studied similar-music retrieval systems that search a music database given a humming query. One approach converts MIDI data to time series, builds indexes on them, and performs similarity search over the indexes. Humming queries can likewise be transformed into time series using known pitch-detection algorithms. A recently proposed algorithm, scaled and warped matching, is based on dynamic time warping and uniform scaling. This paper proposes Humming BIRD (Humming Based sImilaR mini music retrieval system), which uses a sliding window and center-aligned scaled and warped matching. Center-aligned scaled and warped matching is a combined distance measure of center-aligned uniform scaling and time warping. The newly proposed measure gives a tighter lower bound than previous ones, which reduces the search space. Empirical results show the superiority of the algorithm in pruning power while it returns the same results.
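
A simplified sketch of combining uniform scaling with dynamic time warping: the humming query is rescaled to several candidate lengths and a plain DTW distance is computed for each. The paper's center-aligned variant, its lower bound, and the sliding-window search are not reproduced here; the scale set is arbitrary.

```python
import numpy as np

def dtw(a, b):
    """Classic O(n*m) dynamic time warping distance between two pitch sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def scaled_warped_distance(query, candidate, scales=(0.8, 0.9, 1.0, 1.1, 1.2)):
    """Uniformly rescale the query to several lengths and keep the best DTW distance."""
    best = np.inf
    for s in scales:
        length = max(2, int(round(len(query) * s)))
        idx = np.linspace(0, len(query) - 1, length)
        rescaled = np.interp(idx, np.arange(len(query)), query)   # uniform scaling
        best = min(best, dtw(rescaled, candidate))
    return best
```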

Musician Search in Time-Series Pattern Index Files using Features of Audio (오디오 특징계수를 이용한 시계열 패턴 인덱스 화일의 뮤지션 검색 기법)

  • Kim, Young-In
    • Journal of the Korea Society of Computer and Information / v.11 no.5 s.43 / pp.69-74 / 2006
  • Recent developments in content-based multimedia retrieval have drawn attention to musician retrieval based on features of digital audio data, one of several music information retrieval technologies. However, indexing techniques for music databases have not been studied thoroughly. In this paper, we present a musician retrieval technique that applies space-splitting methods to audio features stored in a time-series pattern index file. Audio features are used to retrieve the musician, and the time-series pattern index file is used to search for candidate musicians. Experimental results show that a time-series pattern index file using the rotational split method is efficient for musician retrieval.
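
The abstract gives little detail about the space-splitting scheme, so the following is only a loose sketch of the candidate-pruning idea: feature vectors are bucketed by a coarse partition of the feature space and only entries in the query's bucket are compared in detail. The `database` dictionary and the cell size are hypothetical.

```python
import numpy as np
from collections import defaultdict

def bucket(feature_vec, cell=0.25):
    """Map a feature vector to a coarse grid cell of the feature space."""
    return tuple(np.floor(np.asarray(feature_vec) / cell).astype(int))

# `database` is a hypothetical dict: musician name -> audio feature vector
index = defaultdict(list)
for musician, feats in database.items():
    index[bucket(feats)].append((musician, feats))

def search(query_feats):
    """Compare the query only against musicians falling in the same grid cell."""
    candidates = index.get(bucket(query_feats), [])
    if not candidates:
        return None
    dists = [np.linalg.norm(np.asarray(query_feats) - np.asarray(f)) for _, f in candidates]
    return candidates[int(np.argmin(dists))][0]
```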

Extracting Melodies from Polyphonic Piano Solo Music Based on Patterns of Music Structure (음악 구조의 패턴에 기반을 둔 다음(Polyphonic) 피아노 솔로 음악으로부터의 멜로디 추출)

  • Choi, Yoon-Jae;Lee, Ho-Dong;Lee, Ho-Joon;Park, Jong C.
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.725-732 / 2009
  • Thanks to the development of the Internet, people can easily access a vast amount of music, which has drawn attention to applications such as melody-based music search and music recommendation services. Extracting melodies from music is a crucial step in providing such services. This paper introduces a novel algorithm for extracting melodies from piano music. Since the piano produces polyphonic music, we expect that studying melody extraction from piano music will also help with melody extraction from general polyphonic music.
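
The abstract gives no algorithmic detail, so the code below is not the paper's method; it shows the common "skyline" baseline for polyphonic melody extraction (keep the highest note at each onset) only to make the task concrete. The note tuples are invented.

```python
def skyline_melody(notes):
    """Keep the highest-pitched note among notes sharing the same onset time."""
    by_onset = {}
    for onset, pitch, duration in notes:    # each note is (onset, MIDI pitch, duration)
        if onset not in by_onset or pitch > by_onset[onset][1]:
            by_onset[onset] = (onset, pitch, duration)
    return [by_onset[t] for t in sorted(by_onset)]

notes = [(0.0, 60, 1.0), (0.0, 72, 1.0), (1.0, 64, 0.5), (1.0, 76, 0.5)]
print(skyline_melody(notes))   # -> [(0.0, 72, 1.0), (1.0, 76, 0.5)]
```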

A User Study on Information Searching Behaviors for Designing User-centered Query Interface of Content-Based Music Information Retrieval System (내용기반 음악정보 검색시스템을 위한 이용자 중심의 질의 인터페이스 설계에 관한 연구)

  • Lee, Yoon-Joo;Moon, Sung-Been
    • Journal of the Korean Society for Information Management / v.23 no.2 / pp.5-19 / 2006
  • The purpose of this study is to observe and analyze the information searching behaviors of various user groups in different access modes in order to design a user-centered query interface for a content-based Music Information Retrieval System (MIRS). Two expert groups and two non-expert groups were recruited. The data gathering techniques employed were in-depth interviews, participant observation, searching task experiments, think-aloud protocols, and post-search surveys. Expert users, especially those majoring in music theory, preferred to input exact notes one by one using devices such as a keyboard or a musical score. In contrast, non-expert users preferred to input melodic contours by humming.

Musical Genre Classification System based on Multiple-Octave Bands (다중 옥타브 밴드 기반 음악 장르 분류 시스템)

  • Byun, Karam;Kim, Moo Young
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.12 / pp.238-244 / 2013
  • Various types of feature vectors are utilized for musical genre classification. Mel-frequency cepstral coefficients (MFCC), decorrelated filter banks (DFB), and octave-based spectral contrast (OSC) are widely used as short-term features, and their long-term variations are also utilized. In this paper, OSC features are extracted not only over single-octave bands but also over multiple-octave bands to capture the correlation between octave bands. As a baseline, we select the genre classification system that placed fourth in the 2012 Music Information Retrieval Evaluation eXchange (MIREX) contest. By applying the multiple-octave-band OSC features, we improve classification accuracy by 0.40% and 3.15% on the GTZAN and Ballroom databases, respectively.
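
A hedged sketch of octave-based spectral contrast computed over bands wider than one octave, in the spirit of the abstract; the band edges, quantile, and fmin are illustrative choices rather than the paper's configuration (librosa's built-in spectral_contrast gives the usual single-octave variant).

```python
import numpy as np
import librosa

def band_contrast(S, freqs, f_lo, f_hi, quantile=0.02):
    """Log peak-to-valley contrast of the magnitude spectrum inside one band."""
    band = S[(freqs >= f_lo) & (freqs < f_hi), :]
    k = max(1, int(quantile * band.shape[0]))
    srt = np.sort(band, axis=0)
    valley = srt[:k].mean(axis=0)       # mean of the smallest bins per frame
    peak = srt[-k:].mean(axis=0)        # mean of the largest bins per frame
    return np.log(peak + 1e-10) - np.log(valley + 1e-10)

def multi_octave_osc(y, sr, fmin=200.0, n_bands=3):
    """Contrast over bands spanning two octaves each, summarized over the clip."""
    S = np.abs(librosa.stft(y))
    freqs = librosa.fft_frequencies(sr=sr)
    edges = fmin * (4.0 ** np.arange(n_bands + 1))      # two-octave-wide band edges
    rows = [band_contrast(S, freqs, lo, hi) for lo, hi in zip(edges[:-1], edges[1:])]
    feats = np.vstack(rows)
    return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])
```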