• Title/Summary/Keyword: MPEG-7 Audio Descriptor

Search results: 5

Emotion-Based Music Retrieval Using Consistency Principle and Multi-Query Feedback (검색의 일관성원리와 피드백을 이용한 감성기반 음악 검색 시스템)

  • Shin, Song-Yi;Park, En-Jong;Eum, Kyoung-Bae;Lee, Joon-Whoan
    • The KIPS Transactions:PartB / v.17B no.2 / pp.99-106 / 2010
  • In this paper, we propose the construction of multiple queries and a consistency principle for a user-emotion-based music retrieval system. The features used in the system are MPEG-7 audio descriptors, the international standard recommended for content-based audio retrieval. In addition, we propose a method to determine the weights that represent the importance of each descriptor for each emotion, in order to reduce computation. The proposed retrieval algorithm, which uses relevance feedback based on the consistency principle and multiple queries, improves the ratio of retrieved music that matches the user's emotion.
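The weighted-descriptor, multi-query retrieval described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the descriptor names, the per-descriptor Euclidean distance, and the best-match aggregation over the query set are all assumptions.

```python
import numpy as np

def weighted_distance(query, candidate, weights):
    """Aggregate per-descriptor Euclidean distances using emotion-specific
    importance weights. query/candidate map descriptor name -> feature vector;
    weights maps descriptor name -> weight for the target emotion."""
    return sum(w * np.linalg.norm(query[d] - candidate[d])
               for d, w in weights.items())

def multi_query_score(queries, candidate, weights):
    """Multi-query retrieval: score a candidate against several query
    examples and keep the best (smallest) weighted distance."""
    return min(weighted_distance(q, candidate, weights) for q in queries)
```

Descriptors whose weight is near zero contribute almost nothing to the sum, which is how emotion-specific weighting can also cut computation: low-weight descriptors can simply be skipped.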

A Study on the Music Retrieval System using MPEG-7 Audio Low-Level Descriptors (MPEG-7 오디오 하위 서술자를 이용한 음악 검색 방법에 관한 연구)

  • Park Mansoo;Park Chuleui;Kim Hoi-Rin;Kang Kyeongok
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2003.11a / pp.215-218 / 2003
  • In this paper, we propose a music retrieval algorithm based on audio features built from the audio descriptors defined in MPEG-7. The timbral features in particular make it easy to distinguish tone color, so they can be used not only for music retrieval but also for music genre classification or query by humming. If, through this work, a feature vector can be constructed that expresses the representative characteristics of an audio signal, it could later serve as the audio feature in retrieval algorithms for multimodal systems as well. For applicability to broadcasting systems, we restricted the search scope to the O.S.T. album of a specific piece of content: using only a partial audio clip arbitrarily selected by the user, the system retrieves the corresponding music within that content's entire O.S.T. album. We propose a way of combining MPEG-7 audio descriptors into an audio feature vector, and pursue performance improvements through distance- or ratio-based computation as well as through changes in how the templates of the reference music are organized. Performance evaluation with a k-NN classifier showed that IFCR (Intra-Feature Component Ratio), which uses ratios of the timbral spectral features, outperformed the Euclidean distance approach.

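As a rough illustration of the ratio-based comparison named in the abstract, the sketch below contrasts plain Euclidean distance with a hypothetical IFCR-style measure that compares the ratios between components within each feature vector; the paper's exact IFCR definition is not reproduced here, so this normalization is an assumption.

```python
import numpy as np

def euclidean_distance(x, y):
    """Plain Euclidean (ED) distance between two feature vectors."""
    return float(np.linalg.norm(x - y))

def ifcr_distance(x, y, eps=1e-12):
    """Hypothetical IFCR-style distance: reduce each vector to the relative
    proportions of its components before comparing, so the measure is
    insensitive to overall scale (e.g. recording level)."""
    rx = x / (np.sum(np.abs(x)) + eps)  # components as intra-vector ratios
    ry = y / (np.sum(np.abs(y)) + eps)
    return float(np.linalg.norm(rx - ry))
```

Under this reading, two clips whose timbral features differ only by a gain factor are identical to IFCR but far apart under ED, which is one plausible reason a ratio-based measure could be more robust for clip-based matching.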

Content-based Music Information Retrieval using Pitch Histogram (Pitch 히스토그램을 이용한 내용기반 음악 정보 검색)

  • 박만수;박철의;김회린;강경옥
    • Journal of Broadcast Engineering / v.9 no.1 / pp.2-7 / 2004
  • In this paper, we propose a content-based music information retrieval technique using several MPEG-7 low-level descriptors. In particular, pitch information and timbral features can be applied to music genre classification, music retrieval, or QBH (Query By Humming), because they can model the stochastic pattern or timbral information of a music signal. In this work, we restricted the music domain to the O.S.T. of a movie or soap opera for application to a broadcasting system. That is, when background music reaches the user's ear, the user can retrieve information about the unknown music using only an audio clip of a few seconds extracted from the video content. We propose an audio feature set organized from MPEG-7 descriptors and distance functions based on vector distance or ratio computation. We observed that the feature set organized from pitch information is superior to the timbral spectral feature set, and that IFCR (Intra-Feature Component Ratio) performs better than ED (Euclidean Distance) as a distance function. For the music recognition evaluation, k-NN was used as the classifier.
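A pitch-based feature of the kind the abstract describes can be sketched as a pitch-class histogram matched with a k-NN classifier. The 12-bin pitch-class folding and the simple nearest-neighbor vote below are illustrative assumptions; the paper's exact binning and template construction are not specified here.

```python
import numpy as np
from collections import Counter

def pitch_histogram(f0_hz, n_bins=12):
    """Fold a sequence of fundamental-frequency estimates (Hz) onto pitch
    classes and return a normalized histogram (assumed feature layout)."""
    voiced = np.asarray([f for f in f0_hz if f > 0], dtype=float)  # drop unvoiced frames
    midi = 69 + 12 * np.log2(voiced / 440.0)   # Hz -> MIDI note number
    pitch_class = np.round(midi).astype(int) % n_bins
    hist = np.bincount(pitch_class, minlength=n_bins).astype(float)
    return hist / hist.sum()

def knn_classify(query_hist, templates, k=1):
    """k-NN over reference (template) histograms: majority label among the
    k templates closest to the query by Euclidean distance."""
    dists = sorted((float(np.linalg.norm(query_hist - h)), label)
                   for label, h in templates)
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]
```

In a clip-based setting, each reference track in the O.S.T. album would contribute one or more template histograms, and the user's short clip is classified against them.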

The Weight Decision of Multi-dimensional Features using Fuzzy Similarity Relations and Emotion-Based Music Retrieval (퍼지 유사관계를 이용한 다차원 특징들의 가중치 결정과 감성기반 음악검색)

  • Lim, Jee-Hye;Lee, Joon-Whoan
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.5 / pp.637-644 / 2011
  • Since music has been digitalized, it can be easily purchased and delivered to users. However, it is still difficult to find music that fits someone's taste using traditional music information search based on musician, genre, title, album title, and so on. To reduce this difficulty, content-based and emotion-based music retrieval have been proposed and developed. In this paper, we propose a new method to determine the importance of the MPEG-7 low-level audio descriptors, which are multi-dimensional vectors, for emotion-based music retrieval. We measured the mutual similarities, in terms of each multi-dimensional descriptor, of musical pieces representing a pair of emotions with opposite meanings. Then rough approximation and the inter- and intra-similarity ratios from the similarity relation are used to determine the importance of each descriptor. The set of weights based on this importance defines the aggregated similarity measure by which emotion-based music retrieval is performed. In the emotion-based retrieval experiment built on content-based search, the proposed method yields a higher average number of satisfactory musical pieces than the previous method.
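One way to read the inter/intra similarity-ratio idea above is sketched below: a descriptor is important when it makes pieces of the same emotion look similar (high intra-similarity) while keeping pieces of opposite emotions apart (low inter-similarity). The exact ratio and the rough-approximation step are not reproduced here, so this form is an assumption.

```python
import numpy as np

def descriptor_importance(sims_within, sims_between, eps=1e-12):
    """Assumed importance score for one descriptor: mean similarity among
    musics of the same emotion divided by mean similarity between musics
    of opposite emotions. Larger = more discriminative."""
    return float(np.mean(sims_within) / (np.mean(sims_between) + eps))

def normalize_weights(raw):
    """Turn raw importance scores into weights that sum to 1, ready to
    combine per-descriptor similarities into one aggregated measure."""
    raw = np.asarray(raw, dtype=float)
    return raw / raw.sum()
```

The normalized weights would then multiply the per-descriptor similarities, exactly as in a weighted aggregated similarity measure.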

The Design of Object-based 3D Audio Broadcasting System (객체기반 3차원 오디오 방송 시스템 설계)

  • 강경옥;장대영;서정일;정대권
    • The Journal of the Acoustical Society of Korea / v.22 no.7 / pp.592-602 / 2003
  • This paper describes the basic structure of a novel object-based 3D audio broadcasting system. To overcome the limitations of current uni-directional audio broadcasting services, the object-based 3D audio broadcasting system is designed to provide the ability to interact with important audio objects as well as realistic 3D effects based on the MPEG-4 standard. The system is composed of six sub-modules. The audio input module collects the background sound object, which is recorded with a 3D microphone, and audio objects, which are recorded with monaural microphones or extracted through a source-separation method. The sound scene authoring module edits the 3D information of audio objects, such as acoustical characteristics, location, and directivity. It also defines the final sound scene, with a 3D background sound, that the producer intends to deliver to a receiving terminal. The encoder module encodes scene descriptors and audio objects for effective transmission. The decoder module extracts scene descriptors and audio objects by decoding the received bitstreams. The sound scene composition module reconstructs the 3D sound scene from the scene descriptors and audio objects. The 3D sound renderer module maximizes the 3D sound effects by adapting the final sound to the listener's acoustical environment. It also receives the user's controls on audio objects and sends them to the scene composition module to change the sound scene.
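The object-plus-scene-descriptor structure described above can be sketched as a small data model in which each audio object carries its own 3D information and the user can adjust one object without touching the rest of the scene. The field names below are illustrative only; they are not MPEG-4 BIFS node names.

```python
from dataclasses import dataclass, field

@dataclass
class AudioObject:
    """One sound object in the scene, with its own 3D scene information."""
    name: str
    position: tuple           # (x, y, z) location in the scene
    gain: float = 1.0         # user-controllable level
    directivity: float = 0.0  # 0.0 = omnidirectional source

@dataclass
class SoundScene:
    """Scene descriptor: a 3D background sound plus interactive objects."""
    background: str
    objects: list = field(default_factory=list)

    def set_gain(self, name, gain):
        """User interaction forwarded to scene composition: adjust one
        object while leaving the others unchanged."""
        for obj in self.objects:
            if obj.name == name:
                obj.gain = gain
```

This per-object control is precisely what a channel-based (uni-directional) broadcast cannot offer, since there the objects are already mixed down before transmission.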