• Title/Summary/Keyword: Audio Information Retrieval

Search Result 73

A Study on Visualization of Musical Rhythm Based on Music Information Retrieval: A Visualization Application Using an Onset Detection Algorithm (Music Information Retrieval(MIR)을 활용한 음악적 리듬의 시각화 연구 -Onset 검출(Onset Detection) 알고리즘에 의한 시각화 어플리케이션)

  • Che, Swann
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2009.02a / pp.1075-1080 / 2009
  • This paper discusses a method for automatically analyzing rhythm information in audio content using Music Information Retrieval (MIR) techniques and visualizing the results. In particular, by introducing a simple sound-visualization application built on MIR, it aims to show that music information analysis can be applied in various ways in design and the visual arts. Attempts to capture musical information in visual art began in earnest with the avant-garde painters of the early twentieth century. Since the 1980s, the rapid development of computer technology has made it easy to handle sound and image together in the digital domain, and a variety of audiovisual artworks have emerged accordingly. MIR is a DSP (Digital Signal Processing) technology for analyzing musical information in audio content, and research on it has been active recently alongside the expansion of the digital content market. On the web and on mobile platforms in particular, various commercial applications are already in use, with music-recognition applications such as query-by-humming being representative examples. This paper examines algorithms that analyze musical rhythm, focusing on onset detection, and introduces an example application that visualizes the results according to basic principles of visual composition.

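
The onset-detection step this entry centers on can be sketched with spectral flux, one of the standard onset-detection functions in the MIR literature. This is an illustrative sketch, not the paper's own algorithm; the frame size, hop, and threshold are assumed values.

```python
import numpy as np

def spectral_flux_onsets(signal, frame_size=1024, hop=512, threshold=0.1):
    """Detect onset frames via half-wave-rectified spectral flux,
    a standard MIR technique for locating rhythmic events."""
    window = np.hanning(frame_size)
    prev_mag = np.zeros(frame_size // 2 + 1)
    flux = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        mag = np.abs(np.fft.rfft(signal[start:start + frame_size] * window))
        # Sum only the increases in magnitude: rises in energy signal an onset
        flux.append(np.maximum(mag - prev_mag, 0.0).sum())
        prev_mag = mag
    flux = np.asarray(flux)
    if flux.max() > 0:
        flux = flux / flux.max()
    # Pick local maxima above the normalized threshold as onset frames
    return [i for i in range(1, len(flux) - 1)
            if flux[i] > threshold and flux[i] >= flux[i - 1] and flux[i] >= flux[i + 1]]
```

Each returned frame index maps back to time as `i * hop / sample_rate`, which a visualization layer can then render as rhythmic marks.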

Similar Movie Contents Retrieval Using Peak Features from Audio (오디오의 Peak 특징을 이용한 동일 영화 콘텐츠 검색)

  • Chung, Myoung-Bum;Sung, Bo-Kyung;Ko, Il-Ju
    • Journal of Korea Multimedia Society / v.12 no.11 / pp.1572-1580 / 2009
  • Combing through entire video files to recognize and retrieve matching movies requires a great deal of time and memory. Instead, most current movie-matching methods analyze only part of each movie's video-image information. These methods still share a critical problem: they erroneously recognize as different those matching videos that have merely been altered in resolution or converted with a different codec. This paper proposes an audio-information-based search algorithm by which similar movies can be identified. The proposed method builds and searches a database of each movie's spectral-peak information, which remains relatively stable even when the bit-rate, codec, or sample-rate changes. The method showed a 92.1% search success rate on a set of 1,000 video files whose audio bit-rate had been altered or which had been deliberately re-encoded with a different codec.

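
The peak-based matching idea can be sketched as follows. The frame parameters, the top-5 peak count, and the Jaccard comparison are illustrative assumptions, not the paper's exact features; what the sketch shares with the abstract is that peak *positions* are more stable under amplitude and encoding changes than raw magnitudes.

```python
import numpy as np

def peak_signature(signal, frame_size=2048, hop=1024, n_peaks=5):
    """Per-frame set of the strongest spectral-bin indices."""
    window = np.hanning(frame_size)
    sig = []
    for start in range(0, len(signal) - frame_size + 1, hop):
        mag = np.abs(np.fft.rfft(signal[start:start + frame_size] * window))
        # Keep only the bin indices of the n_peaks largest magnitudes
        sig.append(frozenset(np.argsort(mag)[-n_peaks:].tolist()))
    return sig

def signature_similarity(a, b):
    """Mean Jaccard overlap of corresponding frames, in [0, 1]."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(len(x & y) / len(x | y) for x, y in zip(a, b)) / n
```

A database of such signatures can then be scanned for the query whose similarity exceeds a decision threshold.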

Same music file recognition method by using similarity measurement among music feature data (음악 특징점간의 유사도 측정을 이용한 동일음원 인식 방법)

  • Sung, Bo-Kyung;Chung, Myoung-Beom;Ko, Il-Ju
    • Journal of the Korea Society of Computer and Information / v.13 no.3 / pp.99-106 / 2008
  • Recently, digital music retrieval has come into use in many fields (web portals, audio service sites, etc.). In existing systems, music metadata are used for retrieval, but if the metadata are wrong or missing, it is hard to obtain accurate results. Content-based information retrieval, which uses the music itself, has been researched to solve this problem. In this paper, we propose a same-music recognition method based on similarity measurement. Feature data are extracted from the waveform of each piece using simplified MFCC (Mel-Frequency Cepstral Coefficients), and the similarity between digital music files is measured using DTW (Dynamic Time Warping), a technique widely used in the vision and speech recognition fields. To prove the proposed same-music recognition method, we ran 500 trials over 1,000 songs of the same genre collected at random, and every trial succeeded; 500 of the music files were generated from 60 source recordings by re-encoding them with different codecs and bit-rates. We thereby proved that similarity measurement using DTW can recognize identical music.

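
The DTW similarity step the abstract relies on can be sketched directly (the simplified-MFCC extraction is omitted here; any per-frame feature sequence will do):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two feature sequences.
    Tolerates tempo/length differences that a rigid frame-by-frame
    comparison would penalize."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(np.atleast_1d(np.asarray(a[i - 1], float)
                                             - np.asarray(b[j - 1], float)))
            # Extend the cheapest of the three admissible warping moves
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])
```

Two encodings of the same recording yield a near-zero distance even when re-encoding shifts or stretches the feature sequence; an unrelated song yields a much larger one.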

A Study on the Signal Processing for Content-Based Audio Genre Classification (내용기반 오디오 장르 분류를 위한 신호 처리 연구)

  • 윤원중;이강규;박규식
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.6 / pp.271-278 / 2004
  • In this paper, we propose a content-based audio genre classification algorithm that automatically classifies a query audio clip into one of five genres (Classic, Hiphop, Jazz, Rock, Speech) using a digital signal processing approach. From a 20-second query audio file, the signal is segmented into 23 ms frames with a non-overlapping Hamming window, and 54-dimensional feature vectors, including Spectral Centroid, Rolloff, Flux, LPC, and MFCC, are extracted from each query. For classification, k-NN, Gaussian, and GMM classifiers are used. To choose optimal features from the 54-dimensional vectors, SFS (Sequential Forward Selection) is applied to obtain 10-dimensional optimal features, which are then used in the genre classification algorithm. The experimental results verify the superior performance of the proposed method, which provides a nearly 90% success rate for genre classification, a 10%~20% improvement over previous methods. For a realistic user environment, feature vectors are extracted from random intervals of the query audio; this yields an overall 80% success rate, except in the extreme cases of the very beginning and end of the query audio file.
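
Two of the listed timbral features, Spectral Centroid and Rolloff, can be computed per frame as below. The 85% rolloff fraction is a conventional choice and an assumption here, not necessarily the paper's setting:

```python
import numpy as np

def frame_features(frame, sr):
    """Spectral centroid (magnitude-weighted mean frequency) and
    85% rolloff (frequency below which 85% of the magnitude lies)."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    total = mag.sum()
    if total == 0:
        return 0.0, 0.0
    centroid = float((freqs * mag).sum() / total)
    rolloff = float(freqs[np.searchsorted(np.cumsum(mag), 0.85 * total)])
    return centroid, rolloff
```

Stacking such per-frame values (together with flux, LPC, and MFCC terms) into one long vector, then pruning it with SFS, mirrors the 54-to-10-dimension pipeline the abstract describes.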

A Similarity Computation Algorithm Based on the Pitch and Rhythm of Music Melody (선율의 음높이와 리듬 정보를 이용한 음악의 유사도 계산 알고리즘)

  • Mo, Jong-Sik;Kim, So-Young;Ku, Kyong-I;Han, Chang-Ho;Kim, Yoo-Sung
    • The Transactions of the Korea Information Processing Society / v.7 no.12 / pp.3762-3774 / 2000
  • Advances in computer hardware and information processing technologies have raised the need for multimedia information retrieval systems. To date, multimedia information systems have been developed for text and image information; nowadays, systems for video and audio information, especially musical information, are growing more and more. Recent music information retrieval systems support not only retrieval based on meta-information such as composer and title but also content-based retrieval, which relies on a similarity value between the user query and the music stored in the music database. In this paper, therefore, we develop a similarity computation algorithm in which the pitches and lengths of each corresponding pair of notes are the fundamental factors for computing the similarity between pieces of musical information. We also run experiments on the proposed algorithm to validate its appropriateness. The experimental results show that the proposed similarity computation algorithm can correctly determine whether two music files are analogous to each other based on their melodies.

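
A minimal sketch of note-pair similarity over pitch and duration follows; the weights and the within-an-octave pitch normalization are illustrative assumptions, not the paper's exact formula:

```python
def note_pair_score(n1, n2, w_pitch=0.6, w_len=0.4):
    """Score one pair of (midi_pitch, duration) notes in [0, 1]."""
    (p1, d1), (p2, d2) = n1, n2
    pitch_s = max(0.0, 1.0 - abs(p1 - p2) / 12.0)  # fades to 0 an octave apart
    len_s = min(d1, d2) / max(d1, d2)              # ratio of the two durations
    return w_pitch * pitch_s + w_len * len_s

def melody_similarity(a, b):
    """Average pairwise score over the common prefix of two note lists."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(note_pair_score(a[i], b[i]) for i in range(n)) / n
```

Because both pitch and length contribute, a melody transposed slightly still scores high, while one with different rhythm and contour scores low.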

A Study on the Extension of the Description Elements for Audio-visual Archives (시청각기록물의 기술요소 확장에 관한 연구)

  • Nam, Young-Joon;Moon, Jung-Hyun
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.21 no.4 / pp.67-80 / 2010
  • The output and usage of audio-visual materials have increased sharply as the information industry advances and diverse archives have become available. However, audio-visual archives are still often regarded as separate records of merely collateral value. The organizations holding these materials have very weak systems in areas such as categorization and archiving methods. Moreover, the management systems vary among organizations, so users have difficulty retrieving and using audio-visual materials. This study therefore examined the feasibility of synchronized management of audio-visual archives by comparing the descriptive elements used for such archives in major domestic agencies. On that basis, it evaluates the organizations' metadata elements and the feasibility of synchronized management, discusses the benefits for efficient management, retrieval, and service of AV materials, and proposes improvements to the descriptive elements of the metadata.

Content Based Classification of Audio Signal using Discriminant Function (식별함수를 이용한 오디오신호의 내용기반 분류)

  • Kim, Young-Sub;Lee, Kwang-Seok;Koh, Si-Young;Hur, Kang-In
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.06a / pp.201-204 / 2007
  • In this paper, we study content-based analysis and classification of auditory signals according to the composition of a feature-parameter pool, with the aim of implementing an auditory indexing and search system. Auditory data are first classified into primitive auditory types. We describe the analysis and feature-extraction methods for the parameters available for this classification, compose the feature-parameter pool at the indexing-group level, and then compare and analyze the auditory data with respect to inclusion level and indexing criteria for the audio categories. Based on these results, we compose feature vectors of the audio data according to the classification categories and run classification experiments using a discriminant function.

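
As a minimal illustration of classification by discriminant function, a nearest-mean rule assigns a feature vector to the class maximizing g_k(x) = -||x - mu_k||^2. This is a generic textbook discriminant (implicitly assuming equal priors and covariances), not necessarily the function the paper uses:

```python
import numpy as np

def classify(x, class_means):
    """Assign x to the class whose discriminant g_k(x) = -||x - mu_k||^2
    is largest, i.e. the class with the nearest mean vector."""
    x = np.asarray(x, float)
    scores = [-np.sum((x - np.asarray(m, float)) ** 2) for m in class_means]
    return int(np.argmax(scores))
```

The class means would be estimated from labeled training vectors of each audio category.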

A Threshold Adaptation based Voice Query Transcription Scheme for Music Retrieval (음악검색을 위한 가변임계치 기반의 음성 질의 변환 기법)

  • Han, Byeong-Jun;Rho, Seung-Min;Hwang, Een-Jun
    • The Transactions of The Korean Institute of Electrical Engineers / v.59 no.2 / pp.445-451 / 2010
  • This paper presents a threshold-adaptation-based voice query transcription scheme for music information retrieval. The proposed scheme analyzes a monophonic voice signal and generates its transcription for diverse music retrieval applications. For accurate transcription, we propose several advanced features: (i) the Energetic Feature eXtractor (EFX) for onset, peak, and transient-area detection; (ii) Modified Windowed Average Energy (MWAE), which defines multiple small but coherent windows with local threshold values, as an offset detector; and (iii) the Circular Average Magnitude Difference Function (CAMDF) for accurate acquisition of the fundamental frequency (F0) of each frame. To evaluate the performance of the proposed scheme, we implemented a prototype music transcription system called AMT2 (Automatic Music Transcriber version 2) and carried out various experiments using the QBSH corpus [1], adopted in the MIREX 2006 contest data set. Experimental results show that the proposed scheme improves transcription performance.
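
Of the three features, CAMDF is the most self-contained to sketch: the circular average magnitude difference dips at lags matching the pitch period, and its minimum over a plausible lag range gives F0. The 80-500 Hz search range below is an assumption for illustration:

```python
import numpy as np

def camdf_f0(frame, sr, fmin=80.0, fmax=500.0):
    """Estimate F0 from one frame via the Circular Average Magnitude
    Difference Function: D(tau) = mean |x[n] - x[(n + tau) mod N]|,
    which is minimal when tau matches the pitch period."""
    x = np.asarray(frame, float)
    lo, hi = int(sr / fmax), int(sr / fmin)
    # np.roll implements the circular (mod N) index wraparound
    d = [np.abs(x - np.roll(x, -tau)).mean() for tau in range(lo, hi + 1)]
    return sr / (lo + int(np.argmin(d)))
```

Running this per frame over the segmented voice signal yields the F0 track that the transcription stage then quantizes into notes.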

Story-based Information Retrieval (스토리 기반의 정보 검색 연구)

  • You, Eun-Soon;Park, Seung-Bo
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.81-96 / 2013
  • Video information retrieval has become a very important issue because of the explosive increase in video data from Web content development. Meanwhile, content-based video analysis using visual features has been the main source for video information retrieval and browsing. Content in video can be represented with content-based analysis techniques, which can extract various features from audio-visual data such as frames, shots, colors, texture, or shape. Moreover, similarity between videos can be measured through content-based analysis. However, a movie that is one of typical types of video data is organized by story as well as audio-visual data. This causes a semantic gap between significant information recognized by people and information resulting from content-based analysis, when content-based video analysis using only audio-visual data of low level is applied to information retrieval of movie. The reason for this semantic gap is that the story line for a movie is high level information, with relationships in the content that changes as the movie progresses. Information retrieval related to the story line of a movie cannot be executed by only content-based analysis techniques. A formal model is needed, which can determine relationships among movie contents, or track meaning changes, in order to accurately retrieve the story information. Recently, story-based video analysis techniques have emerged using a social network concept for story information retrieval. These approaches represent a story by using the relationships between characters in a movie, but these approaches have problems. First, they do not express dynamic changes in relationships between characters according to story development. Second, they miss profound information, such as emotions indicating the identities and psychological states of the characters. Emotion is essential to understanding a character's motivation, conflict, and resolution. 
Third, they do not take account of the events and background that contribute to the story. This paper therefore reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to story analysis based on social networks, and suggests the necessary elements, such as character, background, and events, based on narrative structures introduced in the literature. First, we extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical attributes of WordNet, an extensive English thesaurus that offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms), and we present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be predetermined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of the character's dialogue in the script and track the character's inner nature using social network concepts such as in-degree (incoming links) and out-degree (outgoing links); we also propose a method that traces indices such as degree, in-degree, and out-degree of the character network as the movie progresses. Finally, the spatial background where characters meet and where events take place is an important element of the story. We take advantage of the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie; important places where main characters first meet or stay for long periods can be read off this map. In view of these three elements (character, event, background), we extract a variety of story-related information and evaluate the performance of the proposed method.
We can track story information extracted over time and detect a change in the character's emotion or inner nature, spatial movement, and conflicts and resolutions in the story.
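
The in-degree/out-degree bookkeeping described above reduces to counting directed dialogue edges. A minimal sketch over (speaker, listener) pairs, using Pretty Woman character names purely for illustration:

```python
def dialogue_degrees(edges):
    """Out-degree (lines spoken to others) and in-degree (lines received)
    per character, from directed (speaker, listener) dialogue pairs."""
    out_deg, in_deg = {}, {}
    for speaker, listener in edges:
        out_deg[speaker] = out_deg.get(speaker, 0) + 1
        in_deg[listener] = in_deg.get(listener, 0) + 1
    return out_deg, in_deg
```

Recomputing the counts over a sliding window of scenes turns these static figures into the time series used to trace a character's changing position in the network.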

Design and Implementation of Video Documents Management System (비디오 문서 관리시스템의 설계 및 구현)

  • Kweon, Jae-Gil;Bae, Jong-Min
    • The Transactions of the Korea Information Processing Society / v.7 no.8 / pp.2287-2297 / 2000
  • Video documents, which contain audio-visual and other semantic information, have complex relationships among their media. While user requests for topic retrieval or specific-region retrieval are increasing, it is difficult to meet these requests with existing design methodologies. To support systematic management and the various retrieval capabilities of video documents, we must formulate a structured, systematic metadata model using semantic and structural information that is abstracted automatically or manually. This paper suggests a generic metadata model with which we analyze the characteristics of video documents, support various query types, and provide a generic framework for video applications. We propose the Generic Integrated Management Model (GIMM) for generic metadata, design a video document management system (VDMS), and implement it using GIMM.
