• Title/Summary/Keyword: Audio Feature Extraction (오디오 특징 추출)

63 search results

Search speed improved minimum audio fingerprinting using the difference of Gaussian (가우시안의 차를 이용하여 검색속도를 향상한 최소 오디오 핑거프린팅)

  • Kwon, Jin-Man;Ko, Il-Ju;Jang, Dae-Sik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.12
    • /
    • pp.75-87
    • /
    • 2009
  • This paper presents a method of creating audio fingerprints and comparing them with audio data, so that music can be distinguished by the characteristics of the audio itself. The Difference of Gaussians (DoG), generally used in image recognition, is applied to the audio data to extract the points where the music changes radically and to define the fingerprint locations. The resulting fingerprint is insensitive to changes in the sound, and the same locations as in the original fingerprint can be extracted from only a portion of the music data. By reducing the amount of fingerprint data and computation, the system is more efficient than previous systems based on the frequency domain. With this approach it becomes possible to identify copyrighted music distributed on the Internet or to present music metadata to users.
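
The abstract does not spell out the implementation, but the core idea it describes (applying a Difference of Gaussians to the audio signal to find points of rapid change and use them as fingerprint locations) can be sketched roughly as follows. The envelope computation, window sizes, and peak threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def dog_fingerprint_points(audio, sr, sigma_narrow=2.0, sigma_wide=8.0,
                           frame_len=1024, hop=512):
    """Locate candidate fingerprint positions where the audio envelope changes rapidly.

    Frames the signal, computes a log-energy envelope, smooths it with two
    Gaussians of different widths, and keeps local maxima of their difference.
    All parameter values are illustrative, not taken from the paper.
    """
    # Short-time log-energy envelope
    n_frames = 1 + (len(audio) - frame_len) // hop
    frames = np.stack([audio[i * hop: i * hop + frame_len] for i in range(n_frames)])
    envelope = np.log1p(np.sum(frames ** 2, axis=1))

    # Difference of Gaussians: responds to abrupt envelope changes
    dog = gaussian_filter1d(envelope, sigma_narrow) - gaussian_filter1d(envelope, sigma_wide)

    # Keep local maxima above a simple threshold as fingerprint anchor points
    threshold = dog.mean() + 2 * dog.std()
    peaks = [i for i in range(1, len(dog) - 1)
             if dog[i] > dog[i - 1] and dog[i] > dog[i + 1] and dog[i] > threshold]
    return np.array(peaks) * hop / sr   # anchor times in seconds
```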

Auto Frame Extraction Method for Video Cartooning System (동영상 카투닝 시스템을 위한 자동 프레임 추출 기법)

  • Kim, Dae-Jin;Koo, Ddeo-Ol-Ra
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.12
    • /
    • pp.28-39
    • /
    • 2011
  • While broadband multimedia technologies have been developing, the commercial market for digital content has also been spreading widely. In particular, the digital cartoon market, such as internet cartoons, has grown rapidly, and video cartooning has been studied continuously because hand-made cartoons are scarce and lack variety. Until now, video cartooning systems have focused on non-photorealistic rendering and word balloons, but meaningful frame extraction must take priority when a cartooning system is deployed as a service. In this paper, we propose a new automatic frame extraction method for a video cartooning system. First, we separate the video and audio tracks of a movie and extract feature parameters such as MFCC and ZCR from the audio data. The audio signal is classified into speech, music, and speech+music by comparing it against already trained audio data with a GMM classifier, which lets us locate the speech regions. For the video, we extract frames using a general scene-change detection method such as the histogram method, and then select meaningful frames by applying face detection to the extracted frames. Finally, scene-transition frames that contain a face and fall within a speech region are extracted automatically over continuous time intervals, yielding frames suitable for movie cartooning.
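
A minimal sketch of the audio front end described above (frame-level MFCC and ZCR features classified into speech / music / speech+music with per-class GMMs) might look like this; librosa and scikit-learn are assumed stand-ins for whatever tools the authors used, and the file names and parameter values are purely illustrative.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def extract_features(path, sr=16000, n_mfcc=13):
    """Frame-level MFCC + zero-crossing-rate features for one audio file."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    zcr = librosa.feature.zero_crossing_rate(y)              # (1, frames)
    return np.vstack([mfcc, zcr]).T                          # (frames, n_mfcc + 1)

# One GMM per class, trained on labelled example files (file names are illustrative).
classes = {"speech": ["speech1.wav"], "music": ["music1.wav"], "speech_music": ["mix1.wav"]}
models = {}
for label, files in classes.items():
    feats = np.vstack([extract_features(f) for f in files])
    models[label] = GaussianMixture(n_components=8).fit(feats)

def classify(path):
    """Assign the class whose GMM gives the highest average log-likelihood."""
    feats = extract_features(path)
    return max(models, key=lambda label: models[label].score(feats))
```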

Diagnosis of Parkinson's disease based on audio voice using wav2vec (Wav2vec을 이용한 오디오 음성 기반의 파킨슨병 진단)

  • Yoon, Hee-Jin
    • Journal of Digital Convergence
    • /
    • v.19 no.12
    • /
    • pp.353-358
    • /
    • 2021
  • Parkinson's disease is the second most common degenerative brain disease in old age after Alzheimer's. Its symptoms, such as hand tremor, slowed movement, and reduced cognitive function, lower the quality of daily life. The progression of Parkinson's disease can be slowed through early diagnosis. To diagnose the disease early, an algorithm was implemented that extracts features with wav2vec and determines the presence or absence of Parkinson's disease with a deep learning model (ANN). In the experiment, the accuracy was 97.47%, better than the results of diagnosing Parkinson's disease with the existing neural network approach. Working directly from audio voice files also simplifies the experimental process while yielding improved results.
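
A rough sketch of the pipeline described above (wav2vec embeddings fed to a small fully connected network) is shown below; the Hugging Face wav2vec 2.0 checkpoint and the layer sizes are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Pretrained wav2vec 2.0 encoder (checkpoint name is an assumption, not from the paper)
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h").eval()

def embed(waveform, sr=16000):
    """Mean-pool wav2vec 2.0 hidden states into one utterance-level vector."""
    inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state   # (1, frames, 768)
    return hidden.mean(dim=1)                          # (1, 768)

# Simple feed-forward classifier for healthy vs. Parkinson's (layer sizes are illustrative)
classifier = nn.Sequential(
    nn.Linear(768, 128), nn.ReLU(),
    nn.Linear(128, 2),
)
```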

High Precision Audio Contents Retrieval Method by Effective Melody Representation Method (효과적인 멜로디 표현법에 의한 고정도 오디오 콘텐츠 검색 기법)

  • Heo Sung-Phil;Suk Soo-Young;Chung Hyun-Yeol
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.147-150
    • /
    • 2004
  • Building a high-precision query-by-humming audio retrieval system requires techniques that address the problems arising on both the system side and the user side. On the user side, singing errors such as inserted or dropped notes caused by imperfect memory, unstable changes of pitch and tempo during humming, and individual differences in pitch and tempo even for the same song all occur. On the system side, even when the hummed query is perfect, it is difficult to extract from the input humming signal the exact features used for melody matching and to convert it into musical notation. Conventional audio retrieval systems have proposed various melody representations and matching methods to address these problems, but their performance is still unsatisfactory. To resolve these problems, this paper proposes an effective representation of hummed melodies and a melody matching method that is robust to errors arising on both the system and user sides.
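
The paper's own melody representation is not given in the abstract, but one common way to make a representation robust to key differences and to inserted or dropped notes, as the problems above require, is to store relative pitch intervals and compare them with an edit distance; the sketch below illustrates only that generic idea, not the authors' method.

```python
def pitch_intervals(midi_notes):
    """Represent a melody as successive semitone intervals, which removes key dependence."""
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

def edit_distance(a, b):
    """Levenshtein distance; tolerates inserted or dropped notes in a hummed query."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

# A hummed query is matched against each database melody by interval edit distance.
query = pitch_intervals([60, 62, 64, 65, 67])          # hummed C-D-E-F-G
candidate = pitch_intervals([62, 64, 66, 67, 69, 71])  # same contour in another key, one extra note
print(edit_distance(query, candidate))                 # small distance -> likely match
```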


Feature Selection for Multi-Class Genre Classification using Gaussian Mixture Model (Gaussian Mixture Model을 이용한 다중 범주 분류를 위한 특징벡터 선택 알고리즘)

  • Moon, Sun-Kuk;Choi, Tack-Sung;Park, Young-Cheol;Youn, Dae-Hee
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.10C
    • /
    • pp.965-974
    • /
    • 2007
  • In this paper, we propose a feature selection algorithm for multi-class genre classification. In the proposed algorithm, we develop a GMM separation score based on the Gaussian mixture model to measure the separability between two genres. We also improve the feature subset selection algorithm based on sequential forward selection for the multi-class case: instead of using the overall genre separability as the criterion, we use the worst genre-pair separability at each selection step. To assess the performance of the proposed algorithm, we extracted various features representing characteristics such as timbre, rhythm, and pitch, and then investigated the classification performance of GMM and k-NN classifiers on the features selected by the conventional algorithm and by the proposed algorithm. The proposed algorithm improved classification accuracy by up to 10 percent, especially in experiments with low-dimensional feature vectors.
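
The selection criterion described above (at each forward-selection step, keep the feature that maximizes the worst pairwise genre separability) can be sketched generically as follows; `separability` is a hypothetical placeholder for the paper's GMM separation score.

```python
from itertools import combinations

def select_features(candidates, genres, separability, n_select):
    """Sequential forward selection driven by the worst genre-pair separability.

    `separability(feature_subset, genre_a, genre_b)` is a placeholder for the
    paper's GMM separation score; any pairwise separability measure fits here.
    """
    selected = []
    remaining = list(candidates)
    for _ in range(n_select):
        def worst_pair_score(feature):
            subset = selected + [feature]
            return min(separability(subset, a, b) for a, b in combinations(genres, 2))
        best = max(remaining, key=worst_pair_score)   # feature that lifts the worst pair most
        selected.append(best)
        remaining.remove(best)
    return selected
```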

Detecting Prominent Content in Unstructured Audio using Intensity-based Attack/release Patterns (발생/소멸 패턴을 이용한 비정형 혼합 오디오의 주성분 검출)

  • Kim, Samuel
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.12
    • /
    • pp.224-231
    • /
    • 2013
  • Defining prominent audio content as the most informative audio content, from the users' perspective, within a given unstructured audio segment, we propose simple but robust intensity-based attack/release pattern features to detect it. We also propose a web-based annotation procedure to capture users' subjective perception, and annotated 18 hours of video clips across various genres such as cartoons, movies, and news. Experiments with a linear classification method, with models trained for speech, music, and sound effects, show promising results that vary across program genres (e.g., 86.7% weighted accuracy for speech-oriented talk shows and 49.3% for action movies).
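
The exact attack/release feature definition is not given in the abstract; the sketch below only illustrates the general idea of measuring how fast a short-time intensity envelope rises and falls around its strongest peak, with all window sizes chosen arbitrarily.

```python
import numpy as np

def attack_release_features(audio, sr, frame_len=1024, hop=512):
    """Crude attack/release descriptors from a short-time intensity envelope.

    For the strongest envelope peak, measure how fast intensity rises before it
    (attack) and how fast it falls after it (release). Parameters are illustrative.
    """
    n_frames = 1 + (len(audio) - frame_len) // hop
    frames = np.stack([audio[i * hop: i * hop + frame_len] for i in range(n_frames)])
    intensity = np.sqrt(np.mean(frames ** 2, axis=1))   # RMS envelope

    peak = int(np.argmax(intensity))
    frame_rate = sr / hop
    attack = (intensity[peak] - intensity[:peak + 1].min()) * frame_rate / max(peak, 1)
    tail = intensity[peak:]
    release = (intensity[peak] - tail.min()) * frame_rate / max(len(tail) - 1, 1)
    return {"attack_slope": attack, "release_slope": release}
```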

Similar Movie Contents Retrieval Using Peak Features from Audio (오디오의 Peak 특징을 이용한 동일 영화 콘텐츠 검색)

  • Chung, Myoung-Bum;Sung, Bo-Kyung;Ko, Il-Ju
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.11
    • /
    • pp.1572-1580
    • /
    • 2009
  • Combing through entire video files to recognize and retrieve matching movies requires a great deal of time and memory. Instead, most current movie-matching methods analyze only part of each movie's video-image information, but these methods share a critical problem: videos that differ only in resolution or codec are erroneously recognized as different. This paper proposes an audio-information-based search algorithm for identifying similar movies. The proposed method builds and searches a database of each movie's spectral peak information, which remains relatively stable under changes in bit rate, codec, or sample rate. The method achieved a 92.1% search success rate on a set of 1,000 video files whose audio bit rate had been altered or which had been deliberately re-encoded with a different codec.
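
A minimal sketch of the kind of spectral-peak key the abstract describes is given below: per-frame FFT peaks, which tend to survive re-encoding at different bit rates or codecs. The frame size and the number of peaks kept per frame are assumptions, not values from the paper.

```python
import numpy as np

def spectral_peaks(audio, sr, n_fft=2048, hop=1024, peaks_per_frame=3):
    """Frequency bins of the strongest spectral peaks in each frame.

    Strong peaks are relatively stable under re-encoding (bit rate, codec,
    sample rate changes), which makes them usable as a compact search key.
    """
    window = np.hanning(n_fft)
    keys = []
    for start in range(0, len(audio) - n_fft, hop):
        spectrum = np.abs(np.fft.rfft(audio[start:start + n_fft] * window))
        top = np.argsort(spectrum)[-peaks_per_frame:]   # strongest bins in this frame
        keys.append(tuple(sorted(int(b) for b in top)))
    return keys   # sequence of peak-bin tuples, one per frame
```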


A Personal Video Event Classification Method based on Multi-Modalities by DNN-Learning (DNN 학습을 이용한 퍼스널 비디오 시퀀스의 멀티 모달 기반 이벤트 분류 방법)

  • Lee, Yu Jin;Nang, Jongho
    • Journal of KIISE
    • /
    • v.43 no.11
    • /
    • pp.1281-1297
    • /
    • 2016
  • In recent years, personal videos have grown tremendously due to the substantial increase in the use of smart devices and networking services, with which users create and share video content easily and with few restrictions. Because videos generally contain multiple modalities and the frame data varies over time, taking both into account can significantly improve event detection performance. This paper proposes an event detection method in which high-level features are first extracted from the multiple modalities in the videos, the features are rearranged in time sequence, and the association between the modalities is then learned with a DNN to produce a personal video event detector. In the proposed method, audio and image data are first synchronized and extracted, and then fed into GoogLeNet and a Multi-Layer Perceptron (MLP), respectively, to obtain high-level features. The results are rearranged in time sequence, and each video is processed into a single feature vector for DNN training.
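
A rough sketch of the fusion stage described above (time-ordered image and audio features combined by a fully connected network) is shown below; the feature dimensions, number of segments, and layer sizes are assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class MultiModalEventClassifier(nn.Module):
    """Late-fusion DNN over time-ordered image and audio features.

    Each video is represented by T segments; every segment contributes an
    image feature (e.g., from GoogLeNet) and an audio feature. Dimensions
    and layer sizes below are illustrative, not taken from the paper.
    """
    def __init__(self, img_dim=1024, aud_dim=128, n_segments=10, n_events=8):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear((img_dim + aud_dim) * n_segments, 512), nn.ReLU(),
            nn.Linear(512, 128), nn.ReLU(),
            nn.Linear(128, n_events),
        )

    def forward(self, img_feats, aud_feats):
        # img_feats: (batch, T, img_dim), aud_feats: (batch, T, aud_dim)
        x = torch.cat([img_feats, aud_feats], dim=-1)   # align modalities per segment
        return self.fuse(x.flatten(start_dim=1))        # event logits
```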

Design and Implementation of a Multimedia Retrieval System (멀티미디어 검색 시스템의 설계 및 구현)

  • 노승민;황인준
    • Journal of KIISE:Databases
    • /
    • v.30 no.5
    • /
    • pp.494-506
    • /
    • 2003
  • Recently, the explosive popularity of multimedia information has triggered the need to retrieve multimedia content efficiently from databases containing audio, video, and images. In this paper, we propose an XML-based retrieval scheme and a data model that complement the weak aspects of annotation-based and content-based retrieval methods. The properties and hierarchical structure of image and video data are represented and manipulated based on the Multimedia Description Schemes (MDS) conforming to the MPEG-7 standard. For audio content, pitch contours extracted from acoustic features are converted into UDR strings. In particular, users' access patterns and frequencies are utilized in constructing an index to improve retrieval performance. We implemented a prototype system and evaluated its performance through various experiments.
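
The UDR conversion mentioned for audio content can be illustrated directly: each note in the pitch contour is encoded as U (up), D (down), or R (repeat) relative to the previous note. The tolerance value below is an assumption, not a figure from the paper.

```python
def to_udr(pitches, tolerance=0.5):
    """Encode a pitch contour as a string of U (up), D (down), R (repeat).

    `tolerance` (in the same unit as the pitch values, e.g. semitones) decides
    when two consecutive pitches count as a repeat; its value is illustrative.
    """
    symbols = []
    for prev, cur in zip(pitches, pitches[1:]):
        if cur - prev > tolerance:
            symbols.append("U")
        elif prev - cur > tolerance:
            symbols.append("D")
        else:
            symbols.append("R")
    return "".join(symbols)

print(to_udr([60, 62, 62, 59, 64]))   # "URDU"
```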

Video genre classification using Multimodal features (멀티모달 특징을 이용한 비디오 장르 분류)

  • Jin Sung Ho;Bea Tea Meon;Choo Jin Ho;Ro Yong Man;Kang Kyeongok
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2003.11a
    • /
    • pp.219-222
    • /
    • 2003
  • This paper proposes a video genre identification method using multimodal features. Video genre identification not only classifies vast amounts of broadcast content more efficiently but can also serve as a preprocessing step for automatic video summarization, so its necessity and importance are growing. The proposed method extracts multimodal features by applying MPEG-7 audio and visual descriptors, and evaluates performance with a classifier designed for genre classification on a database composed of various broadcast video genres.
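
The abstract only says that MPEG-7 audio and visual descriptors are fused and a classifier is trained, so the sketch below is deliberately generic: concatenate the per-clip descriptor vectors and train a standard classifier (an SVM is used here purely as a stand-in for whatever classifier the paper designed).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse(audio_desc, visual_desc):
    """Concatenate per-clip audio and visual descriptor vectors into one multimodal vector."""
    return np.concatenate([audio_desc, visual_desc])

# X: one fused descriptor per clip, y: genre labels (news, drama, sports, ...)
genre_classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# genre_classifier.fit(X_train, y_train)
# predicted_genres = genre_classifier.predict(X_test)
```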
