• Title/Summary/Keywords: Audio feature extraction


CutPaste-Based Anomaly Detection Model using Multi Scale Feature Extraction in Time Series Streaming Data

  • Jeon, Byeong-Uk; Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 8 / pp. 2787-2800 / 2022
  • The aging society increases emergency situations involving elderly people living alone, as well as a variety of social crimes. To prevent them, techniques for detecting emergency situations through voice are being actively researched. This study proposes a CutPaste-based anomaly detection model using multi-scale feature extraction in time-series streaming data. In the proposed method, an audio file is converted into a spectrogram, which makes it possible to apply algorithms designed for image data, such as CNNs. After that, multi-scale feature extraction is applied: three feature maps produced by adaptive pooling layers with different kernel sizes are merged. By considering various types of anomalies, including point, contextual, and collective anomalies, the limitations of conventional anomaly models are addressed. Finally, CutPaste-based anomaly detection is conducted. Since the model is trained through self-supervised learning, it can detect a diversity of emergency situations as anomalies without labeling. The proposed model therefore overcomes the limitation of conventional models that classify only labeled emergency situations, and it is evaluated to perform better than a conventional anomaly detection model.
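
A minimal sketch of the spectrogram-plus-multi-scale-pooling idea described above. The library choices (librosa, PyTorch), the log-mel representation, and the pooling output sizes are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch: audio -> spectrogram -> multi-scale adaptive pooling -> merged feature.
# Library choices and pooling sizes are assumptions, not taken from the paper.
import librosa
import numpy as np
import torch
import torch.nn as nn

def multiscale_spectrogram_features(wav_path, scales=(8, 16, 32)):
    y, sr = librosa.load(wav_path, sr=16000)            # load the audio file
    mel = librosa.feature.melspectrogram(y=y, sr=sr)    # spectrogram "image"
    img = torch.from_numpy(np.log1p(mel)).float().unsqueeze(0).unsqueeze(0)

    pooled = []
    for s in scales:                                     # different output resolutions
        pool = nn.AdaptiveAvgPool2d((s, s))              # adaptive pooling layer
        pooled.append(pool(img).flatten(1))
    return torch.cat(pooled, dim=1)                      # merged multi-scale feature
```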

The Content Based Analysis According to the Composition of the Feature Parameters for the Auditory Data (오디오 데이터의 특징 파라메터 구성에 따른 내용기반 분석)

  • 한학용; 허강인; 김수훈
    • The Journal of the Acoustical Society of Korea (한국음향학회지) / Vol. 21, No. 2 / pp. 182-189 / 2002
  • This paper studies the content-based analysis and classification of audio data based on a feature-parameter pool constructed from audio signals in order to implement an audio indexing and retrieval system. Audio data are classified into various basic audio types. The paper analyzes the feature parameters available for classifying audio data and discusses how to extract them. The feature-parameter pool is then organized into indexing groups, and the degree to which the configured features appear in each audio category, together with the resulting indexing criteria, is compared and analyzed with a focus on the content of the audio data. Based on these results, a classification procedure is constructed and simulations of classifying audio signals are carried out.
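
A small sketch of what such a feature-parameter pool might look like. The specific parameters chosen here (zero-crossing rate, spectral centroid, RMS energy, MFCCs) and the use of librosa are assumptions for illustration, not the paper's exact pool.

```python
# Sketch of a feature-parameter "pool" for content-based audio classification.
# The chosen features are common examples and may differ from the paper's set.
import librosa
import numpy as np

def feature_pool(wav_path):
    y, sr = librosa.load(wav_path, sr=None)
    return {
        "zcr": librosa.feature.zero_crossing_rate(y).mean(),                    # noisiness
        "spectral_centroid": librosa.feature.spectral_centroid(y=y, sr=sr).mean(),
        "rms_energy": librosa.feature.rms(y=y).mean(),
        "mfcc_mean": librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1),  # timbre
    }
```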

Audio Context Recognition Using Signal's Reconstructed Phase Space (신호의 복원된 위상 공간을 이용한 오디오 상황 인지)

  • ;;;이승룡;구교호
    • Proceedings of the Korea Information Processing Society Conference / 2009 KIPS Fall Conference / pp. 243-244 / 2009
  • So far, much research has been conducted in the area of audio-based context recognition. Nevertheless, most of it relies on existing feature extraction techniques derived from linear signal processing, such as the Fourier transform, wavelet transform, and linear prediction. Meanwhile, environmental audio signals may contain nonlinear dynamic properties. Therefore, there is great potential in applying nonlinear dynamic signal processing techniques to audio-based context recognition.
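
The reconstructed phase space mentioned above is commonly obtained by time-delay embedding; a minimal sketch follows, with the embedding dimension and delay chosen arbitrarily for illustration.

```python
# Minimal sketch of a reconstructed phase space via time-delay embedding.
# The delay and embedding dimension are illustrative assumptions.
import numpy as np

def reconstructed_phase_space(signal, dim=3, delay=8):
    """Stack delayed copies: x(t), x(t+delay), ..., x(t+(dim-1)*delay)."""
    n = len(signal) - (dim - 1) * delay
    return np.column_stack([signal[i * delay : i * delay + n] for i in range(dim)])
```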

Audio Segmentation and Classification Using Support Vector Machine and Fuzzy C-Means Clustering Techniques (서포트 벡터 머신과 퍼지 클러스터링 기법을 이용한 오디오 분할 및 분류)

  • ;강명수;김철홍;김종면
    • The KIPS Transactions: Part B (정보처리학회논문지B) / Vol. 19B, No. 1 / pp. 19-26 / 2012
  • As multimedia information has grown rapidly in recent years, the demand for content management has increased as well. Audio segmentation and classification can be an effective way to manage multimedia content. This paper therefore proposes a highly accurate audio segmentation and classification algorithm that segments the audio signal obtained from video and classifies the segments into music, speech, speech with background music, speech with noise, and silence. The proposed algorithm uses a support vector machine (SVM) for audio segmentation. For classification, features are extracted from the segmented audio signals and fed into a fuzzy c-means (FCM) clustering algorithm, which assigns each segment to one of the classes. Segmentation and classification performance are evaluated separately: segmentation is measured with precision rate and recall rate, and classification with classification accuracy. For segmentation, performance is also compared with an existing algorithm based on a binary classifier and fuzzy clustering. Simulation results show that the proposed algorithm outperforms the existing algorithm in terms of both precision rate and recall rate.
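
A rough sketch of the two-stage idea (an SVM for segmentation decisions, fuzzy c-means for classifying the resulting segments). The feature extraction, class count, and hyperparameters are assumptions, and the FCM update loop is a generic textbook version rather than the authors' implementation.

```python
# Sketch: SVM for segmentation, fuzzy c-means (FCM) for classifying segments.
import numpy as np
from sklearn.svm import SVC

def fuzzy_c_means(X, n_clusters=5, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                   # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)               # re-normalize memberships
    return U, centers

# Segmentation step (assumed data): an SVM trained on boundary/non-boundary frames.
# X_frames, y_boundary = ...   # frame features and boundary labels
# svm = SVC(kernel="rbf").fit(X_frames, y_boundary)
```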

Improved Bimodal Speech Recognition Study Based on Product Hidden Markov Model

  • Xi, Su Mei; Cho, Young Im
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 13, No. 3 / pp. 164-170 / 2013
  • Recent years have seen growing demand for automatic speech recognition (ASR) systems that can operate robustly in acoustically noisy environments. This paper proposes an improved product hidden Markov model (HMM) for bimodal speech recognition. A two-dimensional training model is built on the trained audio-HMM and visual-HMM, reflecting the asynchronous characteristics of the audio and video streams. A weight coefficient is introduced to adjust the weights of the video and audio streams automatically according to differences in the noise environment. Experimental results show that, compared with other bimodal speech recognition approaches, this approach achieves better speech recognition performance.
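
A common way to write the stream-weighting idea described above is to raise each stream's state-observation likelihood to a weight exponent. This is a standard multistream formulation given here as an assumption, not necessarily the paper's exact equation.

```latex
% Weighted combination of audio and visual observation likelihoods in state j of a
% product/multistream HMM; \lambda is the stream weight adapted to the noise condition.
b_j\!\left(o^{a}_t, o^{v}_t\right)
  = \left[\, b^{a}_j\!\left(o^{a}_t\right) \right]^{\lambda}
    \cdot \left[\, b^{v}_j\!\left(o^{v}_t\right) \right]^{1-\lambda},
\qquad 0 \le \lambda \le 1
```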

Music Genre Classification Based on Timbral Texture and Rhythmic Content Features

  • Baniya, Babu Kaji; Ghimire, Deepak; Lee, Joonwhon
    • Proceedings of the Korea Information Processing Society Conference / 2013 KIPS Spring Conference / pp. 204-207 / 2013
  • Music genre classification is an essential component of a music information retrieval system. Two components are important for better genre classification: audio feature extraction and the classifier. This paper incorporates two different kinds of features for genre classification, timbral texture and rhythmic content features. Timbral texture contains several spectral and Mel-frequency cepstral coefficient (MFCC) features. Before choosing a timbral feature, we explore which features contribute less to genre discrimination, which facilitates the reduction of the feature dimension. For the timbral features, central moments up to the fourth order and the covariance components of mutual features are considered to improve the overall classification result. For the rhythmic content, features extracted from the beat histogram are selected. An Extreme Learning Machine (ELM) with bagging is used as the classifier for classifying the genres. Based on the proposed feature sets and classifier, experiments are performed on the well-known GTZAN dataset with ten different music genres. The proposed method achieves better classification accuracy than the existing approaches.
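
A brief sketch of the timbral statistics mentioned above (central moments up to the fourth order per MFCC coefficient). Feature choices and frame settings are illustrative assumptions, and the covariance and beat-histogram components are omitted for brevity.

```python
# Sketch: per-coefficient mean, variance, skewness, and kurtosis of MFCCs,
# i.e. central moments up to 4th order. Settings are illustrative assumptions.
import librosa
import numpy as np
from scipy import stats

def timbral_moments(wav_path, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    feats = []
    for row in mfcc:                                          # one coefficient at a time
        feats += [row.mean(), row.var(), stats.skew(row), stats.kurtosis(row)]
    return np.array(feats)                                    # fixed-length timbral vector
```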

Lip Feature Extraction using Contrast of YCbCr (YCbCr 농도 대비를 이용한 입술특징 추출)

  • 김우성; 민경원; 고한석
    • Proceedings of the IEEK Conference (대한전자공학회 학술대회논문집) / 2006 Summer Conference / pp. 259-260 / 2006
  • Since audio speech recognition is affected by noise in real environments, visual speech recognition is used to support it. For visual speech recognition, this paper suggests lip-feature extraction using two types of image segmentation and a reduced ASM. Input images are transformed into YCbCr-based images, and the lips are segmented using the contrast of Y/Cb/Cr between the lips and the face. Subsequently, a lip-shape model trained by PCA is placed on the segmented lip region, and lip features are then extracted using the ASM.

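A minimal sketch of YCbCr-contrast-based lip segmentation in the spirit of the abstract above. OpenCV, the Cr-Cb contrast measure, and the threshold value are assumptions for illustration, and the PCA/ASM refinement step is not reproduced.

```python
# Sketch: convert to YCrCb and threshold on chrominance contrast between lips and skin.
# The threshold and morphological cleanup are illustrative assumptions.
import cv2
import numpy as np

def segment_lips(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)   # OpenCV channel order: Y, Cr, Cb
    y, cr, cb = cv2.split(ycrcb)
    contrast = cr.astype(np.int16) - cb.astype(np.int16)   # lips tend to have high Cr, low Cb
    mask = (contrast > 25).astype(np.uint8) * 255           # illustrative threshold
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask                                              # binary lip-region mask
```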

Collaborative Filtering and Genre Classification for Music Recommendation

  • Byun, Jeong-Yong; Nasridinov, Aziz
    • Proceedings of the Korea Information Processing Society Conference / 2014 KIPS Fall Conference / pp. 693-694 / 2014
  • This short paper briefly describes a proposed music recommendation method that provides suitable music pieces to a listener based on both listeners' ratings and the content of the music pieces. The proposed method consists of two parts. First, the listeners' rating prediction method is a combination of the traditional user-based and item-based collaborative filtering methods. Second, the genre classification method is a combination of feature extraction and classification procedures. The feature extraction step obtains audio signal information and stores it in a data structure, while the classification step classifies the music pieces into various genres using a decision tree algorithm.
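
A minimal sketch of blending user-based and item-based collaborative-filtering predictions, as the rating-prediction step above describes. The cosine-similarity measure and the blending weight alpha are assumptions for illustration.

```python
# Sketch: blend user-based and item-based CF predictions for one (user, item) pair.
import numpy as np

def _cosine_sim(A):
    """Row-wise cosine similarity matrix."""
    A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-12)
    return A @ A.T

def predict_rating(R, user, item, alpha=0.5):
    """R: users x items rating matrix, with 0 meaning 'not rated'."""
    user_sim = _cosine_sim(R)          # user-user similarity
    item_sim = _cosine_sim(R.T)        # item-item similarity
    rated_u = R[:, item] > 0           # users who rated this item
    rated_i = R[user, :] > 0           # items this user has rated
    global_mean = R[R > 0].mean()      # fallback when no neighbours exist
    p_user = (np.average(R[rated_u, item], weights=user_sim[user, rated_u] + 1e-12)
              if rated_u.any() else global_mean)
    p_item = (np.average(R[user, rated_i], weights=item_sim[item, rated_i] + 1e-12)
              if rated_i.any() else global_mean)
    return alpha * p_user + (1 - alpha) * p_item   # blended user/item prediction
```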

Korean Traditional Music Genre Classification Using Sample and MIDI Phrases

  • Lee, JongSeol; Lee, MyeongChun; Jang, Dalwon; Yoon, Kyoungro
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 4 / pp. 1869-1886 / 2018
  • This paper proposes a MIDI- and audio-based music genre classification method for Korean traditional music. There are many traditional instruments in Korea, and most of the traditional songs played on these instruments have similar patterns and rhythms. Although music information processing tasks such as music genre classification and audio melody extraction have been studied, most studies have focused on pop, jazz, rock, and other universal genres. There are few studies on Korean traditional music because of the lack of datasets. This paper analyzes raw audio and MIDI phrases in Korean traditional music performed on Korean traditional musical instruments. The samples and MIDI classified with our system will be used to construct a database or to implement our Kontakt-based instrument library. Thus, a management system for a Korean traditional music library can be constructed using this classification system. Appropriate feature sets for raw audio and MIDI phrases are proposed, and the classification results based on machine learning algorithms such as support vector machine, multi-layer perceptron, decision tree, and random forest are outlined in this paper.
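
A short sketch of the classifier comparison mentioned above (SVM, multi-layer perceptron, decision tree, random forest) using scikit-learn. The feature matrix X and genre labels y are assumed inputs; the audio/MIDI feature extraction itself is not reproduced here.

```python
# Sketch: compare four classifiers on assumed features X and genre labels y.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def compare_classifiers(X, y):
    models = {
        "svm": SVC(kernel="rbf"),
        "mlp": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
        "decision_tree": DecisionTreeClassifier(),
        "random_forest": RandomForestClassifier(n_estimators=100),
    }
    # 5-fold cross-validated accuracy per model
    return {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```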

Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min; Tang, Jun
    • Journal of Information Processing Systems / Vol. 17, No. 4 / pp. 754-771 / 2021
  • In the task of continuous-dimension emotion recognition, the parts that highlight emotional expression differ across modes, and the influence of different modes on the emotional state also differs. This paper therefore studies the fusion of the two most important modes in emotion recognition (voice and visual expression) and proposes a dual-modal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, audio features are first extracted using prior knowledge. Then, facial expression features are extracted by the improved AlexNet network. Finally, a multimodal attention mechanism is used to fuse the facial expression and audio features, and an improved loss function is used to address the modality-missing problem, improving the robustness of the model and the performance of emotion recognition. The experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, which are superior to several comparison algorithms.
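
A rough sketch of attention-weighted fusion of audio and facial-expression feature vectors, corresponding to the fusion step described above. The feature dimension, the scoring layer, and the use of PyTorch are assumptions rather than the paper's exact architecture.

```python
# Sketch: score each modality, softmax the scores, and take a weighted sum of the features.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)                          # scores each modality vector

    def forward(self, audio_feat, visual_feat):
        feats = torch.stack([audio_feat, visual_feat], dim=1)   # (batch, 2, dim)
        weights = torch.softmax(self.score(feats), dim=1)       # (batch, 2, 1) attention weights
        return (weights * feats).sum(dim=1)                     # fused feature, (batch, dim)
```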