• Title/Summary/Keyword: Speech/Music Classification

Analysis and Implementation of Speech/Music Classification for 3GPP2 SMV Based on GMM (3GPP2 SMV의 실시간 음성/음악 분류 성능 향상을 위한 Gaussian Mixture Model의 적용)

  • Song, Ji-Hyun;Lee, Kye-Hwan;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.8
    • /
    • pp.390-396
    • /
    • 2007
  • In this letter, we propose a novel approach to improving the performance of speech/music classification for the selectable mode vocoder (SMV) of 3GPP2, using a Gaussian mixture model (GMM) trained with the expectation-maximization (EM) algorithm. We first present an effective analysis of the features and the classification method adopted in the conventional SMV. Feature vectors for the GMM are then selected from relevant parameters of the SMV for efficient speech/music classification. The performance of the proposed algorithm is evaluated under various conditions and yields better results than the conventional SMV scheme.
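
The class-conditional GMM decision rule described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 4-dimensional synthetic features stand in for the SMV parameters, and the component count is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical 4-dim feature vectors standing in for selected SMV parameters
speech_feats = rng.normal(0.0, 1.0, size=(500, 4))
music_feats = rng.normal(3.0, 1.0, size=(500, 4))

# One GMM per class, fitted with the EM algorithm
gmm_speech = GaussianMixture(n_components=4, random_state=0).fit(speech_feats)
gmm_music = GaussianMixture(n_components=4, random_state=0).fit(music_feats)

def classify(frame):
    """Pick the class whose GMM assigns the higher log-likelihood."""
    ll_s = gmm_speech.score_samples(frame[None, :])[0]
    ll_m = gmm_music.score_samples(frame[None, :])[0]
    return "speech" if ll_s > ll_m else "music"
```

At run time each frame's feature vector is scored under both models and the more likely class is chosen.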

Target signal detection using MUSIC spectrum in noise environments (MUSIC 스펙트럼을 이용한 잡음환경에서의 목표 신호 구간 검출)

  • Park, Sang-Jun;Jeong, Sang-Bae
    • Phonetics and Speech Sciences
    • /
    • v.4 no.3
    • /
    • pp.103-110
    • /
    • 2012
  • In this paper, a target signal detection method using the multiple signal classification (MUSIC) algorithm is proposed. The MUSIC algorithm is a subspace-based direction-of-arrival (DOA) estimation method. Using the inverse of the eigenvalue-weighted eigenspectra, the algorithm detects the DOAs of multiple sources. To apply the algorithm to target signal detection for generalized sidelobe canceller (GSC)-based beamforming, we utilize its spectral response at the DOA of the target source in noisy conditions. The performance of the proposed target signal detection method is compared with those of normalized cross-correlation (NCC), fixed beamforming, and the power ratio method. Experimental results show that the proposed algorithm significantly outperforms the conventional ones in receiver operating characteristic (ROC) curves.
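
The subspace step behind MUSIC can be sketched for a narrowband uniform linear array. This is a generic textbook sketch under simulated data, not the paper's multi-microphone setup; array geometry, snapshot count, and noise level are assumptions.

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.
    X: (n_mics, n_snapshots) complex snapshots; d: spacing in wavelengths."""
    n_mics = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    eigval, eigvec = np.linalg.eigh(R)        # eigenvalues in ascending order
    En = eigvec[:, : n_mics - n_sources]      # noise subspace
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d * np.arange(n_mics) * np.sin(theta))
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum.append(1.0 / denom)          # peaks where a ⟂ noise subspace
    return np.array(spectrum)

# Simulate one source at 20 degrees on an 8-mic array
rng = np.random.default_rng(1)
n_mics, snapshots, theta0 = 8, 200, np.deg2rad(20.0)
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(n_mics) * np.sin(theta0))
s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
X = np.outer(a0, s) + 0.05 * (rng.normal(size=(n_mics, snapshots))
                              + 1j * rng.normal(size=(n_mics, snapshots)))
angles = np.arange(-90, 91)
P = music_spectrum(X, n_sources=1, angles_deg=angles)
est = angles[np.argmax(P)]
```

The pseudospectrum peaks where a candidate steering vector is orthogonal to the noise subspace; the abstract's detection method reads off this response at the target DOA.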

On-Line Audio Genre Classification using Spectrogram and Deep Neural Network (스펙트로그램과 심층 신경망을 이용한 온라인 오디오 장르 분류)

  • Yun, Ho-Won;Shin, Seong-Hyeon;Jang, Woo-Jin;Park, Hochong
    • Journal of Broadcast Engineering
    • /
    • v.21 no.6
    • /
    • pp.977-985
    • /
    • 2016
  • In this paper, we propose a new method for on-line genre classification using a spectrogram and a deep neural network. For on-line processing, the proposed method takes an audio signal over a 1-second period as input and classifies its genre into one of three classes: speech, music, and effect. To keep the processing general, it uses the spectrogram as a feature vector instead of MFCCs, which have been widely used for audio analysis. We measure genre classification performance on real TV audio signals and confirm that the proposed method outperforms the conventional method for all genres. In particular, it reduces the rate of misclassification between music and effect, which often occurs with the conventional method.
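
The spectrogram input described here can be sketched with a plain short-time Fourier transform. This is a minimal sketch, not the paper's front end; the sampling rate, window length, and hop size are assumptions.

```python
import numpy as np

def spectrogram(signal, win=512, hop=256):
    """Log-magnitude spectrogram of a short clip, usable as a 2-D
    feature map in place of MFCCs."""
    window = np.hanning(win)
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack([signal[i * hop : i * hop + win] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))   # one spectrum per frame
    return np.log(mag + 1e-10)                  # log compression

clip = np.random.default_rng(2).normal(size=16000)  # 1 sec at an assumed 16 kHz
S = spectrogram(clip)  # shape: (frames, frequency bins)
```

Each 1-second decision window then yields a (frames × bins) image that the network classifies.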

Direction-of-Arrival Estimation of Speech Signals Based on MUSIC and Reverberation Component Reduction (MUSIC 및 반향 성분 제거 기법을 이용한 음성신호의 입사각 추정)

  • Chang, Hyungwook;Jeong, Sangbae;Kim, Youngil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.6
    • /
    • pp.1302-1309
    • /
    • 2014
  • In this paper, we propose a method to improve the performance of direction-of-arrival (DOA) estimation of a speech source using a multiple signal classification (MUSIC)-based algorithm. The proposed algorithm utilizes a complex-coefficient band-pass filter to generate the narrowband signals for analysis. Reverberation component reduction and a quadratic-function-based response approximation in the MUSIC spatial spectrum are also utilized to improve the accuracy of DOA estimation. Experimental results show that the proposed method outperforms the well-known generalized cross-correlation (GCC)-based DOA estimation algorithm in both estimation error and success rate.
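
The quadratic-function-based response approximation mentioned above is, in general form, a parabolic fit through the spectrum peak and its two neighbours to refine the angle estimate beyond the grid resolution. The following is a generic sketch of that refinement, not the paper's exact formulation; the synthetic spectrum is an assumption.

```python
import numpy as np

def quadratic_peak(angles, spectrum):
    """Refine a DOA estimate by fitting a parabola through the spectrum
    peak and its two neighbours."""
    k = int(np.argmax(spectrum))
    if k == 0 or k == len(spectrum) - 1:
        return float(angles[k])               # no neighbour on one side
    y0, y1, y2 = spectrum[k - 1], spectrum[k], spectrum[k + 1]
    # Vertex offset (in grid steps) of the parabola through the three points
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    step = angles[1] - angles[0]
    return float(angles[k] + delta * step)

angles = np.arange(-90.0, 91.0)                          # 1-degree grid
spectrum = np.exp(-0.5 * ((angles - 20.3) / 5.0) ** 2)   # synthetic peak at 20.3°
refined = quadratic_peak(angles, spectrum)
```

The refined estimate lands between grid points, which is why the approximation reduces estimation error on a coarse angular grid.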

Voice range differences in vowels by voice classification among male students of popular music vocals (대중가요 보컬 전공 남학생의 성종에 따른 모음 간 음역 차이)

  • Il-Song Ji;Jaeock Kim
    • Phonetics and Speech Sciences
    • /
    • v.16 no.2
    • /
    • pp.37-47
    • /
    • 2024
  • This study was conducted on 27 male students majoring in or preparing for popular music vocals to determine whether they were aware of their voice classification and vocal range. Additionally, differences in the fundamental frequency and average speaking fundamental frequency were compared among the voice classifications. Moreover, considering that they may differ in their ability to produce high frequencies depending on the vowel, differences in voice ranges among the cardinal vowels, /a/, /i/, and /u/, were examined, and differences in voice ranges between vowels were compared by voice classification. The results showed that more than half of the male students majoring in or preparing for popular music vocals were not accurately aware of their voice types. In addition, statistically significant differences were found in the maximum fundamental frequency and frequency range among vowels, indicating differences in the voice range that can be produced depending on the vowel type. In particular, the voice range decreased in the following order: /a/>/u/>/i/. This suggests that while the vowel /a/ is easier to articulate in the high register compared to other vowels, vowels /u/ and /i/ as high vowels involve narrowing of the oral cavity due to the raised position of the tongue, accompanied by raising of the larynx, resulting in a decrease in voice range and difficulty in vocalizing in the high register.

Audio genre classification using deep learning (딥 러닝을 이용한 오디오 장르 분류)

  • Shin, Seong-Hyeon;Jang, Woo-Jin;Yun, Ho-won;Park, Ho-Chong
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.06a
    • /
    • pp.80-81
    • /
    • 2016
  • In this paper, we propose an audio genre classification technique using deep learning. Three genres are defined for classification: music, speech, and effect. Conventional GMM-based genre classification shows lower recognition rates for music and effect than for speech, yielding uneven performance across genres. To address this problem, we use deep learning to perform more fine-grained training through a high level of abstraction. The proposed method learns even subtle feature differences, reducing the gap in recognition rates between genres and achieving a higher recognition rate for every genre than GMM-based audio genre classification.

Detecting Prominent Content in Unstructured Audio using Intensity-based Attack/release Patterns (발생/소멸 패턴을 이용한 비정형 혼합 오디오의 주성분 검출)

  • Kim, Samuel
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.12
    • /
    • pp.224-231
    • /
    • 2013
  • Defining prominent audio content as the most informative audio content, from the users' perspective, within a given unstructured audio segment, we propose simple but robust intensity-based attack/release pattern features to detect it. We also propose a web-based annotation procedure to capture users' subjective perception, and annotated 18 hours of video clips across various genres, such as cartoons, movies, and news. Experiments with a linear classification method, whose models are trained for speech, music, and sound effects, demonstrate promising results that vary across program genres (e.g., 86.7% weighted accuracy for speech-oriented talk shows and 49.3% for action movies).
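
One plausible reading of intensity-based attack/release features is a frame-level intensity contour summarized by its rise and decay slopes. The sketch below is a hypothetical interpretation, not the paper's feature set; the frame sizes and the slope definitions are assumptions.

```python
import numpy as np

def attack_release_features(signal, win=400, hop=160):
    """Frame intensity contour (dB) plus simple attack/release slopes:
    the rise rate up to the intensity peak and the decay rate after it."""
    n_frames = 1 + (len(signal) - win) // hop
    intensity = np.array([
        10 * np.log10(np.mean(signal[i * hop : i * hop + win] ** 2) + 1e-12)
        for i in range(n_frames)])
    peak = int(np.argmax(intensity))
    attack = (intensity[peak] - intensity[0]) / max(peak, 1)            # dB/frame
    release = (intensity[-1] - intensity[peak]) / max(n_frames - 1 - peak, 1)
    return attack, release

# A tone that ramps up and then decays: positive attack, negative release
n = 16000
env = np.concatenate([np.linspace(0.01, 1.0, n // 2),
                      np.linspace(1.0, 0.01, n // 2)])
sig = env * np.sin(2 * np.pi * 440 * np.arange(n) / 16000)
attack, release = attack_release_features(sig)
```

Speech, music, and sound effects tend to differ in how sharply intensity rises and falls, which is the intuition such features exploit.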

A Study on Image Retrieval Using Sound Classifier (사운드 분류기를 이용한 영상검색에 관한 연구)

  • Kim, Seung-Han;Lee, Myeong-Sun;Roh, Seung-Yong
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.419-421
    • /
    • 2006
  • Automatic discrimination of image data has grown into an important research topic in recent years. We use a feedforward neural network as a classifier operating on sound features extracted from the image data, and our initial tests have shown encouraging results that indicate the viability of our approach.

The Performance Analysis of On-line Audio Genre Classification (온라인 오디오 장르 분류의 성능 분석)

  • Yun, Ho-Won;Jang, Woo-Jin;Shin, Seong-Hyeon;Park, Ho-Chong
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.11a
    • /
    • pp.23-24
    • /
    • 2016
  • In this paper, we compare and analyze the performance of on-line audio genre classification. For on-line operation, audio is input in 1-second units and classified into one of three genres: music, speech, or effect. GMM and deep neural networks are used as training methods, and four kinds of feature vectors, including MFCCs and the spectrogram, are used as features. By comparing their performance, we identify the training method and feature vector best suited to genre classification.

Real Time Environmental Classification Algorithm Using Neural Network for Hearing Aids (인공 신경망을 이용한 보청기용 실시간 환경분류 알고리즘)

  • Seo, Sangwan;Yook, Sunhyun;Nam, Kyoung Won;Han, Jonghee;Kwon, See Youn;Hong, Sung Hwa;Kim, Dongwook;Lee, Sangmin;Jang, Dong Pyo;Kim, In Young
    • Journal of Biomedical Engineering Research
    • /
    • v.34 no.1
    • /
    • pp.8-13
    • /
    • 2013
  • Persons with sensorineural hearing impairment have trouble hearing in noisy environments because of their deteriorated hearing levels and the low spectral resolution of the auditory system, and they therefore use hearing aids to compensate for weakened hearing abilities. Various algorithms for hearing loss compensation and environmental noise reduction have been implemented in hearing aids; however, the performance of these algorithms varies with the external sound situation, so it is important to tune the operation of the hearing aid appropriately for a wide variety of sound situations. In this study, a sound classification algorithm that can be applied to the hearing aid is suggested. The proposed algorithm classifies sound situations into four categories: 1) speech-only, 2) noise-only, 3) speech-in-noise, and 4) music-only. It consists of two sub-parts: a feature extractor and a speech-situation classifier. The former extracts seven characteristic features - short-time energy and zero-crossing rate in the time domain; spectral centroid, spectral flux, and spectral roll-off in the frequency domain; and mel-frequency cepstral coefficients and mel-band power values - from the recent input signals of two microphones, and the latter classifies the current speech situation. Experimental results showed that the proposed algorithm could classify speech situations with an accuracy of over 94.4%. Based on these results, we believe that the proposed algorithm can be applied to the hearing aid to improve speech intelligibility in noisy environments.
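
Several of the listed features have standard definitions and can be sketched for a single analysis frame. This is a generic sketch of short-time energy, zero-crossing rate, spectral centroid, and spectral roll-off only (MFCCs, spectral flux, and mel-band powers are omitted); the frame length, sampling rate, and 85% roll-off threshold are assumptions.

```python
import numpy as np

def frame_features(frame, fs=16000):
    """Short-time energy, zero-crossing rate, spectral centroid (Hz),
    and 85% spectral roll-off (Hz) for one analysis frame."""
    energy = np.sum(frame ** 2)
    # Fraction of sample-to-sample sign changes
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)
    cumulative = np.cumsum(mag)
    rolloff = freqs[np.searchsorted(cumulative, 0.85 * cumulative[-1])]
    return energy, zcr, centroid, rolloff

# A pure 1 kHz tone: centroid and roll-off near 1000 Hz, ZCR near 2f/fs
frame = np.sin(2 * np.pi * 1000 * np.arange(512) / 16000)
energy, zcr, centroid, rolloff = frame_features(frame)
```

The classifier stage then maps such per-frame feature vectors onto the four sound-situation categories.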