• Title/Summary/Keyword: Speech/Music Discrimination (음성/음악 판별)

13 search results

Implementation of Music Signals Discrimination System for FM Broadcasting (FM 라디오 환경에서의 실시간 음악 판별 시스템 구현)

  • Kang, Hyun-Woo
    • The KIPS Transactions:PartB / v.16B no.2 / pp.151-156 / 2009
  • This paper proposes a Gaussian mixture model (GMM)-based music discrimination system for FM broadcasting. The objective of the system is to automatically archive music signals from audio broadcasting programs, which normally mix human voices, songs, commercial music, and other sounds. To improve performance, increase robustness, and accurately locate the starting and ending points of each recording, we also add a post-processing module. Experimental results on various FM radio program input signals in a PC environment show excellent performance of the proposed system. A fixed-point simulation shows the same results with 3 MIPS of computational power.
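The per-frame GMM decision described in the abstract can be sketched as follows. This is a minimal illustration on synthetic stand-in features (Gaussian clusters in place of real MFCC vectors), not the paper's implementation; for simplicity each class model is a one-component, diagonal-covariance GMM:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "feature" frames standing in for MFCC vectors: music and speech
# are drawn from different distributions (purely illustrative data).
music_train = rng.normal(2.0, 0.5, (200, 4))
speech_train = rng.normal(-2.0, 0.5, (200, 4))

def fit_gaussian(x):
    """Diagonal-covariance Gaussian (a one-component GMM) fit by moments."""
    return x.mean(axis=0), x.var(axis=0)

def log_likelihood(frames, mean, var):
    """Per-frame log-likelihood under a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var)
                         + (frames - mean) ** 2 / var, axis=1)

music_model = fit_gaussian(music_train)
speech_model = fit_gaussian(speech_train)

def classify(frames):
    """Label each frame by the class whose model is more likely."""
    lm = log_likelihood(frames, *music_model)
    ls = log_likelihood(frames, *speech_model)
    return np.where(lm > ls, "music", "speech")

test_frames = np.vstack([rng.normal(2.0, 0.5, (5, 4)),
                         rng.normal(-2.0, 0.5, (5, 4))])
labels = classify(test_frames)
```

A real system would fit multi-component GMMs on MFCCs extracted from broadcast audio and smooth the frame decisions with the post-processing module the paper describes.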

Design and Implementation of Speech Music Discrimination System per Block Unit on FM Radio Broadcast (FM 방송 중 블록 단위 음성 음악 판별 시스템의 설계 및 구현)

  • Jang, Hyeon-Jong;Eom, Jeong-Gwon;Im, Jun-Sik
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.11a / pp.25-28 / 2007
  • This paper proposes a system that discriminates between speech and music in FM radio broadcast audio signals on a block-by-block basis. To build the speech/music discrimination system, we propose various feature parameters and classification algorithms. The feature parameters are drawn from signal processing (Centroid, Rolloff, Flux, ZCR, Low Energy), speech recognition (LPC, MFCC), and music analysis (MPitch, Beat); the classification algorithms come from pattern recognition (GMM, KNN, BP) and a fuzzy neural network (ANFIS), with the Mahalanobis distance used as the distance measure.
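Two of the signal-processing features listed above, ZCR and the spectral Centroid, can be computed as in this sketch. The sample rate and toy sine inputs are assumptions for illustration:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.signbit(frame)
    return float(np.mean(signs[1:] != signs[:-1]))

def spectral_centroid(frame, sr):
    """Magnitude-weighted mean frequency of the frame's spectrum."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * mag) / np.sum(mag))

sr = 16000
t = np.arange(1024) / sr
low_tone = np.sin(2 * np.pi * 200 * t)    # low pitch: low ZCR and centroid
high_tone = np.sin(2 * np.pi * 3000 * t)  # high pitch: high ZCR and centroid

zcr_low, zcr_high = zero_crossing_rate(low_tone), zero_crossing_rate(high_tone)
c_low, c_high = spectral_centroid(low_tone, sr), spectral_centroid(high_tone, sr)
```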


Utterance Error Correction of Playing Music on Smart Speaker (스마트 스피커에서의 음악 재생 발화 오류 교정)

  • Lee, Daniel;Ko, Byeong-il;Kim, Eung-gyun
    • Annual Conference on Human and Language Technology / 2018.10a / pp.482-486 / 2018
  • This paper proposes a correction model for music-playback utterances that fixes utterance errors in a smart speaker environment. We examine the various error types that occur in music-playback utterances and introduce the correction model, which consists of a candidate generation model and a correction discrimination model. The candidate generation model produces answer candidates, and the correction discrimination model uses a Random Forest to decide whether to apply a correction. The proposed method improved actual user satisfaction for music-playback utterances.
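The two-stage design described above can be sketched roughly as a candidate generator followed by a correction decision. The song titles here are hypothetical, and a simple edit-distance threshold stands in for the paper's Random Forest discriminator:

```python
def edit_distance(a, b):
    """Levenshtein distance via a rolling dynamic-programming row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete from a
                                     dp[j - 1] + 1,      # insert into a
                                     prev + (ca != cb))  # substitute
    return dp[-1]

# Hypothetical song catalog; a real system would query a music database.
catalog = ["shape of you", "blank space", "bad guy"]

def generate_candidates(utterance, k=2):
    """Candidate generation step: nearest catalog titles by edit distance."""
    return sorted(catalog, key=lambda t: edit_distance(utterance, t))[:k]

def should_correct(utterance, candidate, max_dist=3):
    """Stand-in for the Random Forest discriminator: accept the
    correction only when the candidate is close enough."""
    return edit_distance(utterance, candidate) <= max_dist

cands = generate_candidates("shap of yu")
best = cands[0]
```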


Efficient Implementation of SVM-Based Speech/Music Classification on Embedded Systems (SVM 기반 음성/음악 분류기의 효율적인 임베디드 시스템 구현)

  • Lim, Chung-Soo;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.30 no.8 / pp.461-467 / 2011
  • Accurate classification of input signals is the key prerequisite for variable bit-rate coding, which was introduced to make effective use of limited communication bandwidth. In particular, the recent surge of multimedia services elevates the importance of speech/music classification. Among the many speech/music classifiers, those based on the support vector machine (SVM) offer high classification accuracy, but their computational complexity and memory requirements hinder practical implementations. Techniques that reduce the computational complexity and the memory requirement are therefore essential, particularly for embedded systems. We first analyze the implementation of an SVM-based classifier on embedded systems in terms of execution time and energy consumption, and then propose two techniques that alleviate the implementation requirements: one removes support vectors that contribute insignificantly to the final classification, and the other skips processing some input signals by exploiting the strong correlations between speech/music frames. These are post-processing techniques that can work alongside any other optimization applied during the SVM training phase. Through experiments, we validate the proposed algorithms from the perspectives of classification accuracy, execution time, and energy consumption.
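The first technique, dropping support vectors with insignificant contribution, can be illustrated on a hand-made kernel expansion (random stand-in coefficients, not a trained model). Because an RBF kernel value never exceeds 1, pruning small dual coefficients shifts the decision values by at most the total magnitude of the dropped coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hand-made RBF-kernel expansion standing in for a trained SVM:
# support vectors, dual coefficients (alpha_i * y_i), and a bias.
support_vectors = rng.normal(size=(20, 3))
dual_coefs = rng.normal(size=20)
dual_coefs[5:] *= 0.01  # most coefficients contribute very little
bias = 0.1

def rbf(x, sv, gamma=0.5):
    """RBF kernel matrix between rows of x and rows of sv (values in (0, 1])."""
    sq = np.sum((x[:, None, :] - sv[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * sq)

def decision_values(x, sv, coefs):
    """Kernel expansion f(x) = sum_i coef_i * K(x, sv_i) + bias."""
    return rbf(x, sv) @ coefs + bias

# Pruning in the spirit of the paper: drop support vectors whose dual
# coefficients are negligible, shrinking both compute and memory.
keep = np.abs(dual_coefs) > 0.05
pruned_sv, pruned_coefs = support_vectors[keep], dual_coefs[keep]

x_test = rng.normal(size=(50, 3))
f_full = decision_values(x_test, support_vectors, dual_coefs)
f_pruned = decision_values(x_test, pruned_sv, pruned_coefs)
# With K <= 1, |f_full - f_pruned| <= sum of dropped |coefficients|.
max_shift = np.max(np.abs(f_full - f_pruned))
```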

Performance Comparison of Feature Parameters and Classifiers for Speech/Music Discrimination (음성/음악 판별을 위한 특징 파라미터와 분류기의 성능비교)

  • Kim, Hyung-Soon;Kim, Su-Mi
    • MALSORI / no.46 / pp.37-50 / 2003
  • In this paper, we evaluate and compare the performance of speech/music discrimination based on various feature parameters and classifiers. As feature parameters, we consider High Zero Crossing Rate Ratio (HZCRR), Low Short Time Energy Ratio (LSTER), Spectral Flux (SF), Line Spectral Pair (LSP) distance, entropy, and dynamism. We also examine three classifiers: k-Nearest Neighbor (k-NN), Gaussian Mixture Model (GMM), and Hidden Markov Model (HMM). According to our experiments, LSP distance and the phoneme-recognizer-based feature set (entropy and dynamism) show good performance, while performance differences due to the choice of classifier are not significant. When all six feature parameters are employed, an average speech/music discrimination accuracy of up to 96.6% is achieved.


Improving Speech/Music Discrimination Parameter Using Time-Averaged MFCC (MFCC의 단구간 시간 평균을 이용한 음성/음악 판별 파라미터 성능 향상)

  • Choi, Mu-Yeol;Kim, Hyung-Soon
    • MALSORI / no.64 / pp.155-169 / 2007
  • Discrimination between speech and music is important in many multimedia applications. In our previous work, focusing on the spectral change characteristics of speech and music, we presented a method using the mean of minimum cepstral distances (MMCD), which showed very high discrimination performance. In this paper, to further improve the performance, we propose employing time-averaged MFCCs in computing the MMCD. Our experimental results show that the proposed method enhances the discrimination between speech and music. Moreover, the proposed method overcomes the weakness of the conventional MMCD method, whose performance is relatively sensitive to the choice of frame interval used to compute the MMCD.
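The time-averaging step proposed here, smoothing each MFCC trajectory before computing the MMCD, might look like the following sketch. The window length and the toy cepstral stream are assumptions for illustration:

```python
import numpy as np

def time_average(cepstra, win=5):
    """Moving average of each cepstral coefficient over `win` frames,
    applied along the time axis before computing the MMCD."""
    kernel = np.ones(win) / win
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, cepstra)

rng = np.random.default_rng(3)
# A slowly drifting trajectory plus frame-level noise (toy MFCC stream).
clean = np.cumsum(rng.normal(0, 0.02, (200, 12)), axis=0)
noisy = clean + rng.normal(0, 0.5, (200, 12))
smoothed = time_average(noisy)  # 196 frames remain with win=5
```

Averaging over five frames reduces the frame-level noise while leaving the slow spectral drift largely intact, which is what makes the subsequent distance computation less sensitive to the frame interval.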


Speech/Music Discrimination Using Mel-Cepstrum Modulation Energy (멜 켑스트럼 모듈레이션 에너지를 이용한 음성/음악 판별)

  • Kim, Bong-Wan;Choi, Dea-Lim;Lee, Yong-Ju
    • MALSORI / no.64 / pp.89-103 / 2007
  • In this paper, we introduce mel-cepstrum modulation energy (MCME) as a feature for discriminating between speech and music data. MCME is a mel-cepstrum domain extension of modulation energy (ME): MCME is extracted from the time trajectories of mel-frequency cepstral coefficients, while ME is based on the spectrum. As cepstral coefficients are mutually uncorrelated, we expect MCME to perform better than ME. To find the best modulation frequency for MCME, we perform experiments with modulation frequencies from 4 Hz to 20 Hz. To show the effectiveness of the proposed feature, we compare its discrimination accuracy with results obtained from ME and the cepstral flux.
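A simplified reading of modulation energy for a single cepstral trajectory is the band energy of the trajectory's spectrum around a chosen modulation frequency. The frame rate, bandwidth, and the 4 Hz test tone below (roughly the syllable rate of speech) are assumptions for illustration, not the paper's exact definition:

```python
import numpy as np

def modulation_energy(coef_traj, frame_rate, mod_freq, bandwidth=1.0):
    """Energy of one coefficient's time trajectory in a band
    around mod_freq Hz (simplified modulation-energy measure)."""
    spec = np.abs(np.fft.rfft(coef_traj - coef_traj.mean())) ** 2
    freqs = np.fft.rfftfreq(len(coef_traj), d=1.0 / frame_rate)
    band = (freqs >= mod_freq - bandwidth) & (freqs <= mod_freq + bandwidth)
    return float(spec[band].sum())

frame_rate = 100.0  # 100 cepstral frames per second (10 ms hop)
t = np.arange(500) / frame_rate
# Toy trajectory of one mel-cepstral coefficient, modulated at 4 Hz.
traj = np.sin(2 * np.pi * 4.0 * t)

e4 = modulation_energy(traj, frame_rate, 4.0)
e16 = modulation_energy(traj, frame_rate, 16.0)
```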


Performance Improvement of Speech/Music Discrimination Based on Cepstral Distance (켑스트럼 거리 기반의 음성/음악 판별 성능 향상)

  • Park, Seul-Han;Choi, Mu-Yeol;Kim, Hyung-Soon
    • MALSORI / no.56 / pp.195-206 / 2005
  • Discrimination between speech and music is important in many multimedia applications. In this paper, focusing on the spectral change characteristics of speech and music, we propose a new method of speech/music discrimination based on cepstral distance. Instead of using the cepstral distance between frames at a fixed interval, the minimum of the cepstral distances among neighboring frames is employed to increase the discriminability between fast-changing music and speech. And, to prevent speech segments containing short pauses from being misclassified as music, short pause segments are excluded from the cepstral distance computation. The experimental results show that the proposed method yields an error rate reduction of 68% compared with the conventional approach using cepstral distance.


Speech/Music Discrimination Using Multi-dimensional MMCD (다차원 MMCD를 이용한 음성/음악 판별)

  • Choi, Mu-Yeol;Song, Hwa-Jeon;Park, Seul-Han;Kim, Hyung-Soon
    • Proceedings of the KSPS conference / 2006.11a / pp.142-145 / 2006
  • Discrimination between speech and music is important in many multimedia applications. Previously we proposed a new parameter for speech/music discrimination, the mean of minimum cepstral distances (MMCD), which outperformed conventional parameters. One weakness is that its performance depends on the range of candidate frames used to compute the minimum cepstral distance, so the optimal range must be selected experimentally. In this paper, to alleviate this problem, we propose a multi-dimensional MMCD parameter consisting of multiple MMCDs with different ranges of candidate frames. Experimental results show that the multi-dimensional MMCD parameter yields an error rate reduction of 22.5% compared with the optimally chosen one-dimensional MMCD parameter.
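A rough numpy sketch of the MMCD and its multi-dimensional extension, with assumed candidate-frame ranges and toy cepstral data (the class-to-stream mapping is illustrative only):

```python
import numpy as np

def mmcd(cepstra, lo, hi):
    """Mean of minimum cepstral distances: for each frame, the smallest
    Euclidean distance to any candidate frame lo..hi frames ahead,
    averaged over all frames with a full candidate range."""
    mins = []
    for i in range(len(cepstra) - hi):
        d = np.linalg.norm(cepstra[i + lo:i + hi + 1] - cepstra[i], axis=1)
        mins.append(d.min())
    return float(np.mean(mins))

def multi_mmcd(cepstra, ranges=((2, 4), (4, 8), (8, 16))):
    """Multi-dimensional MMCD: one MMCD per candidate-frame range,
    stacked into a feature vector (ranges are assumed values)."""
    return [mmcd(cepstra, lo, hi) for lo, hi in ranges]

rng = np.random.default_rng(2)
# Toy cepstral streams: one drifts slowly, one jumps frame to frame.
slow_stream = np.cumsum(rng.normal(0, 0.05, (100, 12)), axis=0)
jumpy_stream = rng.normal(0, 1.0, (100, 12))

m_slow = mmcd(slow_stream, 4, 8)
m_jumpy = mmcd(jumpy_stream, 4, 8)
features = multi_mmcd(slow_stream)
```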
