• Title/Summary/Keyword: mel spectrum (멜 스펙트럼)


A Study on the Spectrum Variation of Korean Speech (한국어 음성의 스펙트럼 변화에 관한 연구)

  • Lee Sou-Kil;Song Jeong-Young
    • Journal of Internet Computing and Services / v.6 no.6 / pp.179-186 / 2005
  • We extract the spectrum of speech and analyze it using the frequency features that the voice carries. In the spectrum, monophthongs are considered stable, but when a consonant meets a vowel in a syllable or a word, the spectrum changes considerably, and this is the biggest obstacle to phoneme-based speech recognition. In this study, using Mel Cepstrum and Mel Band analysis, which take the frequency band and auditory information into account, we analyze the spectrum of each consonant and vowel, reflect auditory features in the spectral changes, and organize them into a system. Finally, we present a basis for segmenting speech into phoneme units.

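As a rough illustration of the mel-scale analysis these papers build on (a generic sketch, not the authors' code; the filter count, FFT size, and sample rate are arbitrary choices), a triangular mel filter bank can be constructed as follows:

```python
import numpy as np

def hz_to_mel(f):
    # O'Shaughnessy's formula for the mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters=26, n_fft=512, sr=16000):
    # Triangular filters whose centers are spaced evenly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, c, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, c):            # rising edge of the triangle
            fb[i, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):            # falling edge of the triangle
            fb[i, k] = (hi - k) / max(hi - c, 1)
    return fb
```

Multiplying a power spectrum frame by this matrix gives the mel band energies from which mel cepstra are derived.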

Speech Recognition Using Noise Robust Features and Spectral Subtraction (잡음에 강한 특징 벡터 및 스펙트럼 차감법을 이용한 음성 인식)

  • Shin, Won-Ho;Yang, Tae-Young;Kim, Weon-Goo;Youn, Dae-Hee;Seo, Young-Joo
    • The Journal of the Acoustical Society of Korea / v.15 no.5 / pp.38-43 / 1996
  • This paper compares the recognition performance of feature vectors known to be robust to environmental noise. In addition, the spectral subtraction technique is combined with the noise-robust features for further performance enhancement. Experiments using SMC (Short-time Modified Coherence) analysis, root-cepstral analysis, LDA (Linear Discriminant Analysis), PLP (Perceptual Linear Prediction), and RASTA (RelAtive SpecTrAl) processing are carried out. An isolated-word recognition system is built using a semi-continuous HMM. Noisy-environment experiments using two types of noise, exhibition hall and computer room, are carried out at 0, 10, and 20 dB SNR. The results show that SMC and root-based mel cepstrum (root_mel cepstrum) yield 9.86 % and 12.68 % recognition improvement at 10 dB compared to the LPCC (Linear Prediction Cepstral Coefficient). When combined with spectral subtraction, mel cepstrum and root_mel cepstrum show 16.7 % and 8.4 % improvement, reaching recognition rates of 94.91 % and 94.28 % at 10 dB.

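A minimal sketch of magnitude spectral subtraction of the kind combined with the features above (the over-subtraction factor and spectral floor here are illustrative defaults, not values from the paper):

```python
import numpy as np

def spectral_subtraction(mag, noise_est, alpha=2.0, beta=0.01):
    # Subtract a (possibly over-estimated) noise magnitude from each
    # spectral bin and clamp the result to a small floor beta * mag,
    # which limits the "musical noise" left by negative differences.
    cleaned = mag - alpha * noise_est
    return np.maximum(cleaned, beta * mag)
```

In practice `noise_est` is averaged over frames judged to contain no speech.

```python
# hypothetical one-frame example
mag = np.array([10.0, 1.0, 5.0])
noise = np.array([1.0, 1.0, 1.0])
out = spectral_subtraction(mag, noise)   # bins below the noise hit the floor
```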

Study on the Performance of Spectral Contrast MFCC for Musical Genre Classification (스펙트럼 대비 MFCC 특징의 음악 장르 분류 성능 분석)

  • Seo, Jin-Soo
    • The Journal of the Acoustical Society of Korea / v.29 no.4 / pp.265-269 / 2010
  • This paper proposes a novel spectral audio feature, spectral contrast MFCC (SCMFCC), and studies its performance on musical genre classification. For a successful musical genre classifier, extracting features that give direct access to the relevant genre-specific information is crucial. In this regard, features based on the spectral contrast, which represents the relative distribution of harmonic and non-harmonic components, have received increased attention. The proposed SCMFCC feature applies the spectral contrast to the mel-frequency cepstrum and thus adapts the conventional MFCC in a way more relevant for musical genre classification. By performing classification tests on a widely used music DB, we compare the performance of the proposed feature with that of previous ones.
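A simplified per-band spectral contrast (log of the mean peak magnitudes minus log of the mean valley magnitudes within a band) can be sketched as below; the quantile is an assumption for illustration, not the SCMFCC definition:

```python
import numpy as np

def band_contrast(band_mag, quantile=0.2, eps=1e-10):
    # Contrast = log(mean of the top-quantile magnitudes)
    #          - log(mean of the bottom-quantile magnitudes).
    # Harmonic-rich bands (strong peaks over quiet valleys) score high.
    s = np.sort(band_mag)
    k = max(1, int(len(s) * quantile))
    return np.log(s[-k:].mean() + eps) - np.log(s[:k].mean() + eps)
```

A flat (noise-like) band yields contrast near zero, while a band with a strong harmonic peak yields a large positive contrast.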

Comparison of environmental sound classification performance of convolutional neural networks according to audio preprocessing methods (오디오 전처리 방법에 따른 콘벌루션 신경망의 환경음 분류 성능 비교)

  • Oh, Wongeun
    • The Journal of the Acoustical Society of Korea / v.39 no.3 / pp.143-149 / 2020
  • This paper presents the effect of the feature extraction methods used in audio preprocessing on the classification performance of Convolutional Neural Networks (CNN). We extract the mel spectrogram, log mel spectrogram, Mel-Frequency Cepstral Coefficients (MFCC), and delta MFCC from the UrbanSound8K dataset, which is widely used in environmental sound classification studies. We then scale the data to three distributions. Using these data, we test four CNNs, VGG16, and MobileNetV2 networks to assess performance according to the audio features and scaling. The highest recognition rate is achieved when using the unscaled log mel spectrogram as the audio feature. Although this result may not apply to all audio recognition problems, it is useful for classifying the environmental sounds included in UrbanSound8K.
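The preprocessing variants compared above (mel spectrogram vs. log mel spectrogram, unscaled vs. rescaled) reduce to a few lines; the epsilon and the two generic scalers below are common choices, not the paper's exact pipeline:

```python
import numpy as np

def log_mel(mel_spec, eps=1e-10):
    # Log compression of an (n_mels, n_frames) mel spectrogram
    return np.log(mel_spec + eps)

def standardize(x):
    # Rescale to zero mean and unit variance
    return (x - x.mean()) / (x.std() + 1e-12)

def min_max(x):
    # Rescale into the range [0, 1]
    return (x - x.min()) / (x.max() - x.min() + 1e-12)
```

Feeding `log_mel(m)` to the network without any further scaling corresponds to the best-performing configuration reported above.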

Temporal attention based animal sound classification (시간 축 주의집중 기반 동물 울음소리 분류)

  • Kim, Jungmin;Lee, Younglo;Kim, Donghyeon;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.39 no.5 / pp.406-413 / 2020
  • In this paper, to improve the classification accuracy of bird and amphibian sounds, we utilize a GLU (Gated Linear Unit) and self-attention, which encourage the network to extract important features from the data and to discriminate the relevant important frames among all input sequences for further performance improvement. To utilize the acoustic data, we convert the 1-D acoustic data to a log-mel spectrogram. Subsequently, undesirable components such as background noise in the log-mel spectrogram are reduced by the GLU. Then, we employ the proposed temporal self-attention to improve classification accuracy. The data consist of 6 species of birds and 8 species of amphibians, including endangered species, recorded in the natural environment. As a result, our proposed method achieves an accuracy of 91 % on the bird data and 93 % on the amphibian data, an improvement of about 6 % to 7 % over existing algorithms.
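The GLU gating used above to suppress background noise can be sketched in NumPy (splitting channels along the last axis is an assumption about the layout, not the paper's exact architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glu(x, axis=-1):
    # Gated Linear Unit: split the channels in half and gate one half
    # with the sigmoid of the other: GLU(x) = a * sigmoid(b).
    # The sigmoid gate can drive uninformative (noisy) channels toward 0.
    a, b = np.split(x, 2, axis=axis)
    return a * sigmoid(b)
```

Note the output has half as many channels as the input, since one half of the channels is consumed as the gate.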

Voice-to-voice conversion using transformer network (Transformer 네트워크를 이용한 음성신호 변환)

  • Kim, June-Woo;Jung, Ho-Young
    • Phonetics and Speech Sciences / v.12 no.3 / pp.55-63 / 2020
  • Voice conversion can be applied to various voice processing applications and can also play an important role in data augmentation for speech recognition. The conventional method uses the architecture of voice conversion with speech synthesis, with the Mel filter bank as the main parameter. The Mel filter bank is well suited for quick neural network computation, but it cannot be converted into a high-quality waveform without the aid of a vocoder, and it is not effective for obtaining data for speech recognition. In this paper, we focus on performing voice-to-voice conversion using only the raw spectrum. We propose a deep learning model based on the transformer network, which quickly learns the voice conversion properties using an attention mechanism between source and target spectral components. The experiments were performed on the TIDIGITS data, a series of numbers spoken by an English speaker. The converted voices were evaluated for naturalness and similarity using the mean opinion score (MOS) obtained from 30 participants. Our final results yielded 3.52±0.22 for naturalness and 3.89±0.19 for similarity.
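The attention between source and target spectral components at the heart of the transformer reduces to scaled dot-product attention; this generic sketch (not the paper's network) shows query frames attending over key/value frames:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    # Each output row is a weighted mix of the value frames, with
    # weights given by query/key similarity.
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V, weights
```

In a spectrum-to-spectrum converter, the rows of Q would come from target-side frames and K, V from encoded source-side frames.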

Earthquake detection based on convolutional neural network using multi-band frequency signals (다중 주파수 대역 convolutional neural network 기반 지진 신호 검출 기법)

  • Kim, Seung-Il;Kim, Dong-Hyun;Shin, Hyun-Hak;Ku, Bonhwa;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.38 no.1 / pp.23-29 / 2019
  • In this paper, a deep learning-based detection and classification method using multi-band frequency signals is presented for detecting the earthquakes prevalent in Korea. Based on an analysis of previous earthquakes in Korea, it is observed that multi-band signals are appropriate for classifying earthquake signals. Therefore, we propose a deep CNN (Convolutional Neural Network) that uses multi-band signals as training data. The proposed algorithm extracts the multi-band signals (low/medium/high frequency) by applying band-pass filters to the mel spectrum of earthquake signals. We then construct three CNN pipelines for extracting features and classify the earthquake signals by a late fusion of the three CNNs. We validate the effectiveness of the proposed method through various experiments classifying the domestic earthquake signals detected in 2018.
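Splitting a mel spectrogram into low/medium/high bands to feed the three CNN pipelines can be sketched as simple index slicing (the band edges here are illustrative thirds, not the paper's filter specifications):

```python
import numpy as np

def split_bands(mel_spec, edges=(1.0 / 3.0, 2.0 / 3.0)):
    # mel_spec: (n_mels, n_frames) array.
    # Returns the low-, mid-, and high-frequency sub-band spectrograms,
    # each of which would feed its own CNN before late fusion.
    n = mel_spec.shape[0]
    i, j = int(n * edges[0]), int(n * edges[1])
    return mel_spec[:i], mel_spec[i:j], mel_spec[j:]
```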

Feature Parameter Extraction and Speech Recognition Using Matrix Factorization (Matrix Factorization을 이용한 음성 특징 파라미터 추출 및 인식)

  • Lee Kwang-Seok;Hur Kang-In
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.7 / pp.1307-1311 / 2006
  • In this paper, we propose a new speech feature parameter that uses Matrix Factorization to represent part-based features of the speech spectrum. The proposed parameter is an effective dimensionality-reduced representation of multi-dimensional feature data, obtained through a matrix factorization procedure under the constraint that all matrix elements are non-negative. The reduced feature data present part-based features of the input data. We verify the usefulness of the NMF (Non-negative Matrix Factorization) algorithm for speech feature extraction by applying the feature parameter obtained from NMF to the Mel-scaled filter bank output. According to the recognition experiment results, we confirm that the proposed feature parameter is superior in recognition performance to the commonly used MFCC (Mel-Frequency Cepstral Coefficient).
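A minimal NMF with Lee-Seung multiplicative updates, factorizing a non-negative feature matrix V ≈ W H (random initialization and a fixed iteration count are assumptions for this sketch, not the paper's settings):

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0, eps=1e-10):
    # Multiplicative-update NMF minimizing ||V - W H||_F^2.
    # The updates preserve non-negativity of both factors,
    # which is what yields the part-based representation.
    rng = np.random.RandomState(seed)
    n, m = V.shape
    W = rng.rand(n, rank) + eps
    H = rng.rand(rank, m) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Applied to Mel filter bank outputs, the columns of W play the role of spectral parts and the columns of H give the reduced feature vectors.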

A New Power Spectrum Warping Approach to Speaker Warping (화자 정규화를 위한 새로운 파워 스펙트럼 Warping 방법)

  • 유일수;김동주;노용완;홍광석
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.4 / pp.103-111 / 2004
  • Speaker normalization is known to be a successful method for improving the accuracy of speaker-independent speech recognition systems. The frequency warping approach, based on maximum likelihood, is a widely used method for speaker normalization. This paper proposes a new power spectrum warping approach that improves speaker normalization beyond frequency warping. Power spectrum warping uses Mel-frequency cepstral coefficient (MFCC) analysis and is a simple mechanism that performs speaker normalization by modifying the power spectrum of the Mel filter bank in MFCC. This paper also proposes a hybrid VTN that combines power spectrum warping with frequency warping. Our experiments comparatively analyzed the recognition performance of each speaker normalization approach on the SKKU PBW DB applied to the baseline system. The results show word error rate reductions of 2.06 % for frequency warping, 3.06 % for power spectrum warping, and 4.07 % for hybrid VTN relative to the word recognition performance of the baseline system.
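Warping a single spectrum by a factor alpha with linear interpolation is a generic sketch of the warping idea (not the paper's exact modification of the Mel filter bank power spectrum):

```python
import numpy as np

def warp_spectrum(spec, alpha):
    # Resample the spectrum at bin positions k / alpha with linear
    # interpolation; alpha > 1 stretches the axis (shifts content up),
    # alpha < 1 compresses it, and alpha = 1 leaves it unchanged.
    n = len(spec)
    src = np.clip(np.arange(n) / alpha, 0, n - 1)
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = src - lo
    return (1.0 - frac) * spec[lo] + frac * spec[hi]
```

In a vocal tract normalization setting, alpha would be chosen per speaker, e.g. by maximum likelihood over a small grid of candidate factors.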

A Study on Robust Feature Vector Extraction for Fault Detection and Classification of Induction Motor in Noise Circumstance (잡음 환경에서의 유도 전동기 고장 검출 및 분류를 위한 강인한 특징 벡터 추출에 관한 연구)

  • Hwang, Chul-Hee;Kang, Myeong-Su;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information / v.16 no.12 / pp.187-196 / 2011
  • Induction motors play a vital role in the aeronautical and automotive industries, so many researchers have studied fault detection and classification systems for induction motors to minimize the economic damage caused by their faults. For this reason, this paper extracts robust feature vectors from the normal/abnormal vibration signals of an induction motor in noisy circumstances: partial autocorrelation (PARCOR) coefficients, log spectrum powers (LSP), cepstrum coefficient means (CCM), and mel-frequency cepstral coefficients (MFCC). We then classified the different fault types of the induction motor by using the extracted feature vectors as inputs to a neural network. To find optimal feature vectors, this paper evaluated classification performance with 2 to 20 different feature vectors. Experimental results showed that five to six features were enough to give almost 100 % classification accuracy, except for the CCM features. Furthermore, since vibration signals can include noise components caused by the surroundings, we added white Gaussian noise to the original vibration signals and re-evaluated classification performance. The evaluation showed that LSP was the most robust in noisy circumstances, followed by PARCOR and MFCC.
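The PARCOR (reflection) coefficients named above fall out of the Levinson-Durbin recursion on the signal's autocorrelation; a compact sketch (the model order and single-frame framing are illustrative assumptions):

```python
import numpy as np

def parcor(signal, order=4):
    # Biased autocorrelation followed by the Levinson-Durbin recursion;
    # returns the reflection (PARCOR) coefficients k_1 .. k_order.
    n = len(signal)
    r = np.array([np.dot(signal[:n - i], signal[i:]) for i in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    ks = []
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                     # reflection coefficient k_i
        ks.append(k)
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j] # update predictor coefficients
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)               # shrink the prediction error
    return np.array(ks)
```

For a stable signal the PARCOR coefficients lie strictly inside (-1, 1), which is part of why they make well-behaved classifier features.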