• Title/Summary/Keyword: cepstrum

A Study on the Dynamic Feature of Phoneme for Word Recognition (단어인식을 위한 음소의 동적 특징에 관한 검토)

  • 김주곤
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1997.06a
    • /
    • pp.35-39
    • /
    • 1997
  • In this study, in order to improve the recognition accuracy of a Korean word recognition system that uses phonemes as the basic recognition unit, phoneme and word recognition experiments were carried out using dynamic features that capture the temporal information of each phoneme: regression coefficients and the feature parameters obtained by the Karhunen-Loeve (K-L) transform (hereafter, K-L coefficients), and their effectiveness was confirmed. First, phoneme recognition experiments were performed on plosives using the static feature parameter, the Mel-Cepstrum, and the dynamic feature parameters, the regression coefficients and the K-L coefficients. The recognition rates were 39.84% with the Mel-Cepstrum, 48.52% with the regression coefficients, and 52.40% with the K-L coefficients. Based on these results, recognition experiments combining the feature parameters gave 47.17% for Mel-Cepstrum with K-L coefficients, 60.11% for Mel-Cepstrum with regression coefficients, 60.35% for K-L coefficients with regression coefficients, and 58.13% for Mel-Cepstrum with K-L coefficients and regression coefficients; the combinations of the two dynamic features (K-L coefficients and regression coefficients) and of the Mel-Cepstrum with the regression coefficients showed the highest recognition rates. Extending the experiments to word recognition yielded higher recognition rates than the conventional feature parameters, confirming the effectiveness of the dynamic parameters.
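
The abstract above combines static mel-cepstral features with two dynamic representations: regression (delta) coefficients and a Karhunen-Loeve (K-L) transform of the temporal trajectory. A minimal sketch of how such features could be derived from a frame-by-frame cepstrum matrix is given below; the window width, segment lengths, and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def delta_coefficients(cepstra, width=2):
    """Regression (delta) coefficients: least-squares slope of each
    cepstral dimension over a window of +/- `width` frames."""
    num_frames, _ = cepstra.shape
    padded = np.pad(cepstra, ((width, width), (0, 0)), mode="edge")
    lags = np.arange(-width, width + 1)
    denom = np.sum(lags ** 2)
    deltas = np.zeros_like(cepstra)
    for t in range(num_frames):
        window = padded[t:t + 2 * width + 1]      # (2*width+1, dim)
        deltas[t] = lags @ window / denom          # slope per dimension
    return deltas

def kl_transform(segments, n_components=16):
    """Karhunen-Loeve (PCA) transform of stacked frame trajectories.
    `segments` is (num_examples, stacked_dim); returns projections."""
    centered = segments - segments.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending eigenvalues
    basis = eigvecs[:, ::-1][:, :n_components]     # top components
    return centered @ basis

# Toy usage with random "mel-cepstrum" frames (12-dim, 50 frames).
rng = np.random.default_rng(0)
mel_cepstra = rng.normal(size=(50, 12))
deltas = delta_coefficients(mel_cepstra)
stacked = mel_cepstra[:40].reshape(8, -1)          # 8 segments of 5 frames
kl_coeffs = kl_transform(stacked, n_components=4)
print(deltas.shape, kl_coeffs.shape)
```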

A Study on the Segmentation of Speech Signal into Phonemic Units (음성 신호의 음소 단위 구분화에 관한 연구)

  • Lee, Yeui-Cheon;Lee, Gang-Sung;Kim, Soon-Hyon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.10 no.4
    • /
    • pp.5-11
    • /
    • 1991
  • This paper suggests a method for segmenting a speech signal into phonemic units. The suggested segmentation system is speaker-independent and operates without any prior information about the speech signal. In the segmentation process, we first divide the input speech signal into pure voiced regions and non-pure-voiced regions. We then apply a second algorithm that segments each region into detailed phonemic units using the voicing detection parameters, i.e., the time variation of the 0th LPC cepstrum coefficient and the ZCR parameter. The speech used to verify the segmentation algorithm suggested in this paper is a vocabulary composed of isolated words and continuous words. According to the experiments, the successful segmentation rate for the 507 phonemic units contained in the total vocabulary is 91.7%.
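
The segmentation cue described above is the frame-to-frame variation of the 0th LPC cepstrum coefficient (essentially a log-energy term) together with the zero-crossing rate (ZCR). The sketch below illustrates that idea with short-time log energy standing in for the 0th cepstral coefficient; the frame sizes, thresholds, and toy signal are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D signal into overlapping frames."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n)])

def zcr(frames):
    """Zero-crossing rate per frame."""
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

def log_energy(frames):
    """Short-time log energy, a stand-in for the 0th cepstral coefficient."""
    return np.log(np.sum(frames ** 2, axis=1) + 1e-10)

def phoneme_boundaries(x, d_energy_thr=1.0, d_zcr_thr=0.15):
    """Mark frames where the energy or ZCR trajectory changes sharply."""
    frames = frame_signal(x)
    d_e = np.abs(np.diff(log_energy(frames)))
    d_z = np.abs(np.diff(zcr(frames)))
    return np.where((d_e > d_energy_thr) | (d_z > d_zcr_thr))[0] + 1

# Toy usage: a vowel-like tone followed by fricative-like noise at 16 kHz.
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000
signal = np.concatenate([0.5 * np.sin(2 * np.pi * 150 * t[:8000]),
                         0.1 * rng.normal(size=8000)])
print(phoneme_boundaries(signal))
```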

A Study on the Technique of Spectrum Flattening for Improved Pitch Detection (개선된 피치검출을 위한 스펙트럼 평탄화 기법에 관한 연구)

  • 강은영;배명진;민소연
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.3
    • /
    • pp.310-314
    • /
    • 2002
  • Exact pitch (fundamental frequency) extraction is important in speech signal processing such as speech recognition, analysis, and synthesis. However, exact pitch extraction from a speech signal is very difficult due to the effects of the formants and transitional amplitude. In this paper, the pitch is therefore detected after eliminating the formant components by flattening the spectrum in the frequency domain, where the effect of transitions and phoneme changes is small. We propose a new flattening method for the log spectrum and compare its performance with the LPC and cepstrum methods. The results show that the proposed method outperforms the conventional methods.
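
Spectrum flattening removes the formant envelope so the harmonic structure, and hence the pitch, stands out. A minimal sketch of one common way to do this, low-quefrency liftering of the log spectrum, is shown below; it is not the specific log-spectrum flattening method the paper proposes, and the lifter cutoff, sample rate, and toy signal are illustrative assumptions.

```python
import numpy as np

def flatten_spectrum(frame, lifter_cutoff=30):
    """Flatten the log spectrum by removing its low-quefrency part
    (the smooth formant envelope) via the real cepstrum."""
    log_spec = np.log(np.abs(np.fft.rfft(frame)) + 1e-10)
    cep = np.fft.irfft(log_spec)
    cep_env = cep.copy()
    cep_env[lifter_cutoff:-lifter_cutoff] = 0.0      # keep low quefrencies only
    envelope = np.fft.rfft(cep_env).real
    return log_spec - envelope                        # flattened log spectrum

def pitch_from_flattened(frame, fs=16000, fmin=60, fmax=400):
    """Pick the strongest cepstral peak of the flattened spectrum."""
    cep = np.fft.irfft(flatten_spectrum(frame))
    qmin, qmax = int(fs / fmax), int(fs / fmin)
    peak = qmin + np.argmax(cep[qmin:qmax])
    return fs / peak

# Toy usage: a 100 Hz pulse train shaped by a simple decaying resonance.
fs = 16000
t = np.arange(2048)
frame = np.zeros(2048)
frame[::fs // 100] = 1.0                              # 100 Hz excitation
frame = np.convolve(frame, np.exp(-t[:200] / 30.0), mode="same")
print(round(pitch_from_flattened(frame, fs), 1))      # close to 100.0
```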

Voice Personality Transformation Using a Multiple Response Classification and Regression Tree (다중 응답 분류회귀트리를 이용한 음성 개성 변환)

  • 이기승
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.3
    • /
    • pp.253-261
    • /
    • 2004
  • In this paper, a new voice personality transformation method is proposed, which modifies speaker-dependent feature variables in the speech signal. The proposed method takes the cepstrum vectors and the pitch as the transformation parameters, which represent the vocal tract transfer function and the excitation signal, respectively. To transform these parameters, a multiple response classification and regression tree (MR-CART) is employed. MR-CART is the vector-extended version of the conventional CART, whose response is given in vector form. We evaluated the performance of the proposed method by comparing it with a previously proposed codebook mapping method, and quantitatively analyzed the transformation performance and complexity under various conditions. In the experimental results for four speakers, the proposed method objectively outperforms the conventional codebook mapping method, and we also observed that the transformed speech sounds closer to the target speech.
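
MR-CART fits a regression tree whose leaf responses are vectors rather than scalars, so a whole target-speaker cepstrum vector is predicted from a source-speaker vector. scikit-learn's DecisionTreeRegressor already supports vector (multi-output) targets, so a rough stand-in for the idea can be sketched as below; the synthetic data, feature dimensions, and tree depth are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy parallel corpus: source- and target-speaker cepstrum vectors (12-dim),
# aligned frame by frame, with log-pitch appended as a 13th value.
n_frames, cep_dim = 2000, 12
source = rng.normal(size=(n_frames, cep_dim + 1))
mixing = rng.normal(scale=0.3, size=(cep_dim + 1, cep_dim + 1))
target = source @ (np.eye(cep_dim + 1) + mixing)       # synthetic mapping

# A vector-response regression tree: each leaf stores a mean target vector.
tree = DecisionTreeRegressor(max_depth=8, min_samples_leaf=20)
tree.fit(source, target)

# Transform new source frames toward the target speaker.
new_frames = rng.normal(size=(5, cep_dim + 1))
converted = tree.predict(new_frames)                    # (5, 13) target-like vectors
print(converted.shape)
```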

A Study on Speaker Recognition Algorithm Through Wire/Wireless Telephone (유무선 전화를 통한 화자인식 알고리즘에 관한 연구)

  • 김정호;정희석;강철호;김선희
    • The Journal of the Acoustical Society of Korea
    • /
    • v.22 no.3
    • /
    • pp.182-187
    • /
    • 2003
  • In this thesis, we propose an algorithm that improves the performance of speaker verification by mapping feature parameters with an RBF neural network. Feature vectors from the same speaker differ considerably between wireline and wireless channels. To build the wireline/wireless speaker models, the speaker verification system first distinguishes the wireline/wireless channel based on a speech recognition system, and the feature vectors of the untrained channel model are then mapped to the feature vectors (LPC cepstrum) of the trained channel model using the RBF neural network. Simulation results show that the proposed algorithm improves performance by 0.6%∼10.5% compared with conventional methods such as cepstral mean subtraction.
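
The baseline the paper compares against is cepstral mean subtraction (CMS), which removes a per-utterance channel offset from the cepstral features. A minimal sketch of CMS is shown below (the paper's RBF-network channel mapping itself is not reproduced); the feature dimensions and toy offsets are illustrative assumptions.

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Remove the per-utterance mean from each cepstral dimension.
    A stationary channel adds a constant to the cepstrum, so subtracting
    the mean suppresses wireline/wireless channel bias."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# Toy usage: the same "speaker" frames seen through two different channels.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 12))                    # 12-dim LPC cepstra
wire = frames + np.full(12, 0.5)                       # constant channel offset
wireless = frames - np.full(12, 0.3)                   # different offset
print(np.allclose(cepstral_mean_subtraction(wire),
                  cepstral_mean_subtraction(wireless)))  # True: offsets removed
```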

A Study on the Pitch Detection of Speech Harmonics by the Peak-Fitting (음성 하모닉스 스펙트럼의 피크-피팅을 이용한 피치검출에 관한 연구)

  • Kim, Jong-Kuk;Jo, Wang-Rae;Bae, Myung-Jin
    • Speech Sciences
    • /
    • v.10 no.2
    • /
    • pp.85-95
    • /
    • 2003
  • In speech signal processing, exact pitch detection is very important for speech recognition, synthesis, and analysis. If the pitch is detected exactly, it can be used in analysis to obtain the vocal tract parameters properly, in synthesis to change or maintain the naturalness and intelligibility of speech quality easily, and in recognition to remove speaker-dependent characteristics for speaker independence. In this paper, we propose a new pitch detection algorithm. First, positive center clipping is performed using the slope of the speech signal in the time domain in order to emphasize the pitch period of the glottal component with the vocal tract characteristics removed. A rough formant envelope is then computed by peak-fitting the spectrum of the original speech signal in the frequency domain, and a smoothed formant envelope is obtained from the rough envelope by linear interpolation. The flattened harmonics waveform is obtained as the algebraic difference between the spectrum of the original speech signal and the smoothed formant envelope, and an inverse fast Fourier transform (IFFT) of the flattened harmonics yields the residual signal with the vocal tract components removed. The performance was compared with the LPC, cepstrum, and ACF methods. With this algorithm, we obtained pitch information with improved detection accuracy, and the gross error rate is reduced in voiced speech regions and in transition regions where the phoneme changes.
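
The algorithm's time-domain front end, positive center clipping to emphasize the glottal pulses before pitch estimation, can be sketched as below. This shows only that clipping step followed by a plain autocorrelation pitch pick, not the paper's peak-fitting of the harmonic spectrum; the clipping ratio, search range, and toy signal are illustrative assumptions.

```python
import numpy as np

def positive_center_clip(x, ratio=0.6):
    """Keep only the part of the waveform above a positive threshold,
    emphasizing glottal pulses and suppressing vocal tract ripple."""
    clipped = x - ratio * np.max(x)
    clipped[clipped < 0] = 0.0
    return clipped

def autocorrelation_pitch(x, fs=16000, fmin=60, fmax=400):
    """Pitch from the strongest autocorrelation peak in the plausible lag range."""
    clipped = positive_center_clip(x)
    ac = np.correlate(clipped, clipped, mode="full")[len(clipped) - 1:]
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return fs / lag

# Toy usage: a 120 Hz sawtooth-like voiced frame at 16 kHz.
fs = 16000
t = np.arange(1024) / fs
frame = ((t * 120) % 1.0) - 0.5
print(round(autocorrelation_pitch(frame, fs), 1))      # close to 120.0
```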

FPGA-Based Hardware Accelerator for Feature Extraction in Automatic Speech Recognition

  • Choo, Chang;Chang, Young-Uk;Moon, Il-Young
    • Journal of information and communication convergence engineering
    • /
    • v.13 no.3
    • /
    • pp.145-151
    • /
    • 2015
  • We describe in this paper a hardware-based scheme for improving the speed of a real-time automatic speech recognition (ASR) system by designing a parallel feature extraction algorithm on a Field-Programmable Gate Array (FPGA). A computationally intensive block in the algorithm is identified and implemented in hardware logic on the FPGA. One such block is the mel-frequency cepstral coefficient (MFCC) algorithm used for the feature extraction process. We demonstrate that the FPGA platform can perform the feature extraction computation in the speech recognition system more efficiently than a general-purpose CPU, including the ARM processor. The Xilinx Zynq-7000 System on Chip (SoC) platform is used for the MFCC implementation. From the implementation described in this paper, we confirmed that the FPGA platform is approximately 500× faster than a sequential CPU implementation and 60× faster than a sequential ARM implementation. We thus verified that a parallelized and optimized MFCC architecture on the FPGA platform can significantly improve the execution time of an ASR system compared to the CPU and ARM platforms.
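
The block moved into FPGA logic is the standard MFCC pipeline: pre-emphasis, windowing, FFT power spectrum, mel filterbank, log, and DCT. A compact software reference of that pipeline is sketched below (NumPy/SciPy, not the paper's hardware design); the frame sizes, filter counts, and sample rate are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    """Triangular filters evenly spaced on the mel scale."""
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mfcc(x, fs=16000, frame_len=400, hop=160, n_fft=512,
         n_filters=26, n_ceps=13, preemph=0.97):
    """Reference MFCC pipeline: pre-emphasis, framing, Hamming window,
    power spectrum, mel filterbank, log, DCT-II."""
    x = np.append(x[0], x[1:] - preemph * x[:-1])
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    window = np.hamming(frame_len)
    fb = mel_filterbank(n_filters, n_fft, fs)
    coeffs = np.zeros((n_frames, n_ceps))
    for t in range(n_frames):
        frame = x[t * hop:t * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft
        energies = np.log(fb @ power + 1e-10)
        coeffs[t] = dct(energies, type=2, norm="ortho")[:n_ceps]
    return coeffs

# Toy usage: one second of noise at 16 kHz -> (frames, 13) MFCC matrix.
rng = np.random.default_rng(0)
print(mfcc(rng.normal(size=16000)).shape)
```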

A Study on the Algorithm Development for Speech Recognition of Korean and Japanese (한국어와 일본어의 음성 인식을 위한 알고리즘 개발에 관한 연구)

  • Lee, Sung-Hwa;Kim, Hyung-Lae
    • Journal of IKEEE
    • /
    • v.2 no.1 s.2
    • /
    • pp.61-67
    • /
    • 1998
  • In this thesis, speaker recognition experiments were performed using a multilayer feedforward neural network (MFNN) model with Korean and Japanese digits. Five adult males and five adult females pronounced the digits 0 to 9 in Korean and Japanese seven times each. Feature coefficients were then extracted through a pitch detection algorithm, LPC analysis, and LPC cepstral analysis to generate the input patterns of the MFNN. Five of the repetitions were used to train the neural network, and two were used to measure its performance. For both Korean and Japanese, the pitch coefficients performed about 4% better than the LPC or LPC cepstral coefficients.
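
The recognizer described above is a multilayer feedforward neural network trained on per-utterance feature vectors, with five repetitions used for training and two for testing. A rough stand-in using scikit-learn's MLPClassifier on synthetic "cepstral" feature vectors is sketched below; the feature dimension, layer size, and data are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy data: 10 digit classes, 70 utterances each, every utterance summarized
# as a fixed-length "cepstral" feature vector.
n_classes, n_per_class, feat_dim = 10, 70, 36
centers = rng.normal(scale=2.0, size=(n_classes, feat_dim))
X = np.vstack([c + rng.normal(scale=0.8, size=(n_per_class, feat_dim))
               for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

# Train on 5/7 of the utterances, test on the remaining 2/7.
train_mask = np.tile(np.arange(n_per_class) < 50, n_classes)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X[train_mask], y[train_mask])
print("test accuracy:", clf.score(X[~train_mask], y[~train_mask]))
```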

Text-Independent Speaker Identification System Using Speaker Decision Network Based on Delayed Summing (지연누적에 기반한 화자결정회로망이 도입된 구문독립 화자인식시스템)

  • 이종은;최진영
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.8 no.2
    • /
    • pp.82-95
    • /
    • 1998
  • In this paper, we propose a text-independent speaker identification system whose classifier is composed of two parts: one that calculates the degree of likeness of each speech frame, and one that selects the most probable speaker over the entire speech duration. The first part is realized using an RBFN that is self-organized through learning, and in the second part the speaker is determined using a combination of MAXNET and delayed summing. We use features from the linear speech production model and features from fractal geometry. Closed-set speaker identification experiments on a homogeneous set of 13 male speakers show that the proposed techniques can achieve an identification rate of 100% as the number of delays increases.
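
The decision network above accumulates frame-level likeness scores over time (delayed summing) and then picks the speaker with the largest accumulated score (the MAXNET role). A minimal sketch of that accumulate-then-argmax decision is given below; the scores are synthetic and the RBFN front end is not reproduced.

```python
import numpy as np

def decide_speaker(frame_scores, n_delays):
    """Sum per-frame likeness scores over the first `n_delays` frames and
    pick the speaker with the largest accumulated score (MAXNET role)."""
    accumulated = frame_scores[:n_delays].sum(axis=0)
    return int(np.argmax(accumulated))

# Toy usage: 13 speakers, 200 frames of noisy per-frame scores in which
# speaker 7 is only slightly favored on average.
rng = np.random.default_rng(0)
n_frames, n_speakers, true_speaker = 200, 13, 7
frame_scores = rng.normal(size=(n_frames, n_speakers))
frame_scores[:, true_speaker] += 0.3                   # weak per-frame evidence

# Single frames are unreliable; summing over more delays stabilizes the decision.
for n_delays in (1, 10, 50, 200):
    print(n_delays, decide_speaker(frame_scores, n_delays))
```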

Voice Quality of Dysarthric Speakers in Connected Speech (연결발화에서 마비말화자의 음질 특성)

  • Seo, Inhyo;Seong, Cheoljae
    • Phonetics and Speech Sciences
    • /
    • v.5 no.4
    • /
    • pp.33-41
    • /
    • 2013
  • This study investigated the perceptual and cepstral/spectral characteristics of phonation, and their relationships, in dysarthria in connected speech. Twenty-two participants were divided into two groups: eleven dysarthric speakers and eleven healthy control participants matched for age and gender. A perceptual evaluation of the two groups' connected speech was performed by three speech-language pathologists using the GRBAS scale, and cepstral/spectral measures of phonation were compared between the groups. Dysarthric speakers scored significantly worse (higher ratings) on G (overall dysphonia grade), B (breathiness), and S (strain), while the smoothed cepstral peak prominence (CPPs) was significantly lower. The CPPs was significantly correlated with the perceptual ratings, including G, B, and S; its utility is supported by this strong relationship with perceptually rated dysphonia severity in dysarthric speakers. A receiver operating characteristic (ROC) analysis showed that a threshold of 5.08 dB for the CPPs achieved good classification of dysarthria, with 63.6% sensitivity and perfect specificity (100%). These results indicate that the CPPs reliably distinguishes between healthy controls and dysarthric speakers. However, the CPP frequency (CPP F0) and the low/high spectral ratio (L/H ratio) were not significantly different between the two groups.
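
Cepstral peak prominence measures how far the cepstral peak in the pitch range rises above a regression line fit to the cepstrum as a whole; the CPPs variant additionally smooths across time and quefrency. A minimal single-frame sketch of the unsmoothed measure is given below; the window, quefrency range, and toy signals are illustrative assumptions and differ from clinical implementations such as ADSV or Praat.

```python
import numpy as np

def cepstral_peak_prominence(frame, fs=16000, fmin=60, fmax=330):
    """Cepstral peak prominence (dB): height of the pitch-range cepstral
    peak above a straight line regressed over the cepstrum."""
    windowed = frame * np.hamming(len(frame))
    log_spec = 20.0 * np.log10(np.abs(np.fft.rfft(windowed)) + 1e-10)
    cep = np.fft.irfft(log_spec)                      # real cepstrum (dB-based)
    half = len(cep) // 2
    quef = np.arange(1, half) / fs                    # quefrency axis (s)
    cep = cep[1:half]
    # Straight-line fit over the whole cepstrum serves as the baseline.
    slope, intercept = np.polyfit(quef, cep, 1)
    baseline = slope * quef + intercept
    # Peak search restricted to plausible pitch periods.
    lo, hi = int(fs / fmax), int(fs / fmin)
    idx = lo + np.argmax(cep[lo:hi])
    return cep[idx] - baseline[idx]

# Toy usage: a strongly periodic frame should yield a larger CPP than noise.
rng = np.random.default_rng(0)
fs = 16000
t = np.arange(4096) / fs
voiced = ((t * 110) % 1.0) - 0.5                      # 110 Hz sawtooth
noise = rng.normal(size=4096)                          # aperiodic
print(round(cepstral_peak_prominence(voiced, fs), 2),
      round(cepstral_peak_prominence(noise, fs), 2))
```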