• Title/Summary/Keyword: 음성적 유사도 (phonetic similarity)


Robust estimation of HMM parameters Based on the State-Dependent Source-Quantization for Speech Recognition (상태의존 소스 양자화에 기반한 음성인식을 위한 은닉 마르코프 모델 파라미터의 견고한 추정)

  • 최환진;박재득
    • The Journal of the Acoustical Society of Korea
    • /
    • v.17 no.1
    • /
    • pp.66-75
    • /
    • 1998
  • Hidden Markov models have recently become the dominant approach to speech recognition, and their performance depends largely on the acoustic modeling method used to capture the characteristics of speech. This paper proposes a state-dependent source-quantization modeling method that estimates the state output probability robustly by expressing it as an output distribution weighted by the source distributions and their observed frequencies. The method rests on the observation that feature parameters within a single state share similar characteristics and vary less than feature parameters belonging to other states. Experimental results show that the proposed method improves word recognition accuracy by 2.7% and sentence recognition accuracy by 3.6% over the existing baseline system. These results indicate that the proposed SDSQ-DHMM is effective in improving recognition accuracy and can serve as an alternative for robustly estimating the per-state output probabilities of an HMM.
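
The abstract above describes the state output probability as a source-quantized mixture: each state keeps a set of sources (codewords) and weights their distributions by how often they occur in that state. Below is a minimal sketch of that idea, assuming diagonal Gaussian sources and simple frequency weights; the variable names and exact weighting are illustrative, not taken from the paper.

```python
import numpy as np

def state_output_logprob(x, source_means, source_vars, source_counts):
    """Illustrative state output log-probability: a mixture of per-state
    Gaussian 'sources' weighted by how often each source was observed in
    the state (assumed scheme, not the paper's exact formulation)."""
    weights = source_counts / source_counts.sum()     # frequency weights
    diff = x - source_means                            # (K, D)
    log_gauss = -0.5 * np.sum(diff ** 2 / source_vars
                              + np.log(2 * np.pi * source_vars), axis=1)
    # log sum_k w_k N(x; mu_k, var_k), computed stably
    return np.logaddexp.reduce(np.log(weights) + log_gauss)

# toy usage: three sources in a 2-dimensional feature space
means = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])
vars_ = np.ones_like(means)
counts = np.array([30.0, 50.0, 20.0])
print(state_output_logprob(np.array([0.8, 0.9]), means, vars_, counts))
```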


Vector Quantization based Speech Recognition Performance Improvement using Maximum Log Likelihood in Gaussian Distribution (가우시안 분포에서 Maximum Log Likelihood를 이용한 벡터 양자화 기반 음성 인식 성능 향상)

  • Chung, Kyungyong;Oh, SangYeob
    • Journal of Digital Convergence
    • /
    • v.16 no.11
    • /
    • pp.335-340
    • /
    • 2018
  • Commercialized speech recognition systems that have an accuracy recognition rates are used a learning model from a type of speaker dependent isolated data. However, it has a problem that shows a decrease in the speech recognition performance according to the quantity of data in noise environments. In this paper, we proposed the vector quantization based speech recognition performance improvement using maximum log likelihood in Gaussian distribution. The proposed method is the best learning model configuration method for increasing the accuracy of speech recognition for similar speech using the vector quantization and Maximum Log Likelihood with speech characteristic extraction method. It is used a method of extracting a speech feature based on the hidden markov model. It can improve the accuracy of inaccurate speech model for speech models been produced at the existing system with the use of the proposed system may constitute a robust model for speech recognition. The proposed method shows the improved recognition accuracy in a speech recognition system.

Speech Synthesis using Diphone Clustering and Improved Spectral Smoothing (다이폰 군집화와 개선된 스펙트럼 완만화에 의한 음성합성)

  • Jang, Hyo-Jong;Kim, Kwan-Jung;Kim, Gye-Young;Choi, Hyung-Il
    • The KIPS Transactions:PartB
    • /
    • v.10B no.6
    • /
    • pp.665-672
    • /
    • 2003
  • This paper describes a speech synthesis technique by concatenating unit phoneme. At that time, a major problem is that discontinuity is happened from connection part between unit phonemes, especially from connection part between unit phonemes recorded by different persons. To solve the problem, this paper uses clustered diphone, and proposes a spectral smoothing technique, not only using formant trajectory and distribution characteristic of spectrum but also reflecting human's acoustic characteristic. That is, the proposed technique performs unit phoneme clustering using distribution characteristic of spectrum at connection part between unit phonemes and decides a quantity and a scope for the smoothing by considering human's acoustic characteristic at the connection part of unit phonemes, and then performs the spectral smoothing using weights calculated along a time axes at the border of two diphones. The proposed technique removes the discontinuity and minimizes the distortion which can be occurred by spectrum smoothing. For the purpose of the performance evaluation, we test on five hundred diphones which are extracted from twenty sentences recorded by five persons, and show the experimental results.

Estimating Amino Acid Composition of Protein Sequences Using Position-Dependent Similarity Spectrum (위치 종속 유사도 스펙트럼을 이용한 단백질 서열의 아미노산 조성 추정)

  • Chi, Sang-Mun
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.1
    • /
    • pp.74-79
    • /
    • 2010
  • The amino acid composition of a protein provides basic information for solving many problems in bioinformatics. We propose a new method that uses biologically relevant similarity between amino acids to determine the amino acid composition, where the BOLOSUM matrix is exploited to define a similarity measure between amino acids. Futhermore, to extract more information from a protein sequence than conventional methods for determining amino acid composition, we exploit the concepts of spectral analysis of signals such as radar and speech signals-the concepts of time-dependent analysis, time resolution, and frequency resolution. The proposed method was applied to predict subcellular localization of proteins, and showed significantly improved performance over previous methods for amino acid composition estimation.

Analysis on Vowel and Consonant Sounds of Patent's Speech with Velopharyngeal Insufficiency (VPI) and Simulated Speech (구개인두부전증 환자와 모의 음성의 모음과 자음 분석)

  • Sung, Mee Young;Kim, Heejin;Kwon, Tack-Kyun;Sung, Myung-Whun;Kim, Wooil
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.7
    • /
    • pp.1740-1748
    • /
    • 2014
  • This paper focuses on listening test and acoustic analysis of patients' speech with velopharyngeal insufficiency (VPI) and normal speakers' simulation speech. In this research, a set consisting of 50-words, vowels and single syllables is determined for speech database construction. A web-based listening evaluation system is developed for a convenient/automated evaluation procedure. The analysis results show the trend of incorrect recognition for VPI speech and the one for simulation speech are similar. Such similarity is also confirmed by comparing the formant locations of vowel and spectrum of consonant sounds. These results show that the simulation method for VPI speech is effective at generating the speech signals similar to actual VPI patient's speech. It is expected that the simulation speech data can be effectively employed for our future work such as acoustic model adaptation.

A DB Pruning Method in a Large Corpus-Based TTS with Multiple Candidate Speech Segments (대용량 복수후보 TTS 방식에서 합성용 DB의 감량 방법)

  • Lee, Jung-Chul;Kang, Tae-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.6
    • /
    • pp.572-577
    • /
    • 2009
  • Large corpus-based concatenating Text-to-Speech (TTS) systems can generate natural synthetic speech without additional signal processing. To prune the redundant speech segments in a large speech segment DB, we can utilize a decision-tree based triphone clustering algorithm widely used in speech recognition area. But, the conventional methods have problems in representing the acoustic transitional characteristics of the phones and in applying context questions with hierarchic priority. In this paper, we propose a new clustering algorithm to downsize the speech DB. Firstly, three 13th order MFCC vectors from first, medial, and final frame of a phone are combined into a 39 dimensional vector to represent the transitional characteristics of a phone. And then the hierarchically grouped three question sets are used to construct the triphone trees. For the performance test, we used DTW algorithm to calculate the acoustic similarity between the target triphone and the triphone from the tree search result. Experimental results show that the proposed method can reduce the size of speech DB by 23% and select better phones with higher acoustic similarity. Therefore the proposed method can be applied to make a small sized TTS.

An Analysis on the Pitch Variation Of the Emotional Speech (감정 음성의 피치 변화 분석)

  • Chun Heejin;Chung Jihye;Kim Byungil;Lee Yanghee
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.93-96
    • /
    • 1999
  • 감정을 표현하는 음성 합성 시스템을 구현하기 위해서 이전 논문에서는 음운 및 운율 요소(피치, 에너지, 지속시간, 스펙트럼 인벨로프)가 각 감정 음성에 미치는 영향에 대한 분석을 수행하였다. 본 논문에서는 네 가지 감정 표현(평상, 화남, 기쁨, 슬픔)을 나타내는 음성 데이터에 대해 음절 세그먼트와 라벨링을 행한 감정 음성 데이터베이스를 토대로 감정 표현에 많은 영향을 미치는 요소인 피치가 어떻게 변화하는지를 분석하였다. 통계적인 방법을 이용하여 감정별 피치를 정규화 하였으며, 감정 음성 데이터베이스 내의 문장별 피치 패턴에 대해 분석하였다. 그 결과 감정별 피치의 평균 ZScore는 화남이 가장 작았으며, 기쁨, 평상, 슬픔의 순으로 높았다. 또한 감정별 피치의 범위 변화는 슬픔이 가장 작았으며, 평상, 화남, 기쁨의 순으로 높았다. 문장별 피치의 패턴은 감정 표현에 따라 전체적으로 대부분 유사하게 나타났으며, 문장의 처음 부분은 화남의 경우 다른 감정에 비해 대체로 높게 변화하였고, 화남과 기쁨의 경우 문장의 뒷부분에서 다른 감정에 비해 피치가 상승하는 것을 볼 수 있었다.

  • PDF

Comparison of MEL-LPC and LPC-MEL Analysis Method for the Korean Speech Recognition Systems. (한국어 음성 인식 시스템을 위한 MEL-LPC 분석 방법과 LPC-MEL 분석 방법의 비교)

  • 김주곤;김범국;정호열;정현열
    • Proceedings of the IEEK Conference
    • /
    • 2001.09a
    • /
    • pp.833-836
    • /
    • 2001
  • 본 논문에서는 한국어 음성인식 시스템의 성능 향상을 위해 청각 주파수 분해능을 가진 MEL-LPC Cepstrum을 음소단위의 HMM(Hidden Markov Model)을 기반으로 하는 인식 시스템에 적용하여 그 결과를 비교 검토하였다. 선형예측(LP) 분석 후에 후처리로서 주파수를 왜곡시킨 LPC-MEL 분석이 계산량이 적고 효과적이라 일반적으로 많이 사용되고 있으나 주파수 분해능은 많이 개선되지 않는다. 따라서 본 논문에서는 주파수 분해능을 개선하기 위해, 원 음성신호로부터 직접적으로 멜주파수로 왜곡시킨 후 선형 예측 분석을 수행하는 MEL-LPC 분석방법을 이용한 음소기반의 화자 독립 음성인식 시스템을 구성하여 기존의 LPC-MEL 분석방법과 비교실험을 통하여 MEL-LPC 분석방법의 유효성을 검토하였다. 실험에 사용한 음성 데이터베이스는 음소 및 단어 인식실험에서는 ETRI 445단어 DB, 연속 숫자음인식 실험에서는 KLE 4연속 숫자음 DB를 사용하였다. 화자 독립 음소인식 실험의 경우, 묵음을 제외한 47개의 유사 음소에 대하여 4상태 3출력의 Left-to-Right 모델을이용하였다. 단어 및 연속 숫자음 인식 실험의 경우, 유한상태 네트워크에 의한 OPDP법을 이용하였다. 화자 독립 음소, 단어 및 4연속 숫자음 인식 실험결과, 기존의 LPC-MEL Cepstrum을 사용한 경우보다 MEL-LPC Cepstum을 사용한 경우가 더 높은 인식률을 나타내어 한국어 음성인식 시스템에서 MEL-LPC 분석방법의 유효성을 확인할 수 있었다.

  • PDF

Voice Personality Transformation Using a Multiple Response Classification and Regression Tree (다중 응답 분류회귀트리를 이용한 음성 개성 변환)

  • 이기승
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.3
    • /
    • pp.253-261
    • /
    • 2004
  • In this paper, a new voice personality transformation method is proposed. which modifies speaker-dependent feature variables in the speech signals. The proposed method takes the cepstrum vectors and pitch as the transformation paremeters, which represent vocal tract transfer function and excitation signals, respectively. To transform these parameters, a multiple response classification and regression tree (MR-CART) is employed. MR-CART is the vector extended version of a conventional CART, whose response is given by the vector form. We evaluated the performance of the proposed method by comparing with a previously proposed codebook mapping method. We also quantitatively analyzed the performance of voice transformation and the complexities according to various observations. From the experimental results for 4 speakers, the proposed method objectively outperforms a conventional codebook mapping method. and we also observed that the transformed speech sounds closer to target speech.

Voice Personality Transformation Using a Probabilistic Method (확률적 방법을 이용한 음성 개성 변환)

  • Lee Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.3
    • /
    • pp.150-159
    • /
    • 2005
  • This paper addresses a voice personality transformation algorithm which makes one person's voices sound as if another person's voices. In the proposed method, one person's voices are represented by LPC cepstrum, pitch period and speaking rate, the appropriate transformation rules for each Parameter are constructed. The Gaussian Mixture Model (GMM) is used to model one speaker's LPC cepstrums and conditional probability is used to model the relationship between two speaker's LPC cepstrums. To obtain the parameters representing each probabilistic model. a Maximum Likelihood (ML) estimation method is employed. The transformed LPC cepstrums are obtained by using a Minimum Mean Square Error (MMSE) criterion. Pitch period and speaking rate are used as the parameters for prosody transformation, which is implemented by using the ratio of the average values. The proposed method reveals the superior performance to the previous VQ-based method in subjective measures including average cepstrum distance reduction ratio and likelihood increasing ratio. In subjective test. we obtained almost the same correct identification ratio as the previous method and we also confirmed that high qualify transformed speech is obtained, which is due to the smoothly evolving spectral contours over time.