• Title/Summary/Keyword: Cepstrum (켑스트럼)

Voice personality transformation using an orthogonal vector space conversion (직교 벡터 공간 변환을 이용한 음성 개성 변환)

  • Lee, Ki-Seung; Park, Kun-Jong; Youn, Dae-Hee
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.1 / pp.96-107 / 1996
  • A voice personality transformation algorithm using orthogonal vector space conversion is proposed in this paper. Voice personality transformation is the process of changing one person's acoustic features (source) to those of another person (target). In this paper, personality transformation is achieved by changing the LPC cepstrum coefficients, excitation spectrum, and pitch contour. An orthogonal vector space conversion technique is proposed to transform the LPC cepstrum coefficients. The LPC cepstrum transformation is implemented by principal component decomposition, applying the Karhunen-Loeve transformation and a minimum mean-square error coordinate transformation (MSECT). Additionally, we propose a pitch contour modification method to transform the prosodic characteristics of any speaker. To do this, reference pitch patterns for the source and target speakers are first built up, and the source speaker's pitch contour is then modified to follow the target speaker's one. The experimental results show the effectiveness of the proposed algorithm in both subjective and objective evaluations.

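For illustration, a minimal numpy sketch of a KLT-based cepstrum conversion of the kind described above, assuming time-aligned pairs of source and target LPC cepstrum frames are already available; the Karhunen-Loeve bases are taken as principal-component eigenvectors and the coordinate mapping is fit by ordinary least squares, which only approximates the paper's MSECT formulation.

    import numpy as np

    def klt_basis(X):
        """Mean and Karhunen-Loeve (principal component) basis of frames X (N x D)."""
        mu = X.mean(axis=0)
        eigval, eigvec = np.linalg.eigh(np.cov(X - mu, rowvar=False))
        order = np.argsort(eigval)[::-1]              # sort by decreasing variance
        return mu, eigvec[:, order]

    def fit_coordinate_map(src, tgt):
        """Least-squares map between KL coordinates of aligned source/target frames."""
        mu_s, V_s = klt_basis(src)
        mu_t, V_t = klt_basis(tgt)
        Ys = (src - mu_s) @ V_s                       # source KL coordinates
        Yt = (tgt - mu_t) @ V_t                       # target KL coordinates
        W, *_ = np.linalg.lstsq(Ys, Yt, rcond=None)   # minimum-MSE linear map
        return mu_s, V_s, mu_t, V_t, W

    def convert_frame(c, mu_s, V_s, mu_t, V_t, W):
        """Map one source LPC cepstrum frame into the target speaker's space."""
        return ((c - mu_s) @ V_s) @ W @ V_t.T + mu_t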

Feature Extraction by Optimizing the Cepstral Resolution of Frequency Sub-bands (주파수 부대역의 켑스트럼 해상도 최적화에 의한 특징추출)

  • 지상문; 조훈영; 오영환
    • The Journal of the Acoustical Society of Korea / v.22 no.1 / pp.35-41 / 2003
  • Feature vectors for conventional speech recognition are usually extracted over the full frequency band, so each sub-band contributes equally to the final recognition result. In this paper, feature vectors are extracted independently in each sub-band, and the cepstral resolution of each sub-band feature is controlled for optimal speech recognition. For this purpose, sub-band cepstral vectors of different dimensions are extracted based on the multi-band approach, which extracts a feature vector independently for each sub-band. Speech recognition rates and clustering quality are suggested as the criteria for finding the optimal combination of sub-band feature dimensions. In connected digit recognition experiments using the TIDIGITS database, the proposed method gave 99.125% string accuracy, 99.775% percent correct, and 99.705% percent accuracy, corresponding to relative error rate reductions of 38%, 32%, and 37% over the baseline full-band feature vector, respectively.
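
As a rough sketch of the multi-band idea, the snippet below assumes a precomputed log filterbank matrix, takes a separate DCT per sub-band, and keeps a different number of cepstral coefficients per band; the band boundaries and dimensions shown are illustrative choices, not the values found by the paper's search.

    import numpy as np
    from scipy.fftpack import dct

    def subband_cepstra(log_fbank, band_edges=(0, 8, 16, 24), dims=(6, 4, 3)):
        """log_fbank: (frames x channels) log filterbank energies.
        band_edges: channel indices splitting the bands; dims: coefficients kept per band."""
        feats = []
        for (lo, hi), d in zip(zip(band_edges[:-1], band_edges[1:]), dims):
            cep = dct(log_fbank[:, lo:hi], type=2, axis=1, norm='ortho')[:, :d]
            feats.append(cep)                  # per-band cepstrum with its own resolution
        return np.hstack(feats)                # one concatenated feature vector per frame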

Text Independent Speaker Identification Using Separate Matrix Quantization (분할 매트릭스 부호화를 이용한 문장 독립형 화자인식 시스템)

  • 경연정; 이황수
    • The Journal of the Acoustical Society of Korea / v.17 no.5 / pp.69-72 / 1998
  • This paper proposes using Matrix Quantization (MQ) for a text-independent speaker identification system. In addition, Separated Matrix Quantization (SMQ), a modification of MQ, is proposed to improve the recognition rate. The conventional VQ-distortion method generally performs well but has the drawback of not exploiting a speaker's dynamic characteristics. Since MQ and SMQ can exploit these dynamic characteristics, they have the advantage of also modeling how a speaker's features change over time. MQ groups several frames together into a matrix codebook, while SMQ further divides the basic MQ codebook according to cepstral order: the cepstral coefficients are split into low, middle, and high orders, and a matrix codebook is built for each part. Recognition experiments were carried out on text-independent speech data; for the MQ model, the matrix size was varied from the length of a short phoneme up to a syllable. For the SMQ model, experiments using partial orders were performed to examine the usefulness of each order band. The results confirmed that MQ and SMQ perform better than VQ.

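A toy illustration of the MQ/SMQ idea follows: consecutive cepstral frames are stacked into fixed-width matrices and clustered with ordinary k-means to form a matrix codebook, and SMQ repeats this separately on the low-, middle-, and high-order cepstral coefficients. The splits and codebook sizes are assumptions for illustration, not the authors' settings; identification would then pick the speaker whose codebooks give the smallest average quantization distortion on a test utterance.

    import numpy as np
    from sklearn.cluster import KMeans

    def matrix_codebook(cepstra, width=4, size=64):
        """Stack `width` consecutive frames into matrices and cluster them (MQ)."""
        n = (len(cepstra) // width) * width
        mats = cepstra[:n].reshape(-1, width * cepstra.shape[1])   # flatten each matrix
        return KMeans(n_clusters=size, n_init=10).fit(mats)

    def smq_codebooks(cepstra, splits=(4, 8), **kw):
        """SMQ: one matrix codebook per cepstral-order band (low / middle / high)."""
        lo, mid = splits
        bands = (cepstra[:, :lo], cepstra[:, lo:mid], cepstra[:, mid:])
        return [matrix_codebook(b, **kw) for b in bands]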

Applying feature normalization based on pole filtering to short-utterance speech recognition using deep neural network (심층신경망을 이용한 짧은 발화 음성인식에서 극점 필터링 기반의 특징 정규화 적용)

  • Han, Jaemin; Kim, Min Sik; Kim, Hyung Soon
    • The Journal of the Acoustical Society of Korea / v.39 no.1 / pp.64-68 / 2020
  • In conventional speech recognition systems using the Gaussian Mixture Model-Hidden Markov Model (GMM-HMM), a cepstral feature normalization method based on pole filtering was effective in improving the recognition of short utterances in noisy environments. In this paper, the usefulness of this method for a state-of-the-art speech recognition system using a Deep Neural Network (DNN) is examined. Experimental results on the AURORA 2 DB show that cepstral mean and variance normalization based on pole filtering improves the recognition performance of very short utterances compared to that without pole filtering, especially when there is a large mismatch between the training and test conditions.
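
Below is a minimal per-utterance cepstral mean and variance normalization (CMVN) routine for reference; the pole-filtering step the paper relies on, which stabilizes the mean estimate on very short utterances, is not reproduced here and would change how the statistics are computed.

    import numpy as np

    def cmvn(cepstra, eps=1e-8):
        """Per-utterance cepstral mean and variance normalization.
        cepstra: (frames x coefficients) feature matrix."""
        mu = cepstra.mean(axis=0)
        sigma = cepstra.std(axis=0)
        return (cepstra - mu) / (sigma + eps)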

Emotion Recognition using Robust Speech Recognition System (강인한 음성 인식 시스템을 사용한 감정 인식)

  • Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.5 / pp.586-591 / 2008
  • This paper studies an emotion recognition system combined with a robust speech recognition system in order to improve emotion recognition performance. For this purpose, the effect of emotional variation on the speech recognition system and robust feature parameters for speech recognition were studied using a speech database containing various emotions. Final emotion recognition is performed using the input utterance and its emotional model according to the result of speech recognition. In the experiments, the robust speech recognition system is an HMM-based speaker-independent word recognizer using RASTA mel-cepstral coefficients and their derivatives, with cepstral mean subtraction (CMS) for signal bias removal. Experimental results showed that the emotion recognizer combined with the speech recognition system performed better than the emotion recognizer alone.
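
For reference, a hedged sketch of the two front-end components named above: the classic RASTA band-pass filter applied along the time axis of the cepstral trajectories, and cepstral mean subtraction. The filter coefficients follow the commonly cited RASTA design; the pole value and the handling of filter start-up differ between implementations.

    import numpy as np
    from scipy.signal import lfilter

    def rasta_filter(cepstra, pole=0.98):
        """Band-pass filter each cepstral trajectory over time (classic RASTA design).
        Some toolkits use a pole of 0.94 instead of 0.98."""
        numer = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])
        denom = np.array([1.0, -pole])
        return lfilter(numer, denom, cepstra, axis=0)

    def cms(cepstra):
        """Cepstral mean subtraction: remove the per-utterance channel bias."""
        return cepstra - cepstra.mean(axis=0)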

A Study on Speaker Recognition Algorithm Through Wire/Wireless Telephone (유무선 전화를 통한 화자인식 알고리즘에 관한 연구)

  • 김정호; 정희석; 강철호; 김선희
    • The Journal of the Acoustical Society of Korea / v.22 no.3 / pp.182-187 / 2003
  • In this thesis, we propose an algorithm that improves the performance of speaker verification by mapping feature parameters with an RBF neural network. Feature vectors from wire and wireless channels occupy quite different regions even when they come from the same speaker. To build wire/wireless speaker models, the speaker verification system first distinguishes the wire channel from the wireless channel based on a speech recognition system, and the feature vectors (LPC cepstrum) of the untrained channel are then mapped to those of the trained channel model using an RBF neural network. Simulation results show that the proposed algorithm yields a 0.6%∼10.5% performance improvement over conventional methods such as cepstral mean subtraction.
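
A small sketch of the channel-mapping step described above, assuming paired wire/wireless LPC cepstrum frames are available for training: the RBF network is approximated here by k-means centers, Gaussian hidden-layer activations, and least-squares output weights, which stands in for the paper's actual RBF training.

    import numpy as np
    from sklearn.cluster import KMeans

    def train_rbf_map(X_src, X_tgt, n_centers=64, gamma=0.05):
        """Fit an RBF network mapping source-channel cepstra to target-channel cepstra."""
        centers = KMeans(n_clusters=n_centers, n_init=10).fit(X_src).cluster_centers_
        H = np.exp(-gamma * ((X_src[:, None, :] - centers[None]) ** 2).sum(-1))  # hidden layer
        W, *_ = np.linalg.lstsq(H, X_tgt, rcond=None)   # linear output weights
        return centers, gamma, W

    def apply_rbf_map(X, centers, gamma, W):
        """Map cepstra from the untrained channel into the trained channel's space."""
        H = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
        return H @ W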

The Design of Speech Recognition Chip for a Small Vocabulary as a Word-level (소어휘 단어단위의 음성인식 칩 설계)

  • 안점영; 최영식
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.2 / pp.330-338 / 2002
  • A speech recognition chip that can recognize a small vocabulary at the word level has been designed. It is composed of an EPD (start- and end-point detection) block, an LPC block, a DTW block, and an external memory interface block. It comprises 126,938 gates on a 4 x 4 mm² area in a 0.35 μm CMOS TLM process. The clock speed of the chip varies from 5 MHz to 60 MHz because of its purpose-specific hardware design. At a clock of 5 MHz it can compare 100,000 small-vocabulary words of approximately 50∼60 frames each, and up to 1,200,000 words at a clock of 60 MHz.
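
For illustration, a plain software version of the DTW matching such a chip performs in hardware: it accumulates the best-path distance between the feature frames of an input word and a stored reference template, and recognition picks the template with the smallest distance. The frame features and local distance measure used here are generic assumptions.

    import numpy as np

    def dtw_distance(test, ref):
        """Accumulated DTW distance between two (frames x D) feature sequences."""
        T, R = len(test), len(ref)
        D = np.full((T + 1, R + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, T + 1):
            for j in range(1, R + 1):
                cost = np.linalg.norm(test[i - 1] - ref[j - 1])   # local frame distance
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[T, R]

    # Recognition: pick min(templates, key=lambda tpl: dtw_distance(input_feats, tpl))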