• Title/Summary/Keyword: Speaker normalization

Performance Improvement of Speech Recognition System Based on Speaker Normalization Through Linear Warping Function (선형워핑함수의 화자정규화에 의한 음성 인식시스템의 성능향상)

  • Choi, Seok-Yong;Chung, Kyoung-Yong;Lee, Jung-Hyun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2000.10b
    • /
    • pp.879-882
    • /
    • 2000
  • Speaker-dependent speech recognition systems are known to outperform speaker-independent systems when the training data can sufficiently model the acoustic variation among speakers. Speaker normalization techniques reduce inter-speaker variation by modifying the spectrum of the input speech. Recent successful speaker normalization algorithms have integrated speaker-specific frequency warping into the signal-processing stage. However, such algorithms do not exploit all of the acoustic characteristics contained in the input speech. This paper proposes a speaker normalization method that uses three formant frequencies as the speaker's acoustic characteristics and applies linear regression to the collected formant frequencies to define the warping function. Recognition performance was improved using this method.

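The regression-based warping idea described above can be sketched with a least-squares linear fit. In this illustrative example (not the paper's actual data), the speaker and reference formant values are hypothetical, and NumPy's `polyfit` estimates the linear warping function:

```python
import numpy as np

# Hypothetical example values: a speaker's first three formant
# frequencies (Hz) and reference (average-speaker) formants.
speaker_formants = np.array([730.0, 1090.0, 2440.0])
reference_formants = np.array([660.0, 1200.0, 2550.0])

# Fit a linear warping function f_ref = a * f_spk + b by least squares.
a, b = np.polyfit(speaker_formants, reference_formants, deg=1)

def warp(freq_hz):
    """Map a speaker-specific frequency onto the reference axis."""
    return a * freq_hz + b
```

The fitted line maps the speaker's frequency axis onto the reference axis; in a full system the warped axis would then drive the filterbank used for feature extraction.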
Implementation of a Speech Recognition System for a Car Navigation System (차량 항법용 음성인식 시스템의 구현)

  • Lee, Tae-Han;Yang, Tae-Young;Park, Sang-Taick;Lee, Chung-Yong;Youn, Dae-Hee;Cha, Il-Hwan
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.9
    • /
    • pp.103-112
    • /
    • 1999
  • In this paper, a speaker-independent isolated word recognition system for a car navigation system is implemented using a general-purpose digital signal processor. This paper presents a method combining SNR normalization with RAS as a noise-processing method. The semi-continuous hidden Markov model is adopted, and a TMS320C31 is used to implement the real-time system. The recognition word set is composed of 69 command words for a car navigation system. Experimental results showed that recognition performance reaches a maximum of 93.62% when SNR normalization is combined with spectral subtraction, and the performance improvement rate of the system is 3.69%. The presented noise-processing method showed good speech recognition performance at 5 dB SNR in a car environment.

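The spectral subtraction component mentioned in the abstract can be illustrated with a minimal magnitude-domain sketch, assuming NumPy and hypothetical spectra; the paper's actual SNR normalization and RAS steps are not reproduced here:

```python
import numpy as np

def spectral_subtraction(noisy_mag, noise_mag, floor=0.01):
    """Subtract an estimated noise magnitude spectrum from the noisy
    spectrum, flooring the result to avoid negative magnitudes."""
    clean = noisy_mag - noise_mag
    return np.maximum(clean, floor * noisy_mag)

# Hypothetical magnitude spectra for a single analysis frame.
noisy_mag = np.array([1.0, 0.8, 0.5, 0.2])
noise_mag = np.array([0.3, 0.3, 0.3, 0.3])
enhanced = spectral_subtraction(noisy_mag, noise_mag)
```

The spectral floor prevents the "musical noise" artifacts that full subtraction to zero would otherwise introduce in low-energy bins.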
Robust Speech Parameters for the Emotional Speech Recognition (감정 음성 인식을 위한 강인한 음성 파라메터)

  • Lee, Guehyun;Kim, Weon-Goo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.6
    • /
    • pp.681-686
    • /
    • 2012
  • This paper studied speech parameters that are less affected by human emotion, for the development of a robust emotional speech recognition system. For this purpose, the effect of emotion on speech recognition systems and robust speech parameters were studied using a speech database containing various emotions. Mel-cepstral coefficients, delta-cepstral coefficients, RASTA mel-cepstral coefficients, root-cepstral coefficients, PLP coefficients, and frequency-warped mel-cepstral coefficients obtained with the vocal tract length normalization method were used as feature parameters, and CMS (Cepstral Mean Subtraction) and SBR (Signal Bias Removal) were used as signal-bias-removal techniques. Experimental results showed that the HMM-based speaker-independent word recognizer using frequency-warped RASTA mel-cepstral coefficients from the vocal tract length normalization method, their derivatives, and CMS for signal bias removal gave the best performance.
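Of the techniques listed, CMS (Cepstral Mean Subtraction) is the simplest to illustrate; a minimal NumPy sketch on a hypothetical feature matrix:

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Remove the per-utterance cepstral mean (the stationary channel
    bias) from a (frames x coefficients) matrix of cepstral features."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# Hypothetical 3-frame, 2-coefficient feature matrix.
feats = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])
normalized = cepstral_mean_subtraction(feats)
```

Because a fixed channel or bias adds a constant to every frame's cepstrum, subtracting the utterance mean cancels it while leaving the frame-to-frame dynamics intact.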

Performance Improvement in GMM-based Text-Independent Speaker Verification System (GMM 기반의 문맥독립 화자 검증 시스템의 성능 향상)

  • Hahm Seong-Jun;Shen Guang-Hu;Kim Min-Jung;Kim Joo-Gon;Jung Ho-Youl;Chung Hyun-Yeol
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.131-134
    • /
    • 2004
  • In this paper, a text-independent speaker verification system based on a GMM (Gaussian Mixture Model) was implemented, and speaker verification experiments were carried out using a normalization method based on the arctan function. Cepstral coefficients obtained by linear prediction and their regression coefficients were used as feature parameters, and CMN (Cepstral Mean Normalization) was applied to account for variation in the speakers' utterances. In the training stage for speaker-model generation, a GMM, which can represent the acoustic characteristics of a speaker's utterances well, was used. In the verification stage, the likelihood was computed using ML (Maximum Likelihood), and verification was performed by comparing the scores normalized by the conventional method and by the arctan-based method against a predetermined threshold. The speaker verification experiments confirmed that the method incorporating the arctan function always yielded an improved EER over the conventional method.

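The abstract does not give the exact arctan normalization formula, but the general idea, mapping an unbounded verification score into a bounded range before threshold comparison, can be sketched as follows (the scale factor and threshold here are illustrative assumptions, not the paper's values):

```python
import math

def arctan_normalize(score, scale=1.0):
    """Map an unbounded log-likelihood score into (-1, 1) via arctan.
    Illustrative sketch; the paper's exact formulation is not given."""
    return (2.0 / math.pi) * math.atan(scale * score)

THRESHOLD = 0.5  # hypothetical decision threshold

def verify(score):
    """Accept the claimed speaker if the normalized score clears the threshold."""
    return arctan_normalize(score) >= THRESHOLD
```

Bounding the score makes a single global threshold more stable across speakers and utterances, which is the usual motivation for score normalization in verification.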
A comparison of normalized formant trajectories of English vowels produced by American men and women

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.11 no.1
    • /
    • pp.1-8
    • /
    • 2019
  • Formant trajectories reflect the continuous variation of speakers' articulatory movements over time. This study examined formant trajectories of English vowels produced by ninety-three American men and women; the values were normalized using the scale function in R and compared using generalized additive mixed models (GAMMs). Praat was used to read the sound data of Hillenbrand et al. (1995). A formant analysis script was prepared, and six formant values at corresponding time points within each vowel segment were collected. The results indicate that women yielded proportionately higher formant values than men. The standard deviations of each group showed similar patterns along the first formant (F1) and second formant (F2) axes and across the measurement points. R was used to scale the first two formant data sets of men and women separately. GAMMs of all the scaled formant data produced various patterns of deviation along the measurement points. Generally, greater group differences exist in F1 than in F2. Also, women's trajectories appear more dynamic along the vertical and horizontal axes than those of men. The trajectories are related acoustically to F1 and F2 and anatomically to jaw opening and tongue position. We conclude that scaling and nonlinear testing are useful tools for pinpointing differences between speaker groups' formant trajectories. This research could serve as a foundation for future studies comparing curvilinear data sets.
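R's `scale()` function, used here for normalization, z-scores each column; a NumPy equivalent applied to hypothetical formant measurements might look like this:

```python
import numpy as np

def scale(x):
    """Column-wise z-scoring, mirroring R's scale(): subtract the column
    mean and divide by the sample standard deviation (ddof=1, as in R)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)

# Hypothetical F1 values (Hz) at three time points for two speakers.
f1 = np.array([[300.0, 320.0],
               [500.0, 540.0],
               [700.0, 760.0]])
z = scale(f1)
```

Z-scoring each speaker group separately removes overall scale differences (e.g., women's proportionately higher formants) so that trajectory shapes, rather than absolute frequencies, are compared.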

Comparison of Korean Speech De-identification Performance of Speech De-identification Model and Broadcast Voice Modulation (음성 비식별화 모델과 방송 음성 변조의 한국어 음성 비식별화 성능 비교)

  • Seung Min Kim;Dae Eol Park;Dae Seon Choi
    • Smart Media Journal
    • /
    • v.12 no.2
    • /
    • pp.56-65
    • /
    • 2023
  • In broadcasts such as news and coverage programs, the voice is modulated to protect the identity of the informant. Adjusting the pitch is a commonly used voice modulation method, but it allows the original voice to be easily restored by readjusting the pitch. Since broadcast voice modulation methods therefore cannot properly protect the identity of the speaker and are weak in terms of security, a new voice modulation method is needed to replace them. In this paper, using the Lightweight speech de-identification model as the evaluation target, we compare its speech de-identification performance with that of the pitch-based broadcast voice modulation method. Among the six modulation methods in the Lightweight speech de-identification model, we evaluated the de-identification performance on Korean speech with a human test and an EER (Equal Error Rate) test, comparing three modulation methods (McAdams, Resampling, and Vocal Tract Length Normalization (VTLN)) against broadcast voice modulation. Experimental results show that the VTLN modulation method achieved higher de-identification performance in both the human tests and the EER tests. As a result, the modulation methods of the Lightweight model provide sufficient de-identification performance for Korean speech and could replace the security-weak broadcast voice modulation.
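VTLN, the best-performing modulation method here, warps the frequency axis by a speaker-specific factor. A common piecewise-linear formulation (the paper's exact variant is not specified in the abstract) can be sketched as:

```python
def vtln_warp(freq_hz, alpha, f_max=8000.0, cut_ratio=0.875):
    """Piecewise-linear VTLN frequency warping (illustrative sketch).
    Below the cutoff the axis is scaled by alpha; above it, a linear
    segment maps the endpoint onto f_max so the bandwidth is preserved."""
    cutoff = cut_ratio * f_max
    if freq_hz <= cutoff:
        return alpha * freq_hz
    slope = (f_max - alpha * cutoff) / (f_max - cutoff)
    return alpha * cutoff + slope * (freq_hz - cutoff)
```

Choosing alpha away from 1 shifts formants systematically, which is why the same warping used for speaker normalization can also serve as a de-identification transform.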