• Title/Summary/Keyword: Speaker Adaptation

122 search results

Self-Adaptation Algorithm Based on Maximum A Posteriori Eigenvoice for Korean Connected Digit Recognition (한국어 연결 숫자음 인식을 위한 최대 사후 Eigenvoice에 근거한 자기적응 기법)

  • Kim Dong Kook;Jeon Hyung Bae
    • The Journal of the Acoustical Society of Korea / v.23 no.8 / pp.590-596 / 2004
  • This paper presents a new self-adaptation algorithm based on maximum a posteriori (MAP) eigenvoice for Korean connected digit recognition. The proposed MAP eigenvoice is developed by introducing a probability density model for the eigenvoice coefficients, and it provides a unified framework that incorporates the prior model into conventional eigenvoice estimation. Since the self-adaptation system uses only the single utterance to be recognized as adaptation data, we employ the MAP eigenvoice, which is the most robust adaptation method in this setting. In a series of self-adaptation experiments on the Korean connected digit recognition task, we demonstrate that the proposed approach outperforms the conventional eigenvoice algorithm for small amounts of adaptation data.
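The MAP eigenvoice estimate described in this abstract can be sketched as a regularized least-squares problem (a minimal numpy sketch, not the paper's implementation: it assumes a single mean supervector, unit observation variance, and a zero-mean Gaussian prior on the coefficients; all names are illustrative):

```python
import numpy as np

def map_eigenvoice_weights(obs_mean, global_mean, eigenvoices, prior_var):
    """MAP estimate of eigenvoice coefficients w under a zero-mean Gaussian
    prior: minimizes ||o - m - E^T w||^2 + (1/prior_var) * ||w||^2."""
    E = eigenvoices                      # shape (K, D): K eigenvoices
    residual = obs_mean - global_mean    # shape (D,): adaptation statistics
    # Prior acts as a ridge term; large prior_var recovers the ML eigenvoice.
    A = E @ E.T + (1.0 / prior_var) * np.eye(E.shape[0])
    return np.linalg.solve(A, E @ residual)
```

With a nearly flat prior the estimate approaches the conventional (ML) eigenvoice solution, while a tight prior shrinks the coefficients toward zero, which is what makes the estimate robust with a single adaptation utterance.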

Noise-Robust Speaker Recognition Using Subband Likelihoods and Reliable-Feature Selection

  • Kim, Sung-Tak;Ji, Mi-Kyong;Kim, Hoi-Rin
    • ETRI Journal / v.30 no.1 / pp.89-100 / 2008
  • We consider the feature recombination technique in a multiband approach to speaker identification and verification. To overcome the ineffectiveness of conventional feature recombination in broadband noisy environments, we propose a new subband feature recombination that uses subband likelihoods, together with a subband reliable-feature selection technique based on an adaptive noise model. In the decision step of speaker recognition, a few very low likelihood scores from unreliable features can cause the system to make an incorrect decision. To overcome this problem, reliable-feature selection adjusts the likelihood scores of unreliable features by comparing them with those of an adaptive noise model, which is estimated by the maximum a posteriori adaptation technique using noise features obtained directly from the noisy test speech. To evaluate the effectiveness of the proposed methods in noisy environments, we use the TIMIT database and the NTIMIT database, its telephone counterpart. The proposed subband feature recombination with subband reliable-feature selection achieves better performance than the conventional feature recombination system with reliable-feature selection.
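The core of the reliable-feature selection step can be illustrated in a few lines (a simplified sketch under an assumption not stated in the abstract: that an unreliable feature's score is simply floored at the noise-model score; the function name is illustrative):

```python
import numpy as np

def select_reliable_scores(speaker_loglik, noise_loglik):
    """Per-feature reliable-feature selection (simplified): a feature whose
    speaker-model log-likelihood falls below the adaptive noise model's
    log-likelihood is treated as unreliable, and its score is floored at
    the noise-model score so a few very low likelihoods cannot dominate
    the final decision."""
    speaker_loglik = np.asarray(speaker_loglik, dtype=float)
    noise_loglik = np.asarray(noise_loglik, dtype=float)
    return np.maximum(speaker_loglik, noise_loglik)
```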


A Study on the Voice Conversion with HMM-based Korean Speech Synthesis (HMM 기반의 한국어 음성합성에서 음색변환에 관한 연구)

  • Kim, Il-Hwan;Bae, Keun-Sung
    • MALSORI / v.68 / pp.65-74 / 2008
  • A statistical parametric speech synthesis system based on hidden Markov models (HMMs) has grown in popularity over the last few years because it requires less memory and lower computational complexity than a corpus-based unit-concatenation text-to-speech (TTS) system and is well suited to embedded systems. It also has the advantage that the voice characteristics of the synthetic speech can be modified easily by transforming the HMM parameters appropriately. In this paper, we present experimental results of voice characteristics conversion using an HMM-based Korean speech synthesis system. The results show that conversion of voice characteristics can be achieved using only a few sentences uttered by a target speaker: synthetic speech generated from models adapted with only ten sentences was very close to that from speaker-dependent models trained on 646 sentences.


One-shot multi-speaker text-to-speech using RawNet3 speaker representation (RawNet3를 통해 추출한 화자 특성 기반 원샷 다화자 음성합성 시스템)

  • Sohee Han;Jisub Um;Hoirin Kim
    • Phonetics and Speech Sciences / v.16 no.1 / pp.67-76 / 2024
  • Recent advances in text-to-speech (TTS) technology have significantly improved the quality of synthesized speech, reaching a level where it can closely imitate natural human speech. In particular, TTS models that offer diverse voice characteristics and personalized speech are widely used in fields such as artificial intelligence (AI) tutors, advertising, and video dubbing. Accordingly, in this paper, we propose a one-shot multi-speaker TTS system that can ensure acoustic diversity and synthesize a personalized voice by generating speech from an unseen target speaker's utterances. The proposed model integrates a speaker encoder into a TTS model consisting of the FastSpeech2 acoustic model and the HiFi-GAN vocoder. The speaker encoder, based on the pre-trained RawNet3, extracts speaker-specific voice features. Furthermore, the proposed approach covers not only English but also Korean one-shot multi-speaker TTS. We evaluate the naturalness and speaker similarity of the generated speech using objective and subjective metrics. In the subjective evaluation, the proposed Korean one-shot multi-speaker TTS obtained a naturalness mean opinion score (NMOS) of 3.36 and a similarity MOS (SMOS) of 3.16. The objective evaluation of the proposed English and Korean one-shot multi-speaker TTS showed a predicted MOS (P-MOS) of 2.54 and 3.74, respectively. These results indicate that our proposed model improves over the baseline models in terms of both naturalness and speaker similarity.
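One common way to integrate a speaker encoder into an acoustic model like the one described is to project the speaker embedding and broadcast-add it to every encoder frame (a hypothetical sketch only; the abstract does not specify the conditioning scheme, and the shapes and names below are illustrative, not RawNet3's actual interface):

```python
import numpy as np

def condition_on_speaker(encoder_out, speaker_embedding, proj):
    """Broadcast-add a projected speaker embedding to every frame of the
    acoustic-model encoder output, so the decoder sees speaker identity
    at each time step.
    encoder_out: (T, d_model); speaker_embedding: (d_spk,); proj: (d_spk, d_model)."""
    return encoder_out + speaker_embedding @ proj
```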

Speaker-Adaptive Speech Synthesis based on Fuzzy Vector Quantizer Mapping and Neural Networks (퍼지 벡터 양자화기 사상화와 신경망에 의한 화자적응 음성합성)

  • Lee, Jin-Yi;Lee, Gwang-Hyeong
    • The Transactions of the Korea Information Processing Society / v.4 no.1 / pp.149-160 / 1997
  • This paper is concerned with a speaker-adaptive speech synthesis method using a mapped codebook designed by fuzzy mapping on FLVQ (fuzzy learning vector quantization). The FLVQ is used to design both the input and reference speakers' codebooks. This algorithm incorporates fuzzy membership functions into the LVQ (learning vector quantization) network. Unlike the LVQ algorithm, it minimizes the network output errors, which are the differences between target and actual class membership values, and consequently minimizes the distances between training patterns and competing neurons. Speaker adaptation in speech synthesis is performed as follows: the input speaker's codebook is mapped to the reference speaker's codebook using fuzzy concepts. The fuzzy VQ mapping replaces each codevector while preserving its fuzzy membership function. The codevector correspondence histogram is obtained by accumulating vector correspondences along the DTW optimal path, and the fuzzy VQ mapping is used to design a mapped codebook, defined as a linear combination of the reference speaker's vectors using each fuzzy histogram as a weighting function with membership values. In the adaptive speech synthesis stage, input speech is fuzzy vector-quantized by the mapped codebook, and FCM arithmetic is then used to synthesize speech adapted to the input speaker. The speaker adaptation experiments use speech of males in their thirties as input speakers and a female in her twenties as the reference speaker; the test sentences are /anyoung hasim nika/ and /good morning/. As a result of the experiments, we obtained synthesized speech adapted to the input speaker.
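The mapped-codebook construction described above reduces to a weighted combination of reference codevectors (a minimal sketch under a simplifying assumption: the fuzzy correspondence histogram is taken as given and simply row-normalized into membership weights; names are illustrative):

```python
import numpy as np

def mapped_codebook(reference_codebook, correspondence_hist):
    """Build the mapped codebook as a linear combination of the reference
    speaker's codevectors, weighting each by the normalized codevector
    correspondence histogram accumulated along the DTW optimal path.
    reference_codebook: (n_ref, dim); correspondence_hist: (n_input, n_ref)."""
    H = np.asarray(correspondence_hist, dtype=float)
    W = H / H.sum(axis=1, keepdims=True)   # fuzzy membership weights per input codevector
    return W @ reference_codebook          # (n_input, dim) mapped codevectors
```

Each mapped codevector is a convex combination of reference codevectors, so quantizing the input speech with the mapped codebook moves it toward the reference speaker's spectral space.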


Robust Speaker Identification Exploiting the Advantages of PCA and LDA (주성분분석과 선형판별분석의 장점을 이용한 강인한 화자식별)

  • Kim, Min-Seok;Yu, Ha-Jin;Kim, Sung-Joo
    • Proceedings of the KSPS conference / 2007.05a / pp.319-322 / 2007
  • The goal of our research is to build a text-independent speaker identification system that can be used in mobile devices without any additional adaptation process. In this paper, we show that exploiting the advantages of both PCA (principal component analysis) and LDA (linear discriminant analysis) can increase performance in this situation. The proposed method reduced the relative recognition error by 13.5%.
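A typical way to combine the two transforms is to decorrelate and denoise with PCA first, then discriminate with LDA in the reduced subspace (a minimal two-class numpy sketch of this general pattern, not the paper's exact configuration; names are illustrative):

```python
import numpy as np

def pca_lda_features(X, y, n_pca):
    """Project features with PCA first (robust dimensionality reduction),
    then apply a two-class Fisher LDA in the PCA subspace.
    X: (n_samples, dim); y: binary labels; returns 1-D discriminant scores."""
    Xc = X - X.mean(axis=0)
    # PCA: top principal directions via SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_pca].T
    # Fisher LDA direction in the PCA subspace: w = Sw^{-1} (m1 - m0).
    m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(n_pca), m1 - m0)
    return Z @ w
```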


Adaptation Mode Controller for Adaptive Microphone Array System (마이크로폰 어레이를 위한 적응 모드 컨트롤러)

  • Jung Yang-Won;Kang Hong-Goo;Lee Chungyong;Hwang Youngsoo;Youn Dae Hee
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.11C / pp.1573-1580 / 2004
  • In this paper, an adaptation mode controller for an adaptive microphone array system is proposed for high-quality speech acquisition in real environments. To ensure proper adaptation of the adaptive array algorithm, the proposed adaptation mode controller uses not only temporal information but also spatial information. It is constructed with two processing stages: an initialization stage and a running stage. A sound source localization technique is adopted in the initialization stage, and a signal correlation characteristic is used in the running stage. For the adaptive array algorithm, a generalized sidelobe canceller with an adaptive blocking matrix is used. The proposed adaptation mode controller can be used even when the adaptive blocking matrix is not adapted, and it is much more stable than the power-ratio method. The proposed algorithm is evaluated in a real environment, and the results show a 13 dB SINR improvement with the speaker sitting 2 m from the array.
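The running-stage decision based on a signal correlation characteristic might look like the following (a speculative sketch, since the abstract does not give the detector's form: it declares the desired speaker active when the normalized cross-correlation of two microphone channels peaks near the lag implied by the speaker's direction; names and the threshold are illustrative):

```python
import numpy as np

def target_present(ch1, ch2, expected_lag, threshold=0.5):
    """Running-stage adaptation mode decision (illustrative): the desired
    speaker is judged active when two microphone channels are strongly
    correlated at (approximately) the expected inter-channel lag."""
    n = len(ch1)
    corr = np.correlate(ch1, ch2, mode="full") / (
        np.linalg.norm(ch1) * np.linalg.norm(ch2) + 1e-12)
    lag = int(np.argmax(corr)) - (n - 1)   # numpy's 'full' mode lag convention
    return abs(lag - expected_lag) <= 1 and corr.max() > threshold
```

A controller would then adapt the blocking matrix only while the target is active and adapt the sidelobe-cancelling filters only while it is absent, which is the usual division of labor in a generalized sidelobe canceller.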

A Closed-Form Solution of Linear Spectral Transformation for Robust Speech Recognition

  • Kim, Dong-Hyun;Yook, Dong-Suk
    • ETRI Journal / v.31 no.4 / pp.454-456 / 2009
  • The maximum likelihood linear spectral transformation (ML-LST) using a numerical iteration method has been previously proposed for robust speech recognition. The numerical iteration method is not appropriate for real-time applications due to its computational complexity. In order to reduce the computational cost, the objective function of the ML-LST is approximated and a closed-form solution is proposed in this paper. It is shown experimentally that the proposed closed-form solution for the ML-LST can provide rapid speaker and environment adaptation for robust speech recognition.

Robust Correlation Estimation for Rapid Speaker Adaptation (EMAP에 기반한 화자적응을 위한 강인한 상관계수의 예측)

  • 전유진;김동국;김남수
    • Proceedings of the IEEK Conference / 2000.09a / pp.113-116 / 2000
  • In this paper, we present a method that uses probabilistic principal component analysis (PPCA) to improve the performance of a speaker adaptation system based on extended maximum a posteriori (EMAP) estimation. PPCA is applied to robustly estimate the correlation matrix between the hidden Markov models (HMMs), and the resulting correlation matrix is used in the speaker adaptation system. PPCA is computationally efficient and shows improved performance over the method previously used in EMAP. Through a series of speech recognition experiments, we confirm that EMAP with PPCA performs well with small amounts of adaptation data.
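The PPCA-based robust estimate of a correlation/covariance structure keeps only the leading eigencomponents and models the remainder as isotropic noise (a minimal sketch of standard PPCA covariance reconstruction, not necessarily the paper's exact procedure; names are illustrative):

```python
import numpy as np

def ppca_covariance(S, q):
    """Robust covariance estimate via PPCA: retain the top-q
    eigencomponents of the sample covariance S and replace the rest
    with isotropic noise, giving C = W W^T + sigma^2 * I."""
    vals, vecs = np.linalg.eigh(S)
    vals, vecs = vals[::-1], vecs[:, ::-1]          # sort descending
    sigma2 = vals[q:].mean()                        # average discarded variance
    W = vecs[:, :q] * np.sqrt(np.maximum(vals[:q] - sigma2, 0.0))
    return W @ W.T + sigma2 * np.eye(S.shape[0])
```

A useful property of this reconstruction is that it preserves the total variance (the trace of S) while smoothing the poorly estimated low-variance directions, which is what makes it robust with limited data.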


A Study on Speaker Adaptation in Continuous Digits Speech Recognition (연속숫자 음성인식에서 화자 적응에 관한 연구)

  • 최광표
    • Proceedings of the Acoustical Society of Korea Conference / 1998.06e / pp.319.2-322 / 1998
  • In this paper, we implement a two-stage speaker adaptation algorithm for a connected digit recognition system based on demi-syllable HMMs. Even when a large amount of training data is available, a speaker-independent recognition system runs into many problems caused by individual speaking habits such as speaking rate and loudness. To cope with such inter-speaker variation in speaker-independent recognition, the dynamic characteristics of the spectrum are mainly used to extract effective acoustic features. Accordingly, we apply frequency-warped spectral matching, one of the speaker adaptation techniques, to the connected digit recognition system, and confirm that the recognition error rate decreases when an appropriate per-speaker scaling factor is selected through recognition.
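Frequency-warped spectral matching of the kind described amounts to resampling each spectrum along a warped frequency axis (a minimal sketch assuming a simple linear warp f' = alpha * f, where alpha is the per-speaker scaling factor; the paper's warping function may differ, and names are illustrative):

```python
import numpy as np

def warp_spectrum(spectrum, alpha):
    """Resample a magnitude spectrum along a linearly warped frequency
    axis f' = alpha * f. alpha < 1 stretches the spectrum toward higher
    bins (longer vocal tract), alpha > 1 compresses it."""
    n = len(spectrum)
    src = np.clip(np.arange(n) * alpha, 0, n - 1)   # source bin for each output bin
    return np.interp(src, np.arange(n), spectrum)
```

In the adaptation loop, alpha would be chosen per speaker by picking the value that yields the best recognition score, as the abstract describes.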
