• Title/Summary/Keyword: fast speaker adaptation

Search Results: 11

Fast Speaker Adaptation and Environment Compensation Based on Eigenspace-based MLLR (Eigenspace-based MLLR에 기반한 고속 화자적응 및 환경보상)

  • Song Hwa-Jeon;Kim Hyung-Soon
    • MALSORI / no.58 / pp.35-44 / 2006
  • Maximum likelihood linear regression (MLLR) adaptation suffers severe performance degradation when only a very small amount of adaptation data is available. Eigenspace-based MLLR, an alternative to MLLR for fast speaker adaptation, has its own weakness: it cannot handle the mismatch between training and testing environments. In this paper, we propose simultaneous fast speaker and environment adaptation based on eigenspace-based MLLR. We also extend sub-stream-based eigenspace-based MLLR to generalize eigenspace-based MLLR with bias compensation. A vocabulary-independent word recognition experiment shows that the proposed algorithm is superior to eigenspace-based MLLR regardless of the amount of adaptation data in diverse noisy environments. In particular, the proposed sub-stream eigenspace-based MLLR with bias compensation yields a 67% relative improvement over conventional eigenspace-based MLLR with 10 adaptation words in a 10 dB SNR environment.

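In eigenspace-based MLLR, the speaker transform is constrained to a weighted sum of eigen-matrices learned from training speakers, and a bias term can absorb a constant environmental shift. A minimal NumPy sketch of that idea (shapes, names, and values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

n_dims = 13   # feature dimension (e.g., MFCC order)
n_eigen = 4   # number of retained eigen-transform bases

# Hypothetical eigen-basis of MLLR transforms (e.g., from a PCA over
# training speakers' transforms)
eigen_mats = rng.standard_normal((n_eigen, n_dims, n_dims))

def es_mllr_transform(weights, bias):
    """Adapted transform: weighted sum of eigen-matrices, plus a bias
    vector intended to absorb a constant environmental shift."""
    W = np.tensordot(weights, eigen_mats, axes=1)  # (n_dims, n_dims)
    return W, bias

def adapt_mean(mu, weights, bias):
    """Apply the eigenspace-constrained transform to one Gaussian mean."""
    W, b = es_mllr_transform(weights, bias)
    return W @ mu + b

weights = np.array([0.8, 0.1, 0.05, 0.05])  # would be ML-estimated from data
bias = np.zeros(n_dims)                     # environment compensation term
mu = rng.standard_normal(n_dims)
mu_adapted = adapt_mean(mu, weights, bias)
print(mu_adapted.shape)  # (13,)
```

Only the handful of combination weights (and the bias) are estimated from adaptation data, which is why such methods stay stable with only a few words of speech.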
Fast speaker adaptation using extended diagonal linear transformation for deep neural networks

  • Kim, Donghyun;Kim, Sanghun
    • ETRI Journal / v.41 no.1 / pp.109-116 / 2019
  • This paper explores new techniques based on a hidden-layer linear transformation for fast speaker adaptation in deep neural networks (DNNs). Conventional methods using affine transformations are ineffective because they require a relatively large number of parameters. Methods employing singular-value decomposition (SVD) are effective at reducing the number of adaptation parameters, but matrix decomposition is computationally expensive for online services. We propose an extended diagonal linear transformation method that minimizes the number of adaptation parameters without SVD, improving performance on tasks that require smaller degrees of adaptation. On Korean large-vocabulary continuous speech recognition (LVCSR) tasks, the proposed method shows significant improvements, with error-reduction rates of 8.4% and 17.1% for five and 50 conversational adaptation sentences, respectively. Compared with the SVD-based adaptation methods, recognition performance increases while fewer parameters are used.
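A diagonal linear transformation for DNN adaptation can be pictured as a per-unit scale and shift applied after a frozen hidden layer, so only 2N speaker-specific values are trained instead of a full N x N affine matrix. The sketch below is an assumption-laden illustration (layer size, tanh activation, and all names are mine, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pre-trained hidden layer (frozen during adaptation)
W = rng.standard_normal((256, 256))
b = rng.standard_normal(256)

# Speaker-specific adaptation parameters: one scale and one bias per
# hidden unit (2 * 256 values instead of a 256 x 256 affine transform)
scale = np.ones(256)   # initialised to the identity transform
shift = np.zeros(256)

def hidden_layer(x):
    """Frozen hidden layer followed by the speaker-dependent
    diagonal linear transformation (scale * h + shift)."""
    h = np.tanh(W @ x + b)
    return scale * h + shift

x = rng.standard_normal(256)
y = hidden_layer(x)
print(y.shape)  # (256,)
```

During adaptation only `scale` and `shift` would receive gradient updates; at the identity initialisation the network behaves exactly like the speaker-independent model.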

A Study on Phoneme Recognition using Neural Networks and Fuzzy logic (신경망과 퍼지논리를 이용한 음소인식에 관한 연구)

  • Han, Jung-Hyun;Choi, Doo-Il
    • Proceedings of the KIEE Conference / 1998.07g / pp.2265-2267 / 1998
  • This paper presents a study of fast-speaker-adaptation speech recognition. To analyze the speech signal efficiently in the time domain and the time-frequency domain, it employs SCONN [1] with a speech signal process suited to fast-speaker-adaptation recognition, and it examines recognition performance on speech data presented after a speaker-dependent recognition test in order to investigate the adaptability of the system.

Performance Improvement of Fast Speaker Adaptation Based on Dimensional Eigenvoice and Adaptation Mode Selection (차원별 Eigenvoice와 화자적응 모드 선택에 기반한 고속화자적응 성능 향상)

  • 송화전;이윤근;김형순
    • The Journal of the Acoustical Society of Korea / v.22 no.1 / pp.48-53 / 2003
  • The eigenvoice method is known to be suitable for fast speaker adaptation, but it shows little additional improvement as the amount of adaptation data increases. In this paper, to deal with this problem, we propose a modified method that estimates the weights of the eigenvoices separately in each feature-vector dimension. We also propose an adaptation mode selection scheme in which, among several adaptation methods, the one with the higher performance is selected according to the amount of adaptation data. We used the POW DB to construct the speaker-independent model and the eigenvoices; utterances (ranging from 1 to 50) from the PBW 452 DB were used for adaptation, and the remaining 400 utterances were used for evaluation. As the amount of adaptation data increased, the proposed dimensional eigenvoice method showed higher performance than both the conventional eigenvoice method and MLLR. Adaptation mode selection between the eigenvoice and dimensional eigenvoice methods reduced the word error rate by up to 26% compared with the conventional eigenvoice method.
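The dimensional eigenvoice idea replaces the single weight vector shared by all dimensions with one weight vector per feature dimension, adding capacity as adaptation data grows. A hedged NumPy sketch (all shapes and names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

n_dims, n_eigen, n_gauss = 13, 4, 32

# Hypothetical eigenvoices: each is a set of mean vectors for n_gauss Gaussians
eigenvoices = rng.standard_normal((n_eigen, n_gauss, n_dims))

# Conventional eigenvoice: one weight vector shared by all dimensions
w_global = rng.standard_normal(n_eigen)
mean_global = np.tensordot(w_global, eigenvoices, axes=1)  # (n_gauss, n_dims)

# Dimensional eigenvoice: a separate weight vector per feature dimension,
# i.e. n_dims * n_eigen free parameters instead of n_eigen
w_dim = rng.standard_normal((n_dims, n_eigen))
mean_dim = np.einsum('de,egd->gd', w_dim, eigenvoices)     # (n_gauss, n_dims)
print(mean_global.shape, mean_dim.shape)
```

When every row of `w_dim` equals `w_global`, the two formulations coincide, which is why the dimensional variant can only match or exceed the conventional one in modelling power.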

Fast Speaker Adaptation in Noisy Environment using Environment Clustering (잡음 환경하에서 환경 군집화를 이용한 고속화자 적응)

  • Kim, Young-Kuk;Song, Hwa-Jeon;Kim, Hyung-Soon
    • Proceedings of the KSPS conference / 2007.05a / pp.33-36 / 2007
  • In this paper, we investigate a fast speaker adaptation method based on eigenvoice in several noisy environments. To overcome its weakness against noise, we propose a noisy-environment clustering method that divides the noisy adaptation utterances into groups with similar environments, using vector-quantization-based clustering with the cepstral mean as a feature vector. Each utterance group is then used for adaptation to build an environment-dependent model. According to our experiments, we obtained a 19-37% relative improvement in error rate compared with the simultaneous speaker adaptation and environment compensation method.

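Environment clustering of this kind can be approximated with a plain k-means VQ over per-utterance cepstral means. The following sketch uses synthetic data and hypothetical names, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(3)

def cluster_by_cepstral_mean(utterances, n_clusters=2, n_iter=20):
    """Group utterances into acoustically similar environments by running
    k-means (a simple VQ) on each utterance's cepstral mean vector."""
    feats = np.stack([u.mean(axis=0) for u in utterances])  # (n_utt, n_cep)
    centers = feats[[0, len(feats) - 1]]                    # simple init
    for _ in range(n_iter):
        dists = np.linalg.norm(feats[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(axis=0)
    return labels

# Two synthetic "environments": clean cepstra vs. cepstra with an offset
clean = [rng.standard_normal((100, 13)) for _ in range(5)]
noisy = [rng.standard_normal((100, 13)) + 3.0 for _ in range(5)]
labels = cluster_by_cepstral_mean(clean + noisy)
print(labels)
```

Each resulting group would then be used to adapt its own environment-dependent model, as the abstract describes.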
Fast Speaker Adaptation Based on Eigenspace-based MLLR Using Artificially Distorted Speech in Car Noise Environment (차량 잡음 환경에서 인위적 왜곡 음성을 이용한 Eigenspace-based MLLR에 기반한 고속 화자 적응)

  • Song, Hwa-Jeon;Jeon, Hyung-Bae;Kim, Hyung-Soon
    • Phonetics and Speech Sciences / v.1 no.4 / pp.119-125 / 2009
  • This paper proposes a fast speaker adaptation method for a telematics terminal in car noise environments, using artificially distorted speech and based on eigenspace-based maximum likelihood linear regression (ES-MLLR). The artificially distorted speech is built by adding various car noise signals, collected from a moving car, to speech collected in an idling car. For each environment, a transformation matrix is then estimated by ES-MLLR using the artificially distorted speech corresponding to that noise environment. In test mode, an online model is built as a weighted sum of the environment transformation matrices, depending on the driving condition. On a 3k-word recognition task on the telematics terminal, the proposed method outperforms ES-MLLR even when the latter uses adaptation data collected under the actual driving condition.

Fast Speaker Adaptation Using Sub-Stream Based Eigenvoice (Sub-Stream 기반의 Eigenvoice를 이용한 고속 화자적응)

  • Song, Hwa-Jeon;Lee, Jong-Seok;Kim, Hyung-Soon
    • MALSORI / v.55 / pp.93-102 / 2005
  • In this paper, a sub-stream-based eigenvoice method is proposed to overcome the weak points of the conventional eigenvoice and dimensional eigenvoice methods. In the proposed method, sub-streams are constructed automatically by a statistical clustering analysis that uses the correlation between dimensions. To obtain a reliable distance matrix from the covariance matrix for dividing the dimensions into optimal sub-streams, a MAP adaptation technique is applied to the covariance matrix of the training data and the sample covariance of the adaptation data. According to our experiments, the proposed method shows a 41% error rate reduction with 50 adaptation utterances.

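Sub-stream construction groups feature dimensions by how strongly they are correlated. Below is a simple greedy stand-in for the paper's statistical clustering; the threshold, the greedy strategy, and all names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def substreams_by_correlation(data, threshold=0.5):
    """Greedily group feature dimensions whose absolute correlation
    exceeds a threshold into the same sub-stream."""
    corr = np.abs(np.corrcoef(data, rowvar=False))
    n = corr.shape[0]
    unassigned, streams = set(range(n)), []
    while unassigned:
        seed = min(unassigned)
        stream = {seed} | {j for j in unassigned if corr[seed, j] > threshold}
        streams.append(sorted(stream))
        unassigned -= stream
    return streams

# Synthetic features: dims 0-1 strongly correlated, dims 2-3 strongly correlated
base = rng.standard_normal((500, 2))
data = np.column_stack([
    base[:, 0], base[:, 0] + 0.1 * rng.standard_normal(500),
    base[:, 1], base[:, 1] + 0.1 * rng.standard_normal(500),
])
print(substreams_by_correlation(data))  # [[0, 1], [2, 3]]
```

Each resulting sub-stream would then get its own set of eigenvoice weights, sitting between the single shared weight vector and the fully dimensional variant.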
Simultaneous Speaker and Environment Adaptation by Environment Clustering in Various Noise Environments (다양한 잡음 환경하에서 환경 군집화를 통한 화자 및 환경 동시 적응)

  • Kim, Young-Kuk;Song, Hwa-Jeon;Kim, Hyung-Soon
    • The Journal of the Acoustical Society of Korea / v.28 no.6 / pp.566-571 / 2009
  • This paper proposes a noise-robust fast speaker adaptation method based on the eigenvoice framework for various noisy environments. The proposed method focuses on de-noising and environment clustering. Since the de-noised adaptation DB still contains residual noise, environment clustering divides the noisy adaptation data into groups of similar environments, using a clustering method with the cepstral mean of non-speech segments as a feature vector. The adaptation data in each cluster are then used to build an environment-clustered speaker-adapted (SA) model. After selecting the environment-clustered SA models most similar to the test environment, speaker adaptation is conducted with an appropriate linear combination of the selected models. According to our experiments, the proposed method provides an error rate reduction of 40-59% over the baseline speaker-independent model.

One-shot multi-speaker text-to-speech using RawNet3 speaker representation (RawNet3를 통해 추출한 화자 특성 기반 원샷 다화자 음성합성 시스템)

  • Sohee Han;Jisub Um;Hoirin Kim
    • Phonetics and Speech Sciences / v.16 no.1 / pp.67-76 / 2024
  • Recent advances in text-to-speech (TTS) technology have significantly improved the quality of synthesized speech, reaching a level where it can closely imitate natural human speech. In particular, TTS models offering diverse voice characteristics and personalized speech are widely utilized in fields such as artificial intelligence (AI) tutors, advertising, and video dubbing. Accordingly, in this paper, we propose a one-shot multi-speaker TTS system that can ensure acoustic diversity and synthesize a personalized voice by generating speech from unseen target speakers' utterances. The proposed model integrates a speaker encoder into a TTS model consisting of the FastSpeech2 acoustic model and the HiFi-GAN vocoder. The speaker encoder, based on the pre-trained RawNet3, extracts speaker-specific voice features. Furthermore, the proposed approach includes not only an English one-shot multi-speaker TTS but also a Korean one. We evaluate the naturalness and speaker similarity of the generated speech using objective and subjective metrics. In the subjective evaluation, the proposed Korean one-shot multi-speaker TTS obtained a naturalness mean opinion score (NMOS) of 3.36 and a similarity MOS (SMOS) of 3.16. The objective evaluation of the proposed English and Korean one-shot multi-speaker TTS showed a prediction MOS (P-MOS) of 2.54 and 3.74, respectively. These results indicate that the performance of our proposed model improves over the baseline models in terms of both naturalness and speaker similarity.
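The speaker-conditioning step in such one-shot systems can be illustrated by broadcast-adding a fixed-size speaker vector to the text encoder's output frames. This sketch replaces RawNet3 with a trivial mean-pooling stand-in and is not the authors' implementation; all shapes and names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def speaker_embedding(reference_feats):
    """Stand-in for a speaker encoder such as RawNet3: reduce a reference
    utterance's frame-level features to one fixed-size speaker vector."""
    return reference_feats.mean(axis=0)  # (emb_dim,)

def condition_encoder_output(text_encodings, spk_emb):
    """Broadcast-add the speaker vector to every encoder frame, one common
    way to inject speaker identity into a FastSpeech2-style acoustic model."""
    return text_encodings + spk_emb[None, :]

ref = rng.standard_normal((200, 64))   # unseen target speaker's features
enc = rng.standard_normal((50, 64))    # phoneme encoder outputs
cond = condition_encoder_output(enc, speaker_embedding(ref))
print(cond.shape)  # (50, 64)
```

Because the speaker vector comes from a single reference utterance, the same trained model can synthesize a voice it has never seen, which is what "one-shot" refers to here.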

Selective Attentive Learning for Fast Speaker Adaptation in Multilayer Perceptron (다층 퍼셉트론에서의 빠른 화자 적응을 위한 선택적 주의 학습)

  • 김인철;진성일
    • The Journal of the Acoustical Society of Korea / v.20 no.4 / pp.48-53 / 2001
  • In this paper, a selective attentive learning method is proposed to improve the learning speed of the multilayer perceptron trained with the error backpropagation algorithm. Three attention criteria are introduced to determine effectively which input patterns, or which portions of the network, should be attended to for effective learning. These criteria are based on the mean-squared-error function of the output layer and the class-selective relevance of the hidden nodes. Learning is accelerated by lowering the computational cost per iteration. The effectiveness of the proposed method is demonstrated on a speaker adaptation task in an isolated word recognition system. The experimental results show that the proposed selective attention technique can reduce the learning time by more than 60% on average.

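The selective attention idea, restricting the backpropagation pass to patterns whose output error is still large, can be sketched as a simple batch filter. The toy "network" and threshold below are hypothetical, chosen only to make the filtering visible:

```python
import numpy as np

def selective_batch(patterns, targets, predict, threshold):
    """Keep only the patterns whose output-layer MSE exceeds a threshold,
    so the backprop pass runs on fewer (harder) patterns per iteration."""
    errors = np.mean((predict(patterns) - targets) ** 2, axis=1)
    mask = errors > threshold
    return patterns[mask], targets[mask]

# Toy check with a fixed "network" that predicts all zeros
patterns = np.array([[0.0, 0.0], [1.0, 1.0]])
targets = np.array([[0.0], [1.0]])
predict = lambda x: np.zeros((len(x), 1))
kept_x, kept_t = selective_batch(patterns, targets, predict, threshold=0.1)
print(len(kept_x))  # 1
```

The pattern that is already well predicted is skipped, which is the source of the per-iteration savings the abstract reports; a real implementation would also apply the hidden-node relevance criteria when choosing which weights to update.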