• Title/Summary/Keyword: speech process

Enhancement of Excitation in Low-bit-rate Speech Coders (저 전송률 음성 부호화기를 위한 여기 신호 개선 알고리즘에 관한 연구)

  • Lee, Mi-Suk;Kim, Hong-Kook;Choi, Seung-Ho;Kim, Do-Young
    • Proceedings of the IEEK Conference / 2003.11a / pp.57-60 / 2003
  • In this paper, we propose a new excitation enhancement technique to improve the speech quality of low-bit-rate speech coders. The proposed technique is based on a harmonic model, and it is employed only in the decoding process of speech coders without any additional bits. We develop the procedure of harmonic model parameter estimation and harmonic generation, and apply the technique to a current state-of-the-art low-bit-rate speech coder, ITU-T G.729 Annex D. Also, its performance is measured by using the ITU-T P.862 PESQ score and compared to those of the phase dispersion filter and the long-term postfilter applied to the decoded excitation. It is shown that the proposed excitation enhancement technique can improve the quality of decoded speech and provide better quality for male speech than other techniques.
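As a rough illustration of the harmonic-generation step the abstract describes, the sketch below regenerates a sum-of-sinusoids harmonic component from a decoded excitation frame and mixes it back in. The per-harmonic amplitude/phase estimation from DFT bins and the mixing weight are simplifying assumptions, not the coder's actual procedure.

```python
import numpy as np

def enhance_excitation(excitation, f0, fs, num_harmonics=10, alpha=0.5):
    """Regenerate a harmonic component from the decoded excitation and
    mix it back in (illustrative sketch, not the actual coder procedure)."""
    n = np.arange(len(excitation))
    spectrum = np.fft.rfft(excitation)
    harmonic = np.zeros(len(excitation))
    for k in range(1, num_harmonics + 1):
        fk = k * f0
        if fk >= fs / 2:
            break
        # Estimate the k-th harmonic's amplitude/phase from the nearest DFT bin
        b = int(round(fk * len(excitation) / fs))
        amp = 2.0 * np.abs(spectrum[b]) / len(excitation)
        phase = np.angle(spectrum[b])
        harmonic += amp * np.cos(2 * np.pi * fk * n / fs + phase)
    # Blend the regenerated harmonics with the decoded excitation
    return (1 - alpha) * excitation + alpha * harmonic

# Example: one 20 ms excitation frame at 8 kHz with a 100 Hz pitch
frame = np.random.randn(160)
enhanced = enhance_excitation(frame, f0=100.0, fs=8000)
```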

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok;Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.8 / pp.3473-3487 / 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two types of example models, called key visemes and key expressions, are used for lip synchronization and facial expressions, respectively. The key visemes represent lip shapes of phonemes such as vowels and consonants, while the key expressions represent basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on the phonetic transcript, a speech animation sequence is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During the synthesis process, an importance-based scheme is introduced to combine both lip synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation, with high accuracy (over 90%) in speech recognition.
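A minimal sketch of the two interpolation steps the abstract outlines: linear blending between key visemes for lip-sync, and scattered data interpolation (here via SciPy's RBFInterpolator) over an emotion parameter space for expressions. The mesh size, emotion coordinates, and importance weights are all illustrative assumptions, not the authors' data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

V = 4  # tiny face mesh (V vertices), purely illustrative
rng = np.random.default_rng(0)
key_visemes = {"M": rng.random(3 * V), "A": rng.random(3 * V)}

def interpolate_visemes(v_from, v_to, t):
    """Linear blend between two key viseme shapes, t in [0, 1]."""
    return (1 - t) * key_visemes[v_from] + t * key_visemes[v_to]

# Key expressions live at assumed points in a 2-D emotion parameter space
emotion_points = np.array([[0.0, 0.0], [1.0, 0.8], [-1.0, 0.7], [-0.5, -0.6]])
key_expressions = rng.random((4, 3 * V))  # neutral, happy, angry, sad
blend = RBFInterpolator(emotion_points, key_expressions)

lip = interpolate_visemes("M", "A", t=0.3)       # lip-sync pose
expr = blend(np.array([[0.4, 0.5]]))[0]          # blended expression pose
# Importance-based combination: mouth vertices favor lip-sync (weights assumed)
importance = np.repeat([1.0, 0.2, 0.2, 0.2], 3)  # one weight per vertex (x,y,z)
final_pose = importance * lip + (1 - importance) * expr
```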

Speech Enhancement Using Multiple Kalman Filter (다중칼만필터를 이용한 음성향상)

  • Lee, Ki-Yong
    • Proceedings of the Acoustical Society of Korea Conference / 1998.08a / pp.225-230 / 1998
  • In this paper, a Kalman filter approach for enhancing speech signals degraded by statistically independent additive nonstationary noise is developed. An autoregressive hidden Markov model is used for modeling the statistical characteristics of both the clean speech signal and the nonstationary noise process. In this case, the speech enhancement comprises a weighted sum of conditional mean estimators for the composite states of the models for the speech and noise, where the weights equal the posterior probabilities of the composite states given the noisy speech. The conditional mean estimators use a smoothing approach based on two Kalman filters with Markovian switching coefficients, where one of the filters propagates in the forward-time direction with a one-frame delay. The proposed method is tested against noisy speech signals degraded by Gaussian colored noise or nonstationary noise at various input signal-to-noise ratios. An approximate improvement of 4.7-5.2 dB in SNR is achieved at input SNRs of 10 and 15 dB. Also, in a comparison of the conventional and proposed methods, an improvement of about 0.3 dB in SNR is obtained with our proposed method.
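For orientation, here is a bare-bones single Kalman filter for speech modeled as an AR process in additive noise. The paper's HMM-based state switching, the second smoothing filter, and the posterior-weighted combination are omitted, and the AR coefficients and noise variances are assumed known rather than estimated.

```python
import numpy as np

def kalman_enhance(noisy, ar_coeffs, q_var, r_var):
    """Single forward Kalman filter with an AR(p) clean-speech model
    (a simplified sketch of one component of the paper's estimator)."""
    p = len(ar_coeffs)
    # Companion-form state transition for the AR(p) speech model
    F = np.zeros((p, p))
    F[0, :] = ar_coeffs
    F[1:, :-1] = np.eye(p - 1)
    H = np.zeros((1, p)); H[0, 0] = 1.0    # observe newest sample plus noise
    Q = np.zeros((p, p)); Q[0, 0] = q_var  # excitation drives the newest state
    x = np.zeros((p, 1))
    P = np.eye(p)
    out = np.empty(len(noisy))
    for t, y in enumerate(noisy):
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the noisy observation
        S = H @ P @ H.T + r_var
        K = P @ H.T / S
        x = x + K * (y - (H @ x)[0, 0])
        P = (np.eye(p) - K @ H) @ P
        out[t] = x[0, 0]  # filtered estimate of the current clean sample
    return out

# Example with an assumed AR(2) speech model
noisy = np.random.randn(1000)
clean_est = kalman_enhance(noisy, ar_coeffs=np.array([1.3, -0.4]),
                           q_var=1.0, r_var=0.5)
```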

Excitation Enhancement Based on a Selective-Band Harmonic Model for Low-Bit-Rate Code-Excited Linear Prediction Coders (저전송률 코드여기 선형 예측 부호화기를 위한 선택적 대역 하모닉 모델 기반 여기신호 개선 알고리즘)

  • Lee, Mi-Suk;Kim, Hong-Kook;Choi, Seung-Ho;Kim, Do-Young
    • Speech Sciences / v.11 no.2 / pp.259-269 / 2004
  • In this paper, we propose a new excitation enhancement technique to improve the speech quality of low-bit-rate code-excited linear prediction (CELP) coders. The proposed technique is based on a harmonic model, and it is employed only in the decoding process of speech coders without any additional bits. We develop the procedure of harmonic model parameter estimation and harmonic generation, and apply this technique to a current state-of-the-art low-bit-rate speech coder, ITU-T G.729 Annex D. Also, its performance is measured by using the ITU-T P.862 PESQ score and compared to those of the phase dispersion filter and the long-term postfilter applied to the decoded excitation. It is shown that the proposed excitation enhancement technique can improve the quality of decoded speech and provide better quality for male speech than other techniques.
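The PESQ evaluation step mentioned in this and the earlier conference version can be reproduced with off-the-shelf tooling; below is a sketch assuming the third-party `pesq` package (a P.862 implementation) and hypothetical 8 kHz file names, since G.729 operates on narrowband speech.

```python
import numpy as np
from scipy.io import wavfile
from pesq import pesq  # third-party "pesq" package (ITU-T P.862)

# Hypothetical file names; G.729 operates on 8 kHz narrowband speech
fs_ref, ref = wavfile.read("original_8k.wav")
fs_deg, deg = wavfile.read("decoded_enhanced_8k.wav")
assert fs_ref == fs_deg == 8000

# 'nb' selects the narrowband P.862 mode, matching an 8 kHz coder
score = pesq(fs_ref, ref.astype(np.float64), deg.astype(np.float64), "nb")
print(f"PESQ (P.862, narrowband): {score:.3f}")
```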

A Study on Implementation of Emotional Speech Synthesis System using Variable Prosody Model (가변 운율 모델링을 이용한 고음질 감정 음성합성기 구현에 관한 연구)

  • Min, So-Yeon;Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.8 / pp.3992-3998 / 2013
  • This paper concerns a method of adding an emotional speech corpus to a high-quality, large-corpus-based speech synthesizer so that it can generate emotionally varied synthesized speech. We constructed the emotional speech corpus in a form that can be used by a waveform-concatenation speech synthesizer, and implemented a synthesizer that can generate various emotional speech outputs through the same unit-selection process as a normal speech synthesizer. We used a markup language for the emotional input text. Emotional speech is generated when the input text matches a segment in the emotional speech corpus at the length of an intonation phrase; otherwise, normal speech is generated. The break indices (BIs) of emotional speech are more irregular than those of normal speech, so it is difficult to use the BIs generated by the synthesizer as they are. To solve this problem, we applied Variable Break modeling [3]. We used a Japanese speech synthesizer for the experiments. As a result, we obtained natural emotional synthesized speech using the break prediction module of the normal speech synthesizer.
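A toy sketch of the selection logic described above: input phrases that match the emotional corpus at full intonation-phrase length use emotional units, and everything else falls back to normal synthesis. The corpus structure and names are hypothetical.

```python
# Hypothetical corpus structure: intonation phrase text -> recorded unit IDs
emotional_corpus = {"I missed you so much": "emo_units_0412"}

def select_units(intonation_phrases, emotion):
    """Choose emotional units for phrases that match the emotional corpus
    at full intonation-phrase length; otherwise fall back to normal units."""
    plan = []
    for phrase in intonation_phrases:
        units = emotional_corpus.get(phrase) if emotion else None
        if units is not None:
            plan.append(("emotional", units))
        else:
            plan.append(("normal", phrase))
    return plan

print(select_units(["I missed you so much", "and now I am home"], emotion="sad"))
# [('emotional', 'emo_units_0412'), ('normal', 'and now I am home')]
```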

English-Korean speech translation corpus (EnKoST-C): Construction procedure and evaluation results

  • Jeong-Uk Bang;Joon-Gyu Maeng;Jun Park;Seung Yun;Sang-Hun Kim
    • ETRI Journal / v.45 no.1 / pp.18-27 / 2023
  • We present an English-Korean speech translation corpus, named EnKoST-C. End-to-end model training for speech translation tasks often suffers from a lack of parallel data, such as speech data in the source language paired with equivalent text data in the target language. Most available public speech translation corpora were developed for European languages, and there is currently no public corpus for English-Korean end-to-end speech translation. Thus, we created EnKoST-C centered on TED Talks. In this process, we enhanced the sentence alignment approach using subtitle time information and bilingual sentence embedding information. As a result, we built a 559-h English-Korean speech translation corpus. The proposed sentence alignment approach showed excellent performance, with an F-measure of 0.96. We also show the baseline performance of an English-Korean speech translation model trained with EnKoST-C. EnKoST-C is freely available on a Korean government open data hub site.
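A sketch of how sentence alignment scoring of the kind described might combine subtitle time overlap with bilingual embedding similarity. The field names, linear weighting, and placeholder embeddings are assumptions, not the authors' implementation.

```python
import numpy as np

def alignment_score(en_seg, ko_seg, w_time=0.5):
    """Score one English/Korean segment pair by combining subtitle time
    overlap with bilingual sentence-embedding similarity (illustrative)."""
    # Temporal overlap ratio from subtitle timestamps (seconds)
    start = max(en_seg["start"], ko_seg["start"])
    end = min(en_seg["end"], ko_seg["end"])
    span = max(en_seg["end"], ko_seg["end"]) - min(en_seg["start"], ko_seg["start"])
    overlap = max(0.0, end - start) / span if span > 0 else 0.0
    # Cosine similarity of bilingual sentence embeddings (placeholders here;
    # in practice these would come from a multilingual sentence encoder)
    a, b = en_seg["emb"], ko_seg["emb"]
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return w_time * overlap + (1 - w_time) * cos

en = {"start": 12.0, "end": 15.2, "emb": np.random.rand(8)}
ko = {"start": 11.8, "end": 15.5, "emb": np.random.rand(8)}
print(alignment_score(en, ko))
```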

Phonological awareness skills in terms of visual and auditory stimulus and syllable position in typically developing children (청각적, 시각적 자극제시 방법과 음절위치에 따른 일반아동의 음운인식 능력)

  • Choi, Yu Mi;Ha, Seunghee
    • Phonetics and Speech Sciences / v.9 no.4 / pp.123-128 / 2017
  • This study compares performance on a syllable identification task according to auditory versus visual stimulus presentation and syllable position. Twenty-two typically developing children (ages 4-6) participated in the study. Three-syllable words were used to identify the first syllable and the final syllable in each word with auditory and visual stimuli. For the auditory presentation, the researcher presented the test word with oral speech only. For the visual presentation, the test words were presented as pictures, and each child was asked to choose appropriate pictures for the task. The results showed that when the tasks were presented visually, phonological awareness performance was significantly higher than with auditory stimuli. Also, performance on first-syllable identification was significantly higher than on final-syllable identification. When a phonological awareness task is presented with auditory stimuli, it is necessary to go through all the steps of the speech production process; therefore, phonological awareness performance with auditory stimuli may be low due to weakness at other stages of that process. When phonological awareness tasks are presented using visual picture stimuli, they can be performed directly at the phonological representation stage without going through peripheral auditory processing, phonological recognition, and motor programming. This study suggests that phonological awareness skills can differ depending on the stimulus presentation method and the syllable position in the task. Comparing performance between visual and auditory stimulus tasks will help identify where children may show weakness and vulnerability in the speech production process.

Voice Activity Detection Algorithm using Fuzzy Membership Shifted C-means Clustering in Low SNR Environment (낮은 신호 대 잡음비 환경에서의 퍼지 소속도 천이 C-means 클러스터링을 이용한 음성구간 검출 알고리즘)

  • Lee, G.H.;Lee, Y.J.;Cho, J.H.;Kim, M.N.
    • Journal of Korea Multimedia Society / v.17 no.3 / pp.312-323 / 2014
  • Voice activity detection is a very important process that finds voice activity in a noisy speech signal for noise cancelling and speech enhancement. Over the past few years, many studies have been conducted on voice activity detection, but performance remains poor for sentence-form speech signals in low-SNR environments. In this paper, we propose a new voice activity detection algorithm that has an initial VAD stage using entropy and a main VAD stage using fuzzy membership shifted C-means clustering. We conducted experiments in various SNR environments with white noise to evaluate the performance of the proposed algorithm and confirmed its good performance.
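A compact sketch of the two-stage idea: spectral entropy as the initial frame feature, then clustering the frame features into speech/non-speech groups. Standard fuzzy C-means is used here; the paper's "membership shifted" modification is not reproduced, and the framing parameters are assumptions.

```python
import numpy as np

def spectral_entropy(frame, eps=1e-12):
    """Entropy of the normalized power spectrum; white noise gives a nearly
    flat spectrum (high entropy), voiced speech a peaky one (low entropy)."""
    p = np.abs(np.fft.rfft(frame)) ** 2
    p = p / (p.sum() + eps)
    return -np.sum(p * np.log(p + eps))

def fuzzy_cmeans(x, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on 1-D features (the paper's membership-shift
    variant modifies the memberships; that step is omitted here)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)
    for _ in range(iters):
        centers = (u**m @ x) / (u**m).sum(axis=1)
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2 / (m - 1)) / np.sum(d ** (-2 / (m - 1)), axis=0)
    return centers, u

# Demo: 10 noise frames followed by 10 tone ("speech-like") frames at 8 kHz
t = np.arange(1600) / 8000.0
signal = np.concatenate([0.1 * np.random.randn(1600),
                         np.sin(2 * np.pi * 200 * t)])
frames = signal.reshape(-1, 160)
feats = np.array([spectral_entropy(f) for f in frames])
centers, u = fuzzy_cmeans(feats)
speech_cluster = int(np.argmin(centers))        # lower-entropy cluster
vad = np.argmax(u, axis=0) == speech_cluster    # frame-level decision
```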

The Effects of Misalignment between Syllable and Word Onsets on Word Recognition in English (음절의 시작과 단어 시작의 불일치가 영어 단어 인지에 미치는 영향)

  • Kim, Sun-Mi;Nam, Ki-Chun
    • Phonetics and Speech Sciences / v.1 no.4 / pp.61-71 / 2009
  • This study investigates whether the misalignment between syllable and word onsets caused by the process of resyllabification affects how Korean-English late bilinguals perceive English continuous speech. Two word-spotting experiments were conducted. In Experiment 1, misalignment conditions (resyllabified conditions) were created by adding CVC contexts at the beginning of vowel-initial words, and alignment conditions (non-resyllabified conditions) were made by putting the same CVC contexts at the beginning of consonant-initial words. The results of Experiment 1 showed that target detection in the alignment conditions was faster and more accurate than in the misalignment conditions. Experiment 2 was conducted to rule out the possibility that the results of Experiment 1 were due to consonant-initial words being easier to recognize than vowel-initial words. For this reason, all the experimental stimuli of Experiment 2 were vowel-initial words preceded by CVC contexts or CV contexts. Experiment 2 also showed a misalignment cost when recognizing words in resyllabified conditions. These results indicate that Korean listeners are influenced by the misalignment between syllable and word onsets triggered by the resyllabification process when recognizing words in English connected speech.

The Effects of Syllable Boundary Ambiguity on Spoken Word Recognition in Korean Continuous Speech

  • Kang, Jinwon;Kim, Sunmi;Nam, Kichun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.11 / pp.2800-2812 / 2012
  • The purpose of this study was to examine the syllable-word boundary misalignment cost in word segmentation in Korean continuous speech. Previous studies have demonstrated the important role of syllabification in speech segmentation. The current study investigated whether the resyllabification process affects word recognition in Korean continuous speech. In Experiment I, under the misalignment condition, participants were presented with stimuli in which a word-final consonant became the onset of the next syllable (e.g., /k/ in belsak ingan becomes the onset of the first syllable of ingan 'human'). In the alignment condition, they heard stimuli in which a word-final vowel was also the final segment of the syllable (e.g., /eo/ in heulmeo ingan is the end of both the syllable and the word). The results showed that word recognition was faster and more accurate in the alignment condition. Experiment II aimed to confirm that the results of Experiment I were attributable to the resyllabification process by comparing only the target words from each condition. The results of Experiment II supported the findings of Experiment I. Therefore, based on the current study, we confirm that Korean, a syllable-timed language, incurs a misalignment cost under resyllabification.