• Title/Summary/Keyword: coarticulation


Acoustic and Pronunciation Model Adaptation Based on Context dependency for Korean-English Speech Recognition (한국인의 영어 인식을 위한 문맥 종속성 기반 음향모델/발음모델 적응)

  • Oh, Yoo-Rhee;Kim, Hong-Kook;Lee, Yeon-Woo;Lee, Seong-Ro
    • MALSORI
    • /
    • v.68
    • /
    • pp.33-47
    • /
    • 2008
  • In this paper, we propose a hybrid acoustic and pronunciation model adaptation method based on context dependency for Korean-English speech recognition. The proposed method proceeds as follows. First, in order to derive pronunciation variant rules, an n-best phoneme sequence is obtained by phone recognition. Second, we classify each rule as either context independent (CI) or context dependent (CD). To this end, it is assumed that the different phoneme structures of Korean and English give rise to CI pronunciation variabilities, while coarticulation effects are responsible for CD pronunciation variabilities. Finally, we perform acoustic model adaptation for the CI variabilities and pronunciation model adaptation for the CD variabilities. Korean-English speech recognition experiments show that the average word error rate (WER) is reduced by 36.0% relative to a baseline without any adaptation. In addition, the proposed method achieves a lower average WER than either acoustic model adaptation or pronunciation model adaptation alone.
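The rule-derivation step described above can be sketched in a few lines: align a canonical phone sequence against a recognized one by edit distance, then collect substitution rules together with their left context. Whether a rule counts as CI or CD would then hinge on how many distinct contexts it appears in. This is an illustrative sketch, not the authors' implementation; all names are hypothetical.

```python
from collections import defaultdict

def align(ref, hyp):
    """Edit-distance alignment of two phone sequences; returns matched pairs."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    pairs, i, j = [], n, m   # backtrace
    while i > 0 or j > 0:
        if i > 0 and j > 0 and \
           d[i][j] == d[i - 1][j - 1] + (0 if ref[i - 1] == hyp[j - 1] else 1):
            pairs.append((ref[i - 1], hyp[j - 1])); i -= 1; j -= 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            pairs.append((ref[i - 1], None)); i -= 1      # deletion
        else:
            pairs.append((None, hyp[j - 1])); j -= 1      # insertion
    return list(reversed(pairs))

def variant_rules(ref, hyp):
    """Substitution rules with left context; '#' marks the word boundary.
    A rule seen across many contexts would be treated as CI, one tied to
    few contexts as CD."""
    rules = defaultdict(set)
    prev = "#"
    for r, h in align(ref, hyp):
        if r is not None and h is not None and r != h:
            rules[(r, h)].add(prev)
        if r is not None:
            prev = r
    return rules
```

In practice the `hyp` sequences would come from the n-best output of the phone recognizer, pooled over many utterances before the CI/CD split is made.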


Connected Korean Digit Speech Recognition Using Vowel String and Number of Syllables (음절수와 모음 열을 이용한 한국어 연결 숫자 음성인식)

  • Youn, Jeh-Seon;Hong, Kwang-Seok
    • The KIPS Transactions:PartA
    • /
    • v.10A no.1
    • /
    • pp.1-6
    • /
    • 2003
  • In this paper, we present a new Korean connected digit recognition method based on the vowel string and the number of syllables. Digit candidates are reduced in two steps. The first is to determine the number of digits and their intervals; once these are determined, the second is to recognize the vowel string within the digit string. The digit candidates consistent with the vowel string are then recognized using CV (consonant-vowel), VCCV, and VC unit HMMs. The proposed method copes effectively with coarticulation effects and recognizes connected digit speech well.
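The candidate-pruning idea can be sketched as follows: once one vowel per syllable has been detected, each position in the digit string only admits the digits whose name contains that vowel. The romanized digit names and the vowel mapping below are assumptions for illustration, not taken from the paper.

```python
# Assumed romanized Korean digit names and the main vowel of each (illustrative).
DIGIT_VOWEL = {
    "gong": "o", "il": "i", "i": "i", "sam": "a", "sa": "a",
    "o": "o", "yuk": "u", "chil": "i", "pal": "a", "gu": "u",
}

def digit_candidates(vowel_string):
    """For each detected vowel (one per syllable, i.e. one per digit), list
    the digits whose name contains that vowel. The per-position product of
    these lists is the reduced search space handed to the HMM recognizer."""
    return [[d for d, v in DIGIT_VOWEL.items() if v == vw]
            for vw in vowel_string]
```

For a two-digit string with detected vowels `["a", "o"]`, the search space shrinks from 100 candidates to the cross-product of the two short per-position lists.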

Monophone and Biphone Compound Unit for Korean Vocabulary Speech Recognition (한국어 어휘 인식을 위한 혼합형 음성 인식 단위)

  • 이기정;이상운;홍재근
    • Journal of the Korea Computer Industry Society
    • /
    • v.2 no.6
    • /
    • pp.867-874
    • /
    • 2001
  • In this paper, considering the pronunciation characteristics of Korean, we suggest recognition units that simultaneously shorten recognition time and reflect the coarticulation effect. These units combine monophone and biphone units: monophone units are applied to vowels, which exhibit stable characteristics, and biphone units to consonants, which vary with the adjacent vowel. In word recognition experiments on the PBW445 database, the compound units achieve recognition accuracy comparable to triphone units with a 57% speed-up, and better accuracy at a similar speed. In addition, the smaller number of units reduces the memory size.
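A minimal sketch of such a mixed unit inventory, under the assumption that each consonant is paired with its right-hand neighbour (one of several possible conventions; the romanized vowel set is also assumed):

```python
VOWELS = {"a", "eo", "o", "u", "eu", "i", "ae", "e"}  # assumed romanized vowel set

def compound_units(phones):
    """Mixed unit inventory: vowels stay monophones (stable spectra), while
    each consonant is paired with its right neighbour to capture
    coarticulation with the adjacent vowel. '#' marks the word boundary."""
    units = []
    for k, p in enumerate(phones):
        if p in VOWELS:
            units.append(p)                          # monophone
        else:
            nxt = phones[k + 1] if k + 1 < len(phones) else "#"
            units.append(p + "+" + nxt)              # biphone
    return units
```

Because only consonants expand into context-dependent units, the inventory stays far smaller than a full triphone set, which is where the speed and memory savings come from.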


Pitch and Formant Trajectories of English Vowels by American Males with Different Speaking Styles (발화방식에 따른 미국인 남성 영어모음의 피치와 포먼트 궤적)

  • Yang, Byung-Gon
    • Phonetics and Speech Sciences
    • /
    • v.4 no.1
    • /
    • pp.21-28
    • /
    • 2012
  • Many previous studies have reported acoustic parameters of English vowels produced in a clear speaking style. In everyday usage, however, we produce speech sounds in various speaking styles, and different styles may yield different acoustic measurements. This study examines the pitch and formant trajectories of eleven English vowels produced by nine American males in order to understand acoustic variation across clear and conversational speaking styles. The author used Praat to obtain trajectories systematically at seven equidistant time points over the vowel segment while checking measurement validity. Results showed that pitch trajectories exhibited distinct patterns across the four speaking styles. Generally, higher pitch values were observed in the higher vowels, and pitch was higher in the clear speaking styles than in the conversational styles. The same trend was observed in the three formant trajectories of front vowels and the first formant trajectories of back vowels. The second and third formant trajectories of back vowels revealed an opposite or inconsistent trend, which might be attributable to coarticulation with the following consonant or to lip rounding gestures. The author draws the tentative conclusion that people tend to produce vowels so as to enhance pitch and formant differences in order to transmit their information clearly. Further perceptual studies on synthesized vowels with varying pitch and formant values are desirable to confirm this conclusion.
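The "seven equidistant time points" sampling scheme is simple to make concrete; a small helper, assuming endpoints are included:

```python
def measurement_times(t_start, t_end, n=7):
    """n equidistant measurement points over a vowel segment, endpoints
    included, at which pitch and formant values would be sampled."""
    step = (t_end - t_start) / (n - 1)
    return [t_start + k * step for k in range(n)]
```

For a 600 ms vowel this yields samples every 100 ms, so trajectories from different tokens can be compared point by point regardless of vowel duration.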

Speech and Music Discrimination Using Spectral Transition Rate (주파수 변화율을 이용한 음성과 음악의 구분)

  • Yang, Kyong-Chul;Bang, Yong-Chan;Cho, Sun-Ho;Yook, Dong-Suk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.3
    • /
    • pp.273-278
    • /
    • 2009
  • In this paper, we propose the spectral transition rate (STR) as a novel feature for speech and music discrimination (SMD). We observed that the spectral peaks of a speech signal change gradually due to the coarticulation effect, whereas the sounds of musical instruments in general keep their peak frequencies and energies unchanged for a relatively long period compared to speech. Consequently, the STR of speech is much higher than that of music. The experimental results show that the STR-based SMD method outperforms a conventional method; in particular, it produces output relatively quickly without any performance degradation.
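The intuition behind STR can be sketched as a frame-to-frame spectral difference; the exact definition in the paper may differ, so treat this as a minimal stand-in, with an illustrative (untuned) threshold:

```python
import numpy as np

def spectral_transition_rate(log_spec):
    """Mean absolute frame-to-frame change of a log-magnitude spectrogram
    (n_frames x n_bins). Coarticulated speech moves its spectral peaks
    continuously, so its rate is high; sustained musical notes hold their
    peaks, so their rate stays low."""
    return float(np.abs(np.diff(log_spec, axis=0)).mean())

def is_speech(log_spec, threshold=0.5):
    # threshold is illustrative and would be tuned on labelled data
    return spectral_transition_rate(log_spec) > threshold
```

Because the statistic is a single mean over short differences, it can be computed incrementally per frame, which matches the abstract's claim of fast output.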

/W/-Variants in Korean

  • Oh, Mi-Ra
    • Phonetics and Speech Sciences
    • /
    • v.2 no.3
    • /
    • pp.65-73
    • /
    • 2010
  • No systematic study has examined the relationship between acoustic variability and /w/-deletion in Korean. Most previous studies on /w/-deletion have described /w/-variants in categorical terms, i.e., /w/-deletion or a full glide (Silva 1991; Kang 1997; Yun 2005). These studies are based either on impressionistic judgements without a systematic acoustic analysis or on an exclusive examination of the internal acoustic variability of /w/, such as F2, without examining the availability of external acoustic cues such as the voice onset time (VOT) of a consonant. However, given the important influence of adjacent sounds on segmental realizations, it is necessary to examine possible acoustic variability when differentiating /w/-variants. The present study addresses this issue by evaluating the acoustic properties of /CwV/, including VOT and formant transitions. In the analysis, 432 tokens in word-initial position (216 /CwV/ words and 216 /CV/ words) were examined. The results indicated that /w/ exhibits four different variants. First, /w/ may be realized as a full glide, characterized by a VOT difference and significant differences in F1 and F2 at voicing onset between /CwV/ and /CV/. Second, /w/ may be maintained but coarticulated with the following vowel, demonstrated by differences in VOT and F2. Third, /w/ may be categorically deleted, indicated by the absence of any differences in VOT, F1, and F2. Fourth, /w/ may overlap the consonant; this variant is manifested by an F2 difference without a VOT difference. In contrast to the VOT, F1, and F2 differences, pitch plays little role in determining /w/-variants in Korean. These findings suggest that allophones can be produced along a gradient continuum of acoustic cues, exhibiting sounds intermediate between the full realization of a given category and its deletion. Furthermore, each variant can be cued by a set of internal and external acoustic cues.
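The four cue patterns reported in the abstract can be written as a small decision table. This mapping is inferred from the abstract's wording, not from the paper's statistics, so it is only a sketch; inputs are booleans indicating whether each /CwV/-vs-/CV/ difference reached significance.

```python
def classify_w_variant(vot_diff, f1_diff, f2_diff):
    """Map the significance pattern of VOT / F1 / F2 differences between
    /CwV/ and /CV/ tokens onto the four /w/-variants described above."""
    if vot_diff and f1_diff and f2_diff:
        return "full glide"
    if vot_diff and f2_diff:
        return "coarticulated with following vowel"
    if f2_diff and not vot_diff:
        return "overlapping the consonant"
    if not (vot_diff or f1_diff or f2_diff):
        return "deleted"
    return "unclassified"
```

Patterns the abstract does not mention (e.g. an F1 difference alone) fall through to "unclassified" rather than being forced into one of the four categories.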


Coarticulation and vowel reduction in the neutral tone of Beijing Mandarin

  • Lin Maocan
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.207-207
    • /
    • 1996
  • The neutral tone is one of the most important distinguishing features of Beijing Mandarin, but there are two completely different views on its linguistic function: a special tone (Xu, 1980) versus weak stress (Chao, 1968). In this paper, the acoustic manifestation of the neutral tone is explored to show that it is closely related to weak stress. 122 disyllabic words in which the second syllable carries the neutral tone, including 22 stress pairs, were uttered by a native male speaker of the Beijing dialect and analysed with a Kay Digital Sonagraph 5500-1. The results of the acoustic analysis are as follows: 1) The first two formants of the medial and the syllabic vowel move towards those of the central vowel, with a greater magnitude in syllables with the neutral tone than in syllables with any of the four normal tones. The vowel ending and the nasal codas /n/ and /ŋ/ in syllables with the neutral tone also tend to be deleted. 2) In syllables with the neutral tone, there is strong carryover coarticulation between the medial and syllabic vowel and the preceding unvoiced consonant. In general, the vowel moves towards the position of the central vowel with greater magnitude after a coronal consonant than after a labial or velar consonant. 3) In a syllable with the neutral tone, when and only when it precedes a syllable with tone 4, the high vowel following [f], [tʂ'], [ʂ], [ts'], [s], [tɕ'] or [ɕ] tends to be voiceless. 4) The 22 stress pairs show that the duration of a syllable with the neutral tone is on average reduced to 55% of that of a syllable with the four normal tones, and the duration of its final to 45% of that of the final in a syllable with the four normal tones (Lin & Yan 1980). 5) The F0 contour of the neutral tone is highly dependent on the preceding normal tone (Lin & Yan 1993).
For a number of languages it has been found that the vowel space shrinks as the level of stress placed upon the vowel is reduced (Nord 1986). We therefore conclude that the syllable with the neutral tone is related to weak stress (Lin & Yan 1990); the neutral tone is not a special tone, because its F0 contour is determined by the preceding normal tone.


On the Development of a Continuous Speech Recognition System Using Continuous Hidden Markov Model for Korean Language (연속분포 HMM을 이용한 한국어 연속 음성 인식 시스템 개발)

  • Kim, Do-Yeong;Park, Yong-Kyu;Kwon, Oh-Wook;Un, Chong-Kwan;Park, Seong-Hyun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.13 no.1
    • /
    • pp.24-31
    • /
    • 1994
  • In this paper, we report on the development of a speaker-independent continuous speech recognition system using continuous hidden Markov models (HMMs). A continuous HMM consists of mean and covariance matrices and models the speech signal parameters directly, and therefore has no quantization error. Filter bank coefficients with their 1st- and 2nd-order derivatives are used as feature vectors to represent the dynamic features of the speech signal. We use the segmental K-means algorithm for training, and the triphone as the recognition unit to alleviate the performance degradation caused by coarticulation, which is critical in continuous speech recognition. We also use the one-pass search algorithm, which is advantageous for speeding up recognition. Experimental results show that the system attains a recognition accuracy of 83% without a grammar and 94% with finite state networks in speaker-independent speech recognition.
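The feature construction mentioned above (static coefficients plus 1st- and 2nd-order derivatives) can be sketched with simple differences; real systems typically compute the derivatives over a regression window rather than pointwise, so this is only a minimal stand-in:

```python
import numpy as np

def add_deltas(feats):
    """Append 1st- and 2nd-order time derivatives to framewise filter bank
    features (n_frames x n_dims), tripling the feature dimensionality.
    Uses central differences via np.gradient for brevity."""
    d1 = np.gradient(feats, axis=0)
    d2 = np.gradient(d1, axis=0)
    return np.concatenate([feats, d1, d2], axis=1)
```

The derivative channels are what let a frame-synchronous HMM see local spectral motion, which is exactly the information that coarticulation smears across neighbouring phones.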


Speech Animation Synthesis based on a Korean Co-articulation Model (한국어 동시조음 모델에 기반한 스피치 애니메이션 생성)

  • Jang, Minjung;Jung, Sunjin;Noh, Junyong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.26 no.3
    • /
    • pp.49-59
    • /
    • 2020
  • In this paper, we propose a speech animation synthesis method specialized for Korean, based on a rule-based coarticulation model. Speech animation is widely used in cultural industries such as movies, animation, and games that require natural and realistic motion. Because techniques for audio-driven speech animation have been developed mainly for English, however, the animation results for domestic content are often visually unnatural: for example, a voice actor's dubbing is played with no mouth motion at all, or at best with an unsynchronized loop of simple mouth shapes. Although language-independent speech animation models exist, they are not specialized for Korean and do not yet ensure the quality required for domestic content production. Therefore, we propose a natural speech animation synthesis method that reflects the linguistic characteristics of Korean, driven by input audio and text. Reflecting the fact that vowels largely determine the mouth shape in Korean, we define a coarticulation model that separates the lips and the tongue, solving the previous problems of lip distortion and the occasional loss of phoneme characteristics. Our model also reflects differences in prosodic features for improved dynamics in the speech animation. Through user studies, we verify that the proposed model synthesizes natural speech animation.
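The lips/tongue separation can be illustrated with a toy two-channel blend, where the lip channel tracks only vowel targets while the tongue channel blends every phone. The shape values, phone inventory, and linear blending below are all assumptions for illustration; the paper's actual model is rule-based over full face parameters.

```python
# Illustrative scalar targets per channel; a real model would use full
# blendshape vectors. Values and inventory are assumed, not from the paper.
LIP_TARGET = {"a": 1.0, "o": 0.6, "i": 0.2}                  # vowels only
TONGUE_TARGET = {"a": 0.5, "o": 0.4, "i": 0.6, "n": 0.8, "k": 0.3}

def pose_at(prev_phone, next_phone, t):
    """Blend between two adjacent phones at normalized time t in [0, 1].
    The lip channel tracks vowels only (vowels dominate Korean mouth
    shapes); consonants inherit the neighbouring vowel's lip target,
    while the tongue channel blends every phone."""
    l0 = LIP_TARGET.get(prev_phone)
    l1 = LIP_TARGET.get(next_phone)
    if l0 is None and l1 is None:
        l0 = l1 = 0.5                 # neutral lips with no vowel nearby
    elif l0 is None:
        l0 = l1
    elif l1 is None:
        l1 = l0
    lips = (1 - t) * l0 + t * l1
    tongue = (1 - t) * TONGUE_TARGET[prev_phone] + t * TONGUE_TARGET[next_phone]
    return {"lips": lips, "tongue": tongue}
```

Decoupling the channels means a consonant like /n/ can move the tongue without dragging the lips away from the surrounding vowel shape, which is the distortion the paper attributes to lip/tongue conflation.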