• Title/Summary/Keyword: Prosody

Development of a Lipsync Algorithm Based on Audio-visual Corpus (시청각 코퍼스 기반의 립싱크 알고리듬 개발)

  • 김진영;하영민;이화숙
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.3
    • /
    • pp.63-69
    • /
    • 2001
  • A corpus-based lip sync algorithm for synthesizing natural face animation is proposed in this paper. To obtain the lip parameters, marks were attached to the speaker's face, and the marks' positions were extracted with image processing methods. The spoken utterances were labeled with HTK, and prosodic information (duration, pitch, and intensity) was analyzed. An audio-visual corpus was constructed by combining the speech and image information. The basic unit used in this approach is the syllable. Based on this audio-visual corpus, lip information represented by the marks' positions was synthesized: the best syllable units are selected from the audio-visual corpus, and the visual information of the selected units is concatenated. Two processes obtain the best units. One selects the N-best candidates for each syllable; the other selects the smoothest unit sequence, using a Viterbi decoding algorithm. For these processes, two distance measures between syllable units are proposed: a phonetic environment distance measure and a prosody distance measure. Computer simulation results showed that the proposed algorithm performs well. In particular, pitch and intensity information proved as important as duration information for lip sync. (A sketch of the unit-selection step follows this entry.)

  • PDF
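
The two-stage selection described in the abstract (N-best candidate filtering, then a Viterbi search for the smoothest sequence) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the distance functions, the unit fields (`lip_start`, `lip_end`, `dur`, `f0`, `energy`), and the equal weighting of the two costs are all assumptions.

```python
import numpy as np

def phonetic_dist(target, unit):
    # Stand-in: mismatch of left/right phonetic contexts.
    return float(target["left"] != unit["left"]) + float(target["right"] != unit["right"])

def prosody_dist(target, unit):
    # Stand-in: Euclidean distance over (duration, pitch, intensity).
    t = np.array([target["dur"], target["f0"], target["energy"]])
    u = np.array([unit["dur"], unit["f0"], unit["energy"]])
    return float(np.linalg.norm(t - u))

def join_cost(prev_unit, unit):
    # Discontinuity of lip-mark positions at the unit boundary.
    return float(np.linalg.norm(prev_unit["lip_end"] - unit["lip_start"]))

def select_units(targets, corpus, n_best=5):
    # Stage 1: keep the N best corpus units per target syllable by target cost.
    cands, costs = [], []
    for t in targets:
        pool = [u for u in corpus if u["label"] == t["label"]]
        pool.sort(key=lambda u: phonetic_dist(t, u) + prosody_dist(t, u))
        cands.append(pool[:n_best])
        costs.append([phonetic_dist(t, u) + prosody_dist(t, u) for u in cands[-1]])

    # Stage 2: Viterbi decoding for the smoothest unit sequence.
    best, back = [list(costs[0])], []
    for i in range(1, len(cands)):
        row, ptr = [], []
        for j, u in enumerate(cands[i]):
            trans = [best[i - 1][k] + join_cost(p, u) for k, p in enumerate(cands[i - 1])]
            k = int(np.argmin(trans))
            row.append(trans[k] + costs[i][j])
            ptr.append(k)
        best.append(row)
        back.append(ptr)

    # Backtrack from the cheapest final state.
    j = int(np.argmin(best[-1]))
    path = [j]
    for ptr in reversed(back):
        j = ptr[j]
        path.append(j)
    path.reverse()
    return [cands[i][j] for i, j in enumerate(path)]
```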

Learning of Artificial Neural Networks about the Prosody of Korean Sentences (인공 신경망의 한국어 운율 학습)

  • Shin Dong-Yup;Min Kyung-Joong;Lim Un-Cheon
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.121-124
    • /
    • 2001
  • To increase the naturalness of synthesized speech, the exact prosodic rules inherent in natural speech must be derived and implemented in the speech synthesis system. The prosodic rules needed by the text-to-speech synthesizer of an unlimited-vocabulary synthesis system are obtained using linguistic information or extracted from natural speech. However, if the extracted rules fail to reflect all the prosodic rules inherent in natural speech, or are implemented incorrectly, the naturalness of the synthesized speech degrades. With this in mind, this paper trains an artificial neural network on prosodic information extracted from analyzed natural Korean speech; feeding sentences to the trained network and comparing its output prosodic information with that of natural speech showed that the proposed network can learn the prosody inherent in natural speech. The three major elements of prosody are variations in pitch, duration, and intensity. The proposed network was designed to take the phoneme sequence of a Korean sentence as input, output the pitch and intensity variations over each phoneme's duration, and adjust its weights to minimize the error between the output pattern and the target pattern, i.e., the prosodic information of each phoneme obtained by analyzing natural speech. Separate pitch and intensity networks were constructed to learn each phoneme's pitch and intensity variation over its duration. To train these networks, a phonemically balanced sentence set was first constructed; a specific speaker read this material under a fixed recording environment, and the prosodic information obtained by analyzing the recordings was compiled into a prosody database. For each phoneme in a sentence, the duration, pitch variation, and intensity variation are obtained, and curve fitting is used to derive polynomial coefficients and initial values for each variation curve, which are stored in the prosody database. Part of this database was used to train the networks, and the remainder was used to evaluate them, confirming that the networks could learn the prosodic rules. Expanding the prosody database with more sentences and more repetitions should improve network performance, and the number of network input nodes should be decided in consideration of the number of phonemes per sentence, the computational load, and suprasegmental factors. (A sketch of such a network follows this entry.)

  • PDF
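
A minimal sketch of the kind of network the abstract describes: one feed-forward network per prosodic element, trained to minimize the error between its output contour and the target contour from the prosody database. All sizes, the one-hot input encoding, and the SGD settings are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

N_PHONES = 40      # assumed Korean phoneme inventory size
CONTEXT = 5        # phonemes fed to the network at once (assumed window)
CONTOUR_PTS = 10   # pitch/intensity samples per phoneme duration (assumed)

class ProsodyNet(nn.Module):
    """One network each would be trained for pitch and for intensity."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_PHONES * CONTEXT, 128),
            nn.Sigmoid(),
            nn.Linear(128, CONTOUR_PTS),
        )

    def forward(self, x):
        return self.net(x)

pitch_net = ProsodyNet()
loss_fn = nn.MSELoss()   # minimize error between output and target contour
opt = torch.optim.SGD(pitch_net.parameters(), lr=0.1)

x = torch.zeros(1, N_PHONES * CONTEXT)   # one-hot phoneme window (dummy)
target = torch.zeros(1, CONTOUR_PTS)     # contour from the prosody database
opt.zero_grad()
loss = loss_fn(pitch_net(x), target)
loss.backward()                          # adjust weights toward target pattern
opt.step()
```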

Durational Interaction of Stops and Vowels in English and Korean Child-Directed Speech

  • Choi, Han-Sook
    • Phonetics and Speech Sciences
    • /
    • v.4 no.2
    • /
    • pp.61-70
    • /
    • 2012
  • The current study observes the durational interaction of tautosyllabic consonants and vowels in the word-initial position of English and Korean child-directed speech (CDS). The effect of phonological laryngeal contrasts in stops on the following vowel's duration, and the effect of intrinsic vowel duration on the release duration of the preceding stop, along with the acoustic realization of the contrastive segments, are explored in different prosodic contexts (phrase-initial/medial, focal accented/non-focused) in the marked speech style of CDS. The trade-off relationship between Voice Onset Time (VOT), as consonant release duration, and voicing phonation time, as vowel duration, reported for adult-to-adult speech, and patterns of durational variability are investigated in the CDS of two languages with different linguistic rhythms, under systematically controlled prosodic contexts. Speech data were collected from four native English mothers and four native Korean mothers talking to their infants at the one-word stage. In addition to the acoustic measurements, a transformed delta measure is employed as a variability index for individual tokens. Results confirm the durational correlation between prevocalic consonants and following vowels. The interaction shows a compensatory pattern, with longer VOTs followed by shorter vowel durations in both languages. An asymmetry is found in the CV interaction: the effect of the consonant on vowel duration is greater than the VOT differences induced by the vowel. Prosodic effects are found such that the acoustic difference between the contrastive segments is enhanced under focal accent, supporting a paradigmatic strengthening effect. Positional variation, however, shows no systematic effect on the measured acoustic quantities. Overall vowel duration and syllable duration are longer in the English tokens but involve less variability across the prosodic variations; the constancy of syllable duration is therefore not found to be more strongly sustained in Korean CDS. The stylistic variation is discussed in relation to the developing listener in CDS. (A sketch of the trade-off analysis follows this entry.)
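
The compensatory VOT/vowel-duration pattern and a normalized variability index can be illustrated roughly as below. The data are dummies, and the variability index is only one plausible reading of a "transformed delta measure", not necessarily the paper's definition.

```python
import numpy as np

tokens = np.array([
    # (VOT, vowel duration) in ms per CV token; dummy values
    (85.0, 140.0), (60.0, 165.0), (25.0, 190.0), (15.0, 205.0),
])
vot, vdur = tokens[:, 0], tokens[:, 1]

# Compensatory pattern: longer VOT should pair with shorter vowels,
# i.e., a negative correlation.
r = np.corrcoef(vot, vdur)[0, 1]
print(f"VOT vs. vowel duration: r = {r:.2f}")

# One reading of a delta-style variability index: mean absolute deviation
# of syllable (C+V) durations, normalized by the mean so that tokens of
# different overall tempo are comparable (an assumption, not necessarily
# the paper's exact transformed delta measure).
syl = vot + vdur
delta = np.mean(np.abs(syl - syl.mean())) / syl.mean()
print(f"normalized durational variability: {delta:.3f}")
```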

Voice Personality Transformation Using a Probabilistic Method (확률적 방법을 이용한 음성 개성 변환)

  • Lee Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.3
    • /
    • pp.150-159
    • /
    • 2005
  • This paper addresses a voice personality transformation algorithm that makes one person's voice sound as if it were another person's. In the proposed method, a speaker's voice is represented by LPC cepstrum, pitch period, and speaking rate, and an appropriate transformation rule is constructed for each parameter. A Gaussian Mixture Model (GMM) is used to model one speaker's LPC cepstra, and a conditional probability is used to model the relationship between the two speakers' LPC cepstra. To obtain the parameters of each probabilistic model, a Maximum Likelihood (ML) estimation method is employed. The transformed LPC cepstra are obtained using a Minimum Mean Square Error (MMSE) criterion. Pitch period and speaking rate are used as the parameters for prosody transformation, which is implemented using the ratio of their average values. The proposed method outperforms a previous VQ-based method in objective measures, including average cepstrum distance reduction ratio and likelihood increase ratio. In subjective tests, it achieved almost the same correct identification ratio as the previous method, and high-quality transformed speech was confirmed, owing to spectral contours that evolve smoothly over time. (A sketch of the probabilistic mapping follows this entry.)
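
A rough sketch of the mapping described above: a GMM fit on joint source-target cepstral vectors, an MMSE conditional-mean estimate for spectral conversion, and average-ratio scaling for prosody. It uses scikit-learn and dummy data; the dimensionalities, mixture count, and field values are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

D, N, K = 12, 500, 4                       # cepstrum order, frames, mixtures
rng = np.random.default_rng(0)
src = rng.normal(size=(N, D))              # source-speaker cepstra (dummy)
tgt = src @ rng.normal(scale=0.3, size=(D, D)) + 0.1  # aligned target (dummy)

# ML fit of a joint GMM p(x, y) over stacked source/target vectors.
gmm = GaussianMixture(n_components=K, covariance_type="full",
                      random_state=0).fit(np.hstack([src, tgt]))

def convert(x):
    """MMSE estimate E[y|x] under the joint GMM."""
    mu_x, mu_y = gmm.means_[:, :D], gmm.means_[:, D:]
    Sxx = gmm.covariances_[:, :D, :D]
    Syx = gmm.covariances_[:, D:, :D]
    w = np.array([gmm.weights_[k] * multivariate_normal.pdf(x, mu_x[k], Sxx[k])
                  for k in range(K)])
    w /= w.sum()                           # responsibilities p(k | x)
    return sum(w[k] * (mu_y[k] + Syx[k] @ np.linalg.solve(Sxx[k], x - mu_x[k]))
               for k in range(K))

y_hat = convert(src[0])                    # converted cepstrum for one frame

# Prosody transformation: scale pitch period by the ratio of average values.
src_pitch_mean, tgt_pitch_mean = 8.3, 5.6  # ms pitch periods, dummy averages
def transform_pitch(period_ms):
    return period_ms * (tgt_pitch_mean / src_pitch_mean)
```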

The f0 distribution of Korean speakers in a spontaneous speech corpus

  • Yang, Byunggon
    • Phonetics and Speech Sciences
    • /
    • v.13 no.3
    • /
    • pp.31-37
    • /
    • 2021
  • The fundamental frequency, or f0, is an important acoustic measure in the prosody of human speech. The current study examined the f0 distribution in a corpus of spontaneous speech in order to provide normative data for Korean speakers. The corpus consists of 40 speakers talking freely about their daily activities and personal views. Praat scripts were created to collect f0 values, and the majority of obvious errors were corrected manually by inspecting the f0 contour on a narrow-band spectrogram while listening. Statistical analyses of the f0 distribution were conducted in R. The results showed that the f0 values of all the Korean speakers were right-skewed, with a sharply peaked distribution. The speakers produced spontaneous speech within a frequency range of 274 Hz (from 65 Hz to 339 Hz), excluding statistical outliers. The mode of the pooled f0 data was 102 Hz. The female f0 range, with a bimodal distribution, was wider than that of the male group. Regression analyses of f0 values on age yielded negligible R-squared values. As the mode of an individual speaker could be predicted from the median, either the median or the mode can serve as a good reference for an individual's f0 range. Finally, an analysis of the continuous f0 points of intonational phrases revealed that the initial and final segments of the phrases produced several f0 measurement errors. From these results, we conclude that examining a spontaneous speech corpus can provide linguists with useful measures for generalizing the acoustic properties of f0 variability in a language by individuals or groups. Further studies on using statistical measures to secure reliable f0 values for individual speakers would be desirable. (A sketch of the f0 extraction and summary statistics follows this entry.)
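
A minimal sketch of the f0 collection and summary statistics, using the Parselmouth Python interface to Praat rather than raw Praat scripts; the file name, the IQR-based outlier rule, and the 1-Hz histogram bins for the mode are assumptions.

```python
import numpy as np
import parselmouth

snd = parselmouth.Sound("speaker01.wav")     # illustrative file name
pitch = snd.to_pitch()                       # Praat's default pitch tracking
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]                              # drop unvoiced frames

# Outlier trimming via the usual 1.5*IQR rule (one reasonable reading of
# "excluding statistical outliers"; the paper's exact rule may differ).
q1, q3 = np.percentile(f0, [25, 75])
f0 = f0[(f0 >= q1 - 1.5 * (q3 - q1)) & (f0 <= q3 + 1.5 * (q3 - q1))]

median = np.median(f0)
# Mode of a continuous variable: peak of a histogram over 1-Hz bins.
hist, edges = np.histogram(f0, bins=np.arange(f0.min(), f0.max() + 1))
mode = edges[hist.argmax()]
print(f"median f0 = {median:.0f} Hz, mode = {mode:.0f} Hz, "
      f"range = {f0.min():.0f}-{f0.max():.0f} Hz")
```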

School Phonetics and How to Teach Prosody of English in Japan

  • Tsuzuki, Masaki
    • Proceedings of the KSPS conference
    • /
    • 1997.07a
    • /
    • pp.11-25
    • /
    • 1997
  • This presentation focuses on building basic English prosodic skills that are useful and helpful for Japanese learners of English. The focus is first on recognizing the seven basic nuclear tones, analysing intonation structures, and distinguishing intonation patterns, and then on improving speaking ability using rich verbal content for intonation practice (mini-dialogues). The presentation deals mainly with difficulties Japanese learners of English have in the field of RP intonation, and is chiefly concerned with identifying, describing, and analysing tone-group sequences. It sometimes happens that Japanese learners of English can pronounce isolated sounds correctly and read phonetic symbols adequately, but have difficulty producing accurate prosodic features. Wrong intonation is sometimes the cause of misunderstanding of a speaker's attitude, connotation, shades of meaning, etc. However accurately students can pronounce the nuclear tones or tone-groups of English, they have to learn how to connect tone-groups properly into suitable sequences with respect to meaning or implication. We are faced with the complicated theory of RP intonation on the one hand and its difficult realization on the other. Japanese learners of English have special difficulty employing the "rising tune" and the "falling + rising tune". If students are taught pitch movements by indicating dots graphically between two horizontal lines, they can easily understand the whole shape of a pitch movement. In this presentation, I illuminate several tone-group sequences that are very useful for Japanese learners of English intonation. Among them, four similar pitch patterns, namely (1) (equation omitted)-type, (2) (equation omitted)-type, (3) (equation omitted)-type, and (4) (Rising Head) (equation omitted)-type, are clarified, and other important tone-group sequences are also highlighted from the point of view of teaching English as a foreign language. The intonation theory, tone marks, and technical terms are, in all essentials, those of Intonation of Colloquial English by O'Connor, J. D. and Arnold, G. F., Longman, 2nd ed., 1982. The changes of tone are shown graphically between two horizontal lines representing the ordinary high and low zones of the utterance. A. C. Gimson (1981:314): "The intonation of English has been studied in greater detail and for longer than that of any other language. No definitive analysis, classifying the features of RP intonation, has yet appeared (though that presented by O'Connor and Arnold (1973) provides the most comprehensive and useful account from the foreign learner's point of view)."

  • PDF

Study on the realization of pause groups and breath groups (휴지 단위와 호흡 단위의 실현 양상 연구)

  • Yoo, Doyoung;Shin, Jiyoung
    • Phonetics and Speech Sciences
    • /
    • v.12 no.1
    • /
    • pp.19-31
    • /
    • 2020
  • The purpose of this study is to observe the realization of pause groups and breath groups by adult speakers and to examine how gender, generation, and task affect this realization. For this purpose, we analyzed forty-eight male and female speakers, divided by generation into two groups: young and old. Task and gender affected the realization of both pause groups and breath groups. Pause groups were longer in read speech than in spontaneous speech, and longer in female speech than in male speech. The breath groups, on the other hand, were longer in spontaneous speech and in male speech. In spontaneous speech, which requires planning, speakers produced shorter pause groups; the short sentences of the reading material help explain why breath groups were shorter in read speech. The gender difference resulted from differing pause patterns: within breath groups, male speakers produced longer pauses than female speakers, possibly due to differences in lung capacity. Generation, however, affected neither pause groups nor breath groups; it influenced only the number of syllables and eojeols, which can be interpreted as a result of the difference in speech rate between generations. (A sketch of pause-group segmentation follows this entry.)
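
A rough sketch of pause-group segmentation by silence detection is given below; the RMS window, silence threshold, and minimum pause duration are illustrative, not the study's actual criteria.

```python
import numpy as np

def pause_groups(x, sr=16000, win=0.02, silence_db=-40.0, min_pause=0.15):
    """Split a mono waveform into stretches of speech bounded by pauses."""
    hop = int(win * sr)
    frames = x[: len(x) // hop * hop].reshape(-1, hop)
    rms_db = 20 * np.log10(np.sqrt((frames ** 2).mean(axis=1)) + 1e-12)
    voiced = rms_db > silence_db

    groups, start, silent_run = [], None, 0
    for i, v in enumerate(voiced):
        if v:
            if start is None:
                start = i                      # a new pause group begins
            silent_run = 0
        elif start is not None:
            silent_run += 1
            if silent_run * win >= min_pause:  # pause long enough: close group
                groups.append((start * win, (i - silent_run + 1) * win))
                start, silent_run = None, 0
    if start is not None:
        groups.append((start * win, len(voiced) * win))
    return groups  # list of (onset_s, offset_s) per pause group
```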

Improvement of Naturalness for a HMM-based Korean TTS using the prosodic boundary information (운율경계정보를 이용한 HMM기반 한국어 TTS 자연성 향상 연구)

  • Lim, Gi-Jeong;Lee, Jung-Chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.9
    • /
    • pp.75-84
    • /
    • 2012
  • HMM-based Text-to-Speech systems generally utilize context-dependent tri-phone units from a large-corpus speech DB to enhance the synthetic speech. To downsize the large-corpus speech DB, acoustically similar tri-phone units are clustered by a decision tree using context-dependent information. The context-dependent information includes prosodic information as well as the phoneme sequence, because the naturalness of synthetic speech depends highly on prosody such as pauses, intonation patterns, and segmental durations. However, if the prosodic information is complicated, many context-dependent phonemes have no examples in the training data, and clustering provides a smoothed feature that generates unnatural synthetic speech. In this paper, instead of complicated prosodic information, we propose three simple prosodic boundary types, with decision-tree questions based on rising, falling, and monotonic tones, to improve naturalness. Experimental results show that the proposed method can improve the naturalness of an HMM-based Korean TTS and achieves a high MOS in perception tests. (A sketch of such boundary labels and tree questions follows this entry.)
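
The proposal can be pictured as below: context labels carrying one of three boundary tones, plus yes/no questions a clustering tree can ask about them. The label syntax and question names are made up for illustration and follow the spirit, not the letter, of HTS-style question files.

```python
BOUNDARIES = ("rise", "fall", "mono")   # rising, falling, monotonic tone

def context_label(prev_ph, ph, next_ph, boundary):
    # Tri-phone context plus the prosodic boundary type at the phrase edge.
    return f"{prev_ph}-{ph}+{next_ph}@B:{boundary}"

# Yes/no questions the decision tree can ask when clustering tri-phone states.
QUESTIONS = {f"QS_boundary_{b}": (lambda lab, b=b: lab.endswith(f"@B:{b}"))
             for b in BOUNDARIES}

lab = context_label("a", "n", "i", "rise")
print({name: q(lab) for name, q in QUESTIONS.items()})
# {'QS_boundary_rise': True, 'QS_boundary_fall': False, 'QS_boundary_mono': False}
```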

Performance Comparison of State-of-the-Art Vocoder Technology Based on Deep Learning in a Korean TTS System (한국어 TTS 시스템에서 딥러닝 기반 최첨단 보코더 기술 성능 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology
    • /
    • v.6 no.2
    • /
    • pp.509-514
    • /
    • 2020
  • The conventional TTS system consists of several modules, including text preprocessing, parsing analysis, grapheme-to-phoneme conversion, boundary analysis, prosody control, acoustic feature generation by an acoustic model, and synthesized speech generation. A deep-learning-based TTS system, in contrast, is composed of a Text2Mel process that generates a spectrogram from text, and a vocoder that synthesizes speech signals from the spectrogram. In this paper, to construct an optimal Korean TTS system, we apply Tacotron2 to the Text2Mel process and, as vocoders, introduce WaveNet, WaveRNN, and WaveGlow, implementing them to verify and compare their performance. Experimental results show that WaveNet has the highest MOS, and its trained model is hundreds of megabytes in size, but its synthesis time is about 50 times real time. WaveRNN shows MOS performance similar to WaveNet's, with a model size of several tens of megabytes, but it also cannot run in real time. WaveGlow can handle real-time processing, but its model is several gigabytes in size, and its MOS is the worst of the three vocoders. From these results, this paper presents reference criteria for selecting the appropriate vocoder according to the hardware environment when deploying a TTS system. (A sketch of such a comparison harness follows this entry.)
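
The reported trade-offs (MOS vs. model size vs. synthesis speed) suggest a simple benchmarking harness like the sketch below, where `vocoder` stands in for a trained WaveNet, WaveRNN, or WaveGlow model; the hop size, sample rate, and checkpoint-size measurement are assumptions.

```python
import os
import time
import torch

def benchmark(vocoder, mel, sr=22050, hop=256, ckpt_path=None):
    """Measure real-time factor and model size for a mel-to-audio vocoder."""
    with torch.no_grad():
        t0 = time.perf_counter()
        audio = vocoder(mel)                  # (1, n_mels, T) -> waveform
        elapsed = time.perf_counter() - t0
    audio_sec = mel.shape[-1] * hop / sr      # duration the mel frames cover
    rtf = elapsed / audio_sec                 # >1 means slower than real time
    size_mb = (os.path.getsize(ckpt_path) / 2**20) if ckpt_path else None
    return rtf, size_mb
```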

An analysis of emotional English utterances using the prosodic distance between emotional and neutral utterances (영어 감정발화와 중립발화 간의 운율거리를 이용한 감정발화 분석)

  • Yi, So-Pae
    • Phonetics and Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.25-32
    • /
    • 2020
  • An analysis of emotional English utterances covering 7 emotions (calm, happy, sad, angry, fearful, disgust, surprised) was conducted by measuring the prosodic distance between 672 emotional and 48 neutral utterances. Applying a technique proposed for the automatic evaluation of English pronunciation to emotional utterances, Euclidean distances over 3 prosodic elements (F0, intensity, and duration) extracted from the emotional and neutral utterances were used. The analytical methods were further extended with Euclidean distance normalization, z-scores, and z-score normalization, yielding 4 groups of measurement schemes (sqrF0, sqrINT, sqrDUR; norsqrF0, norsqrINT, norsqrDUR; sqrzF0, sqrzINT, sqrzDUR; norsqrzF0, norsqrzINT, norsqrzDUR). A sketch of the distance computation follows this entry. Both the perceptual and acoustic analyses of the emotional utterances consistently indicated that norsqrF0, norsqrINT, and norsqrDUR, the group that normalizes the Euclidean measurement, were the most effective of the 4 schemes. Judging by effect size in the estimated distance between emotional utterances and their neutral counterparts, emotion changed the prosody most in F0, followed by duration and then intensity in descending order. A Tukey post hoc test revealed 4 homogeneous subsets (calm
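
A minimal sketch of the distance schemes named above, assuming time-aligned per-utterance contours; the exact normalizations are one reading of the scheme names (plain Euclidean distance, length normalization, z-scoring, and both), not necessarily the paper's definitions.

```python
import numpy as np

def euclid(a, b):
    return np.sqrt(np.sum((a - b) ** 2))      # e.g., sqrF0 when a, b are F0

def nor_euclid(a, b):
    return euclid(a, b) / len(a)              # length-normalized: norsqrF0

def z(a):
    return (a - a.mean()) / a.std()           # z-scored contour: sqrzF0 input

# Dummy F0 contours (Hz) for an emotional and a neutral utterance,
# already time-aligned to the same number of points.
emo = np.array([210.0, 230.0, 250.0, 240.0, 200.0])
neu = np.array([180.0, 185.0, 190.0, 185.0, 175.0])

print("sqrF0     =", round(euclid(emo, neu), 1))
print("norsqrF0  =", round(nor_euclid(emo, neu), 2))
print("sqrzF0    =", round(euclid(z(emo), z(neu)), 2))
print("norsqrzF0 =", round(nor_euclid(z(emo), z(neu)), 3))
```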