• Title/Summary/Keyword: speech style

85 search results

On the Rising Tone of Intermediate Phrase in Standard Korean (한국어의 중간구 오름조 현상에 대하여)

  • Kwack Dong-gi
    • MALSORI
    • /
    • no.40
    • /
    • pp.13-27
    • /
    • 2000
  • It is generally accepted that a rising tone appears at the end of the intermediate phrase in standard Korean. There has been discussion about whether the syllable carrying the rising tone, even if it is a particle or an ending, is accented or not. The accented syllable is the most prominent one in a given phonological string. It is determined by nondistinctive stress, which falls on the first or second syllable of a lexical word according to vowel length and syllable weight, so pitch has no close relationship with accent. The intermediate phrase-final rising tone, therefore, is not associated with accent but conveys other pragmatic meanings: i) the speech style is more friendly, ii) the speaker tries to deliver the information so that the hearer can hear it more clearly, and iii) the speaker wants the hearer to keep listening because the speaker's utterance is not yet complete.

  • PDF

Allophonic Rules and Determining Factors of Allophones in Korean (한국어의 변이음 규칙과 변이음의 결정 요인들)

  • Lee Ho-Young
    • MALSORI
    • /
    • no.21_24
    • /
    • pp.144-175
    • /
    • 1992
  • This paper aims to discuss the determining factors of Korean allophones and to formulate and classify Korean allophonic rules systematically. The relationship between allophones and coarticulation, the most influential factor in allophonic variation, is thoroughly investigated. Other factors -- speech tempo and style, dialect, and social factors such as age, sex, class, etc. -- are also briefly discussed. Allophonic rules are classified into two groups -- 1) those relevant to coarticulation and 2) those irrelevant to coarticulation. Rules of the first group are further classified into four subgroups according to the directionality of the coarticulation. Each allophonic rule formulation is explained and discussed in detail. The allophonic rules formulated and classified in this paper are 1) Devoicing of Voiced Consonants, 2) Devoicing of Vowels, 3) Nasal Approach and Lateral Approach, 4) Uvularization, 5) Palatalization, 6) Voicing of Voiceless Lax Consonants, 7) Frication, 8) Labialization, 9) Nasalization, 10) Release Withholding and Release Masking, 11) Glottalization, 12) Flap Rule, 13) Vowel Weakening, and 14) Allophones of /ㅚ, ㅟ, ㅢ/ (which are realized as diphthongs or as monophthongs depending on phonetic contexts).

  • PDF
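A context-sensitive allophonic rule of the kind catalogued above (e.g., rule 6, Voicing of Voiceless Lax Consonants) can be sketched as a simple rewrite over a phone string. This is a toy illustration with made-up symbol sets, not the paper's formalism or transcription:

```python
def voice_lax_stops(phones):
    """Toy allophonic rule: a voiceless lax stop becomes voiced
    between voiced sounds. Symbol sets are illustrative only."""
    voiced = set("aeiou") | {"m", "n", "l", "b", "d", "g"}
    lax_to_voiced = {"p": "b", "t": "d", "k": "g"}
    out = list(phones)
    for i in range(1, len(out) - 1):
        # apply the rule only in a voiced _ voiced environment
        if out[i] in lax_to_voiced and out[i - 1] in voiced and out[i + 1] in voiced:
            out[i] = lax_to_voiced[out[i]]
    return "".join(out)

result = voice_lax_stops("aka")  # intervocalic /k/ -> [g], giving "aga"
```

A full rule system would chain many such rewrites, ordered and conditioned on the factors (tempo, style, dialect) the abstract discusses.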

Pitch and Formant Trajectories of English Vowels by American Males with Different Speaking Styles (발화방식에 따른 미국인 남성 영어모음의 피치와 포먼트 궤적)

  • Yang, Byung-Gon
    • Phonetics and Speech Sciences
    • /
    • v.4 no.1
    • /
    • pp.21-28
    • /
    • 2012
  • Many previous studies have reported acoustic parameters of English vowels produced in a clear speaking style. In everyday usage, however, we produce speech sounds in various speaking styles, and different styles may yield different acoustic measurements. This study examines the pitch and formant trajectories of eleven English vowels produced by nine American males in order to understand acoustic variation across clear and conversational speaking styles. The author used Praat to obtain trajectories systematically at seven equidistant time points over the vowel segment while checking measurement validity. Results showed that pitch trajectories formed distinct patterns depending on the four speaking styles. Generally, higher pitch values were observed in the higher vowels, and pitch was higher in the clear speaking styles than in the conversational styles. The same trend was observed in the three formant trajectories of front vowels and in the first formant trajectories of back vowels. The second and third formant trajectories of back vowels revealed an opposite or inconsistent trend, which might be attributable to coarticulation with the following consonant or to lip-rounding gestures. The author draws the tentative conclusion that people tend to produce vowels so as to enhance pitch and formant differences in order to transmit their information clearly. Further perceptual studies on synthesized vowels with varying pitch and formant values are desirable to substantiate this conclusion.
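The seven-point sampling described in the abstract can be sketched in NumPy: given a measured track (pitch or a formant) over a vowel segment, sample it at seven equidistant time points. The values below are made up for illustration; the paper itself used Praat measurements:

```python
import numpy as np

def sample_trajectory(times, values, n_points=7):
    """Sample a measured trajectory (e.g., F0 or a formant track)
    at n equidistant time points over the vowel segment."""
    targets = np.linspace(times[0], times[-1], n_points)
    return np.interp(targets, times, values)

# toy example: a linearly rising F1 track over a 100 ms vowel
times = np.linspace(0.0, 0.1, 50)
f1 = 500.0 + 2000.0 * times       # rises from 500 Hz to 700 Hz
traj = sample_trajectory(times, f1)
```

Comparing such seven-point vectors across styles is one straightforward way to quantify the trajectory differences the study reports.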

A study on the improvement of generation speed and speech quality for a granularized emotional speech synthesis system (세밀한 감정 음성 합성 시스템의 속도와 합성음의 음질 개선 연구)

  • Um, Se-Yun;Oh, Sangshin;Jang, Inseon;Ahn, Chung-hyun;Kang, Hong-Goo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.07a
    • /
    • pp.453-455
    • /
    • 2020
  • This paper proposes a method to increase the synthesis speed while improving the speech quality of an end-to-end emotional text-to-speech (TTS) synthesis system that generates emotional speech captions for the visually impaired. The previously used emotional synthesis method based on Global Style Tokens (GST) has the advantage of expressing various emotions, but it takes a long time to generate synthesized speech, and unless the dynamic range of the training data is handled effectively, the output suffers quality degradation such as clipping. To remedy this, this paper introduces a new data preprocessing procedure and replaces the existing vocoder, WaveNet, with WaveRNN, showing improvements in both generation speed and speech quality.

  • PDF
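The abstract does not spell out the new preprocessing step, but one generic way to control a corpus's dynamic range and avoid clipping is peak normalization with headroom. This is a hedged sketch of that general idea, not the authors' actual pipeline:

```python
import numpy as np

def normalize_peak(wave, headroom_db=3.0):
    """Scale a waveform so its peak sits headroom_db below full
    scale, reducing the risk of clipping in later stages."""
    peak = np.max(np.abs(wave))
    if peak == 0:
        return wave
    target = 10.0 ** (-headroom_db / 20.0)  # ~0.708 for 3 dB headroom
    return wave * (target / peak)

x = np.array([0.2, -1.3, 0.9])  # contains an out-of-range sample
y = normalize_peak(x)
```

Applied uniformly to training data, such a step keeps amplitudes in a consistent range for the vocoder.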

Neutralization of Vowels /ɨ/ and /u/ after a Labial Consonant in Korean: A Cross-generational Study

  • Kang, Hyunsook
    • Phonetics and Speech Sciences
    • /
    • v.6 no.1
    • /
    • pp.3-10
    • /
    • 2014
  • This study investigated whether the Korean vowels /ɨ/ and /u/ are distinctively perceived after a labial consonant, given that native and Sino-Korean nouns show only the vowel /u/ after a labial consonant while this pattern has been massively broken by the recent introduction of loanwords. For this purpose, a perception experiment was conducted with V1C1V2 sequences in which different vowels /a, i, u/ and consonants /p, t, k/ occurred in V1 and C1 before the target V2, /ɨ/ or /u/. The data were produced by six speakers each from two age groups, Age20 and Age40/50, in a read speech style. The results showed that the consonant /p/ attracted significantly more /u/ responses for /VCɨ/ sequences and significantly fewer /u/ responses for /VCu/ sequences than the other consonants did, in both age groups. Furthermore, the Age20 group showed a significantly lower percentage of /u/ responses than the Age40/50 group when the preceding consonant was /p/, regardless of the target vowel. We therefore suggest that, unlike the traditional account of labial assimilation, there is neutralization after a labial consonant, in which the vowels /ɨ/ and /u/ are often realized as a sound somewhere between the two. That is, this vowel change is not categorical; rather, it produces an ambiguous stimulus that attracts different responses from different listeners. The ambiguous stimulus arises from coarticulatory effort in speech production and from perceptual compensation. We also argue that there is a generational difference: the Age40/50 group showed a stronger tendency to produce /u/ after a labial consonant regardless of whether the target vowel was /ɨ/ or /u/.

A Study on Hybrid Structure of Semi-Continuous HMM and RBF for Speaker Independent Speech Recognition (화자 독립 음성 인식을 위한 반연속 HMM과 RBF의 혼합 구조에 관한 연구)

  • 문연주;전선도;강철호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.8
    • /
    • pp.94-99
    • /
    • 1999
  • Among speech recognition algorithms, the hybrid structure of an HMM and a neural network (NN) shows a high recognition rate, as it combines the respective merits of the statistical model and the neural network model. In this study, we propose a new hybrid structure of a semi-continuous HMM (SCHMM) and radial basis functions (RBF), which re-estimates the weighting-coefficient probabilities affecting the observation probability after Baum-Welch estimation. The proposed method exploits the similarity between the basis functions of the RBF's hidden layer and the SCHMM's probability density functions, so as to discriminate speech signals sensitively through the learned and estimated weighting coefficients of the RBF. Simulation results show that the recognition rates of the hybrid SCHMM/RBF structure are higher than those of the SCHMM in a recognition experiment with unseen speakers, indicating that the proposed method is more discriminative than the SCHMM alone.

  • PDF
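The similarity the abstract points to comes from the Gaussian form shared by an RBF hidden layer and an SCHMM's codebook densities. A minimal sketch of a Gaussian RBF layer (toy centers and widths, not the paper's trained parameters):

```python
import numpy as np

def rbf_layer(x, centers, widths):
    """Gaussian radial basis activations exp(-||x - c||^2 / (2 w^2)),
    the same bell shape as an SCHMM's Gaussian density functions."""
    d2 = np.sum((centers - x) ** 2, axis=1)  # squared distance to each center
    return np.exp(-d2 / (2.0 * widths ** 2))

centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.array([1.0, 1.0])
act = rbf_layer(np.array([0.0, 0.0]), centers, widths)
```

In the hybrid, such activations would feed the re-estimated weighting coefficients that modulate the observation probabilities.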

Analyzing Vocabulary Characteristics of Colloquial Style Corpus and Automatic Construction of Sentiment Lexicon (구어체 말뭉치의 어휘 사용 특징 분석 및 감정 어휘 사전의 자동 구축)

  • Kang, Seung-Shik;Won, HyeJin;Lee, Minhaeng
    • Smart Media Journal
    • /
    • v.9 no.4
    • /
    • pp.144-151
    • /
    • 2020
  • In a mobile environment, communication takes place via SMS text messages. Vocabularies used in SMS texts can be expected to belong to different classes from those used in standard Korean literary-style sentences. For example, a typical literary-style sentence is properly initiated and terminated and well constructed, while an SMS text corpus often replaces components with omissions and abbreviated representations. To analyze these vocabulary-usage characteristics, existing colloquial-style and literary-style corpora are used. The experiment compares and analyzes the vocabulary-use characteristics of two colloquial corpora, an SMS text corpus and the Naver Sentiment Movie Corpus, against a written Korean corpus. For the comparison of vocabulary across corpora, the part-of-speech tag adjective (VA) was used as a standard, and a distinctive collexeme analysis was used to measure collostructional strength. As a result, it was confirmed that adjectives related to emotional expression such as 'good-', 'sorry-', and 'joy-' were preferred in the SMS text corpus, while adjectives related to evaluative expression were preferred in the Naver Sentiment Movie Corpus. Word embeddings were used to automatically construct a sentiment lexicon based on the extracted adjectives with high collostructional strength, and a total of 343,603 sentiment expressions were automatically built.
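The embedding-based lexicon construction the abstract mentions can be sketched as nearest-neighbor expansion: starting from seed adjectives, collect words whose embeddings are cosine-close to a seed. The two-dimensional vectors below are made up for illustration; real trained embeddings would have hundreds of dimensions:

```python
import numpy as np

def expand_lexicon(seed_words, embeddings, threshold=0.8):
    """Grow a sentiment lexicon by adding words whose embedding is
    cosine-similar to any seed word; a minimal sketch only."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    lexicon = set(seed_words)
    for word, vec in embeddings.items():
        for seed in seed_words:
            if word != seed and cos(vec, embeddings[seed]) >= threshold:
                lexicon.add(word)
    return lexicon

# toy embeddings (hypothetical vectors, not real trained values)
emb = {"good": np.array([1.0, 0.1]),
       "great": np.array([0.9, 0.2]),
       "table": np.array([0.0, 1.0])}
lex = expand_lexicon(["good"], emb)
```

Seeding with the high-collostructional-strength adjectives and iterating over a large vocabulary is what lets the lexicon grow to hundreds of thousands of entries.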

Prosodic Break Index Estimation using LDA and Tri-tone Model (LDA와 tri-tone 모델을 이용한 운율경계강도 예측)

  • 강평수;엄기완;김진영
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.7
    • /
    • pp.17-22
    • /
    • 1999
  • In this paper we propose a new method combining LDA and a tri-tone model to predict Korean prosodic break indices (PBI) for a given utterance. PBI can serve as an important cue to syntactic discontinuity in continuous speech recognition (CSR). The model consists of three steps. In the first step, PBI is predicted from syllable and pause duration information through linear discriminant analysis (LDA). In the second step, syllable tone information is used to estimate PBI: vector quantization (VQ) codes the syllable tones, and PBI is estimated by a tri-tone model. In the last step, the two PBI predictors are integrated by a weight factor. The proposed method was tested on 200 literary-style spoken sentences. The experimental results showed 72% accuracy.

  • PDF
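The final integration step is a weighted combination of the two predictors' scores. A minimal sketch with made-up posteriors and a hypothetical weight value (the paper does not give its tuned weight here):

```python
import numpy as np

def combine_pbi(p_lda, p_tritone, w=0.5):
    """Integrate two break-index predictors by a weight factor
    and pick the highest-scoring break-index class."""
    scores = w * np.asarray(p_lda) + (1.0 - w) * np.asarray(p_tritone)
    return int(np.argmax(scores))

# toy posteriors over three break-index classes
p_dur = [0.2, 0.5, 0.3]   # from duration features (LDA step)
p_tone = [0.1, 0.3, 0.6]  # from the tri-tone model
label = combine_pbi(p_dur, p_tone, w=0.4)
```

In practice the weight would be tuned on held-out data so that the duration and tone cues contribute in proportion to their reliability.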

A Study on Rhythmic Units in Korean -with Respect to Syntactic Structure- (한국어의 리듬 단위에 관한 연구 - 문법 구조와 관련하여)

  • Kim, Sun-Mi
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.224-228
    • /
    • 1996
  • This paper studies how an utterance is divided into rhythmic units in Standard Korean with respect to its syntactic structure. As data, I used 150 sentences containing similar numbers of words and various syntactic structures. The sentences were read by 7 speakers of the Seoul dialect in a conversational style. Each sentence was read twice at normal speed and twice at fast speed, so that 4,200 sentences were recorded in total. Listening to them, the author marked the sentences with two kinds of boundaries, strong and weak. To explore the relationship between rhythmic units and syntactic structure, I devised a framework of grammatical symbols, each designed to carry both syntactic and morphological information, and assigned these symbols to the sentences. With the sentences marked with grammatical symbols on the one hand and with rhythmic boundaries on the other, I could show the relationship between rhythmic units and syntactic structure: which syntactic structures are likely to be pronounced as one rhythmic unit, and which fall on rhythmic boundaries.

  • PDF

Emotion Transfer with Strength Control for End-to-End TTS (감정 제어 가능한 종단 간 음성합성 시스템)

  • Jeon, Yejin;Lee, Gary Geunbae
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.423-426
    • /
    • 2021
  • This paper introduces a method for controlling the strength of an emotion based on Global Style Tokens (GST). Previous GST studies synthesized speech using a reference audio containing the desired style. However, because synthesis was possible only in the style of the reference audio, fine-grained emotion control was difficult. To solve this problem, this paper replaces the reference-encoder part of the GST with residual blocks and AlexNet, a network from computer vision. AlexNet consists of five convolutional neural network layers, but only four of them are used here, with one layer excluded. A listening evaluation (Mean Opinion Score) shows that the proposed method can control emotion strength.

  • PDF
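Graded emotion control over a style embedding can be illustrated, in the simplest possible form, as interpolation between a neutral and an emotional style vector. This is a hypothetical sketch of the general idea of strength control, not the paper's AlexNet-based encoder:

```python
import numpy as np

def interpolate_emotion(neutral, emotional, strength):
    """Blend a neutral and an emotional style embedding;
    strength in [0, 1] sets how strongly the emotion is expressed."""
    neutral = np.asarray(neutral, dtype=float)
    emotional = np.asarray(emotional, dtype=float)
    return (1.0 - strength) * neutral + strength * emotional

# toy 2-d style vectors (real GST embeddings are higher-dimensional)
mixed = interpolate_emotion([0.0, 0.0], [1.0, 1.0], strength=0.5)
```

The blended vector would then condition the decoder in place of a single reference-audio embedding, giving a continuous handle on emotion strength.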