• Title/Summary/Keyword: Prosody

Prosodic aspects of structural ambiguous sentences in Korean produced by Japanese intermediate Korean learners (한국어 구조적 중의성 문장에 대한 일본인 중급 한국어 학습자들의 발화양상)

  • Yune, YoungSook
    • Phonetics and Speech Sciences
    • /
    • v.7 no.3
    • /
    • pp.89-97
    • /
    • 2015
  • The aim of this study is to investigate the prosodic characteristics of structurally ambiguous Korean sentences produced by Japanese learners of Korean, and the influence of their first-language prosody. Previous studies have reported that structurally ambiguous sentences in Korean differ primarily in prosodic phrasing. We therefore examined whether Japanese learners of Korean can also distinguish, in production, between two types of structurally ambiguous sentences on the basis of prosodic features. For this purpose, 4 Korean native speakers and 8 Japanese learners of Korean participated in a production test. The materials were 6 sentences in which a relative clause modifies either NP1 or NP1+NP2. The results show that Korean native speakers produced the ambiguous sentences with different prosodic structures depending on their semantic and syntactic structure (left-branching or right-branching). The Japanese speakers also produced distinct prosodic structures for the two types of ambiguous sentences in most cases, but they made more errors when producing left-branching sentences than right-branching sentences. In addition, interference from Japanese pitch accent was observed in the production of the Korean ambiguous sentences.

A Study on the Artificial Neural Networks for the Sentence-level Prosody Generation (문장단위 운율발생용 인공신경망에 관한 연구)

  • 신동엽;민경중;강찬구;임운천
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • autumn
    • /
    • pp.53-56
    • /
    • 2000
  • The text-to-speech module of an unlimited-vocabulary speech synthesis system uses several methods to increase the naturalness of the synthesized speech, one of which is to accurately realize the prosodic rules inherent in natural speech. The prosodic rules needed for synthesis are implemented either from linguistic information or by extracting rules from prosodic information obtained by analyzing natural speech. If the rules obtained in this way fail to capture all of the prosodic rules present in natural speech, or are implemented incorrectly, the naturalness of the synthesized speech degrades. Considering this, we propose a method that trains an artificial neural network on the prosodic information of natural speech so that it can generate sentence-level prosody. The three elements of prosody are pitch, duration, and energy; the network is designed so that, given an input sentence, it learns the pitch and energy changes over the duration of each phoneme. To train the network, a speaker recorded a set of isolated words and a set of phonetically balanced sentences; these recordings were analyzed and the resulting prosodic information was stored in a database. For each phoneme in a sentence, the duration, pitch change, and energy change were measured, and polynomial coefficients and initial values for each change curve were obtained by curve fitting to build the prosody database. Part of this database was used to train the neural network and the rest was used to evaluate it; the results indicate that continued expansion of the prosody database can produce more natural prosody.
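The curve-fitting step described above (polynomial coefficients stored per change curve) can be sketched as follows. This is a minimal illustration, assuming a low-degree polynomial fit per phoneme; the degree and the use of NumPy's `polyfit` are assumptions, not details taken from the paper.

```python
import numpy as np

def fit_prosody_curve(times, values, degree=2):
    """Fit a polynomial to an F0 (or energy) contour over one phoneme,
    returning the coefficients that would go into the prosody database.
    The degree is an illustrative assumption."""
    return np.polyfit(times, values, degree)

def reconstruct_curve(coeffs, times):
    """Rebuild the contour from stored coefficients, as a synthesizer
    would when generating sentence-level prosody."""
    return np.polyval(coeffs, times)

# Example: a linearly falling pitch contour over a 100 ms phoneme.
t = np.linspace(0.0, 0.1, 11)
f0 = 120.0 - 200.0 * t            # Hz, falling from 120 to 100
coeffs = fit_prosody_curve(t, f0, degree=1)
f0_hat = reconstruct_curve(coeffs, t)
```

Storing only the coefficients per phoneme, rather than raw contours, is what makes the database compact enough to train a network against.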

On the Control of Energy Flow between the Connection Parts of Syllables for the Korean Multi-Syllabic Speech Synthesis in the Time Domain Using Mono-syllables as a Synthesis Unit (단음절 합성단위음을 사용한 시간영역에서의 한국어 다음절어 규칙합성을 위한 음절간 접속구간에서의 에너지 흐름 제어에 관한 연구)

  • 강찬희;김윤석
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.9B
    • /
    • pp.1767-1774
    • /
    • 1999
  • This paper synthesizes Korean multi-syllabic speech in the time domain using mono-syllables as the synthesis unit. In particular, it controls the shape of the speech energy flow across the connection parts between syllables when mono-syllables are concatenated. To this end, the energy flow is controlled with prosody parameters extracted from speech waveforms in the time domain, and experimental results are presented in which the energy flow is controlled using concatenation rules derived from the Korean syllable waveform shapes at the connection parts. In the experiments, the discontinuities of energy flow at the connection parts produced by concatenating mono-syllables in the time domain were removed, and the quality and naturalness of the synthesized speech were improved.
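One generic way to remove an energy discontinuity at a syllable joint is a short crossfade over the connection region. This is a stand-in sketch only: the paper derives its own concatenation rules from syllable waveform shapes, not a fixed linear crossfade.

```python
import numpy as np

def crossfade_concat(a, b, overlap):
    """Concatenate two syllable waveforms with a linear crossfade over
    `overlap` samples, smoothing the energy flow at the connection part.
    (Illustrative stand-in for the paper's shape-derived rules.)"""
    fade_out = np.linspace(1.0, 0.0, overlap)
    fade_in = 1.0 - fade_out
    joined = a[-overlap:] * fade_out + b[:overlap] * fade_in
    return np.concatenate([a[:-overlap], joined, b[overlap:]])

# Two constant-amplitude "syllables" with a level mismatch at the joint.
syl1 = np.full(100, 0.8)
syl2 = np.full(100, 0.4)
out = crossfade_concat(syl1, syl2, overlap=20)
```

The abrupt 0.8-to-0.4 step becomes a monotonic ramp across the 20-sample overlap, which is exactly the kind of energy-flow discontinuity the paper sets out to eliminate.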

Voice Frequency Synthesis using VAW-GAN based Amplitude Scaling for Emotion Transformation

  • Kwon, Hye-Jeong;Kim, Min-Jeong;Baek, Ji-Won;Chung, Kyungyong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.2
    • /
    • pp.713-725
    • /
    • 2022
  • Mostly, artificial intelligence does not show any definite change in emotion. For this reason, it is hard for it to demonstrate empathy in communication with humans. If frequency modification is applied to neutral emotion, or if a different emotional frequency is added to it, it becomes possible to develop artificial intelligence with emotions. This study proposes emotion conversion using voice frequency synthesis based on a Generative Adversarial Network (GAN). The proposed method extracts frequencies from the speech data of twenty-four actors and actresses. In other words, it extracts the voice features of their different emotions, preserves linguistic features, and converts only the emotion. It then generates frequencies with a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN) in order to generate prosody while preserving linguistic information, which makes it possible to learn speech features in parallel. Finally, it corrects the frequency by employing amplitude scaling. Using logarithmic-scale spectral conversion, the frequency is converted in consideration of human hearing characteristics. Accordingly, the proposed technique provides emotion conversion of speech so that emotions can be expressed in line with artificially generated voices or speech.
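The amplitude-scaling correction on a logarithmic scale can be sketched as below. This is a minimal illustration, assuming a simple uniform gain in dB applied to spectral magnitudes; the actual VAW-GAN pipeline and its learned scaling parameters are not reproduced here.

```python
import numpy as np

def amplitude_scale_db(magnitudes, gain_db):
    """Scale spectral magnitudes on a logarithmic (dB) scale, matching
    how human hearing perceives loudness. The uniform gain is an
    illustrative assumption."""
    db = 20.0 * np.log10(np.maximum(magnitudes, 1e-12))  # avoid log(0)
    return 10.0 ** ((db + gain_db) / 20.0)

mags = np.array([0.1, 1.0, 2.0])
scaled = amplitude_scale_db(mags, gain_db=6.0)  # ~2x linear gain
```

Adding a constant in dB corresponds to multiplying by a constant on the linear scale, so perceptually equal steps are equal additions in the log domain.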

Evaluation of English speaking proficiency under fixed speech rate: Focusing on utterances produced by Korean child learners of English

  • Narah Choi;Tae-Yeoub Jang
    • Phonetics and Speech Sciences
    • /
    • v.15 no.1
    • /
    • pp.47-54
    • /
    • 2023
  • This study attempted to test the hypothesis that Korean evaluators can score L2 speech appropriately even when speech rate features are unavailable. Two perception experiments (preliminary and main) were conducted sequentially. The purpose of the preliminary experiment was to categorize English-as-a-foreign-language (EFL) speakers into two groups, advanced learners and lower-level learners, based on the proficiency scores given by five human raters. In the main experiment, a set of stimuli was prepared such that all data tokens were modified to have a uniform speech rate. Ten human evaluators were asked to score the stimulus tokens on a 5-point scale. These scores were statistically analyzed to determine whether there was a significant difference in utterance production between the two groups. The results of the preliminary experiment confirm that higher-proficiency learners speak faster than lower-proficiency learners. The results of the main experiment indicate that under controlled speech-rate conditions, human raters can appropriately assess learner proficiency, probably thanks to the linguistic features that the raters considered during the evaluation process.
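Normalizing every token to a uniform speech rate amounts to computing a per-utterance time-scaling factor. A minimal sketch, assuming speech rate is measured in syllables per second; the study's actual modification tool and target rate are not stated here and are assumptions.

```python
def rate_normalization_factor(n_syllables, duration_s, target_rate):
    """Duration-scaling factor that maps an utterance onto a fixed speech
    rate (syllables/second). factor < 1 shortens (speeds up) the token;
    factor > 1 stretches (slows down) it. Illustrative only."""
    current_rate = n_syllables / duration_s
    return current_rate / target_rate

# A 12-syllable utterance spoken in 3 s (4 syl/s), normalized to 5 syl/s.
factor = rate_normalization_factor(12, 3.0, target_rate=5.0)
```

Applying the factor to the duration (e.g. via a time-scale modification algorithm that preserves pitch) yields tokens whose rate no longer distinguishes the proficiency groups.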

ToBI and beyond: Phonetic intonation of Seoul Korean ani in Korean Intonation Corpus (KICo)

  • Ji-eun Kim
    • Phonetics and Speech Sciences
    • /
    • v.16 no.1
    • /
    • pp.1-9
    • /
    • 2024
  • This study investigated variation in the intonation of the Seoul Korean interjection ani across different meanings ("no" and "really?") and speech levels (Intimate and Polite) using data from the Korean Intonation Corpus (KICo). The investigation was conducted in two stages. First, IP-final tones in the dataset were categorized according to the K-ToBI convention (Jun, 2000). While significant relationships were observed between the meaning of ani and its IP-final tones, substantial overlap between groups was notable. Second, the F0 characteristics of the final syllable of ani were analyzed to elucidate the apparent many-to-many relationships between intonation and meaning/speech level. Results indicated that these seemingly overlapping relationships could be significantly distinguished. Overall, this study advocates a deeper analysis of phonetic intonation beyond ToBI-based categorical labels. By examining the F0 characteristics of the IP-final syllable, previously unclear connections between meaning/speech level and intonation become more comprehensible. Although ToBI remains a valuable tool and framework for studying intonation, it is imperative to explore beyond these categories to grasp the "distinctiveness" of intonation, thereby enriching our understanding of prosody.
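The contrast between categorical boundary-tone labels and continuous F0 characteristics can be made concrete with a toy labeler. This is not the K-ToBI procedure, just an illustration of how much information a coarse rising/falling label discards relative to the raw F0 of the final syllable; the 2 Hz threshold is an arbitrary assumption.

```python
def final_tone_shape(f0_contour):
    """Crude rising/falling/level label for an IP-final syllable from its
    F0 samples (Hz). A toy stand-in for ToBI-style boundary-tone labels;
    the threshold is an illustrative assumption."""
    delta = f0_contour[-1] - f0_contour[0]
    if delta > 2.0:
        return "rising"
    if delta < -2.0:
        return "falling"
    return "level"

label_no = final_tone_shape([210.0, 195.0, 180.0])  # falling contour
label_q = final_tone_shape([180.0, 200.0, 230.0])   # rising contour
```

Two contours with the same label can differ greatly in excursion size and slope, which is precisely the fine phonetic detail the paper argues should be analyzed beyond the categorical labels.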

On a Pitch Alteration Method by Time-axis Scaling Compensated with the Spectrum for High Quality Speech Synthesis (고음질 합성용 스펙트럼 보상된 시간축조절 피치 변경법)

  • Bae, Myung-Jin;Lee, Won-Cheol;Im, Sung-Bin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.4
    • /
    • pp.89-95
    • /
    • 1995
  • The waveform coding technique is concerned with simply preserving the waveform shape of the speech signal through a redundancy reduction process. In speech synthesis, waveform coding with high sound quality is mainly used for synthesis by analysis. However, since the parameters of this coding are not separated into excitation and vocal tract parameters, it is difficult to apply waveform coding to synthesis by rule. In order to apply waveform coding to synthesis by rule, a pitch alteration technique is required for prosody control. In this paper, we propose a new pitch alteration method that can change the pitch period in waveform coding by scaling the time axis and compensating the spectrum. This is a time-frequency domain method in which the phase components of the waveform are preserved, with a spectral distortion of 2.5% or less for a 50% pitch change.
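The time-axis-scaling half of the method can be sketched as resampling a single pitch period: shortening the period raises the pitch, lengthening it lowers the pitch. This is a minimal illustration using linear interpolation; the paper's spectrum-compensation step, which limits the resulting spectral distortion, is not modeled here.

```python
import numpy as np

def scale_pitch_period(period, factor):
    """Resample one pitch period along the time axis so its length changes
    by `factor` (factor < 1 raises pitch, factor > 1 lowers it). Linear
    interpolation is an illustrative simplification."""
    n_out = int(round(len(period) * factor))
    src = np.linspace(0.0, len(period) - 1, n_out)
    return np.interp(src, np.arange(len(period)), period)

period = np.sin(2 * np.pi * np.arange(80) / 80)  # one 80-sample cycle
shorter = scale_pitch_period(period, 0.5)        # 40 samples: one octave up
```

Pure time-axis scaling shifts all formants along with the pitch, which is why the proposed method pairs it with spectral compensation.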

Prosodic Boundary Effects on the V-to-V Lingual Movement in Korean

  • Cho, Tae-Hong;Yoon, Yeo-Min;Kim, Sa-Hyang
    • Phonetics and Speech Sciences
    • /
    • v.2 no.3
    • /
    • pp.101-113
    • /
    • 2010
  • The present study investigated how the kinematics of the /a/-to-/i/ tongue movement in Korean are influenced by prosodic boundaries. The /a/-to-/i/ sequence was used as 'transboundary' test material occurring across a prosodic boundary, as in /ilnjəʃʰa/ # /minsakwae/ ('일년차#민사과에', 'the first year worker' # 'dept. of civil affairs'). The study also tested whether the V-to-V tongue movement is further influenced by syllable structure, with /m/ placed either in the coda condition (/am#i/) or in the onset condition (/a#mi/). Results of an EMA (Electromagnetic Articulography) study showed that kinematic parameters such as movement distance (displacement), movement duration, and movement velocity (speed) all varied as a function of boundary strength, showing an articulatory strengthening pattern of a "larger, longer, and faster" movement. Interestingly, however, the larger, longer, and faster pattern associated with boundary marking in Korean has often been observed with stress (prominence) marking in English. It was proposed that language-specific prosodic systems induce different ways in which phonetics and prosody interact: Korean, as a language without lexical stress and pitch accent, has more degrees of freedom to express prosodic strengthening, while languages such as English have constraints, so that some strengthening patterns are reserved for lexical stress. The V-to-V tongue movement was also found to be influenced by the intervening consonant /m/'s syllable affiliation, showing greater preboundary lengthening of the tongue movement when /m/ was part of the preboundary syllable (/am#i/). Together, the results show that fine-grained phonetic details do not simply arise as low-level physical phenomena, but reflect higher-level linguistic structures, such as syllable and prosodic structure. It was also discussed how the boundary-induced kinematic patterns could be accounted for in terms of the task dynamic model and the theory of the prosodic gesture (π-gesture).
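The three kinematic parameters compared across boundary strengths (displacement, duration, and peak speed) can be computed from an articulator trajectory as follows. A simplified sketch: real EMA analyses segment the movement by velocity criteria and smooth the signal first, which is not done here.

```python
import numpy as np

def movement_kinematics(positions, dt):
    """Displacement, movement duration, and peak speed of a 1-D articulator
    trajectory sampled every `dt` seconds. Simplified: no smoothing or
    velocity-based movement segmentation."""
    displacement = abs(positions[-1] - positions[0])
    duration = (len(positions) - 1) * dt
    speed = np.abs(np.diff(positions)) / dt
    return displacement, duration, float(speed.max())

# A 10 mm tongue movement sampled at 200 Hz (dt = 5 ms), then a hold.
traj = np.concatenate([np.linspace(0.0, 10.0, 21), np.full(10, 10.0)])
disp, dur, peak_v = movement_kinematics(traj, dt=0.005)
```

A "larger, longer, and faster" boundary effect would show up as all three values increasing with boundary strength.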

A Study on Implementation of Emotional Speech Synthesis System using Variable Prosody Model (가변 운율 모델링을 이용한 고음질 감정 음성합성기 구현에 관한 연구)

  • Min, So-Yeon;Na, Deok-Su
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.8
    • /
    • pp.3992-3998
    • /
    • 2013
  • This paper concerns a method of adding an emotional speech corpus to a high-quality, large-corpus-based speech synthesizer to generate varied synthesized speech. We built the emotional speech corpus in a form usable by a waveform-concatenation speech synthesizer, and implemented a speech synthesizer that can generate varied synthesized speech through the same unit-selection process as a normal speech synthesizer. We used a markup language for the emotional input text. Emotional speech is generated when the input text matches a segment of intonation-phrase length in the emotional speech corpus; otherwise, normal speech is generated. The break indices (BIs) of emotional speech are more irregular than those of normal speech, which makes it difficult to use the BIs generated by the synthesizer as they are. To solve this problem, we applied Variable Break [3] modeling. We used a Japanese speech synthesizer for the experiment. As a result, we obtained natural emotional synthesized speech using the break-prediction module for normal speech synthesis.
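The corpus-selection rule described above (emotional corpus on an intonation-phrase match, normal corpus otherwise) can be sketched as a simple lookup. This is a schematic illustration: in the real system, matching operates over unit sequences inside the unit-selection synthesizer, not raw strings, and the example phrases are invented.

```python
def select_corpus(phrase, emotional_phrases):
    """Choose the emotional corpus when the input intonation phrase has an
    exact match there; otherwise fall back to the normal corpus.
    Schematic stand-in for unit-selection matching."""
    return "emotional" if phrase in emotional_phrases else "normal"

# Hypothetical emotional-corpus contents, for illustration only.
emo_corpus = {"I am so happy", "this is terrible"}
c1 = select_corpus("I am so happy", emo_corpus)
c2 = select_corpus("the weather report", emo_corpus)
```

Because the fallback path reuses the normal synthesizer unchanged, coverage gaps in the emotional corpus degrade gracefully to neutral speech rather than failing.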

A Study on the Prosody Generation of Korean Sentences using Neural Networks (신경망을 이용한 한국어 운율 발생에 관한 연구)

  • Lee Il-Goo;Min Kyoung-Joong;Kang Chan-Koo;Lim Un-Cheon
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.65-69
    • /
    • 1999
  • There are various speech synthesis systems depending on the synthesis unit, synthesizer, and synthesis method, but concatenative synthesis systems, which generate synthesized speech by connecting basic synthesis units rather than by pure rule-based synthesis, suffer from reduced naturalness because they fail to realize smooth changes in the synthesis parameters between connected units. Accurately realizing the prosodic rules present in natural speech can increase the naturalness of synthesized speech, but extracting all existing prosodic rules requires building a vast amount of language material. It would be desirable to extract prosodic rules from ordinary meaningful sentences, but language material containing every prosodic phenomenon would comprise far too many sentences to process, so it is important to construct a sentence set that covers diverse prosodic phenomena with as few sentences as possible. In this paper, meaningful sentences were constructed from a phonetically balanced set of 412 isolated words. The words were divided into groups, and words drawn from each group were combined to form meaningful sentences; words not in the word list were added as needed to complete the sentences. To examine prosodic changes according to a word's relative position within a sentence, variants of each sentence were created and included in the language material. To improve naturalness, a speech database was built from this material, and target patterns for training the neural network were created through prosodic analysis. A neural network was constructed that takes a sentence's phoneme sequence as input and generates the prosodic information of a particular phoneme; it was trained with the target patterns created from the language material. The network's input pattern consists of 11 phonemes from the sentence's phoneme sequence, and the prosodic information of the middle phoneme appears as the output. To account for segmental effects, the five phonemes before and after the target phoneme are input simultaneously, and to account for syntactic effects, the phoneme's position within the sentence and information about the prosodic phrase were also included in the network's input pattern. After training the network with prosodic information extracted from speech recordings of the language material produced by a specific speaker, it generated synthesized-speech prosody similar to that of natural speech.
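The 11-phoneme input pattern described above (the target phoneme in the middle, five phonemes of context on each side) can be sketched as a sliding window over the sentence's phoneme sequence. A minimal illustration; the '#' padding symbol at sentence edges is an assumption, and the paper's additional position and prosodic-phrase features are omitted.

```python
def phoneme_windows(phonemes, pad="#", width=11):
    """Build 11-phoneme input patterns: each window is centered on one
    target phoneme with (width // 2) context phonemes on each side,
    padded at sentence edges. The pad symbol is an assumption."""
    half = width // 2
    padded = [pad] * half + list(phonemes) + [pad] * half
    return [padded[i:i + width] for i in range(len(phonemes))]

# One window per phoneme; the center element is the prediction target.
windows = phoneme_windows(list("annjəŋ"))
```

Each window would be encoded and fed to the network, which outputs the prosodic information (pitch, duration, energy) for the center phoneme only.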
