• Title/Abstract/Keyword: Prosodic Corpus

Search results: 33

가변 Break를 이용한 코퍼스 기반 일본어 음성 합성기의 성능 향상 방법 (A Performance Improvement Method using Variable Break in Corpus Based Japanese Text-to-Speech System)

  • 나덕수;민소연;이종석;배명진
    • 한국음향학회지 / Vol. 28, No. 2 / pp. 155-163 / 2009
  • In a text-to-speech system, generating prosodic information from the input text requires three basic modules: prosodic phrase boundary prediction, phone duration prediction, and fundamental frequency contour generation. The break index (BI) marks prosodic phrase boundaries in the synthesizer, and accurate BI prediction is needed to generate natural-sounding synthetic speech. However, the BI is often determined somewhat arbitrarily by the meaning of the sentence or the speaker's reading style, which makes accurate prediction very difficult; in Japanese synthesis in particular, accentual phrase boundaries (APB) and major phrase boundaries (MPB) are hard to predict correctly. This paper therefore proposes a method that compensates for APB and MPB prediction errors. Break indices are classified into fixed breaks (FB) and variable breaks (VB), and unit selection is performed accordingly. A BI normally does not change once it has been generated, so an incorrectly generated BI prevents selection of the optimal synthesis units; a VB, by contrast, performs unit selection using both the generated BI and similar BIs, meaning that the BI of the synthesized speech may differ from the generated one. For each BI corresponding to an APB or MPB, a CART (Classification and Regression Tree) predicts whether it is a VB or an FB, and for VBs multiple prosody models of fundamental frequency and phone duration are generated for unit selection. In a MOS test, the original speech scored 4.99, the proposed method 4.25, and the conventional method 4.01, showing that the naturalness of the synthesized speech was improved.
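
A minimal sketch of the kind of CART-based decision the abstract describes, i.e. predicting whether a break index should be treated as a fixed break (FB) or a variable break (VB). It uses scikit-learn's DecisionTreeClassifier as the CART implementation; the feature set and the toy training data are illustrative assumptions, not the paper's.

```python
# Hypothetical sketch: predicting whether a break index behaves as a fixed
# or a variable break with a CART-style classifier.
# Feature names and training data are illustrative, not from the paper.
from sklearn.tree import DecisionTreeClassifier

# Each row: [phrase_length_in_syllables, distance_to_sentence_end, pos_tag_id, pause_ms]
X_train = [
    [4, 12, 1, 0],
    [7,  3, 2, 180],
    [5,  8, 1, 40],
    [9,  2, 3, 220],
]
y_train = ["FB", "VB", "FB", "VB"]   # FB = fixed break, VB = variable break

cart = DecisionTreeClassifier(criterion="gini", max_depth=4, random_state=0)
cart.fit(X_train, y_train)

print(cart.predict([[6, 5, 2, 60]]))  # e.g. ['FB'] or ['VB'] depending on the learned tree
```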

Computer Codes for Korean Sounds: K-SAMPA

  • Kim, Jong-mi
    • The Journal of the Acoustical Society of Korea / Vol. 20, No. 4E / pp. 3-16 / 2001
  • An ASCII encoding of Korean has been developed as an extended phonetic transcription of the Speech Assessment Methods Phonetic Alphabet (SAMPA). SAMPA is a machine-readable phonetic alphabet used for multilingual computing; it has been developed since 1987 and extended to more than twenty languages. The motivation for creating Korean SAMPA (K-SAMPA) is to label Korean speech for a multilingual corpus, or to transcribe the native-language (L1) interfered pronunciation of second-language learners for bilingual education. Korean SAMPA represents each Korean allophone with a particular SAMPA symbol; sounds that closely resemble one another are represented by the same symbol, regardless of the language in which they are uttered. Each symbol represents a speech sound that is spectrally and temporally distinct enough to be perceptually different when heard in isolation, and each type of sound has a separate IPA-like designation. Korean SAMPA is superior to other transcription systems with similar objectives. It describes the cross-linguistic sound quality of Korean better than the official Romanization system proclaimed by the Korean government in July 2000, because it uses an internationally shared phonetic alphabet, and it is phonetically more accurate than the official Romanization in that it dispenses with orthographic adjustments. It is also more convenient for computing than the International Phonetic Alphabet (IPA) because it consists of symbols on a standard keyboard. This paper demonstrates how Korean SAMPA can express allophonic details and prosodic features by adopting the transcription conventions of the extended SAMPA (X-SAMPA) and the prosodic SAMPA (SAMPROSA).

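The core of any SAMPA-style scheme is a machine-readable phone-to-ASCII mapping. The sketch below illustrates such a lookup; the jamo-to-symbol pairs are placeholders chosen for illustration and are not the published K-SAMPA table.

```python
# Illustrative sketch of a phone-to-ASCII mapping in the spirit of K-SAMPA.
# The symbol choices below are placeholders, not the published K-SAMPA inventory.
KSAMPA_LIKE = {
    "ㅂ": "p",      # lax bilabial stop
    "ㅍ": "p_h",    # aspirated bilabial stop (X-SAMPA-style aspiration diacritic)
    "ㅃ": "p*",     # tense bilabial stop (placeholder diacritic)
    "ㅏ": "a",
    "ㅣ": "i",
}

def transcribe(jamo_sequence):
    """Map a sequence of Korean jamo to ASCII phonetic symbols, one per phone."""
    return " ".join(KSAMPA_LIKE.get(j, "?") for j in jamo_sequence)

print(transcribe(["ㅍ", "ㅏ"]))   # -> "p_h a"
```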

연속 음성으로부터 추출한 CVC 음성세그먼트 기반의 음성합성 (Speech Synthesis Based on CVC Speech Segments Extracted from Continuous Speech)

  • 김재홍;조관선;이철희
    • 한국음향학회지 / Vol. 18, No. 7 / pp. 10-16 / 1999
  • This paper proposes a concatenation-based speech synthesizer that uses CVC speech segments extracted from an undesigned continuous-speech corpus. Continuous speech reflects the coarticulation effects between phonemes relatively well and contains natural intonation variation, so natural-sounding synthesis is possible if synthesis units that exploit these properties are chosen. Among the various synthesis units, CVC units have the advantage that concatenation occurs in the stable region of a consonant, so quality degradation at the joins is small, and coarticulation between the surrounding consonants and the vowel is well preserved. This paper classifies the combinations of sentence segments that arise when CVC units are used into four types, analyzes the statistical properties of each type and the quality of the synthesized speech, and proposes a method that uses new compound synthesis units based on CVC. Using the proposed method, CVC speech segments were extracted from an undesigned continuous-speech corpus and various example sentences were synthesized. When a required CVC segment did not exist in the corpus, a demisyllable segment was substituted. Experimental results show that fairly natural speech synthesis is possible with a continuous-speech corpus of about 100 Mbytes.

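A minimal sketch of the fallback strategy mentioned above: prefer a CVC segment from the inventory and back off to two demisyllable (CV + VC) segments when it is missing. The inventories, unit names, and file names are hypothetical.

```python
# Minimal sketch of CVC unit selection with a demisyllable fallback.
# Inventory contents and file names are illustrative placeholders.
cvc_inventory = {("k", "a", "n"): "kan_0123.wav"}
demisyllable_inventory = {("k", "a"): "ka_0456.wav", ("a", "n"): "an_0789.wav"}

def select_units(consonant, vowel, coda):
    cvc = cvc_inventory.get((consonant, vowel, coda))
    if cvc is not None:
        return [cvc]                      # preferred: a single CVC segment
    # fallback: initial demisyllable (CV) + final demisyllable (VC)
    return [demisyllable_inventory[(consonant, vowel)],
            demisyllable_inventory[(vowel, coda)]]

print(select_units("k", "a", "n"))   # ['kan_0123.wav']
```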

감정 인식을 위한 음성의 특징 파라메터 비교 (The Comparison of Speech Feature Parameters for Emotion Recognition)

  • 김원구
    • 한국지능시스템학회 학술대회논문집 / 한국퍼지및지능시스템학회 2004 Spring Conference Proceedings, Vol. 14, No. 1 / pp. 470-473 / 2004
  • In this paper, speech feature parameters are compared for emotion recognition from the speech signal. For this purpose, a corpus of emotional speech, recorded and classified according to emotion by subjective evaluation, was used to build statistical feature vectors such as the average, standard deviation, and maximum value of pitch and energy. MFCC parameters and their derivatives, with or without cepstral mean subtraction, were also used to evaluate the performance of conventional pattern-matching algorithms. Pitch and energy parameters served as prosodic information and MFCC parameters as phonetic information. In the experiments, a vector-quantization-based emotion recognition system was used for speaker- and context-independent emotion recognition. Experimental results showed that the vector-quantization-based emotion recognizer using MFCC parameters performed better than the one using pitch and energy parameters, achieving a recognition rate of 73.3% for speaker- and context-independent classification.

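A hedged sketch of a vector-quantization emotion recognizer in the spirit of the paper: one codebook per emotion trained on MFCC frames, with classification by minimum average quantization distortion. KMeans stands in for the codebook training, and the emotion labels and synthetic data are illustrative.

```python
# Sketch of a VQ emotion recognizer: per-emotion codebooks, minimum-distortion decision.
import numpy as np
from sklearn.cluster import KMeans

def train_codebooks(frames_by_emotion, codebook_size=16):
    """frames_by_emotion: dict emotion -> (n_frames, n_mfcc) array of training frames."""
    return {emo: KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(frames)
            for emo, frames in frames_by_emotion.items()}

def classify(codebooks, frames):
    """Return the emotion whose codebook gives the lowest mean quantization distortion."""
    def distortion(km):
        centers = km.cluster_centers_[km.predict(frames)]
        return np.mean(np.sum((frames - centers) ** 2, axis=1))
    return min(codebooks, key=lambda emo: distortion(codebooks[emo]))

rng = np.random.default_rng(0)
books = train_codebooks({"neutral": rng.normal(0, 1, (200, 13)),
                         "angry":   rng.normal(2, 1, (200, 13))})
print(classify(books, rng.normal(2, 1, (50, 13))))   # expected: 'angry'
```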

Vowel Fundamental Frequency in Manner Differentiation of Korean Stops and Affricates

  • Jang, Tae-Yeoub
    • 음성과학 / Vol. 7, No. 1 / pp. 217-232 / 2000
  • In this study, I investigate the role of post-consonantal fundamental frequency (F0) as a cue for automatic distinction of types of Korean stops and affricates. Rather than examining data obtained by restricting contexts to a minimum to prevent the interference of irrelevant factors, a relatively natural speaker independent speech corpus is analysed. Automatic and statistical approaches are adopted to annotate data, to minimise speaker variability, and to evaluate the results. In spite of possible loss of information during those automatic analyses, statistics obtained suggest that vowel F0 is a useful cue for distinguishing manners of articulation of Korean non-continuant obstruents having the same place of articulation, especially of lax and aspirated stops and affricates. On the basis of the statistics, automatic classification is attempted over the relevant consonants in a specific context where the micro-prosodic effects appear to be maximised. The results confirm the usefulness of this effect in application for Korean phone recognition.

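A toy illustration of the cue the study quantifies: aspirated stops tend to raise the F0 at the onset of the following vowel relative to lax stops, so a speaker-normalized onset-F0 threshold can separate the two manners. The F0 values and the threshold below are invented for illustration.

```python
# Illustrative sketch of classifying stop manner from post-consonantal (vowel-onset) F0.
import numpy as np

onset_f0 = {"lax": [118, 122, 125, 130], "aspirated": [165, 172, 158, 180]}  # Hz, per token

# Speaker normalization: z-score F0 over all of this (hypothetical) speaker's tokens.
all_f0 = np.array(onset_f0["lax"] + onset_f0["aspirated"], dtype=float)
mu, sigma = all_f0.mean(), all_f0.std()

def predict_manner(f0_hz, threshold=0.0):
    """Classify a token as aspirated if its normalized onset F0 exceeds the threshold."""
    return "aspirated" if (f0_hz - mu) / sigma > threshold else "lax"

print([predict_manner(f) for f in (120, 170)])   # ['lax', 'aspirated']
```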

Analysis and Interpretation of Intonation Contours of Slovene

  • Ales Dobnikar
    • 대한음성학회 학술대회논문집 / 대한음성학회 October 1996 Conference Proceedings / pp. 542-547 / 1996
  • Prosodic characteristics of natural speech, especially intonation, in many cases reflect the specific feelings of the speaker at the time of the utterance, with relatively wide variation in speaking style over the same text. We analyzed a speech corpus recorded with ten Slovene speakers and interpreted the observed intonation contours for the purpose of modeling the intonation contour in the synthesis process. Based on the results of this analysis, we devised a scheme for modeling the intonation contour for different types of intonation units. The scheme uses a superpositional approach, which defines the intonation contour as the sum of a global (intonation unit) component and local (accented syllables or syntactic boundaries) components. A near-to-natural intonation contour was obtained by rules, using only the text of the utterance as input.

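A small numeric sketch of the superpositional model described above, where the F0 contour is the sum of a global component over the intonation unit and local components around accented syllables. All constants (declination range, accent positions, amplitudes) are illustrative.

```python
# Superpositional intonation sketch: F0 contour = global declination + local accent bumps.
import numpy as np

t = np.linspace(0.0, 2.0, 200)                     # time axis for one intonation unit (s)

# Global component: a gentle declination from 220 Hz to 180 Hz over the unit.
global_f0 = 220.0 - 20.0 * t

# Local components: Gaussian accent bumps centered on (hypothetical) accented syllables.
def accent(center, amplitude=30.0, width=0.08):
    return amplitude * np.exp(-((t - center) ** 2) / (2 * width ** 2))

f0_contour = global_f0 + accent(0.5) + accent(1.4, amplitude=20.0)
print(round(float(f0_contour[50]), 1))             # F0 near the first accent peak
```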

Acoustic analysis of Korean trisyllabic words produced by English and Korean speakers

  • Lee, Jeong-Hwa;Rhee, Seok-Chae
    • 말소리와 음성과학 / Vol. 10, No. 2 / pp. 1-6 / 2018
  • The current study aimed to investigate the transfer of English word stress rules to the production of Korean trisyllabic words by L1 English learners of Korean. It compared English and Korean speakers' productions of seven Korean words from the corpus L2KSC (Rhee et al., 2005). To this end, it analyzed the syllable duration, intensity, and pitch. The results showed that English and Korean speakers' pronunciations differed markedly in duration and intensity. English learners produced word-initial syllables of greater intensity than Korean speakers, while Korean speakers produced word-final syllables of longer duration than English learners. However, these differences between the two speaker groups were not related to the expected L1 transfer. The tonal patterns produced by English and Korean speakers were similar, reflecting L1 English speakers' learning of the L2 Korean prosodic system.
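
A sketch of the kind of per-syllable measurement such a study relies on: duration, mean RMS intensity, and mean F0 over hand-labeled syllable intervals. It assumes librosa is available; the audio file name and syllable boundaries are placeholders.

```python
# Per-syllable duration, intensity, and F0 measurement (file and boundaries are placeholders).
import numpy as np
import librosa

y, sr = librosa.load("word_3syl.wav", sr=None)                 # placeholder file
syllables = [(0.00, 0.18), (0.18, 0.41), (0.41, 0.66)]         # placeholder boundaries (s)

f0, _, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
rms = librosa.feature.rms(y=y)[0]
times_f0 = librosa.times_like(f0, sr=sr)
times_rms = librosa.times_like(rms, sr=sr)

for i, (start, end) in enumerate(syllables, 1):
    dur = end - start
    mean_rms = float(np.mean(rms[(times_rms >= start) & (times_rms < end)]))
    seg_f0 = f0[(times_f0 >= start) & (times_f0 < end)]
    mean_f0 = float(np.nanmean(seg_f0)) if np.any(~np.isnan(seg_f0)) else float("nan")
    print(f"syllable {i}: dur={dur:.2f}s rms={mean_rms:.4f} f0={mean_f0:.1f}Hz")
```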

Analysis of the Timing of Spoken Korean Using a Classification and Regression Tree (CART) Model

  • Chung, Hyun-Song;Huckvale, Mark
    • 음성과학 / Vol. 8, No. 1 / pp. 77-91 / 2001
  • This paper investigates the timing of Korean spoken in a news-reading speech style in order to improve the naturalness of durations used in Korean speech synthesis. Each segment in a corpus of 671 read sentences was annotated with 69 segmental and prosodic features so that the measured duration could be correlated with the context in which it occurred. A CART model based on the features showed a correlation coefficient of 0.79 with an RMSE (root mean squared prediction error) of 23 ms between actual and predicted durations in reserved test data. These results are comparable with recent published results in Korean and similar to results found in other languages. An analysis of the classification tree shows that phrasal structure has the greatest effect on the segment duration, followed by syllable structure and the manner features of surrounding segments. The place features of surrounding segments only have small effects. The model has application in Korean speech synthesis systems.

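A hedged sketch of a CART duration model of the sort described above: a regression tree over context features, evaluated by RMSE and correlation between predicted and actual durations. The four features and the synthetic data are illustrative, not the paper's 69 features.

```python
# CART-style segment duration model on synthetic context features.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Columns (hypothetical): [phone_class_id, position_in_phrase, syllable_structure_id, is_phrase_final]
X = rng.integers(0, 5, size=(300, 4)).astype(float)
durations_ms = 60 + 25 * X[:, 3] + 5 * X[:, 1] + rng.normal(0, 8, 300)   # synthetic targets

tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=5, random_state=0)
tree.fit(X[:200], durations_ms[:200])

pred = tree.predict(X[200:])
rmse = float(np.sqrt(np.mean((pred - durations_ms[200:]) ** 2)))
corr = float(np.corrcoef(pred, durations_ms[200:])[0, 1])
print(f"RMSE={rmse:.1f} ms, r={corr:.2f}")
```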

대화체 억양구말 형태소의 경계성조 연구 (Boundary Tones of Intonational Phrase-Final Morphemes in Dialogues)

  • 한선희
    • 음성과학 / Vol. 7, No. 4 / pp. 219-234 / 2000
  • The study of boundary tones in connected speech or dialogues is one of the most underdeveloped areas of Korean prosody. This paper concerns the boundary tones of intonational phrase-final morphemes as they appear in a speech corpus of dialogues. Results of the phonetic analysis show that different kinds of boundary tones are realized depending on the position of the intonational phrase-final morphemes in the sentence. The study also shows that boundary tone patterning is somewhat related to sentence structure, and, for better speech recognition and speech synthesis, it presents a simple model of boundary tones based on the fundamental frequency contour. The results will contribute to our understanding of the prosodic patterns of Korean connected speech and dialogues.

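A minimal sketch of deriving a boundary-tone label from the phrase-final stretch of the F0 contour, in the spirit of the F0-based model the abstract mentions. The two-way H%/L% label set, the 200 ms window, and the contours are illustrative assumptions.

```python
# Label an intonational-phrase boundary tone from the slope of the final F0 stretch.
import numpy as np

def boundary_tone(f0_hz, frame_rate_hz=100, window_s=0.2):
    """Return 'H%' if F0 rises over the final window of the phrase, else 'L%'."""
    n = max(2, int(window_s * frame_rate_hz))
    tail = np.asarray(f0_hz[-n:], dtype=float)
    slope = np.polyfit(np.arange(len(tail)) / frame_rate_hz, tail, 1)[0]   # Hz per second
    return "H%" if slope > 0 else "L%"

falling = np.linspace(180, 140, 120)   # a phrase-final fall
rising = np.linspace(150, 210, 120)    # a phrase-final rise (e.g. a question)
print(boundary_tone(falling), boundary_tone(rising))   # L% H%
```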

Feature Extraction Based on DBN-SVM for Tone Recognition

  • Chao, Hao;Song, Cheng;Lu, Bao-Yun;Liu, Yong-Li
    • Journal of Information Processing Systems / Vol. 15, No. 1 / pp. 91-99 / 2019
  • An innovative tone modeling framework based on deep neural networks for tone recognition is proposed in this paper. In the framework, both prosodic features and articulatory features are first extracted as the raw input data. Then, a five-layer deep belief network is used to obtain high-level tone features. Finally, a support vector machine is trained to recognize tones. The 863 corpus was used in the experiments, and the results show that the proposed method improves recognition accuracy significantly for all tone patterns. The average tone recognition rate reached 83.03%, which is 8.61% higher than that of the original method.
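
A hedged sketch of the DBN-feature-plus-SVM idea: stacked restricted Boltzmann machines as an unsupervised feature extractor feeding an SVM classifier. scikit-learn's BernoulliRBM stands in for the deep belief network, and the synthetic tone data is illustrative.

```python
# Stacked RBM feature extraction followed by an SVM tone classifier (toy data).
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (400, 30))                 # e.g. prosodic + articulatory features per syllable
y = rng.integers(1, 5, 400)                     # tone labels 1-4 (synthetic)

model = Pipeline([
    ("scale", MinMaxScaler()),                  # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("svm", SVC(kernel="rbf", C=1.0)),
])
model.fit(X[:300], y[:300])
print(f"toy accuracy: {model.score(X[300:], y[300:]):.2f}")   # near chance on random data
```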