• Title/Abstract/Keyword: Speech Synthesis


단음절 합성단위음을 사용한 시간영역에서의 한국어 다음절어 규칙합성을 위한 음절간 접속구간에서의 에너지 흐름 제어에 관한 연구 (On the Control of Energy Flow between the Connection Parts of Syllables for the Korean Multi-Syllabic Speech Synthesis in the Time Domain Using Mono-syllables as a Synthesis Unit)

  • 강찬희;김윤석
    • 한국통신학회논문지 / Vol. 24, No. 9B / pp. 1767-1774 / 1999
  • This paper studies the synthesis of multi-syllabic words in the time domain using mono-syllable synthesis units, focusing in particular on controlling the shape of the energy flow at the junctions where waveforms are concatenated. For this purpose, prosody-control parameters extracted in the time domain were used, and connection rules for the waveform shapes between syllables were derived and applied during synthesis; the results of controlling the energy-flow shape in the time domain are presented. Experiments showed that the energy-flow discontinuities arising when stored mono-syllable waveforms are concatenated could be removed, and that the quality and naturalness of the synthesized speech improved.

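The abstract describes removing energy discontinuities where stored syllable waveforms meet. The paper's own connection rules are not given in the abstract, so the sketch below only illustrates the general idea: a raised-cosine cross-fade with a short-time energy match at the junction (the function name, sampling rate, and overlap length are assumptions).

```python
import numpy as np

def crossfade_concat(a, b, sr=16000, overlap_ms=10):
    """Concatenate two syllable waveforms with a raised-cosine cross-fade.

    Generic illustration only; the paper's own time-domain connection
    rules are not reproduced here.
    """
    n = int(sr * overlap_ms / 1000)                       # overlap in samples
    fade = 0.5 * (1 - np.cos(np.pi * np.arange(n) / n))   # 0 -> 1 ramp
    # Match the short-time energy of b's head to a's tail to avoid a level jump.
    ea = np.sqrt(np.mean(a[-n:] ** 2)) + 1e-12
    eb = np.sqrt(np.mean(b[:n] ** 2)) + 1e-12
    b = b * (ea / eb)
    joint = a[-n:] * (1 - fade) + b[:n] * fade            # smooth junction
    return np.concatenate([a[:-n], joint, b[n:]])
```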

음성부호화 방식에 있어서 FIR-STREAK 필터를 사용한 개별 피치펄스에 관한 연구 (A Study on Individual Pitch Pulse using FIR-STREAK Filter in Speech Coding Method)

  • 이시우
    • 한국콘텐츠학회논문지 / Vol. 4, No. 4 / pp. 65-70 / 2004
  • This study proposes an individual pitch-pulse extraction method for speech coding that does not normalize pitch intervals, in order to reduce pitch-extraction errors and adapt to variations in pitch spacing. Extraction rates of 96% for male speech and 85% for female speech were obtained, and the method is expected to be applicable to speech coding, speech analysis, speech synthesis, and speech recognition.

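As a rough illustration of extracting individual pitch pulses without normalizing their intervals, the sketch below picks prominent peaks from a residual-like signal; the paper's FIR-STREAK filter front end is not reconstructed here, and all names and thresholds are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def pick_pitch_pulses(residual, sr=8000, f0_max=400):
    """Pick individual pitch pulses as prominent peaks of a residual signal.

    Hypothetical stand-in for the paper's FIR-STREAK front end: each
    pulse keeps its own spacing, i.e. pitch intervals are NOT normalized.
    """
    min_dist = int(sr / f0_max)               # closest allowed pulse spacing
    peaks, _ = find_peaks(residual,
                          distance=min_dist,
                          height=0.3 * np.max(np.abs(residual)))
    intervals = np.diff(peaks) / sr            # per-pulse periods, in seconds
    return peaks, intervals
```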

Modality-Based Sentence-Final Intonation Prediction for Korean Conversational-Style Text-to-Speech Systems

  • Oh, Seung-Shin;Kim, Sang-Hun
    • ETRI Journal / Vol. 28, No. 6 / pp. 807-810 / 2006
  • This letter presents a prediction model for sentence-final intonations for Korean conversational-style text-to-speech systems in which we introduce the linguistic feature of 'modality' as a new parameter. Based on their function and meaning, we classify tonal forms in speech data into tone types meaningful for speech synthesis and use the result of this classification to build our prediction model using a tree-structured classification algorithm. In order to show that modality is more effective for the prediction model than features such as sentence type or speech act, an experiment is performed on a test set of 970 utterances with a training set of 3,883 utterances. The results show that modality makes a higher contribution to the determination of sentence-final intonation than sentence type or speech act, and that prediction accuracy improves by up to 25% when the feature of modality is introduced.

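The tree-structured classifier can be pictured as below: categorical features, including modality, are one-hot encoded and fed to a decision tree that predicts a boundary-tone type. The feature values, tone labels, and tiny training set are all hypothetical; the paper's real feature set and tone inventory are not given in the abstract.

```python
# Hypothetical features and labels, for illustration only.
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

# Rows are [sentence_type, speech_act, modality].
X = [["declarative",   "inform",  "assertion"],
     ["interrogative", "ask",     "question"],
     ["imperative",    "request", "command"]]
y = ["L%", "H%", "HL%"]          # sentence-final boundary tones (hypothetical)

model = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                      DecisionTreeClassifier())
model.fit(X, y)
print(model.predict([["interrogative", "ask", "question"]]))  # -> ['H%']
```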

Performance Evaluation of Novel AMDF-Based Pitch Detection Scheme

  • Kumar, Sandeep
    • ETRI Journal / Vol. 38, No. 3 / pp. 425-434 / 2016
  • A novel average magnitude difference function (AMDF)-based pitch detection scheme (PDS) is proposed to achieve better performance in speech quality. A performance evaluation of the proposed PDS is carried out through both a simulation and a real-time implementation of a speech analysis-synthesis system. The parameters used to compare the performance of the proposed PDS with that of PDSs based on a cepstrum, an autocorrelation function (ACF), an AMDF, or a circular AMDF (CAMDF) are as follows: percentage gross pitch error (%GPE); a subjective listening test; an objective speech quality assessment; a speech intelligibility test; a synthesized speech waveform; computation time; and memory consumption. The proposed PDS results in lower %GPE and better synthesized speech quality and intelligibility for different speech signals as compared to the cepstrum-, ACF-, AMDF-, and CAMDF-based PDSs. The computation time of the proposed PDS is also less than that for the cepstrum-, ACF-, and CAMDF-based PDSs. Moreover, the total memory consumed by the proposed PDS is less than that for the ACF- and cepstrum-based PDSs.
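
For reference, the classic AMDF on which the scheme builds computes $D(k) = \frac{1}{N-k}\sum_{n}|x(n) - x(n+k)|$ and takes the lag of the deepest valley as the pitch period. The sketch below implements that textbook baseline, not the paper's modified scheme, which the abstract does not spell out.

```python
import numpy as np

def amdf_pitch(frame, sr=8000, f0_min=60, f0_max=400):
    """Estimate the pitch of one voiced frame with the classic AMDF.

    Textbook baseline, not the paper's novel scheme. The frame must be
    longer than the largest lag (sr / f0_min samples).
    """
    n = len(frame)
    lags = np.arange(int(sr / f0_max), int(sr / f0_min))
    amdf = np.array([np.mean(np.abs(frame[:n - k] - frame[k:]))
                     for k in lags])
    best = lags[np.argmin(amdf)]      # lag with the deepest valley
    return sr / best                  # F0 in Hz
```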

Text-driven Speech Animation with Emotion Control

  • Chae, Wonseok;Kim, Yejin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 8 / pp. 3473-3487 / 2020
  • In this paper, we present a new approach to creating speech animation with emotional expressions using a small set of example models. To generate realistic facial animation, two example models called key visemes and expressions are used for lip-synchronization and facial expressions, respectively. The key visemes represent lip shapes of phonemes such as vowels and consonants, while the key expressions represent basic emotions of a face. Our approach utilizes a text-to-speech (TTS) system to create a phonetic transcript for the speech animation. Based on a phonetic transcript, a sequence of speech animation is synthesized by interpolating the corresponding sequence of key visemes. Using an input parameter vector, the key expressions are blended by a method of scattered data interpolation. During the synthesizing process, an importance-based scheme is introduced to combine both lip-synchronization and facial expressions into one animation sequence in real time (over 120 Hz). The proposed approach can be applied to diverse types of digital content and applications that use facial animation with high accuracy (over 90%) in speech recognition.
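
The viseme interpolation step can be pictured as below: face-mesh vertices are blended between consecutive key visemes according to where the current time falls in the phonetic transcript. The data layout and the linear blend are assumptions for illustration; the paper's expression blending via scattered data interpolation is not reproduced.

```python
import numpy as np

def interpolate_visemes(key_visemes, key_times, t):
    """Blend face-mesh vertices between two consecutive key visemes.

    key_visemes: (K, V, 3) vertex positions for K key shapes,
    key_times:   (K,) seconds at which each key viseme is fully reached.
    Hypothetical data layout, for illustration only.
    """
    i = np.clip(np.searchsorted(key_times, t) - 1, 0, len(key_times) - 2)
    w = (t - key_times[i]) / (key_times[i + 1] - key_times[i])  # blend weight
    w = np.clip(w, 0.0, 1.0)
    return (1 - w) * key_visemes[i] + w * key_visemes[i + 1]
```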

Speech Feature Extraction Based on the Human Hearing Model

  • Chung, Kwang-Woo;Kim, Paul;Hong, Kwang-Seok
    • 대한음성학회:학술대회논문집 / 대한음성학회 1996년도 10월 학술대회지 / pp. 435-447 / 1996
  • In this paper, we propose a method that extracts speech features using a hearing model through signal-processing techniques. The proposed method includes the following procedure: normalization of the short-time speech block by its maximum value; multi-resolution analysis using the discrete wavelet transform and re-synthesis using the inverse discrete wavelet transform; differentiation after analysis and synthesis; and full-wave rectification and integration. In order to verify the performance of the proposed speech feature in the speech recognition task, Korean digit recognition experiments were carried out using both DTW and a VQ-HMM. The results showed that, with DTW, the recognition rates were 99.79% and 90.33% for the speaker-dependent and speaker-independent tasks, respectively, and, with the VQ-HMM, the rates were 96.5% and 81.5%, respectively. This indicates that the proposed speech feature has the potential to serve as a simple and efficient feature for recognition tasks.

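The abstract's processing chain maps naturally onto a discrete wavelet decomposition. A minimal sketch, assuming PyWavelets with a db4 wavelet and 4 decomposition levels (the abstract names neither): normalize, decompose, re-synthesize each band, differentiate, rectify, and integrate.

```python
import numpy as np
import pywt

def hearing_model_feature(block, wavelet="db4", level=4):
    """Sketch of the abstract's pipeline: normalize, DWT multi-resolution
    analysis, per-band re-synthesis, differentiation, full-wave
    rectification, integration. Wavelet choice and level are assumptions.
    """
    x = block / (np.max(np.abs(block)) + 1e-12)      # peak normalization
    coeffs = pywt.wavedec(x, wavelet, level=level)   # multi-resolution analysis
    feats = []
    for i in range(len(coeffs)):
        # Re-synthesize one band by zeroing every other coefficient array.
        kept = [c if j == i else np.zeros_like(c)
                for j, c in enumerate(coeffs)]
        band = pywt.waverec(kept, wavelet)
        band = np.abs(np.diff(band))                 # differentiate, rectify
        feats.append(np.sum(band))                   # integrate (energy-like)
    return np.array(feats)
```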

HMM 기반 혼용 언어 음성합성을 위한 모델 파라메터의 음절 경계에서의 평활화 기법 (Syllable-Level Smoothing of Model Parameters for HMM-Based Mixed-Lingual Text-to-Speech)

  • 양종열;김홍국
    • 말소리와 음성과학 / Vol. 2, No. 1 / pp. 87-95 / 2010
  • In this paper, we address issues associated with mixed-lingual text-to-speech based on context-dependent HMMs, where there are multiple sets of HMMs corresponding to each individual language. In particular, we propose smoothing techniques for synthesis parameters at the boundaries between different languages to obtain more natural speech quality. In other words, mel-frequency cepstral coefficients (MFCCs) at the language boundaries are smoothed by applying several linear and nonlinear approximation techniques. An informal listening test shows that speech smoothed by a modified version of linear least-squares approximation (MLLSA) and a quadratic interpolation (QI) method is preferred over speech synthesized without any smoothing technique.

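A generic least-squares stand-in for the boundary smoothing the abstract describes: each MFCC track is refit with a low-order polynomial over a window of frames around the language boundary. The window width, polynomial degree, and function names are assumptions; the paper's MLLSA and QI formulations are not given in the abstract.

```python
import numpy as np

def smooth_boundary(mfcc, boundary, width=5, degree=2):
    """Refit each MFCC track with a low-order least-squares polynomial
    over the frames around a language boundary.

    mfcc: (frames, dims) array; boundary: frame index of the junction.
    Generic stand-in, not the paper's MLLSA/QI variants.
    """
    lo, hi = boundary - width, boundary + width
    t = np.arange(lo, hi)
    out = mfcc.copy()
    for d in range(mfcc.shape[1]):                    # per cepstral dimension
        coef = np.polyfit(t, mfcc[lo:hi, d], degree)  # least-squares fit
        out[lo:hi, d] = np.polyval(coef, t)           # replace with smooth arc
    return out
```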

ZINC 함수 여기신호를 이용한 분석-합성 구조의 초 저속 음성 부호화기 (Very Low Bit Rate Speech Coder of Analysis by Synthesis Structure Using ZINC Function Excitation)

  • 서상원;김영준;김종학;김영주;이인성
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2006년도 하계종합학술대회 / pp. 349-350 / 2006
  • This paper presents a very low bit rate speech coder, ZFE-CELP (ZINC Function Excitation-Code Excited Linear Prediction). The ZFE-CELP codec models the excitation signal with a ZINC function or with CELP according to the frame characteristic, i.e., voiced or unvoiced speech. The paper also suggests strategies to improve the speech quality of the very low bit rate coder.

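A minimal sketch of a ZINC excitation pulse, assuming the common definition z(n) = A·sinc(n) + B·cosc(n) with cosc(n) = (1 - cos(πn))/(πn); the abstract itself does not define the function, so treat this as an assumption. Voiced frames would be modeled with such pulses at pitch epochs and unvoiced frames with a CELP codebook.

```python
import numpy as np

def zinc_pulse(n, A=1.0, B=0.5):
    """ZINC basis pulse z(n) = A*sinc(n) + B*cosc(n) (assumed definition;
    the abstract does not define it). np.sinc(n) = sin(pi*n)/(pi*n).
    """
    n = np.asarray(n, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        cosc = np.where(n == 0.0, 0.0,
                        (1.0 - np.cos(np.pi * n)) / (np.pi * n))
    return A * np.sinc(n) + B * cosc

# A voiced frame would place such pulses at pitch epochs; unvoiced frames
# would fall back to a CELP codebook search (not sketched here).
pulse = zinc_pulse(np.arange(-40, 40))
```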

PROSODY IN SPEECH TECHNOLOGY - National project and some of our related works -

  • Hirose Keikichi
    • 한국음향학회:학술대회논문집 / 한국음향학회 2002년도 하계학술발표대회 논문집 제21권 1호 / pp. 15-18 / 2002
  • Prosodic features of speech are known to play an important role in the transmission of linguistic information in human conversation, and their role in the transmission of para- and non-linguistic information is even greater. In spite of their importance in human conversation, from an engineering viewpoint research has focused mainly on segmental features and not so much on prosodic features. With the aim of promoting research on prosody, a research project, 'Prosody and Speech Processing,' is now under way. A rough sketch of the project is first given in the paper. The paper then introduces several prosody-related research efforts ongoing in our laboratory, including corpus-based fundamental frequency contour generation, speech-rate control for dialogue-like speech synthesis, analysis of the prosodic features of emotional speech, reply speech generation in spoken dialogue systems, and language modeling with prosodic boundaries.


대화체 억양구말 형태소의 경계성조 연구 (Boundary Tones of Intonational Phrase-Final Morphemes in Dialogues)

  • 한선희
    • 음성과학 / Vol. 7, No. 4 / pp. 219-234 / 2000
  • The study of boundary tones in connected speech or dialogues is one of the most underdeveloped areas of Korean prosody. This paper concerns the boundary tones of intonational phrase-final morphemes as shown in a speech corpus of dialogues. Phonetic analysis shows that different kinds of boundary tones are realized depending on the position of the intonational phrase-final morphemes in the sentence. The study also shows that boundary-tone patterning is somewhat related to sentence structure and, for better speech recognition and speech synthesis, it presents a simple model of boundary tones based on the fundamental frequency contour. The results will contribute to our understanding of the prosodic patterns of Korean connected speech and dialogues.

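As a toy stand-in for the F0-contour-based boundary-tone model the abstract mentions without detailing, the sketch below labels an intonational-phrase-final stretch of F0 as rising, falling, or level from its semitone movement; the tone labels and threshold are hypothetical.

```python
import numpy as np

def boundary_tone(f0_tail, thresh_st=1.5):
    """Label an IP-final F0 stretch as rising (H%), falling (L%), or
    level from its semitone movement. Toy model; the labels and
    threshold are hypothetical, not the paper's.
    """
    st = 12 * np.log2(np.asarray(f0_tail, dtype=float) / f0_tail[0])
    half = len(st) // 2
    delta = st[half:].mean() - st[:half].mean()   # late minus early height
    if delta > thresh_st:
        return "H%"
    if delta < -thresh_st:
        return "L%"
    return "M%"   # level/mid, hypothetical label
```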