• Title/Abstract/Keyword: Speech Synthesis

Search results: 381

Unit Generation Based on Phrase Break Strength and Pruning for Corpus-Based Text-to-Speech

  • Kim, Sang-Hun;Lee, Young-Jik;Hirose, Keikichi
    • ETRI Journal
    • /
    • Vol. 23, No. 4
    • /
    • pp.168-176
    • /
    • 2001
  • This paper discusses two important issues in corpus-based synthesis: generating synthesis units based on phrase break strength information and pruning redundant synthesis unit instances. First, a new sentence set for recording was designed to build an efficient synthesis database that reflects the characteristics of the Korean language. To obtain prosodic-context-sensitive units, major prosodic phrases were graded into five distinct levels according to pause length, and intra-word triphones were discriminated using these levels. Synthetic speech was then generated with the phrase-break-strength-labeled units and evaluated subjectively. Second, a new pruning method based on weighted vector quantization (WVQ) was proposed to eliminate redundant synthesis unit instances from the synthesis database. WVQ takes the relative importance of each instance into account when clustering similar instances with the vector quantization (VQ) technique. The proposed method was compared with two conventional pruning methods through objective and subjective evaluations of synthetic speech quality: one that simply limits the maximum number of instances, and another based on normal VQ-based clustering. For the same reduction rate in the number of instances, the proposed method showed the best performance, and synthetic speech at a 45% reduction rate had almost no perceptible degradation compared with synthetic speech without instance reduction. A minimal code sketch of the weighted-VQ pruning idea follows this entry.

  • PDF
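
A minimal, illustrative sketch of the weighted-VQ pruning idea described in the abstract above: unit instances are clustered with a weighted k-means (each instance carrying an importance weight), and one representative instance is kept per cluster. The feature representation, the weights, and the plain weighted k-means used here are stand-ins, not the paper's exact formulation.

```python
import numpy as np

def wvq_prune(features, weights, n_keep, n_iter=20, seed=0):
    """Cluster instances with weighted k-means and keep, for each cluster,
    the single instance nearest its weighted centroid."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), n_keep, replace=False)]
    for _ in range(n_iter):
        # assign each instance to its nearest centroid
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # move each centroid to the importance-weighted mean of its members
        for k in range(n_keep):
            m = labels == k
            if m.any():
                w = weights[m][:, None]
                centroids[k] = (w * features[m]).sum(axis=0) / w.sum()
    kept = []
    for k in range(n_keep):
        idx = np.flatnonzero(labels == k)
        if idx.size:
            kept.append(idx[np.argmin(np.linalg.norm(features[idx] - centroids[k], axis=1))])
    return sorted(kept)

# Example: prune 1000 instances of one triphone down to 550 (a 45% reduction).
X = np.random.rand(1000, 13)      # e.g., cepstral feature vectors per instance
w = np.random.rand(1000) + 0.1    # relative importance of each instance
survivors = wvq_prune(X, w, n_keep=550)
```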

Design and Implementation of a Speech Synthesis Engine and a Plug-in for Internet Web Pages

  • 이희만;김지영
    • Journal of the Korea Information Processing Society
    • /
    • Vol. 7, No. 2
    • /
    • pp.461-469
    • /
    • 2000
  • This paper describes the design and implementation of a speech synthesis engine and a Netscape plug-in that extract text from Internet web pages and synthesize it into speech. When a web page embedding the audio/x-esp MIME type is encountered, the corresponding plug-in is launched; the plug-in fetches the HTML document specified by the URL from the network and passes it to a commander object. The commander object parses the HTML document and extracts tags that control the synthesis engine. These control tags carry information such as a change of synthesis database and parameters for adjusting the duration or pitch of the synthesized speech, so the synthetic output can be controlled dynamically. The commander object also extracts the sentences marked by specific tags inside the HTML document, preprocesses them, and generates a command stream for the synthesis engine. The synthesis engine fetches the command stream, interprets each command, and executes the corresponding member function to synthesize speech. The commander object and the synthesis engine are designed as independent objects to improve portability and flexibility. A minimal sketch of the tag-parsing and command-stream idea follows this entry.

  • PDF
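
The commander-object design above (parse the HTML, pull out synthesis-control tags, emit a command stream for the engine) can be sketched as follows. The tag names, attributes, and command tuples are hypothetical, since the paper's actual tag set and command format are not given.

```python
from html.parser import HTMLParser

class CommanderObject(HTMLParser):
    """Parses an HTML document, turning control tags and plain text
    into a command stream for a synthesis engine."""
    def __init__(self):
        super().__init__()
        self.commands = []                      # command stream for the engine

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "tts-db":                     # switch the synthesis database
            self.commands.append(("SET_DB", attrs.get("name")))
        elif tag == "tts-pitch":                # adjust the pitch scale
            self.commands.append(("SET_PITCH", float(attrs.get("scale", 1.0))))

    def handle_data(self, data):
        text = data.strip()
        if text:                                # plain text becomes SPEAK commands
            self.commands.append(("SPEAK", text))

commander = CommanderObject()
commander.feed('<tts-db name="female1"/><tts-pitch scale="1.2"/>Web page body text')
for command in commander.commands:
    print(command)                              # the engine would fetch and execute these
```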

Design and Implementation of a Korean Text-to-Speech System Using Diphones

  • 정준구
    • The Acoustical Society of Korea: Conference Proceedings
    • /
    • Proceedings of the 11th Workshop on Speech Communication and Signal Processing, The Acoustical Society of Korea, 1994 (SCAS Vol. 11, No. 1)
    • /
    • pp.91-94
    • /
    • 1994
  • This paper describes the design and implementation of a Korean text-to-speech system. A parametric synthesis method is adopted, with the PARCOR coefficients obtained from LPC analysis used as the acoustic parameters. The diphone is used as the synthesis unit because it preserves the basic naturalness of human speech; the diphone database consists of 1,228 PCM files. LPC synthesis has the drawback that the clarity of the synthesized speech degrades when unvoiced sounds are synthesized. In this paper, the clarity of the synthesized speech is improved by using the residual signal as the excitation for unvoiced sounds. In addition, to improve naturalness, the prosody of the synthesized speech is controlled by adjusting its energy and pitch patterns. The synthesis system is implemented on a PC/486 and uses a 70 Hz-4.5 kHz band-pass filter for speech input/output, an amplifier, and a TMS320C30 DSP board. A minimal sketch of residual-excited LPC resynthesis follows this entry.

  • PDF
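
A minimal sketch of residual-excited LPC resynthesis, the technique the abstract credits for improving the clarity of unvoiced sounds: the LPC inverse filter produces a residual, which is then used to excite the all-pole synthesis filter. librosa and SciPy are assumed here, the frame is a synthetic stand-in, and PARCOR conversion, framing, and prosody control are omitted.

```python
import numpy as np
import librosa
from scipy.signal import lfilter

def lpc_resynthesize(frame, order=12):
    a = librosa.lpc(frame, order=order)   # LPC coefficients [1, a1, ..., ap]
    residual = lfilter(a, [1.0], frame)   # inverse filtering -> excitation (residual)
    return lfilter([1.0], a, residual)    # excite the all-pole filter with the residual

sr = 8000
t = np.arange(240) / sr                                    # one 30 ms analysis frame
frame = np.sin(2 * np.pi * 180 * t) + 0.2 * np.random.randn(240)
out = lpc_resynthesize(frame)                              # ~= frame, by construction
```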

A Study on Voice Conversion with HMM-Based Korean Speech Synthesis

  • 김일환;배건성
    • Malsori (Journal of the Phonetic Society of Korea)
    • /
    • Vol. 68
    • /
    • pp.65-74
    • /
    • 2008
  • A statistical parametric speech synthesis system based on hidden Markov models (HMMs) has grown in popularity over the last few years because it needs less memory and lower computational complexity and is better suited to embedded systems than a corpus-based unit-concatenation text-to-speech (TTS) system. It also has the advantage that the voice characteristics of the synthetic speech can be modified easily by transforming the HMM parameters appropriately. In this paper, we present experimental results of voice characteristic conversion using an HMM-based Korean speech synthesis system. The results show that conversion of voice characteristics can be achieved using only a few sentences uttered by a target speaker: synthetic speech generated from models adapted with just ten sentences was very close to that from speaker-dependent models trained on 646 sentences. A minimal sketch of the mean-adaptation idea follows this entry.

  • PDF
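
A toy sketch of the adaptation idea behind the voice conversion above: pull a speaker-independent HMM state mean toward the sample mean of a small amount of target-speaker data (a MAP-style interpolation). This is a simplified stand-in for the adaptation actually used in HMM-based synthesis (e.g., MLLR/MAP), with made-up dimensions and data.

```python
import numpy as np

def map_adapt_mean(si_mean, target_frames, tau=10.0):
    """Interpolate a state's output mean toward the target speaker's sample
    mean; tau controls how strongly a small adaptation set is trusted."""
    n = len(target_frames)
    target_mean = target_frames.mean(axis=0)
    return (tau * si_mean + n * target_mean) / (tau + n)

si_mean = np.zeros(25)                        # speaker-independent state mean (e.g., mel-cepstrum)
adapt_data = np.random.randn(200, 25) + 0.5   # frames aligned to this state from ~10 adaptation sentences
adapted_mean = map_adapt_mean(si_mean, adapt_data)
```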

Improved Text-to-Speech Synthesis System Using Articulatory Synthesis and Concatenative Synthesis

  • 이근희;김동주;홍광석
    • The Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • Proceedings of the 2002 Summer Annual Conference of the Institute of Electronics Engineers of Korea (4)
    • /
    • pp.369-372
    • /
    • 2002
  • In this paper, we present an improved TTS synthesis system that combines articulatory synthesis and concatenative synthesis. In concatenative synthesis, segments of speech are excised from spoken utterances and connected to form the desired speech signal. We adopt LPC as the parameter set, vector quantization (VQ) to reduce memory requirements, and TD-PSOLA to address the naturalness problem. A minimal sketch of TD-PSOLA-style pitch modification follows this entry.

  • PDF
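
A minimal sketch of TD-PSOLA-style pitch modification, the technique the abstract uses to address naturalness: two-period Hann-windowed grains are taken at the analysis pitch marks and overlap-added at synthesis marks spaced period/pitch_scale apart. Pitch-mark estimation and duration compensation (grain repetition or skipping) are omitted, and the input is a toy signal.

```python
import numpy as np

def td_psola_pitch(x, marks, pitch_scale):
    period = int(np.median(np.diff(marks)))
    hop = max(1, int(round(period / pitch_scale)))   # synthesis mark spacing
    win = np.hanning(2 * period)
    out = np.zeros(len(x) + 2 * period)
    t_out = period
    for m in marks:
        if m - period < 0 or m + period > len(x) or t_out + period > len(out):
            continue
        out[t_out - period:t_out + period] += x[m - period:m + period] * win
        t_out += hop
    return out[:t_out]

sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 100 * t)                  # toy 100 Hz "voiced" signal
marks = np.arange(160, len(x) - 160, 160)        # pitch marks every 10 ms
y = td_psola_pitch(x, marks, pitch_scale=1.25)   # raise F0 by roughly 25%
```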

'Hanmal' Korean Language Diphone Database for Speech Synthesis

  • Chung, Hyun-Song
    • Speech Sciences
    • /
    • Vol. 12, No. 1
    • /
    • pp.55-63
    • /
    • 2005
  • This paper introduces the 'Hanmal' Korean-language diphone database for speech synthesis, which has been publicly available on the MBROLA web site since 1999 but has never been properly published in a journal. The diphone database is compatible with the MBROLA programme for high-quality multilingual speech synthesis. The paper describes the usefulness of the database, its phonetic and phonological structure, and the process of creating the text corpus. A machine-readable Korean SAMPA convention for the control data input to the MBROLA application is also suggested. Diphone concatenation and prosody manipulation are performed using the MBR-PSOLA algorithm, and a set of segment duration models can be applied to Korean diphone synthesis. A minimal sketch of MBROLA-style control input follows this entry.

  • PDF
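
A minimal sketch of MBROLA-style control input: each line of a .pho file carries a phoneme label, a duration in milliseconds, and optional (position %, F0 Hz) pitch points. The Korean SAMPA labels below are purely illustrative; the Hanmal database's actual symbol inventory is defined in the paper and the MBROLA distribution.

```python
# Write a tiny .pho control file and note how it would be passed to MBROLA.
phones = [
    ("h", 60, [(50, 120)]),                # label, duration (ms), pitch points
    ("a", 120, [(0, 120), (100, 110)]),
    ("n", 70, []),
    ("_", 200, []),                        # pause
]

with open("hanmal_demo.pho", "w") as f:
    for label, dur, pitch in phones:
        points = " ".join(f"{pos} {f0}" for pos, f0 in pitch)
        f.write(f"{label} {dur} {points}".rstrip() + "\n")

# The file would then be synthesized with the MBROLA binary and a voice
# database, e.g.:  mbrola <korean-voice-file> hanmal_demo.pho out.wav
```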

Automatic Synthesis Method Using a Prosody-Rich Database

  • 김상훈
    • The Acoustical Society of Korea: Conference Proceedings
    • /
    • Proceedings of the 15th Workshop on Speech Communication and Signal Processing, The Acoustical Society of Korea, 1998 (KSCSP 98, Vol. 15, No. 1)
    • /
    • pp.87-92
    • /
    • 1998
  • In general, synthesis unit databases have been constructed by recording isolated words. In that case, each word boundary carries a typical prosodic pattern, such as a falling intonation or pre-boundary lengthening. To get natural synthetic speech from such a database, the original speech must be distorted artificially, but this excessive prosodic modification of the speech signal tends to produce unnatural, unintelligible synthetic speech. To overcome these problems, we gathered thousands of sentences for the synthesis database. To build phone-level synthesis units, we trained a speech recognizer on the recorded speech and segmented the phone boundaries automatically; a laryngograph was also used for epoch detection. From the automatically generated synthesis database, we choose the best phone and concatenate it directly, without any prosody processing. To select the best phone among multiple candidates, we use prosodic and acoustic information such as the break strength of word boundaries, phonetic contexts, cepstrum, pitch, energy, and phone duration. A pilot test produced encouraging results. A minimal sketch of this candidate-selection step follows this entry.

  • PDF
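
A minimal sketch of the candidate-selection step described above: each candidate phone is scored with a target cost (mismatch against the desired prosodic/contextual features) plus a join cost (cepstral distance at the boundary), and the cheapest sequence is found by dynamic programming. The feature names, costs, and weights are illustrative, not the paper's.

```python
import numpy as np

def select_units(targets, candidates, w_join=1.0):
    """targets: desired feature dicts per position; candidates: per-position
    lists of instances, each with 'feat' (dict) and 'cep' (boundary cepstra)."""
    def target_cost(want, inst):
        return sum(abs(want[k] - inst["feat"][k]) for k in want)

    def join_cost(a, b):
        return np.linalg.norm(a["cep"][-1] - b["cep"][0])

    cost = [[target_cost(targets[0], c) for c in candidates[0]]]
    back = [[-1] * len(candidates[0])]
    for i in range(1, len(targets)):
        row, brow = [], []
        for c in candidates[i]:
            prev = min(range(len(candidates[i - 1])),
                       key=lambda j: cost[i - 1][j] + w_join * join_cost(candidates[i - 1][j], c))
            row.append(cost[i - 1][prev] + w_join * join_cost(candidates[i - 1][prev], c)
                       + target_cost(targets[i], c))
            brow.append(prev)
        cost.append(row)
        back.append(brow)
    path = [int(np.argmin(cost[-1]))]            # trace back the cheapest path
    for i in range(len(targets) - 1, 0, -1):
        path.append(back[i][path[-1]])
    return path[::-1]

# Toy usage: two target phones with two candidate instances each.
make = lambda brk, dur: {"feat": {"break": brk, "dur": dur}, "cep": np.random.rand(2, 12)}
targets = [{"break": 2, "dur": 80}, {"break": 0, "dur": 60}]
candidates = [[make(2, 75), make(0, 90)], [make(0, 65), make(3, 60)]]
print(select_units(targets, candidates))         # index of the chosen candidate per position
```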

A Study on Text-to-Speech Conversion Using the Formant Synthesis Method

  • 최진산;김민년;서정욱;배건성
    • Speech Sciences
    • /
    • Vol. 2
    • /
    • pp.9-23
    • /
    • 1997
  • Through iterative analysis and synthesis experiments on Korean monosyllables, a Korean text-to-speech system was implemented using a phoneme-based formant synthesis method. Since the formants of initial and final consonants vary considerably depending on the medial vowel, the database for each phoneme was built from formant values conditioned on the medial vowel, together with duration information for the transition regions. These techniques were needed to improve the intelligibility of the synthetic speech. The paper also investigates methods of concatenating the synthesis units to improve the quality of the synthetic speech. A minimal sketch of formant (resonator-based) synthesis follows this entry.

  • PDF
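
A minimal sketch of formant synthesis of a vowel-like sound: a glottal impulse train is passed through a cascade of second-order (Klatt-style) resonators placed at the formant frequencies. The formant values and bandwidths below are generic illustrative numbers, not the paper's Korean phoneme database.

```python
import numpy as np
from scipy.signal import lfilter

def resonator(x, f, bw, sr):
    """Second-order IIR resonator at centre frequency f (Hz), bandwidth bw (Hz)."""
    c = -np.exp(-2 * np.pi * bw / sr)
    b = 2 * np.exp(-np.pi * bw / sr) * np.cos(2 * np.pi * f / sr)
    a0 = 1 - b - c                               # unity gain at DC
    return lfilter([a0], [1, -b, -c], x)

sr, f0, dur = 16000, 120, 0.5
source = np.zeros(int(sr * dur))
source[::sr // f0] = 1.0                         # glottal impulse train at ~120 Hz

speech = source
for f, bw in [(700, 130), (1220, 70), (2600, 160)]:   # /a/-like formants F1-F3
    speech = resonator(speech, f, bw, sr)
speech /= np.abs(speech).max()                   # normalise amplitude
```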

A Study on a Pitch Alteration Technique by Subband Scaling of the Speech Signal

  • 김영구;배명진
    • Speech Sciences
    • /
    • Vol. 10, No. 4
    • /
    • pp.137-147
    • /
    • 2003
  • Speech synthesis methods can be classified by coding type into waveform coding, source coding, and hybrid coding. Waveform coding in particular is well suited to high-quality synthesis, but it is not well suited to syllable- or phoneme-level synthesis by rule because it does not separate the excitation from the formant component. A pitch alteration method is therefore needed when waveform coding is used for synthesis by rule. This study proposes a pitch alteration method that first flattens the spectrum by piecewise-linear subband approximation and then applies spectrum scaling, so as to minimize spectral distortion. To evaluate flattening performance, each flattened spectrum is normalized so that its highest point is zero and the spread of the resulting zero-mean signal is measured; this measure is used to compare the proposed method with LPC-, cepstrum-, and lifter-based flattening. The spectral distortion rate is also measured to assess the performance of the proposed method: the average spectral distortion rate stays below 2.12%, showing that the proposed method is superior to the existing methods. A minimal sketch of the flatten-and-scale idea follows this entry.

  • PDF
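
A minimal sketch of the flatten-and-scale idea as read from the abstract above: estimate a coarse spectral envelope by piecewise-linear subband approximation, flatten the magnitude spectrum with it, stretch the flattened spectrum along the frequency axis to move the pitch, then restore the envelope. This is a loose, single-frame interpretation; the original phase is simply reused, and frame overlap-add is omitted.

```python
import numpy as np

def scale_pitch_frame(frame, pitch_scale, n_bands=16):
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    mag, phase = np.abs(spec), np.angle(spec)

    # piecewise-linear (subband) envelope from per-band mean magnitudes
    edges = np.linspace(0, len(mag) - 1, n_bands + 1).astype(int)
    band_means = [mag[e:edges[i + 1] + 1].mean() for i, e in enumerate(edges[:-1])] + [mag[-1]]
    env = np.interp(np.arange(len(mag)), edges, band_means)

    flat = mag / np.maximum(env, 1e-9)            # flattened (excitation-like) spectrum
    src = np.arange(len(mag)) / pitch_scale       # stretch along the frequency axis
    flat_scaled = np.interp(src, np.arange(len(mag)), flat)

    new_mag = flat_scaled * env                   # re-impose the original envelope
    return np.fft.irfft(new_mag * np.exp(1j * phase), n=len(frame))

sr = 16000
t = np.arange(480) / sr                           # one 30 ms frame
frame = np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 300 * t)
shifted = scale_pitch_frame(frame, pitch_scale=1.2)   # raise the pitch by about 20%
```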

A Study on Voice Color Control Rules for Speech Synthesis Systems

  • 김진영;엄기완
    • Speech Sciences
    • /
    • Vol. 2
    • /
    • pp.25-44
    • /
    • 1997
  • Listening to the various speech synthesis systems developed and used in Korea, we find that although their quality has improved, they still lack naturalness. Moreover, since the voice color of such a system is tied to a single recorded speech database, another database must be recorded to create a different voice color. 'Voice color' is an abstract concept that characterizes voice personality, so speech synthesis systems need a voice color control function to create various voices. The aim of this study is to examine several factors behind voice color control rules for a text-to-speech system that can produce natural and varied voice types. To derive such rules from natural speech, glottal source parameters and frequency characteristics of the vocal tract were studied for several voice colors, catalogued here as deep, sonorous, thick, soft, harsh, high-toned, shrill, and weak. The LF model was used for the voice source, and the formant frequencies, bandwidths, and amplitudes were used for the frequency characteristics of the vocal tract. These acoustic parameters were analyzed with multiple regression to obtain the general relation between the parameters and the voice colors. A minimal sketch of the regression step follows this entry.

  • PDF
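
A minimal sketch of the multiple-regression step: relate a perceptual voice-color rating to acoustic parameters (LF-model source parameters, formant frequencies, bandwidths, and amplitudes) by ordinary least squares. The data here are random placeholders; the paper's measured values and exact parameter set are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_params = 40, 10
X = rng.normal(size=(n_samples, n_params))      # acoustic parameters per voice sample
ratings = rng.normal(size=n_samples)            # e.g., "sonorous" rating per sample

X1 = np.hstack([np.ones((n_samples, 1)), X])    # add an intercept column
coef, *_ = np.linalg.lstsq(X1, ratings, rcond=None)

pred = X1 @ coef
r2 = 1 - ((ratings - pred) ** 2).sum() / ((ratings - ratings.mean()) ** 2).sum()
print("regression weights:", coef[1:], "  R^2:", round(r2, 3))
```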