• Title/Summary/Keyword: Speech Synthesis

Search Results: 381

Korean Sentence Symbol Preprocess System for the Improvement of Speech Synthesis Quality (음성 합성 시스템의 품질 향상을 위한 한국어 문장 기호 전처리 시스템)

  • Lee, Ho-Joon
    • Journal of the Korea Society of Computer and Information, v.20 no.2, pp.149-156, 2015
  • In this paper, we propose a Korean sentence symbol preprocessor for an SSML (Speech Synthesis Markup Language) supported speech synthesis system, in order to improve the quality of the synthesized result. After analyzing Korean Wikipedia documents, we propose 8 categories for the meaning of sentence symbols and 11 regular expressions for classifying each category. With the developed Korean sentence symbol preprocess system, we achieved 56% precision and 71.45% recall on 63,000 sentences.
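The abstract does not list the actual 8 categories or 11 regular expressions, but the classification idea can be sketched as follows; the category names and patterns here are illustrative assumptions, not the paper's rules:

```python
import re

# Hypothetical category patterns in the spirit of the paper's approach;
# the real categories and expressions are not given in the abstract.
CATEGORY_PATTERNS = [
    ("date", re.compile(r"\d{4}\.\s?\d{1,2}\.\s?\d{1,2}\.?")),  # e.g. 2015. 2. 1.
    ("range", re.compile(r"\d+\s?[-~]\s?\d+")),                 # e.g. 149-156, 1~10
    ("ratio", re.compile(r"\d+\s?:\s?\d+")),                    # e.g. 3:2
    ("parenthetical", re.compile(r"\([^)]*\)")),                # e.g. (...)
]

def classify_symbols(sentence):
    """Return (category, matched text) pairs for symbol patterns in a sentence,
    so a downstream SSML generator can expand them into readable text."""
    hits = []
    for name, pattern in CATEGORY_PATTERNS:
        for m in pattern.finditer(sentence):
            hits.append((name, m.group()))
    return hits
```

A preprocessor would then rewrite each match into its spoken form (for instance, expanding a page range into "149 to 156") before handing the text to the synthesizer.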

Cyber Character Implementation with Recognition and Synthesis of Speech/Image (음성/영상의 인식 및 합성 기능을 갖는 가상캐릭터 구현)

  • Choe, Gwang-Pyo;Lee, Du-Seong;Hong, Gwang-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI, v.37 no.5, pp.54-63, 2000
  • In this paper, we implemented a cyber character capable of speech recognition, speech synthesis, motion tracking, and 3D animation. For speech recognition, we used a discrete-HMM algorithm with 128-level K-means vector quantization and MFCC feature vectors. For speech synthesis, we used a demi-syllable TD-PSOLA algorithm. For PC-based motion tracking, we present a fast optical-flow-like method. For animating the 3D model, we used vertex interpolation with the Direct3D retained mode. Finally, we implemented a cyber character integrating the above systems, which plays a multiplication-table game with the user and always looks at the user by means of the motion tracking system.
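The recognizer above quantizes MFCC frames against a 128-level K-means codebook before HMM decoding. A minimal sketch of that vector-quantization step, with a tiny toy codebook instead of 128 levels and deterministic farthest-point initialization (an implementation choice of this sketch, not the paper's):

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def init_codebook(vectors, k):
    """Farthest-point initialization: deterministic and well spread."""
    codebook = [vectors[0]]
    while len(codebook) < k:
        codebook.append(max(vectors, key=lambda v: min(dist2(v, c) for c in codebook)))
    return codebook

def train_codebook(vectors, k, iters=20):
    """Plain K-means: learn k codewords (the paper uses a 128-level MFCC codebook)."""
    codebook = init_codebook(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[quantize(v, codebook)].append(v)
        for j, members in enumerate(clusters):
            if members:  # recompute the centroid of each non-empty cluster
                codebook[j] = [sum(col) / len(members) for col in zip(*members)]
    return codebook

def quantize(v, codebook):
    """Map a vector to the index of its nearest codeword (the VQ symbol)."""
    return min(range(len(codebook)), key=lambda j: dist2(v, codebook[j]))
```

The resulting index sequence is what a discrete HMM consumes as its observation symbols.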


A Study on Fuzziness Parameter Selection in Fuzzy Vector Quantization for High Quality Speech Synthesis (고음질의 음성합성을 위한 퍼지벡터양자화의 퍼지니스 파라메타선정에 관한 연구)

  • 이진이
    • Journal of the Korean Institute of Intelligent Systems, v.8 no.2, pp.60-69, 1998
  • This paper proposes a speech synthesis method using fuzzy vector quantization (FVQ), and studies how to choose the fuzziness value that optimizes the performance of FVQ in order to obtain synthesized speech closer to the original. When FVQ is used to synthesize speech, the analysis stage generates membership function values representing the degree to which an input speech pattern matches each speech pattern in the codebook, and the synthesis stage reproduces the speech using those membership values, the fuzziness value, and the fuzzy-c-means operation. By comparing the performance of FVQ and VQ synthesizers in simulation, we show that the performance of FVQ is almost equal to that of VQ even though the FVQ codebook is half the size of the VQ codebook. This result implies that, when FVQ is used to obtain the same performance as VQ in speech synthesis, the memory required for codebook storage can be halved. We also found that, for an FVQ optimized for maximum SQNR of the synthesized speech, the fuzziness value should be small when the variance of the analysis frame is relatively large, and large when it is small. Comparing the frequency-domain spectrograms of speech synthesized by VQ and FVQ, we found that the spectrum bands (formant and pitch frequencies) of FVQ-synthesized speech are closer to the original speech than those of VQ.
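The analysis/synthesis stages described above can be sketched with the standard fuzzy-c-means membership formula; this is a generic FCM sketch under that assumption, not the paper's exact implementation:

```python
def fcm_memberships(x, codebook, fuzziness):
    """Fuzzy-c-means membership of vector x in each codeword.
    fuzziness m > 1: as m -> 1 this approaches hard VQ; larger m
    spreads membership across more codewords."""
    d = [sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5 for c in codebook]
    if any(di == 0.0 for di in d):  # exact codeword match: hard assignment
        return [1.0 if di == 0.0 else 0.0 for di in d]
    p = 2.0 / (fuzziness - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** p for j in range(len(d)))
            for i in range(len(d))]

def fvq_reconstruct(x, codebook, fuzziness):
    """Synthesis stage: membership-weighted combination of codewords."""
    u = fcm_memberships(x, codebook, fuzziness)
    dim = len(codebook[0])
    return [sum(u[i] * codebook[i][k] for i in range(len(codebook)))
            for k in range(dim)]
```

The paper's finding maps onto the `fuzziness` argument: frames with large variance call for a small value (near-hard assignment), frames with small variance for a large one.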


An emotional speech synthesis markup language processor for multi-speaker and emotional text-to-speech applications (다음색 감정 음성합성 응용을 위한 감정 SSML 처리기)

  • Ryu, Se-Hui;Cho, Hee;Lee, Ju-Hyun;Hong, Ki-Hyung
    • The Journal of the Acoustical Society of Korea, v.40 no.5, pp.523-529, 2021
  • In this paper, we designed and developed an Emotional Speech Synthesis Markup Language (SSML) processor. Multi-speaker emotional speech synthesis technology that can express multiple voice colors and emotional expressions has been developed, and we designed Emotional SSML by extending SSML to cover multiple voice colors and emotional expressions. The Emotional SSML processor has a graphical user interface and consists of the following four components: first, a multi-speaker emotional text editor that can easily mark specific voice colors and emotions at desired positions; second, an Emotional SSML document generator that automatically creates an Emotional SSML document from the editor's output; third, an Emotional SSML parser that parses the document; and last, a sequencer that controls a multi-speaker emotional Text-to-Speech (TTS) engine based on the parser's output. Because it is based on SSML, an open standard independent of programming language and platform, the Emotional SSML processor can easily be integrated with various speech synthesis engines and facilitates the development of multi-speaker emotional text-to-speech applications.
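The generator/parser pair could look like the sketch below. The `voice` element is standard SSML; the `emotion` element and its `type` attribute are assumptions of this sketch, since the abstract does not give the actual Emotional SSML schema:

```python
import xml.etree.ElementTree as ET

def build_emotional_ssml(segments):
    """Generator: (speaker, emotion, text) tuples -> an SSML document string."""
    speak = ET.Element("speak", {"version": "1.1",
                                 "xmlns": "http://www.w3.org/2001/10/synthesis"})
    for speaker, emotion, text in segments:
        voice = ET.SubElement(speak, "voice", {"name": speaker})
        emo = ET.SubElement(voice, "emotion", {"type": emotion})  # assumed extension
        emo.text = text
    return ET.tostring(speak, encoding="unicode")

def parse_emotional_ssml(doc):
    """Parser: recover (speaker, emotion, text) tuples for the sequencer to
    dispatch to the multi-speaker emotional TTS engine."""
    out = []
    for voice in ET.fromstring(doc):
        for emo in voice:
            out.append((voice.get("name"), emo.get("type"), emo.text))
    return out
```

A sequencer would walk the parsed tuples in order, switching engine voice and emotion settings per segment.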

The usefulness of the depth images in image-based speech synthesis (영상 기반 음성합성에서 심도 영상의 유용성)

  • Ki-Seung Lee
    • The Journal of the Acoustical Society of Korea, v.42 no.1, pp.67-74, 2023
  • Images acquired from the speaker's mouth region reveal patterns unique to the corresponding voices. Using this principle, several methods have been proposed in which speech signals are recognized or synthesized from images of the speaker's lower face. In this study, an image-based speech synthesis method is proposed in which depth images are used cooperatively. Since depth images yield information that cannot be acquired from an optical image, they can supplement flat optical images. In this paper, the usefulness of depth images is evaluated from the perspective of speech synthesis. A validation experiment was carried out on 60 isolated Korean words, and it was confirmed that performance in terms of both subjective and objective evaluation was comparable to the optical-image-based method. When the two images were used in combination, performance improved compared with using either image alone.

A Korean TTS System for Educational Purpose (교육용 한국어 TTS 플랫폼 개발)

  • Lee Jungchul;Lee Sangho
    • MALSORI, no.50, pp.41-50, 2004
  • Recently, there has been considerable progress in natural language processing and digital signal processing components, and this progress has improved the synthetic speech quality of many commercial TTS systems. But there still remain many obstacles to overcome for the practical application of TTS. To resolve these problems, cooperative research among the related areas is highly required, and a common Korean TTS platform is essential to promote such activities. This platform offers a general framework for building Korean speech synthesis systems, and full C/C++ source code for its modules supports developers in implementing and testing their own algorithms. In this paper, we describe the features of the Korean TTS platform to be developed and the development plan.


An Optimization of Speech Database in a Corpus-based Speech Synthesis System (코퍼스기반 음성합성기의 데이터베이스 최적화 방안)

  • Jang Kyung-Ae;Chung Min-Hwa
    • Proceedings of the KSPS conference, 2002.11a, pp.209-213, 2002
  • This paper describes the reduction of the speech database without degradation of speech quality in a corpus-based speech synthesizer for Korean. First, it is proposed that the frequency of every unit in the reduced DB should reflect the frequency of units in the Korean language, so the target population of every unit is set proportional to its frequency in a large Korean corpus (780K sentences, 45M phonemes). Second, instances frequently selected during synthesis should also be retained in the reduced DB. Finally, it is proposed that the frequency of every instance be reflected in the clustering criterion and used as a criterion for selecting representative instances. Evaluation shows that the proposed methods yield better quality than conventional methods.
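The frequency-proportional reduction described above can be sketched as a quota-allocation step followed by a per-unit pruning step; the data shapes and the "keep the most frequently selected instances" tie-break are assumptions of this sketch:

```python
def target_quotas(unit_freq, budget):
    """Per-unit instance quotas proportional to each unit's corpus frequency,
    keeping at least one instance per unit so no unit disappears."""
    total = sum(unit_freq.values())
    return {u: max(1, round(budget * f / total)) for u, f in unit_freq.items()}

def reduce_db(instances, quotas):
    """instances: (unit, instance_id, times_selected) records.
    Keep, per unit, the most frequently selected instances up to the quota."""
    by_unit = {}
    for unit, inst, used in instances:
        by_unit.setdefault(unit, []).append((used, inst))
    kept = []
    for unit, lst in sorted(by_unit.items()):
        lst.sort(reverse=True)  # most frequently selected first
        kept.extend((unit, inst) for _, inst in lst[:quotas.get(unit, 0)])
    return kept
```

Allocating quotas from corpus statistics rather than uniformly is what lets the reduced database match the unit distribution of the language.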


A Reduction of Speech Database in Corpus-based Speech Synthesis System (코퍼스기반 음성합성기의 데이터베이스 감축방안)

  • Jang Kyung-Ae;Chung Min-Hwa;Kim Jae-In;Koo Myoung-Wan
    • MALSORI, no.44, pp.145-156, 2002
  • This paper describes the reduction of the speech database without degradation of speech quality in a corpus-based speech synthesizer for the Korean language. First, it is proposed that the frequency of every unit in the reduced DB reflect the frequency of units in the Korean language, so the target population of every unit is set proportional to its frequency in a large Korean corpus (780K sentences, 45M phones). Secondly, instances frequently selected during synthesis should also be retained in the reduced DB. Finally, it is proposed that the frequency of every instance be reflected in the clustering criteria and used as another important criterion for selecting representative instances. Evaluation shows that the proposed methods yield better quality than conventional methods.


Basic consideration for assessment of Korean TTS system (한국어 TTS 시스템의 객관적인 성능평가를 위한 기초검토)

  • Ko, Lag-Hwan;Kim, Young-Il;Kim, Bong-Wan;Lee, Yong-Ju
    • Proceedings of the KSPS conference, 2005.04a, pp.37-40, 2005
  • Recently, due to the rapid development of corpus-based speech synthesis, the performance of TTS systems, which convert text into speech, has improved, and they are applied in various fields. However, a procedure for objectively assessing system performance is not well established in Korea. Such a procedure is essential both for developers assessing the systems they build and for users choosing a suitable system. In this paper, we report the results of basic research toward a systematic standard for the objective performance assessment of Korean TTS systems, with reference to various related efforts in Korea and other countries.


Multi-Pulse Amplitude and Location Estimation by Maximum-Likelihood Estimation in MPE-LPC Speech Synthesis (MPE-LPC음성합성에서 Maximum-Likelihood Estimation에 의한 Multi-Pulse의 크기와 위치 추정)

  • 이기용;최홍섭;안수길
    • Journal of the Korean Institute of Telematics and Electronics, v.26 no.9, pp.1436-1443, 1989
  • In this paper, we propose a maximum-likelihood estimation (MLE) method to obtain the locations and amplitudes of the pulses in MPE (multi-pulse excitation)-LPC speech synthesis, which uses multi-pulses as the excitation source. The MLE method computes the values maximizing the likelihood function with respect to the unknown parameters (amplitudes and positions of the pulses) for the observed data sequence. In the case of overlapped pulses, the method is equivalent to Ozawa's cross-correlation method, with equal computational cost and sound quality. We show by computer simulation that the multi-pulses obtained by the MLE method are (1) pseudo-periodic at the pitch for voiced sound, (2) random for unvoiced sound, and (3) change from random to periodic in intervals where the original speech changes from unvoiced to voiced. The short-time power spectra of the original speech and the speech synthesized using the multi-pulses as the excitation source are quite similar to each other at the formants.
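The cross-correlation style search that the abstract compares against can be sketched as a greedy analysis-by-synthesis loop; this is a generic illustration of that family of methods, not the paper's MLE derivation:

```python
def find_pulses(target, h, n_pulses):
    """Greedy multi-pulse search: at each step place one pulse where the
    LPC synthesis-filter impulse response h best reduces the residual,
    using the cross-correlation criterion amp = <residual, h> / <h, h>."""
    residual = list(target)
    e_h = sum(v * v for v in h)  # impulse-response energy
    pulses = []
    for _ in range(n_pulses):
        best_pos, best_c = 0, 0.0
        for pos in range(len(target) - len(h) + 1):
            c = sum(residual[pos + i] * h[i] for i in range(len(h)))
            if abs(c) > abs(best_c):
                best_pos, best_c = pos, c
        amp = best_c / e_h
        pulses.append((best_pos, amp))
        for i in range(len(h)):  # subtract this pulse's contribution
            residual[best_pos + i] -= amp * h[i]
    return pulses
```

Running the search over voiced frames yields the pseudo-periodic pulse trains the simulations describe, while unvoiced frames produce irregular pulse positions.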
