• Title/Summary/Keyword: Automatic Speech Synthesizer

Search results: 3

A Study of Speech Control Tags Based on Semantic Information of a Text (텍스트의 의미 정보에 기반을 둔 음성컨트롤 태그에 관한 연구)

  • Chang, Moon-Soo; Chung, Kyeong-Chae; Kang, Sun-Mee
    • Speech Sciences, v.13 no.4, pp.187-200, 2006
  • Speech synthesis technology is widely used, and its application areas are broadening to automatic response services, learning systems for people with disabilities, and more. However, the sound quality of speech synthesizers has not yet reached a level that satisfies users. To synthesize speech, existing synthesizers generate rhythm only from interval information such as spaces and commas, or from a few punctuation marks such as question marks and exclamation marks, so they cannot easily reproduce the natural rhythm of human speech even when built on a large speech database. To address this problem, rhythm can instead be selected after the language is processed with higher-level information. This paper proposes a method for generating tags that control rhythm by analyzing the meaning of a sentence together with speech-situation information. We use Systemic Functional Grammar (SFG) [4], which analyzes the meaning of a sentence with speech-situation information, considering the preceding sentence, the situation of the conversation, the relationships among the participants in the conversation, and so on. In this study, we generate Semantic Speech Control Tags (SSCT) from the results of the SFG meaning analysis and of voice-wave analysis; a small illustrative sketch of semantics-driven tag generation follows below.

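As a rough illustration of the idea, the sketch below chooses a prosody-control tag from a sentence's semantic mood rather than from its punctuation. The tag format, attribute names, and mood categories are invented for illustration; they are not the paper's actual SSCT tag set, and no SFG analysis is reproduced here.

```python
# Illustrative sketch only: the <sct> tag vocabulary and the mood-to-prosody
# rules below are assumptions, not the paper's SSCT specification.
from dataclasses import dataclass

@dataclass
class SemanticAnalysis:
    mood: str        # e.g. "declarative", "interrogative", "imperative"
    emphasis: bool   # whether the sentence carries contrastive focus

def generate_control_tag(sentence: str, analysis: SemanticAnalysis) -> str:
    """Wrap a sentence in a prosody-control tag chosen from its semantics,
    not from punctuation alone (the paper's central point)."""
    if analysis.mood == "interrogative":
        attrs = 'contour="rise"'
    elif analysis.mood == "imperative":
        attrs = 'rate="fast" volume="loud"'
    else:
        attrs = 'contour="fall"'
    if analysis.emphasis:
        attrs += ' pitch="high"'
    return f"<sct {attrs}>{sentence}</sct>"

# The same words receive different contours depending on semantic mood,
# where a punctuation-only synthesizer might treat them identically.
print(generate_control_tag("He already left", SemanticAnalysis("declarative", False)))
print(generate_control_tag("He already left", SemanticAnalysis("interrogative", True)))
```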

Speech Interactive Agent on Car Navigation System Using Embedded ASR/DSR/TTS

  • Lee, Heung-Kyu; Kwon, Oh-Il; Ko, Han-Seok
    • Speech Sciences, v.11 no.2, pp.181-192, 2004
  • This paper presents an efficient speech interactive agent that delivers smooth car-navigation and telematics services by employing embedded automatic speech recognition (ASR), distributed speech recognition (DSR), and text-to-speech (TTS) modules, all while enabling safe driving. A speech interactive agent is essentially a conversational tool providing command-and-control functions to drivers, such as navigation tasks, audio/video manipulation, and e-commerce services, through natural voice/response interactions between the user and the interface. While the benefits of automatic speech recognition and speech synthesis have become well known, the available hardware resources are often limited and the internal communication protocols are too complex to achieve real-time responses, so performance degradation is unavoidable in an embedded hardware system. To implement a speech interactive agent that meets the demands of user commands in real time, we propose optimizing the hardware-dependent architectural code for speed. In particular, we propose a composite solution of memory reconfiguration and efficient arithmetic-operation conversion, together with an effective out-of-vocabulary rejection algorithm, all made suitable for system operation under limited resources; a sketch of the arithmetic-conversion idea follows below.

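The abstract's "efficient arithmetic operation conversion" most plausibly refers to replacing floating-point math with fixed-point integer math, a standard speed-up on embedded processors without a floating-point unit. The sketch below shows the generic Q15 technique; the scaling format and the dot-product example are assumptions, not the paper's actual implementation.

```python
# Generic Q15 fixed-point sketch: 1 sign bit, 15 fractional bits.
# This illustrates the float-to-integer conversion idea, not the paper's code.
Q = 15

def to_q15(x: float) -> int:
    # Scale into the representable range [-1, 1) and saturate.
    return max(-1 << Q, min((1 << Q) - 1, int(round(x * (1 << Q)))))

def q15_mul(a: int, b: int) -> int:
    # On a DSP, the 32-bit product is shifted back down to Q15.
    return (a * b) >> Q

# Example: a tiny dot product, as in a filter or feature-extraction inner loop.
weights = [to_q15(w) for w in (0.25, -0.5, 0.125)]
samples = [to_q15(s) for s in (0.9, 0.4, -0.3)]
acc = sum(q15_mul(w, s) for w, s in zip(weights, samples))
print(acc / (1 << Q))  # ~ -0.0125, i.e. 0.25*0.9 - 0.5*0.4 + 0.125*(-0.3)
```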

Fundamental Acoustic Investigation of Korean Male 5 Monophthongs (한국 남성의 단모음 [아, 에, 이, 오, 우]에 대한 음향음성학적 기반연구)

  • Choi, Yae-Lin
    • The Journal of the Korea Contents Association, v.10 no.6, pp.373-377, 2010
  • Numerous quantitative and qualitative studies related to English vowels have already been published, but only a small number of studies based on the acoustic analysis of Korean vowels have been carried out. The purpose of this study is to obtain sufficient quantitative data on the acoustic properties of Korean vowels produced by males in their 20s and 30s. A total of 31 males in their 20s and 30s produced the five fundamental vowels /a, e, i, o, u/ in the standard Korean dialect, repeating each vowel three times. The productions were recorded with Cool Edit, and F1, F2, F3, and F4 were extracted with a MATLAB acoustic-analysis program. Results indicated that the overall formant patterns were similar to those of previous studies, except that the F1 and F2 levels of the vowels produced in this study were generally lower than those reported previously. Future studies should focus on obtaining vowel data that accounts for other factors such as age and other speech materials. (A generic formant-extraction sketch follows below.)
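For context, formants such as F1 and F2 are commonly estimated by fitting a linear-prediction (LPC) model to a steady vowel segment and reading resonance frequencies off the roots of the LPC polynomial. The sketch below shows this generic technique with librosa; the file path, LPC order, and frequency threshold are assumptions, and the paper's own MATLAB procedure is not specified in the abstract.

```python
# Generic LPC-based formant estimation; not the paper's MATLAB procedure.
import numpy as np
import librosa

def estimate_formants(wav_path: str, order: int = 12) -> list[float]:
    # Load a short, steady vowel segment (hypothetical file path).
    y, sr = librosa.load(wav_path, sr=None)
    # Pre-emphasis flattens the spectral tilt before LPC fitting.
    y = np.append(y[0], y[1:] - 0.97 * y[:-1])
    # LPC coefficients model the vocal-tract filter.
    a = librosa.lpc(y, order=order)
    # Complex roots of the LPC polynomial mark the resonances.
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]           # one root per conjugate pair
    freqs = np.angle(roots) * sr / (2 * np.pi)  # rad/sample -> Hz
    # Discard near-DC artifacts; the lowest survivors approximate F1, F2, ...
    return sorted(f for f in freqs if f > 90.0)

# The first four values approximate F1-F4 for a clean vowel recording:
# print(estimate_formants("vowel_a.wav")[:4])
```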