• Title/Abstract/Keywords: Text-to-speech

Search results: 505 items

A Voice-enabled Chatbot Mobile Application (음성지원 챗봇 모바일 애플리케이션)

  • Choi, In-Kyung; Choi, Yun-Jeong; Lee, Ye-Rin
    • Proceedings of the Korea Information Processing Society Conference / Korea Information Processing Society 2019 Spring Conference / pp.438-439 / 2019
  • With the rise of social issues and the development of artificial intelligence technology, interest in chatbot services is steadily increasing, and as a result, assistive programs based on TTS (Text-to-Speech) and STT (Speech-to-Text) technologies are being developed for a variety of mobile environments. In this paper, we build a voice-enabled chatbot system using TTS technology, which converts text into speech, and STT technology, which converts speech into text, implement it as an Android-based mobile application, and propose the resulting 'voice-enabled chatbot mobile application'; we also introduce the related technologies and expected benefits.
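
As a rough illustration of the STT → chatbot → TTS round trip the paper describes, the sketch below wires speech recognition, a placeholder chatbot, and speech synthesis together in Python. The paper's implementation targets Android (its SpeechRecognizer and TextToSpeech APIs); the SpeechRecognition and pyttsx3 packages and the generate_reply() helper used here are stand-ins chosen only for this sketch.

```python
# Minimal sketch of the STT -> chatbot -> TTS round trip described above.
# The paper targets Android; the SpeechRecognition and pyttsx3 packages
# stand in for its SpeechRecognizer/TextToSpeech APIs, and generate_reply()
# is a hypothetical placeholder for the chatbot back end.
import speech_recognition as sr
import pyttsx3


def generate_reply(user_text: str) -> str:
    # Placeholder for the actual chatbot logic (rule-based or an NLU service).
    return f"You said: {user_text}"


def chat_turn() -> None:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:          # STT: capture one utterance
        audio = recognizer.listen(source)
    user_text = recognizer.recognize_google(audio, language="ko-KR")

    reply = generate_reply(user_text)        # chatbot response

    engine = pyttsx3.init()                  # TTS: speak the reply back
    engine.say(reply)
    engine.runAndWait()


if __name__ == "__main__":
    chat_turn()
```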

Disfluencies and Speech Rates of Standard Korean Speakers in Story-telling and Reading Contexts

  • Shim, Hong-Im; Chon, Hee-Cheong; Ko, Do-Heung
    • Speech Sciences / Vol. 12, No. 1 / pp.45-51 / 2005
  • The purpose of this study is to compare the disfluencies and speech rates (overall speech rate and articulation rate) of normal adult speakers of standard Korean across two different speech tasks (story-telling and text-reading). Participants were 100 Korean adult speakers. The results are summarized as follows: First, the most frequent type of disfluency in the story-telling task was 'interjection', whereas that in the text-reading task was 'revision'. Second, the overall speech rates (syllables per second and syllables per minute) showed significant differences depending on the speech task. Third, the articulation rates (syllables per second and syllables per minute) also showed significant differences depending on the speech task.
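
For readers unfamiliar with the two rate measures, the snippet below computes them under the usual convention that the overall speech rate includes pause time while the articulation rate excludes it; the paper's exact measurement protocol is not reproduced here, and the sample figures are invented.

```python
# Illustrative computation of the two rate measures: overall speech rate
# includes pause time, articulation rate excludes it. Sample values are made up.
def overall_speech_rate(n_syllables: int, total_duration_s: float) -> tuple[float, float]:
    per_second = n_syllables / total_duration_s
    return per_second, per_second * 60.0      # syllables/s, syllables/min


def articulation_rate(n_syllables: int, total_duration_s: float,
                      pause_duration_s: float) -> tuple[float, float]:
    speaking_time = total_duration_s - pause_duration_s
    per_second = n_syllables / speaking_time
    return per_second, per_second * 60.0


if __name__ == "__main__":
    # e.g., 250 syllables in a 60 s story-telling sample containing 12 s of pauses
    print(overall_speech_rate(250, 60.0))       # ≈ (4.17, 250.0)
    print(articulation_rate(250, 60.0, 12.0))   # ≈ (5.21, 312.5)
```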

PROSODY CONTROL BASED ON SYNTACTIC INFORMATION IN KOREAN TEXT-TO-SPEECH CONVERSION SYSTEM

  • Kim, Yeon-Jun; Oh, Yung-Hwan
    • Proceedings of the Acoustical Society of Korea Conference / The Acoustical Society of Korea, 1994 Fifth Western Pacific Regional Acoustics Conference, Seoul, Korea / pp.937-942 / 1994
  • A Text-to-Speech (TTS) conversion system can convert any word or sentence into speech. To synthesize speech the way human beings do, careful prosody control covering intonation, duration, accent, and pause is required; it helps listeners understand the speech clearly and makes it sound more natural. In this paper, a prosody control scheme that makes use of function-word information is proposed. Among the many factors of prosody, intonation, duration, and pause are closely related to syntactic structure, and their relations have been formalized and embodied in the TTS system. To evaluate the speech synthesized with the proposed prosody control, the MOS (Mean Opinion Score) method, a subjective evaluation method, was used: the synthesized speech was tested on 10 listeners, each of whom scored it between 1 and 5. The evaluation experiments show that the proposed prosody control helps the TTS system synthesize more natural speech.
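
A brief illustration of the MOS evaluation mentioned above: each of the 10 listeners rates a synthesized sentence from 1 to 5 and the ratings are averaged. The numbers in the snippet are invented solely to show the calculation.

```python
# Mean Opinion Score (MOS): each listener rates the synthesized speech from
# 1 to 5 and the ratings are averaged. The ratings below are invented.
def mos(scores: list[int]) -> float:
    assert all(1 <= s <= 5 for s in scores), "MOS ratings must lie in 1..5"
    return sum(scores) / len(scores)


if __name__ == "__main__":
    ratings_with_prosody = [4, 4, 3, 5, 4, 4, 3, 4, 5, 4]   # 10 listeners
    ratings_baseline     = [3, 3, 2, 4, 3, 3, 3, 2, 4, 3]
    print(mos(ratings_with_prosody))  # 4.0
    print(mos(ratings_baseline))      # 3.0
```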

Implementation of Music Broadcasting Service System in the Shopping Center Using Text-To-Speech Technology (TTS를 이용한 매장 음악 방송 서비스 시스템 구현)

  • Chang, Moon-Soo; Kang, Sun-Mee
    • Speech Sciences / Vol. 14, No. 4 / pp.169-178 / 2007
  • This thesis describes the development of a service system for small shops that supports not only music broadcasting but also the editing and generation of voice announcements using TTS (Text-To-Speech) technology. The system has been developed for a web environment so that it can be accessed easily whenever and wherever it is needed. It controls the sound through the Silverlight media player on top of ASP.NET 2.0, without any additional application software, and the use of Ajax controls allows the system to handle the peak load produced by multiple simultaneous users. TTS runs on the server side, so the service can be provided without relying on the user's computer. Owing to the convenience and usefulness of the system, the business sector can provide better service to many shops, and additional functions such as statistical analysis will further help shop management provide the services they want.
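
The key architectural point is that TTS runs on the server, so a shop needs nothing beyond a browser and speakers. The sketch below illustrates only that server-side step (announcement text rendered to an audio file the web application could stream); it is not the paper's ASP.NET 2.0/Silverlight implementation, and pyttsx3 and the file name are stand-ins chosen for the sketch.

```python
# Server-side generation of a voice announcement, illustrating the idea that
# TTS runs on the server and no software is installed on the shop's computer.
# NOT the paper's ASP.NET/Silverlight system; pyttsx3 and the output path are
# stand-ins used only for illustration.
import pyttsx3


def render_announcement(text: str, out_path: str = "announcement.wav") -> str:
    engine = pyttsx3.init()
    engine.save_to_file(text, out_path)   # synthesize to a file instead of speakers
    engine.runAndWait()
    return out_path                       # the web application would stream this file


if __name__ == "__main__":
    path = render_announcement("The store will be closing shortly. Thank you.")
    print("announcement written to", path)
```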

Comparison of Korean Real-time Text-to-Speech Technology Based on Deep Learning (딥러닝 기반 한국어 실시간 TTS 기술 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology / Vol. 7, No. 1 / pp.640-645 / 2021
  • A deep learning based end-to-end TTS system consists of a Text2Mel module, which generates a spectrogram from text, and a vocoder module, which synthesizes speech signals from the spectrogram. By applying deep learning technology to TTS, the intelligibility and naturalness of synthesized speech have recently improved to a level comparable to human vocalization. However, such systems have the disadvantage that the inference speed for synthesizing speech is very slow compared to conventional methods. The inference speed can be improved by applying non-autoregressive methods, which generate speech samples in parallel, independently of previously generated samples. In this paper, we introduce FastSpeech, FastSpeech 2, and FastPitch as Text2Mel technologies, and Parallel WaveGAN, Multi-band MelGAN, and WaveGlow as vocoder technologies that apply the non-autoregressive method, and we implement them to verify whether they can run in real time. Experimental results show that, based on the obtained RTF, all of the presented methods are capable of real-time processing. Except for WaveGlow, the trained models are on the order of tens to hundreds of megabytes, so they can be applied in embedded environments where memory is limited.
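
The real-time claim rests on the RTF (real-time factor), i.e. synthesis time divided by the duration of the synthesized audio, with values below 1 meaning faster than real time. The sketch below shows how RTF is typically measured; synthesize() and the dummy model are hypothetical placeholders, not the systems compared in the paper.

```python
# Real-time factor (RTF) as used in the comparison above:
# RTF = (time spent synthesizing) / (duration of the synthesized audio).
# RTF < 1 means faster than real time. synthesize() is a hypothetical
# stand-in for any Text2Mel + vocoder pipeline.
import time
import numpy as np


def measure_rtf(synthesize, text: str, sample_rate: int = 22050) -> float:
    start = time.perf_counter()
    waveform = synthesize(text)              # returns a 1-D array of samples
    elapsed = time.perf_counter() - start
    audio_seconds = len(waveform) / sample_rate
    return elapsed / audio_seconds


if __name__ == "__main__":
    def dummy_synthesize(text: str):
        time.sleep(0.1)                      # pretend inference cost
        return np.zeros(22050 * 2)           # 2 seconds of "audio"

    print(measure_rtf(dummy_synthesize, "안녕하세요"))  # ≈ 0.05
```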

GENERATION OF MULTI-SYLLABLE NONSENSE WORDS FOR THE ASSESSMENT OF KOREAN TEXT-TO-SPEECH SYSTEM (한국어 문장음성합성 시스템의 평가를 위한 다음절 무의미단어의 생성 및 평가에 관한 연구)

  • 조철우
    • Proceedings of the Acoustical Society of Korea Conference / The Acoustical Society of Korea, Proceedings of the 11th Workshop on Speech Communication and Signal Processing, 1994 (SCAS Vol. 11, No. 1) / pp.338-341 / 1994
  • In this paper we propose a method of generating a multi-syllable nonsense wordset for the purpose of synthetic speech assessment and apply the wordset to assess one commercial text-to-speech system. Some results of the experiment are presented, and it is verified that the generated nonsense wordset can be used to assess the intelligibility of the synthesizer at the phoneme level or at the level of phonemic environment. From the experimental results it is verified that such a multi-syllable nonsense wordset can be useful for the assessment of synthesized speech.
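
One way such a wordset can be generated, sketched below, is to compose random Hangul syllables (onset, nucleus, optional coda) via Unicode arithmetic and concatenate them into multi-syllable items. The paper additionally controls the phonemic environment of the test items; that balancing step is left out of this sketch and the sampling scheme here is an assumption.

```python
# Rough sketch: build multi-syllable nonsense words by composing random Hangul
# syllables (onset/nucleus/optional coda) through Unicode arithmetic. The
# paper's phonemic-environment balancing is not reproduced here.
import random

HANGUL_BASE = 0xAC00
N_ONSETS, N_NUCLEI, N_CODAS = 19, 21, 28   # 28 includes "no coda"


def random_syllable(rng: random.Random) -> str:
    onset = rng.randrange(N_ONSETS)
    nucleus = rng.randrange(N_NUCLEI)
    coda = rng.randrange(N_CODAS)
    return chr(HANGUL_BASE + (onset * N_NUCLEI + nucleus) * N_CODAS + coda)


def nonsense_word(n_syllables: int, rng: random.Random) -> str:
    return "".join(random_syllable(rng) for _ in range(n_syllables))


if __name__ == "__main__":
    rng = random.Random(0)
    wordset = [nonsense_word(3, rng) for _ in range(5)]   # five 3-syllable items
    print(wordset)
```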

Performance improvement of text-dependent speaker verification system using blind speech segmentation and energy weight (Blind speech segmentation과 에너지 가중치를 이용한 문장 종속형 화자인식기의 성능 향상)

  • Kim Jung-Gon; Kim Hyung Soon
    • MALSORI / No. 47 / pp.131-140 / 2003
  • We propose a new method of generating client models for an HMM-based text-dependent speaker verification system with only a small amount of training data. To build a client model, statistical methods such as the segmental K-means algorithm are widely used, but they do not guarantee the quality or reliability of a model when only limited data are available. In this paper, we propose blind speech segmentation based on the level-building DTW algorithm as an alternative way to build a client model with limited data. In addition, considering that voiced sounds carry much more speaker-specific information than unvoiced sounds and that their energy is higher, we also propose a new scoring method in which the observation probability is raised to the power of a weighting factor estimated from the normalized log energy. Our experiments show that the proposed methods are superior to the conventional HMM-based speaker verification system.
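
The energy-weighting idea can be seen most easily in the log domain: raising a frame's observation probability to a power w is the same as multiplying its log-likelihood by w. The sketch below uses a simple min-max normalization of log energy and a linear mapping to weights; the exact mapping used in the paper is not given here, so that part is an assumption.

```python
# Sketch of energy-weighted scoring: in the log domain, raising the frame
# observation probability to a power w multiplies the frame log-likelihood
# by w. Here w is a linear function of min-max normalized log energy
# (an assumed mapping, not necessarily the paper's).
import numpy as np


def energy_weights(log_energy: np.ndarray, w_min: float = 0.5, w_max: float = 1.5) -> np.ndarray:
    e = (log_energy - log_energy.min()) / (log_energy.max() - log_energy.min() + 1e-9)
    return w_min + (w_max - w_min) * e        # high-energy (voiced) frames weigh more


def weighted_score(frame_loglik: np.ndarray, log_energy: np.ndarray) -> float:
    w = energy_weights(log_energy)
    return float(np.sum(w * frame_loglik))    # sum_t  w_t * log p(o_t | model)


if __name__ == "__main__":
    loglik = np.array([-5.0, -4.2, -6.1, -3.8])   # per-frame HMM log-likelihoods
    log_e = np.array([10.0, 14.0, 8.0, 15.0])     # per-frame log energies
    print(weighted_score(loglik, log_e))
```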

Perceptual Evaluation of Duration Models in Spoken Korean

  • Chung, Hyun-Song
    • Speech Sciences / Vol. 9, No. 1 / pp.207-215 / 2002
  • Perceptual evaluation of duration models of spoken Korean was carried out based on the Classification and Regression Tree (CART) model for text-to-speech conversion. A reference set of durations was produced by a commercial text-to-speech synthesis system for comparison. The duration model which was built in the previous research (Chung & Huckvale, 2001) was applied to a Korean language speech synthesis diphone database, 'Hanmal (HN 1.0)'. The synthetic speech produced by the CART duration model was preferred in the subjective preference test by a small margin and the synthetic speech from the commercial system was superior in the clarity test. In the course of preparing the experiment, a labeled database of spoken Korean with 670 sentences was constructed. As a result of the experiment, a trained duration model for speech synthesis was obtained. The 'Hanmal' diphone database for Korean speech synthesis was also developed as a by-product of the perceptual evaluation.
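
As a rough stand-in for the CART duration model, the snippet below trains a small regression tree to predict phone durations from contextual features; scikit-learn's DecisionTreeRegressor plays the role of CART, and the features and training rows are invented, far smaller than the labeled 670-sentence database used in the paper.

```python
# Minimal stand-in for a CART duration model: predict a phone's duration (ms)
# from simple contextual features. DecisionTreeRegressor plays the role of
# CART; the feature set and training rows are invented for illustration.
from sklearn.tree import DecisionTreeRegressor

# features: [is_vowel, position_in_phrase (0..1), is_phrase_final]
X = [
    [1, 0.1, 0], [0, 0.2, 0], [1, 0.5, 0], [0, 0.6, 0],
    [1, 0.9, 1], [0, 0.95, 1], [1, 0.3, 0], [0, 0.8, 1],
]
y = [90, 60, 95, 55, 140, 85, 92, 80]   # observed durations in milliseconds

model = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)

# predict a phrase-final vowel vs. a phrase-medial consonant
print(model.predict([[1, 0.92, 1], [0, 0.4, 0]]))
```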

Text-to-speech with linear spectrogram prediction for quality and speed improvement (음질 및 속도 향상을 위한 선형 스펙트로그램 활용 Text-to-speech)

  • Yoon, Hyebin
    • Phonetics and Speech Sciences / Vol. 13, No. 3 / pp.71-78 / 2021
  • Most neural-network-based speech synthesis models utilize neural vocoders to convert mel-scaled spectrograms into high-quality, human-like voices. However, neural vocoders combined with mel-scaled spectrogram prediction models demand considerable computer memory and time during the training phase and are subject to slow inference speeds in an environment where GPU is not used. This problem does not arise in linear spectrogram prediction models, as they do not use neural vocoders, but these models suffer from low voice quality. As a solution, this paper proposes a Tacotron 2 and Transformer-based linear spectrogram prediction model that produces high-quality speech and does not use neural vocoders. Experiments suggest that this model can serve as the foundation of a high-quality text-to-speech model with fast inference speed.
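
The reason a linear-spectrogram model can skip the neural vocoder is that a full-band linear magnitude spectrogram can be inverted directly, for example with the Griffin-Lim algorithm. The snippet below round-trips a signal through that inversion using librosa; the STFT settings and the test tone are illustrative and not necessarily those used in the paper.

```python
# Why a linear-spectrogram model needs no neural vocoder: a full-band linear
# magnitude spectrogram can be inverted with Griffin-Lim. The STFT settings
# and the synthetic test tone below are illustrative only.
import numpy as np
import librosa
import soundfile as sf

sr = 22050
y = librosa.tone(440.0, sr=sr, duration=1.0)             # stand-in "speech" signal

S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))  # linear magnitude spectrogram
y_hat = librosa.griffinlim(S, n_iter=60, hop_length=256) # phase recovered iteratively

sf.write("griffinlim_reconstruction.wav", y_hat, sr)
print("reconstructed", y_hat.shape[0] / sr, "seconds of audio")
```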

Development and Evaluation of an English Speaking Task Using Smartphone and Text-to-Speech (스마트폰과 음성합성을 활용한 영어 말하기 과제의 개발과 평가)

  • Moon, Dosik
    • The Journal of the Institute of Internet, Broadcasting and Communication / Vol. 16, No. 5 / pp.13-20 / 2016
  • This study explores the effects of a video-recording English speaking task model on learners. The learning model, a form of mobile learning, was developed to facilitate learners' output practice by exploiting the advantages of a smartphone and Text-to-Speech. The survey results show the positive effects of the speaking task on the domains of pronunciation, speaking, listening, and writing, in terms of students' confidence as well as general English ability. The study further examines the possibilities and limitations of the speaking task in helping Korean learners, who do not have sufficient exposure to English input or output practice because of the situational limitations of learning English as a foreign language, improve their speaking ability.