• Title/Summary/Keyword: Voice Synthesis


Sensor Control and Information Acquisition Using Voice I/O (음성 입출력을 이용한 센서 제어 및 정보 획득)

  • Youn, Hyung Jin;Lee, Chang Woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.495-496 / 2018
  • As more and more companies introduce artificial intelligence (AI) speakers, their price has become a burden for some users. With some knowledge and dexterity, it is not difficult to build an AI speaker to your own taste that acquires sensor information and environmental information about the house. In this paper, we implement an AI speaker using a Raspberry Pi, Google Cloud Speech (GCS), and Naver's Clova Speech Synthesis (CSS) API.
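
The pipeline described in this abstract can be sketched as a short loop: recognize a spoken query with GCS, read a sensor, and speak the answer through CSS. The sketch below assumes the google-cloud-speech Python client; the CSS endpoint URL, header names, voice name, and the read_temperature() helper are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of the voice I/O loop: recognize a spoken query with
# Google Cloud Speech, answer with a sensor reading, and speak the answer
# through Naver's Clova Speech Synthesis (CSS) REST API.
import requests
from google.cloud import speech  # pip install google-cloud-speech

CSS_URL = "https://naveropenapi.apigw.ntruss.com/tts-premium/v1/tts"  # assumed endpoint
CSS_HEADERS = {  # assumed header names; replace with your own credentials
    "X-NCP-APIGW-API-KEY-ID": "YOUR_CLIENT_ID",
    "X-NCP-APIGW-API-KEY": "YOUR_CLIENT_SECRET",
}

def recognize(pcm_bytes: bytes) -> str:
    """Send 16 kHz LINEAR16 audio to Google Cloud Speech and return the transcript."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="ko-KR",
    )
    audio = speech.RecognitionAudio(content=pcm_bytes)
    response = client.recognize(config=config, audio=audio)
    return response.results[0].alternatives[0].transcript

def synthesize(text: str) -> bytes:
    """Request synthesized speech (MP3 bytes) from the CSS API; voice name is an assumption."""
    resp = requests.post(CSS_URL, headers=CSS_HEADERS,
                         data={"speaker": "nara", "text": text})
    resp.raise_for_status()
    return resp.content

def read_temperature() -> float:
    """Hypothetical sensor read; replace with Raspberry Pi GPIO/I2C code."""
    return 23.5

if __name__ == "__main__":
    query = recognize(open("query.raw", "rb").read())
    if "온도" in query:  # "temperature"
        answer = f"현재 온도는 {read_temperature():.1f}도 입니다."
        open("answer.mp3", "wb").write(synthesize(answer))
```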


Control of Duration Model Parameters in HMM-based Korean Speech Synthesis (HMM 기반의 한국어 음성합성에서 지속시간 모델 파라미터 제어)

  • Kim, Il-Hwan;Bae, Keun-Sung
    • Speech Sciences / v.15 no.4 / pp.97-105 / 2008
  • HMM-based text-to-speech (HTS) systems have been studied very widely because, compared with corpus-based unit-concatenation systems, they need less memory and lower computational complexity and are well suited to embedded systems. They also have the advantage that the voice characteristics and speaking rate of the synthetic speech can be changed easily by modifying the HMM parameters appropriately. We implemented an HMM-based Korean text-to-speech system using a small Korean speech DB and propose a method to increase the naturalness of the synthetic speech by controlling the duration model parameters. A paired comparison test was performed to verify that the technique is effective. The result, with a preference score of 73.8%, shows that controlling the duration model parameters improves the naturalness of the synthetic speech.
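
The duration control the abstract refers to builds on the standard HTS scheme, in which each HMM state's duration is modeled by a Gaussian and a single parameter ρ shifts all durations in proportion to their variances (d_k = μ_k + ρσ_k²). A minimal sketch of that standard scheme, not the authors' exact settings, is shown below.

```python
# Standard HTS-style duration control: state duration d_k = mu_k + rho * sigma_k^2,
# where rho is chosen so the state durations sum to a desired total length.
import numpy as np

def state_durations(means, variances, total_frames=None, rho=0.0):
    """Return per-state durations (in frames) from Gaussian duration models.

    means, variances : means and variances of the duration PDFs
    total_frames     : if given, rho is solved so durations sum to this length
    rho              : speaking-rate control parameter (used when total_frames is None)
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    if total_frames is not None:
        rho = (total_frames - means.sum()) / variances.sum()
    durations = means + rho * variances
    return np.maximum(np.round(durations), 1).astype(int)  # at least one frame per state

# Example: stretch an utterance to 120 frames instead of its ~100-frame default.
print(state_durations([20, 30, 25, 25], [4, 9, 6, 5], total_frames=120))
```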


Design of a Low Bit Rate Transform Excitation Wideband Speech and Audio Coder with an Analysis-by-Synthesis Structure (분석/합성 구조의 저 전송률 변환여기 광대역 음성/오디오 부호화기 설계)

  • Jang, Sunghoon;Hong, Kibong;Lee, Insung
    • The Journal of the Acoustical Society of Korea / v.31 no.7 / pp.472-479 / 2012
  • This paper presents the design of a 9.2 kbps low bit rate transform excitation coder targeting both speech and audio signals. To reach the low bit rate, band selection in the frequency domain, gain-shape quantization, and an analysis-by-synthesis (AbS) structure are used. To reduce the large amount of computation introduced by the AbS structure, the IDFT and synthesis are performed per band. In addition, comfort noise is inserted into the non-transmitted bands to improve perceived quality. The proposed coder has a lower bit rate and performance comparable to the original 10.4 kbps AMR-WB+ TCX mode.
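
Gain-shape quantization, one of the techniques listed above, encodes each band as a scalar gain (the band's norm) and a unit-norm shape vector quantized against a codebook. The sketch below uses made-up toy codebooks for illustration; the coder's actual codebooks and band layout are not described here.

```python
# Toy gain-shape quantization of one frequency band: quantize the gain (norm)
# and the normalized shape vector separately, then reconstruct the band.
import numpy as np

def gain_shape_quantize(band, gain_levels, shape_codebook):
    gain = np.linalg.norm(band)
    shape = band / gain if gain > 0 else np.zeros_like(band)
    g_idx = int(np.argmin(np.abs(gain_levels - gain)))   # nearest scalar gain level
    s_idx = int(np.argmax(shape_codebook @ shape))        # best-matching unit-norm codeword
    return g_idx, s_idx

def gain_shape_dequantize(g_idx, s_idx, gain_levels, shape_codebook):
    return gain_levels[g_idx] * shape_codebook[s_idx]

rng = np.random.default_rng(0)
gain_levels = np.linspace(0.5, 8.0, 16)                   # toy scalar gain codebook
shape_codebook = rng.standard_normal((32, 8))             # toy shape codebook
shape_codebook /= np.linalg.norm(shape_codebook, axis=1, keepdims=True)

band = rng.standard_normal(8) * 3.0
g, s = gain_shape_quantize(band, gain_levels, shape_codebook)
reconstructed = gain_shape_dequantize(g, s, gain_levels, shape_codebook)
print("reconstruction error:", np.linalg.norm(band - reconstructed))
```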

An end-to-end synthesis method for Korean text-to-speech systems (한국어 text-to-speech(TTS) 시스템을 위한 엔드투엔드 합성 방식 연구)

  • Choi, Yeunju;Jung, Youngmoon;Kim, Younggwan;Suh, Youngjoo;Kim, Hoirin
    • Phonetics and Speech Sciences / v.10 no.1 / pp.39-48 / 2018
  • A typical statistical parametric speech synthesis (text-to-speech, TTS) system consists of separate modules, such as a text analysis module, an acoustic modeling module, and a speech synthesis module. This causes two problems: 1) expert knowledge of each module is required, and 2) errors generated in each module accumulate as they pass through the modules. An end-to-end TTS system can avoid such problems by synthesizing voice signals directly from an input string. In this study, we implemented an end-to-end Korean TTS system using Google's Tacotron, an end-to-end TTS system based on a sequence-to-sequence model with an attention mechanism. We used 4,392 utterances spoken by a Korean female speaker, an amount that corresponds to 37% of the dataset Google used for training Tacotron. Our system obtained a mean opinion score (MOS) of 2.98 and a degradation mean opinion score (DMOS) of 3.25. We also discuss the factors that affected training of the system. The experiments demonstrate that the post-processing network should be designed with the output language and input characters in mind, and that, depending on the amount of training data, the maximum n of the n-grams modeled by the encoder should be kept small enough.
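
At the core of the Tacotron-style model mentioned above is a content-based attention mechanism that softly aligns input characters to output spectrogram frames. The sketch below shows just that attention step with random toy tensors; it is a generic illustration, not the paper's network.

```python
# Content-based (dot-product) attention: at each decoder step, weight the
# character-level encoder states by their similarity to the decoder state and
# form a context vector that conditions the next spectrogram frame.
import numpy as np

def attention_step(decoder_state, encoder_states):
    """decoder_state: (d,), encoder_states: (T_chars, d). Returns (weights, context)."""
    scores = encoder_states @ decoder_state            # similarity to each input character
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax over input characters
    context = weights @ encoder_states                 # soft-selected encoder summary
    return weights, context

rng = np.random.default_rng(1)
encoder_states = rng.standard_normal((12, 16))         # 12 input characters, 16-dim states
decoder_state = rng.standard_normal(16)
weights, context = attention_step(decoder_state, encoder_states)
print(weights.round(2), context.shape)
```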

A Pre-Selection of Candidate Units Using Accentual Characteristics in a Unit Selection Based Japanese TTS System (일본어 악센트 특징을 이용한 합성단위 선택 기반 일본어 TTS의 후보 합성단위의 사전선택 방법)

  • Na, Deok-Su;Min, So-Yeon;Lee, Kwang-Hyoung;Lee, Jong-Seok;Bae, Myung-Jin
    • The Journal of the Acoustical Society of Korea / v.26 no.4 / pp.159-165 / 2007
  • In this paper, we propose a new pre-selection of candidate units that is suitable for unit-selection-based Japanese TTS systems. A general pre-selection method is performed by calculating a context-dependent cost within an intonation phrase (IP). Unlike other languages, however, Japanese has an accent represented as the height of a relative pitch, and several words form a single accentual phrase; the prosody in Japanese also changes in accentual-phrase units. By reflecting such prosodic change in pre-selection, the quality of the synthesized speech can be improved. Furthermore, by calculating a context-dependent cost within the accentual phrase rather than within the intonation phrase, the synthesis speed can be improved. The proposed method defines the accentual phrase (AP), analyzes the AP in context, and performs pre-selection using accentual-phrase matching, which calculates the connected context length (CCL) of the candidate units of each phoneme to be synthesized in each accentual phrase. The baseline system used in the proposed method is VoiceText, a synthesizer from Voiceware. Evaluations were made on perceptual errors (intonation errors, concatenation mismatch errors) and synthesis time. The experimental results show that the proposed method improves the quality of the synthesized speech as well as shortening the synthesis time.
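
As a rough illustration of pre-selection by connected context length, the sketch below scores each candidate unit by how many consecutive phonemes of its recorded context match the target context and keeps only the top-scoring candidates. The data structures and the exact CCL definition are assumptions for illustration and may differ from the paper's.

```python
# Toy pre-selection by connected context length (CCL): a candidate unit scores
# higher the longer its recorded phoneme context matches the target context,
# and only the top-scoring candidates survive to the full unit-selection search.
def connected_context_length(target_context, candidate_context):
    """Count how many consecutive phonemes match, scanning outward from the unit."""
    ccl = 0
    for t, c in zip(target_context, candidate_context):
        if t != c:
            break
        ccl += 1
    return ccl

def pre_select(target_context, candidates, keep=3):
    """candidates: list of (unit_id, recorded_context). Keep the `keep` best by CCL."""
    scored = sorted(candidates,
                    key=lambda uc: connected_context_length(target_context, uc[1]),
                    reverse=True)
    return [unit_id for unit_id, _ in scored[:keep]]

# Target phoneme with right-context "k a i" and three database candidates.
candidates = [("u1", ["k", "a", "i"]), ("u2", ["k", "o", "i"]), ("u3", ["t", "a", "i"])]
print(pre_select(["k", "a", "i"], candidates, keep=2))   # -> ['u1', 'u2']
```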

Analysis of Delay Characteristics in Advanced Intelligent Network-Intelligent Peripheral (AIN IP) (차세대 지능망 지능형 정보제공 시스템의 지연 특성 분석)

  • 이일우;최고봉
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.8A / pp.1124-1133 / 2000
  • The Advanced Intelligent Network Intelligent Peripheral (AIN IP) is one of the AIN elements, which consist of the Service Control Point (SCP), the Service Switching Point (SSP), and the IP, and it provides resources for AIN services such as announcement playback, digit collection, voice recognition/synthesis, and voice prompt and receipt. Focusing on the ISUP/INAP protocols, this paper describes the procedures for setting up and releasing bearer channels between the SSP/SCP and the IP to deliver specialized resources through those channels, and it describes the structure and procedures of AIN services such as Automatic Collect Call (ACC), Universal Personal Telecommunication (UPT), and televoting (VOT). In this environment, the delay characteristics of the IP system are investigated as a performance analysis to support policy establishment.


A comparison of CPP analysis among breathiness ranks (기식 등급에 따른 CPP (Cepstral Peak Prominence) 분석 비교)

  • Kang, Youngae;Koo, Bonseok;Jo, Cheolwoo
    • Phonetics and Speech Sciences / v.7 no.1 / pp.21-26 / 2015
  • The aim of this study is to synthesize pathological breathy voices and to build a cepstral peak prominence (CPP) table by breathiness rank through cepstral analysis, in order to supplement the reliability of the perceptual auditory judgment task. The KlattGrid synthesizer included in Praat was used. The synthesis parameters consist of two groups, constants and variables. The constant parameters are pitch, amplitude, flutter, open phase, and oral formants and bandwidths; the variable parameters are breathiness (BR), aspiration amplitude (AH), and spectral tilt (TL). Five hundred sixty samples of the synthetic breathy vowel /a/ for a male voice were created, and three raters ranked their breathiness. 217 samples proved inadequate based on the perceptual judgment and cepstral analysis, so 343 samples were finally selected. Their CPP values and other related parameters from the cepstral analysis were classified under four breathiness ranks (B0-B3). The mean and standard deviation of CPP are 16.10 ± 1.15 dB (B0), 13.68 ± 1.34 dB (B1), 10.97 ± 1.41 dB (B2), and 3.03 ± 4.07 dB (B3). The CPP value decreases toward the more severe breathiness groups because the signal contains more noise and a smaller amount of harmonics.
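
CPP itself is commonly computed from the real cepstrum: the height of the cepstral peak in the expected pitch-quefrency range is measured above a straight line fitted to the cepstrum. The sketch below follows that common (Hillenbrand-style) definition; the window, pitch search range, and fitting choices are assumptions and may differ from the study's settings.

```python
# Cepstral peak prominence (CPP) of one analysis frame: the height of the
# cepstral peak in the expected pitch-quefrency range above a straight line
# fitted to the cepstrum over that range.
import numpy as np

def cpp(frame, sr, f0_min=60.0, f0_max=300.0):
    """Return the CPP (dB) of one frame."""
    n = len(frame)
    spectrum = np.abs(np.fft.fft(frame * np.hamming(n))) + 1e-12
    log_spec = 20.0 * np.log10(spectrum)                 # log-magnitude spectrum in dB
    cepstrum = np.fft.ifft(log_spec).real                # real cepstrum
    quefrency = np.arange(n) / sr                        # quefrency axis in seconds
    lo, hi = int(sr / f0_max), int(sr / f0_min)          # expected pitch-period range
    peak_idx = lo + int(np.argmax(cepstrum[lo:hi]))
    a, b = np.polyfit(quefrency[lo:hi], cepstrum[lo:hi], 1)   # regression line
    return cepstrum[peak_idx] - (a * quefrency[peak_idx] + b)

# Toy example: a synthetic 150 Hz "voiced" frame should yield a clear cepstral peak.
sr = 16000
t = np.arange(int(0.04 * sr)) / sr
voiced = sum(np.sin(2 * np.pi * 150 * k * t) / k for k in range(1, 10))
print(f"CPP of toy voiced frame: {cpp(voiced, sr):.2f} dB")
```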

ETRI small-sized dialog style TTS system (ETRI 소용량 대화체 음성합성시스템)

  • Kim, Jong-Jin;Kim, Jeong-Se;Kim, Sang-Hun;Park, Jun;Lee, Yun-Keun;Hahn, Min-Soo
    • Proceedings of the KSPS conference / 2007.05a / pp.217-220 / 2007
  • This study outlines a small-sized dialog-style Korean TTS system from ETRI that applies HMM-based speech synthesis techniques. To build the VoiceFont, 500 dialog-style sentences were used to train the HMMs, and context information about phonemes, syllables, words, phrases, and sentences was extracted fully automatically to build context-dependent HMMs. In training the acoustic model, acoustic features such as mel-cepstra, log F0, and their delta and delta-delta coefficients were used. The size of the VoiceFont built through the training is 0.93 MB. The developed HMM-based TTS system was installed on an ARM720T processor running at 60 MHz. To reduce computation time, the MLSA inverse filtering module was implemented in assembly language. The fully implemented system runs 1.73 times faster than real time.
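
The closing figure is a real-time-factor statement: running 1.73 times faster than real time means roughly 0.58 seconds of computation per second of synthesized audio. A tiny sketch of that bookkeeping, with a placeholder engine standing in for the ETRI system, is shown below.

```python
# Real-time-factor bookkeeping behind the "1.73 times faster than real time"
# figure: measure synthesis wall-clock time against the duration of the audio
# produced. The dummy engine below is a placeholder, not the ETRI system.
import time
import numpy as np

def real_time_factor(synthesize, text, sample_rate=16000):
    start = time.perf_counter()
    samples = synthesize(text)                      # any text -> sample-array function
    elapsed = time.perf_counter() - start
    audio_seconds = len(samples) / sample_rate
    rtf = elapsed / audio_seconds                   # < 1.0 means faster than real time
    return rtf, 1.0 / rtf                           # e.g. (0.58, 1.73) for the reported speed

dummy_tts = lambda text: np.zeros(len(text) * 1200)  # placeholder engine
print(real_time_factor(dummy_tts, "안녕하세요"))
```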


Robot Vision to Audio Description Based on Deep Learning for Effective Human-Robot Interaction (효과적인 인간-로봇 상호작용을 위한 딥러닝 기반 로봇 비전 자연어 설명문 생성 및 발화 기술)

  • Park, Dongkeon;Kang, Kyeong-Min;Bae, Jin-Woo;Han, Ji-Hyeong
    • The Journal of Korea Robotics Society / v.14 no.1 / pp.22-30 / 2019
  • For effective human-robot interaction, a robot not only needs to understand the current situational context well, but also needs to convey its understanding to the human participant efficiently. The most convenient way to deliver the robot's understanding is for the robot to express it using voice and natural language. Recently, artificial intelligence for video understanding and natural language processing has developed very rapidly, especially based on deep learning. This paper therefore proposes a deep-learning-based method for turning robot vision into a spoken description. The applied model is a pipeline of two deep learning models: one generating a natural-language sentence from the robot's vision and one generating voice from the generated sentence. We also conduct a real-robot experiment to show the effectiveness of our method in human-robot interaction.
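
The described system is a two-stage pipeline: a captioning model turns the robot's camera view into a sentence, and a TTS model speaks it. The sketch below only wires such a pipeline together; caption_model and tts_model are hypothetical stand-ins, not the paper's networks.

```python
# A two-stage vision-to-speech pipeline: caption the camera frame, then speak
# the caption. Both models are placeholders for whatever captioner/TTS is used.
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class VisionToSpeech:
    caption_model: Callable[[np.ndarray], str]      # image -> sentence
    tts_model: Callable[[str], np.ndarray]          # sentence -> waveform samples

    def describe(self, frame: np.ndarray):
        sentence = self.caption_model(frame)        # e.g. "a person is waving at the robot"
        waveform = self.tts_model(sentence)         # audio to play on the robot's speaker
        return sentence, waveform

# Usage with dummy stand-ins for the two deep models:
pipeline = VisionToSpeech(
    caption_model=lambda frame: "a person is waving at the robot",
    tts_model=lambda text: np.zeros(16000),         # 1 s of silence as a placeholder
)
sentence, audio = pipeline.describe(np.zeros((224, 224, 3)))
print(sentence, audio.shape)
```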

One-shot multi-speaker text-to-speech using RawNet3 speaker representation (RawNet3를 통해 추출한 화자 특성 기반 원샷 다화자 음성합성 시스템)

  • Sohee Han;Jisub Um;Hoirin Kim
    • Phonetics and Speech Sciences / v.16 no.1 / pp.67-76 / 2024
  • Recent advances in text-to-speech (TTS) technology have significantly improved the quality of synthesized speech, reaching a level where it can closely imitate natural human speech. In particular, TTS models offering various voice characteristics and personalized speech are widely utilized in fields such as artificial intelligence (AI) tutors, advertising, and video dubbing. Accordingly, in this paper, we propose a one-shot multi-speaker TTS system that can ensure acoustic diversity and synthesize a personalized voice by generating speech from an unseen target speaker's utterance. The proposed model integrates a speaker encoder into a TTS model consisting of the FastSpeech2 acoustic model and the HiFi-GAN vocoder. The speaker encoder, based on the pre-trained RawNet3, extracts speaker-specific voice features. Furthermore, the proposed approach includes not only an English one-shot multi-speaker TTS but also a Korean one. We evaluate the naturalness and speaker similarity of the generated speech using objective and subjective metrics. In the subjective evaluation, the proposed Korean one-shot multi-speaker TTS obtained a naturalness mean opinion score (NMOS) of 3.36 and a similarity MOS (SMOS) of 3.16. The objective evaluation of the proposed English and Korean one-shot multi-speaker TTS showed a prediction MOS (P-MOS) of 2.54 and 3.74, respectively. These results indicate that the performance of our proposed model improves over the baseline models in terms of both naturalness and speaker similarity.
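
The conditioning flow described above can be sketched as three stages: a speaker encoder (RawNet3 in the paper) maps one reference utterance to an embedding, the acoustic model (FastSpeech2) generates mel frames conditioned on that embedding, and the vocoder (HiFi-GAN) produces the waveform. All three models below are placeholders with made-up shapes, not the authors' code.

```python
# One-shot multi-speaker TTS conditioning flow: a speaker embedding from one
# reference utterance is injected into the acoustic model, whose mel output is
# then vocoded. All three functions are placeholders standing in for RawNet3,
# FastSpeech2, and HiFi-GAN.
import numpy as np

def speaker_encoder(reference_wav: np.ndarray) -> np.ndarray:
    """Stand-in for RawNet3: map a waveform to a fixed-size speaker embedding."""
    return np.tanh(np.histogram(reference_wav, bins=256)[0].astype(float))

def acoustic_model(phonemes, speaker_emb: np.ndarray) -> np.ndarray:
    """Stand-in for FastSpeech2: phonemes + speaker embedding -> mel spectrogram."""
    frames = 10 * len(phonemes)
    return np.zeros((frames, 80)) + speaker_emb[:80].mean()   # dummy mel frames

def vocoder(mel: np.ndarray) -> np.ndarray:
    """Stand-in for HiFi-GAN: mel spectrogram -> waveform."""
    return np.zeros(mel.shape[0] * 256)                        # dummy samples

reference = np.random.default_rng(0).standard_normal(16000)    # 1 s reference utterance
embedding = speaker_encoder(reference)
mel = acoustic_model(["a", "n", "n", "y", "e", "o", "ng"], embedding)
wave = vocoder(mel)
print(embedding.shape, mel.shape, wave.shape)
```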