• Title/Summary/Keyword: embedded TTS


Comparison of Korean Real-time Text-to-Speech Technology Based on Deep Learning (딥러닝 기반 한국어 실시간 TTS 기술 비교)

  • Kwon, Chul Hong
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.1
    • /
    • pp.640-645
    • /
    • 2021
  • The deep learning based end-to-end TTS system consists of a Text2Mel module that generates a spectrogram from text and a vocoder module that synthesizes speech signals from the spectrogram. Recently, by applying deep learning technology to the TTS system, the intelligibility and naturalness of the synthesized speech have improved to the level of human vocalization. However, it has the disadvantage that the inference speed for synthesizing speech is very slow compared to the conventional method. The inference speed can be improved by applying a non-autoregressive method, which can generate speech samples in parallel, independent of previously generated samples. In this paper, we introduce FastSpeech, FastSpeech 2, and FastPitch as Text2Mel technologies, and Parallel WaveGAN, Multi-band MelGAN, and WaveGlow as vocoder technologies applying the non-autoregressive method, and we implement them to verify whether they can process speech in real time. Experimental results show that, based on the obtained real-time factor (RTF), all the presented methods are fully capable of real-time processing. The size of each trained model is about tens to hundreds of megabytes, except for WaveGlow, so they can be applied to embedded environments where memory is limited.
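The real-time factor (RTF) used as the pass/fail criterion above is simply synthesis time divided by the duration of the generated audio; RTF below 1 means the system keeps up with playback. A minimal sketch of how it might be measured, where the synthesizer callable is a hypothetical stand-in for a pipeline such as FastSpeech 2 plus Parallel WaveGAN:

```python
import time

def real_time_factor(synthesize, text, sample_rate=22050):
    """RTF = wall-clock synthesis time / duration of generated audio.

    `synthesize` is any callable returning a sequence of audio samples;
    it is a hypothetical placeholder, not an API from the paper.
    """
    start = time.perf_counter()
    samples = synthesize(text)
    elapsed = time.perf_counter() - start
    audio_seconds = len(samples) / sample_rate
    return elapsed / audio_seconds

# Dummy synthesizer producing one second of silence, for illustration.
dummy = lambda text: [0.0] * 22050
print(real_time_factor(dummy, "hello") < 1.0)  # RTF < 1: real-time capable
```

On embedded targets the same measurement would be repeated over many utterances and the worst case reported, since a single fast run does not guarantee sustained real-time behavior.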

A Study on the Voice Conversion with HMM-based Korean Speech Synthesis (HMM 기반의 한국어 음성합성에서 음색변환에 관한 연구)

  • Kim, Il-Hwan;Bae, Keun-Sung
    • MALSORI
    • /
    • v.68
    • /
    • pp.65-74
    • /
    • 2008
  • A statistical parametric speech synthesis system based on hidden Markov models (HMMs) has grown in popularity over the last few years because it requires less memory and lower computational complexity, making it better suited to embedded systems than a corpus-based unit-concatenation text-to-speech (TTS) system. It also has the advantage that the voice characteristics of the synthetic speech can be modified easily by transforming the HMM parameters appropriately. In this paper, we present experimental results of voice characteristics conversion using an HMM-based Korean speech synthesis system. The results show that conversion of voice characteristics can be achieved using only a few sentences uttered by a target speaker. Synthetic speech generated from models adapted with only ten sentences was very close to that from speaker-dependent models trained on 646 sentences.
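The "transforming HMM parameters" step described above is commonly realized as a linear, MLLR-style transform of the Gaussian mean vectors, estimated from the target speaker's few adaptation sentences. The matrices below are illustrative toy values, not quantities from the paper:

```python
import numpy as np

# Illustrative MLLR-style adaptation: each HMM state's spectral mean
# vector mu is mapped to W @ mu + b, shifting the average voice toward
# a target speaker.  W and b would normally be estimated from the
# adaptation utterances; here they are arbitrary assumed values.
rng = np.random.default_rng(0)
dim = 4                              # toy spectral-feature dimension
means = rng.normal(size=(3, dim))    # mean vectors of 3 HMM states

W = np.eye(dim) * 1.1                # hypothetical scaling transform
b = np.full(dim, 0.05)               # hypothetical bias toward the target

adapted = means @ W.T + b            # adapted means: same topology, new timbre
print(adapted.shape)                 # (3, 4)
```

Because only the compact transform (W, b) depends on the target speaker, a handful of sentences suffices, which is consistent with the ten-sentence result reported in the abstract.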


Ubiquitous Car Maintenance Services Using Augmented Reality and Context Awareness (증강현실을 활용한 상황인지기반의 편재형 자동차 정비 서비스)

  • Rhee, Gue-Won;Seo, Dong-Woo;Lee, Jae-Yeol
    • Korean Journal of Computational Design and Engineering
    • /
    • v.12 no.3
    • /
    • pp.171-181
    • /
    • 2007
  • Ubiquitous computing is a vision of our future computing lifestyle in which computer systems seamlessly integrate into our everyday lives, providing services and information in an anywhere, anytime fashion. Augmented reality (AR) can naturally complement ubiquitous computing by providing an intuitive and collaborative visualization and simulation interface to a three-dimensional information space embedded within physical reality. This paper presents a service framework and its applications for providing context-aware u-car maintenance services using augmented reality, which can support a rich set of ubiquitous services and collaboration. It realizes bi-augmentation between physical and virtual spaces using augmented reality. It also offers a context processing module to acquire, interpret, and disseminate context information. In particular, the context processing module considers the user's preferences and security profile to provide private and customer-oriented services. A prototype system has been implemented to support 3D animation, TTS (Text-to-Speech), augmented manuals, annotation, and pre- and post-augmentation services in ubiquitous car service environments.

The Recent Trends and Applications of Embedded TTS Technologies (내장형 음성합성 기술 동향 및 사례)

  • Kim, Jong-Jin;Kim, Jeong-Se;Kim, Sang-Hun;Park, Jun
    • Electronics and Telecommunications Trends
    • /
    • v.23 no.1 s.109
    • /
    • pp.77-88
    • /
    • 2008
  • Speech synthesis technology made remarkable technical progress with the emergence of unit-concatenation methods in the mid-1990s, and around 2000 it came into wide use, centered on telephone-network services such as ARS, VMS, and UMS, providing services very familiar to general users. Recently, however, while the telephony-based speech technology market has grown slowly, centered on enterprise customers, embedded applications are attracting attention as a new market for speech technology: strategic national new-growth industries such as intelligent robots, telematics, home networks, and next-generation PCs, as well as devices such as MP3 players, mobile phones, PMPs, and other portable terminals. The speech technology required by embedded applications has characteristics quite different from those of the technology that ran on server-class systems. This article therefore examines the requirements of embedded applications for speech technology, focusing on speech synthesis, and describes recent technical trends and application cases that address them.

Development of IoT System Based on Context Awareness to Assist the Visually Impaired

  • Song, Mi-Hwa
    • International Journal of Advanced Culture Technology
    • /
    • v.9 no.4
    • /
    • pp.320-328
    • /
    • 2021
  • As the number of visually impaired people steadily increases, interest in independent walking is also increasing. However, various inconveniences in the independent walking of the visually impaired currently reduce their quality of life. The white cane, the existing walking aid for the visually impaired, has difficulty recognizing overhead obstacles and obstacles beyond its effective distance. In addition, crossing the street is inconvenient because the sound signals that help the visually impaired cross at crosswalks are often lacking or damaged. These factors make it difficult for the visually impaired to walk independently. Therefore, we propose the design of an embedded system that provides traffic light recognition through object recognition technology, voice guidance using TTS, and overhead obstacle recognition through ultrasonic sensors, so that visually impaired people can achieve safe, high-quality independent walking.
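Overhead-obstacle detection with an ultrasonic sensor, as proposed above, reduces to converting an echo round-trip time into a distance and comparing it to a threshold before triggering a TTS warning. The speed of sound and the threshold below are illustrative assumptions:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def echo_to_distance(echo_seconds):
    """Ultrasonic ranging: the pulse travels to the obstacle and back,
    so the distance is half the round-trip time times the speed of sound."""
    return SPEED_OF_SOUND * echo_seconds / 2.0

def obstacle_alert(echo_seconds, threshold_m=1.0):
    """Return a TTS-ready warning when an obstacle is within range.
    The 1 m threshold is an arbitrary value chosen for illustration."""
    d = echo_to_distance(echo_seconds)
    if d < threshold_m:
        return f"Obstacle ahead at {d:.2f} meters"
    return None

print(obstacle_alert(0.004))  # 0.69 m away: produces a warning string
print(obstacle_alert(0.02))   # 3.43 m away: None, no warning
```

In the actual system the echo time would come from the sensor's echo pin, and the returned string would be passed to the TTS engine rather than printed.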

Hand-Gesture Dialing System for Safe Driving (안전성 확보를 위한 손동작 전화 다이얼링 시스템)

  • Jang, Won-Ang;Kim, Jun-Ho;Lee, Do Hoon;Kim, Min-Jung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.10
    • /
    • pp.4801-4806
    • /
    • 2012
  • Despite the increased convenience of advanced vehicles, problems remain to be solved for driving safety. Most traffic accidents are caused by careless driving, and the interface operations needed to control complicated multimedia devices are a direct cause. With growing interest in smart automobiles, various approaches to safe driving have been studied. Current in-vehicle multimedia interfaces lack safety because momentary glances away from the road cause a loss of situational awareness and operating capacity. In this paper, we propose a dialing system for safe driving that controls the dial and dictionary search by hand gestures. The proposed system improves user convenience and safety in automobile operation by using intuitive gestures and TTS (Text-to-Speech).

Contents Development of IrobiQ on School Violence Prevention Program for Young Children (지능형 로봇 아이로비큐(IrobiQ)를 활용한 학교폭력 예방 프로그램 개발)

  • Hyun, Eunja;Lee, Hawon;Yeon, Hyemin
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.9
    • /
    • pp.455-466
    • /
    • 2013
  • The purpose of this study was to develop "Modujikimi", a school violence prevention program for young children, to be embedded in IrobiQ, a teacher-assistive robot. The themes of the program consisted of basic character education, bullying prevention education, and sexual violence prevention education. The activity types included large-group, individual, and small-group activities, free-choice activities, and parents' education, incorporating poems, fairy tales, music, art, and story sharing. The multimodal functions of the robot were employed: on-screen images, TTS (Text To Speech), touch input, sound recognition, and a recording system. The robot content was demonstrated to thirty early childhood educators, whose acceptance of the content was measured using questionnaires, and the content was also applied to children in a daycare center. The majority responded positively on acceptability. These results suggest that further research is needed to improve the two-way interactivity of teacher-assistive robots.

HearCAM Embedded Platform Design (히어 캠 임베디드 플랫폼 설계)

  • Hong, Seon Hack;Cho, Kyung Soon
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.10 no.4
    • /
    • pp.79-87
    • /
    • 2014
  • In this paper, we implemented the HearCAM platform on the Raspberry PI B+ model, an open-source platform. The Raspberry PI B+ consists of a dual step-down (buck) power supply with polarity protection and hot-swap protection, a Broadcom BCM2835 SoC running at 700 MHz, 512 MB of RAM soldered on top of the Broadcom chip, and a PI camera serial connector. We used the Google speech recognition engine to recognize voice characteristics, implemented pattern matching with OpenCV software, and extended the speech capability with SVOX TTS (Text-to-Speech), speaking the matching result back to the user. We thus implemented HearCAM functions that identify voice and pattern characteristics by scanning target images with the PI camera while gathering temperature sensor data in an IoT environment. We implemented speech recognition, pattern matching, and temperature sensor data logging over Wi-Fi wireless communication, and then directly designed and fabricated the HearCAM enclosure with 3D printing technology.
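The pattern matching mentioned above is typically done in OpenCV by sliding a template over the camera frame and scoring each position. A toy NumPy sketch of the normalized cross-correlation idea behind OpenCV's `cv2.matchTemplate` with `TM_CCOEFF_NORMED` (the arrays are illustrative, not HearCAM data):

```python
import numpy as np

def match_template(image, templ):
    """Slide `templ` over `image` and return a map of normalized
    cross-correlation scores (mean-subtracted, as in TM_CCOEFF_NORMED).
    A naive loop version for clarity, not performance."""
    ih, iw = image.shape
    th, tw = templ.shape
    t = templ - templ.mean()
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            scores[y, x] = (p * t).sum() / denom if denom else 0.0
    return scores

# A distinctive 2x2 pattern placed at row 2, col 1 of a blank image
# is recovered at exactly that position.
img = np.zeros((5, 5))
img[2, 1] = 1.0
img[3, 2] = 1.0
score = match_template(img, img[2:4, 1:3])
best = np.unravel_index(np.argmax(score), score.shape)
print(best)  # (2, 1)
```

In the real platform one would call `cv2.matchTemplate` on the PI camera frames instead; this sketch only shows what that score map means.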