• Title/Summary/Keyword: Text-to-speech

Fluency and Speech Rate for the Standard Korean Speakers (한국 표준어 화자의 유창성과 말속도에 관한 연구)

  • Shim, Hong-Im
    • Speech Sciences / v.11 no.3 / pp.193-200 / 2004
  • This was a preliminary study toward standardizing the speech rate and fluency of normal adult Korean speakers and comparing them with those of professional speakers. The purposes of this study were to investigate (a) the speech rates (the overall speech rate and the articulation rate) and the disfluency characteristics of normal adult speakers and (b) the differences in speech rates and disfluency characteristics between normal adult speakers and professional speakers. The results were as follows: the most frequent disfluency type was 'interjection' in story-telling and 'revision' in text reading and in the announcing of professional speakers. The professional speakers had the fastest speech rates (overall speech rate and articulation rate) among the three groups.
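
The two rate measures above are computed in the standard way: the overall speech rate includes pauses, while the articulation rate excludes them. A minimal sketch of that arithmetic (the syllable count and pause intervals are assumed inputs, not data from the paper):

```python
# Sketch: overall speech rate vs. articulation rate, in syllables per second.
def speech_rates(n_syllables: int, total_sec: float, pause_secs: list) -> tuple:
    """Return (overall_rate, articulation_rate)."""
    overall = n_syllables / total_sec                           # pauses included
    articulation = n_syllables / (total_sec - sum(pause_secs))  # pauses excluded
    return overall, articulation

# Example: 120 syllables over 30 s containing 6 s of pauses.
print(speech_rates(120, 30.0, [2.0, 1.5, 2.5]))  # (4.0, 5.0)
```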

Development of technology to improve information accessibility of information vulnerable class using crawling & clipping

  • Jeong, Seong-Bae;Kim, Kyung-Shin
    • Journal of the Korea Society of Computer and Information / v.23 no.2 / pp.99-107 / 2018
  • This study began with the public-interest goal of improving access to information for groups who have difficulty acquiring it visually, such as the elderly and the visually impaired. Server resources are minimized, with most processing implemented on the user's smartphone. Using crawling & clipping, the system collects only the registered pattern information from the various sites holding the data a user needs, so the user does not have to visit each site, and stores the collected results on the server. In particular, we applied a TTS (Text-To-Speech) service built as a smartphone app, aiming at a unified, customized, voice-based information collection service.
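
A minimal sketch of the crawl & clip idea: fetch a page, clip only the elements matching a stored pattern, and hand the clipped text to a TTS engine. The URL, the CSS selector, and the use of requests, BeautifulSoup, and pyttsx3 are illustrative assumptions, not the paper's implementation:

```python
# Sketch: crawl & clip a registered page pattern, then read it aloud via TTS.
import requests
from bs4 import BeautifulSoup
import pyttsx3

def crawl_and_clip(url: str, css_selector: str) -> str:
    """Collect only the pattern the user registered, not the whole page."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return " ".join(el.get_text(" ", strip=True)
                    for el in soup.select(css_selector))

def speak(text: str) -> None:
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

# Hypothetical target page and selector.
speak(crawl_and_clip("https://example.com/news", "div.headline"))
```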

Speech and Textual Data Fusion for Emotion Detection: A Multimodal Deep Learning Approach (감정 인지를 위한 음성 및 텍스트 데이터 퓨전: 다중 모달 딥 러닝 접근법)

  • Edward Dwijayanto Cahyadi;Mi-Hwa Song
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.526-527 / 2023
  • Speech emotion recognition (SER) is one of the more interesting topics in the machine learning field. A multi-modal SER system offers numerous benefits. This paper explains how BERT, as the text recognizer, and a CNN, as the speech recognizer, are fused to build a multi-modal SER system.
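
A late-fusion sketch of the approach in PyTorch: BERT encodes the transcript, a small CNN encodes a spectrogram, and the concatenated embeddings feed a classifier. The layer sizes and the fusion scheme are illustrative assumptions; the abstract does not specify the exact architecture:

```python
# Sketch: late fusion of a BERT text encoder and a CNN speech encoder for SER.
import torch
import torch.nn as nn
from transformers import BertModel

class MultimodalSER(nn.Module):
    def __init__(self, n_emotions: int = 4):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.cnn = nn.Sequential(                    # input: (B, 1, n_mels, frames)
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B, 32)
        )
        self.classifier = nn.Linear(768 + 32, n_emotions)

    def forward(self, input_ids, attention_mask, spectrogram):
        text_emb = self.bert(input_ids=input_ids,
                             attention_mask=attention_mask).pooler_output  # (B, 768)
        speech_emb = self.cnn(spectrogram)
        return self.classifier(torch.cat([text_emb, speech_emb], dim=1))
```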

APPLICATION OF KOREAN TEXT-TO-SPEECH FOR X.400 MHS SYSTEM

  • Kim, Hee-Dong;Koo, Jun-Mo;Choi, Ho-Joon;Kim, Sang-Taek
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06a / pp.885-892 / 1994
  • This paper presents a Korean text-to-speech (TTS) algorithm with speed and intonation control capability, and describes the development of a voice message delivery system employing this TTS algorithm. The system allows Interpersonal Messaging (IPM) Service users of a Message Handling System (MHS) to send text messages to users as synthetic voice over a telephone line. The X.400 MHS recommendation does not specify protocols and service elements for a voice message delivery system, so we defined an access protocol and service elements for a Voice Access Unit, based on the application program interface for message transfers between the X.400 Message Transfer Agent and the Voice Access Unit. The system architecture and operations are also described.
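
Since the abstract only outlines the Voice Access Unit, here is a heavily simplified sketch of its delivery flow; the message type and both callables are hypothetical stand-ins for the MTA interface, the paper's Korean TTS, and the telephony layer:

```python
# Sketch: Voice Access Unit flow -- text message in, synthetic voice out.
from dataclasses import dataclass

@dataclass
class IPMessage:              # Interpersonal Message handed over by the X.400 MTA
    recipient_phone: str
    body_text: str

def deliver_as_voice(msg: IPMessage, synthesize, dial_and_play) -> None:
    """synthesize: text -> audio (TTS with speed/intonation control, per the paper);
    dial_and_play: (phone number, audio) -> None, over the telephone line."""
    audio = synthesize(msg.body_text)
    dial_and_play(msg.recipient_phone, audio)
```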

Acoustic Modeling and Energy-Based Postprocessing for Automatic Speech Segmentation (자동 음성 분할을 위한 음향 모델링 및 에너지 기반 후처리)

  • Park Hyeyoung;Kim Hyungsoon
    • MALSORI / no.43 / pp.137-150 / 2002
  • Speech segmentation at the phoneme level is important for corpus-based text-to-speech synthesis. In this paper, we examine acoustic modeling methods to improve the performance of an automatic speech segmentation system based on hidden Markov models (HMMs). We compare monophone and triphone models and evaluate several model training approaches. In addition, we employ an energy-based postprocessing scheme to correct frequent boundary location errors between silence and speech sounds. Experimental results show that our system produces 71.3% and 84.2% correct boundary locations given tolerances of 10 ms and 20 ms, respectively.
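
A sketch of the energy-based correction idea: compute short-time energy in a window around an HMM-proposed silence/speech boundary and snap the boundary to the first frame whose energy crosses a threshold. The frame size, search window, and threshold rule are illustrative assumptions:

```python
# Sketch: refine a silence/speech boundary using short-time energy.
import numpy as np

def refine_boundary(signal: np.ndarray, sr: int, boundary_sec: float,
                    search_ms: int = 20, frame_ms: int = 5,
                    thresh_ratio: float = 0.1) -> float:
    frame = int(sr * frame_ms / 1000)
    half = int(sr * search_ms / 1000)
    center = int(boundary_sec * sr)
    lo, hi = max(0, center - half), min(len(signal), center + half)
    frames = [signal[i:i + frame] for i in range(lo, hi - frame, frame)]
    energy = np.array([np.sum(f.astype(float) ** 2) for f in frames])
    # The first frame whose energy exceeds a fraction of the local maximum
    # is taken as the silence-to-speech transition.
    idx = int(np.argmax(energy > thresh_ratio * energy.max()))
    return (lo + idx * frame) / sr      # corrected boundary, in seconds
```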

Voice Recognition Speech Correction Application Using Big Data Analysis (빅데이터 분석을 활용한 음성 인식 스피치 교정 애플리케이션)

  • Kim, Han-Kyeol;Kim, Do-Woo;Lim, Sae-Myung;Hong, Du-Pyo
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.533-535 / 2019
  • With rising youth unemployment, competition for jobs grows fiercer by the day, and a growing number of companies are increasing the weight of interviews in their hiring processes. Large companies have also introduced AI interviews to ensure objectivity, which has increased the cost burden on job seekers preparing for interviews. Meanwhile, speech recognition and natural language processing are being actively developed in the AI field. This paper applies STT (Speech To Text) and TTS (Text To Speech) to convert recorded interview speech into text and interview question sentences into speech. It also performs morphological analysis of interview sentences using natural language processing and the KNU sentiment lexicon, and visualizes information on positive and negative words.
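
A sketch of that pipeline: transcribe the recorded answer, split it into morphemes, and tally positive and negative words against a sentiment lexicon. The Google recognizer and KoNLPy's Okt tagger are illustrative stand-ins, and the KNU lexicon is assumed to be preloaded as a word-to-polarity dict:

```python
# Sketch: STT -> morpheme analysis -> positive/negative tally for an answer.
import speech_recognition as sr
from konlpy.tag import Okt

def analyze_answer(wav_path: str, knu_lexicon: dict):
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    text = recognizer.recognize_google(audio, language="ko-KR")  # STT step

    morphemes = Okt().morphs(text)                # morphological analysis
    pos = sum(1 for m in morphemes if knu_lexicon.get(m, 0) > 0)
    neg = sum(1 for m in morphemes if knu_lexicon.get(m, 0) < 0)
    return text, pos, neg                         # counts feed the visualization
```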

A Low-Cost Speech to Sign Language Converter

  • Le, Minh;Le, Thanh Minh;Bui, Vu Duc;Truong, Son Ngoc
    • International Journal of Computer Science & Network Security / v.21 no.3 / pp.37-40 / 2021
  • This paper presents the design of a speech to sign language converter for deaf and hard of hearing people. The device is low-cost, consumes little power, and can work entirely offline. Speech recognition is implemented with the open-source Pocketsphinx library. We propose a context-oriented language model that measures the similarity between the recognized speech and predefined speech to decide the output: the output is the stored recommendation that best matches the recognized speech. The proposed context-oriented language model improves the speech recognition rate by 21% while working entirely offline. A decision module based on the similarity between the two texts, measured by Levenshtein distance, selects the output sign language, which is generated as a set of sequential images corresponding to the recognized speech. The converter is deployed on a Raspberry Pi Zero board as a low-cost deaf assistive device.
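
The decision rule is simple to sketch: compute the Levenshtein distance between the recognized text and each predefined command, and return the closest match. The rejection threshold is an illustrative assumption:

```python
# Sketch: context-oriented matching of recognized speech to stored commands.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_match(recognized: str, commands: list, max_dist: int = 3):
    """Return the predefined command closest to the recognized speech."""
    cand = min(commands, key=lambda c: levenshtein(recognized, c))
    return cand if levenshtein(recognized, cand) <= max_dist else None
```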

Primary Study for dialogue based on Ordering Chatbot

  • Kim, Ji-Ho;Park, JongWon;Moon, Ji-Bum;Lee, Yulim;Yoon, Andy Kyung-yong
    • Journal of Multimedia Information System / v.5 no.3 / pp.209-214 / 2018
  • Today is the era of artificial intelligence. With its development, machines have begun to take on various human characteristics. A chatbot, a computer program that can conduct natural conversations with people, is one instance of such interactive artificial intelligence. Chatbots have traditionally conversed in text, but the chatbot in this study evolves to execute commands based on speech recognition. For a chatbot to emulate human dialogue well, it must analyze each sentence correctly and extract an appropriate response. To accomplish this, sentences are classified into three types: objects, actions, and preferences. This study shows how objects are analyzed and processed, and demonstrates the possibility of evolving from an elementary model to an advanced intelligent system. It also evaluates whether a speech-recognition-based chatbot improves order-processing time over a text-based chatbot. Such chatbots have the potential to automate customer service and reduce human effort.
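
A sketch of the three-way decomposition for an ordering domain: match tokens against small object/action/preference vocabularies and fill slots. The vocabularies are hypothetical examples, not the paper's:

```python
# Sketch: split an order sentence into object / action / preference slots.
OBJECTS = {"americano", "latte", "sandwich"}     # hypothetical menu items
ACTIONS = {"order", "cancel", "change"}
PREFERENCES = {"iced", "hot", "large", "small"}

def parse_order(sentence: str) -> dict:
    tokens = sentence.lower().split()
    return {
        "object": next((t for t in tokens if t in OBJECTS), None),
        "action": next((t for t in tokens if t in ACTIONS), None),
        "preference": [t for t in tokens if t in PREFERENCES],
    }

print(parse_order("Order one large iced americano"))
# {'object': 'americano', 'action': 'order', 'preference': ['large', 'iced']}
```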

A Study on the Voice Conversion with HMM-based Korean Speech Synthesis (HMM 기반의 한국어 음성합성에서 음색변환에 관한 연구)

  • Kim, Il-Hwan;Bae, Keun-Sung
    • MALSORI / v.68 / pp.65-74 / 2008
  • Statistical parametric speech synthesis based on hidden Markov models (HMMs) has grown in popularity over the last few years because it needs less memory and lower computational complexity than a corpus-based unit-concatenation text-to-speech (TTS) system, making it suitable for embedded systems. It also has the advantage that the voice characteristics of the synthetic speech can be modified easily by transforming the HMM parameters appropriately. In this paper, we present experimental results of voice characteristics conversion using an HMM-based Korean speech synthesis system. The results show that voice characteristics could be converted using only a few sentences uttered by a target speaker: synthetic speech generated from models adapted with only ten sentences was very close to that from speaker-dependent models trained on 646 sentences.
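
The adaptation step can be sketched as an MLLR-style linear transform of the Gaussian means, estimated from a few target-speaker sentences. The abstract does not name the exact adaptation method, so this is an assumed illustration:

```python
# Sketch: MLLR-style mean adaptation from a small amount of target data.
import numpy as np

def estimate_transform(source_means: np.ndarray, target_obs: np.ndarray):
    """Least-squares fit of target_obs ~= [means | 1] @ W over aligned pairs."""
    X = np.hstack([source_means, np.ones((len(source_means), 1))])  # (N, D+1)
    W, *_ = np.linalg.lstsq(X, target_obs, rcond=None)              # (D+1, D)
    return W

def adapt_means(means: np.ndarray, W: np.ndarray) -> np.ndarray:
    X = np.hstack([means, np.ones((len(means), 1))])
    return X @ W        # adapted mean for every Gaussian in the model
```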

Development of a Work Management System Based on Speech and Speaker Recognition

  • Gaybulayev, Abdulaziz;Yunusov, Jahongir;Kim, Tae-Hyong
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.3 / pp.89-97 / 2021
  • A voice interface can not only make daily life more convenient through artificial intelligence speakers, but also improve working conditions in factories. This paper presents a voice-assisted work management system that supports both speech and speaker recognition, providing machine control and authorized worker authentication by voice at the same time. We applied two speech recognition methods: Google's Speech application programming interface (API) service and the DeepSpeech speech-to-text engine. For worker identification, the SincNet architecture for speaker recognition was adopted. We implemented a prototype that provides voice control with 26 commands and identifies 100 workers by voice. Worker identification using our model was almost perfect, and command recognition accuracy was 97.0% with the Google API after post-processing and 92.0% with our DeepSpeech model.
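
A sketch of how the two checks combine: accept a spoken command only if its transcript matches one of the registered commands and the speaker embedding is close enough to an enrolled worker's. The embed callable stands in for a SincNet-style network, and the threshold is an illustrative assumption:

```python
# Sketch: joint command + speaker gate for the work management system.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authorize(command_text: str, speaker_audio, commands: set,
              enrolled: dict, embed, sim_thresh: float = 0.8):
    """enrolled maps worker name -> reference embedding; embed mimics SincNet."""
    if command_text not in commands:          # one of the 26 known commands?
        return None
    emb = embed(speaker_audio)
    worker, sim = max(((w, cosine(emb, ref)) for w, ref in enrolled.items()),
                      key=lambda x: x[1])
    return (worker, command_text) if sim >= sim_thresh else None
```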