• Title/Summary/Keyword: Speech recognition services


An Implementation of the Speech-Library and Conversion Web-Services of the Web-Page for Speech-Recognition (음성인식을 위한 웹페이지 변환 웹서비스와 음성라이브러리 구현)

  • Oh, Jee-Young;Kim, Yoon-Joong
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.478-482 / 2006
  • This paper implements a speech library and Web services that convert Web pages for speech recognition. The system consists of a Web services consumer and Web services providers. The consumer contains two libraries: the speech library and a proxy library. The speech library extracts speech data from the user's utterance and searches a link table for the URL mapped to that utterance; the proxy library calls the two Web services and receives their results. The providers consist of a parsing Web service and a speech-recognition Web service. The parsing Web service adds an ActiveX control and reconstructs the Web page for speech recognition; the speech recognizer is the Web service provider implemented in our previous study. Experiments show that the system reconstructs Web pages, creates the link table, and returns the requested page to the user by looking up in the link table the URL mapped to the result of the speech-recognition Web service.

  • PDF
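The link-table mechanism described above, where a recognized utterance is mapped to a page URL, can be sketched as a simple lookup. The table layout and function names below are assumptions for illustration; the abstract does not give them:

```python
def build_link_table(links):
    """Build a link table from (anchor_text, url) pairs found on a page.

    The parsing service would collect these pairs while reconstructing
    the page; keys are normalized so recognition output matches them.
    """
    return {text.strip().lower(): url for text, url in links}

def resolve_utterance(link_table, recognized_text):
    """Return the URL mapped to the recognized utterance, or None."""
    return link_table.get(recognized_text.strip().lower())

table = build_link_table([("weather", "/weather.html"),
                          ("news", "/news.html")])
print(resolve_utterance(table, "Weather"))  # -> /weather.html
```

An utterance with no entry in the table simply resolves to None, so the caller can fall back to re-prompting the user.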

Development of a Weather Forecast Service Based on AIN Using Speech Recognition (음성 인식을 이용한 지능망 기반 일기예보 서비스 개발)

  • Park Sung-Joon;Kim Jae-In;Koo Myoung-Wan;Jhon Chu-Shik
    • MALSORI / no.51 / pp.137-149 / 2004
  • A weather forecast service with speech recognition is described. It lets users obtain weather information for any city by saying the city name in a single phone call, which the previous weather forecast service did not provide. Speech recognition is implemented in the intelligent peripheral (IP) of the advanced intelligent network (AIN). The AIN is a telephone network architecture that separates service logic from switching equipment, so new services can be added without redesigning switches. Speech recognition experiments show an accuracy of 90.06% on the general users' speech database; on the laboratory members' speech database, the accuracies are 95.04% in simulation and 93.81% in tests on the developed system.

  • PDF

Real-Time Implementation of Wireless Remote Control of Mobile Robot Based-on Speech Recognition Command (음성명령에 의한 모바일로봇의 실시간 무선원격 제어 실현)

  • Shim, Byoung-Kyun;Han, Sung-Hyun
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.20 no.2 / pp.207-213 / 2011
  • In this paper, we present a real-time implementation of a mobile robot controlled by interactive voice recognition. Speech commands are uttered as sentential connected words and transmitted through a wireless remote-control system. We implement an automatic distant-speech command recognition system for interactive voice-enabled services. A baseline recognizer is constructed with acoustic models trained on speech recorded through a microphone. To improve its performance, the acoustic models are adapted to the spectral characteristics of different microphones and to the environmental mismatch between close-talking and distant speech. Experiments illustrate the performance of the developed system: the average recognition rate of the proposed system is about 95% or above.

Multimodal audiovisual speech recognition architecture using a three-feature multi-fusion method for noise-robust systems

  • Sanghun Jeon;Jieun Lee;Dohyeon Yeo;Yong-Ju Lee;SeungJun Kim
    • ETRI Journal / v.46 no.1 / pp.22-34 / 2024
  • Exposure to varied noisy environments impairs the recognition performance of artificial intelligence-based speech recognition technologies. Services with degraded performance can still be deployed as limited systems that assure good performance in certain environments, but this impairs the general quality of speech recognition services. This study introduces an audiovisual speech recognition (AVSR) model robust to various noise settings, mimicking human dialogue recognition elements. The model converts word embeddings and log-Mel spectrograms into feature vectors for audio recognition. A dense spatial-temporal convolutional neural network model extracts features from log-Mel spectrograms, transformed for visual-based recognition. This approach exhibits improved aural and visual recognition capabilities. We assess the signal-to-noise ratio in nine synthesized noise environments, with the proposed model exhibiting lower average error rates. The error rate for the AVSR model using a three-feature multi-fusion method is 1.711%, compared to the general 3.939% rate. This model is applicable in noise-affected environments owing to its enhanced stability and recognition rate.
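The abstract does not spell out the three-feature multi-fusion method. A common baseline for combining audio and visual streams in AVSR is weighted log-linear late fusion of per-stream posteriors; the sketch below assumes that form, and all names are illustrative rather than the paper's:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def fuse_streams(stream_scores, weights):
    """Weighted log-linear late fusion: combine per-stream class scores
    (e.g. one audio stream and one visual stream) into one posterior."""
    probs = [softmax(s) for s in stream_scores]
    n_classes = len(stream_scores[0])
    fused_logs = [sum(w * math.log(p[c] + 1e-12)
                      for p, w in zip(probs, weights))
                  for c in range(n_classes)]
    return softmax(fused_logs)
```

In such a scheme the stream weights would be tuned on held-out noisy data, so that the visual stream dominates at low signal-to-noise ratios.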

Speech Interactive Agent on Car Navigation System Using Embedded ASR/DSR/TTS

  • Lee, Heung-Kyu;Kwon, Oh-Il;Ko, Han-Seok
    • Speech Sciences / v.11 no.2 / pp.181-192 / 2004
  • This paper presents an efficient speech interactive agent that renders smooth car navigation and Telematics services by employing embedded automatic speech recognition (ASR), distributed speech recognition (DSR), and text-to-speech (TTS) modules, all while enabling safe driving. A speech interactive agent is essentially a conversational tool providing command and control functions to drivers, such as navigation tasks, audio/video manipulation, and E-commerce services, through natural voice/response interactions between user and interface. While the benefits of automatic speech recognition and speech synthesis are well known, the available hardware resources are often limited and the internal communication protocols are complex, so achieving real-time responses is difficult; as a result, performance degradation always exists in the embedded hardware system. To implement a speech interactive agent that handles user commands in real time, we propose optimizing the hardware-dependent architectural code for speed. In particular, we provide a composite solution through memory reconfiguration and efficient arithmetic-operation conversion, as well as an effective out-of-vocabulary rejection algorithm, all made suitable for system operation under limited resources.

  • PDF
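The abstract mentions an out-of-vocabulary rejection algorithm without detail. One standard scheme rejects an utterance when the best in-vocabulary score does not beat a garbage/filler-model score by a margin; the sketch below assumes that scheme, and the scores, margin, and names are illustrative only:

```python
def recognize_with_rejection(command_scores, garbage_score, margin=2.0):
    """Pick the best in-vocabulary command, but reject the utterance as
    out-of-vocabulary when its log-score does not exceed a garbage/filler
    model score by at least `margin` (a log-likelihood-ratio test)."""
    best_cmd, best_score = max(command_scores.items(), key=lambda kv: kv[1])
    if best_score - garbage_score < margin:
        return None  # out-of-vocabulary: no command accepted
    return best_cmd

# An in-vocabulary command clearly beats the filler model:
print(recognize_with_rejection({"play": -10.0, "stop": -14.0}, -15.0))
```

The margin trades false rejections against false acceptances and would be tuned on development data for the embedded platform.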

A Study on Dialect Expression in Korean-Based Speech Recognition (한국어 기반 음성 인식에서 사투리 표현에 관한 연구)

  • Lee, Sin-hyup
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.333-335 / 2022
  • Advances in speech-recognition processing technology have been applied in various video and streaming services together with STT and TTS technologies. However, recognizing actual conversational speech as clear written text faces high barriers because of dialect use and the overlap of stop words, exclamations, and similar-sounding words. For ambiguous dialects in speech recognition, this study proposes a speech-recognition technique that applies a per-category dialect keyword dictionary and dialect prosody as properties of the speech-recognition network model.

  • PDF
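A per-category dialect keyword dictionary of the kind proposed above can be sketched as a lookup that normalizes dialect forms to standard forms before scoring. The category names and dictionary entries here are invented examples, not the paper's data:

```python
# Hypothetical per-category dialect keyword dictionaries mapping dialect
# forms to their standard Korean equivalents (illustrative entries only).
DIALECT_DICT = {
    "food":   {"정구지": "부추"},      # Gyeongsang dialect for chives
    "family": {"할배": "할아버지"},    # dialect form of "grandfather"
}

def normalize_dialect(tokens, category):
    """Replace known dialect keywords of a category with standard forms;
    tokens without an entry pass through unchanged."""
    table = DIALECT_DICT.get(category, {})
    return [table.get(tok, tok) for tok in tokens]

print(normalize_dialect(["정구지", "주세요"], "food"))
```

Splitting the dictionary by category keeps each lookup small and lets ambiguous forms resolve differently depending on the detected topic.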

A Proposal and Evaluation of a New Formant Tracking Algorithm for Speech Recognition (음성인식을 위한 새로운 포만트트랙킹 알고리즘의 제안과 평가)

  • 송정영
    • Journal of Internet Computing and Services / v.3 no.4 / pp.51-59 / 2002
  • For speech recognition, this paper proposes an improved formant tracking algorithm. The simulation data are Korean digit speech; on 300 digit utterances, the improved algorithm achieves a recognition rate of 91%. The effectiveness of this research is confirmed through recognition simulations.

  • PDF
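The abstract gives no details of the improved algorithm, so as an illustration of what formant tracking involves, the sketch below picks spectral-envelope peaks as formant candidates and links them across frames by frequency continuity. The peak picker, the continuity rule, and every name are assumptions, not the paper's method:

```python
def spectral_peaks(spectrum, freqs):
    """Candidate formants: frequencies of local maxima of a (smoothed)
    spectral envelope, given parallel lists of magnitudes and Hz bins."""
    return [freqs[i] for i in range(1, len(spectrum) - 1)
            if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]

def track_formants(frames, freqs, n_formants=3, max_jump=300.0):
    """Link formant candidates across frames: each track is matched to
    the nearest candidate within `max_jump` Hz, and keeps its previous
    value when no candidate is close enough (a simple continuity rule)."""
    tracks, prev = [], None
    for spectrum in frames:
        cands = spectral_peaks(spectrum, freqs)
        if prev is None:
            cur = cands[:n_formants]
        else:
            cur = []
            for f_prev in prev:
                near = [c for c in cands if abs(c - f_prev) <= max_jump]
                cur.append(min(near, key=lambda c: abs(c - f_prev))
                           if near else f_prev)
        tracks.append(cur)
        prev = cur
    return tracks
```

Real trackers usually take the envelope from LPC analysis rather than the raw spectrum, but the continuity step is the same idea.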

A Study on an Intelligent Personal Assistant (IPA) Development Method Based on Open Source (오픈소스기반의 지능형 개인 도움시스템(IPA) 개발방법 연구)

  • Kim, Kil-hyun;Kim, Young-kil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.10a / pp.89-92 / 2016
  • The latest services, such as Siri, recognize and respond to spoken words on smartphones or through web services. Handling such speech intelligently requires searching big data in the cloud and parsing the given context accurately. In this paper, we propose a method for developing an open-source-based intelligent personal assistant (IPA) using ASR (Automatic Speech Recognition), a QAS (Question Answering System), and TTS (Text To Speech).

  • PDF
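The proposed ASR → QAS → TTS chain can be sketched as a simple pipeline. The three components below are stubs standing in for real open-source engines; every name, signature, and return value is illustrative, not from the paper:

```python
def asr(audio):
    """Stub: a real ASR engine would decode a waveform into text."""
    return audio["transcript"]

def qas(question, knowledge):
    """Stub: a real QAS would query a QA backend or cloud search."""
    return knowledge.get(question, "I don't know.")

def tts(text):
    """Stub: a real TTS engine would synthesize a waveform from text."""
    return {"spoken": text}

def assistant(audio, knowledge):
    """Chain the three stages: speech in, answer text, speech out."""
    return tts(qas(asr(audio), knowledge))

reply = assistant({"transcript": "what time is it"},
                  {"what time is it": "It is noon."})
print(reply["spoken"])
```

Keeping each stage behind its own function makes it straightforward to swap in a different open-source engine at any one stage.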

A Study on Voice Web Browsing in Automatic Speech Recognition Application System (음성인식 시스템에서의 Voice Web Browsing에 관한 연구)

  • 윤재석
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.5 / pp.949-954 / 2003
  • In this study, an Automatic Speech Recognition Application System is designed and implemented to realize the transition from today's GUI-centered web services to VUI-centered web services. Because ASP is limited in reusability and portability on the web, the system is devised with a JavaBeans component architecture. A voice web browser that transfers voice and graphic information simultaneously is also studied, using the Remote AWT (Abstract Window Toolkit).

A Basic Performance Evaluation of the Speech Recognition APP of Standard Language and Dialect using Google, Naver, and Daum KAKAO APIs (구글, 네이버, 다음 카카오 API 활용앱의 표준어 및 방언 음성인식 기초 성능평가)

  • Roh, Hee-Kyung;Lee, Kang-Hee
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.12 / pp.819-829 / 2017
  • In this paper, we describe the current state of speech-recognition technology, first identifying the basic techniques and algorithms and then explaining the code flow of the APIs needed for speech recognition. Using the application programming interfaces (APIs) of Google, Naver, and Daum KaKao, which operate the best-known search engines among speech-recognition API providers, we build a speech-recognition app in Android Studio. We then run recognition experiments on standard speech and dialects, grouped by gender, age, and region, and tabulate the recognition rates. Experiments were conducted on the Gyeongsang-do, Chungcheong-do, and Jeolla-do provinces, where dialects are strong, with comparative experiments on standard speech. From the resulting sentences, accuracy is checked with respect to word spacing, final consonants, postpositions, and word choice, and the number of each error type is counted. From the recognition rates, we aim to present the advantages of each API and to establish a basic framework for their most efficient use.
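The accuracy checks described above can be automated with a word-level edit-distance score. The paper's own counting distinguishes spacing, final-consonant, postposition, and word errors, which this sketch does not, so treat it as a simplified illustration under that assumption:

```python
def word_errors(ref, hyp):
    """Word-level Levenshtein distance between a reference transcript
    and a recognizer hypothesis (substitutions + insertions + deletions)."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # deleting all reference words
    for j in range(len(h) + 1):
        d[0][j] = j                      # inserting all hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(r)][len(h)]

def accuracy(ref, hyp):
    """Recognition accuracy in percent, relative to reference length."""
    n = len(ref.split())
    return 100.0 * (1 - word_errors(ref, hyp) / n)
```

Running the same scorer over each API's output for the same test sentences yields the comparable per-API recognition rates the study tabulates.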