• Title/Summary/Keyword: user's voice recognition

A Study of Hybrid Automatic Interpret Support System (하이브리드 자동 통역지원 시스템에 관한 연구)

  • Lim, Chong-Gyu;Gang, Bong-Gyun;Park, Ju-Sik;Kang, Bong-Kyun
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.28 no.3
    • /
    • pp.133-141
    • /
    • 2005
  • Previous research has focused mainly on the individual technologies of voice recognition, voice synthesis, translation, and bone transmission. Recently, commercial models have been produced using these technologies. In this research, a new automated interpretation support system concept is proposed by combining established bone transmission and wireless technologies. The proposed system has three major components. First, a hybrid system consisting of a headset, bone transmission, and other technologies recognizes the user's voice. Second, the recognized voice is converted into a digital signal by a small server attached to the user and translated into the other user's language by a translation algorithm. Third, the translated output is transmitted wirelessly to the other party, where it is converted back into voice by that party's hybrid system. This hybrid system transmits a clear message regardless of the noise level in the environment or the user's hearing ability. By using network technology, communication between users can also be transmitted clearly regardless of distance.
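
The abstract describes a three-stage flow (recognize on the headset, translate on a small wearable server, transmit wirelessly for synthesis on the other side). A minimal sketch of that flow follows; every function name is a hypothetical placeholder, not the authors' implementation.

```python
# Hypothetical sketch of the three-stage interpretation flow; no names here
# come from the paper.

def recognize_speech(bone_conduction_audio: bytes) -> str:
    """Stage 1: convert the bone-transmission headset signal to text (placeholder)."""
    raise NotImplementedError

def translate_text(text: str, src: str, dst: str) -> str:
    """Stage 2: translate the recognized text on the wearable server (placeholder)."""
    raise NotImplementedError

def transmit(payload: str, peer: str) -> None:
    """Stage 3: send the translated text wirelessly to the other party (placeholder)."""
    raise NotImplementedError

def interpret_and_forward(audio: bytes, src_lang: str, dst_lang: str, peer: str) -> None:
    text = recognize_speech(audio)                      # user's voice -> text
    translated = translate_text(text, src_lang, dst_lang)
    transmit(translated, peer)                          # peer side synthesizes voice
```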

A Study on VoiceXML Application of User-Controlled Form Dialog System (사용자 주도 폼 다이얼로그 시스템의 VoiceXML 어플리케이션에 관한 연구)

  • Kwon, Hyeong-Joon;Roh, Yong-Wan;Lee, Hyon-Gu;Hong, Hwang-Seok
    • The KIPS Transactions:PartB
    • /
    • v.14B no.3 s.113
    • /
    • pp.183-190
    • /
    • 2007
  • VoiceXML is a markup language, based on XML, designed for navigating web resources by voice. Applications using VoiceXML are classified into mutually-controlled and machine-controlled form dialog structures. Such dialog structures cannot support services in which the user freely navigates web resources, because the scenario is fixed by the application developer. In this paper, we propose a VoiceXML application structure using a user-controlled form dialog system that decides the service scenario according to the user's intention. The proposed application automatically detects recognition candidates in the information requested by the user and uses each candidate as a voice-anchor, connecting every voice-anchor to a new voice-node. As an example of the proposed system, we implement a news service with an IT term dictionary, confirm the detection and registration of voice-anchors, and estimate the hit rate of successfully offering information matching the user's intention as well as the response speed. The experimental results confirm that web resources can be navigated more freely than with existing VoiceXML form dialog systems.
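
A rough sketch of the voice-anchor mechanism described above: terms found in the content the user just requested become recognition candidates, each linked to a new voice-node the dialog can jump to. All names and the dictionary-lookup approach are illustrative assumptions, not the paper's code.

```python
# Illustrative sketch: detected terms become voice-anchors pointing at new nodes.

from dataclasses import dataclass, field

@dataclass
class VoiceNode:
    name: str
    content: str
    anchors: dict = field(default_factory=dict)   # spoken term -> VoiceNode

def detect_candidates(content: str, term_dictionary: set) -> list:
    """Pick out dictionary terms (e.g. from an IT term dictionary) present in the content."""
    return [term for term in term_dictionary if term in content]

def expand_node(node: VoiceNode, term_dictionary: set, fetch) -> None:
    """Register each detected term as a voice-anchor linked to a newly fetched node."""
    for term in detect_candidates(node.content, term_dictionary):
        node.anchors[term] = VoiceNode(name=term, content=fetch(term))

def navigate(node: VoiceNode, recognized_utterance: str) -> VoiceNode:
    """User-controlled step: jump to whichever anchor the recognizer heard."""
    return node.anchors.get(recognized_utterance, node)
```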

A Voice Controlled Service Robot Using Support Vector Machine

  • Kim, Seong-Rock;Park, Jae-Suk;Park, Ju-Hyun;Lee, Suk-Gyu
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1413-1415
    • /
    • 2004
  • This paper proposes an SVM (Support Vector Machine) training algorithm to control a service robot by voice command. The service robot, equipped with a stereo vision system and dual four-degree-of-freedom manipulators, implements a user-dependent voice control system. Training an SVM, one of the statistical learning methods, leads to a QP (quadratic programming) problem. In this paper, we present an efficient SVM speech recognition scheme that requires less training data than conventional approaches. The SVM discriminator decides whether to accept or reject the user's voice features extracted as MFCCs (Mel Frequency Cepstrum Coefficients). Among several SVM kernels, the exponential RBF kernel gives the best classification and the most accurate user recognition. Numerical simulation and experiments verified the usefulness of the proposed algorithm.
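
A hedged sketch of the accept/reject idea: MFCC features averaged per utterance and an RBF-kernel SVM discriminator. librosa and scikit-learn are stand-ins chosen for illustration; the paper trains its own SVM and feature extractor.

```python
# Minimal MFCC + RBF-SVM discriminator sketch (not the paper's implementation).

import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(wav_path: str, n_mfcc: int = 13) -> np.ndarray:
    """Average the MFCC frames of one utterance into a fixed-length vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def train_discriminator(train_files):
    """train_files: list of (wav_path, label); 1 = registered user's command, 0 = reject."""
    X = np.stack([mfcc_features(path) for path, _ in train_files])
    y = np.array([label for _, label in train_files])
    clf = SVC(kernel="rbf", gamma="scale")      # RBF kernel, as favored in the paper
    clf.fit(X, y)
    return clf

def accept(clf, wav_path: str) -> bool:
    """True if the utterance is accepted as the user's voice command."""
    return bool(clf.predict(mfcc_features(wav_path)[None, :])[0])
```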

Implementation of Extracting Specific Information by Sniffing Voice Packet in VoIP

  • Lee, Dong-Geon;Choi, WoongChul
    • International journal of advanced smart convergence
    • /
    • v.9 no.4
    • /
    • pp.209-214
    • /
    • 2020
  • VoIP technology has been widely used for exchanging voice or image data over IP networks. VoIP, often called Internet telephony, sends and receives voice data over the RTP protocol during a session. However, voice data in VoIP carried over RTP is at risk of exposure, because the RTP protocol does not specify encryption of the original data. We implement programs that can extract meaningful information from the users' dialogue, where meaningful information means the information the program user wants to obtain. The implementation has two parts: a client part, which takes as input the keyword of the information the user wants to obtain, and a server part, which sniffs the packets and performs speech recognition. We use the Google Speech API from Google Cloud, which applies machine learning in the speech recognition process. Finally, we discuss the usability and the limitations of the implementation with an example.
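
A hedged sketch of the two server-side steps described above: capture RTP voice payloads on the wire, transcribe them with Google Cloud Speech, and flag transcripts containing the client-supplied keyword. scapy, the G.711 mu-law payload assumption, and the keyword handling are illustrative choices, not the authors' exact implementation.

```python
# Sketch only: sniff UDP/RTP payloads and transcribe them with Google Cloud Speech.

from scapy.all import sniff, UDP
from google.cloud import speech

KEYWORD = "password"            # hypothetical keyword supplied by the client part
captured = bytearray()

def collect_rtp(pkt):
    """Strip the fixed 12-byte RTP header and keep the raw voice payload."""
    if UDP in pkt:
        payload = bytes(pkt[UDP].payload)
        if len(payload) > 12:
            captured.extend(payload[12:])

sniff(filter="udp", prn=collect_rtp, timeout=10)    # capture one short session

client = speech.SpeechClient()
response = client.recognize(
    config=speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.MULAW,  # assuming G.711 PCMU
        sample_rate_hertz=8000,
        language_code="en-US",
    ),
    audio=speech.RecognitionAudio(content=bytes(captured)),
)

for result in response.results:
    transcript = result.alternatives[0].transcript
    if KEYWORD in transcript:
        print("keyword found:", transcript)
```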

Development of a Voice Command-Based Mobile Application for Improving Document Editing Accessibility (문서 편집 접근성 향상을 위한 음성 명령 기반 모바일 어플리케이션 개발)

  • Park, Joo Hyun;Park, Seah;Lee, Muneui;Lim, Soon-Bum
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.11
    • /
    • pp.1342-1352
    • /
    • 2018
  • Voice command systems are an important means of ensuring accessibility to digital devices for people with disabilities or in situations where both hands are occupied, and interest in services using speech recognition technology has been increasing. In this study, we developed a mobile writing application using voice recognition and voice command technology that helps people create and edit documents easily. The application is characterized by minimizing touches on the screen and allowing memos to be written by voice. We systematically designed a mode that distinguishes voice writing from voice commands so that writing and command execution can be used together within a single voice interface. It also provides shortcut functions for controlling the cursor by voice, which makes document editing as convenient as possible. This allows people to access writing applications conveniently by voice under both physical and environmental constraints.
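
A toy sketch of the single-interface idea: each recognized utterance is routed either to the command handler or appended to the document depending on the current mode, and the mode switches are themselves spoken. The command names and mode-switch phrases are invented for illustration.

```python
# Illustrative dictation/command dispatcher (not the authors' application code).

COMMANDS = {"cursor left", "cursor right", "delete word", "new line"}  # hypothetical shortcuts

class VoiceEditor:
    def __init__(self):
        self.mode = "write"                     # "write" (dictation) or "command"
        self.document = []

    def handle(self, utterance: str) -> None:
        text = utterance.strip().lower()
        if text == "command mode":
            self.mode = "command"
        elif text == "writing mode":
            self.mode = "write"
        elif self.mode == "command" and text in COMMANDS:
            self.execute(text)
        else:
            self.document.append(utterance)     # dictated memo text

    def execute(self, command: str) -> None:
        print("executing:", command)            # cursor movement, deletion, etc.
```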

An ASIC implementation of a Dual Channel Acoustic Beamforming for MEMS microphone in 0.18㎛ CMOS technology (0.18㎛ CMOS 공정을 이용한 MEMS 마이크로폰용 이중 채널 음성 빔포밍 ASIC 설계)

  • Jang, Young-Jong;Lee, Jea-Hack;Kim, Dong-Sun;Hwang, Tae-ho
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.13 no.5
    • /
    • pp.949-958
    • /
    • 2018
  • A voice recognition control system controls peripheral devices by recognizing the user's voice. Recently, voice recognition control systems have been applied not only to smart devices but also to various environments including IoT (Internet of Things), robots, and vehicles. In such systems, the recognition rate is lowered by ambient noise mixed with the user's voice. In this paper, we propose a dual-channel acoustic beamforming hardware architecture for MEMS (Microelectromechanical Systems) microphones that suppresses ambient noise while preserving the user's voice. The proposed hardware architecture is implemented as an ASIC (Application-Specific Integrated Circuit) using TowerJazz 0.18 µm CMOS (Complementary Metal-Oxide Semiconductor) technology. The designed dual-channel acoustic beamforming ASIC has a die size of 48 mm², and the directivity index toward the user's voice was measured to be 4.233 dB.
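
For reference, a floating-point sketch of two-channel delay-and-sum beamforming, the basic operation such a beamformer performs: the second channel is delayed so that speech from the look direction adds coherently while off-axis noise does not. The microphone spacing, sample rate, and look direction below are assumed values; the paper's fixed-point ASIC architecture is not reproduced.

```python
# Minimal two-channel delay-and-sum beamformer sketch (assumed parameters).

import numpy as np

def delay_and_sum(ch0: np.ndarray, ch1: np.ndarray,
                  mic_spacing_m: float = 0.02,   # assumed 2 cm MEMS mic spacing
                  angle_deg: float = 0.0,        # look direction (0 = broadside)
                  fs: int = 16000,
                  c: float = 343.0) -> np.ndarray:
    """Steer toward angle_deg by delaying channel 1 and averaging both channels."""
    delay_s = mic_spacing_m * np.sin(np.deg2rad(angle_deg)) / c
    delay_samples = int(round(delay_s * fs))
    ch1_aligned = np.roll(ch1, -delay_samples)   # integer-sample approximation (edges wrap)
    return 0.5 * (ch0 + ch1_aligned)             # on-axis speech adds coherently
```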

Development of a Voice User Interface for Web Browser using VoiceXML (VoiceXML을 이용한 VUI 지원 웹브라우저 개발)

  • Yea SangHoo;Jang MinSeok
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.11 no.2
    • /
    • pp.101-111
    • /
    • 2005
  • Most current web information is described in HTML, which users access through input devices such as the mouse and keyboard. The existing GUI environment therefore does not support the most natural means of human information acquisition, namely voice. To address this, several vendors are developing voice user interfaces, but these products are deficient in man-machine interactivity and in accommodating the existing web environment. This paper presents a web browser supporting a VUI (Voice User Interface) built on maturing speech recognition technology and VoiceXML, a markup language derived from XML. It provides users with both interfaces, VUI as well as GUI. In addition, XML Island technology is applied so that VoiceXML fragments can be nested inside HTML documents, accommodating the existing web environment. For better interactivity, dialogue scenarios for a menu, a bulletin board, and a search engine are also suggested.

An Interactive Voice Web Browser Usable as a Multimodal Interface in Information Devices by Using VoiceXML

  • Jang, Min-Seok
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.14 no.6
    • /
    • pp.771-775
    • /
    • 2004
  • The present Web environment is mostly composed of HTML (Hypertext Markup Language), so users obtain web information mainly in a GUI (Graphical User Interface) environment by clicking the mouse to follow hyperlinks. However, working in this environment is inconvenient compared with one in which information can be obtained using the human voice. Using VoiceXML, derived from XML, to supply information over the telephone on the basis of today's mature voice recognition and synthesis technology, this paper presents research results on a VoiceXML VUI (Voice User Interface) browser designed and implemented to realize this technology, together with the VoiceXML dialogs designed for the browser's efficient use.

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.267-286
    • /
    • 2023
  • Conversational agents such as AI speakers rely on voice conversation for human-computer interaction, and voice recognition errors often occur in these conversational situations. Recognition errors in user utterance records can be categorized into two types. The first is misrecognition, where the agent fails to recognize the user's speech at all. The second is misinterpretation, where the speech is recognized and a service is provided, but the interpretation differs from the user's intention. Misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each separation method, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors. This goes beyond simple word-based similarity calculation and explores a new method for detecting misinterpretation errors. Real user utterance records were used to train and develop a detection model based on patterns of misinterpretation error causes. The results revealed that initial consonant extraction gave the most significant results for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods showed how the different error types behave. This study has two main implications. First, for misinterpretation errors that are hard to detect, it proposed diverse text separation methods and identified one that improves performance remarkably. Second, when applied to conversational agents or voice recognition services requiring neologism detection, it allows the patterns of errors arising at the voice recognition stage to be specified, so that services can be provided according to the user's intended result even when the interaction is not categorized as an error.
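
A small sketch of the initial consonant (choseong) extraction step that the study found most effective, paired with a simple string-similarity measure as a stand-in for the embedding-based similarity used in the paper.

```python
# Strip each precomposed Hangul syllable down to its leading consonant and
# compare consecutive utterances on those strings (similarity measure is a
# simple stand-in, not the paper's embedding-based method).

from difflib import SequenceMatcher

CHOSEONG = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")     # the 19 initial consonants

def initial_consonants(text: str) -> str:
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:                       # precomposed Hangul syllable
            out.append(CHOSEONG[(code - 0xAC00) // 588])   # 588 = 21 vowels * 28 finals
        else:
            out.append(ch)
    return "".join(out)

def choseong_similarity(utterance_a: str, utterance_b: str) -> float:
    """Similarity of two consecutive utterances after initial-consonant extraction."""
    return SequenceMatcher(None,
                           initial_consonants(utterance_a),
                           initial_consonants(utterance_b)).ratio()

# A high similarity between a failed request and its immediate retry can flag a
# misinterpretation error caused by an unregistered neologism.
```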

Automatic Log-in System by the Speaker Certification

  • Sohn, Young-Sun
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.2
    • /
    • pp.176-181
    • /
    • 2004
  • This paper introduces a Web site login system that uses the user's own voice, removing the bother of remembering an ID and password for each site. A DTW (dynamic time warping) method combined with fuzzy inference is used as the speaker recognition algorithm. To block recorded voice, which is a known problem for speaker recognition, we obtain an ACC (Average Cepstrum Coefficient) membership function for each degree using LPC, which models the vocal cords. We infer the presence of recorded voice from the number of zeros among the ACC membership function values and from the average value of the ACC membership function. In experiments with six Web sites and six subjects, the system rejected about 98% of voice recordings made with a digital recorder.
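
For context, a minimal DTW comparison between an enrolled template and a login utterance; the fuzzy inference and the ACC-based recorded-voice check from the paper are not reproduced here, and the acceptance threshold is a made-up value.

```python
# Minimal dynamic time warping (DTW) sketch for speaker verification at login.

import numpy as np

def dtw_distance(template: np.ndarray, utterance: np.ndarray) -> float:
    """DTW distance between two feature sequences of shape (frames, dims)."""
    n, m = len(template), len(utterance)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(template[i - 1] - utterance[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def accept_login(template: np.ndarray, utterance: np.ndarray,
                 threshold: float = 50.0) -> bool:
    """Accept the speaker if the warped distance falls below a tuned threshold."""
    return dtw_distance(template, utterance) < threshold
```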