• Title/Summary/Keyword: Voice Recognition Technology


Artificial intelligence wearable platform that supports the life cycle of the visually impaired (시각장애인의 라이프 사이클을 지원하는 인공지능 웨어러블 플랫폼)

  • Park, Siwoong;Kim, Jeung Eun;Kang, Hyun Seo;Park, Hyoung Jun
    • Journal of Platform Technology / v.8 no.4 / pp.20-28 / 2020
  • In this paper, a voice, object, and optical character recognition platform comprising voice-recognition-based smart wearable devices, smart devices, and a web AI server is proposed as an appropriate technology to help the visually impaired live independently by learning their life cycle in advance. The wearable device was designed and manufactured with a reverse-neckband structure to increase wearing convenience and object recognition efficiency. The high-sensitivity small microphone and speaker attached to the wearable device were configured to support the voice recognition interface provided by the app of the linked smart device. The voice, object, and optical character recognition services used open-source software and Google APIs on the web AI server, and experimental results confirmed that the recognition accuracy of the service platform averaged 90% or more for voice, objects, and optical characters.
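
The abstract does not specify how a recognized command is routed among the three services; a hypothetical sketch of such a dispatcher (the keyword sets are illustrative assumptions, not the platform's actual grammar):

```python
def route_command(text):
    """Route a recognized utterance to one of the platform's three
    services.  The keyword lists below are illustrative assumptions,
    not the paper's command grammar."""
    text = text.lower()
    if any(k in text for k in ("read", "text", "sign")):
        return "ocr"                      # optical character recognition
    if any(k in text for k in ("what is", "object", "front of me")):
        return "object"                   # object recognition
    return "voice"                        # fall back to voice dialogue
```

A server would then forward the audio or camera frame to the selected recognition backend.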


Voice Recognition Module for Multi-functional Electric Wheelchair (다기능 전동휠체어의 음성인식 모듈에 관한 연구)

  • 류홍석;김정훈;강성인;강재명;이상배
    • Proceedings of the IEEK Conference / 2002.06c / pp.83-86 / 2002
  • This paper aims to provide convenience, through voice recognition technology, to disabled people who have lost the use of their limbs. The voice recognition part of the system recognizes voice by DTW (Dynamic Time Warping), the method most widely used in speaker-dependent systems. In particular, the S/N ratio was improved with a Wiener filter in the pre-processing phase to account for real environmental conditions, and 12th-order feature patterns per frame were extracted using LPC and cepstrum coefficients and matched by the DTW algorithm. Furthermore, miniaturization was pursued by using the TMS320C32, TI's floating-point DSP, for the hardware. Currently, 90% of the hardware porting has been completed, and a recognition rate of 96% was confirmed when running the DTW algorithm on a PC.
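
The abstract gives no code; a minimal pure-Python sketch of the DTW template match it describes, assuming frame-wise feature vectors compared with Euclidean distance:

```python
import math

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two feature sequences.

    Each sequence is a list of frame vectors (lists of floats); frames
    are compared with Euclidean distance, as in a classic speaker-
    dependent template matcher."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal accumulated cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])     # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Recognition then picks the stored template with the smallest warped distance to the input utterance.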


Automatic Log-in System by the Speaker Certification

  • Sohn, Young-Sun
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.176-181 / 2004
  • This paper introduces a Web site login system that uses the user's own voice to remove the burden of remembering an ID and password to log in to a Web site. A DTW method that applies fuzzy inference is used as the speaker recognition algorithm. To block recorded voice, which is a problem for speaker recognition, we obtain the ACC (Average Cepstrum Coefficient) membership function for each degree using LPC, which models the vocal cords. We infer the presence of recorded voice from thresholds set on the number of zeros among the ACC membership values and on their average value. In experiments on six Web sites with six subjects, the system rejected voice recorded with a digital recorder about 98% of the time.
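
The ACC membership function is only loosely specified in the abstract; a hypothetical sketch of a triangular fuzzy membership test combined with a zero-count decision rule (all thresholds and shapes here are placeholders, not the paper's actual parameters):

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def looks_recorded(acc_values, a=-1.0, b=0.0, c=1.0, max_zeros=2):
    """Flag an utterance as a playback recording when too many of its
    average-cepstrum-coefficient values fall outside the fuzzy support,
    i.e. receive membership 0.  Thresholds are illustrative only."""
    zeros = sum(1 for x in acc_values if tri_membership(x, a, b, c) == 0.0)
    return zeros > max_zeros
```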

CONTINUOUS DIGIT RECOGNITION FOR A REAL-TIME VOICE DIALING SYSTEM USING DISCRETE HIDDEN MARKOV MODELS

  • Choi, S.H.;Hong, H.J.;Lee, S.W.;Kim, H.K.;Oh, K.C.;Kim, K.C.;Lee, H.S.
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06a / pp.1027-1032 / 1994
  • This paper introduces an inter-word modeling and Viterbi search method for continuous speech recognition. We also describe the development of a real-time voice dialing system that can recognize around one hundred words and continuous digits in speaker-independent mode. For continuous digit recognition, between-word units are proposed to provide a more precise representation of word junctures. The best path in the HMM is found by the Viterbi search algorithm, from which digit sequences are recognized. Simulation results show that inter-word modeling using context-dependent between-word units provides better recognition rates than pause modeling using a context-independent pause unit. The voice dialing system is implemented on a DSP board with a telephone interface plugged into an IBM PC AT/486.
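
The Viterbi search the abstract relies on can be sketched for a discrete HMM as follows (the toy probability tables in the test are assumptions, not the paper's digit models):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Viterbi decoding: the most likely state sequence for an
    observation sequence under a discrete HMM, with probabilities
    supplied as plain dicts."""
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            back[t][s] = prev
    # Backtrack from the best final state
    best = max(states, key=lambda s: V[-1][s])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        best = back[t][best]
        path.insert(0, best)
    return path
```

In the paper's setting the states would be the context-dependent between-word and digit units, and the decoded path yields the digit sequence.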


A Study on Stable Motion Control of Humanoid Robot with 24 Joints Based on Voice Command

  • Lee, Woo-Song;Kim, Min-Seong;Bae, Ho-Young;Jung, Yang-Keun;Jung, Young-Hwa;Shin, Gi-Soo;Park, In-Man;Han, Sung-Hyun
    • Journal of the Korean Society of Industry Convergence / v.21 no.1 / pp.17-27 / 2018
  • We propose a new approach to controlling biped robot motion based on iterative learning of voice commands for the implementation of a smart factory. Real-time processing of the speech signal is very important for high-speed, precise automatic voice recognition. Recently, voice recognition has been used for intelligent robot control, artificial life, wireless communication, and IoT applications. To extract valuable information from the speech signal, make decisions, and obtain results, the data must be manipulated and analyzed. The basic method for extracting features of the voice signal is to compute Mel-frequency cepstral coefficients (MFCCs): coefficients that collectively represent the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. The reliability of voice commands for controlling the biped robot's motion is illustrated by computer simulation and by experiments on a biped walking robot with 24 joints.
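
The mel-scale mapping behind MFCCs, as described above, can be written out directly; the filter-bank spacing helper is an illustrative addition using the common 2595·log10(1 + f/700) formula:

```python
import math

def hz_to_mel(f):
    """Standard mel-scale mapping used when placing MFCC filter banks."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse mapping: convert equally spaced mel points back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_centers(f_low, f_high, n_filters):
    """Center frequencies (Hz) of n_filters triangular filters spaced
    uniformly on the mel scale between f_low and f_high."""
    m_low, m_high = hz_to_mel(f_low), hz_to_mel(f_high)
    step = (m_high - m_low) / (n_filters + 1)
    return [mel_to_hz(m_low + step * (i + 1)) for i in range(n_filters)]
```

The nonlinearity places filters densely at low frequencies and sparsely at high ones, mirroring human pitch perception.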

A Study on Emotion Recognition of Chunk-Based Time Series Speech (청크 기반 시계열 음성의 감정 인식 연구)

  • Hyun-Sam Shin;Jun-Ki Hong;Sung-Chan Hong
    • Journal of Internet Computing and Services / v.24 no.2 / pp.11-18 / 2023
  • Recently, in the field of Speech Emotion Recognition (SER), many studies have sought to improve accuracy through voice features and modeling. Beyond modeling studies, various studies using voice features are being conducted. In this paper, voice files are separated into time intervals in a time-series manner, based on the observation that vocal emotion is related to the flow of time. After separation, we propose a model that classifies the emotions of speech data by extracting the speech features Mel spectrogram, chroma, zero-crossing rate (ZCR), root mean square (RMS), and mel-frequency cepstral coefficients (MFCC) and feeding them to recurrent neural network models used for sequential data processing. In the proposed method, voice features are extracted from all files using the 'librosa' library and applied to the neural network models. The experiments compare and analyze the performance of recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) English dataset.
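
Of the features listed, ZCR and RMS are simple enough to sketch per chunk in pure Python (the fixed-size chunking scheme here is an assumption; the paper uses librosa for extraction):

```python
import math

def chunk(signal, size):
    """Split a sample list into consecutive chunks of `size` samples
    (the last, possibly shorter, chunk is kept)."""
    return [signal[i:i + size] for i in range(0, len(signal), size)]

def zcr(frame):
    """Zero-crossing rate: fraction of adjacent sample pairs whose
    sign flips."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    return crossings / max(len(frame) - 1, 1)

def rms(frame):
    """Root-mean-square energy of one chunk."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def features_per_chunk(signal, size):
    """Per-chunk (ZCR, RMS) pairs, forming a time series for an RNN."""
    return [(zcr(c), rms(c)) for c in chunk(signal, size)]
```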

Semantic-Oriented Error Correction for Voice-Activated Information Retrieval System

  • Yoon, Yong-Wook;Kim, Byeong-Chang;Lee, Gary-Geunbae
    • MALSORI / no.44 / pp.115-130 / 2002
  • Voice input is often required in many new application environments, but low speech recognition rates make it difficult to extend its application. Previous approaches raised recognition accuracy by post-processing the recognition results, and were all lexical-oriented. We suggest a new semantic-oriented approach to speech recognition error correction. Through experiments on a speech-driven in-vehicle telematics information application, we show the excellent performance of our approach and some advantages it has, as a semantic-oriented approach, over a purely lexical-oriented one.
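
As a contrast to the paper's semantic approach, the lexical-oriented baseline it argues against can be sketched as nearest-vocabulary-word correction by edit distance (the vocabulary in the test is invented for illustration):

```python
def edit_distance(a, b):
    """Levenshtein distance between two words (iterative DP, O(len(b)) memory)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def lexical_correct(word, vocabulary):
    """Replace a misrecognized word with the closest in-vocabulary word,
    the purely lexical post-processing the paper contrasts with its
    semantic-oriented correction."""
    return min(vocabulary, key=lambda v: edit_distance(word, v))
```

The semantic approach would instead rank candidates by how well they fit the utterance's meaning in the application domain, which pure edit distance cannot capture.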


An Experimental Study on Barging-In Effects for Speech Recognition Using Three Telephone Interface Boards

  • Park, Sung-Joon;Kim, Ho-Kyoung;Koo, Myoung-Wan
    • Speech Sciences / v.8 no.1 / pp.159-165 / 2001
  • In this paper, we experiment with speech recognition systems using barging-in and non-barging-in utterances. Barging-in capability, which lets users speak voice commands while a voice announcement is still playing, is one of the important elements of practical speech recognition systems. It can be realized by echo cancellation techniques based on the LMS (least-mean-square) algorithm. We use three kinds of telephone interface boards with barging-in capability, made by Dialogic, Natural MicroSystems, and Korea Telecom respectively. A speech database was built using these three boards, and we perform a comparative recognition experiment on it.
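
The LMS echo cancellation underlying barging-in can be sketched as an adaptive filter that learns the echo of the outgoing announcement and subtracts it from the microphone signal (tap count and step size below are illustrative choices):

```python
def lms_cancel(reference, mic, taps=4, mu=0.05):
    """LMS adaptive echo canceller sketch.

    `reference` is the announcement being played out and `mic` the
    telephone signal containing an echo of it; the filter learns the
    echo path and returns the residual (echo-removed) signal in which
    a barged-in command would remain."""
    w = [0.0] * taps
    out = []
    for n in range(len(mic)):
        # Last `taps` reference samples, newest first (zero-padded at start)
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        est = sum(wk * xk for wk, xk in zip(w, x))            # estimated echo
        e = mic[n] - est                                      # residual = error
        w = [wk + 2.0 * mu * e * xk for wk, xk in zip(w, x)]  # LMS weight update
        out.append(e)
    return out
```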


Voice Portal based on SMS Authentication at CTI Module Implementation by Speech Recognition (SMS 인증 기반의 보이스포탈에서의 음성인식을 위한 CTI 모듈 구현)

  • Oh, Se-Il;Kim, Bong-Hyun;Koh, Jin-Hwan;Park, Won-Tea
    • Proceedings of the Korea Information Processing Society Conference / 2001.04b / pp.1177-1180 / 2001
  • Voice Portal services, which let users hear Internet information over the telephone, are gaining popularity. In a Voice Portal service, the user speaks the desired query to a speech recognition system and hears the requested information back over the phone. The service requires an SMS (Short Message Service) server module that performs authentication, a CTI (Computer Telephony Integration) module that provides the interface between the PSTN and the database server, a VoiceXML module between the CTI server and the WWW (World Wide Web), and a searching module for retrieving information. This paper implements a CTI module design based on speech recognition technology. In addition, by adopting SMS authentication based on random one-time passwords, it aims to provide an even more secure service.
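
The random one-time-password step of the SMS authentication can be sketched as follows (class and method names are hypothetical; the paper describes server modules, not this API):

```python
import secrets

def issue_otp(n_digits=6):
    """Issue a random numeric one-time password, as sent out via SMS."""
    return "".join(secrets.choice("0123456789") for _ in range(n_digits))

class OtpVerifier:
    """One-shot verifier: a password is consumed on first successful use."""

    def __init__(self):
        self._pending = {}            # phone number -> outstanding OTP

    def send(self, phone):
        otp = issue_otp()
        self._pending[phone] = otp    # in the paper this goes out via SMS
        return otp                    # returned here only for illustration

    def verify(self, phone, otp):
        ok = self._pending.get(phone) == otp
        if ok:
            del self._pending[phone]  # single use: discard once accepted
        return ok
```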


A Study of Hybrid Automatic Interpret Support System (하이브리드 자동 통역지원 시스템에 관한 연구)

  • Lim, Chong-Gyu;Gang, Bong-Gyun;Park, Ju-Sik;Kang, Bong-Kyun
    • Journal of Korean Society of Industrial and Systems Engineering / v.28 no.3 / pp.133-141 / 2005
  • Previous research has mainly focused on the individual technologies of voice recognition, voice synthesis, translation, and bone conduction, and commercial products using these technologies have recently appeared. In this research, a new automatic interpretation support system is proposed by combining established bone conduction and wireless technologies. The proposed system has three major components. First, a hybrid headset incorporating bone conduction and other technologies captures the user's voice. Second, a small server carried by the user converts the recognized voice into a digital signal and translates it into the other user's language with a translation algorithm. Third, the translated speech is transmitted wirelessly to the other party and converted back into voice by that party's hybrid system. The system delivers a clear message regardless of environmental noise or the user's hearing ability, and, by using network technology, communication remains clear regardless of the distance between users.
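
The recognize, translate, transmit chain of the proposed system can be sketched with the three stages injected as plain callables (all names are hypothetical; the paper's components are hardware and server modules, not functions):

```python
def interpret_pipeline(audio, recognize, translate, transmit):
    """Chain the three stages of the hybrid interpretation system.

    Each stage is passed in as a callable so the sketch stays
    independent of any particular speech or translation engine."""
    text = recognize(audio)          # speech captured via bone conduction
    translated = translate(text)     # translation algorithm on the server
    return transmit(translated)      # wireless delivery to the other party
```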