• Title/Summary/Keyword: Voice recognition system


Virtual Reality based Situation Immersive English Dialogue Learning System (가상현실 기반 상황몰입형 영어 대화 학습 시스템)

  • Kim, Jin-Won;Park, Seung-Jin;Min, Ga-Young;Lee, Keon-Myung
    • Journal of Convergence for Information Technology / v.7 no.6 / pp.245-251 / 2017
  • This paper presents an English conversation training system that lets learners practice their conversation skills by speaking with native-speaker characters in a virtual reality environment. The proposed system allows learners to talk with multiple native-speaker characters across various scenarios in the virtual reality environment. It recognizes the learners' spoken utterances and generates responses using a speech synthesis method. Voice interaction with the characters in the virtual reality environment immerses the learners in the conversation situations. A scoring component that evaluates the learner's pronunciation provides positive feedback that keeps learners engaged in the learning context.
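The abstract does not detail how pronunciation is scored; a minimal sketch of one common approach compares the recognized phoneme sequence against a reference pronunciation by edit distance (the function names and phoneme labels below are illustrative, not from the paper):

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance between two sequences.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,
                           dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + cost)
    return dp[len(a)][len(b)]

def pronunciation_score(reference_phones, recognized_phones):
    # Score in [0, 100]; 100 means a perfect phoneme-level match.
    if not reference_phones:
        return 0.0
    dist = edit_distance(reference_phones, recognized_phones)
    return max(0.0, 100.0 * (1 - dist / len(reference_phones)))

score = pronunciation_score(["HH", "AH", "L", "OW"], ["HH", "AH", "L", "OW"])
```

A real scorer would work on recognizer lattices or acoustic confidences rather than a single best phone string, but the feedback loop is the same: higher score, more positive feedback.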

A Study on the Multilingual Speech Recognition using International Phonetic Language (IPA를 활용한 다국어 음성 인식에 관한 연구)

  • Kim, Suk-Dong;Kim, Woo-Sung;Woo, In-Sung
    • Journal of the Korea Academia-Industrial cooperation Society / v.12 no.7 / pp.3267-3274 / 2011
  • Recently, speech recognition technology has developed dramatically with the growth of user environments across various mobile devices and the influence of a variety of speech recognition software. For multilingual speech recognition, however, a limited understanding of multilingual lexical models and limited system capacity hinder improvement of the recognition rate. It is not easy to embody speech expressed in multiple languages in a single acoustic model, and systems that use several acoustic models lower the speech recognition rate. It is therefore necessary to research and develop a multilingual speech recognition system that embodies speech comprising various languages in a single acoustic model. Based on research into using a multilingual acoustic model on mobile devices, this paper studies a system that can recognize Korean and English through the International Phonetic Language (IPA). Focusing on finding an IPA model that satisfies both Korean and English phonemes, we obtained recognition rates of 94.8% for Korean and 95.36% for English.
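The core idea of a shared IPA lexicon can be sketched as mapping each language's native phoneme inventory onto one common symbol set, so a single acoustic model covers both languages. The tiny mapping tables below are illustrative stand-ins, not the paper's actual tables:

```python
# Illustrative (not the paper's) unit-to-IPA tables for a shared lexicon.
KOREAN_TO_IPA = {"ㅁ": "m", "ㅏ": "a", "ㄴ": "n", "ㅣ": "i"}
ENGLISH_TO_IPA = {"m": "m", "a": "a", "n": "n", "ee": "i"}

def to_ipa(units, table):
    # Map language-specific units onto the shared IPA symbol set;
    # unknown units are kept as-is so gaps in the table stay visible.
    return [table.get(u, u) for u in units]

# Both words collapse onto the same IPA symbols, so one acoustic model
# trained on IPA units can score utterances from either language.
korean = to_ipa(["ㅁ", "ㅏ", "ㄴ", "ㅣ"], KOREAN_TO_IPA)
english = to_ipa(["m", "a", "n", "ee"], ENGLISH_TO_IPA)
```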

A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.3 no.1 / pp.59-68 / 1999
  • Recently, communication systems have been developed that use both the speaker's voice data and face image, since this provides a higher recognition rate than voice data alone. We therefore present a lipreading method for speech image sequences that uses a 3-D facial shape model. The method uses feature information from the face image, such as the opening level of the lips, the movement of the jaw, and the projection height of the lips. First, we fit the 3-D face model to the speaking face image sequence. Then, to obtain feature information, we compute the variation of the fitted 3-D shape model across the image sequence and use this variation as the recognition parameters. We use intensity gradient values, obtained from the variation of the 3-D feature points, to segment recognition units from the sequential images. In the recognition step, we apply a discrete HMM algorithm to multiple observation sequences that fully reflect the variation of the 3-D feature points. In a recognition experiment with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels. Since a recognition experiment on these recognition parameters with 10 Korean vowels also yielded a high recognition rate, we propose that this feature vector is usable as a visual distinguishing factor.
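The recognition step relies on discrete HMMs scored over observation sequences. A generic forward-algorithm likelihood computation, which is the standard scoring rule for discrete HMMs, can be sketched as follows (the 2-state, 3-symbol parameters are toy values, not the paper's trained models):

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Forward algorithm for a discrete HMM.
    pi: initial state probs (N,), A: transitions (N, N),
    B: emission probs (N, M), obs: sequence of symbol indices."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(np.log(alpha.sum()))

# Toy model; a recognizer keeps one HMM per viseme/phone class and
# picks the class whose HMM gives the highest likelihood.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
ll = forward_log_likelihood(pi, A, B, [0, 1, 2])
```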


Speech Interactive Agent on Car Navigation System Using Embedded ASR/DSR/TTS

  • Lee, Heung-Kyu;Kwon, Oh-Il;Ko, Han-Seok
    • Speech Sciences / v.11 no.2 / pp.181-192 / 2004
  • This paper presents an efficient speech interactive agent that renders smooth car navigation and telematics services by employing embedded automatic speech recognition (ASR), distributed speech recognition (DSR), and text-to-speech (TTS) modules, all while enabling safe driving. A speech interactive agent is essentially a conversational tool providing command and control functions to drivers, such as navigation tasks, audio/video manipulation, and e-commerce services, through natural voice/response interactions between user and interface. While the benefits of automatic speech recognition and speech synthesis are well known, the available hardware resources are often limited and the internal communication protocols are complex, making real-time responses hard to achieve; as a result, performance degradation always exists in an embedded hardware system. To implement a speech interactive agent that meets user commands in real time, we propose optimizing the hardware-dependent architectural code for speed. In particular, we provide a composite solution through memory reconfiguration and efficient arithmetic operation conversion, as well as an effective out-of-vocabulary rejection algorithm, all made suitable for system operation under limited resources.
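The abstract does not specify the out-of-vocabulary rejection algorithm. One common scheme compares the best in-vocabulary score against a filler ("garbage") model and rejects when the margin is too small; the sketch below uses that scheme with illustrative names, scores, and threshold:

```python
def accept_command(candidate_scores, filler_score, threshold=0.5):
    """Confidence-based out-of-vocabulary rejection sketch.
    candidate_scores: {command: log-likelihood} from the in-vocabulary
    grammar; filler_score: log-likelihood of a garbage/filler model.
    Accept the best command only if it beats the filler model by at
    least `threshold` in the log domain; otherwise reject as OOV."""
    best_cmd = max(candidate_scores, key=candidate_scores.get)
    confidence = candidate_scores[best_cmd] - filler_score
    if confidence >= threshold:
        return best_cmd
    return None  # reject: likely out-of-vocabulary speech

result = accept_command({"navigate home": -42.0, "play radio": -55.0},
                        filler_score=-44.0)
```

On an embedded target the same comparison would run in fixed point, which is one place the paper's "efficient arithmetic operation conversion" would apply.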


Emotion Recognition Algorithm Based on Minimum Classification Error incorporating Multi-modal System (최소 분류 오차 기법과 멀티 모달 시스템을 이용한 감정 인식 알고리즘)

  • Lee, Kye-Hwan;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.4 / pp.76-81 / 2009
  • We propose an effective emotion recognition algorithm based on the minimum classification error (MCE) criterion incorporated into a multi-modal system. Emotion recognition is performed with a Gaussian mixture model (GMM) trained by the MCE method operating on log-likelihoods. In particular, the proposed technique fuses feature vectors derived from the voice signal and from the galvanic skin response (GSR) measured by a body sensor. The experimental results indicate that the proposed MCE-based multi-modal approach outperforms the conventional approach.
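Feature-level fusion with per-class likelihood models can be sketched as follows. For simplicity this sketch fits a single diagonal Gaussian per emotion by maximum likelihood instead of the paper's MCE-trained GMM, and the voice/GSR features are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(voice, gsr):
    # Feature-level fusion: concatenate voice and GSR feature vectors.
    return np.hstack([voice, gsr])

def fit_gaussian(X):
    # A single diagonal Gaussian stands in for the paper's GMM here.
    return X.mean(axis=0), X.var(axis=0) + 1e-6

def log_likelihood(x, mean, var):
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var))

# Synthetic stand-ins for two emotion classes: 2-D "voice" + 1-D "GSR".
train = {
    "neutral": fuse(rng.normal(0.0, 0.5, (200, 2)), rng.normal(0.0, 0.5, (200, 1))),
    "angry": fuse(rng.normal(3.0, 0.5, (200, 2)), rng.normal(3.0, 0.5, (200, 1))),
}
models = {emo: fit_gaussian(X) for emo, X in train.items()}

def classify(x):
    # Pick the emotion whose model gives the highest log-likelihood.
    return max(models, key=lambda emo: log_likelihood(x, *models[emo]))

pred = classify(np.array([3.1, 2.9, 3.0]))
```

MCE training would additionally adjust the model parameters to minimize classification error on the fused features rather than to maximize per-class likelihood.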

Speaker Identification in Small Training Data Environment using MLLR Adaptation Method (MLLR 화자적응 기법을 이용한 적은 학습자료 환경의 화자식별)

  • Kim, Se-hyun;Oh, Yung-Hwan
    • Proceedings of the KSPS conference / 2005.11a / pp.159-162 / 2005
  • Speaker identification is the process of automatically determining who is speaking on the basis of information obtained from speech waves. In the training phase, each speaker's model is trained on that speaker's speech data. GMMs (Gaussian Mixture Models), which have been successfully applied to speaker modeling in text-independent speaker identification, are not efficient when training data are insufficient. This paper proposes a speaker modeling method using MLLR (Maximum Likelihood Linear Regression), which is normally used for speaker adaptation in speech recognition. Instead of a speaker-dependent (SD) model, we build an SD-like model with the MLLR adaptation method. The proposed system outperforms GMMs in small-training-data environments.
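The core of MLLR is an affine transform of the Gaussian means, mu_hat = A mu + b, shared across many Gaussians so it can be estimated from little data. The sketch below estimates such a transform by plain least squares on toy 2-D means; real MLLR maximizes likelihood using state-occupancy-weighted statistics rather than mean pairs:

```python
import numpy as np

def estimate_mllr_transform(si_means, adapt_means):
    """Estimate W mapping speaker-independent means to speaker-specific
    means, mu_hat = [mu; 1] @ W, by least squares (illustrative; real
    MLLR weights the statistics by state occupancy)."""
    # Augment each mean with a 1 so W absorbs the bias term b.
    X = np.hstack([si_means, np.ones((len(si_means), 1))])
    W, *_ = np.linalg.lstsq(X, adapt_means, rcond=None)
    return W  # shape (d + 1, d)

def adapt(mu, W):
    # Apply the shared transform to any Gaussian mean.
    return np.append(mu, 1.0) @ W

# Toy 2-D means: this "speaker" shifts everything by (1, -0.5).
si = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0], [2.0, 4.0]])
sd = si + np.array([1.0, -0.5])
W = estimate_mllr_transform(si, sd)
adapted = adapt(np.array([5.0, 5.0]), W)
```

Because one W adapts every mean, far fewer parameters must be estimated than when training a full speaker-dependent model, which is exactly why the approach helps in small-training-data environments.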


Voice Command through Facial Recognition Smart Mirror System (얼굴인식을 통한 음성 명령 스마트 거울 시스템)

  • Lee, Se-Hoon;Kim, Su-Min;Park, Hyun-Gyu
    • Proceedings of the Korean Society of Computer Information Conference / 2019.01a / pp.253-254 / 2019
  • This paper presents a home control method based on voice recognition, using the Google Speech API and the OpenCV library, so that home appliances and nearby electric devices can be controlled more easily through the mirror, the object most often within a user's range of activity at home. The system lets users conveniently control devices by voice while keeping both hands free, for example while putting on makeup during a busy morning.
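The described flow — recognize the face, then act on a voice command — can be sketched as a simple gate-and-dispatch loop. The OpenCV face check and the Google Speech API call are replaced here by stub inputs; the command phrases and action names are illustrative:

```python
def dispatch(face_recognized, transcript, commands):
    """Dispatch a voice command only for a recognized user.
    face_recognized: boolean result of the (stubbed) OpenCV face check,
    transcript: text from the (stubbed) Google Speech API,
    commands: {trigger phrase: action name}."""
    if not face_recognized:
        return "ignored: unknown user"
    for phrase, action in commands.items():
        if phrase in transcript.lower():
            return action
    return "no matching command"

COMMANDS = {
    "light on": "smart-plug: lights ON",
    "light off": "smart-plug: lights OFF",
}
result = dispatch(True, "Turn the light on please", COMMANDS)
```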


Design of Metaverse for Two-Way Video Conferencing Platform Based on Virtual Reality

  • Yoon, Dongeon;Oh, Amsuk
    • Journal of information and communication convergence engineering / v.20 no.3 / pp.189-194 / 2022
  • As non-face-to-face activities have become commonplace, online video conferencing platforms have become popular collaboration tools. However, existing video conferencing platforms have a structure in which one side unilaterally delivers information, potentially increasing the fatigue of meeting participants. In this study, we designed a video conferencing platform that utilizes virtual reality (VR), a metaverse technology, to enable various interactions. A virtual conferencing space and a support system for authoring realistic VR video conferencing content were designed using Meta's Oculus Quest 2 hardware, the Unity engine, and 3D Max software. With the Photon software development kit, voice recognition was designed to feed automatic text translation through the Watson application programming interface, allowing online video conferencing participants to communicate smoothly even when using different languages. The proposed video conferencing platform is expected to enable conference participants to interact and to improve their work efficiency.
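The recognize-translate-broadcast flow described above can be sketched as a three-stage pipeline. The ASR engine, the Watson translation call, and the Photon network layer are all replaced by stubs here; the phrase table is illustrative:

```python
def translation_pipeline(recognize, translate, broadcast, audio):
    """Speech -> text -> translated text -> broadcast to participants.
    The three callables stand in for the ASR engine, the Watson
    translation API, and the Photon network layer, respectively."""
    text = recognize(audio)
    return broadcast(translate(text))

# Stub implementations for the sketch.
recognize = lambda audio: "hello everyone"
translate = lambda text: {"hello everyone": "안녕하세요 여러분"}.get(text, text)
broadcast = lambda text: f"[to all participants] {text}"

out = translation_pipeline(recognize, translate, broadcast, b"...")
```

Keeping the stages behind plain callables like this is one way each service (ASR, translation, networking) can be swapped independently.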

On Pattern Kernel with Multi-Resolution Architecture for a Lip Print Recognition (구순문 인식을 위한 복수 해상도 시스템의 패턴 커널에 관한 연구)

  • 김진옥;황대준;백경석;정진현
    • The Journal of Korean Institute of Communications and Information Sciences / v.26 no.12A / pp.2067-2073 / 2001
  • Biometric systems are technologies that use unique human physical characteristics to automatically identify a person. They have sensors that pick up a physical characteristic, convert it into a digital pattern, and compare it with stored patterns for individual identification. However, lip-print recognition has been less developed than recognition of other human physical attributes such as fingerprints, voice patterns, retinal blood vessel patterns, or the face. Lip-print recognition with a CCD camera has the merit of being easy to combine with other recognition systems such as retina/iris and face recognition. A new method using a multi-resolution architecture is proposed to recognize a lip print from pattern kernels. A set of pattern kernels is a function of local lip-print masks; this function converts the information in a lip print into digital data. Recognition in the multi-resolution system is more reliable than in a single-resolution system: the multi-resolution architecture reduces the false recognition rate from 15% to 4.7%. This paper shows that the lip print is sufficiently discriminative for use in biometric measurement systems.
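One way to picture a multi-resolution architecture is to block-average the image to several resolutions and apply small local masks at each, concatenating the responses into one feature vector. The sketch below is purely illustrative of that idea; the masks and resolutions are not the paper's pattern kernels:

```python
import numpy as np

def downsample(img, factor):
    # Block-average the image to a coarser resolution.
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def pattern_responses(img, masks, factors=(1, 2, 4)):
    """Apply each local mask to a patch of the image at several
    resolutions and concatenate the responses into one feature vector
    (an illustrative stand-in for the paper's pattern kernels)."""
    feats = []
    for f in factors:
        low = downsample(img, f)
        for m in masks:
            mh, mw = m.shape
            if low.shape[0] >= mh and low.shape[1] >= mw:
                feats.append(float((low[:mh, :mw] * m).sum()))
    return np.array(feats)

rng = np.random.default_rng(1)
img = rng.random((16, 16))
masks = [np.ones((2, 2)), np.array([[1.0, -1.0], [1.0, -1.0]])]
feat = pattern_responses(img, masks)
```

Coarse resolutions capture the overall groove layout while fine resolutions keep local detail, which is the intuition behind the reliability gain the paper reports.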


Product Nutrition Information System for Visually Impaired People (시각 장애인을 위한 상품 영양 정보 안내 시스템)

  • Jonguk Jung;Je-Kyung Lee;Hyori Kim;Yoosoo Oh
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.5 / pp.233-240 / 2023
  • Nutrition information about food is printed on the label, which makes it very inconvenient for visually impaired people to recognize. To solve this inconvenience, this paper proposes a product nutrition information guide system for visually impaired people. In the proposed system, the user's image data are input through a UI, and object recognition is carried out with YOLO v5. The system then provides voice guidance on the names and nutrition information of the recognized products. This paper constructs a new dataset that augments image data of 319 classes of canned and late-night-snack products using rotation-matrix, pepper-noise, and salt-noise techniques. We compared and analyzed the performance of the YOLO v5n, YOLO v5m, and YOLO v5l models through hyperparameter tuning and trained the constructed dataset with the YOLO v5n model. Finally, this paper compares and analyzes the performance of the proposed system with that of previous studies.
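The named augmentation steps (rotation matrix, salt noise, pepper noise) can be sketched with numpy; the rotation angle, noise amount, and image values below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def rotate90(img):
    # A 90-degree rotation is the simplest rotation-matrix augmentation.
    return np.rot90(img)

def salt_pepper(img, amount=0.05, rng=None):
    """Push roughly `amount` of pixels to black (pepper) and
    `amount` to white (salt)."""
    rng = rng or np.random.default_rng(0)
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount] = 0          # pepper noise
    out[mask > 1 - amount] = 255    # salt noise
    return out

def augment(img, rng=None):
    # One original image yields several training variants.
    return [img, rotate90(img), salt_pepper(img, rng=rng)]

img = np.full((8, 8), 128, dtype=np.uint8)
variants = augment(img)
```

Each variant keeps the original class label, multiplying the effective size of the 319-class training set without new photography.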