• Title/Summary/Keyword: Voice recognition system


A Study of Authentication Design for Youth (청소년을 위한 인증시스템의 설계에 관한 연구)

  • Hong, Ki-Cheon;Kim, Eun-Mi
    • Journal of the Korea Academia-Industrial cooperation Society / v.8 no.4 / pp.952-960 / 2007
  • Most websites perform a login process for authentication, but simple ID/Password credentials inspire little trust because users worry about misappropriation. As a result, minors can easily access illegal media sites using another person's ID and password. This paper therefore examines which features are suitable for an authentication system and proposes a design that uses multiple features. The proposed system has two levels: the low-level method issues an authentication number from the server to the user's mobile phone and uses a certificate from a certification authority; the high-level method combines ID/Password with fingerprint, character, voice, and face recognition. To this end, the paper surveys six recognition technologies: fingerprint, face, iris, character, vein, and voice recognition. Of these, fingerprint, character, voice, and face recognition can be implemented easily on a personal computer with low-cost accessories. Using multiple features can improve the reliability of authentication.
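The high-level method above combines ID/Password with several biometric matchers, but the abstract does not specify how their results are combined. One plausible scheme is score-level fusion; the weights and threshold below are purely illustrative, not values from the paper:

```python
# Score-level fusion of multiple authentication features: a required password
# check plus a weighted sum of biometric match scores. Weights and the accept
# threshold are illustrative placeholders.

def authenticate(password_ok: bool, scores: dict) -> bool:
    """scores maps modality name -> match score in [0, 1]."""
    if not password_ok:                 # ID/Password is still required
        return False
    weights = {"fingerprint": 0.3, "voice": 0.2, "face": 0.3, "character": 0.2}
    fused = sum(weights[m] * scores.get(m, 0.0) for m in weights)
    return fused >= 0.6                 # accept only if the fused score is high
```

A missing modality simply contributes a score of zero, so an attacker cannot pass on a password alone.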


A Study on Cockpit Voice Command System for Fighter Aircraft (전투기용 음성명령 시스템에 대한 연구)

  • Kim, Seongwoo;Seo, Mingi;Oh, Yunghwan;Kim, Bonggyu
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.41 no.12 / pp.1011-1017 / 2013
  • The human voice is the most natural means of communication, and the need for speech recognition technology is growing as a way to ease the human-machine interface. As digital technology has advanced, avionics equipment has become more varied and complex, so the workload of fighter pilots has increased: they must not only concentrate on attack functions but also operate complicated avionics. If speech recognition is applied in the cockpit for operating avionics equipment, pilots can devote more of their time and effort to the fighter's mission. In this paper, a cockpit voice command system applicable to fighter aircraft is developed, and its functions and performance are verified.

Ship's Maneuvering and Winch Control System with Voice Instruction Based Learning (음성지시에 의한 선박 조종 및 윈치 제어 시스템)

  • Seo, Ki-Yeol;Park, Gyei-Kark
    • Journal of the Korean Institute of Intelligent Systems / v.12 no.6 / pp.517-523 / 2002
  • In this paper, we propose a VIBL (Voice Instruction Based Learning) method that adds speech recognition to LIBL (Linguistic Instruction Based Learning), a method modeled on the way humans learn through natural language, and apply it to a ship's steering system, MERCS, and winch appliances. The VIBL method replaces the process in which a linguistic instruction, such as an officer's steering order, is carried out via an able seaman who operates the steering gear, MERCS, and winches. Based on the able seaman's steering behavior, we build a steering model and implement an intelligent steering control system that responds efficiently to the commander's voice instructions using fuzzy inference rules, presenting meaning elements and evaluation rules suited to the ship's steering system. We also implement a system that recognizes the commander's voice orders and controls the MERCS and winch appliances. The steering model, built from the able seaman's experience, provides the rudder angle, compass-bearing arrival time, and evaluation rules for the meaning elements of the steady state, and its rules are corrected by fuzzy inference over the commander's voice instructions after they are recognized and converted to text. Finally, we apply the VIBL method to a speech-recognition ship-control simulator and confirm its effectiveness.
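The abstract relies on fuzzy inference rules to map a commander's instruction onto a rudder command. A toy Mamdani-style sketch is shown below; the membership functions and rule consequents are invented for illustration and are not the paper's actual rule base:

```python
# Toy Mamdani-style fuzzy inference: map a heading error (degrees) to a rudder
# angle via triangular memberships and a weighted average of rule outputs.
# All membership parameters and rule consequents are illustrative.

def tri(x, a, b, c):
    """Triangular membership with peak at b and support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def rudder_angle(heading_error: float) -> float:
    # Rules: small error -> keep course, larger error -> more rudder.
    rules = [
        (tri(heading_error, -5, 0, 5),    0.0),   # "keep course"
        (tri(heading_error, 0, 10, 20),  10.0),   # "easy to starboard"
        (tri(heading_error, 10, 25, 40), 25.0),   # "hard to starboard"
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0
```

Between rule peaks the output interpolates smoothly, which is the practical appeal of fuzzy control for steering.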

Large Scale Voice Dialling using Speaker Adaptation (화자 적응을 이용한 대용량 음성 다이얼링)

  • Kim, Weon-Goo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.335-338 / 2010
  • A new method that improves the performance of a large-scale voice dialling system using speaker adaptation is presented. Since a speaker-independent (SI) recognition system with phoneme HMMs stores only the phoneme string of each enrolled sentence, its storage requirements are greatly reduced. However, its performance is worse than that of a speaker-dependent system because of the mismatch between the input utterance and the SI models. The proposed method iteratively estimates the phonetic string and the adaptation vectors to reduce the mismatch between the training utterances and the set of SI models, using stochastic matching techniques to estimate the adaptation vectors. Experiments over an actual telephone line show that the proposed method outperforms the conventional method based on the SI phonetic recognizer.
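The iterative estimation described above alternates between decoding against the SI models and re-estimating the adaptation term. A heavily simplified scalar sketch (one mean per phone, a single bias vector, both assumptions of this illustration rather than the paper's formulation):

```python
# Sketch of the iterative scheme: alternate between (1) decoding a phone label
# per frame against speaker-independent (SI) model means and (2) estimating a
# bias (adaptation) term that reduces the model/utterance mismatch.
# Scalar features and one mean per phone are a deliberate simplification.

def adapt(frames, si_means, n_iter=5):
    """frames: list of scalar features; si_means: phone -> SI model mean."""
    bias = 0.0
    labels = []
    for _ in range(n_iter):
        # Step 1: decode each bias-compensated frame to its nearest SI model.
        labels = [min(si_means, key=lambda p: abs((f - bias) - si_means[p]))
                  for f in frames]
        # Step 2: re-estimate the bias as the mean residual under that decoding
        # (the stochastic-matching update reduces to this in the scalar case).
        bias = sum(f - si_means[p] for f, p in zip(frames, labels)) / len(frames)
    return labels, bias
```

With a consistent speaker offset, the decoding and the bias estimate stabilize after a couple of iterations.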

Development of a Real-time Voice Dialing System Using Discrete Hidden Markov Models (이산 HM을 이용한 실시간 음성인식 다이얼링 시스템 개발)

  • Lee, Se-Woong;Choi, Seung-Ho;Lee, Mi-Suk;Kim, Hong-Kook;Oh, Kwang-Cheol;Kim, Ki-Chul;Lee, Hwang-Soo
    • The Journal of the Acoustical Society of Korea / v.13 no.1E / pp.89-95 / 1994
  • This paper describes the development of a real-time voice dialing system that can recognize a vocabulary of about one hundred words in speaker-independent mode. The recognition algorithm is implemented on a DSP board with a telephone interface plugged into an IBM PC AT/486. On the DSP board, feature extraction, vector quantization (VQ), and end-point detection are performed simultaneously in every 10 ms frame to satisfy real-time constraints after the word starting point is detected. In addition, the VQ codebook size and the end-point detection procedure are optimized to reduce recognition time and memory requirements. The demonstration system was exhibited in MOBILAB of the Korean Mobile Telecom at the Taejon EXPO '93.
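The per-frame processing above (end-point detection and VQ) can be sketched as follows; the frame length, energy threshold, and codebook are illustrative values, not those of the paper:

```python
# Per-frame processing sketch: a simple energy-based end-point detector plus
# nearest-codeword vector quantization (VQ). Threshold and codebook values are
# illustrative placeholders.

def frame_energy(frame):
    """Mean squared amplitude of one frame of samples."""
    return sum(s * s for s in frame) / len(frame)

def endpoints(frames, threshold=0.01):
    """Return (start, end) frame indices spanning the voiced region, or None."""
    voiced = [i for i, f in enumerate(frames) if frame_energy(f) > threshold]
    return (voiced[0], voiced[-1]) if voiced else None

def quantize(vector, codebook):
    """Index of the nearest codeword under squared Euclidean distance."""
    dist = lambda c: sum((v - x) ** 2 for v, x in zip(vector, c))
    return min(range(len(codebook)), key=lambda i: dist(codebook[i]))
```

Quantizing each frame to a codeword index is what lets a discrete HMM model the utterance as a symbol sequence.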


Development of Automatic Creating Web-Site Tool for the Blind (시각장애인용 웹사이트 자동생성 툴 개발)

  • Baek, Hyeun-Ki;Ha, Tai-Hyun
    • Journal of Digital Contents Society / v.8 no.4 / pp.467-474 / 2007
  • This paper documents the design and implementation of a tool that automatically creates websites for the blind, allowing them to build their own homepages using voice recognition and voice synthesis technology as easily as the non-disabled. With the tool, blind users can manage voice mail, schedules, address lists, and bookmarks, and its information-management features also facilitate communication with the non-disabled. The tool maps basic commands to voice recognition and offers text-to-speech for voice output. Ultimately, it is intended to reduce the social isolation of the blind and let them participate in the information age like the non-disabled.


Implementation of Embedded Speech Recognition System for Supporting Voice Commander to Control an Audio and a Video on Telematics Terminals (텔레메틱스 단말기 내의 오디오/비디오 명령처리를 위한 임베디드용 음성인식 시스템의 구현)

  • Kwon, Oh-Il;Lee, Heung-Kyu
    • Journal of the Institute of Electronics Engineers of Korea TC / v.42 no.11 / pp.93-100 / 2005
  • In this paper, we implement an embedded speech recognition system that supports application services such as audio and video control through a speech interface in cars. The system is implemented and ported to a DSP board. Because the microphone type and speech codec affect recognition accuracy, we optimized the simulation and test environment to effectively remove real in-car noise. A noise suppression and feature compensation algorithm is applied to increase recognition accuracy in the car, and a context-dependent tied-mixture acoustic model is used. Performance evaluation showed high accuracy both in an office environment and in a real car environment.
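The abstract mentions a noise suppression algorithm without naming it. One common approach, shown here purely as an example and not necessarily the paper's method, is spectral subtraction over magnitude spectra:

```python
# Spectral subtraction sketch: estimate the noise magnitude spectrum from
# leading noise-only frames, subtract it from every frame, and floor the
# result at a small fraction of the noisy spectrum so magnitudes stay positive.
# The frame count and floor factor are illustrative.

def spectral_subtraction(frames, n_noise_frames=5, floor=0.05):
    """frames: list of magnitude spectra (lists of floats, equal length)."""
    n_bins = len(frames[0])
    noise = [sum(f[k] for f in frames[:n_noise_frames]) / n_noise_frames
             for k in range(n_bins)]
    return [[max(f[k] - noise[k], floor * f[k]) for k in range(n_bins)]
            for f in frames]
```

The floor prevents the "musical noise" artifacts that plain subtraction to zero would create.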

Multi-Modal Biometrics System for Ubiquitous Sensor Network Environment (유비쿼터스 센서 네트워크 환경을 위한 다중 생체인식 시스템)

  • Noh, Jin-Soo;Rhee, Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.4 s.316 / pp.36-44 / 2007
  • In this paper, we implement a speech and face recognition system that supports ubiquitous sensor network application services such as switch control and authentication over wireless audio and image interfaces. The proposed system consists of hardware with audio and image sensors and software comprising a speech recognition algorithm based on a psychoacoustic model and a face recognition algorithm based on PCA (Principal Components Analysis) and LDPC (Low Density Parity Check) coding. The recognizers run on a host PC so that sensor energy is used efficiently, and an FEC (Forward Error Correction) system is implemented to improve speech and face recognition accuracy. We also optimized the simulation coefficients and test environment to effectively remove wireless channel noise and correct channel errors. As a result, when the distance between the audio sensor and the voice source is less than 1.5 m, the FAR and FRR are 0.126% and 7.5%, respectively; with the face recognition algorithm limited to two attempts, the GAR and FAR are 98.5% and 0.036%.
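Error rates such as the FAR/FRR figures quoted above come from genuine and impostor score distributions evaluated at a decision threshold. A minimal sketch of that computation (the scores and threshold below are made up):

```python
# FAR/FRR from score distributions: FAR is the fraction of impostor attempts
# accepted at a threshold, FRR the fraction of genuine attempts rejected.

def far_frr(genuine_scores, impostor_scores, threshold):
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr
```

Sweeping the threshold trades FAR against FRR, which is how operating points like those in the abstract are chosen.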

Proposed Efficient Architectures and Design Choices in SoPC System for Speech Recognition

  • Trang, Hoang;Hoang, Tran Van
    • Journal of IKEEE / v.17 no.3 / pp.241-247 / 2013
  • This paper presents the design of a System on Programmable Chip (SoPC) based on a Field Programmable Gate Array (FPGA) for speech recognition, in which Mel-Frequency Cepstral Coefficients (MFCC) are used for feature extraction and Vector Quantization (VQ) for recognition. The recognition system proceeds in three steps: feature extraction, codebook training, and recognition. In the feature-extraction step, the input voice data are transformed into spectral components and the main features are extracted with the MFCC algorithm. In the recognition step, the spectral features obtained in the first step are processed and compared with the trained components using VQ. In our experiments, Altera's DE2 board with a Cyclone II FPGA is used to implement a system that can recognize 64 words. The execution speed of each block in the system is surveyed by counting the clock cycles it consumes, and recognition accuracy is measured under different system parameters. These results on execution speed and recognition accuracy can help designers choose the best configuration for speech recognition on a SoPC.
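The recognition step described above can be sketched as choosing the word whose trained codebook yields the lowest average distortion over the utterance's feature vectors; the codebook contents in the test are placeholders, not trained MFCC codebooks:

```python
# VQ-based word recognition sketch: for each word's codebook, compute the
# average distortion of the utterance's feature vectors against their nearest
# codewords; the word with the lowest average distortion wins.

def distortion(vec, codebook):
    """Squared-Euclidean distance from vec to its nearest codeword."""
    return min(sum((v - c) ** 2 for v, c in zip(vec, cw)) for cw in codebook)

def recognize(features, codebooks):
    """features: list of vectors; codebooks: word -> list of codewords."""
    avg = {w: sum(distortion(f, cb) for f in features) / len(features)
           for w, cb in codebooks.items()}
    return min(avg, key=avg.get)
```

On hardware, the inner distance loop is what gets parallelized, which is why the paper profiles clock cycles per block.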

Development of Context Awareness and Service Reasoning Technique for Handicapped People (멀티 모달 감정인식 시스템 기반 상황인식 서비스 추론 기술 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.1 / pp.34-39 / 2009
  • Human emotion, as a subjective phenomenon, is impulsive and expresses intentions and needs unconsciously; it therefore carries contextual information about users of ubiquitous computing environments or intelligent robot systems. Indicators of a user's emotion include facial images, voice signals, and biological signal spectra. In this paper, we generate separate facial and voice emotion recognition results from facial images and voice to improve the convenience and efficiency of emotion recognition. We also extract the best-fitting features from image and sound to raise the emotion recognition rate, and implement a multi-modal emotion recognition system based on feature fusion. Finally, using the recognition results, we propose a service reasoning method for the ubiquitous computing environment based on a Bayesian network and a ubiquitous context scenario.
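Combining the facial and voice recognizers' outputs can be sketched at the decision level with a product rule over per-modality emotion posteriors. Note this is a simplified stand-in: the paper fuses at the feature level and uses a Bayesian network for the service reasoning stage.

```python
# Decision-level fusion sketch: multiply per-modality emotion posteriors and
# renormalize. A simplified stand-in for the paper's feature-level fusion.

def fuse(face_probs, voice_probs):
    """Both args: emotion -> probability over the same label set."""
    joint = {e: face_probs[e] * voice_probs[e] for e in face_probs}
    total = sum(joint.values())
    return {e: p / total for e, p in joint.items()}
```

When both modalities lean the same way, the product rule sharpens the decision; when they disagree, the fused distribution stays appropriately uncertain.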