• Title/Summary/Keyword: Voice recognition system

Piezoelectric Ultrasound MEMS Transducers for Fingerprint Recognition

  • Jung, Soo Young;Park, Jin Soo;Kim, Min-Seok;Jang, Ho Won;Lee, Byung Chul;Baek, Seung-Hyub
    • Journal of Sensor Science and Technology
    • /
    • v.31 no.5
    • /
    • pp.286-292
    • /
    • 2022
  • As mobile electronics become smarter, higher-level security systems are necessary to protect private information and property from hackers. For this, biometric authentication systems have been widely studied, where the recognition of unique biological traits of an individual, such as the face, iris, fingerprint, and voice, is required to operate the device. Among them, ultrasound fingerprint imaging technology using piezoelectric materials is one of the most promising approaches adopted by Samsung Galaxy smartphones. In this review, we summarize the recent progress on piezoelectric ultrasound micro-electro-mechanical systems (MEMS) transducers with various piezoelectric materials and provide insights to achieve the highest-level biometric authentication system for mobile electronics.

Implementation of the Aircraft Control System with Voice Recognition (음성 인식을 통한 항공기 제어 시스템의 구현)

  • Park, Myeong-Chul;Cha, Hyun-Jun;Kim, Tae-Hyung
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.01a
    • /
    • pp.171-172
    • /
    • 2021
  • Technologies applied to aircraft have advanced considerably to date, as have technologies for pilot convenience. Many features, such as autopilot, assist pilots and are used for their convenience. However, the control method, unchanged since the first aircraft were built, and the long flight hours inherent to international aviation still impose heavy fatigue on pilots. To reduce pilot fatigue and thereby prevent accidents that fatigue can cause, this paper proposes a new control method that applies voice recognition: control of aircraft control surfaces through voice recognition. Instead of the conventional hands-on control method, data are processed through dialogue between the pilot and the computer, with immediate feedback, which increases pilot convenience and in turn reduces fatigue.
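The command flow described above (recognize an utterance, act on a control surface, give immediate feedback) can be sketched as a minimal dispatcher; the command names and deflection values below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical voice-command dispatcher: recognized text is matched to a
# control-surface action; unknown utterances yield None so the system can
# ask the pilot to repeat. All entries are invented for illustration.

SURFACE_COMMANDS = {
    "flaps down": ("flaps", -10),     # deflection in degrees (illustrative)
    "flaps up": ("flaps", +10),
    "aileron left": ("aileron", -5),
    "aileron right": ("aileron", +5),
}

def handle_utterance(text):
    """Map a recognized utterance to a (surface, deflection) pair, or None."""
    return SURFACE_COMMANDS.get(text.strip().lower())

print(handle_utterance("Flaps Down"))   # ('flaps', -10)
print(handle_utterance("gear up"))      # None -> prompt the pilot again
```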

An Intelligence Embedding Quadruped Pet Robot with Sensor Fusion (센서 퓨전을 통한 인공지능 4족 보행 애완용 로봇)

  • Lee Lae-Kyoung;Park Soo-Min;Kim Hyung-Chul;Kwon Yong-Kwan;Kang Suk-Hee;Choi Byoung-Wook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.4
    • /
    • pp.314-321
    • /
    • 2005
  • In this paper, an intelligence-embedding quadruped pet robot is described. It has 15 degrees of freedom and incorporates various sensors, such as a CMOS image sensor, voice recognition and sound localization, an inclinometer, a thermistor, a real-time clock, tactile touch sensors, PIR, and IR, that allow owners to interact with the pet robot according to human intention as well as through the original features of pet animals. The architecture is flexible and adopts various embedded processors for handling sensors, providing a modular structure. The pet robot can also serve additional purposes such as security, gaming, visual tracking, and use as a research platform. It can generate various actions and behaviors, and voice or music files can be downloaded to maintain a close relationship with users. With a cost-effective sensor, the pet robot is able to find its recharging station and recharge itself when its battery runs low. To facilitate programming of the robot, several development environments are supported. The developed system is therefore a low-cost programmable entertainment robot platform.

Effects of Feedback Types on Users' Subjective Responses in a Voice User Interface (음성 사용자 인터페이스 내 피드백 유형이 사용자의 주관적 반응에 미치는)

  • Lee, Dasom;Lee, Sangwon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.219-222
    • /
    • 2017
  • This study aimed to demonstrate the effect of feedback type on users' subjective responses in a voice user interface. Feedback type is classified according to the information it conveys: verification feedback and elaboration feedback. Error type is categorized as recognition error and performance error. Users' subjective assessment of the system, feedback acceptance, and intention to use were measured as dependent variables. The results of the experiment showed that feedback type affects the subjective assessment (likeability, habitability, system response accuracy) of the VUI, feedback acceptance, and intention to use. The results also demonstrated an interaction effect of feedback type and error type on feedback acceptance. This leads to the conclusion that a VUI should be designed with elaboration feedback about error situations.

Control of Mobile Robot Using Voice Recognition and Wearable Module (음성인식과 웨어러블 모듈을 이용한 이동로봇 제어)

  • 정성호;서재용;김용민;전홍태
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.37-40
    • /
    • 2002
  • An Intelligent Wearable Module is an intelligent system that arises when a human is part of the feedback loop of a computational process, such as a certain control system. The applied system is a mobile robot. This paper presents a mobile robot control system remotely controlled by the Intelligent Wearable Module. So far, owing to the development of internet technologies, many remote control methods over the internet have been proposed. To control a mobile robot through the internet and guide it in an unknown environment, we propose a control method activated by the Intelligent Wearable Module. In the proposed system, a PDA acts as a user interface that communicates, using the TCP/IP protocol, with a notebook serving as the controller of the mobile robot system, and the notebook controls the mobile robot. Information about the direction and velocity of the mobile robot is fed back to the PDA, and the PDA sends new control commands produced by the fuzzy inference engine.
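The PDA-to-notebook exchange over TCP/IP described above can be sketched as follows; the "DIRECTION,velocity" wire format and the ACK feedback are assumptions for illustration, since the abstract does not specify the protocol.

```python
# Minimal sketch of the PDA <-> notebook exchange over a TCP socket on
# localhost. The message format "DIRECTION,velocity" is an assumption.
import socket
import threading

# Notebook side: accept one command, parse it, acknowledge.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # ephemeral port, avoids collisions
port = srv.getsockname()[1]
srv.listen(1)
replies = []

def notebook_side():
    conn, _ = srv.accept()
    direction, vel = conn.recv(64).decode().split(",")   # e.g. "FORWARD,0.5"
    replies.append((direction, float(vel)))
    conn.sendall(b"ACK")          # immediate feedback to the PDA side
    conn.close()

t = threading.Thread(target=notebook_side)
t.start()

# PDA side: send a command (in the paper, produced by the fuzzy inference step).
cli = socket.socket()
cli.connect(("127.0.0.1", port))
cli.sendall(b"FORWARD,0.5")
ack = cli.recv(16).decode()
cli.close()
t.join()
srv.close()
print(replies, ack)
```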

Speech Recognition of the Korean Vowel 'ㅗ' Based on Time Domain Waveform Patterns (시간 영역 파형 패턴에 기반한 한국어 모음 'ㅗ'의 음성 인식)

  • Lee, Jae Won
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.11
    • /
    • pp.583-590
    • /
    • 2016
  • Recently, the rapidly increasing interest in IoT in almost all areas of casual human life has led to wide acceptance of speech recognition as a means of HCI. Simultaneously, the demand for speech recognition systems for mobile environments is increasing rapidly. The server-based speech recognition systems are typically fast and show high recognition rates; however, an internet connection is necessary, and complicated server computation is required since a voice is recognized by units of words that are stored in server databases. In this paper, we present a novel method for recognizing the Korean vowel 'ㅗ', as a part of a phoneme based Korean speech recognition system. The proposed method involves analyses of waveform patterns in the time domain instead of the frequency domain, with consequent reduction in computational cost. Elementary algorithms for detecting typical waveform patterns of 'ㅗ' are presented and combined to make final decisions. The experimental results show that the proposed method can achieve 89.9% recognition accuracy.
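The detectors above work on waveform shapes in the time domain rather than on a spectral transform. As a hedged stand-in (not the authors' algorithm), the sketch below estimates a frame's pitch period by time-domain autocorrelation, the kind of elementary time-domain cue such detectors can combine:

```python
# Time-domain pitch-period estimation by brute-force autocorrelation.
# Illustrative only; the paper's actual 'ㅗ' pattern rules are not shown.

def pitch_period(frame, min_lag=20, max_lag=200):
    """Return the lag (in samples) with maximal autocorrelation."""
    best_lag, best_r = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        r = sum(frame[i] * frame[i - lag] for i in range(lag, len(frame)))
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

# Idealized glottal pulse train: 8 kHz sampling, one pulse every 80 samples,
# i.e. a 100 Hz fundamental.
frame = [1.0 if n % 80 == 0 else 0.0 for n in range(400)]
print(pitch_period(frame))  # 80
```

No frequency-domain transform is needed, which is the computational advantage the paper exploits.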

Performance Evaluation of an Automatic Distance Speech Recognition System (원거리 음성명령어 인식시스템 설계)

  • Oh, Yoo-Rhee;Yoon, Jae-Sam;Park, Ji-Hoon;Kim, Min-A;Kim, Hong-Kook;Kong, Dong-Geon;Myung, Hyun;Bang, Seok-Won
    • Proceedings of the IEEK Conference
    • /
    • 2007.07a
    • /
    • pp.303-304
    • /
    • 2007
  • In this paper, we implement an automatic distance speech recognition system for voice-enabled services. We first construct a baseline automatic speech recognition (ASR) system, where acoustic models are trained from speech utterances spoken into a close-talking microphone. In order to improve the performance of the baseline ASR on distance speech, the acoustic models are adapted to adjust the spectral characteristics of speech according to different microphones and the environmental mismatches between close-talking and distance speech. Next, we develop a voice activity detection algorithm for distance speech. We compare the performance of the baseline system and the developed ASR system on the PBW (Phonetically Balanced Word) 452 task. As a result, it is shown that the developed ASR system provides an average word error rate (WER) reduction of 30.6% compared to the baseline ASR system.
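The 30.6% figure above is a relative WER reduction. A minimal sketch of both computations follows: WER as word-level edit distance over the reference length, then the relative reduction between two systems. The example rates are invented, not the paper's.

```python
# Word error rate via word-level Levenshtein distance, plus the relative
# reduction between a baseline and an improved system. Data are invented.

def wer(ref, hyp):
    """Word error rate: edit distance over reference word count."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # deletions
    for j in range(len(h) + 1):
        d[0][j] = j                      # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1   # substitution
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / len(r)

def relative_reduction(baseline, improved):
    """Relative WER reduction in percent."""
    return (baseline - improved) / baseline * 100

print(wer("open the door", "open door"))   # one deletion out of 3 words
print(relative_reduction(0.36, 0.25))      # ~30.6 (illustrative rates)
```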

How to Express Emotion: Role of Prosody and Voice Quality Parameters (감정 표현 방법: 운율과 음질의 역할)

  • Lee, Sang-Min;Lee, Ho-Joon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.19 no.11
    • /
    • pp.159-166
    • /
    • 2014
  • In this paper, we examine the role of emotional acoustic cues, including both prosody and voice quality parameters, in the modification of a word's sense. For the extraction of prosody and voice quality parameters, we used 60 pieces of speech data spoken by six speakers in five different emotional states. We analyzed eight different emotional acoustic cues and used a discriminant analysis technique to find the dominant sequence of acoustic cues. As a result, we found that anger is closely related to intensity level and the 2nd formant bandwidth range; joy is related to the positions of the 2nd and 3rd formant values and intensity level; sadness is strongly related only to prosody cues such as intensity level and pitch level; and fear is related to pitch level and the 2nd formant value with its bandwidth range. These findings can be used as a guideline for fine-tuning an emotional spoken language generation system, because these distinct sequences of acoustic cues reveal the subtle characteristics of each emotional state.
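The discriminant analysis used above to rank cues can be illustrated with Fisher's criterion (between-class over within-class variance), a simple univariate form of the same idea; the toy intensity/pitch data below are invented, not the paper's measurements.

```python
# Rank acoustic cues by Fisher's criterion: a cue that separates two
# emotion classes well has a large between-class-to-within-class ratio.
# The toy data are invented for illustration.

def fisher_score(xs_a, xs_b):
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    return (mean(xs_a) - mean(xs_b)) ** 2 / (var(xs_a) + var(xs_b))

# Toy frames for two emotions, two cues each: (intensity dB, pitch Hz)
anger = [(78, 220), (80, 230), (79, 215)]
sad = [(62, 212), (60, 225), (61, 219)]
scores = {
    "intensity": fisher_score([a[0] for a in anger], [s[0] for s in sad]),
    "pitch": fisher_score([a[1] for a in anger], [s[1] for s in sad]),
}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)   # intensity separates these toy classes far better than pitch
```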

A Study on subtitle synchronization calibration to enhance hearing-impaired persons' viewing convenience of e-sports contents or game streamer contents (청각장애인의 이스포츠 중계방송 및 게임 스트리머 콘텐츠 시청 편의성 증대를 위한 자막 동기화 보정 연구)

  • Shin, Dong-Hwan;Kim, Jeong-Soo;Kim, Chang-Won
    • Journal of Korea Game Society
    • /
    • v.19 no.1
    • /
    • pp.73-84
    • /
    • 2019
  • This study suggests ways to improve the quality of the subtitle service provided for the viewing convenience of deaf people on e-sports broadcast content and game streamer content. Generally, subtitle files for broadcast content are transcribed manually on air by stenographers, so a delay of 3 to 5 seconds relative to the original content is inevitable. The present study therefore proposed an automatic synchronization calibration system using speech recognition technology. In addition, a content application experiment using this system was conducted, and the final result confirmed that the synchronization error of the subtitle data could be reduced to less than 1 second.
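The calibration idea above, estimating the stenography delay and shifting the subtitle track back by it, can be sketched as follows; the matching of subtitle lines to ASR word timestamps is assumed already done, and all times below are invented.

```python
# Estimate the stenography delay as the median difference between matched
# subtitle and ASR timestamps, then shift every cue earlier by that offset.
# The matched pairs are invented; real ones would come from aligning
# recognized words to subtitle text.

def estimate_offset(pairs):
    """Median (subtitle_time - asr_time) over matched lines, in seconds."""
    diffs = sorted(sub - asr for sub, asr in pairs)
    n = len(diffs)
    mid = n // 2
    return diffs[mid] if n % 2 else (diffs[mid - 1] + diffs[mid]) / 2

def shift_track(cues, offset):
    """Move every (start, end, text) cue earlier by the estimated delay."""
    return [(s - offset, e - offset, t) for s, e, t in cues]

matched = [(12.4, 8.6), (20.1, 16.0), (31.9, 28.2)]   # (subtitle, ASR) times
offset = estimate_offset(matched)                      # ~3.8 s delay
print(shift_track([(12.4, 14.0, "nice play!")], offset))
```

The median is used instead of the mean so a few mis-matched lines do not skew the estimate.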

Multi-Modal Instruction Recognition System using Speech and Gesture (음성 및 제스처를 이용한 멀티 모달 명령어 인식 시스템)

  • Kim, Jung-Hyun;Rho, Yong-Wan;Kwon, Hyung-Joon;Hong, Kwang-Seok
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2006.06a
    • /
    • pp.57-62
    • /
    • 2006
  • With the miniaturization and growing intelligence of portable devices and rising interest in next-generation PC-based ubiquitous computing, multi-modal interaction (MMI) combining several dialogue modes, such as pen, voice, and multimedia input, has recently been actively studied. This paper therefore proposes and implements a Multi-Modal Instruction Recognition System (MMIRS) that integrates a Voice-XML-based speech recognizer with an embedded sign-language recognizer on a Wearable Personal Station (WPS), aiming at clear communication in noisy environments and integrated speech-gesture recognition on portable devices. Because the proposed MMIRS recognizes not only speech but also the speaker's sign-language gesture commands for sentence- and word-level command models corresponding to the Korean Standard Sign Language (KSSL), improved recognition performance on the prescribed command models can be expected even in noisy environments. To evaluate the recognition performance of MMIRS, 15 subjects continuously produced speech and sign gestures for 62 sentence-level and 104 word-level recognition models, and the average recognition rates of the individual recognizers and of MMIRS were compared and analyzed; for the sentence-level command models, MMIRS achieved average recognition rates of 93.45% in noisy environments and 95.26% in noise-free environments.
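A common way to realize the speech-gesture combination described above (shown here as an assumption, not the paper's exact rule) is weighted late fusion of per-command scores, with the speech channel down-weighted as noise rises:

```python
# Weighted late fusion of two recognizers' per-command scores. The weights,
# commands, and scores are invented for illustration.

def fuse(speech_scores, gesture_scores, speech_weight=0.5):
    """Combine per-command scores and return the highest-scoring command."""
    fused = {
        cmd: speech_weight * speech_scores.get(cmd, 0.0)
        + (1 - speech_weight) * gesture_scores.get(cmd, 0.0)
        for cmd in set(speech_scores) | set(gesture_scores)
    }
    return max(fused, key=fused.get)

speech = {"stop": 0.40, "turn left": 0.35}    # degraded by noise
gesture = {"turn left": 0.90, "stop": 0.05}   # unaffected by acoustic noise
# In a noisy environment, trust the gesture channel more:
print(fuse(speech, gesture, speech_weight=0.3))  # turn left
```

This kind of fusion explains qualitatively why a combined system can outperform the speech recognizer alone in noise, as the reported recognition rates suggest.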
