• Title/Summary/Keyword: voice commands


A Study on Real-Time Walking Action Control of Biped Robot with Twenty Six Joints Based on Voice Command (음성명령기반 26관절 보행로봇 실시간 작업동작제어에 관한 연구)

  • Jo, Sang Young;Kim, Min Sung;Yang, Jun Suk;Koo, Young Mok;Jung, Yang Geun;Han, Sung Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.4
    • /
    • pp.293-300
    • /
    • 2016
  • Voice recognition is one of the most convenient ways for humans and robots to communicate. This study proposes a speech recognition method using recognizers based on the Hidden Markov Model (HMM), combined with supporting techniques, to enhance biped robot control. In the past, Artificial Neural Networks (ANN) and Dynamic Time Warping (DTW) were used, but they are now less commonly applied in speech recognition systems. This research confirms that the HMM, an accepted high-performance technique, can be successfully employed to model speech signals, and that high recognition accuracy can be obtained with HMMs. Apart from speech modeling techniques, multiple feature extraction methods have been studied to detect speech stresses caused by emotion and the environment and thereby improve recognition rates. The procedure consists of two parts: recognizing robot commands using multiple HMM recognizers, and sending the recognized commands to control the robot. This paper proposes a practical voice recognition system that can recognize many task commands. The proposed system consists of a general-purpose microprocessor and a voice recognition processor that can recognize a limited number of voice patterns. Simulation and experiment illustrated the reliability of the voice recognition rates for application to the manufacturing process.
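
The abstract's first stage, scoring an utterance against multiple HMM recognizers and picking the best command, can be sketched with the standard forward algorithm. The toy models, states, and two-symbol alphabet below are illustrative assumptions, not the authors' actual parameters:

```python
import math

def _logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def forward_log_prob(obs, pi, A, B):
    """Log-likelihood of an observation sequence under one discrete HMM."""
    n = len(pi)
    alpha = [math.log(pi[i]) + math.log(B[i][obs[0]]) for i in range(n)]
    for t in range(1, len(obs)):
        alpha = [
            math.log(B[j][obs[t]]) + _logsumexp(
                [alpha[i] + math.log(A[i][j]) for i in range(n)])
            for j in range(n)
        ]
    return _logsumexp(alpha)

def recognize(obs, models):
    """Score the sequence under every command HMM and return the best command."""
    return max(models, key=lambda name: forward_log_prob(obs, *models[name]))

# Two toy 2-state HMMs over a 2-symbol alphabet: "walk" favors symbol 0,
# "stop" favors symbol 1. Each model is (initial probs, transitions, emissions).
models = {
    "walk": ([0.6, 0.4], [[0.7, 0.3], [0.4, 0.6]], [[0.9, 0.1], [0.8, 0.2]]),
    "stop": ([0.6, 0.4], [[0.7, 0.3], [0.4, 0.6]], [[0.1, 0.9], [0.2, 0.8]]),
}

print(recognize([0, 0, 1, 0], models))  # → walk
print(recognize([1, 1, 1, 0], models))  # → stop
```

In a real system the observation symbols would come from a vector-quantized feature extractor rather than being given directly.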

Design and Implementation of Context-aware Application on Smartphone Using Speech Recognizer

  • Kim, Kyuseok
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.2
    • /
    • pp.49-59
    • /
    • 2020
  • As technologies develop, our lives become easier. Today we are surrounded by new technologies such as AI and IoT, and the word "smart" has become very broad because we are turning our daily environments into smart ones with those technologies; traditional workplaces, for example, have become smart offices. Since the 3rd industrial revolution we have used touch interfaces to operate machines; in the 4th industrial revolution we are adding speech recognition modules so that machines can be operated by voice commands. Many devices now communicate with humans by voice; these AI devices carry out the tasks users request, and sometimes more. We use smartphones all day, from morning to night, so privacy while using the phone is not always guaranteed: for example, the caller's voice can be heard through the phone speaker when a call is accepted. Privacy on the smartphone therefore needs to be protected, and the protection should work automatically according to the user context. In this regard, this paper proposes a method that adjusts the call volume on a smartphone according to the user context in order to protect privacy.
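
The context-to-volume rule the abstract describes can be reduced to a small lookup. The context labels and volume levels below are illustrative assumptions; the paper does not publish its exact mapping:

```python
# Map a sensed user context to a call volume (0-100). In a public place the
# volume is lowered automatically so the caller's voice stays private.
VOLUME_BY_CONTEXT = {"home": 80, "office": 40, "public": 20}

def volume_for(context):
    """Return the call volume for a context, defaulting to a safe middle level."""
    return VOLUME_BY_CONTEXT.get(context, 40)

print(volume_for("public"))   # 20
print(volume_for("home"))     # 80
```

A deployed version would feed this from real context sensing (location, ambient noise, calendar) rather than a fixed label.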

Implementation Of Moving Picture Transfer System Using Bluetooth (Bluetooth를 이용한 동영상 전송 시스템 구현)

  • 조경연;이승은;최종찬
    • Proceedings of the IEEK Conference
    • /
    • 2001.06a
    • /
    • pp.25-28
    • /
    • 2001
  • In this paper we implement a moving picture transfer system using a Bluetooth Development Kit (DK). To reduce the size of the image data, we use M-JPEG compression, and we use a Bluetooth Synchronous Connection-Oriented (SCO) link to transfer voice data. The server receives image data from a camera, compresses it in M-JPEG format, and transmits it to the client over a Bluetooth Asynchronous Connection-Less (ACL) link. The client receives the image data from the ACL link, decodes the compressed image, and displays it on the screen. The server and client can transmit and receive voice data simultaneously over the SCO link. This paper also explains the Bluetooth HCI commands and the events generated by the host controller to return the results of those commands, and presents the flow of the Bluetooth connection procedure.
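
Sending M-JPEG frames over a byte stream such as an ACL link requires the client to find frame boundaries. A common way to do this, sketched below, is a length prefix per frame; the 4-byte big-endian header is an assumption, since the paper does not specify its packet format:

```python
import struct

def pack_frame(jpeg_bytes: bytes) -> bytes:
    """Server side: prefix one compressed frame with its 32-bit length."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def unpack_frames(stream: bytes):
    """Client side: walk the byte stream and yield each frame payload."""
    off = 0
    while off + 4 <= len(stream):
        (n,) = struct.unpack_from(">I", stream, off)
        off += 4
        yield stream[off:off + n]
        off += n

# Two fake JPEG payloads (SOI/EOI markers around dummy data).
frames = [b"\xff\xd8frame1\xff\xd9", b"\xff\xd8frame2\xff\xd9"]
wire = b"".join(pack_frame(f) for f in frames)
assert list(unpack_frames(wire)) == frames
```

Voice over the SCO link needs no such framing, since SCO carries fixed-size synchronous packets.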


A Proposal of Eye-Voice Method based on the Comparative Analysis of Malfunctions on Pointer Click in Gaze Interface for the Upper Limb Disabled (상지장애인을 위한 시선 인터페이스에서 포인터 실행 방법의 오작동 비교 분석을 통한 Eye-Voice 방식의 제안)

  • Park, Joo Hyun;Park, Mi Hyun;Lim, Soon-Bum
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.4
    • /
    • pp.566-573
    • /
    • 2020
  • Computers are the most common tool for using the Internet, and a mouse is typically used to select and execute objects. Eye tracking technology is welcomed as an alternative for users who cannot use their hands because of a disability. However, the pointer execution methods of existing eye tracking techniques cause many malfunctions. In this paper, we therefore developed a gaze tracking interface combined with voice commands to solve the malfunction problem that arises when the upper limb disabled use existing gaze tracking technology to execute computer menus and objects. Usability was verified through comparative experiments on the reduction of malfunctions. Hand-impaired upper limb disabled users move the pointer with eye tracking and utter voice commands, such as "okay", while browsing the computer screen for instant clicks. As a result of the comparative experiments against existing gaze interfaces, we verified that our system, Eye-Voice, reduces the malfunction rate of pointer execution and is effective for the upper limb disabled.

Training of Fuzzy-Neural Network for Voice-Controlled Robot Systems by a Particle Swarm Optimization

  • Watanabe, Keigo;Chatterjee, Amitava;Pulasinghe, Koliya;Jin, Sang-Ho;Izumi, Kiyotaka;Kiguchi, Kazuo
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.1115-1120
    • /
    • 2003
  • The present paper shows the possible development of particle swarm optimization (PSO) based fuzzy-neural networks (FNNs), which can be employed as an important building block in real-life robot systems controlled by voice-based commands. PSO is employed to train the FNNs, which can accurately output the crisp control signals for the robot system based on fuzzy linguistic spoken commands issued by a user. The FNN is also trained to capture the user's spoken directive in the context of the robot system's present performance. Hidden Markov Model (HMM) based automatic speech recognizers are developed as part of the entire system, so that the system can identify important user directives from running utterances. The system is successfully employed in a real-life situation for motion control of a redundant manipulator.
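
The PSO training loop the abstract refers to can be sketched in a few lines: particles are candidate weight vectors, fitness is the control error, and each particle is pulled toward its own best and the swarm best. The 1-neuron surrogate network and all hyperparameters below are toy assumptions standing in for the paper's fuzzy-neural net:

```python
import random

random.seed(0)

def net(w, x):
    """Tiny surrogate network: y = w0*x + w1 (stand-in for the FNN)."""
    return w[0] * x + w[1]

# Training pairs: fuzzy command degree -> crisp control signal (target y = 2x).
DATA = [(0.0, 0.0), (0.5, 1.0), (1.0, 2.0)]

def fitness(w):
    """Mean squared control error over the training pairs."""
    return sum((net(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

def pso(n_particles=20, iters=200, w_inertia=0.7, c1=1.4, c2=1.4):
    pos = [[random.uniform(-3, 3) for _ in range(2)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w_inertia * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)
    return gbest

w = pso()
print(fitness(w) < 1e-2)  # swarm converges near y = 2x
```

PSO's appeal here is that it needs only fitness evaluations, no gradients, which suits networks with non-differentiable fuzzy components.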


Development of a Work Management System Based on Speech and Speaker Recognition

  • Gaybulayev, Abdulaziz;Yunusov, Jahongir;Kim, Tae-Hyong
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.16 no.3
    • /
    • pp.89-97
    • /
    • 2021
  • Voice interfaces can not only make daily life more convenient through artificial intelligence speakers but also improve the working environment of a factory. This paper presents a voice-assisted work management system that supports both speech and speaker recognition, providing machine control and authorized-worker authentication by voice at the same time. We applied two speech recognition methods: Google's Speech application programming interface (API) service and the DeepSpeech speech-to-text engine. For worker identification, the SincNet architecture for speaker recognition was adopted. We implemented a prototype of the work management system that provides voice control with 26 commands and identifies 100 workers by voice. Worker identification using our model was almost perfect, and command recognition accuracy was 97.0% with the Google API after post-processing and 92.0% with our DeepSpeech model.
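
The system's control flow, execute a request only when the speaker model authorizes the worker and the transcript maps to a supported command, can be sketched as below. The recognizers are stubbed out (in the paper they are Google's Speech API or DeepSpeech plus a SincNet speaker model), and the worker IDs and command table are illustrative assumptions:

```python
AUTHORIZED = {"worker_07", "worker_42"}                          # enrolled workers
COMMANDS = {"start machine": "START", "stop machine": "STOP"}    # 2 of the 26

def identify_speaker(audio):
    """Stand-in for the SincNet speaker-recognition model."""
    return audio["speaker"]

def transcribe(audio):
    """Stand-in for the speech-to-text engine."""
    return audio["text"]

def handle(audio):
    """Authenticate first, then dispatch the recognized command."""
    speaker = identify_speaker(audio)
    if speaker not in AUTHORIZED:
        return ("DENIED", speaker)
    command = COMMANDS.get(transcribe(audio))
    return (command or "UNKNOWN", speaker)

print(handle({"speaker": "worker_07", "text": "start machine"}))  # ('START', 'worker_07')
print(handle({"speaker": "intruder", "text": "stop machine"}))    # ('DENIED', 'intruder')
```

Running authentication before command dispatch means an unauthorized voice can never reach the machine-control path, which is the safety property the paper targets.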

A Study on Development and Real-Time Implementation of Voice Recognition Algorithm (화자독립방식에 의한 음성인식 알고리즘 개발 및 실시간 실현에 관한 연구)

  • Jung, Yang-geun;Jo, Sang Young;Yang, Jun Seok;Park, In-Man;Han, Sung Hyun
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.18 no.4
    • /
    • pp.250-258
    • /
    • 2015
  • In this research, we propose a new approach to real-time motion control of a biped robot based on voice commands for unmanned factory automation. Voice is a convenient way for humans and robots to communicate, but to command many robot tasks by voice, an equal number of voice patterns must be recognizable, and the more patterns there are, the longer recognition takes. In this paper, a practical voice recognition system that can recognize many task commands is proposed. The proposed system consists of a general-purpose microprocessor and a voice recognition processor that can recognize a limited number of voice patterns. For a given biped robot, the robot tasks are classified and organized so that the number of tasks under each directory is not more than the maximum recognition capacity of the voice recognition processor, allowing the tasks under each directory to be distinguished by voice command. Simulation and experiment illustrated the reliability of the voice recognition rates for application to the manufacturing process.
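
The directory scheme the abstract describes amounts to two-step recognition: first select a directory, then a task within it, with no directory exceeding the chip's pattern limit. The directory names, task words, and limit below are illustrative assumptions:

```python
MAX_PATTERNS = 4   # recognizer's per-directory pattern limit (assumed value)

DIRECTORIES = {
    "walk": ["forward", "backward", "left", "right"],
    "arm":  ["raise", "lower", "grip", "release"],
}

# Every directory must respect the recognition chip's limit.
assert all(len(tasks) <= MAX_PATTERNS for tasks in DIRECTORIES.values())

def resolve(directory_word, task_word):
    """Two-step recognition: directory first, then a task inside it."""
    tasks = DIRECTORIES.get(directory_word, [])
    return f"{directory_word}/{task_word}" if task_word in tasks else None

print(resolve("walk", "forward"))   # walk/forward
print(resolve("arm", "forward"))    # None: not a task in that directory
```

This trades one extra spoken word per command for an effectively unlimited total vocabulary on a fixed-capacity recognizer.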

A Voice Annotation Browsing Technique in Digital Talking Book for Reading-disabled People (독서장애인을 위한 음성 도서 어노테이션 검색 기법)

  • Park, Joo Hyun;Lim, Soon-Bum;Lee, Jongwoo
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.4
    • /
    • pp.510-519
    • /
    • 2013
  • In this paper, we propose a voice-annotation browsing system that enables reading-disabled people to find and play existing voice annotations. The proposed system consists of four steps: input, ranking & recommendation, search, and output. Because reading-disabled users depend only on their auditory sense, every step can accept voice commands. To evaluate the effectiveness of our system, we designed and implemented an Android-based mobile e-book application supporting voice-annotation browsing. The implemented system was tested by a number of blindfolded users. As a result, we found that almost all of them could successfully and easily reach the existing voice annotations they wanted to find.
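
The ranking & recommendation step of the four-step pipeline can be sketched as ordering annotations before they are read aloud. The paper does not publish its scoring formula, so ranking by play count and recency below is a plausible stand-in, not the authors' method:

```python
def rank_annotations(annotations):
    """Order annotations so frequently and recently played ones come first."""
    return sorted(annotations,
                  key=lambda a: (a["plays"], a["last_played"]),
                  reverse=True)

annotations = [
    {"id": "memo-1", "plays": 2, "last_played": 100},
    {"id": "memo-2", "plays": 5, "last_played": 90},
    {"id": "memo-3", "plays": 5, "last_played": 120},
]
order = [a["id"] for a in rank_annotations(annotations)]
print(order)  # ['memo-3', 'memo-2', 'memo-1']
```

A spoken "next" command would then walk this ranked list, playing one annotation at a time.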

Robust Speech Recognition Algorithm of Voice Activated Powered Wheelchair for Severely Disabled Person (중증 장애우용 음성구동 휠체어를 위한 강인한 음성인식 알고리즘)

  • Suk, Soo-Young;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.6
    • /
    • pp.250-258
    • /
    • 2007
  • Current speech recognition technology has achieved high performance with the development of hardware devices; however, it is insufficient for applications where high reliability is required, such as voice control of powered wheelchairs for disabled persons. For a system that aims to operate a powered wheelchair safely by voice in a real environment, non-voice inputs such as the user's coughing, breathing, and spark-like mechanical noise must be rejected, and the system must recognize speech commands affected by disability, which involve particular pronunciation speeds and frequencies. In this paper, we propose a non-voice rejection method that performs voice/non-voice classification in preprocessing using both YIN-based fundamental frequency (F0) extraction and a reliability measure. We adopted a multi-template dictionary and acoustic-model-based speaker adaptation to cope with the pronunciation variation of inarticulately uttered speech. In recognition tests conducted with data collected in a real environment, the proposed YIN-based fundamental frequency extraction showed a recall-precision rate of 95.1%, better than the 62% of a cepstrum-based method. A recognition test of the new system with the multi-template dictionary and MAP adaptation also showed much higher accuracy, 99.5%, versus 78.6% for the baseline system.
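
The voice/non-voice gate built on YIN can be sketched with the cumulative mean normalized difference: a frame is treated as voiced only if some lag dips below a threshold, i.e. a reliable F0 exists. The frame size, lag range, and 0.15 threshold below are illustrative assumptions:

```python
import math, random

def yin_is_voiced(frame, sr=8000, fmin=80, fmax=400, threshold=0.15):
    """Simplified YIN voicing decision on one audio frame."""
    tau_min, tau_max = sr // fmax, sr // fmin
    d = [0.0] * (tau_max + 1)
    for tau in range(1, tau_max + 1):           # difference function d(tau)
        d[tau] = sum((frame[t] - frame[t + tau]) ** 2
                     for t in range(len(frame) - tau_max))
    running = 0.0
    for tau in range(1, tau_max + 1):           # cumulative mean normalization
        running += d[tau]
        d_norm = d[tau] * tau / running if running > 0 else 1.0
        if tau >= tau_min and d_norm < threshold:
            return True        # clear periodicity: speech-like frame
    return False               # no dip: cough/spark-like input, reject

random.seed(1)
sr = 8000
voiced = [math.sin(2 * math.pi * 150 * t / sr) for t in range(400)]  # 150 Hz tone
noise = [random.uniform(-1, 1) for _ in range(400)]                  # noise burst

print(yin_is_voiced(voiced), yin_is_voiced(noise))  # True False
```

The paper combines this F0 evidence with a reliability measure; here the threshold test alone stands in for that second check.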

Design of Voice Control Solution for Industrial Articulated Robot (산업용 다관절로봇 음성제어솔루션 설계)

  • Kwak, Kwang-Jin;Kim, Dae-Yeon;Park, Jeongmin
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.2
    • /
    • pp.55-60
    • /
    • 2021
  • As smart factories advance, the use of automation facilities and robots is increasing, and with the development of IT technology, the use of systems based on voice recognition is also increasing. Voice recognition stands out in smart homes and various IoT technologies, but it is difficult to apply in factories because of their particular conditions. In this study, therefore, we designed a method to control an industrial articulated robot using voice recognition technology that takes the manufacturing-site environment into account. We confirmed that the robot could be controlled, after receiving voice commands for robot operation through a mobile device, via network-protocol and command conversion.
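
The command-conversion step the abstract mentions, mapping a recognized phrase from the mobile device to a robot protocol message before it is sent over the network, can be sketched as a lookup plus serialization. The message format and phrase set below are assumptions; the paper does not specify its protocol:

```python
COMMAND_TABLE = {
    "move home":     ("MOVJ", [0, 0, 0, 0, 0, 0]),   # joint move to home pose
    "open gripper":  ("GRIP", [1]),
    "close gripper": ("GRIP", [0]),
}

def to_protocol(phrase):
    """Convert one recognized phrase into a line-oriented robot command."""
    if phrase not in COMMAND_TABLE:
        return None                  # unrecognized phrase: do nothing (fail safe)
    op, args = COMMAND_TABLE[phrase]
    return f"{op} {' '.join(map(str, args))}\n"

print(repr(to_protocol("move home")))   # 'MOVJ 0 0 0 0 0 0\n'
print(to_protocol("dance"))             # None
```

Returning `None` for anything outside the table is the safety-relevant choice for a factory setting: a misrecognized phrase moves nothing rather than guessing.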