• Title/Summary/Keyword: voice commands


A Voice Command System for Autonomous Robots

  • Hong, Soon-Hyuk; Jeon, Jae-Wook
    • Transactions on Control, Automation and Systems Engineering, v.3 no.1, pp.51-57, 2001
  • How to promote students' interest is very important in undergraduate engineering education. One technique for achieving this is to select appropriate projects and integrate them with regular courses. In this paper, a voice recognition system for autonomous robots is proposed as a project to educate students about microprocessors efficiently. The proposed system consists of a microprocessor and a voice recognition processor that can recognize a limited number of voice patterns. The commands of the autonomous robots are classified and organized so that one voice recognition processor can distinguish the robot commands under each directory. Thus, the proposed system can distinguish more voice commands than a single voice recognition processor can. A voice command system for three autonomous robots is implemented with an Intel 80C196KC microprocessor and an HM2007 voice recognition processor. The advantages of integrating this system with regular courses are also described. (A minimal sketch of the directory idea follows below.)

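A minimal sketch of the directory-structured command organization described above, assuming a recognizer (such as the HM2007) that can only hold a small active vocabulary at a time. The directory names, command words, and the `ACTIVE_LIMIT` value are illustrative, not taken from the paper.

```python
# Sketch: a limited recognizer only ever needs to tell apart the words of the
# currently active directory, so the total vocabulary can exceed its capacity.
ACTIVE_LIMIT = 8  # assumed per-directory vocabulary limit

DIRECTORIES = {
    "root":        ["robot one", "robot two", "robot three"],          # choose a robot
    "robot one":   ["forward", "back", "left", "right", "stop", "root"],
    "robot two":   ["forward", "back", "stop", "root"],
    "robot three": ["grip", "release", "stop", "root"],
}

def handle_utterance(current_dir: str, word: str) -> str:
    """Switch directories on directory words; dispatch motion words otherwise."""
    active = DIRECTORIES[current_dir]
    assert len(active) <= ACTIVE_LIMIT
    if word not in active:
        print("unrecognized in this directory:", word)
        return current_dir
    if word in DIRECTORIES:                              # directory-switching word
        return word
    print(f"[{current_dir}] execute command: {word}")    # would be sent to the robot
    return current_dir

# Usage: the total vocabulary is larger than ACTIVE_LIMIT, yet each recognition
# step only has to distinguish the words of one directory.
d = "root"
for w in ["robot one", "forward", "stop", "root", "robot three", "grip"]:
    d = handle_utterance(d, w)
```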

A study on the voice command recognition at the motion control in the industrial robot (산업용 로보트의 동작제어 명령어의 인식에 관한 연구)

  • 이순요; 권규식; 김홍태
    • Journal of the Ergonomics Society of Korea, v.10 no.1, pp.3-10, 1991
  • The teach pendant and keyboard have been used as input devices for control commands in human-robot systems, but many problems occur when the user is a novice. Therefore, a speech recognition system is required for communication between a human and the robot. In this study, Korean voice commands, eight robot commands, and ten digits are described based on broad phonetic analysis. Applying broad phonetic analysis, the phonemes of the voice commands are divided into phoneme groups with similar features, such as plosive, fricative, affricate, nasal, and glide sounds. The feature parameters and the ranges used to detect the phoneme groups are then found by the minimax method. The classification rules consist of combinations of the feature parameters, such as zero crossing rate (ZCR), log energy (LE), up and down (UD), and formant frequency, together with their ranges. Voice commands were recognized by these classification rules. The recognition rate was over 90 percent in this experiment, and the experiment also showed that the recognition rate for digits was better than that for robot commands. (A sketch of such frame-level features follows below.)

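A minimal sketch of two of the frame-level feature parameters named above (zero crossing rate and log energy) combined into a toy classification rule. The thresholds are placeholders, not the minimax ranges reported in the paper.

```python
import numpy as np

def frame_features(signal: np.ndarray, frame_len: int = 256, hop: int = 128):
    """Compute (ZCR, log energy) per frame."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0      # zero crossing rate
        le = np.log10(np.sum(frame.astype(float) ** 2) + 1e-10)   # log energy
        feats.append((zcr, le))
    return feats

def phoneme_group(zcr: float, le: float) -> str:
    """Toy rule combining feature ranges (illustrative thresholds only)."""
    if le < -4.0:
        return "silence"
    if zcr > 0.3:
        return "fricative-like"              # high ZCR, broadband noise
    return "voiced (nasal/glide-like)"       # low ZCR, periodic energy

# Usage on a synthetic signal
sig = np.random.randn(4000) * 0.01
for zcr, le in frame_features(sig)[:3]:
    print(round(zcr, 3), round(le, 2), phoneme_group(zcr, le))
```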

Control of a welfare liferobot guided by voice commands

  • Han, Seong-Ho; Yoshihiro, Takita
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2001.10a, pp.47.3-47, 2001
  • This paper describes the control of a health care robot (called the Welfare Liferobot) with voice commands. The Welfare Liferobot is an intelligent autonomous mobile robot with its own on-board control system and a set of sensors to perceive the environment. Controlling the Welfare Liferobot by voice command is natural, since using a keyboard and mouse may present a difficult problem to the elderly and the handicapped. Voice input as the main control modality can offer many advantages. A set of oral commands is included, and each command has its associated function. These control words (commands) have to be chosen by the user. Each time a voice command is recognized by the robot, it executes the pre-assigned action ... (A minimal sketch of this command-to-action binding follows below.)

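A minimal sketch of the command-to-action binding described above: each user-chosen control word is bound to a function, and a recognized word triggers its pre-assigned action. The command words and action functions are illustrative placeholders.

```python
def go_forward():  print("moving forward")
def stop():        print("stopping")
def come_here():   print("approaching the user")

# User-chosen control words bound to their pre-assigned functions.
COMMANDS = {"forward": go_forward, "stop": stop, "come": come_here}

def on_recognized(word: str) -> None:
    action = COMMANDS.get(word)
    if action is None:
        print("no action assigned to:", word)
    else:
        action()

on_recognized("forward")
on_recognized("stop")
```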

Probabilistic Neural Network Based Learning from Fuzzy Voice Commands for Controlling a Robot

  • Jayawardena, Chandimal; Watanabe, Keigo; Izumi, Kiyotaka
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2004.08a, pp.2011-2016, 2004
  • The study of human-robot communication is one of the most important research areas. Among various communication media, any useful principle found in voice communication between humans is significant in human-robot interaction as well. The control strategy of most such systems available at present is on/off control: these robots activate a function if a particular word or phrase associated with that function can be recognized in the user's utterance. Recently, there has been some research on controlling robots using information-rich fuzzy commands such as "go little slowly". However, although those works consider the interpretation of such voice commands, learning from them has not been treated. In this paper, learning from such information-rich voice commands for controlling a robot is studied. The new concepts of the coach-player model and the sub-coach are proposed, and these concepts are demonstrated on a PA-10 redundant manipulator. (A minimal sketch of interpreting such fuzzy modifiers follows below.)

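A minimal sketch of interpreting an information-rich fuzzy command such as "go little slowly" as a scaling of a motion parameter, with a simple coach-style correction. The gain table, learning rate, and update rule are assumptions for illustration; they are not the paper's probabilistic-neural-network learning scheme.

```python
FUZZY_GAIN = {              # linguistic modifier -> speed scaling factor (assumed)
    "very slowly": 0.25,
    "slowly": 0.5,
    "little slowly": 0.75,
    "normally": 1.0,
    "little fast": 1.25,
    "fast": 1.5,
}

def interpret(command: str, base_speed: float) -> float:
    """Scale the base speed by the most specific fuzzy modifier in the command."""
    matches = [p for p in FUZZY_GAIN if p in command]
    if not matches:
        return base_speed
    phrase = max(matches, key=len)          # prefer the longest matching modifier
    return base_speed * FUZZY_GAIN[phrase]

def learn_from_coach(phrase: str, achieved: float, desired: float, lr: float = 0.3):
    """Coach-style correction: nudge the gain so the phrase yields the desired speed."""
    FUZZY_GAIN[phrase] += lr * (desired - achieved) / max(achieved, 1e-6) * FUZZY_GAIN[phrase]

speed = interpret("go little slowly", base_speed=0.2)   # speed in assumed units
print(round(speed, 3))                                  # 0.15
learn_from_coach("little slowly", achieved=speed, desired=0.12)
print(round(FUZZY_GAIN["little slowly"], 3))            # gain lowered after correction
```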

Modular Fuzzy Neural Controller Driven by Voice Commands

  • Izumi, Kiyotaka; Lim, Young-Cheol
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2001.10a, pp.32.3-32, 2001
  • This paper proposes a layered protocol for interpreting voice commands in the user's own language to a machine in order to control it in real time. The layers consist of a speech signal capturing layer, a lexical analysis layer, an interpretation layer, and finally an activation layer, where each layer tries to mimic its human counterpart in command following. The contents of a continuous voice command are captured using a Hidden Markov Model based speech recognizer. Artificial neural network concepts are then used to classify the contents of the recognized voice command ... (A sketch of this layered flow follows below.)

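A minimal sketch of the four-layer flow described above (speech capture, lexical analysis, interpretation, activation). The recognizer output and the interpretation rules are stubbed placeholders; in the paper these roles are played by an HMM speech recognizer and a neural-network classifier.

```python
def capture_speech() -> str:
    return "turn left a little"          # placeholder for the HMM recognizer output

def lexical_analysis(text: str) -> list[str]:
    return text.lower().split()          # tokenization stand-in

def interpretation(tokens: list[str]) -> dict:
    action = "turn" if "turn" in tokens else "move"
    direction = "left" if "left" in tokens else "right"
    magnitude = 0.3 if "little" in tokens else 1.0       # assumed scaling
    return {"action": action, "direction": direction, "magnitude": magnitude}

def activation(cmd: dict) -> None:
    print("send to controller:", cmd)    # would drive the fuzzy neural controller

activation(interpretation(lexical_analysis(capture_speech())))
```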

Design and Implementation of a Usability Testing Tool for User-oriented Design of Command-and-Control Voice User Interfaces (명령 제어 음성 인터페이스 사용자 중심 설계를 위한 사용성 평가도구의 설계 및 구현)

  • Lee, Myeong-Ji; Hong, Ki-Hyung
    • Phonetics and Speech Sciences, v.3 no.2, pp.79-87, 2011
  • Recently, usability has become very important in voice user interface systems. In this paper, we design and implement a wizard-of-oz (WOZ) usability testing tool for command-and-control voice user interfaces. We propose VUIDML (Voice User Interface Design Markup Language) for designing the usability test scenarios of command-and-control voice interfaces in the early design stages. For highly satisfactory voice user interfaces, we have to select highly preferred voice commands and prompts, and in VUIDML we can specify possible prompt candidates. The WOZ usability testing tool can also be used to collect user-preferred voice commands and feedback from real users. (An illustrative scenario sketch follows below.)

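A minimal sketch of a wizard-of-oz test scenario with candidate prompts and a log of user-preferred commands, in the spirit of what a VUIDML scenario specifies. The field names and structure are purely illustrative; they are not the VUIDML syntax.

```python
scenario = {
    "task": "turn on the living-room light",
    "prompt_candidates": [
        "What would you like to do?",
        "Say a command for the light.",
    ],
    "expected_commands": ["light on", "turn on the light"],
}

log = []   # the wizard records what each participant actually said and rated

def run_trial(participant: str, prompt_index: int, spoken: str, rating: int):
    log.append({
        "participant": participant,
        "prompt": scenario["prompt_candidates"][prompt_index],
        "spoken_command": spoken,
        "satisfaction": rating,          # assumed 1-5 scale
    })

run_trial("P01", 0, "turn on the light", 4)
run_trial("P02", 1, "light on", 5)
print(log)
```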

Voice-based Device Control Using oneM2M IoT Platforms

  • Jeong, Isu; Yun, Jaeseok
    • Journal of the Korea Society of Computer and Information, v.24 no.3, pp.151-157, 2019
  • In this paper, we present a prototype system for controlling IoT home appliances via voice commands. Voice commands have been widely deployed as an unobtrusive user interface for applications in a variety of IoT domains. However, interoperability between diverse IoT systems is limited because the dominant voice assistants, such as Amazon Alexa or Google Now, are proprietary. A global IoT standard, oneM2M, has been proposed to mitigate this lack of interoperability between IoT systems. We deployed oneM2M-based platforms for a voice recording device (such as a wrist band) and an LED control device (such as a home appliance). We developed all the components for recording voices and controlling IoT devices, and we demonstrate the feasibility of the proposed method, based on oneM2M platforms and the Google STT (Speech-to-Text) API, by showing a user scenario in which the LED device is turned on and off via voice commands. (A minimal sketch of this control path follows below.)
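
A minimal sketch of the control path described above: once the spoken audio has been transcribed (e.g., with Google STT), the LED is driven by creating a oneM2M contentInstance in the LED device's container. The CSE address, resource names, and originator below are assumptions, not the paper's deployment.

```python
import requests

CSE = "http://127.0.0.1:7579/Mobius"          # assumed oneM2M CSE base (e.g., Mobius)
CONTAINER = f"{CSE}/led_ae/led_cnt"           # assumed AE/container for the LED device

def send_led_command(state: str) -> None:
    headers = {
        "X-M2M-Origin": "Svoice_app",                 # assumed originator
        "X-M2M-RI": "req-001",
        "Content-Type": "application/json;ty=4",      # ty=4: contentInstance
    }
    body = {"m2m:cin": {"con": state}}                # command payload for the LED
    r = requests.post(CONTAINER, json=body, headers=headers, timeout=5)
    r.raise_for_status()

def handle_transcript(text: str) -> None:
    text = text.lower()
    if "off" in text:
        send_led_command("off")
    elif "on" in text:
        send_led_command("on")

handle_transcript("turn the LED on")   # transcript assumed to come from Google STT
```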

A Method for Selecting Voice Game Commands to Maximize the Command Distance (명령어간 거리를 최대화하는 음성 게임 명령어의 선택 방법)

  • Kim, Sangchul
    • Journal of Korea Game Society, v.19 no.4, pp.97-108, 2019
  • Recently, interest in voice game commands has been increasing due to the diversity and convenience of this input method. The recognition rate of such commands is affected by the distance between commands: the command distance is the phonetic difference between command utterances, and as this distance increases, the recognition rate improves. In this paper, we propose an IP (Integer Programming) model of the problem of selecting a combination of commands from given candidate commands so as to maximize the average distance. We also propose an SA (Simulated Annealing) based algorithm for solving the problem. We analyze the characteristics of our method through experiments under various conditions, such as the number of commands, the allowable command length, and so on. (A minimal SA sketch follows below.)
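
A minimal sketch of a simulated-annealing selection of k commands from a candidate pool that maximizes the average pairwise distance. Plain edit distance stands in for the paper's phonetic distance, and the candidate words and annealing schedule are illustrative.

```python
import math, random

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance as a crude stand-in for phonetic distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def avg_distance(selection):
    pairs = [(x, y) for i, x in enumerate(selection) for y in selection[i + 1:]]
    return sum(edit_distance(x, y) for x, y in pairs) / len(pairs)

def select_commands(candidates, k, steps=2000, t0=2.0, alpha=0.999):
    current = random.sample(candidates, k)
    best, t = list(current), t0
    for _ in range(steps):
        # propose swapping one selected command for an unselected candidate
        out_cmd = random.choice(current)
        in_cmd = random.choice([c for c in candidates if c not in current])
        proposal = [in_cmd if c == out_cmd else c for c in current]
        delta = avg_distance(proposal) - avg_distance(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = proposal
        if avg_distance(current) > avg_distance(best):
            best = list(current)
        t *= alpha
    return best

candidates = ["attack", "defend", "retreat", "advance", "reload", "heal", "jump", "crouch"]
print(select_commands(candidates, k=4))
```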

Implementation of voice Command System to control the Car Sunroof (자동차 선루프 제어용 음성 명령 시스템 구현)

  • 정윤식; 임재열
    • Proceedings of the IEEK Conference, 1999.06a, pp.1095-1098, 1999
  • We have developed a speaker-dependent voice command system (VCS) for controlling the sunroof in a car, using the RSC-164 VRP (Voice Recognition Processor). The VCS consists of control circuits, a microphone, a speaker, and a user switch box. The control circuits include the RSC-164, an input audio preamplifier, memory devices, and a relay circuit for sunroof control. The system is designed to be robust in various noisy car conditions, such as audio volume, the air conditioner, and incoming noise when the window or sunroof is open. Each of two users can control the car sunroof using seven voice commands on the Super TVS model and five voice commands on the Onyx model. It works well when driving the car at over 100 km/h with the sunroof open. (A minimal sketch of such per-user command handling follows below.)

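A minimal sketch of speaker-dependent command handling driving relay actions, in the spirit of the system above: each enrolled user has a small vocabulary, and a recognized word pulses the corresponding relay. The user names, command words, and relay interface are placeholders, not the RSC-164 setup.

```python
import time

USER_COMMANDS = {                       # speaker-dependent vocabularies (assumed)
    "driver":    ["open", "close", "tilt", "stop", "half", "full", "lock"],   # 7 words
    "passenger": ["open", "close", "tilt", "stop", "lock"],                   # 5 words
}

def pulse_relay(action: str, seconds: float = 0.5) -> None:
    print(f"relay '{action}' energized")     # would drive the sunroof motor
    time.sleep(seconds)
    print(f"relay '{action}' released")

def on_recognized(user: str, word: str) -> None:
    if word not in USER_COMMANDS.get(user, []):
        print("rejected (out of this speaker's vocabulary):", word)
        return
    pulse_relay(word)

on_recognized("driver", "open")
on_recognized("passenger", "half")   # rejected: not in the passenger's command set
```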

The Development of Personal Computer Control System Using Voice Command (음성 명령을 이용한 개인용 컴퓨터 조작 시스템의 구현)

  • Lee, Tae Jun; Kim, Dong Hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2018.10a, pp.101-102, 2018
  • Users may experience fatigue or pain in their wrists if they use a keyboard and mouse for a long time, and people with physical disabilities may find it difficult to work with a keyboard and mouse at all. Existing substitute products for solving this problem are limited in function or expensive. In this paper, we develop a system for controlling a personal computer with voice commands using an Amazon Echo and Amazon Web Services Lambda functions. The implemented system processes the user's voice commands on the Amazon server side and sends them to the personal computer, which processes the received command and uses it to operate an application program. (A minimal sketch of such a Lambda handler follows below.)

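A minimal sketch of an AWS Lambda handler for an Alexa custom skill that forwards the spoken command to the personal computer. The "Command" slot name and the PC-side HTTP endpoint are assumptions; the paper does not specify how the command is delivered to the PC.

```python
import json
import urllib.request

PC_ENDPOINT = "http://example.com/pc-agent/command"   # assumed agent listening on the PC

def lambda_handler(event, context):
    request = event.get("request", {})
    is_intent = request.get("type") == "IntentRequest"
    if is_intent:
        # "Command" is an assumed slot name carrying the spoken command text
        command = request["intent"]["slots"]["Command"]["value"]
        data = json.dumps({"command": command}).encode("utf-8")
        req = urllib.request.Request(PC_ENDPOINT, data=data,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=5)          # deliver the command to the PC
        speech = f"Sending {command} to your computer."
    else:
        speech = "Say a command for your computer."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": is_intent,
        },
    }
```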