• Title/Abstract/Keyword: Voice Interaction and Control

Search results: 35 items

호 제어 마크업 해석기 개발 및 음성 대화 시스템과의 연동 (Design and Implementation of a Call Control Markup Interpreter and Its Interaction with Voice Dialog Systems)

  • 이경아;권지혜;김지영;홍기형
    • 대한음성학회지:말소리 / No. 53 / pp. 171-183 / 2005
  • Call Control eXtensible Markup Language (CCXML) is a standard language that supports call control for voice dialog systems such as VoiceXML-based systems. CCXML allows developers to handle telephony calls easily, without deep knowledge of telephony networks and their switching systems. We design and implement a call control markup interpreter. In our implementation we use a Dialogic JCT-LS board, but because the CTI (computer telephony integration) features are encapsulated in a wrapper class, the interpreter can easily adopt other CTI boards. We also design and implement an event-based interaction scheme between the interpreter and voice dialog systems, and we implement a simple voice dialog system to verify the scheme.
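
The event-based interaction scheme mentioned above can be pictured as a small event loop routing telephony events between the call-control interpreter and the dialog system. The Python sketch below is purely illustrative: the event names follow the standard CCXML vocabulary (connection.alerting, connection.connected, dialog.exit), but the classes and methods are hypothetical, not the paper's implementation.

```python
import queue

# Minimal sketch of an event-based interpreter/dialog-system interaction,
# loosely following the CCXML model. Classes and methods are hypothetical.

class CallControlInterpreter:
    def __init__(self, dialog_system):
        self.events = queue.Queue()
        self.dialog = dialog_system

    def post(self, event, **data):
        self.events.put((event, data))

    def run(self):
        while True:
            event, data = self.events.get()
            if event == "connection.alerting":      # incoming call is ringing
                self.post("connection.connected", call_id=data["call_id"])  # accept
            elif event == "connection.connected":   # call answered: start dialog
                self.dialog.start(data["call_id"])
            elif event == "dialog.exit":            # dialog finished: hang up
                self.post("connection.disconnected", call_id=data["call_id"])
            elif event == "connection.disconnected":
                break

class EchoDialogSystem:
    """Stand-in for a VoiceXML dialog system."""
    def __init__(self):
        self.interpreter = None
    def start(self, call_id):
        print(f"dialog running for call {call_id}")
        self.interpreter.post("dialog.exit", call_id=call_id)

dialog = EchoDialogSystem()
interp = CallControlInterpreter(dialog)
dialog.interpreter = interp
interp.post("connection.alerting", call_id=1)
interp.run()
```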


A research on man-robot cooperative interaction system

  • Ishii, Masaru
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 1992 Korea Automatic Control Conference (International Session); KOEX, Seoul; 19-21 Oct. 1992 / pp. 555-557 / 1992
  • Recently, the realization of an intelligent cooperative interaction system between a human and robot systems has become a requirement. In this paper, HyperCard with voice control is used for such a system because of its easy handling and excellent human interface. Clicking a button in the HyperCard stack, by mouse or by voice command, controls one joint of the robot system. A robot teaching operation of grasping a bin and pouring the liquid in it into a cup is carried out. This teaching method using HyperCard provides a foundation for realizing a user-friendly cooperative interaction system.
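
The setup described above maps each HyperCard button, or the equivalent voice command, to a motion of one robot joint, and teaching is the recorded sequence of such commands. A minimal sketch of that mapping, with hypothetical joint names and step sizes:

```python
# Hypothetical command-to-joint mapping: each button/voice command
# drives one robot joint by a fixed increment.
JOINT_COMMANDS = {
    "base left":  ("base",    +5.0),   # degrees per command
    "base right": ("base",    -5.0),
    "elbow up":   ("elbow",   +5.0),
    "elbow down": ("elbow",   -5.0),
    "grip open":  ("gripper", +1.0),
    "grip close": ("gripper", -1.0),
}

def execute(command: str, joint_angles: dict) -> dict:
    """Apply one spoken/button command to the current joint state."""
    joint, delta = JOINT_COMMANDS[command]
    joint_angles[joint] = joint_angles.get(joint, 0.0) + delta
    return joint_angles

# Teaching by demonstration: record a command sequence for later replay.
taught_sequence = ["base left", "elbow down", "grip close", "elbow up"]
state = {}
for cmd in taught_sequence:
    state = execute(cmd, state)
print(state)
```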


모바일-매니퓰레이터 구조 로봇시스템의 안정한 모션제어에 관한연구 (A Study on Stable Motion Control of Mobile-Manipulators Robot System)

  • 박문열;황원준;박인만;강언욱
    • 한국산업융합학회 논문집 / Vol. 17 No. 4 / pp. 217-226 / 2014
  • As the world has shifted to the high-tech industries of the 21st century, people have become reluctant to work in difficult and dirty environments, so unmanned technologies based on robots are in demand. Techniques such as voice control and obstacle avoidance are being proposed, and voice recognition in particular is important because it enables convenient interaction between humans and machines. In this study, to achieve stable motion control of a voice-command-based robot system with a mobile-manipulator structure, we carried out kinematic analysis and dynamic modeling of a two-armed manipulator and a three-wheeled mobile robot. We also designed the autonomous driving of the three-wheeled mobile robot and the motion control system of the two-armed manipulator, and performed combined robot control through voice commands. For the performance evaluation, we conducted driving-control experiments and simulations of the two-armed manipulator, and verified stable motion control of the combined voice-command-based system through driving control, two-arm motion control, and combined control.
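
A three-wheeled mobile base like the paper's is commonly described by the standard unicycle (differential-drive) kinematic model. Below is a minimal sketch of that generic model, with illustrative parameter values rather than the paper's:

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    """One Euler step of the standard unicycle kinematic model:
    x' = v*cos(theta), y' = v*sin(theta), theta' = omega."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Drive forward while turning; values are illustrative only.
x, y, theta = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, theta = unicycle_step(x, y, theta, v=0.2, omega=0.1, dt=0.05)
print(f"pose after 5 s: x={x:.2f} m, y={y:.2f} m, heading={math.degrees(theta):.1f} deg")
```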

동굴관광용 고층수직이동 승강기의 긴급 음성구동 제어 (Voice Recognition Sensor Driven Elevator for High-rise Vertical Shift)

  • 최병섭;강태현;윤여훈;장훈규;소대화
    • 동굴 / No. 88 / pp. 1-7 / 2008
  • Voice recognition is one of the most interesting technologies in Human-Computer Interaction (HCI); in science-fiction films, for instance, people talk to computers as a matter of course. In reality, however, human language and machine representations differ greatly, and connecting the two has occupied researchers for some thirty years. The goal of this project is an elevator that understands human voice commands: the signal from a voice sensor is converted to a binary code (BCD) that drives the elevator carrying passengers up and down, with passenger safety as the central design concern. Using PWM motor control on an ATmega16, we chose a DC motor so that the elevator runs at a regular speed. With a voice identification module, the voice-sensor-driven elevator operated reliably from the 1st to the 10th floor under ATmega16 PWM control, indicating that a voice-recognition-driven elevator is practical for high-rise vertical shift, such as in cave tourism.
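
As a rough illustration of the control flow described above (recognized floor command to PWM-driven motor), here is a Python sketch. The real controller is ATmega16 firmware written in C; the motor interface, floor height, and ramp parameters here are hypothetical stand-ins.

```python
# Hypothetical sketch: recognized voice command -> target floor -> PWM duty.
# A real implementation would be C firmware on the ATmega16, using its
# hardware PWM timer to drive the DC motor through an H-bridge.

FLOOR_HEIGHT_M = 3.0
MAX_DUTY = 0.8           # cap duty cycle for a regular, safe speed

def duty_for_travel(remaining_m: float) -> float:
    """Trapezoidal profile: full duty en route, ramp down near the target."""
    ramp_zone = 1.5      # metres over which we decelerate
    if remaining_m <= 0:
        return 0.0
    return MAX_DUTY * min(1.0, remaining_m / ramp_zone)

def serve_command(current_floor: int, spoken_floor: int):
    direction = 1 if spoken_floor > current_floor else -1
    remaining = abs(spoken_floor - current_floor) * FLOOR_HEIGHT_M
    while remaining > 0:
        duty = duty_for_travel(remaining)
        # motor.set_pwm(duty, direction)   # hypothetical motor interface
        remaining -= 0.1                   # pretend we moved 0.1 m per tick
    # motor.brake()                        # hypothetical stop at target floor

serve_command(current_floor=1, spoken_floor=10)
```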

조작 방식에 따른 음성과 소리 피드백의 할당 방법: 가전제품과의 상호작용을 중심으로 (An Arrangement Method of Voice and Sound Feedback According to the Operation: For Interaction of Domestic Appliance)

  • 홍은지;황해정;강연아
    • 한국HCI학회논문지 / Vol. 11 No. 2 / pp. 15-22 / 2016
The ways in which users interact with domestic appliances are diversifying. Users can control devices with remote controls, touch screens, and so on, and devices can likewise give feedback through sound, voice, visual signals, and other channels. However, there is no principle or standard for assigning a feedback modality to an input modality, so assignments are made arbitrarily by each brand and device. This study examined experimentally whether voice or non-speech sound is the more appropriate feedback when users operate an appliance by voice command versus by button. We ran a factorial experiment with four cells (2×2) combining input modality (voice recognition, button) and feedback modality (voice guidance, sound), and examined whether perceived usability, satisfaction, preference, and appropriateness varied with the combination. Operating the appliance by voice recognition yielded higher ease of use and operation satisfaction. With button input, however, ease of use and satisfaction varied with the feedback modality, confirming an interaction effect between input and feedback modality. For the appropriateness of an input-feedback combination for appliances, a main effect of feedback modality was confirmed. In conclusion, with voice-recognition input, sound feedback (earcons) produced higher satisfaction, though the difference was not statistically significant; with button input, voice-guidance feedback produced higher satisfaction, and this difference was statistically significant. Which input or feedback method suits an appliance thus turned out to be driven mainly by the feedback method.
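
The reported main and interaction effects for a 2×2 factorial design like this are conventionally tested with a two-way ANOVA. A minimal Python sketch using statsmodels, with made-up ratings solely to make the example runnable (not the study's data):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Invented illustrative satisfaction ratings for the 2x2 design:
# input modality (voice/button) x feedback modality (speech/sound).
data = pd.DataFrame({
    "input":    ["voice"] * 6 + ["button"] * 6,
    "feedback": (["speech"] * 3 + ["sound"] * 3) * 2,
    "satisfaction": [5, 4, 5, 6, 6, 5, 6, 6, 7, 4, 5, 4],
})

# Two-way ANOVA with interaction: satisfaction ~ input * feedback.
model = smf.ols("satisfaction ~ C(input) * C(feedback)", data=data).fit()
print(anova_lm(model, typ=2))   # main effects and the interaction term
```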

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don;Choi, Jong-Suk;Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems / Vol. 5 No. 1 / pp. 61-69 / 2007
  • In this paper, we developed both a reliable sound localization system, including a VAD (Voice Activity Detection) component using three microphones, and a face tracking system using a vision camera. Moreover, we proposed a way to integrate the three systems for human-robot interaction, to compensate for errors in the localization of a speaker and to effectively reject unnecessary speech or noise signals entering from undesired directions. To verify our system's performance, we installed the proposed audio-visual system in a prototype robot, called IROBAA (Intelligent ROBot for Active Audition), and demonstrated how to integrate the audio-visual system.
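
Sound localization with a small microphone array is typically built on pairwise time-difference-of-arrival (TDOA) estimates, with GCC-PHAT as the standard estimator. The paper does not spell out its estimator, so the sketch below shows only the generic technique:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay of `sig` relative to `ref` with GCC-PHAT."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12          # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    tau = (np.argmax(np.abs(cc)) - max_shift) / fs
    return tau                               # seconds; sign gives direction

# Toy check: a copy of white noise delayed by 48 samples (3 ms at 16 kHz).
fs = 16000
ref = np.random.randn(fs // 10)
sig = np.concatenate((np.zeros(48), ref))[:len(ref)]
print(f"estimated delay: {gcc_phat(sig, ref, fs) * 1000:.1f} ms")
```

With three microphones, the pairwise delays constrain the speaker's bearing; the VAD gates the estimator so that only speech frames are localized.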

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / Vol. 2 No. 4 / pp. 285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research into an intelligent multimedia interface system modeled on the way people communicate: interaction between humans and computers based only on the processing of speech (words uttered by the person) and of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera, pointing it at a location specified by the user through spoken words and hand pointing. The system uses a second, stationary camera to capture images of the user's hand and a microphone to capture the user's words; after processing the images and sounds, it responds by pointing the camera. The interface first uses hand pointing to locate the general position the user is referring to, then uses the user's voice commands to fine-tune the location and to change the camera's zoom if requested. The image of the location, captured by the pan/tilt camera, is displayed on a color TV monitor. Such a system has applications in teleconferencing and other remote operations, where it must respond to user commands much as another person would. The advantage of this approach is that it replaces the traditional input devices needed to control a pan/tilt camera with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.
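
The coarse-then-fine strategy above (a pointing gesture fixes a rough camera direction, voice commands refine it) reduces to a simple control routine. A minimal sketch, in which the pan/tilt values, step size, and command vocabulary are hypothetical:

```python
# Hypothetical sketch of the coarse/fine camera-pointing strategy:
# hand pointing supplies a rough pan/tilt target, voice commands nudge it.
STEP_DEG = 2.0   # illustrative fine-adjustment step

def refine(pan: float, tilt: float, command: str) -> tuple:
    """Apply one spoken refinement command to the camera pose."""
    moves = {
        "left":  (-STEP_DEG, 0.0), "right": (+STEP_DEG, 0.0),
        "up":    (0.0, +STEP_DEG), "down":  (0.0, -STEP_DEG),
    }
    dp, dt = moves.get(command, (0.0, 0.0))
    return pan + dp, tilt + dt

# Coarse stage: pointing-gesture recognizer yields an approximate target.
pan, tilt = 31.0, -4.0                      # degrees, from the hand image
# Fine stage: the user speaks adjustments until satisfied.
for spoken in ["left", "left", "up"]:
    pan, tilt = refine(pan, tilt, spoken)
print(f"final camera pose: pan={pan} deg, tilt={tilt} deg")
```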


Interactive Adaptation of Fuzzy Neural Networks in Voice-Controlled Systems

  • Pulasinghe, Koliya;Watanabe, Keigo;Izumi, Kiyotaka;Kiguchi, Kazuo
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / ICCAS 2002 / pp. 42.3-42 / 2002
  • A fuzzy neural network (FNN) is an essential element of a voice-controlled machine because of its inherent capability to interpret imprecise natural-language commands. To control such a machine, the user's perception of imprecise words is very important because those words' meanings are highly subjective. This paper presents a voice-based controller centered on an adaptable FNN that captures the user's perception of imprecise words; the machine's conversational interface facilitates learning through interaction. The system consists of a dialog manager (DM), the conversational interface, and a knowledge base, which absorbs the user's perception and acts as a replica of human understanding of imprecise words, ...
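
Interpreting an imprecise word such as "fast" amounts to defuzzifying a fuzzy set over the control variable and shifting that set as the user corrects the machine. The paper's adaptable FNN is far richer; the toy sketch below only illustrates the underlying idea, with all values invented for the example:

```python
# Toy model of interpreting imprecise speed words and adapting to the user.
# Each word maps to a triangular fuzzy set over speed (m/s); the crisp
# command is the set's centre, nudged whenever the user corrects the machine.

fuzzy_words = {            # word -> (low, centre, high) of triangular set
    "slow":   (0.0, 0.2, 0.5),
    "medium": (0.3, 0.6, 0.9),
    "fast":   (0.7, 1.0, 1.3),
}

def interpret(word: str) -> float:
    """Defuzzify a speed word to its current centre value."""
    low, centre, high = fuzzy_words[word]
    return centre

def adapt(word: str, correction: float, rate: float = 0.3) -> None:
    """Shift the word's fuzzy set toward the speed the user actually wanted."""
    low, centre, high = fuzzy_words[word]
    shift = rate * (correction - centre)
    fuzzy_words[word] = (low + shift, centre + shift, high + shift)

print(interpret("fast"))        # 1.0 m/s initially
adapt("fast", correction=0.8)   # user indicates that was too fast: wanted ~0.8
print(interpret("fast"))        # centre moves toward the user's perception
```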


감성 상호작용을 갖는 교육용 휴머노이드 로봇 D2 개발 (Design and Implementation of the Educational Humanoid Robot D2 with an Emotional Interaction System)

  • 김도우;정기철;박원성
    • The Korean Institute of Electrical Engineers (KIEE) Conference Proceedings / 2007 38th KIEE Summer Conference / pp. 1777-1778 / 2007
  • In this paper, we design and implement a humanoid robot for educational purposes that can collaborate and communicate with humans. We present an affective human-robot communication system for this robot, D2, which we designed to communicate with a human through dialogue. D2 communicates by understanding and expressing emotion using facial expressions, voice, gestures, and posture. Interaction between human and robot is made possible through our affective communication framework, which enables the robot to catch the user's emotional state and respond appropriately, so that it can engage in natural dialogue. To support interaction through voice, gestures, and posture, the robot consists of an upper body, two arms, a wheeled mobile platform, and control hardware, including vision and speech capabilities and various control boards, such as motion control boards and a signal-processing board handling several types of sensors. Using D2, we presented successful demonstrations comprising a two-arm manipulation task, object tracking with the vision system, and communication with humans through the emotional interface, synthesized speech, and the recognition of speech commands.
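
At its simplest, the affective framework described above maps a detected user emotion to a coordinated multimodal response. The dispatch table below is a purely hypothetical illustration of that mapping, not D2's actual states or behaviors:

```python
# Hypothetical emotion -> multimodal response table for an affective robot.
RESPONSES = {
    "happy":   {"face": "smile",   "speech": "That's great!", "gesture": "nod"},
    "sad":     {"face": "concern", "speech": "Are you okay?", "gesture": "lean_in"},
    "neutral": {"face": "neutral", "speech": "Tell me more.", "gesture": "idle"},
}

def respond(detected_emotion: str) -> dict:
    """Pick the multimodal response for the user's detected emotional state."""
    return RESPONSES.get(detected_emotion, RESPONSES["neutral"])

action = respond("sad")
print(action["face"], "|", action["speech"], "|", action["gesture"])
```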


Interface Modeling for Digital Device Control According to Disability Type in Web

  • Park, Joo Hyun;Lee, Jongwoo;Lim, Soon-Bum
    • Journal of Multimedia Information System / Vol. 7 No. 4 / pp. 249-256 / 2020
  • Learning methods using various assistive and smart devices have been developed to enable independent learning by people with disabilities. Pointer control is the most important consideration when they control a device and the contents of an existing graphical user interface (GUI) environment; however, using a pointer can be difficult, depending on the disability type. Although the difficulties differ among blindness, low vision, and upper-limb disability, all three share problems with the accuracy of object selection and execution. We present a multimodal interface pilot solution that enables people with various disability types to control web interactions more easily. First, we classify the types of web interaction performed with digital devices and derive the essential interactions among them. Second, to address the problems that occur when performing these interactions, we present the technology required by the characteristics of each disability type. Finally, we propose a pilot multimodal interface solution for each type. We identified three disability types and developed a solution for each: a remote-control voice interface for blind users, a voice-output interface applying a selective-focusing technique for low-vision users, and a gaze-tracking and voice-command interface for GUI operations for users with upper-limb disability.
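
The per-disability-type design above is, at its core, a configuration dispatch from disability type to the input and output modalities to enable. A minimal sketch, with hypothetical modality labels rather than the paper's implementation:

```python
# Hypothetical mapping from disability type to the interface modalities
# a pilot solution would enable for the essential web interactions.
INTERFACE_PROFILES = {
    "blind":      {"input": ["voice_command", "remote_control"],
                   "output": ["speech"]},
    "low_vision": {"input": ["voice_command"],
                   "output": ["speech_selective_focus", "magnified_visual"]},
    "upper_limb": {"input": ["gaze_tracking", "voice_command"],
                   "output": ["visual"]},
}

def configure_interface(disability_type: str) -> dict:
    """Return the input/output modalities to enable for this user."""
    profile = INTERFACE_PROFILES[disability_type]
    return {"inputs": profile["input"], "outputs": profile["output"]}

print(configure_interface("upper_limb"))
```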