• Title/Summary/Keyword: Human-robot interface


Design of Embedded FPGA for Controlling Humanoid Robot Arms Using Exoskeleton Motion Capture System (Exoskeleton 모션 캡처 장치로 다관절 로봇의 원격제어를 하기 위한 FPGA 임베디드 제어기 설계)

  • Lee, Woon-Kyu;Jung, Seul
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.1
    • /
    • pp.33-38
    • /
    • 2007
  • In this paper, the hardware implementation of the interface and control between two robots, a master and a slave, is designed. The master robot is a motion capture device that records the motions of the human operator who wears it; the slave robot is the corresponding pair of humanoid robot arms. Motions captured by the master robot are transferred to the slave robot so that the slave follows the master. All hardware blocks, such as the PID controllers, communication with the master robot, encoder counters, and PWM generators, are embedded on a single FPGA chip. Experimental studies are conducted to demonstrate the performance of the FPGA controller design.
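The per-joint control described above closes a PID loop between the captured master angle and the slave's encoder angle. A minimal sketch of that discrete PID update follows; this is a Python rendering for illustration only (the paper implements the loop in FPGA hardware), and the gains and sample time are invented:

```python
class PIDController:
    """Discrete PID update of the kind embedded per joint on the FPGA.
    Gains and sample time are illustrative, not the paper's values."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        # Error between the captured master angle and the slave encoder angle.
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PIDController(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
u = pid.update(setpoint=1.0, measured=0.0)  # control effort for one sample
```

In hardware, the same update would typically be computed in fixed-point arithmetic once per encoder-sampling period.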

A Study on the Walking Recognition Method of Assistance Robot Legs Using EEG and EMG Signals

  • Shin, Dae Seob
    • Journal of IKEEE
    • /
    • v.24 no.1
    • /
    • pp.269-274
    • /
    • 2020
  • This paper studies an exoskeleton robot for assisting the walking of the elderly and the disabled. We developed and tested an exoskeleton robot with two degrees of freedom for joint motion. EEG and EMG signals were used to move the joints of the exoskeleton robot. By analyzing the EMG signal, a control signal was extracted and applied to the robot to drive the walking motion of the walking assistance robot. In addition, brain-computer interface technology is applied to operate the robot using brain waves, the spontaneous electrical activity recorded on the human scalp. These two signals were fused to study the walking recognition method of the assistive robot leg.
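A common way to turn a raw EMG trace into a joint command, consistent with the control-signal extraction step described above, is full-wave rectification followed by a moving-average envelope and a threshold. The window size, threshold, and command names below are illustrative assumptions, not the paper's values:

```python
def emg_envelope(samples, window=5):
    """Full-wave rectify the raw EMG, then smooth with a causal
    moving average to get an amplitude envelope."""
    rect = [abs(s) for s in samples]
    env = []
    for i in range(len(rect)):
        lo = max(0, i - window + 1)
        env.append(sum(rect[lo:i + 1]) / (i + 1 - lo))
    return env

def joint_command(envelope, threshold=0.5):
    """Map the envelope to a binary flex/relax command for one joint."""
    return ["flex" if v >= threshold else "relax" for v in envelope]

commands = joint_command(emg_envelope([0.1, -0.9, 0.8, -0.1], window=2))
```

A real controller would add onset debouncing and per-user threshold calibration on top of this.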

Development of an Autonomous Mobile Robot with Functions of Speech Recognition and Collision Avoidance

  • Park, Min-Gyu;Lee, Min-Cheol
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.475-475
    • /
    • 2000
  • This paper describes the construction of an autonomous mobile robot with functions of collision avoidance and speech recognition, the latter used for teaching the robot's path. Human voice as a teaching method provides a more convenient user interface to the mobile robot. For safe navigation, the autonomous mobile robot needs the ability to recognize its surrounding environment and avoid collisions. We use ultrasonic sensors to obtain the distance from the mobile robot to the various obstacles. Through the navigation algorithm, the robot forecasts the possibility of collision with obstacles and modifies its path if it detects a dangerous obstacle. For these functions, the robot system is composed of four separate control modules: a speech recognition module, a servo motor control module, an ultrasonic sensor module, and a main control module. These modules are integrated by a CAN (controller area network) in order to provide real-time communication.
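The forecast-and-modify logic outlined above can be reduced to a distance-threshold sketch over the ultrasonic readings. The thresholds and action names below are hypothetical; the paper's navigation algorithm is not specified in this abstract:

```python
def forecast_collision(distances, stop_dist=0.3, slow_dist=1.0):
    """Classify the nearest ultrasonic reading (in meters) into an action,
    mimicking the forecast-then-modify-path behavior described above."""
    nearest = min(distances)
    if nearest < stop_dist:
        return "stop"        # imminent collision: halt immediately
    if nearest < slow_dist:
        return "steer_away"  # dangerous obstacle detected: modify the path
    return "keep_path"       # no forecast collision: continue on taught path

action = forecast_collision([2.4, 0.8, 1.9])
```

In the actual system this decision would run in the main control module, fed by the ultrasonic sensor module over CAN.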


HUMAN MOTION AND SPEECH ANALYSIS TO CONSTRUCT DECISION MODEL FOR A ROBOT TO END COMMUNICATING WITH A HUMAN

  • Otsuka, Naoki;Murakami, Makoto
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.719-722
    • /
    • 2009
  • The purpose of this paper is to develop a robot that moves independently, communicates with a human, and explicitly extracts information from the human mind that is rarely expressed verbally. In a spoken dialog system for information collection, it is desirable to continue communicating with the user as long as possible, but not if the user does not wish to communicate. Therefore, the system should be able to terminate the communication before the user starts to object to using it. In this paper, to enable the construction of a decision model by which a system decides when to stop communicating with a human, we acquired speech and motion data from individuals who were asked many questions by another person. We then analyzed their speech and body motion when they did not mind answering the questions, and also when they wished the questioning to cease. From the results, we can identify differences in speech power, length of pauses, speech rate, and body motion.
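A decision model over the four cues identified above (speech power, pause length, speech rate, body motion) might, in its simplest form, be a majority vote over per-cue thresholds. The thresholds and direction of each cue below are invented for illustration; the paper derives its actual model from the recorded data:

```python
def should_end_dialog(speech_power_db, pause_sec, speech_rate_sps, motion_score):
    """Toy majority-vote rule over the four cues the study analyzes.
    All thresholds are hypothetical placeholders."""
    votes = 0
    votes += speech_power_db < 55.0   # quieter answers
    votes += pause_sec > 1.5          # longer pauses before answering
    votes += speech_rate_sps < 4.0    # slower speech (syllables per second)
    votes += motion_score > 0.5       # more restless body motion
    return votes >= 3                 # most cues agree: stop questioning

decision = should_end_dialog(50.0, 2.0, 3.0, 0.6)
```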


Control of Mobile Robot Using Voice Recognition and Wearable Module (음성인식과 웨어러블 모듈을 이용한 이동로봇 제어)

  • 정성호;서재용;김용민;전홍태
    • Proceedings of the IEEK Conference
    • /
    • 2002.06c
    • /
    • pp.37-40
    • /
    • 2002
  • The Intelligent Wearable Module is an intelligent system that arises when a human is part of the feedback loop of a computational process, such as a control system; the applied system here is a mobile robot. This paper presents a mobile robot control system remotely operated by the Intelligent Wearable Module. Owing to the development of Internet technologies, many remote control methods over the Internet have been proposed. To control a mobile robot through the Internet and guide it in an unknown environment, we propose a control method driven by the Intelligent Wearable Module. In the proposed system, a PDA acts as the user interface and communicates via the TCP/IP protocol with a notebook computer that serves as the controller of the mobile robot system. Information about the direction and velocity of the mobile robot is fed back to the PDA, and the PDA sends new control commands produced by its fuzzy inference engine.
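The PDA-to-notebook link described above carries direction and velocity commands over TCP/IP. The abstract does not specify the message format, so the sketch below assumes a simple line-based text protocol; the field names are hypothetical:

```python
def encode_command(direction, velocity):
    """PDA side: pack one (direction, velocity) command as a text line,
    ready to be written to a TCP socket."""
    return f"{direction}:{velocity:.2f}\n".encode()

def decode_command(packet):
    """Notebook side: parse one received line back into (direction, velocity)."""
    direction, vel = packet.decode().strip().split(":")
    return direction, float(vel)

packet = encode_command("forward", 0.5)
cmd = decode_command(packet)
```

The same encode/decode pair would frame both the PDA's commands and the robot's direction/velocity feedback messages.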


Implementation of Hidden Markov Model based Speech Recognition System for Teaching Autonomous Mobile Robot (자율이동로봇의 명령 교시를 위한 HMM 기반 음성인식시스템의 구현)

  • 조현수;박민규;이민철
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.281-281
    • /
    • 2000
  • This paper presents an implementation of a speech recognition system for teaching an autonomous mobile robot. The use of human speech as the teaching method provides a more convenient user interface for the mobile robot. In this study, to make teaching the mobile robot easy, an autonomous mobile robot with a speech recognition function is investigated. In the speech recognition system, a speech recognition algorithm using an HMM (Hidden Markov Model) is presented to recognize Korean words. A filter-bank analysis model is used as the spectral analysis method to extract features. A recognized word is converted to a command for controlling the robot's navigation.
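Isolated-word recognition with per-word HMMs, as described above, typically scores the feature sequence against each word model and picks the best. A minimal Viterbi-based sketch follows, with toy two-state models and quantized observations standing in for the filter-bank features (the command words and model parameters are invented):

```python
import math

def viterbi_log_prob(obs, start_p, trans_p, emit_p):
    """Log-probability of the best state path through one word's HMM."""
    n = len(start_p)
    v = [math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in range(n)]
    for o in obs[1:]:
        v = [max(v[p] + math.log(trans_p[p][s]) for p in range(n))
             + math.log(emit_p[s][o]) for s in range(n)]
    return max(v)

def recognize(obs, word_models):
    """Pick the command word whose HMM scores the observation highest."""
    return max(word_models, key=lambda w: viterbi_log_prob(obs, *word_models[w]))

# Toy models: observation symbol 0 is typical of "go", symbol 1 of "stop".
word_models = {
    "go":   ([0.5, 0.5], [[0.8, 0.2], [0.2, 0.8]], [[0.9, 0.1], [0.9, 0.1]]),
    "stop": ([0.5, 0.5], [[0.8, 0.2], [0.2, 0.8]], [[0.1, 0.9], [0.1, 0.9]]),
}
word = recognize([0, 0, 0], word_models)
```

The recognized word would then be mapped to a navigation command, as the abstract describes.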


Development of an Autonomous Mobile Robot with the Function of Teaching a Moving Path by Speech and Avoiding a Collision (음성에 의한 경로교시 기능과 충돌회피 기능을 갖춘 자율이동로봇의 개발)

  • Park, Min-Gyu;Lee, Min-Cheol;Lee, Suk
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.17 no.8
    • /
    • pp.189-197
    • /
    • 2000
  • This paper addresses the development of an autonomous mobile robot with the functions of teaching a moving path by speech and avoiding collisions. The use of human speech as the teaching method provides a more convenient user interface for a mobile robot. In the speech recognition system, a speech recognition algorithm using a neural network is proposed to recognize Korean syllables. For safe navigation, the autonomous mobile robot needs the abilities to recognize its surrounding environment and to avoid collisions with obstacles. Ultrasonic sensors are used to obtain the distance from the mobile robot to the various obstacles in the surrounding environment. Using the navigation algorithm, the robot forecasts the possibility of collision with obstacles and modifies its moving path if it detects a dangerous obstacle.


User Interface for Unmanned Combat Vehicle Based on Mission Planning and Global Path Planning (임무계획 및 전역경로계획에 기반한 무인전투차량의 운용자 인터페이스 구현)

  • Lee, Ho-Joo;Lee, Young-Il;Park, Yong-Woon
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.12 no.6
    • /
    • pp.689-696
    • /
    • 2009
  • In this paper, a new user interface for an unmanned combat vehicle (UCV) is developed based on mission planning and global path planning. In order to complete a tactical mission given to a UCV, it is essential to design an effective interface scheme between the human and the UCV, considering the changing combat environment and the characteristics of the mission. The user interface is mainly composed of two parts, mission planning and global path planning, since they are important factors in accomplishing combat missions. First, the mission types of UCVs are identified. Based on these mission types, the concept of mission planning for UCVs is presented. Then a new method for global path planning is devised. It is capable of dealing with multiple grid maps so as to consider various combat factors, so that paths suitable for the mission can be generated. By combining these two, a user interface method is suggested. It is partially implemented in the Dog-horse Robot of ADD and its effectiveness is verified.
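The multiple-grid-map idea above can be sketched by summing weighted cost layers (say, terrain difficulty and threat exposure) into a single grid and planning over it. Dijkstra's algorithm is used here only as a stand-in, since the abstract does not name the planner, and the layers and weights are illustrative:

```python
import heapq

def plan_path(layers, weights, start, goal):
    """Dijkstra over a cost map built as a weighted sum of grid layers.
    layers: list of equally-sized 2-D grids; weights: one factor per layer."""
    rows, cols = len(layers[0]), len(layers[0][0])
    cost = [[sum(w * layer[r][c] for w, layer in zip(weights, layers))
             for c in range(cols)] for r in range(rows)]
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk predecessors back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Adjusting the weights re-balances the combat factors, so different mission types yield different paths over the same terrain.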

Dynamic Gesture Recognition for the Remote Camera Robot Control (원격 카메라 로봇 제어를 위한 동적 제스처 인식)

  • Lee Ju-Won;Lee Byung-Ro
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.7
    • /
    • pp.1480-1487
    • /
    • 2004
  • This study proposes a novel gesture recognition method for remote camera robot control. To recognize dynamic gestures, the preprocessing step is image segmentation. Conventional methods for effective object segmentation need a great deal of color information about the object (hand) image, and in the recognition step they need many features for each object. To overcome these problems, this study proposes a method to recognize dynamic hand gestures consisting of the MMS (Max-Min Search) method to segment the object image, the MSM (Mean Space Mapping) and COG (Center Of Gravity) methods to extract image features, and an MLPNN (Multi-Layer Perceptron Neural Network) structure to recognize the dynamic gestures. In the experimental results, the recognition rate of the proposed method was above 90%, which shows that it can serve as an HCI (Human Computer Interface) device for remote robot control.
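The Max-Min search and COG feature steps named above can be illustrated on a binary hand mask. This is a schematic reading of those methods under stated assumptions, not the paper's exact algorithms:

```python
def bounding_box(mask):
    """Max/Min-style segmentation: the min and max row and column of the
    object's pixels bound the hand region in the binary mask."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    rows = [r for r, _ in pts]
    cols = [c for _, c in pts]
    return min(rows), min(cols), max(rows), max(cols)

def center_of_gravity(mask):
    """COG (Center Of Gravity) of the binary mask, one candidate
    feature to feed the MLP classifier."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    n = len(pts)
    return sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n

mask = [[0, 1], [1, 1]]
box = bounding_box(mask)
cog = center_of_gravity(mask)
```

Tracking the COG across frames turns the static feature into a dynamic-gesture trajectory.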

Human Robot Interaction via Evolutionary Network Intelligence

  • Yamaguchi, Toru
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2002.10a
    • /
    • pp.49.2-49
    • /
    • 2002
  • This paper describes the configuration of a multi-agent system that can recognize human intentions. This system constructs ontologies of human intentions and enables knowledge acquisition and sharing between intelligent agents operating in different environments. This is achieved by using a bi-directional associative memory network. The process of intention recognition is based on fuzzy association inferences. This paper shows the process of information sharing by using ontologies. The purpose of this research is to create human-centered systems that can provide a natural interface in their interaction with people.
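The bi-directional associative memory the abstract mentions stores pattern pairs in a Hebbian weight matrix and recalls across it in either direction. A minimal sketch with bipolar toy patterns follows; the paper's network and its ontology encoding are of course much richer:

```python
def train_bam(pairs):
    """Hebbian weight matrix W = sum of outer products x * y^T
    over bipolar (+1/-1) pattern pairs."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    return [[sum(x[i] * y[j] for x, y in pairs) for j in range(m)]
            for i in range(n)]

def recall_forward(w, x):
    """Recall the associated y from x; the reverse direction would
    use the transpose of the same matrix."""
    return [1 if sum(x[i] * w[i][j] for i in range(len(x))) >= 0 else -1
            for j in range(len(w[0]))]

# Two toy intention patterns and their associated responses.
pairs = [([1, 1, -1], [1, -1]), ([-1, 1, 1], [-1, 1])]
w = train_bam(pairs)
recalled = recall_forward(w, [1, 1, -1])
```

Fuzzy association inference, as used in the paper, generalizes this crisp recall to graded memberships.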
