• Title/Summary/Keyword: Robot Interface


Hybrid System Modeling and Control for Path Planning and Autonomous Navigation of Wheeled Mobile Robots (차륜형 이동로봇의 경로 계획과 자율 주행을 위한 하이브리드 시스템 모델과 제어)

  • Im, Mi-Seop;Im, Jun-Hong
    • The Transactions of the Korean Institute of Electrical Engineers D / v.49 no.1 / pp.33-40 / 2000
  • In this paper, an integrated method for path planning and motion control of wheeled mobile robots, based on a hybrid system model and control, is presented. A hybrid model comprising continuous and discrete dynamics, with continuous and discrete state vectors, is derived for a two-wheel-driven mobile robot. The hybrid control architecture for real-time path planning and following has a three-layered hierarchical structure: a discrete event system using digital automata as the higher-level process, a continuous state system for wheel velocity control as the lower-level process, and an interface system mediating between the low-level continuous system and the high-level discrete event system. Reference motion commands for autonomous navigation are generated by abstracted motions in the discrete event system. Motion control tasks, including feasible path planning and autonomous motion control under various initial conditions, are investigated in simulation studies.
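
The three-layer architecture described in the abstract can be sketched in miniature: a discrete event layer stepping through abstract motion commands, an interface layer mapping each command to wheel-velocity references, and a continuous layer tracking them. The command set, gains, and geometry below are illustrative assumptions, not the paper's model.

```python
def discrete_event_layer(state):
    # higher-level discrete event system: abstract motion commands as an automaton
    transitions = {"idle": "go_straight", "go_straight": "turn_left", "turn_left": "idle"}
    return transitions[state]

def interface_layer(event, v=0.5, w=1.0):
    # interface system: map a discrete command to (left, right) wheel velocity references
    if event == "go_straight":
        return v, v
    if event == "turn_left":
        return v - w * 0.1, v + w * 0.1   # assumed 0.1 m half-axle length
    return 0.0, 0.0

def continuous_layer(ref, measured, kp=2.0):
    # lower-level continuous system: proportional wheel-velocity control
    return kp * (ref - measured)

state = discrete_event_layer("idle")               # -> "go_straight"
vl_ref, vr_ref = interface_layer(state)            # wheel velocity references
u_left = continuous_layer(vl_ref, measured=0.0)    # control effort for the left wheel
```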

Operator Capacity Assessment Method for the Supervisory Control of Unmanned Military Vehicle (군사로봇의 감시제어에서 운용자 역량 평가 방법에 관한 연구)

  • Choi, Sang-Yeong;Yang, Ji-Hyeon
    • The Journal of Korea Robotics Society / v.12 no.1 / pp.94-106 / 2017
  • Unmanned military vehicles (UMVs) will be increasingly applied to various military operations. These UMVs are most commonly characterized as handling "4D" tasks with automation: dull, dirty, dangerous, and difficult. Although most UMVs are designed with a high degree of autonomy, the human operator will still intervene in the robot's operation and tele-operate it to achieve the mission. Thus, operator capacity, along with robot autonomy and user interface, is one of the important design factors in the research and development of UMVs. In this paper, we propose a method to assess the operator capacity of UMVs. The method comprises six steps (problem, assumption, goal function identification, operator task analysis, task modeling and simulation, results and assessment), and colored Petri nets are used for the modeling and simulation. An illustrative example is described at the end of the paper.
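
The colored Petri-net modeling step can be illustrated with a toy net: tokens carry colors (here an operator identity and an alert type), and a transition fires only when its colored input tokens are present. Place and transition names are invented for illustration, and plain token lists stand in for a full CPN tool.

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)       # place name -> list of colored tokens
        self.transitions = {}

    def add_transition(self, name, inputs, outputs):
        # inputs/outputs: lists of (place, colored-token) pairs
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, []).count(tok) >= 1 for p, tok in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            return False
        for p, tok in inputs:              # consume input tokens
            self.marking[p].remove(tok)
        for p, tok in outputs:             # produce output tokens
            self.marking.setdefault(p, []).append(tok)
        return True

# hypothetical operator-task fragment: an idle operator handles a UMV alert
net = PetriNet({"operator_idle": ["op1"], "alert_queue": ["uav_alert"]})
net.add_transition("handle_alert",
                   inputs=[("operator_idle", "op1"), ("alert_queue", "uav_alert")],
                   outputs=[("operator_busy", "op1")])
net.fire("handle_alert")
```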

Engine of computational Emotion model for emotional interaction with human (인간과 감정적 상호작용을 위한 '감정 엔진')

  • Lee, Yeon Gon
    • Science of Emotion and Sensibility / v.15 no.4 / pp.503-516 / 2012
  • In research on robots and software agents to date, computational emotion models have been system-dependent, making it difficult to separate an emotion model from its existing system and reuse it in a new one. This paper therefore introduces an Engine of computational Emotion model (hereafter EE) that can be integrated with any robot or agent. The EE is a software engine independent of particular inputs and outputs: it handles only the generation and processing of emotions, excluding the input (perception) and output (expression) phases. It can be interfaced with any input and output, and produces emotions not only from incoming emotional stimuli but also from personality and the emotions of a person. The EE can reside in a robot or agent as a software library, or run as a separate communicating system. Its emotions are the six primary emotions: joy, surprise, disgust, fear, sadness, and anger. An emotion is represented as a vector of strings and coefficients; the EE receives these vectors from the input interface and sends them to the output interface. Each emotion is connected to lists of emotional experiences, themselves consisting of strings and coefficients, which are used to generate and process emotional states; the emotional experiences comprise an emotion vocabulary covering the varied emotional experiences of humans. The EE can be used to build interactive products that respond appropriately to human emotions. The significance of the study lies in developing a system that leads a person to feel that a product sympathizes with them; the EE can thus help provide emotionally sympathetic services in HRI and HCI products.
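
The abstract's emotion vector, a pairing of an emotion string with a coefficient over the six primary emotions, can be sketched as a small data structure. The blending gain and method names below are assumptions; the actual EE also draws on personality and emotional-experience lists, which this sketch omits.

```python
PRIMARY = ("joy", "surprise", "disgust", "fear", "sadness", "anger")

class EmotionEngine:
    """Toy stand-in for the EE: state over the six primary emotions."""

    def __init__(self):
        self.state = {e: 0.0 for e in PRIMARY}

    def receive(self, vector, gain=0.5):
        # vector: list of (emotion-string, coefficient) pairs from the input interface;
        # gain is an assumed blending constant, not from the paper
        for emotion, coeff in vector:
            if emotion in self.state:
                self.state[emotion] += gain * coeff

    def dominant(self):
        # emotion with the largest coefficient, for the output interface
        return max(self.state, key=self.state.get)

ee = EmotionEngine()
ee.receive([("joy", 0.8), ("surprise", 0.4)])
```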

Development of Localization Tracking System and User Interface of Guiding Robot for the Visually Impaired (시각장애인 유도 로봇의 자기 위치 추적 시스템 및 사용자 인터페이스 개발)

  • Ryu Je-Goon;Shen Dong-Fan;Kwon Oh-Sang;Kim Nack-Hwan;Lee Sang-Moo;Lee Eung-Hyuk;Hong Seung-Hong
    • The KIPS Transactions:PartD / v.12D no.3 s.99 / pp.481-492 / 2005
  • To guide the visually impaired safely, the digital map used for path searching must be detailed and must contain information about dangerous spots. The guiding robot also has to search not only a safe but also a short path, using position data from GPS and INS sensors. In this paper, considering differences in the recognition abilities of the visually impaired, we have developed a localization tracking system that can build a movement path and verify position information, together with a global navigation system for the visually impaired using GPS and INS. This system can be used when the visually impaired move along relatively short paths. We also verified that, outdoors, the system can correct position as an assistant navigation system to GPS.
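
The GPS/INS combination described can be sketched as dead reckoning from the inertial sensors, corrected by occasional GPS fixes; the blend weight below is an assumed constant, not the paper's tuned correction.

```python
import math

def dead_reckon(pos, heading, dist):
    # INS-style dead reckoning: advance the position along the current heading
    return (pos[0] + dist * math.cos(heading),
            pos[1] + dist * math.sin(heading))

def gps_correct(pos, gps_fix, alpha=0.8):
    # pull the dead-reckoned position toward an available GPS fix
    # (alpha is an assumed trust weight on the GPS)
    return (alpha * gps_fix[0] + (1 - alpha) * pos[0],
            alpha * gps_fix[1] + (1 - alpha) * pos[1])

pos = (0.0, 0.0)
pos = dead_reckon(pos, heading=0.0, dist=1.0)   # walk 1 m east
pos = gps_correct(pos, gps_fix=(1.2, 0.1))      # GPS fix disagrees slightly
```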

A Study on Human-Robot Interface based on Imitative Learning using Computational Model of Mirror Neuron System (Mirror Neuron System 계산 모델을 이용한 모방학습 기반 인간-로봇 인터페이스에 관한 연구)

  • Ko, Kwang-Enu;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.6 / pp.565-570 / 2013
  • The mirror neuron regions distributed in the cortical area handle intention recognition through imitative learning of an observed action, acquired from visual information of a goal-directed action. In this paper, an automated intention recognition system is proposed by applying a computational model of the mirror neuron system to a human-robot interaction system. The computational model is designed using dynamic neural networks whose input is a sequential feature vector set derived from the behaviors of the target object and actor, and whose output is motor data that can be used to perform the corresponding intentional action, obtained through the imitative learning and estimation procedures of the proposed model. The intention recognition framework takes its model input from a KINECT sensor and computes the corresponding motor data within a virtual robot simulation environment, based on intention-related scenarios with a limited experimental space and a specified target object.
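
The dynamic-neural-network idea, folding a sequence of feature vectors into a hidden state and emitting motor data, can be sketched with a toy recurrent forward pass; the sizes and random weights below stand in for a trained model and are not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = 0.1 * rng.normal(size=(4, 3))    # 3-d feature vector -> 4-d hidden state
W_h = 0.1 * rng.normal(size=(4, 4))     # recurrent hidden-to-hidden weights
W_out = 0.1 * rng.normal(size=(2, 4))   # hidden state -> 2 motor commands

def recognize(sequence):
    # fold a sequence of feature vectors (e.g. joint features from a KINECT)
    # into a hidden state, then map it to motor data for the imitated action
    h = np.zeros(4)
    for x in sequence:
        h = np.tanh(W_in @ x + W_h @ h)
    return W_out @ h

motors = recognize([np.ones(3), np.zeros(3), np.ones(3)])
```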

Development of a Robot for Automation of a Callus Inoculation (식물조직배양 자동화를 위한 로봇개발 - 엔드이펙터 및 시스템의 성능시험 -)

  • Chung, Suk-Hyun;No, Dae-Hyun
    • Journal of Bio-Environment Control / v.18 no.2 / pp.87-94 / 2009
  • This study was conducted to develop an automation system for the inoculation processing of lily callus. The results are summarized as follows. The end-effector was manufactured in suction and mechanical types; these end-effectors can separate the callus from the medium, divide the separated callus, and inoculate the divided callus onto new media. Using the mechanical end-effector, the experiments showed a success rate of 100% in the division process and 92% in the separation and inoculation processes. To automate control of the inoculation process, a system was developed to control an external device and the manipulator. A data communication program between the robot and a personal computer was also developed using CAsyncSocket and an Ethernet interface.
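
The robot-PC link was originally built with MFC's CAsyncSocket over Ethernet; a minimal equivalent exchange can be sketched with Python's standard socket module. The command and reply strings here are invented for illustration.

```python
import socket
import threading

def robot_controller(server_sock):
    # robot side: accept one connection, acknowledge the received command
    conn, _ = server_sock.accept()
    cmd = conn.recv(1024).decode()
    conn.sendall(f"ACK:{cmd}".encode())
    conn.close()

# robot side listens on an OS-assigned local port
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=robot_controller, args=(server,))
t.start()

# PC side sends a (hypothetical) end-effector command and reads the reply
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"MOVE_END_EFFECTOR")
reply = client.recv(1024).decode()
client.close()
t.join()
server.close()
```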

A Research for Interface Based on EMG Pattern Combinations of Commercial Gesture Controller (상용 제스처 컨트롤러의 근전도 패턴 조합에 따른 인터페이스 연구)

  • Kim, Ki-Chang;Kang, Min-Sung;Ji, Chang-Uk;Ha, Ji-Woo;Sun, Dong-Ik;Xue, Gang;Shin, Kyoo-Sik
    • Journal of Engineering Education Research / v.19 no.1 / pp.31-36 / 2016
  • These days, ICT-related products are pouring out due to the development of mobile technology and the spread of smartphones. Among them, wearable devices are in the spotlight with the advent of the hyper-connected society. In this paper, a body-attached wearable device using EMG (electromyography) sensors is studied. EMG sensor research divides into two areas: the medical area and the control device area. This study corresponds to the latter, a method of transmitting a user's manipulation intention to robots, games, or computers through EMG measurement. We used the commercial device MYO, developed by Thalmic Labs in Canada, and matched the EMG of arm muscles to the gesture controller. In the experiments, various arm motions for controlling devices were first defined; we then derived several distinguishable motions through analysis of the EMG signals and substituted the motions for a joystick.
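
Mapping multi-channel EMG activation patterns to discrete gestures, as done with the MYO's electrodes, can be sketched as nearest-centroid matching of a feature vector against stored gesture templates. The four-channel templates and gesture names below are invented (the MYO itself has eight channels), and the paper's actual classification method is not specified here.

```python
def classify(sample, templates):
    # nearest-centroid match between an EMG feature vector (e.g. per-channel RMS)
    # and stored gesture templates
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda g: sq_dist(sample, templates[g]))

# hypothetical per-channel activation templates for three gestures
templates = {
    "fist":      [0.90, 0.80, 0.10, 0.10],
    "wave_left": [0.10, 0.20, 0.90, 0.70],
    "rest":      [0.05, 0.05, 0.05, 0.05],
}
gesture = classify([0.85, 0.75, 0.20, 0.10], templates)
```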

Study on Bilateral Exercise Interface Techniques for Active Rehabilitation of the Upper Limb Hemiplegia (상지 편마비 환자의 능동형 재활운동을 위한 양측성 훈련 인터페이스 기법에 대한 연구)

  • Eom, Su-Hong;Song, Ki-Sun;Jang, Mun-Suck;Lee, Eung-Hyuk
    • Journal of Institute of Control, Robotics and Systems / v.21 no.6 / pp.510-517 / 2015
  • For the self-directed rehabilitation of upper-extremity hemiplegia patients, this paper proposes an interface method enabling bilateral exercises with rehabilitation robots. The method estimates movement information from the unaffected side and projects it onto the affected side, so that the affected side follows the movements of the unaffected side. To estimate the unaffected side's movement, gyro sensor data and acceleration sensor data were fused; to reduce measurement error in the fusion, an HDR filter and a complementary filter were applied. The estimated motion information drives one side of the rehabilitation robot. To validate the proposed method, experimental equipment was designed to resemble the body's joints, and verification was performed by comparing the angles estimated from the inertial sensors with data from encoders attached to the mechanism.
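
The complementary filter mentioned in the abstract blends the gyro's integrated rate (responsive but drifting) with the accelerometer's angle estimate (noisy but drift-free). The blend constant below is a typical assumed value, and the HDR filter stage is omitted from this sketch.

```python
def complementary(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    # integrate the gyro rate, then pull the result toward the
    # accelerometer-derived angle; alpha is an assumed blend constant
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle, true_angle = 0.0, 0.0
for _ in range(100):                 # 1 s of samples at 100 Hz
    true_angle += 10.0 * 0.01        # unaffected arm rotates at 10 deg/s
    angle = complementary(angle, gyro_rate=10.0,
                          accel_angle=true_angle, dt=0.01)
```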

A Study on Development of EEG-Based Password System Fit for Lifecaretainment (라이프케어테인먼트에 적합한 뇌파 기반 패스워드 시스템 개발에 관한 연구)

  • Yang, Gi-Chul
    • Journal of Korea Entertainment Industry Association / v.13 no.8 / pp.525-530 / 2019
  • Electroencephalography (EEG) studies, pursued in clinical research since the discovery of brainwaves, have recently developed into brain-computer interface studies. Currently, research is underway to manipulate robot arms and drones by analyzing brainwaves. However, the resolution and reliability of EEG information are still limited, so various technologies for measuring and interpreting brainwaves more accurately are required, and pioneering new applications for these technologies is also important. In this paper, we propose the development of an EEG-based personal authentication system fit for lifecaretainment. The proposed system secures the resolution and reliability of EEG information by using the electrooculogram (EOG) and electromyogram (EMG) together with EEG.
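
One simple way such an EEG password check could work, offered purely as an assumption since the abstract does not give the matching algorithm, is normalized correlation between a stored brainwave template and a fresh recording; the threshold and sample values below are invented.

```python
def correlation(a, b):
    # Pearson correlation between two equal-length signals
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def authenticate(template, recording, threshold=0.9):
    # accept only recordings highly correlated with the enrolled template
    return correlation(template, recording) >= threshold

template = [0.1, 0.5, 0.9, 0.4, -0.2, 0.3]        # enrolled "password" signal
genuine = [0.12, 0.48, 0.88, 0.42, -0.18, 0.31]   # same user, slight noise
impostor = [0.9, -0.4, 0.1, 0.7, 0.2, -0.5]       # unrelated signal
ok = authenticate(template, genuine)
bad = authenticate(template, impostor)
```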

The Development of Robot and Augmented Reality Based Contents and Instructional Model Supporting Children's Dramatic Play (로봇과 증강현실 기반의 유아 극놀이 콘텐츠 및 교수.학습 모형 개발)

  • Jo, Miheon;Han, Jeonghye;Hyun, Eunja
    • Journal of The Korean Association of Information Education / v.17 no.4 / pp.421-432 / 2013
  • The purpose of this study is to develop contents and an instructional model that support children's dramatic play by integrating robot and augmented reality technology. To support the dramatic play, the robot shows various facial expressions and actions; serves as a narrator and sound manager; supports simultaneous interaction by using its camera to recognize markers and children's motions; and records children's activities as photos and videos that can be used in further activities. The robot also uses a projector to allow children to interact directly with the video object. Augmented reality, in turn, offers a variety of character changes and props, allows various background and foreground effects, enables natural interaction between the contents and children through a realistic interface, and provides opportunities for interaction between actors and audiences. It also offers an experience-based learning environment that induces sensory immersion by allowing children to manipulate or choose the learning situation and experience the results. The instructional model supporting dramatic play consists of four stages: teachers' preparation; introducing and understanding a story; action planning and play; and evaluation and wrap-up. Detailed activities to decide on or carry out are suggested for each stage.