• Title/Summary/Keyword: Intelligent Mobile Robot

Search Results: 455

Motion Control of a Mobile Robot Using Natural Hand Gesture (자연스런 손동작을 이용한 모바일 로봇의 동작제어)

  • Kim, A-Ram;Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.1 / pp.64-70 / 2014
  • In this paper, we propose a method for commanding a mobile robot by recognizing natural human hand gestures. Earlier gesture-based robot control systems relied on a set of pre-arranged gestures, so the commanding motions were unnatural, and users were forced to learn the gesture set, which made the systems inconvenient. To solve this problem, many studies have sought other ways for a machine to recognize hand movement. In this paper, we use a 3D camera to obtain color and depth data, from which the human hand is located and its movement recognized. An HMM is used to classify the observed movement, and the result is transferred to the robot, which then moves in the intended direction.
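
The HMM classification step this abstract describes can be sketched as follows; the gesture names, state counts, and all model parameters below are invented for illustration, not the paper's trained models.

```python
# Score an observation sequence under each gesture's HMM and pick the best.
def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: P(obs | model) for a discrete HMM.
    pi: initial state probs, A: transition matrix, B: emission matrix."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
    return sum(alpha)

def classify(models, obs):
    """Pick the gesture whose HMM assigns the observations the highest likelihood."""
    return max(models, key=lambda name: forward_likelihood(*models[name], obs))

# Two toy 2-state models over 2 observation symbols (0 = "left", 1 = "right").
models = {
    "swipe_left":  ([0.9, 0.1], [[0.8, 0.2], [0.2, 0.8]], [[0.9, 0.1], [0.5, 0.5]]),
    "swipe_right": ([0.9, 0.1], [[0.8, 0.2], [0.2, 0.8]], [[0.1, 0.9], [0.5, 0.5]]),
}
print(classify(models, [0, 0, 0, 1]))  # swipe_left
```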

Autonomous Mobile Robot Control using the Wearable Devices Based on EMG Signal for detecting fire (EMG 신호 기반의 웨어러블 기기를 통한 화재감지 자율 주행 로봇 제어)

  • Kim, Jin-Woo;Lee, Woo-Young;Yu, Je-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.3 / pp.176-181 / 2016
  • In this paper, an autonomous mobile robot control system for fire detection is proposed using a wearable device based on EMG (electromyogram) signals. A Myo armband detects the user's EMG signals, which are sent to a computer over Bluetooth, where the gesture is classified. The robot, named 'uBrain', was implemented to move according to the data received over Bluetooth; 'Move front', 'Turn right', 'Turn left', and 'Stop' are the controllable commands. If the robot cannot receive the Bluetooth signal from the user, or if the user wants to switch from manual to autonomous mode, the robot enters autonomous mode. The robot flashes an LED when its IR sensor detects fire while moving.
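
The manual/autonomous mode switching described here can be sketched as a small dispatcher; the gesture names and the command mapping are assumptions, not the paper's actual Myo gesture labels.

```python
# Map a classified gesture to a drive command, falling back to autonomous
# mode when the Bluetooth link is lost or the user requests it.
COMMANDS = {"fist": "stop", "wave_in": "turn_left",
            "wave_out": "turn_right", "fingers_spread": "move_front"}

def next_action(gesture, bluetooth_ok, force_autonomous=False):
    """Return (mode, command): manual mode maps the gesture to a command;
    a lost link or an explicit request switches to autonomous mode."""
    if force_autonomous or not bluetooth_ok:
        return ("autonomous", None)
    return ("manual", COMMANDS.get(gesture, "stop"))  # unknown gesture: stop

print(next_action("wave_in", True))   # ('manual', 'turn_left')
print(next_action("wave_in", False))  # ('autonomous', None)
```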

Obstacle Modeling for Environment Recognition of Mobile Robots Using Growing Neural Gas Network

  • Kim, Min-Young;Cho, Hyung-Suck;Kim, Jae-Hoon
    • International Journal of Control, Automation, and Systems / v.1 no.1 / pp.134-141 / 2003
  • A major research issue for service robots is an environment recognition system for mobile robot navigation that is robust and efficient across varied environments. In recent years, intelligent autonomous mobile robots have received much attention, both as service robots for assisting people and as industrial robots for replacing human labor. To help people, robots must be able to sense and recognize the three-dimensional space in which they live or work. In this paper, we propose a three-dimensional environment modeling method based on an edge enhancement technique using planar fitting and a neural network technique called the Growing Neural Gas network. Input pre-processing provides a probabilistic density over the network's input data, and the network generates a graph structure that reflects the topology of the input space. Using these methods, the robot's surroundings are autonomously clustered into isolated objects and modeled as polygon patches at a user-selected resolution. The proposed method is tested on environments surrounding the robot through a series of simulations and experiments, and its usefulness and robustness are investigated and discussed in detail.
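
As a rough illustration of the Growing Neural Gas idea the abstract names, the sketch below grows a small set of units over toy 2-D points; it is heavily simplified (no error decay, and new units are inserted as copies of the worst unit rather than between it and its worst neighbor), and all parameters and input points are invented.

```python
import random

def gng(points, max_nodes=8, steps=2000, eps_w=0.1, eps_n=0.01,
        age_max=30, insert_every=100, seed=0):
    """Simplified Growing Neural Gas: units track the input distribution,
    edges link co-winning units, and units are added where error is largest."""
    rng = random.Random(seed)
    nodes = [list(rng.choice(points)) for _ in range(2)]  # unit positions
    error = [0.0, 0.0]                                    # accumulated error
    edges = {}                                            # (i, j) -> age
    for step in range(1, steps + 1):
        x = rng.choice(points)
        order = sorted(range(len(nodes)),
                       key=lambda i: sum((a - b) ** 2 for a, b in zip(nodes[i], x)))
        s1, s2 = order[0], order[1]                       # two nearest units
        error[s1] += sum((a - b) ** 2 for a, b in zip(nodes[s1], x))
        nodes[s1] = [a + eps_w * (b - a) for a, b in zip(nodes[s1], x)]
        edges[(min(s1, s2), max(s1, s2))] = 0             # refresh winner edge
        for e in list(edges):                             # age/move neighbors
            if s1 in e:
                edges[e] += 1
                j = e[0] if e[1] == s1 else e[1]
                nodes[j] = [a + eps_n * (b - a) for a, b in zip(nodes[j], x)]
                if edges[e] > age_max:
                    del edges[e]
        if step % insert_every == 0 and len(nodes) < max_nodes:
            q = max(range(len(nodes)), key=lambda i: error[i])
            nodes.append(list(nodes[q]))                  # split the worst unit
            error.append(error[q] / 2)
            error[q] /= 2
    return nodes

# Two toy clusters; the network should spread its units over both.
points = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (5.0, 5.0), (5.0, 6.0), (6.0, 5.0)]
print(len(gng(points)))  # 8
```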

TWR based Cooperative Localization of Multiple Mobile Robots for Search and Rescue Application (재난 구조용 다중 로봇을 위한 GNSS 음영지역에서의 TWR 기반 협업 측위 기술)

  • Lee, Chang-Eun;Sung, Tae-Kyung
    • The Journal of Korea Robotics Society / v.11 no.3 / pp.127-132 / 2016
  • For a practical mobile robot team, such as one carrying out a search and rescue mission in a disaster area, localization must be guaranteed even where the network infrastructure has been destroyed or a global positioning system (GPS) is unavailable. The proposed architecture localizes robots seamlessly by finding their relative locations as they move from a global outdoor environment to a local indoor one. The proposed scheme uses a cooperative positioning system (CPS) based on the two-way ranging (TWR) technique. In the proposed TWR-based CPS, each non-localized mobile robot acts as a tag and finds its position from bilateral range measurements to all localized mobile robots. The localized robots act as anchors and support localization in GPS-shadow regions such as indoor environments. As a tag localizes itself with the anchors, the anchors' position errors propagate to the tag, so the tag's position error accumulates the anchors' errors. To minimize this error propagation, this paper proposes a new full-mesh CPS scheme that improves position accuracy. The proposed schemes were validated through experiments.
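
The tag/anchor geometry described here can be illustrated with the standard TWR range equation and a least-squares position fix from three anchors; the timing values and anchor positions below are invented, and this is a generic sketch, not the paper's full-mesh scheme.

```python
# TWR: the tag measures round-trip time to an anchor; subtracting the
# anchor's known reply delay and halving gives the one-way distance.
C = 299_792_458.0  # speed of light, m/s

def twr_range(t_round, t_reply):
    """One-way distance from round-trip time minus the anchor's reply delay."""
    return C * (t_round - t_reply) / 2.0

def trilaterate(anchors, ranges):
    """Position fix from ranges to three anchors, by linearizing the circle
    equations (subtract the first from the other two) and solving 2x2."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    a = [[2 * (x2 - x1), 2 * (y2 - y1)], [2 * (x3 - x1), 2 * (y3 - y1)]]
    b = [r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
         r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    x = (b[0] * a[1][1] - b[1] * a[0][1]) / det
    y = (a[0][0] * b[1] - a[1][0] * b[0]) / det
    return x, y

# A tag 30 m from an anchor, and a tag at (3, 4) ranged by three anchors.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
x, y = trilaterate(anchors, [5.0, 65 ** 0.5, 45 ** 0.5])
print(round(x, 3), round(y, 3))  # 3.0 4.0
```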

Utilization of Visual Context for Robust Object Recognition in Intelligent Mobile Robots (지능형 이동 로봇에서 강인 물체 인식을 위한 영상 문맥 정보 활용 기법)

  • Kim, Sung-Ho;Kim, Jun-Sik;Kweon, In-So
    • The Journal of Korea Robotics Society / v.1 no.1 / pp.36-45 / 2006
  • In this paper, we introduce visual contexts, in terms of their types and utilization methods, for robust object recognition by intelligent mobile robots. Visual object recognition is one of the core technologies for intelligent robots, and robust techniques are strongly required because there are many sources of visual variation, such as geometric and photometric changes and noise. For these requirements, we define spatial context, hierarchical context, and temporal context, which can be selected according to the object recognition domain. We also propose a unified framework that can utilize all of these contexts and validate it in a real working environment. Finally, we discuss future research directions for object recognition technologies for intelligent robots.


Fuzzy Logic Controller for a Mobile Robot Navigation (퍼지제어기를 이용한 무인차 항법제어)

  • Chung, Hak-Young;Lee, Jang-Gyu
    • Proceedings of the KIEE Conference / 1991.07a / pp.713-716 / 1991
  • This paper describes a mobile robot navigation methodology designed to carry heavy payloads at high speed for use in an FMS (flexible manufacturing system) without human control. An intelligent control scheme using fuzzy logic is applied to navigation control. It analyzes readings from a multi-sensor system, composed of ultrasonic sensors, infrared sensors, and an odometer, for environment learning, planning, landmark detection, and system control. It is implemented on a physical robot, an AGV (autonomous guided vehicle), which is a two-wheeled indoor robot. The on-board control software is composed of two subsystems: the AGV control subsystem and the sensor control subsystem. The results show that the AGV's navigation is robust and flexible and that real-time control is possible.
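
A fuzzy steering rule of the kind such a controller uses can be sketched in a few lines; the membership shape, rule set, and distances below are illustrative assumptions, not the paper's controller.

```python
# Two rules with weighted-average (centroid-style) defuzzification:
#   IF left is near  THEN steer right (+1)
#   IF right is near THEN steer left  (-1)
def near(d):
    """Membership of 'near': 1 at 0 m, falling linearly to 0 at 2 m."""
    return max(0.0, 1.0 - d / 2.0)

def steer(left_dist, right_dist):
    w_right = near(left_dist)    # firing strength of "steer right"
    w_left = near(right_dist)    # firing strength of "steer left"
    if w_right + w_left == 0.0:
        return 0.0               # nothing near: go straight
    return (w_right - w_left) / (w_right + w_left)

print(steer(0.5, 2.0))  # 1.0 (obstacle close on the left -> steer hard right)
```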


Intelligent System based on Command Fusion and Fuzzy Logic Approaches - Application to mobile robot navigation (명령융합과 퍼지기반의 지능형 시스템-이동로봇주행적용)

  • Jin, Taeseok;Kim, Hyun-Deok
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.5 / pp.1034-1041 / 2014
  • This paper proposes a fuzzy inference model for obstacle avoidance by a mobile robot with an active camera, which intelligently searches for the goal location in unknown environments using command fusion based on situational commands from a vision sensor. Instead of a "physical sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy is based on combining fuzzy rules tuned for both goal approach and obstacle avoidance. We describe experimental results obtained with the proposed method that demonstrate successful navigation using real vision data.
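
The contrast the abstract draws, fusing behavior commands rather than raw sensor data, can be sketched as voting over candidate headings; the heading set, vote values, and weights below are invented for illustration.

```python
# Each behavior votes over a discrete set of headings; the votes are
# combined by weight and the best-scoring heading wins.
HEADINGS = [-45, 0, 45]  # degrees, relative to the current heading

def fuse(vote_sets, weights):
    """Combine per-behavior preference votes and pick the best heading."""
    scores = [sum(w * votes[i] for votes, w in zip(vote_sets, weights))
              for i in range(len(HEADINGS))]
    return HEADINGS[scores.index(max(scores))]

goal_votes     = [0.2, 1.0, 0.4]   # goal-approach: prefers straight ahead
obstacle_votes = [0.6, 0.1, 0.9]   # obstacle-avoidance: straight is blocked
print(fuse([goal_votes, obstacle_votes], [0.5, 0.5]))  # 45
```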

Optimization of parameters in mobile robot navigation using genetic algorithm (유전자 알고리즘을 이용한 이동 로봇 주행 파라미터의 최적화)

  • Kim, Kyung-Hoon;Cho, Hyung-Suck
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1996.10b / pp.1161-1164 / 1996
  • In this paper, a parameter optimization technique for mobile robot navigation is discussed. The authors have previously proposed a navigation algorithm for sonar-equipped mobile robots using fuzzy decision-making theory, in which fuzzy decision making selects the optimal via-point from the membership values of each via-point candidate under the fuzzy navigation goals. However, for a robot to successfully navigate an unknown and cluttered environment, the parameters of each goal's membership function (MF), and thus its shape, must be adjusted, and any change in robot configuration, such as sensor arrangement or sensing range, requires the MFs to be adjusted again. To adjust these parameters intelligently, we adopt a genetic algorithm, which requires no formulation of the problem and is therefore well suited to robot navigation. The genetic algorithm evolves the fittest parameter set through crossover and mutation on its string representation; the fitness of a parameter set is assigned after a simulation run according to travel time, accumulated heading-angle change, and collisions. A series of simulations in several different environments verifies the proposed method, and the results show that optimal parameters can be acquired with it.
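
The evolve-and-evaluate loop described here can be sketched with a bare-bones genetic algorithm; the quadratic toy fitness stands in for the paper's simulation-based score, and all GA settings are invented.

```python
import random

def ga(fitness, dim, pop_size=20, gens=80, mut=0.3, seed=1):
    """Minimal GA: truncation selection with elitism, one-point crossover,
    and Gaussian mutation over real-valued parameter vectors."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                      # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim) if dim > 1 else 0  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < mut:                         # Gaussian mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.5)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy goal: recover parameters near (1, 2); higher fitness is better.
best = ga(lambda p: -((p[0] - 1) ** 2 + (p[1] - 2) ** 2), dim=2)
```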


Motion Planning and Control for Mobile Robot with SOFM

  • Yun, Seok-Min;Choi, Jin-Young
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.1039-1043 / 2005
  • Despite the many significant advances in robot architecture, the basic approaches remain the deliberative and reactive methods. They differ greatly in how they recognize the outer environment and in their inner operating mechanisms, and for this reason have almost opposite characteristics; researchers later integrated the two into hybrid architectures. In such architectures, the reactive module (the low-level motion control module) excels at real-time reaction and sensing of the outer environment, while the deliberative module (the high-level task planning module) is good at planning tasks using world knowledge, reasoning, and intelligent computing. This paper presents a framework of integrated planning and control for mobile robot navigation. Unlike existing hybrid architectures, it learns a topological map from the world map using an MST (minimum spanning tree)-based SOFM (self-organizing feature map) algorithm. The high-level planning module issues simple tasks to the low-level control module, and the low-level control module feeds environment information back to the high-level planning module. This allows a tight integration between the high-level and low-level modules, providing real-time performance and strong adaptability and reactivity to the outer environment and its unforeseen changes. The proposed framework is verified by simulation.
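
The MST step named in the abstract, connecting learned map nodes into a topological graph, can be sketched with Kruskal's algorithm; the node coordinates below are invented for illustration, and the SOFM learning stage is omitted.

```python
# Build a minimum spanning tree over the complete graph of Euclidean
# distances between map nodes (Kruskal's algorithm with union-find).
def mst_edges(nodes):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    edges = sorted((dist2(nodes[i], nodes[j]), i, j)
                   for i in range(len(nodes)) for j in range(i + 1, len(nodes)))
    parent = list(range(len(nodes)))
    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tree = []
    for _, i, j in edges:             # add shortest edges that join components
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

nodes = [(0, 0), (0, 1), (5, 0), (5, 1)]
print(mst_edges(nodes))  # three edges connecting all four nodes
```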


Learning of Emergent Behaviors in Collective Virtual Robots using ANN and Genetic Algorithm

  • Cho, Kyung-Dal
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.3 / pp.327-336 / 2004
  • In a distributed autonomous mobile robot system, each robot (predator or prey) must behave by itself according to its state and environment and, if necessary, cooperate with other robots to carry out a given task. It is therefore essential that each robot have both learning and evolution abilities to adapt to a dynamic environment. This paper proposes a pursuit system based on the artificial-life concept, in which virtual robots emulate the social behaviors of animals and insects and realize group behaviors. Each robot has sensors to perceive other robots in several directions and decides its behavior from the information the sensors provide. A neural network is used as the behavior decision controller: its inputs are determined by the presence of, and distance to, other robots, and its outputs determine the directions in which the robot moves. The network's connection weights are encoded as genes, and the fittest individuals are determined using a genetic algorithm, where the fitness values express how well the group behaviors fit the goal. The validity of the system is verified through simulation, during which we could also observe the robots' emergent behaviors.
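
The gene-to-controller mapping this abstract describes can be sketched as a flat gene string decoded into one-layer network weights; the sensor and direction counts below are assumptions, not the paper's actual network size.

```python
# Decode a flat gene list as a weight matrix mapping sensor inputs
# (other-robot presence/distance) to movement directions, then pick
# the direction with the highest activation.
N_SENSORS, N_DIRS = 4, 3

def decide(genes, sensors):
    """One-layer controller: genes[d * N_SENSORS + s] is the weight from
    sensor s to direction d; returns the index of the chosen direction."""
    acts = [sum(genes[d * N_SENSORS + s] * sensors[s] for s in range(N_SENSORS))
            for d in range(N_DIRS)]
    return acts.index(max(acts))

# A genome whose only nonzero weight links sensor 0 to direction 2.
genes = [0.0] * (N_SENSORS * N_DIRS)
genes[2 * N_SENSORS + 0] = 1.0
print(decide(genes, [1.0, 0.0, 0.0, 0.0]))  # 2
```

In the GA loop the abstract describes, `genes` would be an individual in the population, perturbed by crossover and mutation and scored by how well the resulting group behavior fits the goal.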