• Title/Summary/Keyword: Mobile robot recognition


Development of Joystick & Speech Recognition Moving Machine Control System (조이스틱 및 음성인식 겸용 이동기제어시스템 개발)

  • Lee, Sang-Bae;Kang, Sung-In
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.1
    • /
    • pp.52-57
    • /
    • 2007
  • This paper presents the design of an intelligent moving machine control system using real-time speech recognition. The proposed system is composed of four separate modules: a main control module, a speech recognition module, a servo motor driving module, and a sensor module. In the main control module, built around an 80C196KC microprocessor, fuzzy logic, a branch of artificial intelligence, is applied to the proposed intelligent control system. To compensate for the nonlinear characteristics that depend on the user's weight and a variable environment, encoders attached to the servo motors are used for feedback control. The proposed system is tested using nine command words for control of the mobile robot, and its performance under both voice and joystick commands is evaluated.

Emergency Situation Detection using Images from Surveillance Camera and Mobile Robot Tracking System (감시카메라 영상기반 응급상황 탐지 및 이동로봇 추적 시스템)

  • Han, Tae-Woo;Seo, Yong-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.9 no.5
    • /
    • pp.101-107
    • /
    • 2009
  • In this paper, we describe a method for detecting emergency situations using images from surveillance cameras and propose a mobile robot tracking system for the detailed examination of such situations. We track several persons and recognize their actions by analyzing image sequences acquired from fixed cameras on all sides of a building. When an emergency situation is detected, a mobile robot moves to and closely examines the place where the emergency occurred. To recognize the actions of several persons from surveillance camera image sequences, we track and manage a list of regions regarded as human appearances. Regions of interest are segmented from the background using a MOG (Mixture of Gaussians) model and continuously tracked using an appearance model in a single image. We then construct an MHI (Motion History Image) for each tracked person from the silhouette information of the region blobs and model their actions. The emergency situation is finally detected by feeding this information to a neural network. We also implement mobile robot tracking using the distance between the person and the mobile robot.
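The MHI construction mentioned in this abstract follows a standard recipe: silhouette pixels are stamped with a maximum timestamp and all other pixels decay toward zero, so recent motion appears brighter than old motion. A minimal sketch of that update step (the function name and decay scheme are illustrative, not taken from the paper):

```python
import numpy as np

def update_mhi(mhi, silhouette, tau=30):
    """Update a Motion History Image: pixels inside the current
    silhouette are set to tau, all other pixels decay by one step
    (clamped at zero), so brighter values mean more recent motion."""
    return np.where(silhouette, float(tau), np.maximum(mhi - 1.0, 0.0))

# One frame: a silhouette in the top-left corner of a 2x2 image.
mhi = np.zeros((2, 2))
sil = np.array([[True, False], [False, False]])
mhi = update_mhi(mhi, sil, tau=5)  # top-left pixel stamped at 5
```

In the paper's pipeline, features derived from such an MHI (per tracked person) would then be the input to the neural network classifier.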


Grid Map Building through Neighborhood Recognition Factor of Sonar Data (초음파 데이터의 형상 인지 지수를 이용한 확률 격자 지도의 작성)

  • Lee, Se-Jin;Park, Byung-Jae;Lim, Jong-Hwan;Chung, Wan-Kyun;Cho, Dong-Woo
    • The Journal of Korea Robotics Society
    • /
    • v.2 no.3
    • /
    • pp.227-233
    • /
    • 2007
  • Representing an environment as probabilistic grids is an effective way to capture its outlines in mobile robotics. The outlines of an environment can be expressed faithfully with probabilistic grids, especially when sonar sensors are used to build the environment map. However, sonar difficulties such as the specular reflection phenomenon must be overcome to build a grid map from sonar observations. In this paper, the NRF (Neighborhood Recognition Factor) was developed for building a grid map in which the effect of specular reflection is minimized. The reproduction rate of the grid map built using the NRF was also analyzed with respect to a ground-truth map. An experiment was conducted in a home environment to verify the proposed technique.
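The NRF weighting itself is specific to this paper, but the probabilistic grid it feeds is conventionally maintained as a log-odds occupancy grid, where each sonar observation adds a log-likelihood ratio to a cell. A minimal sketch of that standard per-cell update (function and parameter names are illustrative):

```python
import math

def update_cell(log_odds, p_occupied):
    """Fold one sonar observation into a grid cell.

    log_odds    -- the cell's accumulated log-odds of occupancy
    p_occupied  -- probability of occupancy implied by this reading
    Returns the updated log-odds and the corresponding probability.
    """
    log_odds += math.log(p_occupied / (1.0 - p_occupied))
    prob = 1.0 - 1.0 / (1.0 + math.exp(log_odds))
    return log_odds, prob

# A single 0.7-confidence "occupied" reading on a fresh cell (log-odds 0)
lo, p = update_cell(0.0, 0.7)
```

Repeated consistent readings drive the probability toward 0 or 1, which is why a factor such as the NRF, which down-weights readings likely to be specular reflections, improves the resulting map.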


Position estimation and navigation control of mobile robot using mono vision (단일 카메라를 이용한 이동 로봇의 위치 추정과 주행 제어)

  • Lee, Ki-Chul;Lee, Sung-Ryul;Park, Min-Yong;Kim, Hyun-Tai;Kho, Jae-Won
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.5 no.5
    • /
    • pp.529-539
    • /
    • 1999
  • This paper suggests a new image analysis method and an indoor navigation control algorithm for mobile robots using a mono vision system. To reduce the positional uncertainty that accumulates as the robot travels around the workspace, we propose a new visual landmark recognition algorithm with a 2-D graph world model that describes the workspace as only a rough plane figure. The suggested algorithm was implemented on our mobile robot and tested in a real corridor using an extended Kalman filter. Its validity and performance were verified by showing that, over the resulting navigation trajectory, the trajectory deviation error stayed under 0.075 m and the position estimation error under 0.05 m.
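The extended Kalman filter used here fuses each recognized landmark observation with the robot's predicted pose via the standard correction step (for the EKF, H is the Jacobian of the measurement model at the current estimate). A minimal linear sketch of that correction, not the paper's specific landmark model:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman correction step: fuse measurement z (covariance R)
    with the predicted state x (covariance P) via measurement matrix H."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # corrected state
    P = (np.eye(len(x)) - K @ H) @ P     # corrected covariance
    return x, P

# 1-D example: prior position 0 (variance 1), landmark measurement 2
# (variance 1) -> the estimate moves halfway and uncertainty halves.
x, P = kalman_update(np.array([0.0]), np.array([[1.0]]),
                     np.array([2.0]), np.array([[1.0]]), np.array([[1.0]]))
```

Each landmark sighting in the corridor shrinks the covariance this way, which is how the reported 0.05 m estimation error can be maintained despite odometry drift.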


A Fuzzy Control of Autonomous Mobile Robot for Obstacle Avoidance (장애물 회피를 위한 자율이동로봇의 퍼지제어)

  • Chae Moon-Seok;Jung Tae-Young;Kang Suk-Bum;Yang Tae-Kyu
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.9
    • /
    • pp.1718-1726
    • /
    • 2006
  • In this paper, we propose a fuzzy controller and an algorithm for efficient obstacle avoidance in unknown environments. Ultrasonic sensors are used to recognize the position of and distance to obstacles, and the fuzzy controller regulates the angular velocities of the left and right wheels. Fuzzification uses the singleton method, with forty-nine control rules per wheel. Fuzzy inference uses simplified Mamdani reasoning, and defuzzification uses the SCOG (Simplified Center Of Gravity) method. Computer simulations based on a mobile robot model were performed to evaluate the fuzzy controller and the practical applicability of the proposed avoidance algorithm. As a result, the mobile robot reached its target precisely while avoiding obstacles efficiently.
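The defuzzification scheme named in this abstract, SCOG with singleton outputs, reduces to a firing-strength-weighted average of each rule's output value. A minimal sketch with a triangular membership function for the antecedents (the membership shapes and rule values are illustrative, not the paper's 49-rule tables):

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def scog(rule_weights, singleton_outputs):
    """Simplified Center Of Gravity: the crisp output is the
    firing-strength-weighted mean of the rules' singleton values."""
    num = sum(w * y for w, y in zip(rule_weights, singleton_outputs))
    den = sum(rule_weights)
    return num / den if den > 0.0 else 0.0

# Two rules firing equally at singleton outputs 0.0 and 2.0 rad/s
# -> crisp wheel angular velocity of 1.0 rad/s.
omega = scog([0.5, 0.5], [0.0, 2.0])
```

In the paper's setup, the weights would come from the memberships of the sonar distance inputs across the 49 rules for each wheel.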

A study on stand-alone autonomous mobile robot using mono camera (단일 카메라를 사용한 독립형 자율이동로봇 개발)

  • 정성보;이경복;장동식
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.1
    • /
    • pp.56-63
    • /
    • 2003
  • This paper introduces a vision-based autonomous mini mobile robot as an approach to producing a real autonomous vehicle. Previous autonomous vehicles depended on a PC because of the complexity of designing the hardware, the difficulty of installation, and the heavy computation involved. In this paper, we present an autonomous mobile robot system with accurate steering, quick movement at high speed, and intelligent recognition as a stand-alone system using a mono camera. The proposed system was implemented on a mini track 25~30 cm wide and about 200 cm long. The test robot runs at an average of 32.9 km/h on a straight lane and an average of 22.3 km/h on a curved lane with a 30~40 m radius. This system provides a model of an autonomous mobile robot with a lane recognition algorithm, intended to make real autonomous vehicles easier to build.

  • PDF

Precise Localization for Mobile Robot Based on Cell-coded Landmarks on the Ceiling (천정 부착 셀코드 랜드마크에 기반한 이동 로봇의 정밀 위치 계산)

  • Chen, Hongxin;Wang, Shi;Yang, Chang-Ju;Lee, Jun-Ho;Kim, Hyong-Suk
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.46 no.2
    • /
    • pp.75-83
    • /
    • 2009
  • This paper presents a new mobile robot localization method for indoor robot navigation. The method uses color-coded landmarks on the ceiling, observed by a camera installed on the robot facing upward. The proposed "cell-coded map", which uses only nine kinds of color-coded landmarks distributed in a particular way, reduces the complexity of the landmark structure, and the technique is applicable to indoor spaces of unlimited size. The structure of the landmarks and the recognition method are introduced, and two rigid rules are used to ensure the correctness of the recognition. Experimental results show that the method is effective.

Simultaneous and Coded Driving System of Ultrasonic Sensor Array for Object Recognition in Autonomous Mobile Robots

  • Kim, Ch-S.;Choi, B.J.;Park, S.H.;Lee, Y.J.;Lee, S.R.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.2519-2523
    • /
    • 2003
  • Ultrasonic sensors are widely used in mobile robot applications to recognize external environments, because they are cheap, easy to use, and robust under varying lighting conditions. In most cases, a single ultrasonic sensor is used to measure the distance to an object based on time-of-flight (TOF) information, whereas multiple sensors are used to recognize the shape of an object, such as a corner, plane, or edge. However, the conventional sequential driving technique involves a long measurement time. This problem can be resolved by pulse-coding the ultrasonic signals, which allows multiple sensors to be fired simultaneously and adjacent objects to be distinguished. Accordingly, the current paper presents a new simultaneous coded driving system for an ultrasonic sensor array for object recognition in autonomous mobile robots. The proposed system is designed and implemented using a DSP and an FPGA. A micro-controller board was built around the DSP, Polaroid 6500 ranging modules were modified to fire the coded signals, and a 5-channel coded signal generating board was built with the FPGA. To verify the proposed method, experiments were conducted in an environment with overlapping signals, and the flight distances for each sensor were recovered from the overlapping received signals using correlation and conversion to a bipolar PCM-NRZ signal.
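The key idea, separating simultaneously fired sensors by correlating the received mixture against each sensor's own code, can be sketched in a few lines: the lag of the correlation peak gives that sensor's time of flight. The sample values and rates below are illustrative, not the paper's hardware parameters:

```python
import numpy as np

def tof_from_correlation(received, code, sample_rate, speed_of_sound=343.0):
    """Find one sensor's code inside the overlapped received signal by
    cross-correlation; the lag of the correlation peak gives the time
    of flight, and half the round-trip range gives the distance."""
    corr = np.correlate(received, code, mode="valid")
    lag = int(np.argmax(corr))
    tof = lag / sample_rate
    distance = speed_of_sound * tof / 2.0  # round trip -> one way
    return tof, distance

# Toy example: a 5-chip bipolar code echoed back with a 40-sample delay
# at a 1 kHz sample rate -> TOF of 0.04 s.
code = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
received = np.zeros(100)
received[40:45] = code
tof, dist = tof_from_correlation(received, code, sample_rate=1000.0)
```

With codes chosen to have low cross-correlation, the same received mixture can be correlated against each sensor's code in turn, which is what makes simultaneous firing workable.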


Getting On and Off an Elevator Safely for a Mobile Robot Using RGB-D Sensors (RGB-D 센서를 이용한 이동로봇의 안전한 엘리베이터 승하차)

  • Kim, Jihwan;Jung, Minkuk;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.15 no.1
    • /
    • pp.55-61
    • /
    • 2020
  • Getting on and off an elevator is one of the most important parts of multi-floor navigation for a mobile robot. In this study, we propose a method for recognizing the pose of elevator doors, planning a safe path, and estimating the robot's motion using RGB-D sensors, so that the robot can get on and off the elevator safely. The accurate pose of the elevator doors is recognized using a particle filter algorithm. After the elevator door opens, the robot builds an occupancy grid map that includes the interior of the elevator and generates a safe path that avoids collisions with obstacles inside. While getting on and off, the robot applies an optical flow algorithm to the floor image to detect the state in which it cannot move because of the elevator door sill. Results from various experiments show that the proposed method enables the robot to get on and off the elevator safely.
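The door-sill check described here amounts to comparing commanded motion against observed floor-image motion: if the robot commands forward velocity but the optical flow of the floor is near zero, the wheels are likely caught on the sill. A minimal sketch of that decision rule (the function name and threshold are assumptions, not taken from the paper):

```python
import numpy as np

def is_stuck(flow_vectors, commanded_speed, flow_threshold=0.5):
    """Flag the robot as stuck when it commands forward motion but the
    floor-image optical flow (one 2-D vector per tracked point, in
    pixels/frame) shows almost no displacement."""
    mean_flow = float(np.mean(np.linalg.norm(flow_vectors, axis=1)))
    return commanded_speed > 0.0 and mean_flow < flow_threshold

# Commanding 0.3 m/s but the floor barely moves in the image -> stuck.
stuck = is_stuck(np.array([[0.1, 0.0], [0.0, 0.1]]), commanded_speed=0.3)
```

A production version would debounce this over several frames before triggering recovery behavior.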

Implementation of a sensor fusion system for autonomous guided robot navigation in outdoor environments (실외 자율 로봇 주행을 위한 센서 퓨전 시스템 구현)

  • Lee, Seung-H.;Lee, Heon-C.;Lee, Beom-H.
    • Journal of Sensor Science and Technology
    • /
    • v.19 no.3
    • /
    • pp.246-257
    • /
    • 2010
  • Autonomous guided robot navigation, which consists of following unknown paths and avoiding unknown obstacles, is a fundamental capability for unmanned robots in outdoor environments. Following an unknown path requires techniques such as path recognition, path planning, and robot pose estimation. In this paper, we propose a novel sensor fusion system for autonomous guided robot navigation in outdoor environments. The proposed system consists of three monocular cameras and an array of nine infrared range sensors. The two cameras mounted on the robot's right and left sides are used to recognize unknown paths and estimate the robot's relative pose on these paths through a Bayesian sensor fusion method, while the camera mounted at the front of the robot is used to recognize abrupt curves and unknown obstacles. The infrared range sensor array improves the robustness of obstacle avoidance, and the forward camera and the infrared array are fused through a rule-based method for this purpose. Experiments in outdoor environments show that a mobile robot with the proposed sensor fusion system successfully performed real-time autonomous guided navigation.
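When two cameras each produce an independent estimate of the same quantity (here, the robot's pose relative to the path), the textbook Bayesian fusion of two Gaussian estimates is an inverse-variance weighted average; the paper's specific fusion model is not given in the abstract, so this is only the generic form:

```python
def fuse_estimates(x1, var1, x2, var2):
    """Bayesian fusion of two independent Gaussian estimates of the
    same scalar (e.g. lateral offset from the path centerline):
    inverse-variance weighted average, with reduced variance."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    x = var * (x1 / var1 + x2 / var2)
    return x, var

# Left camera says 1.0 m, right camera says 3.0 m, equal confidence
# -> fused estimate 2.0 m with half the variance of either input.
offset, variance = fuse_estimates(1.0, 1.0, 3.0, 1.0)
```

The fused variance is always smaller than either input variance, which is the quantitative sense in which using both side cameras improves pose estimation.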