• Title/Abstract/Keyword: Robot localization


A Path tracking algorithm and a VRML image overlay method (VRML과 영상오버레이를 이용한 로봇의 경로추적)

  • Sohn, Eun-Ho;Zhang, Yuanliang;Kim, Young-Chul;Chong, Kil-To
    • Proceedings of the IEEK Conference
    • /
    • 2006.06a
    • /
    • pp.907-908
    • /
    • 2006
  • We describe a method for localizing a mobile robot in its working environment using a vision system and Virtual Reality Modeling Language (VRML). The robot identifies landmarks in the environment using image processing and neural network pattern matching techniques, and then performs self-positioning with a vision system based on a well-known localization algorithm. After the self-positioning procedure, the 2-D camera scene is overlaid with the VRML scene. This paper describes how to realize the self-positioning and shows the overlap between the 2-D and VRML scenes. The method successfully traces the robot's path.


Coordinate Estimation of Mobile Robot Using Optical Mouse Sensors (광 마우스 센서를 이용한 이동로봇 좌표추정)

  • Park, Sang-Hyung;Yi, Soo-Yeong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.9
    • /
    • pp.716-722
    • /
    • 2016
  • Coordinate estimation is an essential function for autonomous navigation of a mobile robot. The optical mouse sensor is convenient and cost-effective for the coordinate estimation problem, and it makes it possible to overcome the position estimation error caused by slip and by model mismatch in the robot's motion equation. One simple method of position estimation with the optical mouse sensor is to integrate the sensor's velocity data over time. However, the unavoidable noise in the sensor data may deteriorate the position estimate when simple integration is used. In general, a mobile robot has ready-to-use motion information from the encoders of its driving motors. By combining the velocity data from the optical mouse sensor with this motion information, it is possible to improve the coordinate estimation performance. In this paper, a coordinate estimation algorithm for an autonomous mobile robot is presented based on the well-known Kalman filter, which is useful for combining different types of sensors. Computer simulation results show the performance of the proposed localization algorithm for several types of trajectories in comparison with the simple integration method.
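A minimal sketch of the sensor fusion the abstract describes, as a scalar Kalman-style update that blends each encoder displacement with the optical-mouse displacement (the variances and the 10% slip figure below are illustrative assumptions, not values from the paper):

```python
def fuse_displacement(d_enc, d_mouse, var_enc, var_mouse):
    """Minimum-variance (scalar Kalman) fusion of two estimates of the
    same displacement: encoder odometry and the optical-mouse reading."""
    k = var_enc / (var_enc + var_mouse)        # gain toward the mouse reading
    d = d_enc + k * (d_mouse - d_enc)
    var = var_enc * var_mouse / (var_enc + var_mouse)
    return d, var

def track(steps, var_enc=0.1, var_mouse=0.01):
    """Accumulate fused displacements into a 1-D position estimate."""
    x, p = 0.0, 0.0
    for d_enc, d_mouse in steps:
        d, var = fuse_displacement(d_enc, d_mouse, var_enc, var_mouse)
        x += d
        p += var
    return x, p

# Encoder over-reads 10% due to wheel slip; the mouse tracks the floor directly.
steps = [(1.1, 1.0)] * 10
x, p = track(steps)
```

Because the mouse variance is lower here, the fused estimate leans toward the mouse reading, so the slip bias that would accumulate under encoder-only dead reckoning is largely suppressed.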

Real-Time Mapping of Mobile Robot on Stereo Vision (스테레오 비전 기반 이동 로봇의 실시간 지도 작성 기법)

  • Han, Cheol-Hun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.1
    • /
    • pp.60-65
    • /
    • 2010
  • This paper presents 2D mapping, feature detection, and feature matching for reconstructing the surrounding environment with a stereo camera mounted on a mobile robot. For fast real-time processing, image features are extracted using edge detection, and stereo matching is performed with the Sum of Absolute Differences (SAD), in which correspondences are obtained through correlation scores. The location of the mobile robot is estimated by a Kalman filter using ZigBee beacons and encoders mounted on the robot. In addition, a gyroscope merged with a compass makes it possible to generate the map while the mobile robot is moving. This Simultaneous Localization and Mapping (SLAM) technology can be applied efficiently to intelligent robots operating in human environments.
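The SAD block-matching step can be illustrated on a single scanline; the window size, disparity range, and toy intensity values below are arbitrary choices for the sketch, not parameters from the paper:

```python
def sad(block_a, block_b):
    """Sum of Absolute Differences between two intensity blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def disparity(left_row, right_row, x, win=3, max_disp=8):
    """Find the disparity at column x of the left scanline by minimizing
    SAD over candidate shifts into the right scanline."""
    ref = left_row[x:x + win]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):
        if x - d < 0:
            break
        cost = sad(ref, right_row[x - d:x - d + win])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

# A small pattern shifted left by 2 pixels between the two views.
left_row  = [0, 0, 0, 9, 8, 7, 0, 0, 0, 0]
right_row = [0, 9, 8, 7, 0, 0, 0, 0, 0, 0]
d = disparity(left_row, right_row, 3)
```

The recovered disparity then maps to depth through the usual triangulation relation depth = focal_length × baseline / disparity.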

Reduction in Sample Size Using Topological Information for Monte Carlo Localization

  • Yang, Ju-Ho;Song, Jae-Bok;Chung, Woo-Jin
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.901-905
    • /
    • 2005
  • Monte Carlo localization is known to be one of the most reliable methods for pose estimation of a mobile robot. Much research has been done to improve performance of MCL so far. Although MCL is capable of estimating the robot pose even for a completely unknown initial pose in the known environment, it takes considerable time to give an initial estimate because the number of random samples is usually very large especially for a large-scale environment. For practical implementation of the MCL, therefore, a reduction in sample size is desirable. This paper presents a novel approach to reducing the number of samples used in the particle filter for efficient implementation of MCL. To this end, the topological information generated off-line using a thinning method, which is commonly used in image processing, is employed. The topological map is first created from the given grid map for the environment. The robot scans the local environment using a laser rangefinder and generates a local topological map. The robot then navigates only on this local topological edge, which is likely to be the same as the one obtained off-line from the given grid map. Random samples are drawn near the off-line topological edge instead of being taken with uniform distribution, since the robot traverses along the edge. In this way, the sample size required for MCL can be drastically reduced, thus leading to reduced initial operation time. Experimental results using the proposed method show that the number of samples can be reduced considerably, and the time required for robot pose estimation can also be substantially decreased.
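The key sampling idea, drawing particles near the topological edges rather than uniformly over the whole map, can be sketched as below; the edge geometry, scatter radius, and particle count are illustrative assumptions:

```python
import math
import random

def samples_near_edges(edges, n, sigma=0.2):
    """Draw MCL particles (x, y, theta) clustered around topological edges
    instead of uniformly over the map.
    edges: list of ((x1, y1), (x2, y2)) segments from the thinned grid map."""
    particles = []
    for _ in range(n):
        (x1, y1), (x2, y2) = random.choice(edges)
        t = random.random()                          # point along the segment
        x = x1 + t * (x2 - x1) + random.gauss(0, sigma)
        y = y1 + t * (y2 - y1) + random.gauss(0, sigma)
        theta = random.uniform(-math.pi, math.pi)    # heading still unknown
        particles.append((x, y, theta))
    return particles

random.seed(0)
edges = [((0.0, 0.0), (10.0, 0.0))]                  # a single corridor edge
particles = samples_near_edges(edges, 200)
```

Since every particle starts near a traversable edge, far fewer samples are wasted in unreachable free space, which is what allows the reduced sample size the paper reports.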


An Approach for Localization Around Indoor Corridors Based on Visual Attention Model (시각주의 모델을 적용한 실내 복도에서의 위치인식 기법)

  • Yoon, Kook-Yeol;Choi, Sun-Wook;Lee, Chong-Ho
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.2
    • /
    • pp.93-101
    • /
    • 2011
  • For a mobile robot, recognizing its current location is essential for autonomous navigation. In particular, loop-closure detection, in which the robot recognizes a location it has visited before, is a key problem in localization. A considerable amount of research has been conducted on appearance-based loop-closure detection and localization, because a vision sensor has an advantage in terms of cost and allows various approaches to this problem. In scenes that consist of repeated structures, as in corridors, perceptual aliasing, in which two different locations are recognized as the same, occurs frequently. In this paper, we propose an improved method to recognize locations in scenes that have similar structures. We extract salient regions from images using a visual attention model and calculate weights using distinctive features in the salient regions. This makes it possible to emphasize unique features in the scene and thus to distinguish similar-looking locations. In corridor recognition experiments, the proposed method showed improved recognition performance: 78.2% accuracy for single-floor corridor recognition and 71.5% for multi-floor corridor recognition.
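The saliency-weighting idea can be sketched as a weighted matching score in which distinctive features from salient regions outweigh the repeated corridor structure; the feature names and weight values below are invented for illustration and do not come from the paper:

```python
def location_score(query, db_location, weights):
    """Sum the weights of query features also present at the candidate
    location, normalized by total weight; salient features dominate."""
    total = sum(weights.values())
    hit = sum(w for f, w in weights.items() if f in query and f in db_location)
    return hit / total

# Repeated corridor structure gets low weight; a distinctive object gets high weight.
weights = {"door": 0.1, "ceiling_light": 0.1, "fire_poster": 0.8}
query = {"door", "ceiling_light", "fire_poster"}
loc_a = {"door", "ceiling_light", "fire_poster"}   # the true location
loc_b = {"door", "ceiling_light"}                  # a look-alike corridor
score_a = location_score(query, loc_a, weights)
score_b = location_score(query, loc_b, weights)
```

With unweighted matching the two corridors would score 1.0 and 0.67; the weighting widens the gap, which is how perceptual aliasing is suppressed.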

Advanced Relative Localization Algorithm Robust to Systematic Odometry Errors (주행거리계의 기구적 오차에 강인한 개선된 상대 위치추정 알고리즘)

  • Ra, Won-Sang;Whang, Ick-Ho;Lee, Hye-Jin;Park, Jin-Bae;Yoon, Tae-Sung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.9
    • /
    • pp.931-938
    • /
    • 2008
  • In this paper, a novel localization algorithm robust to unmodeled systematic odometry errors is proposed for low-cost non-holonomic mobile robots. It is well known that most pose estimators using odometry measurements cannot avoid performance degradation due to the dead-reckoning of systematic odometry errors. As a remedy for this problem, we try to reflect the wheelbase error in the robot motion model as a parametric uncertainty. Applying Krein space estimation theory to the discrete-time uncertain nonlinear motion model results in an extended robust Kalman filter. This idea comes from the fact that systematic odometry errors can be regarded as parametric uncertainties satisfying sum quadratic constraints (SQCs). The advantage of the proposed methodology is that it has the same recursive structure as the conventional extended Kalman filter, which makes the scheme suitable for real-time applications. Moreover, it guarantees satisfactory localization performance even in the presence of wheelbase uncertainty, which is hard to model or estimate but often arises in real driving environments. Computer simulations are given to demonstrate the robustness of the suggested localization algorithm.
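The systematic wheelbase error the paper targets enters through the standard differential-drive odometry update. The sketch below (with made-up wheel displacements and a hypothetical 10% wheelbase mismatch) shows how the error biases the heading estimate while leaving straight-line motion untouched; it illustrates the error model only, not the Krein-space filter itself:

```python
import math

def odometry_step(pose, d_left, d_right, wheelbase):
    """One differential-drive dead-reckoning update of (x, y, theta)."""
    x, y, th = pose
    d = (d_left + d_right) / 2.0
    dth = (d_right - d_left) / wheelbase     # a wrong wheelbase biases heading
    return (x + d * math.cos(th + dth / 2.0),
            y + d * math.sin(th + dth / 2.0),
            th + dth)

true_b, wrong_b = 0.5, 0.45                  # assumed 10% wheelbase mismatch
pose_t = pose_w = (0.0, 0.0, 0.0)
for _ in range(10):                          # drive a gentle arc
    pose_t = odometry_step(pose_t, 0.09, 0.11, true_b)
    pose_w = odometry_step(pose_w, 0.09, 0.11, wrong_b)
```

The heading error grows linearly with distance travelled along the arc, which is exactly the kind of dead-reckoned systematic drift a robust filter must bound.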

Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment (구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식)

  • Kim, Donghoon;Lee, Donghwa;Myung, Hyun;Choi, Hyun-Taek
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.8
    • /
    • pp.667-675
    • /
    • 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among the available sensors, a vision sensor is very useful for short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, weighted correlation coefficient-based template matching and color-based image segmentation are proposed to improve on the conventional approach. In the localization step, to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used in the prediction and update phases, respectively. The performance of the proposed technique is evaluated in experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
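A minimal sketch of a weighted correlation coefficient for template matching; the per-pixel weighting scheme here (e.g. emphasizing the landmark interior over its noisy boundary) is an assumption, since the paper's exact weights are not reproduced:

```python
def weighted_ncc(t, img, w):
    """Weighted normalized correlation coefficient between a template patch
    t and an image patch img, with per-pixel weights w."""
    sw = sum(w)
    mt = sum(wi * ti for wi, ti in zip(w, t)) / sw       # weighted means
    mi = sum(wi * xi for wi, xi in zip(w, img)) / sw
    num = sum(wi * (ti - mt) * (xi - mi) for wi, ti, xi in zip(w, t, img))
    dt = sum(wi * (ti - mt) ** 2 for wi, ti in zip(w, t)) ** 0.5
    di = sum(wi * (xi - mi) ** 2 for wi, xi in zip(w, img)) ** 0.5
    return num / (dt * di) if dt and di else 0.0

# Linearly related patches score +1; anti-correlated patches score -1.
same = weighted_ncc([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0], [1.0] * 4)
opp = weighted_ncc([1.0, 2.0], [2.0, 1.0], [1.0, 1.0])
```

Because the mean and normalization are weighted too, the score stays invariant to brightness and contrast changes while letting reliable pixels dominate, which matters under the low-visibility conditions the abstract describes.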

Localization using Ego Motion based on Fisheye Warping Image (어안 워핑 이미지 기반의 Ego motion을 이용한 위치 인식 알고리즘)

  • Choi, Yun Won;Choi, Kyung Sik;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.1
    • /
    • pp.70-77
    • /
    • 2014
  • This paper proposes a novel localization algorithm based on ego-motion, using Lucas-Kanade optical flow on warped images obtained through fish-eye lenses mounted on the robot. An omnidirectional image sensor is desirable for real-time view-based recognition because all the information around the robot can be obtained simultaneously. Preprocessing (distortion correction, image merging, etc.) of the omnidirectional image, whether obtained by a camera with a reflecting mirror or by stitching multiple camera images, is essential, because it is difficult to extract information from the original image. The core of the proposed algorithm may be summarized as follows. First, we capture instantaneous 360° panoramic images around the robot through fish-eye lenses mounted facing downward. Second, we extract motion vectors in the preprocessed image using Lucas-Kanade optical flow. Third, we estimate the robot's position and angle with an ego-motion method that uses the vector directions and the vanishing point obtained by RANSAC. We confirmed the reliability of the proposed localization algorithm by comparing its experimental results (position and angle) with measurements from a global vision localization system.
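The ego-motion recovery from flow vectors can be sketched as a least-squares fit of a planar rigid motion (vx, vy, ω) to the flow field, under the model u = vx - ω·y, v = vy + ω·x for a downward-facing view. This is a simplification of the paper's vanishing-point/RANSAC pipeline, and the points and flows below are synthetic:

```python
def gauss_solve(A, b):
    """Solve the linear system A x = b by Gaussian elimination with pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda k: abs(M[k][c]))
        M[c], M[p] = M[p], M[c]
        for k in range(c + 1, n):
            f = M[k][c] / M[c][c]
            for j in range(c, n + 1):
                M[k][j] -= f * M[c][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def estimate_ego_motion(points, flows):
    """Least-squares (vx, vy, omega) from flow vectors via normal equations:
    each point contributes rows u = vx - omega*y and v = vy + omega*x."""
    S = [[0.0] * 3 for _ in range(3)]
    r = [0.0] * 3
    for (x, y), (u, v) in zip(points, flows):
        for a, b in (([1.0, 0.0, -y], u), ([0.0, 1.0, x], v)):
            for i in range(3):
                r[i] += a[i] * b
                for j in range(3):
                    S[i][j] += a[i] * a[j]
    return gauss_solve(S, r)

# Synthetic check: generate flows from a known motion, then recover it.
pts = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0), (2.0, 2.0)]
vx, vy, om = 1.0, 0.5, 0.1
flows = [(vx - om * y, vy + om * x) for x, y in pts]
est = estimate_ego_motion(pts, flows)
```

In practice the fit would be wrapped in RANSAC, as in the paper, so that flow outliers from moving objects or matching failures do not corrupt the estimate.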