• Title/Summary/Keyword: Vision based localization

Search results: 134

Vision-based Autonomous Semantic Map Building and Robot Localization (영상 기반 자율적인 Semantic Map 제작과 로봇 위치 지정)

  • Lim, Joung-Hoon; Jeong, Seung-Do; Suh, Il-Hong; Choi, Byung-Uk
    • Proceedings of the KIEE Conference, 2005.10b, pp.86-88, 2005
  • An autonomous semantic-map building method is proposed, with the robot localized within the semantic map. Our semantic map is organized around objects represented as SIFT features, and vision-based relative localization is employed as the process model of an extended Kalman filter. Thus, we expect robust SLAM performance even under poor conditions in which localization cannot be achieved by classical odometry-based SLAM.
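The extended Kalman filter loop described in this abstract is standard; the following minimal Python sketch shows a predict/update cycle in which a vision-based relative-motion estimate replaces odometry as the process input. The state layout, motion model, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ekf_predict(x, P, u, Q):
    """Propagate pose x = [px, py, theta] with a relative motion
    u = [dx, dy, dtheta] estimated from vision instead of odometry."""
    px, py, th = x
    dx, dy, dth = u
    # Compose the relative motion (robot frame) into the world frame.
    x_pred = np.array([px + dx * np.cos(th) - dy * np.sin(th),
                       py + dx * np.sin(th) + dy * np.cos(th),
                       th + dth])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1, 0, -dx * np.sin(th) - dy * np.cos(th)],
                  [0, 1,  dx * np.cos(th) - dy * np.sin(th)],
                  [0, 0, 1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Standard EKF correction with a measurement z of an observed
    object landmark, predicted measurement h(x), and Jacobian H."""
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P
```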


Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment (구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식)

  • Kim, Donghoon; Lee, Donghwa; Myung, Hyun; Choi, Hyun-Taek
    • Journal of Institute of Control, Robotics and Systems, v.19 no.8, pp.667-675, 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among them, a vision sensor is very useful for short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image-processing step, weighted correlation coefficient-based template matching and color-based image segmentation methods are proposed to improve on the conventional approach. In the localization step, to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated by experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
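As a rough illustration of the weighted correlation-coefficient idea, the sketch below computes a weighted normalized cross-correlation between a landmark template and image patches over an exhaustive search. The weight mask `w`, the search stride, and the function names are assumptions for illustration; the paper's exact weighting scheme is not reproduced here.

```python
import numpy as np

def weighted_ncc(patch, template, w):
    """Weighted correlation coefficient between an image patch and a
    template; w holds per-pixel weights (e.g. emphasizing landmark edges)."""
    w = w / w.sum()
    mp = (w * patch).sum()             # weighted means
    mt = (w * template).sum()
    dp, dt = patch - mp, template - mt
    num = (w * dp * dt).sum()
    den = np.sqrt((w * dp ** 2).sum() * (w * dt ** 2).sum())
    return num / den if den > 0 else 0.0

def match_landmark(image, template, w, stride=2):
    """Exhaustive search for the best-matching template location."""
    H, W = image.shape
    h, wd = template.shape
    best_score, best_xy = -1.0, (0, 0)
    for y in range(0, H - h, stride):
        for x in range(0, W - wd, stride):
            s = weighted_ncc(image[y:y + h, x:x + wd].astype(float),
                             template.astype(float), w)
            if s > best_score:
                best_score, best_xy = s, (x, y)
    return best_xy, best_score
```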

Object Recognition-based Global Localization for Mobile Robots (이동로봇의 물체인식 기반 전역적 자기위치 추정)

  • Park, Soon-Yong; Park, Mignon; Park, Sung-Kee
    • The Journal of Korea Robotics Society, v.3 no.1, pp.33-41, 2008
  • Based on object recognition technology, we present a new global localization method for robot navigation. To do this, we model an indoor environment using the following visual cues from a stereo camera: view-based image features for object recognition and their 3D positions for object pose estimation. We also use the depth information along the horizontal centerline of the image, through which the optical axis passes, which is similar to the data of a 2D laser range finder. Therefore, we can build a hybrid local node for a topological map that is composed of an indoor environment metric map and an object location map. Based on such modeling, we suggest a coarse-to-fine strategy for estimating the global localization of a mobile robot. The coarse pose is obtained by means of object recognition and SVD-based least-squares fitting, and then the refined pose is estimated with a particle filtering algorithm. With real experiments, we show that the proposed method can be an effective vision-based global localization algorithm.
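A minimal sketch of SVD-based least-squares rigid fitting (the classical Arun/Kabsch method), which could produce such a coarse pose from matched model and observed 3D points; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Least-squares rigid transform (R, t) aligning model points P (n x 3)
    to observed points Q (n x 3) via SVD (Arun/Kabsch method)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)         # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t
```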


Vision-based Self Localization Using Ceiling Artificial Landmark for Ubiquitous Mobile Robot (유비쿼터스 이동로봇용 천장 인공표식을 이용한 비젼기반 자기위치인식법)

  • Lee, Ju-Sang; Lim, Young-Cheol; Ryoo, Young-Jae
    • Journal of the Korean Institute of Intelligent Systems, v.15 no.5, pp.560-566, 2005
  • In this paper, a practical technique is proposed for correcting a distorted image for vision-based localization of a ubiquitous mobile robot. Localization of a mobile robot is essential and is realized using a camera vision system. In order to widen the view angle of the camera, the vision system includes a fish-eye lens, which distorts the image. Because a mobile robot moves rapidly, the image processing should be fast enough to recognize the localization. Thus, we propose a practical correction technique for the distorted image and verify its performance by experimental tests.
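A minimal OpenCV sketch of fast fish-eye distortion correction, in the spirit of the fast correction the paper calls for: the undistortion maps are precomputed once, so per-frame correction is a single remap. The intrinsics `K` and distortion coefficients `D` below are placeholder values, not the paper's calibration.

```python
import cv2
import numpy as np

# Placeholder calibration; real values come from fisheye camera calibration.
K = np.array([[300.0, 0.0, 320.0],
              [0.0, 300.0, 240.0],
              [0.0, 0.0, 1.0]])                 # assumed intrinsics
D = np.array([[-0.05], [0.01], [0.0], [0.0]])   # assumed fisheye coefficients

# Precompute the undistortion maps once; per-frame correction is then a
# single fast remap, which matters for a fast-moving robot.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (640, 480), cv2.CV_16SC2)

def undistort(frame):
    """Correct one distorted frame using the precomputed maps."""
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```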

Self-Localization of Mobile Robot Using Single Camera (단일 카메라를 이용한 이동로봇의 자기 위치 추정)

  • 김명호; 이쾌희
    • Institute of Control, Robotics and Systems Conference Proceedings, 2000.10a, pp.404-404, 2000
  • This paper presents a single-vision-based self-localization method for a corridor environment. We use the Hough transform to find parallel lines and vertical lines, use their cross points as feature points, and calculate the relative distance from the mobile robot to these points. To match the environment map to the feature points, a search window is defined, and self-localization is performed by a matching procedure. Experimental results show the suitability of this method.
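A sketch of the described pipeline: extract Hough lines and use their intersections as feature points. The Canny/Hough thresholds and the vertical-line angle tolerance are illustrative assumptions.

```python
import cv2
import numpy as np

def corridor_features(gray):
    """Find Hough lines and return intersections of near-vertical lines
    with the remaining (e.g. corridor-edge) lines as feature points."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
    if lines is None:
        return []
    vertical, oblique = [], []
    for rho, theta in lines[:, 0]:
        if abs(theta) < 0.1 or abs(theta - np.pi) < 0.1:
            vertical.append((rho, theta))   # near-vertical image lines
        else:
            oblique.append((rho, theta))    # candidate corridor edges
    points = []
    for r1, t1 in vertical:                 # intersect each line pair
        for r2, t2 in oblique:
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            b = np.array([r1, r2])
            if abs(np.linalg.det(A)) > 1e-6:
                points.append(np.linalg.solve(A, b))
    return points
```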


Accurate Range-free Localization Based on Quantum Particle Swarm Optimization in Heterogeneous Wireless Sensor Networks

  • Wu, Wenlan; Wen, Xianbin; Xu, Haixia; Yuan, Liming; Meng, Qingxia
    • KSII Transactions on Internet and Information Systems (TIIS), v.12 no.3, pp.1083-1097, 2018
  • This paper presents a novel range-free localization algorithm based on quantum particle swarm optimization. The proposed algorithm is capable of estimating the distance between two non-neighboring sensors in multi-hop heterogeneous wireless sensor networks where the nodes' communication ranges differ. First, we construct a new cumulative distribution function of expected hop progress for sensor nodes with different transmission capabilities. Then, the distance between any two nodes can be computed accurately and efficiently by deriving the mathematical expectation of the cumulative distribution function. Finally, a quantum particle swarm optimization algorithm is used to improve the positioning accuracy. Simulation results show that the proposed algorithm is superior in localization accuracy and efficiency for both random and uniform placement of nodes in heterogeneous wireless sensor networks.
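A compact sketch of quantum-behaved PSO (QPSO) refining a node position from anchor positions and estimated distances, in the spirit of the final step above. The fitness function, contraction parameter `beta`, and iteration counts are generic QPSO choices, not the paper's settings.

```python
import numpy as np

def qpso_localize(anchors, dists, n_particles=30, iters=100, beta=0.75):
    """anchors: (m, 2) anchor positions; dists: (m,) estimated distances
    to the unknown node. Returns the best position estimate found."""
    rng = np.random.default_rng(0)
    X = rng.uniform(anchors.min(0), anchors.max(0), (n_particles, 2))

    def fitness(x):   # squared error between implied and estimated distances
        return np.sum((np.linalg.norm(anchors - x, axis=1) - dists) ** 2)

    pbest = X.copy()
    pcost = np.array([fitness(x) for x in X])
    for _ in range(iters):
        gbest = pbest[pcost.argmin()]
        mbest = pbest.mean(axis=0)                # mean best position
        for i in range(n_particles):
            phi = rng.uniform(size=2)
            p = phi * pbest[i] + (1 - phi) * gbest    # local attractor
            u = rng.uniform(size=2)
            sign = np.where(rng.uniform(size=2) < 0.5, -1.0, 1.0)
            # Quantum position update around the attractor.
            X[i] = p + sign * beta * np.abs(mbest - X[i]) * np.log(1.0 / u)
            c = fitness(X[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = X[i].copy(), c
    return pbest[pcost.argmin()]
```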

SLAM Aided GPS/INS/Vision Navigation System for Helicopter (SLAM 기반 GPS/INS/영상센서를 결합한 헬리콥터 항법시스템의 구성)

  • Kim, Jae-Hyung; Lyou, Joon; Kwak, Hwy-Kuen
    • Journal of Institute of Control, Robotics and Systems, v.14 no.8, pp.745-751, 2008
  • This paper presents a framework for a GPS/INS/Vision-based navigation system for helicopters. A coupled GPS/INS algorithm has weak points such as GPS blockage and jamming, and a helicopter is a fast, highly dynamic vehicle that is prone to losing the GPS signal. A vision sensor, by contrast, is not affected by signal jamming, and its navigation error does not accumulate. Therefore, we have implemented a GPS/INS/Vision-aided navigation system providing robust localization suitable for helicopters operating in various environments. The core algorithm is the vision-based simultaneous localization and mapping (SLAM) technique. To verify the SLAM algorithm, we performed flight tests, which confirm that the developed system is robust enough under GPS blockage. The system design, software algorithm, and flight test results are described.
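A schematic fusion loop matching the architecture described: INS dead-reckoning propagates the state, a GPS fix corrects it when available, and the vision-SLAM position serves as a fallback under GPS blockage. The state, noise values, and function names are simplified assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_position(x, P, z, R):
    """Kalman correction of the position state x with a position fix z."""
    H = np.eye(3)                     # the fix observes position directly
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(3) - K @ H) @ P

def navigation_step(x, P, ins_delta, Q, gps_fix=None, slam_fix=None):
    """One cycle: INS prediction, then GPS or vision-SLAM correction."""
    x, P = x + ins_delta, P + Q       # INS dead-reckoning prediction
    if gps_fix is not None:           # prefer GPS when not blocked or jammed
        x, P = fuse_position(x, P, gps_fix, R=np.eye(3) * 2.0)
    elif slam_fix is not None:        # fall back to the vision-SLAM position
        x, P = fuse_position(x, P, slam_fix, R=np.eye(3) * 5.0)
    return x, P
```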

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young; Ahn, Sang-Tae; Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems, v.16 no.4, pp.381-390, 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. A combination of various sensors with different characteristics and limited sensing capability has advantages in terms of complementarity and cooperation in obtaining better information about the environment. In this paper, for robust self-localization of a mobile robot with a monocular camera and a laser structured-light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from input camera images, which serve as natural landmark points in the self-localization process. When using the laser structured-light sensor, the robot utilizes geometrical features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although either feature group alone is sometimes sufficient to localize mobile robots, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
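A minimal sketch of the Bayesian fusion step under a Gaussian assumption: each sensor's pose estimate is weighted by a predefined reliability value (inflating its covariance when reliability is low), and the two beliefs are multiplied in information form. The reliability values and covariances below are illustrative, not the paper's experimentally derived functions.

```python
import numpy as np

def fuse_gaussian(mu1, S1, mu2, S2):
    """Product of two Gaussian pose beliefs (information-filter form)."""
    I1, I2 = np.linalg.inv(S1), np.linalg.inv(S2)
    S = np.linalg.inv(I1 + I2)
    return S @ (I1 @ mu1 + I2 @ mu2), S

def reliability_scaled(S, r):
    """Inflate a sensor's covariance when its predefined reliability r
    for the current situation is low (r in (0, 1])."""
    return S / max(r, 1e-3)

# e.g. a vision (edge-line) estimate and a laser (corner/plane) estimate:
mu_v, S_v = np.array([1.0, 2.0, 0.10]), np.diag([0.04, 0.04, 0.01])
mu_l, S_l = np.array([1.1, 2.1, 0.12]), np.diag([0.02, 0.02, 0.02])
mu, S = fuse_gaussian(mu_v, reliability_scaled(S_v, 0.8),
                      mu_l, reliability_scaled(S_l, 0.9))
```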

Self-localization of a Mobile Robot for Decreasing the Error and VRML Image Overlay (오차 감소를 위한 이동로봇 Self-Localization과 VRML 영상오버레이 기법)

  • Kwon, Bang-Hyun; Shon, Eun-Ho; Kim, Young-Chul; Chong, Kil-To
    • Journal of Institute of Control, Robotics and Systems, v.12 no.4, pp.389-394, 2006
  • Inaccurate localization exposes a robot to many dangerous conditions: it could cause the robot to move in the wrong direction or be damaged by collision with surrounding obstacles. There are numerous approaches to self-localization, using different modalities (vision, laser range finders, ultrasonic sonars). Since sensor information is generally uncertain and contains noise, there has been much research on reducing the noise, but accuracy remains limited because most of this work is based on statistical approaches. The goal of our research is to measure the robot location more exactly by matching a pre-built VRML 3D model against the real vision image. To determine the position of the mobile robot, a landmark-localization technique is applied. Landmarks are any detectable structures in the physical environment; some approaches use vertical lines, others specially designed markers. In this paper, specially designed markers are used as landmarks. Given a known focal length and a single image of three landmarks, it is possible to compute the angular separation between the lines of sight of the landmarks. Image-processing and neural-network pattern-matching techniques are employed to recognize the landmarks placed in the robot's working environment. After self-localization, the 2D vision scene is overlaid with the VRML scene.
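The angular-separation computation mentioned above follows directly from the pinhole model: with a known focal length, each landmark's image coordinates define a line of sight, and the angle between two such rays comes from their dot product. A sketch, assuming pixel coordinates measured from the principal point and an illustrative focal length:

```python
import numpy as np

def line_of_sight(u, v, f):
    """Unit ray through image point (u, v) for focal length f in pixels."""
    r = np.array([u, v, f], dtype=float)
    return r / np.linalg.norm(r)

def angular_separation(p1, p2, f):
    """Angle in radians between the lines of sight of two landmarks."""
    r1, r2 = line_of_sight(*p1, f), line_of_sight(*p2, f)
    return np.arccos(np.clip(np.dot(r1, r2), -1.0, 1.0))

# Three landmark image points give three pairwise angles, which constrain
# the robot pose (a classical three-landmark resection).
pts = [(-120.0, 5.0), (10.0, -3.0), (140.0, 8.0)]
angles = [angular_separation(pts[i], pts[j], f=600.0)
          for i, j in [(0, 1), (1, 2), (0, 2)]]
```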

Vision-Based Indoor Localization Using Artificial Landmarks and Natural Features on the Ceiling with Optical Flow and a Kalman Filter

  • Rusdinar, Angga; Kim, Sungshin
    • International Journal of Fuzzy Logic and Intelligent Systems, v.13 no.2, pp.133-139, 2013
  • This paper proposes a vision-based indoor localization method for autonomous vehicles. A single upward-facing digital camera was mounted on an autonomous vehicle and used as a vision sensor to identify artificial landmarks and natural corner features on the ceiling. An interest-point detector was used to find the natural features, and an optical flow algorithm provided information on the vehicle's direction and translation, which was used to track the vehicle's movements. Because random noise caused by uneven lighting disrupted the calculation of the vehicle translation, a Kalman filter was used to estimate the vehicle's position. These algorithms were tested on a vehicle in a real environment. The image-processing method recognized the landmarks precisely, while the Kalman filter estimated the vehicle's position accurately. The experimental results confirm that the proposed approaches can be implemented in practical situations.
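A sketch of the described pipeline in OpenCV: corner features on the ceiling are tracked with pyramidal Lucas-Kanade optical flow, the median flow serves as the raw translation measurement, and a constant-velocity Kalman filter smooths it against lighting noise. The feature counts, noise covariances, and class names are illustrative assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def raw_translation(prev_gray, gray):
    """Median Lucas-Kanade flow of ceiling corners between two frames."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.zeros(2)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    good = st.ravel() == 1
    if not good.any():
        return np.zeros(2)
    return np.median((p1 - p0)[good].reshape(-1, 2), axis=0)

class TranslationKF:
    """Constant-velocity Kalman filter over the 2-D image translation."""
    def __init__(self):
        self.kf = cv2.KalmanFilter(4, 2)   # state: [x, y, vx, vy]
        self.kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                             [0, 1, 0, 1],
                                             [0, 0, 1, 0],
                                             [0, 0, 0, 1]], np.float32)
        self.kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
        self.kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
        self.kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
        self.kf.errorCovPost = np.eye(4, dtype=np.float32)

    def step(self, meas):
        """Predict, then correct with the measured translation."""
        self.kf.predict()
        est = self.kf.correct(meas.astype(np.float32).reshape(2, 1))
        return est[:2].ravel()         # filtered translation estimate
```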