• Title/Summary/Keyword: Vision-based Localization

Search results: 134

Simultaneous Localization & Map-building of Mobile Robot in the Outdoor Environments by Vision-based Compressed Extended Kalman Filter (Compressed Extended Kalman 필터를 이용한 야외 환경에서 주행 로봇의 위치 추정 및 지도 작성)

  • Yoon Suk-June;Choi Hyun-Do;Park Sung-Kee;Kim Soo-Hyun;Kwak Yoon-Keun
    • Journal of Institute of Control, Robotics and Systems / v.12 no.6 / pp.585-593 / 2006
  • In this paper, we propose a vision-based simultaneous localization and map-building (SLAM) algorithm. The SLAM problem is to estimate the location of a mobile robot in an unknown environment, and solving it is therefore one of the most important processes for mobile robots operating outdoors. The extended Kalman filter (EKF) is widely used to solve this problem, but it demands considerable computational power (${\sim}O(N)$, where N is the dimension of the state vector). To reduce this computational complexity, we apply a compressed extended Kalman filter (CEKF) to a stereo image sequence. Moreover, because the robot operates in outdoor environments, all degrees of freedom of its pose must be estimated. To evaluate the proposed SLAM algorithm, we performed outdoor experiments using a new wheeled mobile robot, Robhaz-6W, and present the performance results of CEKF SLAM.
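
The EKF machinery this abstract builds on can be sketched as a generic predict/update pair. The following is a minimal illustration on a toy 1D state with given Jacobians, not the paper's CEKF, which additionally compresses updates into a local submap:

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate state mean and covariance through the motion model f
    with Jacobian F and process noise Q."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Correct the state with measurement z via the Kalman gain."""
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# toy example: constant-position model, direct observation of the state
x = np.array([0.0]); P = np.array([[1.0]])
F = np.array([[1.0]]); Q = np.array([[0.01]])
H = np.array([[1.0]]); R = np.array([[0.1]])
x, P = ekf_predict(x, P, lambda s: s, F, Q)
x, P = ekf_update(x, P, np.array([1.0]), lambda s: s, H, R)
```

With a large prior covariance and a precise measurement, the updated estimate moves most of the way toward the measurement and the covariance shrinks; in SLAM the state stacks the robot pose with all landmark positions, which is exactly why the cost grows with N.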

A self-localization algorithm for a mobile robot using perspective invariant

  • Roh, Kyoung-Sig;Lee, Wang-Heon;Kweon, In-So
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1996.10b / pp.920-923 / 1996
  • This paper presents a new algorithm for the self-localization of a mobile robot using a perspective invariant, the cross ratio. Most conventional model-based self-localization methods suffer from complex data-structure building, map updating, and matching processes; using the simple cross ratio is an effective remedy for these problems. The algorithm rests on two basic assumptions: the ground plane is flat, and two parallel walls are available. It is also assumed that an environmental map is available for matching between the scene and the model. To extract an accurate steering angle for the mobile robot, we exploit geometric features such as vanishing points (V.P.). Point features for computing cross ratios are extracted robustly using a vanishing point and the intersection points between the floor and the vertical lines of door frames. The robustness and feasibility of our algorithms have been demonstrated through experiments in indoor environments using an indoor mobile robot, KASIRI-II (KAist SImple Roving Intelligence).
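
The projective invariance the method relies on is easy to demonstrate numerically. Below, `project` is a hypothetical fractional-linear map standing in for a perspective camera; it is not a model from the paper, only an illustration that the cross ratio of four collinear points survives projection:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (AC/BC) / (AD/BD) of four collinear points given as scalars."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def project(p, f=1.0, t=2.0):
    """Hypothetical 1D perspective map x -> f*x/(x+t), a fractional-linear
    transform like the restriction of a pinhole projection to a line."""
    return f * p / (p + t)

pts = [1.0, 2.0, 3.0, 5.0]
cr_world = cross_ratio(*pts)
cr_image = cross_ratio(*[project(p) for p in pts])
# the cross ratio is preserved under the projective map
assert abs(cr_world - cr_image) < 1e-9
```

This is what lets the robot match measured image points directly against map points without reconstructing the scene first.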


The Study of Mobile Robot Self-displacement Recognition Using Stereo Vision (스테레오 비젼을 이용한 이동로봇의 자기-이동변위인식 시스템에 관한 연구)

  • 심성준;고덕현;김규로;이순걸
    • Proceedings of the Korean Society of Precision Engineering Conference / 2003.06a / pp.934-937 / 2003
  • In this paper, the authors use a stereo vision system based on the human visual model and establish an inexpensive method that recognizes moving distance using characteristic points around the robot. With stereo vision, the changes in the coordinate values of characteristic points fixed around the robot are measured, and a self-displacement and self-localization recognition system is derived by reconstructing coordinates from those changes. To evaluate the proposed system, several characteristic points made with LEDs around the robot and two cheap USB PC cameras are used. The mobile robot measures the coordinate value of each characteristic point at its initial position; after moving, it measures the coordinates of the same points again. The robot compares the changes in these coordinate values and converts them into a transformation matrix. As a matrix of the amount and direction of the mobile robot's displacement, the obtained transformation matrix represents self-displacement and self-localization relative to the environment.
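
Recovering a transformation matrix from the coordinate changes of fixed points is, at its core, the classic rigid-alignment problem. A minimal sketch using the standard Kabsch (SVD-based) algorithm on synthetic correspondences follows; the authors' exact conversion may differ:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t with Q ≈ R @ P + t.
    P, Q: 3xN arrays of corresponding 3D points (Kabsch algorithm)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# landmark coordinates measured before (P) and after (Q) a known robot motion
P = np.array([[0, 1, 0, 2], [0, 0, 1, 1], [0, 0, 0, 1]], dtype=float)
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([[0.5], [-0.2], [0.1]])
Q = R_true @ P + t_true
R_est, t_est = rigid_transform(P, Q)
```

The recovered `R_est`, `t_est` is exactly the robot's displacement (inverted, depending on convention), which is how fixed points around the robot yield self-displacement.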


Appearance Based Object Identification for Mobile Robot Localization in Intelligent Space with Distributed Vision Sensors

  • Jin, TaeSeok;Morioka, Kazuyuki;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.165-171 / 2004
  • Robots will be able to coexist with humans and support them effectively in the near future. One of the most important aspects in the development of human-friendly robots is cooperation between humans and robots. In this paper, we propose a method for multi-object identification in order to achieve such a human-centered system and robot localization in intelligent space. The intelligent space is a space in which many intelligent devices, such as computers and sensors, are distributed; it achieves human-centered services by accelerating the physical and psychological interaction between humans and intelligent devices. As an intelligent device of the Intelligent Space, a color CCD camera module that includes processing and networking parts has been chosen. The Intelligent Space must identify and track multiple objects to provide appropriate services to users in a multi-camera environment. Many camera modules are distributed to achieve seamless tracking and location estimation, but this causes errors in object identification among the different camera modules. This paper describes an appearance-based object representation for the distributed vision system in the Intelligent Space that achieves consistent labeling of all objects. We then discuss how to learn the object color appearance model and how to achieve multi-object tracking under occlusions.
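
One common way to realize such a color appearance model, sketched here with a normalized joint RGB histogram compared by the Bhattacharyya coefficient, is only an illustrative stand-in for the learned model described in the paper:

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Normalized joint RGB histogram as an appearance model (pixels: Nx3, 0-255)."""
    h, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
    return h.ravel() / h.sum()

def bhattacharyya(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return float(np.sum(np.sqrt(h1 * h2)))

# synthetic pixel samples: the same reddish object seen by two cameras,
# plus a bluish distractor
rng = np.random.default_rng(0)
red_cam1 = rng.normal([200, 40, 40], 10, size=(500, 3)).clip(0, 255)
red_cam2 = rng.normal([200, 40, 40], 10, size=(500, 3)).clip(0, 255)
blue_obj = rng.normal([40, 40, 200], 10, size=(500, 3)).clip(0, 255)

h_a, h_b, h_c = map(color_histogram, (red_cam1, red_cam2, blue_obj))
# the same object seen from two cameras matches better than a different object,
# which is the basis for consistent labeling across camera modules
assert bhattacharyya(h_a, h_b) > bhattacharyya(h_a, h_c)
```

In practice inter-camera color calibration matters too, since the same object can render differently on different CCDs.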

An Approach to 3D Object Localization Based on Monocular Vision

  • Jung, Sung-Hoon;Jang, Do-Won;Kim, Min-Hwan
    • Journal of Korea Multimedia Society / v.11 no.12 / pp.1658-1667 / 2008
  • Reconstruction of 3D objects from a single-view image is generally an ill-posed problem because of projection distortion. A monocular-vision-based 3D object localization method is proposed in this paper, which approximates an object on the ground by a simple bounding solid and works automatically without any prior information about the object. A spherical or cylindrical object, determined by a circularity measure, is approximated by a bounding cylinder, while other general free-shaped objects are approximated by a bounding box or a bounding cylinder as appropriate. For a general object, its silhouette on the ground is first computed by back-projecting its projected image in the image plane onto the ground plane; a base rectangle on the ground is then determined using the intuition that the parts of the object touching the ground should appear at the lower part of the silhouette. The base rectangle is adjusted and extended until the bounding box derived from it encloses the general object sufficiently, and the height of the bounding box is likewise chosen to enclose the object. When the general object looks round, the bounding cylinder that minimally encloses the bounding box is selected instead. A bounding solid can be used to localize a 3D object on the ground and to roughly estimate its volume. The usefulness of our approach is demonstrated with experimental results on real image objects, and its limitations are discussed.
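
The silhouette computation hinges on back-projecting image pixels onto the ground plane. A minimal ray-plane sketch follows, under assumed geometry (camera at height `h` with its optical axis parallel to a flat ground, hypothetical intrinsics `K`), not the paper's exact setup:

```python
import numpy as np

def backproject_to_ground(u, v, K, h):
    """Intersect the viewing ray through pixel (u, v) with the ground plane.
    Camera axes: x right, y down, z forward; camera at height h above a flat
    ground with a horizontal optical axis, so the ground plane is y = h."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction in camera frame
    if d[1] <= 0:
        return None                               # ray never reaches the ground
    s = h / d[1]                                  # scale to hit y = h
    X = s * d                                     # 3D ground point, camera frame
    return X[0], X[2]                             # lateral offset, depth

# hypothetical intrinsics: f = 500 px, principal point (320, 240)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
x, z = backproject_to_ground(320, 340, K, h=1.5)
# a pixel 100 px below the principal point: d_y = 100/500 = 0.2,
# so depth z = 1.5 / 0.2 = 7.5 m with zero lateral offset
```

Doing this for every silhouette pixel yields the object's footprint on the ground, from which the base rectangle is grown.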


Extended Information Overlap Measure Algorithm for Neighbor Vehicle Localization

  • Punithan, Xavier;Seo, Seung-Woo
    • IEIE Transactions on Smart Processing and Computing / v.2 no.4 / pp.208-215 / 2013
  • Early iterations of existing Global Positioning System (GPS)-based or radio-lateration-based vehicle localization algorithms suffer from flip ambiguities, forged relative location information, and location-information exchange overhead, which affect the subsequent iterations and, in turn, result in an erroneous neighbor-vehicle map. This paper proposes an extended information overlap measure (EIOM) algorithm that reduces flip error rates by exchanging neighbor-vehicle presence features as binary information. The algorithm shifts and associates three pieces of information in the Moore neighborhood format: 1) feature information of the neighboring vehicles from a vision-based environment sensor system; 2) cardinal locations of the neighboring vehicles in its Moore neighborhood; and 3) identification information (MAC/IP addresses). Simulations were conducted for multi-lane highway scenarios to compare the proposed algorithm with an existing algorithm; the results showed that flip error rates were reduced by up to 50%.
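
The shift-and-associate step over Moore-neighborhood presence grids can be pictured as aligning two 3x3 binary maps. The grid contents and the brute-force search below are illustrative assumptions, not the published EIOM:

```python
import numpy as np

def moore_grid(neighbors):
    """3x3 binary presence grid of a vehicle's Moore neighborhood;
    neighbors is a set of (row, col) offsets in {-1, 0, 1}^2."""
    g = np.zeros((3, 3), dtype=int)
    g[1, 1] = 1                       # ego vehicle at the center cell
    for dr, dc in neighbors:
        g[1 + dr, 1 + dc] = 1
    return g

def overlap(a, b, dr, dc):
    """Count of occupied cells that coincide when grid b is shifted by (dr, dc)."""
    shifted = np.roll(np.roll(b, dr, axis=0), dc, axis=1)
    return int(np.sum((a == 1) & (shifted == 1)))

# vehicle A sees cars ahead and ahead-left; vehicle B, one lane to A's right,
# reports the same scene shifted by one column
a = moore_grid({(-1, -1), (-1, 0)})
b = np.roll(a, (0, 1), axis=(0, 1))
best = max(((dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)),
           key=lambda s: overlap(a, b, *s))
# the best-overlap shift recovers B's relative position, resolving the flip
```

Maximizing the overlap over candidate shifts associates the two local maps and pins down the neighbor's relative cell without relying on noisy lateration alone.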


An Approach for Localization Around Indoor Corridors Based on Visual Attention Model (시각주의 모델을 적용한 실내 복도에서의 위치인식 기법)

  • Yoon, Kook-Yeol;Choi, Sun-Wook;Lee, Chong-Ho
    • Journal of Institute of Control, Robotics and Systems / v.17 no.2 / pp.93-101 / 2011
  • For a mobile robot, recognizing its current location is essential for autonomous navigation. In particular, loop-closing detection, in which the robot recognizes a location it has visited before, is a key problem in localization. A considerable amount of research has been conducted on appearance-based loop-closing detection and localization, because vision sensors have a cost advantage and admit various approaches to the problem. In scenes composed of repeated structures, as in corridors, perceptual aliasing, in which two different locations are recognized as the same, occurs frequently. In this paper, we propose an improved method for recognizing locations in scenes with similar structures. We extract salient regions from images using a visual attention model and compute weights from the distinctive features in those regions, which makes it possible to emphasize unique features in the scene and thus distinguish similar-looking locations. In corridor recognition experiments, the proposed method showed improved performance: 78.2% accuracy for single-floor corridor recognition and 71.5% for multi-floor corridor recognition.
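
Weighting distinctive features to fight perceptual aliasing is in the same spirit as inverse-document-frequency weighting of visual words; the sketch below uses that stand-in rather than the paper's visual attention model:

```python
import numpy as np

def idf_weights(place_words, vocab_size):
    """Down-weight visual words that occur in many places (they cause
    perceptual aliasing); rare, distinctive words get high weight."""
    df = np.zeros(vocab_size)
    for words in place_words:
        df[np.unique(words)] += 1
    return np.log(len(place_words) / np.maximum(df, 1))

def place_score(query, place, w, vocab_size):
    """Cosine similarity of weighted word histograms."""
    hq = np.bincount(query, minlength=vocab_size) * w
    hp = np.bincount(place, minlength=vocab_size) * w
    return float(hq @ hp / (np.linalg.norm(hq) * np.linalg.norm(hp) + 1e-12))

# word 0 appears everywhere (repeated corridor structure);
# words 3 and 4 are the distinctive features of each place
places = [np.array([0, 0, 3]), np.array([0, 0, 4])]
w = idf_weights(places, vocab_size=5)
query = np.array([0, 0, 3])          # a revisit of the first place
scores = [place_score(query, p, w, 5) for p in places]
assert scores[0] > scores[1]         # aliasing resolved by the weighting
```

Without the weights, the repeated word 0 dominates both scores and the two corridor locations look nearly identical.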

Experiments of Urban Autonomous Navigation using Lane Tracking Control with Monocular Vision (도심 자율주행을 위한 비전기반 차선 추종주행 실험)

  • Suh, Seung-Beum;Kang, Yeon-Sik;Roh, Chi-Won;Kang, Sung-Chul
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.480-487 / 2009
  • Lane detection with vision is a difficult problem because of varying road conditions such as shadowed road surfaces, changing light conditions, and signs painted on the road. In this paper we propose a robust lane detection algorithm that overcomes the shadowed-road problem using a statistical method. The algorithm is applied to a vision-based mobile robot system, and the robot follows the lane with a lane-following controller. In parallel with the lane-following controller, the global position of the robot is estimated by the developed localization method to identify the locations where the lane is discontinued. Experiments performed in a region where GPS measurements are unreliable show good performance in detecting and following the lane under complex conditions with shadows, water marks, and so on.
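
One way a statistical treatment can cope with shadows is to threshold each image row against its own mean and standard deviation, so the threshold adapts wherever a shadow darkens the scene. This is an illustrative guess at the idea, not the authors' published algorithm:

```python
import numpy as np

def lane_mask(gray, k=2.0):
    """Mark pixels much brighter than their own row's statistics.
    Per-row thresholds adapt to shadows that darken whole image regions,
    where a single global threshold would fail."""
    mu = gray.mean(axis=1, keepdims=True)
    sigma = gray.std(axis=1, keepdims=True)
    return gray > mu + k * sigma

# synthetic road image: dark asphalt, a bright lane stripe, shadowed lower half
road = np.full((4, 100), 80.0)
road[:, 48:52] = 220.0          # lane marking
road[2:, :] *= 0.4              # shadow over the lower rows
mask = lane_mask(road)
# the stripe is recovered in both the sunny and the shadowed rows
assert mask[0, 48:52].all() and mask[3, 48:52].all()
```

A fixed global threshold between 80 and 220 would miss the shadowed stripe entirely (its brightness drops to 88), while the row-relative statistic still separates it from the 32-level shadowed asphalt.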

Three-Dimensional Pose Estimation of Neighbor Mobile Robots in Formation System Based on the Vision System (비전시스템 기반 군집주행 이동로봇들의 삼차원 위치 및 자세 추정)

  • Kwon, Ji-Wook;Park, Mun-Soo;Chwa, Dong-Kyoung;Hong, Suk-Kyo
    • Journal of Institute of Control, Robotics and Systems / v.15 no.12 / pp.1223-1231 / 2009
  • We derive a systematic and iterative calibration algorithm, together with a position and pose estimation algorithm, for mobile robots in a vision-based formation system. In addition, we develop a coordinate-matching algorithm that computes the matched ordering of the extracted image coordinates and the object coordinates for non-interactive calibration and pose estimation. Based on the calibration results, we also develop a camera simulator to confirm the calibration and to compare simulation results with experimental results for position and pose estimation.
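
A camera simulator for checking calibration results reduces, at its core, to pinhole projection of known markers and measurement of the reprojection error. A minimal sketch with hypothetical intrinsics and pose (not the paper's calibration procedure):

```python
import numpy as np

def project_points(X, K, R, t):
    """Project 3D world points (3xN) to pixels with a pinhole camera:
    x ~ K (R X + t)."""
    Xc = R @ X + t          # world -> camera frame
    x = K @ Xc              # camera frame -> homogeneous pixels
    return x[:2] / x[2]     # perspective divide

# hypothetical calibration result: f = 400 px, principal point (160, 120),
# camera 2 m in front of the marker board
K = np.array([[400.0, 0, 160], [0, 400.0, 120], [0, 0, 1]])
R = np.eye(3)
t = np.array([[0.0], [0.0], [2.0]])

# three markers on the neighbor robot, one on the optical axis
X = np.array([[0.0, 0.5, -0.5],
              [0.0, 0.2, -0.2],
              [0.0, 0.0,  0.5]])
uv = project_points(X, K, R, t)
# the on-axis marker must land on the principal point (160, 120);
# comparing such rendered pixels with detections verifies the calibration
```

Inverting this projection for markers with known geometry is what yields the neighbor robot's 3D position and pose.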

Extended Support Vector Machines for Object Detection and Localization

  • Feyereisl, Jan;Han, Bo-Hyung
    • The Magazine of the IEIE / v.39 no.2 / pp.45-54 / 2012
  • Object detection is a fundamental task for many high-level computer vision applications such as image retrieval, scene understanding, activity recognition, and visual surveillance. Although object detection is one of the most popular problems in computer vision and various algorithms have been proposed thus far, it remains notoriously difficult, mainly for lack of object-representation models that handle large variations in object structure and appearance. In this article, we review a branch of object detection algorithms based on Support Vector Machines (SVMs), a well-known max-margin technique for minimizing classification error. We introduce several variations of SVMs, namely Structural SVMs and Latent SVMs, and discuss their applications to object detection and localization.
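
The max-margin objective behind all these variants is the regularized hinge loss. A minimal linear-SVM sketch trained by subgradient descent on toy data follows; Structural and Latent SVMs extend this same objective with structured outputs and latent variables:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimize  lam/2 ||w||^2 + mean(max(0, 1 - y_i (w.x_i + b)))
    by subgradient descent; labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                     # points violating the margin
        gw = lam * w - (y[active, None] * X[active]).sum(axis=0) / len(y)
        gb = -y[active].sum() / len(y)
        w -= lr * gw
        b -= lr * gb
    return w, b

# linearly separable toy data: positives up-right, negatives down-left
X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
assert (pred == y).all()
```

In a detector, `X` rows would be window descriptors (e.g. HOG) and the trained `w` is slid over the image as a scoring template; the structural variants replace the binary label with a bounding-box output.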
