• Title/Summary/Keyword: 2D laser range finder

Search Results: 15 (Processing Time: 0.02 seconds)

Multi-UAV Formation Based on Feedback Linearization Technique Using Range-Only Measurement (거리 정보를 이용한 되먹음 선형화 기법 무인기 편대 비행제어)

  • Kim, Sung-Hwan;Ryoo, Chang-Kyung;Park, Choon-Bae
    • Journal of Institute of Control, Robotics and Systems / v.15 no.1 / pp.23-30 / 2009
  • This paper addresses how to form a formation of multiple unmanned aerial vehicles (UAVs) using only relative range information. Since the relative range can easily be measured by an on-board range sensor such as a laser range finder, the proposed method does not require an expensive and heavy wireless communication system to share each vehicle's navigation information. Based on the two-dimensional (2-D) nonlinear equations of motion, we propose a nonlinear formation controller using the typical input-output feedback linearization method. The performance of the proposed formation controller is verified by various numerical simulations.
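The core idea can be sketched for a simplified case. Assuming a single-integrator follower holding a desired range to a leader (the paper's actual UAV dynamics and controller are not reproduced here), choosing the velocity command so that the nonlinear range dynamics collapse to linear first-order error dynamics is the essence of input-output feedback linearization:

```python
import numpy as np

def simulate_range_hold(r_des=5.0, k=1.0, dt=0.01, steps=2000):
    """Follower holds a desired range to a moving leader.

    With single-integrator dynamics p_dot = v, the range r = |p - pL|
    obeys r_dot = (p - pL) . (v - vL) / r.  Choosing
        v = vL + k * (r_des - r) * (p - pL) / r
    cancels the nonlinearity exactly, leaving the linear dynamics
        r_dot = k * (r_des - r).
    """
    pL = np.array([0.0, 0.0])   # leader position
    vL = np.array([1.0, 0.0])   # leader velocity (constant, for the demo)
    p = np.array([10.0, 3.0])   # follower position
    for _ in range(steps):
        d = p - pL
        r = np.linalg.norm(d)
        v = vL + k * (r_des - r) * d / r   # feedback-linearizing command
        p = p + v * dt
        pL = pL + vL * dt
    return np.linalg.norm(p - pL)
```

The range error then decays exponentially at rate k regardless of the leader's motion; the paper develops the analogous construction for the full 2-D equations of motion.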

3D Terrain Reconstruction Using 2D Laser Range Finder and Camera Based on Cubic Grid for UGV Navigation (무인 차량의 자율 주행을 위한 2차원 레이저 거리 센서와 카메라를 이용한 입방형 격자 기반의 3차원 지형형상 복원)

  • Joung, Ji-Hoon;An, Kwang-Ho;Kang, Jung-Won;Kim, Woo-Hyun;Chung, Myung-Jin
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.26-34 / 2008
  • Traversability and path planning information is essential for UGV (Unmanned Ground Vehicle) navigation, and such information can be obtained by analyzing 3D terrain. In this paper, we present a method of 3D terrain modeling that combines color information from a camera, precise distance information from a 2D Laser Range Finder (LRF), and wheel encoder information from a mobile robot, using less data. We also present a method of 3D terrain modeling using information from a GPS/IMU (Inertial Measurement Unit) and a 2D LRF, again with less data. To fuse the color information from the camera with the distance information from the 2D LRF, we obtain the extrinsic parameters between the camera and the LRF using a planar pattern. We set up the fused system on a mobile robot and conduct experiments in an indoor environment, and we conduct experiments in an outdoor environment to reconstruct 3D terrain with the 2D LRF and GPS/IMU. The obtained 3D terrain model is point-based and requires a large amount of data; to reduce the amount of data, we use a cubic grid-based model instead of a point-based model.
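The data-reduction step can be sketched as cubic-grid (voxel) downsampling, in which all points falling in the same cubic cell are replaced by one representative point. The cell size and the centroid rule below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def cubic_grid_downsample(points, cell=0.5):
    """Replace all points inside the same cubic cell by their centroid.

    points: (N, 3) array; cell: edge length of the cubic grid cells.
    """
    cells = {}
    for p in points:
        key = tuple(np.floor(p / cell).astype(int))  # index of the cube containing p
        cells.setdefault(key, []).append(p)
    return np.array([np.mean(ps, axis=0) for ps in cells.values()])
```

Storage then scales with the number of occupied cells rather than the number of raw laser returns.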

Localization of Mobile Robot using Local Map and Kalman Filtering (지역 지도와 칼만 필터를 이용한 이동 로봇의 위치 추정)

  • Lim, Byung-Hyun;Kim, Yeong-Min;Hwang, Jong-Sun;Ko, Nak-Yong
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2003.07b / pp.1227-1230 / 2003
  • In this paper, we propose a pose estimation method using a local map acquired from 2D laser range finder information. The proposed method uses an extended Kalman filter. The state equation is the navigation system equation of the Nomad Super Scout II, and the measurement equation is a map-based measurement equation using a SICK PLS 101-112 sensor. We describe the map with geometric features such as planes, edges, and corners. For pose estimation, we scan the external environment with the laser range finder and feed these data to the Kalman filter to estimate the robot's pose and position. The proposed method enables very fast simultaneous map building and pose estimation.

  • PDF
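The predict/update cycle described in the abstract can be sketched for a unicycle robot with a single range measurement to a known map feature. The motion and measurement models below are generic stand-ins, not the Nomad- or SICK-specific equations from the paper:

```python
import numpy as np

def ekf_step(mu, P, u, z, landmark, dt=0.1,
             Q=np.diag([0.01, 0.01, 0.005]), R=0.04):
    """One EKF predict/update cycle for a unicycle robot.

    mu = [x, y, theta], u = (v, w) odometry input, z = measured range
    to a known landmark (stand-in for the map-based laser measurement).
    """
    x, y, th = mu
    v, w = u
    # --- predict: unicycle motion model ---
    mu_p = np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],   # Jacobian of the motion model
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0,  1.0]])
    P_p = F @ P @ F.T + Q
    # --- update: expected range to the landmark ---
    d = landmark - mu_p[:2]
    r_hat = np.linalg.norm(d)
    H = np.array([[-d[0] / r_hat, -d[1] / r_hat, 0.0]])  # Jacobian of the measurement
    S = H @ P_p @ H.T + R
    K = P_p @ H.T / S                     # Kalman gain, shape (3, 1)
    mu_new = mu_p + K.ravel() * (z - r_hat)
    P_new = (np.eye(3) - K @ H) @ P_p
    return mu_new, P_new
```

Each laser-derived feature match contributes one such update; the covariance shrinks along the observed direction.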

Implementation of vision system for a mobile robot using pulse phase difference & structured light (펄스 위상차와 스트럭춰드 라이트를 이용한 이동 로봇 시각 장치 구현)

  • 방석원;정명진;서일홍;오상록
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1991.10a / pp.652-657 / 1991
  • To date, the application areas of mobile robots have expanded, and many types of LRF (Laser Range Finder) systems have been developed to acquire three-dimensional information about unknown environments. In the real world, however, various noise sources (sunlight, fluorescent light) make it difficult to separate the reflected laser light from this noise. To overcome this restriction, we have developed a new type of vision system that enables a mobile robot to measure the distance to an object located 1-5 m ahead with an error of less than 2%. The separation and detection algorithm used in this system consists of a pulse phase difference method and multi-stripe structured light. The effectiveness and feasibility of the proposed vision system are demonstrated by 3-D maps of detected objects and computation time analysis.

  • PDF
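The two measurement principles named in the abstract can be sketched generically; the camera intrinsics, plane parameters, and modulation frequency below are illustrative assumptions, not values from the paper. Phase-difference ranging converts the measured phase shift of a modulated signal into round-trip distance, and structured-light triangulation recovers a 3-D point by intersecting a camera ray with the calibrated laser plane:

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def phase_to_range(delta_phi, f_mod):
    """Phase-shift ranging: d = c * delta_phi / (4 * pi * f_mod).

    The factor is 4*pi rather than 2*pi because the light travels
    to the target and back (round trip).
    """
    return C * delta_phi / (4.0 * np.pi * f_mod)

def stripe_point(u_px, v_px, fx, fy, cx, cy, n, d):
    """Intersect the camera ray through pixel (u, v) with the laser
    light plane n . X + d = 0, known from calibration."""
    ray = np.array([(u_px - cx) / fx, (v_px - cy) / fy, 1.0])
    t = -d / (n @ ray)
    return t * ray
```

With multiple stripes, each detected stripe pixel yields one 3-D point on its corresponding plane, which is how a 3-D map of the scene is accumulated.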

Object Recognition-based Global Localization for Mobile Robots (이동로봇의 물체인식 기반 전역적 자기위치 추정)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.3 no.1 / pp.33-41 / 2008
  • Based on object recognition technology, we present a new global localization method for robot navigation. To do this, we model an indoor environment with a stereo camera using the following visual cues: view-based image features for object recognition, and their 3D positions for object pose estimation. We also use the depth information along the horizontal centerline of the image, through which the optical axis passes, which is similar to the data of a 2D laser range finder. We can therefore build a hybrid local node for a topological map that is composed of an indoor environment metric map and an object location map. Based on such modeling, we suggest a coarse-to-fine strategy for estimating the global localization of a mobile robot: the coarse pose is obtained by means of object recognition and SVD-based least-squares fitting, and its refined pose is then estimated with a particle filtering algorithm. With real experiments, we show that the proposed method can be an effective vision-based global localization algorithm.

  • PDF
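The coarse step, SVD-based least-squares fitting between recognized object positions and their map locations, can be sketched as standard rigid alignment (the Kabsch procedure) of matched 2-D point sets; the point sets below are illustrative:

```python
import numpy as np

def svd_rigid_fit(A, B):
    """Least-squares rotation R and translation t with R @ a + t ~ b,
    for matched 2-D point sets A and B (one point per row)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    s = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, s]) @ U.T
    t = cb - R @ ca
    return R, t
```

The recovered (R, t) gives the coarse robot pose in the map frame, which then seeds the particle filter for refinement.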