• Title/Summary/Keyword: LRF (Laser Range Finder)


A 3D Map Building Algorithm for a Mobile Robot Moving on the Slanted Surface (모바일 로봇의 경사 주행 시 3차원 지도작성 알고리즘)

  • Hwang, Yo-Seop;Han, Jong-Ho;Kim, Hyun-Woo;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.18 no.3
    • /
    • pp.243-250
    • /
    • 2012
  • This paper proposes a 3D map-building algorithm using a single LRF (Laser Range Finder) while a mobile robot navigates on a slanted surface. There have been several studies on 3D map building using the LRF; however, most of them perform map building only on flat surfaces. While a mobile robot moves on a slanted surface, the view angle of the LRF changes dynamically, which makes it very difficult to build a 3D map from encoder data alone. To cope with this dynamic change of the view angle, IMU data and a balance filter are fused in this research to correct the unstable encoder data. Real navigation experiments verify that the multi-sensor fusion properly corrects for the slope angle of the slanted surface. The effectiveness of the balance filter is also confirmed through hill-climbing navigation experiments.
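
The core geometric step described above — undoing the slope-induced tilt of the scan before inserting it into the map — can be sketched as a simple rotation of each LRF point by the pitch angle reported by the IMU/balance filter. This is a minimal illustration, not the paper's implementation; the function name and the 2D (x-z plane) simplification are assumptions.

```python
import math

def correct_scan_for_pitch(ranges, angles, pitch):
    """Rotate 2D LRF scan points by the robot's pitch (rad) so that
    points measured on a slope land at their true world-frame height.
    ranges: measured distances (m); angles: beam angles in the sensor
    frame (rad); pitch: pitch estimate from the IMU/balance filter."""
    points = []
    for r, a in zip(ranges, angles):
        # point in the sensor's x-z plane
        x_s = r * math.cos(a)
        z_s = r * math.sin(a)
        # rotate by the pitch angle to undo the slope-induced tilt
        x_w = math.cos(pitch) * x_s + math.sin(pitch) * z_s
        z_w = -math.sin(pitch) * x_s + math.cos(pitch) * z_s
        points.append((x_w, z_w))
    return points
```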

An Efficient Outdoor Localization Method Using Multi-Sensor Fusion for Car-Like Robots (다중 센서 융합을 사용한 자동차형 로봇의 효율적인 실외 지역 위치 추정 방법)

  • Bae, Sang-Hoon;Kim, Byung-Kook
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.10
    • /
    • pp.995-1005
    • /
    • 2011
  • An efficient outdoor localization method using multi-sensor fusion with an MU-EKF (Multi-Update Extended Kalman Filter) is proposed for car-like mobile robots. In outdoor environments, where mobile robots are used for exploration or military services, accurate localization with multiple sensors is indispensable. The proposed algorithm fuses sensor data from an LRF (Laser Range Finder), encoders, and GPS. First, encoder data is used in the prediction stage of the MU-EKF. Then the LRF scan of the environment is used to extract objects, and the robot position and orientation are estimated by matching them against map objects, forming the first update stage of the MU-EKF. This estimate is finally fused with GPS in the second update stage. The MU-EKF can efficiently fuse three or more sensors even when their data arrive at different sampling periods, and it ensures high localization accuracy. The validity of the proposed algorithm is demonstrated via experiments.
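
The predict-then-multi-update structure described above can be illustrated with a scalar toy filter: one encoder-driven prediction followed by two sequential measurement updates (LRF map matching, then GPS). The 1-D model, the numbers, and the noise values are all illustrative assumptions, not the paper's actual MU-EKF.

```python
def predict(x, P, u, Q):
    # encoder-based motion prediction (1-D toy model: x' = x + u)
    return x + u, P + Q

def update(x, P, z, R):
    # standard Kalman measurement update for a direct observation z = x
    K = P / (P + R)
    return x + K * (z - x), (1 - K) * P

# One MU-EKF cycle: predict with the encoder, then apply the LRF and
# GPS corrections as two sequential update stages.
x, P = 0.0, 1.0
x, P = predict(x, P, u=1.0, Q=0.1)   # encoder says we moved 1 m
x, P = update(x, P, z=1.2, R=0.5)    # LRF map-matching fix
x, P = update(x, P, z=0.9, R=1.0)    # GPS fix
```

Because each update is an independent stage, a third sensor with its own sampling period could simply contribute another `update` call whenever its measurement arrives.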

Implementation of vision system for a mobile robot using pulse phase difference & structured light (펄스 위상차와 스트럭춰드 라이트를 이용한 이동 로봇 시각 장치 구현)

  • 방석원;정명진;서일홍;오상록
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1991.10a
    • /
    • pp.652-657
    • /
    • 1991
  • To date, the application areas of mobile robots have expanded, and many types of LRF (Laser Range Finder) systems have been developed to acquire three-dimensional information about unknown environments. In the real world, however, various noise sources (sunlight, fluorescent light) make it difficult to separate the reflected laser light from the noise. To overcome this restriction, we have developed a new type of vision system that enables a mobile robot to measure the distance to an object located 1-5 m ahead with an error of less than 2%. The separation and detection algorithm used in this system combines a pulse phase-difference method with multi-stripe structured light. The effectiveness and feasibility of the proposed vision system are demonstrated by 3-D maps of detected objects and an analysis of computation time.


Traveling Performance of a Robot Platform for Unmanned Weeding in a Dry Field (벼농사용 무인 제초로봇의 건답환경 주행 성능)

  • Kim, Gook-Hwan;Kim, Sang-Cheol;Hong, Young-Ki
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.31 no.1
    • /
    • pp.43-50
    • /
    • 2014
  • This paper introduces a robot platform that can weed while traveling stably between rice seedlings despite the irregular land surface of a paddy field. An autonomous navigation technique is also proposed that keeps the robot on a stable track without damaging the seedlings in the working area. Detecting the rice seedlings and avoiding knocking them down is achieved by fusing a laser range finder (LRF) and an inertial measurement unit (IMU). These sensors also control the heading of the robot so that it consistently follows the row of rice seedlings. The deviation of the robot's heading from the rice row, sensed by the LRF, is fed back to a proportional-derivative (PD) controller to obtain stable steering adjustment and an appropriate speed of return to the rice row.
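
The feedback loop described above — LRF-sensed deviation from the seedling row fed to a PD controller — reduces to a few lines. This is a generic PD sketch under assumed gains, not the paper's tuned controller; `kp` and `kd` are illustrative.

```python
def pd_steering(deviation, prev_deviation, dt, kp=1.5, kd=0.4):
    """PD steering correction from the lateral deviation (m) that the
    LRF reports relative to the seedling row. A positive deviation
    (robot drifted left of the row) yields a negative (rightward)
    correction. Gains are illustrative, not from the paper."""
    derivative = (deviation - prev_deviation) / dt
    return -(kp * deviation + kd * derivative)
```

The derivative term damps the return to the row, which is what gives the "proper returning speed" the abstract mentions.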

Development of a Real-Time Collision Avoidance Algorithm for eXperimental Autonomous Vehicle (무인자율차량의 실시간 충돌 회피 알고리즘 개발)

  • Choe, Tok-Son
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.56 no.7
    • /
    • pp.1302-1308
    • /
    • 2007
  • In this paper, a real-time collision avoidance algorithm is proposed for the eXperimental Autonomous Vehicle (XAV). To ensure real-time operation, a virtual potential field is calculated in one-dimensional space. The attractive force is generated by the steering command, either transmitted from the remote control station or computed by the Autonomous Navigation System (ANS) of the XAV. The repulsive force is generated from obstacle information obtained by the Laser Range Finder (LRF) mounted on the XAV. Using these attractive and repulsive forces, modified steering, velocity, and emergency-stop commands are created to avoid obstacles while following the planned path. The suggested algorithm is integrated as one component of the XAV system. Through various real experiments and technical demonstrations using the XAV, the usefulness and practicality of the proposed algorithm are verified.
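
The attractive/repulsive combination described above can be sketched as a one-dimensional potential field over the steering command. The repulsion shape (the classic 1/d - 1/d0 form), the constants, and the sign convention (positive bearing = obstacle to the left, so push the steering right) are assumptions for illustration, not the paper's formulation.

```python
def avoid(steer_cmd, obstacles, k_rep=1.0, d0=5.0):
    """Combine the attractive steering command with repulsive terms
    from LRF obstacles, each given as (bearing_rad, distance_m).
    Only obstacles closer than the influence distance d0 contribute;
    repulsion grows as an obstacle gets closer and is directed away
    from its bearing. Constants are illustrative."""
    force = steer_cmd  # attractive component from the planned path
    for bearing, dist in obstacles:
        if dist < d0:
            side = 1.0 if bearing >= 0 else -1.0
            force -= k_rep * (1.0 / dist - 1.0 / d0) * side
    return force
```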

An Indoor Space Representation Method Using 3D Environmental Data (3차원 데이터를 이용한 실내 공간 표현 기법)

  • Lee, Se-Ho;Jeong, Seong-Gyun;Chung, Tae-Young;Kim, Chang-Su
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2012.07a
    • /
    • pp.417-418
    • /
    • 2012
  • This paper proposes an efficient indoor space representation method using 3D data. The proposed method consists of 3D data acquisition and surface reconstruction to represent the indoor structure and image information. The 3D data consist of point-cloud spatial information and omnidirectional texture images acquired with a laser range finder (LRF) and an omnidirectional camera. To reconstruct the indoor structure, the acquired point cloud is regularized by voxel-grid-based sampling, and a 3D mesh is generated by Poisson surface reconstruction. Finally, the 3D mesh surface is textured through texture mapping based on the geometric relationship between the omnidirectional texture images and the 3D mesh. Experimental results confirm that the proposed method represents indoor spaces effectively.
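
The voxel-grid sampling step mentioned above — regularizing a raw point cloud before surface reconstruction — has a standard minimal form: bucket the points into cubic voxels and keep one averaged point per occupied voxel. This is a generic sketch of that technique, not the paper's code; the voxel size is an assumed parameter.

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.05):
    """Uniform point-cloud downsampling: keep one centroid per
    occupied voxel of side `voxel` metres. points: iterable of
    (x, y, z) tuples."""
    bins = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)  # voxel index
        bins[key].append(p)
    # replace each voxel's points by their centroid
    return [tuple(sum(c) / len(ps) for c in zip(*ps))
            for ps in bins.values()]
```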


A Study of Object Tracking Drones Combining Image Processing and Distance Sensor (영상처리와 거리센서를 융합한 객체 추적용 드론의 연구)

  • Yang, Woo-Seok;Chun, Myung-Hyun;Jang, Gun-Woo;Kim, Sang-Hoon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2017.11a
    • /
    • pp.961-964
    • /
    • 2017
  • With the popularization of drones and the corresponding increase in accident risk, research on safe piloting methods has become necessary. Autonomous flight control technology that does not depend on the operator's piloting skill is therefore required; to implement it more reliably, this work uses the Robot Operating System (ROS), which has attracted attention as a software platform for autonomous driving. Based on ROS, a Laser Range Finder (LRF), and a Particle Filter, we aim to implement a stable autonomous flight control system that can track objects autonomously and fly while intelligently avoiding obstacles.

Performance Comparison of Sensor-Programming Schemes According to the Shapes of Obstacles

  • Chung, Jong-In;Chae, Yi-Geun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.13 no.3
    • /
    • pp.56-62
    • /
    • 2021
  • MSRDS (Microsoft Robotics Developer Studio) provides the ability to simulate these technologies. The SPL (Simple Programming Language) of MSRDS provides many functions for sensor programming to control autonomous robots. Sensor programming in SPL can be implemented with two types of schemes: the sensor-notification procedure and the while-loop scheme. After studying the advantages and disadvantages of each, we devised three programming schemes to control the robot's movement. We also created simulation environments to evaluate the performance of the three schemes when applied to four different mazes. Each simulation environment consisted of a maze and a robot equipped with the most powerful sensor, the LRF (Laser Range Finder). We measured the travel time and the robot's actions (number of turns and number of collisions) needed to escape each maze, and compared the performance of the three schemes across the four mazes.
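
The while-loop scheme contrasted above amounts to polling the LRF each iteration and deciding an action, rather than reacting to sensor-notification callbacks. The sketch below shows one polling iteration in Python rather than SPL; the three-sector reading, the safety distance, and the action names are all assumptions for illustration.

```python
def escape_step(lrf_distances, safe=1.0):
    """One iteration of a while-loop sensor scheme: read the LRF,
    pick an action. lrf_distances: (left, front, right) ranges in
    metres; `safe` is the minimum clear distance to keep moving."""
    left, front, right = lrf_distances
    if front > safe:
        return "forward"
    # front blocked: turn toward the more open side
    return "turn_left" if left > right else "turn_right"
```

A notification-based scheme would instead register this logic as a handler invoked only when a new LRF reading arrives, which is the trade-off the paper measures.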

Land Preview System Using Laser Range Finder based on Heave Estimation (Heave 추정 기반의 레이저 거리측정기를 이용한 선행지형예측시스템)

  • Kim, Tae-Won;Kim, Jin-Hyoung;Kim, Sung-Soo;Ko, Yun-Ho
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.49 no.1
    • /
    • pp.64-73
    • /
    • 2012
  • In this paper, a new land preview system using a laser range finder based on a heave estimation algorithm is proposed. A land preview system is equipment that measures the shape of the forward topography for an autonomous vehicle. A laser range finder is generally used to implement it because of its wide measuring range and robustness under various environmental conditions. The current location of the vehicle must then be known to generate the shape of the forward topography, and acceleration-based sensors such as an IMU or accelerometer are commonly used to measure heave motion in conventional land preview systems. The drawback of these sensors is that they are too expensive for low-cost vehicles such as mobile robots, and their measurement error grows under abrupt acceleration. To overcome this drawback, this paper proposes an algorithm that estimates heave motion from odometer information and the previously measured topography. The proposed land preview system based on this heave estimation algorithm is verified through simulations and experiments over various terrains, using both a simulator and a real system.
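
The key idea above — recover heave from the odometer position and terrain the LRF has already scanned, instead of from an accelerometer — can be sketched as interpolation of the stored profile at the current travel distance. This is a hypothetical simplification of the paper's estimator; the piecewise-linear profile and hold-last behavior are assumptions.

```python
def estimate_heave(x, terrain):
    """Estimate the vehicle's heave (vertical position) by linearly
    interpolating previously measured terrain heights at the odometer
    position x. terrain: list of (distance_m, height_m) samples,
    sorted by distance."""
    for (x0, z0), (x1, z1) in zip(terrain, terrain[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return z0 + t * (z1 - z0)
    return terrain[-1][1]  # beyond the scanned profile: hold last height
```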

A Study for Vision-based Estimation Algorithm of Moving Target Using Aiming Unit of Unguided Rocket (무유도 로켓의 조준 장치를 이용한 영상 기반 이동 표적 정보 추정 기법 연구)

  • Song, Jin-Mo;Lee, Sang-Hoon;Do, Joo-Cheol;Park, Tai-Sun;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.20 no.3
    • /
    • pp.315-327
    • /
    • 2017
  • In this paper, we present a method for estimating the position and velocity of a moving target using the range and bearing measurements from the multiple sensors of an aiming unit. In many cases, a conventional low-cost gyro sensor and a portable laser range finder (LRF) degrade the estimation accuracy. To mitigate these problems, we propose two methods: background-image tracking and principal component analysis (PCA). Background tracking assists the low-cost gyro sensor, and PCA copes with the shortcomings of the portable LRF. We show that our method is robust to low-frequency, biased, and noisy inputs, and we present a comparison between our method and the extended Kalman filter (EKF).