• Title/Summary/Keyword: LRF Sensor


Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.4 no.4 / pp.298-304 / 2009
  • This paper describes an algorithm that improves 3D reconstruction results using a multi-sensor fusion disparity map. LRF (Laser Range Finder) 3D points are projected onto image pixel coordinates using the camera-LRF extrinsic calibration matrices (${\Phi}$, ${\Delta}$) and the camera calibration matrix (K). The LRF disparity map is generated by interpolating the projected LRF points. In the stereo reconstruction, invalid points caused by repeated patterns and textureless regions are compensated using the LRF disparity map; the disparity map resulting from this compensation is the multi-sensor fusion disparity map. Using it, the multi-sensor 3D reconstruction based on stereo vision and the LRF can be refined. The refinement algorithm is described in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested with synchronized stereo image pairs and LRF 3D scan data.
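
As a rough illustration of the projection and fill-in steps summarized above, the sketch below maps LRF points into the image with a pinhole model (the names K, Phi, Delta follow the abstract's notation) and fills invalid stereo disparities from an LRF-derived disparity map. All numeric values and the simple fill-in rule are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def project_lrf_points(points_lrf, K, Phi, Delta):
    """Project Nx3 LRF points into pixel coordinates, keeping their camera-frame depth."""
    cam = (Phi @ points_lrf.T + Delta.reshape(3, 1)).T  # LRF frame -> camera frame
    cam = cam[cam[:, 2] > 0]                            # keep points in front of the camera
    uvw = (K @ cam.T).T
    pixels = uvw[:, :2] / uvw[:, 2:3]                   # perspective division
    return pixels, cam[:, 2]

def fuse_disparity(stereo_disp, lrf_disp, invalid_value=0.0):
    """Fill invalid stereo disparities (repeated patterns, textureless regions) from the LRF map."""
    fused = stereo_disp.copy()
    mask = (stereo_disp == invalid_value) & (lrf_disp > 0)
    fused[mask] = lrf_disp[mask]
    return fused

# Toy example with assumed intrinsics, extrinsics, focal length f, and baseline B.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
Phi, Delta = np.eye(3), np.array([0.0, 0.1, 0.0])
f, B = 500.0, 0.12
points = np.array([[0.5, 0.0, 3.0], [-0.4, 0.1, 2.0]])
pixels, depth = project_lrf_points(points, K, Phi, Delta)
print(pixels, f * B / depth)                            # disparity = f * B / Z
```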

Comparative Analysis of the Performance of Robot Sensors in the MSRDS Platform (MSRDS 플랫폼에서 로봇 센서들의 성능 비교분석)

  • Lee, Jeong-Won;Chung, Jong-In
    • Journal of Korea Society of Industrial Information Systems / v.19 no.5 / pp.57-68 / 2014
  • MSRDS (Microsoft Robotics Developer Studio) is a robot simulation platform that provides simulated robots and environments, enabling basic robot programming without hardware robots. In this paper, we use a maze-escape problem to compare and analyze the performance of LRF, bumper, IR, and sonar sensors under the same conditions in the MSRDS environment. To evaluate sensor performance, we program simulation environments with identical conditions for all sensors. We found that the LRF sensor had the highest performance and the bumper sensor the lowest in terms of travel time, number of turnings, and number of collisions. It was also confirmed that the IR and sonar sensors performed worse than the LRF sensor in terms of the number of turnings.
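
As a note on how such a comparison can be tabulated, the sketch below averages per-sensor simulation runs over the three metrics the paper reports (travel time, number of turnings, number of collisions); the numbers are placeholders, not results from the paper.

```python
# Placeholder tabulation of the three reported metrics per sensor; the values
# below are invented examples, not measurements from the paper.
trials = {
    "LRF":    [{"time_s": 42.0, "turns": 8,  "collisions": 0}],
    "bumper": [{"time_s": 95.0, "turns": 21, "collisions": 7}],
    "IR":     [{"time_s": 60.0, "turns": 14, "collisions": 2}],
    "sonar":  [{"time_s": 58.0, "turns": 13, "collisions": 1}],
}

def average_metrics(runs):
    """Average each metric over a sensor's simulation runs."""
    return {key: sum(run[key] for run in runs) / len(runs) for key in runs[0]}

for sensor, runs in trials.items():
    print(sensor, average_metrics(runs))
```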

Extraction of Different Types of Geometrical Features from Raw Sensor Data of Two-dimensional LRF (2차원 LRF의 Raw Sensor Data로부터 추출된 다른 타입의 기하학적 특징)

  • Yan, Rui-Jun;Wu, Jing;Yuan, Chao;Han, Chang-Soo
    • Journal of Institute of Control, Robotics and Systems / v.21 no.3 / pp.265-275 / 2015
  • This paper describes extraction methods for five different types of geometrical features (line, arc, corner, polynomial curve, NURBS curve) from raw data obtained with a two-dimensional laser range finder (LRF). Natural features and their covariance matrices play a key role in realizing feature-based simultaneous localization and mapping (SLAM), where they are used to represent the environment and correct the pose of the mobile robot. The covariance matrices of these geometrical features are derived in detail from the raw sensor data and the uncertainty of the LRF. Several comparisons are made and discussed to highlight the advantages and drawbacks of each type of geometrical feature. Finally, features extracted from raw sensor data obtained with an LRF in an indoor environment are used to validate the proposed extraction methods.
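
To make one of the five feature types concrete, the sketch below fits a line to a cluster of 2D LRF points by total least squares; the covariance derivation and the other feature types (arc, corner, polynomial and NURBS curves) discussed in the paper are not reproduced, and the toy data is an assumption.

```python
import numpy as np

def fit_line(points):
    """Total-least-squares line fit to Nx2 points; returns unit normal n and offset d with n.x = d."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # The right singular vector with the smallest singular value is the line normal.
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]
    d = normal @ centroid
    residuals = centered @ normal          # signed orthogonal distances to the fitted line
    return normal, d, residuals

# Toy scan segment: noisy points along y = 0.5 x + 1 (assumed data).
x = np.linspace(0.0, 2.0, 20)
pts = np.c_[x, 0.5 * x + 1.0 + 0.01 * np.random.randn(20)]
n, d, res = fit_line(pts)
print("normal:", n, "offset:", d, "rms error:", np.sqrt(np.mean(res ** 2)))
```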

Development of Adaptive Moving Obstacle Avoidance Algorithm Based on Global Map using LRF sensor (LRF 센서를 이용한 글로벌 맵 기반의 적응형 이동 장애물 회피 알고리즘 개발)

  • Oh, Se-Kwon;Lee, You-Sang;Lee, Dae-Hyun;Kim, Young-Sung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.5 / pp.377-388 / 2020
  • In this paper, we propose an algorithm by which an autonomous mobile robot equipped only with LRF sensors can avoid moving obstacles in an environment for which a global map containing the fixed obstacles is available. First, moving obstacles are extracted using the LRF distance data and the global map. An ellipse-shaped safety radius is then created using the sum of the relative vector components between the extracted moving obstacles and the autonomous mobile robot. Considering this safety radius, the robot can avoid the moving obstacles and reach its destination. To verify the proposed algorithm, quantitative analysis is used to compare it with an existing algorithm: the path length and driving time of the proposed algorithm are compared with those of the existing algorithm, with the moving-obstacle-free case as the baseline. Because the proposed algorithm takes the relative speed and direction of the moving obstacle into account, it shows higher performance than the existing algorithm in both path length and driving time.
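
The sketch below illustrates the two ideas named in the abstract: flagging LRF returns that do not match the global map of fixed obstacles as moving obstacles, and an ellipse-shaped safety region stretched along the relative velocity between the robot and an obstacle. The thresholds, gains, and the exact shape rule are illustrative assumptions rather than the paper's formulation.

```python
import numpy as np

def extract_moving_obstacles(scan_xy, map_xy, match_tol=0.15):
    """Return scan points farther than match_tol from every fixed obstacle in the global map."""
    moving = [p for p in scan_xy
              if np.min(np.linalg.norm(map_xy - p, axis=1)) > match_tol]
    return np.array(moving)

def inside_safety_ellipse(robot_pos, robot_vel, obs_pos, obs_vel,
                          base_radius=0.5, vel_gain=0.8):
    """Check whether the obstacle lies inside an ellipse elongated along the relative velocity."""
    rel_vel = obs_vel - robot_vel
    speed = np.linalg.norm(rel_vel)
    a = base_radius + vel_gain * speed          # semi-major axis grows with relative speed
    b = base_radius                             # semi-minor axis stays at the base radius
    axis = rel_vel / speed if speed > 1e-6 else np.array([1.0, 0.0])
    d = obs_pos - robot_pos
    u = d @ axis                                # component along the relative velocity
    v = d @ np.array([-axis[1], axis[0]])       # perpendicular component
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0
```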

Robust Elevator Door Recognition using LRF and Camera (LRF와 카메라를 이용한 강인한 엘리베이터 문 인식)

  • Ma, Seung-Wan;Cui, Xuenan;Lee, Hyung-Ho;Kim, Hyung-Rae;Lee, Jae-Hong;Kim, Hak-Il
    • Journal of Institute of Control, Robotics and Systems / v.18 no.6 / pp.601-607 / 2012
  • Recognition of elevator doors is needed for mobile service robots to move between floors in a building. This paper proposes a sensor fusion approach using an LRF (Laser Range Finder) and a camera to solve this problem. From the laser scans of the LRF, line segments are extracted and elevator door candidates are detected. Using the camera image, the door candidates are verified and the real elevator door is selected; outliers are filtered out in this verification process. The door state is then detected by depth analysis within the door region. The proposed method uses extrinsic calibration to fuse the LRF and the camera, and it gives better elevator door recognition results than a method using the LRF only.
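
As a much simplified stand-in for the scan-based candidate detection described above, the sketch below looks for an opening of door-like width between two depth discontinuities in a single LRF scan; the paper's line-segment extraction, camera-based verification, and door-state analysis are not shown, and the expected door width and jump threshold are assumptions.

```python
import numpy as np

def find_door_candidates(angles, ranges, door_width=(0.8, 1.2), jump=0.5):
    """Return index pairs bounding range discontinuities whose spacing matches a door width."""
    xy = np.c_[ranges * np.cos(angles), ranges * np.sin(angles)]
    jumps = np.where(np.abs(np.diff(ranges)) > jump)[0]  # indices of depth discontinuities
    candidates = []
    for i in jumps:
        for j in jumps:
            if j <= i:
                continue
            width = np.linalg.norm(xy[j + 1] - xy[i])     # opening width between the two edges
            if door_width[0] <= width <= door_width[1]:
                candidates.append((i, j + 1))
    return candidates
```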

An Efficient Outdoor Localization Method Using Multi-Sensor Fusion for Car-Like Robots (다중 센서 융합을 사용한 자동차형 로봇의 효율적인 실외 지역 위치 추정 방법)

  • Bae, Sang-Hoon;Kim, Byung-Kook
    • Journal of Institute of Control, Robotics and Systems / v.17 no.10 / pp.995-1005 / 2011
  • An efficient outdoor local localization method using multi-sensor fusion with an MU-EKF (Multi-Update Extended Kalman Filter) is suggested for car-like mobile robots. In outdoor environments, where mobile robots are used for exploration or military services, accurate localization with multiple sensors is indispensable. In this paper, a multi-sensor fusion outdoor local localization algorithm is proposed which fuses sensor data from an LRF (Laser Range Finder), encoders, and GPS. First, encoder data is used for the prediction stage of the MU-EKF. Then the LRF data obtained by scanning the environment is used to extract objects, and the robot position and orientation are estimated by matching these objects to the map, as the first update stage of the MU-EKF. This estimate is finally fused with GPS as the second update stage of the MU-EKF. The MU-EKF algorithm can fuse data from three or more sensors efficiently, even with different sampling periods, and ensures high localization accuracy. The validity of the proposed algorithm is demonstrated via experiments.
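
The sketch below shows the multi-update pattern in its simplest form, assuming a planar pose state [x, y, heading]: one odometry-driven prediction followed by two sequential EKF updates, one from an LRF-derived pose and one from GPS. The motion model, noise values, and the direct pose measurement that stands in for the paper's map-matching step are all assumptions.

```python
import numpy as np

def predict(x, P, u, Q):
    """Odometry prediction for a state x = [px, py, heading]; u = (distance, heading change)."""
    px, py, th = x
    d, dth = u
    x_new = np.array([px + d * np.cos(th), py + d * np.sin(th), th + dth])
    F = np.array([[1, 0, -d * np.sin(th)],
                  [0, 1,  d * np.cos(th)],
                  [0, 0,  1]])
    return x_new, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Standard EKF measurement update; called once per sensor (LRF pose, then GPS)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x, P = np.zeros(3), np.eye(3) * 0.1
x, P = predict(x, P, u=(1.0, 0.05), Q=np.diag([0.01, 0.01, 0.001]))
H_lrf, H_gps = np.eye(3), np.eye(2, 3)           # LRF gives full pose, GPS gives position only
x, P = update(x, P, np.array([1.0, 0.04, 0.05]), H_lrf, np.eye(3) * 0.05)
x, P = update(x, P, np.array([0.98, 0.06]), H_gps, np.eye(2) * 0.5)
print(x)
```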

Multiple Target Tracking and Forward Velocity Control for Collision Avoidance of Autonomous Mobile Robot (실외 자율주행 로봇을 위한 다수의 동적 장애물 탐지 및 선속도 기반 장애물 회피기법 개발)

  • Kim, Sun-Do;Roh, Chi-Won;Kang, Yeon-Sik;Kang, Sung-Chul;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.14 no.7 / pp.635-641 / 2008
  • In this paper, we use a laser range finder (LRF) to detect both static and dynamic obstacles for the safe navigation of a mobile robot. LRF measurements containing the obstacles' geometric information are first processed to extract characteristic points of the obstacles in the sensor's field of view. The dynamic states of these characteristic points are then approximated with a kinematic model and tracked by associating the measurements using a Probabilistic Data Association Filter. Finally, a collision avoidance algorithm is developed using fuzzy decision making based on the states of the obstacles provided by the proposed tracking algorithm. The performance of the proposed algorithm is evaluated through experiments with a mobile robot.
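
The sketch below illustrates only the forward-velocity side of the approach: the robot's speed is scaled down as the predicted time to collision with any tracked obstacle shrinks. This simple piecewise rule stands in for the paper's fuzzy decision making, and the PDAF tracker is not reproduced; its outputs (obstacle position and velocity) are taken as given, and all thresholds are assumptions.

```python
import numpy as np

def time_to_collision(robot_pos, robot_vel, obs_pos, obs_vel, radius=0.6):
    """Return the time of closest approach if the obstacle will pass within radius, else inf."""
    rel_p = obs_pos - robot_pos
    rel_v = obs_vel - robot_vel
    a = rel_v @ rel_v
    if a < 1e-9:
        return np.inf                            # no relative motion
    t = -(rel_p @ rel_v) / a                     # time of closest approach
    if t < 0:
        return np.inf                            # obstacle is moving away
    closest = np.linalg.norm(rel_p + t * rel_v)
    return t if closest < radius else np.inf

def forward_velocity(v_max, ttc, t_stop=1.0, t_free=4.0):
    """Full speed when collision is far off, zero when it is imminent, linear in between."""
    if ttc <= t_stop:
        return 0.0
    if ttc >= t_free:
        return v_max
    return v_max * (ttc - t_stop) / (t_free - t_stop)
```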

Path Planning for an Intelligent Robot Using Flow Networks (플로우 네트워크를 이용한 지능형 로봇의 경로계획)

  • Kim, Gook-Hwan;Kim, Hyung;Kim, Byoung-Soo;Lee, Soon-Geul
    • The Journal of Korea Robotics Society / v.6 no.3 / pp.255-262 / 2011
  • Many intelligent robots must be given environmental information to perform their tasks. In this paper, an intelligent robot, namely a cleaning robot, obtains this information by fusing two sensors: an LRF and a StarGazer. Through wall following with the laser displacement sensor (LRF), a map of the working area is built while the robot completes one loop around the area. After wall following, a path plan that can execute the work effectively is established using a flow network algorithm. This paper describes an algorithm for minimal-turning complete coverage path planning for intelligent robots. The algorithm divides the whole working area by cellular decomposition and then plans the path among the cells using flow networks. It also plans the path inside each cell so as to guarantee minimal turning of the robot. The proposed algorithm is applied to two different working areas and verified to be an optimal path planning method.
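
To illustrate the intra-cell part of the coverage idea, the sketch below sweeps one rectangular cell in parallel lanes, which keeps the number of turns low; the flow-network ordering of the cells and the cellular decomposition itself are not reproduced, and the cell bounds and lane width are assumptions.

```python
def sweep_cell(x_min, x_max, y_min, y_max, lane_width=0.5):
    """Return waypoints covering a rectangular cell in back-and-forth lanes (boustrophedon)."""
    waypoints, y, left_to_right = [], y_min, True
    while y <= y_max:
        if left_to_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        left_to_right = not left_to_right
        y += lane_width
    return waypoints

# One 4 m x 2 m cell swept with 1 m lanes (assumed dimensions).
print(sweep_cell(0.0, 4.0, 0.0, 2.0, lane_width=1.0))
```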

Pedestrian Detection and Tracking Method for Autonomous Navigation Vehicle using Markov chain Monte Carlo Algorithm (MCMC 방법을 이용한 자율주행 차량의 보행자 탐지 및 추적방법)

  • Hwang, Jung-Won;Kim, Nam-Hoon;Yoon, Jeong-Yeon;Kim, Chang-Hwan
    • The Journal of Korea Robotics Society / v.7 no.2 / pp.113-119 / 2012
  • In this paper, we propose a method that detects moving objects for an autonomous navigation vehicle using LRF sensor data. Object detection and tracking methods are widely used in research areas such as safe driving and safe navigation of autonomous vehicles. The proposed method consists of three steps: data segmentation, mobility classification, and object tracking. To make the raw LRF sensor data useful, an occupancy grid is generated and the raw data is segmented according to its appearance. To classify whether an object is moving or static, trajectory patterns are analyzed. As the last step, a Markov chain Monte Carlo (MCMC) method is used to track the objects. Experimental results indicate that the proposed method can accurately detect moving objects.
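
The sketch below illustrates only the first of the three steps (data segmentation): consecutive LRF returns are grouped into clusters wherever the Euclidean gap between neighbouring points exceeds a threshold. The occupancy grid, the mobility classification, and the MCMC tracking stages are not shown, and the gap threshold and toy scan are assumptions.

```python
import numpy as np

def segment_scan(angles, ranges, gap=0.3):
    """Group consecutive scan points into clusters separated by Euclidean gaps."""
    xy = np.c_[ranges * np.cos(angles), ranges * np.sin(angles)]
    segments, current = [], [xy[0]]
    for prev, curr in zip(xy[:-1], xy[1:]):
        if np.linalg.norm(curr - prev) > gap:    # large jump -> start a new cluster
            segments.append(np.array(current))
            current = []
        current.append(curr)
    segments.append(np.array(current))
    return segments

# Toy scan with two well-separated objects (assumed data).
ang = np.linspace(-0.5, 0.5, 10)
rng = np.r_[np.full(5, 1.0), np.full(5, 3.0)]
print([len(s) for s in segment_scan(ang, rng)])
```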