Title/Summary/Keyword: Lidar Reflectivity


Analysis of Traversable Candidate Region for Unmanned Ground Vehicle Using 3D LIDAR Reflectivity (3D LIDAR 반사율을 이용한 무인지상차량의 주행가능 후보 영역 분석)

  • Kim, Jun; Ahn, Seongyong; Min, Jihong; Bae, Keunsung
    • Transactions of the Korean Society of Mechanical Engineers A / v.41 no.11 / pp.1047-1053 / 2017
  • The range data acquired by 2D/3D LIDAR, a core sensor for autonomous navigation of an unmanned ground vehicle, is effectively used for ground modeling and obstacle detection. In a road environment with ambiguous boundaries, however, range data alone does not provide enough information to identify the traversable region. This paper presents a new method that uses the characteristics of LIDAR reflectivity to better detect a traversable region. After calibrating the reflectivity of each channel, we detected candidate traversable areas in the zone in front of the vehicle through a learning process on LIDAR reflectivity. We validated the proposed candidate-region detection method through experiments in the real operating environment of the unmanned ground vehicle.
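
The calibrate-then-threshold idea described above can be sketched as follows. This is a minimal illustration, not the paper's method: the per-channel gains, learned road reflectivity statistics, and threshold factor `k` are all assumed values for demonstration.

```python
import numpy as np

def calibrate_channels(reflect, channel, gains):
    """Scale each LIDAR point's raw reflectivity by a per-channel gain so
    that all channels report comparable values for the same surface."""
    return reflect * gains[channel]

def traversable_candidates(reflect, channel, gains, road_mean, road_std, k=2.0):
    """Flag points whose calibrated reflectivity lies within k standard
    deviations of the learned road reflectivity as traversable candidates."""
    cal = calibrate_channels(reflect, channel, gains)
    return np.abs(cal - road_mean) <= k * road_std

# Toy data: four points from two LIDAR channels with different sensitivities.
reflect = np.array([50.0, 55.0, 120.0, 48.0])   # raw reflectivity readings
channel = np.array([0, 1, 0, 1])                # channel index per point
gains = np.array([1.0, 0.9])                    # assumed calibration gains

# Road statistics would come from a learning phase; here they are made up.
mask = traversable_candidates(reflect, channel, gains,
                              road_mean=50.0, road_std=5.0)
print(mask)  # the high-reflectivity point (e.g. a painted marker) is rejected
```

In practice the road statistics would be learned online from points directly in front of the vehicle, as the abstract describes.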

3D LIDAR Based Vehicle Localization Using Synthetic Reflectivity Map for Road and Wall in Tunnel

  • Im, Jun-Hyuck; Im, Sung-Hyuck; Song, Jong-Hwa; Jee, Gyu-In
    • Journal of Positioning, Navigation, and Timing / v.6 no.4 / pp.159-166 / 2017
  • The position of an autonomous driving vehicle is basically acquired through the Global Positioning System (GPS). GPS signals, however, cannot be received in tunnels, so the vehicle must localize itself using its on-board sensors. In particular, a 3D Light Detection and Ranging (LIDAR) system can be used to correct the longitudinal position error. Tunnels offer few feature points or structures usable for localization: lanes are normally marked with solid lines, so they cannot indicate longitudinal position, and only a small number of structures, such as sign boards or jet fans, stand apart from the tunnel walls. It is therefore necessary to extract other usable information from the tunnel. In this paper, fire hydrants and evacuation guide lights attached to both tunnel walls were used to recognize the longitudinal position. These structures have reflectivity highly distinct from the surrounding walls and can be distinguished in LIDAR reflectivity data. The wall reflectivity information was then fused with the road-surface reflectivity map to generate a synthetic reflectivity map. Using this synthetic map, the vehicle was localized by correlation matching against local maps generated from the current LIDAR data. Experiments were conducted on an expressway section including Maseong Tunnel (approximately 1.5 km long). The results showed root mean square (RMS) position errors of 0.19 m laterally and 0.35 m longitudinally, demonstrating precise localization accuracy.
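
The correlation-matching step can be illustrated with a simplified 1-D sketch: a local reflectivity profile built from current LIDAR data is slid along the synthetic map profile, and the best-matching offset yields the longitudinal correction. The landmark positions below (mimicking high-reflectivity wall fixtures such as evacuation lights) are invented for illustration and are not from the paper.

```python
import numpy as np

def longitudinal_offset(map_profile, local_profile):
    """Slide the local reflectivity profile along the map profile and
    return the offset (in map cells) with the highest correlation score."""
    n = len(local_profile)
    scores = [np.dot(map_profile[i:i + n], local_profile)
              for i in range(len(map_profile) - n + 1)]
    return int(np.argmax(scores))

# Synthetic map: isolated high-reflectivity peaks stand in for wall fixtures.
map_profile = np.zeros(100)
map_profile[[20, 45, 80]] = 1.0        # assumed landmark positions (cells)

# Local profile as the vehicle would observe it starting at map cell 40:
# peaks appear at local indices 5 and 40 (map cells 45 and 80).
local_profile = np.zeros(45)
local_profile[[5, 40]] = 1.0

offset = longitudinal_offset(map_profile, local_profile)
print(offset)  # recovered longitudinal position of the local window: 40
```

The real system performs this matching in 2-D over the fused road-and-wall reflectivity map rather than a 1-D profile, but the principle is the same.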

Localization Using 3D-Lidar Based Road Reflectivity Map and IPM Image (3D-Lidar 기반 도로 반사도 지도와 IPM 영상을 이용한 위치추정)

  • Jung, Tae-Ki; Song, Jong-Hwa; Im, Jun-Hyuck; Lee, Byung-Hyun; Jee, Gyu-In
    • Journal of Institute of Control, Robotics and Systems / v.22 no.12 / pp.1061-1067 / 2016
  • An accurate vehicle position is essential for autonomous navigation, but GPS position errors arise in downtown areas due to multipath caused by tall buildings. In this paper, the GPS position error is corrected using a camera sensor and a highly accurate map built with a 3D-Lidar. The input image is converted into a top-view image through inverse perspective mapping and then matched against a map containing 3D-Lidar intensity values. Performance was compared with the traditional approach, which converts the map into a pinhole-camera image and matches it against the input image. As a result, the longitudinal error decreased by 49% and the computational complexity by 90%.
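
The core of inverse perspective mapping is a projective transform that maps image pixels onto the ground plane, producing the top-view used for matching. A minimal sketch, assuming a hypothetical homography `H` (the paper's actual camera calibration is not given here):

```python
import numpy as np

def ipm_point(H, u, v):
    """Project an image pixel (u, v) onto ground-plane coordinates using a
    3x3 homography H, as in inverse perspective mapping (IPM)."""
    p = H @ np.array([u, v, 1.0])   # apply transform in homogeneous coords
    return p[:2] / p[2]             # divide out the projective scale

# Hypothetical homography: rows farther down the image (larger v) are
# compressed, roughly modeling a forward-tilted camera viewing a flat road.
H = np.array([[1.0, 0.00, 0.0],
              [0.0, 1.00, 0.0],
              [0.0, 0.01, 1.0]])

ground = ipm_point(H, 100.0, 100.0)
print(ground)  # pixel (100, 100) lands at ground coordinates (50, 50)
```

In a real system `H` is derived from the camera's intrinsic and extrinsic calibration, and the transform is applied to every pixel to produce the full top-view image before map matching.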