• Title/Summary/Keyword: LIDAR sensor

Search Results: 108

GENERATION OF AIRBORNE LIDAR INTENSITY IMAGE BY NORMALIZING RANGE DIFFERENCES

  • Shin, Jung-Il;Yoon, Jong-Suk;Lee, Kyu-Sung
    • Proceedings of the KSRS Conference
    • /
    • v.1
    • /
    • pp.504-507
    • /
    • 2006
  • Airborne Lidar technology has been applied to diverse applications thanks to its accurate 3D information. Furthermore, Lidar intensity, the backscattered signal power, can provide additional information about a target's characteristics. Lidar intensity varies with target reflectance, moisture condition, range, and viewing geometry. This study aims to generate a normalized airborne Lidar intensity image that accounts for these influential factors: reflectance, range, and geometric/topographic factors (scan angle, ground height, aspect, slope, and local incidence angle: LIA). Laser points from one flight line were extracted to simplify the geometric conditions. Laser intensities of sample plots, selected using a set of reference data and a ground survey, were then statistically analyzed against the independent variables. Target reflectance, range between sensor and target, and surface slope were the main factors influencing laser intensity. The intensity of the laser points was initially normalized by removing the range effect only; microsite topographic factors, such as slope angle, were not normalized due to the difficulty of automatic calculation.
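The range-only normalization step described in this abstract can be sketched as follows, assuming the usual 1/R² falloff of received laser power; the paper does not publish its exact formula, and the reference range used here is an arbitrary illustrative choice:

```python
import numpy as np

def normalize_intensity(intensity, sensor_range, ref_range=1000.0):
    """Remove the range effect from raw Lidar intensity.

    Assumes received power falls off as 1/R^2, so each raw intensity is
    rescaled to a common reference range ref_range (in metres).
    """
    intensity = np.asarray(intensity, dtype=float)
    sensor_range = np.asarray(sensor_range, dtype=float)
    return intensity * (sensor_range / ref_range) ** 2

# A return measured at 2000 m is boosted 4x relative to the 1000 m reference.
raw = np.array([100.0, 100.0])
rng = np.array([1000.0, 2000.0])
print(normalize_intensity(raw, rng))  # → [100. 400.]
```

With range removed this way, residual intensity variation is left to reflect target reflectance and topographic factors, matching the study's observation that slope effects remain uncorrected.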


A Novel Human Detection Scheme using a Human Characteristics Function in a Low Resolution 2D LIDAR (저해상도 2D 라이다의 사람 특성 함수를 이용한 새로운 사람 감지 기법)

  • Kwon, Seong Kyung;Hyun, Eugin;Lee, Jin-Hee;Lee, Jonghun;Son, Sang Hyuk
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.5
    • /
    • pp.267-276
    • /
    • 2016
  • Human detection technologies are widely used in smart homes and autonomous vehicles. However, in order to detect human, autonomous vehicle researchers have used a high-resolution LIDAR and smart home researchers have applied a camera with a narrow detection range. In this paper, we propose a novel method using a low-cost and low-resolution LIDAR that can detect human fast and precisely without complex learning algorithm and additional devices. In other words, human can be distinguished from objects by using a new human characteristics function which is empirically extracted from the characteristics of a human. In addition, we verified the effectiveness of the proposed algorithm through a number of experiments.
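One simple reading of this approach can be sketched as clustering a 2D scan by range discontinuities and keeping clusters whose width is plausible for a person; the gap threshold and width bounds below are illustrative stand-ins for the paper's empirically derived human characteristics function, not its actual parameters:

```python
import numpy as np

def detect_human_candidates(angles_rad, ranges_m,
                            gap_thresh=0.3, width_range=(0.2, 0.8)):
    """Cluster a 2D Lidar scan at range jumps larger than gap_thresh (m)
    and keep clusters whose chord width could belong to a human torso."""
    angles_rad = np.asarray(angles_rad, dtype=float)
    ranges_m = np.asarray(ranges_m, dtype=float)
    breaks = np.where(np.abs(np.diff(ranges_m)) > gap_thresh)[0] + 1
    clusters = np.split(np.arange(len(ranges_m)), breaks)
    candidates = []
    for idx in clusters:
        if len(idx) < 2:
            continue
        x = ranges_m[idx] * np.cos(angles_rad[idx])
        y = ranges_m[idx] * np.sin(angles_rad[idx])
        width = np.hypot(x[-1] - x[0], y[-1] - y[0])  # chord across cluster
        if width_range[0] <= width <= width_range[1]:
            candidates.append(idx)
    return candidates

# Synthetic scan: a distant wall at 10 m with a ~0.5 m wide object at 3 m.
angles = np.linspace(-0.5, 0.5, 101)
ranges = np.full_like(angles, 10.0)
ranges[np.abs(angles) < 0.08] = 3.0
print(len(detect_human_candidates(angles, ranges)))  # → 1
```

The wide wall segments fail the width test while the narrow close object passes, which is the intuition behind separating humans from background without a learned model.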

Analysis of Traversable Candidate Region for Unmanned Ground Vehicle Using 3D LIDAR Reflectivity (3D LIDAR 반사율을 이용한 무인지상차량의 주행가능 후보 영역 분석)

  • Kim, Jun;Ahn, Seongyong;Min, Jihong;Bae, Keunsung
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.41 no.11
    • /
    • pp.1047-1053
    • /
    • 2017
  • The range data acquired by a 2D/3D LIDAR, a core sensor for autonomous navigation of an unmanned ground vehicle, is effectively used for ground modeling and obstacle detection. Within the ambiguous boundaries of a road environment, however, LIDAR does not provide enough information to analyze the traversable region. This paper presents a new method to analyze a candidate area using the characteristics of LIDAR reflectivity for better detection of a traversable region. We detected a candidate traversable area in the front zone of the vehicle using a learning process on LIDAR reflectivity, after calibrating the reflectivity of each channel. We validated the proposed method of candidate traversable region detection by performing experiments in the real operating environment of the unmanned ground vehicle.
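The per-channel reflectivity calibration step can be illustrated with a simple statistics-matching sketch; the paper's actual calibration procedure is not given here, and matching each channel's mean and standard deviation to a reference channel is an assumed stand-in:

```python
import numpy as np

def calibrate_channels(reflectivity, channel_ids, ref_channel=0):
    """Align each Lidar channel's reflectivity statistics to a reference
    channel so that the same surface yields comparable values across
    channels (mean/std matching as an illustrative calibration)."""
    r = np.asarray(reflectivity, dtype=float)
    channel_ids = np.asarray(channel_ids)
    out = r.copy()
    ref = r[channel_ids == ref_channel]
    for ch in np.unique(channel_ids):
        sel = channel_ids == ch
        out[sel] = (r[sel] - r[sel].mean()) / r[sel].std() * ref.std() + ref.mean()
    return out

# Channel 1 reads systematically 100 units higher than channel 0;
# after calibration both channels share the same statistics.
refl = np.array([10.0, 20.0, 30.0, 110.0, 120.0, 130.0])
ch = np.array([0, 0, 0, 1, 1, 1])
print(calibrate_channels(refl, ch))
```

Once channel biases are removed like this, reflectivity statistics of the road surface can be learned and compared against new scans to flag traversable candidates.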

A Study on Displacement Measurement Hardware of Retaining Walls based on Laser Sensor for Small and Medium-sized Urban Construction Sites

  • Kim, Jun-Sang;Kim, Jung-Yeol;Kim, Young-Suk
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1250-1251
    • /
    • 2022
  • Measuring management is an important part of preventing the collapse of retaining walls by evaluating their stability in advance with a variety of measuring instruments. The current work of measuring management requires considerable human and material resources, since measurement companies need to install measuring instruments at various places on the retaining wall and visit the construction site to collect measurement data and evaluate the wall's stability. Our investigation found that the current work of measuring management is poorly applicable at small and medium-sized urban construction sites (excavation depth < 10 m), where measuring management is not mandatory. Therefore, the purpose of this study is to develop laser-sensor-based hardware to support wall displacement measurements, together with control software, applicable to small and medium-sized urban construction sites. A 2D lidar sensor, which is more economical than a 3D laser scanner, is applied as the element technology. The hardware is mounted on the corner strut of the retaining wall and collects point cloud data of the wall by rotating the 2D lidar sensor 360° with a servo motor. Point cloud data collected by the hardware can be transmitted over Wi-Fi to a displacement analysis device (notebook). The hardware control software is designed to control the 2D lidar sensor and servo motor from the displacement analysis device by remote access. The process of analyzing the displacement of a retaining wall using the developed hardware and software is as follows: the construction site manager uses the displacement analysis device to 1) collect the initial point cloud data; after a certain period, 2) comparative point cloud data is collected; and 3) the distance between the initial and comparison point cloud data is calculated.
As a result of an indoor experiment, the analyses show that a displacement of approximately 15 mm can be identified. In the future, the integrated system of the hardware designed here and the displacement analysis software to be developed can be applied to small and medium-sized urban construction sites after several field experiments. Effective management of retaining wall displacement is thus possible, compared with the current measuring management work, in terms of ease of installation, dismantlement, displacement measurement, and economic feasibility.
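Step 3) of the process above, comparing the initial and later point clouds, can be sketched as a nearest-neighbour distance computation; the brute-force search and the flat-wall example below are illustrative assumptions (a KD-tree would replace the pairwise matrix for full-size scans):

```python
import numpy as np

def wall_displacement(initial_pts, later_pts):
    """For each point in the later scan, find the distance to its nearest
    point in the initial scan; the maximum approximates wall displacement."""
    initial_pts = np.asarray(initial_pts, dtype=float)
    later_pts = np.asarray(later_pts, dtype=float)
    # Pairwise distance matrix of shape (n_later, n_initial).
    d = np.linalg.norm(later_pts[:, None, :] - initial_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest, nearest.max()

# Hypothetical example: a flat 1 m x 1 m wall shifted 15 mm along its normal.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5)), -1).reshape(-1, 2)
wall0 = np.column_stack([np.zeros(len(grid)), grid])        # x = 0 plane
wall1 = np.column_stack([np.full(len(grid), 0.015), grid])  # shifted 15 mm
_, max_disp = wall_displacement(wall0, wall1)
print(round(max_disp * 1000, 1))  # → 15.0 (millimetres)
```

This matches the scale of the abstract's indoor result, where displacements of roughly 15 mm were identifiable.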


Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1099-1110
    • /
    • 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a lidar sensor is currently active. However, deep learning models are vulnerable to adversarial attacks through modulation of the input data. Attacks on existing multi-sensor-based autonomous driving recognition systems focus on suppressing obstacle detection by lowering the confidence score of the object recognition model, but they are limited in that the attack works only on the target model. For attacks on the sensor fusion stage, errors can cascade into the vision tasks performed after fusion, and this risk needs to be considered. In addition, an attack on the LIDAR's point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input lidar points. In attack performance experiments across scaling sizes, the attack caused fusion errors of more than 77% on average.
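Why scaling the lidar points is hard to notice yet harmful to calibration can be illustrated with a pinhole projection: uniformly scaling camera-frame points leaves their pixel locations unchanged while distorting the depth map a fusion model consumes. The intrinsics and scale factor below are hypothetical, and this is only one simplified reading of the paper's attack, not its exact algorithm:

```python
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])  # hypothetical camera intrinsics

def project(points_xyz, K):
    """Pinhole projection of camera-frame points to pixels, plus depth."""
    uvw = (K @ points_xyz.T).T
    return uvw[:, :2] / uvw[:, 2:3], points_xyz[:, 2]

pts = np.array([[1.0, 0.5, 10.0], [-2.0, 1.0, 20.0]])
uv0, depth0 = project(pts, K)
uv1, depth1 = project(pts * 1.3, K)  # scaled (attacked) point cloud

# Pixel locations are identical, but the depth the fusion model sees is
# inflated by 30%, which misleads the calibration regression.
print(np.allclose(uv0, uv1), depth1 / depth0)
```

Because the scaled cloud projects to the same pixels, visual inspection of the lidar overlay gives no obvious cue that an attack took place.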

Aerosol Direct Radiative Forcing by Three Dimensional Observations from Passive- and Active- Satellite Sensors (수동형-능동형 위성센서 관측자료를 이용한 대기 에어러솔의 3차원 분포 및 복사강제 효과 산정)

  • Lee, Kwon-Ho
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.28 no.2
    • /
    • pp.159-171
    • /
    • 2012
  • An aerosol direct radiative forcing (ADRF) retrieval method was developed by combining data from passive and active satellite sensors. Aerosol optical thickness (AOT) retrieved from the Moderate Resolution Imaging Spectroradiometer (MODIS), a passive visible sensor, and aerosol vertical profiles from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), an active laser sensor, were investigated for their applicability. In particular, space-borne Light Detection and Ranging (Lidar) observation provides specific knowledge of the optical properties of atmospheric aerosols with spatial, temporal, vertical, and spectral resolution. On the basis of extensive radiative transfer modeling, it is demonstrated that the estimation of ADRF is sensitive to the aerosol vertical profile used. Throughout the investigation of the relationship between aerosol height and ADRF, the mean change rate of ADRF per 1 km increase in aerosol height is smaller at the surface than at the top of atmosphere (TOA). As a case study, satellite data for the Asian dust day of March 31, 2007 were used to estimate ADRF. The resulting ADRF values were compared with those retrieved independently from MODIS data only. The absolute differences are 1.27% at the surface level and 4.73% at TOA.

Simulation of Ladar Range Images based on Linear FM Signal Analysis (Linear FM 신호분석을 통한 Ladar Range 영상의 시뮬레이션)

  • Min, Seong-Hong;Kim, Seong-Joon;Lee, Im-Pyeong
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.16 no.2
    • /
    • pp.87-95
    • /
    • 2008
  • Ladar (Laser Detection And Ranging; Lidar) is a sensor that acquires precise distances to the surfaces of a target region using laser signals, which has recently been applied to ATD (Automatic Target Detection) for guided missiles or aerial vehicles. It provides a range image in which each measured distance is expressed as the brightness of the corresponding pixel. Since precise 3D models can be generated from a Ladar range image, more robust identification and recognition of targets becomes possible. If we simulate the data of a Ladar sensor, the simulator can be used efficiently to design and develop Ladar sensors and systems and to develop data processing algorithms. The purposes of this study are thus to simulate the signals of a Ladar sensor based on linear frequency modulation and to create range images from the simulated signals. We first simulated the laser signals of a Ladar using an FM chirp modulator, then computed the distances from the sensor to a target through FFT processing of the simulated signals. Finally, we created the range image from the computed distances.
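The chirp-and-FFT ranging principle can be sketched with a minimal FMCW simulation: mixing the transmitted and received chirps yields a beat tone whose frequency is proportional to range, f_beat = 2RS/c with chirp slope S. All waveform parameters below are illustrative assumptions, not the paper's sensor specification:

```python
import numpy as np

# Hypothetical linear-FM (FMCW) parameters
c = 3e8            # speed of light, m/s
B = 150e6          # chirp bandwidth, Hz
T = 1e-3           # chirp duration, s
S = B / T          # chirp slope, Hz/s
fs = 4e6           # sampling rate of the beat signal, Hz
R_true = 1500.0    # target range, m

tau = 2 * R_true / c                 # round-trip delay
t = np.arange(0, T, 1 / fs)
# Mixing transmitted and received chirps leaves a beat tone at S * tau.
beat = np.cos(2 * np.pi * S * tau * t)

spec = np.abs(np.fft.rfft(beat))
f_beat = np.fft.rfftfreq(len(t), 1 / fs)[np.argmax(spec)]
R_est = f_beat * c / (2 * S)         # invert f_beat = 2*R*S/c
print(round(R_est, 1))               # → 1500.0
```

Repeating this per scan direction and writing each estimated distance as a pixel brightness produces the range image described in the abstract.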


Calibration of VLP-16 Lidar Sensor and Vision Cameras Using the Center Coordinates of a Spherical Object (구형물체의 중심좌표를 이용한 VLP-16 라이다 센서와 비전 카메라 사이의 보정)

  • Lee, Ju-Hwan;Lee, Geun-Mo;Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.2
    • /
    • pp.89-96
    • /
    • 2019
  • 360-degree 3-dimensional lidar sensors and vision cameras are commonly used in the development of autonomous driving techniques for automobiles, drones, etc. However, existing calibration techniques for obtaining the external transformation between the lidar and the camera sensors have the disadvantage that special calibration objects are used or the object size is too large. In this paper, we introduce a simple calibration method between the two sensors using a spherical object. We calculated the sphere center coordinates using four 3D points selected by RANSAC from the range data of the sphere. The 2-dimensional coordinates of the object center in the camera image are also detected to calibrate the two sensors. Even when the range data is acquired from various angles, the image of the spherical object always maintains a circular shape. The proposed method results in a reprojection error of about 2 pixels, and its performance is analyzed by comparison with existing methods.
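The core geometric step, recovering a sphere centre from four range points inside each RANSAC iteration, reduces to a small linear system: subtracting |x − c|² = r² for point pairs eliminates the radius r and leaves three linear equations in the centre c. A minimal sketch:

```python
import numpy as np

def sphere_center_from_4_points(p):
    """Centre of the sphere through four non-coplanar 3D points.

    From |p_i - c|^2 = r^2, subtracting the equation for p_0 gives
    2 (p_i - p_0) . c = |p_i|^2 - |p_0|^2, a 3x3 linear system in c.
    """
    p = np.asarray(p, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    return np.linalg.solve(A, b)

# Four points on a unit sphere centred at (1, 2, 3).
pts = np.array([[2, 2, 3], [0, 2, 3], [1, 3, 3], [1, 2, 4]])
print(sphere_center_from_4_points(pts))  # → [1. 2. 3.]
```

RANSAC would repeatedly sample four lidar points on the sphere, solve this system, and keep the centre supported by the most inliers before pairing it with the detected image-circle centre.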

A Study on Improvement of Pedestrian Care System for Cooperative Automated Driving (자율협력주행을 위한 보행자Care 시스템 개선에 관한 연구)

  • Lee, Sangsoo;Kim, Jonghwan;Lee, Sunghwa;Kim, Jintae
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.2
    • /
    • pp.111-116
    • /
    • 2021
  • This study concerns improving the pedestrian Care system, which delivers jaywalking events in real time to the autonomous driving control center and to autonomous vehicles in operation, and which issues warnings and announcements to pedestrians based on pedestrian signals. To secure the reliability of the pedestrian Care system's object detection, an inspection method combining a camera sensor with the Lidar sensor and an improved system algorithm are presented. In addition, for events generated by the Lidar sensors and intelligent CCTV and received during the operation of autonomous vehicles, a system algorithm is presented for eliminating overlapping events and improving the accuracy of matching the same time, place, and object.
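The overlapping-event elimination described above can be sketched as merging reports that agree on object type within time and distance tolerances; the tolerances and event fields below are illustrative assumptions, not the system's actual parameters:

```python
import math

def deduplicate_events(events, time_tol=1.0, dist_tol=2.0):
    """Merge overlapping pedestrian events from the Lidar sensor and the
    intelligent CCTV: an event is dropped when an already-accepted event
    of the same object type lies within time_tol seconds and dist_tol
    metres of it."""
    accepted = []
    for ev in sorted(events, key=lambda e: e["t"]):
        dup = any(ev["obj"] == a["obj"]
                  and abs(ev["t"] - a["t"]) <= time_tol
                  and math.hypot(ev["x"] - a["x"], ev["y"] - a["y"]) <= dist_tol
                  for a in accepted)
        if not dup:
            accepted.append(ev)
    return accepted

# A Lidar report and a CCTV report of the same jaywalker collapse into one.
events = [
    {"t": 10.0, "x": 5.0, "y": 1.0, "obj": "pedestrian"},   # Lidar
    {"t": 10.4, "x": 5.6, "y": 1.2, "obj": "pedestrian"},   # CCTV, same person
    {"t": 30.0, "x": 40.0, "y": 2.0, "obj": "pedestrian"},  # separate event
]
print(len(deduplicate_events(events)))  # → 2
```

Only deduplicated events would then be forwarded to the control center and to vehicles, avoiding repeated warnings for a single pedestrian.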

Localization Using 3D-Lidar Based Road Reflectivity Map and IPM Image (3D-Lidar 기반 도로 반사도 지도와 IPM 영상을 이용한 위치추정)

  • Jung, Tae-Ki;Song, Jong-Hwa;Im, Jun-Hyuck;Lee, Byung-Hyun;Jee, Gyu-In
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.12
    • /
    • pp.1061-1067
    • /
    • 2016
  • The vehicle's position is essential for autonomous navigation. However, GPS position errors arise from multipath caused by tall buildings in downtown areas. In this paper, the GPS position error is corrected by using a camera sensor and a highly accurate map made with a 3D-Lidar. The input image is converted into a top-view image through inverse perspective mapping, and map matching is performed against the map, which stores 3D-Lidar intensity. Performance was compared between this method and the traditional approach, which performs map matching with the input image after converting the map into a pinhole camera image. As a result, the longitudinal error declined by 49% and the complexity by 90%.
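The inverse perspective mapping step can be sketched as intersecting each pixel's viewing ray with the road plane to obtain its ground coordinates, which is how the top-view image is formed. The intrinsics, camera height, and pitch below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])   # hypothetical camera intrinsics

def ipm_ground_point(u, v, K, cam_height=1.5, pitch=np.deg2rad(30.0)):
    """Map a pixel to road-plane coordinates (inverse perspective mapping).

    World frame: X forward, Y left, Z up, ground at Z = 0; the camera sits
    at height cam_height and is tilted down by `pitch` radians.
    """
    s, c = np.sin(pitch), np.cos(pitch)
    # Columns: camera x (right), y (down), z (forward) axes in world coords.
    R = np.array([[0.0, -s, c],
                  [-1.0, 0.0, 0.0],
                  [0.0, -c, -s]])
    d = R @ np.linalg.solve(K, np.array([u, v, 1.0]))  # ray direction
    t = cam_height / -d[2]          # ray parameter where the ray hits Z = 0
    o = np.array([0.0, 0.0, cam_height])
    return (o + t * d)[:2]          # ground (X, Y) under the pixel

# The principal point maps to a ground point h/tan(pitch) ahead (~2.598 m).
print(np.round(ipm_ground_point(320.0, 240.0, K), 3))
```

Resampling lidar-intensity map tiles and the IPM image into the same ground grid is what allows the direct intensity correlation that replaces pinhole-rendered map matching.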