• Title/Summary/Keyword: LiDAR intensity

GENERATION OF AIRBORNE LIDAR INTENSITY IMAGE BY NORMALIZING RANGE DIFFERENCES

  • Shin, Jung-Il;Yoon, Jong-Suk;Lee, Kyu-Sung
    • Proceedings of the KSRS Conference
    • /
    • v.1
    • /
    • pp.504-507
    • /
    • 2006
  • Airborne LiDAR technology has been applied to diverse applications owing to its accurate 3D information. In addition, LiDAR intensity, the backscattered signal power, can provide additional information on a target's characteristics. LiDAR intensity varies with target reflectance, moisture condition, range, and viewing geometry. This study aims to generate a normalized airborne LiDAR intensity image that accounts for these influential factors: reflectance, range, and geometric/topographic factors (scan angle, ground height, aspect, slope, and local incidence angle: LIA). Laser points from one flight line were extracted to simplify the geometric conditions. Laser intensities of sample plots, selected using a set of reference data and a ground survey, were then statistically analyzed against the independent variables. Target reflectance, range between sensor and target, and surface slope were the main factors influencing laser intensity. The intensity of the laser points was initially normalized by removing the range effect only. However, microsite topographic factors, such as slope angle, were not normalized due to the difficulty of computing them automatically.
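
The range-only normalization this abstract describes can be sketched as follows. This is a minimal illustration assuming a simple R² power-falloff model; the 1000 m reference range and the sample values are hypothetical, not taken from the paper:

```python
import math

def normalize_intensity(intensity, target_range, reference_range=1000.0):
    """Remove the range effect from a raw LiDAR intensity value.

    Assumes received power falls off with the square of the range
    (a common first-order model), so intensity is rescaled to what
    it would be at a fixed reference range.
    """
    return intensity * (target_range / reference_range) ** 2

# hypothetical returns: (raw intensity, slant range in metres)
points = [(120.0, 950.0), (120.0, 1100.0)]
normalized = [normalize_intensity(i, r) for i, r in points]
```

Equal raw intensities at different ranges map to different normalized values, since the farther return lost more power in flight.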

A Method of Extracting Features of Sensor-only Facilities for Autonomous Cooperative Driving

  • Hyung Lee;Chulwoo Park;Handong Lee;Sanyeon Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.12
    • /
    • pp.191-199
    • /
    • 2023
  • In this paper, we propose a method to extract the features of five sensor-only facilities, built as infrastructure for autonomous cooperative driving, from point cloud data acquired by LiDAR. Image sensors installed in autonomous vehicles produce inconsistent data depending on the climatic environment and camera characteristics, so a LiDAR sensor was adopted to replace them. In addition, high-intensity reflectors were designed and attached to each facility to make it easy to distinguish from other existing facilities with LiDAR. From the five sensor-only facilities developed and the point cloud data acquired by the data acquisition system, feature points were extracted based on the average reflective intensity of the high-intensity reflective paper attached to each facility, clustered by the DBSCAN method, and converted to two-dimensional coordinates by a projection method. The features of each facility at each distance consist of three-dimensional point coordinates, two-dimensional projected coordinates, and reflection intensity, and will be used as training data for a facility-recognition model to be developed in the future.
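
The intensity-threshold-then-cluster step can be illustrated with a minimal DBSCAN sketch. The threshold, `eps`, and `min_pts` values are illustrative assumptions, not the paper's parameters, and a production implementation would use a spatial index rather than brute-force neighbor search:

```python
import math

def dbscan(points, eps=0.5, min_pts=3):
    """Minimal DBSCAN: returns a cluster label per point (-1 = noise)."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        # brute-force neighborhood query (a KD-tree would be used in practice)
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisional noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise promoted to border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:   # core point: expand the cluster
                seeds.extend(jn)
    return labels

def cluster_high_intensity(returns, threshold=200):
    """Keep only high-reflectance returns, project to 2D, then cluster.
    returns: list of (x, y, intensity); z is dropped by the projection."""
    pts = [(x, y) for x, y, inten in returns if inten >= threshold]
    return pts, dbscan(pts)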

Land Use Classification in Very High Resolution Imagery by Data Fusion (영상 융합을 통한 고해상도 위성 영상의 토지 피복 분류)

  • Seo, Min-Ho;Han, Dong-Yeob;Kim, Yong-Il
    • Proceedings of the Korea Spatial Information System Society Conference
    • /
    • 2005.11a
    • /
    • pp.17-22
    • /
    • 2005
  • Generally, pixel-based classification, which exploits the similarity of pixel values in feature space, is applied to land use mapping with satellite remote sensing data. This method, however, is poorly suited to very high resolution satellite (VHRS) data because of the complexity of the spatial structure and the variety of pixel values. In this paper, we performed a hierarchical classification of VHRS imagery by data fusion, integrating LiDAR height and intensity information. MLC and ISODATA methods were applied to IKONOS-2 imagery with and without LiDAR data prior to the hierarchical classification, and the results were evaluated. In conclusion, the hierarchical method with LiDAR data was superior to the others for VHRS imagery, and both MLC and ISODATA classification performed better with LiDAR data than without.
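
The MLC step can be sketched as per-class Gaussian likelihood maximization. This simplified version uses a diagonal covariance (a full MLC uses the full covariance matrix), and the class names and feature layout are illustrative assumptions:

```python
import math

def train_mlc(samples):
    """Fit a per-class Gaussian with diagonal covariance (simplified MLC).
    samples: {class_name: [feature vectors]}, where a feature vector
    might combine an image band value with LiDAR height or intensity."""
    params = {}
    for cls, vecs in samples.items():
        n, d = len(vecs), len(vecs[0])
        mean = [sum(v[k] for v in vecs) / n for k in range(d)]
        # small floor keeps the variance strictly positive
        var = [sum((v[k] - mean[k]) ** 2 for v in vecs) / n + 1e-6
               for k in range(d)]
        params[cls] = (mean, var)
    return params

def classify(params, x):
    """Assign the class with the highest Gaussian log-likelihood."""
    def log_like(mean, var):
        return sum(-0.5 * (math.log(2 * math.pi * s) + (xi - m) ** 2 / s)
                   for xi, m, s in zip(x, mean, var))
    return max(params, key=lambda c: log_like(*params[c]))
```

Fusing LiDAR height into the feature vector is what lets classes with similar spectra but different elevations (e.g. grass vs. tree canopy) separate.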

Extracting Road Points from LiDAR Data for Urban Area (도심지역 LiDAR자료로부터 도로포인트 추출기법 연구)

  • Jang, Young Woon;Choi, Yun Woong;Cho, Gi Sung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.2D
    • /
    • pp.269-276
    • /
    • 2008
  • Road network databases have become a key component of many aspects of modern life, including transportation, facility management, security, disaster assessment, and urban planning. Constructing such data, however, is expensive and labor-intensive. This study proposes a classification method that discriminates between road and building points using entropy theory, and then detects candidate road classes from the classified point groups using the standard reflectance intensity of roads and road characteristics restricted by law. The main objective of this study is thus to develop a method that can detect roads in urban areas using LiDAR data alone.
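
One common way to apply entropy to this discrimination is to score the height distribution of a local neighborhood: flat road patches are homogeneous (low entropy), while patches mixing ground and building returns are not. The bin width and the sample heights below are illustrative assumptions, not the paper's parameters:

```python
import math
from collections import Counter

def shannon_entropy(values, bin_width=0.5):
    """Shannon entropy of a binned value distribution (e.g. point
    heights within a local window). Returns 0 for a homogeneous patch."""
    bins = Counter(int(v / bin_width) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

road_patch = [0.1, 0.2, 0.15, 0.1, 0.2]    # near-flat ground heights (m)
mixed_patch = [0.1, 8.2, 0.2, 7.9, 12.4]   # ground plus building returns
```

Thresholding this score per window, then checking the road's reflectance intensity, gives the two-stage filtering the abstract outlines.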

Flood Simulation by using High Quality Geo-spatial Information (고품질 지형공간정보를 이용한 홍수 시뮬레이션)

  • Lee, Hyun-Jik;Hong, Sung-Hwan
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.18 no.3
    • /
    • pp.97-104
    • /
    • 2010
  • The important factors in a flood simulation are hydrologic data (such as the rainfall and its intensity), a three-dimensional terrain model, and the hydrologic inundation calculation matrix. Should any of these factors lack accuracy, flood prediction data becomes unreliable and imprecise. The three-dimensional terrain model is constructed based on existing digital maps, current map updates, and airborne LiDAR data. This research analyzes and offers ways to improve the model's accuracy by comparing flood-prone areas selected according to existing flood-location records and the design frequency.

A Study of LiDAR's Detection Performance Degradation in Fog and Rain Climate (안개 및 강우 상황에서의 LiDAR 검지 성능 변화에 대한 연구)

  • Kim, Ji yoon;Park, Bum jin
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.2
    • /
    • pp.101-115
    • /
    • 2022
  • This study compared the object-detection performance of LiDAR in adverse weather with that in clear weather. An experiment reproducing adverse weather divided fog visibility into four stages from 200 m down to 50 m, and controlled rainfall at 20 mm/h and 50 mm/h. The number of point cloud returns and their intensity were used as performance indicators, and performance differences were examined statistically with a t-test. The results indicate that LiDAR performance decreased, in order, under 20 mm/h rainfall, fog visibility below 200 m, 50 mm/h rainfall, fog visibility below 150 m, fog visibility below 100 m, and fog visibility below 50 m. The performance loss was larger at greater measurement distances and for black targets rather than white ones. For white targets, however, there was no performance difference at a measurement distance of 10 m even at 50 m fog visibility, the worst condition in this experiment, and this absence of a difference was statistically confirmed. These verification results are expected to inform the future design of road facilities that improve sensor visibility.
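
The statistical comparison can be sketched as a two-sample t statistic over per-frame point counts. The abstract does not state which t-test variant was used; the unequal-variance (Welch) form is assumed here, and the sample counts are invented for illustration:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples, e.g. per-frame
    LiDAR point counts in clear weather vs. heavy rain."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

clear = [1000, 1010, 990, 1005]   # hypothetical point counts per frame
rain = [700, 710, 690, 705]
```

A large |t| (compared against the t distribution at the Welch-Satterthwaite degrees of freedom) indicates the weather condition significantly changed the return count.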

A study on the classifying vehicles for traffic flow analysis using LiDAR DATA

  • Heo J.Y.;Choi J.W.;Kim Y.I.;Yu K.Y.
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.633-636
    • /
    • 2004
  • Airborne laser scanning technology has been studied for many applications, such as DSM (Digital Surface Model) generation, building extraction, and 3D virtual city modeling. In this paper, we evaluate the potential of airborne laser scanning for transportation applications, especially for recognizing moving vehicles on roads. First, the road regions are segmented from the full LiDAR data using a GIS map and the intensity image. Second, the segmented regions are divided into road and vehicle points using a local window-based height threshold. Finally, the vehicles are classified into several types by the MDC (Minimum Distance Classification) method using the vehicles' geometric information: height, length, width, etc.
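
The MDC step reduces to a nearest-mean rule over the (height, length, width) feature vector. The class prototypes below are invented for illustration; in the paper they would come from labeled training vehicles:

```python
import math

# Illustrative class mean vectors (height, length, width) in metres.
VEHICLE_MEANS = {
    "passenger car": (1.5, 4.5, 1.8),
    "bus": (3.2, 11.0, 2.5),
    "truck": (3.5, 8.0, 2.4),
}

def mdc(feature):
    """Minimum Distance Classification: assign the class whose mean
    feature vector is nearest in Euclidean distance."""
    return min(VEHICLE_MEANS,
               key=lambda c: math.dist(feature, VEHICLE_MEANS[c]))
```

Unlike MLC, MDC ignores per-class variance, which keeps it cheap for per-vehicle classification over large scan strips.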

3D GIS Modelling Using Airborne Integrated Rapid Mapping System (AIR-MS(Airborne Integrated Rapid Mapping System)를 이용한 3D GIS 모델링)

  • Sohn, Hong-Gyoo;Yun, Kong-Hyun;Kim, Gi-Tae;Seo, Il-Hong
    • Proceedings of the Korean Society for Geospatial Information Science Conference
    • /
    • 2004.10a
    • /
    • pp.123-128
    • /
    • 2004
  • Recently, sensors capable of mapping the ground in greater detail and with higher accuracy have emerged, such as digital cameras, multispectral/hyperspectral imagery, LiDAR (Light Detection and Ranging), and InSAR (Interferometric SAR). To fully exploit and integrate these diverse data sources, accurate geometric correction or orthoimage generation is required for the imagery, and for data such as LiDAR the horizontal positioning errors must be adjusted, so that the multiple datasets are accurately coregistered. In this study, we integrated AIR-MS data, i.e., LiDAR (height and intensity) data and digital camera imagery acquired from an aircraft, and attempted to generate 3D GIS data using existing color aerial photographs and 1:1,000 digital maps.

Real-virtual Point Cloud Augmentation Method for Test and Evaluation of Autonomous Weapon Systems (자율무기체계 시험평가를 위한 실제-가상 연계 포인트 클라우드 증강 기법)

  • Saedong Yeo;Gyuhwan Hwang;Hyunsung Tae
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.27 no.3
    • /
    • pp.375-386
    • /
    • 2024
  • Autonomous weapon systems act on artificial intelligence-based judgments derived from perception through various sensors. Because such judgments are made by AI, test and evaluation across diverse scenarios is required. As part of this approach, this paper proposes a LiDAR point cloud augmentation method for mixed reality-based test and evaluation. Augmentation is achieved by mixing real and virtual LiDAR signals based on a virtual LiDAR synchronized with the pose of the autonomous weapon system. For realistic augmentation, appropriate intensity values were inserted when generating the point cloud of a virtual object, and their validity was verified. In addition, when mixing the generated point cloud of the virtual object with the real point cloud, the proposed method enhances realism by modeling the occlusion caused by the inserted virtual object.
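
One simple way to realize the occlusion handling the abstract mentions is a per-beam range test: quantize each point's direction into an (azimuth, elevation) bin and keep only the nearest return per bin, so a virtual object inserted in front of real geometry occludes it. The bin sizes and the sensor-at-origin assumption are illustrative, not the paper's method:

```python
import math

def direction_key(x, y, z, az_res=0.2, el_res=0.2):
    """Quantize a point's direction (sensor assumed at the origin)
    into an (azimuth, elevation) bin emulating a LiDAR beam index."""
    az = math.degrees(math.atan2(y, x))
    el = math.degrees(math.atan2(z, math.hypot(x, y)))
    return (round(az / az_res), round(el / el_res))

def mix_point_clouds(real_pts, virtual_pts):
    """Merge real and virtual points; in each beam direction keep only
    the nearer return, so the front object occludes the one behind."""
    nearest = {}
    for p in list(real_pts) + list(virtual_pts):
        key = direction_key(*p)
        r = math.dist((0, 0, 0), p)
        if key not in nearest or r < nearest[key][0]:
            nearest[key] = (r, p)
    return [p for _, p in nearest.values()]
```

The same test works symmetrically: real foreground geometry also removes virtual points placed behind it.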

Lane Map-based Vehicle Localization for Robust Lateral Control of an Automated Vehicle (자율주행 차량의 강건한 횡 방향 제어를 위한 차선 지도 기반 차량 위치추정)

  • Kim, Dongwook;Jung, Taeyoung;Yi, Kyong-Su
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.2
    • /
    • pp.108-114
    • /
    • 2015
  • Automated driving systems require a high level of environmental perception performance, especially in urban environments. Today's on-board sensors, such as radars or cameras, do not yet reach a satisfying level of robustness and availability, so map data is often used as an additional input to support these systems; an accurate digital map serves as a powerful additional sensor. In this paper, we propose a new approach to vehicle localization using a lane map and a single-layer LiDAR. The maps are created beforehand using a highly accurate DGPS and a single-layer LiDAR. The vehicle pose is estimated from an iterative closest point (ICP) match of the LiDAR's intensity data to the lane map, and the estimated pose is used as an observation inside a Kalman filter framework. The accuracy of the proposed localization algorithm is evaluated against a highly accurate DGPS to assess its suitability for lateral vehicle control.
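
The ICP matching step can be sketched in 2D: alternate nearest-neighbour correspondence with a closed-form rigid update until the scan points settle onto the map. This is a generic point-to-point ICP sketch, not the paper's implementation (which matches intensity-selected lane points and would use a spatial index instead of brute-force search):

```python
import math

def icp_2d(source, target, iters=20):
    """Tiny point-to-point ICP in 2D. Returns the accumulated rotation
    and the transformed source points aligned onto `target`."""
    theta = 0.0
    src = list(source)
    for _ in range(iters):
        # 1. match each source point to its nearest target point
        pairs = [(p, min(target, key=lambda q: math.dist(p, q)))
                 for p in src]
        # 2. centroids of the matched sets
        cx = sum(p[0] for p, _ in pairs) / len(pairs)
        cy = sum(p[1] for p, _ in pairs) / len(pairs)
        dx = sum(q[0] for _, q in pairs) / len(pairs)
        dy = sum(q[1] for _, q in pairs) / len(pairs)
        # 3. closed-form 2D rotation from the centered correspondences
        s = sum((p[0] - cx) * (q[1] - dy) - (p[1] - cy) * (q[0] - dx)
                for p, q in pairs)
        c = sum((p[0] - cx) * (q[0] - dx) + (p[1] - cy) * (q[1] - dy)
                for p, q in pairs)
        dth = math.atan2(s, c)
        cos_t, sin_t = math.cos(dth), math.sin(dth)
        # 4. apply the incremental rigid transform
        src = [(cos_t * (x - cx) - sin_t * (y - cy) + dx,
                sin_t * (x - cx) + cos_t * (y - cy) + dy)
               for x, y in src]
        theta += dth
    return theta, src
```

In the paper's pipeline the converged pose would not be used directly; it becomes the measurement fed to the Kalman filter, which smooths out bad matches.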