• Title/Summary/Keyword: LIDAR Data

Search Results: 338

Reconstruction of Buildings from Satellite Image and LIDAR Data

  • Guo, T.; Yasuoka, Y.
    • Proceedings of the KSRS Conference / 2003.11a / pp.519-521 / 2003
  • This paper presents an approach for the automatic extraction and reconstruction of buildings in urban built-up areas based on the fusion of high-resolution satellite imagery and LIDAR data. The data fusion scheme is motivated by the fact that image and range data are highly complementary. Raised urban objects are first segmented from the terrain surface in the LIDAR data using the spectral signature derived from the satellite image, and candidate building regions are then detected in a hierarchical scheme. A novel 3D building reconstruction model is also presented, based on the assumption that most buildings can be approximately decomposed into polyhedral patches. Under the constraints of this building model, 3D edges are used to generate hypotheses that are then verified, and a subsequent logical processing of the primitive geometric patches leads to a 3D reconstruction of buildings with good shape detail. The approach is applied to test sites and shows good performance; an evaluation is also described in the paper.

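A minimal sketch of the kind of raised-object segmentation the abstract describes, assuming a normalized DSM threshold plus an NDVI mask as the "spectral signature"; the function name, band inputs, and thresholds are illustrative placeholders, not the authors' implementation.

```python
# Sketch only: threshold a normalized DSM and suppress vegetation with NDVI
# to obtain building-candidate regions from fused LIDAR and satellite data.
import numpy as np
from scipy import ndimage

def building_candidates(dsm, dtm, red, nir,
                        min_height=2.5, ndvi_max=0.3, min_pixels=50):
    """dsm/dtm: surface/terrain elevation grids; red/nir: co-registered bands."""
    ndsm = dsm - dtm                                   # height above ground
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)   # spectral signature
    mask = (ndsm > min_height) & (ndvi < ndvi_max)     # raised and non-vegetated
    labels, n = ndimage.label(mask)                    # group into regions
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))
    return keep                                        # boolean candidate mask
```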

Localization of Unmanned Ground Vehicle based on Matching of Ortho-edge Images of 3D Range Data and DSM (3차원 거리정보와 DSM의 정사윤곽선 영상 정합을 이용한 무인이동로봇의 위치인식)

  • Park, Soon-Yong; Choi, Sung-In
    • KIPS Transactions on Software and Data Engineering / v.1 no.1 / pp.43-54 / 2012
  • This paper presents a new localization technique for a UGV (Unmanned Ground Vehicle) that matches ortho-edge images generated from a DSM (Digital Surface Map), which represents the 3D geometry of an outdoor navigation environment, and from 3D range data obtained by a LIDAR (Light Detection and Ranging) sensor mounted on the UGV. Recent UGV localization techniques mostly combine positioning sensors such as GPS (Global Positioning System), IMU (Inertial Measurement Unit), and LIDAR. In particular, ICP (Iterative Closest Point)-based geometric registration techniques have been developed for UGV localization. However, ICP-based registration tends to fail when aligning LIDAR range data with the DSM because the sensing directions of the two data sets are very different. In this paper, we introduce and match ortho-edge images derived from the two sensor data sets, 3D LIDAR and DSM, for localization of the UGV. Details of the techniques for generating and matching ortho-edge images between LIDAR and DSM are presented, followed by experimental results from four different navigation paths. The performance of the proposed technique is compared to a conventional ICP-based technique.
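
A rough sketch of the ortho-edge matching idea under simplified assumptions (points already in a local ground frame, a brute-force translation search instead of the paper's matching scheme); `ortho_edge_image`, `best_offset`, the cell size, and the thresholds are made-up names and values.

```python
# Sketch only: rasterize LIDAR points into an ortho height image, threshold
# its gradient to get edges, and search small offsets against DSM edges.
import numpy as np

def ortho_edge_image(points, cell=0.5, grad_thresh=1.0):
    """points: (N,3) array of x, y, z in a local ground frame."""
    xy = np.floor((points[:, :2] - points[:, :2].min(0)) / cell).astype(int)
    h = np.full(xy.max(0) + 1, -np.inf)
    np.maximum.at(h, (xy[:, 0], xy[:, 1]), points[:, 2])       # max height per cell
    h[np.isinf(h)] = np.nan
    gy, gx = np.gradient(np.nan_to_num(h))
    return (np.hypot(gx, gy) > grad_thresh).astype(np.uint8)   # binary edge image

def best_offset(lidar_edges, dsm_edges, search=20):
    """Brute-force translation search maximizing edge overlap.
    dsm_edges is assumed to be padded by `search` pixels on every side."""
    best, argbest = -1, (0, 0)
    h, w = lidar_edges.shape
    for ty in range(-search, search + 1):
        for tx in range(-search, search + 1):
            win = dsm_edges[search + ty: search + ty + h,
                            search + tx: search + tx + w]
            score = np.sum(win * lidar_edges)
            if score > best:
                best, argbest = score, (tx, ty)
    return argbest
```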

A Comparison of Offshore Met-mast and Lidar Wind Measurements at Various Heights (해상기상탑과 윈드 라이다의 높이별 풍황관측자료 비교)

  • Kim, Ji Young; Kim, Min Suek
    • Journal of Korean Society of Coastal and Ocean Engineers / v.29 no.1 / pp.12-19 / 2017
  • There is a need to substitute the offshore met-mast with remote sensing equipment such as wind lidar, since the initial installation and O&M costs of an offshore met-mast are quite high. In this study, the applicability of wind lidar is verified by an intercomparison of wind speed and direction data from an offshore met-mast and a wind lidar over their simultaneous operational period. Results at various heights show no statistically significant difference in trend or magnitude, and the wind lidar data are found to be more accurate, with smaller errors, than the met-mast data, for which errors from the structural shading effect are significant.
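
For illustration only, a sketch of the sort of height-by-height intercomparison described above; the variable names and the regression/RMSE metrics are assumptions, not the study's actual analysis.

```python
# Sketch: regress concurrent lidar wind speeds against met-mast speeds
# at each measurement height and report slope, R^2 and RMSE.
import numpy as np
from scipy import stats

def compare_heights(mast, lidar, heights):
    """mast, lidar: dicts mapping height (m) -> 1-D arrays of concurrent speeds."""
    for z in heights:
        m, l = np.asarray(mast[z]), np.asarray(lidar[z])
        ok = np.isfinite(m) & np.isfinite(l)            # drop missing records
        res = stats.linregress(m[ok], l[ok])
        rmse = np.sqrt(np.mean((l[ok] - m[ok]) ** 2))
        print(f"{z:>5.0f} m  slope={res.slope:.3f}  "
              f"R^2={res.rvalue**2:.3f}  RMSE={rmse:.2f} m/s")
```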

Complexity Estimation Based Work Load Balancing for a Parallel Lidar Waveform Decomposition Algorithm

  • Jung, Jin-Ha; Crawford, Melba M.; Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / v.25 no.6 / pp.547-557 / 2009
  • LIDAR (LIght Detection And Ranging) is an active remote sensing technology that provides 3D coordinates of the Earth's surface by performing range measurements from the sensor. Early small-footprint LIDAR systems recorded multiple discrete returns from the back-scattered energy. Recent advances in LIDAR hardware make it possible to record full digital waveforms of the returned energy. LIDAR waveform decomposition involves separating the return waveform into a mixture of components which are then used to characterize the original data. The most common statistical mixture model used for this process is the Gaussian mixture. Waveform decomposition plays an important role in LIDAR waveform processing, since the resulting components are expected to represent reflecting surfaces within the waveform footprint; the decomposition results therefore ultimately affect the interpretation of LIDAR waveform data. The computational requirements of waveform decomposition result from two factors: (1) the number of components in a mixture and the corresponding parameter estimates are inter-related and cannot be solved for separately, and (2) the parameter optimization has no closed-form solution and must be solved iteratively. A current state-of-the-art airborne LIDAR system acquires more than 50,000 waveforms per second, so decomposing this enormous number of waveforms is challenging on a traditional single-processor architecture. To tackle this issue, four parallel LIDAR waveform decomposition algorithms with different work-load balancing schemes - (1) no weighting, (2) decomposition-results-based linear weighting, (3) decomposition-results-based squared weighting, and (4) decomposition-time-based linear weighting - were developed and tested with varying numbers of processors (8-256). The results were compared in terms of efficiency. Overall, the decomposition-time-based linear weighting approach yielded the best performance among the four approaches.
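
A hedged sketch of Gaussian waveform decomposition using non-linear least squares rather than the paper's estimator; the peak-based initialization, the function names, and all parameter values are illustrative assumptions.

```python
# Sketch: fit a sum of Gaussians to a return waveform; each fitted component
# (amplitude, center, width) stands for one reflecting surface in the footprint.
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import find_peaks

def gaussian_sum(t, *params):                 # params = [A1, mu1, s1, A2, mu2, s2, ...]
    p = np.reshape(params, (-1, 3))
    return sum(A * np.exp(-0.5 * ((t - mu) / s) ** 2) for A, mu, s in p)

def decompose(t, waveform, min_amp=10.0):
    peaks, _ = find_peaks(waveform, height=min_amp)     # initial component count
    if len(peaks) == 0:
        return np.empty((0, 3))
    p0 = np.ravel([[waveform[i], t[i], 2.0] for i in peaks])
    popt, _ = curve_fit(gaussian_sum, t, waveform, p0=p0, maxfev=5000)
    return np.reshape(popt, (-1, 3))                    # one (A, mu, sigma) per surface

# Load-balancing idea from the abstract: weight each waveform by its expected
# cost (e.g. detected peak count or measured decomposition time) before
# chunking the work across processors, instead of equal-sized chunks.
```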

Automatic Extraction of Fractures and Their Characteristics in Rock Masses by LIDAR System and the Split-FX Software (LIDAR와 Split-FX 소프트웨어를 이용한 암반 절리면의 자동추출과 절리의 특성 분석)

  • Kim, Chee-Hwan; Kemeny, John
    • Tunnel and Underground Space / v.19 no.1 / pp.1-10 / 2009
  • Site characterization for structural stability in rock masses mainly involves the collection of joint property data, and in current practice much of this data is collected by hand directly at exposed slopes and outcrops. There are many issues with collecting this data in the field, including safety, slope access, field time, limited data quantity, reusability of data, and human bias. It is shown that information on joint orientation, spacing, and roughness in rock masses can be automatically extracted from LIDAR (light detection and ranging) point clouds using the currently available Split-FX point cloud processing software, thereby reducing processing time and mitigating safety and human-bias issues.
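
A small sketch, not Split-FX itself, showing how a joint patch extracted from a point cloud could be reduced to dip and dip-direction angles by fitting a plane; the function names are assumptions.

```python
# Sketch: least-squares plane fit via SVD, then convert the upward unit
# normal into dip (from horizontal) and dip direction (azimuth, degrees).
import numpy as np

def fit_plane(points):
    """points: (N,3) xyz of one fracture patch; returns the unit normal."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]                              # smallest singular vector = plane normal
    return n if n[2] >= 0 else -n           # make the normal point upward

def dip_and_dip_direction(normal):
    nx, ny, nz = normal
    dip = np.degrees(np.arccos(np.clip(nz, -1.0, 1.0)))   # tilt from horizontal
    dip_dir = np.degrees(np.arctan2(nx, ny)) % 360.0      # azimuth of steepest descent
    return dip, dip_dir
```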

Footprint extraction of urban buildings with LIDAR data

  • Kanniah, Kasturi Devi; Gunaratnam, Kasturi; Mohd, Mohd Ibrahim Seeni
    • Proceedings of the KSRS Conference / 2003.11a / pp.113-119 / 2003
  • Building information is extremely important for many applications within the urban environment, and efficient techniques and user-friendly tools for information extraction from remotely sensed imagery are urgently needed. This paper presents automatic and manual approaches for extracting building footprints in urban areas from airborne Light Detection and Ranging (LIDAR) data. First, a digital surface model (DSM) was generated from the LIDAR point data. Then, objects higher than the ground surface were extracted using the generated DSM. Based on general knowledge of the study area and field visits, buildings were separated from other objects. The automatic technique for extracting building footprints was based on different window sizes and different values of image add-backs, while the manual technique was based on image segmentation. A comparison was then made to see how precisely the two techniques detect and extract building footprints. Finally, the results were compared with manually digitized building reference data for an accuracy assessment, which shows that LIDAR data provide a better characterization of the shape of each building.

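A simplified sketch of the footprint workflow outlined above (grid a DSM from LIDAR points, threshold height above bare earth, clean the mask, trace outlines); the cell size, height threshold, and use of scikit-image are assumptions, not the paper's tooling.

```python
# Sketch: rasterize LIDAR returns into a DSM, keep cells well above the
# bare-earth grid, remove clutter, and trace footprint outlines.
import numpy as np
from scipy import ndimage
from skimage import measure

def footprints(points, ground_elev, cell=1.0, min_height=3.0):
    """points: (N,3) LIDAR xyz; ground_elev: bare-earth grid on the same cell layout."""
    ij = np.floor((points[:, :2] - points[:, :2].min(0)) / cell).astype(int)
    dsm = np.array(ground_elev, dtype=float)                 # start from bare earth
    np.maximum.at(dsm, (ij[:, 0], ij[:, 1]), points[:, 2])   # highest return per cell
    above = (dsm - ground_elev) > min_height                 # objects above ground
    above = ndimage.binary_opening(above, iterations=2)      # remove small clutter
    return [c * cell for c in measure.find_contours(above.astype(float), 0.5)]
```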

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali; A. Sri Nagesh
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.67-72 / 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society as well as on road safety and the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. For autonomous vehicles in particular, the efficient fusion of data from these two sensor types is important for estimating the depth of objects and classifying objects at short and long distances. This paper presents object classification using CNN-based vision and Light Detection and Ranging (LIDAR) fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. The LIDAR point cloud is upsampled and converted into pixel-level depth information, which is combined with red-green-blue (RGB) data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification in the autonomous vehicle environment using the integrated vision and LIDAR data, and is designed to provide both high classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
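
A rough sketch of the RGB-plus-upsampled-depth fusion idea; the pinhole projection, the max-pool densification, and the tiny `FusionCNN` are simplified placeholders, not the paper's architecture.

```python
# Sketch: project camera-frame LIDAR points into a sparse depth image,
# densify it crudely, and feed a 4-channel (RGB + depth) tensor to a CNN.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def depth_channel(points_cam, K, hw):
    """points_cam: (N,3) points in the camera frame; K: 3x3 intrinsics."""
    h, w = hw
    z = points_cam[:, 2]
    uv = (K @ (points_cam.T / z)).T[:, :2].astype(int)          # pixel coordinates
    depth = np.zeros(hw, dtype=np.float32)
    ok = (z > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    depth[uv[ok, 1], uv[ok, 0]] = z[ok]                         # sparse depth
    dense = F.max_pool2d(torch.from_numpy(depth)[None, None],   # crude densification
                         kernel_size=5, stride=1, padding=2)
    return dense[0, 0].numpy()

class FusionCNN(nn.Module):                                     # tiny 4-channel classifier
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, n_classes)

    def forward(self, rgb, depth):               # rgb: (B,3,H,W), depth: (B,1,H,W)
        return self.fc(self.features(torch.cat([rgb, depth], dim=1)).flatten(1))
```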

Building Extraction from Lidar Data and Aerial Imagery using Domain Knowledge about Building Structures

  • Seo, Su-Young
    • Korean Journal of Remote Sensing / v.23 no.3 / pp.199-209 / 2007
  • Traditionally, aerial images have been used as the main source for compiling topographic maps. In recent years, lidar data has been exploited as another type of mapping data. Regarding their performance, aerial imagery can delineate object boundaries but omits many of these boundaries during feature extraction, while lidar provides direct information about the heights of object surfaces but has limitations with respect to boundary localization. Considering the characteristics of the sensors, this paper proposes an approach to extracting buildings from lidar and aerial imagery based on the complementary characteristics of optical and range sensors. To detect building regions, the relationships among elevation contours are represented as directional graphs, which are searched for the contours corresponding to the external boundaries of buildings. To generate building models, a wing model is proposed to assemble roof surface patches into a complete building model. The building models are then projected onto and checked against features in the aerial images. Experimental results show that the proposed approach provides an efficient and accurate way to extract building models.
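
A conceptual sketch of the contour-containment idea, assuming closed elevation contours extracted from a lidar DSM; the graph representation and library choices are assumptions rather than the paper's data structures.

```python
# Sketch: extract closed elevation contours, record which contour contains
# which, and take containment-graph roots as candidate external boundaries.
import numpy as np
from matplotlib.path import Path
from skimage import measure

def contour_graph(dsm, levels):
    contours = []                                    # (level, vertex array)
    for lv in levels:
        for c in measure.find_contours(dsm, lv):
            if np.allclose(c[0], c[-1]):             # keep closed contours only
                contours.append((lv, c))
    children = {i: [] for i in range(len(contours))}
    for i, (_, ci) in enumerate(contours):           # directed containment edges
        for j, (_, cj) in enumerate(contours):
            if i != j and Path(ci).contains_point(cj[0]):
                children[i].append(j)
    roots = set(children) - {j for kids in children.values() for j in kids}
    return contours, children, roots                 # roots ~ external boundaries
```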

A Study on Automatic Extraction of Buildings Using LIDAR with Aerial Imagery

  • Lee, Young-Jin; Cho, Woo-Sug; Jeong, Soo; Kim, Kyung-Ok
    • Proceedings of the KSRS Conference / 2003.11a / pp.241-243 / 2003
  • This paper presents an algorithm that automatically extracts buildings from among the many different features on the Earth's surface by fusing LIDAR data with panchromatic aerial images. The proposed algorithm consists of three stages: a point-level process, a polygon-level process, and a parameter-space-level process. In the first stage, gross errors are eliminated and a local maxima filter is applied to detect building candidate points in the raw laser scanning data. A grouping procedure then segments the raw LIDAR data, and the segmented LIDAR data is polygonized by the encasing-polygon algorithm developed in this research. In the second stage, non-building polygons are eliminated using several constraints such as area and circularity. In the last stage, all polygons generated in the second stage are projected onto the aerial stereo images through the collinearity condition equations. Finally, the projected encasing polygons are fused with edges detected by image processing to refine the building segments. The experimental results showed that the RMSEs of building corners in X, Y, and Z were ±8.1 cm, ±24.7 cm, and ±35.9 cm, respectively.

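A small sketch of the polygon-level filtering stage described above; the area and circularity thresholds are illustrative assumptions, not the values used in the paper.

```python
# Sketch: keep only candidate polygons whose area and circularity
# look building-like (circularity = 4*pi*area / perimeter^2).
import numpy as np

def polygon_area_perimeter(poly):
    """poly: (N,2) ring of vertices (closing vertex not required)."""
    x, y = poly[:, 0], poly[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))  # shoelace
    perim = np.sum(np.hypot(np.diff(x, append=x[0]), np.diff(y, append=y[0])))
    return area, perim

def building_like(polygons, min_area=40.0, min_circ=0.2):
    keep = []
    for poly in polygons:
        area, perim = polygon_area_perimeter(poly)
        circularity = 4.0 * np.pi * area / perim ** 2      # 1.0 for a circle
        if area >= min_area and circularity >= min_circ:
            keep.append(poly)
    return keep
```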

Outlier Detection from High Sensitive Geiger Mode Imaging LIDAR Data retaining a High Outlier Ratio (높은 이상점 비율을 갖는 고감도 가이거모드 영상 라이다 데이터로부터 이상점 검출)

  • Kim, Seongjoon; Lee, Impyeong; Lee, Youngcheol; Jo, Minsik
    • Korean Journal of Remote Sensing / v.28 no.5 / pp.573-586 / 2012
  • Point clouds acquired by a LIDAR (Light Detection And Ranging, also LADAR) system often contain erroneous points, called outliers, that do not appear to lie on physical surfaces; these should be carefully detected and eliminated before further processing for applications. In particular, for LIDAR systems employing a high-sensitivity Geiger-mode array detector (GmFPA), the outlier ratio is significantly high, which often causes existing algorithms to fail to detect the outliers in such data sets. In this paper, we propose a method to discriminate outliers in a point cloud with a high outlier ratio acquired by a GmFPA LIDAR system. The underlying assumption of the method is that a meaningful target surface occupies at least two adjacent pixels and that the ranges from these pixels are similar. We applied the proposed method to simulated LIDAR data of different point densities and outlier ratios and analyzed the performance for different thresholds and data properties. The outlier detection probability was about 99% in most cases. We also confirmed that the proposed method is robust to data properties and less sensitive to the thresholds. The method will be useful for on-line real-time processing and post-processing of GmFPA LIDAR data.
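
A minimal sketch of the stated adjacency assumption, assuming the GmFPA data is arranged as a 2-D range image with NaN for empty pixels; the tolerance value is a placeholder, not the paper's threshold.

```python
# Sketch: a return is kept only if at least one of its 8 neighbours reports
# a similar range; unsupported returns are flagged as likely outliers.
import numpy as np

def flag_outliers(range_image, range_tol=1.0):
    """range_image: 2-D array of ranges, NaN where no return was recorded."""
    h, w = range_image.shape
    supported = np.zeros((h, w), dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.full((h, w), np.nan)
            shifted[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
                range_image[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            supported |= np.abs(range_image - shifted) < range_tol
    return np.isfinite(range_image) & ~supported          # True = likely outlier
```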