• Title/Summary/Keyword: LiDAR intensity

Search results: 48

Object Detection with LiDAR Point Cloud and RGBD Synthesis Using GNN

  • Jung, Tae-Won;Jeong, Chi-Seo;Lee, Jong-Yong;Jung, Kye-Dong
    • International journal of advanced smart convergence
    • /
    • v.9 no.3
    • /
    • pp.192-198
    • /
    • 2020
  • The 3D point cloud is a key technology for object detection in virtual and augmented reality. To apply object detection in various areas, 3D information and even color information must be obtainable more easily. A 3D point cloud is generally acquired with an expensive scanner device, whereas 3D and characteristic information such as RGB and depth can easily be obtained on a mobile device. A GNN (Graph Neural Network) can be used for object detection based on these characteristics. In this paper, we generated RGB and RGBD inputs by extracting basic and characteristic information from the KITTI dataset, which is widely used in 3D point cloud object detection. We built i-GNN using intensity, the most widely used LiDAR characteristic, and RGB-GNN using the color information obtainable from mobile devices, and compared their object detection accuracy with that of RGBD-GNN, which combines color and depth information.
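
The abstract describes GNN variants built over point attributes only at a high level. As an illustration of the graph-construction step such pipelines typically rely on, the following minimal sketch (not the authors' implementation) builds a k-nearest-neighbor graph over points carrying RGB-D-style attributes and runs one round of mean-aggregation message passing; all names, sizes, and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical toy data: N points with XYZ plus per-point features
# (e.g. intensity for an "i-GNN" variant, or RGB-D for an "RGBD-GNN" variant).
rng = np.random.default_rng(0)
xyz = rng.uniform(0, 10, size=(500, 3))
feats = rng.uniform(0, 1, size=(500, 4))        # e.g. R, G, B, depth

# Build a k-NN graph over the 3D coordinates.
k = 8
tree = cKDTree(xyz)
_, nbr_idx = tree.query(xyz, k=k + 1)           # first neighbor is the point itself
nbr_idx = nbr_idx[:, 1:]

# One round of mean-aggregation message passing with random linear weights,
# standing in for a learned GNN layer.
W_self = rng.normal(size=(4, 16))
W_nbr = rng.normal(size=(4, 16))
agg = feats[nbr_idx].mean(axis=1)               # average neighbor features
hidden = np.maximum(feats @ W_self + agg @ W_nbr, 0.0)   # ReLU
print(hidden.shape)                              # (500, 16) per-point embeddings
```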

Classification of Terrestrial LiDAR Data Using Factor and Cluster Analysis (요인 및 군집분석을 이용한 지상 라이다 자료의 분류)

  • Choi, Seung-Pil;Cho, Ji-Hyun;Kim, Yeol;Kim, Jun-Seong
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.19 no.4
    • /
    • pp.139-144
    • /
    • 2011
  • This study proposed a method for classifying terrestrial LiDAR data by simultaneously using the color information (R, G, B) and reflection intensity (I) obtained from a terrestrial LiDAR scanner and by analyzing the associations among these variables with statistical classification methods. First, factors that maximize variance were extracted from the variables R, G, B, and I, and the factor matrix between the principal factors and each variable was calculated. Because the raw factor matrix reduces the data but does not clearly show which variables are strongly associated with which factors, Varimax orthogonal rotation was applied and the factor scores were then computed. Finally, cluster analysis was performed on the factor scores using the K-means method, a non-hierarchical clustering technique, and the classification accuracy of the terrestrial LiDAR data was evaluated.
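
As a rough illustration of the described workflow (factor extraction with Varimax rotation followed by K-means on the factor scores), here is a minimal Python sketch using scikit-learn; the synthetic data, number of factors, and cluster count are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-point attributes from a terrestrial scan: R, G, B, intensity.
rng = np.random.default_rng(1)
rgbi = rng.uniform(0, 255, size=(10_000, 4))

# Standardize, extract two factors with Varimax rotation, and compute factor scores.
X = StandardScaler().fit_transform(rgbi)
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=1)
scores = fa.fit_transform(X)                 # factor scores per point
print("loadings (factor matrix):\n", fa.components_.T)

# Non-hierarchical clustering of the factor scores with K-means.
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(scores)
print("points per cluster:", np.bincount(labels))
```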

Urban Change Detection Between Heterogeneous Images Using the Edge Information (이종 공간 데이터를 활용한 에지 정보 기반 도시 지역 변화 탐지)

  • Oh, Jae Hong;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.33 no.4
    • /
    • pp.259-266
    • /
    • 2015
  • Change detection using heterogeneous data such as aerial images, aerial LiDAR (Light Detection And Ranging), and satellite images needs to be developed to efficiently monitor increasingly complex land-use change. We approached this problem not by relying on the intensity values of the geospatial images but by using RECC (Relative Edge Cross Correlation), which is based on edge information, over urban and suburban areas. The experiment was carried out with aerial LiDAR data and high-resolution Kompsat-2 and Kompsat-3 images. We derived the optimal window size and threshold value for RECC-based change detection and observed an overall change detection accuracy of 80% by comparing the results with manually acquired reference data.
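
The exact RECC formulation is not given in this abstract. As a loose illustration of edge-based change detection between co-registered heterogeneous rasters, the sketch below extracts edges with a Canny detector and compares the binary edge maps window by window with a normalized cross-correlation score; the window size, threshold, and input names are assumptions.

```python
import numpy as np
import cv2

def edge_correlation_map(img_a, img_b, win=64, thresh=0.3):
    """Flag windows whose edge maps correlate poorly (candidate change areas)."""
    ea = cv2.Canny(img_a, 50, 150).astype(np.float32) / 255.0
    eb = cv2.Canny(img_b, 50, 150).astype(np.float32) / 255.0
    h, w = ea.shape
    change = np.zeros((h // win, w // win), dtype=bool)
    for i in range(h // win):
        for j in range(w // win):
            a = ea[i*win:(i+1)*win, j*win:(j+1)*win].ravel()
            b = eb[i*win:(i+1)*win, j*win:(j+1)*win].ravel()
            denom = np.sqrt(a.sum() * b.sum())
            score = (a * b).sum() / denom if denom > 0 else 0.0
            change[i, j] = score < thresh      # low edge agreement -> possible change
    return change

# Usage with hypothetical co-registered 8-bit grayscale rasters, e.g. a LiDAR-derived
# image and a satellite image resampled to the same grid:
# change_mask = edge_correlation_map(lidar_raster, kompsat_raster, win=64, thresh=0.3)
```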

Experiment for 3D Coregistration between Scanned Point Clouds of Building using Intensity and Distance Images (강도영상과 거리영상에 의한 건물 스캐닝 점군간 3차원 정합 실험)

  • Jeon, Min-Cheol;Eo, Yang-Dam;Han, Dong-Yeob;Kang, Nam-Gi;Pyeon, Mu-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.26 no.1
    • /
    • pp.39-45
    • /
    • 2010
  • For automatic registration between point clouds acquired with a terrestrial LiDAR scanner, this study used the two-dimensional intensity images obtained together with the two point clouds, detected keypoints observed in both images, and selected matching points with the SIFT algorithm. To remove gross matching errors, the RANSAC algorithm was applied to improve the registration accuracy. The three-dimensional rotation angles and the horizontal/vertical translations, i.e., the transformation parameters between the two point clouds, were calculated and compared with results obtained manually. In a test on the College of Science building at Konkuk University, the differences between the transformation parameters obtained by automatic matching and those obtained by hand were 0.011 m, 0.008 m, and 0.052 m in the X, Y, and Z directions, indicating that the approach can serve as a basis for automatic registration.
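
As an illustration of the keypoint-matching step the abstract describes (SIFT matches filtered with RANSAC), the following OpenCV sketch matches two LiDAR intensity images. It estimates a 2D homography rather than the paper's 3D rigid transformation, and the file names are placeholders, not the study's data.

```python
import cv2
import numpy as np

# Hypothetical intensity images rendered from two terrestrial LiDAR scans.
img1 = cv2.imread("scan1_intensity.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scan2_intensity.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints/descriptors and match them with a ratio test.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

# RANSAC rejects outlier correspondences while estimating a transformation
# (here a homography; the paper estimates a 3D rotation and translation instead).
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
print(f"{int(inlier_mask.sum())} inliers out of {len(good)} tentative matches")
```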

Building Dataset of Sensor-only Facilities for Autonomous Cooperative Driving

  • Hyung Lee;Chulwoo Park;Handong Lee;Junhyuk Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.1
    • /
    • pp.21-30
    • /
    • 2024
  • In this paper, we propose a method to build a sample dataset of the features of eight sensor-only facilities constructed as infrastructure for autonomous cooperative driving. Features are extracted from point cloud data acquired by LiDAR and assembled into a sample dataset for recognizing the facilities. To build the dataset, eight sensor-only facilities covered with high-brightness reflective sheets and a sensor acquisition system were developed. To extract the features of facilities located within a certain measurement distance from the acquired point cloud, the DBSCAN method was first applied to the points and a modified Otsu method to the reflected intensity, after which a cylindrical projection was applied to the extracted points. The 3D point coordinates, the projected 2D coordinates, and the reflection intensity were set as the facility features, and the dataset was built along with labels. To check the effectiveness of the facility dataset built from LiDAR data, a common CNN model was selected, trained, and tested, showing an accuracy of about 90% or more and confirming the feasibility of facility recognition. Through continued experiments, we will refine the feature extraction algorithm for building the proposed dataset, improve its performance, and develop a dedicated model for recognizing sensor-only facilities for autonomous cooperative driving.
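
To make the preprocessing chain in this abstract concrete (DBSCAN on the points, an Otsu-style threshold on reflected intensity, then a cylindrical projection), here is a minimal sketch with synthetic data and assumed parameters; the paper's modified Otsu variant is replaced by the standard Otsu threshold from scikit-image.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from skimage.filters import threshold_otsu

# Hypothetical LiDAR returns: a reflective facility plus scattered background points.
rng = np.random.default_rng(2)
facility = rng.normal(loc=(5.0, 2.0, 1.0), scale=0.3, size=(500, 3))
background = rng.uniform(-20, 20, size=(4_500, 3))
points = np.vstack([facility, background])
intensity = np.concatenate([rng.uniform(180, 255, 500),   # reflective-sheet returns
                            rng.uniform(0, 120, 4_500)])

# 1) Spatial clustering to isolate candidate facility objects.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)

# 2) Keep only high-reflectance returns via Otsu thresholding of the intensity.
t = threshold_otsu(intensity)
keep = (labels >= 0) & (intensity > t)
obj = points[keep]

# 3) Cylindrical projection of the retained points: (azimuth, height) image plane.
theta = np.arctan2(obj[:, 1], obj[:, 0])         # horizontal angle around the sensor
uv = np.stack([theta, obj[:, 2]], axis=1)        # projected 2D coordinates

# Features per point: 3D coordinates, 2D projection, and intensity.
features = np.hstack([obj, uv, intensity[keep, None]])
print(features.shape)
```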

A Study on Measuring Method of Wind Resources for Wind Farm Design (풍력단지 설계를 위한 풍황자원의 측정방법 연구)

  • Sung-Min Han;Geon-Ung Gim;Sang-Man Kim;Chae-Joo Moon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.3
    • /
    • pp.387-396
    • /
    • 2023
  • The representative equipment currently used for weather observation consists of meteorological masts and wind LiDARs. According to international regulations, a meteorological mast can be used for stand-alone measurements, but a wind LiDAR must be installed together with a meteorological mast matching the height of the lower tip of the wind turbine blade, or at least a 40 m mast, and its measurement data must be corrected against the mast. Turbulent flow occurs frequently at altitudes below 100 m, and wind LiDARs are more susceptible to turbulence effects than meteorological masts. However, while the turbulence intensity for meteorological masts is specified in the international regulations, there is no separate specification for wind LiDARs. This study collected data measured under the same conditions using both a meteorological mast and a wind LiDAR and analyzed the uncertainties and the turbulence intensity ratio. The analysis showed that there were sections where the turbulence intensity ratio exceeded 3%. Therefore, it is suggested that a specification for the turbulence intensity error rate of wind LiDARs be included in the international regulations.
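
As a simple illustration of comparing turbulence intensity (TI = standard deviation of wind speed divided by mean wind speed over an averaging interval) between a met mast and a wind LiDAR, the following pandas sketch uses synthetic series, assumed column names, and a 10-minute averaging window; none of it reproduces the study's measurements.

```python
import numpy as np
import pandas as pd

# Hypothetical 1 Hz wind-speed time series from a met mast and a co-located wind LiDAR.
idx = pd.date_range("2023-01-01", periods=6 * 3600, freq="1s")
rng = np.random.default_rng(3)
mast = pd.Series(8 + rng.normal(0, 0.8, len(idx)), index=idx)
lidar = pd.Series(8 + rng.normal(0, 0.9, len(idx)), index=idx)

def turbulence_intensity(ws, window="10min"):
    """TI per averaging interval: sigma_u / mean_u."""
    g = ws.resample(window)
    return g.std() / g.mean()

ti = pd.DataFrame({"TI_mast": turbulence_intensity(mast),
                   "TI_lidar": turbulence_intensity(lidar)})
# Relative deviation of the LiDAR TI from the mast TI, in percent.
ti["TI_error_pct"] = 100 * (ti["TI_lidar"] - ti["TI_mast"]) / ti["TI_mast"]
print(ti.head())
print("intervals with |TI error| > 3%:", int((ti["TI_error_pct"].abs() > 3).sum()))
```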

Intensity and Ambient Enhanced Lidar-Inertial SLAM for Unstructured Construction Environment (비정형의 건설환경 매핑을 위한 레이저 반사광 강도와 주변광을 활용한 향상된 라이다-관성 슬램)

  • Jung, Minwoo;Jung, Sangwoo;Jang, Hyesu;Kim, Ayoung
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.3
    • /
    • pp.179-188
    • /
    • 2021
  • Construction monitoring is one of the key modules in smart construction. Unlike a structured urban environment, construction-site mapping is challenging because of the characteristics of an unstructured environment; for example, irregular feature points and poor matching hinder the creation of a map for site management. To tackle this issue, we propose a system for data acquisition in unstructured environments and a framework, Intensity and Ambient Enhanced Lidar Inertial Odometry via Smoothing and Mapping (IA-LIO-SAM), that achieves highly accurate robot trajectories and mapping. IA-LIO-SAM uses the same factor graph as Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping (LIO-SAM). Enhancing the existing LIO-SAM, IA-LIO-SAM leverages each point's intensity and ambient values to remove unnecessary feature points. These additional values also act as extra inputs to the K-Nearest Neighbor (KNN) search, allowing more accurate matching between stored points and scanned points. The performance was verified in three different environments and compared with LIO-SAM.
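
To illustrate the idea of letting intensity and ambient values influence nearest-neighbor association, as the abstract describes for IA-LIO-SAM, here is a small sketch that appends weighted intensity/ambient channels to the 3D coordinates before a KD-tree KNN query; the weights and data are assumptions, not the paper's parameters or implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
# Hypothetical map points and a new scan: x, y, z, intensity, ambient per point.
map_pts = np.hstack([rng.uniform(0, 50, (20_000, 3)), rng.uniform(0, 1, (20_000, 2))])
scan_pts = np.hstack([rng.uniform(0, 50, (2_000, 3)), rng.uniform(0, 1, (2_000, 2))])

def augmented_knn(map_pts, scan_pts, k=5, w_intensity=2.0, w_ambient=1.0):
    """KNN in a 5D space: geometry plus weighted intensity/ambient channels."""
    w = np.array([1.0, 1.0, 1.0, w_intensity, w_ambient])
    tree = cKDTree(map_pts * w)
    dist, idx = tree.query(scan_pts * w, k=k)
    return dist, idx

dist, idx = augmented_knn(map_pts, scan_pts)
print(idx.shape)     # (2000, 5): candidate correspondences per scanned point
```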

True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.4
    • /
    • pp.363-373
    • /
    • 2020
  • Over the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of aerial images together with precise 3D object models and a DTM (Digital Terrain Model) to detect and recover occluded areas, and automating this complicated process is a challenging task. In this paper, we propose a new concept for true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted in a wide range of fields; in particular, GAN (Generative Adversarial Network) models are used for various tasks in image processing and computer vision. The generator tries to produce results similar to real images, while the discriminator judges whether images are real or fake, and this mutually adversarial mechanism improves the quality of the results. Experiments were performed with the GAN-based Pix2Pix model using IR (Infrared) orthoimages and LiDAR intensity data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods gave similar quality according to FID (Fréchet Inception Distance) measures, but when the quality of the input data is close to that of the target images, better results can be obtained by increasing the number of epochs. This paper is an early experimental study on the feasibility of DL-based true orthoimage generation, and further improvement is necessary.
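
For readers unfamiliar with the Pix2Pix recipe the abstract mentions (conditional adversarial loss plus an L1 reconstruction term), here is a minimal PyTorch sketch of one training step translating a LiDAR intensity tile to an orthoimage tile. The tiny convolutional stacks are placeholders standing in for the real U-Net generator and PatchGAN discriminator; the shapes, loss weight, and data are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

# Placeholder generator (intensity -> 3-channel orthoimage) and discriminator.
G = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(64, 1, 4, stride=2, padding=1))   # patch-wise logits

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

intensity = torch.rand(4, 1, 64, 64)       # LiDAR intensity tiles (condition/input)
ortho = torch.rand(4, 3, 64, 64) * 2 - 1   # target orthoimage tiles in [-1, 1]

# --- discriminator step: real pairs vs. generated pairs ---
fake = G(intensity).detach()
d_real = D(torch.cat([intensity, ortho], dim=1))
d_fake = D(torch.cat([intensity, fake], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# --- generator step: adversarial term plus L1 reconstruction (the Pix2Pix recipe) ---
fake = G(intensity)
d_fake = D(torch.cat([intensity, fake], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, ortho)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```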

Characteristics of Wind Environment in Dongbok·Bukchon Wind Farm on Jeju (제주 동복·북촌 풍력발전단지의 바람환경 특성분석)

  • Jeong, Hyeong-Se;Kim, Yeon-Hee;Choi, Hee-Wook
    • New & Renewable Energy
    • /
    • v.18 no.1
    • /
    • pp.1-16
    • /
    • 2022
  • The climatic characteristics of the Dongbok·Bukchon region were described using LiDAR (Light Detection and Ranging) and a met mast. The influence of meteorological conditions on the power performance of the wind turbines was analyzed using Supervisory Control And Data Acquisition (SCADA) and met-mast data from the Dongbok·Bukchon Wind Farm (DBWF) on Jeju Island. Atmospheric stability was categorized using three parameters (the Richardson number, turbulence intensity, and the wind shear exponent). Unstable atmospheric conditions were dominant at the DBWF, and at wind speeds of 14 m/s or more, slightly unstable conditions accounted for more than 50% of the data. A clear difference in turbine power output was observed across the atmospheric stability and turbulence intensity (TI) categories; in particular, power performance was more sensitive near the rated wind speed of the turbine and in wind regimes with high TI. When the flow had high turbulence at low wind speeds and low turbulence at rated wind speeds, a higher wind energy potential was produced than in other conditions. Finally, high wind farm efficiency was confirmed under slightly unstable atmospheric stability; however, as the unstable state became stronger, the wind farm efficiency fell below that in the stable state.
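
The stability classification relies on standard diagnostics. As a small worked example of two of them, the sketch below computes the wind shear exponent from wind speeds at two heights (power-law fit) and the turbulence intensity; the measurement heights, synthetic data, and stability bin edges are illustrative assumptions rather than the paper's criteria.

```python
import numpy as np

def wind_shear_exponent(u_low, u_high, z_low=40.0, z_high=80.0):
    """Power-law shear exponent alpha from speeds at two measurement heights:
    u_high / u_low = (z_high / z_low) ** alpha."""
    return np.log(u_high / u_low) / np.log(z_high / z_low)

def turbulence_intensity(u_samples):
    """TI over one averaging interval (e.g. 10 minutes): sigma_u / mean_u."""
    u = np.asarray(u_samples, dtype=float)
    return u.std() / u.mean()

# Hypothetical 10-minute statistics.
alpha = wind_shear_exponent(u_low=7.2, u_high=8.1)
ti = turbulence_intensity(8.0 + np.random.default_rng(5).normal(0, 0.7, 600))

# Illustrative (not the paper's) bin edges for a rough stability label from shear alone.
label = "unstable" if alpha < 0.1 else ("neutral" if alpha < 0.2 else "stable")
print(f"alpha = {alpha:.3f}, TI = {ti:.3f}, stability (by shear) = {label}")
```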

A Terrestrial LiDAR Based Method for Detecting Structural Deterioration, and Its Application to Tunnel Maintenance (터널 유지관리를 위한 지상 LiDAR 기반의 구조물 변상탐지 기법 연구)

  • Bae, Sang Woo;Kwak, Jae Hwan;Kim, Tae Ho;Park, Sung Wook;Lee, Jin Duk
    • The Journal of Engineering Geology
    • /
    • v.25 no.2
    • /
    • pp.227-235
    • /
    • 2015
  • In recent years, owing to the frequent occurrence of natural disasters, the inspection and maintenance of structures have become increasingly important on a national scale. However, because most structural inspections are carried out manually and data acquisition lacks objectivity, quantitative data are not always available. As a result, researchers are seeking ways to collect and standardize survey data using terrestrial laser scanning, thereby overcoming the limitations of visual investigations. Until now, however, field data acquired with laser scanners have mainly been used to measure changes in structure geometry resulting from progressive deterioration. In this study, we demonstrate that it is possible to identify structural deterioration processes (e.g., efflorescence, leakage, delamination) using intensity data from terrestrial laser scanning. Additionally, by establishing the intensity characteristics of each alteration type, we confirm the viability of automatically classifying the alteration type and objectively quantifying the affected polygon area. Finally, we show that the method is effective for structural inspection and maintenance.
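
As a rough sketch of intensity-based deterioration mapping and area quantification on a tunnel-lining scan (not the paper's procedure), the code below flags points whose intensity falls within an assumed characteristic range for wet or altered surfaces, groups them with DBSCAN, and reports each patch's area from a 2D convex hull; all data, thresholds, and parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import ConvexHull

rng = np.random.default_rng(6)
# Hypothetical unrolled tunnel-lining scan: planar coordinates (s, z) plus intensity,
# where leakage/efflorescence patches show anomalous (here, low) reflectance.
coords = rng.uniform(0, 30, size=(20_000, 2))
intensity = rng.uniform(120, 200, size=20_000)
patch = (np.abs(coords[:, 0] - 10) < 2) & (np.abs(coords[:, 1] - 3) < 2)
intensity[patch] = rng.uniform(20, 60, patch.sum())     # darker, wet-looking area

# 1) Flag returns within an assumed characteristic intensity range for deterioration
#    (the paper establishes such intensity characteristics per alteration type).
cand = coords[intensity < 80]

# 2) Cluster the candidate points into individual deterioration patches.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(cand)

# 3) Quantify each patch area from its 2D convex hull (hull.volume is the area in 2D).
for lab in set(labels) - {-1}:
    hull = ConvexHull(cand[labels == lab])
    print(f"patch {lab}: ~{hull.volume:.2f} m^2 over {np.sum(labels == lab)} points")
```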