• Title/Summary/Keyword: Point cloud data


Analysis of overlap ratio for registration accuracy improvement of 3D point cloud data at construction sites (건설현장 3차원 점군 데이터 정합 정확성 향상을 위한 중첩비율 분석)

  • Park, Su-Yeul; Kim, Seok
    • Journal of KIBIM / v.11 no.4 / pp.1-9 / 2021
  • Compared to general scanning data, the 3D digital map for a large construction site or a complex building consists of millions of points. A large construction site needs to be scanned multiple times by drone photogrammetry or a terrestrial laser scanner (TLS) survey. The scanned point cloud data must be registered with high resolution and high point density. Unlike the registration of 2D data, translation and rotation matrices are used to register 3D point cloud data. Achieving high accuracy with 3D point cloud data is not easy due to the 3D Cartesian coordinate system. Therefore, in this study, the iterative closest point (ICP) registration method was applied to 3D digital maps with different overlap ratios to improve the accuracy of the 3D digital map. This study conducted an accuracy test using overlap ratios of two digital maps ranging from 10% to 100%. The results of the accuracy test identified the optimal overlap ratios for ICP registration of digital maps.
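
As a concrete illustration of the registration step described in this abstract, the following is a minimal ICP sketch using the Open3D library; the file names, voxel size, and correspondence distance are assumptions for illustration, not the paper's experimental settings.

```python
# Minimal ICP registration sketch with Open3D (hypothetical file names; the paper's
# overlap-ratio experiment and data are not reproduced here).
import numpy as np
import open3d as o3d

def icp_register(source_path, target_path, voxel=0.05, max_dist=0.2):
    # Load the two overlapping scans (e.g. drone photogrammetry and TLS surveys).
    source = o3d.io.read_point_cloud(source_path)
    target = o3d.io.read_point_cloud(target_path)

    # Downsample so millions of points stay tractable for ICP.
    source_ds = source.voxel_down_sample(voxel)
    target_ds = target.voxel_down_sample(voxel)

    # Point-to-point ICP estimates a 4x4 rigid transform (rotation + translation).
    result = o3d.pipelines.registration.registration_icp(
        source_ds, target_ds, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    # Fitness (inlier overlap) and RMSE indicate how overlap affects accuracy.
    return result.transformation, result.fitness, result.inlier_rmse

if __name__ == "__main__":
    T, fitness, rmse = icp_register("scan_a.ply", "scan_b.ply")
    print("fitness:", fitness, "inlier RMSE:", rmse)
    print(T)
```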

Automation technology for analyzing 3D point cloud data of construction sites

  • Park, Suyeul; Kim, Younggun; Choi, Yungjun; Kim, Seok
    • International Conference on Construction Engineering and Project Management / 2022.06a / pp.1100-1105 / 2022
  • Denoising, registering, and detecting changes in a 3D digital map are generally conducted by skilled technicians, which leads to inefficiency and the intervention of individual judgment. The manual post-processing for analyzing 3D point cloud data of construction sites requires a long time and sufficient resources. This study develops automation technology for analyzing 3D point cloud data of construction sites. Scanned data are automatically denoised, and the denoised data are stored in a specific storage. The stored data set is automatically registered when the data set to be registered is prepared. In addition, regions with non-homogeneous densities are converted into data with homogeneous density. A change detection function is developed to automatically analyze the degree of terrain change that occurred between time-series data.
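
To make the described automation steps concrete, below is a hedged sketch of a denoise, register, and change-detect sequence with Open3D; the outlier-removal settings, distance threshold, and file names are assumptions, and the study's actual processing chain may differ.

```python
# Sketch of an automated denoise -> register -> change-detect pipeline
# (illustrative Open3D calls; thresholds and file layout are assumptions).
import numpy as np
import open3d as o3d

def denoise(pcd, nb_neighbors=20, std_ratio=2.0):
    # Statistical outlier removal stands in for the paper's denoising step.
    clean, _ = pcd.remove_statistical_outlier(nb_neighbors, std_ratio)
    return clean

def register(source, target, max_dist=0.3):
    # ICP alignment of one scanning epoch onto the other.
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    source.transform(result.transformation)
    return source

def detect_change(before, after, threshold=0.1):
    # Points in `after` farther than `threshold` from `before` are flagged as changed terrain.
    dists = np.asarray(after.compute_point_cloud_distance(before))
    return after.select_by_index(np.where(dists > threshold)[0])

# Usage (hypothetical scan files for two epochs):
# epoch1 = denoise(o3d.io.read_point_cloud("epoch1.ply"))
# epoch2 = denoise(o3d.io.read_point_cloud("epoch2.ply"))
# changed = detect_change(epoch1, register(epoch2, epoch1))
```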


A study on the 2D floor plan derivation of the indoor Point Cloud based on pixelation (포인트 클라우드 데이터의 픽셀화 기반 건축물 실내의 2D도면 도출에 관한 연구)

  • Jung, Yong-Il; Oh, Sang-Min; Ryu, Min-Woo; Kang, Nam-Woo; Cho, Hun-hee
    • Proceedings of the Korean Institute of Building Construction Conference / 2020.06a / pp.105-106 / 2020
  • Recently, methods of deriving an efficient 2D floor plan have been attracting attention for remodeling old buildings with inaccurate 2D floor plans, and thus studies on reverse engineering of indoor Point Cloud Data (PCD) have been actively conducted. However, in the case of indoor PCD, interference from indoor objects limits the available equipment to the Mobile Laser Scanner (MLS), which reduces the efficiency of data processing. Therefore, this study proposes an algorithm that automatically derives a 2D floor plan from indoor PCD based on pixelation. First, the scanned indoor PCD is projected onto the XY coordinate plane. Second, the point distribution of each pixel in the projected PCD is derived through pixelation. Lastly, the 2D floor plan is derived based on this distribution.
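
The pixelation idea above can be sketched with a simple 2D binning of the projected points; the grid resolution and density threshold below are illustrative assumptions, not the authors' parameters.

```python
# Sketch of the pixelation step: project points onto the XY plane, bin them into a
# pixel grid, and keep densely hit pixels as wall candidates for the floor plan.
import numpy as np

def floor_plan_mask(points, pixel_size=0.05, min_hits=50):
    """points: (N, 3) array of indoor PCD coordinates."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    # Map each point to a pixel index on the XY plane.
    idx = np.floor((xy - mins) / pixel_size).astype(int)
    shape = idx.max(axis=0) + 1
    # Count how many points fall into each pixel (the per-pixel point distribution).
    counts = np.zeros(shape, dtype=int)
    np.add.at(counts, (idx[:, 0], idx[:, 1]), 1)
    # Pixels crossed by walls accumulate hits across all heights.
    return counts >= min_hits

# Usage: mask = floor_plan_mask(np.asarray(pcd.points)); visualize mask to see the plan outline.
```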


Machine Learning Based MMS Point Cloud Semantic Segmentation (머신러닝 기반 MMS Point Cloud 의미론적 분할)

  • Bae, Jaegu; Seo, Dongju; Kim, Jinsoo
    • Korean Journal of Remote Sensing / v.38 no.5_3 / pp.939-951 / 2022
  • The most important factor in designing autonomous driving systems is to recognize the exact location of the vehicle within the surrounding environment. To date, various sensors and navigation systems have been used for autonomous driving systems; however, all have limitations. Therefore, the need for high-definition (HD) maps that provide high-precision infrastructure information for safe and convenient autonomous driving is increasing. HD maps are drawn using three-dimensional point cloud data acquired through a mobile mapping system (MMS). However, this process requires manual work due to the large numbers of points and drawing layers, increasing the cost and effort associated with HD mapping. The objective of this study was to improve the efficiency of HD mapping by segmenting semantic information in an MMS point cloud into six classes: roads, curbs, sidewalks, medians, lanes, and other elements. Segmentation was performed using various machine learning techniques including random forest (RF), support vector machine (SVM), k-nearest neighbor (KNN), and gradient-boosting machine (GBM), and 11 variables including geometry, color, intensity, and other road design features. MMS point cloud data for a 130-m section of a five-lane road near Minam Station in Busan were used to evaluate the segmentation models; the average F1 scores of the models were 95.43% for RF, 92.1% for SVM, 91.05% for GBM, and 82.63% for KNN. The RF model showed the best segmentation performance, with F1 scores of 99.3%, 95.5%, 94.5%, 93.5%, and 90.1% for roads, sidewalks, curbs, medians, and lanes, respectively. The variable importance results of the RF model showed a high mean decrease in accuracy and mean decrease in Gini for the XY dist. and Z dist. variables related to road design, respectively. Thus, variables related to road design contributed significantly to the segmentation of semantic information. The results of this study demonstrate the applicability of machine-learning-based segmentation of MMS point cloud data, and will help to reduce the cost and effort associated with HD mapping.
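
As a rough illustration of the segmentation setup, here is a hedged scikit-learn sketch of a random forest over per-point features; the CSV layout, feature names, and split ratio are assumptions, not the study's actual 11 variables or evaluation protocol.

```python
# Hedged sketch: random forest segmentation of MMS points from per-point features.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Assumed table: one row per MMS point, feature columns (geometry, color, intensity,
# road-design distances) plus a class label (road, curb, sidewalk, median, lane, other).
df = pd.read_csv("mms_points.csv")
features = [c for c in df.columns if c != "label"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["label"], test_size=0.3, stratify=df["label"], random_state=0)

rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
print("macro F1:", f1_score(y_test, pred, average="macro"))
# Feature importances hint at which variables (e.g. road-design distances) drive the splits.
print(sorted(zip(rf.feature_importances_, features), reverse=True)[:5])
```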

Construction of BIM based Building 3D Spatial Information Using Terrestrial LiDAR (지상 LiDAR를 이용한 BIM 기반 건물의 3D 공간정보 구축 연구)

  • Kim, Kyeong-Min; Lee, Kil-Jae; Cho, Gi-Sung
    • Journal of Cadastre & Land InformatiX / v.46 no.1 / pp.23-35 / 2016
  • Recently, along with the development of IT, the combination of the building industry and IT has made building exteriors increasingly non-linear and large in scale, producing a wide variety of outer shapes. Buildings therefore need a more accurate representation using visually superior three-dimensional spatial information. Accordingly, this study models the shapes of buildings according to their heights. First, we measured the buildings using a terrestrial LiDAR. Second, we obtained high-density point cloud data of the buildings. From this data, we built the BIM model, compared the heights of each floor's outer information layers, and then identified the BIM data status using the IFC standard format. Based on this, the study proposes a new 3D cadastre and an alternative for the establishment of spatial information.
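
One way to picture the comparison between the TLS scan and the BIM model is a cloud-to-mesh deviation check; the sketch below assumes the BIM/IFC geometry has been exported as a triangle mesh, and the file names are hypothetical.

```python
# Minimal sketch: measure how far a terrestrial LiDAR scan deviates from a BIM-derived mesh.
import numpy as np
import open3d as o3d

scan = o3d.io.read_point_cloud("building_tls.ply")        # terrestrial LiDAR point cloud
bim_mesh = o3d.io.read_triangle_mesh("building_bim.obj")  # mesh exported from the BIM/IFC model

# Sample the BIM surface so both data sets are point clouds, then measure per-point deviation.
bim_pts = bim_mesh.sample_points_uniformly(number_of_points=500_000)
dists = np.asarray(scan.compute_point_cloud_distance(bim_pts))

print("mean deviation (m):", dists.mean())
print("95th percentile (m):", np.percentile(dists, 95))
```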

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk; Kim, Taeyeon; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures the gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning technology. In general, an image contains many types of characters, and optical character recognition extracts all of them. However, some applications need to ignore characters that are not of interest and focus only on specific types of characters. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users. Character strings that are not of interest, such as the device type, manufacturer, manufacturing date, and specification, are not valuable information to the application. Thus, the application has to analyze the point-of-interest region and specific types of characters to extract valuable information only. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the point-of-interest region for selective character information extraction. We built three neural networks for the application system. The first is a convolutional neural network that detects the point-of-interest regions of the gas usage amount and device ID character strings, the second is another convolutional neural network that transforms the spatial information of the point-of-interest region into spatial sequential feature vectors, and the third is a bidirectional long short-term memory network that converts the spatial sequential information into character strings using time-series mapping from feature vectors to character strings. In this research, the point-of-interest character strings are the device ID and gas usage amount. The device ID consists of 12 Arabic numeral characters and the gas usage amount consists of 4-5 Arabic numeral characters. All system components are implemented in the Amazon Web Services cloud with an Intel Xeon E5-2686 v4 CPU and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, handling about 700,000 requests per day. The mobile device captures the gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes reading requests from mobile devices to an input queue with a FIFO (First In, First Out) structure. The slave process consists of the three deep neural networks that conduct the character recognition process and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests. If there are requests from the master process in the input queue, the slave process converts the image in the input queue into the device ID character string, the gas usage amount character string, and the position information of the strings, returns the information to the output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks. 22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant). Normal data are clean images, noise means images with noise signals, reflex means images with light reflection in the gasometer region, scale means images with a small object size due to long-distance capturing, and slant means images that are not horizontally level. The final character string recognition accuracies for the device ID and gas usage amount of normal data are 0.960 and 0.864, respectively.
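
To illustrate the CNN-plus-bidirectional-LSTM arrangement described for the second and third networks, here is a minimal CRNN sketch in PyTorch; the layer sizes, digit-only character set, and input crop size are assumptions, not the system's actual configuration.

```python
# Minimal CRNN sketch: a CNN feature extractor followed by a bidirectional LSTM that
# maps the cropped point-of-interest region to a per-time-step character distribution.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, num_classes=11, hidden=128):  # 10 digits + a CTC blank (assumed)
        super().__init__()
        # CNN turns the cropped region into a sequence of feature columns.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)))           # collapse height, keep width as time steps
        # Bidirectional LSTM models the left-to-right character sequence.
        self.rnn = nn.LSTM(256, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(hidden * 2, num_classes)   # per-time-step character logits

    def forward(self, x):                  # x: (batch, 1, H, W) grayscale crop
        f = self.cnn(x)                    # (batch, 256, 1, W')
        f = f.squeeze(2).permute(0, 2, 1)  # (batch, W', 256) feature sequence
        out, _ = self.rnn(f)
        return self.fc(out)                # decode (e.g. greedy/CTC) into digit strings

# Usage: logits = CRNN()(torch.randn(1, 1, 32, 160)); logits.shape -> (1, 40, 11)
```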