• Title/Summary/Keyword: 3D Point Cloud Data

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.422-424
    • /
    • 2021
  • In this paper, we present an approach that fuses multiple RGB cameras, used for visual object recognition based on deep learning with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and estimate distance and position in the form of a point cloud map of the 3D world. The goal of perception with multiple cameras is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in the blind spots, which helps the AV navigate toward its goal. Running object detection on numerous cameras can slow down real-time processing, so the convolutional neural network chosen to address this problem must also suit the capacity of the hardware. The localization of the classified, detected objects is derived from the 3D point cloud environment. The LiDAR point cloud data are first parsed, and the objects are then localized using an algorithm based on the 3D Euclidean clustering method, which gives accurate object localization. We evaluated the method on our own dataset collected with a VLP-16 and multiple cameras, and the results demonstrate the completeness of the method and the multi-sensor fusion strategy.
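
The Euclidean clustering step mentioned in the abstract can be illustrated with a short, self-contained sketch (not the authors' code): points closer than a distance tolerance grow into the same cluster, and each cluster centroid serves as a rough object position on the map. The file name and parameter values below are assumptions.

```python
# A minimal Euclidean clustering sketch (not the authors' code): grow clusters
# by repeatedly collecting all points within a distance threshold, then take
# each cluster's centroid as a rough object position on the map.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, tolerance=0.5, min_size=30):
    """points: (N, 3) array of LiDAR returns; returns a list of index arrays."""
    tree = cKDTree(points)
    unvisited = np.ones(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if not unvisited[seed]:
            continue
        queue, members = [seed], []
        unvisited[seed] = False
        while queue:
            idx = queue.pop()
            members.append(idx)
            for nb in tree.query_ball_point(points[idx], r=tolerance):
                if unvisited[nb]:
                    unvisited[nb] = False
                    queue.append(nb)
        if len(members) >= min_size:
            clusters.append(np.asarray(members))
    return clusters

# Hypothetical usage: the centroid of each cluster approximates an object's position.
# points = np.loadtxt("vlp16_frame.xyz")          # assumed file name
# positions = [points[c].mean(axis=0) for c in euclidean_clusters(points)]
```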

Automated Derivation of Cross-sectional Numerical Information of Retaining Walls Using Point Cloud Data (점군 데이터를 활용한 옹벽의 단면 수치 정보 자동화 도출)

  • Han, Jehee;Jang, Minseo;Han, Hyungseo;Jo, Hyoungjun;Shin, Do Hyoung
    • Journal of KIBIM
    • /
    • v.14 no.2
    • /
    • pp.1-12
    • /
    • 2024
  • The paper proposes a methodology that combines the Random Sample Consensus (RANSAC) algorithm and the Point Cloud Encoder-Decoder Network (PCEDNet) algorithm to automatically extract the length of infrastructure elements from point cloud data acquired through 3D LiDAR scans of retaining walls. This methodology is expected to significantly improve time and cost efficiency compared to traditional manual measurement techniques, which are crucial for the data-driven analysis required in the precision-demanding construction sector. Additionally, the extracted positional and dimensional data can contribute to enhanced accuracy and reliability in Scan-to-BIM processes. The results of this study are anticipated to provide important insights that could accelerate the digital transformation of the construction industry. This paper provides empirical data on how the integration of digital technologies can enhance efficiency and accuracy in the construction industry, and offers directions for future research and application.
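
The RANSAC half of the pipeline described above can be sketched with off-the-shelf Open3D calls; the PCEDNet classification step is not reproduced. The input file name and thresholds are assumptions, so this is an illustration of plane extraction on a wall scan rather than the authors' implementation.

```python
# A hedged sketch of the RANSAC step only (PCEDNet edge classification is not
# reproduced here): segment the dominant wall plane with Open3D and report the
# extent of the inlier points as a rough cross-sectional dimension.
import open3d as o3d

pcd = o3d.io.read_point_cloud("retaining_wall.ply")      # assumed input file
plane, inliers = pcd.segment_plane(distance_threshold=0.02,
                                   ransac_n=3,
                                   num_iterations=1000)
wall = pcd.select_by_index(inliers)                       # points on the wall face
extent = wall.get_axis_aligned_bounding_box().get_extent()
print("plane ax+by+cz+d=0 coefficients:", plane)
print("approximate face dimensions (m):", extent)
```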

Point Cloud Data Driven Level of detail Generation in Low Level GPU Devices (Low Level GPU에서 Point Cloud를 이용한 Level of detail 생성에 대한 연구)

  • Kam, JungWon;Gu, BonWoo;Jin, KyoHong
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.23 no.6
    • /
    • pp.542-553
    • /
    • 2020
  • Virtual worlds and simulations need large-scale map rendering. However, rendering too many vertices is a computationally complex and time-consuming process. Some game development companies have developed 3D LOD objects for high-speed rendering based on the distance between the camera and the 3D object. Terrain physics simulation researchers need a way to recognize the original object shape from 3D LOD objects. In this paper, we propose a simple automatic LOD framework using point cloud data (PCD). The PCD is created using orthographic rays cast from six directions. Various experiments are performed to validate the effectiveness of the proposed method. We hope the proposed automatic LOD generation framework can play an important role in game development and terrain physics simulation.
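
The six-direction orthographic sampling can be approximated with a per-direction depth buffer, as in the hedged sketch below: for each axis-aligned view, vertices are projected onto a grid and one extreme vertex is kept per cell, and the six samples are merged into a reduced point set. The cell size and function name are illustrative assumptions, not the paper's algorithm.

```python
# A simplified stand-in for the paper's six-direction orthographic sampling:
# for each axis-aligned viewing direction, project the vertices onto a grid
# and keep one extreme vertex per cell (a per-direction depth buffer), then
# merge the six samples into one reduced point cloud (LOD).
import numpy as np

def six_view_sample(vertices, cell=0.1):
    """vertices: (N, 3) array; returns a downsampled (M, 3) point set."""
    kept = []
    for axis in range(3):                      # x, y, z
        for sign in (+1, -1):                  # the two opposite views per axis
            u, v = [a for a in range(3) if a != axis]
            depth = sign * vertices[:, axis]
            grid = np.floor(vertices[:, [u, v]] / cell).astype(np.int64)
            buf = {}
            for i, key in enumerate(map(tuple, grid)):
                if key not in buf or depth[i] < depth[buf[key]]:
                    buf[key] = i               # the extreme vertex wins the cell
            kept.extend(buf.values())
    return vertices[np.unique(kept)]

# Hypothetical usage with an arbitrary vertex array:
# lod_points = six_view_sample(mesh_vertices, cell=0.25)
```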

A Comparison of 3D Reconstruction through the Passive and Pseudo-Active Acquisition of Images (수동 및 반자동 영상획득을 통한 3차원 공간복원의 비교)

  • Jeona, MiJeong;Kim, DuBeom;Chai, YoungHo
    • Journal of Broadcast Engineering
    • /
    • v.21 no.1
    • /
    • pp.3-10
    • /
    • 2016
  • In this paper, two reconstructed point cloud sets containing 3D feature information are analyzed. For the 3D reconstruction of the interior of a building, the first image set is taken by moving a passive camera sequentially along a regular grid path, and the second set comes from applying a laser scanning process. Matched key points across all images are obtained by the SIFT (Scale-Invariant Feature Transform) algorithm and are used for the registration of the point cloud data. The measured results are the number of points, the average density of the point cloud, and the point cloud generation time. Experimental results show that images from additional sensors, as well as the camera images, are necessary for a more accurate 3D reconstruction of a building interior.
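
The SIFT matching step named in the abstract looks roughly like the OpenCV sketch below (not the authors' pipeline): keypoints are detected in two consecutive images, and the matches that pass Lowe's ratio test become the correspondences that feed the registration stage. The image file names and the 0.75 ratio threshold are assumptions.

```python
# A minimal sketch of SIFT keypoint matching between two consecutive views.
import cv2

img1 = cv2.imread("view_000.jpg", cv2.IMREAD_GRAYSCALE)   # assumed file names
img2 = cv2.imread("view_001.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]                 # Lowe's ratio test
print(f"{len(good)} reliable correspondences")
```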

Automatic Generation of Clustered Solid Building Models Based on Point Cloud (포인트 클라우드 데이터 기반 군집형 솔리드 건물 모델 자동 생성 기법)

  • Kim, Han-gyeol;Hwang, YunHyuk;Rhee, Sooahm
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.6_1
    • /
    • pp.1349-1365
    • /
    • 2020
  • In recent years, in the fields of smart cities and digital twins, research on model generation has been increasing because point clouds offer the advantage of acquiring actual 3D coordinates. In addition, there is an increasing demand for solid models in which the shape and texture of a building can easily be modified. Accordingly, in this paper, we propose a method to create clustered solid building models based on point cloud data. The proposed method consists of five steps. In the first step, the ground points were removed through planarity analysis of the point cloud. In the second step, building areas were extracted from the ground-removed point cloud. In the third step, the detailed structural areas of the buildings were extracted. In the fourth step, 3D building model shapes were created by adding 3D coordinate information to the extracted areas. In the last step, 3D building solid models were created by applying texture to the building model shapes. To verify the proposed method, we experimented using point clouds extracted from unmanned aerial vehicle images with commercial software. As a result, 3D building shapes with a position error of about 1 m relative to the point cloud were created for all buildings above a certain height. In addition, it was confirmed that the textured 3D models were generated with a resolution within twice that of the original images.
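
The first two steps (ground removal by planarity and building-area extraction) can be sketched with standard Open3D primitives rather than the authors' code: a RANSAC plane picks out the ground, and DBSCAN clustering groups the remaining points into building-candidate areas. The input file and parameters are assumptions.

```python
# A hedged sketch of ground removal and building-area extraction only.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("uav_points.ply")            # assumed input
_, ground = pcd.segment_plane(distance_threshold=0.3,
                              ransac_n=3, num_iterations=2000)
above = pcd.select_by_index(ground, invert=True)           # drop ground points

labels = np.asarray(above.cluster_dbscan(eps=1.5, min_points=50))
buildings = [np.where(labels == k)[0] for k in range(labels.max() + 1)]
print(f"{len(buildings)} building-candidate clusters")
```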

Spherical Signature Description of 3D Point Cloud and Environmental Feature Learning based on Deep Belief Nets for Urban Structure Classification (도시 구조물 분류를 위한 3차원 점 군의 구형 특징 표현과 심층 신뢰 신경망 기반의 환경 형상 학습)

  • Lee, Sejin;Kim, Donghyun
    • The Journal of Korea Robotics Society
    • /
    • v.11 no.3
    • /
    • pp.115-126
    • /
    • 2016
  • This paper presents a spherical signature description of 3D point clouds taken from a laser range scanner mounted on a ground vehicle. Based on the spherical signature description of each point, an extractor of significant environmental features is learned by a Deep Belief Net for urban structure classification. Any point in the 3D point cloud can represent its signature on the surrounding sky surface by using several neighboring points: the unit sphere centered on that point accumulates evidence in each angular tessellation. Depending on the kind of region a point belongs to, such as wall, ground, tree, or car, the resulting spherical signature descriptions look quite different from each other. These data are fed into a Deep Belief Net, a type of deep neural network, to learn the environmental feature extractor. With this learned feature extractor, 3D points can be classified well according to their urban structure classes. Experimental results show that the proposed method, based on the spherical signature description and Deep Belief Nets, is suitable for mobile robots in terms of classification accuracy.
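
One plausible reading of the spherical signature, sketched below under my own assumptions (search radius, bin counts), is an azimuth/elevation histogram of the directions from a query point to its neighbours on the unit sphere; the resulting fixed-length vector is the kind of input the Deep Belief Net would learn from. This is an illustration, not the authors' descriptor.

```python
# A rough sketch of a per-point spherical signature as an angular histogram.
import numpy as np
from scipy.spatial import cKDTree

def spherical_signature(points, query_idx, radius=1.0, n_az=12, n_el=6):
    """points: (N, 3) array; returns a normalised (n_az * n_el,) histogram."""
    tree = cKDTree(points)
    nbrs = tree.query_ball_point(points[query_idx], r=radius)
    d = points[nbrs] - points[query_idx]
    d = d[np.linalg.norm(d, axis=1) > 1e-6]                # drop the point itself
    az = np.arctan2(d[:, 1], d[:, 0])                      # [-pi, pi]
    el = np.arcsin(d[:, 2] / np.linalg.norm(d, axis=1))    # [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(az, el,
                                bins=[n_az, n_el],
                                range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    return hist.ravel() / max(hist.sum(), 1)               # normalised signature
```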

MPEG-DASH based 3D Point Cloud Content Configuration Method (MPEG-DASH 기반 3차원 포인트 클라우드 콘텐츠 구성 방안)

  • Kim, Doohwan;Im, Jiheon;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.24 no.4
    • /
    • pp.660-669
    • /
    • 2019
  • Recently, with the development of three-dimensional scanning devices and multi-dimensional array cameras, research on techniques for handling three-dimensional data is continuously conducted in application fields such as AR (Augmented Reality) / VR (Virtual Reality) and autonomous driving. In particular, in the AR / VR field, content that expresses 3D video as point data has appeared, but it requires a larger amount of data than conventional 2D images. Therefore, in order to serve 3D point cloud content to users, various technological developments such as highly efficient encoding/decoding, storage, and transfer are required. In this paper, a V-PCC bitstream created with the V-PCC encoder proposed by the MPEG-I (MPEG-Immersive) V-PCC (Video-based Point Cloud Compression) group is organized into segments as defined by the MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard. In addition, in order to provide the user with information on the 3D coordinate system, a depth information parameter is additionally defined in the signaling message. We then design a verification platform to verify the technology proposed in this paper and confirm it at the algorithm level.
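
As a purely hypothetical illustration of DASH signaling for segmented V-PCC content, the sketch below builds a skeletal MPD and attaches a depth-range descriptor as a SupplementalProperty. The schemeIdUri, attribute names, and values are invented for illustration; the paper's actual signaling syntax is not reproduced here.

```python
# A hypothetical MPD skeleton for segmented V-PCC content with an invented
# depth-range descriptor; not the paper's signaling definition.
import xml.etree.ElementTree as ET

mpd = ET.Element("MPD", xmlns="urn:mpeg:dash:schema:mpd:2011",
                 type="static", mediaPresentationDuration="PT30S")
period = ET.SubElement(mpd, "Period")
aset = ET.SubElement(period, "AdaptationSet", mimeType="video/mp4")
ET.SubElement(aset, "SupplementalProperty",
              schemeIdUri="urn:example:vpcc:depth:2019",    # invented scheme
              value="near=0.5,far=50.0")                    # assumed depth range (m)
rep = ET.SubElement(aset, "Representation", id="vpcc_geo", bandwidth="8000000")
ET.SubElement(rep, "SegmentTemplate", media="seg_$Number$.m4s",
              initialization="init.mp4", duration="2000", timescale="1000")
print(ET.tostring(mpd, encoding="unicode"))
```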

3D-Distortion Based Rate Distortion Optimization for Video-Based Point Cloud Compression

  • Yihao Fu;Liquan Shen;Tianyi Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.435-449
    • /
    • 2023
  • The state-of-the-art video-based point cloud compression (V-PCC) achieves high efficiency in compressing 3D point clouds by projecting points onto 2D images. These images are then padded and compressed by High Efficiency Video Coding (HEVC). Pixels in the padded 2D images are classified into three groups: origin pixels, padded pixels, and unoccupied pixels. Origin pixels are generated from the projection of the 3D point cloud. Padded pixels and unoccupied pixels are generated by copying values from origin pixels during image padding. Padded pixels, like origin pixels, are reconstructed into 3D space during geometry reconstruction, whereas unoccupied pixels are not reconstructed. The rate distortion optimization (RDO) used in HEVC mainly aims to keep the balance between video distortion and video bitrate. However, traditional RDO is unreliable for padded pixels and unoccupied pixels, which leads to a significant waste of bits in geometry reconstruction. In this paper, we propose a new RDO scheme which takes 3D-distortion into account instead of traditional video distortion for padded pixels and unoccupied pixels. Firstly, these pixels are classified based on the occupancy map. Secondly, different strategies are applied to these pixels to calculate their 3D-distortions. Finally, the obtained 3D-distortions replace the sum of squared errors (SSE) during the full RDO process in intra prediction and inter prediction. The proposed method is applied to geometry frames. Experimental results show that the proposed algorithm achieves an average of 31.41% and 6.14% bitrate savings for the D1 metric in the Random Access and All Intra settings on geometry videos compared with the V-PCC anchor.
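
A schematic of the modified cost computation, under my own assumptions about how the occupancy map is coded and with illustrative per-class weights standing in for the paper's 3D-distortion models: origin, padded, and unoccupied pixels contribute differently to the distortion term of J = D + λR.

```python
# A schematic sketch of a 3D-distortion-aware RDO cost (not the V-PCC
# reference software): pixels are split by the occupancy map, origin pixels
# keep the usual squared error, while padded/unoccupied pixels contribute an
# assumed weighted term, and the total replaces SSE in J = D + lambda * R.
import numpy as np

def rdo_cost(orig, recon, occupancy, bits, lam, w_padded=0.25, w_unocc=0.0):
    """orig/recon: geometry blocks; occupancy: 1=origin, 2=padded, 0=unoccupied
    (a coding assumed here for illustration). w_padded/w_unocc are illustrative
    weights standing in for the per-class 3D-distortion models in the paper."""
    err = (orig.astype(np.int64) - recon.astype(np.int64)) ** 2
    d_origin = err[occupancy == 1].sum()              # reconstructed 3D points
    d_padded = w_padded * err[occupancy == 2].sum()   # reconstructed copies
    d_unocc = w_unocc * err[occupancy == 0].sum()     # never reconstructed
    distortion = d_origin + d_padded + d_unocc
    return distortion + lam * bits                    # rate-distortion cost J
```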

A Study on Automatic Modeling of Pipelines Connection Using Point Cloud (포인트 클라우드를 이용한 파이프라인 연결 자동 모델링에 관한 연구)

  • Lee, Jae Won;Patil, Ashok Kumar;Holi, Pavitra;Chai, Young Ho
    • Korean Journal of Computational Design and Engineering
    • /
    • v.21 no.3
    • /
    • pp.341-352
    • /
    • 2016
  • Manual 3D pipeline modeling from LiDAR-scanned point cloud data is a laborious and time-consuming process. This paper presents a method to extract the pipe, elbow, and branch information that is essential for automatic modeling of pipeline connections. The pipe geometry is estimated from the point cloud data through the Hough transform, and the elbow position is calculated from the intersection of the medial axes to assemble the nearest pair of pipes. A branch is also created for a pair of pipe segments by estimating virtual points on one pipe segment and checking for a feasible intersection with the other pipe's endpoint within a pre-defined distance range. As a result of the automatic modeling, a complete 3D pipeline model is generated by connecting the extracted information of pipes, elbows, and branches.
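
The elbow-placement step (intersection of two pipes' medial axes) reduces to finding the closest point between two 3D lines, as in the small geometric sketch below; the Hough-based pipe extraction itself is not reproduced, and the function name is my own.

```python
# Closest point between two pipe axes, used as an approximate elbow centre.
import numpy as np

def elbow_position(p1, d1, p2, d2):
    """p*, d*: (3,) arrays; an axis point and direction for each pipe."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:                         # parallel axes: no elbow
        return None
    s = (b * e - c * d) / denom                   # parameter on axis 1
    t = (a * e - b * d) / denom                   # parameter on axis 2
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))  # midpoint of closest approach
```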

A Basic Study on Data Structure and Process of Point Cloud based on Terrestrial LiDAR for Guideline of Reverse Engineering of Architectural MEP (건축 MEP 역설계 지침을 위한 라이다 기반 포인트 클라우드 데이터 자료 구조 및 프로세스 기초 연구)

  • Kim, Ji-Eun;Park, Sang-Chul;Kang, Tae-Wook
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.8
    • /
    • pp.5695-5706
    • /
    • 2015
  • Recently, the adoption of BIM technology for building renovation and remodeling has increased in the construction industry. However, 2D drawing-based BIM modeling is difficult for most buildings because the 2D drawings have not been continuously updated to reflect actual conditions. Applying reverse engineering, this study analyzed the point cloud data structure and process as a guideline for the reverse engineering of architectural MEP, and derived the related considerations. To promote the active use of 3D scanning technology domestically, the objective of this study is to analyze point cloud data processing from a real site with terrestrial LiDAR and the process from data gathering to data acquisition.
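
As a loose illustration of the kind of per-point record a terrestrial LiDAR export carries before any MEP reverse-engineering step, the sketch below loads an assumed ASCII export into a structured array; the field names and file name are generic assumptions, not the study's specification.

```python
# A minimal structured record for terrestrial LiDAR points (generic fields).
import numpy as np

point_dtype = np.dtype([("x", "f8"), ("y", "f8"), ("z", "f8"),
                        ("intensity", "f4"), ("scan_id", "u2")])

# Assumed ASCII export: one "x y z intensity scan_id" record per line.
points = np.loadtxt("mep_scan.txt", dtype=point_dtype)
print(points.shape, points.dtype.names)
```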