• Title/Summary/Keyword: 3D Point cloud


Improving Performance of File-referring Octree Based on Point Reallocation of Point Cloud File (포인트 클라우드 파일의 측점 재배치를 통한 파일 참조 옥트리의 성능 향상)

  • Han, Soohee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.5 / pp.437-442 / 2015
  • Recently, the size of point clouds has been increasing rapidly with the advancement of 3D terrestrial laser scanners. This study aimed to improve the file-referring octree introduced in a preceding study, which was designed to generate an octree and query points from a large point cloud gathered by 3D terrestrial laser scanners. To this end, every leaf node of the octree was designed to store only the file pointer of its first point, and the point cloud file was reconstructed so that points belonging to the same leaf node are stored sequentially. An octree was generated from a point cloud of about 300 million points, and the time required to query proximate points within a given distance of a series of query points was measured. The present method outperformed the preceding one in every aspect: generating, storing, and restoring the octree, as well as point querying and memory usage. Query speed increased by a factor of 2 and memory efficiency by a factor of 4. The method is therefore a clear improvement over the preceding one, and it confirms that an octree can be generated, and points queried, from a huge point cloud, even one larger than the main memory.
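
A minimal sketch of the file-referring idea described above (not the authors' implementation): once the point file is rewritten so that points of the same leaf are contiguous, a leaf needs only the file offset of its first point plus a point count, and a single seek followed by one sequential read restores the whole leaf. The file layout (x, y, z as little-endian float64) and the `LeafNode` fields are illustrative assumptions.

```python
import struct
from dataclasses import dataclass

POINT_SIZE = 3 * 8  # three float64 coordinates per point (assumed record layout)

@dataclass
class LeafNode:
    offset: int   # byte offset of the leaf's first point in the reordered file
    count: int    # number of consecutive points belonging to this leaf

def read_leaf_points(path: str, leaf: LeafNode):
    """Restore all points of a leaf with a single seek and one sequential read."""
    with open(path, "rb") as f:
        f.seek(leaf.offset)
        raw = f.read(leaf.count * POINT_SIZE)
    return [struct.unpack_from("<3d", raw, i * POINT_SIZE) for i in range(leaf.count)]
```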

Development of a 3D earthwork model based on reverse engineering

  • Kim, Sung-Keun
    • International conference on construction engineering and project management / 2015.10a / pp.641-642 / 2015
  • Unlike other building processes, BIM for earthwork does not need a large variety of 3D model shapes; however, it requires a 3D model that can efficiently reflect the changing shape of the ground and provide soil-type-dependent workload calculation and information on equipment for optimal management. Objects for earthwork have not yet been defined because the current BIM system does not provide them. BIM technology as commonly applied in the manufacturing sector uses real-object data obtained through 3D scanning to generate 3D parametric solid models. 3D scanning, which is used when no 3D models exist, has the advantage of rapidly generating parametric solid models. In this study, a method to generate 3D models for earthwork operations using reverse engineering is suggested. 3D scanning is used to create a point cloud of the construction site, and the point cloud data are used to generate a surface model, which is then converted into a parametric model with 3D objects for earthwork.
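
The abstract does not name a specific tool for the point-cloud-to-surface step, so the following is only an illustrative sketch of one common route, Poisson surface reconstruction in Open3D on a synthetic stand-in for a scanned ground surface; the parameters are assumed values.

```python
import numpy as np
import open3d as o3d

# Synthetic stand-in for a scanned ground surface (a gently undulating patch).
x, y = np.meshgrid(np.linspace(0, 50, 200), np.linspace(0, 50, 200))
z = 2.0 * np.sin(x / 10.0) * np.cos(y / 12.0)
points = np.column_stack([x.ravel(), y.ravel(), z.ravel()])

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.estimate_normals(search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))

# Fit a triangle surface to the point cloud; such a surface model could then
# be handed on to a parametric/BIM modelling step.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("site_surface.ply", mesh)
```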


AR Anchor System Using Mobile Based 3D GNN Detection

  • Jeong, Chi-Seo;Kim, Jun-Sik;Kim, Dong-Kyun;Kwon, Soon-Chul;Jung, Kye-Dong
    • International Journal of Internet, Broadcasting and Communication / v.13 no.1 / pp.54-60 / 2021
  • AR (Augmented Reality) is a technology that adds virtual content to the real world and provides additional information about objects in real time through 3D content. In the past, a high-performance device was required to experience AR, but improvements in mobile performance and the addition of sensors such as ToF (Time-of-Flight) have made AR much easier to implement. The importance of mobile augmented reality is also growing with the commercialization of high-speed wireless Internet such as 5G. This paper therefore proposes a system that provides AR services via a GNN (Graph Neural Network) using the cameras and sensors of mobile devices. The ToF sensor of the mobile device is used to capture depth maps, and a 3D point cloud is created from RGB images to distinguish the specific colors of objects. The point clouds created from the RGB images and depth maps are downsampled for smooth communication between the mobile device and the server. The point clouds sent to the server are used for 3D object detection: the detection process determines the class of each object and uses one point of its 3D bounding box as an anchor point. AR content is then provided through apps and the web using the class and anchor point of the detected object.
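
A minimal sketch of two of the steps described above: back-projecting a ToF depth map into a camera-space point cloud and voxel-downsampling it before sending it to the server. The camera intrinsics (fx, fy, cx, cy), the voxel size, and the random depth frame are assumed stand-in values, not the paper's data.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-space 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                 # drop pixels with no depth

def voxel_downsample(points, voxel=0.05):
    """Keep one representative point per voxel to reduce the upload size."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

depth = np.random.uniform(0.5, 4.0, size=(240, 320))   # stand-in for a ToF frame
cloud = depth_to_points(depth, fx=250.0, fy=250.0, cx=160.0, cy=120.0)
small = voxel_downsample(cloud, voxel=0.05)
```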

3D Measurement Method Based on Point Cloud and Solid Model for Urban Single Trees (Point cloud와 solid model을 기반으로 한 단일수목 입체적 정량화기법 연구)

  • Park, Haekyung
    • Korean Journal of Remote Sensing / v.33 no.6_2 / pp.1139-1149 / 2017
  • A tree's volume is an important input to various environmental analysis models; however, it is difficult to measure the fragmented small green spaces of a city with economical equipment. In addition, trees are sensitive to the seasons, so equipment and quantification methods that are newer and easier to use than LiDAR are needed for high-frequency monitoring. In particular, the size of urban trees affects management costs, ecosystem services, and safety, so trees need to be managed and documented on an individual-tree basis. In this study, we acquire image data with a UAV (Unmanned Aerial Vehicle), which can be operated frequently and at low cost, and quickly and easily quantify a single tree using SfM-MVS (Structure from Motion-Multi View Stereo). We evaluate the impact of reducing the number of images on the point density of the point clouds generated by SfM-MVS and on the quantification of single trees. We also use a watertight model to estimate the volume of a single tree and to shape it into a 3D structure, and compare it with the quantification results of three different types of 3D models. The results show that UAVs, SfM-MVS, and solid models can easily quantify and model a single tree at low cost and with high temporal resolution. This study covers only a single tree; to apply the approach at a larger scale, follow-up research is needed, such as convergence with various spatial information data, improvement of the quantification technique, and flight planning for larger green spaces.
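
A short sketch of how a watertight (closed, consistently oriented) triangle mesh yields a volume: sum the signed volumes of the tetrahedra formed by each triangle and the origin (divergence theorem). The unit cube below is only a sanity check, not data from the study.

```python
import numpy as np

def watertight_volume(vertices, faces):
    """Volume of a closed, consistently oriented triangle mesh."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return np.abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0

# Unit cube as a sanity check: expected volume 1.0
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
faces = np.array([[0, 2, 1], [0, 3, 2], [4, 5, 6], [4, 6, 7], [0, 1, 5], [0, 5, 4],
                  [1, 2, 6], [1, 6, 5], [2, 3, 7], [2, 7, 6], [3, 0, 4], [3, 4, 7]])
print(watertight_volume(verts, faces))   # -> 1.0
```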

3D Scanning Data Coordination and As-Built-BIM Construction Process Optimization - Utilization of Point Cloud Data for Structural Analysis

  • Kim, Tae Hyuk;Woo, Woontaek;Chung, Kwangryang
    • Architectural research / v.21 no.4 / pp.111-116 / 2019
  • The premise of this research is the recent advancement of Building Information Modeling (BIM) technology and laser scanning technology (3D scanning). The purpose of the paper is to amplify the potential offered by the combination of BIM and Point Cloud Data (PCD) for structural analysis. Today, enormous amounts of construction site data can be categorized and quantified through BIM software. One of the extraordinary strengths of BIM software is its collaborative nature, which allows different sources of data and knowledge to be combined. There are many ways to obtain construction site data, and 3D scanning is one of the most effective ways to collect close-to-reality site data. The objective of this paper is to emphasize the prospects of pre-scanning and post-scanning automation algorithms. The research aims to build on the recent development of 3D scanning and BIM technology to advance Scan-to-BIM. The paper reviews the current issues of Scan-to-BIM tasks for achieving As-Built BIM, suggests how they can be improved, and proposes a method of coordinating and utilizing PCD for construction and structural analysis during construction.

Automatic Building Modeling Method Using Planar Analysis of Point Clouds from Unmanned Aerial Vehicles (무인항공기에서 생성된 포인트 클라우드의 평면성 분석을 통한 자동 건물 모델 생성 기법)

  • Kim, Han-gyeol;Hwang, YunHyuk;Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.973-985 / 2019
  • In this paper, we propose a method that separates the ground from building areas and automatically generates building models through planarity analysis of a UAV (Unmanned Aerial Vehicle) based point cloud. The proposed method consists of five steps. First, planes are extracted by analyzing the planarity of the input point cloud. Second, the extracted planes are analyzed to find the plane corresponding to the ground surface, and the points belonging to that plane are removed from the point cloud. Third, an ortho-projected image is generated from the point cloud with the ground surface removed. Fourth, the outline of each object is extracted from the ortho-projected image, and non-building areas are removed using the area and the area-to-length ratio of the outline. Finally, the outer boundary points of each building are constructed using the building's ground height and the building's height, and 3D building models are created. To verify the proposed method, we used point clouds generated from UAV images. The experiments confirmed that 3D building models are generated automatically.
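
A minimal sketch of the idea behind the first two steps: fit a dominant plane with RANSAC and drop its inliers as the ground surface. This is an illustrative stand-in, not the authors' exact plane analysis; the thresholds, iteration count, and synthetic data are assumptions.

```python
import numpy as np

def ransac_plane(points, n_iter=500, threshold=0.2, seed=0):
    """Return (normal, d, inlier_mask) of the best plane n·p + d = 0."""
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        mask = np.abs(points @ n + d) < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask

# Synthetic example: flat ground plus an elevated block of "building" points.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(0, 50, 2000), rng.uniform(0, 50, 2000),
                          rng.normal(0, 0.05, 2000)])
building = np.column_stack([rng.uniform(10, 20, 500), rng.uniform(10, 20, 500),
                            rng.uniform(5, 8, 500)])
cloud = np.vstack([ground, building])

normal, d, ground_mask = ransac_plane(cloud)
non_ground = cloud[~ground_mask]          # candidate building points
```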

3D Point Cloud Reconstruction Technique from 2D Image Using Efficient Feature Map Extraction Network (효율적인 feature map 추출 네트워크를 이용한 2D 이미지에서의 3D 포인트 클라우드 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE / v.26 no.3 / pp.408-415 / 2022
  • In this paper, we propose a technique for reconstructing a 3D point cloud from 2D images using an efficient feature-map extraction network. The originality of the proposed method is as follows. First, we use a new feature-map extraction network that is about 27% more memory-efficient than existing techniques. The proposed network does not downscale the feature maps in the middle of the deep learning network, so important information required for 3D point cloud reconstruction is not lost; the memory increase caused by keeping the image at full size is handled by reducing the number of channels and configuring the network to be efficiently shallow. Second, by preserving the high-resolution features of the 2D image, accuracy can be improved over the conventional technique: the feature map extracted from the non-reduced image contains more detailed information than in existing methods, which further improves the reconstruction accuracy of the 3D point cloud. Third, we use a divergence loss that does not require shooting information. Requiring not only the 2D image but also the shooting angle for training means the dataset must contain this additional information, which makes it difficult to construct; in this paper, the accuracy of the reconstructed 3D point cloud is instead increased by increasing the diversity of information through randomness, without additional shooting information. To evaluate the proposed method objectively, we used the ShapeNet dataset and the same protocol as the comparison papers: the proposed method achieves a CD value of 5.87, an EMD value of 5.81, and 2.9G FLOPs. Lower CD and EMD values mean the reconstructed 3D point cloud is closer to the original, and lower FLOPs mean the deep learning network is lighter. The CD, EMD, and FLOPs results therefore show about a 27% improvement in memory and a 6.3% improvement in accuracy compared to the methods in other papers, demonstrating the objective performance of the proposed method.
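
A small sketch of the Chamfer Distance (CD) metric used in the evaluation above: the average nearest-neighbour distance from each reconstructed point to the ground truth and vice versa. Scaling and averaging conventions vary between papers, so the absolute numbers depend on the convention; the random clouds below are only placeholders.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between two point sets a (N,3) and b (M,3)."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)   # pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

pred = np.random.rand(1024, 3)      # stand-in reconstructed cloud
gt = np.random.rand(1024, 3)        # stand-in ground-truth cloud
print(chamfer_distance(pred, gt))
```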

3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young;Yoo, Eun Jin;Lee, Dong-Cheon;Lee, Yong Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.6_2 / pp.643-651 / 2012
  • Object recognition belongs to high-level processing, which is one of the difficult and challenging tasks in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the mid-1980s. However, the ultimate goal of digital photogrammetry, intelligent and autonomous surface reconstruction, has not yet been achieved. Object recognition requires a robust shape description of objects, but most shape descriptors are designed for 2D image data. Such descriptors therefore have to be extended to handle 3D data such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space with a hierarchical approach for segmenting point cloud data. The experiments demonstrate the effectiveness and robustness of the proposed method for shape description and point cloud segmentation. The geometric characteristics of various roof types are well described, which will eventually serve as the basis for object modeling. The segmentation accuracy for the simulated data was evaluated by measuring the coordinates of the corners on the segmented patch boundaries. The overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
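
A small sketch of one natural way to extend the chain code to 3D: quantize each step between consecutive boundary points to the nearest of the 26 voxel-neighbourhood directions. The paper's hierarchical descriptor is more elaborate; this only illustrates the basic 3D chain-code idea, and the sample "ridge" points are made up.

```python
import numpy as np

# All 26 direction vectors of a 3x3x3 neighbourhood (excluding the zero vector), normalized.
DIRS = np.array([[i, j, k] for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1)
                 if (i, j, k) != (0, 0, 0)], dtype=float)
DIRS /= np.linalg.norm(DIRS, axis=1, keepdims=True)

def chain_code_3d(points):
    """Return the sequence of 26-direction codes along an ordered 3D point chain."""
    steps = np.diff(points, axis=0)
    steps = steps / np.linalg.norm(steps, axis=1, keepdims=True)
    return np.argmax(steps @ DIRS.T, axis=1)      # index of the closest direction

ridge = np.array([[0, 0, 0], [1, 0, 0.5], [2, 0, 1.0], [3, 0, 0.5], [4, 0, 0]], float)
print(chain_code_3d(ridge))
```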

Generating a Rectangular Net from Unorganized Point Cloud Data Using an Implicit Surface Scheme (음 함수 곡면기법을 이용한 임의의 점 군 데이터로부터의 사각망 생성)

  • Yoo, D.J.
    • Korean Journal of Computational Design and Engineering / v.12 no.4 / pp.274-282 / 2007
  • In this paper, a method of constructing a rectangular net from unorganized point cloud data is presented. In the method, an implicit surface that fits the given point data is generated using principal component analysis (PCA) and an adaptive domain decomposition method (ADDM). A complete, high-quality rectangular net can then be obtained by extracting voxel data from the implicit surface and projecting the exterior faces of the extracted voxels onto the implicit surface. The main advantage of the proposed method is that a quality rectangular net can be extracted from randomly scattered 3D points alone, without any further information. Furthermore, by suitable processing of the implicit surface and the generated rectangular net, the results of this work can be used to obtain much useful information, including slicing data, a solid STL model, and a NURBS surface model, in the many areas that involve handling large amounts of point data.
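
An illustrative sketch of a PCA-based implicit function: the signed distance from a query point to the plane fitted (via PCA) through its k nearest scan points, so that f(x) is approximately zero on the data. The paper's ADDM-based construction is more involved; this only shows how PCA can yield a local implicit surface value, with k and the sample data as assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def implicit_value(query, points, tree, k=20):
    """Signed distance from `query` to the local PCA plane of its k nearest neighbours."""
    _, idx = tree.query(query, k=k)
    nbrs = points[idx]
    centroid = nbrs.mean(axis=0)
    # Direction of smallest variance = local surface normal (orientation is arbitrary,
    # so the sign of the returned value is only locally consistent).
    _, _, vt = np.linalg.svd(nbrs - centroid)
    normal = vt[-1]
    return float((query - centroid) @ normal)

pts = np.random.rand(5000, 3)
pts[:, 2] = 0.1 * np.sin(4 * pts[:, 0])          # a wavy sheet as sample data
tree = cKDTree(pts)
print(implicit_value(np.array([0.5, 0.5, 0.3]), pts, tree))   # nonzero off the surface
```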

Matching for the Elbow Cylinder Shape in the Point Cloud Using the PCA (주성분 분석을 통한 포인트 클라우드 굽은 실린더 형태 매칭)

  • Jin, YoungHoon
    • Journal of KIISE / v.44 no.4 / pp.392-398 / 2017
  • The point-cloud representation of an object is obtained by scanning a space with a laser scanner that extracts a set of points, which are then integrated into the same coordinate system through registration. The registered, integrated point cloud is classified into meaningful regions, shapes, and noise through mathematical analysis. In this paper, the aim is to match curved regions, such as a bent (elbow) cylinder shape, in 3D point-cloud data. The matching procedure obtains center and radius data by extracting cylinder-shape candidates from spheres fitted with RANdom SAmple Consensus (RANSAC) in the point cloud, and then matches the curved region with a Catmull-Rom spline through the extracted center points using Principal Component Analysis (PCA). The proposed method is expected not only to deliver fast estimates for straight and curved cylinders after a center-axis estimation without constraints or segmentation, but also to increase the efficiency of reverse engineering work.
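
A short sketch of the Catmull-Rom step mentioned above: interpolating a smooth centre curve through cylinder centre points such as those extracted by the RANSAC/PCA stage. The centre points below are made-up sample values, not data from the paper.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom point between p1 and p2 at parameter t in [0, 1]."""
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def spline_through(centers, samples_per_seg=20):
    """Sample a Catmull-Rom curve passing through successive centre points."""
    c = np.asarray(centers, dtype=float)
    c = np.vstack([c[0], c, c[-1]])                 # duplicate ends as phantom control points
    ts = np.linspace(0.0, 1.0, samples_per_seg, endpoint=False)
    samples = [catmull_rom(c[i - 1], c[i], c[i + 1], c[i + 2], t)
               for i in range(1, len(c) - 2) for t in ts]
    return np.array(samples + [c[-2]])              # include the final centre point

centers = [[0, 0, 0], [1, 0.2, 0], [2, 0.8, 0.1], [3, 1.8, 0.3]]   # assumed sample centres
curve = spline_through(centers)
```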