• Title/Summary/Keyword: 3D point cloud

Search results: 389

Pointwise CNN for 3D Object Classification on Point Cloud

  • Song, Wei; Liu, Zishu; Tian, Yifei; Fong, Simon
    • Journal of Information Processing Systems / v.17 no.4 / pp.787-800 / 2021
  • Three-dimensional (3D) object classification using point clouds is widely applied in 3D modeling, face recognition, and robotic missions. However, processing raw point clouds directly is problematic for traditional convolutional networks because of the irregular data format of point clouds. This paper proposes a pointwise convolutional neural network (CNN) structure that can process point cloud data directly, without preprocessing. First, a 2D convolutional layer is introduced to perceive the coordinate information of each point. Then, multiple 2D convolutional layers and a global max pooling layer are applied to extract global features. Finally, based on the extracted features, fully connected layers predict the class labels of objects. We evaluated the proposed pointwise CNN structure on the ModelNet10 dataset, where it obtained higher accuracy than the existing methods. Experiments on ModelNet10 also show that differences in the number of points per cloud do not significantly influence the proposed pointwise CNN structure.
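
A minimal PyTorch sketch of the pointwise-convolution idea described in this abstract: 1x1 2D convolutions act on each point's coordinates independently, a global max pool aggregates per-point features, and fully connected layers predict the class. This is not the authors' exact architecture; layer widths and the 10-class output (ModelNet10) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PointwiseCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Treat the cloud as a (B, 3, N, 1) "image" so kernel-size-1 convolutions
        # share weights across points without mixing neighboring points.
        self.pointwise = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=1), nn.ReLU(),
            nn.Conv2d(128, 1024, kernel_size=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (B, N, 3) raw point coordinates, no preprocessing required.
        x = xyz.transpose(1, 2).unsqueeze(-1)   # (B, 3, N, 1)
        x = self.pointwise(x)                   # (B, 1024, N, 1) per-point features
        x = torch.amax(x, dim=2).squeeze(-1)    # global max pool over the N points
        return self.classifier(x)               # (B, num_classes) logits

logits = PointwiseCNN()(torch.rand(8, 1024, 3))  # e.g. 8 clouds of 1024 points each
```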

Valve Modeling and Model Extraction on 3D Point Cloud Data (잡음이 있는 3차원 점군 데이터에서 밸브 모델링 및 모델 추출)

  • Oh, Ki Won; Choi, Kang Sun
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.12 / pp.77-86 / 2015
  • It is difficult to extract small valves automatically from noisy 3D point clouds obtained by LiDAR, because small objects are strongly affected by noise. In this paper, to extract the pose of a valve, we model it as a composite of a torus, a cylinder, and a plane, representing the handle, the rib, and the center plane, respectively. The center of the valve is provided as an additional input. We generate a histogram of the distances between this center and each point of the point cloud, and obtain the valve's pose by estimating the parameters of the handle, rib, and center plane. Finally, the valve is reconstructed.
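
A short sketch of the distance-histogram step described above, assuming the valve center is supplied by the user: distances from the center to every point are binned, and the dominant bin gives a rough guess of the handle (torus) radius. The bin count and the peak-picking rule are illustrative, not the paper's exact parameters.

```python
import numpy as np

def distance_histogram(points: np.ndarray, center: np.ndarray, bins: int = 64):
    """points: (N, 3) noisy LiDAR samples; center: (3,) user-supplied valve center."""
    d = np.linalg.norm(points - center, axis=1)   # distance of every point to the center
    hist, edges = np.histogram(d, bins=bins)
    return hist, edges

# Example: take the mid-point of the most populated bin as a handle-radius guess.
pts = np.random.rand(5000, 3)
hist, edges = distance_histogram(pts, center=np.array([0.5, 0.5, 0.5]))
handle_radius_guess = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
```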

Multiple Depth and RGB Camera-based System to Acquire Point Cloud for MR Content Production (MR 콘텐츠 제작을 위한 다중 깊이 및 RGB 카메라 기반의 포인트 클라우드 획득 시스템)

  • Kim, Kyung-jin; Park, Byung-seo; Kim, Dong-wook; Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2019.05a / pp.445-446 / 2019
  • Recently, attention has focused on mixed reality (MR) technology, which fuses virtual information into the real world to provide experiences that cannot be realized in reality alone. Mixed reality has the advantages of excellent interaction with reality and maximized immersion. In this paper, we propose a method to acquire a point cloud for the production of mixed reality content using a system of multiple depth and RGB cameras.
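
As a hedged illustration of the per-camera step implied here, the sketch below back-projects a depth map into a colored point cloud with the pinhole camera model. The intrinsics (fx, fy, cx, cy) are hypothetical; merging clouds from multiple calibrated cameras additionally requires each camera's extrinsic pose, which the abstract does not detail.

```python
import numpy as np

def depth_to_cloud(depth: np.ndarray, rgb: np.ndarray, fx, fy, cx, cy):
    """depth: (H, W) in meters, rgb: (H, W, 3); returns (N, 6) XYZRGB points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole back-projection
    y = (v - cy) * z / fy
    mask = z > 0                   # drop pixels with no depth measurement
    xyz = np.stack([x[mask], y[mask], z[mask]], axis=1)
    return np.hstack([xyz, rgb[mask].reshape(-1, 3)])

# Hypothetical intrinsics for a 640x480 depth sensor.
cloud = depth_to_cloud(np.random.rand(480, 640), np.random.rand(480, 640, 3),
                       fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```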

Improving Performance of File-referring Octree Based on Point Reallocation of Point Cloud File (포인트 클라우드 파일의 측점 재배치를 통한 파일 참조 옥트리의 성능 향상)

  • Han, Soohee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.5 / pp.437-442 / 2015
  • Recently, the size of point clouds has been increasing rapidly with the advancement of 3D terrestrial laser scanners. This study aimed to improve the file-referring octree introduced in a preceding study, which was designed to generate an octree and query points from a large point cloud gathered by 3D terrestrial laser scanners. To this end, every leaf node of the octree was designed to store only the file pointer of its first point, and the point cloud file was reconstructed so that points belonging to the same leaf node are stored sequentially. An octree was generated from a point cloud of about 300 million points, and the time to query proximate points within a given distance of a series of query points was measured. The present method performed better than the preceding one in every aspect of generating, storing, and restoring the octree, as well as in point querying and memory usage: query speed increased by a factor of 2 and memory efficiency by a factor of 4. It can also be concluded that an octree can be generated, and points can be queried, from a point cloud larger than main memory.
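
A minimal sketch of the leaf layout implied above, with hypothetical names and an assumed record format of three float64 coordinates per point: once the point cloud file is rewritten so that all points of a leaf are contiguous, each leaf only needs the file offset of its first point and a point count, and querying a leaf becomes one seek plus one sequential read.

```python
import struct

POINT_SIZE = 3 * 8  # assumption: each record is 3 float64 coordinates

class LeafNode:
    def __init__(self, first_offset: int, count: int):
        self.first_offset = first_offset  # byte offset of the leaf's first point
        self.count = count                # number of points stored contiguously

def read_leaf_points(f, leaf: LeafNode):
    """Return the leaf's points from the reordered point cloud file (opened in 'rb')."""
    f.seek(leaf.first_offset)             # single seek per leaf
    data = f.read(leaf.count * POINT_SIZE)
    return [struct.unpack("<3d", data[i:i + POINT_SIZE])
            for i in range(0, len(data), POINT_SIZE)]
```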

Development of a 3D earthwork model based on reverse engineering

  • Kim, Sung-Keun
    • International conference on construction engineering and project management / 2015.10a / pp.641-642 / 2015
  • Unlike other building processes, BIM for earthwork does not need a large variety of 3D model shapes; however, it requires a 3D model that can efficiently reflect the changing shape of the ground and provide soil-type-dependent workload calculation and equipment information for optimal management. Objects for earthwork have not yet been defined because current BIM systems do not provide them. BIM technology as commonly applied in the manufacturing sector uses real-object data obtained through 3D scanning to generate 3D parametric solid models. 3D scanning, which is used when no 3D models exist, has the advantage of generating parametric solid models rapidly. In this study, a method to generate 3D models for earthwork operations using reverse engineering is suggested: 3D scanning is used to create a point cloud of a construction site, the point cloud data are used to generate a surface model, and the surface model is then converted into a parametric model with 3D objects for earthwork.
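
The pipeline above (scan, point cloud, surface model, parametric model) is not tied to a specific tool in the abstract; as one hedged illustration, the point-cloud-to-surface step could be done with Open3D's Poisson surface reconstruction. The file names and octree depth are hypothetical, and the final conversion to a parametric BIM object would happen in a BIM authoring tool and is not shown.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("site_scan.ply")        # hypothetical laser-scan file
pcd.estimate_normals()                                # normals are required by Poisson
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("site_surface.ply", mesh)  # surface model for BIM import
```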

AR Anchor System Using Mobile Based 3D GNN Detection

  • Jeong, Chi-Seo; Kim, Jun-Sik; Kim, Dong-Kyun; Kwon, Soon-Chul; Jung, Kye-Dong
    • International Journal of Internet, Broadcasting and Communication / v.13 no.1 / pp.54-60 / 2021
  • AR (Augmented Reality) is a technology that adds virtual content to the real world and provides additional information about objects in real time through 3D content. In the past, a high-performance device was required to experience AR, but improvements in mobile performance and the addition of sensors such as ToF (Time-of-Flight) cameras have made AR easier to implement. The importance of mobile augmented reality is also growing with the commercialization of high-speed wireless Internet such as 5G. This paper therefore proposes a system that can provide AR services via a GNN (Graph Neural Network) using the cameras and sensors on mobile devices. The ToF sensor of the mobile device is used to capture depth maps, and a 3D point cloud is created from the RGB image to distinguish the specific colors of objects. The point cloud built from the RGB image and depth map is downsampled for smooth communication between the mobile device and the server. The point cloud sent to the server is used for 3D object detection; the detection process determines the class of each object and uses one point of its 3D bounding box as an anchor point. AR content is then provided through apps and the web using the class and anchor point of the detected object.
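
A sketch of the client-side downsampling step, assuming a simple voxel-grid scheme (the abstract does not specify the exact method): points falling into the same voxel are averaged before the cloud is sent to the detection server, and the voxel size is an illustrative parameter.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """points: (N, 3) cloud fused from the RGB image and ToF depth map."""
    keys = np.floor(points / voxel_size).astype(np.int64)    # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    out = np.zeros((len(counts), 3))
    for dim in range(3):                                      # average points per voxel
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out

small_cloud = voxel_downsample(np.random.rand(100_000, 3), voxel_size=0.05)
```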

3D Measurement Method Based on Point Cloud and Solid Model for Urban Single Trees (Point cloud와 solid model을 기반으로 한 단일수목 입체적 정량화기법 연구)

  • Park, Haekyung
    • Korean Journal of Remote Sensing / v.33 no.6_2 / pp.1139-1149 / 2017
  • Tree volume is an important input for various environmental analysis models. However, it is difficult to measure the small, fragmented green spaces of a city with economical equipment. In addition, trees change with the seasons, so equipment and quantification methods that are simpler than LiDAR are needed for high-frequency monitoring. In particular, the size of urban trees affects management costs, ecosystem services, and safety, so trees need to be managed and documented on an individual-tree basis. In this study, we acquire image data with a UAV (Unmanned Aerial Vehicle), which can be operated at low cost and high frequency, and quickly and easily quantify a single tree using SfM-MVS (Structure from Motion-Multi View Stereo); we also evaluate the impact of reducing the number of images on the point density of the point clouds generated by SfM-MVS and on the quantification of single trees. We use a watertight model to estimate the volume of a single tree and to shape it into a 3D structure, and compare it with the quantification results of three different types of 3D models. The analysis shows that a UAV, SfM-MVS, and a solid model can easily quantify and model a single tree at low cost and with high temporal resolution. This study covers only a single tree; to apply the approach at a larger scale, follow-up research is needed, such as fusion with other spatial information data, improvement of the quantification technique, and flight planning for larger green spaces.
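
A sketch of why a watertight ("solid") model yields a tree volume: for a closed triangle mesh, summing the signed volumes of the tetrahedra formed by each face and the origin gives the enclosed volume (divergence theorem). The mesh here would come from the SfM-MVS reconstruction; the vertex and face arrays are placeholders.

```python
import numpy as np

def watertight_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """vertices: (V, 3) coordinates, faces: (F, 3) vertex indices of a closed mesh."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    # Signed volume of each tetrahedron (origin, a, b, c); the signs cancel
    # everywhere except inside the closed surface.
    signed = np.einsum("ij,ij->i", a, np.cross(b, c)) / 6.0
    return abs(signed.sum())

# A properly triangulated unit cube would return a volume of ~1.0.
```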

3D Scanning Data Coordination and As-Built-BIM Construction Process Optimization - Utilization of Point Cloud Data for Structural Analysis

  • Kim, Tae Hyuk; Woo, Woontaek; Chung, Kwangryang
    • Architectural Research / v.21 no.4 / pp.111-116 / 2019
  • The premise of this research is the recent advancement of Building Information Modeling (BIM) technology and laser scanning technology (3D scanning). The purpose of the paper is to amplify the potential offered by the combination of BIM and Point Cloud Data (PCD) for structural analysis. Today, enormous amounts of construction site data can be categorized and quantified through BIM software. One of the extraordinary strengths of BIM software is its collaborative nature, which can combine different sources of data and knowledge. There are many ways to obtain construction site data, and 3D scanning is one of the most effective ways to collect close-to-reality site data. The objective of this paper is to emphasize the prospects of pre-scanning and post-scanning automation algorithms. The research aims to build on the recent development of 3D scanning and BIM technology toward Scan-to-BIM. The paper reviews the current issues of Scan-to-BIM tasks for achieving as-built BIM, suggests how they can be improved, and proposes a method of coordinating and utilizing PCD for construction and structural analysis during construction.

Automatic Building Modeling Method Using Planar Analysis of Point Clouds from Unmanned Aerial Vehicles (무인항공기에서 생성된 포인트 클라우드의 평면성 분석을 통한 자동 건물 모델 생성 기법)

  • Kim, Han-gyeol; Hwang, YunHyuk; Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.973-985 / 2019
  • In this paper, we propose a method to separate ground and building areas and to generate building models automatically through planarity analysis of a UAV (Unmanned Aerial Vehicle)-based point cloud. The proposed method consists of five steps. In the first step, planes are extracted from the input point cloud by analyzing its planarity. In the second step, the extracted planes are analyzed to find the plane corresponding to the ground surface, and the points belonging to that plane are removed from the point cloud. In the third step, an ortho-projected image is generated from the point cloud with the ground surface removed. In the fourth step, the outline of each object is extracted from the ortho-projected image, and non-building regions are removed using area and area-to-length-ratio criteria. Finally, the building's outer points are constructed using the ground height and the building height, and 3D building models are created. To verify the proposed method, we used point clouds generated from UAV images. Through experiments, we confirmed that 3D models of the buildings were generated automatically.
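
A sketch of the ground-removal idea in steps 1-2, using a basic RANSAC plane fit rather than the authors' own planarity analysis: the best-supported plane is taken as the ground (a reasonable assumption for a nadir UAV cloud) and its inliers are removed. The distance tolerance and iteration count are illustrative.

```python
import numpy as np

def ransac_plane(points: np.ndarray, iters: int = 200, tol: float = 0.2):
    """Return (normal, d, inlier_mask) of the best-supported plane n.x + d = 0."""
    rng = np.random.default_rng(0)
    best_count, best_mask, plane = -1, None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:          # skip degenerate (collinear) samples
            continue
        n = n / np.linalg.norm(n)
        mask = np.abs((points - p0) @ n) < tol
        if mask.sum() > best_count:
            best_count, best_mask, plane = mask.sum(), mask, (n, -n @ p0)
    return plane[0], plane[1], best_mask

cloud = np.random.rand(20_000, 3) * [100, 100, 5]   # placeholder UAV cloud (x, y, z)
n, d, ground = ransac_plane(cloud)
buildings_only = cloud[~ground]                     # point cloud with ground removed
```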

3D Point Cloud Reconstruction Technique from 2D Image Using Efficient Feature Map Extraction Network (효율적인 feature map 추출 네트워크를 이용한 2D 이미지에서의 3D 포인트 클라우드 재구축 기법)

  • Kim, Jeong-Yoon; Lee, Seung-Ho
    • Journal of IKEEE / v.26 no.3 / pp.408-415 / 2022
  • In this paper, we propose a technique for reconstructing a 3D point cloud from 2D images using an efficient feature-map extraction network. The originality of the proposed method is as follows. First, we use a new feature-map extraction network that is about 27% more memory-efficient than existing techniques. The proposed network does not reduce the feature-map size in the middle of the network, so important information required for 3D point cloud reconstruction is not lost; the memory increase caused by the non-reduced feature size is handled by reducing the number of channels and by configuring the network to be shallow but efficient. Second, by preserving the high-resolution features of the 2D image, accuracy can be improved over the conventional technique: the feature map extracted from the non-reduced image contains more detailed information than in existing methods, which further improves the reconstruction accuracy of the 3D point cloud. Third, we use a divergence loss that does not require shooting (camera) information. Requiring not only the 2D image but also the shooting angle for training means the dataset must contain this additional information, which makes dataset construction difficult; here, reconstruction accuracy is instead increased by increasing the diversity of information through randomness, without additional shooting information. To evaluate the proposed method objectively, we used the ShapeNet dataset and the same protocol as the compared papers: the proposed method achieves a CD of 5.87, an EMD of 5.81, and 2.9G FLOPs. Lower CD and EMD values mean the reconstructed 3D point cloud is closer to the original, and fewer FLOPs mean a lighter deep-learning network. The CD, EMD, and FLOPs results therefore show about a 27% improvement in memory and a 6.3% improvement in accuracy compared with the methods in other papers, demonstrating the objective performance of the proposed method.
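
For reference, a minimal sketch of the Chamfer Distance (CD) metric cited in the evaluation above: for each point in one cloud, take the squared distance to its nearest neighbor in the other cloud, and average both directions. This is the standard definition; the exact scaling used in the paper may differ.

```python
import torch

def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """p: (N, 3) reconstructed cloud, q: (M, 3) ground-truth cloud."""
    d = torch.cdist(p, q) ** 2                 # (N, M) pairwise squared distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

cd = chamfer_distance(torch.rand(1024, 3), torch.rand(1024, 3))
```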