• Title/Summary/Keyword: 3D Point Data

Gradient field based method for segmenting 3D point cloud (Gradient Field 기반 3D 포인트 클라우드 지면분할 기법)

  • Vu, Hoang; Chu, Phuong; Cho, Seoungjae; Zhang, Weiqiang; Wen, Mingyun; Sim, Sungdae; Kwak, Kiho; Cho, Kyungeun
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.733-734 / 2016
  • This study proposes a novel approach to ground segmentation of 3D point clouds that combines two techniques: gradient threshold segmentation and mean height evaluation. The acquired 3D point cloud is represented as a graph data structure by exploiting the structure of a 2D reference image. Ground regions near the sensor position are segmented with the gradient threshold technique, while in sparse regions ground and non-ground points are separated by mean height evaluation. The main contribution of this study is a new ground segmentation algorithm that works well with 3D point clouds from various environments, with processing times low enough for real-time operation.
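
As a rough illustration of the idea in this abstract, the sketch below segments a gridded height map with a gradient threshold and falls back to a mean-height test for cells without reliable gradients. The function name, grid representation, and thresholds are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def segment_ground(z_grid, cell_size=0.2, grad_thresh=0.15, height_tol=0.3):
    """Label cells of a gridded height map as ground / non-ground.

    z_grid : 2D array of per-cell heights (NaN where no return).
    A cell is ground if the local height gradient (rise over run between
    neighbouring cells) stays below grad_thresh.
    """
    dz_row = np.abs(np.diff(z_grid, axis=0, prepend=z_grid[:1]))
    dz_col = np.abs(np.diff(z_grid, axis=1, prepend=z_grid[:, :1]))
    grad = np.maximum(dz_row, dz_col) / cell_size
    ground = grad < grad_thresh            # NaN gradients compare as False

    # Sparse / missing cells: fall back to a mean-height test, echoing the
    # paper's "mean height evaluation" step for regions far from the sensor.
    mean_h = np.nanmean(np.where(ground, z_grid, np.nan))
    sparse = np.isnan(grad)
    ground[sparse] = np.abs(z_grid[sparse] - mean_h) < height_tol
    return ground
```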

The Fast 3D mesh generation method for a large scale of point data (대단위 점 데이터를 위한 빠른 삼차원 삼각망 생성방법)

  • Lee, Sang-Han; Park, Kang
    • Proceedings of the KSME Conference / 2000.11a / pp.705-711 / 2000
  • This paper presents a fast 3D mesh generation method that uses a surface-based approach with a stitching algorithm. The surface-based approach is chosen because volume-based methods relying on 3D Delaunay triangulation can hardly handle a large number of scanned points. To reduce processing time, the method also uses a stitching algorithm: the whole point data set is divided into several sections, mesh generation is performed on each section individually, and the resulting meshes are stitched into one mesh. Stitching prevents the processing time of the surface-based approach from growing exponentially as the number of points increases. The method works well with different types of scanned points: scattered points from a conventional 3D scanner and cross-sectional points from CT or MRI.
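
The sectioning idea can be sketched roughly as follows: the point set is split into overlapping slabs along one axis and each slab is triangulated independently, here with SciPy's 2D Delaunay triangulation as a stand-in for the paper's surface-based mesher; the overlap marks where a stitching step would merge the partial meshes. All names and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_in_sections(points, n_sections=4, overlap=0.05):
    """Triangulate a large point set slab by slab along the x-axis.

    points : (N, 3) array of scanned points.
    Each slab is meshed with a 2D Delaunay triangulation of its (x, y)
    coordinates; the small overlap between slabs provides the shared
    border triangles that a stitching step would merge into one mesh.
    Returns a list of (slab_points, triangle_indices) pairs.
    """
    x = points[:, 0]
    edges = np.linspace(x.min(), x.max(), n_sections + 1)
    span = x.max() - x.min()
    meshes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo - overlap * span) & (x <= hi + overlap * span)
        slab = points[mask]
        if len(slab) >= 3:
            tri = Delaunay(slab[:, :2])    # surface-based 2.5D triangulation
            meshes.append((slab, tri.simplices))
    return meshes

# Toy usage with random points standing in for a large scan.
parts = mesh_in_sections(np.random.rand(10000, 3))
print(len(parts), sum(len(t) for _, t in parts))
```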

Design of Spatial Relationship for 3D Geometry Model (3차원 기하모델에 대한 공간 관계 연산 설계)

  • Yi Dong-Heon; Hong Sung-Eon; Park Soo-Hong
    • Spatial Information Research / v.13 no.2 s.33 / pp.119-128 / 2005
  • Most spatial data handled in GIS is two-dimensional. Such two-dimensional data is produced either by selecting 2D aspects from 3D or by projecting 3D onto 2D space. During this conversion, data is abstracted and omitted without the user's intention. This unwanted data loss has disadvantages such as restricting the range of data applications and describing the real world inaccurately. Recently, three-dimensional data has attracted wide interest and demand; one example is a database management system that can store and manage three-dimensional spatial data. However, such a DBMS does not yet support spatial queries, which are the essence of a database management system, so various studies are needed in this field. This research designs the spatial relationships defined in the spatial database standard using a three-dimensional spatial model. The spatial data model used in this research is the one defined by OGC for GML3, and the design tool is DE-9IM, which is based on point-set topology and is regarded as the best method for topological operations.
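
For readers unfamiliar with DE-9IM, the sketch below shows how a DE-9IM matrix is computed and tested against a relationship pattern, using Shapely on 2D polygons; the paper's contribution is designing such operators for 3D geometry, which this 2D stand-in does not reproduce.

```python
from shapely.geometry import Polygon

# Two footprint polygons standing in for the projections of 3D solids.
a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
b = Polygon([(1, 1), (2, 1), (2, 2), (1, 2)])

# DE-9IM matrix: intersection dimensions of interior/boundary/exterior pairs.
matrix = a.relate(b)              # here '212FF1FF2'
print("DE-9IM:", matrix)

def matches(matrix, pattern):
    """Check a DE-9IM string against a pattern ('T' = non-empty, '*' = any)."""
    for m, p in zip(matrix, pattern):
        if p == '*':
            continue
        if p == 'T' and m == 'F':
            return False
        if p in '012F' and m != p:
            return False
    return True

# 'contains' corresponds to the DE-9IM pattern T*****FF*
print("a contains b:", matches(matrix, 'T*****FF*'))
```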

Effective Multi-Modal Feature Fusion for 3D Semantic Segmentation with Multi-View Images (멀티-뷰 영상들을 활용하는 3차원 의미적 분할을 위한 효과적인 멀티-모달 특징 융합)

  • Hye-Lim Bae; Incheol Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.12 / pp.505-518 / 2023
  • 3D point cloud semantic segmentation is a computer vision task that divides a point cloud into different objects and regions by predicting the class label of each point. Existing 3D semantic segmentation models are limited in performing sufficient fusion of multi-modal features while preserving the characteristics of both the 2D visual features extracted from RGB images and the 3D geometric features extracted from the point cloud. Therefore, in this paper, we propose MMCA-Net, a novel 3D semantic segmentation model using 2D-3D multi-modal features. The proposed model effectively fuses the two heterogeneous modalities, 2D visual features and 3D geometric features, by using an intermediate fusion strategy and a multi-modal cross-attention-based fusion operation. The proposed model also extracts context-rich 3D geometric features from input point clouds consisting of irregularly distributed points by adopting PTv2 as the 3D geometric encoder. In this paper, we conducted both quantitative and qualitative experiments on the benchmark dataset ScanNetv2 to analyze the performance of the proposed model. In terms of the mIoU metric, the proposed model showed a 9.2% improvement over the PTv2 model using only 3D geometric features and a 12.12% improvement over the MVPNet model using 2D-3D multi-modal features. These results demonstrate the effectiveness and usefulness of the proposed model.
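
A minimal sketch of the cross-attention fusion pattern described here, written with PyTorch's nn.MultiheadAttention: 3D point features attend to 2D visual features and are fused through a residual connection. Layer sizes, names, and the exact fusion placement are assumptions, not the MMCA-Net implementation.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Fuse per-point 3D features with multi-view 2D features via cross attention.

    The 3D point features act as queries; the projected 2D visual features act
    as keys/values, so each point attends to the image evidence relevant to it.
    (Illustrative layer only; dimensions and names are assumptions.)
    """
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat3d, feat2d):
        fused, _ = self.attn(query=feat3d, key=feat2d, value=feat2d)
        return self.norm(feat3d + fused)   # residual keeps the geometric features

# Toy shapes: batch of 2 scenes, 1024 points, 4096 pixel features, 256 channels.
fusion = CrossModalFusion()
out = fusion(torch.randn(2, 1024, 256), torch.randn(2, 4096, 256))
print(out.shape)   # torch.Size([2, 1024, 256])
```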

Accuracy Analysis of Satellite Imagery in Road Construction Site Using UAV (도로 토목 공사 현장에서 UAV를 활용한 위성 영상 지도의 정확도 분석)

  • Shin, Seung-Min; Ban, Chang-Woo
    • Journal of the Korean Society of Industry Convergence / v.24 no.6_2 / pp.753-762 / 2021
  • Google provides mapping services based on satellite imagery that are widely used in research. Over roughly the last 20 years, research and business applications using drones have also expanded, and Pix4D is widely used to build 3D information models from drone imagery. This study compared distance errors at a road construction site between the DSM data from Google Earth and from Pix4D in order to assess the reliability of distance measurements made in Google Earth. A DTM with a resolution of 3.08 cm/pixel was obtained by matching 49,666 key points per image. Lengths and altitudes were then measured in Pix4D and Google Earth and compared using the obtained point cloud data. The average distance error relative to the Pix4D data was 0.68 m, confirming that the error was relatively small. When altitudes from Google Earth and Pix4D were compared, however, the maximum error of the satellite-based measurements was 83.214 m, which is quite large and indicates poor accuracy. This confirms that analyzing and acquiring data for road construction sites with Google Earth alone is difficult and that point cloud data acquired with drones is necessary.
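
The error comparison reported here boils down to simple per-feature differencing; the snippet below illustrates that computation with made-up check distances, not the study's data.

```python
import numpy as np

# Hypothetical check distances (m) measured on the same features in the
# drone-derived model (Pix4D) and in Google Earth; the values are invented.
pix4d_len  = np.array([120.45, 85.10, 240.30, 60.22])
google_len = np.array([121.00, 85.90, 241.10, 60.95])

abs_err = np.abs(google_len - pix4d_len)
print("per-feature error (m):", abs_err)
print("mean error (m):", abs_err.mean())
```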

A Study on the recognition of moving objects by segmenting 2D Laser Scanner points (2D Laser Scanner 포인트의 자동 분리를 통한 이동체의 구분에 관한 연구)

  • Lee Sang-Yeop; Han Soo-Hee; Yu Ki-Yun
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2006.04a / pp.177-180 / 2006
  • In this paper we propose a method for automatically segmenting points acquired by a 2D laser scanner in order to recognize moving objects. Laser scanners have recently drawn attention as a new tool for close-range 3D modeling, but the majority of research has focused on precise 3D modeling of static objects using expensive 3D laser scanners. A 2D laser scanner is relatively cheap, can obtain 2D coordinate information on the surface of a moving object, and can be used as a 3D laser scanner by rotating the system body. For these reasons, research is in progress on applying 2D laser scanners to robot control systems or to the detection of objects moving along linear trajectories. In our study, we automatically segmented 2D laser scanner point data so that each object passing through the scanned section could be recognized.
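
A common way to realize the segmentation described in this abstract is to split a single scan line wherever consecutive returns jump apart; the sketch below does exactly that. The jump threshold and function name are assumptions, not the authors' algorithm.

```python
import numpy as np

def segment_scan(ranges, angles, jump=0.5):
    """Split one 2D laser scan into point clusters at range discontinuities.

    ranges, angles : polar measurements from a single scan line.
    A new segment starts wherever consecutive points lie farther apart than
    `jump` metres; each cluster then corresponds to one object (or one moving
    object when tracked across consecutive scans).
    """
    xy = np.column_stack((ranges * np.cos(angles), ranges * np.sin(angles)))
    gaps = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    labels = np.concatenate(([0], np.cumsum(gaps > jump)))
    return xy, labels

# Toy scan: a 2 m object in front of an 8 m wall gives three segments.
ang = np.linspace(-np.pi / 2, np.pi / 2, 181)
rng = np.where(np.abs(ang) < 0.2, 2.0, 8.0)
pts, lbl = segment_scan(rng, ang)
print(np.unique(lbl))   # [0 1 2]
```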

2D Interpolation of 3D Points using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트의 2차원 보간 방안)

  • Hwang, Yonghae; Kim, Junsik; Kim, Kyuheon
    • Journal of Broadcast Engineering / v.26 no.6 / pp.692-703 / 2021
  • Recently, with the development of computer graphics technology, research on representing real objects as more realistic virtual graphics has been actively conducted. A point cloud represents a 3D object with numerous points carrying 3D spatial coordinates and color information, and point clouds require huge storage and high-performance computing devices to support various services. Video-based Point Cloud Compression (V-PCC), currently being standardized by the international standards organization MPEG, is a projection-based method that projects the point cloud onto 2D planes and then compresses them with 2D video codecs. V-PCC compresses a point cloud object using 2D images such as the occupancy map, geometry image, and attribute image, together with auxiliary information that describes the relationship between the 2D planes and 3D space. To increase the density of a point cloud or enlarge an object, 3D computation is generally used, but it is complicated and time-consuming, and it is difficult to determine the correct location of a new point. This paper proposes a method that generates additional points at more accurate locations with less computation by applying 2D interpolation to the images onto which the point cloud is projected in V-PCC.
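
A minimal sketch of image-domain densification in the spirit of this proposal: the patch geometry (depth) image is upsampled by 2D interpolation and valid pixels are read back as new samples. This is an illustrative assumption of how such interpolation could look, not the paper's method nor the V-PCC reference software.

```python
import numpy as np
from scipy.ndimage import zoom

def densify_patch(geometry, occupancy, scale=2):
    """Interpolate a V-PCC-style geometry (depth) image to add new points.

    geometry  : 2D depth values of one projected patch.
    occupancy : binary map marking the valid pixels of the patch.
    New depth samples are created by 2D bilinear interpolation (order=1),
    i.e. in the image domain rather than by explicit 3D computation.
    """
    geo_up = zoom(geometry.astype(float), scale, order=1)
    occ_up = zoom(occupancy.astype(float), scale, order=0) > 0.5
    v, u = np.nonzero(occ_up)
    depth = geo_up[v, u]
    # (u, v, depth) would then be mapped back to 3D with the patch's
    # projection parameters from the auxiliary information.
    return np.column_stack((u, v, depth))
```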

An Error Analysis of the 3D Automatic Face Recognition Apparatus (3D-AFRA) Hardware (3차원 안면자동분석 사상체질진단기의 Hardware 오차분석)

  • Kwak, Chang-Kyu; Seok, Jae-Hwa; Song, Jung-Hoon; Kim, Hyun-Jin; Hwang, Min-Woo; Yoo, Jung-Hee; Kho, Byung-Hee; Kim, Jong-Won; Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine / v.19 no.2 / pp.22-29 / 2007
  • 1. Objectives: Sasang Constitutional Medicine, part of the traditional Korean medical lore, treats illness through a constitutional typing system that categorizes people into four constitutional types. Among the important criteria for differentiating the constitutional types are external appearance, inner state of mind, and pathological patterns. We have been developing a 3D Automatic Face Recognition Apparatus (3D-AFRA) to evaluate external appearance more objectively. The apparatus provides a 3D image and numerical data on facial configuration, and this study aims to evaluate the mechanical accuracy of the 3D-AFRA hardware. 2. Methods: Several objects of different shapes (cube, cylinder, cone, pyramid) were each scanned 10 times with the 3D-AFRA. The results were compared and analyzed against data acquired with a laser scanner known for its high accuracy. Errors were analyzed at each grid point of the scanned contours using Rapidform2006, a 3D scanning software package that collects grid-point data on the contours of products and product parts from 3D scanners and other 3D measuring devices and uses them to reconstruct highly precise polygon and curvature models. 3. Results and Conclusions: The average error was 0.22 mm for the cube, 0.22 mm for the cylinder, 0.125 mm for the cone, and 0.172 mm for the pyramid. Visual comparisons of the error distributions measured with Rapidform2006 are shown in Fig. 3 to Fig. 6, where blue indicates smaller errors and red indicates greater errors. The protruding corners of the cube show greater errors, the cylinder shows greater errors along its edges, the pyramid shows greater errors on the base surface and around the vertex, and the cone also shows greater errors around its protruding edge.
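
The per-point error analysis described here can be approximated by nearest-neighbour distances between the test scan and the reference scan, as in the sketch below; this is a simplified stand-in for the Rapidform-based comparison, and the synthetic data and noise level are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def scan_error(test_points, reference_points):
    """Per-point error between a test scan and a reference scan.

    For every scanned point, the distance to its nearest reference point is
    taken as the local error; mean and maximum summarize the accuracy.
    """
    tree = cKDTree(reference_points)
    dist, _ = tree.query(test_points)
    return dist.mean(), dist.max()

# Synthetic example: a unit cube sampled twice with ~0.2 mm noise.
ref = np.random.rand(5000, 3)
test = ref + np.random.normal(scale=0.0002, size=ref.shape)
print(scan_error(test, ref))
```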

Construction of Tree Management Information Using Point Cloud Data (포인트클라우드 데이터를 이용한 수목관리정보 구축 방안)

  • Lee, Keun-Wang; Park, Joon-Kyu
    • Journal of Digital Convergence / v.18 no.11 / pp.427-432 / 2020
  • In order to establish an effective forest management plan, it is necessary to survey tree management information such as tree height and DBH (diameter at breast height). However, research on converging and applying data acquisition technologies to improve the efficiency of existing forest survey methods is insufficient. Therefore, in this study, tree management information was constructed and analyzed using point cloud data acquired with 3D scanners. Data on the study site were acquired using fixed and mobile 3D scanners, and the efficiency of the mobile 3D scanner was demonstrated through a comparison of working hours. In addition, tree management information for per-object management was constructed by classifying the vegetation into individual objects in the point cloud data and deriving DBH and height for each. In an accuracy evaluation against the conventional measurement method, the difference in tree height was 0.02-0.09 m and in DBH 0.01-0.04 m. If information on the location of each object's vegetation and crown is constructed through additional research, the efficiency of work related to building forest management information can be greatly increased.
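
Tree height and DBH can be derived from a segmented stem point cloud roughly as follows: height from the vertical extent of the points, DBH from a circle fit to a slice at breast height. The Kasa-style least-squares fit below is an illustrative assumption, not the workflow used in the study.

```python
import numpy as np

def tree_metrics(stem_points, breast_height=1.3, slice_width=0.1):
    """Estimate tree height and DBH from one tree's point cloud.

    Height is the vertical extent of the points; DBH comes from an algebraic
    (Kasa) circle fit to the stem slice around breast height.
    """
    z = stem_points[:, 2]
    height = z.max() - z.min()

    ring = stem_points[np.abs(z - (z.min() + breast_height)) < slice_width / 2]
    x, y = ring[:, 0], ring[:, 1]
    # Circle model x^2 + y^2 = 2*cx*x + 2*cy*y + c, solved by least squares.
    A = np.column_stack((2 * x, 2 * y, np.ones(len(ring))))
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    dbh = 2 * np.sqrt(c + cx ** 2 + cy ** 2)
    return height, dbh
```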

A Study on Automatic Modeling of Pipelines Connection Using Point Cloud (포인트 클라우드를 이용한 파이프라인 연결 자동 모델링에 관한 연구)

  • Lee, Jae Won; Patil, Ashok Kumar; Holi, Pavitra; Chai, Young Ho
    • Korean Journal of Computational Design and Engineering / v.21 no.3 / pp.341-352 / 2016
  • Manual 3D pipeline modeling from LiDAR-scanned point cloud data is a laborious and time-consuming process. This paper presents a method to extract the pipe, elbow, and branch information that is essential to the automatic modeling of pipeline connections. The pipe geometry is estimated from the point cloud data through the Hough transform, and the elbow position is calculated as the intersection of the medial axes of the nearest pair of pipes to be assembled. A branch is likewise created for a pair of pipe segments by estimating virtual points on one segment and checking for a feasible intersection with the other pipe's endpoint within a pre-defined distance range. As a result of the automatic modeling, a complete 3D pipeline model is generated by connecting the extracted pipes, elbows, and branches.
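
The elbow placement described here relies on intersecting the medial axes of two neighbouring pipes; the sketch below estimates that intersection as the midpoint of the closest points of two 3D lines. The function name and inputs are assumptions made for illustration.

```python
import numpy as np

def elbow_position(p1, d1, p2, d2):
    """Estimate an elbow centre as the (pseudo-)intersection of two pipe axes.

    p1, d1 / p2, d2 : a point on, and the direction of, each extracted pipe's
    medial axis. The closest points of the two axes are found by solving a
    small least-squares system; their midpoint is taken as the elbow centre.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve [d1 -d2] [t, s]^T ~ (p2 - p1) in the least-squares sense.
    A = np.column_stack((d1, -d2))
    t, s = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
    c1, c2 = p1 + t * d1, p2 + s * d2
    return (c1 + c2) / 2

# Two perpendicular pipe axes meeting at the origin.
print(elbow_position(np.array([0., 0., 5.]), np.array([0., 0., -1.]),
                     np.array([5., 0., 0.]), np.array([-1., 0., 0.])))
```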