• Title/Summary/Keyword: 3D Point Data


A Study on the Quality of Photometric Scanning Under Variable Illumination Conditions

  • Jeon, Hyoungjoon;Hafeez, Jahanzeb;Hamacher, Alaric;Lee, Seunghyun;Kwon, Soonchul
    • International journal of advanced smart convergence
    • /
    • v.6 no.4
    • /
    • pp.88-95
    • /
    • 2017
  • Conventional scanning methods are based on laser scanners and depth cameras, which are costly and require complicated post-processing. In the photometric scanning method, by contrast, 3D modeling data is acquired from multi-view images, which is advantageous compared to the other methods. The quality of a photometric 3D model depends on the environmental conditions and the object's characteristics, and it is generally lower than that of the other methods, so various approaches for improving photometric scanning quality are being studied. In this paper, we investigate the effect of illumination conditions on the quality of photometric scanning data. To do this, a 'Moai' statue is 3D printed with a size of 600 (H) × 1,000 (V) × 600 (D). The printed object is photographed under hard-light and soft-light environments. We obtained the modeling data by the photometric scanning method and compared it with the ground truth of the 'Moai'. The 'Point-to-Point' method was used to analyze the modeling data with the open-source tool 'CloudCompare'. The comparison confirmed that the standard deviation of the 3D model generated under soft light is 0.090686, while that of the 3D model generated under hard light is 0.039954. This shows that higher-quality 3D modeling data can be obtained in a hard-light environment. The results of this paper are expected to be applied to the acquisition of high-quality data.
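The 'Point-to-Point' comparison the abstract describes can be sketched as follows. This is a minimal brute-force version (CloudCompare itself uses spatial indexing for speed), and the toy clouds are invented for illustration:

```python
import numpy as np

def point_to_point_errors(scan, reference):
    """For each scanned point, distance to its nearest reference point
    (a brute-force version of CloudCompare's 'Point-to-Point' metric)."""
    # Pairwise distance matrix of shape (n_scan, n_ref).
    diff = scan[:, None, :] - reference[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    return dists.min(axis=1)

# Toy example: a small reference point set and a uniformly offset "scan".
reference = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
scan = reference + 0.01  # 1 cm offset on every axis
errors = point_to_point_errors(scan, reference)
mean_err, std_err = errors.mean(), errors.std()
```

Because every scan point is shifted identically, the mean error is 0.01·√3 and the standard deviation is essentially zero; on real scans the standard deviation is the quality figure the paper reports.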

A Study of 3D Modeling of Compressed Urban LiDAR Data Using VRML (VRML을 이용한 도심지역 LiDAR 압축자료의 3차원 표현)

  • Jang, Young-Woon;Choi, Yun-Woong;Cho, Gi-Sung
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.19 no.2
    • /
    • pp.3-8
    • /
    • 2011
  • Recently, demand has been growing for enterprise map services and portal-site services that provide a 3D virtual city model to public users. As 3D information is served over the web or on mobile devices, the accuracy of the data, the transfer rate, and updates reflecting the lapse of time are considered increasingly important factors. With the latest technology, various 3D data can be viewed through the web. VRML has progressed actively because it can provide a virtual display of the world and full interaction on the web, and it runs with a simple plug-in installed at no extra cost. A LiDAR system can obtain spatial data easily and accurately, as supported by numerous studies and applications. In general, however, LiDAR data is obtained as an irregular point cloud, so if the data is used without conversion, a powerful processor is needed to render the 3D point data, and the data volume grows. This study expresses urban LiDAR data in 3D from 2D raster data produced by a compression algorithm, which addresses the problems of large storage space and processing load. For the 3D expression, an algorithm was developed that converts the compressed LiDAR data into code suited to VRML. Finally, an urban area was expressed in 3D, with the ground and features expressed separately.
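The abstract does not detail the compression step, but one common way to turn an irregular LiDAR point cloud into 2D raster data is to bin points into grid cells, sketched here with a hypothetical cell size and sample points:

```python
import numpy as np

def points_to_raster(points, cell=1.0):
    """Bin irregular (x, y, z) LiDAR points into a regular 2D height
    raster, keeping the maximum z per cell (a simple surface model)."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / cell).astype(int)
    shape = idx.max(axis=0) + 1
    raster = np.full(shape, np.nan)  # NaN marks empty cells
    for (i, j), z in zip(idx, points[:, 2]):
        if np.isnan(raster[i, j]) or z > raster[i, j]:
            raster[i, j] = z
    return raster

pts = np.array([[0.2, 0.3, 5.0], [0.8, 0.1, 7.0], [3.5, 2.2, 9.0]])
dsm = points_to_raster(pts, cell=1.0)
```

The resulting raster is regular, so it can be stored compactly and streamed to a VRML elevation-grid style representation far more cheaply than the raw point cloud.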

Gradient field based method for segmenting 3D point cloud (Gradient Field 기반 3D 포인트 클라우드 지면분할 기법)

  • Vu, Hoang;Chu, Phuong;Cho, Seoungjae;Zhang, Weiqiang;Wen, Mingyun;Sim, Sungdae;Kwak, Kiho;Cho, Kyungeun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2016.10a
    • /
    • pp.733-734
    • /
    • 2016
  • This study proposes a novel approach to ground segmentation of a 3D point cloud. We combine two techniques: gradient-threshold segmentation and mean-height evaluation. The acquired 3D point cloud is represented as a graph data structure by exploiting the structure of a 2D reference image. The ground parts near the position of the sensor are segmented with the gradient-threshold technique. For sparse regions, we separate ground from non-ground using a technique called mean-height evaluation. The main contribution of this study is a new ground segmentation algorithm that works well with 3D point clouds from various environments. The processing time is acceptable and allows the algorithm to run in real time.
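The gradient-threshold part of the approach can be sketched roughly as below. The grid representation, the threshold value, and the use of `np.gradient` are my assumptions, and the paper's mean-height evaluation for sparse regions is omitted:

```python
import numpy as np

def ground_mask(height_grid, grad_threshold=0.15):
    """Label grid cells as ground where the local height gradient is
    small -- a simplified gradient-threshold segmentation. Note the flat
    top of an obstacle also has a small gradient; handling such cases is
    what the paper's mean-height evaluation is for."""
    gy, gx = np.gradient(height_grid)
    slope = np.hypot(gx, gy)
    return slope < grad_threshold

# Flat ground with a 2 m tall box sitting on it.
grid = np.zeros((10, 10))
grid[4:7, 4:7] = 2.0
mask = ground_mask(grid)
```

Cells on flat terrain pass the threshold, while cells on the box edges, where height changes abruptly, are marked non-ground.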

The Fast 3D mesh generation method for a large scale of point data (대단위 점 데이터를 위한 빠른 삼차원 삼각망 생성방법)

  • Lee, Sang-Han;Park, Kang
    • Proceedings of the KSME Conference
    • /
    • 2000.11a
    • /
    • pp.705-711
    • /
    • 2000
  • This paper presents a fast 3D mesh generation method that uses a surface-based method with a stitching algorithm. The surface-based method is used because the volume-based method, which relies on 3D Delaunay triangulation, can hardly deal with a large scale of scanned points. To reduce processing time, the method also uses a stitching algorithm: after dividing the whole point data into several sections and generating a mesh for each section individually, the meshes from the sections are stitched into one mesh. The stitching step prevents the processing time of the surface-based method from increasing exponentially as the number of points increases. The method works well with different types of scanned points: scattered points from a conventional 3D scanner and cross-sectional points from CT or MRI.
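The divide-and-stitch idea can be illustrated on a regular point grid. This sketch is not the paper's algorithm (which handles scattered points), only a minimal demonstration of meshing sections independently and joining them at shared boundary vertices:

```python
def mesh_grid_section(rows, col_lo, col_hi, cols_total):
    """Triangulate one vertical section [col_lo, col_hi) of a
    rows x cols_total point grid: two triangles per cell, with global
    vertex ids so sections can share their boundary column."""
    tris = []
    for r in range(rows - 1):
        for c in range(col_lo, col_hi - 1):
            a = r * cols_total + c
            tris += [(a, a + 1, a + cols_total),
                     (a + 1, a + cols_total + 1, a + cols_total)]
    return tris

def stitched_mesh(rows, cols, n_sections=2):
    """Mesh each section separately; neighbouring sections share a
    boundary column of vertices, so the parts join into one mesh."""
    bounds = [round(i * (cols - 1) / n_sections) for i in range(n_sections + 1)]
    mesh = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mesh += mesh_grid_section(rows, lo, hi + 1, cols)
    return mesh

mesh = stitched_mesh(4, 5, n_sections=2)
```

Because each section meshes only its own cells, the per-section cost stays small, and the combined mesh covers every cell exactly once.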


Design of Spatial Relationship for 3D Geometry Model (3차원 기하모델에 대한 공간 관계 연산 설계)

  • Yi Dong-Heon;Hong Sung-Eon;Park Soo-Hong
    • Spatial Information Research
    • /
    • v.13 no.2 s.33
    • /
    • pp.119-128
    • /
    • 2005
  • Most spatial data handled in GIS is two-dimensional. This two-dimensional data is established by selecting 2D aspects from 3D, or by projecting 3D onto 2D space. During this conversion, data are abstracted and omitted without the user's intention. This unwanted data loss causes disadvantages such as restricting the range of data application and describing the real world inaccurately. Recently, three-dimensional data has attracted wide interest and demand. One example is a database management system that can store and manage three-dimensional spatial data. However, such a DBMS does not support the spatial query that is the essence of a database management system, so various studies are needed in this field. This research designs the spatial relationship operations defined in the spatial database standard using a three-dimensional spatial model. The spatial data model used in this research is the one defined in OGC for GMS3, and the design tool is DE-9IM, which is based on point-set topology and known as the best method for topological operations.
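DE-9IM expresses a topological relationship as a 3×3 matrix of intersection dimensions (interior/boundary/exterior of geometry A against those of B), and a named predicate is a pattern over its nine entries. A minimal sketch of that pattern-matching logic, with an illustrative matrix, assuming standard OGC pattern symbols:

```python
def de9im_matches(matrix, pattern):
    """Check a DE-9IM intersection matrix (9 chars, row-major) against a
    predicate pattern. Symbols: 'T' = any non-empty intersection
    (dimension 0/1/2), 'F' = empty, '*' = don't care, '0'/'1'/'2' =
    exact dimension required."""
    for m, p in zip(matrix, pattern):
        if p == '*':
            continue
        if p == 'T' and m in '012':
            continue
        if p == 'F' and m == 'F':
            continue
        if p in '012' and m == p:
            continue
        return False
    return True

# 'within' in OGC terms: interiors intersect, and neither the interior
# nor the boundary of A touches the exterior of B.
WITHIN = 'T*F**F***'
# Illustrative matrix for a small polygon strictly inside a larger one.
matrix = '2FF1FF212'
```

Extending this scheme to 3D geometry, where intersections can also have dimension 3, is exactly the kind of design question the paper addresses.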


Effective Multi-Modal Feature Fusion for 3D Semantic Segmentation with Multi-View Images (멀티-뷰 영상들을 활용하는 3차원 의미적 분할을 위한 효과적인 멀티-모달 특징 융합)

  • Hye-Lim Bae;Incheol Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.12
    • /
    • pp.505-518
    • /
    • 2023
  • 3D point cloud semantic segmentation is a computer vision task that divides a point cloud into different objects and regions by predicting the class label of each point. Existing 3D semantic segmentation models have limitations in performing sufficient fusion of multi-modal features while preserving both the 2D visual features extracted from RGB images and the 3D geometric features extracted from the point cloud. Therefore, in this paper, we propose MMCA-Net, a novel 3D semantic segmentation model using 2D-3D multi-modal features. The proposed model effectively fuses the two heterogeneous feature types, 2D visual features and 3D geometric features, by using an intermediate fusion strategy and a multi-modal cross-attention-based fusion operation. The proposed model also extracts context-rich 3D geometric features from an input point cloud of irregularly distributed points by adopting PTv2 as the 3D geometric encoder. We conducted both quantitative and qualitative experiments on the benchmark dataset ScanNetv2 to analyze the performance of the proposed model. In terms of mIoU, the proposed model showed a 9.2% performance improvement over the PTv2 model, which uses only 3D geometric features, and a 12.12% improvement over the MVPNet model, which uses 2D-3D multi-modal features. These results demonstrate the effectiveness and usefulness of the proposed model.
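The abstract does not give MMCA-Net's exact fusion operation, but a generic scaled dot-product cross attention between 3D point tokens and 2D image tokens, with the learned projection matrices omitted for brevity, might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries_3d, keys_2d, values_2d):
    """Fuse 3D point features with 2D image features: each 3D token
    attends over all 2D tokens (scaled dot-product cross attention)."""
    d = queries_3d.shape[-1]
    scores = queries_3d @ keys_2d.T / np.sqrt(d)   # (n_3d, n_2d)
    return softmax(scores, axis=-1) @ values_2d    # (n_3d, d)

rng = np.random.default_rng(0)
feat_3d = rng.normal(size=(100, 32))  # e.g. per-point geometric features
feat_2d = rng.normal(size=(50, 32))   # e.g. multi-view image features
fused = cross_attention(feat_3d, feat_2d, feat_2d)
```

Each output row is a 2D-feature summary weighted by how relevant each image token is to that 3D point, which is what lets the 2D modality inform per-point labels.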

Accuracy Analysis of Satellite Imagery in Road Construction Site Using UAV (도로 토목 공사 현장에서 UAV를 활용한 위성 영상 지도의 정확도 분석)

  • Shin, Seung-Min;Ban, Chang-Woo
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.24 no.6_2
    • /
    • pp.753-762
    • /
    • 2021
  • Google provides mapping services using satellite imagery, and these are widely used in research. For about 20 years, research and business using drones have been expanding, and Pix4D is widely used to create 3D information models from drone imagery. This study compared distance errors between measurements of a road construction site taken from Google Earth and from Pix4D DSM data, in order to assess the reliability of distance measurement in Google Earth. A DTM result of 3.08 cm/pixel was obtained by matching 49,666 key points per image. Using the obtained point cloud data (PCD), lengths and altitudes from Pix4D and Google Earth were measured and compared. The average distance error relative to the Pix4D data was 0.68 m, confirming that the error was relatively small. Comparing the altitudes measured in Google Earth and Pix4D, however, the maximum error of the satellite-image measurements was 83.214 m, a considerably large and inaccurate result. This confirmed that analyzing and acquiring data for road construction sites with Google Earth is difficult, and that point cloud data acquired by drones is necessary.

2D Interpolation of 3D Points using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트의 2차원 보간 방안)

  • Hwang, Yonghae;Kim, Junsik;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.692-703
    • /
    • 2021
  • Recently, with the development of computer graphics technology, research on expressing real objects as more realistic virtual graphics has been actively conducted. A point cloud represents a 3D object with numerous points, each carrying 3D spatial coordinates and color information, and point clouds require huge data storage and high-performance computing devices to provide various services. Video-based Point Cloud Compression (V-PCC) is being standardized by the international standards organization MPEG; it is a projection-based method that projects the point cloud onto 2D planes and then compresses them with 2D video codecs. V-PCC compresses point cloud objects using 2D images such as the occupancy map, geometry image, and attribute image, together with auxiliary information that encodes the relationship between the 2D plane and 3D space. When increasing the density of a point cloud or expanding an object, 3D computation is generally used, but it is complicated, time-consuming, and makes it difficult to determine the correct location of a new point. This paper proposes a method to generate additional points at more accurate locations with less computation by applying 2D interpolation to the image onto which the point cloud is projected in V-PCC.
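The proposed 2D interpolation on a projected geometry image can be approximated with plain bilinear upsampling. The image values and upsampling factor below are illustrative, not the paper's exact scheme:

```python
import numpy as np

def upsample_bilinear(img, factor=2):
    """Densify a projected geometry image by 2D bilinear interpolation;
    each new pixel can then be back-projected into an extra 3D point."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

depth = np.array([[0.0, 1.0],
                  [2.0, 3.0]])  # tiny 2x2 geometry (depth) patch
dense = upsample_bilinear(depth, factor=2)
```

Working in the 2D projection avoids the 3D neighborhood searches that make direct point-cloud densification expensive, which is the point of the paper's approach.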

A Study on the recognition of moving objects by segmenting 2D Laser Scanner points (2D Laser Scanner 포인트의 자동 분리를 통한 이동체의 구분에 관한 연구)

  • Lee Sang-Yeop;Han Soo-Hee;Yu Ki-Yun
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2006.04a
    • /
    • pp.177-180
    • /
    • 2006
  • In this paper we propose a method for automatically segmenting points acquired by a 2D laser scanner in order to recognize moving objects. Recently, laser scanners have drawn attention as a new tool in the field of close-range 3D modeling, but the majority of research has focused on precise 3D modeling of static objects using expensive 3D laser scanners. A 2D laser scanner is relatively cheap and can obtain 2D coordinate information on a moving object's surface, or can serve as a 3D laser scanner when the system body is rotated. For these reasons, some research is in progress on adopting 2D laser scanners for robot control systems or for detecting objects moving along a linear trajectory. In our study, we automatically segmented the 2D laser scanner point data so that each object passing through a scanned section could be recognized.
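A simple version of segmenting one 2D scan into objects is to split the sweep wherever the measured range jumps between adjacent beams. The threshold and scan values below are invented for illustration; the paper's actual segmentation criterion is not given in the abstract:

```python
def segment_scan(ranges, jump_threshold=0.5):
    """Split one 2D laser-scanner sweep into point segments wherever the
    range jumps between neighbouring beams -- each segment is a
    candidate object crossing the scanned section."""
    segments, current = [], [0]
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump_threshold:
            segments.append(current)  # discontinuity: close the segment
            current = []
        current.append(i)
    segments.append(current)
    return segments

# A wall at ~5 m with a person (~2 m away) passing through the scan plane.
scan = [5.0, 5.0, 5.1, 2.0, 2.1, 2.0, 5.0, 5.0]
parts = segment_scan(scan)
```

Here the sweep splits into three segments (wall, person, wall); tracking such segments across successive sweeps is what allows moving objects to be recognized.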


An Error Analysis of the 3D Automatic Face Recognition Apparatus (3D-AFRA) Hardware (3차원 안면자동분석 사상체질진단기의 Hardware 오차분석)

  • Kwak, Chang-Kyu;Seok, Jae-Hwa;Song, Jung-Hoon;Kim, Hyun-Jin;Hwang, Min-Woo;Yoo, Jung-Hee;Kho, Byung-Hee;Kim, Jong-Won;Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine
    • /
    • v.19 no.2
    • /
    • pp.22-29
    • /
    • 2007
  • 1. Objectives: Sasang Constitutional Medicine, a part of the traditional Korean medical lore, treats illness through a constitutional typing system that categorizes people into four constitutional types. A few of the important criteria for differentiating the constitutional types are external appearance, inner state of mind, and pathological patterns. We have been developing a 3D Automatic Face Recognition Apparatus (3D-AFRA) in order to evaluate external appearance with more objectivity. The apparatus provides a 3D image and numerical data on facial configuration, and this study aims to evaluate the mechanical accuracy of the 3D-AFRA hardware. 2. Methods: Several objects of different shapes (cube, cylinder, cone, pyramid) were each scanned 10 times using the 3D-AFRA. The results were then compared and analyzed against data retrieved with a laser scanner known for its high accuracy. The error rates were analyzed for each grid point of the scanned contours using Rapidform2006 (a 3D scanning software package that collects grid-point data for the contours of products and product parts through 3D scanners and other 3D measuring devices; the grid-point data thus acquired is then used to reconstruct highly precise polygon and curvature models). 3. Results and Conclusions: The average error was 0.22 mm for the cube, 0.22 mm for the cylinder, 0.125 mm for the cone, and 0.172 mm for the pyramid. The visual data comparing error rates for the measurements retrieved with Rapidform2006 is shown in Fig. 3~Fig. 6. Blue indicates smaller error rates, while red indicates greater error rates. The protruding corners of the cube display red, indicating greater errors; the cylinder shows greater errors on its edges; the pyramid shows greater errors on the base surface and around the vertex; and the cone also shows greater errors around its protruding edge.
