• Title/Summary/Keyword: 3D Point Data


Underground Facility Survey and 3D Visualization Using Drones (드론을 활용한 지하시설물측량 및 3D 시각화)

  • Kim, Min Su;An, Hyo Won;Choi, Jae Hoon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.1
    • /
    • pp.1-14
    • /
    • 2022
  • In order to conduct rapid, accurate, and safe surveying at excavation sites, this study examined the feasibility of underground facility surveying using drones and the expected effects of 3D visualization, with the following results. A Phantom 4 Pro drone with a 20 MP camera was flown at an altitude of 30 m with 85% image overlap, securing a GSD (Ground Sampling Distance) of 0.85 mm; using 4 GCPs (Ground Control Points) and 2 check points, accuracies of 7.3 mm at the ground control points and 11 mm at the check points were obtained. The importance of GCPs when surveying with a low-cost drone was confirmed: without ground control points, the error range of the X value was -81.2 cm to +90.0 cm and that of the Y value was +6.8 cm to +155.9 cm. The point cloud data were classified with the Pix4D program, separating underground facility data from road pavement data, and the road and underground facilities of the actual model were visualized in 3D through an overlapping process. The overlaid point cloud data can be used to check the location and depth of a desired spot in the open-source program CloudCompare. This study is expected to become a new paradigm for underground facility surveying.
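A minimal sketch of the overlay-and-inspect step described above, assuming the classified clouds are exported from Pix4D as separate files (file names and the depth query are hypothetical); the abstract itself performs this check interactively in CloudCompare:

```python
# Hypothetical overlay/inspection sketch; file names are assumptions.
import numpy as np
import open3d as o3d

facility = o3d.io.read_point_cloud("underground_facility.ply")  # assumed export
pavement = o3d.io.read_point_cloud("road_pavement.ply")         # assumed export

def depth_at(xy, radius=0.25):
    """Cover depth at a planar (X, Y) query: pavement surface elevation minus
    the lowest facility point found within the given search radius (metres)."""
    fac = np.asarray(facility.points)
    pav = np.asarray(pavement.points)
    near_f = fac[np.linalg.norm(fac[:, :2] - xy, axis=1) < radius]
    near_p = pav[np.linalg.norm(pav[:, :2] - xy, axis=1) < radius]
    if len(near_f) == 0 or len(near_p) == 0:
        return None
    return near_p[:, 2].max() - near_f[:, 2].min()

print(depth_at(np.array([3.2, 7.5])))                    # example coordinates
o3d.visualization.draw_geometries([facility, pavement])  # visual overlay
```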

3D Scanning Data Coordination and As-Built-BIM Construction Process Optimization - Utilization of Point Cloud Data for Structural Analysis

  • Kim, Tae Hyuk;Woo, Woontaek;Chung, Kwangryang
    • Architectural research
    • /
    • v.21 no.4
    • /
    • pp.111-116
    • /
    • 2019
  • The premise of this research is the recent advancement of Building Information Modeling (BIM) technology and laser scanning technology (3D scanning). The purpose of the paper is to amplify the potential offered by the combination of BIM and Point Cloud Data (PCD) for structural analysis. Today, enormous amounts of construction site data can potentially be categorized and quantified through BIM software. One of the extraordinary strengths of BIM software comes from its collaborative features, which can combine different sources of data and knowledge. There are vastly different ways to obtain construction site data, and 3D scanning is one of the most effective ways to collect close-to-reality site data. The objective of this paper is to emphasize the prospects of pre-scanning and post-scanning automation algorithms. The research draws on recent developments in 3D scanning and BIM technology to advance Scan-to-BIM. The paper reviews the current issues of Scan-to-BIM tasks in achieving As-Built BIM, suggests how they can be improved, and proposes a method of coordinating and utilizing PCD for construction and structural analysis during construction.
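As an illustration of the kind of PCD-to-design coordination the paper discusses, here is a hedged sketch of registering an as-built scan to points sampled from a BIM model and reporting deviations; file names, the tolerance, and the ICP choice are assumptions, not the authors' pipeline:

```python
# Assumed Scan-to-BIM coordination step: align scan to design, report deviation.
import numpy as np
import open3d as o3d

scan = o3d.io.read_point_cloud("as_built_scan.ply")         # assumed export
design = o3d.io.read_point_cloud("bim_sampled_points.ply")  # sampled from BIM

# Refine the alignment with point-to-point ICP (initial pose assumed identity).
result = o3d.pipelines.registration.registration_icp(
    scan, design, max_correspondence_distance=0.05, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
scan.transform(result.transformation)

# Per-point deviation of the as-built scan from the as-designed geometry.
dist = np.asarray(scan.compute_point_cloud_distance(design))
print("mean deviation [m]:", dist.mean(), "95th percentile:", np.percentile(dist, 95))
```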

3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young;Yoo, Eun Jin;Lee, Dong-Cheon;Lee, Yong Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.6_2
    • /
    • pp.643-651
    • /
    • 2012
  • Object recognition belongs to high-level processing and is one of the difficult and challenging tasks in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the mid-1980s; however, the ultimate goal of digital photogrammetry, intelligent and autonomous surface reconstruction, has not yet been achieved. Object recognition requires a robust shape description of objects, yet most shape descriptors are designed for 2D image space. Such descriptors therefore have to be extended to handle 3D data, such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space with a hierarchical approach for segmenting point cloud data. The experiments demonstrate the effectiveness and robustness of the proposed method for shape description and point cloud segmentation. The geometric characteristics of various roof types are well described and will eventually serve as the basis for object modeling. The segmentation accuracy on the simulated data was evaluated by measuring the coordinates of the corners on the segmented patch boundaries. The overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
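A simplified sketch, not the authors' hierarchical descriptor, of what "extending the chain code to 3D object space" can look like: successive steps between voxelized points are labelled with one of the 26 neighbourhood directions:

```python
# Naive 3D chain code: encode unit steps between voxelized points as symbols 0..25.
import numpy as np
from itertools import product

# 26-connected neighbourhood directions, excluding (0, 0, 0).
DIRS = np.array([d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)])

def chain_code_3d(points, voxel=0.5):
    """Encode an ordered point sequence as 3D chain-code symbols (0..25)."""
    cells = np.round(np.asarray(points) / voxel).astype(int)
    codes = []
    for a, b in zip(cells[:-1], cells[1:]):
        step = np.clip(b - a, -1, 1)   # collapse longer jumps to a unit step
        if not step.any():
            continue                   # same voxel: no symbol emitted
        codes.append(int(np.argmax((DIRS == step).all(axis=1))))
    return codes

# Example: a ridge line climbing in +x and +z yields repeated symbols,
# the kind of regularity a roof-patch segmenter can exploit.
ridge = [(0, 0, 0), (0.5, 0, 0.3), (1.0, 0, 0.6), (1.5, 0, 0.9)]
print(chain_code_3d(ridge))
```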

Fusing Algorithm for Dense Point Cloud in Multi-view Stereo (Multi-view Stereo에서 Dense Point Cloud를 위한 Fusing 알고리즘)

  • Han, Hyeon-Deok;Han, Jong-Ki
    • Journal of Broadcast Engineering
    • /
    • v.25 no.5
    • /
    • pp.798-807
    • /
    • 2020
  • As digital camera technologies have developed, 3D images can be constructed from pictures captured by multiple cameras. The 3D image data are represented as a point cloud, which consists of the 3D coordinates of the data and the related attributes. Various techniques have been proposed to construct point cloud data; among them, Structure-from-Motion (SfM) and Multi-view Stereo (MVS) are examples of image-based technologies in this field. According to previous research, the point cloud data generated by SfM and MVS may be sparse because the depth information may be incorrect and some data are removed. In this paper, we propose an efficient algorithm that enhances the point cloud so that the density of the generated point cloud increases. Simulation results show that the proposed algorithm outperforms the conventional algorithms both objectively and subjectively.
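A generic sketch of the depth-consistency criterion commonly used when fusing MVS depth maps into a denser cloud; camera intrinsics, the relative pose, and the tolerance are assumptions, and the paper's specific fusing algorithm is not reproduced:

```python
# Generic two-view depth-map fusion check: keep a point from view A only if
# re-projecting it into view B yields an agreeing depth.
import numpy as np

def backproject(depth, K):
    """Lift a depth map (H, W) to camera-frame 3D points, one per pixel."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T
    return rays * depth.reshape(-1, 1)

def fuse_two_views(depth_a, depth_b, K, T_ab, tol=0.02):
    """Keep points of view A whose depth is confirmed by view B.
    K: 3x3 intrinsics, T_ab: 4x4 transform from camera A to camera B."""
    pts_a = backproject(depth_a, K)                   # in camera A frame
    pts_in_b = pts_a @ T_ab[:3, :3].T + T_ab[:3, 3]   # transform A -> B
    proj = pts_in_b @ K.T
    u = (proj[:, 0] / proj[:, 2]).round().astype(int)
    v = (proj[:, 1] / proj[:, 2]).round().astype(int)
    h, w = depth_b.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (pts_in_b[:, 2] > 0)
    consistent = np.zeros(len(pts_a), dtype=bool)
    consistent[valid] = np.abs(depth_b[v[valid], u[valid]] - pts_in_b[valid, 2]) < tol
    return pts_a[consistent]
```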

Spherical Point Tracing for Synthetic Vehicle Data Generation with 3D LiDAR Point Cloud Data (3차원 LiDAR 점군 데이터에서의 가상 차량 데이터 생성을 위한 구면 점 추적 기법)

  • Sangjun Lee;Hakil Kim
    • Journal of Broadcast Engineering
    • /
    • v.28 no.3
    • /
    • pp.329-332
    • /
    • 2023
  • 3D object detection using deep neural networks has been developed extensively for obstacle detection in autonomous vehicles because it can recognize not only the class of a target object but also the distance to the object. However, 3D object detection models show lower detection performance for distant objects than for nearby objects, which is a critical issue for autonomous vehicles. In this paper, we introduce a technique that increases the performance of 3D object detection models, particularly in recognizing distant objects, by generating virtual 3D vehicle data and adding it to the dataset used for model training. We use a spherical point tracing method that leverages the characteristics of 3D LiDAR sensor data to create virtual vehicles that closely resemble real ones, and we demonstrate the validity of the virtual data by using it in model training to improve recognition performance for objects at all distances.
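A loose sketch of the underlying idea, copying a segmented nearby vehicle to a larger range in spherical (LiDAR) coordinates and thinning it, without the paper's occlusion and ring-structure handling:

```python
# Place a copied vehicle cluster farther away in spherical coordinates and
# decimate it to mimic the sparser sampling a real LiDAR produces at range.
import numpy as np

def to_spherical(xyz):
    r = np.linalg.norm(xyz, axis=1)
    az = np.arctan2(xyz[:, 1], xyz[:, 0])
    el = np.arcsin(xyz[:, 2] / np.maximum(r, 1e-9))
    return r, az, el

def to_cartesian(r, az, el):
    return np.stack([r * np.cos(el) * np.cos(az),
                     r * np.cos(el) * np.sin(az),
                     r * np.sin(el)], axis=1)

def push_vehicle_back(vehicle_xyz, extra_range, keep_ratio):
    """Move a vehicle cluster radially outward and randomly thin its points."""
    r, az, el = to_spherical(vehicle_xyz)
    far = to_cartesian(r + extra_range, az, el)
    keep = np.random.rand(len(far)) < keep_ratio
    return far[keep]

# Example: a placeholder "nearby car" cluster pushed ~30 m farther away,
# keeping roughly a quarter of its points.
near_car = np.random.randn(500, 3) * 0.8 + np.array([8.0, 2.0, -1.0])
far_car = push_vehicle_back(near_car, extra_range=30.0, keep_ratio=0.25)
```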

The Analysis on the Torso Type Dress Form Developed Through the 3-D Virtual Body Modeling of the Korean Female Fashion Models (국내 여성 패션모델의 3차원 가상인체 모델링을 통한 토르소형 인대 개발과 그 특성 분석)

  • Park, Gin Ah
    • Journal of the Korean Society of Costume
    • /
    • v.65 no.2
    • /
    • pp.157-175
    • /
    • 2015
  • The study aimed to develop a torso-type dress form representing the body features of female fashion models in Korea. To fulfill this purpose, 5 female fashion models aged between 20 and 26, having the average body measurements of professional fashion models in Korea, were selected and their 3D whole-body scan data were analyzed. The 3D whole-body scanning method made it possible to generate a virtual female fashion model within the CAD system by measuring the subjects' body shapes and sizes. In addition, the virtual model's body data led to the development of a standard female fashion model dress form for efficient fashion show preparation. To manufacture the physical dress form, 3D printing technology was adopted. The results are as follows: (1) the body measurements (unit: cm) of the developed dress form were: biacromion length 36.0; bust point to bust point 16.6; front/back interscye lengths 32.0/33.0; neck point to breast point 26.0; neck point to breast point to waist line 41.5; waist front/back lengths 34.5/38.5; waist to hip length 24.0; bust circumference 85.0; underbust circumference 75.0; waist circumference 65.0; hip circumference 92.0. (2) The differences in body measurements between the developed and existing dress forms were most pronounced in the neck point to breast point and waist to hip length measurements. (3) The body shape features of the developed dress form show that the bust, shoulder blade, shoulder slope, abdomen, and back waist line to hip line parts were manufactured more realistically.
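For illustration only, a simplified way to derive one such measurement (a circumference) from scan data: slice the cloud at a given height and take the convex-hull perimeter of the cross-section. Real dress-form measurement extraction uses body landmarks and handles concave sections; the function and parameters below are assumptions:

```python
# Approximate a girth from a body-scan point cloud by slicing at height z.
import numpy as np
from scipy.spatial import ConvexHull

def circumference_at(points, z, thickness=0.005):
    """Approximate girth (same units as the scan) at height z."""
    band = points[np.abs(points[:, 2] - z) < thickness]
    if len(band) < 3:
        return None
    hull = ConvexHull(band[:, :2])
    return hull.area  # for 2D input, ConvexHull.area is the perimeter
```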

Design of RBFNNs Pattern Classifier Realized with the Aid of PSO and Multiple Point Signature for 3D Face Recognition (3차원 얼굴 인식을 위한 PSO와 다중 포인트 특징 추출을 이용한 RBFNNs 패턴분류기 설계)

  • Oh, Sung-Kwun;Oh, Seung-Hun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.6
    • /
    • pp.797-803
    • /
    • 2014
  • In this paper, a 3D face recognition system is designed using polynomial-based RBFNNs. In 2D face recognition, recognition performance is reduced by external environmental factors such as illumination and facial pose; 3D face recognition is used to compensate for these shortcomings. In the preprocessing part, the obtained 3D face image shapes, which vary with position angle, are transformed into frontal shapes through pose compensation, and the depth data of the face shape are extracted using the Multiple Point Signature. Overall face depth information is obtained by using two or more reference points. The direct use of the extracted high-dimensional data leads to deterioration of learning speed as well as recognition performance, so we exploit the principal component analysis (PCA) algorithm to reduce the dimensionality of the data. Parameter optimization is carried out with the aid of PSO for effective training and recognition. The proposed pattern classifier is experimented with and evaluated using a dataset obtained in the IC & CI Lab.
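A small sketch of just the PCA dimension-reduction step mentioned in the abstract; the feature dimensions and component count are assumptions, and the polynomial RBFNN classifier and PSO parameter search are not reproduced:

```python
# PCA dimension reduction of high-dimensional depth-feature vectors
# before they are fed to a classifier.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
depth_features = rng.normal(size=(200, 4096))  # 200 face samples, assumed dims

pca = PCA(n_components=50)                     # reduced dimensionality (assumed)
reduced = pca.fit_transform(depth_features)
print(reduced.shape, pca.explained_variance_ratio_.sum())
```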

Comparison of Size between direct-measurement and 3D body scanning (중국 성인여성의 직접계측과 3D Body scanning 치수 비교 연구)

  • Cha, Su-Joung
    • Journal of Fashion Business
    • /
    • v.16 no.1
    • /
    • pp.150-159
    • /
    • 2012
  • This study intends to analyze the differences between 3D body scanning sizes and direct-measurement sizes of the same subjects. The subjects of the study were female university students in China, and the 3D data were analyzed with a 3D Body Measurement Soft System. The conclusions are as follows. For circumferences, the error between direct-measurement and 3D body scanning sizes ranges from 4.9 mm to 62.2 mm, and the neck circumference from direct measurement is larger than the 3D body scanning size. The height error range is from 0.6 mm to 51 mm; for underbust, waist, and hip heights, direct-measurement sizes are higher than 3D body scanning sizes. The gap in widths is from 3.8 mm to 21.9 mm, a relatively narrow range compared to the others; only the direct-measurement neck width is wider than the 3D body scanning size. The error range of lengths is from 0.3 mm to 41.8 mm. The 3D body scanning sizes of lateral neck to waistline, upper-arm length, arm length, neck shoulder point to breast point, shoulder center point to breast point, and lateral shoulder to breast point are longer than the direct-measurement sizes, giving a negative margin of error. The study intended to use the same measurement points for direct measurement and 3D body scanning, but errors occur because direct-measurement points are placed by a person, whereas 3D body scanning points are set by an automatic system. The measurement points of direct measurement and 3D body scanning do not coincide, so a standard for setting measurement points needs to be established.
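A small sketch of the comparison itself, the per-item error and its range between the two measurement methods, with placeholder item names and values rather than the study's data:

```python
# Per-item error between direct measurement and 3D body scanning (values in mm,
# placeholders only).
import pandas as pd

direct = pd.Series({"neck_circ": 336.0, "waist_circ": 682.0, "hip_circ": 915.0})
scanned = pd.Series({"neck_circ": 324.0, "waist_circ": 690.0, "hip_circ": 920.0})

error = direct - scanned                 # positive: direct measurement larger
print(error)
print("error range [mm]:", error.abs().min(), "to", error.abs().max())
```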

Rapid Manufacturing of 3D Prototype from 3D scan data using VLM-ST (단속형 가변적층쾌속조형공정을 이용한 3차원 스캔데이터로부터 3차원 시작품의 쾌속 제작)

  • 이상호;안동규;김효찬;양동열;박두섭;채희창
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2002.05a
    • /
    • pp.536-539
    • /
    • 2002
  • Reverse engineering (RE) technology can quickly generate 3D point cloud data of an object by capturing the surface of a model with a 3D scanner. In rapid prototyping (RP) technology, prototypes are rapidly produced from 3D CAD models on a layer-by-layer additive basis. In this paper, a physical human head shape is duplicated using a new RP process, the transfer-type Variable Lamination Manufacturing process using expandable polystyrene foam sheet (VLM-ST), after the point cloud data of a human head measured with a 3D SNX scanner are converted to an STL file. The duplicated head shows that the VLM-ST process, in connection with the 3D scanner, is a fast and efficient process in that shapes with free surfaces, such as the human head, can be duplicated with ease. Considering the measurement time and the shape duplication time, the use of the 3D SNX scanner and the VLM-ST process is expected to reduce the lead time for the development of new products in comparison with other existing RE-RP connected manufacturing systems.
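A hedged sketch of the point-cloud-to-STL step using present-day open-source tooling (Open3D Poisson reconstruction); the original work relied on the 3D SNX scanner's own conversion, so this is only an analogous path, and the file names are assumptions:

```python
# Convert a scanned point cloud into an STL surface for an RP process.
import open3d as o3d

pcd = o3d.io.read_point_cloud("head_scan.ply")   # assumed scan export
pcd.estimate_normals()                           # Poisson needs oriented normals

mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
mesh.compute_vertex_normals()                    # STL export expects normals
o3d.io.write_triangle_mesh("head_scan.stl", mesh)
```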
