• Title/Summary/Keyword: 3D Point cloud


The Maintenance and Management Method of Deteriorated Facilities Using 4D map Based on UAV and 3D Point Cloud (3D Point Cloud 기반 4D map 생성을 통한 노후화 시설물 유지 관리 방안)

  • Kim, Yong-Gu;Kwon, Jong-Wook
    • Journal of the Korea Institute of Building Construction / v.19 no.3 / pp.239-246 / 2019
  • According to a survey on the status of aged buildings in Korea, the number of deteriorated concrete buildings such as houses and apartment buildings has increased rapidly. To address this problem, research on facility management, one of the key factors in monitoring buildings, has also increased. Such research is divided into survey-based and technique-based approaches. Survey-based research, however, requires a great deal of time, money, and manpower, and worker safety cannot be guaranteed in the case of high-rise buildings. Technique-based research is difficult to apply to the current facility maintenance system, because detailed information on deteriorated facilities is hard to obtain and accuracy errors are a concern. This paper therefore aims to improve the facility-management environment through 4D maps, using a UAV, a camera, and the Pix4D mapper program to build a 3D model. In addition, the approach is expected to offer residents an easy way to verify the deterioration of their own buildings.

Point Cloud Content in Form of Interactive Holograms (포인트 클라우드 형태의 인터랙티브 홀로그램 콘텐츠)

  • Kim, Dong-Hyun;Kim, Sang-Wook
    • The Journal of the Korea Contents Association / v.12 no.9 / pp.40-47 / 2012
  • This work proposes a new mode of appreciation and interaction for media art, mediated by the human body as an instrument of awareness and perception. Its visual images are created in the form of a point cloud, a reconstruction by digital means of a style similar to the traditional Pointillism technique of Western painting. The paper presents content that fuses aesthetic elements with digital technology: point-cloud imagery is produced as video, projected onto holographic film, and spectators interact with the content through gestures. The production process consists of planning, creation of point-cloud video content, design of 3D gestures for interaction, and holographic film projection. The content visually and experientially expresses the process of memory recall in human consciousness: uncertain memories are materialized and recalled, with vague memories represented by the diffuse shapes of the point-cloud image, and the recall is completed as the spectator manipulates the images through interactive gestures.

Automatic Local Update of Triangular Mesh Models Based on Measurement Point Clouds (측정된 점데이터 기반 삼각형망 곡면 메쉬 모델의 국부적 자동 수정)

  • Woo, Hyuck-Je;Lee, Jong-Dae;Lee, Kwan-H.
    • Korean Journal of Computational Design and Engineering / v.11 no.5 / pp.335-343 / 2006
  • Design changes to an original surface model are frequently required in manufacturing: for example, when physical parts are modified or when parts are partially manufactured from analogous shapes. In such cases, an efficient method for updating a 3D model by locally adding scan data for the modified area is highly desirable. For this purpose, this paper presents a new procedure to update an initial model, composed of combinatorial triangular facets, based on a set of locally added point data. The initial surface model is first created from the initial point set by Tight Cocone, a water-tight surface reconstructor; the point cloud data for the update is then locally added onto the initial model in the same coordinate system. To update the initial model, the region on the initial surface that needs updating is recognized by detecting the overlap between the initial model and the boundary of the newly added point cloud. The initial surface model is then updated to the final output by replacing the recognized region with the newly added point cloud. The proposed method has been implemented and tested with several examples. The algorithm is practically useful for modifying surface models subject to physical part changes and free-form surface design.
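As a rough illustration of the update idea described above (detect the overlapping region, then replace it with the newly added data), here is a point-level sketch; note that the paper operates on triangular meshes, and the overlap radius below is an assumed threshold, not a value from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def update_point_set(initial, added, radius=0.05):
    """Replace the overlapped region of `initial` with `added` points.

    A point-level analogue of the paper's mesh update: any initial
    point within `radius` of the newly added scan is treated as part
    of the modified region and replaced. `radius` is an assumed
    overlap threshold, not a value from the paper.
    """
    tree = cKDTree(added)
    dist, _ = tree.query(initial)          # distance to nearest added point
    kept = initial[dist > radius]          # points outside the update region
    return np.vstack([kept, added])        # merged, updated point set

# Toy 2D example: the middle of a line of points is rescanned slightly higher
initial = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]])
added = np.array([[2.0, 0.01], [3.0, 0.01]])
updated = update_point_set(initial, added, radius=0.05)
```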

A Fast Correspondence Matching for Iterative Closest Point Algorithm (ICP 계산속도 향상을 위한 빠른 Correspondence 매칭 방법)

  • Shin, Gunhee;Choi, Jaehee;Kim, Kwangki
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.373-380 / 2022
  • This paper considers a fast correspondence-matching method for the iterative closest point (ICP) algorithm. In robotics, the ICP algorithm and its variants have been widely used for pose estimation, finding the translation and rotation that best align two point clouds. From a computational perspective, the main difficulty is finding the corresponding point on the reference point cloud for each observed point. Jump-table-based correspondence matching is one method for reducing this computation time. This paper proposes a method that corrects errors in an existing jump-table-based correspondence-matching algorithm: the criterion for activating the jump table is modified so that correspondence matching can be applied to situations, such as point-cloud registration problems with highly curved surfaces, where the existing method is not applicable. Both hardware and simulation experiments are performed for demonstration. In a hardware experiment using a Hokuyo-10LX LiDAR sensor, the new algorithm shows 100% correspondence-matching accuracy and an 88% decrease in computation time. Using the F1TENTH simulator, the proposed algorithm is tested in an autonomous driving scenario with 2D range-bearing point cloud data and also shows 100% correspondence-matching accuracy.
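The abstract does not spell out the jump-table construction itself, but the step it accelerates is the standard ICP correspondence search: for each observed point, find the nearest point in the reference cloud. A minimal baseline sketch of that step using a KD-tree (names and data are illustrative, not from the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

def find_correspondences(reference, observed):
    """For each observed point, find the closest reference point.

    This is the correspondence step of ICP; the paper's jump-table
    method accelerates exactly this nearest-neighbor search for
    ordered range scans. A KD-tree serves as a generic baseline here.
    """
    tree = cKDTree(reference)
    distances, indices = tree.query(observed)  # 1-NN per observed point
    return indices, distances

# Toy 2D scans: the observed scan is the reference shifted by 0.4 along x
reference = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
observed = reference + np.array([0.4, 0.0])
idx, d = find_correspondences(reference, observed)
```

With the correspondences in hand, an ICP iteration would estimate the rigid transform minimizing the summed squared distances of these pairs, apply it, and repeat.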

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.4 / pp.207-213 / 2018
  • This paper investigates the applicability of the Microsoft Kinect®, an RGB-depth camera, for acquiring a 3D image and spatial information of a target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters are calculated through a checkerboard experiment, yielding the focal length, principal point, and distortion coefficients. The extrinsic parameters describing the relationship between the two Kinect cameras consist of a rotation matrix and a translation vector. Images in the 2D projection space are converted into 3D images, yielding spatial information on the basis of the depth and RGB data. The measurement is verified by comparison with the lengths and locations in 2D images of the target structure.
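The pixel-to-3D conversion described above follows the standard pinhole back-projection once the intrinsics are calibrated. A minimal sketch (the intrinsic values below are placeholders, not the paper's calibrated parameters, and lens distortion is ignored):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image to 3D points with pinhole intrinsics.

    depth: (H, W) array of depths in meters (0 = no measurement).
    fx, fy: focal lengths in pixels; cx, cy: principal point.
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)
    return points[depth > 0]  # keep only valid measurements

# Example: a flat wall 2 m away seen by a tiny 4x4 depth image;
# fx, fy, cx, cy are illustrative values, not calibrated ones
depth = np.full((4, 4), 2.0)
pts = depth_to_points(depth, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
```

The pixel at the principal point back-projects to a point straight ahead of the camera, as expected for a fronto-parallel wall.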

Noncontact measurements of the morphological phenotypes of sorghum using 3D LiDAR point cloud

  • Park, Eun-Sung;Kumar, Ajay Patel;Arief, Muhammad Akbar Andi;Joshi, Rahul;Lee, Hongseok;Cho, Byoung-Kwan
    • Korean Journal of Agricultural Science / v.49 no.3 / pp.483-493 / 2022
  • Improving the efficiency of plant breeding and crop yield is important to meet increasing food demands. In plant phenotyping studies, the ability to measure morphological traits such as plant height, stem diameter, leaf length, leaf width, leaf angle, and panicle size plays an important role. However, manual phenotyping of plants is prone to human error and is labor-intensive and time-consuming, so techniques that measure phenotypic traits accurately and rapidly are needed. The aim of this study was to determine the feasibility of point cloud data from a 3D light detection and ranging (LiDAR) system for plant phenotyping. The results were verified against manually acquired data from sorghum samples. The study measured plant height, plant crown diameter, and panicle height and diameter; the R² values for these traits were 0.83, 0.94, 0.90, and 0.90, and the root mean square errors (RMSE) were 6.8 cm, 1.82 cm, 5.7 mm, and 7.8 mm, respectively. The results show good correlation between the point cloud data and the manually acquired data, indicating that the 3D LiDAR system has potential to measure sorghum phenotypes rapidly and accurately.
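The two agreement metrics the abstract reports, R² and RMSE between LiDAR-derived and manually measured traits, can be computed as below; the sample values are illustrative, not the study's data.

```python
import numpy as np

def r2_rmse(manual, lidar):
    """R-squared and RMSE of LiDAR measurements against manual ones.

    R² = 1 - SS_res / SS_tot, with the manual measurements treated as
    ground truth; RMSE is the root mean squared residual.
    """
    manual = np.asarray(manual, dtype=float)
    lidar = np.asarray(lidar, dtype=float)
    residuals = manual - lidar
    rmse = np.sqrt(np.mean(residuals ** 2))
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((manual - manual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot, rmse

# Illustrative plant-height pairs in cm (not the study's data)
r2, rmse = r2_rmse([100, 120, 140, 160], [102, 118, 143, 158])
```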

Precision comparison of 3D photogrammetry scans according to the number and resolution of images

  • Park, JaeWook;Kim, YunJung;Kim, Lyoung Hui;Kwon, SoonChul;Lee, SeungHyun
    • International Journal of Advanced Smart Convergence / v.10 no.2 / pp.108-122 / 2021
  • With the development of 3D graphics software and faster computer hardware, realistic rendering is now possible not only in film visual effects but also in console games. In producing such realistic 3D models, 3D scans are increasingly used because they yield hyper-realistic results with relatively little effort. Among the various 3D scanning methods, photogrammetry requires only a camera and no additional hardware, so demand for it is growing rapidly. Most 3D artists capture as many images as possible, for example with a video camera, and then compute using all of them; photogrammetry is therefore regarded as a task demanding large amounts of memory and long computation times. However, research on how to obtain precise results from 3D photogrammetry scans is insufficient, and using a large number of photographs increases production time and data size while decreasing productivity. In this study, point cloud data were generated while varying the number and resolution of photographic images, and compared against the original data. Precision was measured using the average distance and standard deviation of each vertex of the point cloud. By comparing and analyzing the differences in precision according to the number and resolution of images, this paper offers 3D artists a direction for obtaining the most precise and efficient results.
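The precision measure described, the average distance and standard deviation of each vertex relative to the original data, can be sketched as a nearest-neighbor comparison between the two point clouds; the toy data below are illustrative, not the paper's scans.

```python
import numpy as np
from scipy.spatial import cKDTree

def precision_stats(scan, original):
    """Mean and standard deviation of per-vertex nearest distances.

    Each vertex of the reconstructed point cloud is compared with its
    closest point in the original (reference) data, mirroring the
    paper's distance-based precision measure.
    """
    d, _ = cKDTree(original).query(scan)
    return d.mean(), d.std()

# Toy example: a reference grid and a copy offset by 1 mm on each axis
g = np.linspace(0.0, 1.0, 5)
original = np.array([[x, y, z] for x in g for y in g for z in g])
scan = original + 0.001
mean_d, std_d = precision_stats(scan, original)
```

A uniform offset gives the same nearest distance for every vertex, so the standard deviation is (numerically) zero; real scans spread around the mean.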

Large Point Cloud-based Pipe Shape Reverse Engineering Automation Method (대용량 포인트 클라우드 기반 파이프 형상 역설계 자동화 방법 연구)

  • Kang, Tae-Wook;Kim, Ji-Eum
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.3 / pp.692-698 / 2016
  • Recently, the market share of facility extension and maintenance has grown relative to new facility construction. In this context, it is important to examine the reverse engineering of MEP (mechanical, electrical, and plumbing) facilities, which incur high operation and management costs in the architecture domain. The purpose of this study was to propose a reverse-engineering automation method for pipe shapes based on large point clouds. Related research was surveyed, an automation method for pipe shapes handling large point clouds was proposed, and a prototype was developed to validate the results. The validation results indicate the method is suitable for large-scale data processing: the standard deviation of rendering performance when searching massive 3D point cloud data was 0.004 seconds.

Obstacle Detection for Generating the Motion of Humanoid Robot (휴머노이드 로봇의 움직임 생성을 위한 장애물 인식방법)

  • Park, Chan-Soo;Kim, Doik
    • Journal of Institute of Control, Robotics and Systems / v.18 no.12 / pp.1115-1121 / 2012
  • This paper proposes a method for a humanoid robot to extract accurate planes of objects in an unstructured environment using a laser scanner. By panning and tilting a 2D laser scanner installed on the robot's head, a 3D depth map of the environment is generated. The proposed plane-extraction method is then applied to this depth map: using hierarchical clustering, points lying on the same plane are extracted from the point cloud, and after segmenting the planes, their dimensions are calculated. The accuracy of the extracted planes is evaluated experimentally, and the results show the effectiveness of the proposed method for extracting planes around a humanoid robot in an unstructured environment.
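Once points belonging to the same surface have been clustered, fitting a plane to each cluster is a standard least-squares step. A minimal sketch via SVD (the hierarchical clustering itself is assumed already done, and the data are illustrative):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to a cluster of 3D points via SVD.

    In the paper, clusters come from hierarchical clustering of the
    panned/tilted laser-scan depth map; here a cluster is assumed
    given. Returns the plane's centroid and unit normal.
    """
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is
    # the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# A slightly noisy patch of the plane z = 0
rng = np.random.default_rng(1)
pts = rng.random((200, 3))
pts[:, 2] = 0.001 * rng.standard_normal(200)  # 1 mm noise about z = 0
centroid, normal = fit_plane(pts)
```

The recovered normal should point (up to sign) along the z axis, from which the plane's extent and dimensions can then be measured in-plane.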