• Title/Summary/Keyword: 3D Point cloud

Supporting ROI transmission of 3D Point Cloud Data based on 3D Manifesto (3차원 Manifesto 기반 3D Point Cloud Data의 ROI 전송 지원 방안)

  • Im, Jiehon; Kim, Junsik; Rhyu, Sungryeul; Kim, Hoejung; Kim, Sang IL; Kim, Kyuheon
    • Journal of the Semiconductor & Display Technology, v.17 no.4, pp.21-26, 2018
  • Recently, with the emergence of 3D cameras, 3D scanners, and various sensors including Lidar, 3D data is expected to be used in applications such as AR, VR, and autonomous vehicles. In particular, 3D point cloud data consisting of tens to hundreds of thousands of 3D points grows rapidly in size compared with 2D data, so efficient encoding/decoding technology for smooth service within a limited bandwidth, as well as service provision technology that differentiates the region of interest from its surroundings, is needed. In this paper, we propose a new quality parameter that considers the characteristics of 3D point clouds rather than the codec-based quality change assumed in MPEG V-PCC for 3D point cloud compression, a 3D grid division method and representation for selectively transmitting 3D point clouds according to the user's region of interest, and a new 3D Manifesto. Using the proposed technique, streams at a wider range of bitrates can be generated, and it is confirmed that the efficiency of the network, decoder, and renderer can be increased by transmitting selectively as needed.
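
The grid-based ROI selection described above can be illustrated with a small sketch: partition the cloud into a uniform 3D grid and keep only the cells that intersect the user's region of interest. This is a minimal illustration, not the paper's implementation; the cell size, ROI box, and function names are assumptions.

```python
import numpy as np

def grid_partition(points, cell_size):
    """Assign each 3D point to a uniform grid cell; returns {cell index: point ids}."""
    mins = points.min(axis=0)
    cell_idx = np.floor((points - mins) / cell_size).astype(int)
    cells = {}
    for i, idx in enumerate(map(tuple, cell_idx)):
        cells.setdefault(idx, []).append(i)
    return cells, mins

def select_roi_cells(cells, mins, cell_size, roi_min, roi_max):
    """Keep the points of every grid cell whose box intersects the ROI box."""
    selected = []
    for idx, point_ids in cells.items():
        cell_min = mins + np.array(idx) * cell_size
        cell_max = cell_min + cell_size
        if np.all(cell_max >= roi_min) and np.all(cell_min <= roi_max):
            selected.extend(point_ids)
    return np.array(selected)

# Transmit only the cells that overlap the user's region of interest.
pts = np.random.rand(100000, 3)
cells, mins = grid_partition(pts, cell_size=0.1)
roi_ids = select_roi_cells(cells, mins, 0.1,
                           roi_min=np.array([0.2, 0.2, 0.2]),
                           roi_max=np.array([0.5, 0.5, 0.5]))
roi_points = pts[roi_ids]
```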

Complete 3D Surface Reconstruction from Unstructured Point Cloud

  • Kim, Seok-Il; Li, Rixie
    • Journal of Mechanical Science and Technology, v.20 no.12, pp.2034-2042, 2006
  • In this study, a complete 3D surface reconstruction method is proposed based on the concept that the vertices of the surface model can be completely matched to the unstructured point cloud. To generate the initial mesh model from the point cloud, mesh subdivision of the bounding box and a shrink-wrapping algorithm are introduced. The control mesh model, which represents the topology of the point cloud well, is derived from the initial mesh model by using a mesh simplification technique based on the original QEM algorithm, and the parametric surface model, which approximately represents the geometry of the point cloud, is derived by applying a local subdivision surface fitting scheme to the control mesh model. To reconstruct a completely matching surface model, isolated points are inserted on the parametric surface model and mesh optimization is carried out. In particular, fast 3D surface reconstruction is realized by introducing a voxel-based nearest-point search algorithm, and the simulation results demonstrate the effectiveness of the proposed surface reconstruction method.
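
The voxel-based nearest-point search mentioned in the abstract can be sketched as follows, assuming points are hashed into a voxel dictionary and queried by expanding rings of neighboring voxels. Function names and the voxel size are illustrative, and the ring search is a simplified approximation rather than the paper's exact algorithm.

```python
import numpy as np
from collections import defaultdict

def build_voxel_index(points, voxel_size):
    """Hash every point into its voxel cell for fast nearest-point queries."""
    index = defaultdict(list)
    for i, p in enumerate(points):
        index[tuple(np.floor(p / voxel_size).astype(int))].append(i)
    return index

def nearest_point(query, points, index, voxel_size, max_radius=4):
    """Search the query's voxel and growing rings of neighbor voxels.

    Approximate for brevity: a marginally closer point just outside the first
    non-empty ring could be missed.
    """
    base = np.floor(query / voxel_size).astype(int)
    for radius in range(1, max_radius + 1):
        candidates = []
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                for dz in range(-radius, radius + 1):
                    candidates.extend(index.get((base[0] + dx, base[1] + dy, base[2] + dz), []))
        if candidates:
            dists = np.linalg.norm(points[candidates] - query, axis=1)
            return candidates[int(np.argmin(dists))]
    return None
```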

Density Scalability of Video Based Point Cloud Compression by Using SHVC Codec (SHVC 비디오 기반 포인트 클라우드 밀도 스케일러빌리티 방안)

  • Hwang, Yonghae; Kim, Junsik; Kim, Kyuheon
    • Journal of Broadcast Engineering, v.25 no.5, pp.709-722, 2020
  • A point cloud, which is a cluster of numerous points, can express a 3D object beyond the 2D plane. Each point basically contains 3D coordinates and color data, and may additionally contain reflectance or other attributes. Point clouds therefore demand research and development of much more effective compression technology. Video-based Point Cloud Compression (V-PCC) technology is under development and standardization, building on established video codecs. Despite its highly effective compression, point cloud services will still be limited by terminal specifications and network conditions. 2D video faced the same problems, and to remedy them it uses Scalable High efficiency Video Coding (SHVC), Dynamic Adaptive Streaming over HTTP (DASH), and other technologies. This paper proposes a density scalability method for V-PCC using the SHVC codec.
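
Density scalability itself is realized through the SHVC codec in the paper; the sketch below only illustrates the underlying idea of splitting one cloud into a sparse base layer and a denser enhancement layer, using a simple voxel subsampling rule that is an assumption, not the proposed method.

```python
import numpy as np

def split_density_layers(points, voxel_size):
    """Split a cloud into a sparse base layer (one point per voxel) and an
    enhancement layer holding the remaining points."""
    keys = np.floor(points / voxel_size).astype(int)
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    base_mask = np.zeros(len(points), dtype=bool)
    base_mask[first_idx] = True
    return points[base_mask], points[~base_mask]

pts = np.random.rand(50000, 3)
base, enhancement = split_density_layers(pts, voxel_size=0.05)
# A constrained client renders only `base`; a capable one adds `enhancement`.
```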

Dense Thermal 3D Point Cloud Generation of Building Envelope by Drone-based Photogrammetry

  • Jo, Hyeon Jeong; Jang, Yeong Jae; Lee, Jae Wang; Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.39 no.2, pp.73-79, 2021
  • Recently, there has been growing interest in energy conservation and emission reduction. In the fields of architecture and civil engineering, energy monitoring of structures is required to respond to these energy issues. For thermal monitoring, thermal images have gained popularity for their rich visual information. With the rapid development of drone platforms, aerial thermal images acquired by drone can be used to monitor not only a part of a structure but a wider coverage. In addition, the stereo photogrammetric process is expected to generate a 3D point cloud with thermal information. However, thermal images have very poor resolution and a narrow field of view, which limits the use of drone-based thermal photogrammetry. In this study, we aimed to generate a 3D thermal point cloud using both visible and thermal images. The visible images have high spatial resolution and can produce precise and dense point clouds. We then extract thermal information from the thermal images and assign it to the point cloud by precisely establishing photogrammetric collinearity between the point cloud and the thermal images. From the experiment, we successfully generated a dense 3D thermal point cloud showing the 3D thermal distribution over the building structure.
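
Assigning thermal values to a visible-image point cloud amounts to projecting each 3D point into the thermal image through the collinearity (pinhole) model and sampling the temperature at the projected pixel. A minimal sketch, assuming a known thermal-camera intrinsic matrix K and exterior orientation (R, t); the names are illustrative, not the paper's notation.

```python
import numpy as np

def assign_thermal(points, K, R, t, thermal_image):
    """Sample a temperature per 3D point by projecting it into the thermal image.

    K: 3x3 thermal-camera intrinsics; (R, t): world-to-camera exterior orientation.
    Points behind the camera or outside the image stay NaN.
    """
    cam = (R @ points.T + t.reshape(3, 1)).T          # world -> camera frame
    temps = np.full(len(points), np.nan)
    in_front = cam[:, 2] > 0
    uv = (K @ cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]                        # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = thermal_image.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    target = np.flatnonzero(in_front)[inside]
    temps[target] = thermal_image[v[inside], u[inside]]
    return temps
```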

Fusing Algorithm for Dense Point Cloud in Multi-view Stereo (Multi-view Stereo에서 Dense Point Cloud를 위한 Fusing 알고리즘)

  • Han, Hyeon-Deok; Han, Jong-Ki
    • Journal of Broadcast Engineering, v.25 no.5, pp.798-807, 2020
  • As digital camera technologies have developed, 3D images can be constructed from pictures captured by multiple cameras. The 3D image data is represented in the form of a point cloud, which consists of the 3D coordinates of the data and the related attributes. Various techniques have been proposed to construct point cloud data; among them, Structure-from-Motion (SfM) and Multi-view Stereo (MVS) are examples of image-based technologies in this field. According to previous research, the point cloud data generated from SfM and MVS may be sparse because the depth information may be incorrect and some data are removed. In this paper, we propose an efficient algorithm that enhances the point cloud so that the density of the generated point cloud increases. Simulation results show that the proposed algorithm outperforms conventional algorithms both objectively and subjectively.
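
As background, a typical MVS pipeline produces its point cloud by back-projecting per-view depth maps and merging the results. The sketch below shows that generic fusion step only, not the paper's densification algorithm; the voxel merging rule and function names are assumptions.

```python
import numpy as np

def backproject(depth, K, R, t):
    """Back-project one depth map into world-frame 3D points ((R, t): world -> camera)."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = depth > 0
    pix = np.stack([u[valid], v[valid], np.ones(valid.sum())])   # homogeneous pixels
    cam = (np.linalg.inv(K) @ pix) * depth[valid]                # rays scaled by depth
    world = R.T @ (cam - t.reshape(3, 1))                        # camera -> world
    return world.T

def fuse_views(depth_maps, Ks, Rs, ts, voxel=0.01):
    """Merge per-view points, collapsing near-duplicates onto a voxel grid."""
    all_pts = np.vstack([backproject(d, K, R, t)
                         for d, K, R, t in zip(depth_maps, Ks, Rs, ts)])
    _, idx = np.unique(np.floor(all_pts / voxel).astype(int), axis=0, return_index=True)
    return all_pts[idx]
```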

Point Cloud Generation Method Based on Lidar and Stereo Camera for Creating Virtual Space (가상공간 생성을 위한 라이다와 스테레오 카메라 기반 포인트 클라우드 생성 방안)

  • Lim, Yo Han; Jeong, In Hyeok; Lee, San Sung; Hwang, Sung Soo
    • Journal of Korea Multimedia Society, v.24 no.11, pp.1518-1525, 2021
  • With the growth of the VR industry and the rise of the digital twin industry, the importance of building 3D data identical to a real space is increasing. However, doing so requires expert personnel and a huge amount of time. In this paper, we propose a system that generates point cloud data with the same shape and color as a real space simply by scanning the space. The proposed system integrates 3D geometric information from a lidar and color information from a stereo camera into one point cloud. Since the number of 3D points generated by the lidar is not enough to express a real space with good quality, some pixels of the 2D image generated by the camera are mapped to the correct 3D coordinates to increase the number of points. Additionally, to minimize capacity, overlapping points are filtered out so that only one point exists at the same 3D coordinates. Finally, the 6DoF pose information generated from the lidar point cloud is replaced with the one generated from the camera images to position the points more accurately. Experimental results show that the proposed system easily and quickly generates point clouds very similar to the scanned space.
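
The overlap-filtering step, keeping a single point per 3D coordinate to minimize capacity, can be sketched as follows; the rounding precision is an assumed parameter, not a value from the paper.

```python
import numpy as np

def deduplicate_points(points, colors, decimals=3):
    """Keep one point (and its color) per rounded 3D coordinate."""
    keys = np.round(points, decimals)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[idx], colors[idx]
```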

Map Error Measuring Mechanism Design and Algorithm Robust to Lidar Sparsity (라이다 점군 밀도에 강인한 맵 오차 측정 기구 설계 및 알고리즘)

  • Jung, Sangwoo; Jung, Minwoo; Kim, Ayoung
    • The Journal of Korea Robotics Society, v.16 no.3, pp.189-198, 2021
  • In this paper, we introduce a software/hardware system that can reliably calculate the distance from the sensor to a model regardless of point cloud density. As 3D point cloud maps are widely adopted for SLAM and computer vision, the accuracy of the point cloud map is of great importance. However, the 3D point cloud map obtained from Lidar may show different point cloud densities depending on the choice of sensor, the measurement distance, and the object shape. Currently, when measuring map accuracy, highly reflective bands are used to generate specific points in the point cloud map, at which distances are measured manually. This manual process is time- and labor-consuming and is highly affected by the Lidar sparsity level. To overcome these problems, this paper presents a hardware design that leverages high-intensity points from three planar surfaces. Furthermore, by calculating the distance from the sensor to the device, we verified through tests with an RGB-D camera and Lidar that the automated method is much faster than the manual procedure and robust to sparsity. As shown, the system's performance is not limited to indoor environments, since the experiment was also conducted with a Lidar sensor outdoors.
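
The automated measurement essentially fits a plane to the high-intensity returns from the target and computes the distance from the sensor origin to that plane. A minimal sketch under that reading, with an assumed intensity threshold and without the three-plane device geometry described in the paper.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit via SVD; returns unit normal n and offset d with n.x = d."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, float(normal @ centroid)

def sensor_to_target_distance(scan_xyz, intensity, threshold=0.8):
    """Distance from the sensor origin to a plane fit on high-intensity returns."""
    marker_pts = scan_xyz[intensity > threshold]   # e.g. returns from a reflective surface
    _, d = fit_plane(marker_pts)
    return abs(d)                                  # sensor sits at the origin of its scan frame
```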

2D Interpolation of 3D Points using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트의 2차원 보간 방안)

  • Hwang, Yonghae; Kim, Junsik; Kim, Kyuheon
    • Journal of Broadcast Engineering, v.26 no.6, pp.692-703, 2021
  • Recently, with the development of computer graphics technology, research on expressing real objects as more realistic virtual graphics has been actively conducted. A point cloud represents a 3D object with numerous points, each carrying 3D spatial coordinates and color information, and providing various point cloud services requires huge data storage and high-performance computing devices. Video-based Point Cloud Compression (V-PCC) technology, currently being studied by the international standards organization MPEG, is a projection-based method that projects the point cloud onto 2D planes and then compresses them using 2D video codecs. V-PCC compresses point cloud objects using 2D images such as the occupancy map, geometry image, and attribute image, together with auxiliary information that describes the relationship between the 2D plane and 3D space. When increasing the density of a point cloud or expanding an object, 3D calculation is generally used, but it is complicated, takes a lot of time, and makes it difficult to determine the correct location of a new point. This paper proposes a method to generate additional points at more accurate locations with less computation by applying 2D interpolation to the images onto which the point cloud is projected in V-PCC.
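
The core idea, interpolating in 2D on the projected images instead of computing new points in 3D, can be sketched on a geometry image and its occupancy map: new depth samples are averaged between adjacent occupied pixels and can then be back-projected into new 3D points. This is a simplified illustration of 2D interpolation, not the V-PCC-conformant procedure; the row-wise midpoint rule is an assumption.

```python
import numpy as np

def interpolate_geometry_row_wise(geometry, occupancy):
    """Propose new samples between horizontally adjacent occupied pixels.

    geometry: per-pixel depth of the projected patches; occupancy: which pixels
    carry valid points. Returns (row, fractional column, interpolated depth)
    triplets that can be back-projected into new 3D points.
    """
    new_samples = []
    h, w = geometry.shape
    for r in range(h):
        for c in range(w - 1):
            if occupancy[r, c] and occupancy[r, c + 1]:
                depth = 0.5 * (geometry[r, c] + geometry[r, c + 1])
                new_samples.append((r, c + 0.5, depth))
    return new_samples
```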

A Study on the Effective Preprocessing Methods for Accelerating Point Cloud Registration

  • Chungsu, Jang; Yongmin, Kim; Taehyun, Kim; Sunyong, Choi; Jinwoo, Koh; Seungkeun, Lee
    • Korean Journal of Remote Sensing, v.39 no.1, pp.111-127, 2023
  • In visual SLAM and 3D data modeling, the Iterative Closest Point (ICP) method is a fundamental algorithm used in many technical fields. However, it relies on search methods with a high search time. This paper addresses the problem by applying an effective point cloud refinement method, and it further accelerates the point cloud registration process with an indexing scheme based on spatial decomposition. Experimental results show that the proposed point cloud refinement method helps produce better performance.
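
A common form of the spatial-decomposition indexing used to accelerate ICP correspondence search is a k-d tree over the target cloud. The sketch below shows one ICP iteration with such an index; it illustrates the general technique, not the paper's specific refinement or indexing scheme.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target, tree=None):
    """One ICP iteration with k-d tree correspondence search over the target cloud."""
    if tree is None:
        tree = cKDTree(target)            # spatial index built once, reused each iteration
    _, idx = tree.query(source)           # nearest-neighbor correspondences
    matched = target[idx]
    # Closed-form rigid alignment (Kabsch) of source onto its matches.
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    Rmat = Vt.T @ U.T
    if np.linalg.det(Rmat) < 0:           # guard against a reflection
        Vt[-1] *= -1
        Rmat = Vt.T @ U.T
    t = mu_t - Rmat @ mu_s
    return (Rmat @ source.T).T + t, tree
```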

Object Detection with LiDAR Point Cloud and RGBD Synthesis Using GNN

  • Jung, Tae-Won; Jeong, Chi-Seo; Lee, Jong-Yong; Jung, Kye-Dong
    • International journal of advanced smart convergence, v.9 no.3, pp.192-198, 2020
  • The 3D point cloud is a key technology of object detection for virtual reality and augmented reality. To apply object detection to various areas, it is necessary to obtain 3D information and even color information more easily. In general, a 3D point cloud is acquired using an expensive scanner device; however, 3D information and characteristic information such as RGB and depth can be easily obtained with a mobile device. A GNN (Graph Neural Network) can be used for object detection based on these characteristics. In this paper, we generated RGB and RGBD data by extracting basic information and characteristic information from the KITTI dataset, which is often used in 3D point cloud object detection. We built an RGB-GNN alongside an i-GNN, which uses intensity, the most widely used LiDAR characteristic information, and the color information obtainable from mobile devices, and we compared and analyzed object detection accuracy with an RGBD-GNN, which uses color and depth information.
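
Before any GNN layers run, the point cloud is usually turned into a graph whose nodes carry the per-point features (here position plus color). The sketch below shows that graph-construction step only; the neighborhood size and feature layout are assumptions, and the detection network itself is omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_point_graph(xyz, rgb, k=8):
    """Build a k-NN graph over the points; node features combine position and color.

    Returns an edge list and a feature matrix, the inputs a point-based GNN
    detector would consume; the network layers themselves are not shown.
    """
    tree = cKDTree(xyz)
    _, nbrs = tree.query(xyz, k=k + 1)            # first neighbor is the point itself
    edges = [(i, j) for i, row in enumerate(nbrs) for j in row[1:]]
    features = np.hstack([xyz, rgb / 255.0])      # simple RGB(-D style) node features
    return np.array(edges), features
```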