• Title/Summary/Keyword: 3D LIDAR


Supporting ROI Transmission of 3D Point Cloud Data Based on 3D Manifesto (3차원 Manifesto 기반 3D Point Cloud Data의 ROI 전송 지원 방안)

  • Im, Jiehon;Kim, Junsik;Rhyu, Sungryeul;Kim, Hoejung;Kim, Sang IL;Kim, Kyuheon
    • Journal of the Semiconductor & Display Technology / v.17 no.4 / pp.21-26 / 2018
  • Recently, the emergence of 3D cameras, 3D scanners, and various sensors including LiDAR is expected to enable applications such as AR, VR, and autonomous vehicles that handle 3D data. In particular, 3D point cloud data, consisting of tens to hundreds of thousands of 3D points, grows rapidly in size compared with 2D data, so efficient encoding/decoding technology for smooth service within limited bandwidth, and efficient service-provision technology that differentiates the region of interest from its surroundings, are needed. In this paper, we propose a new quality parameter that reflects the characteristics of 3D point clouds instead of the quality changes assumed by the video codec in MPEG V-PCC, which is used for 3D point cloud compression; a 3D grid division and representation method for selectively transmitting 3D point clouds according to the user's region of interest; and a new 3D Manifesto. Using the proposed techniques, images at a wider range of bitrates can be generated, and we confirm that the efficiency of the network, decoder, and renderer can be increased by transmitting selectively as needed.
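The grid-division idea in this abstract can be illustrated with a short sketch: partition the cloud into cubic cells, keep every point whose cell overlaps the user's ROI, and subsample the remaining cells. This is an illustrative reconstruction, not the paper's V-PCC-based pipeline; the function name, cell size, and sampling ratio are assumptions.

```python
import numpy as np

def roi_grid_filter(points, roi_min, roi_max, cell=1.0, keep_outside=0.1, seed=0):
    """Partition an (N, 3) point cloud into a 3D grid, keep every point whose
    cell intersects the ROI box, and randomly subsample the remaining cells."""
    rng = np.random.default_rng(seed)
    cells = np.floor(points / cell).astype(int)
    # a cell intersects the ROI if its index lies within the ROI's cell range
    lo = np.floor(np.asarray(roi_min, dtype=float) / cell).astype(int)
    hi = np.floor(np.asarray(roi_max, dtype=float) / cell).astype(int)
    in_roi = np.all((cells >= lo) & (cells <= hi), axis=1)
    keep = in_roi | (rng.random(len(points)) < keep_outside)
    return points[keep], in_roi
```

With `keep_outside=0.0` only the ROI is transmitted; raising it trades bandwidth for peripheral context, which is the selective-transmission behavior the abstract describes.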

Generation of 3D Campus Models using Multi-Sensor Data (다중센서데이터를 이용한 캠퍼스 3차원 모델의 구축)

  • Choi Kyoung-Ah;Kang Moon-Kwon;Shin Hyo-Sung;Lee Im-Pyeong
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2006.04a / pp.205-210 / 2006
  • With the development of recent technologies such as telematics, LBS, and ubiquitous computing, the applications of 3D GIS have rapidly increased. Since 3D GIS is mainly based on urban models, that is, realistic digital models of the objects existing in an urban area, demand for urban models and their continuous update is expected to increase drastically. The purpose of this study is thus to propose more efficient and precise methods to construct urban models, with experimental verification. Applying the proposed methods, terrain and detailed building models were constructed for an area of 270,600 m² containing 23 buildings at the University of Seoul. Airborne imagery and LiDAR data were used for the terrain models, while ground imagery was mainly used for the building models. The generated models were found to reflect the correct geometry of the buildings and the terrain surface. However, the building-surface textures generated automatically using the projective transformation were not well constructed, being blotted out and shaded by objects such as trees, nearby buildings, and other obstacles. Consequently, the texture-extraction algorithms should be improved to construct more realistic 3D models. Furthermore, building interiors should be modeled for various potential applications in the future.


Basic Research about Building Data of Virtual Reality Space Using Airborne LiDAR Data (LiDAR 자료를 이용한 가상현실공간 자료 구축에 관한 기초적 연구)

  • Choi, Hyun;Kim, Na-Young
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.2 / pp.419-424 / 2009
  • This paper examines the practical applicability of VR (virtual reality) data built from airborne LiDAR data, which captures complicated topography quickly for 3D GIS construction. We collected airborne LiDAR data, a digital map, aerial photographs, and a basic design. The results are expected to support effective decision-making through 3D GIS construction based on LiDAR data. Since this research can provide topographic information quickly in a ubiquitous environment, it should benefit both the construction and GIS fields.

SYNTHESIS OF STEREO-MATE THROUGH THE FUSION OF A SINGLE AERIAL PHOTO AND LIDAR DATA

  • Chang, Ho-Wook;Choi, Jae-Wan;Kim, Hye-Jin;Lee, Jae-Bin;Yu, Ki-Yun
    • Proceedings of the KSRS Conference / v.1 / pp.508-511 / 2006
  • Generally, stereo-pair images are necessary for 3D viewing. In the absence of quality stereo-pair images, it is possible to synthesize a stereo-mate suitable for 3D viewing from a single image and a depth map. In remote sensing, a DEM is usually used as the depth map. In this paper, LiDAR data was used instead of a DEM to make a stereo pair from a single aerial photo. Each LiDAR point was assigned a brightness value from the original single image by registering the image and the LiDAR data. Then, an imaginary exposure station and image plane were assumed, and the LiDAR points with their assigned brightness values were back-projected onto the imaginary plane to synthesize the stereo-mate. The imaginary exposure station and image plane were defined by a purely horizontal shift from the original image's exposure station and plane. As a result, the stereo-mate synthesized in this paper satisfied epipolar geometry and, together with the original image, yielded an easily perceived 3D viewing effect, which was verified with an anaglyph.
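The back-projection step the abstract describes can be sketched with a collinearity projection. The sketch below assumes a vertical photo with no rotation and omits the brightness assignment; the coordinates, focal length, and 100 m baseline are illustrative, not values from the paper.

```python
import numpy as np

def project_vertical(points, station, f):
    """Project (N, 3) ground points onto a vertical photo's image plane via
    the collinearity condition (vertical photo, zero rotation assumed)."""
    d = points - station          # vectors from exposure station to points
    scale = f / -d[:, 2]          # station is above the terrain, so d_z < 0
    return d[:, :2] * scale[:, None]

# synthesize the stereo-mate by shifting the exposure station horizontally
points = np.array([[10.0, 20.0, 50.0], [30.0, 40.0, 80.0]])
left = project_vertical(points, np.array([0.0, 0.0, 500.0]), f=0.15)
right = project_vertical(points, np.array([100.0, 0.0, 500.0]), f=0.15)
```

Because the shift is purely horizontal, corresponding points land on identical rows (same y), which is exactly the epipolar condition the authors report for their synthesized mate.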


Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.422-424 / 2021
  • This paper presents an approach that fuses multiple RGB cameras, used for deep-learning-based visual object recognition with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and match it into a 3D world, estimating distance and position in the form of a point cloud map. The goal of multi-camera perception is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in its blind spots, helping the AV navigate toward its goal. Running object detection on numerous cameras tends to slow real-time processing, so the convolutional neural network algorithm chosen to address this must also suit the capacity of the hardware. The classified detections are then localized against the 3D point cloud environment: the LiDAR point cloud data is first parsed and then segmented with a 3D Euclidean clustering method, which localizes the objects accurately. We evaluated the method on our own dataset, captured with a VLP-16 and multiple cameras, and the results demonstrate the method and the multi-sensor fusion strategy.
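The 3D Euclidean clustering step used for localization amounts to a flood fill over points connected within a distance threshold. A brute-force sketch follows, assuming nothing about the authors' implementation (in practice a KD-tree and PCL-style min/max cluster sizes would replace the O(n²) neighbor search):

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, radius=0.5, min_size=2):
    """Label each point of an (N, 3) cloud with a cluster id by flood-filling
    neighbors within `radius`; clusters smaller than min_size become noise (-2)."""
    n = len(points)
    labels = np.full(n, -1)
    cluster_id = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        queue, members = deque([seed]), [seed]
        labels[seed] = cluster_id
        while queue:
            i = queue.popleft()
            dist = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((dist < radius) & (labels == -1))[0]:
                labels[j] = cluster_id
                queue.append(j)
                members.append(j)
        if len(members) >= min_size:
            cluster_id += 1
        else:
            labels[np.array(members)] = -2   # discard tiny clusters as noise
    return labels
```

Each resulting cluster can then be reduced to a centroid or bounding box and associated with a camera detection, which is the localization role clustering plays in the paper.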


Calibration of Laser scanning Mobile Mapping System using Lynx Mobile Mapper (Lynx Mobile Mapper를 이용한 레이저스캐너 기반 차량 MMS의 캘리브레이션)

  • Jeong, Tae-Jun;Yun, Hong-Sic;Hwang, Jin-Sang;Kim, Yong-Hyun;Lee, Ha-Jun
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2010.04a / pp.207-208 / 2010
  • In this paper, we carried out calibration of a laser scanning MMS (Mobile Mapping System) using the Lynx Mobile Mapper, a new MMS developed by Optech Incorporated. A laser scanning MMS can be defined as an integration of several subsystems: a laser scanner, a GPS receiver and antenna, an INS (Inertial Navigation System), and a DMI (Distance Measurement Instrument). Together these acquire 3D spatial information through direct georeferencing. To obtain accurate 3D spatial information, calibration of the laser scanning MMS is required before operating the system, as with an airborne LiDAR system. 145 checkpoints were used for accuracy estimation, and the calibration accuracy was about 5 cm (RMSE) in all directions (east, north, ellipsoidal height).
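The accuracy figure quoted here is a per-axis checkpoint RMSE. A minimal illustration of that computation (the coordinates below are invented, not the paper's 145 checkpoints):

```python
import numpy as np

def checkpoint_rmse(measured, reference):
    """Per-axis RMSE (east, north, height) between MMS-derived coordinates
    and independently surveyed checkpoint coordinates, both (N, 3)."""
    err = np.asarray(measured) - np.asarray(reference)
    return np.sqrt(np.mean(err ** 2, axis=0))
```

Reporting the three axes separately, as the abstract does, distinguishes horizontal boresight errors from the ellipsoidal-height component.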


Development of Simulation Environment for Autonomous Driving Algorithm Validation based on ROS (ROS 기반 자율주행 알고리즘 성능 검증을 위한 시뮬레이션 환경 개발)

  • Kwak, Jisub;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.14 no.1 / pp.20-25 / 2022
  • This paper presents the development of a simulation environment for validating autonomous driving (AD) algorithms based on the Robot Operating System (ROS), one of the frameworks commonly used to control autonomous vehicles. To evaluate AD algorithms, a 3D autonomous driving simulator was developed based on LGSVL. Two additional sensors are implemented on the simulation vehicle: a LiDAR sensor mounted on the ego vehicle for real-time perception of the driving environment, and a GPS sensor for estimating the ego vehicle's position. With this sensor configuration, the AD algorithm can perceive the local environment and determine control commands through motion planning. The simulation environment was evaluated with lane-changing and lane-keeping scenarios, and the results show that the proposed 3D simulator successfully imitates the operation of a real-world vehicle.
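The abstract does not specify the control law used in its scenarios. Purely as an illustration of "determine control commands" from perceived state, a toy lane-keeping feedback might look like the following (the function, gains, and limits are assumptions, not from the paper):

```python
def steering_command(lateral_offset, heading_error, k_y=0.5, k_psi=1.2, max_steer=0.5):
    """Toy lane-keeping law: steer against lateral offset (m) and heading
    error (rad), saturated at a physical steering limit (rad).
    All gains and limits are illustrative."""
    delta = -k_y * lateral_offset - k_psi * heading_error
    return max(-max_steer, min(max_steer, delta))
```

In a ROS-based stack, such a function would sit in the planning/control node, consuming the LiDAR- and GPS-derived state estimates and publishing the steering command back into the simulator.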

Cluster-Based Spin Images for Characterizing Diffuse Objects in 3D Range Data

  • Lee, Heezin;Oh, Sangyoon
    • Journal of Sensor Science and Technology / v.23 no.6 / pp.377-382 / 2014
  • Detecting and segmenting diffuse targets in laser ranging data is a critical problem for tactical reconnaissance. In this study, we propose a new method that facilitates the characterization of diffuse irregularly shaped objects using "spin images," i.e., local 2D histograms of laser returns oriented in 3D space, and a clustering process. The proposed "cluster-based spin imaging" method resolves the problem of using standard spin images for diffuse targets and it eliminates much of the computational complexity that characterizes the production of conventional spin images. The direct processing of pre-segmented laser points, including internal points that penetrate through a diffuse object's topmost surfaces, avoids some of the requirements of the approach used at present for spin image generation, while it also greatly reduces the high computational time overheads incurred by searches to find correlated images. We employed 3D airborne range data over forested terrain to demonstrate the effectiveness of this method in discriminating the different geometric structures of individual tree clusters. Our experiments showed that cluster-based spin images have the potential to separate classes in terms of different ages and portions of tree crowns.
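A spin image in the sense used here maps each neighbor of an oriented point (p, n) to cylindrical coordinates α (radial distance from the normal axis) and β (signed height along the normal) and accumulates a 2D histogram. A minimal sketch follows; image dimensions and bin widths are illustrative, not the paper's parameters:

```python
import numpy as np

def spin_image(points, p, n, size=8, bin_width=0.25):
    """2D histogram of (N, 3) neighbors in (alpha, beta) coordinates around
    an oriented point: alpha = distance from the normal axis through p,
    beta = signed height along the unit normal n."""
    n = n / np.linalg.norm(n)
    d = points - p
    beta = d @ n
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta ** 2, 0.0))
    img = np.zeros((size, size))
    i = ((beta + size * bin_width / 2) / bin_width).astype(int)  # beta row
    j = (alpha / bin_width).astype(int)                          # alpha column
    ok = (i >= 0) & (i < size) & (j >= 0) & (j < size)
    np.add.at(img, (i[ok], j[ok]), 1)
    return img
```

Binning all pre-segmented cluster points this way, including returns that penetrate below the topmost surface, matches the "cluster-based" variant's avoidance of per-surface processing described in the abstract.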

Common Optical System for the Fusion of Three-dimensional Images and Infrared Images

  • Kim, Duck-Lae;Jung, Bo Hee;Kong, Hyun-Bae;Ok, Chang-Min;Lee, Seung-Tae
    • Current Optics and Photonics / v.3 no.1 / pp.8-15 / 2019
  • We describe a common optical system that merges a LADAR system, which generates a point cloud, and a more traditional imaging system operating in the LWIR, which generates image data. The optimum diameter of the entrance pupil was determined by analysis of detection ranges of the LADAR sensor, and the result was applied to design a common optical system using LADAR sensors and LWIR sensors; the performance of these sensors was then evaluated. The minimum detectable signal of the 128×128-pixel LADAR detector was calculated as 20.5 nW. The detection range of the LADAR optical system was calculated to be 1,000 m, and according to the results, the optimum diameter of the entrance pupil was determined to be 15.7 cm. The modulation transfer function (MTF) in relation to the diffraction limit of the designed common optical system was analyzed and, according to the results, the MTF of the LADAR optical system was 98.8% at the spatial frequency of 5 cycles per millimeter, while that of the LWIR optical system was 92.4% at the spatial frequency of 29 cycles per millimeter. The detection, recognition, and identification distances of the LWIR optical system were determined to be 5.12, 2.82, and 1.96 km, respectively.
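The MTF percentages above are quoted relative to the diffraction limit. For reference, the standard diffraction-limited MTF of a circular aperture (a textbook formula, not necessarily the authors' exact model) can be computed as:

```python
import numpy as np

def diffraction_mtf(nu, wavelength, f_number):
    """Diffraction-limited MTF of an aberration-free circular aperture at
    spatial frequency nu (cycles/mm); wavelength in mm. Cutoff is 1/(lambda*F#)."""
    nu_c = 1.0 / (wavelength * f_number)
    x = np.clip(nu / nu_c, 0.0, 1.0)
    phi = np.arccos(x)
    return (2.0 / np.pi) * (phi - x * np.sqrt(1.0 - x ** 2))
```

As an illustration with assumed values (λ = 10 µm = 0.01 mm for LWIR, F/2, neither stated in the abstract), the cutoff is 50 cycles/mm, so the 29 cycles/mm evaluation frequency sits well below cutoff.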

Quality Enhancement of 3D Volumetric Contents Based on 6DoF for 5G Telepresence Service

  • Byung-Seo Park;Woosuk Kim;Jin-Kyum Kim;Dong-Wook Kim;Young-Ho Seo
    • Journal of Web Engineering / v.21 no.3 / pp.729-750 / 2022
  • In general, 6DoF (six degrees of freedom) 3D volumetric content technology is gaining importance in 5G telepresence services, Web-based (WebGL) graphics, computer vision, robotics, and next-generation augmented reality. Since depth sensors based on various acquisition methods, such as time of flight (ToF) and LiDAR, can capture RGB and depth images in real time, object detection, tracking, and recognition research has changed considerably. In this paper, we propose a method to improve the quality of 3D models for 5G telepresence by processing images acquired through depth and RGB cameras on a multi-view camera system. The quality is improved in two major ways. The first concerns the shape of the 3D model: noise outside the object is removed by applying a mask obtained from the color image, together with a combined filtering operation based on the difference in depth between pixels inside the object. Second, we propose an illumination-compensation method for images acquired through the multi-view camera system, for photo-realistic 3D model generation. It is assumed that the volumetric capture is done indoors and that the location and intensity of the illumination are constant over time. Since the multi-view system uses a total of eight camera pairs converging toward the center of the space, the intensity and angle of the light incident on each camera differ even under constant illumination. Therefore, every camera captures a color-correction chart, and a color-optimization function is used to obtain a color-conversion matrix that defines the relationship among the eight acquired images. Using this matrix, the input from all cameras is corrected against the color-correction chart. We confirmed that the proposed method effectively removes noise and improves the quality of the 3D model when acquiring images of a 3D volumetric object with eight cameras, and experiments showed that the color difference between images is reduced.
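The color-correction step can be sketched as a least-squares fit of a 3×3 conversion matrix per camera, mapping its measured chart patches onto the reference chart. This is a minimal linear sketch; the paper's color-optimization function may include offsets or nonlinear terms:

```python
import numpy as np

def fit_color_matrix(measured, reference):
    """Least-squares 3x3 matrix M mapping a camera's measured chart RGBs
    (N, 3) onto the reference chart RGBs (N, 3): reference ≈ measured @ M."""
    M, *_ = np.linalg.lstsq(np.asarray(measured), np.asarray(reference), rcond=None)
    return M

def correct(pixels_rgb, M):
    """Apply the fitted matrix to an (N, 3) array of RGB pixels."""
    return np.asarray(pixels_rgb) @ M
```

Fitting one matrix per camera against the same chart, then applying it to every frame, brings the eight viewpoints to a common color basis, which is what reduces the inter-image color difference reported above.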