• Title/Abstract/Keyword: Point Cloud Fusion

Applicability Assessment of Disaster Rapid Mapping: Focused on Fusion of Multi-sensing Data Derived from UAVs and Disaster Investigation Vehicle (재난조사 특수차량과 드론의 다중센서 자료융합을 통한 재난 긴급 맵핑의 활용성 평가)

  • Kim, Seongsam;Park, Jesung;Shin, Dongyoon;Yoo, Suhong;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing, v.35 no.5_2, pp.841-850, 2019
  • The purpose of this study is to strengthen rapid mapping capability for disasters by improving the positioning accuracy of mapping and by fusing multi-sensing point cloud data derived from Unmanned Aerial Vehicles (UAVs) and a disaster investigation vehicle. Positioning accuracy was evaluated for two drone mapping procedures in Agisoft PhotoScan: 1) general geo-referencing with self-calibration, and 2) the proposed geo-referencing with an optimized camera model that uses fixed, accurate Interior Orientation Parameters (IOPs) derived from an indoor camera calibration test and bundle adjustment. The analysis showed that the positioning RMS error improved from 2~3 m to 0.11~0.28 m horizontally and from 2.85 m to 0.45 m vertically. In addition, the proposed fusion approach for multi-sensing point clouds with a height constraint reduced the point matching error to below about 0.07 m. Accordingly, the proposed data fusion approach is expected to enable effective and timely generation of ortho-imagery and high-resolution three-dimensional geographic data for national disaster management in the future.
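
The height-constrained fusion of the UAV and vehicle-borne point clouds is only described at a high level in the abstract. As a rough illustration of one way such a vertical constraint can be applied (not the authors' implementation; the grid cell size and the single median Z shift are assumptions), a minimal numpy sketch:

```python
import numpy as np

def fuse_with_height_constraint(uav_pts, veh_pts, cell=0.5):
    """Align a vehicle-borne cloud to a UAV cloud in Z and merge them.

    Both inputs are (N, 3) arrays of X, Y, Z in the same horizontal datum.
    A per-cell mean height difference is computed over the XY overlap and
    its median is applied as a constant vertical shift (illustrative only).
    """
    def grid_mean_z(pts):
        ij = np.floor(pts[:, :2] / cell).astype(np.int64)
        keys, inv = np.unique(ij, axis=0, return_inverse=True)
        sums = np.zeros(len(keys))
        counts = np.zeros(len(keys))
        np.add.at(sums, inv, pts[:, 2])
        np.add.at(counts, inv, 1.0)
        return {tuple(k): s / c for k, s, c in zip(keys, sums, counts)}

    uav_z, veh_z = grid_mean_z(uav_pts), grid_mean_z(veh_pts)
    common = set(uav_z) & set(veh_z)
    if not common:
        raise ValueError("no horizontal overlap between the two clouds")
    dz = np.median([uav_z[c] - veh_z[c] for c in common])  # height constraint
    veh_shifted = veh_pts + np.array([0.0, 0.0, dz])
    return np.vstack([uav_pts, veh_shifted]), dz
```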

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security, v.23 no.11, pp.67-72, 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society as well as on road safety and the future of transportation systems. Real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation and robotics. Especially in autonomous vehicles, efficient fusion of data from these two types of sensors is important for estimating the depth of objects as well as classifying objects at short and long distances. This paper presents classification of objects in the autonomous vehicle environment using CNN-based vision and LiDAR fusion. The method is based on a convolutional neural network (CNN) and image upsampling theory. The LiDAR point cloud is upsampled and converted into pixel-level depth information, which is concatenated with the red-green-blue data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification in the autonomous vehicle environment using the integrated vision and LiDAR data, and is adopted to guarantee both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
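
The data preparation step described above (projecting the LiDAR point cloud into a pixel-level depth channel and stacking it with the RGB image) can be sketched as below. The pinhole intrinsics, the 8-bit RGB input, and the crude row-wise hole filling are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np

def lidar_to_rgbd(points_cam, rgb, fx, fy, cx, cy):
    """Project LiDAR points (already in the camera frame) into a sparse
    depth map, densify it with a crude nearest-valid-pixel fill, and stack
    it with the RGB image as a 4-channel input for a CNN."""
    h, w, _ = rgb.shape
    depth = np.zeros((h, w), dtype=np.float32)
    z = points_cam[:, 2]
    valid = z > 0.1
    u = (fx * points_cam[valid, 0] / z[valid] + cx).astype(int)
    v = (fy * points_cam[valid, 1] / z[valid] + cy).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[inside], u[inside]] = z[valid][inside]

    # crude "upsampling": fill empty pixels from a nearby valid pixel in the row
    for row in range(h):
        filled = np.flatnonzero(depth[row] > 0)
        if filled.size:
            nearest = filled[np.searchsorted(filled, np.arange(w)).clip(max=filled.size - 1)]
            depth[row] = depth[row, nearest]

    depth_norm = depth / max(float(depth.max()), 1e-6)
    return np.dstack([rgb.astype(np.float32) / 255.0, depth_norm])  # (H, W, 4)
```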

FUSION OF LASER SCANNING DATA, DIGITAL MAPS, AERIAL PHOTOGRAPHS AND SATELLITE IMAGES FOR BUILDING MODELLING

  • Han, Seung-Hee;Bae, Yeon-Soung;Kim, Hong-Jin;Bae, Sang-Ho
    • Proceedings of the KSRS Conference, v.2, pp.899-902, 2006
  • For quick and accurate 3D modelling of a building, laser scanning data, digital maps, aerial photographs and satellite images should be fused. Moreover, a library built on standard building structures and an effective texturing method are required to determine the structure of a building. In this study, we built a standard library by categorizing Korean village forms and presented a model that can predict the structure of a building from the shape of its roof in an aerial photograph. We generated an ortho image from high-definition digital imagery and a large amount of ground scanning point cloud data, and mapped this image onto the model. These methods enabled quicker and more accurate building modelling.
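
As a rough idea of how an ortho-like image can be rasterized from a coloured ground-scanning point cloud (a generic top-view rasterization, not the authors' texturing workflow; the cell size and the "highest point wins" rule are assumptions):

```python
import numpy as np

def ortho_from_point_cloud(xyz, rgb, cell=0.05):
    """Rasterize a coloured point cloud into a top-view (ortho-like) image.
    xyz: (N, 3) coordinates, rgb: (N, 3) uint8 colours."""
    x0, y0 = xyz[:, 0].min(), xyz[:, 1].min()
    cols = ((xyz[:, 0] - x0) / cell).astype(int)
    rows = ((xyz[:, 1] - y0) / cell).astype(int)
    h, w = rows.max() + 1, cols.max() + 1
    image = np.zeros((h, w, 3), dtype=np.uint8)
    best_z = np.full((h, w), -np.inf)
    # keep the colour of the highest point that falls in each cell
    for i in range(len(xyz)):
        r, c = rows[i], cols[i]
        if xyz[i, 2] > best_z[r, c]:
            image[r, c] = rgb[i]
            best_z[r, c] = xyz[i, 2]
    return image[::-1]  # flip rows so +Y points up in the output
```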

Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International Journal of Advanced Smart Convergence, v.9 no.3, pp.232-238, 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs that fly beyond visual line of sight (BVLOS). We use a vision sensor and LiDAR to detect objects: a CNN-based YOLOv2 architecture detects objects in the 2D image, and a clustering method detects objects in the point cloud data acquired from the LiDAR. When a single sensor is used, the detection rate can degrade in specific situations depending on the characteristics of that sensor, so when the detection result from a single sensor is absent or false, the detection accuracy needs to be complemented. To complement the accuracy of the single-sensor detection algorithms, we use a Kalman filter and fuse the results of the individual sensors to improve detection accuracy. We estimate the 3D position of the object using the pixel position of the object and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm through simulation using the Gazebo simulator.
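
A simple constant-velocity Kalman filter that fuses a camera-derived 3D position (larger uncertainty) with a LiDAR-derived position (smaller uncertainty) illustrates the kind of fusion described above. The state layout, noise values, and the assumption that both sensors already provide a 3D position are illustrative, not taken from the paper:

```python
import numpy as np

class FusionKF:
    """Constant-velocity Kalman filter over the state [x, y, z, vx, vy, vz]."""

    def __init__(self, dt=0.05):
        self.x = np.zeros(6)
        self.P = np.eye(6) * 10.0
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3) * dt                    # position += velocity * dt
        self.Q = np.eye(6) * 0.01                          # process noise (assumed)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # we observe position only

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, r):
        """z: measured 3D position, r: scalar measurement variance."""
        R = np.eye(3) * r
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P

kf = FusionKF()
kf.predict()
kf.update(np.array([10.0, 2.0, 30.0]), r=4.0)   # camera-derived position: less precise
kf.update(np.array([10.3, 1.9, 29.8]), r=0.04)  # LiDAR-derived position: more precise
print(kf.x[:3])  # fused 3D position estimate
```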

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology, v.33 no.6, pp.1099-1110, 2023
  • The recognition systems of autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is currently being actively conducted. However, deep learning models are vulnerable to adversarial attacks through manipulation of the input data. Existing attacks on multi-sensor-based autonomous driving recognition systems focus on suppressing obstacle detection by lowering the confidence score of the object recognition model, but they are limited in that the attack is only possible on the target model. In the case of an attack on the sensor fusion stage, errors can cascade into the vision tasks performed after fusion, and this risk needs to be considered. In addition, an attack on the LiDAR point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image-scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the points of the input LiDAR data. Attack performance experiments with the scaling algorithm at different sizes induced fusion errors of more than 77% on average.
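
The attack itself amounts to scaling the coordinates of the input LiDAR points before they reach the calibration model, which shifts their image projections and so degrades the estimated alignment. A minimal sketch of that idea (the scale factor, toy intrinsics/extrinsics, and the use of reprojection displacement as the error measure are assumptions; LCCNet's internals are not reproduced here):

```python
import numpy as np

def project(points, K, R, t):
    """Project (N, 3) LiDAR points into the image with intrinsics K,
    rotation R and translation t (LiDAR -> camera)."""
    cam = points @ R.T + t
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def scaling_attack(points, scale=1.2):
    """Scale the LiDAR point coordinates to perturb the fusion input."""
    return points * scale

# toy calibration and toy points, for illustration only
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
R = np.eye(3)
t = np.array([0.2, -0.1, 0.3])
pts = np.random.default_rng(0).uniform([-5, -2, 5], [5, 2, 40], size=(500, 3))

clean = project(pts, K, R, t)
attacked = project(scaling_attack(pts, 1.2), K, R, t)
shift = np.linalg.norm(attacked - clean, axis=1)
print(f"mean projection displacement: {shift.mean():.2f} px")
```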

Entropy-Based 6 Degrees of Freedom Extraction for the W-band Synthetic Aperture Radar Image Reconstruction (W-band Synthetic Aperture Radar 영상 복원을 위한 엔트로피 기반의 6 Degrees of Freedom 추출)

  • Hyokbeen Lee;Duk-jin Kim;Junwoo Kim;Juyoung Song
    • Korean Journal of Remote Sensing, v.39 no.6_1, pp.1245-1254, 2023
  • Significant research has been conducted on the W-band synthetic aperture radar (SAR) system that utilizes 77 GHz frequency-modulated continuous wave (FMCW) radar. To reconstruct a high-resolution W-band SAR image, it is necessary to transform the point cloud acquired from stereo cameras or LiDAR along 6 degrees of freedom (DOF) and apply it to the SAR signal processing. However, matching images is difficult because of the different geometric structures of images acquired from different sensors. In this study, we present a method to extract an optimized depth map by obtaining the 6 DOF of the point cloud with a gradient descent method based on the entropy of the SAR image. An experiment was conducted to reconstruct a tree, a major road-environment object, using the constructed W-band SAR system. Compared with SAR images reconstructed from radar coordinates, the SAR image reconstructed using the entropy-based gradient descent method showed a decrease of 53.2828 in mean square error and an increase of 0.5529 in the structural similarity index.
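
The optimization described above can be illustrated generically: compute the entropy of the reconstructed image and minimize it over the six pose parameters with finite-difference gradient descent. The rendering function below is a placeholder standing in for the W-band SAR focusing with the transformed point cloud, and the step sizes are assumptions:

```python
import numpy as np

def image_entropy(img, bins=64):
    """Shannon entropy of an image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def optimize_6dof(render, dof0, lr=1e-2, eps=1e-3, iters=100):
    """Minimize image entropy over 6 DOF (tx, ty, tz, roll, pitch, yaw)
    using simple finite-difference gradient descent."""
    dof = np.asarray(dof0, dtype=float)
    for _ in range(iters):
        base = image_entropy(render(dof))
        grad = np.zeros(6)
        for i in range(6):
            step = np.zeros(6)
            step[i] = eps
            grad[i] = (image_entropy(render(dof + step)) - base) / eps
        dof -= lr * grad
    return dof

def toy_render(dof):
    """Placeholder: pretend larger misalignment produces a noisier image."""
    rng = np.random.default_rng(0)
    blur = np.linalg.norm(dof)
    img = rng.normal(size=(64, 64)) * (0.1 + blur)
    img[28:36, 28:36] += 5.0  # a bright "tree" target
    return img

print(optimize_6dof(toy_render, dof0=[0.5, -0.3, 0.2, 0.01, 0.02, -0.01]))
```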

A Deep Convolutional Neural Network Based 6-DOF Relocalization with Sensor Fusion System (센서 융합 시스템을 이용한 심층 컨벌루션 신경망 기반 6자유도 위치 재인식)

  • Jo, HyungGi;Cho, Hae Min;Lee, Seongwon;Kim, Euntai
    • The Journal of Korea Robotics Society, v.14 no.2, pp.87-93, 2019
  • This paper presents 6-DOF relocalization using a 3D laser scanner and a monocular camera. The relocalization problem in robotics is to estimate the pose of the sensor when a robot revisits an area. A deep convolutional neural network (CNN) is designed to regress the 6-DOF sensor pose and is trained end-to-end using both RGB images and 3D point cloud information. We generate a new input that consists of RGB and range information. After the training step, the relocalization system outputs the pose of the sensor corresponding to each new input it receives. In most cases, however, a mobile robot navigation system has successive sensor measurements. To improve localization performance, the output of the CNN is used as the measurement of a particle filter that smooths the trajectory. We evaluate our relocalization method on real-world datasets using a mobile robot platform.
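
The final smoothing step, in which the CNN's pose output is treated as the measurement of a particle filter, can be sketched for a planar (x, y, yaw) case as below. The motion noise, measurement noise, and the reduction from 6 to 3 DOF are simplifications, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(42)

def particle_filter_step(particles, weights, odom, cnn_pose,
                         motion_std=(0.05, 0.05, 0.01),
                         meas_std=(0.3, 0.3, 0.1)):
    """One predict/update/resample cycle.
    particles: (N, 3) [x, y, yaw]; odom: relative motion; cnn_pose: CNN output."""
    # predict: apply odometry plus motion noise
    particles = particles + odom + rng.normal(0, motion_std, particles.shape)
    # update: weight by likelihood of the CNN pose measurement
    diff = (particles - cnn_pose) / np.asarray(meas_std)
    weights = weights * np.exp(-0.5 * np.sum(diff ** 2, axis=1))
    weights = weights / weights.sum()
    # resample (multinomial keeps the sketch short; systematic would be better)
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

N = 500
particles = rng.normal([0, 0, 0], [1, 1, 0.3], size=(N, 3))
weights = np.full(N, 1.0 / N)
particles, weights = particle_filter_step(
    particles, weights, odom=np.array([0.1, 0.0, 0.01]),
    cnn_pose=np.array([0.12, 0.03, 0.02]))
print(particles.mean(axis=0))  # smoothed pose estimate
```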

Fusion of point cloud and integral-imaging technique for full-parallax 3D display (완전시차를 가지는 3 차원 디스플레이를 위한 포인트 클라우드와 집적영상기술의 융합)

  • Hong, Seokmin;Kang, Hyunmin;Oh, Hyunju;Park, Jiyong
    • Proceedings of the Korea Information Processing Society Conference, 2022.11a, pp.292-294, 2022
  • This paper proposes research based on the fusion of two highly successful techniques: 3D imaging and computer-graphics-based simulation. We first describe how to generate the integral imaging picture to be reproduced on a 3D display system; the image is generated computationally by back-projecting the incident angles from the 3D point cloud through a virtual pinhole array. We also describe how to freely select the focal plane of the reconstructed 3D image. In addition, we introduce a 3D display system that simultaneously provides multiple observers with immersive 3D images based on various viewpoint information, and we present conclusions based on various experimental results.
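
The back-projection through a virtual pinhole array can be sketched as follows: each 3D point is projected through every pinhole onto the display plane behind it, accumulating the elemental-image array. The pinhole pitch, gap, and resolution values are assumptions for illustration, not the authors' configuration:

```python
import numpy as np

def elemental_images(points, colors, n_lens=10, pitch=1.0, gap=3.0, res=32):
    """Back-project coloured 3D points (z > 0, in front of the pinhole plane at
    z = 0) through an n_lens x n_lens virtual pinhole array onto a display plane
    at z = -gap. Returns the tiled elemental-image array."""
    canvas = np.zeros((n_lens * res, n_lens * res, 3), dtype=np.float32)
    centers = (np.arange(n_lens) - (n_lens - 1) / 2.0) * pitch
    for iy, py in enumerate(centers):
        for ix, px in enumerate(centers):
            # the ray from a point through the pinhole (px, py, 0) hits the
            # display plane at: display = pinhole - gap / z * (point - pinhole)
            dx = px - gap * (points[:, 0] - px) / points[:, 2]
            dy = py - gap * (points[:, 1] - py) / points[:, 2]
            u = ((dx - (px - pitch / 2)) / pitch * res).astype(int)
            v = ((dy - (py - pitch / 2)) / pitch * res).astype(int)
            ok = (u >= 0) & (u < res) & (v >= 0) & (v < res)
            canvas[iy * res + v[ok], ix * res + u[ok]] = colors[ok]
    return canvas

pts = np.array([[0.0, 0.0, 20.0], [2.0, 1.0, 30.0]])
cols = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(elemental_images(pts, cols).shape)  # (320, 320, 3)
```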

Physical Offset of UAVs Calibration Method for Multi-sensor Fusion (다중 센서 융합을 위한 무인항공기 물리 오프셋 검보정 방법)

  • Kim, Cheolwook;Lim, Pyeong-chae;Chi, Junhwa;Kim, Taejung;Rhee, Sooahm
    • Korean Journal of Remote Sensing, v.38 no.6_1, pp.1125-1139, 2022
  • In an unmanned aerial vehicle (UAV) system, a physical offset can exist between the global positioning system/inertial measurement unit (GPS/IMU) sensor and observation sensors such as a hyperspectral sensor or a LiDAR sensor. As a result of this physical offset, misalignment between images can occur along the flight direction. In particular, in a multi-sensor system the observation sensor has to be swapped regularly to mount another observation sensor, and a high cost must then be paid to acquire new calibration parameters. In this study, we establish a precise sensor model equation that can be applied to multiple sensors in common and propose an independent physical offset estimation method. The proposed method consists of three steps. First, we define an appropriate rotation matrix for our system and an initial sensor model equation for direct georeferencing. Next, an observation equation for physical offset estimation is established by extracting corresponding points between ground control points and the data observed by the sensor. Finally, the physical offset is estimated from the observed data, and the precise sensor model equation is established by applying the estimated parameters to the initial sensor model equation. Datasets from four regions (Jeon-ju, Incheon, Alaska, Norway) with different latitudes and longitudes were compared to analyze the effect of the calibration parameters. We confirmed that misalignment between images was corrected after applying the physical offset in the sensor model equation. Absolute position accuracy was analyzed for the Incheon dataset against ground control points: the root mean square error (RMSE) in the X and Y directions was 0.12 m for the hyperspectral image and 0.03 m for the point cloud. Furthermore, the relative position accuracy of a specific point between the adjusted point cloud and the hyperspectral images was 0.07 m, confirming that precise data mapping is possible without ground control points through the proposed estimation method, and that multi-sensor fusion is feasible. From this study, we expect that a flexible multi-sensor platform can be operated at reduced cost through the independent parameter estimation method.
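
At its core, estimating such a physical offset is a small least-squares problem: each ground control point yields an observation of the form X_gcp = X_gps + R_body (l + x_sensor), which is linear in the unknown lever arm l. A heavily simplified numpy sketch (the neglect of boresight angles, the rotation handling, and the synthetic data are assumptions; the paper's full sensor model is more elaborate):

```python
import numpy as np

def estimate_lever_arm(X_gcp, X_gps, R_body, x_sensor):
    """Least-squares estimate of the physical offset (lever arm) l from
    X_gcp = X_gps + R_body @ (l + x_sensor) over n observations.

    X_gcp, X_gps, x_sensor: (n, 3); R_body: (n, 3, 3) body-to-ground rotations.
    """
    n = len(X_gcp)
    A = np.zeros((3 * n, 3))
    b = np.zeros(3 * n)
    for i in range(n):
        A[3 * i:3 * i + 3] = R_body[i]
        b[3 * i:3 * i + 3] = X_gcp[i] - X_gps[i] - R_body[i] @ x_sensor[i]
    l, *_ = np.linalg.lstsq(A, b, rcond=None)
    return l

# synthetic check: recover a known offset of (0.30, -0.10, 0.05) m
rng = np.random.default_rng(1)
true_l = np.array([0.30, -0.10, 0.05])
R_body = np.array([np.linalg.qr(rng.normal(size=(3, 3)))[0] for _ in range(20)])
x_sensor = rng.normal(size=(20, 3))
X_gps = rng.uniform(0, 100, size=(20, 3))
X_gcp = X_gps + np.einsum('nij,nj->ni', R_body, true_l + x_sensor)
print(estimate_lever_arm(X_gcp, X_gps, R_body, x_sensor))  # ~ true_l
```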

Estimation of Tree Heights from Seasonal Airborne LiDAR Data (계절별 항공라이다 자료에 의한 수고 추정)

  • Jeon, Min-Cheol;Jung, Tae-Woong;Eo, Yang-Dam;Kim, Jin-Kwang
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.28 no.4, pp.441-448, 2010
  • This paper estimates tree heights using airborne LiDAR data acquired for each season, in order to analyze the influence of canopy closure and data fusion. Tree height was estimated by extracting the first return (FR) from the tree canopy and the last return (LR) from the ground surface, delineating individual trees via image segmentation, and obtaining the height of each tree. The tree heights derived from each seasonal dataset and from the fused data were compared. A tree height measuring device was used for field measurements and the accuracy of each result was compared; the applicability of the fused airborne LiDAR data was also examined. As a result of the experiment, the image segmentation result for individual trees was closer to the field survey at a 1 m point cloud interval than at a 0.5 m interval. For tree height, applying the fused data yielded results closer to the field measurements than applying the data from each individual season.
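
The basic height computation, first return (canopy surface) minus last return (ground), can be sketched as a simple gridded canopy height model. The grid cell size and the "max first return, min last return" rule are generic conventions used here for illustration, not necessarily the paper's exact procedure:

```python
import numpy as np

def canopy_height_model(first_xyz, last_xyz, cell=1.0):
    """Grid first returns (canopy surface) and last returns (ground) and
    subtract them to obtain per-cell tree heights. Inputs: (N, 3) X, Y, Z."""
    all_xy = np.vstack([first_xyz[:, :2], last_xyz[:, :2]])
    x0, y0 = all_xy.min(axis=0)
    w = int((all_xy[:, 0].max() - x0) / cell) + 1
    h = int((all_xy[:, 1].max() - y0) / cell) + 1

    def rasterize(pts, reducer, init):
        grid = np.full((h, w), init, dtype=float)
        cols = ((pts[:, 0] - x0) / cell).astype(int)
        rows = ((pts[:, 1] - y0) / cell).astype(int)
        for r, c, z in zip(rows, cols, pts[:, 2]):
            grid[r, c] = reducer(grid[r, c], z)
        return grid

    top = rasterize(first_xyz, max, -np.inf)     # highest first return per cell
    ground = rasterize(last_xyz, min, np.inf)    # lowest last return per cell
    chm = top - ground
    chm[~np.isfinite(chm)] = 0.0                 # cells missing either return type
    return chm
```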