• Title/Summary/Keyword: Multi-sensor data fusion


Adjustment of Exterior Orientation Parameters for Geometric Registration of Aerial Images and LIDAR Data (항공영상과 라이다데이터의 기하학적 정합을 위한 외부표정요소의 조정)

  • Hong, Ju-Seok;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.5
    • /
    • pp.585-597
    • /
    • 2009
  • This research aims to develop a registration method to remove the geometric inconsistency between aerial images and LIDAR data acquired from an airborne multi-sensor system. The proposed method mainly includes registration primitive extraction, correspondence establishment, and EOP (Exterior Orientation Parameters) adjustment. As the registration primitives, we extract planar patches and intersection edges from the LIDAR data, and object points and linking edges from the aerial images. The extracted primitives are then categorized into horizontal and vertical ones, and their correspondences are established. These correspondent pairs are incorporated as stochastic constraints into the bundle block adjustment, which finally adjusts the exterior orientation parameters of the images precisely. According to the experimental results from applying the proposed method to real data, the attitude parameters of the EOPs were meaningfully adjusted, and the geometric inconsistency of the primitives used for the adjustment was reduced from about 2 m before registration to about 2 cm after registration. Hence, the results of this research can contribute to data fusion for high-quality 3D spatial information.

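The adjustment step described in the entry above folds primitive correspondences into a bundle block adjustment as stochastic (weighted) constraints. The sketch below is not that full adjustment; assuming synthetic planar patches and a simulated exterior-orientation error, it only illustrates how weighted point-to-plane constraints can recover a small rotation/translation correction by least squares.

```python
# A minimal sketch (not the paper's full bundle block adjustment): weighted
# point-to-plane constraints between image-derived object points and LiDAR
# planar patches drive a small rotation/translation correction.
# The synthetic planes, points and weights are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, points, normals, ds, weights):
    """Weighted signed distances of transformed points to their planes."""
    rot = Rotation.from_euler("xyz", params[:3]).as_matrix()
    transformed = points @ rot.T + params[3:]
    return weights * (np.einsum("ij,ij->i", transformed, normals) - ds)

rng = np.random.default_rng(0)
normals = rng.normal(size=(30, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
on_plane = rng.uniform(-50, 50, size=(30, 3))          # points on LiDAR patches
ds = np.einsum("ij,ij->i", on_plane, normals)          # plane offsets

# simulate a small exterior-orientation error to recover
true_err = np.array([0.002, -0.001, 0.003, 1.5, -0.8, 2.0])
R_true = Rotation.from_euler("xyz", true_err[:3]).as_matrix()
points = (on_plane - true_err[3:]) @ R_true            # "image-derived" object points

weights = np.full(len(points), 1.0)                    # stochastic constraint weights
sol = least_squares(residuals, x0=np.zeros(6), args=(points, normals, ds, weights))
print("recovered correction:", np.round(sol.x, 4))
print("simulated error     :", true_err)
```
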
Classification of Multi-temporal SAR Data by Using Data Transform Based Features and Multiple Classifiers (자료변환 기반 특징과 다중 분류자를 이용한 다중시기 SAR자료의 분류)

  • Yoo, Hee Young;Park, No-Wook;Hong, Sukyoung;Lee, Kyungdo;Kim, Yeseul
    • Korean Journal of Remote Sensing
    • /
    • v.31 no.3
    • /
    • pp.205-214
    • /
    • 2015
  • In this study, a novel land-cover classification framework for multi-temporal SAR data is presented that combines multiple features extracted through data transforms with multiple classifiers. First, data transforms using principal component analysis (PCA) and the 3D wavelet transform are applied to the multi-temporal SAR dataset to extract new features that differ from the original dataset. Then, three different classifiers, the maximum likelihood classifier (MLC), a neural network (NN), and a support vector machine (SVM), are applied to three different datasets (the data-transform-based features and the original backscattering coefficients), generating diverse preliminary classification results. These results are combined via a majority voting rule to generate a final classification result. In an experiment with a multi-temporal ENVISAT ASAR dataset, every preliminary classification result showed very different classification accuracy according to the feature and classifier used. The final classification result, which combined the nine preliminary classification results, showed the best classification accuracy because each preliminary result provided complementary information on land cover. The improvement in classification accuracy was mainly attributed to the diversity gained from combining not only different data-transform-based features but also different classifiers. Therefore, the land-cover classification framework presented in this study can be effectively applied to the classification of multi-temporal SAR data and can also be extended to multi-sensor remote sensing data fusion.

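As an illustration of the combination scheme described in the entry above, the sketch below builds several preliminary classifications from different feature sets and classifiers and merges them by majority voting. It uses scikit-learn stand-ins on synthetic data: PCA for the data transform (the study also uses 3D wavelet features), quadratic discriminant analysis in place of the Gaussian MLC, an MLP for the NN, and an SVM; all settings are illustrative assumptions, not the study's configuration.

```python
# A minimal sketch of the multi-feature / multi-classifier majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# synthetic stand-in for the multi-temporal backscattering coefficients
X, y = make_classification(n_samples=600, n_features=12, n_informative=8,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pca = PCA(n_components=6).fit(X_tr)
feature_sets = {"original": (X_tr, X_te),
                "pca": (pca.transform(X_tr), pca.transform(X_te))}
classifiers = {"mlc": QuadraticDiscriminantAnalysis(),
               "nn": MLPClassifier(max_iter=1000, random_state=0),
               "svm": SVC()}

# one preliminary classification per (feature set, classifier) pair
preliminary = [clf.fit(tr, y_tr).predict(te)
               for tr, te in feature_sets.values()
               for clf in classifiers.values()]

# majority vote across all preliminary results gives the final map
votes = np.vstack(preliminary)
final = np.array([np.bincount(col).argmax() for col in votes.T])
print("combined accuracy:", (final == y_te).mean())
```
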
Case Study in Applying Product-Line Approach for Developing the Multi-Sensor Data Fusion System (다중센서데이터 융합시스템 개발의 제품 계열적 접근에 관한 사례연구)

  • Hong, Ki-Sam;Yoon, Hee-Byung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2005.05a
    • /
    • pp.263-266
    • /
    • 2005
  • A multi-sensor data fusion system (MSDFS) fuses heterogeneous data acquired from multiple sensors into a normalized format and minimizes the acquisition errors of individual sensors, thereby supporting the accurate identification and assessment of targets. Because these systems require a high level of reusability for the modules that perform their specific functions, it is difficult to design the common parts efficiently when current software engineering techniques are applied. This paper therefore applies a product-line development methodology, which removes such inefficiencies from system development, to the design of the embedded software of an MSDFS. The design proceeds from scoping the analysis target through identifying reusable components, and finally the GQM paradigm is applied to verify the designed model. In addition, performance evaluation criteria for the deliverables are presented as a way to improve system development effectively.

Data Fusion Algorithm of Multi-Sensor for Optimal Path Planning of Mobile Robots (이동 로봇의 최적 경로 설계를 위한 다중 센서 융합 알고리즘)

  • Jung, Jin-Gu;Kim, Young-Kyun;Chwa, Dong-Kyoung;Hong, Suk-Kyo
    • Proceedings of the KIEE Conference
    • /
    • 2007.07a
    • /
    • pp.1787-1788
    • /
    • 2007
  • Recently, research using several types of sensors has been actively conducted in many areas such as obstacle detection and path generation. Using multiple sensors allows more precise measurement than using an individual sensor. This paper proposes an algorithm that fuses the data measured by multiple sensors for efficient obstacle recognition and path generation, and simulations show that, when obstacles lie on the mobile robot's nominal path, a more optimized path can be obtained than when a single sensor is used.

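The abstract above gives no algorithmic detail, so the sketch below only illustrates one common way to fuse redundant range measurements of the same obstacle, inverse-variance weighting; the sensors, readings, and noise levels are purely hypothetical.

```python
# A minimal sketch of inverse-variance fusion of redundant range measurements.
import numpy as np

def fuse_ranges(readings, variances):
    """Minimum-variance combination of independent range measurements of one obstacle."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(readings, dtype=float)) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# e.g. ultrasonic, infrared and laser readings of the same obstacle (metres)
readings = [1.42, 1.35, 1.38]
variances = [0.04, 0.02, 0.005]
fused, fused_var = fuse_ranges(readings, variances)
print(f"fused range: {fused:.3f} m (variance {fused_var:.4f} m^2)")
```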

Recognition of Tactile Image Dependent on Imposed Force Using Fuzzy Fusion Algorithm (접촉력에 따라 변하는 Tactile 영상의 퍼지 융합을 통한 인식기법)

  • 고동환;한헌수
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.8 no.3
    • /
    • pp.95-103
    • /
    • 1998
  • This paper deals with a problem occurring in the recognition of tactile images due to the effect of the force imposed at the measurement moment. The tactile image of a contact surface, used for recognition of the surface type, varies depending on the imposed force, so that a false recognition may result. This paper fuzzifies two parameters of the contour of a tactile image with membership functions formed by considering the imposed force. The two fuzzified parameters are fused by the average Minkowski distance. The proposed algorithm was implemented on a multisensor system composed of an optical tactile sensor and a 6-axis force/torque sensor. In the experiments, the proposed algorithm showed an average recognition ratio greater than 86.9% over all imposed force ranges and object models, which is about a 14% enhancement compared to the case where only the contour information is used. The proposed algorithm can be used for end-effectors manipulating deformable or fragile objects, or for recognition of 3D objects when implemented on a multi-fingered robot hand.

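Loosely following the idea in the entry above, the sketch below fuzzifies two contour parameters with triangular membership functions whose width depends on the imposed force, and fuses them through an averaged Minkowski distance to stored object models. The membership shape, the force scaling, and the model parameter values are illustrative assumptions, not the paper's calibration.

```python
# A minimal sketch of force-dependent fuzzification and Minkowski-distance fusion.
import numpy as np

def triangular(x, center, half_width):
    """Triangular membership of x around center."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / half_width)

def recognize(measured, models, force, p=2):
    """Pick the model whose fuzzified contour parameters best match the measurement."""
    half_width = 0.1 + 0.05 * force   # stronger contact -> fuzzier membership (assumption)
    best_name, best_dist = None, np.inf
    for name, params in models.items():
        mu = triangular(np.asarray(measured), np.asarray(params), half_width)
        # averaged Minkowski distance between ideal membership (1.0) and the observed ones
        dist = np.mean(np.abs(1.0 - mu) ** p) ** (1.0 / p)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist

# two contour parameters per object model (hypothetical, normalized values)
models = {"cylinder": [0.80, 0.30], "prism": [0.50, 0.70]}
print(recognize(measured=[0.78, 0.33], models=models, force=2.0))
```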

The Effectiveness Analysis of Multistatic Sonar Network via Detection Performance (표적탐지성능을 이용한 다중상태 소나의 효과도 분석)

  • Jang, Jae-Hoon;Ku, Bon-Hwa;Hong, Woo-Young;Kim, In-Ik;Ko, Han-Seok
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.9 no.1 s.24
    • /
    • pp.24-32
    • /
    • 2006
  • This paper analyzes the effectiveness of a multistatic sonar network based on detection performance. A multistatic sonar network is a distributed detection system that places a source and multiple receivers apart, so it needs a detection technique involving decision rules and sonar system optimization to improve detection performance. For this we propose a data fusion procedure using Bayesian decision making and an optimal sensor arrangement obtained by optimizing a bistatic sonar. Also, to analyze the detection performance effectively, we propose an environmental model that simulates propagation loss and target strength suitable for multistatic sonar networks in real surroundings. The effectiveness analysis confirms the multistatic sonar network as a promising tool for the effective allocation of detection resources in a multistatic sonar system.

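As a toy illustration of Bayesian decision fusion in a distributed detection setting like the one above, the sketch below combines per-receiver likelihood ratios into a posterior target probability under an independence assumption; the prior and the likelihood ratios are hypothetical values, not results from the paper.

```python
# A toy Bayesian fusion of independent receiver likelihood ratios.
import numpy as np

def fuse_bayesian(likelihood_ratios, prior_target=0.1):
    """Posterior P(target | all receivers), assuming conditionally independent receivers."""
    joint_lr = np.prod(likelihood_ratios)
    prior_odds = prior_target / (1.0 - prior_target)
    posterior_odds = joint_lr * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

receiver_lrs = [3.2, 0.8, 5.1]   # p(measurement | target) / p(measurement | no target)
posterior = fuse_bayesian(receiver_lrs)
print(f"P(target | data) = {posterior:.3f} -> declare target: {posterior > 0.5}")
```
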
Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.422-424
    • /
    • 2021
  • In this paper, we present an approach that fuses multiple RGB cameras for visual object recognition, based on deep learning with a convolutional neural network, together with 3D Light Detection and Ranging (LiDAR) to observe the environment and match it to a 3D world, estimating distance and position in the form of a point cloud map. The goal of perception with multiple cameras is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in the blind spots, which helps the AV navigate toward its goal. Running object detection on numerous cameras can slow down real-time processing, so the convolutional neural network algorithm chosen to address this problem must also suit the capacity of the hardware. The localization of the classified detected objects is based on the 3D point cloud environment. First, however, the LiDAR point cloud data undergo parsing, and the algorithm used is based on the 3D Euclidean clustering method, which gives accurate localization of the objects. We evaluated the method using our own dataset acquired from a VLP-16 and multiple cameras, and the results demonstrate the completeness of the method and the multi-sensor fusion strategy.

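The localization step in the entry above relies on Euclidean clustering of the LiDAR point cloud. The sketch below uses scikit-learn's DBSCAN as a stand-in for PCL-style Euclidean cluster extraction on a synthetic cloud; the 0.5 m tolerance, minimum cluster size, and point layout are assumptions.

```python
# A minimal sketch of Euclidean-proximity clustering of a synthetic point cloud.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
obj_a = rng.normal([5.0, 2.0, 0.5], 0.2, size=(200, 3))             # one compact object
obj_b = rng.normal([12.0, -3.0, 0.8], 0.2, size=(150, 3))           # another object
ground = rng.uniform([-20, -20, 0], [20, 20, 0.05], size=(100, 3))  # sparse scattered returns
cloud = np.vstack([obj_a, obj_b, ground])

labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(cloud)         # 0.5 m cluster tolerance
for label in sorted(set(labels) - {-1}):                            # -1 marks unclustered noise
    pts = cloud[labels == label]
    print(f"cluster {label}: {len(pts)} points, centroid {pts.mean(axis=0).round(2)}")
```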

Radiometric Cross Calibration of KOMPSAT-3 and Landsat-8 for Time-Series Harmonization (KOMPSAT-3와 Landsat-8의 시계열 융합활용을 위한 교차검보정)

  • Ahn, Ho-yong;Na, Sang-il;Park, Chan-won;Hong, Suk-young;So, Kyu-ho;Lee, Kyung-do
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.6_2
    • /
    • pp.1523-1535
    • /
    • 2020
  • In order to produce crop information using remote sensing, classification and growth monitoring based on crop phenology are used, so time-series satellite images with a short revisit period are required. However, there are limitations to acquiring such time-series satellite data, making it necessary to fuse data from other Earth observation satellites. Before fusing various satellite image data, the inherent differences in the satellites' radiometric characteristics must be overcome. This study performed cross calibration of Korea Multi-Purpose Satellite-3 (KOMPSAT-3) with Landsat-8 as the first step toward fusion. Top of Atmosphere (TOA) reflectance was compared by applying a Spectral Band Adjustment Factor (SBAF) to each satellite using hyperspectral sensor band aggregation. As a result of the cross calibration, the KOMPSAT-3 and Landsat-8 satellites showed reflectance differences of less than 4% in the Blue, Green, and Red bands and 6% in the NIR band. KOMPSAT-3, which has no on-board calibrator, showed lower radiometric stability than Landsat-8. In the future, efforts are needed to produce normalized reflectance data through BRDF (bidirectional reflectance distribution function) correction and SBAF application for the spectral characteristics of agricultural land.

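To illustrate how an SBAF works in a cross calibration like the one above, the sketch below convolves an assumed surface reflectance spectrum with assumed Gaussian band responses for two sensors, takes the ratio of the band-averaged reflectances as the SBAF, and applies it to an example TOA reflectance. Neither the spectrum nor the band responses are official sensor RSRs.

```python
# A minimal sketch of SBAF-based band adjustment; spectrum and RSRs are assumptions.
import numpy as np

wavelengths = np.arange(400.0, 1001.0, 1.0)   # nm, 1 nm grid
# vegetation-like spectrum with a red edge near 720 nm (assumption)
surface_rho = 0.05 + 0.40 / (1.0 + np.exp(-(wavelengths - 720.0) / 30.0))

def gaussian_rsr(center_nm, fwhm_nm):
    """Assumed Gaussian relative spectral response."""
    sigma = fwhm_nm / 2.355
    return np.exp(-0.5 * ((wavelengths - center_nm) / sigma) ** 2)

def band_average(rsr):
    """RSR-weighted band-average reflectance on the common wavelength grid."""
    return np.sum(surface_rho * rsr) / np.sum(rsr)

rsr_sensor_a_nir = gaussian_rsr(840.0, 110.0)   # stand-in for a broad KOMPSAT-3-like NIR band
rsr_sensor_b_nir = gaussian_rsr(865.0, 30.0)    # stand-in for a narrow Landsat-8-like NIR band

sbaf = band_average(rsr_sensor_b_nir) / band_average(rsr_sensor_a_nir)
toa_rho_a = 0.31                                # example measured TOA reflectance
print(f"SBAF = {sbaf:.4f}, sensor-A NIR adjusted to sensor-B band: {sbaf * toa_rho_a:.4f}")
```
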
Time Synchronization Error and Calibration in Integrated GPS/INS Systems

  • Ding, Weidong;Wang, Jinling;Li, Yong;Mumford, Peter;Rizos, Chris
    • ETRI Journal
    • /
    • v.30 no.1
    • /
    • pp.59-67
    • /
    • 2008
  • The necessity for the precise time synchronization of measurement data from multiple sensors is widely recognized in the field of global positioning system/inertial navigation system (GPS/INS) integration. Having precise time synchronization is critical for achieving high data fusion performance. The limitations and advantages of various time synchronization scenarios and existing solutions are investigated in this paper. A criterion for evaluating synchronization accuracy requirements is derived on the basis of a comparison of the Kalman filter innovation series and the platform dynamics. An innovative time synchronization solution using a counter and two latching registers is proposed. The proposed solution has been implemented with off-the-shelf components and tested. The resolution and accuracy analysis shows that the proposed solution can achieve a time synchronization accuracy of 0.1 ms if INS can provide a hard-wired timing signal. A synchronization accuracy of 2 ms was achieved when the test system was used to synchronize a low-grade micro-electromechanical inertial measurement unit (IMU), which has only an RS-232 data output interface.

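The counter-and-latch idea in the entry above can be summarized as follows: a free-running counter is latched at the GPS one-pulse-per-second (PPS) event and again at each IMU data-ready event, so every IMU sample can be tagged with GPS time. The sketch below shows only that bookkeeping, with an assumed 1 MHz, 32-bit counter and made-up latch values; it is not the paper's hardware design.

```python
# A minimal sketch of the counter-and-latch timing bookkeeping (values assumed).
COUNTER_HZ = 1_000_000      # 1 MHz free-running counter (assumption)
COUNTER_WRAP = 2 ** 32      # 32-bit counter (assumption)

def imu_gps_time(gps_time_at_pps, counter_at_pps, counter_at_imu):
    """GPS time of an IMU sample from counter values latched at PPS and at the sample."""
    elapsed_ticks = (counter_at_imu - counter_at_pps) % COUNTER_WRAP  # handle wrap-around
    return gps_time_at_pps + elapsed_ticks / COUNTER_HZ

# example: the IMU sample was latched 37 500 ticks (37.5 ms) after the PPS
t = imu_gps_time(gps_time_at_pps=414_000.0,
                 counter_at_pps=120_000,
                 counter_at_imu=157_500)
print(f"IMU sample GPS time: {t:.6f} s")   # 414000.037500
```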

Attention based Feature-Fusion Network for 3D Object Detection (3차원 객체 탐지를 위한 어텐션 기반 특징 융합 네트워크)

  • Sang-Hyun Ryoo;Dae-Yeol Kang;Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.2
    • /
    • pp.190-196
    • /
    • 2023
  • Recently, following the development of LIDAR technology, which can measure the distance to objects, interest in LIDAR-based 3D object detection networks has been growing. Previous networks produce inaccurate localization results due to spatial information loss during voxelization and downsampling. In this study, we propose an attention-based fusion method and a camera-LIDAR fusion system to acquire high-level features and high positional accuracy. First, by introducing the attention method into Voxel-RCNN, a grid-based 3D object detection network, the multi-scale sparse 3D convolution features are effectively fused to improve 3D object detection performance. Additionally, we propose a late-fusion mechanism that combines the outputs of the 3D object detection network and a 2D object detection network to remove false positives. Comparative experiments with existing algorithms are performed on the KITTI dataset, which is widely used in the field of autonomous driving. The proposed method showed performance improvements in both 2D object detection on the BEV and 3D object detection. In particular, precision improved by about 0.54% for the car moderate class compared to Voxel-RCNN.
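
As a rough illustration of the late-fusion idea above, the sketch below keeps a 3D detection only when its projected center falls inside some 2D detection box from the camera; the pinhole intrinsics, box format, and example detections are hypothetical and much simpler than the paper's pipeline.

```python
# A minimal sketch of a late-fusion false-positive filter (values assumed).
import numpy as np

def project(point_cam, fx=700.0, fy=700.0, cx=620.0, cy=190.0):
    """Pinhole projection of a 3D point in the camera frame to pixel coordinates."""
    x, y, z = point_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

def late_fusion(detections_3d, boxes_2d):
    """Keep 3D detections whose projected centers lie inside some 2D box."""
    kept = []
    for det in detections_3d:
        u, v = project(det["center"])
        if any(x1 <= u <= x2 and y1 <= v <= y2 for x1, y1, x2, y2 in boxes_2d):
            kept.append(det)
    return kept

dets_3d = [{"center": np.array([1.5, 0.2, 12.0]), "label": "car"},
           {"center": np.array([-6.0, 0.1, 9.0]), "label": "car"}]   # second one: false positive
boxes_2d = [(650, 160, 790, 260)]                                    # one confirmed 2D car box
print([d["label"] for d in late_fusion(dets_3d, boxes_2d)])          # -> ['car']
```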