• Title/Abstract/Keyword: sensor data fusion

Search results: 382

Evidential Fusion of Multisensor Multichannel Imagery

  • Lee Sang-Hoon
    • Korean Journal of Remote Sensing / Vol.22 No.1 / pp.75-85 / 2006
  • This paper deals with data fusion for the problem of land-cover classification using multisensor imagery. Dempster-Shafer evidence theory is employed to combine the information extracted from multiple data sets of the same site. The Dempster-Shafer approach has two important advantages for remote sensing applications: it makes it possible to consider a compound class consisting of several land-cover types, and the incompleteness of each sensor's data due to cloud cover can be modeled in the fusion process. Image classification based on Dempster-Shafer theory usually assumes that each sensor is represented by a single channel. This evidential approach to image classification, which utilizes a mass function obtained under the assumption of a class-independent beta distribution, is discussed for multiple sets of multichannel data acquired from different sensors. The proposed method was applied to KOMPSAT-1 EOC panchromatic imagery and LANDSAT ETM+ data acquired over the Yongin/Neungpyung area of the Korean peninsula. The experiment shows that the method is highly effective in applications where it is hard to find homogeneous regions represented by a single land-cover type during training.
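The Dempster-Shafer combination the abstract relies on can be sketched in a few lines. This is a generic illustration, not the paper's implementation: the class names and mass values are invented, and the compound class {forest, water} merely mimics the compound-class idea the abstract highlights.

```python
def ds_combine(m1, m2):
    """Dempster's rule: multiply masses of intersecting focal
    elements and renormalize by the total non-conflicting mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors assign mass over singleton classes and a compound class
# {forest, water}, which models incomplete evidence (e.g. cloud cover).
F, W = frozenset({"forest"}), frozenset({"water"})
m_optical = {F: 0.6, W: 0.1, F | W: 0.3}
m_radar = {F: 0.5, W: 0.2, F | W: 0.3}
fused = ds_combine(m_optical, m_radar)
```

Because both sensors lean toward "forest", the fused mass on the forest singleton exceeds either input, while the mass left on the compound class shrinks.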

A Study on an Information Fusion Architecture of Avionics Real-time Track and Tactical Data Link

  • 강신우;이영서;박상웅;안태식
    • Journal of Advanced Navigation Technology / Vol.26 No.5 / pp.325-330 / 2022
  • Sensors mounted on aircraft are essential for mission execution, and fusion of the data they produce is applied to raise mission efficiency and reduce pilot workload. Data fusion has been adopted and refined so that the data obtained from the sensors can be presented to the pilot in a consistent, better-organized form for a given target. Military aircraft currently in operation interoperate with tactical data links such as Link-16 to display an enhanced tactical picture to the pilot and thereby increase mission efficiency. As onboard sensors have improved in performance, fusing their more accurate sensor data with the tactical situation information received over the tactical data link can provide the pilot with a highly reliable tactical picture and mission environment, promising efficient mission execution and higher survivability. This paper presents a fusion architecture for providing the aircraft's real-time sensor data and the data obtained over the tactical data link in an integrated information form.

Comparison of Fusion Methods for Generating 250m MODIS Image

  • Kim, Sun-Hwa;Kang, Sung-Jin;Lee, Kyu-Sung
    • Korean Journal of Remote Sensing / Vol.26 No.3 / pp.305-316 / 2010
  • The MODerate Resolution Imaging Spectroradiometer (MODIS) sensor has 36 bands at 250m, 500m, and 1km spatial resolution. However, the 500m and 1km MODIS data exhibit limitations when such low-resolution data are applied to small areas with complex land cover types. In this study, we produce seven 250m spectral bands by fusing the two MODIS 250m bands with the five 500m bands. To recommend the best fusion method for MODIS data, we compare seven fusion methods: the Brovey transform, the principal component analysis (PCA) fusion method, the Gram-Schmidt fusion method, the local mean and variance matching method, the least squares fusion method, the discrete wavelet fusion method, and the wavelet-PCA fusion method. The results of these fusion methods are compared using various evaluation indicators, such as correlation, relative difference of mean, relative variation, deviation index, peak signal-to-noise ratio, and the universal image quality index, as well as visual interpretation. Among the methods compared, the local mean and variance matching method provides the best fusion result both visually and on the evaluation indicators. The fused 250m MODIS data may effectively improve the accuracy of various MODIS land products.

Fuzzy Neural Network Based Sensor Fusion and Its Application to Mobile Robot in Intelligent Robotic Space

  • Jin, Tae-Seok;Lee, Min-Jung;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol.6 No.4 / pp.293-298 / 2006
  • In this paper, a sensor-fusion-based robot navigation method for the autonomous control of a miniature human-interaction robot is presented. The method blends the optimality of the Fuzzy Neural Network (FNN) based control algorithm with the knowledge-representation and learning capabilities of the networked Intelligent Robotic Space (IRS). States of the robot and the IR space, for example the distance between the mobile robot and obstacles and the velocity of the mobile robot, are used as the inputs of the fuzzy logic controller. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. To identify the environment, a sensor fusion technique is introduced in which the sensory data of ultrasonic sensors and a vision sensor are fused in the identification process. Preliminary experiments and results demonstrate the merit of the introduced navigation control algorithm.
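The goal-approach/obstacle-avoidance blending described above can be illustrated with a single hand-written fuzzy rule. The membership shape, distances, and headings are invented for the sketch and stand in for the paper's tuned FNN controller.

```python
def near(dist, d_max=2.0):
    """Fuzzy membership of 'obstacle is near': 1 at contact,
    falling linearly to 0 at d_max metres (shape is an assumption)."""
    return max(0.0, min(1.0, 1.0 - dist / d_max))

def steer(dist, goal_heading, avoid_heading):
    """One-rule fuzzy blend: the nearer the obstacle, the more the
    commanded heading favours avoidance over the goal direction."""
    w = near(dist)
    return w * avoid_heading + (1.0 - w) * goal_heading

# Obstacle at 1 m: halfway between the goal and avoidance headings.
cmd = steer(1.0, goal_heading=0.0, avoid_heading=90.0)
```

Far from any obstacle the command reduces to the pure goal heading; at contact it becomes pure avoidance, which is the qualitative behavior the fuzzy rule base encodes.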

Aerial Object Detection and Tracking based on Fusion of Vision and Lidar Sensors using Kalman Filter for UAV

  • Park, Cheonman;Lee, Seongbong;Kim, Hyeji;Lee, Dongjin
    • International Journal of Advanced Smart Convergence / Vol.9 No.3 / pp.232-238 / 2020
  • In this paper, we study an aerial object detection and position estimation algorithm for the safety of UAVs flying beyond visual line of sight (BVLOS). We use a vision sensor and LiDAR to detect objects: a YOLOv2 CNN architecture detects objects in the 2D image, and a clustering method detects objects in the point cloud data acquired from the LiDAR. When a single sensor is used, the detection rate can degrade in certain situations depending on the characteristics of that sensor, so when the result of a single-sensor detection algorithm is absent or false, the detection accuracy needs to be complemented. To do so, we use a Kalman filter and fuse the results of the individual sensors to improve detection accuracy. We then estimate the 3D position of the object using the pixel position of the object and the distance measured by the LiDAR. We verified the performance of the proposed fusion algorithm in simulation using the Gazebo simulator.
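A minimal version of the Kalman-filter fusion idea: a 1-D constant-velocity filter absorbs position measurements from two sensors of different noise levels by running sequential updates. All matrices and noise values are illustrative; the paper's filter operates on detections from YOLOv2 and LiDAR clustering, not on these toy numbers.

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])  # constant-velocity model, dt = 1
H = np.array([[1.0, 0.0]])              # both sensors observe position
Q = np.eye(2) * 0.01                    # process noise
R_cam, R_lidar = 4.0, 0.25              # vision is noisier than LiDAR

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, r):
    S = (H @ P @ H.T)[0, 0] + r   # innovation variance (scalar)
    K = P @ H.T / S               # Kalman gain, shape (2, 1)
    x = x + K * (z - (H @ x)[0, 0])
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Target moves at 1 unit/step; each step fuses a camera and a LiDAR reading.
x, P = np.array([[0.0], [0.0]]), np.eye(2)
for t in range(1, 6):
    x, P = predict(x, P)
    x, P = update(x, P, float(t), R_cam)    # camera measurement
    x, P = update(x, P, float(t), R_lidar)  # LiDAR measurement
```

The sequential-update form is one standard way to fuse asynchronous sensors: the low-noise LiDAR measurement receives a larger gain, so it dominates the fused estimate whenever both detections are present.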

Reducing Spectral Signature Confusion of Optical Sensor-based Land Cover Using SAR-Optical Image Fusion Techniques

  • ;Tateishi, Ryutaro;Wikantika, Ketut;M.A., Mohammed Aslam
    • Korean Society of Remote Sensing: Proceedings of ACRS 2003 ISRS / pp.107-109 / 2003
  • Optical sensor-based land cover classification produces spectral signature confusion along with degraded classification accuracy. In classification tasks, the goal of fusing data from different sensors is to reduce the classification error rate obtained by single-source classification. This paper describes land cover/land use classification results derived solely from Landsat TM (TM) data and from multisensor image fusion between JERS-1 SAR (JERS) and TM data. The best radar data manipulation is fused with TM through various techniques. Classification results are relatively good: the highest Kappa coefficient is obtained by classification using the principal component analysis with high-pass filtering (PCA+HPF) technique, with a significantly high overall accuracy.


3D Motion Estimation Using Multisensor Data Fusion

  • 양우석;장종환
    • Institute of Control, Robotics and Systems: 1993 Korean Automatic Control Conference Proceedings (Domestic Session); Seoul National University, Seoul; 20-22 Oct. 1993 / pp.679-684 / 1993
  • This article presents an approach to estimating the general 3D motion of a polyhedral object using multiple sensory data, some of which may not provide sufficient information for the estimation of object motion. Motion can be estimated continuously from each sensor through analysis of the instantaneous state of the object. We introduce a method based on Moore-Penrose pseudo-inverse theory to estimate the instantaneous state of an object, and discuss a linear feedback estimation algorithm to estimate the object's 3D motion. The motion estimated from each sensor is then fused to provide more accurate and reliable information about the motion of an unknown object. Techniques of multisensor data fusion can be categorized into three methods: averaging, decision, and guiding. We present a fusion algorithm that combines averaging and decision.
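The Moore-Penrose step reduces to ordinary least squares: with more observations than unknowns, the pseudo-inverse yields the minimum-residual state estimate. The observation matrix and noise below are invented for the illustration and are unrelated to the paper's actual sensor model.

```python
import numpy as np

# Overdetermined observation model A @ x = b: three scalar observations
# of a two-dimensional state. pinv(A) @ b is the least-squares estimate.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, -1.0])
b = A @ x_true + np.array([0.01, -0.02, 0.005])  # noisy observations
x_hat = np.linalg.pinv(A) @ b
```

Each sensor contributes rows to A; stacking the rows before applying the pseudo-inverse is the simplest "averaging" style of fusion, since every observation pulls the estimate toward its own measurement in proportion to the geometry of A.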


A Data Fusion Method of Odometry Information and Distance Sensor Data for Effective Obstacle Avoidance of an Autonomous Mobile Robot

  • 서동진;고낙용
    • The Transactions of the Korean Institute of Electrical Engineers / Vol.57 No.4 / pp.686-691 / 2008
  • This paper proposes the concept of "virtual sensor data" and its application to real-time obstacle avoidance. The virtual sensor data is a virtual distance that accounts for the movement of the obstacle as well as that of the robot. In practice, the virtual sensor data is calculated from the odometry data and the range sensor data. It can be used in any method that relies on distance data for collision avoidance. Since the virtual sensor data considers the movement of both the robot and the obstacle, methods utilizing it result in smoother and safer collision-free motion.
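A minimal sketch of the "virtual sensor data" idea under an assumed formula: shrink the measured range by the closing speed of robot and obstacle over one control period. The function signature and the formula are assumptions for illustration, not the paper's exact definition.

```python
def virtual_distance(measured, v_robot, v_obstacle, dt):
    """Assumed form of the 'virtual sensor data': the range the robot
    would see after one control period dt if robot and obstacle keep
    closing on each other at their current speeds."""
    closing_speed = v_robot + v_obstacle  # both moving toward each other
    return measured - closing_speed * dt

# A 5 m reading shrinks to 3.5 m when the robot (1 m/s) and the obstacle
# (0.5 m/s) approach each other over a 1 s control period.
d_virtual = virtual_distance(5.0, 1.0, 0.5, 1.0)
```

Because the virtual distance is smaller than the raw reading whenever the gap is closing, any avoidance method that consumes it reacts earlier to approaching obstacles without modification to the method itself.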

Efficient Aggregation and Routing Algorithm Using Local ID in Multi-hop Cluster Sensor Networks

  • 이보형;이태진
    • Institute of Electronics Engineers of Korea: 2003 Communications Society Fall Conference Proceedings / pp.135-139 / 2003
  • Sensor networks consist of small, low-cost, low-power, multi-function sensor nodes that sense, process, and communicate. Minimizing the power consumption of sensors is an important issue because the power available in sensor networks is limited. Clustering is an efficient way to reduce the data flow in sensor networks and to maintain less routing information. In this paper, we propose a multi-hop clustering mechanism using global and local IDs to reduce transmission power consumption, along with an efficient routing method for improved data fusion and transmission.


Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model

  • 임이지;최대선
    • Journal of the Korea Institute of Information Security and Cryptology / Vol.33 No.6 / pp.1099-1110 / 2023
  • Perception systems for autonomous driving and robot navigation perform multi-sensor fusion to improve performance, and then carry out vision tasks such as object detection and tracking and lane detection. Research on deep learning models based on the fusion of camera and LiDAR sensors is currently very active. However, deep learning models are vulnerable to adversarial attacks that tamper with the input data. Existing attacks on multi-sensor-based autonomous driving perception systems focus on lowering the confidence score of the object detection model to induce missed detection of obstacles, but they are limited in that they can only attack the target model. An attack on the sensor fusion stage, by contrast, can cascade errors into the vision tasks that follow fusion, and this risk needs to be considered. Moreover, by attacking the LiDAR point cloud data, which is hard to judge visually, the attack becomes difficult to detect. This study proposes an attack method that degrades the accuracy of LCCNet, an image-scaling-based camera-LiDAR calibration model. The proposed method applies a scaling attack to the input LiDAR points. Experiments with different scaling algorithms and scale factors induced fusion errors of more than 77% on average.