• Title/Summary/Keyword: Sensor fusion


Development of a Vehicle Positioning Algorithm Using Reference Images (기준영상을 이용한 차량 측위 알고리즘 개발)

  • Kim, Hojun; Lee, Impyeong
    • Korean Journal of Remote Sensing / v.34 no.6_1 / pp.1131-1142 / 2018
  • Autonomous vehicles are being developed and deployed widely because they reduce traffic accidents and save driving time and cost. Vehicle localization is an essential component of autonomous vehicle operation. In this paper, a localization algorithm based on sensor fusion is developed for cost-effective localization using in-vehicle sensors, GNSS, an image sensor, and reference images prepared in advance. The reference-image information overcomes the low positioning accuracy that arises when only the sensor information is used, and it also yields stable position estimates even when the car is in a satellite-signal blockage area. A particle filter is used for the sensor fusion because it can reflect the various probability density distributions of the individual sensors. To evaluate the algorithm's performance, a data acquisition system was built, and driving data and reference-image data were acquired. Finally, we verify that vehicle positioning can be performed with an accuracy of about 0.7 m when the acquired route images and the reference-image information are integrated along a route where the satellite sensor alone has relatively large errors.
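
The particle-filter fusion described above can be sketched as a predict/weight/resample loop. This is a minimal 2-D sketch, not the paper's implementation: the noise levels, particle count, and the simulated straight-line drive below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, motion, gnss_pos, gnss_sigma=5.0):
    """One predict/update cycle of a 2-D particle filter for vehicle position.

    particles: (N, 2) position hypotheses; weights: (N,) normalized weights.
    motion: odometry displacement (dx, dy) from the in-vehicle sensors.
    gnss_pos: noisy GNSS position fix used as the measurement.
    """
    # Predict: apply odometry with process noise.
    particles = particles + motion + rng.normal(0.0, 0.5, particles.shape)
    # Update: weight each particle by the GNSS measurement likelihood.
    d2 = np.sum((particles - gnss_pos) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / gnss_sigma**2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < 0.5 * len(weights):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Hypothetical run: the true vehicle moves +1 m/step in x from the origin.
particles = rng.normal(0.0, 3.0, (500, 2))
weights = np.full(500, 1.0 / 500)
for t in range(1, 21):
    truth = np.array([float(t), 0.0])
    gnss = truth + rng.normal(0.0, 2.0, 2)          # noisy GNSS fix
    particles, weights = particle_filter_step(
        particles, weights, np.array([1.0, 0.0]), gnss)
estimate = weights @ particles  # weighted-mean position estimate
```

Because each sensor enters only through its own likelihood term, the filter can mix arbitrarily shaped probability densities, which is the property the paper relies on; a reference-image match would simply contribute another weighting term.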

Comparison of Fusion Methods for Generating 250m MODIS Image

  • Kim, Sun-Hwa; Kang, Sung-Jin; Lee, Kyu-Sung
    • Korean Journal of Remote Sensing / v.26 no.3 / pp.305-316 / 2010
  • The MODerate Resolution Imaging Spectroradiometer (MODIS) sensor has 36 bands at 250 m, 500 m, and 1 km spatial resolution. However, the 500 m and 1 km MODIS data show limitations when such low-resolution data are applied to small areas with complex land cover types. In this study, we produce seven 250 m spectral bands by fusing the two MODIS 250 m bands with the five 500 m bands. To recommend the best fusion method for MODIS data, we compare seven fusion methods: the Brovey transform, the principal component analysis (PCA) fusion method, the Gram-Schmidt fusion method, the local mean and variance matching method, the least squares fusion method, the discrete wavelet fusion method, and the wavelet-PCA fusion method. The results of these fusion methods are compared using evaluation indicators such as correlation, relative difference of means, relative variation, deviation index, peak signal-to-noise ratio, and the universal image quality index, as well as visual interpretation. Among the fusion methods, the local mean and variance matching method provides the best result in both the visual interpretation and the evaluation indicators. This fusion algorithm for 250 m MODIS data may be used to effectively improve the accuracy of various MODIS land products.
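
Of the compared methods, the Brovey transform is the simplest to state: it redistributes the fine-resolution intensity across the coarse spectral bands while preserving their band ratios. A minimal sketch, assuming the 500 m bands have already been resampled to the 250 m grid (the toy arrays are illustrative only):

```python
import numpy as np

def brovey_fuse(bands_coarse, band_fine):
    """Brovey-transform fusion: scale each coarse band by the ratio of the
    fine-resolution band to the sum of the coarse bands, so spatial detail
    is injected while band ratios are preserved.

    bands_coarse: (k, H, W) coarse bands resampled to the fine grid.
    band_fine:    (H, W) fine-resolution band supplying spatial detail.
    """
    total = bands_coarse.sum(axis=0)
    total = np.where(total == 0, 1e-9, total)  # avoid division by zero
    return bands_coarse * (band_fine / total)

# Toy example: two coarse bands and one fine band on a 2x2 grid.
coarse = np.array([[[10.0, 20.0], [30.0, 40.0]],
                   [[30.0, 20.0], [10.0, 40.0]]])
fine = np.array([[80.0, 60.0], [20.0, 100.0]])
fused = brovey_fuse(coarse, fine)
```

By construction the fused bands sum to the fine band at every pixel, which is also why the Brovey transform is known to distort absolute radiometry relative to statistics-matching methods such as local mean and variance matching.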

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon; Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.28-34 / 2020
  • This paper proposes an approach to the fusion of two heterogeneous sensors with different fields-of-view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the LIDAR and the RGB camera provides the fusion result, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image is generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire depth and image data, respectively. The LIDAR sensor provides distance information between the sensor and nearby objects in the scene, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with the depth information enables better performance in applications such as object detection and tracking; automatic driver assistance systems, robotics, and other systems that require visual information processing might find this work useful. Since the LIDAR provides only depth values, processing and generation of a depthmap that corresponds to the RGB image is required. Experimental results are provided to validate the proposed approach.
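
The registration step amounts to projecting LIDAR points through the extrinsic and intrinsic calibration and rasterizing their depths onto the image grid. A generic sketch (the calibration values and point cloud below are hypothetical, not those of the RPLIDAR-A3 setup):

```python
import numpy as np

def lidar_to_depthmap(points_lidar, R, t, K, image_shape):
    """Project LIDAR points into the camera frame and rasterize a depthmap.

    points_lidar: (N, 3) points in the LIDAR frame.
    R, t: extrinsic rotation (3x3) and translation (3,), LIDAR -> camera.
    K: 3x3 camera intrinsic matrix; image_shape: (H, W).
    Pixels with no LIDAR return stay 0, so the depthmap is sparse.
    """
    pts_cam = points_lidar @ R.T + t        # LIDAR frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]    # keep points in front of camera
    uv = pts_cam @ K.T                      # perspective projection
    uv = uv[:, :2] / uv[:, 2:3]
    depth = pts_cam[:, 2]
    H, W = image_shape
    depthmap = np.zeros((H, W))
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    depthmap[v[inside], u[inside]] = depth[inside]
    return depthmap

# Hypothetical calibration: identity extrinsics, a simple pinhole intrinsic.
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 5.0], [0.1, 0.0, 2.0]])
dm = lidar_to_depthmap(pts, np.eye(3), np.zeros(3), K, (64, 64))
```

Because the two sensors' FOVs differ, only the overlapping region of the image receives depth values; densifying the sparse depthmap is a separate interpolation step.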

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images (작물 모니터링을 위한 다중 센서 고해상도 위성영상의 시공간 융합 모델의 평가: Sentinel-2 및 RapidEye 영상 융합 실험)

  • Park, Soyeon; Kim, Yeseul; Na, Sang-Il; Park, No-Wook
    • Korean Journal of Remote Sensing / v.36 no.5_1 / pp.807-821 / 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, originally developed for fusing mid- and low-resolution satellite images, to constructing a set of time-series high-resolution images for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated in light of the principle of spatio-temporal fusion. An experiment on the fusion of multi-temporal Sentinel-2 and RapidEye images over agricultural fields was conducted to evaluate prediction performance. Three representative fusion models were applied in this comparative experiment: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF). The three models exhibited different prediction performance in terms of prediction errors and spatial similarity. However, regardless of model type, the correlation between the coarse-resolution images acquired on the pair date and on the prediction date mattered more for prediction performance than the time difference between the pair date and the prediction date. In addition, using a vegetation index as the input to spatio-temporal fusion gave better prediction performance than computing the vegetation index from fused reflectance values, because it alleviates error-propagation problems. These experimental results can serve as basic information both for selecting optimal image pairs and input types and for developing advanced spatio-temporal fusion models for crop monitoring.
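
The core mechanism the three models share can be reduced to carrying the coarse-resolution temporal change over to the fine-resolution image of the pair date. A deliberately minimal sketch (real STARFM and FSDAF additionally weight spectrally and spatially similar neighboring pixels):

```python
import numpy as np

def spatiotemporal_predict(fine_pair, coarse_pair, coarse_pred):
    """Minimal spatio-temporal fusion in the spirit of STARFM: add the
    coarse-resolution change between the pair date and the prediction
    date to the fine-resolution image of the pair date.

    fine_pair:   fine image on the pair date (e.g. Sentinel-2).
    coarse_pair: coarse image on the pair date, resampled to the fine grid.
    coarse_pred: coarse image on the prediction date, same grid.
    """
    return fine_pair + (coarse_pred - coarse_pair)

# Toy check: a coarse change of +3 is carried over to the fine prediction.
pred = spatiotemporal_predict(np.array([[1.0]]),
                              np.array([[2.0]]),
                              np.array([[5.0]]))
```

This simplification also makes the study's main finding plausible: the entire predicted change comes from the coarse difference term, so pairs whose coarse image correlates well with the prediction-date coarse image yield smaller errors.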

Efficient Aggregation and Routing Algorithm using Local ID in Multi-hop Cluster Sensor Network (다중 홉 클러스터 센서 네트워크에서 속성 기반 ID를 이용한 효율적인 융합과 라우팅 알고리즘)

  • Lee, Bohyung; Lee, Taejin
    • Proceedings of the IEEK Conference / 2003.11c / pp.135-139 / 2003
  • Sensor networks consist of small, low-cost, low-power, multi-functional sensor nodes that sense, process, and communicate. Minimizing the power consumption of sensors is an important issue because of the limited power available in sensor networks. Clustering is an efficient way to reduce data flow in sensor networks and to maintain less routing information. In this paper, we propose a multi-hop clustering mechanism that uses global and local IDs to reduce transmission power consumption, together with an efficient routing method for improved data fusion and transmission.
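
The in-cluster aggregation idea, where readings sharing an attribute-based local ID are fused at the cluster head so only one packet per attribute leaves the cluster, can be sketched as follows (the tuple format and the averaging rule are assumptions for illustration, not the paper's protocol):

```python
def aggregate_by_local_id(readings):
    """Cluster-head aggregation sketch: each reading is (local_id, value);
    readings sharing a local ID are fused into one average, so a single
    packet per attribute is forwarded upstream, saving radio power.
    """
    sums, counts = {}, {}
    for local_id, value in readings:
        sums[local_id] = sums.get(local_id, 0.0) + value
        counts[local_id] = counts.get(local_id, 0) + 1
    # One fused value per local ID replaces all raw readings.
    return {i: sums[i] / counts[i] for i in sums}

# Three raw packets collapse into two fused ones.
fused = aggregate_by_local_id([(1, 20.0), (1, 22.0), (2, 3.5)])
```

Since radio transmission dominates a sensor node's energy budget, reducing the packet count at each cluster head is where the power saving comes from.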


A Data Fusion Algorithm of the Nonlinear System Based on Filtering Step By Step

  • Wen, Cheng-Lin; Ge, Quan-Bo
    • International Journal of Control, Automation, and Systems / v.4 no.2 / pp.165-171 / 2006
  • This paper proposes a data fusion algorithm, based on step-by-step filtering, for nonlinear multi-sensor dynamic systems with synchronous sampling. First, the object state variable at the next time index is predicted from the previous global information of the system; then the predicted estimate is updated in turn by the extended Kalman filter as each observation of the target state variable arrives. Finally, a fused estimate of the object state variable is obtained from the system's global information. We also formulate the new algorithm and compare its performance with that of the traditional nonlinear centralized and distributed data fusion algorithms using indexes that include computational complexity, data communication burden, time delay, and estimation accuracy. The comparison indicates that the new algorithm outperforms both traditional nonlinear data fusion algorithms.
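
The step-by-step structure, one time-update followed by sequential measurement updates as each sensor's observation arrives, can be sketched in the linear Kalman form (the paper handles the nonlinear case via the extended Kalman filter; the 1-D constant state and the sensor noise values below are hypothetical):

```python
import numpy as np

def predict(x, P, F, Q):
    """Time update: propagate state and covariance one step."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Measurement update (Kalman form; the EKF linearizes h(x) to get H)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Step-by-step fusion: one prediction, then update in turn with each
# sensor's observation of the same state (true value 3.0).
F = np.eye(1); Q = 0.01 * np.eye(1); H = np.eye(1)
x, P = np.zeros(1), np.eye(1)
for z, r in [(2.9, 0.25), (3.2, 0.5), (3.0, 0.1)]:   # three sensors
    x, P = predict(x, P, F, Q)
    x, P = update(x, P, np.array([z]), H, r * np.eye(1))
```

Updating sequentially avoids stacking all observations into one large measurement vector, which is where the claimed savings in computation and communication come from relative to the centralized scheme.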

Reducing Spectral Signature Confusion of Optical Sensor-based Land Cover Using SAR-Optical Image Fusion Techniques

  • ;Tateishi, Ryutaro; Wikantika, Ketut; M.A., Mohammed Aslam
    • Proceedings of the KSRS Conference / 2003.11a / pp.107-109 / 2003
  • Optical sensor-based land cover classification produces spectral-signature confusion along with degraded classification accuracy. In classification tasks, the goal of fusing data from different sensors is to reduce the error rate obtained by single-source classification. This paper describes land cover/land use classification results derived solely from Landsat TM (TM) and from multisensor image fusion between JERS-1 SAR (JERS) and TM data. The best radar data manipulation is fused with TM through various techniques. The classification results are relatively good; the highest Kappa coefficient is obtained by classification using the principal component analysis-high pass filtering (PCA+HPF) technique, with a significantly high overall accuracy.
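
The HPF half of the winning PCA+HPF technique injects high-frequency SAR texture into the optical bands. A sketch assuming a simple box high-pass filter (the actual filter size and the PCA combination step are not specified in the abstract):

```python
import numpy as np

def hpf_fuse(optical_bands, sar, kernel_size=3):
    """High-pass-filter fusion sketch: extract fine spatial detail from the
    SAR image with a box high-pass filter and add it to each optical band.
    Only the HPF step is shown; the paper's best result also involves PCA.
    """
    k = kernel_size
    pad = k // 2
    padded = np.pad(sar, pad, mode="edge")
    H, W = sar.shape
    # Box low-pass via a sliding-window mean ...
    low = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            low[i, j] = padded[i:i + k, j:j + k].mean()
    # ... and high-pass detail = SAR minus its low-pass component.
    detail = sar - low
    return optical_bands + detail  # broadcast the detail over all bands
```

Adding only the high-pass component keeps the optical bands' spectral means intact while borrowing the SAR's structural texture, which is what helps separate classes that are spectrally confused in the optical data alone.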


Multisensor-Based Navigation of a Mobile Robot Using a Fuzzy Inference in Dynamic Environments (동적환경에서 퍼지추론을 이용한 이동로봇의 다중센서기반의 자율주행)

  • Jin, Tae-Seok; Lee, Jang-Myung
    • Journal of the Korean Society for Precision Engineering / v.20 no.11 / pp.79-90 / 2003
  • In this paper, we propose a multisensor-based navigation algorithm for a mobile robot that intelligently searches for the goal location in unknown dynamic environments using multiple ultrasonic sensors. Instead of a “sensor fusion” method, which generates the robot trajectory from an environment model and sensory data, a “command fusion” method based on fuzzy inference is used to govern the robot's motions. The major factors for robot navigation are represented as a cost function. Using the robot's state and the environment data, the weight of each factor is determined by fuzzy inference to obtain an optimal trajectory in dynamic environments. To evaluate the proposed algorithm, we performed simulations on a PC as well as experiments with IRL-2002. The results show that the proposed algorithm can identify obstacles in unknown environments and guide the robot to the goal location safely.
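
The command-fusion idea, where each behavior proposes its own steering command and fuzzy weights blend them rather than fusing the raw sensor data into one world model, can be sketched with a single ramp membership function (the distance thresholds are hypothetical tuning values, not the paper's rule base):

```python
def command_fusion(goal_heading, avoid_heading, obstacle_dist):
    """Blend a goal-seeking heading with an obstacle-avoidance heading.

    A ramp membership (assumed here) maps obstacle distance to an avoidance
    weight: full avoidance below 0.5 m, pure goal seeking beyond 2.0 m,
    linear blending in between. Headings are in degrees.
    """
    near, far = 0.5, 2.0
    w_avoid = min(1.0, max(0.0, (far - obstacle_dist) / (far - near)))
    return w_avoid * avoid_heading + (1.0 - w_avoid) * goal_heading

# Midway between the thresholds, the two commands are blended equally.
h = command_fusion(goal_heading=0.0, avoid_heading=90.0, obstacle_dist=1.25)
```

A full fuzzy inference system would use several overlapping membership functions and a rule table per navigation factor, but the blending of behavior-level commands is the same.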

3D motion estimation using multisensor data fusion (센서융합을 이용한 3차원 물체의 동작 예측)

  • Yang, Woo-Suk; Jang, Jong-Hwan
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1993.10a / pp.679-684 / 1993
  • This article presents an approach to estimating the general 3D motion of a polyhedral object using multiple sensory data, some of which may not provide sufficient information for motion estimation on their own. Motion can be estimated continuously from each sensor through analysis of the instantaneous state of the object. We introduce a method based on Moore-Penrose pseudo-inverse theory to estimate the instantaneous state of the object, and discuss a linear feedback estimation algorithm to estimate its 3D motion. The motion estimated from each sensor is then fused to provide more accurate and reliable information about the motion of the unknown object. Techniques for multisensor data fusion can be categorized into three methods: averaging, decision, and guiding. We present a fusion algorithm that combines averaging and decision.
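
The pseudo-inverse step recovers the instantaneous rigid-body state from observed point velocities. A sketch under the standard kinematic model ṗ = ω × p + v (the point/velocity pairing is an assumption for illustration; the paper's linear feedback estimator and fusion stage are not reproduced):

```python
import numpy as np

def estimate_motion(points, velocities):
    """Least-squares recovery of angular velocity w and translational
    velocity v from observed point velocities, via the Moore-Penrose
    pseudo-inverse. Each point obeys  p_dot = w x p + v,  which is linear
    in the stacked unknown [w, v].
    """
    rows = []
    for p in points:
        px, py, pz = p
        # (w x p) written as a matrix acting on w.
        skew = np.array([[0.0, pz, -py],
                         [-pz, 0.0, px],
                         [py, -px, 0.0]])
        rows.append(np.hstack([skew, np.eye(3)]))  # unknowns: [w, v]
    A = np.vstack(rows)
    b = np.asarray(velocities).ravel()
    x = np.linalg.pinv(A) @ b   # minimum-norm solution even if rank-deficient
    return x[:3], x[3:]
```

The pseudo-inverse is what lets a sensor that observes too few points (an under-determined system) still contribute a minimum-norm estimate, which the fusion stage can then weight against the other sensors.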


A Study on Indoor Mobile Robot Navigation Used Space and Time Sensor Fusion

  • Jin, Tae-Seok; Ko, Jae-Pyung; Lee, Jang-Myung
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2002.10a / pp.104.2-104 / 2002
  • This paper proposes a sensor-fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurement of, for example, the distance to an obstacle and the location of the service robot itself. In conventional fusion schemes, the measurement depends only on the current data sets; as a result, more sensors are required to measure a certain physical parameter or to improve measurement accuracy. In this approach, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized to improve the measurement. The theoretical basis is illustrated by examples and...
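
The temporal transformation can be sketched for the simplest case of a robot driving straight toward an obstacle: each stored range reading is shifted by the odometry accumulated since it was taken, then fused with the current reading as if all were simultaneous (the 1-D geometry and plain averaging are assumptions for illustration, not the paper's scheme):

```python
def fuse_over_time(measurements, displacements):
    """Space-and-time fusion sketch: transform each past range reading
    (metres) into the current moment by subtracting the distance travelled
    toward the obstacle since that reading, then average all transformed
    readings as if one multi-sensor measurement had been taken now.

    measurements:  range readings m_0..m_n, oldest first.
    displacements: distance travelled between consecutive readings.
    """
    transformed = []
    for i, m in enumerate(measurements):
        travelled_since = sum(displacements[i:])  # odometry from reading i to now
        transformed.append(m - travelled_since)
    return sum(transformed) / len(transformed)

# Robot approaching a wall ~5 m away, advancing 1 m between noisy readings.
d = fuse_over_time([5.1, 3.9, 3.0], [1.0, 1.0])
```

Each stored reading acts like an extra virtual sensor, which is exactly the trade the paper describes: temporal history in place of additional physical sensors.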
