• Title/Summary/Keyword: Sensor fusion


A Method of Obstacle Detection in the Dust Environment for Unmanned Ground Vehicle (먼지 환경의 무인차량 운용을 위한 장애물 탐지 기법)

  • Choe, Tok-Son; Ahn, Seong-Yong; Park, Yong-Woon
    • Journal of the Korea Institute of Military Science and Technology / v.13 no.6 / pp.1006-1012 / 2010
  • For autonomous navigation of an unmanned ground vehicle over rough terrain and in combat conditions, dusty environments must be handled. We therefore propose a robust obstacle detection methodology that uses a laser range sensor and a radar. The laser range sensor has good angular and range accuracy but performs poorly in dust; the radar is less accurate in angle and range than the laser range sensor but remains robust in dust. Exploiting these characteristics, the laser range sensor serves as the main sensor under normal conditions and the radar as an assisting sensor in dust. To fuse the two sensors, the angle and distance data of the laser range sensor and of the radar are each transformed into the angle and distance data of a virtual range sensor located at the center of the vehicle. The distances reported by the laser range sensor and the radar at the same angle are then compared: if they are similar, the fused virtual range sensor takes the laser distance; otherwise it takes the radar distance. The proposed methodology is verified by real-world experiments. A minimal sketch of this fusion rule follows below.
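As a rough illustration of the comparison rule described above, the sketch below assumes both scans have already been transformed and resampled onto a common set of angle bins of the virtual range sensor; the function name fuse_virtual_range and the 0.5 m similarity threshold are hypothetical choices, not values from the paper.

    import numpy as np

    def fuse_virtual_range(laser_r, radar_r, similarity_thresh=0.5):
        """Per-angle fusion of laser and radar distances already expressed
        in the frame of a virtual range sensor at the vehicle center.
        similarity_thresh is a hypothetical agreement tolerance in meters."""
        laser_r = np.asarray(laser_r, dtype=float)
        radar_r = np.asarray(radar_r, dtype=float)
        agree = np.abs(laser_r - radar_r) <= similarity_thresh
        # Where both sensors agree, keep the more accurate laser reading;
        # otherwise (e.g. dust blinds the laser) fall back to the radar.
        return np.where(agree, laser_r, radar_r)

    # Toy scan: in the third angle bin the laser reports a spurious short
    # return caused by dust, so the fused output takes the radar distance.
    laser = [12.0, 11.8, 3.1, 11.5]
    radar = [12.2, 11.9, 11.7, 11.6]
    print(fuse_virtual_range(laser, radar))  # [12.  11.8 11.7 11.5]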

Experimental Research on Radar and ESM Measurement Fusion Technique Using Probabilistic Data Association for Cooperative Target Tracking (협동 표적 추적을 위한 확률적 데이터 연관 기반 레이더 및 ESM 센서 측정치 융합 기법의 실험적 연구)

  • Lee, Sae-Woom; Kim, Eun-Chan; Jung, Hyo-Young; Kim, Gi-Sung; Kim, Ki-Seon
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.5C / pp.355-364 / 2012
  • Target processing mechanisms are required for cooperative engagement: collecting target information, fusing data in real time, and recognizing the tactical environment. Among these, target tracking starts from predicting the target's speed, acceleration, and location from sensor measurements. Because the measurements carry uncertainty, a single sensor cannot guarantee reliability; multiple sensors are therefore needed to detect the target and improve reliability, and a data fusion technique is needed to process the data provided by heterogeneous sensors. This paper proposes a target tracking algorithm based on probabilistic data association (PDA) that fuses radar and ESM sensor measurements. The radar's azimuth and range measurements and the ESM sensor's bearing-only measurement are associated by a measurement fusion method; after gating the associated measurements, the target state is estimated by a PDA filter. Simulation results show that the proposed algorithm provides improved estimation under linear and circular target motions. A hedged sketch of the fusion and gating steps follows below.
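The abstract does not spell out its measurement fusion equations, so the sketch below uses two common stand-ins: inverse-variance weighting to combine the radar azimuth with the ESM bearing, and a chi-square validation gate ahead of the PDA weighting. The variance values and gate threshold are assumptions for illustration only.

    import numpy as np

    def fuse_bearings(radar_az, radar_az_var, esm_bearing, esm_var):
        """Inverse-variance fusion of the radar azimuth and the ESM
        bearing-only measurement into one bearing estimate (a common
        measurement-fusion heuristic, assumed here for illustration)."""
        w_r, w_e = 1.0 / radar_az_var, 1.0 / esm_var
        fused = (w_r * radar_az + w_e * esm_bearing) / (w_r + w_e)
        return fused, 1.0 / (w_r + w_e)

    def in_gate(z, z_pred, S, gate=9.21):
        """Chi-square validation gate (2 dof, ~99%) applied before the PDA
        filter weights the gated measurements."""
        innov = z - z_pred
        return innov @ np.linalg.solve(S, innov) <= gate

    # Radar: azimuth 0.52 rad (sigma 2 deg) and range 1500 m;
    # ESM: bearing 0.50 rad (sigma 1 deg). All values are hypothetical.
    az, az_var = fuse_bearings(0.52, np.deg2rad(2.0) ** 2, 0.50, np.deg2rad(1.0) ** 2)
    z = np.array([1500.0 * np.cos(az), 1500.0 * np.sin(az)])  # fused Cartesian measurement
    S = np.diag([100.0, 100.0])  # hypothetical innovation covariance [m^2]
    print(az, in_gate(z, z + np.array([5.0, -8.0]), S))  # ~0.504 rad, True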

Hierarchical Clustering Approach of Multisensor Data Fusion: Application of SAR and SPOT-7 Data on Korean Peninsula

  • Lee, Sang-Hoon; Hong, Hyun-Gi
    • Proceedings of the KSRS Conference / 2002.10a / pp.65-65 / 2002
  • In remote sensing, images are acquired over the same area by sensors with different spectral ranges (from the visible to the microwave) and/or with a different number, position, and width of spectral bands. These images are generally partially redundant, as they represent the same scene, and partially complementary. For many image classification applications, the information provided by a single sensor is often incomplete or imprecise, resulting in misclassification. Fusion with redundant data can draw more consistent inferences for the interpretation of the scene and can thereby improve classification accuracy. The common pixel-level fusion approach to classifying multisensor data is to concatenate the data into one vector as if they were measurements from a single sensor. However, the multiband data acquired by a single multispectral sensor or by two or more different sensors are not completely independent, and a certain degree of informative overlap may exist between the observation spaces of the different bands. This dependence can make the data less informative and should be properly modeled in the analysis so that its effect can be eliminated. To model and eliminate this dependence, this study employs a strategy based on self and conditional information variation measures. The self information variation reflects the self-certainty of an individual band, while the conditional information variation reflects the degree of dependence between different bands. One data set may be much less reliable than the others and may even degrade the classification results; such unreliable data should be excluded from the analysis, so the self information variation is used to measure each band's reliability. A team of positively dependent bands can jointly gather more information than a team of independent bands, but when bands are negatively dependent, their combined analysis may yield worse information. Using the conditional information variation measure, the multiband data are split into two or more subsets according to the dependence between the bands. Each subset is classified separately, and a decision-level data fusion scheme integrates the individual classification results. In this study, a two-level algorithm based on a hierarchical clustering procedure is used for unsupervised image classification; hierarchical clustering relies on similarity measures between all pairs of candidates considered for merging. In the first level, the image is partitioned into regions, i.e., sets of spatially contiguous pixels such that no union of adjacent regions is statistically uniform. The regions produced by this level are then clustered into a parsimonious number of groups according to their statistical characteristics. The algorithm has been applied to satellite multispectral data and airborne SAR data. A hedged sketch of histogram-based information measures for band grouping follows below.
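The abstract does not give the exact definitions of the self and conditional information variation, so the sketch below uses histogram-based entropy and conditional entropy as stand-ins to show how bands could be screened for reliability and grouped by dependence; all names and parameter values are assumptions.

    import numpy as np

    def band_entropy(band, bins=64):
        """Shannon entropy of a band's histogram, used here as a stand-in
        for the 'self information variation' of an individual band."""
        hist, _ = np.histogram(band, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def conditional_entropy(band_a, band_b, bins=64):
        """H(A|B) = H(A,B) - H(B) from the joint histogram, a stand-in for
        the 'conditional information variation' between two bands."""
        joint, _, _ = np.histogram2d(band_a.ravel(), band_b.ravel(), bins=bins)
        p_ab = joint / joint.sum()
        p_b = p_ab.sum(axis=0)
        h_ab = -np.sum(p_ab[p_ab > 0] * np.log2(p_ab[p_ab > 0]))
        h_b = -np.sum(p_b[p_b > 0] * np.log2(p_b[p_b > 0]))
        return h_ab - h_b

    # Toy "bands" over the same scene: the first two are strongly dependent,
    # the third is independent and would end up in a separate subset.
    rng = np.random.default_rng(0)
    scene = rng.normal(size=10_000)
    bands = [scene + 0.1 * rng.normal(size=scene.size),
             scene + 0.1 * rng.normal(size=scene.size),
             rng.normal(size=scene.size)]
    print([round(band_entropy(b), 2) for b in bands])
    print(round(conditional_entropy(bands[0], bands[1]), 2),   # small: shared information
          round(conditional_entropy(bands[0], bands[2]), 2))   # larger: independent band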
