• Title/Summary/Keyword: Data fusion

Design of a Multi-Sensor Data Simulator and Development of Data Fusion Algorithm (다중센서자료 시뮬레이터 설계 및 자료융합 알고리듬 개발)

  • Lee, Yong-Jae;Lee, Ja-Seong;Go, Seon-Jun;Song, Jong-Hwa
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.34 no.5 / pp.93-100 / 2006
  • This paper presents a multi-sensor data simulator and a data fusion algorithm for tracking a high-dynamic flight target with radar and telemetry systems. The simulator generates time-asynchronous multi-sensor data with different data rates and communication delays, and measurement noise is incorporated using realistic sensor models. The proposed fusion algorithm is a 21st-order distributed Kalman filter based on a position-velocity-acceleration (PVA) model with sensor bias states. Fault detection and correction logic is included in the algorithm to handle bad data and sensor faults. The designed algorithm is verified using both simulated data and actual data.
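
The abstract describes a distributed Kalman filter built on a PVA (position-velocity-acceleration) model. The sketch below is a minimal single-axis PVA Kalman filter with position-only measurements, not the paper's 21st-order distributed design with bias states; the noise values, time step, and function names are illustrative assumptions.

```python
import numpy as np

def pva_transition(dt):
    """Constant-acceleration (PVA) transition matrix for one axis."""
    return np.array([[1.0, dt, 0.5 * dt**2],
                     [0.0, 1.0, dt],
                     [0.0, 0.0, 1.0]])

def kf_predict(x, P, F, Q):
    """Standard Kalman filter time update."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    """Standard Kalman filter measurement update."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative run: one axis, position-only measurements from one sensor.
dt = 0.1
F = pva_transition(dt)
Q = 1e-3 * np.eye(3)                     # assumed process noise
H = np.array([[1.0, 0.0, 0.0]])          # the sensor observes position only
R = np.array([[4.0]])                    # assumed measurement variance
x, P = np.zeros(3), 10.0 * np.eye(3)
for z in [0.5, 1.1, 1.8, 2.9]:           # toy measurement stream
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([z]), H, R)
print("estimated position/velocity/acceleration:", x)
```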

Data Alignment for Data Fusion in Wireless Multimedia Sensor Networks Based on M2M

  • Cruz, Jose Roberto Perez;Hernandez, Saul E. Pomares;Cote, Enrique Munoz De
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.1 / pp.229-240 / 2012
  • Advances in MEMS and CMOS technologies have motivated the development of low-cost, low-power sensors and wireless multimedia sensor networks (WMSN). WMSNs were created to ubiquitously harvest multimedia content, and they have allowed researchers and engineers to envision new Machine-to-Machine (M2M) systems, such as remote monitoring of biosignals for telemedicine networks. These systems require the acquisition of a large number of data streams that are simultaneously generated by multiple distributed devices, a paradigm of data generation and transmission known as event-streaming. In order to be useful to the application, the collected data require a preprocessing step called data fusion, which entails the temporal alignment of the multimedia data. A practical way to perform this task is in a centralized manner, assuming that the network nodes function only as collector entities. However, under this scheme a considerable amount of redundant information is transmitted to the central entity. To decrease such redundancy, data fusion must be performed in a collaborative way. In this paper, we propose a collaborative data alignment approach for event-streaming. Our approach identifies temporal relationships by translating temporal dependencies based on a timeline into causal dependencies of the media involved.
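
For context on the alignment task itself, the sketch below shows the naive centralized strawman the paper improves on: grouping timestamped readings from several streams into common time windows. It is not the authors' collaborative, causality-based method; the window length and record format are assumptions.

```python
from collections import defaultdict

def align_by_window(streams, window):
    """Group timestamped readings from several streams into common time windows.

    streams: dict of stream_id -> list of (timestamp, value) pairs.
    window:  window length in the same time unit as the timestamps.
    Returns dict of window index -> {stream_id: [values]}.
    """
    buckets = defaultdict(lambda: defaultdict(list))
    for stream_id, readings in streams.items():
        for t, value in readings:
            buckets[int(t // window)][stream_id].append(value)
    return buckets

# Toy example: two sensor streams with different rates and offsets.
streams = {
    "ecg":   [(0.01, 0.9), (0.26, 1.1), (0.52, 1.0)],
    "video": [(0.10, "frame0"), (0.43, "frame1")],
}
for w, group in sorted(align_by_window(streams, window=0.25).items()):
    print(f"window {w}:", dict(group))
```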

The Improvement of Target Motion Analysis(TMA) for Submarine with Data Fusion (정보융합 기법을 활용한 잠수함 표적기동분석 성능향상 연구)

  • Lim, Young-Taek;Ko, Soon-Ju;Song, Taek-Lyul
    • Journal of the Korea Institute of Military Science and Technology / v.12 no.6 / pp.697-703 / 2009
  • Target Motion Analysis (TMA) estimates a target's position, velocity, and course from bearing-only measurements provided by a passive sonar system. In this paper, we apply TMA algorithms with Multi-Sensor Data Fusion (MSDF) to a submarine and determine the best TMA algorithm for a submarine through a series of computer simulation runs.
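
As a point of reference for the bearing-only setting, the sketch below shows the measurement model common to most TMA formulations: the noisy bearing from own-ship to the target. It is not the paper's MSDF algorithm; the coordinates, noise level, and names are assumptions.

```python
import math
import random

def bearing(own_pos, target_pos, sigma=0.0):
    """Bearing (radians, measured from the x-axis) from own-ship to the target,
    optionally corrupted with zero-mean Gaussian noise of standard deviation sigma."""
    dx = target_pos[0] - own_pos[0]
    dy = target_pos[1] - own_pos[1]
    return math.atan2(dy, dx) + random.gauss(0.0, sigma)

# Toy example: own-ship maneuvers while observing a stationary target.
target = (5000.0, 3000.0)                      # metres
for k, own in enumerate([(0, 0), (100, 0), (200, 50), (300, 150)]):
    b = math.degrees(bearing(own, target, sigma=0.01))
    print(f"t={k}: measured bearing = {b:.2f} deg")
```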

Multi-Frame Face Classification with Decision-Level Fusion based on Photon-Counting Linear Discriminant Analysis

  • Yeom, Seokwon
    • International Journal of Fuzzy Logic and Intelligent Systems / v.14 no.4 / pp.332-339 / 2014
  • Face classification has wide applications in security and surveillance. However, this technique presents various challenges caused by pose, illumination, and expression changes. Face recognition with long-distance images involves additional challenges, owing to focusing problems and motion blurring. Multiple frames acquired under varying spatial or temporal settings can provide additional information, which can be used to achieve improved classification performance. This study investigates the effectiveness of multi-frame decision-level fusion with photon-counting linear discriminant analysis. Multiple frames generate multiple scores for each class. The fusion process comprises three stages: score normalization, score validation, and score combination. Candidate scores are selected during the score validation process after the scores are normalized; the validation step removes bad scores that can degrade the final output. The selected candidate scores are combined using one of the following fusion rules: maximum, averaging, or majority voting. Degraded facial images are employed to demonstrate the robustness of multi-frame decision-level fusion in harsh environments. Out-of-focus and motion-blurring point-spread functions are applied to the test images to simulate long-distance acquisition. Experimental results with three facial data sets indicate the efficiency of the proposed decision-level fusion scheme.
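
The three-stage fusion outlined in the abstract (normalize, validate, combine) can be sketched generically as follows. The min-max normalization, the peak-score validation criterion, and the keep ratio are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

def fuse_scores(frame_scores, rule="average", keep_ratio=0.8):
    """Decision-level fusion of per-frame class scores.

    frame_scores: array of shape (n_frames, n_classes).
    rule: 'average', 'maximum', or 'vote'.
    keep_ratio: fraction of frames kept after validation (best frames by peak score).
    Returns the index of the winning class.
    """
    s = np.asarray(frame_scores, dtype=float)
    # 1) score normalization (min-max per frame)
    span = s.max(axis=1, keepdims=True) - s.min(axis=1, keepdims=True)
    s = (s - s.min(axis=1, keepdims=True)) / (span + 1e-12)
    # 2) score validation: drop frames whose best score is weakest
    n_keep = max(1, int(round(keep_ratio * len(s))))
    s = s[np.argsort(s.max(axis=1))[::-1][:n_keep]]
    # 3) score combination
    if rule == "average":
        return int(np.argmax(s.mean(axis=0)))
    if rule == "maximum":
        return int(np.argmax(s.max(axis=0)))
    if rule == "vote":
        return int(np.bincount(np.argmax(s, axis=1)).argmax())
    raise ValueError(f"unknown rule: {rule}")

scores = [[0.2, 0.7, 0.1], [0.3, 0.6, 0.1], [0.9, 0.05, 0.05]]  # 3 frames, 3 classes
print(fuse_scores(scores, rule="vote"))   # majority voting over validated frames
```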

Generalized IHS-Based Satellite Imagery Fusion Using Spectral Response Functions

  • Kim, Yong-Hyun;Eo, Yang-Dam;Kim, Youn-Soo;Kim, Yong-Il
    • ETRI Journal / v.33 no.4 / pp.497-505 / 2011
  • Image fusion is a technique that integrates the spatial details of a high-resolution panchromatic (HRP) image with the spectral information of low-resolution multispectral (LRM) images to produce high-resolution multispectral images. The key requirement in image fusion is to enhance the spatial details of the HRP image while maintaining the spectral information of the LRM images, which implies that the physical characteristics of the satellite sensor should be considered in the fusion process. Also, to fuse massive satellite images, the fusion method should have low computation costs. In this paper, we propose a fast and efficient satellite image fusion method. The proposed method uses the spectral response functions of the satellite sensor and thus rationally reflects the physical characteristics of the sensor in the fused image. As a result, the proposed method provides high-quality fused images in terms of both spectral and spatial evaluations. Experimental results on IKONOS images indicate that the proposed method outperforms intensity-hue-saturation and wavelet-based methods.
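
The generalized IHS (GIHS) family of pan-sharpening methods that this work builds on can be written in a few lines: an intensity image is formed as a weighted sum of the multispectral bands, and the difference between the panchromatic image and that intensity is injected into every band. In the paper the weights are derived from the sensor's spectral response functions; the uniform weights and random data below are placeholders.

```python
import numpy as np

def gihs_fusion(ms, pan, weights):
    """Generalized IHS fusion.

    ms:      array (bands, H, W) of upsampled low-resolution multispectral bands.
    pan:     array (H, W), high-resolution panchromatic image.
    weights: per-band weights, ideally derived from the sensor's spectral
             response functions (placeholders here).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    intensity = np.tensordot(w, ms, axes=1)      # weighted sum of the MS bands
    detail = pan - intensity                     # spatial detail to inject
    return ms + detail[None, :, :]               # add the same detail to every band

# Toy example with random 4-band data (e.g., blue/green/red/NIR).
rng = np.random.default_rng(0)
ms = rng.random((4, 64, 64))
pan = rng.random((64, 64))
fused = gihs_fusion(ms, pan, weights=[0.25, 0.25, 0.25, 0.25])
print(fused.shape)
```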

Evaluation of Geo-based Image Fusion on Mobile Cloud Environment using Histogram Similarity Analysis

  • Lee, Kiwon;Kang, Sanggoo
    • Korean Journal of Remote Sensing / v.31 no.1 / pp.1-9 / 2015
  • Mobility and cloud platforms have become dominant paradigms for developing web services that handle huge and diverse digital contents for scientific or engineering applications. These two trends are technically combined into a mobile cloud computing environment that takes the benefits of each. The intention of this study is to design and implement a mobile cloud application for remotely sensed image fusion as a step toward practical geo-based mobile services. In this implementation, the system architecture consists of two parts: a mobile web client and a cloud application server. The mobile web client provides the user interface for image fusion processing and image visualization, as well as mobile web services for data listing and browsing. The cloud application server runs on OpenStack, an open-source cloud platform, where three server instances are created: a web server, a tiling server, and a fusion server. After metadata browsing of the input data, image fusion by a Bayesian approach is performed using functions of the Orfeo Toolbox (OTB), an open-source remote sensing library. In addition, the similarity of the fused images with respect to the input image set is estimated with histogram distance metrics, which can serve as a reference criterion for choosing user parameters in Bayesian image fusion. The implementation strategy, based entirely on open-source components, is expected to benefit mobile services that support other remote sensing functions besides image fusion, according to user demands, and to help expand remote sensing application fields.
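
The histogram-based evaluation mentioned in the abstract can be illustrated with a simple distance between the normalized gray-level histograms of a fused band and the corresponding input band. Histogram intersection is used here as one common choice; the paper may use other distance metrics, and the bin count, value range, and test data are assumptions.

```python
import numpy as np

def histogram_intersection(img_a, img_b, bins=64, value_range=(0.0, 1.0)):
    """Similarity in [0, 1] between the normalized histograms of two images
    (1.0 means identical histograms)."""
    h_a, _ = np.histogram(img_a, bins=bins, range=value_range)
    h_b, _ = np.histogram(img_b, bins=bins, range=value_range)
    h_a = h_a / h_a.sum()
    h_b = h_b / h_b.sum()
    return float(np.minimum(h_a, h_b).sum())

# Toy example: compare an input band with a slightly perturbed "fused" band.
rng = np.random.default_rng(1)
original = rng.random((128, 128))
fused = np.clip(original + 0.05 * rng.standard_normal((128, 128)), 0.0, 1.0)
print(f"histogram similarity: {histogram_intersection(original, fused):.3f}")
```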

Developing Data Fusion Method for Indoor Space Modeling based on IndoorGML Core Module

  • Lee, Jiyeong;Kang, Hye Young;Kim, Yun Ji
    • Spatial Information Research / v.22 no.2 / pp.31-44 / 2014
  • Depending on the purpose of an application, the application program utilizes the most suitable data model, and 3D modeling data are generated based on the selected data model. For these reasons, various data sets exist that represent the same geographical features. These duplicated data sets cause serious problems in system interoperability and data compatibility, as well as cost problems for the geo-spatial information industry. To overcome these problems, this study proposes a spatial data fusion method that uses topological relationships among spatial objects in the feature classes, called the Topological Relation Model (TRM). The TRM is a spatial data fusion method implemented at the application level, which means that the geometric data generated by two different data models are used directly, without any data exchange or conversion processes, in an application system providing indoor LBS. The topological relationships are defined and described using the basic concepts of IndoorGML. After describing the concepts of the TRM, experimental implementations of the proposed data fusion method in 3D GIS are presented. The final section summarizes the limitations of this study and further research.
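
The application-level fusion idea, linking geometries from two data models through shared topology rather than converting them, can be caricatured with an IndoorGML-style node-relation structure in which each cell node simply holds references into both source data sets. The class layout, model names, and identifiers below are hypothetical illustrations, not the paper's TRM specification.

```python
from dataclasses import dataclass, field

@dataclass
class CellNode:
    """One IndoorGML-style cell space, referencing geometry in two source models."""
    cell_id: str
    geometry_refs: dict = field(default_factory=dict)   # model name -> object id
    adjacent: list = field(default_factory=list)        # ids of topologically adjacent cells

# Hypothetical fused indoor model: a room and a corridor linked by adjacency.
cells = {
    "room101": CellNode("room101", {"mesh_model": "m_0017", "floorplan": "fp_A12"}, ["corridor1"]),
    "corridor1": CellNode("corridor1", {"mesh_model": "m_0020", "floorplan": "fp_A13"}, ["room101"]),
}

def geometry_for(cell_id, model):
    """Resolve a cell to the geometry id of whichever source model the application needs."""
    return cells[cell_id].geometry_refs[model]

# An indoor LBS can traverse the adjacency graph first and only then fetch geometry.
print(cells["room101"].adjacent, geometry_for("room101", "floorplan"))
```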

A Study on Mobile Robot Navigation Using a New Sensor Fusion

  • Tack, Han-Ho;Jin, Tae-Seok;Lee, Sang-Bae
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.471-475 / 2003
  • This paper proposes a sensor fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to enable accurate measurements, such as the distance to an obstacle and the location of the service robot itself. In conventional fusion schemes, the measurement depends only on the current data sets; as a result, more sensors are required to measure a certain physical parameter or to improve measurement accuracy. In this approach, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized to improve the measurement. The theoretical basis is illustrated by examples, and the effectiveness is demonstrated through simulations. Finally, the new space and time sensor fusion (STSF) scheme is applied to the control of a mobile robot in both structured and unstructured environments.
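
The central idea, reusing earlier measurements by transforming them into the current robot frame instead of adding sensors, can be sketched for a planar robot with 2-D range points and odometry. The simple averaging at the end stands in for the paper's STSF fusion step; the pose representation, data values, and names are assumptions.

```python
import math
import numpy as np

def to_current_frame(points, old_pose, new_pose):
    """Re-express 2-D points measured in an old robot frame in the current frame.

    Poses are (x, y, heading) in the world frame; points is an (N, 2) array
    expressed in the robot frame of old_pose.
    """
    def pose_matrix(x, y, th):
        return np.array([[math.cos(th), -math.sin(th), x],
                         [math.sin(th),  math.cos(th), y],
                         [0.0, 0.0, 1.0]])
    T = np.linalg.inv(pose_matrix(*new_pose)) @ pose_matrix(*old_pose)
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates
    return (pts @ T.T)[:, :2]

# Toy example: an obstacle point seen one step ago, fused with the current reading.
previous_scan = np.array([[2.0, 0.0]])                   # seen from the old pose
old_pose, new_pose = (0.0, 0.0, 0.0), (0.5, 0.0, 0.0)    # robot moved 0.5 m forward
remapped = to_current_frame(previous_scan, old_pose, new_pose)
current_scan = np.array([[1.52, 0.01]])
fused = (remapped + current_scan) / 2.0                  # naive temporal fusion
print(fused)
```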

TEXTURE ANALYSIS, IMAGE FUSION AND KOMPSAT-1

  • Kressler, F.P.;Kim, Y.S.;Steinnocher, K.T.
    • Proceedings of the KSRS Conference / 2002.10a / pp.792-797 / 2002
  • In the following paper, two algorithms suitable for the analysis of panchromatic data, as provided by KOMPSAT-1, are presented. One is a texture analysis used to create a settlement mask based on the variation of gray values. The other is a fusion algorithm that combines high-resolution panchromatic data with medium-resolution multispectral data. The procedure developed for this purpose uses the spatial information present in the high-resolution image to spatially enhance the low-resolution image while keeping the distortion of the multispectral information to a minimum. This makes it possible to use the fusion results in standard multispectral classification routines. The procedures presented here can be automated to a large extent, making them suitable for a standard satellite data processing routine.
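
A settlement mask driven by gray-value variation, the first algorithm mentioned, is commonly built from a local variance (texture) measure followed by a threshold. The sketch below computes local variance with summed-area tables; the window size, threshold, and random test image are arbitrary choices for illustration, not the parameters used by the authors.

```python
import numpy as np

def local_variance(img, win=5):
    """Local gray-value variance of a 2-D image over a win x win neighborhood,
    computed with cumulative sums (no external dependencies)."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")

    def box_sum(a):
        # summed-area table -> sum over every win x win window
        c = np.cumsum(np.cumsum(a, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))
        return c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]

    n = win * win
    mean = box_sum(padded) / n
    mean_sq = box_sum(padded ** 2) / n
    return mean_sq - mean ** 2

rng = np.random.default_rng(2)
pan = rng.random((100, 100))                 # stand-in for a panchromatic image
mask = local_variance(pan, win=5) > 0.05     # high texture -> candidate settlement pixel
print(mask.mean())                           # fraction of pixels flagged
```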


Fault Diagnosis of Induction Motors Using Data Fusion of Vibration and Current Signals (진동 및 전류신호의 데이터융합을 이용한 유도전동기의 결함진단)

  • 김광진;한천
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.14 no.11 / pp.1091-1100 / 2004
  • This paper presents an approach for monitoring and detecting faults in induction machines using a data fusion technique based on Dempster-Shafer theory. Features are extracted from the motor stator current and vibration signals, and a neural network is trained and tested with the selected features of the measured data. Fusing the classification results of the vibration and current classifiers increases the diagnostic accuracy. The efficiency of the proposed system is demonstrated by detecting electrical and mechanical faults in induction motors, and the test results confirm that the proposed system has potential for real-time application.
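
The fusion step described here, combining the outputs of a vibration-based and a current-based classifier with Dempster-Shafer theory, comes down to Dempster's rule of combination over the two classifiers' basic probability assignments. The fault labels and mass values below are invented for illustration; only the combination rule itself is standard.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.

    m1, m2: dicts mapping focal elements (frozensets of hypotheses) to masses.
    """
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb        # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence from the two classifiers over fault hypotheses.
BROKEN_BAR, BEARING, HEALTHY = "broken_bar", "bearing_fault", "healthy"
frame = frozenset({BROKEN_BAR, BEARING, HEALTHY})
m_vibration = {frozenset({BEARING}): 0.6, frame: 0.4}
m_current = {frozenset({BROKEN_BAR, BEARING}): 0.5, frozenset({HEALTHY}): 0.2, frame: 0.3}
for focal, mass in dempster_combine(m_vibration, m_current).items():
    print(set(focal), round(mass, 3))
```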