• Title/Summary/Keyword: Information Fusion

Search Results: 1,871

Environmental Survey Data Analysis by Data Fusion Techniques

  • Cho, Kwang-Hyun;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society / v.17 no.4 / pp.1201-1208 / 2006
  • Data fusion is generally defined as the use of techniques that combine data from multiple sources so that inferences can be drawn from the pooled information; it is also called data combination or data matching. Data fusion is commonly divided into five types: exact matching, judgemental matching, probability matching, statistical matching, and data linking. Gyeongnam province currently conducts a social survey of its residents every year, but because each topic is covered only on a three-year cycle, the scope of analysis is limited. In this paper, we study data fusion of environmental survey data using a SAS macro. The fused outputs can be used for environmental preservation and environmental improvement. (A minimal key-matching sketch follows this entry.)

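The exact-matching branch described above reduces, in the simplest case, to joining two survey files on a shared respondent key. Below is a minimal sketch in Python with pandas rather than the paper's SAS macro; the column names (respondent_id, income, air_quality_rating) are hypothetical.

```python
import pandas as pd

# Hypothetical donor and recipient survey files sharing a respondent key.
social = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "income": [2400, 3100, 2800],
})
environment = pd.DataFrame({
    "respondent_id": [101, 102, 104],
    "air_quality_rating": [3, 4, 2],
})

# Exact matching: keep only respondents present in both surveys.
fused = social.merge(environment, on="respondent_id", how="inner")
print(fused)
```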

An Adaptive FIHS Fusion Using Spatial and Spectral Band Characteristics of Remote Sensing Image (위성 영상의 공간 및 분광대역 특성을 활용한 적응 FIHS 융합)

  • Seo, Yong-Su;Kim, Joong-Gon
    • Journal of the Korean Association of Geographic Information Studies / v.12 no.4 / pp.125-135 / 2009
  • Owing to its low computational cost, FIHS (Fast Intensity-Hue-Saturation) fusion is widely used for image fusion. However, FIHS fusion distorts color in the same way as the IHS (Intensity-Hue-Saturation) technique. In this paper, a FIHS fusion technique (FIHS-BR) that reduces color distortion by using the ratio of each spectral band, and an adaptive FIHS fusion (FIHS-SABR) that additionally exploits spatial information, are proposed. FIHS-BR reduces color distortion by adding a different spatial-detail enhancement value to each spectral band, where the enhancement values are derived from the band ratios. FIHS-SABR reduces color distortion further by readjusting these per-band enhancement values according to the band ratios, deriving them adaptively from the spatial characteristics of the local image. To evaluate the two methods, a computer simulation is performed on IKONOS remote sensing imagery. The experiments show that the proposed methods produce less color distortion in forest regions, which suffer severe distortion under traditional FIHS fusion, and the evaluation of the spectral characteristics of the fused images shows that they give the best results. (A brief FIHS sketch follows this entry.)

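For orientation, standard FIHS injects the same spatial detail (PAN - I) into every multispectral band, and the band-ratio idea scales that detail per band. Below is a minimal numpy sketch on synthetic arrays; the per-band weighting shown is only an assumed illustration of the approach, not the paper's exact FIHS-BR or FIHS-SABR formula.

```python
import numpy as np

rng = np.random.default_rng(0)
ms = rng.random((3, 64, 64))        # multispectral bands, resampled to the PAN grid
pan = rng.random((64, 64))          # panchromatic band

intensity = ms.mean(axis=0)         # I component of the fast IHS model
detail = pan - intensity            # spatial detail to inject

fihs = ms + detail                  # standard FIHS: the same detail for every band

# Band-ratio weighting (illustrative): give each band detail in proportion to
# its contribution to the intensity, so brighter bands receive more detail.
weights = ms / (ms.sum(axis=0) + 1e-8)
fihs_br = ms + 3.0 * weights * detail
print(fihs.shape, fihs_br.shape)
```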

Classification of Textured Images Based on Discrete Wavelet Transform and Information Fusion

  • Anibou, Chaimae;Saidi, Mohammed Nabil;Aboutajdine, Driss
    • Journal of Information Processing Systems / v.11 no.3 / pp.421-437 / 2015
  • This paper presents a supervised classification algorithm based on data fusion for the segmentation of textured images. Feature extraction is based on the discrete wavelet transform (DWT). In the segmentation stage, the estimated feature vector of each pixel is sent to a support vector machine (SVM) classifier for initial labeling. To obtain a more accurate segmentation, two information-fusion strategies are used: the first combines the decisions made by the SVM classifier within a sliding window (decision-level fusion), and the second uses fuzzy set theory and probability-based rules to combine the scores obtained by the SVM over a sliding window. The performance of the proposed algorithm is demonstrated on a variety of synthetic and real images; the fusion step improves classification accuracy compared with applying the SVM classifier alone, raising the overall accuracy on the textured images from 88% to as much as 96%, depending on the size of the database. (A small decision-fusion sketch follows this entry.)
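
The decision-level strategy above can be read, in its simplest form, as a majority vote over the SVM labels inside a sliding window. Below is a minimal sketch on random labels; the DWT feature extraction and the fuzzy/probabilistic score fusion are omitted.

```python
import numpy as np
from scipy.ndimage import generic_filter

# Hypothetical per-pixel labels from an initial SVM classification (3 texture classes).
rng = np.random.default_rng(1)
svm_labels = rng.integers(0, 3, size=(32, 32))

def majority(window):
    """Decision-level fusion: replace the centre label by the window's majority label."""
    values, counts = np.unique(window.astype(int), return_counts=True)
    return values[np.argmax(counts)]

fused_labels = generic_filter(svm_labels, majority, size=5, mode="nearest")
print(fused_labels.shape)
```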

An Improved Multi-resolution image fusion framework using image enhancement technique

  • Jhee, Hojin;Jang, Chulhee;Jin, Sanghun;Hong, Yonghee
    • Journal of the Korea Society of Computer and Information / v.22 no.12 / pp.69-77 / 2017
  • This paper presents a novel framework for multi-scale image fusion. The Multi-scale Kalman Smoothing (MKS) algorithm with a quad-tree structure provides a powerful multi-resolution fusion scheme by exploiting the Markov property. Such an approach generally offers excellent fusion accuracy and efficiency; however, the quad-tree based method is often difficult to apply in practice because its stair-like covariance structure produces unrealistic blocky artifacts in regions where the finest-scale data are void or missing. To mitigate this structural artifact, a new multi-scale fusion framework is proposed. By applying a Super-Resolution (SR) technique within the MKS algorithm, finely resolved measurements are generated and blended through the tree structure, so that the missing detail in the fine-scale image is properly inferred and the blocky artifact is suppressed in the fusion result. Simulation results show that the proposed method significantly improves on the conventional MKS algorithm in both Root Mean Square Error (RMSE) and visual quality. (A toy gap-filling sketch follows this entry.)
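
The core remedy, blending a finer version of the coarse data into regions where fine-scale measurements are missing, can be illustrated without the full quad-tree Kalman machinery. Below is a toy sketch in which a bilinear upsample stands in for the paper's SR step; all data are synthetic.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(2)
fine = rng.random((64, 64))                              # fine-scale measurement
coarse = fine.reshape(32, 2, 32, 2).mean(axis=(1, 3))    # coarse-scale measurement

# Simulate a void region in the fine-scale data.
mask = np.ones_like(fine, dtype=bool)
mask[16:40, 16:40] = False            # False = missing fine-scale samples

# Stand-in for SR: upsample the coarse image to the fine grid (bilinear only).
upsampled = zoom(coarse, 2, order=1)

# Blend: keep fine data where available, infer detail from the upsampled image elsewhere.
fused = np.where(mask, fine, upsampled)
print(float(np.abs(fused - fine)[~mask].mean()))
```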

A Trade-off Image Fusion Technique Using Fast Intensity-Hue-Saturation Transform (Fast IHS 변환을 이용한 trade-off 영상 융합기법)

  • Kim, Yong-Hyun;Kim, Youn-Soo
    • Aerospace Engineering and Technology / v.8 no.2 / pp.26-32 / 2009
  • In satellite image fusion, the most important point is to preserve both the spatial detail of the panchromatic (PAN) image and the spectral information of the multispectral (MS) image. Among various image fusion techniques, fusion based on the Intensity-Hue-Saturation (IHS) transform is widely used and has the advantage of very simple computation. In this study, a fusion technique using the fast IHS transform and a trade-off parameter $\alpha^i$ is proposed. The proposed technique permits customization of the trade-off between spectral information and spatial detail in the fused image through two quality indices: a spectral index (the spectral ERGAS) and a spatial one (the spatial ERGAS). Experiments with IKONOS imagery confirm that the proposed technique preserves spatial detail and spectral information more effectively than existing fusion techniques based on the fast IHS transform. (An ERGAS sketch follows this entry.)

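The two quality indices are variants of ERGAS, whose standard definition is well known. Below is a small sketch of the generic ERGAS computation on synthetic bands; the spectral and spatial variants used in the paper differ mainly in the reference image against which the fused bands are compared.

```python
import numpy as np

def ergas(fused, reference, ratio=0.25):
    """Relative dimensionless global error (ERGAS).

    fused, reference: arrays of shape (bands, H, W).
    ratio: high/low resolution ratio, e.g. 1 m / 4 m = 0.25 for IKONOS.
    """
    terms = []
    for f_band, r_band in zip(fused, reference):
        rmse = np.sqrt(np.mean((f_band - r_band) ** 2))
        terms.append((rmse / r_band.mean()) ** 2)
    return 100.0 * ratio * np.sqrt(np.mean(terms))

rng = np.random.default_rng(3)
reference = rng.random((4, 64, 64)) + 0.5
fused = reference + 0.01 * rng.standard_normal((4, 64, 64))
print(ergas(fused, reference))
```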

Study of the structural damage identification method based on multi-mode information fusion

  • Liu, Tao;Li, AiQun;Ding, YouLiang;Zhao, DaLiang
    • Structural Engineering and Mechanics / v.31 no.3 / pp.333-347 / 2009
  • Due to structural complexity, structural health monitoring for civil engineering needs more accurate and effective methods of damage identification. This study introduces multi-source information fusion (MSIF) into structural damage diagnosis to improve the validity of damage detection. First, the essential theory and applied mathematical methods of MSIF are introduced. A structural damage identification method based on multi-mode information fusion is then put forward. A numerical simulation of a concrete continuous box-girder bridge clearly indicates that the improved modal strain energy method based on multi-mode information fusion is more sensitive to initial structural damage and more robust to noise than the classical modal strain energy method, while requiring much less modal information. When the noise intensity is no greater than 10%, the method identifies initial structural damage reliably. In summary, the proposed method offers better damage identification performance and good practicability for real structures. (A small sketch of fusing per-mode indicators follows this entry.)
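
One elementary way to fuse per-mode damage indicators is to normalise each mode's indicator vector and average across modes. Below is a toy sketch with synthetic indicators; the paper's actual fusion rule and modal strain energy computation are more elaborate.

```python
import numpy as np

# Hypothetical damage indicators (e.g. modal strain energy change ratios)
# for 10 elements, computed independently from 3 vibration modes.
rng = np.random.default_rng(4)
indicators = rng.random((3, 10)) * 0.2
indicators[:, 4] += 1.0          # element 5 carries the simulated damage

# Normalise each mode's indicators, then fuse across modes by simple averaging
# (one basic information-fusion rule, used here purely for illustration).
normalised = indicators / indicators.sum(axis=1, keepdims=True)
fused = normalised.mean(axis=0)

print("most likely damaged element:", int(np.argmax(fused)) + 1)
```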

Street Fashion Information Analysis System Design Using Data Fusion

  • Park, Hee-Chang;Park, Hye-Won
    • Proceedings of the Korean Data and Information Science Society Conference / 2005.10a / pp.35-45 / 2005
  • Data fusion is a method for combining data. The purpose of this study is to design and implement a street fashion information analysis system using data fusion. The system can offer varied and practical information because it fuses image data and survey data on street fashion. Data fusion methods include exact matching, judgemental matching, probability matching, statistical matching, and data linking; in this study, the exact matching method is used. The system supports visual analysis from the customer's viewpoint because it can analyze the image data and the survey data both separately and as fused data. (A short fusion-and-analysis sketch follows this entry.)

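As in the environmental-survey entry above, exact matching here amounts to a key-based join, after which image attributes and survey answers can be analysed together. Below is a short sketch with pandas; subject_id, dominant_color and age_group are hypothetical fields, not the system's actual schema.

```python
import pandas as pd

# Hypothetical street-fashion image records and survey answers sharing a subject ID.
images = pd.DataFrame({
    "subject_id": [1, 2, 3, 4],
    "dominant_color": ["black", "beige", "black", "navy"],
})
survey = pd.DataFrame({
    "subject_id": [1, 2, 3, 4],
    "age_group": ["20s", "30s", "20s", "20s"],
})

# Exact matching on the shared key, then a simple analysis of the fused data.
fused = images.merge(survey, on="subject_id", how="inner")
print(fused.groupby("age_group")["dominant_color"].agg(lambda s: s.mode().iloc[0]))
```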

Multimodal Biometric Using a Hierarchical Fusion of a Person's Face, Voice, and Online Signature

  • Elmir, Youssef;Elberrichi, Zakaria;Adjoudj, Reda
    • Journal of Information Processing Systems / v.10 no.4 / pp.555-567 / 2014
  • Improving biometric performance is a challenging task. In this paper, a hierarchical fusion strategy for a multimodal biometric system is presented. The strategy combines several biometric traits through a multi-level fusion hierarchy comprising a pre-classification fusion stage with optimal feature selection and a post-classification fusion stage based on the maximum of the matching scores. The proposed solution improves recognition performance through suitable feature selection and reduction, such as principal component analysis (PCA) and linear discriminant analysis (LDA), since not all components of the feature vectors contribute to the performance improvement. (A minimal score-level fusion sketch follows this entry.)
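
The post-classification stage, taking the maximum of matching scores across modalities, is straightforward to sketch. Below is a minimal example with min-max normalisation and the max rule on made-up face, voice and signature scores; the pre-classification PCA/LDA stage is omitted.

```python
import numpy as np

# Hypothetical matching scores of one probe against 5 enrolled identities,
# produced independently by face, voice and online-signature matchers.
scores = {
    "face":      np.array([0.62, 0.91, 0.40, 0.55, 0.48]),
    "voice":     np.array([0.30, 0.75, 0.20, 0.85, 0.44]),
    "signature": np.array([0.51, 0.80, 0.33, 0.60, 0.42]),
}

def min_max(x):
    """Normalise scores to [0, 1] so the modalities are comparable."""
    return (x - x.min()) / (x.max() - x.min())

# Post-classification fusion: the maximum of the normalised matching scores.
fused = np.max([min_max(s) for s in scores.values()], axis=0)
print("accepted identity:", int(np.argmax(fused)))
```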

Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping;Qin, Zheng;Wang, Guolong;Zhang, Huidi;Huang, Kai;Ye, Shuxiong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.5 / pp.2253-2272 / 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, whether in the spatial or the transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean version through a chain of convolutional layers, a fusion layer, and deconvolutional layers. Our deep fusion model is efficient and robust while demonstrating state-of-the-art fusion quality. We explore different parameter settings to trade off performance against speed. Experiments on our training dataset show that the network performs well under both subjective visual perception and objective assessment metrics. (A minimal two-stream sketch follows this entry.)
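
A two-stream fully convolutional design of this kind, with two encoders, a fusion layer, and a deconvolutional decoder, can be sketched in a few lines of PyTorch. The layer counts and channel sizes below are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Toy two-stream fully convolutional fusion network."""
    def __init__(self):
        super().__init__()
        # One encoder per differently focused input image.
        self.enc_a = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        # Fusion layer: merge the two feature streams.
        self.fuse = nn.Conv2d(32, 16, 1)
        # Deconvolutional decoder back to image resolution.
        self.dec = nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1)

    def forward(self, img_a, img_b):
        feats = torch.cat([self.enc_a(img_a), self.enc_b(img_b)], dim=1)
        return self.dec(self.fuse(feats))

net = TwoStreamFusion()
a, b = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
print(net(a, b).shape)   # torch.Size([1, 1, 64, 64])
```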

An Improved Remote Sensing Image Fusion Algorithm Based on IHS Transformation

  • Deng, Chao;Wang, Zhi-heng;Li, Xing-wang;Li, Hui-na;Cavalcante, Charles Casimiro
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.3 / pp.1633-1649 / 2017
  • In remote sensing image processing, the traditional fusion algorithm is based on the Intensity-Hue-Saturation (IHS) transformation. This method does not adequately account for the texture and spectral information, the spatial resolution, or the statistical properties of the images, which leads to spectral distortion in the fused image. Although traditional solutions combine several methods, the resulting fusion procedure is rather complicated and not suitable for practical operation. In this paper, an improved IHS fusion algorithm based on a local variance weighting scheme is proposed for remote sensing images. First, the local variance of the SPOT image (from the French "Systeme Probatoire d'Observation de la Terre", an Earth observation system) is calculated using sliding windows of different sizes; the optimal window size is then selected and the images are normalized with the local variance at that window size. Second, a power exponent is chosen as the mapping function, and the local variance is used to obtain the weight of the I component and to match the SPOT image. The I' component is then obtained from the weight, the I component, and the matched SPOT image. Finally, the fused image is obtained by the inverse Intensity-Hue-Saturation transformation of the I', H, and S components. The proposed algorithm has been tested and compared with other image fusion methods well known in the literature. Simulation results indicate that it produces a superior fused image according to quantitative fusion evaluation indices. (A minimal local-variance weighting sketch follows this entry.)
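
The central ingredient, a per-pixel weight derived from the local variance of the high-resolution band, can be sketched with a uniform filter. The weighting and the I' construction below are simplified stand-ins for the paper's power-exponent mapping and matching step; all data are synthetic.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, window=7):
    """Per-pixel variance over a sliding window (uniform_filter gives local means)."""
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    return np.clip(mean_sq - mean ** 2, 0, None)

rng = np.random.default_rng(5)
ms = rng.random((3, 64, 64))            # multispectral bands, resampled to the PAN grid
pan = rng.random((64, 64))              # high-resolution panchromatic (SPOT-like) band

intensity = ms.mean(axis=0)
var = local_variance(pan)
weight = var / var.max()                # illustrative weight; the paper maps the local
                                        # variance through a power exponent instead

# Replace I by a locally weighted mix of I and PAN, then re-inject the change into
# the bands (a simplified stand-in for the I' construction and inverse IHS step).
i_new = (1.0 - weight) * intensity + weight * pan
fused = ms + (i_new - intensity)
print(fused.shape)
```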