• Title/Abstract/Keyword: fusion method


Comparison of Fusion Methods for Generating 250m MODIS Image

  • Kim, Sun-Hwa;Kang, Sung-Jin;Lee, Kyu-Sung
    • Korean Journal of Remote Sensing / Vol. 26, No. 3 / pp. 305-316 / 2010
  • The MODerate Resolution Imaging Spectroradiometer (MODIS) sensor provides 36 bands at 250 m, 500 m, and 1 km spatial resolution. However, the 500 m and 1 km MODIS data show limitations when applied to small areas with complex land cover types. In this study, we produce seven 250 m spectral bands by fusing the two 250 m MODIS bands with the five 500 m bands. To recommend the best fusion method for MODIS data, we compare seven fusion methods: the Brovey transform, the principal component analysis (PCA) fusion method, the Gram-Schmidt fusion method, the local mean and variance matching method, the least squares fusion method, the discrete wavelet fusion method, and the wavelet-PCA fusion method. The results of these methods are compared using various evaluation indicators, such as correlation, relative difference of means, relative variation, deviation index, peak signal-to-noise ratio, and the universal image quality index, as well as visual interpretation. Among the methods, the local mean and variance matching method provides the best fusion result both in visual interpretation and by the evaluation indicators. The 250 m MODIS fusion algorithm may be used to effectively improve the accuracy of various MODIS land products.
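
As a rough illustration of the winning method, mean and variance matching rescales the high-resolution band to the statistics of the co-registered low-resolution band. The sketch below is an assumption of this summary, not the paper's code, and applies the matching globally, whereas the paper computes the statistics in local windows:

```python
import numpy as np

def mean_variance_match(high_band, low_band):
    # Rescale the high-resolution band so its mean and variance
    # match those of the (upsampled) low-resolution band.
    h_mean, h_std = high_band.mean(), high_band.std()
    l_mean, l_std = low_band.mean(), low_band.std()
    return (high_band - h_mean) * (l_std / (h_std + 1e-12)) + l_mean
```

A windowed version would compute the four statistics over a sliding neighbourhood instead of the whole image, which preserves local radiometry at the cost of more computation.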

Rank-level Fusion Method That Improves Recognition Rate by Using Correlation Coefficient

  • 안정호;정재열;정익래
    • Journal of the Korea Institute of Information Security and Cryptology / Vol. 29, No. 5 / pp. 1007-1017 / 2019
  • Most current biometric authentication systems authenticate users with a single biometric trait, an approach that suffers from noise, sensitivity to data quality, spoofing, and limited recognition rates. One way to address these problems is to use multiple biometric traits. A multimodal biometric system performs information fusion on the individual traits to generate new information, and then authenticates the user with it. Among information fusion approaches, score-level fusion is the most widely used, but it requires score normalization, and the recognition rate varies with the normalization method even for identical data. Rank-level fusion, which needs no normalization, has been proposed as an alternative, but existing rank-level fusion methods achieve lower recognition rates than score-level fusion. To resolve this, we propose a rank-level fusion method that uses the correlation coefficient and achieves a higher recognition rate than score-level fusion. In experiments with iris data (CASIA V3) and face data (FERET V1), we compare the recognition rates of existing rank-level fusion methods with that of the proposed method, and also with score-level fusion methods. The results show recognition-rate improvements of roughly 0.3% to 3.3%.
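
A common rank-level scheme that weighted variants build on is the Borda count: each matcher's ranking contributes points per candidate, scaled by a per-matcher weight. The sketch below is a generic weighted Borda count, not the paper's correlation-based method; the paper derives the weights from correlation coefficients, while here they are simply given:

```python
def weighted_borda_fusion(rank_lists, weights):
    # rank_lists: one ranked candidate list per matcher, best first.
    # weights: one weight per matcher (e.g., derived from how well
    # each matcher's ranking correlates with the others').
    n = len(rank_lists[0])
    scores = {}
    for ranks, w in zip(rank_lists, weights):
        for pos, cand in enumerate(ranks):
            scores[cand] = scores.get(cand, 0.0) + w * (n - pos)
    # Re-rank candidates by fused score, best first.
    return sorted(scores, key=scores.get, reverse=True)
```

Because only ranks are consumed, no score normalization is needed, which is the advantage the abstract highlights over score-level fusion.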

FUSESHARP: A MULTI-IMAGE FOCUS FUSION METHOD USING DISCRETE WAVELET TRANSFORM AND UNSHARP MASKING

  • GARGI TRIVEDI;RAJESH SANGHAVI
    • Journal of Applied Mathematics & Informatics / Vol. 41, No. 5 / pp. 1115-1128 / 2023
  • In this paper, a novel hybrid method for multi-focus image fusion is proposed. The method combines the advantages of wavelet-transform-based methods and focus-measure-based methods to achieve an improved fusion result. The input images are first decomposed into different frequency sub-bands using the discrete wavelet transform (DWT). The focus measure of each sub-band is then calculated using the Laplacian of Gaussian (LoG) operator, and the sub-band with the highest focus measure is selected as the focused sub-band. The focused sub-band is sharpened using an unsharp masking filter to preserve the details in the focused part of the image. Finally, the sharpened focused sub-bands from all input images are fused using the maximum-intensity fusion method to preserve the important information from all focused images. The proposed method has been evaluated on standard multi-focus image fusion datasets and has shown promising results compared to existing methods.
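
The core selection rule can be illustrated in the spatial domain: compute a focus measure per pixel and keep whichever source image is sharper there. The sketch below is a simplified analogue of the paper's per-sub-band selection, using a plain discrete Laplacian instead of the LoG operator and skipping the DWT and unsharp-masking stages:

```python
import numpy as np

def laplacian_focus(img):
    # Discrete Laplacian magnitude as a simple focus measure
    # (large where the image is locally sharp).
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4.0 * img[1:-1, 1:-1])
    return np.abs(lap)

def fuse_by_focus(img_a, img_b):
    # Per-pixel maximum-focus selection rule.
    mask = laplacian_focus(img_a) >= laplacian_focus(img_b)
    return np.where(mask, img_a, img_b)
```

In the paper the same idea is applied per wavelet sub-band, which makes the selection less sensitive to noise than a purely pixel-wise rule.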

Fast and Efficient Satellite Imagery Fusion Using DT-CWT Proportional and Wavelet Zero-Padding

  • Kim, Yong-Hyun;Oh, Jae-Hong;Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / Vol. 33, No. 6 / pp. 517-526 / 2015
  • Among the various image fusion or pan-sharpening methods, wavelet-based methods provide superior radiometric quality. However, their fusion processing is neither simple nor flexible, since many low- and high-frequency sub-bands are produced in the wavelet domain. To address this issue, a novel fusion method based on DT-CWT (Dual-Tree Complex Wavelet Transform) proportional injection and WZP (Wavelet Zero-Padding) is proposed. The proposed method produces a single high-frequency image in the spatial domain that is injected into the LRM (Low-Resolution Multispectral) image; thus, wavelet-domain fusion is simplified to spatial-domain fusion. In addition, the proposed DT-CWTP (DT-CWT Proportional) fusion method does not need to decompose the LRM image, owing to WZP. Comparison indicates that the proposed fusion method is nearly five times faster than the DT-CWT with SW (Substitute-Wavelet) fusion method while maintaining radiometric quality. Experiments with WorldView-2 satellite images demonstrated promising results in both computational efficiency and fused image quality.
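
Wavelet zero-padding upsamples an image by treating it as the LL sub-band of a one-level decomposition, zeroing the detail sub-bands, and inverting the transform. For an orthonormal Haar basis this collapses to pixel replication with a 1/2 scale factor; the sketch below shows only that special case as a simplification (the paper works in the DT-CWT domain, not Haar):

```python
import numpy as np

def wzp_upsample_haar(ll):
    # Inverse orthonormal Haar transform of (LL, 0, 0, 0): each LL
    # coefficient spreads into a constant 2x2 block scaled by 1/2.
    return np.kron(ll, np.ones((2, 2))) / 2.0
```

The point of WZP in the proposed method is that the LRM image never needs its own forward decomposition, which is where most of the reported speedup comes from.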

A Study on Forecast of the Promising Fusion Technology by US Patent Analysis

  • 강희종;엄미정;김동명
    • Journal of Technology Innovation / Vol. 14, No. 3 / pp. 93-116 / 2006
  • This study provides a quantitative forecasting method to identify promising fusion technologies and applies the patent-analysis-based method to IT. The study defines fusion technology, promising technology, a fusion index, a promising index, and promising fusion technology. The analysis found that next-generation computer networks are the most promising in the IT area, a result consistent with forecasts made through expert interviews and discussion.


Generalized IHS-Based Satellite Imagery Fusion Using Spectral Response Functions

  • Kim, Yong-Hyun;Eo, Yang-Dam;Kim, Youn-Soo;Kim, Yong-Il
    • ETRI Journal / Vol. 33, No. 4 / pp. 497-505 / 2011
  • Image fusion integrates the spatial details of a high-resolution panchromatic (HRP) image with the spectral information of low-resolution multispectral (LRM) images to produce high-resolution multispectral images. The key requirement is to enhance the spatial details of the HRP image while preserving the spectral information of the LRM images, which implies that the physical characteristics of the satellite sensor should be considered in the fusion process. Also, to fuse massive volumes of satellite imagery, the fusion method should have a low computation cost. In this paper, we propose a fast and efficient satellite image fusion method that uses the spectral response functions of the satellite sensor, thereby rationally reflecting the sensor's physical characteristics in the fused image. As a result, the proposed method provides high-quality fused images in terms of both spectral and spatial evaluations. Experimental results on IKONOS images indicate that the proposed method outperforms intensity-hue-saturation and wavelet-based methods.
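
Generalized IHS fusion injects the difference between the panchromatic band and a weighted intensity into every multispectral band. A minimal sketch of that injection step, assuming the spectral-response weights are supplied by the caller (the paper derives them from the sensor's spectral response functions):

```python
import numpy as np

def gihs_fuse(ms_bands, pan, weights):
    # ms_bands: (B, H, W) multispectral bands upsampled to pan size.
    # weights: length-B spectral-response weights, summing to 1.
    intensity = np.tensordot(weights, ms_bands, axes=1)  # (H, W)
    detail = pan - intensity
    return ms_bands + detail  # inject the same detail into every band
```

Because the fusion is a single weighted sum and an addition, the computation cost stays low, which matches the abstract's emphasis on fusing massive imagery efficiently.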

A New Method of Remote Sensing Image Fusion Based on Modified Kohonen Networks

  • Shuhe, Zhao;Xiuwan, Chen;Junfeng, Chen;Yinghai, Ke
    • Korean Society of Remote Sensing / Proceedings of ACRS 2003 ISRS / pp. 1337-1339 / 2003
  • In this article, a new remote sensing image fusion model based on modified Kohonen networks is presented, together with a new fusion rule based on a modified voting rule. Shaoxing City, located in Zhejiang Province, P.R. China, was selected as the study site. A fusion experiment between Landsat TM data (30 m) and IRS-C Pan data (5.8 m) was performed using the proposed method. The results show that the new method performs better in low hill areas, with an overall classification accuracy 10% higher than the basic Kohonen method; the confusion between woodlands and waterbodies was also diminished.
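
A Kohonen network (self-organising map) clusters pixel vectors by repeatedly pulling the best-matching unit and its neighbours toward each input. One training step of a generic 1-D SOM, sketched here as background rather than the paper's modified variant:

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    # weights: (units, dims) codebook of a 1-D Kohonen map.
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    idx = np.arange(len(weights))
    # Gaussian neighbourhood centred on the best-matching unit.
    h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))
    return weights + lr * h[:, None] * (x - weights)
```

In image fusion the trained codebook serves as a set of spectral cluster prototypes; the paper's contribution lies in how the network and its voting rule are modified, which this sketch does not capture.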


A Study on the Performance Enhancement of Radar Target Classification Using the Two-Level Feature Vector Fusion Method

  • Kim, In-Ha;Choi, In-Sik;Chae, Dae-Young
    • Journal of Electromagnetic Engineering and Science / Vol. 18, No. 3 / pp. 206-211 / 2018
  • In this paper, we proposed a two-level feature vector fusion technique to improve the performance of target classification. The proposed method combines feature vectors of the early-time region and late-time region in the first-level fusion. In the second-level fusion, we combine the monostatic and bistatic features obtained in the first level. The radar cross section (RCS) of the 3D full-scale model is obtained using the electromagnetic analysis tool FEKO, and then, the feature vector of the target is extracted from it. The feature vector based on the waveform structure is used as the feature vector of the early-time region, while the resonance frequency extracted using the evolutionary programming-based CLEAN algorithm is used as the feature vector of the late-time region. The study results show that the two-level fusion method is better than the one-level fusion method.
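
Structurally, the two fusion levels amount to feature-vector concatenation: early- and late-time features are joined per geometry first, then the monostatic and bistatic results are joined. A trivial sketch of that structure (the feature extraction and classifier are out of scope here):

```python
def two_level_fusion(early_mono, late_mono, early_bi, late_bi):
    # Level 1: fuse early- and late-time features within each geometry.
    mono = early_mono + late_mono
    bistatic = early_bi + late_bi
    # Level 2: fuse the monostatic and bistatic feature vectors.
    return mono + bistatic
```

The fused vector is then what the classifier sees, so the gain reported in the abstract comes from the classifier having access to both time regions and both geometries at once.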

Perceptual Fusion of Infrared and Visible Image through Variational Multiscale with Guide Filtering

  • Feng, Xin;Hu, Kaiqun
    • Journal of Information Processing Systems / Vol. 15, No. 6 / pp. 1296-1305 / 2019
  • To solve the problems of poor noise suppression and frequent loss of edge contours and detail in current fusion methods, an infrared and visible light image fusion method based on variational multiscale decomposition is proposed. First, the two source images are separately decomposed via variational multiscale decomposition into texture components and structural components. Guided filtering is used to fuse the texture components. For the structural components, a method is proposed that measures the fusion weights with combined phase-consistency, sharpness, and brightness information. Finally, the fused texture and structural components are added to obtain the final fused image. Experimental results show that the proposed method displays very good noise robustness and achieves better fusion quality.
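
Guided filtering smooths an image while following the edges of a guide, which is why it suits texture-component fusion. The compact box-filter implementation below follows the standard guided-filter formulation as a sketch, not the paper's code:

```python
import numpy as np

def box_mean(img, r):
    # Mean over a (2r+1)x(2r+1) window via padded cumulative sums.
    n = 2 * r + 1
    H, W = img.shape
    c = np.pad(img, r, mode='edge').cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[n:n + H, n:n + W] - c[:H, n:n + W]
            - c[n:n + H, :W] + c[:H, :W]) / (n * n)

def guided_filter(guide, src, r=2, eps=1e-4):
    # Standard guided filter: output is locally linear in the guide,
    # so it smooths src while preserving the guide's edges.
    mg, ms = box_mean(guide, r), box_mean(src, r)
    cov = box_mean(guide * src, r) - mg * ms
    var = box_mean(guide * guide, r) - mg * mg
    a = cov / (var + eps)
    b = ms - a * mg
    return box_mean(a, r) * guide + box_mean(b, r)
```

In a fusion pipeline, `src` is typically a rough per-pixel weight map and `guide` one of the source images, so the smoothed weights stay aligned with real edges.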

Improvement of Land Cover Classification Accuracy by Optimal Fusion of Aerial Multi-Sensor Data

  • Choi, Byoung Gil;Na, Young Woo;Kwon, Oh Seob;Kim, Se Hun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / Vol. 36, No. 3 / pp. 135-152 / 2018
  • The purpose of this study is to propose an optimal fusion method for aerial multi-sensor data to improve the accuracy of land cover classification. Recently, in environmental impact assessment and land monitoring, high-resolution image data have been acquired over many regions for quantitative land management using aerial multi-sensors, but most of the data are used only for the purposes of the original project. Hyperspectral sensor data, the main source for land cover classification, offer high classification accuracy, but accurate land cover states are difficult to determine because only visible and near-infrared wavelengths are acquired, at low spatial resolution. There is therefore a need for research that improves land cover classification accuracy by fusing hyperspectral sensor data with multispectral and aerial laser sensor data. As fusion methods for aerial multi-sensor data, we propose a pixel ratio adjustment method, a band accumulation method, and a spectral graph adjustment method. Fusion parameters such as the fusion rate, band accumulation, and spectral graph expansion ratio were selected for each method, and fused data were generated and land cover classification accuracy computed while incrementally varying the fusion parameters. Optimal fusion parameters for the hyperspectral, multispectral, and aerial laser data were derived by considering the correlation between land cover classification accuracy and the fusion parameters.
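
Of the three fusion schemes, band accumulation is the simplest to picture: co-registered bands from the different sensors are stacked into one feature cube before classification. A minimal sketch, where the sensor names and band counts are illustrative assumptions:

```python
import numpy as np

def band_accumulation(*sensor_cubes):
    # Each cube is (bands, H, W), co-registered to a common grid;
    # stacking along the band axis yields one combined feature cube.
    return np.concatenate(sensor_cubes, axis=0)
```

The classifier then treats every accumulated band as one more feature per pixel, which is how the laser and multispectral data can compensate for the hyperspectral sensor's spectral and spatial limits.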