• Title/Abstract/Keywords: Remote Sensing Image Fusion

Search Results: 136

An Improved Remote Sensing Image Fusion Algorithm Based on IHS Transformation

  • Deng, Chao;Wang, Zhi-heng;Li, Xing-wang;Li, Hui-na;Cavalcante, Charles Casimiro
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.3 / pp.1633-1649 / 2017
  • In remote sensing image processing, the traditional fusion algorithm is based on the Intensity-Hue-Saturation (IHS) transformation. This method does not adequately account for the texture, spectral, spatial-resolution, and statistical information of the images, which leads to spectral distortion in the fused result. Although traditional solutions combine manifold methods, the fusion procedure is complicated and not well suited to practical operation. In this paper, an improved IHS-transformation fusion algorithm based on a local-variance weighting scheme is proposed for remote sensing images. First, the local variance of the SPOT (from the French "Système Probatoire d'Observation de la Terre", i.e. Earth-observing system) image is calculated using sliding windows of different sizes. The optimal window size is then selected, and the images are normalized with the optimal-window local variance. Second, a power exponent is chosen as the mapping function, and the local variance is used to obtain the weight of the I component and to match the SPOT images. The I' component is then obtained from the weight, the I component, and the matched SPOT images. Finally, the fused image is produced by the inverse Intensity-Hue-Saturation transformation of the I', H, and S components. The proposed algorithm has been tested and compared with other well-known image fusion methods. Simulation results indicate that the proposed algorithm obtains a superior fused image according to quantitative fusion evaluation indices.
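The baseline this paper improves on can be sketched in a few lines. This is a minimal, illustrative version of IHS pan-sharpening in its additive form (I' = I + w·(PAN − I), then inverse transform); the simple mean-based intensity model, the weight `w`, and all array shapes are assumptions, not the paper's exact formulation.

```python
import numpy as np

def ihs_fuse(ms, pan, w=1.0):
    """Sketch of IHS-based fusion.
    ms: (H, W, 3) float RGB multispectral image in [0, 1].
    pan: (H, W) float panchromatic image in [0, 1].
    w weights how much panchromatic detail replaces the intensity."""
    i = ms.mean(axis=2)                  # intensity component I (simple mean model)
    detail = pan - i                     # spatial detail missing from the MS image
    fused = ms + w * detail[..., None]   # equivalent to inverse IHS with I' = I + w*(pan - I)
    return np.clip(fused, 0.0, 1.0)

ms = np.random.rand(64, 64, 3)
pan = np.random.rand(64, 64)
fused = ihs_fuse(ms, pan, w=0.8)
print(fused.shape)  # (64, 64, 3)
```

With w = 0 the multispectral image passes through unchanged; the paper's contribution is, in effect, a spatially varying weight driven by local variance rather than this single global `w`.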

Hyperspectral Image Fusion Algorithm Based on Two-Stage Spectral Unmixing Method (2단계 분광혼합기법 기반의 하이퍼스펙트럴 영상융합 알고리즘)

  • Choi, Jae-Wan;Kim, Dae-Sung;Lee, Byoung-Kil;Yu, Ki-Yun;Kim, Yong-Il
    • Korean Journal of Remote Sensing / v.22 no.4 / pp.295-304 / 2006
  • Image fusion is defined as producing a new image by merging two or more images with dedicated algorithms. In remote sensing, it typically means fusing a low-resolution multispectral image with a high-resolution panchromatic image. Generally, hyperspectral image fusion is accomplished using either fusion techniques for multispectral imagery or a spectral unmixing model. However, the former may distort spectral information, while the latter requires endmember or other additional data and does not preserve spatial information well. This study proposes a new algorithm based on a two-stage spectral unmixing model that preserves the hyperspectral image's spectral information. The proposed fusion technique is implemented and tested using Hyperion and ALI images, and is shown to retain more spatial and spectral information than the PCA/GS fusion algorithms.
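The unmixing step such methods build on is the linear mixing model: each pixel spectrum is a non-negative combination of endmember spectra. A minimal least-squares abundance estimate, with a synthetic endmember matrix and pixel that are purely illustrative assumptions, looks like this:

```python
import numpy as np

# Linear mixing model: pixel = E @ a, where E holds endmember spectra
# (columns) and a holds per-pixel abundances. Values are synthetic.
E = np.array([[0.8, 0.1],    # band 1 reflectance of two endmembers (e.g. soil, water)
              [0.6, 0.2],    # band 2
              [0.4, 0.7]])   # band 3: 3 bands x 2 endmembers
true_a = np.array([0.7, 0.3])            # "true" abundances for the demo
pixel = E @ true_a                       # noise-free mixed pixel spectrum

# Unconstrained least-squares unmixing (real methods add sum-to-one and
# non-negativity constraints).
a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(a, 3))  # ≈ [0.7, 0.3]
```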

Evidential Fusion of Multisensor Multichannel Imagery

  • Lee Sang-Hoon
    • Korean Journal of Remote Sensing / v.22 no.1 / pp.75-85 / 2006
  • This paper deals with data fusion for the problem of land-cover classification using multisensor imagery. Dempster-Shafer evidence theory is employed to combine the information extracted from multiple data sets of the same site. The Dempster-Shafer approach has two important advantages for remote sensing applications: it can consider a compound class consisting of several land-cover types, and the incompleteness of each sensor's data due to cloud cover can be modeled in the fusion process. Image classification based on Dempster-Shafer theory usually assumes that each sensor is represented by a single channel. The evidential approach to image classification, which utilizes a mass function obtained under the assumption of a class-independent beta distribution, is discussed here for multiple sets of multichannel data acquired from different sensors. The proposed method was applied to KOMPSAT-1 EOC panchromatic imagery and LANDSAT ETM+ data acquired over the Yongin/Nuengpyung area of the Korean peninsula. The experiment showed that the method is highly effective for applications in which it is hard to find homogeneous regions represented by a single land-cover type during training.
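The core combination step of Dempster-Shafer theory can be sketched directly. This is the standard (normalized) Dempster's rule for two sources over one frame of discernment; the class names and mass values are illustrative assumptions, and the compound set demonstrates the compound-class advantage mentioned above.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination.
    m1, m2: dicts mapping frozenset-of-classes -> mass (each sums to 1).
    Returns the normalized combined mass function."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    k = 1.0 - conflict                   # normalization factor
    return {s: v / k for s, v in combined.items()}

water, forest = frozenset({"water"}), frozenset({"forest"})
both = water | forest                    # compound class: "water or forest"
m_optical = {water: 0.6, both: 0.4}      # evidence from one sensor (synthetic)
m_radar = {water: 0.5, forest: 0.2, both: 0.3}
print(combine(m_optical, m_radar))
```

Assigning mass to the compound set `both` is how an incomplete or cloud-affected sensor can express ignorance instead of committing to a single class.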

Multi-Resolution MSS Image Fusion

  • Ghassemian, Hassan;Amidian, Asghar
    • Proceedings of the KSRS Conference / 2003.11a / pp.648-650 / 2003
  • Efficient multi-resolution image fusion aims to exploit the high spectral resolution of Landsat TM images and the high spatial resolution of SPOT panchromatic images simultaneously. This paper presents a multi-resolution data fusion scheme based on multirate image representation, motivated by analytical results from high-resolution multispectral image data: the energy of the spectral features is concentrated in the lower frequency bands, while the spatial features (edges) are concentrated in the higher frequency bands. This allows the multispectral images to be spatially enhanced by adding the high-resolution spatial features to them through a multirate filtering procedure. The proposed method is compared with some conventional methods; results show that it preserves more spectral features with less spatial distortion.
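The frequency-split idea above reduces, in its simplest form, to high-pass detail injection: keep the low frequencies of the multispectral bands and add the high-frequency part of the panchromatic band. The box filter, its size, and the injection gain below are illustrative assumptions, not the paper's multirate filter bank.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple k x k box low-pass filter via 2-D cumulative sums."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))            # leading zero row/column
    h, w = img.shape
    return (c[k:k+h, k:k+w] - c[:h, k:k+w]
            - c[k:k+h, :w] + c[:h, :w]) / (k * k)

def hp_fuse(ms, pan, gain=1.0):
    """Inject high-frequency PAN detail into each MS band (assumed co-registered)."""
    detail = pan - box_blur(pan)               # high-pass component of PAN
    return ms + gain * detail[..., None]

ms = np.random.rand(32, 32, 4)
pan = np.random.rand(32, 32)
out = hp_fuse(ms, pan)
print(out.shape)  # (32, 32, 4)
```

Because only the high-pass residue of the PAN band is added, the low-frequency (spectral) content of the MS bands is left untouched, which is the spectral-preservation argument the abstract makes.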


Fusion of LIDAR Data and Aerial Images for Building Reconstruction

  • Chen, Liang-Chien;Lai, Yen-Chung;Rau, Jiann-Yeou
    • Proceedings of the KSRS Conference / 2003.11a / pp.773-775 / 2003
  • From the viewpoint of data fusion, we integrate LIDAR data and digital aerial images to perform 3D building modeling in this study. The proposed scheme comprises two major parts: (1) building block extraction and (2) building model reconstruction. In the first step, height differences are analyzed to detect above-ground areas, and color analysis is then performed to exclude tree areas; potential building blocks are selected first, followed by refinement of the building areas. In the second step, accurate 3D edges in object space are calculated through edge detection and the extraction of height information from the LIDAR data. These 3D edges are combined with the previously developed SMS method for building modeling. LIDAR data acquired by a Leica ALS 40 over the Hsin-Chu Science-based Industrial Park in northern Taiwan were used in the test.
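The first detection step, finding above-ground areas from height differences, can be shown in miniature. The 2.5 m threshold and the tiny surface/terrain grids below are illustrative assumptions only.

```python
import numpy as np

# Height-difference detection of above-ground areas: subtract the bare-earth
# terrain model (DTM) from the surface model (DSM) and threshold. Values in
# meters; the grids and the 2.5 m threshold are synthetic assumptions.
dsm = np.array([[2.0, 9.5],
                [2.2, 10.1]])   # surface heights (ground + objects)
dtm = np.array([[2.0, 2.1],
                [2.1, 2.3]])    # bare-earth heights
above_ground = (dsm - dtm) > 2.5  # candidate building/tree mask
print(above_ground)
```

The resulting mask still mixes buildings and trees, which is why the paper follows this step with color analysis on the aerial imagery to remove vegetation.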


Estimation of Global Image Fusion Parameters for KOMPSAT-3A: Application to Korean Peninsula (아리랑 3A호의 글로벌 융합 파라미터 추정방법: 한반도 영역을 대상으로)

  • Park, Sung-Hwan;Oh, Kwan-Young;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.35 no.6_4 / pp.1363-1372 / 2019
  • In this study, we analyzed the fusion parameters required to produce a high-resolution multispectral image with an image fusion technique and propose global fusion parameters. We analyzed the linear regression coefficients that can simulate the panchromatic image, together with the fusion coefficients required for producing the fused image. When fused images were produced using the representative (global) fusion parameters, the difference in DN values from images fused with the optimal per-scene parameters was confirmed to be quantitatively small. This study can therefore minimize the regional characteristics reflected in the fused image.
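The regression stage mentioned above, estimating coefficients that simulate the panchromatic band from the multispectral bands, is an ordinary least-squares fit. The band count, bias term, and synthetic data below are illustrative assumptions.

```python
import numpy as np

def fit_pan_weights(ms, pan):
    """Least-squares weights (plus bias) that best reconstruct pan from ms.
    ms: (H, W, B) multispectral cube; pan: (H, W) panchromatic band."""
    h, w, b = ms.shape
    X = np.column_stack([ms.reshape(-1, b), np.ones(h * w)])  # add bias column
    coef, *_ = np.linalg.lstsq(X, pan.ravel(), rcond=None)
    return coef  # B band weights followed by the bias

rng = np.random.default_rng(0)
ms = rng.random((16, 16, 4))
true_w = np.array([0.25, 0.3, 0.25, 0.2])
pan = ms @ true_w + 0.05            # synthetic pan with a known mixing + bias
coef = fit_pan_weights(ms, pan)
print(np.round(coef, 3))  # ≈ [0.25, 0.3, 0.25, 0.2, 0.05]
```

Fitting these weights once over many scenes, rather than per scene, is essentially what a "global" fusion parameter amounts to.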

Comparison of Fusion Methods for Generating 250m MODIS Image

  • Kim, Sun-Hwa;Kang, Sung-Jin;Lee, Kyu-Sung
    • Korean Journal of Remote Sensing / v.26 no.3 / pp.305-316 / 2010
  • The MODerate Resolution Imaging Spectroradiometer (MODIS) sensor has 36 bands at 250 m, 500 m, and 1 km spatial resolution. However, 500 m or 1 km MODIS data exhibit limitations when applied to small areas with complex land cover types. In this study, we produce seven 250 m spectral bands by fusing the two 250 m MODIS bands with the five 500 m bands. To recommend the best fusion method for MODIS data, we compare seven fusion methods: the Brovey transform, the principal components analysis (PCA) fusion method, the Gram-Schmidt fusion method, the local mean and variance matching method, the least squares fusion method, the discrete wavelet fusion method, and the wavelet-PCA fusion method. The results are compared using evaluation indicators such as correlation, relative difference of means, relative variation, deviation index, peak signal-to-noise ratio, and the universal image quality index, as well as visual interpretation. Among these, the local mean and variance matching method provides the best fusion result for both visual interpretation and the evaluation indicators. The 250 m MODIS fusion algorithm may be used to effectively improve the accuracy of various MODIS land products.
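The simplest of the seven methods compared above, the Brovey transform, is compact enough to show in full: each band is scaled by the ratio of the high-resolution band to the per-pixel band sum. The array shapes and random data are illustrative assumptions.

```python
import numpy as np

def brovey(ms, pan, eps=1e-12):
    """Brovey transform fusion.
    ms: (H, W, B) multispectral bands; pan: (H, W) high-resolution band.
    eps guards against division by zero in dark pixels."""
    total = ms.sum(axis=2) + eps           # per-pixel band sum
    return ms * (pan / total)[..., None]   # ratio scaling injects pan detail

ms = np.random.rand(16, 16, 3)
pan = np.random.rand(16, 16)
fused = brovey(ms, pan)
# The fused bands sum to the pan value at every pixel (up to eps).
print(np.allclose(fused.sum(axis=2), pan, atol=1e-6))  # True
```

That last property also explains the Brovey transform's well-known weakness: rescaling every band by the same ratio preserves band *proportions* but distorts absolute radiometry, which is why variance-matching methods tend to score better on spectral indicators.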

Wavelet-based Fusion of Optical and Radar Image using Gradient and Variance (그레디언트 및 분산을 이용한 웨이블릿 기반의 광학 및 레이더 영상 융합)

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing / v.26 no.5 / pp.581-591 / 2010
  • In this paper, we propose a new wavelet-based image fusion algorithm that has advantages in both the frequency and spatial domains for signal analysis. The algorithm compares the ratio of the SAR image signal to the optical image signal and assigns the SAR signal to the fused image if the ratio is larger than a predefined threshold; if the ratio is smaller, the fused signal is determined by a weighted sum of the optical and SAR signals. The fusion rules consider the SAR-to-optical signal ratio, the image gradient, and the local variance of each image signal. We evaluated the proposed algorithm using Ikonos and TerraSAR-X satellite images. In terms of entropy, image clarity, spatial frequency, and speckle index, the proposed method outperformed conventional methods that take only relatively strong SAR signals into the fused image.
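The threshold-or-blend rule above can be sketched per pixel. In the paper this rule operates on wavelet coefficients with gradient- and variance-derived weights; here it is applied directly to image values, and the threshold and fixed weight are illustrative assumptions.

```python
import numpy as np

def fuse_rule(optical, sar, thresh=1.5, w=0.5, eps=1e-12):
    """Take the SAR value where SAR strongly dominates the optical signal;
    otherwise use a weighted sum of the two. All arrays are (H, W)."""
    ratio = sar / (optical + eps)              # SAR-to-optical signal ratio
    blended = w * optical + (1.0 - w) * sar    # weighted-sum branch
    return np.where(ratio > thresh, sar, blended)

opt = np.random.rand(8, 8)
sar = np.random.rand(8, 8)
out = fuse_rule(opt, sar)
print(out.shape)  # (8, 8)
```

The blending branch is what distinguishes this rule from the conventional "strong SAR wins" selection the abstract compares against: moderate SAR responses are mixed in rather than either copied or discarded.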

Refinement of Disparity Map using the Rule-based Fusion of Area and Feature-based Matching Results

  • Um, Gi-Mun;Ahn, Chung-Hyun;Kim, Kyung-Ok;Lee, Kwae-Hi
    • Proceedings of the KSRS Conference / 1999.11a / pp.304-309 / 1999
  • In this paper, we present a new disparity map refinement algorithm that uses the statistical characteristics of the disparity maps and edge information. The proposed algorithm generates a refined disparity map from the disparity maps obtained by area-based and feature-based stereo matching, selecting the disparity value at each edge point based on the statistics of both maps. Experimental results on an aerial stereo image show lower disparity error than conventional fusion algorithms. The algorithm can be applied to the reconstruction of buildings from high-resolution remote sensing data.
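A stripped-down version of this rule-based selection: prefer the feature-based disparity at edge pixels (where feature matching is reliable) and the area-based disparity elsewhere. The paper's actual rule uses statistics of both maps; the fixed edge mask and disparity values below are illustrative assumptions.

```python
import numpy as np

# Two candidate disparity maps (synthetic) and an edge mask.
area_disp = np.array([[5, 5],
                      [6, 7]])       # area-based matching result
feat_disp = np.array([[5, 8],
                      [6, 9]])       # feature-based matching result
edges = np.array([[False, True],
                  [False, True]])    # edge pixels where features are trusted

# Rule-based fusion: feature-based disparity on edges, area-based elsewhere.
refined = np.where(edges, feat_disp, area_disp)
print(refined)  # [[5 8]
                #  [6 9]]
```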


Image Fusion and Evaluation by using Mapping Satellite-1 Data

  • Huang, He;Hu, Yafei;Feng, Yi;Zhang, Meng;Song, DongSeob
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.31 no.6_2 / pp.593-599 / 2013
  • China's Mapping Satellite-1, developed by the China Aerospace Science and Technology Corporation (CASC), was launched three years ago. Data from Mapping Satellite-1, China's first transmission-type stereo mapping satellite, can be used for efficient surveying and geometric mapping applications. In this paper, we fuse panchromatic and multispectral images of the Changchun area obtained from the satellite. Four traditional image fusion methods, HPF, Mod.IHS, Pansharp, and the wavelet transform, were applied to the Mapping Satellite-1 remote sensing data. We then assessed the results with commonly used methods: subjective qualitative evaluation and quantitative statistical analysis. We found that the wavelet-transform fusion is optimal among the four methods in terms of degree of distortion, rendering of details, and image information availability. Further study is necessary to fully establish the optimal methods for fusing Mapping Satellite-1 images.