Remote Sensing Image Fusion

A New Method of Remote Sensing Image Fusion Based on Modified Kohonen Networks

  • Shuhe, Zhao; Xiuwan, Chen; Junfeng, Chen; Yinghai, Ke
    • Korean Society of Remote Sensing Conference Proceedings / Proceedings of ACRS 2003 ISRS / pp.1337-1339 / 2003
  • In this article, a new remote sensing image fusion model based on modified Kohonen networks is presented, and a new fusion rule based on a modified voting rule is established. Shaoxing City, located in Zhejiang Province, P.R. China, was selected as the study site. A fusion experiment between Landsat TM data (30 m) and IRS-C Pan data (5.8 m) was performed using the proposed method. The results show that the new method performs better in the lower hill area: the overall classification accuracy was 10% higher than that of the basic Kohonen method, and the confusion between woodlands and waterbodies was also reduced.
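The Kohonen-based fusion classifier can be illustrated with a small, generic sketch (an assumption-laden stand-in, not the authors' modified network or voting rule): stacked TM and Pan pixel features are fed to a plain self-organizing map, each map node is labeled by a majority vote of the training pixels mapped to it, and new pixels inherit the label of their best-matching node.

```python
# Minimal sketch (not the authors' code): a plain self-organizing map over
# stacked TM + Pan pixel vectors, with per-node majority voting for labels.
import numpy as np

def train_som(X, rows=6, cols=6, iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((rows * cols, X.shape[1]))           # node weight vectors
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))     # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)      # grid distance to the BMU
        h = np.exp(-d2 / (2 * sigma ** 2))[:, None]     # neighborhood kernel
        W += lr * h * (x - W)
    return W

def label_nodes(W, X_train, y_train, n_classes):
    votes = np.zeros((len(W), n_classes))
    bmus = np.argmin(((X_train[:, None, :] - W) ** 2).sum(-1), axis=1)
    for b, y in zip(bmus, y_train):
        votes[b, y] += 1                                # simple voting rule
    return votes.argmax(axis=1)

def classify(W, node_labels, X):
    bmus = np.argmin(((X[:, None, :] - W) ** 2).sum(-1), axis=1)
    return node_labels[bmus]
```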

Change Detection in Bitemporal Remote Sensing Images by using Feature Fusion and Fuzzy C-Means

  • Wang, Xin; Huang, Jing; Chu, Yanli; Shi, Aiye; Xu, Lizhong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 4 / pp.1714-1729 / 2018
  • Change detection in remote sensing images is a challenging problem in remote sensing image analysis. This paper proposes a novel change detection method for bitemporal remote sensing images based on feature fusion and fuzzy c-means (FCM). Unlike state-of-the-art methods that mainly rely on a single image feature to construct the difference image, the proposed method investigates the fusion of multiple image features for this task. The subsequent step is treated as a difference image classification problem, for which a modified fuzzy c-means approach is proposed. The proposed method has been validated on real bitemporal remote sensing data sets, and the experimental results confirm its effectiveness.
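A minimal sketch of the FCM step, under simplifying assumptions (a single log-ratio difference image instead of the paper's fused multi-feature difference image, and standard rather than modified fuzzy c-means):

```python
# Minimal sketch: build a difference image from two co-registered acquisitions
# and split its pixels into "changed" / "unchanged" with fuzzy c-means (c = 2).
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, eps=1e-5, seed=0):
    """X: (N, D) feature vectors. Returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                   # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centers, axis=2) + 1e-12
        U_new = 1.0 / (d ** (2 / (m - 1)))              # standard FCM update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < eps:
            U = U_new
            break
        U = U_new
    return centers, U

def change_map(img_t1, img_t2):
    """img_t1, img_t2: co-registered 2-D arrays of the same scene at two dates."""
    diff = np.abs(np.log1p(img_t1.astype(float)) - np.log1p(img_t2.astype(float)))
    centers, U = fcm(diff.reshape(-1, 1))
    changed = int(np.argmax(centers[:, 0]))             # cluster with the larger mean difference
    return (U.argmax(axis=1) == changed).reshape(diff.shape)
```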

Evaluation of Geo-based Image Fusion on Mobile Cloud Environment using Histogram Similarity Analysis

  • Lee, Kiwon; Kang, Sanggoo
    • Korean Journal of Remote Sensing / Vol. 31, No. 1 / pp.1-9 / 2015
  • Mobility and the cloud platform have become dominant paradigms for developing web services that handle huge and diverse digital contents for scientific and engineering applications. These two trends are technically combined into a mobile cloud computing environment that takes the benefits of each. The intention of this study is to design and implement a mobile cloud application for remotely sensed image fusion as a step toward practical geo-based mobile services. In this implementation, the system architecture consists of two parts: a mobile web client and a cloud application server. The mobile web client provides the user interface for image fusion processing and image visualization, as well as mobile web services for data listing and browsing. The cloud application server runs on OpenStack, an open-source cloud platform, on which three server instances are generated: a web server instance, a tiling server instance, and a fusion server instance. After metadata browsing of the processing data, image fusion by a Bayesian approach is performed using functions within Orfeo Toolbox (OTB), an open-source remote sensing library. In addition, the similarity of the fused images with respect to the input image set is estimated by histogram distance metrics, and this result can be used as a reference criterion for choosing user parameters in Bayesian image fusion. The implementation strategy for a mobile cloud application built entirely on open sources offers a good basis for mobile services that support specific remote sensing functions, beyond the image fusion scheme presented here, according to user demands, thereby expanding remote sensing application fields.
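The histogram-similarity step can be sketched as follows; the particular distance metrics shown (intersection, chi-square, Bhattacharyya) are illustrative choices, not necessarily the ones used in the paper:

```python
# Minimal sketch of histogram-similarity checks between a fused band and the
# corresponding original band.
import numpy as np

def norm_hist(band, bins=256, value_range=(0, 255)):
    """Normalized histogram of one band (value_range assumes 8-bit data)."""
    h, _ = np.histogram(band.ravel(), bins=bins, range=value_range)
    return h / h.sum()

def histogram_distances(fused_band, orig_band):
    p, q = norm_hist(fused_band), norm_hist(orig_band)
    eps = 1e-12
    return {
        "intersection": np.minimum(p, q).sum(),                    # 1.0 means identical shapes
        "chi_square": 0.5 * ((p - q) ** 2 / (p + q + eps)).sum(),
        "bhattacharyya": -np.log(np.sqrt(p * q).sum() + eps),
    }
```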

Development and Implementation of Multi-source Remote Sensing Imagery Fusion Based on PCI Geomatica

  • Yu, ZENG; Jixian, ZHANG; Qin, YAN; Pinglin, QIAO
    • Korean Society of Remote Sensing Conference Proceedings / Proceedings of ACRS 2003 ISRS / pp.1334-1336 / 2003
  • On the basis of a comprehensive analysis and summary of the image fusion algorithms provided by the PCI Geomatica software, this paper points out deficiencies in its image fusion processing functions. These limitations can be addressed by further developing PCI Geomatica on the user's side, and five effective algorithms could be added to it. The paper also gives a detailed description of how to customize and extend PCI Geomatica using Microsoft Visual C++ 6.0, the PCI SDK Kit, and GDB technology. In this way, the remote sensing imagery fusion functions of PCI Geomatica can be extended.

Reducing Spectral Signature Confusion of Optical Sensor-based Land Cover Using SAR-Optical Image Fusion Techniques

  • Tateishi, Ryutaro; Wikantika, Ketut; M.A., Mohammed Aslam
    • Korean Society of Remote Sensing Conference Proceedings / Proceedings of ACRS 2003 ISRS / pp.107-109 / 2003
  • Optical sensor-based land cover classification suffers from spectral signature confusion and, consequently, degraded classification accuracy. In classification tasks, the goal of fusing data from different sensors is to reduce the error rate obtained by single-source classification. This paper describes land cover/land use classification results derived from Landsat TM (TM) alone and from multisensor image fusion of JERS-1 SAR (JERS) and TM data. The best radar data manipulation was fused with TM through various techniques. The classification results are relatively good: the highest Kappa coefficient, together with a significantly high overall accuracy, is obtained with the principal component analysis and high-pass filtering (PCA+HPF) technique.
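A generic PCA + high-pass-filtering fusion can be sketched as below (the paper's exact manipulation of the radar data may differ): the high-frequency detail of the SAR image is injected into the first principal component of the optical bands before inverting the PCA.

```python
# Minimal sketch of a generic PCA + high-pass-filter (PCA+HPF) fusion.
import numpy as np
from scipy.ndimage import uniform_filter

def pca_hpf_fusion(optical, sar, kernel=5):
    """optical: (rows, cols, bands) array; sar: (rows, cols) co-registered SAR band."""
    rows, cols, bands = optical.shape
    X = optical.reshape(-1, bands).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]          # principal components, largest first
    pcs = Xc @ vecs                                 # forward PCA

    sar = sar.astype(float)
    hp = sar - uniform_filter(sar, size=kernel)     # high-pass SAR detail
    pc1 = pcs[:, 0].reshape(rows, cols)
    # scale the SAR detail to PC1 before injecting it
    pcs[:, 0] = (pc1 + hp * (pc1.std() / (sar.std() + 1e-12))).ravel()

    fused = pcs @ vecs.T + mean                     # inverse PCA
    return fused.reshape(rows, cols, bands)
```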

Fusion Techniques Comparison of GeoEye-1 Imagery

  • Kim, Yong-Hyun; Kim, Yong-Il; Kim, Youn-Soo
    • Korean Journal of Remote Sensing / Vol. 25, No. 6 / pp.517-529 / 2009
  • Many satellite image fusion techniques have been developed to produce a high-resolution multispectral (MS) image by combining a high-resolution panchromatic (PAN) image with a low-resolution MS image. Until now, most high-resolution image fusion studies have used IKONOS and QuickBird images. Recently, GeoEye-1, which offers the highest resolution of any commercial imaging system, was launched. In this study, we experimented with GeoEye-1 images to evaluate which fusion algorithms suit them. This paper compares and evaluates the efficiency of five image fusion techniques for GeoEye-1 imagery: the à trous algorithm based additive wavelet transform (AWT) fusion technique, the principal component analysis (PCA) fusion technique, Gram-Schmidt (GS) spectral sharpening, Pansharp, and the smoothing filter-based intensity modulation (SFIM) fusion technique. The experimental results show that the AWT technique preserves more of the spatial detail of the PAN image and the spectral information of the MS image than the other techniques, and that Pansharp preserves the information of the original PAN and MS images as well as AWT does.
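Of the five techniques compared, SFIM has a particularly compact formulation; a minimal sketch (assuming the MS bands have already been resampled to the PAN grid) is:

```python
# Minimal sketch of SFIM (smoothing filter-based intensity modulation): each
# upsampled MS band is modulated by the ratio of the PAN image to its
# smoothed (low-pass) version.
import numpy as np
from scipy.ndimage import uniform_filter

def sfim_fusion(ms_upsampled, pan, kernel=7):
    """ms_upsampled: (rows, cols, bands) MS resampled to the PAN grid; pan: (rows, cols)."""
    pan = pan.astype(float)
    pan_low = uniform_filter(pan, size=kernel) + 1e-12   # smoothed PAN
    ratio = (pan / pan_low)[..., None]                   # spatial-detail modulation factor
    return ms_upsampled.astype(float) * ratio
```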

Texture Image Fusion on Wavelet Scheme with Space Borne High Resolution Imagery: An Experimental Study

  • Yoo, Hee-Young; Lee, Ki-Won
    • Korean Journal of Remote Sensing / Vol. 21, No. 3 / pp.243-252 / 2005
  • The wavelet transform and its inverse provide an effective framework for data fusion. The purpose of this study is to investigate the applicability of the wavelet transform with texture images for urban remote sensing applications. We carried out several experiments on image fusion by wavelet transform and texture imaging using high-resolution images such as IKONOS and KOMPSAT EOC. As texture images, we used homogeneity and ASM (angular second moment) images, since these two types of texture images reveal the detailed information of complex urban features well. To find a useful combination scheme for further applications, we performed the DWT (discrete wavelet transform) and IDWT (inverse discrete wavelet transform) on the texture images and the original images, and added edge information to the fused images so that texture-wavelet information is displayed within edge boundaries. The edge images were obtained by LoG (Laplacian of Gaussian) processing of the original image. According to a qualitative assessment by visual interpretation, the image produced by each fusion scheme can be used to extract unique details of the surface characteristics of urban features around edge boundaries.
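One possible combination scheme along these lines can be sketched as follows (an illustrative assumption, not the authors' exact pipeline): a GLCM homogeneity texture image is computed with a sliding window, its DWT detail subbands are combined with the approximation of the original image, and LoG edges of the original are extracted for overlay.

```python
# Minimal sketch: GLCM homogeneity texture image, DWT/IDWT fusion with the
# original image, and LoG edge extraction.
import numpy as np
import pywt
from scipy.ndimage import gaussian_laplace
from skimage.feature import graycomatrix, graycoprops

def homogeneity_image(img, win=9, levels=32):
    """img: 2-D array; returns a per-pixel GLCM homogeneity map (slow, illustrative)."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(np.uint8)
    half, out = win // 2, np.zeros_like(q, dtype=float)
    for r in range(half, q.shape[0] - half):
        for c in range(half, q.shape[1] - half):
            w = q[r - half:r + half + 1, c - half:c + half + 1]
            glcm = graycomatrix(w, [1], [0], levels=levels, symmetric=True, normed=True)
            out[r, c] = graycoprops(glcm, "homogeneity")[0, 0]
    return out

def wavelet_texture_fusion(original, texture, wavelet="haar"):
    """Keep the approximation of the original; take the detail subbands from the texture image."""
    cA_orig, _ = pywt.dwt2(original.astype(float), wavelet)
    _, details_tex = pywt.dwt2(texture.astype(float), wavelet)
    fused = pywt.idwt2((cA_orig, details_tex), wavelet)
    edges = gaussian_laplace(original.astype(float), sigma=2.0)  # LoG edge information
    return fused, edges
```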

Classification of Fused SAR/EO Images Using Transformation of Fusion Classification Class Label

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing / Vol. 28, No. 6 / pp.671-682 / 2012
  • Strong backscattering features in high-resolution Synthetic Aperture Radar (SAR) images provide useful information for analyzing earth surface characteristics such as man-made objects in urban areas. Compared to optical images, however, SAR images have limitations in describing detailed information in urban areas. In this paper, we propose a new classification method using a fused SAR and Electro-Optical (EO) image, which provides a more informative classification result than single-sensor SAR image classification. The experimental results show that the proposed method successfully combines SAR image classification with EO image characteristics.

Segment-based Image Classification of Multisensor Images

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / Vol. 28, No. 6 / pp.611-622 / 2012
  • This study proposes two multisensor fusion methods for segment-based image classification that utilize a region-growing segmentation. The proposed algorithms employ a Gaussian-PDF measure and an evidential measure, respectively. In remote sensing applications, segment-based approaches extract more explicit information on spatial structure than pixel-based methods. Data from a single sensor may be insufficient to accurately describe a ground scene in image classification, and because multisensor data are redundant and complementary, combining information from multiple sensors can reduce the classification error rate. The Gaussian-PDF method defines a regional measure as the average PDF of the pixels belonging to a region and assigns the region to the class with the maximum regional measure. The evidential fusion method uses two measures, plausibility and belief, which are derived from a mass function of the Beta distribution as the basic probability assignment for every hypothesis about region classes. The proposed methods were applied to SPOT XS and ENVISAT data acquired over the Iksan area of the Korean peninsula. The experimental results show that the segment-based method with the evidential measure is highly effective in improving classification via multisensor fusion.
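The Gaussian-PDF regional measure lends itself to a short sketch (an illustration under the assumption of known per-class means and covariances, not the paper's implementation):

```python
# Minimal sketch: average the class-conditional Gaussian densities over the
# pixels of each region and assign the region to the class with the largest
# average (the Gaussian-PDF regional measure).
import numpy as np
from scipy.stats import multivariate_normal

def classify_regions(features, segments, class_means, class_covs):
    """
    features:    (rows, cols, bands) stacked multisensor pixel features
    segments:    (rows, cols) integer region labels from a region-growing segmentation
    class_means: list of (bands,) mean vectors, one per class
    class_covs:  list of (bands, bands) covariance matrices, one per class
    """
    X = features.reshape(-1, features.shape[-1])
    seg = segments.ravel()
    # per-pixel class-conditional Gaussian densities, shape (N, n_classes)
    pdfs = np.stack(
        [multivariate_normal(m, c).pdf(X) for m, c in zip(class_means, class_covs)],
        axis=1,
    )
    region_labels = {}
    for r in np.unique(seg):
        regional_measure = pdfs[seg == r].mean(axis=0)   # PDF averaged over the region
        region_labels[r] = int(regional_measure.argmax())
    return region_labels
```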