• Title/Summary/Keyword: Satellite image fusion


Evaluation of Block-based Sharpening Algorithms for Fusion of Hyperion and ALI Imagery (Hyperion과 ALI 영상의 융합을 위한 블록 기반의 융합기법 평가)

  • Kim, Yeji;Choi, Jaewan
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.1 / pp.63-70 / 2015
  • Image fusion, or pansharpening, is a methodology for increasing the spatial resolution of a low-spatial-resolution image using a high-spatial-resolution image. In this paper, we performed fusion of hyperspectral imagery using a high-spatial-resolution panchromatic image together with low-spatial-resolution multispectral and hyperspectral images acquired by the ALI and Hyperion sensors of the EO-1 satellite. The study focused on evaluating the performance of a block-based fusion method applied to the ALI and Hyperion dataset, which considers the spectral characteristics shared by the multispectral and hyperspectral images. The experimental results show that the proposed algorithm efficiently improved the spatial resolution and minimized spectral distortion compared with fusion of only the panchromatic and hyperspectral images and with an existing block-based fusion method. The proposed algorithm is expected to broaden the use of airborne hyperspectral sensors and of the various hyperspectral satellite sensors to be launched in the future.

High Spatial Resolution Satellite Image Simulation Based on 3D Data and Existing Images

  • La, Phu Hien;Jeon, Min Cheol;Eo, Yang Dam;Nguyen, Quang Minh;Lee, Mi Hee;Pyeon, Mu Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.2 / pp.121-132 / 2016
  • This study proposes an approach for simulating high-spatial-resolution satellite images acquired under arbitrary sun-sensor geometry using existing images and 3D (three-dimensional) data. First, satellite images whose spectral regions differ significantly from those of the simulated image were transformed to the same spectral regions using the UPDM (Universal Pattern Decomposition Method). Simultaneously, shadows cast by buildings or other tall features under the new sun position were modeled. Then, pixels that changed from shadow to non-shadow areas, and vice versa, were simulated on the basis of the existing images. Finally, buildings as viewed from the new sensor position were modeled using an open-library-based 3D reconstruction program. An experiment was conducted to simulate WV-3 (WorldView-3) images acquired under two different sun-sensor geometries based on a Pleiades 1A image, an additional WV-3 image, a Landsat image, and 3D building models. The results show that the shapes of the buildings were modeled effectively, although some problems were noted in simulating pixels that changed from building shadow to non-shadow. Additionally, the mean reflectance of the simulated image was quite similar to that of the actual images in vegetation and water areas. However, significant gaps between the mean reflectance of the simulated and actual images were noted in soil and road areas, which could be attributed to differences in moisture content.

Fine-image Registration between Multi-sensor Satellite Images for Global Fusion Application of KOMPSAT-3·3A Imagery (KOMPSAT-3·3A 위성영상 글로벌 융합활용을 위한 다중센서 위성영상과의 정밀영상정합)

  • Kim, Taeheon;Yun, Yerin;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing / v.38 no.6_4 / pp.1901-1910 / 2022
  • With the arrival of the new space age, securing technology for the fused application of KOMPSAT-3·3A and global satellite images is becoming increasingly important. In general, multi-sensor satellite images have relative geometric errors caused by various external factors at the time of acquisition, degrading the quality of satellite image products. We therefore propose a fine-image registration methodology that minimizes the relative geometric error between KOMPSAT-3·3A and global satellite images. After selecting the overlapping area between the KOMPSAT-3·3A and foreign satellite images, the spatial resolutions of the two images are unified. Subsequently, tie-points are extracted using a hybrid matching method that combines feature- and area-based matching. Fine-image registration is then performed through iterative registration based on pyramid images. To evaluate the performance and accuracy of the proposed method, we used KOMPSAT-3·3A, Sentinel-2A, and PlanetScope satellite images acquired over Daejeon, South Korea. The average RMSE of the proposed method was 1.2 and 3.59 pixels for the Sentinel-2A and PlanetScope images, respectively. Consequently, fine-image registration between multi-sensor satellite images can be performed effectively using the proposed method.
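
A common building block of the area-based half of such hybrid matching is phase correlation, which estimates the translation between two overlapping image patches. The sketch below is illustrative only (it is not the paper's algorithm and recovers only integer shifts on same-sized patches):

```python
import numpy as np

def phase_correlation_shift(ref, tgt):
    """Estimate the integer (row, col) translation between two
    same-sized image patches via phase correlation."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(tgt)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks beyond half the image size to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# toy example: circularly shift a random patch and recover the offset
rng = np.random.default_rng(0)
img = rng.random((64, 64))
moved = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation_shift(moved, img))   # recovers (5, -3)
```

In a pyramid scheme like the one described above, such shifts would be estimated on coarse levels first and refined on finer levels.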

Pan-Sharpening Algorithm of High-Spatial Resolution Satellite Image by Using Spectral and Spatial Characteristics (영상의 분광 및 공간 특성을 이용한 고해상도 위성영상 융합 알고리즘)

  • Choi, Jae-Wan;Kim, Yong-Il
    • Journal of Korean Society for Geospatial Information Science / v.18 no.2 / pp.79-86 / 2010
  • Generally, image fusion is defined as generating a reorganized image by merging two or more datasets using dedicated algorithms. In remote sensing, the image fusion technique is called pan-sharpening because it aims to improve the spatial resolution of an original multispectral image by using a high-spatial-resolution panchromatic image. Pan-sharpening is important for various applications such as change detection, digital map creation, and urban analysis. However, most approaches tend to distort the spectral information of the original multispectral data or to decrease the spatial quality relative to the panchromatic image. To solve these problems, a novel pan-sharpening algorithm is proposed that considers the spectral and spatial characteristics of the multispectral image. The algorithm was applied to KOMPSAT-2 and QuickBird satellite images, and the results show that our method improves the spectral and spatial quality compared with existing fusion algorithms.
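
For orientation, a minimal classical baseline against which such algorithms are compared is the Brovey transform, which rescales each multispectral band by the ratio of the panchromatic band to the band mean. This sketch is a textbook baseline, not the paper's proposed method:

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Brovey-transform pan-sharpening.
    ms:  (bands, H, W) multispectral image, already resampled to PAN size
    pan: (H, W) panchromatic image"""
    intensity = ms.mean(axis=0)
    ratio = pan / (intensity + 1e-12)   # avoid division by zero
    return ms * ratio                   # broadcasts the ratio over bands

# toy example with a 3-band image
rng = np.random.default_rng(1)
ms = rng.random((3, 4, 4))
pan = rng.random((4, 4))
fused = brovey_pansharpen(ms, pan)
# Brovey preserves band ratios, so the fused intensity matches PAN
print(np.allclose(fused.mean(axis=0), pan))
```

The well-known drawback of this baseline, hinted at in the abstract, is spectral distortion: scaling all bands by one ratio changes radiometry wherever PAN and the multispectral intensity disagree.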

FUSION OF LASER SCANNING DATA, DIGITAL MAPS, AERIAL PHOTOGRAPHS AND SATELLITE IMAGES FOR BUILDING MODELLING

  • Han, Seung-Hee;Bae, Yeon-Soung;Kim, Hong-Jin;Bae, Sang-Ho
    • Proceedings of the KSRS Conference / v.2 / pp.899-902 / 2006
  • For quick and accurate 3D modelling of a building, laser scanning data, digital maps, aerial photographs, and satellite images should be fused. Moreover, establishing a library of standard building structures and an effective texturing method are required to determine the structure of a building. In this study, we built a standard library by categorizing Korean village forms and presented a model that can predict the structure of a building from the shape of its roof in an aerial photograph. We produced an ortho image using a high-definition digital image and a considerable amount of ground-scanning point-cloud data, and mapped this image. These methods enabled quicker and more accurate building modelling.


Comparison of Image Fusion Methods to Merge KOMPSAT-2 Panchromatic and Multispectral Images (KOMPSAT-2 전정색영상과 다중분광영상의 융합기법 비교평가)

  • Oh, Kwan-Young;Jung, Hyung-Sup;Lee, Kwang-Jae
    • Korean Journal of Remote Sensing / v.28 no.1 / pp.39-54 / 2012
  • The objective of this study is to propose efficient data fusion techniques suitable for KOMPSAT-2 satellite images. The most widely used image fusion techniques, namely the high-pass filter (HPF), modified intensity-hue-saturation (modified IHS), pan-sharpened, and wavelet-based methods, were applied to four KOMPSAT-2 satellite images with different regional and seasonal characteristics. Each fusion result was compared and analyzed with respect to its spatial and spectral features, and the quality of each technique was evaluated both quantitatively and visually. The quantitative measures used were the relative global dimensional error (spatial and spectral ERGAS), the spectral angle mapper (SAM), and the image quality index (Q4). The quantitative and visual analyses indicate that, among the methods considered, the pan-sharpened method offers the most suitable balance between spectral and spatial information. The modified IHS method preserves the spatial information well but distorts the spectral information, and the HPF and wavelet methods likewise preserve the spatial information but not the spectral information.
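
Of the quality measures named above, the spectral angle mapper is the simplest to state: it is the mean angle between the per-pixel spectral vectors of two images (0 for identical spectra). A minimal numpy sketch, assuming images stacked as (bands, H, W):

```python
import numpy as np

def sam_degrees(img1, img2):
    """Spectral Angle Mapper: mean angle in degrees between the
    per-pixel spectral vectors of two (bands, H, W) images."""
    v1 = img1.reshape(img1.shape[0], -1)
    v2 = img2.reshape(img2.shape[0], -1)
    dot = (v1 * v2).sum(axis=0)
    norm = np.linalg.norm(v1, axis=0) * np.linalg.norm(v2, axis=0)
    cos = np.clip(dot / (norm + 1e-12), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

a = np.random.default_rng(2).random((4, 8, 8))
# scaling every spectrum by a constant leaves the angle near zero,
# which is why SAM measures spectral shape rather than brightness
print(sam_degrees(a, 2.0 * a))
```

Lower SAM in a fused image indicates less spectral distortion relative to the reference multispectral image.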

Fusion of DEMs Generated from Optical and SAR Sensor

  • Jin, Kyeong-Hyeok;Yeu, Yeon;Hong, Jae-Min;Yoon, Chang-Rak;Yeu, Bock-Mo
    • Journal of Korean Society for Geospatial Information Science / v.10 no.5 s.23 / pp.53-65 / 2002
  • The most widespread techniques for DEM generation are stereoscopy for optical sensor images and SAR interferometry (InSAR) for SAR images. These techniques suffer from certain sensor and processing limitations, which can be overcome by the synergetic use of both sensors and of the resulting DEMs. This study is concerned with improving accuracy while keeping the image characteristics consistent between two different DEMs, one from stereoscopy of optical images and one from interferometry of SAR images. MWD (Multiresolution Wavelet Decomposition) and HPF (High-Pass Filtering), which take advantage of the complementary properties of SAR and stereo-optical DEMs, are applied in the fusion process. The DEM fusion is tested with two sets of SPOT and ERS-1/-2 satellite imagery, and a DEM generated from a 1:5,000 digital topographic map is used to analyze the results. The integrated DEM portrays topographic slopes and tilts more clearly when the strengths of the SAR-image DEM are applied to the DEM from the optical satellite image, and likewise in the case of the HPF-fused DEM.
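
The HPF idea mentioned above can be sketched generically: keep the low-frequency surface of one DEM and inject the high-frequency detail of the other. The box filter and window size below are illustrative simplifications, not the filters used in the paper:

```python
import numpy as np

def box_lowpass(dem, k=5):
    """Separable k x k box-filter low-pass with edge padding."""
    pad = k // 2
    padded = np.pad(dem, pad, mode="edge")
    kernel = np.ones(k) / k
    s = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    s = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, s)
    return s

def hpf_dem_fusion(dem_optical, dem_sar, k=5):
    """HPF-style fusion: low-frequency surface from the optical DEM
    plus high-frequency detail from the SAR DEM."""
    detail_sar = dem_sar - box_lowpass(dem_sar, k)
    return box_lowpass(dem_optical, k) + detail_sar

# toy check: fusing two flat DEMs returns the optical surface unchanged,
# since a constant SAR DEM contributes no high-frequency detail
fused_dem = hpf_dem_fusion(np.full((8, 8), 10.0), np.full((8, 8), 3.0))
```

In practice the two DEMs must first be co-registered on a common grid, and the cutoff (here the window size k) controls how much SAR texture is transferred.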


Data Fusion Using Image Segmentation in High Spatial Resolution Satellite Imagery

  • Lee, Jong-Yeol
    • Proceedings of the KSRS Conference / 2003.11a / pp.283-285 / 2003
  • This paper describes a data fusion method for high-spatial-resolution satellite imagery. Pixels located around an object edge exhibit spectral mixing because of the geometric nature of the pixel: the larger the pixel size, the wider the area of spectral mixing. The intensities of pixels adjacent to edges were modified using the spectral characteristics of pixels located inside the objects. The method developed in this study was tested using IKONOS multispectral and panchromatic data over part of Jeju-shi, Korea. The test application shows that the spectral information of the pixels adjacent to edges was improved considerably.


Modified a'trous Algorithm based Wavelet Pan-sharpening Method Using IKONOS Image (IKONOS 영상을 이용한 수정된 a'trous 알고리즘 기반 웨이블릿 영상융합 기법)

  • Kim, Yong Hyun;Choi, Jae Wan;Kim, Hye Jin;Kim, Yong Il
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.2D / pp.305-309 / 2009
  • The object of image fusion is to integrate information from multiple images of the same scene. In satellite image fusion, many methods have been proposed for combining a high-resolution panchromatic (PAN) image with low-resolution multispectral (MS) images, and it is very important to preserve both the spatial detail and the spectral information in the fusion result. Image fusion based on the wavelet transform preserves spectral information better than other fusion methods. This study proposes a modified a'trous algorithm based wavelet image fusion method and demonstrates it on an IKONOS image. Based on the experimental results, we confirmed that the proposed method was more effective at preserving spatial detail and spectral information than existing fusion methods based on the a'trous algorithm.
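
For context, the standard (unmodified) a'trous scheme decomposes an image by repeated smoothing with an increasingly dilated B3-spline kernel; the difference of successive smoothings gives the wavelet planes, and additive fusion injects the PAN planes into each MS band. The sketch below shows that standard scheme, not the paper's modified version:

```python
import numpy as np

def atrous_planes(img, levels=2):
    """A'trous decomposition: smooth with a dilated B3-spline kernel;
    each wavelet plane is the difference of successive smoothings.
    Returns (planes, residual); their sum reconstructs the input."""
    h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # B3-spline taps
    c = img.astype(float)
    planes = []
    for j in range(levels):
        # dilate the kernel by inserting 2**j - 1 zeros between taps
        taps = np.zeros(4 * 2**j + 1)
        taps[:: 2**j] = h
        pad = len(taps) // 2
        p = np.pad(c, pad, mode="reflect")
        s = np.apply_along_axis(lambda r: np.convolve(r, taps, "valid"), 1, p)
        s = np.apply_along_axis(lambda col: np.convolve(col, taps, "valid"), 0, s)
        planes.append(c - s)
        c = s
    return planes, c

def additive_wavelet_fusion(ms_band, pan, levels=2):
    """Additive wavelet pan-sharpening: add the PAN wavelet planes
    to an MS band already resampled to the PAN grid."""
    planes, _ = atrous_planes(pan, levels)
    return ms_band + sum(planes)

# toy example: the decomposition is exactly invertible
x = np.random.default_rng(3).random((16, 16))
planes, resid = atrous_planes(x, levels=2)
```

Because the residual plus all planes reconstructs the input exactly, only the detail planes of PAN are transferred, which is why this family of methods preserves MS spectra comparatively well.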

Wavelet-based Fusion of Optical and Radar Image using Gradient and Variance (그레디언트 및 분산을 이용한 웨이블릿 기반의 광학 및 레이더 영상 융합)

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing / v.26 no.5 / pp.581-591 / 2010
  • In this paper, we propose a new wavelet-based image fusion algorithm that has advantages in both the frequency and spatial domains for signal analysis. The algorithm compares the ratio of the SAR image signal to the optical image signal and assigns the SAR signal to the fused image if the ratio is larger than a predefined threshold; if the ratio is smaller than the threshold, the fused signal is determined by a weighted sum of the optical and SAR signals. The fusion rules consider the SAR-to-optical signal ratio, the image gradient, and the local variance of each image signal. We evaluated the proposed algorithm using IKONOS and TerraSAR-X satellite images. The proposed method outperformed conventional methods, which take only relatively strong SAR signals into the fused image, in terms of entropy, image clarity, spatial frequency, and speckle index.
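
The core decision rule described in the abstract can be sketched in a few lines. Note this is a simplification: the actual algorithm applies the rule to wavelet coefficients and also weighs in gradient and local variance, which are omitted here, and the threshold and weight values are illustrative:

```python
import numpy as np

def fuse_sar_optical(opt, sar, threshold=1.5, w_opt=0.5):
    """Pixel-wise fusion rule: where the SAR/optical signal ratio
    exceeds the threshold, take the SAR signal; elsewhere use a
    weighted sum of the two signals."""
    ratio = sar / (opt + 1e-12)
    weighted = w_opt * opt + (1.0 - w_opt) * sar
    return np.where(ratio > threshold, sar, weighted)

# toy example: ratios are 3.0, 0.5, and 1.2 at the three pixels,
# so only the first pixel takes the SAR value outright
opt = np.array([10.0, 10.0, 10.0])
sar = np.array([30.0, 5.0, 12.0])
fused = fuse_sar_optical(opt, sar)
```

The threshold keeps strong SAR responses (e.g. bright scatterers) intact, while the weighted sum blends the two sensors wherever neither dominates.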