• Title/Summary/Keyword: Remote Sensing Image Fusion


Image Fusion for Improving Classification

  • Lee, Dong-Cheon; Kim, Jeong-Woo; Kwon, Jay-Hyoun; Kim, Chung; Park, Ki-Surk
    • Proceedings of the KSRS Conference / 2003.11a / pp.1464-1466 / 2003
  • Classification of satellite images provides information about land cover and/or land use. The quality of the classification result depends mainly on the spatial and spectral resolutions of the images. In this study, image fusion in terms of resolution merging and band integration of multi-source satellite images (Landsat ETM+ and IKONOS) was carried out to improve classification. Resolution merging and band integration can generate high-resolution imagery with more spectral bands. Precise image co-registration is required to remove geometric distortion between the different image sources. A combination of unsupervised and supervised classification of the fused imagery was implemented to improve the classification. 3D display of the results was possible by combining a DEM with the classification result, so that interpretability could be improved.

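For illustration only, the resolution merging and band integration described in this abstract can be sketched as resampling the coarser Landsat ETM+ bands onto the finer IKONOS grid (after co-registration) and stacking all bands into one image cube. This is a minimal sketch with assumed array shapes and bilinear resampling, not the authors' actual processing chain.

```python
import numpy as np
from scipy.ndimage import zoom

def merge_and_stack(coarse_bands, fine_bands, scale):
    """Resample coarser bands to the fine grid and stack all bands.

    coarse_bands : (b1, h, w)              co-registered coarse-resolution bands
    fine_bands   : (b2, h*scale, w*scale)  bands already on the fine grid
    scale        : integer resolution ratio assumed for this sketch
    """
    upsampled = np.stack(
        [zoom(b, scale, order=1) for b in coarse_bands]  # bilinear resampling
    )
    # Band integration: one image cube carrying the spectral bands of both sensors
    return np.concatenate([fine_bands, upsampled], axis=0)

# Toy example: 3 coarse bands (50x50) merged with 4 fine bands (200x200)
fused = merge_and_stack(np.random.rand(3, 50, 50),
                        np.random.rand(4, 200, 200), scale=4)
print(fused.shape)  # (7, 200, 200)
```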

Quadratic Programming Approach to Pansharpening of Multispectral Images Using a Regression Model

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / v.24 no.3 / pp.257-266 / 2008
  • This study presents an approach to synthesize multispectral images at a higher resolution by exploiting a high-resolution image acquired in the panchromatic modality. The synthesized images should be similar to the multispectral images that would have been observed by the corresponding sensor at the same high resolution. The proposed scheme is designed to reconstruct the multispectral images at the higher resolution with as little color distortion as possible. It uses a second-order regression model to fit panchromatic data to multispectral observations. Based on the regression model, the multispectral images at the higher spatial resolution of the panchromatic image are optimized by quadratic programming. In this study, the new method was applied to IKONOS 1 m panchromatic and 4 m multispectral data, and the results were compared with those of several current approaches. Experimental results demonstrate that the proposed scheme achieves significant improvement over other methods.
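The quadratic-programming step of the paper is not reproduced here; the sketch below only illustrates a second-order regression that fits panchromatic data to multispectral observations, using ordinary least squares on an assumed model without cross terms. The variable names and toy data are illustrative.

```python
import numpy as np

def fit_second_order_regression(pan, ms):
    """Fit P ~ a0 + sum_i a_i*M_i + sum_i b_i*M_i^2 by least squares.

    pan : (n,)   panchromatic samples (degraded to the MS resolution)
    ms  : (n, k) multispectral samples, k bands
    Returns the coefficient vector of length 2k + 1.
    """
    design = np.hstack([np.ones((ms.shape[0], 1)), ms, ms**2])
    coeffs, *_ = np.linalg.lstsq(design, pan, rcond=None)
    return coeffs

# Toy data: 4-band MS and a synthetic PAN generated by a known second-order model
rng = np.random.default_rng(0)
ms = rng.random((500, 4))
pan = 0.1 + ms @ np.array([0.3, 0.3, 0.2, 0.2]) + 0.05 * (ms**2).sum(axis=1)
print(fit_second_order_regression(pan, ms).round(3))
```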

SHIP BLOCK ARRANGEMENT SYSTEM BASED ON IMAGE PROCESSING

  • Park, Jeong-Ho; Choi, Wan-Sik
    • Proceedings of the KSRS Conference / 2008.10a / pp.104-106 / 2008
  • This paper proposes an image-based method for arranging ship blocks in a dockyard. The arrangement of numerous blocks has to be carefully planned because it is closely related to the effectiveness of the whole working process. To implement the system, the block shape and feature points have to be obtained from the block images. The block arrangement system can be implemented by fusing block shape extraction and image matching technologies.

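The abstract names only the two building blocks, block shape extraction and image matching. A minimal OpenCV sketch of those two generic steps is shown below, using contour extraction and ORB feature matching on synthetic images; ORB is a stand-in, since the paper's actual matching technique is not specified in the abstract.

```python
import cv2
import numpy as np

# Synthetic stand-ins for a block image and a reference dockyard image
rng = np.random.default_rng(0)
img = (rng.random((240, 320)) * 60).astype(np.uint8)   # dark textured background
cv2.rectangle(img, (80, 60), (220, 180), 200, -1)       # a bright "block" region
ref = cv2.GaussianBlur(img, (5, 5), 0)                  # a slightly different view

# Block shape extraction: threshold the image and take the largest outer contour
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
block_shape = max(contours, key=cv2.contourArea)

# Image matching: ORB keypoints matched against the reference image
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img, None)
kp2, des2 = orb.detectAndCompute(ref, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
print(len(block_shape), "contour points,", len(matches), "feature matches")
```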

AUTOMATIC BUILDING EXTRACTION BASED ON MULTI-SOURCE DATA FUSION

  • Lu, Yi Hui; Trinder, John
    • Proceedings of the KSRS Conference / 2003.11a / pp.248-250 / 2003
  • An automatic approach and strategy for extracting building information from aerial images using combined image analysis and interpretation techniques is described in this paper. A dense DSM is obtained by stereo image matching. Multi-band classification, the DSM, texture segmentation and the Normalised Difference Vegetation Index (NDVI) are used to reveal areas of interest containing buildings. Then, based on the derived approximate building areas, a shape modelling algorithm based on the level set formulation of curve and surface motion is used to precisely delineate the building boundaries. Data fusion based on the Dempster-Shafer technique is used to simultaneously interpret knowledge from several data sources of the same region, finding the intersection of propositions on the extracted information derived from the datasets, together with their associated probabilities. A number of test areas, which include buildings of different sizes, shapes and roof colours, have been investigated. The tests are encouraging and demonstrate that the system is effective for building extraction and for determining more accurate elevations of the terrain surface.

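Dempster-Shafer fusion combines mass functions from several evidence sources over a frame of discernment. A minimal sketch of Dempster's rule of combination for a two-hypothesis frame {building, non-building} is given below; the mass values are illustrative and do not come from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions on the frame {B, N} (B = building).

    Each mass function is a dict with keys 'B', 'N' and 'BN' (ignorance),
    whose values sum to 1.
    """
    # Conflict: mass assigned to contradictory hypotheses by the two sources
    k = m1['B'] * m2['N'] + m1['N'] * m2['B']
    norm = 1.0 - k
    combined = {
        'B': (m1['B'] * m2['B'] + m1['B'] * m2['BN'] + m1['BN'] * m2['B']) / norm,
        'N': (m1['N'] * m2['N'] + m1['N'] * m2['BN'] + m1['BN'] * m2['N']) / norm,
    }
    combined['BN'] = 1.0 - combined['B'] - combined['N']
    return combined

# Illustrative masses, e.g. from a DSM cue and an NDVI cue for one region
m_dsm = {'B': 0.6, 'N': 0.1, 'BN': 0.3}
m_ndvi = {'B': 0.5, 'N': 0.2, 'BN': 0.3}
print(dempster_combine(m_dsm, m_ndvi))   # building belief rises to ~0.76
```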

An Adaptive FIHS Fusion Using Spatial and Spectral Band Characteristics of Remote Sensing Image (위성 영상의 공간 및 분광대역 특성을 활용한 적응 FIHS 융합)

  • Seo, Yong-Su; Kim, Joong-Gon
    • Journal of the Korean Association of Geographic Information Studies / v.12 no.4 / pp.125-135 / 2009
  • Owing to its fast computing capability, FIHS (Fast Intensity Hue Saturation) fusion is widely used for image fusion. However, FIHS fusion distorts color in the same way as the IHS (Intensity Hue Saturation) fusion technique. In this paper, a FIHS fusion technique (FIHS-BR), which reduces color distortion by using the ratio of each spectral band, and an adaptive FIHS fusion (FIHS-SABR), which uses spatial information together with the spectral band ratios, are proposed. The proposed FIHS-BR fusion reduces color distortion by adding a different spatial detail improvement value to each spectral band, derived from the spectral band ratios. The proposed FIHS-SABR fusion reduces color distortion further by readjusting the spatial detail improvement values for each spectral band according to the band ratios, with the values derived adaptively from the spatial characteristics of the local image. To evaluate the performance of the proposed FIHS-BR and FIHS-SABR fusion, a computer simulation was performed on an IKONOS remote sensing image. The experimental results show that the proposed methods produce less color distortion in forest regions, which exhibit severe color distortion under traditional FIHS fusion. The evaluation of the spectral characteristics of the fused images shows that the proposed methods give the best results.

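In FIHS-style fusion the intensity is taken as the mean of the multispectral bands and the panchromatic detail (PAN - I) is injected into every band; weighting that detail by a band ratio is roughly the idea behind the FIHS-BR variant described above. The sketch below follows this general scheme and is not the authors' exact FIHS-BR or FIHS-SABR formulation.

```python
import numpy as np

def fihs_fusion(ms, pan, band_ratio=False):
    """Fast IHS-style fusion of upsampled MS bands with a PAN image.

    ms  : (k, h, w) multispectral bands already resampled to the PAN grid
    pan : (h, w)    panchromatic band
    band_ratio : if True, weight the injected detail by MS_i / I
                 (a simple band-ratio weighting, assumed for this sketch)
    """
    intensity = ms.mean(axis=0)        # I = mean of the MS bands
    detail = pan - intensity           # spatial detail to inject
    if band_ratio:
        eps = 1e-6                     # avoid division by zero
        return ms + (ms / (intensity + eps)) * detail
    return ms + detail                 # classic FIHS: same detail for every band

# Toy example with 4 bands
ms = np.random.rand(4, 64, 64)
pan = np.random.rand(64, 64)
fused = fihs_fusion(ms, pan, band_ratio=True)
print(fused.shape)  # (4, 64, 64)
```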

Wavelet Packet Image Coder Using Coefficients Partitioning For Remote Sensing Images (위성 영상을 위한 계수분할 웨이블릿 패킷 영상 부호화 알고리즘에 관한 연구)

  • 한수영; 조성윤
    • Korean Journal of Remote Sensing / v.18 no.6 / pp.359-367 / 2002
  • In this paper, a new embedded wavelet packet image coding algorithm that exploits the correlation between partitioned coefficients is proposed. The algorithm defines a parent-child relationship between individual frequency sub-bands to reduce image reconstruction error, and every coefficient is partitioned and encoded in a zerotree data structure according to this relationship. It is shown that the proposed wavelet packet image coder achieves low bit rates and good rate-distortion performance. Compared with the conventional method, it also demonstrates higher PSNR at the same bit rate, an improvement in compression time, and precise rate control. These results show that the encoding and decoding processes of the proposed coder are simpler and more accurate than the conventional ones for texture images that contain many mid- and high-frequency elements, such as aerial and satellite photographs. The experimental results suggest that the proposed method can be applied to real-time vision systems, on-line image processing, and image fusion, which require smaller file sizes and better resolution.
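The embedded coder with coefficient partitioning and zerotrees is not reproduced here. The sketch below shows only the foundation any such coder builds on: a 2-D wavelet decomposition followed by crude thresholding of small coefficients, using PyWavelets; the wavelet, level, and keep ratio are assumptions.

```python
import numpy as np
import pywt

def wavelet_threshold(img, wavelet="db2", level=3, keep_ratio=0.05):
    """Decompose, keep only the largest coefficients, and reconstruct.

    This loosely mimics the coefficient-selection stage of an embedded
    coder; the real algorithm orders and encodes coefficients into
    zerotrees instead of simply zeroing them.
    """
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    threshold = np.quantile(np.abs(arr), 1.0 - keep_ratio)
    arr[np.abs(arr) < threshold] = 0.0                 # discard small coefficients
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

img = np.random.rand(128, 128)
rec = wavelet_threshold(img)
print(float(np.sqrt(np.mean((rec[:128, :128] - img) ** 2))))  # reconstruction RMSE
```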

An Experiment on Image Restoration Applying the Cycle Generative Adversarial Network to Partial Occlusion Kompsat-3A Image

  • Won, Taeyeon; Eo, Yang Dam
    • Korean Journal of Remote Sensing / v.38 no.1 / pp.33-43 / 2022
  • This study presents a method to restore an optical satellite image distorted and occluded by fog, haze, and clouds into one that minimizes these degradation factors by referring to peripheral images of the same type. In particular, restoring partially occluded regions reduces the time and cost of re-acquiring imagery. To preserve the original image's pixel values as much as possible and to maintain continuity between restored and unrestored areas, a simulated restoration technique based on a modified Cycle Generative Adversarial Network (CycleGAN) was developed. The accuracy of the simulated image was analyzed by comparing CycleGAN and histogram matching, as well as the pixel value distributions, against the original image. The results show that for Site 1 (out of three sites), the root mean square error and R² of CycleGAN were 169.36 and 0.9917, respectively, showing lower errors than those for histogram matching (170.43 and 0.9896, respectively). Further, comparison of the mean and standard deviation values of the images simulated by CycleGAN and histogram matching with the ground-truth pixel values confirmed that the CycleGAN methodology is closer to the ground truth. For the histogram distribution of the simulated images as well, CycleGAN was closer to the ground truth than histogram matching.
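The accuracy figures quoted above (RMSE and R² against ground-truth pixel values) follow the standard definitions; a minimal sketch of how such a comparison could be computed for a restored image is shown below, on synthetic placeholder arrays rather than the Kompsat-3A data.

```python
import numpy as np

def rmse_r2(restored, truth):
    """Root mean square error and coefficient of determination (R^2)."""
    restored = restored.astype(float).ravel()
    truth = truth.astype(float).ravel()
    rmse = np.sqrt(np.mean((restored - truth) ** 2))
    ss_res = np.sum((truth - restored) ** 2)
    ss_tot = np.sum((truth - truth.mean()) ** 2)
    return rmse, 1.0 - ss_res / ss_tot

# Placeholder arrays standing in for the simulated and ground-truth images
rng = np.random.default_rng(1)
truth = rng.integers(0, 2**14, size=(256, 256))          # 14-bit-like pixel values
restored = truth + rng.normal(0, 150, size=truth.shape)  # simulated restoration
rmse, r2 = rmse_r2(restored, truth)
print(f"RMSE={rmse:.2f}, R2={r2:.4f}, mean diff={restored.mean()-truth.mean():.2f}")
```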

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images (작물 모니터링을 위한 다중 센서 고해상도 위성영상의 시공간 융합 모델의 평가: Sentinel-2 및 RapidEye 영상 융합 실험)

  • Park, Soyeon; Kim, Yeseul; Na, Sang-Il; Park, No-Wook
    • Korean Journal of Remote Sensing / v.36 no.5_1 / pp.807-821 / 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, originally developed for fusing mid- and low-resolution satellite images, for constructing a set of time-series high-resolution images for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated by considering the principle of spatio-temporal fusion. An experiment on the fusion of multi-temporal Sentinel-2 and RapidEye images over agricultural fields was conducted to evaluate prediction performance. Three representative fusion models, the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF), were applied in this comparative experiment. The three spatio-temporal fusion models exhibited different prediction performance in terms of prediction errors and spatial similarity. However, regardless of the model type, the correlation between the coarse-resolution images acquired on the pair dates and on the prediction date was more important for improving prediction performance than the time difference between the pair dates and the prediction date. In addition, using a vegetation index as the input to spatio-temporal fusion showed better prediction performance by alleviating error propagation, compared with computing the vegetation index from fused reflectance values. These experimental results can be used as basic information for selecting optimal image pairs and input types, and for developing advanced spatio-temporal fusion models for crop monitoring.
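All three models share the same basic principle: a fine-resolution image at the prediction date is estimated from a fine/coarse image pair at a nearby date plus the coarse image at the prediction date. The sketch below illustrates only that shared temporal-difference idea on synthetic arrays; it is not STARFM, SPSTFM, or FSDAF.

```python
import numpy as np
from scipy.ndimage import zoom

def temporal_difference_fusion(fine_t1, coarse_t1, coarse_t2, scale):
    """Predict a fine image at t2 from a (fine, coarse) pair at t1.

    fine_t1   : (H, W)  fine-resolution image at the pair date t1
    coarse_t1 : (h, w)  coarse image at t1   (H = h*scale, W = w*scale)
    coarse_t2 : (h, w)  coarse image at the prediction date t2
    The coarse temporal change is resampled to the fine grid and added,
    which is the core assumption behind STARFM-like models.
    """
    change = zoom(coarse_t2 - coarse_t1, scale, order=1)
    return fine_t1 + change

# Synthetic example: a fine grid three times denser than the coarse grid
rng = np.random.default_rng(2)
coarse_t1 = rng.random((40, 40))
coarse_t2 = coarse_t1 + 0.1                     # uniform reflectance change
fine_t1 = zoom(coarse_t1, 3, order=1) + rng.normal(0, 0.01, (120, 120))
pred = temporal_difference_fusion(fine_t1, coarse_t1, coarse_t2, scale=3)
print(pred.shape, float(pred.mean() - fine_t1.mean()))  # (120, 120), ~0.1
```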

AQUACULTURE FACILITIES DETECTION FROM SAR AND OPTIC IMAGES

  • Yang, Chan-Su; Yeom, Gi-Ho; Cha, Young-Jin; Park, Dong-Uk
    • Proceedings of the KSRS Conference / 2008.10a / pp.320-323 / 2008
  • This study attempts to establish a system for extracting and monitoring culture grounds of seaweeds (laver, brown seaweed and Capsosiphon fulvescens) and abalone on the basis of both KOMPSAT-2 and TerraSAR-X data. The study areas are located on the northwest and southwest coasts of South Korea, famous for coastal culture grounds. The northwest site lies in a high tidal range area (on average 6.1 m in Asan Bay) and is mostly occupied by laver culture grounds. A semi-automatic detection system for laver facilities is described and assessed for spaceborne optical images. The southwest coast, on the other hand, is most famous for seaweeds. Aquaculture facilities, which cover extensive portions of this area, can be subdivided into three major groups: brown seaweed, Capsosiphon fulvescens and abalone farms. The study is based on the interpretation of optical and SAR satellite data, and a detailed image analysis procedure is described here. On May 25 and June 2, 2008, the TerraSAR-X radar satellite acquired images of the area. SAR data are uniquely suited to mapping these farms. In the case of abalone farms, the backscatter from the surrounding dykes allows recognition and separation of abalone ponds from all other water-covered surfaces. However, identification of seaweeds such as laver, brown seaweed and Capsosiphon fulvescens depends on the dampening effect caused by the presence of the facilities and is a complex task, because objects that resemble seaweed farms frequently occur, particularly under low wind or tidal conditions. Lastly, fusion of SAR and optical images is tested to enhance the detection of aquaculture facilities, using the 1 m panchromatic image and the corresponding 4 m, four-band multispectral image from KOMPSAT-2. The mapping accuracy achieved for the farms will be estimated and discussed after field verification of the preliminary results.


Red Tide Detection through Image Fusion of GOCI and Landsat OLI (GOCI와 Landsat OLI 영상 융합을 통한 적조 탐지)

  • Shin, Jisun; Kim, Keunyong; Min, Jee-Eun; Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing / v.34 no.2_2 / pp.377-391 / 2018
  • In order to efficiently monitor red tide over wide areas, the need for red tide detection using remote sensing is increasing. However, previous studies have focused on developing red tide detection algorithms for ocean colour sensors. In this study, we propose the use of multiple sensors to address the inaccuracy of red tide detection in coastal areas with high turbidity, which is pointed out as a limitation of satellite-based red tide monitoring. The study areas were selected based on red tide information provided by the National Institute of Fisheries Science, and spatial fusion and spectral-based fusion were attempted using GOCI images as the ocean colour sensor and Landsat OLI images as the terrestrial sensor. Through spatial fusion of the two images, improved detection results were obtained both for red tide in coastal areas, which is impossible to observe in GOCI images, and in outer sea areas, where the quality of the Landsat OLI image is low. Spectral-based fusion was performed at the feature level and at the raw-data level, and there was no significant difference in the red tide distribution patterns derived from the two methods. However, the feature-level method tends to overestimate the red tide area when the spatial resolution of the image is low. Pixel segmentation by the linear spectral unmixing method showed that the difference in red tide area increases as the number of pixels with a low red tide fraction increases. At the raw-data level, the Gram-Schmidt sharpening method estimated a somewhat larger area than the PC spectral sharpening method, but no significant difference was observed. This study shows that both coastal red tide in highly turbid water and red tide in outer sea areas can be detected through spatial fusion of ocean colour and terrestrial sensors. In addition, by presenting various spectral-based fusion methods, a more accurate red tide area estimation method is suggested. These results are expected to provide more precise detection of red tide around the Korean peninsula and the accurate red tide area information needed to determine countermeasures to effectively control red tide.
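Linear spectral unmixing, used above for pixel segmentation, models each pixel spectrum as a non-negative mixture of endmember spectra. A minimal sketch using non-negative least squares is shown below; the endmember spectra are synthetic stand-ins for red-tide and background-water signatures.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers):
    """Estimate endmember fractions for one pixel spectrum.

    pixel      : (bands,)            observed reflectance spectrum
    endmembers : (bands, n_members)  columns are endmember spectra
    Returns fractions normalised to sum to one (sum-to-one is imposed by
    normalisation here rather than as a hard constraint).
    """
    fractions, _ = nnls(endmembers, pixel)      # non-negative least squares
    total = fractions.sum()
    return fractions / total if total > 0 else fractions

# Synthetic 4-band endmembers: "red tide" vs. "background water"
endmembers = np.array([[0.08, 0.02],
                       [0.12, 0.03],
                       [0.30, 0.04],
                       [0.10, 0.05]])
pixel = 0.6 * endmembers[:, 0] + 0.4 * endmembers[:, 1]
print(unmix(pixel, endmembers).round(3))        # ~[0.6, 0.4]
```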