• Title/Summary/Keyword: Multi-resolution Image fusion

Semi-Automated Extraction of Geographic Information using KOMPSAT 2 : Analyzing Image Fusion Methods and Geographic Object-Based Image Analysis (다목적 실용위성 2호 고해상도 영상을 이용한 지리 정보 추출 기법 - 영상융합과 지리객체 기반 분석을 중심으로 -)

  • Yang, Byung-Yun;Hwang, Chul-Sue
    • Journal of the Korean Geographical Society
    • /
    • v.47 no.2
    • /
    • pp.282-296
    • /
    • 2012
  • This study compared the effects of the spatial resolution ratio in image fusion using Korea Multi-Purpose Satellite 2 (KOMPSAT-2), also known as Arirang-2. Image fusion techniques, also called pansharpening, are required to obtain color imagery with high spatial resolution from panchromatic and multi-spectral images. The higher-quality satellite images generated by an image fusion technique enable interpreters to produce better application results. Thus, image fusion methods from three domains were applied to identify significantly improved fused KOMPSAT-2 images. All fused images were then evaluated for both spectral and spatial quality to determine the optimum fused image. Additionally, this research compared Pixel-Based Image Analysis (PBIA) with GEOgraphic Object-Based Image Analysis (GEOBIA) to determine which yields better classification results. Specifically, building rooftops were extracted with both image analysis approaches and evaluated to identify the most accurate result. This research therefore demonstrates the effective use of very high resolution satellite imagery by image interpreters for applications such as coastal management and urban and regional planning.
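
The abstract does not spell out which fusion methods fall in the three domains; as a generic illustration of component-substitution pansharpening, here is a minimal sketch of the Brovey (band-ratio) transform, which is an assumption for illustration and not necessarily one of the methods the authors tested:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-12):
    """Brovey (ratio-based) pansharpening.

    ms  : (H, W, B) multi-spectral bands, upsampled to the pan grid
    pan : (H, W)    panchromatic band

    Each MS band is scaled by the ratio of the pan intensity to the
    mean MS intensity, injecting spatial detail while preserving the
    relative spectral mixture of each pixel.
    """
    ms = ms.astype(np.float64)
    intensity = ms.mean(axis=2)          # synthetic low-res intensity
    ratio = pan / (intensity + eps)      # per-pixel detail ratio
    return ms * ratio[..., None]

# toy example: 2x2 scene, 3 bands, constant spectrum
ms = np.ones((2, 2, 3)) * np.array([0.2, 0.3, 0.4])
pan = np.array([[0.3, 0.6], [0.3, 0.3]])
fused = brovey_pansharpen(ms, pan)
```

A useful sanity check on this scheme: the band-mean of the fused image equals the pan band, so the injected detail is exactly the pan spatial structure.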

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images (작물 모니터링을 위한 다중 센서 고해상도 위성영상의 시공간 융합 모델의 평가: Sentinel-2 및 RapidEye 영상 융합 실험)

  • Park, Soyeon;Kim, Yeseul;Na, Sang-Il;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.807-821
    • /
    • 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, originally developed for fusing mid- and low-resolution satellite images, to constructing time-series high-resolution images for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated in light of the principle of spatio-temporal fusion. An experiment on the fusion of multi-temporal Sentinel-2 and RapidEye images of agricultural fields was conducted to evaluate prediction performance. Three representative fusion models were applied in this comparative experiment: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF). The three spatio-temporal fusion models exhibited different prediction performance in terms of prediction errors and spatial similarity. However, regardless of model type, a strong correlation between the coarse-resolution images acquired on the pair dates and on the prediction date mattered more for prediction performance than a small temporal gap between the pair dates and the prediction date. In addition, using the vegetation index as the fusion input yielded better prediction performance than computing the vegetation index from fused reflectance values, as it alleviates error propagation. These experimental results can serve as basic information both for selecting optimal image pairs and input types and for developing advanced spatio-temporal fusion models for crop monitoring.
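
The core temporal-adjustment idea shared by models such as STARFM can be illustrated, stripped of the spatial and spectral neighbour weighting that the real models add, as:

```python
import numpy as np

def temporal_fusion_predict(fine_t1, coarse_t1, coarse_t2):
    """Predict the fine-resolution image at date t2 from a fine/coarse
    image pair observed at t1 plus the coarse image at t2 (all arrays
    assumed already resampled to the fine grid):

        fine_t2 ~ fine_t1 + (coarse_t2 - coarse_t1)

    i.e. the coarse-scale temporal change is transferred to the fine
    image. STARFM and its successors refine this estimate with weights
    computed from spectrally similar neighbouring pixels.
    """
    return fine_t1 + (coarse_t2 - coarse_t1)

fine_t1 = np.array([[0.20, 0.40], [0.60, 0.80]])
coarse_t1 = np.full((2, 2), 0.30)
coarse_t2 = np.full((2, 2), 0.40)   # uniform +0.10 change at coarse scale
pred = temporal_fusion_predict(fine_t1, coarse_t1, coarse_t2)
```

This also makes the abstract's finding intuitive: the prediction is only as good as the correlation between the coarse images and the fine-scale change they stand in for.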

Visible and SWIR Satellite Image Fusion Using Multi-Resolution Transform Method Based on Haze-Guided Weight Map (Haze-Guided Weight Map 기반 다중해상도 변환 기법을 활용한 가시광 및 SWIR 위성영상 융합)

  • Kwak, Taehong;Kim, Yongil
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.3
    • /
    • pp.283-295
    • /
    • 2023
  • With the development of sensor and satellite technology, numerous high-resolution, multi-spectral satellite images have become available. Owing to their wavelength-dependent reflection, transmission, and scattering characteristics, multi-spectral satellite images can provide complementary information for earth observation. In particular, the short-wave infrared (SWIR) band can penetrate certain types of atmospheric aerosols thanks to its reduced Rayleigh scattering, which allows a clearer view and more detailed information to be captured from haze-covered surfaces than the visible band. In this study, we propose a multi-resolution transform-based image fusion method to combine visible and SWIR satellite images. The purpose of the fusion method is to generate a single integrated image incorporating complementary information: detailed background information from the visible band and land cover information in haze regions from the SWIR band. To this end, this study applied the Laplacian pyramid-based multi-resolution transform, a representative image decomposition approach for image fusion. Additionally, we modified the multi-resolution fusion method with a haze-guided weight map, based on the prior knowledge that SWIR bands carry more information in pixels within haze regions. The proposed method was validated using very high-resolution Worldview-3 satellite images containing multi-spectral visible and SWIR bands. The experimental data, which include hazy areas with limited visibility caused by wildfire smoke, were used to validate the penetration properties of the proposed fusion method. Both quantitative and visual evaluations were conducted using image quality assessment indices. The results showed that bright features from the SWIR bands in the hazy areas were successfully fused into the integrated feature maps without any loss of detailed information from the visible bands.
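
The general Laplacian-pyramid fusion recipe the abstract describes — decompose both images, blend each level with a weight map, reconstruct — can be sketched compactly. This is a simplified assumption-laden version: simple block averaging stands in for the usual Gaussian filtering, and the haze-guided weight map is taken as a given input rather than estimated:

```python
import numpy as np

def downsample(img):
    """2x2 block average: a crude low-pass filter plus decimation."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def upsample(img, shape):
    """Nearest-neighbour expansion, padded/cropped to `shape`."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    pad = ((0, max(shape[0] - up.shape[0], 0)), (0, max(shape[1] - up.shape[1], 0)))
    return np.pad(up, pad, mode="edge")[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Band-pass detail levels plus a final low-pass residual."""
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels):
        low = downsample(cur)
        pyr.append(cur - upsample(low, cur.shape))   # detail at this scale
        cur = low
    pyr.append(cur)                                  # low-pass residual
    return pyr

def fuse_pyramids(vis, swir, weight, levels=3):
    """Blend two images level by level with a per-pixel weight map
    (weight near 1 where the SWIR image is the more reliable source)."""
    pv, ps = laplacian_pyramid(vis, levels), laplacian_pyramid(swir, levels)
    w, fused = weight.astype(np.float64), []
    for lv, ls in zip(pv, ps):
        fused.append((1.0 - w) * lv + w * ls)
        w = downsample(w)                # match the next, coarser level
    out = fused[-1]                      # reconstruct coarsest-to-finest
    for detail in reversed(fused[:-1]):
        out = upsample(out, detail.shape) + detail
    return out
```

Because each detail level stores the exact residual of its own upsampling step, reconstruction with an all-zero (or all-one) weight map returns the visible (or SWIR) input unchanged, which is a convenient correctness check.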

A depth-based Multi-view Super-Resolution Method Using Image Fusion and Blind Deblurring

  • Fan, Jun;Zeng, Xiangrong;Huangpeng, Qizi;Liu, Yan;Long, Xin;Feng, Jing;Zhou, Jinglun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.10
    • /
    • pp.5129-5152
    • /
    • 2016
  • Multi-view super-resolution (MVSR) aims to estimate a high-resolution (HR) image from a set of low-resolution (LR) images captured from different viewpoints (typically by different cameras), and is usually applied in camera array imaging. Given that MVSR is an ill-posed and typically computationally costly problem, we super-resolve multi-view LR images of the original scene via image fusion (IF) and blind deblurring (BD). First, we reformulate the MVSR problem as two easier problems: an IF problem and a BD problem. We solve the IF problem after first computing the depth map of the desired image, and then solve the BD problem, in which the optimization subproblems with respect to the desired image and the unknown blur are efficiently addressed by the alternating direction method of multipliers (ADMM). Our approach bridges the gap between MVSR and BD, taking advantage of existing BD methods to address MVSR. It is thus well suited to camera array imaging, where the blur kernel is typically unknown in practice. Experimental results on real and synthetic images demonstrate the effectiveness of the proposed method.
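
The ADMM splitting the authors use is specific to their deblurring objective; as a generic illustration of the x-update / z-update / dual-update pattern, here is ADMM applied to the simpler lasso problem (a standard textbook instance, not the paper's formulation):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM.

    x-update: ridge-regularized least squares (the smooth part)
    z-update: soft-thresholding (the proximal operator of the l1 norm)
    u-update: running sum of the constraint residual x - z
    """
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    x = z = u = np.zeros(n)
    L = AtA + rho * np.eye(n)            # in real code, factor this once
    for _ in range(iters):
        x = np.linalg.solve(L, Atb + rho * (z - u))
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        u = u + x - z
    return z

# with A = I the exact lasso solution is soft-thresholding of b,
# which makes the result easy to verify
A = np.eye(3)
b = np.array([3.0, -0.5, 2.0])
sol = admm_lasso(A, b, lam=1.0)
```

The appeal for problems like the paper's is the same as here: each subproblem (a least-squares solve, a cheap proximal step) is easy even when the joint problem is not.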

Assessment of the Ochang Plain NDVI using Improved Resolution Method from MODIS Images (MODIS영상의 고해상도화 수법을 이용한 오창평야 NDVI의 평가)

  • Park, Jong-Hwa;La, Sang-Il
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.9 no.6
    • /
    • pp.1-12
    • /
    • 2006
  • Remote sensing cannot provide a direct measurement of vegetation, but it can provide a reasonably good estimate through a vegetation index (VI), defined as a ratio of satellite bands. Monitoring vegetation near urban regions is made difficult by the low spatial and temporal resolution of image captures. In this study, a spatial resolution enhancement method is adopted to compensate for low spatial resolution. Recent studies have successfully estimated the normalized difference vegetation index (NDVI) using such resolution enhancement methods with data from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the EOS Terra satellite. Spatial resolution enhancement is an important tool in remote sensing, as many Earth observation satellites provide both high-resolution and low-resolution multi-spectral images. Examples of enhancing a MODIS multi-spectral image and a MODIS NDVI image of Cheongju using a Landsat TM high-resolution multi-spectral image are presented, and the results are compared with those of the IHS technique for enhancing the spatial resolution of multi-spectral bands using a higher-resolution data set. To provide a continuous monitoring capability for NDVI, in situ measurements of NDVI in a paddy field were carried out in 2004 for comparison with remotely sensed MODIS data. We compare and discuss NDVI estimates from the MODIS sensor and in-situ spectroradiometer data over the Ochang plain region. The results indicate that the MODIS NDVI is underestimated by approximately 50%.
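
NDVI itself is the standard band ratio, computed per pixel from red and near-infrared reflectance:

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values approach +1 over dense vegetation (high NIR, low red
    reflectance); bare soil and water fall near zero or below.
    `eps` guards against division by zero over dark pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# illustrative surface-reflectance values (e.g. MODIS NIR and red bands)
veg = ndvi(np.array([0.5, 0.3]), np.array([0.1, 0.3]))  # ~ [0.667, 0.0]
```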

Comparative Analysis of Land-use thematic GIS layers and Multi-resolution Image Classification Results by using LANDSAT 7 ETM+ and KOMPSAT EOC image (Landsat 7 ETM+와 KOMPSAT EOC 영상 자료를 이용한 다중 분해능 영상 분류결과와 토지이용현황 주제도 대비 분석)

  • 이기원;유영철;송무영;사공호상
    • Spatial Information Research
    • /
    • v.10 no.2
    • /
    • pp.331-343
    • /
    • 2002
  • Recently, as various applications of space-borne imagery have been emphasized, interest in integrated analysis or fusion of multiple sources has also been increasing. In this study, to investigate the applicability of multiple imageries for further regional-scale applications, DN value analysis and multi-resolution classification using KOMPSAT EOC imagery and Landsat 7 ETM+ image data of the Namyangju city area were performed, and the classified results were compared with land-use thematic data for the same area. The classification results obtained with multi-resolution image data show that linear-type features can be easily extracted. Furthermore, multi-resolution classified images are expected to be effectively utilized for urban environment analysis, based on the similar patterns found in a comparative study using multi-buffered zone analysis (so-called distance analysis) along main road features in the study area.

Comparison of various image fusion methods for impervious surface classification from VNREDSat-1

  • Luu, Hung V.;Pham, Manh V.;Man, Chuc D.;Bui, Hung Q.;Nguyen, Thanh T.N.
    • International Journal of Advanced Culture Technology
    • /
    • v.4 no.2
    • /
    • pp.1-6
    • /
    • 2016
  • Impervious surfaces are important indicators for urban development monitoring. Accurate mapping of urban impervious surfaces with observation satellites such as VNREDSat-1 remains challenging due to the spectral diversity not captured by an individual PAN image. In this article, five multi-resolution image fusion techniques were compared for the task of classifying urban impervious surfaces. The results show that, for the VNREDSat-1 dataset, the UNB and Wavelet transformation methods are the best techniques for preserving the spatial and spectral information of the original MS image, respectively. However, the UNB technique gives the best results for impervious surface classification, especially when shadow areas are included in the non-impervious surface group.

Improvement of Land Cover Classification Accuracy by Optimal Fusion of Aerial Multi-Sensor Data

  • Choi, Byoung Gil;Na, Young Woo;Kwon, Oh Seob;Kim, Se Hun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.3
    • /
    • pp.135-152
    • /
    • 2018
  • The purpose of this study is to propose an optimal fusion method for aerial multi-sensor data to improve the accuracy of land cover classification. Recently, in the fields of environmental impact assessment and land monitoring, high-resolution image data have been acquired over many regions for quantitative land management using aerial multi-sensors, but most of the data are used only for the purpose of the original project. Hyperspectral sensor data, which are mainly used for land cover classification, have the advantage of high classification accuracy, but accurately classifying the land cover state is difficult because only the visible and near-infrared wavelengths are acquired, at low spatial resolution. Therefore, research is needed on improving land cover classification accuracy by fusing hyperspectral sensor data with multispectral sensor and aerial laser sensor data. As fusion methods for aerial multi-sensor data, we propose a pixel ratio adjustment method, a band accumulation method, and a spectral graph adjustment method. Fusion parameters such as the fusion rate, band accumulation, and spectral graph expansion ratio were selected according to the fusion method, and fused data were generated and land cover classification accuracy calculated while incrementally varying the fusion variables. Optimal fusion variables for hyperspectral, multispectral, and aerial laser data were derived by considering the correlation between land cover classification accuracy and the fusion variables.
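
The pixel ratio and spectral graph adjustment methods depend on parameters the abstract does not specify; the simplest of the three, band accumulation, essentially amounts to stacking co-registered bands from all sensors into one feature cube. A minimal sketch (names and band counts below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def accumulate_bands(hyper, multi, lidar_height):
    """Stack co-registered sensor data into one per-pixel feature cube.

    hyper        : (H, W, Bh) hyperspectral reflectance
    multi        : (H, W, Bm) multispectral reflectance
    lidar_height : (H, W)     normalized height from the aerial laser sensor

    All inputs must share the same grid (resampled beforehand).
    """
    return np.dstack([hyper, multi, lidar_height[..., None]])

hyper = np.zeros((4, 4, 30))
multi = np.zeros((4, 4, 4))
height = np.zeros((4, 4))
cube = accumulate_bands(hyper, multi, height)
# each pixel now carries a 35-element feature vector for the classifier
```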

High-resolution Depth Generation using Multi-view Camera and Time-of-Flight Depth Camera (다시점 카메라와 깊이 카메라를 이용한 고화질 깊이 맵 제작 기술)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.6
    • /
    • pp.1-7
    • /
    • 2011
  • A depth camera measures range information of the scene in real time using Time-of-Flight (TOF) technology. The measured depth data is then regularized and provided as a depth image. This depth image is utilized together with stereo or multi-view images to generate a high-resolution depth map of the scene. However, the noise and distortion of the TOF depth image must be corrected due to the technical limitations of the TOF depth camera. The corrected depth image is combined with the color image in various ways to obtain the high-resolution depth of the scene. In this paper, we introduce the principle of and various techniques for sensor fusion for high-quality depth generation using multi-view cameras together with depth cameras.

An Improved Remote Sensing Image Fusion Algorithm Based on IHS Transformation

  • Deng, Chao;Wang, Zhi-heng;Li, Xing-wang;Li, Hui-na;Cavalcante, Charles Casimiro
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.3
    • /
    • pp.1633-1649
    • /
    • 2017
  • In remote sensing image processing, the traditional fusion algorithm is based on the Intensity-Hue-Saturation (IHS) transformation. This method does not adequately account for the texture and spectral information, spatial resolution, and statistical information of the images, which leads to spectral distortion in the fused image. Although traditional solutions combine manifold methods, the fusion procedure is rather complicated and not suitable for practical operation. In this paper, an improved IHS transformation fusion algorithm based on a local variance weighting scheme is proposed for remote sensing images. In our proposal, firstly, the local variance of the SPOT image (SPOT stands for the French "Système Probatoire d'Observation de la Terre", an earth observation system) is calculated using sliding windows of different sizes. The optimal window size is then selected, and the images are normalized with the optimal-window local variance. Secondly, the power exponent is chosen as the mapping function, and the local variance is used to obtain the weight of the I component and to match the SPOT images. Then we obtain the I' component from the weight, the I component, and the matched SPOT images. Finally, the fused image is obtained by the inverse Intensity-Hue-Saturation transformation of the I', H, and S components. The proposed algorithm has been tested and compared with other image fusion methods well known in the literature. Simulation results indicate that the proposed algorithm obtains a superior fused image according to quantitative fusion evaluation indices.
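
The unweighted baseline that the paper improves upon, fast IHS fusion, substitutes the pan band for the intensity component; with I = (R + G + B) / 3, the forward-substitute-invert pipeline reduces to adding the same detail term to every band. A minimal sketch of that baseline (not the paper's weighted variant):

```python
import numpy as np

def fast_ihs_pansharpen(rgb, pan):
    """Baseline (fast) IHS pansharpening.

    rgb : (H, W, 3) multi-spectral image, upsampled to the pan grid
    pan : (H, W)    panchromatic image

    Replacing the intensity I = (R + G + B) / 3 with the pan band and
    inverting the IHS transform is equivalent to adding the detail
    term (pan - I) to every band, which preserves hue and saturation.
    """
    rgb = rgb.astype(np.float64)
    intensity = rgb.mean(axis=2)
    detail = pan - intensity
    return rgb + detail[..., None]

rgb = np.ones((2, 2, 3)) * np.array([0.2, 0.3, 0.4])
pan = np.array([[0.5, 0.2], [0.3, 0.3]])
fused = fast_ihs_pansharpen(rgb, pan)
```

By construction the fused image's intensity equals the pan band exactly; the spectral distortion the paper targets arises because real pan bands are not the simple band mean this substitution assumes.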