• Title/Summary/Keyword: STARFM


Cloud Detection and Restoration of Landsat-8 using STARFM (재난 모니터링을 위한 Landsat 8호 영상의 구름 탐지 및 복원 연구)

  • Lee, Mi Hee; Cheon, Eun Ji; Eo, Yang Dam
    • Korean Journal of Remote Sensing / v.35 no.5_2 / pp.861-871 / 2019
  • Landsat satellite images are increasingly used for disaster damage analysis and disaster monitoring because they allow periodic, wide-area observation of damaged regions. However, periodic disaster monitoring is limited by areas of missing data caused by clouds, an inherent characteristic of optical satellite imagery, so methods for restoring these missing areas are needed. This study detected and removed clouds and cloud shadows using the quality assessment (QA) band provided with Landsat-8 images, and restored the removed areas with the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm. The image restored by the proposed method was compared with an image restored by a conventional restoration method through the maximum likelihood classification (MLC) method. As a result, the STARFM-based restoration showed an overall accuracy of 89.40%, confirming that it is more effective than the conventional restoration method. These results are therefore expected to increase the utility of Landsat satellite imagery for disaster analysis.
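The cloud-removal step described above can be sketched with bitwise tests on the Landsat-8 QA band. The sketch below assumes the Collection 2 `QA_PIXEL` layout, where bit 3 flags cloud and bit 4 flags cloud shadow; the fill value and array shapes are illustrative, not taken from the paper.

```python
import numpy as np

# Landsat-8 Collection 2 QA_PIXEL bit flags (bit 3: cloud, bit 4: cloud shadow).
CLOUD_BIT = 1 << 3
SHADOW_BIT = 1 << 4

def cloud_shadow_mask(qa_band: np.ndarray) -> np.ndarray:
    """Return a boolean mask that is True where a pixel is cloud or cloud shadow."""
    return (qa_band & (CLOUD_BIT | SHADOW_BIT)) != 0

def remove_masked(reflectance: np.ndarray, qa_band: np.ndarray, fill=np.nan) -> np.ndarray:
    """Replace cloud/shadow pixels with a fill value, leaving gaps for restoration."""
    out = reflectance.astype(float).copy()
    out[cloud_shadow_mask(qa_band)] = fill
    return out

# Illustrative 2x2 tile: top-left pixel flagged as cloud, bottom-right as shadow.
qa = np.array([[CLOUD_BIT, 0], [0, SHADOW_BIT]], dtype=np.uint16)
refl = np.array([[0.1, 0.2], [0.3, 0.4]])
gapped = remove_masked(refl, qa)
```

The gaps left as NaN would then be the target pixels for the STARFM restoration step.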

Comparison of Spatio-temporal Fusion Models of Multiple Satellite Images for Vegetation Monitoring (식생 모니터링을 위한 다중 위성영상의 시공간 융합 모델 비교)

  • Kim, Yeseul; Park, No-Wook
    • Korean Journal of Remote Sensing / v.35 no.6_3 / pp.1209-1219 / 2019
  • For consistent vegetation monitoring, time-series vegetation index datasets at fine temporal and spatial scales need to be generated by fusing multiple satellite datasets whose temporal and spatial scales are complementary. In this study, we quantitatively and qualitatively analyzed the prediction accuracy of time-series change information extracted from spatio-temporal fusion models of multiple satellite data for vegetation monitoring. We applied two fusion models widely employed for vegetation monitoring: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM). To quantitatively evaluate prediction accuracy, we first generated simulated datasets from fine-temporal-scale MODIS data and then used them as inputs for the spatio-temporal fusion models. The comparative experiment showed that ESTARFM outperformed STARFM, but the prediction performance of both models degraded as the gap between the prediction date and the simultaneous acquisition date of the input data increased. This result indicates that input data acquired close to the prediction date should be used to improve prediction accuracy. Given the limited availability of optical images, an advanced spatio-temporal fusion model reflecting these findings needs to be developed for vegetation monitoring.
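The core idea behind the fusion models compared above is that a coarse-resolution change observed between the pair date and the prediction date can be transferred to the fine-resolution image. The sketch below is a deliberately simplified, toy version of that STARFM principle: it keeps only the temporal-change term and a spectral-similarity weighting within a local window, omitting the full spectral/temporal/spatial distance weighting and similar-pixel selection of the published algorithm.

```python
import numpy as np

def starfm_predict(fine_t0, coarse_t0, coarse_tp, window=3):
    """Toy STARFM-style prediction of fine-resolution reflectance at time tp.

    For each pixel, candidate predictions fine_t0 + (coarse_tp - coarse_t0)
    from a local window are averaged with weights that favor neighbors
    spectrally similar to the center pixel. This is a simplification of the
    combined spectral/temporal/spatial weighting in the published algorithm.
    """
    pad = window // 2
    f0 = np.pad(fine_t0, pad, mode="edge")
    c0 = np.pad(coarse_t0, pad, mode="edge")
    cp = np.pad(coarse_tp, pad, mode="edge")
    out = np.zeros_like(fine_t0, dtype=float)
    rows, cols = fine_t0.shape
    for i in range(rows):
        for j in range(cols):
            fw = f0[i:i + window, j:j + window]
            # Candidate predictions: fine pixel plus observed coarse change.
            cand = fw + (cp[i:i + window, j:j + window] - c0[i:i + window, j:j + window])
            # Weight neighbors by spectral similarity to the center fine pixel.
            w = 1.0 / (np.abs(fw - fine_t0[i, j]) + 1e-6)
            out[i, j] = np.sum(w * cand) / np.sum(w)
    return out

# Illustrative check: a uniform +0.10 change in the coarse image between the
# pair date and the prediction date shifts the fine prediction by +0.10.
fine_t0 = np.full((4, 4), 0.30)
coarse_t0 = np.full((4, 4), 0.25)
coarse_tp = np.full((4, 4), 0.35)
predicted = starfm_predict(fine_t0, coarse_t0, coarse_tp)
```

The dependence on the coarse change term also makes clear why, as the abstract reports, accuracy degrades when the pair date drifts away from the prediction date: the assumed change becomes less representative.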

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images (작물 모니터링을 위한 다중 센서 고해상도 위성영상의 시공간 융합 모델의 평가: Sentinel-2 및 RapidEye 영상 융합 실험)

  • Park, Soyeon; Kim, Yeseul; Na, Sang-Il; Park, No-Wook
    • Korean Journal of Remote Sensing / v.36 no.5_1 / pp.807-821 / 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, originally developed for fusing mid- and low-resolution satellite images, to the construction of time-series high-resolution image sets for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated in light of the principle of spatio-temporal fusion. An experiment fusing multi-temporal Sentinel-2 and RapidEye images over agricultural fields was conducted to evaluate prediction performance. Three representative fusion models were applied: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF). The three models exhibited different prediction performance in terms of prediction errors and spatial similarity. Regardless of model type, however, the correlation between the coarse-resolution images acquired on the pair dates and on the prediction date was more important for prediction performance than the temporal gap between the pair dates and the prediction date. In addition, using the vegetation index itself as the fusion input yielded better predictions than computing the index from fused reflectance values, because it alleviates error propagation. These experimental results can serve as basic information both for selecting optimal image pairs and input types, and for developing advanced spatio-temporal fusion models for crop monitoring.
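The error-propagation effect noted in the last abstract can be illustrated numerically: when NDVI is computed from two separately fused bands, fusion errors in both bands pass nonlinearly through the band ratio, whereas fusing the index directly admits only one error term. The reflectance values and error magnitude below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + 1e-10)

# Hypothetical true band reflectances and an assumed fusion-error magnitude;
# the paper reports this effect empirically, so these numbers are only a demo.
rng = np.random.default_rng(0)
nir_true, red_true, eps, n = 0.5, 0.1, 0.02, 10_000

# Route A ("fuse reflectance, then compute index"): fusion errors enter both
# bands and propagate nonlinearly through the ratio.
ndvi_from_bands = ndvi(nir_true + rng.normal(0, eps, n),
                       red_true + rng.normal(0, eps, n))

# Route B ("compute index, then fuse"): a single fusion error is added to the
# already-computed index.
ndvi_direct = ndvi(nir_true, red_true) + rng.normal(0, eps, n)

# Under these assumptions, Route A shows a larger spread around the true NDVI,
# matching the abstract's preference for fusing the vegetation index directly.
spread_bands = ndvi_from_bands.std()
spread_index = ndvi_direct.std()
```

A first-order error analysis gives the same conclusion: the partial derivatives of NDVI with respect to the NIR and red bands amplify equal-magnitude band errors well beyond the direct index error at these reflectance levels.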