• Title/Summary/Keyword: RapidEye Image


Vicarious Radiometric Calibration of RapidEye Satellite Image Using CASI Hyperspectral Data (CASI 초분광 영상을 이용한 RapidEye 위성영상의 대리복사보정)

  • Chang, An Jin;Choi, Jae Wan;Song, Ah Ram;Kim, Ye Ji;Jung, Jin Ha
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.23 no.3
    • /
    • pp.3-10
    • /
    • 2015
  • All kinds of objects on the ground have inherent spectral reflectance curves, which can be used to classify ground objects and to detect targets. Remotely sensed data have to be converted to spectral reflectance for accurate analysis. Available approaches include formula-based methods provided by the data provider, mathematical model methods, and ground-data-based methods. In this study, a RapidEye satellite image was converted to reflectance using the spectral reflectance of a CASI hyperspectral image through vicarious radiometric calibration. The results were compared with those of other calibration methods and with ground data. The proposed method was closer to the ground data than the ATCOR and New Kurucz 2005 methods, and comparable to the ELM method.
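The empirical-line idea behind this kind of ground-data-based calibration — fitting a per-band linear mapping from image values to reference reflectance — can be sketched as follows. The DN and reflectance values are illustrative placeholders, not values from the paper.

```python
import numpy as np

def empirical_line(dn_targets, reflectance_targets):
    """Least-squares fit of reflectance = gain * DN + offset for one band."""
    gain, offset = np.polyfit(dn_targets, reflectance_targets, 1)
    return gain, offset

def to_reflectance(dn_band, gain, offset):
    """Apply the fitted line to convert a DN array to surface reflectance."""
    return gain * dn_band + offset

# Hypothetical dark/mid/bright calibration targets for a single RapidEye band,
# with reference reflectance taken from a co-registered source (here, CASI).
dn = np.array([1200.0, 4500.0, 8200.0])
rho = np.array([0.05, 0.22, 0.41])

gain, offset = empirical_line(dn, rho)
print(round(float(to_reflectance(4500.0, gain, offset)), 3))
```

The same fit would be repeated independently for each RapidEye band before any cross-method comparison.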

Extraction of Spatial Characteristics of Cadastral Land Category from RapidEye Satellite Images

  • La, Phu Hien;Huh, Yong;Eo, Yang Dam;Lee, Soo Bong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.6
    • /
    • pp.581-590
    • /
    • 2014
  • With rapid land development, land category should be updated on a regular basis, but manual field surveys have certain limitations. In this study, attempts were made to extract a feature vector considering the per-parcel spectral signature, PIMP (Percent Imperviousness), texture, and VIs (Vegetation Indices), based on RapidEye satellite imagery and the cadastral map. A total of nine land categories whose feature vectors could be reliably extracted from the images were selected and classified using an SVM (Support Vector Machine). In an accuracy assessment comparing the cadastral map with the classification result, the overall accuracy was 0.74. In the paddy-field category in particular, producer's accuracy and user's accuracy were highest, at 0.85 and 0.86, respectively.
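A per-parcel feature vector of the kind described — mean spectral signature plus a vegetation index and a texture measure — might be assembled as below. The band indices, parcel mask, and feature choices are illustrative assumptions; the paper feeds such vectors to an SVM, which is not reproduced here.

```python
import numpy as np

def parcel_features(image, parcel_mask, red=2, nir=4):
    """Build one feature vector for a parcel.

    image: (bands, rows, cols) reflectance array; parcel_mask: boolean mask.
    Returns mean spectrum per band, mean NDVI, and a crude texture proxy.
    """
    pixels = image[:, parcel_mask]                  # (bands, n_pixels)
    mean_spectrum = pixels.mean(axis=1)             # spectral signature
    ndvi = (pixels[nir] - pixels[red]) / (pixels[nir] + pixels[red])
    texture = pixels.std(axis=1).mean()             # within-parcel variability
    return np.concatenate([mean_spectrum, [ndvi.mean(), texture]])

# Toy 5-band, 4x4 image and a parcel covering its left half
img = np.random.default_rng(0).uniform(0.05, 0.6, size=(5, 4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True

fv = parcel_features(img, mask)
print(fv.shape)
```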

Estimation of Chlorophyll-a Concentrations in the Nakdong River Using High-Resolution Satellite Image (고해상도 위성영상을 이용한 낙동강 유역의 클로로필-a 농도 추정)

  • Choe, Eun-Young;Lee, Jae-Woon;Lee, Jae-Kwan
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.5
    • /
    • pp.613-623
    • /
    • 2011
  • This study assessed the feasibility of applying two-band and three-band reflectance models to chlorophyll-a estimation in turbid, productive waters that are smaller and narrower than the ocean, using a high-spatial-resolution image. Such band-ratio models have been successfully applied to chlorophyll-a concentrations of ocean or coastal waters using the Moderate Resolution Imaging Spectroradiometer (MODIS), the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), the Medium Resolution Imaging Spectrometer (MERIS), etc. Two-band and three-band models based on ratios of red and NIR bands are generally used for chlorophyll-a in turbid waters. A two-band model using the red and NIR bands of the RapidEye image showed no significant results, with $R^2$ of 0.38. To enhance the ratio between the absorption and reflection peaks, we used the red-edge band (710 nm) of the RapidEye image in the two-band and three-band models. The Red-RE two-band and Red-RE-NIR three-band reflectance models (with cubic equations) for the RapidEye image showed significant performance, with $R^2$ of 0.66 and 0.73, respectively. Their performance corresponded to 'approximate prediction', with RPD of 1.39 and 1.29 and RMSE of 24.8 and 22.4, respectively. Another three-band model with a quadratic equation showed performance similar to the Red-RE two-band model. The findings demonstrate that two-band and three-band reflectance models using a red-edge band can approximately estimate chlorophyll-a concentrations in turbid river water from a high-resolution satellite image. In the distribution map of estimated chlorophyll-a concentrations, the three-band model with a cubic equation showed lower values than the two-band model. In further work, quantification and correction of spectral interference caused by suspended sediments and colored dissolved organic matter will improve the accuracy of chlorophyll-a estimation in turbid waters.
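The band combinations behind such models can be sketched with the commonly used index forms (the exact functional forms and the site-specific regression that maps an index to a concentration are assumptions here, not reproduced from the paper; the reflectance values are made up):

```python
# Band-ratio indices for red (~658 nm), red-edge (~710 nm), and NIR
# (~805 nm) RapidEye bands, in the forms commonly used for turbid waters.

def two_band_index(r_red, r_rededge):
    """Two-band red / red-edge index: R_rededge / R_red."""
    return r_rededge / r_red

def three_band_index(r_red, r_rededge, r_nir):
    """Three-band index: (1/R_red - 1/R_rededge) * R_nir."""
    return (1.0 / r_red - 1.0 / r_rededge) * r_nir

# Illustrative water-pixel reflectances
red, rededge, nir = 0.04, 0.06, 0.05
print(round(two_band_index(red, rededge), 3))
print(round(three_band_index(red, rededge, nir), 3))
```

A cubic (or quadratic) regression of measured chlorophyll-a on such an index would then supply the prediction equation evaluated in the study.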

Estimation of Paddy Field Area in North Korea Using RapidEye Images (RapidEye 영상을 이용한 북한의 논 면적 산정)

  • Hong, Suk Young;Min, Byoung-Keol;Lee, Jee-Min;Kim, Yihyun;Lee, Kyungdo
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.45 no.6
    • /
    • pp.1194-1202
    • /
    • 2012
  • Remotely sensed satellite images can be used to monitor and obtain land-surface information on inaccessible areas. We classified paddy field area in North Korea by on-screen digitization with visual interpretation, using 291 RapidEye satellite images covering the whole country. Criteria for paddy field classification were defined based on RapidEye imagery acquired at different times of the rice growth period. Darker-colored fields with regular shapes in false-color composite images from early May to late June were detected as rice fields. From early July to late September, it was hard to discriminate the rice canopy from other types of vegetation, including upland crops, grass, and forest. Readjusted rice fields in the plains showed regular shapes and uniform texture compared with the surrounding vegetation. Paddy fields classified from RapidEye imagery were mapped, and the areas were calculated by administrative district, province, or city. Sixty-six percent of paddy fields ($3,521km^2$) were distributed in the west coastal regions, including Pyeongannam-do, Pyeonganbuk-do, and Hwanghaenam-do. The paddy field areas classified from RapidEye images differed by less than 1% from the paddy field areas of North Korea reported by the FAO/WFP (Food and Agriculture Organization/World Food Programme).

Extraction of paddy field in Jaeryeong, North Korea by object-oriented classification with RapidEye NDVI imagery (RapidEye 위성영상의 시계열 NDVI 및 객체기반 분류를 이용한 북한 재령군의 논벼 재배지역 추출 기법 연구)

  • Lee, Sang-Hyun;Oh, Yun-Gyeong;Park, Na-Young;Lee, Sung Hack;Choi, Jin-Yong
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.56 no.3
    • /
    • pp.55-64
    • /
    • 2014
  • While the use of high-resolution satellite images for land use classification has become widespread, object-oriented classification has been adopted as an affordable alternative to conventional statistical classification. The aim of this study was to extract paddy field areas using object-oriented classification with time-series NDVI from high-resolution satellite images; RapidEye images of Jaeryung-gun in North Korea were used. For the object-oriented classification, objects were created by setting scale and color factors, and three land use categories (paddy field, forest, and water bodies) were then extracted from the objects by applying the variation of the time-series NDVI. The objects left unclassified by this extraction were assigned to six categories using unsupervised classification by cluster analysis. Finally, unsuitable paddy field areas were screened out using topographic factors such as elevation and slope. As a result, about 33.6% of the total area (32,313.1 ha) was classified as paddy field (10,847.9 ha), and 851.0 ha was classified as unsuitable paddy field based on the topographic factors. The user's accuracy of the paddy field classification was 83.3%, and about 60.0% of the total paddy fields were classified from the time-series NDVI before the unsupervised classification. The other land covers were classified as upland (5,255.2 ha), forest (10,961.0 ha), residential area and bare land (3,309.6 ha), and lake and river (1,784.4 ha).
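A rule of this kind — separating paddy by its seasonal NDVI trajectory, low during flooded transplanting and sharply rising toward heading — could be sketched as follows. The thresholds, dates, and decision rule are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance."""
    return (nir - red) / (nir + red)

def looks_like_paddy(ndvi_series, early_max=0.3, range_min=0.4):
    """Per-object rule: low spring NDVI (flooded field) plus a large
    seasonal NDVI range. ndvi_series is ordered spring -> autumn."""
    series = np.asarray(ndvi_series, dtype=float)
    return bool(series[0] < early_max and (series.max() - series.min()) > range_min)

paddy = [0.15, 0.45, 0.75, 0.60]   # flooded -> growing -> peak -> ripening
forest = [0.55, 0.65, 0.75, 0.70]  # consistently vegetated all season
print(looks_like_paddy(paddy), looks_like_paddy(forest))
```

In the study the analogous decision is made on object-mean NDVI values after segmentation, with the remaining objects passed to clustering.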

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images (작물 모니터링을 위한 다중 센서 고해상도 위성영상의 시공간 융합 모델의 평가: Sentinel-2 및 RapidEye 영상 융합 실험)

  • Park, Soyeon;Kim, Yeseul;Na, Sang-Il;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.807-821
    • /
    • 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, originally developed for fusing mid- and low-resolution satellite images, to constructing time-series high-resolution images for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated in light of the principle of spatio-temporal fusion. An experiment on the fusion of multi-temporal Sentinel-2 and RapidEye images over agricultural fields was conducted to evaluate prediction performance. Three representative fusion models were applied in this comparative experiment: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF). The three models exhibited different prediction performance in terms of prediction errors and spatial similarity. Regardless of the model type, however, the correlation between the coarse-resolution images acquired on the pair dates and on the prediction date mattered more for prediction performance than the temporal gap between the pair dates and the prediction date. In addition, using a vegetation index as the fusion input showed better prediction performance than computing the vegetation index from fused reflectance values, as it alleviated error propagation. These experimental results can serve as basic information for selecting optimal image pairs and input types, and for developing advanced spatio-temporal fusion models for crop monitoring.
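The principle shared by these fusion models can be shown in its simplest possible form: a fine-resolution image at the prediction date t2 is estimated from a fine/coarse pair at t1 plus the coarse-scale change between t1 and t2. Real STARFM adds weighting over spectrally similar neighbors; this stripped-down version is only a sketch with toy arrays.

```python
import numpy as np

def naive_fusion(fine_t1, coarse_t1, coarse_t2):
    """Predict fine_t2 = fine_t1 + (coarse_t2 - coarse_t1).

    All inputs are assumed resampled to the same grid; the coarse images
    carry the temporal change, the fine image the spatial detail."""
    return fine_t1 + (coarse_t2 - coarse_t1)

fine_t1 = np.array([[0.20, 0.30], [0.25, 0.35]])
coarse_t1 = np.full((2, 2), 0.27)
coarse_t2 = np.full((2, 2), 0.37)   # uniform +0.10 change at coarse scale

pred = naive_fusion(fine_t1, coarse_t1, coarse_t2)
print(pred)
```

This also illustrates why the correlation between the coarse images on the pair date and the prediction date matters: the entire temporal signal comes from their difference.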

Determination of Spatial Resolution to Improve GCP Chip Matching Performance for CAS-4 (농림위성용 GCP 칩 매칭 성능 향상을 위한 위성영상 공간해상도 결정)

  • Lee, YooJin;Kim, Taejung
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1517-1526
    • /
    • 2021
  • With the recent global and domestic development of Earth observation satellites, the applications of satellite images have widened, and research to improve the geometric accuracy of satellite images is being actively carried out. This paper studies the possibility of automated ground control point (GCP) generation for the CAS-4 satellite, to be launched in 2025 with the capability of image acquisition at 5 m ground sampling distance (GSD). In particular, it examines whether GCP chips with 25 cm GSD established for CAS-1 satellite images can be used for CAS-4, and whether an optimal spatial resolution for matching CAS-4 images against GCP chips can be determined to improve matching performance. Experiments were carried out using RapidEye images, which have a GSD similar to CAS-4. The original satellite images were up-sampled to produce images with smaller GSDs. At each GSD level, the up-sampled images were matched against GCP chips and precision sensor models were estimated. The results show that sensor model accuracy improved with images at smaller GSDs compared to the accuracy established with the original images. At 1.25-1.67 m GSD, an accuracy of about 2.4 m was achieved. This finding suggests the possibility of automated GCP extraction and precision ortho-image generation for CAS-4 with improved accuracy.
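The chip-to-image matching step can be sketched as a normalized cross-correlation (NCC) search of a small GCP chip over a search window; up-sampling the image before matching, as tested in the paper, would simply be an interpolation step before this search. The matching method and the toy arrays below are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_chip(window, chip):
    """Return ((row, col), score) of the best NCC match of chip in window."""
    h, w = chip.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(window.shape[0] - h + 1):
        for c in range(window.shape[1] - w + 1):
            score = ncc(window[r:r + h, c:c + w], chip)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

rng = np.random.default_rng(1)
window = rng.uniform(0, 1, size=(12, 12))
chip = window[4:8, 5:9].copy()          # plant the chip at (4, 5)
print(match_chip(window, chip))
```

Each matched position then feeds the estimation of the precision sensor model.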

Accuracy Assessment of Land-Use Land-Cover Classification Using Semantic Segmentation-Based Deep Learning Model and RapidEye Imagery (RapidEye 위성영상과 Semantic Segmentation 기반 딥러닝 모델을 이용한 토지피복분류의 정확도 평가)

  • Woodam Sim;Jong Su Yim;Jung-Soo Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.3
    • /
    • pp.269-282
    • /
    • 2023
  • The purpose of this study was to construct land cover maps using deep learning models and to select the optimal model for land cover classification by adjusting the dataset, such as the input image size and the stride. Two deep learning models with an encoder-decoder network, U-net and DeepLabV3+, were used, along with their combination as an ensemble model. The dataset used RapidEye satellite images as input, and raster label images based on the six land-use categories of the Intergovernmental Panel on Climate Change as ground truth. Focusing on improving the quality of the dataset to enhance model accuracy, twelve land cover maps were constructed from the combinations of three models (U-net, DeepLabV3+, and ensemble), two input image sizes (64 × 64 and 256 × 256 pixels), and two stride rates (50% and 100%). Evaluation against the label images showed that the U-net and DeepLabV3+ models achieved high accuracy, with overall accuracies of approximately 87.9% and 89.8% and kappa coefficients over 72%. In addition, applying the ensemble and the stride options increased accuracy by up to approximately 3% and mitigated the boundary inconsistency problem associated with semantic segmentation-based deep learning models.
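The tiling scheme varied in this study — fixed-size input patches with either no overlap (100% stride) or half-tile overlap (50% stride) — can be sketched as follows; the axis length is illustrative.

```python
def tile_origins(length, tile, stride):
    """Top-left offsets along one image axis, clamped so tiles stay in bounds.

    stride == tile gives non-overlapping tiles (100% stride);
    stride == tile // 2 gives half-overlapping tiles (50% stride)."""
    origins, pos = [], 0
    while pos + tile < length:
        origins.append(pos)
        pos += stride
    origins.append(length - tile)  # final tile flush with the border
    return origins

# 256-pixel tiles over a 1000-pixel axis
print(tile_origins(1000, 256, 256))   # 100% stride: no overlap
print(tile_origins(1000, 256, 128))   # 50% stride: half-tile overlap
```

With 50% stride, every interior pixel is predicted more than once, which is what allows overlapping predictions to be blended and boundary inconsistencies to be reduced.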

Evaluation of the Applicability of Rice Growth Monitoring on Seosan and Pyongyang Region using RADARSAT-2 SAR -By Comparing RapidEye- (RADARSAT-2 SAR를 이용한 서산 및 평양 지역의 벼 생육 모니터링 적용성 평가 -RapidEye와의 비교를 통해-)

  • Na, Sang Il;Hong, Suk Young;Kim, Yi Hyun;Lee, Kyoung Do
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.56 no.5
    • /
    • pp.55-65
    • /
    • 2014
  • Radar remote sensing is appropriate for rice monitoring because the areas where this crop is cultivated are often cloudy and rainy. In particular, Synthetic Aperture Radar (SAR) can acquire remote sensing information with high temporal resolution in tropical and subtropical regions thanks to its all-weather capability. This paper analyzes the relationships between backscattering coefficients of rice measured by RADARSAT-2 SAR and growth parameters during the rice growth period, and applies these relationships to crop monitoring of paddy rice in North Korea. Plant height and Leaf Area Index (LAI) increased until Day Of Year (DOY) 234 and then decreased, while fresh weight and dry weight increased until DOY 253. Correlation analysis revealed that HH-polarization (horizontal transmit, horizontal receive) backscattering coefficients were highly correlated with plant height (r=0.95), fresh weight (r=0.92), vegetation water content (r=0.91), LAI (r=0.90), and dry weight (r=0.89). Based on the observed relationships between backscattering coefficients and cultivation variables, prediction equations were developed using the HH-polarization backscattering coefficients. To evaluate the applicability of the LAI distribution from RADARSAT-2, its statistics were compared with the LAI distribution from a RapidEye image, and LAI distributions over Pyongyang were presented to show spatial variability for inaccessible areas.
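Fitting a prediction equation between HH backscattering coefficients and a crop variable such as LAI can be sketched as a simple regression; the (sigma0, LAI) samples below are fabricated for illustration, and the paper's actual equation forms are not reproduced.

```python
import numpy as np

# Hypothetical paired observations: HH backscattering coefficient (dB)
# and field-measured LAI over the growth period.
sigma0_hh = np.array([-16.0, -13.5, -11.0, -9.0, -7.5])
lai = np.array([0.5, 1.6, 2.8, 3.9, 4.6])

slope, intercept = np.polyfit(sigma0_hh, lai, 1)   # linear fit LAI = a*s0 + b
r = float(np.corrcoef(sigma0_hh, lai)[0, 1])       # correlation coefficient

def predict_lai(s0):
    """Apply the fitted prediction equation to a backscatter value (dB)."""
    return slope * s0 + intercept

print(round(r, 3), round(float(predict_lai(-10.0)), 2))
```

Applying such an equation pixel-by-pixel to a calibrated SAR scene yields an LAI map that can then be compared against an optical (e.g. RapidEye-derived) LAI distribution.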

Visual Performances of the Corrected Navarro Accommodation-Dependent Finite Model Eye (안구의 굴절능 조절을 고려한 수정된 Navarro 정밀모형안의 시성능 분석)

  • Choi, Ka-Ul;Song, Seok-Ho;Kim, Sang-Gee
    • Korean Journal of Optics and Photonics
    • /
    • v.18 no.5
    • /
    • pp.337-344
    • /
    • 2007
  • In recent years, there has been rapid progress in different areas of vision science, such as refractive surgical procedures, contact lenses and spectacles, and near vision. This progress requires highly accurate modeling of the optical performance of the human eye in different accommodation states. A new model eye was designed using the Navarro accommodation-dependent finite model eye. For each vergence distance, the ocular wavefront error, accommodative response, and visual acuity were calculated; using the new model eye, these quantities were computed for six vergence stimuli: -0.17D, 1D, 2D, 3D, 4D and -5D. In addition, 3rd- and 4th-order aberrations, the modulation transfer function, and the visual acuity of the accommodation-dependent model eye were analyzed. These results match anatomical, biometric, and optical realities well. Our corrected accommodation-dependent model eye may provide a more accurate way to evaluate the optical transfer functions and optical performance of the human eye.