• Title/Summary/Keyword: Simulated satellite image

Comparison of Remote Sensing and Crop Growth Models for Estimating Within-Field LAI Variability

  • Hong, Suk-Young;Sudduth, Kenneth-A.;Kitchen, Newell-R.;Fraisse, Clyde-W.;Palm, Harlan-L.;Wiebold, William-J.
    • Korean Journal of Remote Sensing
    • /
    • v.20 no.3
    • /
    • pp.175-188
    • /
    • 2004
  • The objectives of this study were to estimate leaf area index (LAI) as a function of image-derived vegetation indices, and to compare measured and estimated LAI to the results of crop model simulation. Soil moisture, crop phenology, and LAI data were obtained several times during the 2001 growing season at monitoring sites established in two central Missouri experimental fields, one planted to corn (Zea mays L.) and the other planted to soybean (Glycine max L.). Hyper- and multi-spectral images at varying spatial and spectral resolutions were acquired from both airborne and satellite platforms, and data were extracted to calculate standard vegetative indices (normalized difference vegetative index, NDVI; ratio vegetative index, RVI; and soil-adjusted vegetative index, SAVI). When comparing these three indices, regressions for measured LAI were of similar quality (r² = 0.59 to 0.61 for corn; r² = 0.66 to 0.68 for soybean) in this single-year dataset. CERES (Crop Environment Resource Synthesis)-Maize and CROPGRO-Soybean models were calibrated to measured soil moisture and yield data and used to simulate LAI over the growing season. The CERES-Maize model over-predicted LAI at all corn monitoring sites. Simulated LAI from CROPGRO-Soybean was similar to observed and image-estimated LAI for most soybean monitoring sites. These results suggest crop growth model predictions might be improved by incorporating image-estimated LAI. Greater improvements might be expected with corn than with soybean.
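
For reference, the three indices compared above are simple combinations of red and near-infrared reflectance. The sketch below (with illustrative placeholder values, not the study's data) shows how NDVI, RVI, and SAVI are computed and regressed against measured LAI.

```python
# Minimal sketch: computing NDVI, RVI, and SAVI from red/NIR reflectance and
# fitting a simple linear regression against field-measured LAI.
# The arrays below are illustrative placeholders, not data from the study.
import numpy as np

red = np.array([0.08, 0.06, 0.05, 0.04, 0.03])   # hypothetical red reflectance
nir = np.array([0.30, 0.38, 0.45, 0.52, 0.60])   # hypothetical NIR reflectance
lai = np.array([1.2, 2.0, 2.8, 3.5, 4.3])        # hypothetical measured LAI
L = 0.5                                          # SAVI soil-adjustment factor

ndvi = (nir - red) / (nir + red)
rvi = nir / red
savi = (1 + L) * (nir - red) / (nir + red + L)

for name, vi in [("NDVI", ndvi), ("RVI", rvi), ("SAVI", savi)]:
    slope, intercept = np.polyfit(vi, lai, 1)        # least-squares fit LAI ~ VI
    r2 = np.corrcoef(vi, lai)[0, 1] ** 2             # coefficient of determination
    print(f"{name}: LAI = {slope:.2f}*VI + {intercept:.2f}, r^2 = {r2:.2f}")
```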

A Study on the Yaw Angle Calculation Method for the KOMPSAT-2 Satellite (KOMPSAT-2 위성의 요각 계산방법 연구)

  • Kim, Jong-Ah;Kang, Keum-Sil;Jang, Young-Jun;Yong, Sang-Soon;Kang, Song-Doug;Youn, Heong-Sik
    • Aerospace Engineering and Technology
    • /
    • v.3 no.2
    • /
    • pp.160-169
    • /
    • 2004
  • To acquire high-resolution satellite images, the Multi-Spectral Camera (MSC) on KOMPSAT-2 uses a TDI (Time Delay and Integration) function, which requires the yaw angle of the satellite attitude to be controlled as part of the KOMPSAT-2 operation concept. This study explains the TDI function, sets up the geometric equation that satisfies this condition, and finally derives the equation for the yaw angle. A calculation program was developed and run with orbit and imaging-attitude data as inputs, and its results were compared with the yaw steering values computed by the on-board computer.
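
The paper derives KOMPSAT-2's own geometric yaw equation, which the abstract does not reproduce. Purely as a rough illustration, the sketch below implements the commonly cited yaw-steering approximation for a circular orbit; the on-board formulation may differ.

```python
# Hedged sketch: a textbook yaw-steering angle for a circular orbit, used to
# align TDI columns with the relative ground-track velocity. This is NOT the
# KOMPSAT-2 on-board equation from the paper, only the commonly cited
# approximation psi = atan(we*sin(i)*cos(u) / (wo - we*cos(i))).
import math

MU = 3.986004418e14        # Earth's gravitational parameter [m^3/s^2]
WE = 7.2921159e-5          # Earth rotation rate [rad/s]

def yaw_steering_angle(altitude_m, inclination_deg, arg_latitude_deg):
    """Approximate yaw angle (deg) aligning image columns with ground motion."""
    a = 6378137.0 + altitude_m                  # orbit radius for a circular orbit
    wo = math.sqrt(MU / a**3)                   # orbital angular rate [rad/s]
    i = math.radians(inclination_deg)
    u = math.radians(arg_latitude_deg)
    psi = math.atan2(WE * math.sin(i) * math.cos(u), wo - WE * math.cos(i))
    return math.degrees(psi)

# Example: roughly KOMPSAT-2-like orbit (about 685 km, ~98 deg inclination)
for u in (0, 45, 90, 135, 180):
    print(u, round(yaw_steering_angle(685e3, 98.13, u), 3))
```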


Wildfire-induced Change Detection Using Post-fire VHR Satellite Images and GIS Data (산불 발생 후 VHR 위성영상과 GIS 데이터를 이용한 산불 피해 지역 변화 탐지)

  • Chung, Minkyung;Kim, Yongil
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_3
    • /
    • pp.1389-1403
    • /
    • 2021
  • Disaster management using VHR (very high resolution) satellite images supports rapid damage assessment and offers detailed information on the damage. However, the acquisition of pre-event VHR satellite images is usually limited by the long revisit time of VHR satellites, and the absence of pre-event data can reduce the accuracy of damage assessment, since it is difficult to distinguish changed from unchanged regions with post-event data alone. To address this limitation, in this study we conducted wildfire-induced change detection on national wildfire cases using post-fire VHR satellite images and GIS (Geographic Information System) data. As the GIS data, a national land cover map was selected to simulate pre-fire NIR (near-infrared) images from the spatial information of the pre-fire land cover. The simulated pre-fire NIR images were then used to analyze bi-temporal NDVI (Normalized Difference Vegetation Index) correlation for unsupervised change detection. The whole change detection process was performed on a superpixel basis, taking advantage of the ability of superpixels to reduce the complexity of image processing while preserving the details of VHR images. The proposed method was validated on the 2019 Gangwon wildfire cases and achieved an overall accuracy above 98% and an F1-score above 0.97 for both study sites.
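
A minimal sketch of the superpixel-wise bi-temporal NDVI-correlation idea, assuming synthetic bands in place of the land-cover-simulated pre-fire NIR image and SLIC superpixels from scikit-image; it is not the authors' implementation, and the 0.5 correlation threshold is an arbitrary placeholder.

```python
# Illustrative sketch of unsupervised, superpixel-wise NDVI-correlation change
# detection. A synthetic pre-fire NIR band stands in for the band simulated from
# the land cover map; the burned patch shows low bi-temporal correlation.
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
h, w = 120, 120
red_post = rng.uniform(0.04, 0.08, (h, w))
nir_pre = rng.uniform(0.35, 0.45, (h, w))               # "simulated" pre-fire NIR
nir_post = nir_pre + rng.normal(0, 0.01, (h, w))        # unchanged vegetation
nir_post[30:70, 30:70] = rng.uniform(0.05, 0.12, (40, 40))  # burned area

ndvi_pre = (nir_pre - red_post) / (nir_pre + red_post)
ndvi_post = (nir_post - red_post) / (nir_post + red_post)

# Superpixels from a false-colour composite of the post-fire bands
composite = np.dstack([nir_post, red_post, red_post])
labels = slic(composite, n_segments=200, compactness=10)

changed = np.zeros((h, w), dtype=bool)
for lab in np.unique(labels):
    mask = labels == lab
    r = np.corrcoef(ndvi_pre[mask], ndvi_post[mask])[0, 1]
    if r < 0.5:                   # low correlation -> flagged as wildfire change
        changed[mask] = True

print("flagged burned fraction:", changed.mean().round(3))
```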

Simulation of Sentinel-2 Product Using Airborne Hyperspectral Image and Analysis of TOA and BOA Reflectance for Evaluation of Sen2cor Atmosphere Correction: Focused on Agricultural Land (Sen2Cor 대기보정 프로세서 평가를 위한 항공 초분광영상 기반 Sentinel-2 모의영상 생성 및 TOA와 BOA 반사율 자료와의 비교: 농업지역을 중심으로)

  • Cho, Kangjoon;Kim, Yongil
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.2
    • /
    • pp.251-263
    • /
    • 2019
  • The Sentinel-2 MultiSpectral Instrument (MSI), launched by the European Space Agency (ESA), offers high-spatial-resolution optical products, an enhanced temporal revisit of five days, and 13 spectral bands in the visible, near-infrared, and shortwave-infrared wavelengths, similar to the Landsat mission. Landsat satellite imagery has been applied in many previous studies, but Sentinel-2 optical satellite imagery has not yet been widely used. Currently, for global coverage, Sentinel-2 products are systematically processed and distributed as Level-1C (L1C) products, which contain Top-of-Atmosphere (TOA) reflectance. Furthermore, ESA plans systematic global production of the Level-2A (L2A) product, which includes atmospherically corrected Bottom-of-Atmosphere (BOA) reflectance accounting for aerosol optical thickness and water vapor content. The Sentinel-2 L2A products are therefore expected to enhance the reliability of image quality across the whole coverage of the Sentinel-2 mission, with enhanced spatial, spectral, and temporal resolution. The purpose of this work is a quantitative comparison of Sentinel-2 L2A products with a fully simulated reference image, to evaluate the applicability of the Sentinel-2 dataset over cultivated land growing various kinds of crops in Korea. The reference image for the Sentinel-2 L2A data was simulated from airborne hyperspectral data acquired with the AISA Fenix sensor, and the simulated imagery was compared with the reflectance of the L1C TOA and L2A BOA data. The quantitative comparison shows that, for the atmospherically corrected L2A reflectance, a significant decrease in RMSE and increase in correlation coefficient were found for the visible bands and vegetation indices.
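
A hedged sketch of the band-synthesis step: hyperspectral spectra are weighted by a band spectral response function to simulate a Sentinel-2-like band, and the result is compared with product reflectance using RMSE and correlation. The Gaussian SRF, band parameters, and spectra are assumptions for illustration, not ESA's published response functions or the study's data.

```python
# Minimal sketch of spectral resampling of airborne hyperspectral reflectance to
# a Sentinel-2-like band, followed by an RMSE / correlation comparison.
import numpy as np

wl = np.arange(400, 1001, 5.0)                     # hyperspectral wavelengths [nm]

def gaussian_srf(center, fwhm):
    """Assumed Gaussian spectral response function, normalized to unit sum."""
    sigma = fwhm / 2.3548
    srf = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return srf / srf.sum()

def simulate_band(spectra, center, fwhm):
    """Weight each hyperspectral spectrum (pixels x bands) by the band SRF."""
    return spectra @ gaussian_srf(center, fwhm)

rng = np.random.default_rng(1)
spectra = rng.uniform(0.02, 0.5, (1000, wl.size))     # hypothetical pixel spectra
sim_red = simulate_band(spectra, center=665, fwhm=30) # Sentinel-2 B4-like band

# Hypothetical co-registered product reflectance for the same pixels
l2a_boa = sim_red + rng.normal(0, 0.01, sim_red.size)
rmse = np.sqrt(np.mean((l2a_boa - sim_red) ** 2))
corr = np.corrcoef(l2a_boa, sim_red)[0, 1]
print(f"RMSE = {rmse:.4f}, r = {corr:.3f}")
```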

Performance Evaluation of Machine Learning Algorithms for Cloud Removal of Optical Imagery: A Case Study in Cropland (광학 영상의 구름 제거를 위한 기계학습 알고리즘의 예측 성능 평가: 농경지 사례 연구)

  • Soyeon Park;Geun-Ho Kwak;Ho-Yong Ahn;No-Wook Park
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.507-519
    • /
    • 2023
  • Multi-temporal optical images have been utilized for time-series monitoring of croplands. However, the presence of clouds imposes limitations on image availability, often requiring a cloud removal procedure. This study assesses the applicability of various machine learning algorithms for effective cloud removal in optical imagery. We conducted comparative experiments by focusing on two key variables that significantly influence the predictive performance of machine learning algorithms: (1) land-cover types of training data and (2) temporal variability of land-cover types. Three machine learning algorithms, including Gaussian process regression (GPR), support vector machine (SVM), and random forest (RF), were employed for the experiments using simulated cloudy images in paddy fields of Gunsan. GPR and SVM exhibited superior prediction accuracy when the training data had the same land-cover types as the cloud region, and GPR showed the best stability with respect to sampling fluctuations. In addition, RF was the least affected by the land-cover types and temporal variations of training data. These results indicate that GPR is recommended when the land-cover type and spectral characteristics of the training data are the same as those of the cloud region. On the other hand, RF should be applied when it is difficult to obtain training data with the same land-cover types as the cloud region. Therefore, the land-cover types in cloud areas should be taken into account for extracting informative training data along with selecting the optimal machine learning algorithm.
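
A small sketch of the regression-based comparison setting, assuming scikit-learn implementations of the three algorithms and synthetic reflectance pairs; hyperparameters and data are illustrative, not the authors' experimental configuration.

```python
# Hedged sketch: predict reflectance under a (simulated) cloud from a cloud-free
# image of another date, using the three algorithms compared in the study.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 800
x_ref = rng.uniform(0.1, 0.6, n)                        # cloud-free reference date
y_true = 0.8 * x_ref + 0.05 + rng.normal(0, 0.02, n)    # target-date reflectance

cloud = np.zeros(n, dtype=bool)
cloud[:200] = True                                      # pixels hidden by cloud
X_train, y_train = x_ref[~cloud, None], y_true[~cloud]
X_test, y_test = x_ref[cloud, None], y_true[cloud]

models = {
    "GPR": GaussianProcessRegressor(),
    "SVM": SVR(C=10.0, epsilon=0.01),
    "RF": RandomForestRegressor(n_estimators=100, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    rmse = np.sqrt(np.mean((pred - y_test) ** 2))
    print(f"{name}: RMSE = {rmse:.4f}")
```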

A Study on the Deep Neural Network based Recognition Model for Space Debris Vision Tracking System (심층신경망 기반 우주파편 영상 추적시스템 인식모델에 대한 연구)

  • Lim, Seongmin;Kim, Jin-Hyung;Choi, Won-Sub;Kim, Hae-Dong
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.45 no.9
    • /
    • pp.794-806
    • /
    • 2017
  • As a space-faring nation, it is essential to protect national space assets and the space environment from the continuously increasing amount of space debris, and Active Debris Removal (ADR) is the most direct way to address this problem. In this paper, we study an Artificial Neural Network (ANN) as a stable recognition model for a vision-based space debris tracking system. Simulated images of the space environment were obtained with KARICAT, the ground-based space debris removal satellite testbed developed by the Korea Aerospace Research Institute. After segmenting each image by depth discontinuity, we created a vector that encodes structure- and color-based features of each object; the feature vector consists of 3D surface area, the principal vectors of the point cloud, 2D shape, and color information. An artificial neural network model was designed on the basis of this separated feature vector. To improve performance, the model is divided according to the categories of the input feature vectors, and an ensemble technique is applied across the resulting models. As a result, we confirmed that the ensemble technique improves the performance of the recognition model.
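
A hedged sketch of the per-category ensemble idea: one small network per feature-vector category, combined by averaging class probabilities. The feature groups, network sizes, and random data are assumptions; the paper's actual architecture is not reproduced here.

```python
# Illustrative sketch (not the paper's network): one small MLP per feature
# category, combined by soft voting over class probabilities.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
n = 600
shape_feats = rng.normal(size=(n, 8))        # e.g. surface area, principal axes
color_feats = rng.normal(size=(n, 6))        # e.g. colour-based features
labels = rng.integers(0, 3, n)               # hypothetical object classes

groups = {"shape": shape_feats, "color": color_feats}
models = {}
for name, X in groups.items():
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    models[name] = clf.fit(X[:500], labels[:500])

# Soft-voting ensemble over the per-category models
proba = np.mean(
    [models[name].predict_proba(groups[name][500:]) for name in groups], axis=0
)
pred = proba.argmax(axis=1)
print("ensemble accuracy on held-out samples:", (pred == labels[500:]).mean())
```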

Descent Dataset Generation and Landmark Extraction for Terrain Relative Navigation on Mars (화성 지형상대항법을 위한 하강 데이터셋 생성과 랜드마크 추출 방법)

  • Kim, Jae-In
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1015-1023
    • /
    • 2022
  • The entry-descent-landing process of a lander involves many environmental and technical challenges, and terrain relative navigation (TRN) technology has recently become essential for landers to address them. TRN estimates the position and attitude of a lander by comparing Inertial Measurement Unit (IMU) data and image data collected from the descending lander with pre-built reference data. In this paper, we present a method for generating a descent dataset and extracting landmarks, which are key elements for developing TRN technologies to be used on Mars. The proposed method generates IMU data of a descending lander using a simulated Mars landing trajectory and generates descent images from a high-resolution ortho-map and digital elevation map through a ray tracing technique. Landmark extraction is performed with an area-based matching method because of the low-textured surfaces on Mars, and a search-area reduction is carried out to improve matching accuracy and speed. The performance evaluation of the descent dataset generation method showed that the proposed method can generate images that satisfy the imaging geometry. The evaluation of the landmark extraction method showed that the proposed method ensures positioning accuracy of a few meters while remaining as fast as feature-based methods.
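
A minimal sketch of area-based landmark matching with search-area reduction, assuming OpenCV normalized cross-correlation and synthetic images in place of the Mars ortho-map and rendered descent frames.

```python
# Hedged sketch: a template (landmark patch from the reference ortho-map) is
# matched by normalized cross-correlation against a cropped search window of the
# descent image, the window being predicted from the IMU/trajectory data.
import numpy as np
import cv2

rng = np.random.default_rng(4)
ref_map = rng.uniform(0, 1, (400, 400)).astype(np.float32)   # reference ortho-map
descent = (ref_map + rng.normal(0, 0.05, ref_map.shape)).astype(np.float32)

lm_row, lm_col, half = 210, 175, 16          # landmark location and patch radius
template = ref_map[lm_row - half:lm_row + half, lm_col - half:lm_col + half]

# Search-area reduction: only look near the predicted landmark position
pred_row, pred_col, search = 205, 180, 40
win = descent[pred_row - search:pred_row + search,
              pred_col - search:pred_col + search]

score = cv2.matchTemplate(win, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(score)      # max_loc is (x, y)
match_row = pred_row - search + max_loc[1] + half
match_col = pred_col - search + max_loc[0] + half
print("NCC peak:", round(max_val, 3), "matched at:", (match_row, match_col))
```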

Urban Area Building Reconstruction Using High Resolution SAR Image (고해상도 SAR 영상을 이용한 도심지 건물 재구성)

  • Kang, Ah-Reum;Lee, Seung-Kuk;Kim, Sang-Wan
    • Korean Journal of Remote Sensing
    • /
    • v.29 no.4
    • /
    • pp.361-373
    • /
    • 2013
  • Urban-area monitoring, target detection, and building reconstruction have been actively studied and investigated since high-resolution X-band SAR images became available from airborne and/or satellite SAR systems. This paper describes an efficient approach to reconstructing artificial structures (e.g., apartments, buildings, and houses) in urban areas using high-resolution X-band SAR images. Building footprints were first extracted from a 1:25,000 digital topographic map, and the corner line of each building was then detected by an automatic detection algorithm. From the SAR amplitude images, an initial building height was calculated from the layover length estimated with a KS-test (Kolmogorov-Smirnov test) in front of the corner line. Interferometric SAR phases were then simulated for the SAR geometry and candidate building heights ranging from -10 m to +10 m around the initial height. The simulated phases were compared with an interferogram from the real SAR dataset using a phase-consistency measure, and the best-matching candidate was taken as the reconstructed building height. The developed algorithm was applied to a repeat-pass TerraSAR-X spotlight-mode dataset over an apartment complex in Daejeon, Korea. The final building heights were validated against reference heights extracted from a LiDAR DSM, with an RMSE (root mean square error) of about 1 to 2 m.
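
A hedged sketch of two steps in this workflow: growing the bright layover strip in front of a corner line with a KS-test against clutter, and converting the layover length to an initial height with the simplified geometry h = L·tan(θ_inc). The amplitude statistics, window size, and thresholds are assumptions for illustration; the authors' detector and SAR geometry handling are more involved.

```python
# Illustrative sketch of KS-test-based layover length estimation and the
# geometric conversion of layover length to an initial building height.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(5)
theta_inc = np.deg2rad(35.0)          # incidence angle
pixel_spacing = 1.0                   # ground-range pixel spacing [m]

background = rng.rayleigh(1.0, 2000)                 # clutter amplitude samples
true_height = 30.0
layover_len = int(true_height / np.tan(theta_inc))   # pixels of bright layover
profile = rng.rayleigh(1.0, 100)                     # profile in front of corner
profile[:layover_len] += rng.rayleigh(2.5, layover_len)   # brighter layover strip

# Grow the strip window by window until it is indistinguishable from clutter
est_len, win = 0, 10
for start in range(0, profile.size, win):
    p = ks_2samp(profile[start:start + win], background).pvalue
    if p > 0.05:                      # this window looks like clutter: strip ends
        break
    est_len = start + win

initial_height = est_len * pixel_spacing * np.tan(theta_inc)
# Candidate heights of initial_height +/- 10 m would then be screened by
# simulating interferometric phases and checking phase consistency (not shown).
print(f"estimated layover ~{est_len} px -> initial height ~{initial_height:.1f} m")
```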

Estimation of Carbon Absorption Distribution based on Satellite Image Considering Climate Change Scenarios (기후변화 시나리오를 고려한 위성영상 기반 미래 탄소흡수량 분포 추정)

  • Na, Sang-il;Ahn, Ho-yong;Ryu, Jae-Hyun;So, Kyu-ho;Lee, Kyung-do
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.833-845
    • /
    • 2021
  • Quantifying carbon absorption and understanding human-induced land use change are major topics in the study of global climate change. Attempts have been made to quantify carbon absorption associated with land use change using remote sensing technology, but they have focused on past changes, so prediction of future changes in carbon absorption remains insufficient. This study simulated land use change using the Conversion of Land Use and its Effects at Small regional extent (CLUE-S) model and predicted future changes in carbon absorption under Representative Concentration Pathway (RCP) climate change scenarios 4.5 and 8.5. The results predict a loss of 7.92% of carbon absorption under the RCP 4.5 scenario and of 13.02% under the RCP 8.5 scenario. The approach used in this study is therefore expected to enable exploration of future changes in carbon absorption under other climate change scenarios.
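
A minimal sketch of the carbon-accounting step, assuming per-class absorption coefficients applied to a current and a CLUE-S-projected land use map; the class codes and coefficients are placeholders, not the values used in the study.

```python
# Illustrative sketch: carbon absorption as (class area) x (per-class absorption
# coefficient), compared between a current and a scenario-projected land use map.
import numpy as np

# Per-pixel absorption coefficients [tC/ha/yr] for hypothetical classes
coeff = {1: 0.0, 2: 6.0, 3: 1.5, 4: 0.5}   # 1 urban, 2 forest, 3 cropland, 4 other
pixel_area_ha = 0.09                        # e.g. 30 m x 30 m pixels

rng = np.random.default_rng(6)
current = rng.choice([1, 2, 3, 4], size=(500, 500), p=[0.15, 0.55, 0.20, 0.10])
projected = current.copy()
# Hypothetical scenario effect: some forest pixels converted to urban
forest = np.argwhere(projected == 2)
lost = forest[rng.choice(len(forest), size=int(0.08 * len(forest)), replace=False)]
projected[lost[:, 0], lost[:, 1]] = 1

def total_absorption(land_use):
    return sum(coeff[c] * np.count_nonzero(land_use == c) * pixel_area_ha
               for c in coeff)

base, future = total_absorption(current), total_absorption(projected)
print(f"change in carbon absorption: {100 * (future - base) / base:.2f} %")
```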

Soil moisture estimation using the water cloud model and Sentinel-1 & -2 satellite image-based vegetation indices (Sentinel-1 & -2 위성영상 기반 식생지수와 Water Cloud Model을 활용한 토양수분 산정)

  • Chung, Jeehun;Lee, Yonggwan;Kim, Jinuk;Jang, Wonjin;Kim, Seongjoon
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.3
    • /
    • pp.211-224
    • /
    • 2023
  • In this study, soil moisture was estimated using the Water Cloud Model (WCM), a SAR (Synthetic Aperture Radar) backscatter model that accounts for vegetation. Sentinel-1 SAR and Sentinel-2 MSI (MultiSpectral Instrument) images covering a 40 km × 50 km area including the Yongdam Dam watershed of the Geum River were collected for this study. As vegetation descriptors for the WCM, the Sentinel-1-based Radar Vegetation Index (RVI) and depolarization ratio (DR) and the Sentinel-2-based Normalized Difference Vegetation Index (NDVI) were used. Forward modeling of the WCM was performed for three groups, divided according to the relationship between the backscattering coefficient and soil moisture; the clearer the linear relationship between soil moisture and the backscattering coefficient, the better the simulation performance. To estimate soil moisture, the simulated backscattering coefficient was inverted, and the estimation performance was proportional to the forward-modeling result. The WCM simulation error tended to increase from about -12 dB of the observed backscattering coefficient.
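
A hedged sketch of the WCM forward model and its analytic inversion for soil moisture, keeping all backscatter in linear power units and using placeholder coefficients A, B, C, D that would in practice be calibrated per group, as in the study.

```python
# Minimal sketch of the Water Cloud Model with a linear soil term and analytic
# inversion for soil moisture. Coefficients are illustrative placeholders.
import numpy as np

A, B = 0.12, 0.25          # vegetation scattering / attenuation coefficients
C, D = 0.01, 0.30          # soil term: sigma_soil = C + D * mv (mv in m^3/m^3)
theta = np.deg2rad(38.0)   # Sentinel-1 incidence angle

def wcm_forward(mv, V):
    """Total backscatter for soil moisture mv and vegetation descriptor V."""
    tau2 = np.exp(-2.0 * B * V / np.cos(theta))        # two-way canopy attenuation
    sigma_veg = A * V * np.cos(theta) * (1.0 - tau2)   # vegetation contribution
    sigma_soil = C + D * mv                            # bare-soil contribution
    return sigma_veg + tau2 * sigma_soil

def wcm_invert(sigma0, V):
    """Invert the forward model for soil moisture."""
    tau2 = np.exp(-2.0 * B * V / np.cos(theta))
    sigma_veg = A * V * np.cos(theta) * (1.0 - tau2)
    return ((sigma0 - sigma_veg) / tau2 - C) / D

mv_true, V = 0.25, 0.6                   # e.g. V = RVI, DR, or NDVI for the pixel
sigma0 = wcm_forward(mv_true, V)
print("sigma0 (linear):", round(sigma0, 4),
      "-> retrieved mv:", round(wcm_invert(sigma0, V), 3))
```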