• Title/Summary/Keyword: Remote Training


KOMPSAT Optical Image Registration via Deep-Learning Based OffsetNet Model (딥러닝 기반 OffsetNet 모델을 통한 KOMPSAT 광학 영상 정합)

  • Jin-Woo Yu;Che-Won Park;Hyung-Sup Jung
    • Korean Journal of Remote Sensing / v.39 no.6_3 / pp.1707-1720 / 2023
  • With the increase in satellite time series data, the utility of remote sensing data is growing. In time series analysis, the relative positional accuracy between images has a significant impact on the results, making image registration essential. In recent years, deep learning has increasingly been applied to image registration, outperforming existing registration algorithms. Training deep learning-based registration models, however, requires a large number of image pairs, and existing models are inefficient because they first create a correlation map between the image pair and then apply additional computations to extract the registration points. To overcome these drawbacks, this study developed a data augmentation technique for training image registration models and applied it to OffsetNet, a registration model that directly predicts the offset amount, to register KOMPSAT-2, -3, and -3A imagery. The training results showed that OffsetNet accurately predicted the offset for the test data, enabling effective registration of the master and slave images.
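
As an illustration only, the following PyTorch sketch shows the general offset-regression idea the abstract describes: randomly shifted patch pairs are generated as training data, and a small network regresses the (dx, dy) offset directly. The network layout, patch size, and shift range are assumptions, not the paper's actual OffsetNet architecture or augmentation settings.

```python
import torch
import torch.nn as nn

class OffsetNet(nn.Module):
    """Takes a stacked master/slave patch pair and regresses the (dx, dy) shift."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(64, 2)  # predicted (dx, dy) in pixels

    def forward(self, pair):
        return self.regressor(self.features(pair).flatten(1))

def make_pair(image, max_shift=8):
    """Toy augmentation: crop a master patch and a randomly shifted slave patch."""
    dy, dx = torch.randint(-max_shift, max_shift + 1, (2,)).tolist()
    y0, x0 = image.shape[0] // 4, image.shape[1] // 4
    master = image[y0:y0 + 128, x0:x0 + 128]
    slave = image[y0 + dy:y0 + dy + 128, x0 + dx:x0 + dx + 128]
    pair = torch.stack([master, slave]).unsqueeze(0).float()
    return pair, torch.tensor([[dx, dy]], dtype=torch.float32)

model, loss_fn = OffsetNet(), nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
image = torch.rand(512, 512)          # random stand-in for a satellite band
pair, target = make_pair(image)
loss = loss_fn(model(pair), target)   # L2 loss on the predicted offset
loss.backward(); optimizer.step()
```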

Assessing the Extent and Rate of Deforestation in the Mountainous Tropical Forest

  • Pujiono, Eko;Lee, Woo-Kyun;Kwak, Doo-Ahn;Lee, Jong-Yeol
    • Korean Journal of Remote Sensing / v.27 no.3 / pp.315-328 / 2011
  • Landsat data incorporated with additional bands, the normalized difference vegetation index (NDVI) and band ratios, were used to assess the extent and rate of deforestation in the Gunung Mutis Nature Reserve (GMNR), a mountainous tropical forest in eastern Indonesia. Hybrid classification was chosen as the classification approach. In this approach, unsupervised classification with the iterative self-organizing data analysis (ISODATA) algorithm was used to create signature files and a training data set. A statistical separability measure, transformed divergence (TD), was used to identify the combination of bands showing the highest distinction between the land cover classes in the training data set. Supervised classification with maximum likelihood classification (MLC) was then performed using the selected bands and the training data set. Post-classification smoothing and accuracy assessment were applied to the classified image. Post-classification comparison was used to assess the extent of deforestation, and the rate of deforestation was calculated with the formula suggested by the Food and Agriculture Organization (FAO). The two assessment periods showed that deforestation during 1989-1999 covered 720.72 ha, an annual deforestation rate of 0.80%, while deforestation during 1999-2009 covered 1,059.12 ha, an annual rate of 1.31%. Such results are important for the GMNR authority in establishing strategies, plans, and actions for combating deforestation.
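
For reference, a minimal sketch of the FAO compound-interest formula commonly used for the annual rate of forest-cover change, q = (A2 / A1)^(1 / (t2 - t1)) - 1. The area values below are illustrative placeholders, not figures from the study.

```python
def fao_annual_rate(area_t1: float, area_t2: float, t1: int, t2: int) -> float:
    """Annual rate of change q = (A2 / A1) ** (1 / (t2 - t1)) - 1 (negative = loss)."""
    return (area_t2 / area_t1) ** (1.0 / (t2 - t1)) - 1.0

# Hypothetical forest areas (ha) at two dates, for illustration only.
forest_t1, forest_t2 = 10000.0, 9200.0
rate = fao_annual_rate(forest_t1, forest_t2, 1989, 1999)
print(f"annual rate of change: {rate * 100:.2f}% per year")
```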

Feature Selection for Image Classification of Hyperion Data (Hyperion 영상의 분류를 위한 밴드 추출)

  • 한동엽;조영욱;김용일;이용웅
    • Korean Journal of Remote Sensing / v.19 no.2 / pp.170-179 / 2003
  • In order to classify land use/land cover using multispectral images, importance must be given to defining proper classes and selecting training samples with high class separability. Processing a satellite hyperspectral image, which has a large number of bands, is difficult and time-consuming, and the classification result of a noisy hyperspectral image is often worse than that of a multispectral image. Furthermore, when selecting training fields according to the signatures in the study area, it is difficult to calculate the covariance matrix in clusters that contain fewer pixels than the number of bands. Therefore, in this paper we present an overview of feature extraction methods for the classification of Hyperion data and examine the effectiveness of feature extraction through the accuracy assessment of the classified image. We also evaluate the classification accuracy of the optimal meaningful features selected by class separation distance, which is likewise a method of band reduction. As a result, the classification accuracies of the feature-extracted image and the original image are similar regardless of the classifier, but the number of bands used and the computing time were reduced. The classifiers used were MLC, SAM, and ECHO.
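
As a hedged illustration of the separability measure named above, the NumPy sketch below computes transformed divergence (TD) from per-class mean vectors and covariance matrices, scaled to the conventional 0-2000 range; the toy two-band samples are invented for demonstration and band subsets with higher TD would be preferred.

```python
import numpy as np

def transformed_divergence(mean_i, cov_i, mean_j, cov_j):
    """TD = 2000 * (1 - exp(-D/8)), with D the divergence between two Gaussian classes."""
    ci_inv, cj_inv = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
    dm = (mean_i - mean_j).reshape(-1, 1)
    divergence = 0.5 * np.trace((cov_i - cov_j) @ (cj_inv - ci_inv)) \
               + 0.5 * np.trace((ci_inv + cj_inv) @ dm @ dm.T)
    return 2000.0 * (1.0 - np.exp(-divergence / 8.0))

# Toy two-band example with two classes drawn from random training samples.
rng = np.random.default_rng(0)
class_a = rng.normal([0.2, 0.5], 0.05, size=(200, 2))
class_b = rng.normal([0.6, 0.3], 0.05, size=(200, 2))
td = transformed_divergence(class_a.mean(0), np.cov(class_a.T),
                            class_b.mean(0), np.cov(class_b.T))
print(f"TD = {td:.1f}  (values near 2000 indicate well-separated classes)")
```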

Estimation of Quantitative Precipitation Rate Using an Optimal Weighting Method with RADAR Estimated Rainrate and AWS Rainrate (RADAR 추정 강수량과 AWS 강수량의 최적 결합 방법을 이용한 정량적 강수량 산출)

  • Oh, Hyun-Mi;Heo, Ki-Young;Ha, Kyung-Ja
    • Korean Journal of Remote Sensing / v.22 no.6 / pp.485-493 / 2006
  • This study combines precipitation data with different spatial-temporal characteristics using an optimal weighting method. The method is designed to combine AWS rain gauge data and S-band RADAR-estimated rain data with a weighting function inversely proportional to each source's mean square error over the previous time step. To determine the optimal weight coefficient for the optimized precipitation according to the training time, the method was applied to a Changma case with a long spell of rainy hours, varying the training time from 1 hour to 10 hours. The horizontal field of the optimized precipitation tends to be smoothed after a 2-hour training time, and the optimized precipitation then agrees well with the synoptic station rainfall assumed as the true value. This result suggests that the optimal weighting method can be used to produce high-resolution quantitative precipitation rates from various data sets.
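
A minimal sketch of the inverse-MSE weighting idea described above, assuming each source's weight is taken proportional to 1/MSE from the previous training window so that the more reliable source dominates; the rain rates and MSE values are illustrative, not the study's data.

```python
import numpy as np

def optimal_weights(mse_radar: float, mse_aws: float):
    """Normalized weights inversely proportional to each source's previous MSE."""
    w_radar, w_aws = 1.0 / mse_radar, 1.0 / mse_aws
    total = w_radar + w_aws
    return w_radar / total, w_aws / total

radar_rain = np.array([4.2, 6.1, 8.0])   # mm/h, radar-estimated rain rate (toy values)
aws_rain = np.array([5.0, 5.5, 7.2])     # mm/h, gauge-based rain rate (toy values)
w_r, w_a = optimal_weights(mse_radar=1.8, mse_aws=0.9)   # MSEs from previous time step
combined = w_r * radar_rain + w_a * aws_rain
print(w_r, w_a, combined)
```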

Attention Gated FC-DenseNet for Extracting Crop Cultivation Area by Multispectral Satellite Imagery (다중분광밴드 위성영상의 작물재배지역 추출을 위한 Attention Gated FC-DenseNet)

  • Seong, Seon-kyeong;Mo, Jun-sang;Na, Sang-il;Choi, Jae-wan
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1061-1070 / 2021
  • In this manuscript, we tried to improve the performance of FC-DenseNet by applying an attention gate for the classification of cropping areas. The attention gate module can facilitate the learning of a deep learning model and improve its performance by injecting spatial/spectral weights into each feature map. Crop classification was performed in onion and garlic regions using the proposed deep learning model, in which an attention gate was added to the skip connection part of FC-DenseNet. Training data were produced using various PlanetScope satellite images, and preprocessing was applied to minimize the problem of an imbalanced training dataset. The crop classification results verified that the proposed deep learning model classifies the onion and garlic regions more effectively than the existing FC-DenseNet algorithm.
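
The PyTorch sketch below shows a generic attention gate applied to a skip connection, in the spirit of the module described above: a gating signal from a coarser decoder level re-weights the skip feature map before it is passed onward. Channel sizes and the exact gating arrangement are assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_channels, gate_channels, inter_channels):
        super().__init__()
        self.theta = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, skip, gate):
        # The gate comes from a coarser level; upsample it to the skip resolution.
        gate = nn.functional.interpolate(gate, size=skip.shape[2:],
                                         mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn          # spatially re-weighted skip connection

skip = torch.rand(1, 64, 128, 128)   # toy encoder feature map
gate = torch.rand(1, 128, 64, 64)    # toy decoder gating signal
weighted = AttentionGate(64, 128, 32)(skip, gate)
print(weighted.shape)                # torch.Size([1, 64, 128, 128])
```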

Performance of Support Vector Machine for Classifying Land Cover in Optical Satellite Images: A Case Study in Delaware River Port Area

  • Ramayanti, Suci;Kim, Bong Chan;Park, Sungjae;Lee, Chang-Wook
    • Korean Journal of Remote Sensing / v.38 no.6_4 / pp.1911-1923 / 2022
  • The availability of high-resolution satellite images provides precise information without direct observation of the research target. The Korea Multi-Purpose Satellite (KOMPSAT), also known as the Arirang satellite, has been developed and utilized for earth observation. Machine learning models have repeatedly proven to be good classifiers of remotely sensed images. This study aimed to compare the performance of the support vector machine (SVM) model in classifying the land cover of the Delaware River port area on high- and medium-resolution images. Three optical images, from KOMPSAT-2, KOMPSAT-3A, and Sentinel-2B, were classified into six land cover classes: water, road, vegetation, building, vacant, and shadow. The KOMPSAT images were provided by the Korea Aerospace Research Institute (KARI), and the Sentinel-2B image was provided by the European Space Agency (ESA). The training samples were manually digitized for each land cover class and used as the reference image. The predicted images were compared to the actual data to obtain an accuracy assessment using confusion matrix analysis. In addition, the time consumed by training and classification was recorded to evaluate model performance. The results showed that the KOMPSAT-3A image has the highest overall accuracy, followed by the KOMPSAT-2 and Sentinel-2B results. On the other hand, the model took longer to classify the higher-resolution images than the lower-resolution one. We therefore conclude that the SVM model performed better on the higher-resolution imagery, at the cost of longer training and classification times. This finding may help related researchers when selecting satellite imagery for effective and accurate image classification.
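
As a rough illustration of the workflow summarized above, the scikit-learn sketch below fits an SVM on sampled training pixels, classifies a scene, times both steps, and scores the result with a confusion matrix; all arrays are random stand-ins, not the study's KOMPSAT or Sentinel-2B data.

```python
import time
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score

bands, height, width = 4, 256, 256                 # e.g. a 4-band optical subset
image = np.random.rand(height, width, bands)       # stand-in for a satellite scene
X_train = np.random.rand(600, bands)               # digitized training pixels (toy)
y_train = np.random.randint(0, 6, 600)             # 6 classes: water, road, ...

model = SVC(kernel="rbf", C=10, gamma="scale")
t0 = time.perf_counter(); model.fit(X_train, y_train)
train_time = time.perf_counter() - t0

t0 = time.perf_counter()
classified = model.predict(image.reshape(-1, bands)).reshape(height, width)
predict_time = time.perf_counter() - t0
print(f"train {train_time:.2f} s, classify {predict_time:.2f} s")

reference = np.random.randint(0, 6, (height, width))   # placeholder reference map
print(confusion_matrix(reference.ravel(), classified.ravel()))
print("overall accuracy:", accuracy_score(reference.ravel(), classified.ravel()))
```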

A Study on the Land Cover Classification and Cross Validation of AI-based Aerial Photograph

  • Lee, Seong-Hyeok;Myeong, Soojeong;Yoon, Donghyeon;Lee, Moung-Jin
    • Korean Journal of Remote Sensing / v.38 no.4 / pp.395-409 / 2022
  • The purpose of this study is to evaluate the classification performance and applicability of land cover datasets constructed for AI training when they are cross-validated on other areas. Gyeongsang-do and Jeolla-do in South Korea were selected as the cross-validation study areas, and training datasets were obtained from AI-Hub. The datasets were applied to the U-Net algorithm, a semantic segmentation algorithm, for each region, and accuracy was evaluated by applying the trained models to both the same and the other test areas. There was a difference of about 13-15% in overall classification accuracy between the same and other areas. For rice fields, fields, and buildings, higher accuracy was obtained in the Jeolla-do test areas; for roads, higher accuracy was obtained in the Gyeongsang-do test areas. In terms of the difference in accuracy by weight, applying the Gyeongsang-do weights yielded high accuracy for forests, while applying the Jeolla-do weights yielded high accuracy for dry fields. The land cover classification results show that the classification performance of existing datasets differs by area. When constructing land cover maps for AI training, higher-quality datasets can be expected if the characteristics of various areas are reflected. This study is highly scalable from two perspectives: first, it can be applied to AI studies of satellite imagery and to the land cover field; second, extending it to satellite imagery makes it possible to cover large areas that are difficult to access.
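
A small sketch of the cross-region scoring idea: predictions made with one region's weights are compared per class against the other region's labels. The arrays below are placeholders for the U-Net outputs and ground truth; model loading and tile I/O are omitted.

```python
import numpy as np

def per_class_accuracy(pred: np.ndarray, label: np.ndarray, n_classes: int):
    """Fraction of correctly predicted pixels for each class present in the labels."""
    return {c: float((pred[label == c] == c).mean()) for c in range(n_classes)
            if (label == c).any()}

# pred_other: output on the *other* region using this region's trained weights.
pred_other = np.random.randint(0, 7, (512, 512))    # placeholder prediction
label_other = np.random.randint(0, 7, (512, 512))   # placeholder ground truth
print(per_class_accuracy(pred_other, label_other, n_classes=7))
```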

The Efficiency of Long Short-Term Memory (LSTM) in Phenology-Based Crop Classification

  • Ehsan Rahimi;Chuleui Jung
    • Korean Journal of Remote Sensing / v.40 no.1 / pp.57-69 / 2024
  • Crop classification plays a vital role in monitoring agricultural landscapes and enhancing food production. In this study, we explore the effectiveness of Long Short-Term Memory (LSTM) models for crop classification, focusing on distinguishing between apple and rice crops. The aim was to overcome the challenges associated with finding phenology-based classification thresholds by utilizing LSTM to capture the entire Normalized Difference Vegetation Index (NDVI) trend. Our methodology involves training the LSTM model on a reference site and applying it to three separate test sites. First, we generated 25 NDVI images from the Sentinel-2A data. After segmenting the study areas, we calculated the mean NDVI values for each segment. For the reference area, we employed a training approach utilizing the NDVI trend line, which served as the basis for training our crop classification model. Following the training phase, we applied the trained model to the three test sites. The results demonstrated a high overall accuracy of 0.92 and a kappa coefficient of 0.85 for the reference site. The overall accuracies for the test sites were also favorable, ranging from 0.88 to 0.92, indicating successful classification outcomes. We also found that certain phenological metrics can be less effective in crop classification, which highlights the limitations of relying solely on phenological map thresholds and emphasizes the challenges of detecting phenology in real time, particularly in the early stages of crops. Our study demonstrates the potential of LSTM models in crop classification tasks, showcasing their ability to capture temporal dependencies and analyze time series remote sensing data. While limitations exist in capturing specific phenological events, the integration of alternative approaches holds promise for enhancing classification accuracy. By leveraging advanced techniques and considering the specific challenges of agricultural landscapes, we can continue to refine crop classification models and support agricultural management practices.
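
A rough PyTorch sketch of the approach outlined above: each segment's 25-step NDVI series is fed to an LSTM that classifies it as apple or rice. The layer sizes and toy tensors are assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn

class NDVILSTMClassifier(nn.Module):
    def __init__(self, hidden=32, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, ndvi_series):              # (batch, 25, 1)
        _, (h_n, _) = self.lstm(ndvi_series)
        return self.head(h_n[-1])                 # class logits per segment

model = NDVILSTMClassifier()
series = torch.rand(8, 25, 1)                     # 8 segments x 25 NDVI dates (toy)
labels = torch.randint(0, 2, (8,))                # 0 = apple, 1 = rice (toy labels)
loss = nn.CrossEntropyLoss()(model(series), labels)
loss.backward()
print(loss.item())
```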

2-Step Structural Damage Analysis Based on Foundation Model for Structural Condition Assessment (시설물 상태평가를 위한 파운데이션 모델 기반 2-Step 시설물 손상 분석)

  • Hyunsoo Park;Hwiyoung Kim;Dongki Chung
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.621-635 / 2023
  • Assessing a structure's condition is a crucial process for evaluating its usability and determining its diagnostic cycle. The manpower-based methods currently employed suffer from issues related to safety, efficiency, and objectivity. To address these concerns, image-based deep learning research is being conducted. However, structural damage data are difficult to acquire, which makes it hard to construct a substantial amount of training data and thus limits the effectiveness of deep learning-based condition assessment. In this study, we propose a foundation model-based 2-step structural damage analysis to overcome the lack of training data in image-based structural condition assessment. We subdivided the elements of structural condition assessment into instantiation and quantification, and in the quantification step we applied a foundation model for image segmentation. Our method demonstrated a 10%-point increase in mean intersection over union compared with conventional image segmentation techniques, with a notable 40%-point improvement in the case of rebar exposure. We anticipate that the proposed approach will improve performance in domains where acquiring training data is challenging.
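
For context, a short sketch of the mean intersection-over-union (mIoU) metric used above to compare segmentation results against a reference; the class layout below is hypothetical.

```python
import numpy as np

def mean_iou(pred: np.ndarray, label: np.ndarray, n_classes: int) -> float:
    """Average per-class IoU over the classes that appear in prediction or label."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 3, (256, 256))    # e.g. background / crack / rebar exposure
label = np.random.randint(0, 3, (256, 256))
print(f"mIoU = {mean_iou(pred, label, n_classes=3):.3f}")
```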

Semantic Segmentation of Drone Images Based on Combined Segmentation Network Using Multiple Open Datasets (개방형 다중 데이터셋을 활용한 Combined Segmentation Network 기반 드론 영상의 의미론적 분할)

  • Ahram Song
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.967-978 / 2023
  • This study proposed and validated a combined segmentation network (CSN) designed to train effectively on multiple drone image datasets and to enhance the accuracy of semantic segmentation. CSN shares the entire encoding domain to accommodate the diversity of three drone datasets, while the decoding domains are trained independently. During training, the segmentation accuracy of CSN was lower than that of U-Net and the pyramid scene parsing network (PSPNet) on single datasets, because it considers the loss values of all datasets simultaneously. However, when applied to domestic autonomous drone images, CSN classified pixels into the appropriate classes without requiring additional training, outperforming PSPNet. This research suggests that CSN can serve as a valuable tool for training effectively on diverse drone image datasets and improving object recognition accuracy in new regions.
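
A hedged PyTorch sketch of the training scheme described above: one shared encoder, one decoder per drone dataset, and a loss summed over all datasets in each step. Layer sizes and class counts are assumptions for illustration, not the paper's CSN configuration.

```python
import torch
import torch.nn as nn

class CombinedSegmentationNet(nn.Module):
    def __init__(self, class_counts=(6, 12, 8)):
        super().__init__()
        # Shared encoding domain used by all datasets.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # One independently trained decoding head per dataset.
        self.decoders = nn.ModuleList(
            nn.Conv2d(64, n, kernel_size=1) for n in class_counts
        )

    def forward(self, x, dataset_idx):
        return self.decoders[dataset_idx](self.encoder(x))

model = CombinedSegmentationNet()
criterion = nn.CrossEntropyLoss()
batches = [(torch.rand(2, 3, 64, 64), torch.randint(0, n, (2, 64, 64)))
           for n in (6, 12, 8)]                       # one toy batch per dataset
loss = sum(criterion(model(img, i), lbl) for i, (img, lbl) in enumerate(batches))
loss.backward()                                        # combined loss over all datasets
print(loss.item())
```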