
Estimation of TROPOMI-derived Ground-level SO2 Concentrations Using Machine Learning Over East Asia (기계학습을 활용한 동아시아 지역의 TROPOMI 기반 SO2 지상농도 추정)

  • Choi, Hyunyoung; Kang, Yoojin; Im, Jungho
    • Korean Journal of Remote Sensing, v.37 no.2, pp.275-290, 2021
  • Sulfur dioxide (SO2) in the atmosphere is mainly generated from anthropogenic emission sources. It forms ultra-fine particulate matter through chemical reactions and has harmful effects on both the environment and human health. In particular, ground-level SO2 concentrations are closely related to human activities. Satellite observations such as TROPOMI (TROPOspheric Monitoring Instrument)-derived column density data can provide spatially continuous monitoring of ground-level SO2 concentrations. This study proposes a 2-step residual corrected model that estimates ground-level SO2 concentrations through the synergistic use of satellite data and numerical model output. Random forest machine learning was adopted in the 2-step residual corrected model. The proposed model was evaluated through three cross-validations (i.e., random, spatial, and temporal). The model produced slopes of 1.14-1.25, R values of 0.55-0.65, and relative root-mean-square errors (rRMSE) of 58-63%, improvements of 10% in slope and 3% in R and rRMSE over the model without residual correction. Performance was slightly lower in Japan, often resulting in overestimation, where the sample size was small and the concentration level was relatively low. The spatial and temporal distributions of SO2 produced by the model agreed with those of the in-situ measurements, especially over the Yangtze River Delta in China and the Seoul Metropolitan Area in South Korea, which depend strongly on the characteristics of anthropogenic emission sources. The proposed model can be used for long-term monitoring of ground-level SO2 concentrations in both the spatial and temporal domains.
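The 2-step residual correction described above can be sketched compactly. The sketch below is illustrative only: it substitutes ordinary least squares for the paper's random forests and uses synthetic stand-in data, but the mechanics (fit a base model, then fit and add back a model of its residuals) are the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the predictors (e.g., TROPOMI column density and
# numerical-model output) and a nonlinear target (ground-level SO2).
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = 3.0 * X[:, 0] + np.sin(4.0 * X[:, 1]) + rng.normal(0.0, 0.1, 200)

def fit_predict(features, target):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return A @ coef

# Step 1: base model maps the inputs to a first SO2 estimate.
y_base = fit_predict(X, y)

# Step 2: a second model fits the residuals of step 1 (here with extra
# nonlinear features); its prediction is added back as a correction.
X_res = np.column_stack([X, np.sin(4.0 * X)])
y_final = y_base + fit_predict(X_res, y - y_base)

rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
```

On the training data, the corrected estimate can never have higher RMSE than the base estimate, because the residual model can always predict zero correction.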

A Study on Daytime Transparent Cloud Detection through Machine Learning: Using GK-2A/AMI (기계학습을 통한 주간 반투명 구름탐지 연구: GK-2A/AMI를 이용하여)

  • Byeon, Yugyeong; Jin, Donghyun; Seong, Noh-hun; Woo, Jongho; Jeon, Uujin; Han, Kyung-Soo
    • Korean Journal of Remote Sensing, v.38 no.6_1, pp.1181-1189, 2022
  • Clouds are composed of tiny water droplets, ice crystals, or mixtures of the two suspended in the atmosphere, and cover about two-thirds of the Earth's surface. Cloud detection in satellite images is a difficult task because clouds and some ground objects or surfaces have similar reflectance characteristics. In contrast to thick clouds, which have distinct characteristics, thin transparent clouds show weak contrast against the background in satellite images and appear mixed with the ground surface. To overcome this limitation, this study conducted cloud detection focusing on transparent clouds using machine learning techniques (Random Forest [RF] and Convolutional Neural Networks [CNN]). As reference data, the Cloud Mask and Cirrus Mask from the MOD35 product of the Moderate Resolution Imaging Spectroradiometer (MODIS) were used, and the pixel ratio of the training data was set to about 1:1:1 for opaque clouds, transparent clouds, and clear sky to account for transparent cloud pixels during model training. In the qualitative comparison, both RF and CNN successfully detected various types of clouds, including transparent clouds, and RF+CNN, which mixes the results of the RF and CNN models, performed cloud detection well, confirming that the limitations of the individual models were mitigated. Quantitatively, the overall accuracy (OA) of RF was 92%, CNN showed 94.11%, and RF+CNN showed 94.29%.
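The 1:1:1 pixel-ratio configuration amounts to class-balanced subsampling of the training pixels. A minimal sketch, with hypothetical labels standing in for the MOD35-derived masks:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pixel reference labels, as from the MOD35 masks:
# 0 = clear sky, 1 = transparent cloud, 2 = opaque cloud.
labels = rng.choice([0, 1, 2], size=10_000, p=[0.7, 0.1, 0.2])

def balanced_indices(labels, rng):
    """Subsample every class down to the rarest one, giving ~1:1:1."""
    classes, counts = np.unique(labels, return_counts=True)
    n = int(counts.min())
    picks = [rng.choice(np.flatnonzero(labels == c), n, replace=False)
             for c in classes]
    return np.concatenate(picks)

idx = balanced_indices(labels, rng)
_, balanced_counts = np.unique(labels[idx], return_counts=True)
```

Balancing prevents the dominant clear-sky class from swamping the rare transparent-cloud class during training.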

Estimation of High Resolution Sea Surface Salinity Using Multi Satellite Data and Machine Learning (다종 위성자료와 기계학습을 이용한 고해상도 표층 염분 추정)

  • Sung, Taejun; Sim, Seongmun; Jang, Eunna; Im, Jungho
    • Korean Journal of Remote Sensing, v.38 no.5_2, pp.747-763, 2022
  • Ocean salinity affects ocean circulation on a global scale, and low-salinity water around coastal areas often affects aquaculture and fisheries. Microwave satellite sensors (e.g., Soil Moisture Active Passive [SMAP]) have provided sea surface salinity (SSS) based on the dielectric characteristics of water associated with SSS and sea surface temperature (SST). In this study, a Light Gradient Boosting Machine (LGBM)-based model for generating high-resolution SSS from Geostationary Ocean Color Imager (GOCI) data was proposed, using the machine learning-improved SMAP SSS of Jang et al. (2022) as reference data (SMAP SSS (Jang)). Three schemes with different input variables were tested, and scheme 3, with all variables including Multi-scale Ultra-high Resolution SST, yielded the best performance (coefficient of determination = 0.60, root mean square error = 0.91 psu). The proposed LGBM-based GOCI SSS showed a spatiotemporal pattern similar to SMAP SSS (Jang), with much higher spatial resolution, even in coastal areas where SMAP SSS (Jang) was not available. In addition, when tested on the great flood that occurred in southern China in August 2020, the GOCI SSS simulated the spatial and temporal changes of Changjiang Diluted Water well. This research demonstrates the potential of optical satellite data, combined with improved microwave-based SSS, to generate high-resolution SSS, especially in coastal areas.
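LGBM is a gradient-boosted tree ensemble; its core idea, stagewise fitting of small trees to the current residuals, can be illustrated with one-feature decision stumps on toy data. Everything below is a stand-in for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-ins for a predictor (e.g., a GOCI band ratio) and SSS.
X = rng.uniform(size=(300, 1))
y = np.sin(6.0 * X[:, 0]) + rng.normal(0.0, 0.05, 300)

def fit_stump(x, residual):
    """Best single threshold split, predicting the residual mean per side."""
    best = None
    for t in np.quantile(x, np.linspace(0.1, 0.9, 17)):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    return best[1:]

# Stagewise boosting: every stump fits the residuals left by its predecessors,
# and its (shrunken) prediction is added to the running estimate.
pred = np.full_like(y, y.mean())
stumps, lr = [], 0.3
for _ in range(50):
    t, lo, hi = fit_stump(X[:, 0], y - pred)
    stumps.append((t, lo, hi))
    pred += lr * np.where(X[:, 0] <= t, lo, hi)

r2 = 1.0 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Real LGBM grows leaf-wise trees over many features with histogram binning, but the residual-fitting loop is the same principle.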

Estimation of Chlorophyll-a Concentration in Nakdong River Using Machine Learning-Based Satellite Data and Water Quality, Hydrological, and Meteorological Factors (머신러닝 기반 위성영상과 수질·수문·기상 인자를 활용한 낙동강의 Chlorophyll-a 농도 추정)

  • Soryeon Park; Sanghun Son; Jaegu Bae; Doi Lee; Dongju Seo; Jinsoo Kim
    • Korean Journal of Remote Sensing, v.39 no.5_1, pp.655-667, 2023
  • Algal bloom outbreaks are frequently reported around the world, and serious water pollution problems arise every year in Korea, so it is necessary to protect the aquatic ecosystem through continuous management and rapid response. Many studies use satellite images to estimate the concentration of chlorophyll-a (Chl-a), an indicator of algal bloom occurrence. However, because it is difficult to calculate Chl-a accurately from spectral characteristics and atmospheric correction errors that vary by water system, machine learning models have recently been used, and the factors affecting algal blooms must be considered along with satellite spectral indices. Therefore, this study constructed a dataset combining water quality, hydrological, and meteorological factors with Sentinel-2 images. The representative ensemble models random forest and extreme gradient boosting (XGBoost) were used to predict the Chl-a concentration at eight weirs on the Nakdong River over the past five years. The R-squared score (R2), root mean square error (RMSE), and mean absolute error (MAE) were used as model evaluation indicators, and XGBoost achieved an R2 of 0.80, an RMSE of 6.612, and an MAE of 4.457. Shapley additive explanations (SHAP) analysis showed that the water quality factors (suspended solids, biochemical oxygen demand, and dissolved oxygen) and the band ratio using red-edge bands were highly important in both models. The varied input data were confirmed to improve model performance, suggesting applicability to algal bloom detection both in Korea and abroad.
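The three evaluation indicators (R2, RMSE, MAE) are straightforward to compute; a self-contained numpy version on made-up values:

```python
import numpy as np

def r2_rmse_mae(y_true, y_pred):
    """R-squared, RMSE, and MAE, the indicators quoted above."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_true - y_pred
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot          # fraction of variance explained
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    return r2, rmse, mae

# Made-up Chl-a values for illustration only.
r2, rmse, mae = r2_rmse_mae([10.0, 20.0, 30.0, 40.0],
                            [12.0, 18.0, 33.0, 39.0])
```

RMSE penalizes large errors more strongly than MAE, which is why the two are usually reported together.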

Gap-Filling of Sentinel-2 NDVI Using Sentinel-1 Radar Vegetation Indices and AutoML (Sentinel-1 레이더 식생지수와 AutoML을 이용한 Sentinel-2 NDVI 결측화소 복원)

  • Youjeong Youn; Jonggu Kang; Seoyeon Kim; Yemin Jeong; Soyeon Choi; Yungyo Im; Youngmin Seo; Myoungsoo Won; Junghwa Chun; Kyungmin Kim; Keunchang Jang; Joongbin Lim; Yangwon Lee
    • Korean Journal of Remote Sensing, v.39 no.6_1, pp.1341-1352, 2023
  • The normalized difference vegetation index (NDVI) derived from satellite images is a crucial tool for monitoring forests and agriculture over broad areas because periodic acquisition of the data is ensured. However, optical sensor-based vegetation indices (VI) are not available in areas covered by clouds. This paper presents a synthetic aperture radar (SAR)-based approach to retrieving the optical sensor-based NDVI using machine learning; SAR systems can observe the land surface day and night in all weather conditions. Radar vegetation indices (RVI) from the Sentinel-1 vertical-vertical (VV) and vertical-horizontal (VH) polarizations, surface elevation, and air temperature were used as input features for an automated machine learning (AutoML) model that performs gap-filling of the Sentinel-2 NDVI. The mean bias error (MBE) was 7.214E-05 and the correlation coefficient (CC) was 0.878, demonstrating the feasibility of the proposed method. This approach can be applied to the construction of gap-free nationwide NDVI from Sentinel-1 and Sentinel-2 images for environmental monitoring and resource management.
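For reference, one widely used dual-polarization radar vegetation index for Sentinel-1 is RVI = 4·VH/(VV + VH) in linear power units, and the gap-filling step reduces to substituting the model estimate only at cloud-masked pixels. The exact RVI variant used in the paper is not stated here, so treat the formula and all values below as illustrative:

```python
import numpy as np

def rvi_dual_pol(vv, vh):
    """One common dual-pol radar vegetation index:
    RVI = 4*VH / (VV + VH), backscatter in linear power units."""
    vv, vh = np.asarray(vv, float), np.asarray(vh, float)
    return 4.0 * vh / (vv + vh)

# Gap-filling idea: keep observed NDVI, and substitute the model output
# (a placeholder array here) only where clouds left a gap (NaN).
ndvi = np.array([0.62, np.nan, 0.58, np.nan])
model_ndvi = np.array([0.60, 0.55, 0.57, 0.50])
filled = np.where(np.isnan(ndvi), model_ndvi, ndvi)
```

Masked substitution preserves every valid optical observation and only fills the cloud-contaminated pixels.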

Retrieval of Hourly Aerosol Optical Depth Using Top-of-Atmosphere Reflectance from GOCI-II and Machine Learning over South Korea (GOCI-II 대기상한 반사도와 기계학습을 이용한 남한 지역 시간별 에어로졸 광학 두께 산출)

  • Seyoung Yang; Hyunyoung Choi; Jungho Im
    • Korean Journal of Remote Sensing, v.39 no.5_3, pp.933-948, 2023
  • Atmospheric aerosols not only have adverse effects on human health but also exert direct and indirect impacts on the climate system, so it is imperative to understand their characteristics and spatiotemporal distribution. Numerous studies have monitored aerosols, predominantly by retrieving aerosol optical depth (AOD) from satellite observations. Nonetheless, this approach primarily relies on look-up-table-based inversion algorithms, which are computationally intensive and carry associated uncertainties. In this study, a novel high-resolution AOD direct-retrieval algorithm based on machine learning was developed using top-of-atmosphere reflectance data from the Geostationary Ocean Color Imager-II (GOCI-II), the differences between those reflectances and the past 30-day minimum reflectance, and meteorological variables from numerical models. The Light Gradient Boosting Machine (LGBM) technique was used, and the estimates were validated through random, temporal, and spatial N-fold cross-validation (CV) against ground-based Aerosol Robotic Network (AERONET) AOD observations. The three CV results consistently demonstrated robust performance, yielding R2 = 0.70-0.80, RMSE = 0.08-0.09, and 75.2-85.1% of estimates within the expected error (EE). Shapley Additive exPlanations (SHAP) analysis confirmed the substantial influence of the reflectance-related variables on AOD estimation. An examination of the spatiotemporal distribution of AOD in Seoul and Ulsan revealed that the developed LGBM model yields results in close concordance with AERONET AOD over time, confirming its suitability for AOD retrieval at high spatiotemporal resolution (i.e., hourly, 250 m). Furthermore, a comparison of data coverage showed that the LGBM model increased retrieval frequency by approximately 8.8% relative to the GOCI-II L2 AOD products, mitigating the excessive masking over bright surfaces that is often encountered in physics-based AOD retrieval.
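The "past 30-day minimum reflectance" input is a common surface-removal device: the darkest recent observation approximates the aerosol-free surface signal, so the difference from it highlights aerosol loading. A hypothetical single-pixel sketch (all data synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical reflectance series for one pixel (one value per day).
# A dark baseline plays the role of the aerosol-free surface signal;
# intermittent positive excursions mimic aerosol brightening.
days = 60
baseline = 0.08
events = np.zeros(days)
events[::3] = rng.uniform(0.01, 0.05, days // 3)  # every 3rd day is hazy
refl = baseline + events

def diff_from_past_min(series, window=30):
    """Reflectance minus the minimum over the preceding `window` samples."""
    out = np.full(len(series), np.nan)
    for i in range(window, len(series)):
        out[i] = series[i] - series[i - window:i].min()
    return out

feature = diff_from_past_min(refl)
```

Because clear days recur within any 30-day window here, the rolling minimum recovers the baseline and the feature isolates the aerosol excursion.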

Analysis of Applicability of RPC Correction Using Deep Learning-Based Edge Information Algorithm (딥러닝 기반 윤곽정보 추출자를 활용한 RPC 보정 기술 적용성 분석)

  • Jaewon Hur; Changhui Lee; Doochun Seo; Jaehong Oh; Changno Lee; Youkyung Han
    • Korean Journal of Remote Sensing, v.40 no.4, pp.387-396, 2024
  • Most very high-resolution (VHR) satellite images provide rational polynomial coefficients (RPC) data to facilitate the transformation between ground coordinates and image coordinates. However, the initial RPC often contain geometric errors, necessitating correction through matching with ground control points (GCPs). A GCP chip is a small image patch extracted from an orthorectified image together with the height of its center point, which can be used directly for geometric correction. Many studies have focused on area-based matching methods to accurately align GCP chips with VHR satellite images. In cases with seasonal differences or changed areas, edge-based algorithms are often used for matching because relying solely on pixel values is difficult. However, traditional edge extraction algorithms, such as the Canny edge detector, require threshold settings tailored to the spectral characteristics of each satellite image. Therefore, this study uses deep learning-based edge information that is insensitive to the regional characteristics of satellite images for matching. Specifically, we use a pretrained pixel difference network (PiDiNet) to generate edge maps for both the satellite images and the GCP chips. These edge maps are then used as input for normalized cross-correlation (NCC) and relative edge cross-correlation (RECC) to identify the peak points with the highest correlation between the two edge maps. To remove mismatched pairs and thus obtain bias-compensated RPC, we iteratively apply data snooping. Finally, we compare the results qualitatively and quantitatively with those obtained from traditional NCC and RECC methods. The PiDiNet-based approach achieved high matching accuracy, with root mean square error (RMSE) values ranging from 0.3 to 0.9 pixels. However, the PiDiNet-generated edges were thicker than those from the Canny method, leading to slightly lower registration accuracy in some images. Nevertheless, PiDiNet consistently produced characteristic edge information, allowing successful matching even in challenging regions. This study demonstrates that improving the robustness of edge-based registration methods can enable effective registration across diverse regions.
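The NCC matching step can be sketched directly: slide the GCP-chip edge map over the image edge map and take the correlation peak as the match offset. The toy arrays below stand in for PiDiNet edge maps:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_chip(image, chip):
    """Slide the chip over the image and return the NCC peak offset."""
    H, W = image.shape
    h, w = chip.shape
    scores = np.full((H - h + 1, W - w + 1), -np.inf)
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = ncc(image[i:i + h, j:j + w], chip)
    return np.unravel_index(np.argmax(scores), scores.shape)

rng = np.random.default_rng(4)
image = rng.uniform(size=(40, 40))       # stand-in for an image edge map
chip = image[12:20, 25:33].copy()        # true offset is (12, 25)
offset = match_chip(image, chip)
```

The peak offset, minus the offset predicted by the initial RPC, gives the correction vector; data snooping then discards outlier pairs before re-estimating the bias.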

Predicting Forest Gross Primary Production Using Machine Learning Algorithms (머신러닝 기법의 산림 총일차생산성 예측 모델 비교)

  • Lee, Bora; Jang, Keunchang; Kim, Eunsook; Kang, Minseok; Chun, Jung-Hwa; Lim, Jong-Hwan
    • Korean Journal of Agricultural and Forest Meteorology, v.21 no.1, pp.29-41, 2019
  • Terrestrial gross primary production (GPP) is the largest global carbon flux, and forest ecosystems are important because of their ability to store significantly larger amounts of carbon than other terrestrial ecosystems. There have been several attempts to estimate GPP using mechanism-based models. However, mechanism-based models, which incorporate biological, chemical, and physical processes, are limited by a lack of flexibility in predicting the non-stationary ecological processes caused by local and global change. Instead, mechanism-free methods are strongly recommended for estimating the nonlinear dynamics that occur in nature, such as GPP. Therefore, we used mechanism-free machine learning techniques to estimate daily GPP. In this study, support vector machine (SVM), random forest (RF), and artificial neural network (ANN) models were used and compared with a traditional multiple linear regression model (LM). MODIS products and meteorological parameters from eddy covariance data were employed to train the machine learning and LM models from 2006 to 2013. The GPP prediction models were compared with daily GPP from eddy covariance measurements in a deciduous forest in South Korea in 2014 and 2015. Statistical measures including the correlation coefficient (R), root mean square error (RMSE), and mean squared error (MSE) were used to evaluate model performance. In general, the machine learning models (R = 0.85-0.93, MSE = 1.00-2.05, p < 0.001) performed better than the linear regression model (R = 0.82-0.92, MSE = 1.24-2.45, p < 0.001). These results demonstrate high predictability and the possibility of extending mechanism-free machine learning models with remote sensing to predict non-stationary ecological processes such as seasonal GPP.
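The advantage of mechanism-free models on nonlinear responses can be seen in miniature by comparing a linear fit with a simple nonparametric stand-in (k-nearest-neighbour averaging) on synthetic data. None of this reproduces the paper's SVM/RF/ANN models; it only illustrates why a flexible model beats a linear one on a curved response:

```python
import numpy as np

rng = np.random.default_rng(6)

# Nonlinear toy relation (stand-in for GPP vs. meteorology/MODIS inputs).
x = np.sort(rng.uniform(0.0, 6.0, 300))
y = np.sin(x) + rng.normal(0.0, 0.1, 300)

# Linear model (LM) via least squares.
A = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_lm = A @ coef

# Mechanism-free stand-in: k-nearest-neighbour mean (k = 9).
def knn_predict(x_train, y_train, x_query, k=9):
    d = np.abs(x_train[None, :] - x_query[:, None])
    idx = np.argsort(d, axis=1)[:, :k]   # k closest training points
    return y_train[idx].mean(axis=1)

y_knn = knn_predict(x, y, x)
mse = lambda e: float(np.mean(e ** 2))
```

The linear model cannot track the sinusoidal response, so its MSE stays high; the local average follows the curve.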

Derivation of Inherent Optical Properties Based on Deep Neural Network (심층신경망 기반의 해수 고유광특성 도출)

  • Hyeong-Tak Lee; Hey-Min Choi; Min-Kyu Kim; Suk Yoon; Kwang-Seok Kim; Jeong-Eon Moon; Hee-Jeong Han; Young-Je Park
    • Korean Journal of Remote Sensing, v.39 no.5_1, pp.695-713, 2023
  • In coastal waters, phytoplankton, suspended particulate matter, and dissolved organic matter intricately and nonlinearly alter the reflectivity of seawater. Neural network technology, which has advanced rapidly in recent years, offers the advantage of effectively representing complex nonlinear relationships. Previous studies constructed a three-stage neural network to extract the inherent optical properties of each component; this study instead proposes an algorithm that directly employs a single deep neural network. The dataset consists of synthetic data provided by the International Ocean Colour Coordinating Group, with above-surface remote-sensing reflectance at nine wavelengths as the input. We derived the inherent optical properties from this dataset using a deep neural network. To evaluate performance, we compared the results with a quasi-analytical algorithm and analyzed how log transformation of the data distribution affects the performance of the deep neural network algorithm. We found that the deep neural network algorithm accurately estimated the inherent optical properties (R2 ≥ 0.9), except for the absorption coefficient of suspended particulate matter, and successfully separated the combined absorption of suspended particulate matter and dissolved organic matter into its two components. We also observed that applying the algorithm directly, without log transformation of the data, made little difference to performance. To apply these findings to ocean color data processing effectively, further research is needed to train with field data and additional datasets from various marine regions, to compare and analyze empirical and semi-analytical methods, and to assess the strengths and weaknesses of each algorithm appropriately.
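The log-transformation question arises because absorption and backscattering coefficients are positive and right-skewed; taking logs makes the training target roughly symmetric and is exactly invertible. A small illustration on synthetic lognormal values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Positive, right-skewed values mimic absorption coefficients.
a = rng.lognormal(mean=-2.0, sigma=1.0, size=5000)

def skewness(x):
    """Sample skewness (third standardized moment)."""
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

log_a = np.log(a)          # train the network in log space...
restored = np.exp(log_a)   # ...and invert exactly afterwards
```

The transform compresses the long right tail, which usually stabilizes training; the study's finding is that, for its dataset, the network coped about as well without it.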

Band Selection Using L2,1-norm Regression for Hyperspectral Target Detection (초분광 표적 탐지를 위한 L2,1-norm Regression 기반 밴드 선택 기법)

  • Kim, Joochang; Yang, Yukyung; Kim, Jun-Hyung; Kim, Junmo
    • Korean Journal of Remote Sensing, v.33 no.5_1, pp.455-467, 2017
  • When performing target detection with hyperspectral imagery, a feature extraction process is necessary to address the redundancy of adjacent spectral bands and the large computational cost of high-dimensional data. This study proposes a new band selection method that uses the L2,1-norm regression model to bring the feature selection techniques of machine learning to hyperspectral band selection. To analyze the performance of the proposed band selection technique, we collected hyperspectral imagery and used it to evaluate target detection with band selection. Adaptive Cosine Estimator (ACE) detection performance was maintained or improved when the number of bands was reduced from 164 to about 30-40 bands in the 350-2500 nm wavelength range. The experimental results show that the proposed band selection technique extracts bands that are effective for detection in hyperspectral images and can reduce the data size without reducing performance, which can help improve the processing speed of real-time target detection systems in the future.
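The L2,1 norm of a coefficient matrix sums the L2 norms of its rows, so penalizing it drives whole rows (bands) toward zero; surviving bands are then ranked by row norm. A minimal sketch with a hypothetical coefficient matrix (the solver that produces such a matrix is omitted):

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: the sum of the L2 norms of the rows of W."""
    return float(np.sqrt((W ** 2).sum(axis=1)).sum())

def rank_bands(W, k):
    """Rank rows (bands) by L2 row norm; under an L2,1 penalty,
    unimportant rows shrink toward zero, so large rows mark kept bands."""
    scores = np.sqrt((W ** 2).sum(axis=1))
    return np.argsort(scores)[::-1][:k]

# Hypothetical 3-band x 2-target coefficient matrix (not from the paper).
W = np.array([[3.0, 4.0],
              [0.0, 0.1],
              [1.0, 0.0]])
selected = rank_bands(W, 2)
```

Row-wise (rather than element-wise) sparsity is what makes the penalty a band *selector*: a band is either kept for all regression targets or dropped entirely.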