• Title/Summary/Keyword: 구름 탐지 (cloud detection)


Evaluation of the Utility of SSG Algorithm for Image Restoration of Landsat-8 (Landsat 8호 영상 복원을 위한 SSG 기법 활용성 평가)

  • Lee, Mi Hee;Lee, Dalgeun;Yu, Jung Hum;Kim, Jinyoung
    • Korean Journal of Remote Sensing, v.36 no.5_4, pp.1231-1244, 2020
  • Landsat satellites are representative optical satellites that have observed the Earth's surface over the long term, and are suitable for long-term applications such as disaster preparedness/recovery monitoring, land use change, change detection, and time-series monitoring. In this paper, clouds and cloud shadows were detected using the QA band to detect and remove clouds simply and efficiently. The missing areas of the experimental image were then restored through the SSG algorithm, which does not directly reference the pixel values of the reference image but instead performs restoration toward the pixel values of the experimental image. Through this study, we present the possibility of utilizing the modified SSG algorithm by quantitatively and qualitatively evaluating information on various land cover conditions in the thermal wavelength band as well as the visible wavelength bands that observe the surface.
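The abstract describes masking clouds and shadows with the Landsat-8 QA band before gap filling. A minimal sketch of that masking step is given below, assuming Landsat 8 Collection 2 QA_PIXEL bit positions (bit 1 dilated cloud, bit 3 cloud, bit 4 cloud shadow); the file name is hypothetical and the SSG restoration itself is not shown.

```python
import numpy as np
import rasterio  # any GeoTIFF reader would do

# Hypothetical file path; Collection 2 Level-2 products include a QA_PIXEL band.
with rasterio.open("LC08_L2SP_example_QA_PIXEL.TIF") as src:
    qa = src.read(1)

# Assumed bit layout (Landsat 8 Collection 2 QA_PIXEL).
DILATED_BIT, CLOUD_BIT, SHADOW_BIT = 1, 3, 4

cloud   = (qa >> CLOUD_BIT)   & 1
shadow  = (qa >> SHADOW_BIT)  & 1
dilated = (qa >> DILATED_BIT) & 1

# Pixels flagged by any test are treated as missing and handed to the
# gap-filling (SSG-style) step described in the paper.
missing_mask = (cloud | shadow | dilated).astype(bool)
print("flagged fraction:", missing_mask.mean())
```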

Cloud Detection and Restoration of Landsat-8 using STARFM (재난 모니터링을 위한 Landsat 8호 영상의 구름 탐지 및 복원 연구)

  • Lee, Mi Hee;Cheon, Eun Ji;Eo, Yang Dam
    • Korean Journal of Remote Sensing, v.35 no.5_2, pp.861-871, 2019
  • Landsat satellite images have been increasingly used for disaster damage analysis and disaster monitoring because they allow periodic and broad observation of disaster-damaged areas. However, periodic disaster monitoring is limited because, as optical satellite images, they contain areas of missing data due to clouds. Therefore, a study on the restoration of these missing areas is needed. This study detected and removed clouds and cloud shadows by using the quality assessment (QA) band provided with Landsat-8 images, and restored the removed areas through the spatial and temporal adaptive reflectance fusion model (STARFM) algorithm. The image restored by the proposed method was compared with an image restored by a conventional restoration method through maximum likelihood classification (MLC). As a result, the STARFM-based restoration showed an overall accuracy of 89.40%, confirming that it is more effective than the conventional image restoration method. The results of this study are therefore expected to increase the utilization of disaster analysis using Landsat satellite images.
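The abstract names STARFM but gives no equations. The sketch below keeps only the core idea of STARFM-style fusion, namely adding the temporal change observed in a coarse-resolution sensor to a cloud-free fine-resolution reference image; the full model's spectral, temporal, and spatial weighting of neighbouring pixels is omitted, and all array names are assumptions.

```python
import numpy as np

def starfm_like_predict(fine_t1, coarse_t1, coarse_t2):
    """Heavily reduced STARFM-style prediction.

    fine_t1   : fine-resolution image at the reference date (already cloud-free)
    coarse_t1 : coarse-resolution image at the reference date, resampled to the fine grid
    coarse_t2 : coarse-resolution image at the prediction date, resampled to the fine grid
    """
    # Temporal change seen by the coarse sensor is transferred to the fine image.
    return fine_t1 + (coarse_t2 - coarse_t1)

# Hypothetical usage: fill only the cloud-masked pixels of the target scene.
# fine_t2_filled = np.where(missing_mask,
#                           starfm_like_predict(fine_t1, coarse_t1, coarse_t2),
#                           fine_t2)
```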

Generation of Sea Surface Temperature Products Considering Cloud Effects Using NOAA/AVHRR Data in the TeraScan System: Case Study for May Data (TeraScan시스템에서 NOAA/AVHRR 해수면온도 산출시 구름 영향에 따른 신뢰도 부여 기법: 5월 자료 적용)

  • Yang, Sung-Soo;Yang, Chan-Su;Park, Kwang-Soon
    • Journal of the Korean Society for Marine Environment & Energy, v.13 no.3, pp.165-173, 2010
  • A cloud detection method is introduced to improve the reliability of NOAA/AVHRR Sea Surface Temperature (SST) data processed during the daytime and nighttime in the TeraScan system. In the daytime, channels 2 and 4 are used to detect clouds through three tests: spatial uniformity tests of brightness temperature (infrared channel 4) and channel 2 albedo, and a reflectivity threshold test for visible channel 2. The nighttime cloud detection tests are performed using channels 3 and 4, because channel 2 data are not available at night; this process includes a dual-channel brightness temperature difference test (ch3 - ch4) and an infrared channel brightness temperature threshold test. For a comparison of daytime and nighttime SST images, two datasets obtained at 00:28 UTC and 21:00 UTC on May 13, 2009 were used. Six parameters were tested to understand the factors that affect cloud masking in and around the Korean Peninsula. In the daytime, the thresholds for ch2_max cover a range of 3 through 8, while ch4_delta and ch2_delta are fixed at 5 and 2, respectively. At nighttime, the threshold range of ch3_minus_ch4 is from -1 to 0, while ch4_delta and min_ch4_temp are fixed at 3.5 and 0, respectively. The resulting images acceptably represent the reliability of the SST according to the change in cloud-masked area at each level. In the future, the accuracy of the SST will be verified, and an assimilation method for SST data should be tested to improve reliability, considering the atmospheric characteristics of the research area around the Korean Peninsula.
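The abstract lists the day and night threshold tests and their parameter names (ch2_max, ch2_delta, ch4_delta, ch3_minus_ch4, min_ch4_temp). A minimal sketch of those tests follows; the use of a 3×3 max-minus-min window as the "delta" uniformity measure and the direction of each comparison are assumptions, not TeraScan's documented definitions.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def local_range(x, size=3):
    """Max-minus-min range in a size x size window; assumed proxy for the *_delta tests."""
    return maximum_filter(x, size) - minimum_filter(x, size)

def daytime_cloud_mask(ch2_albedo, ch4_bt, ch2_max=8.0, ch2_delta=2.0, ch4_delta=5.0):
    """Daytime tests: ch2 reflectivity threshold plus spatial uniformity of
    ch2 albedo and ch4 brightness temperature (parameter values from the abstract)."""
    return ((ch2_albedo > ch2_max) |
            (local_range(ch2_albedo) > ch2_delta) |
            (local_range(ch4_bt) > ch4_delta))

def nighttime_cloud_mask(ch3_bt, ch4_bt, ch3_minus_ch4=0.0, ch4_delta=3.5, min_ch4_temp=0.0):
    """Nighttime tests: ch3-ch4 brightness temperature difference, ch4 uniformity,
    and a minimum ch4 brightness temperature threshold (deg C)."""
    return (((ch3_bt - ch4_bt) < ch3_minus_ch4) |
            (local_range(ch4_bt) > ch4_delta) |
            (ch4_bt < min_ch4_temp))
```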

Sea Fog Detection Algorithm Using Visible and Near Infrared Bands (가시 밴드와 근적외 밴드를 이용한 해무 탐지 알고리즘)

  • Lee, Kyung-Hun;Kwon, Byung-Hyuk;Yoon, Hong-Joo
    • The Journal of the Korea institute of electronic communication sciences, v.13 no.3, pp.669-676, 2018
  • The Geostationary Ocean Color Imager (GOCI) detects sea fog at a high horizontal resolution of 500 m × 500 m using the Rayleigh-corrected reflectance of 8 bands. Visible and near-infrared wavelengths strongly reflect the characteristics of the Earth's surface, causing errors in cloud and fog detection. A threshold on the Band 7 reflectance was set to detect sea fog advancing over land. Regions where the Band 4 reflectance is larger than that of Band 8 are determined to be cloud, and errors in which cloud is over-estimated as sea fog are corrected by comparing the average reflectance with that of the surrounding region. The improved algorithm was verified by comparison with fog images from the Cheollian satellite (COMS: Communication, Ocean, and Meteorological Satellite) as well as visibility data from the Korea Meteorological Administration.
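The band logic stated in the abstract (a Band 7 reflectance threshold for fog candidates, and Band 4 > Band 8 as a cloud test) can be sketched as below; the numeric threshold value is hypothetical, and the surrounding-region average-reflectance correction mentioned in the abstract is not shown.

```python
import numpy as np

def sea_fog_mask(band4, band7, band8, band7_threshold=0.15):
    """Sketch of the GOCI sea fog band tests described in the abstract.

    band4, band7, band8 : Rayleigh-corrected reflectance arrays
    band7_threshold     : hypothetical value; the paper tunes this threshold
                          so that fog advancing over land is still detected.
    """
    fog_candidate = band7 > band7_threshold   # bright in the near-infrared Band 7
    cloud = band4 > band8                     # Band 4 exceeding Band 8 is treated as cloud
    return fog_candidate & ~cloud
```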

Combining Conditional Generative Adversarial Network and Regression-based Calibration for Cloud Removal of Optical Imagery (광학 영상의 구름 제거를 위한 조건부 생성적 적대 신경망과 회귀 기반 보정의 결합)

  • Kwak, Geun-Ho;Park, Soyeon;Park, No-Wook
    • Korean Journal of Remote Sensing, v.38 no.6_1, pp.1357-1369, 2022
  • Cloud removal is an essential image processing step for any task requiring time-series optical images, such as vegetation monitoring and change detection. This paper presents a two-stage cloud removal method that combines conditional generative adversarial networks (cGANs) with regression-based calibration to construct a cloud-free time-series optical image set. In the first stage, the cGANs generate initial prediction results using quantitative relationships between optical and synthetic aperture radar images. In the second stage, the relationships between the predicted results and the actual values in non-cloud areas are first quantified via random forest-based regression modeling and then used to calibrate the cGAN-based prediction results. The potential of the proposed method was evaluated in a cloud removal experiment using Sentinel-2 and COSMO-SkyMed images over the rice cultivation area of Gimje. The cGAN model could effectively predict reflectance values in the cloud-contaminated rice fields where severe changes in physical surface conditions occurred. Moreover, the regression-based calibration in the second stage improved the prediction accuracy compared with a regression-based cloud removal method using a supplementary image that is temporally distant from the target image. These experimental results indicate that the proposed method can be effectively applied to restore cloud-contaminated areas when cloud-free optical images are unavailable for environmental monitoring.
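The second stage, fitting a random forest regression between cGAN predictions and observations over clear pixels and applying it to cloudy pixels, is the part that can be sketched compactly. A minimal per-band version is given below; array shapes, variable names, and the per-band fitting are assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def calibrate_cgan_prediction(predicted, observed, cloud_mask):
    """Second-stage calibration sketch.

    predicted  : (H, W, B) cGAN-predicted reflectance
    observed   : (H, W, B) actual optical image containing clouds
    cloud_mask : (H, W) True where the observation is cloud-contaminated
    """
    h, w, b = predicted.shape
    X = predicted.reshape(-1, b)
    y = observed.reshape(-1, b)
    clear = ~cloud_mask.ravel()

    calibrated = observed.copy().reshape(-1, b)
    for band in range(b):
        rf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
        rf.fit(X[clear], y[clear, band])                   # relationship in non-cloud areas
        calibrated[~clear, band] = rf.predict(X[~clear])   # calibrate cloudy pixels
    return calibrated.reshape(h, w, b)
```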

Enhancing GEMS Surface Reflectance in Snow-Covered Regions through Combined of GeoKompsat-2A/2B Data (천리안 위성자료 융합을 통한 적설역에서의 GEMS 지표면 반사도 개선 연구)

  • Suyoung Sim;Daeseong Jung;Jongho Woo;Nayeon Kim;Sungwoo Park;Hyunkee Hong;Kyung-Soo Han
    • Korean Journal of Remote Sensing, v.39 no.6_1, pp.1497-1503, 2023
  • To address challenges in classifying clouds and snow cover when calculating surface reflectance in near-ultraviolet (UV) wavelengths, this study introduces a methodology that combines cloud data from the Geostationary Environmental Monitoring Spectrometer (GEMS) and the Advanced Meteorological Imager (AMI) for snow cover analysis. The proposed approach aims to enhance the quality of surface reflectance calculations, and combined cloud data were generated by integrating GEMS cloud data with AMI cloud detection data. When applied to compute GEMS surface reflectance, this fusion approach significantly mitigated underestimation issues compared to using only GEMS cloud data in snow-covered regions, resulting in an approximately 17% improvement across the entire observational area. The findings of this study highlight the potential to address persistent underestimation in snow-covered areas by employing fused cloud data, consequently enhancing the accuracy of other Level-2 products based on the improved surface reflectance.
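The abstract does not specify the rule used to integrate the two cloud products. One plausible reading, shown only as an assumption, is a per-pixel union of the two masks after the AMI cloud detection has been resampled onto the GEMS grid:

```python
import numpy as np

def fuse_cloud_masks(gems_cloud, ami_cloud_on_gems_grid):
    """Assumed fusion rule: a pixel is treated as cloud-contaminated if either
    instrument flags it. Resampling AMI cloud detection onto the GEMS grid is
    assumed to be done beforehand; the paper's actual integration rule may differ."""
    return gems_cloud.astype(bool) | ami_cloud_on_gems_grid.astype(bool)
```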

Cloud-cell Tracking Analysis using Satellite Image of Extreme Heavy Snowfall in the Yeongdong Region (영동지역의 극한 대설에 대한 위성관측으로부터 구름 추적)

  • Cho, Young-Jun;Kwon, Tae-Yong
    • Korean Journal of Remote Sensing, v.30 no.1, pp.83-107, 2014
  • This study presents the spatial characteristics of clouds derived from satellite images for extreme heavy snowfall in the Yeongdong region. Three extreme heavy snowfall events in the Yeongdong region during the recent 12 years (2001-2012), for which the fresh snow cover exceeded 50 cm/day, were selected. The spatial characteristics of the clouds (minimum brightness temperature, Tmin; cloud size; center of the cloud-cell) were analyzed by tracking the main cloud-cell related to these events. These characteristics were compared with radar precipitation in the Yeongdong region to investigate the relationship between cloud and precipitation. The results are summarized as follows. The selected extreme heavy snowfall events are associated with isolated, well-developed, small-scale convective clouds developing over the Yeongdong region or moving from the East Korea Bay toward the Yeongdong region. During the period of main precipitation, the cloud-cell Tmin is low (-40 to -50°C) and the cloud area is small (17,000-40,000 km²). The precipitation area (≥ 0.5 mm/hr) from radar also shows a small and isolated shape (4,000-8,000 km²). The locations of the cloud and precipitation are similar, and their centers are located close to the coast of the Yeongdong region. In all events, the extreme heavy snowfall occurred during the period when a developed cloud-cell was moving into the coastal waters of the Yeongdong region. However, the developing stages of cloud and precipitation were not well matched in one of the three events. The water vapor imagery shows that the cloud-cell developed on the northern edge of the dry (dark) region. Therefore, based on the cloud and precipitation analysis, the selected extreme heavy snowfall events are associated with a small-scale secondary cyclone or vortex, not an explosive polar low. Detecting and tracking small-scale cloud-cells is therefore important for real-time forecasting of extreme heavy snowfall in the Yeongdong region.
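Per-cell statistics of the kind reported here (Tmin, cloud area, cell center) can be extracted from an IR brightness temperature field by labelling cold contiguous regions. A minimal sketch is shown below; the -40°C threshold, the pixel area, and the function names are assumptions, and the frame-to-frame tracking of the main cell is not implemented.

```python
import numpy as np
from scipy import ndimage

def cloud_cell_stats(tb, tb_threshold=-40.0, pixel_area_km2=16.0):
    """Label contiguous cold-cloud regions in an IR brightness temperature
    field (deg C) and report per-cell minimum Tb, area, and center."""
    cold = tb <= tb_threshold
    labels, n_cells = ndimage.label(cold)
    cells = []
    for cell_id in range(1, n_cells + 1):
        mask = labels == cell_id
        cells.append({
            "id": cell_id,
            "tmin": float(tb[mask].min()),
            "area_km2": float(mask.sum() * pixel_area_km2),
            "center": ndimage.center_of_mass(mask),  # row/column centroid of the cell
        })
    return cells
```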

Detection of Forest Fire Damage from Sentinel-1 SAR Data through the Synergistic Use of Principal Component Analysis and K-means Clustering (Sentinel-1 SAR 영상을 이용한 주성분분석 및 K-means Clustering 기반 산불 탐지)

  • Lee, Jaese;Kim, Woohyeok;Im, Jungho;Kwon, Chunguen;Kim, Sungyong
    • Korean Journal of Remote Sensing, v.37 no.5_3, pp.1373-1387, 2021
  • Forest fires pose a significant threat to the environment and society, affecting the carbon cycle and surface energy balance and resulting in socioeconomic losses. Widely used multi-spectral satellite image-based approaches for burned area detection have the problem that they do not work under cloudy conditions. Therefore, in this study, Sentinel-1 Synthetic Aperture Radar (SAR) data from the European Space Agency, which can be collected in all weather conditions, were used to identify forest fire damaged areas through a series of processes including Principal Component Analysis (PCA) and K-means clustering. Four forest fire cases were examined: the fires that occurred in Gangneung·Donghae and Goseong·Sokcho in Gangwon-do, South Korea, and two areas in North Korea on April 4, 2019. The estimated burned areas were evaluated using fire reference data provided by the National Institute of Forest Science (NIFOS) for the two forest fire cases in South Korea, and the differenced normalized burn ratio (dNBR) for all four cases. The average accuracy using the NIFOS reference data was 86% for the Gangneung·Donghae and Goseong·Sokcho fires. Evaluation using dNBR showed an average accuracy of 84% for all four forest fire cases. It was also confirmed that the stronger the burn intensity, the higher the detection accuracy, and vice versa. Given the advantages of SAR remote sensing, the proposed statistical processing and K-means clustering-based approach can be used to quickly identify forest fire damaged areas across the Korean Peninsula, where cloud cover is frequent and small-scale forest fires occur often.
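A compact sketch of the PCA plus K-means idea is given below: stack pre- and post-fire backscatter, compress with PCA, and cluster the pixels. The two-cluster setting, the rule for deciding which cluster is "burned", and the input band choice are assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def burned_area_candidates(pre_vv, pre_vh, post_vv, post_vh,
                           n_components=2, n_clusters=2):
    """PCA + K-means sketch for SAR-based burned area mapping (dB inputs)."""
    h, w = pre_vv.shape
    features = np.stack([pre_vv, pre_vh, post_vv, post_vh], axis=-1).reshape(-1, 4)

    pcs = PCA(n_components=n_components).fit_transform(features)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pcs)

    # Assumption: the cluster with the largest mean VV change after the fire
    # is interpreted as burned (the sign of the change depends on conditions).
    change = np.abs(post_vv - pre_vv).ravel()
    burned_label = max(range(n_clusters), key=lambda k: change[labels == k].mean())
    return (labels == burned_label).reshape(h, w)
```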

Object Detection and 3D Position Estimation based on Stereo Vision (스테레오 영상 기반의 객체 탐지 및 객체의 3차원 위치 추정)

  • Son, Haengseon;Lee, Seonyoung;Min, Kyoungwon;Seo, Seongjin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.10 no.4, pp.318-324, 2017
  • We introduce a stereo camera on an aircraft to detect flying objects and estimate their 3D positions. A saliency map algorithm based on PCT was proposed to detect small objects between clouds, and a stereo matching algorithm was then applied to find the disparity between the left and right cameras. To extract an accurate disparity, the cost aggregation region was treated as a variable region adapted to the detected object; in this paper, the detection result is used as the cost aggregation region. To obtain an even more precise disparity, sub-pixel interpolation is used to extract a floating-point disparity at the sub-pixel level. We also propose a method to estimate the spatial position of an object using the camera parameters. It is expected that this can be applied to image-based object detection and collision avoidance systems for autonomous aircraft in the future.
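The final step, recovering a 3D position from the sub-pixel disparity and the camera parameters, follows the standard rectified-stereo relation Z = f·B/d. A minimal sketch is shown below; the parameter names are generic assumptions, not the paper's calibration.

```python
import numpy as np

def triangulate_from_disparity(u, v, disparity, focal_px, baseline_m, cx, cy):
    """3D position of a detected object from a rectified stereo pair.

    u, v       : pixel coordinates of the object in the left image
    disparity  : sub-pixel disparity between left and right images (pixels)
    focal_px   : focal length in pixels; baseline_m : stereo baseline in metres
    cx, cy     : principal point (pixels)
    """
    z = focal_px * baseline_m / disparity   # depth from the standard stereo relation
    x = (u - cx) * z / focal_px             # lateral offset
    y = (v - cy) * z / focal_px             # vertical offset
    return np.array([x, y, z])
```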

Forest fire detection in Kangwon Province using RADARSAT-1 SAR data (RADARSAT-1 SAR 영상을 이용한 강원도 산불지역 관측)

  • Kim, Sang-Wan
    • Proceedings of the KSRS Conference, 2009.03a, pp.309-313, 2009
  • Forest fire is one of the major disasters occurring worldwide. Effective monitoring of forest fires and of fire-damaged areas provides essential baseline data for minimizing the damaged area and for establishing efficient recovery plans. Although the detection of fire-damaged areas using optical satellite data is widely used, its use is often restricted by smoke from the fire or by cloud cover. In this study, the large forest fires that occurred in the Goseong, Gangneung, Samcheok, and Uljin areas of Gangwon-do in April 2000 were taken as the study area, and the applicability of RADARSAT-1 SAR images acquired between 1998 and 2000 to monitoring fire-damaged areas was investigated. To observe the forest areas damaged by the fires, change detection based on changes in the backscatter of the RADARSAT-1 SAR images was performed. In the fire-damaged areas, the backscatter values of the RADARSAT-1 SAR images acquired after the fires were observed to be higher than those acquired before the fires. The fire-damaged areas observed from the RADARSAT-1 SAR images showed a very high correlation with the fire-damaged areas derived from Landsat-7 ETM data and field survey data.
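The change detection described here, flagging pixels whose SAR backscatter increases after the fire, can be sketched as a simple pre/post difference in dB; the 2 dB threshold below is an assumption for illustration only, not a value from the paper.

```python
import numpy as np

def backscatter_change_mask(pre_db, post_db, increase_threshold_db=2.0):
    """Flag candidate burn scars where post-fire backscatter (dB) exceeds the
    pre-fire value by more than a threshold, following the abstract's
    observation that fire-damaged forest shows higher post-fire backscatter."""
    return (post_db - pre_db) > increase_threshold_db
```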
