• Title/Summary/Keyword: Pixel


Development of Cloud and Shadow Detection Algorithm for Periodic Composite of Sentinel-2A/B Satellite Images (Sentinel-2A/B 위성영상의 주기합성을 위한 구름 및 구름 그림자 탐지 기법 개발)

  • Kim, Sun-Hwa;Eun, Jeong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.989-998
    • /
    • 2021
  • In the utilization of optical satellite imagery, which is greatly affected by clouds, periodic compositing is a useful technique to minimize the influence of clouds. Recently, a technique has been proposed that selects, over a given period, the optimal pixel least affected by cloud and cloud shadow by directly inputting cloud and cloud shadow information during periodic compositing. Accurate extraction of clouds and cloud shadows is essential to derive optimal composite results. Also, for surface targets where spectral information is important, such as crops, the loss of spectral information should be minimized during cloud-free compositing. In this study, two spectral indicators (Haze Optimized Transformation [HOT] and MeanVis) were used to derive a detection technique with low loss of spectral information while maintaining high detection accuracy for clouds and cloud shadows over cabbage fields in the highlands of Gangwon-do. These detection results were compared and analyzed against the cloud and cloud shadow information provided with Sentinel-2A/B. Analyzing data from 2019 to 2021, the Sentinel-2A/B cloud information showed detection accuracy with an F1 value of 0.91, but bright artifacts were falsely detected as clouds. On the other hand, the cloud detection result obtained by applying a threshold (= 0.05) to the HOT showed relatively low detection accuracy (F1 = 0.72), but the loss of spectral information was minimized due to the small number of false positives. In the case of cloud shadows, only minimal shadows were detected in the Sentinel-2A/B additional layer, but when a threshold (= 0.015) was applied to MeanVis, cloud shadows could be detected and distinguished from topographically generated shadows. By inputting the spectral indicator-based cloud and shadow information, stable monthly cloud-free composited vegetation index results were obtained; in the future, high-accuracy Sentinel-2A/B cloud information will be input to the periodic cloud-free compositing for comparison.
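
The thresholding scheme described above can be sketched in a few lines. Note the hedge: the classic form of HOT (blue − 0.5 × red reflectance) is assumed here, and the paper's exact coefficients and MeanVis definition may differ; the 0.05 and 0.015 cutoffs are the values reported in the abstract.

```python
import numpy as np

def hot_index(blue, red):
    """Haze Optimized Transformation; the classic form blue - 0.5*red
    is assumed here -- the paper's exact coefficients may differ."""
    return blue - 0.5 * red

def cloud_shadow_masks(blue, red, mean_vis, hot_thresh=0.05, shadow_thresh=0.015):
    """Threshold-based masks following the abstract's reported cutoffs:
    HOT > 0.05 flags clouds, MeanVis < 0.015 flags cloud shadows."""
    hot = hot_index(blue, red)
    cloud = hot > hot_thresh
    shadow = (mean_vis < shadow_thresh) & ~cloud
    return cloud, shadow

# toy 2x2 reflectance arrays (bright cloud, vegetation, dark shadow, soil)
blue = np.array([[0.30, 0.08], [0.02, 0.10]])
red = np.array([[0.20, 0.10], [0.03, 0.12]])
mean_vis = np.array([[0.25, 0.09], [0.01, 0.11]])
cloud, shadow = cloud_shadow_masks(blue, red, mean_vis)
```

Pixels flagged by either mask would then be excluded when choosing the optimal pixel during periodic compositing.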

Quality Evaluation through Inter-Comparison of Satellite Cloud Detection Products in East Asia (동아시아 지역의 위성 구름탐지 산출물 상호 비교를 통한 품질 평가)

  • Byeon, Yugyeong;Choi, Sungwon;Jin, Donghyun;Seong, Noh-hun;Jung, Daeseong;Sim, Suyoung;Woo, Jongho;Jeon, Uujin;Han, Kyung-soo
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_2
    • /
    • pp.1829-1836
    • /
    • 2021
  • Cloud detection means determining the presence or absence of clouds in each pixel of a satellite image, and it is an important factor affecting the utility and accuracy of the imagery. In this study, among the cloud detection products provided by various agencies, we performed quantitative and qualitative comparative analyses of the differences between the cloud detection data of GK-2A/AMI, Terra/MODIS, and Suomi-NPP/VIIRS. In the quantitative comparison, the Proportion Correct (PC) values were 74.16% for GK-2A & MODIS and 75.39% for GK-2A & VIIRS in January, versus 87.35% and 87.71%, respectively, in April; agreement was higher in April than in January, without much difference between satellites. In the qualitative comparison against RGB images, the April results were confirmed to detect clouds better than the January results, consistent with the quantitative findings. However, where thin clouds or snow cover existed, there were some differences among the satellites' cloud detection results.
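
The Proportion Correct (PC) index used above is simply the agreement rate between two binary cloud masks, counting pixels where both say cloudy or both say clear:

```python
import numpy as np

def proportion_correct(mask_a, mask_b):
    """Proportion Correct (PC): share of pixels where two binary cloud
    masks agree (both cloudy or both clear)."""
    mask_a = np.asarray(mask_a, dtype=bool)
    mask_b = np.asarray(mask_b, dtype=bool)
    return np.mean(mask_a == mask_b)

# toy per-pixel cloud flags from two products (illustrative, not real data)
gk2a = np.array([1, 1, 0, 0, 1, 0, 1, 0], dtype=bool)
modis = np.array([1, 0, 0, 0, 1, 1, 1, 0], dtype=bool)
pc = proportion_correct(gk2a, modis)  # 6 of 8 pixels agree -> 0.75
```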

Detection Ability of Occlusion Object in Deep Learning Algorithm depending on Image Qualities (영상품질별 학습기반 알고리즘 폐색영역 객체 검출 능력 분석)

  • LEE, Jeong-Min;HAM, Geon-Woo;BAE, Kyoung-Ho;PARK, Hong-Ki
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.3
    • /
    • pp.82-98
    • /
    • 2019
  • The importance of spatial information is rising rapidly. In particular, 3D spatial information construction and modeling of real-world objects, as in smart cities and digital twins, has become an important core technology. The constructed 3D spatial information is used in various fields such as land management, landscape analysis, environment, and welfare services. Three-dimensional modeling with imagery gives objects high visibility and realism through texturing. However, texturing inevitably contains occlusion areas caused by physical obstructions such as roadside trees, adjacent objects, vehicles, and banners at the time of image acquisition. Such occlusion areas are a major cause of degraded realism and accuracy in the constructed 3D model. Various studies have been conducted to resolve occlusion areas, and recently deep learning algorithms have been applied to detect and resolve them. Deep learning requires sufficient training data, and the quality of the collected training data directly affects performance and results. Therefore, this study analyzed the ability to detect occlusion areas in images of various qualities, to verify how deep learning performance depends on the quality of the learning data. Images containing occlusion-causing objects were generated at each artificially quantified quality level and applied to the implemented deep learning algorithm. The study found that for brightness-adjusted images the detection ratio dropped to 0.56 for brighter images, and that for pixel-size and artificial-noise adjustments the detection ability decreased rapidly from the middle adjustment level onward. In the F-measure evaluation, the change for noise-adjusted image resolution was the largest, at 0.53 points. The occlusion detection ability by image quality will serve as a valuable criterion for future practical applications of deep learning. By indicating a required level of image acquisition quality, it is expected to contribute much to the practical application of deep learning.
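
The F-measure used in the evaluation above is the usual harmonic mean of precision and recall. A small helper with toy detection counts (not the paper's data):

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F-measure from detection counts: true positives (tp), false
    positives (fp), false negatives (fn). beta=1 gives the balanced F1."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# toy counts for occlusion-object detections on one quality level
score = f_measure(tp=50, fp=30, fn=40)
```

For beta=1 this reduces to the identity F1 = 2·TP / (2·TP + FP + FN), which is a quick sanity check on the arithmetic.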

Suggested Protocol for Efficient Medical Image Information Exchange in Korea: Breast MRI (효율적 의료영상정보교류를 위한 프로토콜 제안: 유방자기공명영상)

  • Park, Ji Hee;Choi, Seon-Hyeong;Kim, Sungjun;Yong, Hwan Seok;Woo, Hyunsik;Jin, Kwang Nam;Jeong, Woo Kyoung;Shin, Na-Young;Choi, Moon Hyung;Jung, Seung Eun
    • Journal of the Korean Society of Radiology
    • /
    • v.79 no.5
    • /
    • pp.254-258
    • /
    • 2018
  • Purpose: To establish an appropriate protocol for breast magnetic resonance imaging (MRI) as part of the image quality standards needed to enhance the effectiveness of medical image information exchange, which in turn supports the construction and activation of clinical information exchange for healthcare informatization. Materials and Methods: The recommended protocols for breast MRI scans were reviewed and a questionnaire was prepared by the responsible researcher. A panel of 9 dedicated breast radiologists was then convened in Korea. The expert panel conducted three rounds of the Delphi process to reach consensus on the breast MRI protocol. Results: The agreed breast MRI protocol recommends a 1.5 Tesla or higher device, images acquired in the prone position using a dedicated breast coil, and inclusion of T2-weighted and pre-contrast T1-weighted images. Contrast-enhanced images are acquired at least twice, including one acquisition at 60-120 seconds and one after 4 minutes. The contrast-enhanced T1-weighted image should have a slice thickness of less than 3 mm, a temporal resolution of less than 120 seconds, and an in-plane pixel resolution of less than 1.5 mm². Conclusion: The Delphi agreement of the domestic breast imaging specialist group has established a recommended protocol for effective breast MRI.
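
The agreed numeric limits can be read as a simple conformance check. The field names below are illustrative only (they are not a standard DICOM mapping), and only the four quantitative limits from the abstract are encoded:

```python
# Recommended limits from the consensus protocol (field names are
# hypothetical identifiers chosen for this sketch).
LIMITS = {
    "field_strength_tesla_min": 1.5,
    "slice_thickness_mm_max": 3.0,
    "temporal_resolution_s_max": 120.0,
    "in_plane_pixel_mm2_max": 1.5,
}

def conforms(scan):
    """Check one contrast-enhanced T1 series against the numeric limits."""
    return (
        scan["field_strength_tesla"] >= LIMITS["field_strength_tesla_min"]
        and scan["slice_thickness_mm"] <= LIMITS["slice_thickness_mm_max"]
        and scan["temporal_resolution_s"] <= LIMITS["temporal_resolution_s_max"]
        and scan["in_plane_pixel_mm2"] <= LIMITS["in_plane_pixel_mm2_max"]
    )

ok = conforms({"field_strength_tesla": 3.0, "slice_thickness_mm": 2.5,
               "temporal_resolution_s": 90.0, "in_plane_pixel_mm2": 1.0})
bad = conforms({"field_strength_tesla": 1.5, "slice_thickness_mm": 4.0,
                "temporal_resolution_s": 90.0, "in_plane_pixel_mm2": 1.0})
```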

A Comparison between Multiple Satellite AOD Products Using AERONET Sun Photometer Observations in South Korea: Case Study of MODIS, VIIRS, Himawari-8, and Sentinel-3 (우리나라에서 AERONET 태양광도계 자료를 이용한 다종위성 AOD 산출물 비교평가: MODIS, VIIRS, Himawari-8, Sentinel-3의 사례연구)

  • Kim, Seoyeon;Jeong, Yemin;Youn, Youjeong;Cho, Subin;Kang, Jonggu;Kim, Geunah;Lee, Yangwon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.543-557
    • /
    • 2021
  • Because aerosols have different spectral characteristics according to the size and composition of the particle and to the satellite sensors, a comparative analysis of aerosol products from various satellite sensors is required. In South Korea, however, a comprehensive study for the comparison of various official satellite AOD (Aerosol Optical Depth) products for a long period is not easily found. In this paper, we aimed to assess the performance of the AOD products from MODIS (Moderate Resolution Imaging Spectroradiometer), VIIRS (Visible Infrared Imaging Radiometer Suite), Himawari-8, and Sentinel-3 by referring to the AERONET (Aerosol Robotic Network) sun photometer observations for the period between January 2015 and December 2019. Seasonal and geographical characteristics of the accuracy of satellite AOD were also analyzed. The MODIS products, which were accumulated for a long time and optimized by the new MAIAC (Multiangle Implementation of Atmospheric Correction) algorithm, showed the best accuracy (CC=0.836) and were followed by the products from VIIRS and Himawari-8. On the other hand, Sentinel-3 AOD did not appear to have a good quality because it was recently launched and not sufficiently optimized yet, according to ESA (European Space Agency). The AOD of MODIS, VIIRS, and Himawari-8 did not show a significant difference in accuracy according to season and to urban vs. non-urban regions, but the mixed pixel problem was partly found in a few coastal regions. Because AOD is an essential component for atmospheric correction, the result of this study can be a reference to the future work for the atmospheric correction for the Korean CAS (Compact Advanced Satellite) series.
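
The validation against AERONET described above boils down to comparing collocated satellite and ground AOD values, typically via the correlation coefficient (the CC reported for MODIS) and RMSE. A minimal sketch with made-up matchup values, not the paper's data:

```python
import numpy as np

def validate_aod(satellite_aod, aeronet_aod):
    """Pearson correlation (CC) and RMSE between collocated satellite AOD
    retrievals and AERONET sun-photometer observations."""
    s = np.asarray(satellite_aod, dtype=float)
    a = np.asarray(aeronet_aod, dtype=float)
    cc = np.corrcoef(s, a)[0, 1]
    rmse = np.sqrt(np.mean((s - a) ** 2))
    return cc, rmse

# toy collocated matchups (satellite retrieval vs. ground truth)
cc, rmse = validate_aod([0.10, 0.25, 0.40, 0.60], [0.12, 0.22, 0.45, 0.55])
```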

Hell Formation and Character of Literary Works of the Late Joseon Dynasty (조선후기 문학작품의 지옥 형상화와 그 성격)

  • Kim, Ki-Jong
    • (The)Study of the Eastern Classic
    • /
    • no.66
    • /
    • pp.129-162
    • /
    • 2017
  • This article examines the depiction of hell and its character in literary works of the late Joseon period. 'Hoeshimgok (回心曲)' divides sinners into men and women, presenting items of virtue to the man and items of vice to the woman; these elements draw on both Buddhist and Confucian ethical norms. Hell-related novels share the feature of emphasizing the ethical norms to be kept in daily life through the causes of hell, though the patterns of punishment and their reasons differ slightly between works. Both 'Hoeshimgok (回心曲)' and these works generally reduce the depiction of hell's punishments relative to the causes of hell. This characteristic shows that late Joseon literary works related to hell mainly aimed to provide or teach ethical virtues centered on the 'Samgangoryun (三綱五倫)' through the sanction of a 'Hell' widely known to the general public. The emphasis on Confucian ethics is not limited to hell-related works of literature: in the nineteenth century, when these works were created and circulated, there was a surge in the publication of books for Confucian indoctrination, didactic gasa, and morality books emphasizing Confucian ethics. This strengthening of Confucian ethical consciousness can be attributed to the crisis of nineteenth-century Joseon society over the social confusion that threatened the existing order. In particular, the creation and circulation of hell-related literary works in the late Joseon period is related to the dissemination and spread of Catholicism. In the end, the depiction of hell in late Joseon literature reflects the crisis of social confusion that Joseon society faced in the nineteenth century, and can be said to have the character of a literary response to the prevalent spread of Catholicism.

A Study on Daytime Transparent Cloud Detection through Machine Learning: Using GK-2A/AMI (기계학습을 통한 주간 반투명 구름탐지 연구: GK-2A/AMI를 이용하여)

  • Byeon, Yugyeong;Jin, Donghyun;Seong, Noh-hun;Woo, Jongho;Jeon, Uujin;Han, Kyung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1181-1189
    • /
    • 2022
  • Clouds are composed of tiny water droplets, ice crystals, or mixtures thereof suspended in the atmosphere, and cover about two-thirds of the Earth's surface. Cloud detection in satellite images is a difficult task of separating cloud and non-cloud areas, because clouds share similar reflectance characteristics with some ground objects or the surface itself. In contrast to thick clouds, which have distinct characteristics, thin transparent clouds show weak contrast against the background in satellite images and appear mixed with the ground surface. To overcome these limitations, this study performed cloud detection focused on transparent clouds using machine learning techniques (Random Forest [RF] and Convolutional Neural Networks [CNN]). As reference data, the Cloud Mask and Cirrus Mask in the MOD35 product of the MOderate Resolution Imaging Spectroradiometer (MODIS) were used, and the pixel ratio of the training data was configured to be about 1:1:1 for cloud, transparent cloud, and clear sky, so that transparent cloud pixels were represented in model training. In the qualitative comparison, both RF and CNN successfully detected various types of clouds, including transparent clouds, and RF+CNN, which mixes the results of the RF and CNN models, performed cloud detection well and was confirmed to improve on the limitations of each individual model. Quantitatively, the overall accuracy (OA) was 92% for RF, 94.11% for CNN, and 94.29% for RF+CNN.
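
The abstract does not state how the RF and CNN outputs are blended, so the rule below is hypothetical: keep labels where the two models agree, and where they disagree, keep any "transparent cloud" vote, since that is the class the study targets. The overall accuracy (OA) metric is computed alongside:

```python
import numpy as np

def combine_masks(rf_label, cnn_label, prefer=2):
    """Blend per-pixel class labels (0=clear, 1=cloud, 2=transparent cloud)
    from an RF and a CNN model. Where they agree, keep the label; where
    they disagree, keep any 'transparent cloud' vote, else keep the CNN's.
    This is a hypothetical rule for illustration only."""
    rf = np.asarray(rf_label)
    cnn = np.asarray(cnn_label)
    disagree = np.where((rf == prefer) | (cnn == prefer), prefer, cnn)
    return np.where(rf == cnn, rf, disagree)

def overall_accuracy(pred, truth):
    """OA: fraction of pixels whose predicted class matches the reference."""
    return np.mean(np.asarray(pred) == np.asarray(truth))

rf = np.array([0, 1, 2, 0, 2])
cnn = np.array([0, 1, 1, 2, 0])
truth = np.array([0, 1, 2, 2, 2])
merged = combine_masks(rf, cnn)
oa = overall_accuracy(merged, truth)
```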

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology
    • /
    • v.55 no.5
    • /
    • pp.551-561
    • /
    • 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and can be applied to investigating buried cultural properties and determining their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom, Gyeongju, South Korea. The main purpose of the feature extraction analyses is to identify the circular features of building remains and the linear features of ancient roads and fences. Feature extraction was implemented with the Canny edge detection and Hough transform algorithms: the Hough transform was applied to the edge image produced by the Canny algorithm to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. For image segmentation, we applied a connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the buried relics are effectively connected and labeled, although multiple labels are often assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation: a vector layer containing pixel values for each segmented polygon was estimated and used to build a training-validation dataset by assigning each polygon either to a class associated with the buried relics or to a background class. With a Random Forest classifier, the polygons of the LSMS segmentation layer could be successfully classified into buried-relic polygons and background polygons. Thus, we propose that these automatic classification methods, applied here to GPR images of buried cultural heritage, can be useful for obtaining consistent analysis results when planning excavation processes.
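
The connected-component labeling step described above can be sketched without OpenCV or OTB. This pure-Python 4-connected flood fill is illustrative only; a real pipeline would use a library routine such as scipy.ndimage.label:

```python
def label_components(mask):
    """4-connected component labeling of a binary grid, as used to group
    GPR anomaly pixels into candidate buried structures. Returns a grid
    of integer labels (0 = background) and the number of components."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and labels[r][c] == 0:
                current += 1
                stack = [(r, c)]  # iterative flood fill from the seed pixel
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and labels[y][x] == 0:
                        labels[y][x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

# toy binary GPR-anomaly mask with two separate structures
grid = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
labels, n = label_components(grid)
```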

Deep Learning Approaches for Accurate Weed Area Assessment in Maize Fields (딥러닝 기반 옥수수 포장의 잡초 면적 평가)

  • Hyeok-jin Bak;Dongwon Kwon;Wan-Gyu Sang;Ho-young Ban;Sungyul Chang;Jae-Kyeong Baek;Yun-Ho Lee;Woo-jin Im;Myung-chul Seo;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.1
    • /
    • pp.17-27
    • /
    • 2023
  • Weeds are one of the factors that reduce crop yield through competition for nutrients and photosynthesis. Quantification of weed density is an important part of making accurate decisions for precision weeding. In this study, we quantified the density of weeds in images of maize fields taken by an unmanned aerial vehicle (UAV). UAV image data were collected in maize fields from May 17 to June 4, 2021, when the maize was in its early growth stage. UAV images were labeled into maize and non-maize pixels and then cropped for use as input data to the semantic segmentation networks of the maize detection model. We trained models to separate maize from background using the deep learning segmentation networks DeepLabV3+, U-Net, LinkNet, and FPN. All four models showed a pixel accuracy of 0.97, and the mIOU scores were 0.76 and 0.74 for DeepLabV3+ and U-Net, higher than the 0.69 of LinkNet and FPN. Weed density was calculated as the difference between the green area classified by ExGR (Excess Green minus Excess Red) and the maize area predicted by the model. The images evaluated for weed density were recombined to quantify and visualize the distribution and density of weeds across wide maize fields. We propose a method to quantify weed density for accurate weeding by effectively separating weeds, maize, and background in UAV images of maize fields.
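
The weed-density rule above (ExGR green area minus the predicted maize area) can be sketched as follows. The ExG/ExR definitions on normalized chromatic coordinates and the > 0 vegetation cutoff are common conventions assumed here, not taken from the paper:

```python
import numpy as np

def exgr_mask(rgb):
    """Vegetation mask via Excess Green minus Excess Red (ExGR):
    ExG = 2g - r - b, ExR = 1.4r - g on normalized chromatic coordinates;
    ExGR > 0 is a common vegetation cutoff (assumed here)."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2 * g - r - b
    exr = 1.4 * r - g
    return (exg - exr) > 0

def weed_ratio(rgb, maize_mask):
    """Weed density: fraction of pixels that are green by ExGR but
    not predicted as maize by the segmentation model."""
    green = exgr_mask(rgb)
    weed = green & ~np.asarray(maize_mask, dtype=bool)
    return weed.mean()

# toy 2x2 image: maize plant, grey soil, weed, reddish soil
img = np.array([[[40, 120, 30], [100, 100, 100]],
                [[50, 130, 40], [120, 60, 40]]])
maize = np.array([[True, False], [False, False]])  # model prediction
ratio = weed_ratio(img, maize)  # 1 weed pixel of 4 -> 0.25
```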

Enhancing the performance of the facial keypoint detection model by improving the quality of low-resolution facial images (저화질 안면 이미지의 화질 개선를 통한 안면 특징점 검출 모델의 성능 향상)

  • KyoungOok Lee;Yejin Lee;Jonghyuk Park
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.171-187
    • /
    • 2023
  • When a person's face is captured by a recording device such as a low-resolution surveillance camera, it is difficult to recognize the face due to low image quality. In situations where face recognition is difficult, problems such as failing to identify a criminal suspect or a missing person may occur. Existing studies on face recognition used refined datasets, so performance could not be measured in varied environments. Therefore, to address poor face recognition performance on low-quality images, this paper proposes a method that first improves the quality of low-quality facial images captured under various conditions to generate high-quality images, and then improves facial keypoint detection performance. To confirm the practical applicability of the proposed architecture, an experiment was conducted on a dataset in which people appear relatively small within the whole image. In addition, by choosing a facial image dataset that includes mask-wearing, the possibility of extension to real-world problems was explored. Measuring keypoint detection performance after image quality improvement, face detection was enhanced by an average of 3.47 times for images without a mask and 9.92 times for images with a mask. The RMSE for facial keypoints decreased by an average of 8.49 times when wearing a mask and 2.02 times when not wearing a mask. Thus, the applicability of the proposed method could be verified: the recognition rate for facial images captured at low quality increases through image quality improvement.
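
The RMSE figure reported above is a standard landmark error: the root mean squared Euclidean distance between predicted and ground-truth keypoint positions. A small sketch with made-up coordinates (not the paper's data):

```python
import numpy as np

def keypoint_rmse(pred, truth):
    """RMSE over facial keypoints: root mean of squared Euclidean
    distances between predicted and ground-truth (x, y) positions."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return np.sqrt(np.mean(np.sum((pred - truth) ** 2, axis=-1)))

# toy landmarks: detections on the raw low-res image vs. the enhanced one
low_res = [[10.0, 12.0], [30.0, 28.0], [52.0, 40.0]]
enhanced = [[10.5, 12.2], [30.1, 28.3], [51.8, 40.1]]
truth = [[10.4, 12.1], [30.0, 28.2], [51.9, 40.0]]
rmse_before = keypoint_rmse(low_res, truth)
rmse_after = keypoint_rmse(enhanced, truth)
```

A drop in this value after quality enhancement is what the abstract summarizes as the 8.49× and 2.02× RMSE reductions.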