• Title/Summary/Keyword: Pixels

Analysis of Skin Color Pigments from Camera RGB Signal Using Skin Pigment Absorption Spectrum (피부색소 흡수 스펙트럼을 이용한 카메라 RGB 신호의 피부색 성분 분석)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.1
    • /
    • pp.41-50
    • /
    • 2022
  • In this paper, a method is proposed to calculate the major components of skin color, such as melanin and hemoglobin, directly from the RGB signal of a camera. Conventionally, these components are obtained by measuring spectral reflectance with dedicated equipment and recombining the values at selected wavelengths; the resulting quantities, such as the melanin index and erythema index, therefore require special equipment such as a spectral reflectance measuring device or a multi-spectral camera. A direct calculation from an ordinary digital camera is difficult, and an indirect method has been proposed that estimates melanin and hemoglobin concentrations using independent component analysis. That method takes a region of an RGB image as input, extracts characteristic vectors for melanin and hemoglobin, and computes concentrations in a manner similar to principal component analysis. Its disadvantages are that per-pixel calculation is difficult, because a group of pixels in a region is used as input, and that the extracted feature vectors, obtained by an optimization procedure, tend to differ on each run. The final output is an image representing the melanin and hemoglobin components, produced by converting back to the RGB coordinate system without using the feature vectors themselves. To overcome these drawbacks, the proposed method computes the melanin and hemoglobin component values in the feature space defined by the feature vectors rather than in the RGB coordinate system, estimates the spectral reflectance of skin color from an ordinary digital camera, and uses that spectral reflectance to calculate the detailed pigment components of skin, including melanin, oxidized hemoglobin, deoxidized hemoglobin, and carotenoid. The proposed method requires no special equipment such as a spectral reflectance measuring device or a multi-spectral camera; unlike the existing method, it supports direct per-pixel calculation and produces identical results on repeated runs. The standard deviation of the estimated melanin and hemoglobin densities was 15% of that of the conventional method, i.e., about six times more stable.
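The pixel-wise idea behind this kind of pigment estimation can be illustrated with a small sketch (a generic optical-density unmixing demo, not the authors' algorithm): in -log RGB space, skin color is modeled as a linear mix of pigment absorbance directions, and least-squares unmixing against assumed melanin and hemoglobin vectors yields per-pixel densities. The two direction vectors below are illustrative placeholders, not measured spectra.

```python
import numpy as np

# Hypothetical absorbance-direction vectors for melanin and hemoglobin in
# optical-density (-log RGB) space; real vectors come from measured spectra.
MELANIN = np.array([0.74, 0.57, 0.36])
HEMOGLOBIN = np.array([0.42, 0.75, 0.51])

def pigment_densities(rgb):
    """Per-pixel melanin/hemoglobin densities from an RGB image in [0, 1].

    rgb: (H, W, 3) array. Returns two (H, W) arrays.
    """
    od = -np.log(np.clip(rgb, 1e-6, 1.0))        # optical density per channel
    basis = np.stack([MELANIN, HEMOGLOBIN], 1)   # (3, 2) mixing matrix
    # Least-squares unmixing: od ≈ basis @ [melanin, hemoglobin]
    coeffs, *_ = np.linalg.lstsq(basis, od.reshape(-1, 3).T, rcond=None)
    mel, hem = coeffs.reshape(2, *rgb.shape[:2])
    return mel, hem
```

Because the unmixing is a closed-form least-squares solve per pixel, it is deterministic across runs, which is the property the proposed method emphasizes over the optimization-based ICA extraction.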

A Study on Daytime Transparent Cloud Detection through Machine Learning: Using GK-2A/AMI (기계학습을 통한 주간 반투명 구름탐지 연구: GK-2A/AMI를 이용하여)

  • Byeon, Yugyeong;Jin, Donghyun;Seong, Noh-hun;Woo, Jongho;Jeon, Uujin;Han, Kyung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1181-1189
    • /
    • 2022
  • Clouds are composed of tiny water droplets, ice crystals, or mixtures suspended in the atmosphere and cover about two-thirds of the Earth's surface. Cloud detection in satellite images is a very difficult task: separating cloud from non-cloud areas is hard because clouds share similar reflectance characteristics with some ground objects and surfaces. In contrast to thick clouds, which have distinct characteristics, thin transparent clouds show weak contrast against the background in satellite images and appear mixed with the ground surface. To overcome this limitation, this study conducted cloud detection focusing on transparent clouds using machine learning techniques (Random Forest [RF] and Convolutional Neural Networks [CNN]). As reference data, the Cloud Mask and Cirrus Mask from MOD35 data provided by the MOderate Resolution Imaging Spectroradiometer (MODIS) were used, and the pixel ratio of the training data was configured to be about 1:1:1 for cloud, transparent cloud, and clear sky to account for transparent cloud pixels during model training. In the qualitative comparison, both RF and CNN successfully detected various types of clouds, including transparent clouds, and RF+CNN, which mixes the results of the RF and CNN models, performed cloud detection well, confirming that the limitations of the individual models were mitigated. Quantitatively, the overall accuracy (OA) of RF was 92%, CNN showed 94.11%, and RF+CNN showed 94.29%.
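The roughly 1:1:1 balancing of cloud, transparent-cloud, and clear-sky training pixels described above can be sketched as follows (a generic subsampling helper, not the authors' exact pipeline):

```python
import numpy as np

def balance_classes(features, labels, rng=None):
    """Subsample pixels so each class (e.g. cloud / transparent cloud /
    clear sky) appears in equal numbers, approximating a 1:1:1 split."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(labels, return_counts=True)
    n = counts.min()                       # size of the rarest class
    idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=n, replace=False)
        for c in classes
    ])
    rng.shuffle(idx)                       # mix classes before training
    return features[idx], labels[idx]
```

The balanced arrays can then be fed to any classifier (an RF or a CNN on patches); balancing matters here because transparent-cloud pixels are much rarer than clear-sky ones.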

Effects of Ginsenoside Rb1 Loaded Films on Oral Wound Healing (Ginsenoside Rb1함유 필름의 구강 내 창상 회복 촉진 효과)

  • Jeong Hyun, Lee;Seung Hwan, Park;Asiri Naif, Mohammed;Myoung-Han, Lee;Dong-Keon, Kweon;Yongkwon, Chae;Koeun, Lee;Misun, Kim;Hyoseol, Lee;Sungchul, Choi;Ok Hyung, Nam
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.49 no.3
    • /
    • pp.300-309
    • /
    • 2022
  • This study aimed to evaluate the effects of two ginsenoside Rb1 (G-Rb1) loaded films on oral wound healing. Two types of G-Rb1 films were developed: a G-Rb1 loaded carboxymethyl cellulose (GCMC) film and a G-Rb1 loaded hyaluronic acid (GHA) film. A total of 36 Sprague-Dawley rats were divided into 3 groups: control, GCMC, and GHA. After wound formation on the midpalate, the control group was left untreated, whereas the experimental groups had films attached. The specimens were analyzed clinically and histologically after 7 and 21 days. For clinical analysis, the area of incompletely re-epithelialized wound was measured. For histological analysis, the distance between the margins of the wound (soft tissue gap) was measured and the percentage of the collagen-stained area on the specimen was calculated. In the clinical and soft tissue gap analyses, the GCMC group presented improved healing compared to the GHA group and the control at day 7 (p < 0.05). Both the GCMC (9.74 ± 10.12%) and GHA (19.50 ± 14.47%) groups presented greater percentages of collagen-positive pixels than the control (0.89 ± 1.60%) at day 7 (p < 0.05). However, there were no differences in these parameters among the groups on day 21. Therefore, G-Rb1 loaded films improved oral wound healing.

Sea Fog Level Estimation based on Maritime Digital Image for Protection of Aids to Navigation (항로표지 보호를 위한 디지털 영상기반 해무 강도 측정 알고리즘)

  • Ryu, Eun-Ji;Lee, Hyo-Chan;Cho, Sung-Yoon;Kwon, Ki-Won;Im, Tae-Ho
    • Journal of Internet Computing and Services
    • /
    • v.22 no.6
    • /
    • pp.25-32
    • /
    • 2021
  • In line with future changes in the marine environment, Aids to Navigation have been used in various fields, and their use is increasing. The term "Aids to Navigation" means an aid to navigation prescribed by Ordinance of the Ministry of Oceans and Fisheries, which shows navigating ships the position and direction of ships, the position of obstacles, etc., through lights, shapes, colors, sound, radio waves, etc. Aids to Navigation are now also being transformed into a means of identifying and recording the marine weather environment by mounting various sensors and cameras. However, Aids to Navigation are frequently lost through collisions with ships, and in particular, safety accidents occur because of poor observation visibility due to sea fog. The inflow of sea fog poses risks to ports and sea transportation, and sea fog is not easy to predict because the possibility of occurrence differs greatly by time and region. In addition, Aids to Navigation are difficult to manage individually because they are distributed throughout the sea. To solve this problem, this paper aims to identify the marine weather environment by approximately estimating the sea fog level from images taken by cameras mounted on Aids to Navigation, and thereby to reduce safety accidents caused by weather. Instead of optical and temperature sensors, which are difficult to install and expensive, the sea fog level is measured from ordinary images captured by those cameras. Furthermore, as a prior study toward real-time sea fog level estimation in various seas, sea fog level criteria are presented using the haze imaging model and Dark Channel Prior (DCP). A specific threshold value is set on the image through DCP, and based on this, the number of fog-free pixels in the entire image is counted to estimate the sea fog level. Experimental results demonstrate the feasibility of estimating the sea fog level on both synthetic and real haze image datasets.
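The Dark Channel Prior step described above — compute the dark channel, threshold it, and count fog-free pixels — can be sketched minimally as below. The patch size and threshold are assumed values for illustration, not the ones used in the paper.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel dark channel: minimum over RGB, then minimum over a
    local patch. Fog/haze raises the dark channel toward 1."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def fog_level(img, threshold=0.6):
    """Rough sea-fog level in [0, 1]: one minus the fraction of fog-free
    pixels (dark channel below an assumed threshold)."""
    dc = dark_channel(img)
    fog_free = (dc < threshold).mean()
    return float(1.0 - fog_free)
```

A production version would vectorize the patch minimum (e.g. with a min-filter), but the logic is the same: the fewer fog-free pixels remain, the higher the estimated fog level.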

Deep Learning Approaches for Accurate Weed Area Assessment in Maize Fields (딥러닝 기반 옥수수 포장의 잡초 면적 평가)

  • Hyeok-jin Bak;Dongwon Kwon;Wan-Gyu Sang;Ho-young Ban;Sungyul Chang;Jae-Kyeong Baek;Yun-Ho Lee;Woo-jin Im;Myung-chul Seo;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.1
    • /
    • pp.17-27
    • /
    • 2023
  • Weeds are one of the factors that reduce crop yield through competition for nutrients and photosynthesis. Quantification of weed density is an important part of making accurate decisions for precision weeding. In this study, we quantified the density of weeds in images of maize fields taken by an unmanned aerial vehicle (UAV). UAV image data were collected in maize fields from May 17 to June 4, 2021, when the maize was in its early growth stage. UAV images were labeled into maize and non-maize pixels and then cropped for use as input data for the semantic segmentation network of the maize detection model. We trained models to separate maize from background using the deep learning segmentation networks DeepLabV3+, U-Net, LinkNet, and FPN. All four models showed a pixel accuracy of 0.97, and the mIoU scores were 0.76 and 0.74 for DeepLabV3+ and U-Net, higher than the 0.69 of LinkNet and FPN. Weed density was calculated as the difference between the green area classified by ExGR (Excess Green minus Excess Red) and the maize area predicted by the model. Each image evaluated for weed density was recombined to quantify and visualize the distribution and density of weeds across wide maize fields. We propose a method to quantify weed density for accurate weeding by effectively separating weeds, maize, and background in UAV images of maize fields.
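The ExGR step above can be sketched directly from the standard index definitions (ExG = 2g − r − b, ExR = 1.4r − g on sum-normalized channels); the zero threshold and the maize mask shape are assumptions for illustration.

```python
import numpy as np

def exgr_mask(rgb, thresh=0.0):
    """Vegetation mask from Excess Green minus Excess Red (ExGR).

    rgb: (H, W, 3) float image. Channels are normalized by their sum
    before computing ExG = 2g - r - b and ExR = 1.4r - g.
    """
    s = rgb.sum(axis=2, keepdims=True)
    r, g, b = np.moveaxis(rgb / np.maximum(s, 1e-6), 2, 0)
    exgr = (2 * g - r - b) - (1.4 * r - g)
    return exgr > thresh

def weed_ratio(rgb, maize_mask):
    """Weed density: fraction of pixels that are green (ExGR) but were
    not predicted as maize by the segmentation model."""
    weed = exgr_mask(rgb) & ~maize_mask
    return float(weed.mean())
```

Here `maize_mask` stands in for the boolean output of any of the trained segmentation networks.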

Rainfall image DB construction for rainfall intensity estimation from CCTV videos: focusing on experimental data in a climatic environment chamber (CCTV 영상 기반 강우강도 산정을 위한 실환경 실험 자료 중심 적정 강우 이미지 DB 구축 방법론 개발)

  • Byun, Jongyun;Jun, Changhyun;Kim, Hyeon-Joon;Lee, Jae Joon;Park, Hunil;Lee, Jinwook
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.6
    • /
    • pp.403-417
    • /
    • 2023
  • In this research, a methodology was developed for constructing an appropriate rainfall image database for estimating rainfall intensity from CCTV video. The database was constructed in the Large-Scale Climate Environment Chamber of the Korea Conformity Laboratories, which can control variables that show high irregularity and variability in real environments. A total of 1,728 scenarios were designed under five experimental conditions, from which 36 scenarios comprising 97,200 frames were selected. Rain streaks were extracted with the k-nearest neighbor algorithm by calculating the difference between each image and the background. To prevent overfitting, only data whose pixel values exceeded a set threshold relative to the average pixel value of each image were selected. The area with maximum pixel variability was found by shifting a window every 10 pixels and was set as a representative area (180×180) of the original image. After resizing to 120×120 as input data for a convolutional neural network model, image augmentation was performed under unified shooting conditions. 92% of the data fell within a 10% absolute range of PBIAS. The final results of this study have clear potential to enhance the accuracy and efficacy of existing real-world CCTV systems through transfer learning.
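The sliding-window search for the most variable region can be sketched as follows; the 180×180 window and 10-pixel stride match the description above, though any values can be passed in.

```python
import numpy as np

def max_variance_window(img, win=180, stride=10):
    """Return the top-left (i, j) of the win×win region with the highest
    pixel variance, scanning the grayscale image with the given stride.
    Used to pick a representative crop containing the most rain streaks."""
    h, w = img.shape
    best, best_ij = -1.0, (0, 0)
    for i in range(0, h - win + 1, stride):
        for j in range(0, w - win + 1, stride):
            v = img[i:i + win, j:j + win].var()
            if v > best:
                best, best_ij = v, (i, j)
    return best_ij
```

The selected crop would then be resized to the 120×120 network input size in a separate step.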

Visible and SWIR Satellite Image Fusion Using Multi-Resolution Transform Method Based on Haze-Guided Weight Map (Haze-Guided Weight Map 기반 다중해상도 변환 기법을 활용한 가시광 및 SWIR 위성영상 융합)

  • Taehong Kwak;Yongil Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.3
    • /
    • pp.283-295
    • /
    • 2023
  • With the development of sensor and satellite technology, numerous high-resolution, multi-spectral satellite images have become available. Due to their wavelength-dependent reflection, transmission, and scattering characteristics, multi-spectral satellite images can provide complementary information for earth observation. In particular, the short-wave infrared (SWIR) band can penetrate certain types of atmospheric aerosols owing to its reduced Rayleigh scattering, which allows a clearer view and more detailed information to be captured from haze-covered surfaces than the visible band. In this study, we proposed a multi-resolution transform-based image fusion method to combine visible and SWIR satellite images. The purpose of the fusion method is to generate a single integrated image that incorporates complementary information: detailed background information from the visible band and land cover information in the haze region from the SWIR band. To this end, the study applied the Laplacian pyramid-based multi-resolution transform method, a representative image decomposition approach for image fusion. Additionally, we modified the multi-resolution fusion method with a haze-guided weight map, based on the prior knowledge that SWIR bands contain more information in pixels from the haze region. The proposed method was validated using very high-resolution satellite images from WorldView-3 containing multi-spectral visible and SWIR bands. Experimental data including haze-affected areas with visibility limited by wildfire smoke were used to validate the penetration properties of the proposed fusion method. Both quantitative and visual evaluations were conducted using image quality assessment indices. The results showed that the bright features from the SWIR bands in the haze-affected areas were successfully fused into the integrated feature maps without any loss of detailed information from the visible bands.
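A one-level version of weight-map-guided Laplacian fusion can be sketched as below (the paper's pyramid recurses over several levels). The haze-guided weight map — 1 where haze is detected, favoring SWIR — is assumed to be supplied externally, and box-average downsampling stands in for the usual Gaussian kernel.

```python
import numpy as np

def downsample(img):
    """2x downsample by averaging 2x2 blocks (Gaussian-blur stand-in)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour 2x upsample back to the original shape."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_fuse(visible, swir, haze_weight):
    """One-level Laplacian fusion: detail layers (image minus its blurred
    version) and base layers are each blended by the haze-guided weight."""
    shape = visible.shape
    detail = ((1 - haze_weight) * (visible - upsample(downsample(visible), shape))
              + haze_weight * (swir - upsample(downsample(swir), shape)))
    w_lo = downsample(haze_weight)
    base = upsample((1 - w_lo) * downsample(visible) + w_lo * downsample(swir), shape)
    return base + detail
```

With a weight of 0 everywhere the output reduces to the visible image, and with 1 everywhere to the SWIR image; intermediate maps blend the two per pixel.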

Estimation for Ground Air Temperature Using GEO-KOMPSAT-2A and Deep Neural Network (심층신경망과 천리안위성 2A호를 활용한 지상기온 추정에 관한 연구)

  • Taeyoon Eom;Kwangnyun Kim;Yonghan Jo;Keunyong Song;Yunjeong Lee;Yun Gon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.2
    • /
    • pp.207-221
    • /
    • 2023
  • This study suggests deep neural network models for estimating air temperature from Level 1B (L1B) datasets of GEO-KOMPSAT-2A (GK-2A). The temperature at 1.5 m above the ground affects not only daily life but also weather warnings such as cold and heat waves. Many studies estimate the air temperature from the land surface temperature (LST) retrieved from satellites, because the air temperature has a strong relationship with the LST. However, the LST algorithm, a Level 2 output of GK-2A, works only on clear-sky pixels. To overcome cloud effects, we apply a deep neural network (DNN) model that estimates the air temperature from L1B data, radiometrically and geometrically calibrated from raw satellite data, and compare it with a linear regression model between LST and air temperature. The root mean square error (RMSE) of the estimated air temperature is used to evaluate the models. In-situ air temperature data from 95 stations totaled 2,496,634 records, of which 42.1% could be paired with LST and 98.4% with L1B. Data from 2020 and 2021 were used for training and data from 2022 for validation. The DNN model is designed with an input layer taking 16 channels and four hidden fully connected layers. Using the 16 bands of L1B, the DNN achieved an RMSE of 2.22℃, outperforming the baseline model's RMSE of 3.55℃ under clear-sky conditions, and the total RMSE including overcast samples was 3.33℃. This suggests that the DNN is able to overcome cloud effects. However, it showed different characteristics in the seasonal and hourly analyses: the summer and winter seasons showed low coefficients of determination with high standard deviations, so solar information needs to be appended to the inputs to build a general DNN model.
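The described architecture — a 16-channel input and four hidden fully connected layers regressing a single temperature — can be sketched as a plain NumPy MLP. The hidden-layer widths and ReLU activation here are assumptions; the abstract specifies only the input size and depth.

```python
import numpy as np

def init_mlp(sizes, rng=0):
    """He-initialized weights for an MLP. sizes lists layer widths, e.g.
    16 L1B channels in, four hidden layers, one output (air temperature)."""
    rng = np.random.default_rng(rng)
    return [(rng.standard_normal((a, b)) * np.sqrt(2 / a), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """ReLU MLP regression: x is (N, 16), returns (N,) temperatures."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)       # hidden layers with ReLU
    W, b = params[-1]
    return (x @ W + b).ravel()               # linear output layer

# 16 input channels, four hidden layers (widths are assumptions), 1 output
params = init_mlp([16, 64, 64, 32, 16, 1])
```

Training (e.g. minimizing RMSE against the in-situ temperatures with gradient descent) is omitted; the sketch only fixes the shape of the model the abstract describes.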

Development of Cloud Detection Method Considering Radiometric Characteristics of Satellite Imagery (위성영상의 방사적 특성을 고려한 구름 탐지 방법 개발)

  • Won-Woo Seo;Hongki Kang;Wansang Yoon;Pyung-Chae Lim;Sooahm Rhee;Taejung Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1211-1224
    • /
    • 2023
  • Clouds cause many difficult problems in observing land surface phenomena with optical satellites, such as national land observation, disaster response, and change detection. In addition, the presence of clouds affects not only the image processing stage but also the final data quality, so it is necessary to identify and remove them. Therefore, in this study, we developed a new cloud detection technique that automatically performs a series of processes: searching for and extracting the pixels closest to the spectral pattern of clouds in a satellite image, selecting the optimal threshold, and producing a cloud mask based on that threshold. The technique consists of three main steps. In the first step, the Digital Number (DN) image is converted into top-of-atmosphere reflectance units. In the second step, preprocessing such as Hue-Saturation-Value (HSV) transformation, triangle thresholding, and maximum likelihood classification is applied to the top-of-atmosphere reflectance image, and the threshold for generating the initial cloud mask is determined for each image. In the third, post-processing step, noise in the initial cloud mask is removed and the cloud boundaries and interior are refined. As experimental data for cloud detection, CAS500-1 L2G images acquired over the Korean Peninsula from April to November, which show the diversity of the spatial and seasonal distribution of clouds, were used. To verify the performance of the proposed method, the results were compared against those generated by a simple thresholding method. Compared to the existing method, the proposed method detected clouds more accurately by considering the radiometric characteristics of each image through the preprocessing step. The results also showed that the influence of bright objects other than clouds (panel roofs, concrete roads, sand, etc.) was minimized.
The proposed method showed an improvement of more than 30% in F1-score over the existing method, but showed limitations in certain images containing snow.
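Triangle thresholding, one of the preprocessing steps named above, picks the histogram bin farthest from the line joining the histogram peak to its far tail; a minimal sketch over a 1-D intensity histogram:

```python
import numpy as np

def triangle_threshold(hist):
    """Triangle thresholding: draw a line from the histogram peak to the
    farther nonzero tail and return the bin with the maximum
    perpendicular distance to that line."""
    peak = int(np.argmax(hist))
    nz = np.flatnonzero(hist)
    tail = nz[-1] if (nz[-1] - peak) >= (peak - nz[0]) else nz[0]
    x = np.arange(min(peak, tail), max(peak, tail) + 1)
    y = hist[x].astype(float)
    x0, y0, x1, y1 = peak, float(hist[peak]), tail, float(hist[tail])
    # Unnormalized point-to-line distance for each (x, hist[x])
    d = np.abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0)
    return int(x[np.argmax(d)])
```

The method works well for strongly skewed histograms, which is why it suits images dominated by either cloud or surface pixels.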

Detection of Wildfire Burned Areas in California Using Deep Learning and Landsat 8 Images (딥러닝과 Landsat 8 영상을 이용한 캘리포니아 산불 피해지 탐지)

  • Youngmin Seo;Youjeong Youn;Seoyeon Kim;Jonggu Kang;Yemin Jeong;Soyeon Choi;Yungyo Im;Yangwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1413-1425
    • /
    • 2023
  • The increasing frequency of wildfires due to climate change is causing extreme loss of life and property. Wildfires cause loss of vegetation and drive ecosystem changes depending on their intensity and occurrence. Ecosystem changes, in turn, affect wildfire occurrence, causing secondary damage. Thus, accurate estimation of the areas affected by wildfires is fundamental. Satellite remote sensing is used for forest fire detection because it can rapidly acquire topographic and meteorological information about the affected area after forest fires. In addition, deep learning algorithms such as convolutional neural networks (CNN) and transformer models show high performance for more accurate monitoring of fire-burnt regions. To date, the application of deep learning models has been limited, and there is a scarcity of reports providing quantitative performance evaluations for practical field utilization. Hence, this study emphasizes a comparative analysis, exploring performance enhancements achieved through both model selection and data design. This study examined deep learning models for detecting wildfire-damaged areas using Landsat 8 satellite images in California. We also conducted a comprehensive comparison and analysis of the detection performance of multiple models, such as U-Net and High-Resolution Network-Object Contextual Representation (HRNet-OCR). Wildfire-related spectral indices such as the normalized difference vegetation index (NDVI) and normalized burn ratio (NBR) were used as input channels for the deep learning models to reflect the degree of vegetation cover and surface moisture content. As a result, the mean intersection over union (mIoU) was 0.831 for U-Net and 0.848 for HRNet-OCR, showing high segmentation performance. The inclusion of spectral indices alongside the base wavelength bands increased the metric values for all combinations, affirming that augmenting the input data with spectral indices contributes to more refined pixel-level classification.
This study can be applied to other satellite images to build a recovery strategy for fire-burnt areas.
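The NDVI and NBR input channels can be sketched directly from their standard definitions; the Landsat 8 band assignments used below (B4 red, B5 NIR, B7 SWIR2) are the usual convention rather than something stated in the abstract.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, typically from Landsat 8
    B5 (NIR) and B4 (red); high for healthy vegetation."""
    return (nir - red) / np.maximum(nir + red, 1e-6)

def nbr(nir, swir2):
    """Normalized Burn Ratio, typically from B5 (NIR) and B7 (SWIR2);
    drops sharply over freshly burnt areas."""
    return (nir - swir2) / np.maximum(nir + swir2, 1e-6)

def stack_inputs(bands, nir, red, swir2):
    """Append NDVI and NBR channels to the base band stack, mirroring the
    augmented input design described in the study."""
    return np.concatenate(
        [bands, ndvi(nir, red)[..., None], nbr(nir, swir2)[..., None]], axis=-1)
```

The stacked array can then be fed to a segmentation model such as U-Net in place of the raw bands alone.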