• Title/Summary/Keyword: RGB image

Search results: 821

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.9 no.12 / pp.431-438 / 2020
  • In this paper, a method to extract the lip contour using multiple thresholds is proposed. Spyridonos et al. proposed a method to extract the lip contour. The first step is to obtain the Q image from the RGB-to-YIQ transform. The second step is to find the lip corner points by change-point detection and to split the Q image into upper and lower parts at the corner points. Candidate lip contours are obtained by applying thresholds to the Q image. For each candidate contour, a feature variance is calculated, and the contour with the maximum variance is adopted as the final contour. The feature variance 'D' is based on the absolute differences near the contour points. The conventional method has three problems. The first is related to the lip corner points: the variance calculation depends on many skin pixels, so accuracy decreases, which in turn affects the split of the Q image. Second, there is no analysis of color systems other than YIQ; YIQ works well, but other color systems such as HSV, CIELUV, and YCrCb should also be considered. The final problem is related to the selection of the optimal contour: the selection uses the maximum of the average feature variance over the pixels near the contour points, and this maximum-of-average criterion shrinks the extracted contour compared with the ground-truth contour. To solve the first problem, the proposed method excludes some of the skin pixels, yielding a 30% performance increase. For the second problem, the HSV, CIELUV, and YCrCb coordinate systems were tested, and no dependency of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted rather than the maximum of the average feature variance, yielding a 46% performance increase. Combining all of these solutions, the proposed method is about twice as accurate and stable as the conventional method.
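As a rough illustration of the pipeline described in this abstract, the sketch below assumes a cropped lip region given as a float RGB array in [0, 1], converts it to the YIQ Q channel, sweeps candidate thresholds, and keeps the candidate whose contour maximizes a total (not averaged) score, echoing the proposed selection rule. The gradient-based score, helper names, and threshold range are illustrative stand-ins rather than the paper's exact definitions.

```python
import numpy as np

def rgb_to_q(rgb):
    """Q channel of the NTSC RGB-to-YIQ transform."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.211 * r - 0.523 * g + 0.312 * b

def contour_score(q, mask):
    """Total absolute Q difference near the candidate contour
    (a stand-in for the feature 'D'; the paper's exact definition may differ)."""
    gy, gx = np.gradient(q)
    grad = np.abs(gx) + np.abs(gy)
    boundary = (mask ^ np.roll(mask, 1, axis=0)) | (mask ^ np.roll(mask, 1, axis=1))
    return grad[boundary].sum()          # total sum rather than the mean, per the proposed rule

def best_lip_mask(rgb, thresholds=np.linspace(0.0, 0.3, 31)):
    """Sweep thresholds over the Q image and keep the candidate with the best total score."""
    q = rgb_to_q(rgb)
    candidates = [(contour_score(q, q > t), q > t) for t in thresholds]
    return max(candidates, key=lambda c: c[0])[1]

# toy usage on a random "image"; a real cropped lip region in [0, 1] RGB is assumed
mask = best_lip_mask(np.random.default_rng(0).random((64, 96, 3)))
print(mask.shape, int(mask.sum()))
```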

An Analysis on the Episodes of Large-scale Transport of Natural Airborne Particles and Anthropogenically Affected Particles from Different Sources in the East Asian Continent in 2008 (2008년 동아시아 대륙으로부터 기원이 다른 먼지와 인위적 오염 입자의 광역적 이동 사례에 대한 분석)

  • Kim, Hak-Sung;Yoon, Ma-Byong;Sohn, Jung-Joo
    • Journal of the Korean earth science society / v.31 no.6 / pp.600-607 / 2010
  • In 2008, multiple episodes of large-scale transport of natural airborne particles and anthropogenically affected particles from different sources in the East Asian continent were identified in National Oceanic and Atmospheric Administration (NOAA) satellite RGB-composite images and in the mass concentrations of ground-level particulate matter. To analyze the aerosol size distribution during the large-scale transport of atmospheric aerosols, both the aerosol optical depth (AOD; proportional to the total aerosol loading in the vertical column) and the fine aerosol weighting (FW; fractional contribution of fine aerosol to the total AOD) from Moderate Resolution Imaging Spectroradiometer (MODIS) aerosol products were used over the East Asian region. Six episodes of massive natural airborne particles, originating from sandstorms in northern China, Mongolia, and the Loess Plateau of China, were observed at Cheongwon. $PM_{10}$ and $PM_{2.5}$ stood at 70% and 16% of the total mass concentration of TSP, respectively. However, the mass concentration of $PM_{2.5}$ within TSP increased to as high as 23% in the episode in which the particles flowed in by way of the industrial area in east China. In the other five episodes of anthropogenically affected particles that flowed into the Korean Peninsula from east China, the mass concentrations of $PM_{10}$ and $PM_{2.5}$ within TSP reached 82% and 65%, respectively. The average AOD for the large-scale transport of anthropogenically affected particle episodes in the East Asian region was measured at $0.42 \pm 0.17$, compared with an AOD of $0.36 \pm 0.13$ for the natural airborne particle episodes. In particular, the regions covering east China, the Yellow Sea, the Korean Peninsula, and the East Sea of Korea were characterized by high levels of AOD. The average FW values observed during the anthropogenically affected aerosol episodes ($0.63 \pm 0.16$) were moderately higher than those of the natural airborne particle episodes ($0.52 \pm 0.13$). This observation suggests that anthropogenically affected particles contribute greatly to the atmospheric aerosols in East Asia.
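FW is defined above as the fractional contribution of fine-mode aerosol to the total AOD, so the reported episode means imply the fine-mode AOD via FW × AOD. The toy arithmetic below only illustrates that relationship; it adds no data beyond the means quoted in the abstract.

```python
# FW is the fine-mode fraction of the total AOD, so AOD_fine = FW * AOD_total.
aod_total_dust, fw_dust = 0.36, 0.52          # natural (dust) episode means from the abstract
aod_total_anthro, fw_anthro = 0.42, 0.63      # anthropogenically affected episode means

aod_fine_dust = fw_dust * aod_total_dust          # about 0.19
aod_fine_anthro = fw_anthro * aod_total_anthro    # about 0.26
print(round(aod_fine_dust, 2), round(aod_fine_anthro, 2))
```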

Digitization of Adjectives that Describe Facial Complexion to Evaluate Various Expressions of Skin Tone in Korean (피부색을 표현하는 형용사들의 수치화를 통한 안색 평가법 연구)

  • Lee, Sun Hwa;Lee, Jung Ah;Park, Sun Mi;Kim, Younghee;Jang, Yoon Jung;Kim, Bora;Kim, Nam Soo;Moon, Tae Kee
    • Journal of the Society of Cosmetic Scientists of Korea / v.43 no.4 / pp.349-355 / 2017
  • Skin tone is one of the key determinants of facial attractiveness. Most female customers are interested in choosing skin colors and improving their skin tone, and these needs have contributed to the expansion of cosmetic products in the market. Recently, cosmetic customers who want bright skin have also become interested in healthy and lively-looking skin. However, there has been no method to evaluate skin tone with complexion-describing adjectives (CDAs). Therefore, this study was conducted to find ways to objectify and digitize the CDAs. For the standard images selected from our database, the quasi $L^*$ of dark skin was 65 and the quasi $L^*$ of bright skin was 74. To match the following seven CDAs: pale, clear, radiant, lively, healthy, rosy, and dull, the colors of both images were adjusted by 30 panelists. The quasi $L^*$, $a^*$, and $b^*$ values were converted from the RGB values of the manipulated images. The differences between the quasi $L^*$, $a^*$, and $b^*$ values of the standard images and the manipulated images reflecting each CDA were statistically significant (p < 0.05). However, there were no statistically significant differences between the $L^*$ values of the dark and bright skin images modified in accordance with each CDA, nor between the quasi $a^*$ values of dark and bright skin for the pale and clear CDAs. From the statistical analysis, the CDAs were observed to form three groups: (i) pale-clear-radiant, (ii) lively-healthy-rosy, and (iii) dull. We recognized that people have similar perceptions of the CDAs. Based on the results of this study, we established a new standard method for a sensibility evaluation that is otherwise difficult to carry out scientifically or objectively.
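The abstract converts the RGB values of the manipulated images to quasi $L^*$, $a^*$, $b^*$ but does not give the exact conversion; as a plausible stand-in, the sketch below implements the standard sRGB (D65) to CIELAB transform with NumPy.

```python
import numpy as np

def srgb_to_lab(rgb):
    """rgb: array-like of shape (..., 3) with sRGB values in [0, 1]; returns L*, a*, b*."""
    rgb = np.asarray(rgb, dtype=float)
    # inverse sRGB gamma
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # linear RGB -> CIEXYZ (D65)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T
    white = np.array([0.95047, 1.0, 1.08883])      # D65 reference white
    t = xyz / white
    delta = 6 / 29
    f = np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

print(srgb_to_lab([0.8, 0.6, 0.55]))   # e.g. a light, skin-tone-like RGB triplet
```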

A Cross-cultural Study on the Affection of Color with Variation of Tone and Chroma for Automotive Visual Display

  • Jung, Jinsung;Park, Jaekyu;Choe, Jaeho;Jung, Eui S.
    • Journal of the Ergonomics Society of Korea / v.36 no.2 / pp.123-144 / 2017
  • Objective: The objective of this study is to evaluate how users perceive colors viewed on an automotive visual display according to cultural and racial differences, covering North America, Europe, and Southeast Asia. In particular, this study aims to identify the effects of variation in tone and chroma within representative color groups by analyzing affective differences across cultures and races for colors constructed by varying tone and chroma around the representative colors. Background: The colors of the menu, information display, or background viewed on an automotive visual display are an important factor stimulating consumers' affection, and efforts are therefore made to express the vehicle's brand and product image through color. Existing color studies focus only on the intrinsic characteristics of colors; an affective approach that accounts for cultural and racial differences, considering the variation of tone and chroma within the colors currently used in automotive visual displays, is lacking. Method: To grasp the visual affection felt by users, this study extracted color-related affective adjectives from the existing literature and an adjective dictionary, and derived human affective dimensions for colors through the evaluation of various colors. Prior to the affection evaluation, the basic light sources red (R), green (G), and blue (B) constituting the colors used in automotive visual displays were each defined as a representative color group. The evaluation targets in each color group consisted of colors constructed by varying tone and chroma, changing the color appearance through the RGB values of the remaining two light sources. This study then carried out an affection evaluation of the constructed colors with subjects of different cultural and racial backgrounds. Results: When the constructed colors were evaluated against the representative affections, there were statistically significant differences between the culturally and racially different groups. S-N-K post-hoc analysis of the colors showing significant differences classified North America and Europe as heterogeneous groups. In some cases Korea was grouped with North America, but it was mainly classified as homogeneous with Europe. Conclusion: The representative affections for colors on an automotive visual display were reduced to three affective dimensions: passionate, neat, and masculine. Based on these, the affection evaluation of the constructed colors viewed on the visual display, reflecting cultural and racial factors, showed that the responses of Korea and Europe differed significantly from those of North America. Among the representative color groups, larger cultural and racial differences in affection appeared for the red and green colors than for the blue color, and the variation in affection was largest for the red color. Application: This study analyzed the correlations between affection and culture and race for colors constructed by varying tone and chroma within the representative color groups of a visual display. The results of this study are expected to be utilized in the coordination and selection of colors viewed on an automotive visual display, taking culture and race into account.
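To make the stimulus-construction step more concrete, the sketch below generates variants of each representative color by scaling HSV saturation and value, used here only as rough proxies for chroma and tone; the actual variation levels and color model used in the study are not stated in the abstract, so every value below is illustrative.

```python
import colorsys

# representative color groups built around the R, G, B light sources
representative = {"red": (1.0, 0.0, 0.0),
                  "green": (0.0, 1.0, 0.0),
                  "blue": (0.0, 0.0, 1.0)}

def variants(rgb, sat_levels=(1.0, 0.7, 0.4), val_levels=(1.0, 0.8, 0.6)):
    """Generate tone/chroma-like variants by scaling HSV saturation and value."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return [colorsys.hsv_to_rgb(h, s * ds, v * dv)
            for ds in sat_levels for dv in val_levels]

for name, rgb in representative.items():
    print(name, [tuple(round(c, 2) for c in col) for col in variants(rgb)])
```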

Estimating the Spatial Distribution of Rumex acetosella L. on Hill Pasture using UAV Monitoring System and Digital Camera (무인기와 디지털카메라를 이용한 산지초지에서의 애기수영 분포도 제작)

  • Lee, Hyo-Jin;Lee, Hyowon;Go, Han Jong
    • Journal of The Korean Society of Grassland and Forage Science / v.36 no.4 / pp.365-369 / 2016
  • Red sorrel (Rumex acetosella L.), one of the exotic weeds in Korea, dominates grassland and reduces forage quality. Improving current pasture productivity through precision management requires practical tools to collect site-specific pasture weed data. Recent developments in unmanned aerial vehicle (UAV) technology offer cost-effective, real-time applications for site-specific data collection. To map red sorrel on a hill pasture, we tested the potential use of a UAV system with digital cameras (visible and near-infrared (NIR)). Field measurements were conducted on a grazed hill pasture at the Hanwoo Improvement Office, Seosan City, Chungcheongnam-do Province, Korea, on May 17, 2014. Plant samples were obtained at 20 sites. The UAV system was used to obtain aerial photos from a height of approximately 50 m (approximately 30 cm spatial resolution). Normalized digital number values of the Red, Green, Blue, and NIR channels were extracted from the aerial photos. Multiple linear regression analysis showed that the correlation coefficient between Rumex content and the four bands of the UAV image was 0.96, with a root mean square error of 9.3. Therefore, a UAV monitoring system can be a quick and cost-effective tool to obtain the spatial distribution of red sorrel for precision management of hilly grazing pasture.
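The core estimation step is an ordinary multiple linear regression of red sorrel content on the four normalized band values. The sketch below shows that step on placeholder data (the real study used 20 field sites); the reported statistics (r = 0.96, RMSE = 9.3) are quoted only for comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites = 20
X = rng.random((n_sites, 4))                 # normalized R, G, B, NIR digital numbers per site
y = rng.random(n_sites) * 100                # observed red sorrel content (placeholder values)

# ordinary least squares with an intercept term
A = np.column_stack([np.ones(n_sites), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef

rmse = np.sqrt(np.mean((y - pred) ** 2))
r = np.corrcoef(y, pred)[0, 1]               # the study reports r = 0.96, RMSE = 9.3
print(coef.round(2), round(rmse, 2), round(r, 2))
```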

Integrating UAV Remote Sensing with GIS for Predicting Rice Grain Protein

  • Sarkar, Tapash Kumar;Ryu, Chan-Seok;Kang, Ye-Seong;Kim, Seong-Heon;Jeon, Sae-Rom;Jang, Si-Hyeong;Park, Jun-Woo;Kim, Suk-Gu;Kim, Hyun-Jin
    • Journal of Biosystems Engineering / v.43 no.2 / pp.148-159 / 2018
  • Purpose: Unmanned aerial vehicle (UAV) remote sensing was applied to test various vegetation indices and build prediction models of rice protein content for monitoring grain quality and proper management practice. Methods: Image acquisition was carried out using NIR (Green, Red, NIR), RGB, and RE (Blue, Green, Red-edge) cameras mounted on the UAV. Sampling was done synchronously at geo-referenced points, and GPS locations were recorded. Paddy samples were air-dried to 15% moisture content, then dehulled and milled to 92% milling yield, and the protein content was measured by near-infrared spectroscopy. Results: Considering all 54 samples, the artificial neural network showed better performance, with an $R^2$ (coefficient of determination) of 0.740, an NSE (Nash-Sutcliffe model efficiency coefficient) of 0.733, and an RMSE (root mean square error) of 0.187%, than the models developed by PR (polynomial regression), SLR (simple linear regression), and PLSR (partial least squares regression). The PLSR calibration models showed results very similar to PR: 0.663 ($R^2$) and 0.169% (RMSE) for cloud-free samples, and 0.491 ($R^2$) and 0.217% (RMSE) for cloud-shadowed samples. However, the validation models performed poorly. This study revealed a highly significant correlation between NDVI (normalized difference vegetation index) and protein content in rice. For the cloud-free samples, the SLR models showed $R^2 = 0.553$ and RMSE = 0.210%, and for the cloud-shadowed samples, $R^2 = 0.479$ and RMSE = 0.225%. Conclusion: There is a significant correlation between the spectral bands and grain protein content. Artificial neural networks have a strong advantage in fitting nonlinear problems when a sigmoid activation function is used in the hidden layer. Quantitatively, the neural network model obtained a higher-precision result, with a mean absolute relative error (MARE) of 2.18% and an RMSE of 0.187%.
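Two of the modelling ideas mentioned above are easy to sketch: NDVI computed from the NIR and red bands, and a small neural network with a sigmoid (logistic) hidden layer regressing protein content on band values. The data below are random placeholders, not the paper's 54 samples, and the network size is an assumption.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    return (nir - red) / (nir + red + 1e-9)

rng = np.random.default_rng(1)
bands = rng.random((54, 3))                         # placeholder green, red, NIR values per plot
protein = 6 + 2 * ndvi(bands[:, 2], bands[:, 1]) + 0.1 * rng.standard_normal(54)

# small network with a sigmoid hidden layer, as the abstract highlights
model = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                     max_iter=5000, random_state=1)
model.fit(bands, protein)
pred = model.predict(bands)
rmse = np.sqrt(np.mean((protein - pred) ** 2))
print(round(rmse, 3))
```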

Research for Calibration and Correction of Multi-Spectral Aerial Photographing System(PKNU 3) (다중분광 항공촬영 시스템(PKNU 3) 검정 및 보정에 관한 연구)

  • Lee, Eun Kyung;Choi, Chul Uong
    • Journal of the Korean Association of Geographic Information Studies / v.7 no.4 / pp.143-154 / 2004
  • Researchers who seek geological and environmental information depend on remote sensing and aerial photographic data from various commercial satellites and aircraft. However, adverse weather conditions and expensive equipment restrict where and when researchers can collect their data. To allow for better flexibility, we developed a compact, automatic multi-spectral aerial photographic system (PKNU 2). The system's multi-spectral camera captures visible (RGB) and near-infrared (NIR) band images ($3032 \times 2008$ pixels). Visible and infrared band images were obtained from separate cameras and combined into color-infrared composite images for environmental monitoring, but the resulting data were not of high quality. Moreover, the system had the drawback that the 60% stereoscopic overlap requirement could not be met because each image took 12 s to store, even though the PKNU 2 system could capture high-capacity photographs. Therefore, we have been developing an advanced version of PKNU 2 (PKNU 3) that consists of a color-infrared spectral camera capable of photographing the visible and near-infrared bands with a single sensor, a thermal infrared camera, two 40 GB computers to store images, and an MPEG board to compress and transfer data to the computer in real time; the system can be attached to and detached from a helicopter. Verification and calibration of each sensor (REDLAKE MS 4000, Raytheon IRPro) were conducted before the aerial photographs were taken in order to obtain more valuable data. Corrections for the spectral characteristics and radial lens distortion of the sensors were carried out.
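The radial lens distortion correction mentioned at the end is commonly modelled with a polynomial (Brown) radial model; the sketch below shows that general approach with placeholder coefficients and camera parameters, not the actual PKNU 3 calibration values.

```python
import numpy as np

def undistort_points(xy, k1, k2, cx, cy, f):
    """Map distorted pixel coordinates toward corrected ones using a crude
    one-step approximation of the inverse of the radial model
    x_d = x_u * (1 + k1*r^2 + k2*r^4)."""
    x = (xy[:, 0] - cx) / f                     # normalized image coordinates
    y = (xy[:, 1] - cy) / f
    r2 = x ** 2 + y ** 2
    scale = 1 + k1 * r2 + k2 * r2 ** 2          # radial distortion factor
    xu, yu = x / scale, y / scale               # approximate inversion (exact inversion iterates)
    return np.column_stack([xu * f + cx, yu * f + cy])

# placeholder intrinsics roughly matching a 3032 x 2008 sensor; k1, k2 are illustrative
pts = np.array([[100.0, 200.0], [2900.0, 1900.0]])
print(undistort_points(pts, k1=-0.10, k2=0.01, cx=1516.0, cy=1004.0, f=3000.0))
```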


Color decomposition method for multi-primary display using 3D-LUT in linearized LAB space (멀티프라이머리 디스플레이를 위한 3D-LUT 색 신호 분리 방법)

  • Kang Dong-Woo;Cho Yang-Ho;Kim Yun-Tae;Choe Won-Hee;Ha Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.6 / pp.9-18 / 2005
  • This paper proposes a color decomposition method for a multi-primary display (MPD) using a 3-dimensional look-up table (3D-LUT) in a linearized LAB space. The proposed method decomposes conventional three-primary colors into the multi-primary control values of a display device under the constraint of a tristimulus match. To reproduce images on the MPD, the color signals should be estimated from a device-independent color space such as CIEXYZ or CIELAB. In this paper, the linearized LAB space is used because of its linearity and additivity in color conversion. The proposed method constructs a 3D-LUT that contains gamut boundary information used to calculate the color signals of the MPD. For image reproduction, standard RGB or CIEXYZ is transformed to the linearized LAB space, and hue and chroma are then computed to index the 3D-LUT. In the linearized LAB space, the color signals of the gamut boundary point with the same lightness and hue as the input point are calculated, as are the color signals of a point on the gray axis with the same lightness as the input. With the gamut boundary point and the gray-axis point, the color signals of the input point are obtained using the chroma ratio, i.e., the input chroma divided by the chroma of the gamut boundary point. In particular, for hue changes, neighboring boundary points are employed. As a result, the proposed method guarantees the continuity of the color signals and computational efficiency, and it requires less memory.
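The decomposition step can be sketched schematically: given the input point's lightness, chroma, and hue in the linearized LAB space, look up the gamut boundary point and the gray-axis point, then interpolate the multi-primary signals by the chroma ratio. The LUT lookups below are hypothetical stand-ins for the paper's 3D-LUT, and the interpolation shown is one plausible reading of the description, not the exact algorithm.

```python
import numpy as np

def decompose(L, C, H, boundary_lut, gray_lut):
    """Return multi-primary control values for a point with lightness L,
    chroma C, and hue H in the linearized LAB space."""
    p_gray = gray_lut(L)                        # primaries of the gray-axis point at lightness L
    p_bound, c_bound = boundary_lut(L, H)       # primaries and chroma of the gamut boundary point
    ratio = np.clip(C / c_bound, 0.0, 1.0)      # chroma ratio relative to the boundary
    # linear interpolation along the constant-lightness, constant-hue line
    return (1 - ratio) * p_gray + ratio * p_bound

# toy six-primary example with made-up lookup functions
gray = lambda L: np.full(6, L / 100.0)
boundary = lambda L, H: (np.array([1.0, 0.8, 0.2, 0.1, 0.5, 0.9]), 60.0)
print(decompose(50.0, 30.0, 120.0, boundary, gray))
```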

A Study on Pipe Model Registration for Augmented Reality Based O&M Environment Improving (증강현실 기반의 O&M 환경 개선을 위한 배관 모델 정합에 관한 연구)

  • Lee, Won-Hyuk;Lee, Kyung-Ho;Lee, Jae-Joon;Nam, Byeong-Wook
    • Journal of the Computational Structural Engineering Institute of Korea / v.32 no.3 / pp.191-197 / 2019
  • As the shipbuilding and offshore plant industries grow larger and more complex, their maintenance and inspection systems become more important. Recently, maintenance and inspection systems based on augmented reality have been attracting much attention for improving workers' understanding of the work and their efficiency, but such systems are often difficult to use because accurate matching between the augmented model and the real-world information is not achieved. To solve this problem, marker-based AR technology, in which a specific image is attached to the model, is used. However, markers get damaged owing to the characteristics of the shipbuilding and offshore plant environment, and the camera must be able to detect the entire marker clearly, which requires sufficient space between the marker and the operator. To overcome these limitations of existing AR systems, this study adopted markerless AR to accurately recognize the actual pipe systems, which account for the largest share of processes in the shipbuilding and offshore plant industries, and proposed a corresponding model-matching methodology. Through this system, it is expected that the distortion of the augmented model caused by the worker's posture and the constrained environment can be reduced.
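The abstract does not detail the markerless matching methodology, so the sketch below is only a generic illustration of markerless registration, not the authors' method: known 3D points on a pipe model are matched to detected 2D image points and the camera pose is recovered with a PnP solve in OpenCV. All correspondences and intrinsics are placeholders.

```python
import numpy as np
import cv2

object_pts = np.array([[0.0, 0.0, 0.0],      # 3D points on the pipe CAD model (meters)
                       [0.5, 0.0, 0.0],
                       [0.5, 0.3, 0.0],
                       [0.0, 0.3, 0.2],
                       [0.2, 0.1, 0.4],
                       [0.4, 0.2, 0.3]], dtype=np.float64)
image_pts = np.array([[320.0, 240.0],        # matched 2D detections in the camera image (pixels)
                      [420.0, 238.0],
                      [425.0, 300.0],
                      [318.0, 305.0],
                      [360.0, 270.0],
                      [400.0, 280.0]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],           # placeholder camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())        # pose that would drive the AR overlay
```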

Classification of Forest Vertical Structure Using Machine Learning Analysis (머신러닝 기법을 이용한 산림의 층위구조 분류)

  • Kwon, Soo-Kyung;Lee, Yong-Suk;Kim, Dae-Seong;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.35 no.2 / pp.229-239 / 2019
  • All vegetation communities have a layered structure, called the 'forest vertical structure.' It is now considered an important indicator for estimating a forest's vitality, diversity, and environmental effects, so the forest vertical structure should be surveyed. However, because the vertical structure is an internal structure, it is generally investigated through field surveys, a traditional forest inventory method that costs a great deal of time and budget. Therefore, in this study we propose a useful method to classify the vertical structure of forests using remote-sensing aerial photographs and machine learning, which can mine large amounts of data, in order to reduce the time and budget required for forest vertical structure investigation. We performed the classification with an SVM (Support Vector Machine) using RGB airborne photos and a LiDAR (Light Detection and Ranging) DSM (Digital Surface Model) and DTM (Digital Terrain Model). The accuracy based on pixel counts was 66.22% when compared with the field survey results. The classification accuracy was relatively high for distinguishing single-layer from multi-layer stands, but classification within the multi-layer classes remained difficult. The results of this study are expected to further develop machine learning research on vegetation structure as various vegetation data and image data are collected in the future.
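A minimal sketch of the classification setup described above: per-pixel features from the RGB photo plus a canopy height value (DSM minus DTM) fed to an SVM, evaluated by pixel-count accuracy. The arrays are random placeholders, not the study's data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 500
rgb = rng.random((n, 3))                       # per-pixel R, G, B values
chm = rng.random(n) * 30                       # canopy height (m) = DSM - DTM
X = np.column_stack([rgb, chm])
y = rng.integers(0, 3, n)                      # placeholder vertical-structure class labels

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
acc = clf.score(X, y)                          # pixel-count accuracy (the study reports 66.22%)
print(round(acc, 4))
```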