• Title/Summary/Keyword: Color Component


Detecting Boundaries between Different Color Regions in Color Codes

  • Kwon B. H.;Yoo H. J.;Kim T. W.
    • Proceedings of the IEEK Conference / 2004.08c / pp.846-849 / 2004
  • Compared to the bar code, which is widely used for commercial product management, the color code is advantageous in both appearance and the number of possible combinations, and its application areas complement those of RFID. However, because the distortion of color component values is severe, easily exceeding 50% of the scale, color codes have had difficulty finding industrial applications. To improve the accuracy of color code recognition, it is better to statistically process an entire color region and then determine its color than to process only a few samples selected from the region. For this purpose, we propose a technique for detecting edges between color regions, which is indispensable for accurate segmentation of the regions. We first transform the RGB color image into the HSI and YIQ color models and extract the I- and Y-components, respectively. We then perform Canny edge detection on each component image. Each edge image usually has some edges missing; however, since the resulting edge images are complementary, an optimal edge image can be obtained by combining them (a code sketch of this step follows this entry).

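As a rough illustration of the complementary-edge idea above, the sketch below takes Canny edges on the HSI intensity (I) and the YIQ luma (Y) images and combines them with a logical OR. The thresholds, the input file name, and the use of OpenCV are assumptions for illustration, not details from the paper.

```python
# Complementary Canny edges from the I (HSI) and Y (YIQ) components.
import cv2
import numpy as np

def complementary_edges(bgr, low=50, high=150):
    b, g, r = cv2.split(bgr.astype(np.float32))
    i_comp = (r + g + b) / 3.0                      # I of HSI: mean of R, G, B
    y_comp = 0.299 * r + 0.587 * g + 0.114 * b      # Y of YIQ: NTSC luma
    edges_i = cv2.Canny(np.clip(i_comp, 0, 255).astype(np.uint8), low, high)
    edges_y = cv2.Canny(np.clip(y_comp, 0, 255).astype(np.uint8), low, high)
    return cv2.bitwise_or(edges_i, edges_y)         # combine the complementary edge maps

if __name__ == "__main__":
    img = cv2.imread("color_code.png")              # hypothetical input image
    cv2.imwrite("edges.png", complementary_edges(img))
```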

Generation of Color Sketch Images Using DIP Operator (DIP 연산자를 이용한 컬러 스케치 영상 생성)

  • So, Hyun-Joo;Jang, Ick-Hoon;Kim, Ji-Hong
    • Journal of Korea Multimedia Society / v.12 no.7 / pp.947-952 / 2009
  • In this paper, we propose a method of generating color sketch images using the DIP operator. In the proposed method, an input RGB color image is first transformed into an HSV color image. A sketch image is then extracted by applying the DIP operator to the V component image, which is the brightness component of the input image. For visual convenience, the extracted sketch image of the V component is next inverted and contrast-stretched. The S component image is also enhanced to deepen the colors of the output sketch image while maintaining its color. Finally, the processed V and S component images, along with the original H component image, are transformed into an output RGB color sketch image (a sketch of these steps follows this entry). Experimental results show that the proposed method yields output color sketch images similar to hand-drawn sketch pictures whose colors are the same as those of the input color images.

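A rough sketch of this pipeline is given below. The paper's DIP operator is not reproduced here; a simple morphological gradient stands in for it, and the saturation gain is an illustrative assumption.

```python
# HSV-based color sketch: edge-like operator on V, inversion, contrast stretch,
# saturation boost, then recombination with the original hue.
import cv2
import numpy as np

def color_sketch(bgr, sat_gain=1.3):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    kernel = np.ones((3, 3), np.uint8)
    sketch = cv2.morphologyEx(v, cv2.MORPH_GRADIENT, kernel)       # stand-in for the DIP operator
    sketch = 255 - sketch                                          # invert for visual convenience
    sketch = cv2.normalize(sketch, None, 0, 255, cv2.NORM_MINMAX)  # contrast stretch
    s_boost = np.clip(s.astype(np.float32) * sat_gain, 0, 255).astype(np.uint8)  # deepen colors
    return cv2.cvtColor(cv2.merge([h, s_boost, sketch]), cv2.COLOR_HSV2BGR)

if __name__ == "__main__":
    img = cv2.imread("input.jpg")                                  # hypothetical input
    cv2.imwrite("sketch.jpg", color_sketch(img))
```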

The Extraction of Face Regions based on Optimal Facial Color and Motion Information in Image Sequences (동영상에서 최적의 얼굴색 정보와 움직임 정보에 기반한 얼굴 영역 추출)

  • Park, Hyung-Chul;Jun, Byung-Hwan
    • Journal of KIISE:Software and Applications / v.27 no.2 / pp.193-200 / 2000
  • The extraction of face regions is required for a head gesture interface, which is a natural user interface. Recently, many researchers have become interested in using color information to detect face regions in image sequences. The two most widely used color models, HSI and YIQ, were selected for this study; in practice, the H-component of HSI and the I-component of YIQ are used. Given the difference between these color components, this study aimed to compare the performance of face region detection between the two models. First, we search for the optimum facial-color range of each color component by examining the detection accuracy of facial color regions over various threshold ranges. We then compare the accuracy of the resulting face box for both color models, using the optimal facial color together with motion information. As a result, a range of 0°~14° for the H-component and a range of -22°~-2° for the I-component proved to be the most suitable for extracting face regions. When only the optimal facial-color range is used, the I-component is better than the H-component by about 10% in face-region extraction accuracy; when optimal facial color and motion information are used together, the I-component is still better by about 3% (a sketch combining the color and motion masks follows this entry).

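A minimal sketch of combining an I-of-YIQ facial-color mask with frame-difference motion is given below. The -22 to -2 range is carried over from the abstract, but the scale assumed for I and the motion threshold are illustrative assumptions that would need tuning for a given camera.

```python
# Facial-color mask on the I component of YIQ, ANDed with a frame-difference motion mask.
import cv2
import numpy as np

def yiq_i(bgr):
    b, g, r = cv2.split(bgr.astype(np.float32) / 255.0)
    return 0.596 * r - 0.274 * g - 0.322 * b        # I component of YIQ (roughly -0.6..0.6 here)

def face_candidate_mask(frame, prev_frame,
                        i_low=-22 / 255.0, i_high=-2 / 255.0,  # assumed scaling of the paper's range
                        motion_thresh=15):
    i_img = yiq_i(frame)
    color_mask = ((i_img >= i_low) & (i_img <= i_high)).astype(np.uint8) * 255
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY))
    motion_mask = (diff > motion_thresh).astype(np.uint8) * 255   # simple frame-difference motion
    return cv2.bitwise_and(color_mask, motion_mask)
```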

Color Image Enhancement Based on an Improved Image Formation Model (개선된 영상 생성 모델에 기반한 칼라 영상 향상)

  • Choi, Doo-Hyun;Jang, Ick-Hoon;Kim, Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.6 s.312 / pp.65-84 / 2006
  • In this paper, we present an improved image formation model and propose a color image enhancement based on it. In the presented model, an input image is represented as the product of global illumination, local illumination, and reflectance. In the proposed enhancement, an input RGB color image is converted into an HSV color image. Under the assumption of white-light illumination, the H and S component images are left unchanged and only the V component image is enhanced according to the image formation model. The global illumination is estimated by applying a linear LPF with a wide support region to the input V component image, and the local illumination by applying a JND (just noticeable difference)-based nonlinear LPF with a narrow support region to the processed image from which the estimated global illumination has been removed. The reflectance is estimated by dividing the input V component image by the estimated global and local illuminations. After gamma correction is performed on the three estimated components, the output V component image is obtained from their product, and histogram modeling is then applied to produce the final output V component image. Finally, an output RGB color image is obtained from the H and S component images of the input and the final output V component image (a sketch of this decomposition follows this entry). Experimental results on a test image DB built from color images downloaded from the NASA homepage and from MPEG-7 CCD color images show that the proposed method produces output images with greatly improved global and local contrast, without halo effects or color shift.
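
Below is a minimal sketch of the V-channel decomposition described above. Plain Gaussian filters stand in for the paper's linear and JND-based nonlinear LPFs, histogram equalization stands in for its histogram modeling, and the gamma values and filter scales are illustrative assumptions.

```python
# V modeled as global illumination x local illumination x reflectance,
# each estimated, gamma-corrected, and recombined.
import cv2
import numpy as np

def enhance_v(v, g_global=0.5, g_local=0.7, g_refl=0.9):
    v = v.astype(np.float32) / 255.0 + 1e-6
    L_global = cv2.GaussianBlur(v, (0, 0), sigmaX=30)            # wide-support LPF: global illumination
    L_local = cv2.GaussianBlur(v / L_global, (0, 0), sigmaX=3)   # narrow LPF on the globally corrected image
    R = v / (L_global * L_local)                                 # reflectance estimate
    out = (L_global ** g_global) * (L_local ** g_local) * (R ** g_refl)  # per-component gamma correction
    out = np.clip(out * 255.0, 0, 255).astype(np.uint8)
    return cv2.equalizeHist(out)                                 # stand-in for histogram modeling

def enhance_color(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    return cv2.cvtColor(cv2.merge([h, s, enhance_v(v)]), cv2.COLOR_HSV2BGR)
```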

Optical Properties as Coating Process of Complex Phosphor for White LED (백색 LED용 복합형광체의 코팅공정에 따른 광 특성)

  • Lee, Hyo-Sung;Kim, Byung-Ho;Hwang, Jong Hee;Lim, Tae-Young;Kim, Jin-Ho;Jeon, Dae-Woo;Jung, Hyun-Suk;Lee, Mi Jai
    • Korean Journal of Materials Research / v.26 no.1 / pp.22-28 / 2016
  • In this study, we fabricated a high-quality color conversion component from green/red phosphors and a low-melting glass frit. The color conversion component was prepared by placing green and red phosphor layers on slide glass via a screen printing process. The properties of the color conversion component could be controlled by changing the coating sequence, layer thickness, and heat treatment temperature. We found that the optical properties of the color conversion component were generally determined by the lowest layer. The heat treatment temperature also affected the correlated color temperature (CCT) and color rendering index (CRI). The color conversion component with a green (lower) - red (upper) layer structure sintered at 550 °C showed the best optical properties: the CCT, CRI, and luminous efficacy were 3340 K, 78, and 56.5 lm/W, respectively.

Implementation of Effective Automatic Foreground Motion Detection Using Color Information

  • Kim, Hyung-Hoon;Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information / v.22 no.6 / pp.131-140 / 2017
  • As video equipment such as CCTV is used for various purposes across society, digital video processing technology such as automatic motion detection has become essential. In this paper, we propose and implement a more stable and accurate motion detection system based on the background subtraction technique. We improve the accuracy and stability of motion detection over existing methods by efficiently processing the color information of digital image data: the color information is separated into its components, the brightness component and the chromatic component, each is processed according to its own characteristics, and the results are then merged (a sketch of this separation follows this entry). This color information processing provides more effective color information for motion detection than existing methods. We further improve the motion detection success rate with a background update process that analyzes the characteristics of a moving background in natural environments and reflects them in the background image.
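
A minimal sketch of background subtraction that treats the brightness and chromatic components separately, merges the two masks, and updates the background with a running average is given below. The thresholds and learning rate are illustrative assumptions, not the paper's values.

```python
# Background subtraction with separate brightness (V) and chromatic (H) masks
# and a running-average background update in non-foreground regions.
import cv2
import numpy as np

class ColorBackgroundSubtractor:
    def __init__(self, first_frame, alpha=0.02, v_thresh=25, h_thresh=10):
        self.bg = cv2.cvtColor(first_frame, cv2.COLOR_BGR2HSV).astype(np.float32)
        self.alpha, self.v_thresh, self.h_thresh = alpha, v_thresh, h_thresh

    def apply(self, frame):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV).astype(np.float32)
        diff = cv2.absdiff(hsv, self.bg)
        v_mask = diff[:, :, 2] > self.v_thresh            # brightness-component change
        h_mask = diff[:, :, 0] > self.h_thresh            # color-component (hue) change
        fg = (v_mask | h_mask).astype(np.uint8) * 255     # merged foreground mask
        # update the background only where no foreground was detected
        self.bg = np.where(fg[..., None] == 0,
                           (1 - self.alpha) * self.bg + self.alpha * hsv,
                           self.bg)
        return fg
```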

Gingival color change after scaling & subgingival root planing (치석제거술과 치은연하 치근면활택술 후 치은의 색조 변화)

  • Kim, Young-Seok;Lim, Sung-Bin;Chung, Chin-Hyung
    • Journal of Periodontal and Implant Science / v.31 no.3 / pp.501-511 / 2001
  • Several indices have been developed that use bleeding and color change as indicators of early gingival pathology. In the presence of gingivitis, vascular proliferation and reduced keratinization increase the redness of the gingiva. Descriptions of healthy gingiva are numerous, ranging from pale pink and coral pink to deep red and violet, but these terms are not objective: the perception of color depends on many factors, such as the light source, the object, and the observer, so an objective description is difficult. The use of instrumentation is therefore recommended to exclude these variables and observer bias. The purpose of this study was to evaluate gingival color change after scaling and subgingival root planing, and to examine the correlation of pocket depth and P.B.I. score with gingival color change. After photographs were taken and the gingival color images were stored on a computer, color change was examined with an image analysis program (a sketch of this kind of analysis follows this entry). The results were as follows. 1. The color of healed gingiva after scaling and subgingival root planing differed significantly from the color of inflamed gingiva (p<0.01). 2. The color of healed gingiva after scaling was similar to that after subgingival root planing (p<0.05). 3. There was a statistically significant correlation between the color change of the red component and pocket depth after scaling and subgingival root planing (p<0.01). 4. There was no correlation between the color changes of the green and blue components and pocket depth after scaling and subgingival root planing (p<0.01). 5. There was a statistically significant correlation between the color change of the red component and P.B.I. score after scaling and subgingival root planing (p<0.01). 6. There was no correlation between the color changes of the green and blue components and P.B.I. score after scaling and subgingival root planing (p<0.01). 7. Increases in pocket depth and P.B.I. score were significantly correlated with the amount of color change (p<0.01). 8. P.B.I. score had a higher correlation with color change than pocket depth (p<0.01).

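As a rough illustration of the image-based measurement described above, the sketch below computes the mean red component inside a gingival region of interest and correlates its change with probing pocket depth. The helper names and all numeric values are hypothetical placeholders, not data from the study.

```python
# Mean red-component change in a gingival ROI correlated with pocket depth.
import numpy as np
from scipy.stats import pearsonr

def mean_red(image_bgr, roi_mask):
    """Mean of the red channel over a boolean region-of-interest mask."""
    return float(image_bgr[:, :, 2][roi_mask].mean())

# hypothetical per-site measurements (placeholders only)
red_change = np.array([12.4, 8.1, 15.0, 5.2, 9.7])      # mean red change after root planing
pocket_depth = np.array([5.0, 3.5, 6.0, 3.0, 4.0])      # probing depth in mm

r, p = pearsonr(red_change, pocket_depth)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```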

Demosaicing Algorithm by Gradient Edge Detection Filtering on Color Component (컬러 성분 에지 기울기 검출 필터링을 이용한 디모자이킹 알고리즘)

  • Jeon, Gwan-Ggil;Jung, Tae-Young;Kim, Dong-Hyung;Kim, Seung-Jong;Jeong, Je-Chang
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.12C / pp.1138-1146 / 2009
  • Digital cameras that adopt a single CCD detector capture color by subsampling the image in three color planes and then interpolating the information to reconstruct a full-resolution color image. Recovering a full-resolution color image from a color filter array (CFA) such as the Bayer pattern is therefore generally treated as an interpolation problem for the unknown color components. In this paper, we first calculate a luminance component value by combining R, G, and B channel information, which differs from conventional demosaicing algorithms, where the G channel is interpolated first and the R and B channels are computed afterwards. By integrating the obtained gradient edge information with an improved weighting function in the luminance component, a new edge-sensitive demosaicing technique is presented. Simulation results on 24 well-known test images show that the proposed technique gives the best image quality compared with several recently published techniques (a sketch of the conventional baseline it is contrasted against follows this entry).
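
The paper's luminance-first method is not reproduced here. Below is a minimal sketch of the conventional gradient-directed interpolation of the green channel on an RGGB Bayer mosaic, the kind of baseline the abstract contrasts against; the pattern layout and border handling are illustrative simplifications.

```python
# Gradient-directed interpolation of the missing green samples on an RGGB mosaic.
import numpy as np

def interpolate_green(cfa):
    """cfa: 2-D float array sampled with an RGGB Bayer pattern (G present where x+y is odd)."""
    h, w = cfa.shape
    green = cfa.copy()
    for y in range(2, h - 2):                 # borders ignored for brevity
        for x in range(2, w - 2):
            if (x + y) % 2 == 1:              # a green sample: keep it
                continue
            dh = abs(cfa[y, x - 1] - cfa[y, x + 1]) + abs(2 * cfa[y, x] - cfa[y, x - 2] - cfa[y, x + 2])
            dv = abs(cfa[y - 1, x] - cfa[y + 1, x]) + abs(2 * cfa[y, x] - cfa[y - 2, x] - cfa[y + 2, x])
            if dh < dv:                       # interpolate along the weaker-gradient direction
                green[y, x] = (cfa[y, x - 1] + cfa[y, x + 1]) / 2
            elif dv < dh:
                green[y, x] = (cfa[y - 1, x] + cfa[y + 1, x]) / 2
            else:
                green[y, x] = (cfa[y, x - 1] + cfa[y, x + 1] + cfa[y - 1, x] + cfa[y + 1, x]) / 4
    return green
```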

Color Component Analysis For Image Retrieval (이미지 검색을 위한 색상 성분 분석)

  • Choi, Young-Kwan;Choi, Chul;Park, Jang-Chun
    • The KIPS Transactions:PartB / v.11B no.4 / pp.403-410 / 2004
  • Recently, studies of image analysis as a preprocessing stage for medical image analysis or image retrieval have been actively carried out. This paper proposes a way of utilizing color components for image retrieval: retrieval is based on color components, and the analysis of color uses the CLCM (Color Level Co-occurrence Matrix) together with statistical techniques. The CLCM proposed in this paper projects color components onto 3D space through a geometric rotation transform and then interprets the distribution formed by their spatial relationship; it is a 2D histogram obtained by geometrically rotating a color model. A statistical technique is used to analyze it. Like the CLCM, the GLCM (Gray Level Co-occurrence Matrix) [1] and Invariant Moments [2,3] use a 2D distribution and basic statistical techniques to interpret 2D data. However, even though the GLCM and Invariant Moments are optimized for their own domains, they cannot fully interpret irregular data in spatial coordinates; because they rely only on basic statistical techniques, the reliability of the extracted features is low. To interpret the spatial relationship and the weight of the data, this study uses Principal Component Analysis [4,5] from multivariate statistics, and to increase accuracy it projects the color components onto 3D space, rotates them, and extracts features of the data from all angles (a sketch of this rotation-and-histogram step follows this entry).
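
Below is a rough sketch of the rotate-project-histogram idea as read from the abstract: RGB pixels are rotated in 3-D color space, projected onto two coordinates, accumulated into a 2-D histogram, and PCA summarizes the resulting distribution. The rotation angles, bin count, and feature choice are illustrative assumptions, not the paper's definition of the CLCM.

```python
# Rotate the RGB color cloud, build a 2-D histogram, and summarize it with PCA.
import numpy as np

def rotation_matrix(ax, ay):
    """Rotation about the x- then the y-axis, angles in radians."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    return ry @ rx

def clcm_features(rgb, ax=np.pi / 4, ay=np.pi / 4, bins=64):
    pixels = rgb.reshape(-1, 3).astype(np.float64) / 255.0
    rotated = pixels @ rotation_matrix(ax, ay).T                  # rotate the color cloud
    hist, _, _ = np.histogram2d(rotated[:, 0], rotated[:, 1], bins=bins)  # 2-D color histogram
    ys, xs = np.nonzero(hist)
    weights = hist[ys, xs]
    coords = np.stack([xs, ys], axis=1).astype(np.float64)
    mean = np.average(coords, axis=0, weights=weights)
    cov = np.cov((coords - mean).T, aweights=weights)             # weighted covariance of the distribution
    eigvals, _ = np.linalg.eigh(cov)                              # principal components of the 2-D spread
    return np.concatenate([mean, eigvals])
```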

The Binarization of Text Regions in Natural Scene Images, based on Stroke Width Estimation (자연 영상에서 획 너비 추정 기반 텍스트 영역 이진화)

  • Zhang, Chengdong;Kim, Jung Hwan;Lee, Guee Sang
    • Smart Media Journal / v.1 no.4 / pp.27-34 / 2012
  • In this paper, a novel text binarization method is presented that can deal with complex conditions such as shadows, non-uniform illumination due to highlights or object projection, and messy backgrounds. To locate the target text region, a focus line is assumed to pass through it. Connected component analysis and stroke width estimation based on the location of the focus line are then used to find the bounding box of the text region and the bounding box of each connected component. A series of classifications is applied to identify whether each CC (connected component) is text or non-text. A modified K-means clustering method based on an HCL color space is also applied to reduce the color dimension. A text binarization procedure based on the locations of the text components and seed color pixels is then used to generate the final result (a sketch of two of these building blocks follows this entry).

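Below is a rough sketch of two building blocks named above: connected-component analysis with a stroke-width estimate from the distance transform, and K-means clustering to reduce the color dimension. Lab is used here as a stand-in for the paper's HCL space, and all parameters are illustrative assumptions.

```python
# Connected components with stroke-width estimates, plus K-means color reduction.
import cv2
import numpy as np

def stroke_width_of_components(binary):
    """binary: uint8 image with text candidates as 255. Returns (labels, widths)."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 3)
    widths = []
    for i in range(1, n):                                 # label 0 is the background
        mask = labels == i
        widths.append(2.0 * dist[mask].max())             # stroke width ~ twice the max inscribed radius
    return labels, widths

def reduce_colors(bgr, k=3):
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, cluster_idx, centers = cv2.kmeans(lab, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    return cluster_idx.reshape(bgr.shape[:2])             # per-pixel cluster label
```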