• Title/Summary/Keyword: RGB color model

Search results: 154

Algorithm of Face Region Detection in the TV Color Background Image (TV컬러 배경영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-Shin
    • Journal of Advanced Navigation Technology
    • /
    • v.15 no.4
    • /
    • pp.672-679
    • /
    • 2011
  • In this paper, a face region detection algorithm based on skin color in TV images is proposed. First, a reference image is set from sampled skin color, and candidate face regions are extracted using the Euclidean distance between the reference and the pixels of the TV image. The eye region is detected using the mean and standard deviation of the color-difference component between Y and C obtained by converting the RGB color model into the CMY color model. The lip region is detected using the Q component obtained by converting the RGB color model into the YIQ color space. The face region is then extracted with knowledge-based rules by logically combining the eye and lip images. To verify the proposed method, experiments were performed on frontal color images captured from TV images. Experimental results show that the face region can be detected regardless of the location and size of the face.
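
As a rough illustration of the Euclidean-distance skin-color candidate step described in the abstract above, the sketch below marks pixels close to a sampled reference skin color. The reference color, threshold, and NumPy-based implementation are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def skin_candidate_mask(image_rgb, ref_skin_rgb=(190, 140, 110), max_dist=60.0):
    """Mark pixels whose RGB Euclidean distance to a sampled reference
    skin color is small as face-region candidates.

    image_rgb    : H x W x 3 uint8 array
    ref_skin_rgb : reference skin color sampled beforehand (assumed value)
    max_dist     : distance threshold (assumed value)
    """
    pixels = image_rgb.astype(np.float64)
    ref = np.array(ref_skin_rgb, dtype=np.float64)
    dist = np.sqrt(np.sum((pixels - ref) ** 2, axis=2))   # per-pixel Euclidean distance
    return dist <= max_dist                                # boolean candidate mask
```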

Color matching between monitor and mobile display device using improved S-curve model and RGB color LUT (개선된 S-curve 모델과 RGB 칼라 LUT를 이용한 모니터와 모바일 디스플레이 장치간 색 정합)

  • 박기현;이명영;이철희;하영호
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.33-41
    • /
    • 2004
  • This paper proposes a color-matching 3D look-up table (LUT) that simplifies the complex color matching procedure between a monitor and a mobile display device. To perform color matching, image colors must be processed in a device-independent color space such as CIEXYZ or CIELAB. Obtaining device-independent values from device-dependent RGB values requires display characterization. With the conventional S-curve model, the LCD characterization error is larger than the tolerance error because an LCD is more nonlinear than a CRT. This paper improves the S-curve model, using the electro-optical transfer functions of the X, Y, and Z values, so that the characterization error becomes smaller than the tolerance error. Through color matching experiments between a monitor and mobile display devices, we obtained images with higher color fidelity on the mobile displays. The experiments also showed that a 64-entry (4×4×4) look-up table is the smallest size for which the characterization error remains acceptable.
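
The abstract above reports that a 4×4×4 color-matching LUT is sufficient. A minimal sketch of how such a small 3D LUT could be applied with trilinear interpolation follows; the LUT contents, grid layout, and function name are hypothetical and only illustrate the general technique, not the paper's exact procedure.

```python
import numpy as np

def apply_lut_trilinear(rgb, lut):
    """Map one source RGB triple through a small 3D color-matching LUT.

    rgb : (r, g, b) values in [0, 255]
    lut : N x N x N x 3 array; lut[i, j, k] holds the matched RGB for the
          grid point (i, j, k) on a uniform grid over [0, 255] per channel
    """
    n = lut.shape[0]
    # position of the input color on the LUT grid
    pos = np.array(rgb, dtype=np.float64) / 255.0 * (n - 1)
    lo = np.clip(np.floor(pos).astype(int), 0, n - 2)       # lower grid corner
    frac = pos - lo                                          # fractional offsets
    out = np.zeros(3)
    # trilinear interpolation over the 8 surrounding grid points
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((frac[0] if di else 1 - frac[0]) *
                     (frac[1] if dj else 1 - frac[1]) *
                     (frac[2] if dk else 1 - frac[2]))
                out += w * lut[lo[0] + di, lo[1] + dj, lo[2] + dk]
    return out
```

In this sketch, a 4×4×4 LUT built from the display characterization measurements would be passed as `lut`.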

Illuminant Estimation Method of a Color Image using rgb Chromaticity (rgb 색도를 이용한 칼라 영상의 조명 정보 평가 방법)

  • 윤창락;조맹섭
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2000.10b
    • /
    • pp.419-421
    • /
    • 2000
  • For accurate color reproduction, it is important to analyze how the image produced by an image input device changes with the illuminant color. An image input device forms an image according to the color characteristics of the illumination lighting the object. This differs from the color constancy of the human visual system and becomes a problem when the color appearance model required for accurate color reproduction transforms the image. Therefore, the illuminant information needs to be analyzed from the image generated by the input device in order to reproduce the color constancy of the human visual system. In this paper, to estimate the illuminant information of an image, a chromaticity polygon is constructed on the chromaticity plane from the rgb chromaticities of high-chroma reference color samples, and the illuminant information is estimated from the inclusion relationship between the rgb chromaticity distribution of all pixels in the image and the chromaticity polygon of the reference samples.

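The method in the abstract above works in rgb chromaticity coordinates. A minimal sketch of computing those per-pixel coordinates is shown below; the polygon construction and inclusion test are omitted, and the NumPy formulation is an assumption for illustration.

```python
import numpy as np

def rgb_chromaticity(image_rgb, eps=1e-6):
    """Convert an RGB image to rgb chromaticity coordinates:
    r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B)."""
    img = image_rgb.astype(np.float64)
    s = img.sum(axis=2, keepdims=True) + eps   # avoid division by zero
    return img / s                              # H x W x 3, each pixel sums to ~1
```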

Color-related Query Processing for Intelligent E-Commerce Search (지능형 검색엔진을 위한 색상 질의 처리 방안)

  • Hong, Jung A;Koo, Kyo Jung;Cha, Ji Won;Seo, Ah Jeong;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.109-125
    • /
    • 2019
  • As interest in intelligent search engines increases, various studies have been conducted to extract and use product-related features intelligently. In particular, when users search for goods in e-commerce search engines, the 'color' of a product is an important feature that describes the product. Therefore, synonyms of color terms must be handled in order to return accurate results for users' color-related queries. Previous studies have suggested a dictionary-based approach to processing synonyms of color terms. However, the dictionary-based approach cannot handle color-related terms that are not registered in the dictionary. To overcome this limitation, this research proposes a model that extracts RGB values from an internet search engine in real time and outputs similar color names based on the designated color information. First, a color term dictionary was constructed that includes color names and the R, G, B values of each color, taken from the Korean standard digital color palette program and the Wikipedia color list, for basic color search. The dictionary was made more robust by adding 138 color names transliterated into Korean from English color names, together with their RGB values. The final color dictionary therefore contains a total of 671 color names with corresponding RGB values. The proposed method starts with the specific color a user searched for and checks whether that color exists in the built-in color dictionary. If the color is in the dictionary, its RGB values in the dictionary are used as the reference values of the retrieved color. If the searched color is not in the dictionary, the top five Google image search results for the color are crawled and average RGB values are extracted from a central area of each image. Several ways of extracting the RGB values were attempted, since simply averaging the RGB values of the center area of each image has limits. Clustering the RGB values in that area and using the mean of the densest cluster as the reference values showed the best performance. Based on the reference RGB values of the searched color, the RGB values of all colors in the previously constructed dictionary are compared, and a candidate list is created with colors whose R, G, and B values each lie within ±50 of the reference. Finally, using the Euclidean distance between these candidates and the reference RGB values of the searched color, up to five colors with the highest similarity become the final outcome. To evaluate the usefulness of the proposed method, an experiment was performed in which 300 color names and their corresponding RGB values were obtained through questionnaires and compared with the RGB values obtained from four different methods, including the proposed one. The average CIE-Lab Euclidean distance of the proposed method was about 13.85, relatively low compared with 30.88 for the case using the synonym dictionary only and 30.38 for the case using the dictionary together with the Korean WordNet synonym site. The variant of the proposed method without clustering showed an average Euclidean distance of 13.88, which implies that the DBSCAN clustering used in the proposed method reduces the Euclidean distance.
This research suggests a new color synonym processing method based on RGB values that combines the dictionary method with real-time synonym processing for new color names. This removes the limitation of the dictionary-based approach, the conventional synonym processing method. The research can contribute to improving the intelligence of e-commerce search systems, especially for the color search feature.
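
The final matching step described above (a ±50 per-channel filter followed by Euclidean-distance ranking) can be sketched as below. The function name, dictionary layout, and parameter defaults are assumptions for illustration; the dictionary construction and the image-crawling/clustering step are not shown.

```python
import math

def similar_colors(query_rgb, color_dict, window=50, top_k=5):
    """Given reference RGB values for the queried color, return up to top_k
    dictionary colors whose R, G, B values each lie within +/- window of the
    reference, ranked by Euclidean distance (closest first).

    query_rgb  : (r, g, b) reference values (from the dictionary or crawled images)
    color_dict : mapping of color name -> (r, g, b)
    """
    qr, qg, qb = query_rgb
    candidates = []
    for name, (r, g, b) in color_dict.items():
        if abs(r - qr) <= window and abs(g - qg) <= window and abs(b - qb) <= window:
            dist = math.sqrt((r - qr) ** 2 + (g - qg) ** 2 + (b - qb) ** 2)
            candidates.append((dist, name))
    return [name for _, name in sorted(candidates)[:top_k]]
```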

Skin Segmentation Using YUV and RGB Color Spaces

  • Al-Tairi, Zaher Hamid;Rahmat, Rahmita Wirza;Saripan, M. Iqbal;Sulaiman, Puteri Suhaiza
    • Journal of Information Processing Systems
    • /
    • v.10 no.2
    • /
    • pp.283-299
    • /
    • 2014
  • Skin detection is used in many applications, such as face recognition, hand tracking, and human-computer interaction. Many skin color detection algorithms extract human skin color regions with the thresholding technique, since it is simple and computationally fast. The efficiency of each color space depends on its robustness to changes in lighting and on its ability to distinguish skin color pixels in images with complex backgrounds. For more accurate skin detection, we propose a new threshold based on the RGB and YUV color spaces. The proposed approach starts by converting the RGB color space to the YUV color model. It then separates the Y channel, which represents the intensity of the color model, from the U and V channels to eliminate the effects of luminance. After that, the threshold values are selected by testing the boundary of skin colors with the help of the color histogram. Finally, the threshold is applied to the input image to extract the skin parts. The detected skin regions were quantitatively compared with the actual skin parts in the input images to measure the accuracy, and the results of our threshold were compared with those of other thresholds to demonstrate the efficiency of our approach. The experimental results show that the proposed threshold is more robust to complex backgrounds and lighting conditions than the others.
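
A minimal sketch of the RGB-to-YUV conversion and chrominance thresholding described above follows. The conversion matrix is the standard BT.601 transform and the U/V bounds are illustrative placeholders; the paper's actual constants and threshold values are not given in the abstract.

```python
import numpy as np

# Standard BT.601 RGB -> YUV transform (the paper's exact constants may differ)
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def skin_mask_yuv(image_rgb, u_range=(-40, 10), v_range=(5, 75)):
    """Convert RGB to YUV, discard the luminance channel Y, and threshold
    the chrominance channels U and V to obtain a skin mask.
    The u_range / v_range bounds are placeholder values."""
    yuv = image_rgb.astype(np.float64) @ RGB2YUV.T
    u, v = yuv[..., 1], yuv[..., 2]
    return ((u >= u_range[0]) & (u <= u_range[1]) &
            (v >= v_range[0]) & (v <= v_range[1]))
```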

Color Space Exploration and Fusion for Person Re-identification (동일인 인식을 위한 컬러 공간의 탐색 및 결합)

  • Nam, Young-Ho;Kim, Min-Ki
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.10
    • /
    • pp.1782-1791
    • /
    • 2016
  • Various color spaces such as RGB, HSV, and log-chromaticity have been used in the field of person re-identification. However, few studies have examined which color space is suitable for re-identification. This paper reviews the color invariance of color spaces under the diagonal model and explores the suitability of each color space for person re-identification. It also proposes a method for person re-identification based on a histogram refinement technique and several color-space fusion strategies. Two public datasets (ALOI and ImageLab) were used for the color-space suitability test, and the ImageLab dataset was used to evaluate the feasibility of the proposed method for person re-identification. Experimental results show that RGB and HSV are more suitable for the re-identification problem than other color spaces such as normalized RGB and log-chromaticity. The cumulative recognition rates up to the third rank under RGB and HSV were 79.3% and 83.6%, respectively. Furthermore, the fusion strategy using the max score showed a performance improvement of 16% or more. These results show that the proposed method is more effective for person re-identification than methods that use a single color space.
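
The abstract above reports that fusing color spaces with a max-score rule improved re-identification. A minimal sketch of max-score fusion over per-color-space similarity scores is shown below; the data layout and names are assumptions, and the histogram refinement step that produces the scores is not shown.

```python
def fuse_max_score(score_tables):
    """Fuse per-color-space matching scores by keeping, for each gallery
    identity, the maximum score over all color spaces.

    score_tables : dict of color_space -> dict of identity -> similarity score
    Returns identities ranked by their fused (max) score, best first.
    """
    fused = {}
    for scores in score_tables.values():
        for identity, score in scores.items():
            fused[identity] = max(score, fused.get(identity, float("-inf")))
    return sorted(fused, key=fused.get, reverse=True)
```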

Conversion of Image into Sound Based on HSI Histogram (HSI 히스토그램에 기초한 이미지-사운드 변환)

  • Kim, Sung-Il
    • The Journal of the Acoustical Society of Korea
    • /
    • v.30 no.3
    • /
    • pp.142-148
    • /
    • 2011
  • The final aim of the present study is to develop an intelligent robot that emulates the human synesthetic ability to associate a color image with a specific sound. This can be done on the basis of mutual conversion between color images and sound. As a first step toward that goal, this study focuses on a basic system that converts a color image into sound. The proposed conversion is based on the similarity between the physical frequency information of light and sound. The conversion of a color image into sound was implemented using HSI histograms obtained through RGB-to-HSI color model conversion, written in Microsoft Visual C++ (ver. 6.0). Two different color images were used in the simulation experiments, and the results revealed that the hue, saturation, and intensity elements of each input color image were converted into the fundamental frequency, harmonic, and octave elements of a sound, respectively. Through the proposed system, the converted sound elements were then synthesized with Csound to automatically generate a sound source in WAV file format.
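
The pipeline above starts from an RGB-to-HSI conversion before histogramming. A minimal sketch of the standard geometric RGB-to-HSI formulas for a single pixel is shown below; the subsequent mapping of H, S, and I histograms to fundamental frequency, harmonics, and octave is described only at a high level in the abstract and is not reproduced here.

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel (values in [0, 1]) to HSI using the standard
    geometric formulas; H is returned in degrees, S and I in [0, 1]."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) or 1e-12  # guard against 0
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:
        h = 360.0 - h          # hue lies in the lower half of the color circle
    return h, s, i             # hue is undefined (arbitrary) for gray pixels
```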

A Basic Study on the Pitch-based Sound into Color Image Conversion (피치 기반 사운드-컬러이미지 변환에 관한 기초연구)

  • Kang, Kun-Woo;Kim, Sung-Ill
    • Science of Emotion and Sensibility
    • /
    • v.15 no.2
    • /
    • pp.231-238
    • /
    • 2012
  • This study aims at building an application system that converts sound into a color image based on synesthetic perception. As the major features of the input sound, the scale and octave elements extracted from the F0 (fundamental frequency) were converted into the hue and intensity elements of the HSI color model, respectively. In this paper, the saturation was fixed at 0.5. Based on color model conversion theory, the HSI color model was then converted into the RGB model, so that a color image in BMP format was finally created. In experiments, the basic system was implemented on both software and hardware (TMS320C6713 DSP) platforms using the proposed sound-to-color-image conversion method. The results revealed that diverse color images with different hues and intensities were created depending on the scales and octaves extracted from the F0 of the input sound signals. The outputs on the hardware platform were also identical to those on the software platform.

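The abstract above fixes saturation at 0.5 and maps scale to hue and octave to intensity. The sketch below illustrates one such mapping; the specific numeric choices (12 semitone steps spread over 360° of hue, octaves normalized into [0, 1]) are assumptions, not the paper's exact mapping, and the final HSI-to-RGB conversion is omitted.

```python
def pitch_to_hsi(scale_index, octave, num_scales=12, max_octave=8):
    """Map an F0-derived scale step and octave to an HSI color.
    Saturation is fixed at 0.5 as stated in the abstract; the linear mappings
    of scale -> hue and octave -> intensity are illustrative assumptions."""
    hue = (scale_index % num_scales) * (360.0 / num_scales)   # degrees
    intensity = min(octave, max_octave) / max_octave          # [0, 1]
    return hue, 0.5, intensity
```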

Color Analysis with Enhanced Fuzzy Inference Method (개선된 퍼지 추론 기법을 이용한 칼라 분석)

  • Kim, Kwang-Baek
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.8
    • /
    • pp.25-31
    • /
    • 2009
  • Widely used color information recognition methods based on the RGB color model with static fuzzy inference rules have limitations stemming from the model itself: they are detached from human vision and are applicable only in limited environments. In this paper, we propose a method based on the HSI model with a new inference process that resembles human visual recognition. A user can also add, delete, and update the inference rules in this system. In our method, membership intervals are designed with sine and cosine functions for the H channel and with trigonometric-style functions for the S and I channels. The membership degree is computed through an interval merging process, and the inference rules are then applied to the result to infer the color information. Experiments show that our method is more intuitive and efficient than the RGB-based model.
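
The abstract above only states that the H-channel membership intervals are built from sine and cosine functions. As a loose illustration of that idea, the sketch below defines a raised-cosine membership function over a hue interval; the interval center, width, and shape are placeholder assumptions and do not reproduce the paper's rule base.

```python
import math

def hue_membership(hue_deg, center=0.0, width=60.0):
    """Illustrative sinusoid-shaped membership function for a hue interval:
    membership is 1 at the interval center and falls to 0 at +/- width,
    following a half-cosine curve. Center and width are placeholders."""
    # circular distance between hue_deg and the interval center (degrees)
    d = abs((hue_deg - center + 180.0) % 360.0 - 180.0)
    if d >= width:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * d / width))
```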

Color matching between monitor and mobile display device using improved S-curve model and RGB color LUT (개선된 S-curve 모델과 RGB 칼라 참조표를 이용한 모니터와 모바일 디스플레이 장치간 색 정합)

  • 박기현;이명영;이철희;하영호
    • Proceedings of the IEEK Conference
    • /
    • 2003.11a
    • /
    • pp.15-18
    • /
    • 2003
  • This paper proposes a color-matching 3D look-up table (LUT) that simplifies the complex color matching procedure between a monitor and a mobile display device. To perform color matching, image colors must be processed in a device-independent color space such as CIEXYZ or CIELAB. We improved the S-curve model so that the characterization error is smaller than the tolerance error. As a result of the experiments, we concluded that a 64-entry (4×4×4) look-up table is the smallest size for which the characterization error remains acceptable.
