• Title/Summary/Keyword: RGB Color model


The Flame Color Analysis of Color Models for Fire Detection (화재검출을 위한 컬러모델의 화염색상 분석)

  • Lee, Hyun-Sul;Kim, Won-Ho
    • Journal of Satellite, Information and Communications
    • /
    • v.8 no.3
    • /
    • pp.52-57
    • /
    • 2013
  • This paper presents a comparative color analysis of flame in standard color models in order to identify the optimal color model for an image-processing-based flame detection algorithm. Histogram intersection (HI) values were used to analyze the separation characteristics between flame and non-flame colors in each of the standard color models RGB, YCbCr, CIE Lab, and HSV. The HI value of each color model and each of its components was evaluated for objective comparison. The analysis shows that the YCbCr color model is the most suitable for flame detection, with an average HI value of 0.0575. Among the 12 components of the standard color models, the Cb, R, and Cr components have HI values of 0.0433, 0.0526, and 0.0567, respectively, and show the best flame separation characteristics.
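The histogram-intersection measure used above can be sketched as follows; a lower value means the flame and non-flame distributions overlap less, so the channel separates them better. The bin counts here are illustrative, not taken from the paper.

```python
def histogram_intersection(h1, h2):
    """Histogram intersection of two histograms, each normalized to unit mass."""
    s1, s2 = sum(h1), sum(h2)
    return sum(min(a / s1, b / s2) for a, b in zip(h1, h2))

# Illustrative 4-bin histograms for a flame region and a non-flame region.
flame = [90, 8, 1, 1]
background = [2, 3, 15, 80]
hi = histogram_intersection(flame, background)  # small value = good separation
```

Identical histograms give an intersection of 1.0; disjoint histograms give 0.0.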

Division of the Hand and Fingers In Realtime Imaging Using Webcam

  • Kim, Ho Yong;Park, Jae Heung;Seo, Yeong Geon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.9
    • /
    • pp.1-6
    • /
    • 2018
  • In this paper, we propose a method for effectively segmenting the hand and fingers using an ordinary webcam. The method applies four empirically chosen preprocessing steps to remove noise. First, it removes overall image noise with Gaussian smoothing. Second, it converts the RGB image to the HSV and YCbCr color models, performs a global static binarization based on statistical values for each color model, and combines the results with a bitwise OR operation. Third, it removes noise by approximating contours with the RDP algorithm and filling inner regions with a flood-fill algorithm. Lastly, it removes noise with morphological operations, determines a threshold proportional to the image size, and selects the hand and finger regions. The paper compares the method against an existing single-color-model hand segmentation method, focuses on noise reduction, and can be applied to gesture recognition.
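The second step, binarizing in two color models and OR-ing the masks, can be sketched per pixel as below. The thresholds are common skin-color ranges from the literature, not the statistically derived values of the paper, and BT.601 coefficients are assumed for YCbCr.

```python
import colorsys

def skin_mask(pixels):
    """Binarize a list of (R, G, B) pixels in HSV and in YCbCr, then OR the masks."""
    mask = []
    for r, g, b in pixels:
        # HSV test (hue near skin tones, moderate saturation, not too dark)
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hsv_hit = h < 0.14 and 0.15 < s < 0.9 and v > 0.35
        # YCbCr test (BT.601 conversion, classic Cb/Cr skin box)
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
        ycc_hit = 77 <= cb <= 127 and 133 <= cr <= 173
        mask.append(1 if hsv_hit or ycc_hit else 0)  # the bitwise-OR step
    return mask
```

A skin-toned pixel passes either test; a saturated blue background pixel passes neither.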

A Study on Color Management of Input and Output Device in Electronic Publishing (II) (전자출판에서 입.출력 장치의 컬러 관리에 관한 연구 (II))

  • Cho, Ga-Ram;Koo, Chul-Whoi
    • Journal of the Korean Graphic Arts Communication Society
    • /
    • v.25 no.1
    • /
    • pp.65-80
    • /
    • 2007
  • Input and output devices require precise color representation and a CMS (Color Management System) because of the increasing number of ways to bring digital images into electronic publishing. However, there are slight differences in the device-dependent color signals among input and output devices. Also, because of the non-linear conversion from input signal values to output signal values, there are color differences between the original copy and the output copy. Device-dependent color information values therefore need to be converted into device-independent color information values. When creating an original copy through electronic publishing, the input and output devices should be placed under color management: through the three phases of calibration, characterization, and color conversion, device-dependent color is transformed into device-independent color. In this paper, an experiment was conducted in which the input devices used linear multiple regression and the sRGB color space for color transformation, while the output device used the GOG, GOGO, and sRGB models. The best results were obtained when the scanner and digital camera input devices transformed the original target with linear multiple regression and the LCD output device was characterized with the GOG model.
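The GOG (gain-offset-gamma) display model named above can be sketched as a one-line tone curve; the parameter values below are illustrative, not fitted to any actual LCD.

```python
def gog(d, gain, offset, gamma):
    """Gain-Offset-Gamma display model: maps a normalized digital count
    d in [0, 1] to relative channel luminance. Negative pre-gamma values
    are clipped to zero, as is conventional for GOG."""
    x = gain * d + offset
    return max(x, 0.0) ** gamma

# Example: a near-sRGB-like response (gain=1, offset=0, gamma=2.2).
lum = gog(0.5, 1.0, 0.0, 2.2)
```

In characterization, gain, offset, and gamma are fitted per channel from measured patches; the fitted curves then feed the device-independent (e.g. XYZ) transform.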


Automatic Color Palette Extraction for Paintings Using Color Grouping and Clustering (색상 그룹핑과 클러스터링을 이용한 회화 작품의 자동 팔레트 추출)

  • Lee, Ik-Ki;Lee, Chang-Ha;Park, Jae-Hwa
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.35 no.7
    • /
    • pp.340-353
    • /
    • 2008
  • A computational color palette extraction model is introduced to describe a painter's brushwork objectively and efficiently. In this model, a color palette is defined as the minimum set of colors in which a painting can be displayed within an error allowance, and it is extracted by the two-step process of color grouping and major color extraction. Color grouping controls the resolution of colors adaptively and produces a basic color set for a given painting image. The final palette is obtained from the basic color set by applying a weighted k-means clustering algorithm. The palettes extracted from works of several famous painters are displayed in a 3-D color space to show their distinctive palette styles, using the RGB and CIE Lab color models individually. Two experiments, painter classification and color transform of photographic images, were conducted to check the performance of the proposed method. The results show that the proposed palette model can serve as a computational color analysis metric for describing brushwork and as a color transform tool for computer graphics.
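The weighted k-means step can be sketched as below: each basic color carries the pixel count of its group as a weight, so frequent colors pull the palette centers harder. This is a minimal stdlib sketch, not the paper's full grouping pipeline.

```python
import random

def weighted_kmeans(points, weights, k, iters=20, seed=0):
    """Weighted k-means over RGB tuples. Returns k cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # accumulate weighted sums per cluster: [sum_r, sum_g, sum_b, sum_w]
        sums = [[0.0, 0.0, 0.0, 0.0] for _ in range(k)]
        for p, w in zip(points, weights):
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            for d in range(3):
                sums[i][d] += w * p[d]
            sums[i][3] += w
        centers = [tuple(s[d] / s[3] for d in range(3)) if s[3] else centers[i]
                   for i, s in enumerate(sums)]
    return centers

palette = weighted_kmeans(
    [(0, 0, 0), (10, 10, 10), (250, 250, 250), (240, 240, 240)],
    [1, 1, 1, 1], k=2)
```

With two obvious clusters the centers converge to the weighted means of each group.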

Underwater image quality enhancement through Rayleigh-stretching and averaging image planes

  • Ghani, Ahmad Shahrizan Abdul;Isa, Nor Ashidi Mat
    • International Journal of Naval Architecture and Ocean Engineering
    • /
    • v.6 no.4
    • /
    • pp.840-866
    • /
    • 2014
  • Visibility in underwater images is usually poor because of the attenuation of light in water, which causes low contrast and color distortion. In this paper, a new approach for underwater image quality improvement is presented. The proposed method aims to improve underwater image contrast, increase image details, and reduce noise by applying contrast stretching to produce two images with different contrasts. The method modifies the image histogram in two main color models, RGB and HSV. The histograms of the color channels in the RGB color model are remapped to follow the Rayleigh distribution within certain ranges. The image is then converted to the HSV color model, and the S and V components are modified within certain limits. Qualitative and quantitative analyses indicate that the proposed method outperforms other state-of-the-art methods in terms of contrast, details, and noise reduction. Image color also shows much improvement.
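One common way to remap a channel histogram toward a Rayleigh distribution is to push each pixel's empirical CDF through the inverse Rayleigh CDF; a minimal sketch is below, with an illustrative scale parameter `alpha` (the paper's exact stretching ranges are not reproduced here).

```python
import math

def rayleigh_stretch(channel, alpha=0.4, lo=0, hi=255):
    """Remap one color channel so its histogram approximates a Rayleigh
    distribution: value -> inverse Rayleigh CDF of its empirical CDF."""
    n = len(channel)
    order = sorted(channel)
    # empirical CDF kept strictly below 1 so the inverse stays finite
    rank = {v: (i + 1) / (n + 1) for i, v in enumerate(order)}
    out = []
    for v in channel:
        c = rank[v]
        x = alpha * math.sqrt(-2.0 * math.log(1.0 - c))  # inverse Rayleigh CDF
        out.append(min(hi, lo + (hi - lo) * x))
    return out

stretched = rayleigh_stretch(list(range(10)))
```

The mapping is monotone, so pixel ordering is preserved while the histogram shape changes.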

Implementation of Intelligent Expert System for Color Measuring/Matching (칼라 매저링/매칭용 지능형 전문가 시스템의 구현)

  • An, Tae-Cheon;Jang, Gyeong-Won;O, Seong-Gwon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.7
    • /
    • pp.589-598
    • /
    • 2002
  • A color measuring/matching expert system is implemented with a new color measuring method that combines intelligent algorithms with image processing techniques. The color measuring part of the proposed system preprocesses the scanned original color input images to eliminate distorted components by means of a pixel histogram technique, and then extracts RGB (red, green, blue) data from the color information of the preprocessed input images. If the extracted RGB color data does not exist in the matching recipe databases, the colors can still be measured by a model that searches for color-mixing rules, built with intelligent modeling techniques such as a fuzzy inference system and an adaptive neuro-fuzzy inference system. The color matching part lets the user easily choose images close to the original color by comparing the information of the preprocessed color input images with the expert's database of measuring recipes, using the delta E formula employed in practical processes.
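The delta E comparison can be sketched as below. The abstract does not say which delta E variant is used; CIE76 (plain Euclidean distance in CIE Lab) is assumed here, and the recipe entries are made up for illustration.

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two Lab triples."""
    return math.dist(lab1, lab2)

def best_match(target_lab, recipes):
    """Return the (name, lab) recipe entry closest to the target color."""
    return min(recipes, key=lambda r: delta_e76(target_lab, r[1]))

recipes = [("red pigment mix", (53.2, 80.1, 67.2)),
           ("navy pigment mix", (13.0, 30.0, -50.0))]
match = best_match((50.0, 70.0, 60.0), recipes)
```

A delta E of roughly 2 or less is commonly treated as visually indistinguishable in practice.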

Implementation of Multispectral Imaging System (멀티스펙트럼 영상 획득 시스템 구현)

  • Jin, Yoon-Jong;Lee, Moon-Hyun;Noh, Sung-Kyu;Park, Jong-Il
    • The HCI Society of Korea: Conference Proceedings
    • /
    • 2008.02a
    • /
    • pp.717-721
    • /
    • 2008
  • This paper proposes an imaging system that can efficiently measure the spectral reflectance of a scene using RGB cameras and LED light sources. The multispectral imaging system is composed of LED controllers, LED clusters, and RGB cameras, and it captures full-spectral images in real time. The system adopts a simple, empirical linear model to estimate the full spectral reflectance at each pixel. Since the model is linear, the reconstruction is efficient and stable. We estimated the spectral reflectance of various scenes using the system and showed the effectiveness of the proposed system.
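The linear reconstruction step amounts to one matrix-vector product per pixel: the estimated reflectance at each wavelength is a fixed linear combination of the camera responses captured under the different LED illuminants. In the sketch below the mapping matrix `W` and the measurement layout (R, G, B under two LED clusters, four output wavelengths) are invented for illustration; in the actual system such a matrix would be fitted empirically from training patches.

```python
def reconstruct_spectrum(measurements, W):
    """Estimate spectral reflectance as W @ measurements (pure-Python matvec)."""
    return [sum(w * m for w, m in zip(row, measurements)) for row in W]

# Hypothetical mapping: 6 camera responses -> reflectance at 4 wavelengths.
W = [[0.2, 0.0, 0.0, 0.3, 0.0, 0.0],
     [0.0, 0.3, 0.0, 0.0, 0.2, 0.0],
     [0.0, 0.0, 0.25, 0.0, 0.0, 0.25],
     [0.1, 0.1, 0.1, 0.1, 0.1, 0.1]]
spectrum = reconstruct_spectrum([0.8, 0.5, 0.3, 0.7, 0.4, 0.2], W)
```

Because the model is linear, reconstruction is a constant-time operation per pixel, which is what makes real-time capture feasible.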


A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.139-156
    • /
    • 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lack distinctive regional features. To overcome the difficulty of classifying emotion images, researchers propose CNN-based architectures tailored to emotion images every year. Studies on the relationship between color and human emotion have also been conducted, showing that different colors induce different emotions. Among deep learning studies, some have applied color information to image sentiment classification: using an image's color information in addition to the image itself improves emotion classification accuracy over training the model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion; both modify the result value based on statistics over the colors of the pictures. The two-color combinations most common across all training data were found in advance; during testing, the two-color combination most prominent in each test image was found, and the result values were corrected according to the distribution of that combination, using weighting expressions based on log and exponential functions. Emotion6, labeled with six emotions, and Artphoto, labeled with eight categories, were used as image data. The Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning to each.
    Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve the accuracy of an image sentiment classifier by modifying its result values based on color. Sixteen reference colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using Scikit-learn's clustering, the seven colors most prominent in each image are extracted, and their RGB coordinates are compared with the RGB coordinates of the 16 reference colors; that is, each extracted color is converted to the closest reference color. If combinations of three or more colors are used, too many distinct combinations occur and their distribution becomes scattered, so each combination has too little influence on the result value. To avoid this, two-color combinations were used to weight the model. Before training, the most common color combinations were found for all training images, and the distribution of color combinations per class was stored in a Python dictionary for use during testing. During testing, the two-color combination most prominent in each test image is found, its distribution in the training data is checked, and the result is corrected accordingly; several weighting equations were devised for this. The data set was randomly split 80:20, with 20% held out as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, training the model five times with different validation sets. Finally, performance was checked on the held-out test set.
    Adam was used as the optimizer, with the learning rate set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five consecutive epochs, the experiment was stopped. Early stopping was set to restore the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN architecture than when the CNN architecture was used alone.
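The color side of the pipeline, snapping cluster centers to reference colors and taking the two most frequent names, can be sketched as below. Only a subset of the 16 reference colors is listed, and their RGB coordinates are conventional values, not the paper's.

```python
from collections import Counter

# Illustrative RGB coordinates for some of the 16 reference colors.
NAMED = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "black": (0, 0, 0),
    "white": (255, 255, 255), "gray": (128, 128, 128),
}

def nearest_named(rgb):
    """Snap an RGB cluster center to the closest reference color by squared distance."""
    return min(NAMED, key=lambda n: sum((a - b) ** 2 for a, b in zip(rgb, NAMED[n])))

def top2_combination(cluster_centers):
    """Name each cluster center, then return the two most frequent names --
    the 'two-color combination' used to reweight the classifier output."""
    names = Counter(nearest_named(c) for c in cluster_centers)
    return tuple(sorted(n for n, _ in names.most_common(2)))

combo = top2_combination([(250, 10, 5), (240, 20, 10), (10, 10, 10), (245, 5, 5)])
```

The per-class frequency of each combination in the training data then supplies the correction weight at test time.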

A License Plate Detection Method Using Multiple-Color Model and Character Layout Information in Complex Background (다중색상 모델과 문자배치 정보를 이용한 복잡한 배경 영상에서의 자동차 번호판 추출)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.11
    • /
    • pp.1515-1524
    • /
    • 2008
  • This paper proposes a method that detects a license plate in a complex background using a multiple-color model and character layout information. The layout of a green license plate differs from that of a white license plate, so this study first assumes the plate color and then uses the corresponding layout information. It first extracts green areas from the input image using a multiple-color model that combines the HSI and YIQ color models with the RGB color model. If green areas are detected, it searches for the character layout of a green plate by analyzing the connected components in each area. If not, it searches for the character layout of a white plate over the whole image. Finally, it extracts the license plate by grouping the connected components that correspond to characters. Experimental results show that 98.1% of 419 input images are correctly detected and that the proposed method is robust to illumination, shadow, and weather conditions.
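The idea of voting a pixel "green" only when several color models agree can be sketched as below. HSV stands in for the paper's HSI, and all thresholds are illustrative guesses, not the paper's values.

```python
import colorsys

def is_plate_green(r, g, b):
    """Accept a pixel as plate-green only if RGB, HSV, and YIQ tests all agree.
    Requiring agreement across models suppresses false positives that any
    single model would admit."""
    rn, gn, bn = r / 255, g / 255, b / 255
    rgb_hit = g > r and g > b                      # green channel dominates
    h, s, v = colorsys.rgb_to_hsv(rn, gn, bn)
    hsv_hit = 0.2 < h < 0.45 and s > 0.2           # greenish hue, not washed out
    y, i, q = colorsys.rgb_to_yiq(rn, gn, bn)
    yiq_hit = i < 0.0                              # away from the orange I axis
    return rgb_hit and hsv_hit and yiq_hit
```

Pixels that pass would then be grouped into connected components and checked against the green-plate character layout.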


Vehicle Color Recognition Using Neural-Network (신경회로망을 이용한 차량의 색상 인식)

  • Kim, Tae-hyung;Lee, Jung-hwa;Cha, Eui-young
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2009.10a
    • /
    • pp.731-734
    • /
    • 2009
  • In this paper, we propose a method for recognizing the color of a vehicle in an image. A color feature vector is extracted from the vehicle region, and the vehicle color is recognized by a multi-layer perceptron trained with the backpropagation learning algorithm. The feature vector used as the input of the neural network is built from the RGB and HSI color models. The method recognizes the seven colors most commonly found among vehicles: white, silver, black, red, yellow, blue, and green. The color recognition performance was evaluated experimentally using images containing vehicles.
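The feature extraction stage might look like the sketch below: channel means from two color models concatenated into one vector for the perceptron. HSV is used as a stand-in for the paper's HSI, the choice of means is an assumption, and the naive hue average ignores hue's circularity.

```python
import colorsys

def color_feature_vector(pixels):
    """Build a 6-D feature vector (mean R, G, B and mean H, S, V, all in
    [0, 1]) from a list of (R, G, B) pixels in a vehicle region."""
    n = len(pixels)
    mr = sum(p[0] for p in pixels) / n / 255
    mg = sum(p[1] for p in pixels) / n / 255
    mb = sum(p[2] for p in pixels) / n / 255
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in pixels]
    mh = sum(x[0] for x in hsv) / n   # naive (non-circular) hue average
    ms = sum(x[1] for x in hsv) / n
    mv = sum(x[2] for x in hsv) / n
    return [mr, mg, mb, mh, ms, mv]
```

The resulting vector would be the input layer of the MLP, with one output unit per vehicle color class.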
