• Title/Summary/Keyword: RGB color information


Content-Based Image Retrieval using Scale-Space Theory (Scale-Space 이론에 기초한 내용 기반 영상 검색)

  • 오정범;문영식
    • Journal of KIISE:Software and Applications
    • /
    • v.26 no.1
    • /
    • pp.150-150
    • /
    • 1999
  • In this paper, a content-based image retrieval scheme based on scale-space theory is proposed. The existing methods using scale-space theory consider all scales for image retrieval, thereby requiring a lot of computation. To overcome this problem, the proposed algorithm utilizes a modified histogram intersection method to select candidate images from the database. The relative scale between a query image and a candidate image is calculated from the ratio of their histograms. Feature points are extracted from the candidates using a corner detection algorithm. The feature vector for each feature point is composed of RGB color components and differential invariants. For computing the similarity between a query image and a candidate image, the Euclidean distance measure is used. The proposed image retrieval method has been applied to various images, and the performance improvement over the existing methods has been verified.
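The abstract's modified histogram intersection is not spelled out here, but the basic idea of selecting candidates by intersecting normalized RGB histograms can be sketched as follows (the function names and the 8-bin quantization are illustrative, not the paper's exact formulation):

```python
import numpy as np

def rgb_histogram(image, bins=8):
    """Quantize each RGB channel into `bins` levels and build a joint 3-D histogram."""
    quantized = (image // (256 // bins)).reshape(-1, 3)
    hist, _ = np.histogramdd(quantized, bins=(bins, bins, bins),
                             range=((0, bins), (0, bins), (0, bins)))
    return hist / hist.sum()  # normalize so images of different sizes compare

def histogram_intersection(h_query, h_candidate):
    """Similarity in [0, 1]: sum of bin-wise minima of two normalized histograms."""
    return np.minimum(h_query, h_candidate).sum()

# Candidate selection: keep database images whose intersection with the query
# histogram exceeds a chosen threshold.
```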

An Implementation of Stereo Image Based Sighted Guiding Device Platform for the Visually Impaired (시각장애인을 위한 스테레오 영상기반 보행환경정보안내 단말 플랫폼 개발)

  • Oh, Bonjin;Park, Sangheon;Kim, Juwan
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.13 no.2
    • /
    • pp.73-81
    • /
    • 2018
  • This paper describes a device platform that blind users can wear to keep to a path and obtain information about their surroundings while walking independently. Compared to existing technologies, the proposed device can be used both indoors and outdoors, and maps need not be provided in advance. It is composed of a glasses-type device equipped with image sensors and a portable device that analyzes the sensor data for sighted guiding. RGB images and depth images are extracted to generate a walking map based on feature points. The device can also cope with the risk of collision with vertical obstacles such as bollards and traffic cones by applying an obstacle detection technique based on floor detection.
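The floor-detection-based obstacle check can be illustrated with a minimal sketch: assuming a per-row expected floor depth is known from calibration, pixels significantly closer than the floor are flagged as vertical obstacles. The function name, the calibration profile, and the margin value are all assumptions, not the paper's implementation:

```python
import numpy as np

def detect_vertical_obstacles(depth, floor_profile, margin=0.2):
    """Flag pixels that are significantly closer than the expected floor depth.

    depth:         (H, W) depth map in meters from the stereo camera.
    floor_profile: (H,) expected floor depth per image row (hypothetical calibration).
    margin:        how much closer (meters) a pixel must be to count as an obstacle.
    """
    expected = floor_profile[:, None]        # broadcast the per-row floor depth
    return depth < (expected - margin)       # closer than the floor -> obstacle
```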

Visual Touch Recognition for NUI Using Voronoi-Tessellation Algorithm (보로노이-테셀레이션 알고리즘을 이용한 NUI를 위한 비주얼 터치 인식)

  • Kim, Sung Kwan;Joo, Young Hoon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.3
    • /
    • pp.465-472
    • /
    • 2015
  • This paper presents visual touch recognition for an NUI (Natural User Interface) using the Voronoi-tessellation algorithm. The proposed method consists of three parts: hand region extraction, hand feature point extraction, and visual-touch recognition. To improve the robustness of hand region extraction, we propose using an RGB/HSI color model, the Canny edge detection algorithm, and spatial frequency information. In addition, to improve the accuracy of hand feature point extraction, we propose the use of the Douglas-Peucker algorithm. Also, to recognize the visual touch, we propose the use of the Voronoi-tessellation algorithm. Finally, we demonstrate the feasibility and applicability of the proposed algorithms through experiments.
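A Voronoi tessellation of the image plane can be computed discretely by assigning every pixel to its nearest feature point. This sketch (plain NumPy, illustrative only, not the paper's implementation) shows the idea:

```python
import numpy as np

def voronoi_labels(points, height, width):
    """Discrete Voronoi tessellation: label every pixel with the index of its
    nearest feature point (Euclidean distance, ties go to the lower index)."""
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    sites = np.asarray(points, dtype=float)
    # squared distance from every pixel to every site; the nearest site wins
    d2 = ((grid[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1).reshape(height, width)
```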

Code Extraction of Car License Plates using Color Information and Fuzzy Binarization (컬러 정보와 퍼지 이진화를 이용한 차량 번호판의 코드 추출)

  • 김정은;엄인현;김정민;최정인;김광백
    • Proceedings of the Korea Multimedia Society Conference
    • /
    • 2003.05b
    • /
    • pp.618-623
    • /
    • 2003
  • In this paper, we propose a method for extracting the individual codes of a car license plate using RGB color information and fuzzy binarization. To extract the license plate region from images of non-commercial vehicles, the proposed method designates areas with a dense distribution of green as candidate plate regions, and extracts as the plate region the part of a candidate region with a high density of white. For individual code extraction, noise in the extracted plate region is removed with a 3×3 Sobel mask, and the plate region is binarized by applying the fuzzy binarization method. Individual codes are then extracted from the binarized plate region by applying a contour-following algorithm. To evaluate the performance of the proposed method, we applied it to real non-commercial license plates and confirmed that the extraction rate of individual codes in the plate region is improved over existing methods.
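The exact fuzzy membership function used in the paper is not given here; a minimal sketch of one common fuzzy binarization scheme, with a linear membership ramp between the image's minimum and maximum intensities, might look like this (the `alpha` cut value is an assumption):

```python
import numpy as np

def fuzzy_binarize(gray, alpha=0.5):
    """Fuzzy binarization sketch: map intensities to [0, 1] memberships with a
    linear ramp between the image minimum and maximum, then cut at `alpha`."""
    lo, hi = float(gray.min()), float(gray.max())
    if hi == lo:                              # flat image: nothing to separate
        return np.zeros_like(gray, dtype=bool)
    membership = (gray.astype(float) - lo) / (hi - lo)
    return membership >= alpha
```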


Breaking character-based CAPTCHA using color information (색상 정보를 이용한 문자 기반 CAPTCHA의 무력화)

  • Kim, Sung-Ho;Nyang, Dae-Hun;Lee, Kyung-Hee
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.19 no.6
    • /
    • pp.105-112
    • /
    • 2009
  • Nowadays, Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs) are widely used to prevent automated software agents from performing various attacks, such as creating accounts, advertising, and sending spam mail. In early CAPTCHAs, the characters were only slightly distorted so that users could easily recognize them. For that reason, many such CAPTCHAs could easily be broken using techniques such as image processing and artificial intelligence. As a countermeasure, adding noise to CAPTCHAs and distorting the characters more strongly made attacks on CAPTCHAs more difficult. Naturally, it also made it harder for users to read the characters. To improve the readability of CAPTCHAs, some use different colors for the characters. However, the use of different colors gives an advantage to an adversary who wants to break the CAPTCHA. In this paper, we suggest a method of increasing the recognition ratio of CAPTCHAs based on colors.
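The color cue the paper exploits can be illustrated simply: if each character in a CAPTCHA is rendered in its own flat color, segmentation reduces to grouping pixels by color value. This sketch assumes exact, noise-free colors, which real CAPTCHAs would not provide:

```python
import numpy as np

def split_by_color(image):
    """Return one boolean mask per distinct RGB color in the image; if every
    character uses its own flat color, each mask isolates one character."""
    flat = image.reshape(-1, 3)
    colors = np.unique(flat, axis=0)
    masks = {}
    for color in colors:
        # pixels matching this exact color on all three channels
        masks[tuple(int(c) for c in color)] = np.all(image == color, axis=2)
    return masks
```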

A Scene Change Detection Technique using the Weighted $\chi^2$-test and the Automated Threshold-Decision Algorithm (변형된 $\chi^2$- 테스트와 자동 임계치-결정 알고리즘을 이용한 장면전환 검출 기법)

  • Ko, Kyong-Cheol;Rhee, Yang-Won
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.4 s.304
    • /
    • pp.51-58
    • /
    • 2005
  • This paper proposes a robust scene change detection technique that uses a weighted chi-square test and an automated threshold-decision algorithm. The weighted chi-square test subdivides the difference values of the individual color channels by calculating color intensities according to the NTSC standard, and detects scene changes by joining the weighted color intensities to the standard chi-square test, which emphasizes comparative color difference values. The automated threshold-decision algorithm uses the frame-to-frame difference values obtained by the weighted chi-square test. First, the average of all difference values is calculated; then another average is calculated from the difference values using the previous average; finally, the most appropriate mid-average value is found and taken as the threshold. Experimental results show that the proposed algorithms are effective and outperform previous approaches.
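A weighted chi-square frame difference along these lines can be sketched as follows; the NTSC luminance weights (0.299, 0.587, 0.114) are standard, but the combination scheme is illustrative rather than the paper's exact formula:

```python
import numpy as np

NTSC_WEIGHTS = {"r": 0.299, "g": 0.587, "b": 0.114}  # standard NTSC luminance weights

def chi_square_diff(h1, h2, eps=1e-8):
    """Chi-square-style distance between two histograms of one color channel."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def weighted_frame_difference(hists_a, hists_b):
    """Combine the per-channel chi-square differences of two consecutive frames,
    weighting each channel by its NTSC luminance contribution."""
    return sum(NTSC_WEIGHTS[c] * chi_square_diff(hists_a[c], hists_b[c])
               for c in NTSC_WEIGHTS)

# A scene change is declared when the weighted difference exceeds a threshold,
# which the paper derives automatically from the sequence's difference values.
```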

A Cross-cultural Study on the Affection of Color with Variation of Tone and Chroma for Automotive Visual Display

  • Jung, Jinsung;Park, Jaekyu;Choe, Jaeho;Jung, Eui S.
    • Journal of the Ergonomics Society of Korea
    • /
    • v.36 no.2
    • /
    • pp.123-144
    • /
    • 2017
  • Objective: The objective of this study is to evaluate how users from different cultures and races, including North America, Europe, and Southeast Asia, perceive colors viewed on an automotive visual display. In particular, it aims to identify the effects of variation in the tone and chroma of representative color groups by analyzing differences in affect across cultures and races for colors constructed by varying tone and chroma around representative colors. Background: The colors of the menu, information display, or background viewed on an automotive visual display are an important factor in stimulating a consumer's affect, and effort is therefore made to express the vehicle's brand and product image through color. Existing studies focus on the intrinsic characteristics of colors, but an affective approach that accounts for cultural and racial differences, considering the tone and chroma variation of the colors currently used in automotive visual displays, is lacking. Method: To capture the visual affect felt by users, this study extracted color-related affective adjectives from the existing literature and an adjective dictionary, and derived human affective dimensions for colors through the evaluation of various colors. Before the affect evaluation, the basic light sources red (R), green (G), and blue (B) constituting the colors used in automotive visual displays were each defined as a representative color group. Within each color group, the evaluation targets were colors varied in tone and chroma, produced by changing the RGB values of the remaining two light sources. The study then carried out affect evaluation of the constructed colors with subjects of different cultures and races.
Results: Evaluating the constructed colors against the representative affective dimensions showed statistically significant differences between groups of different cultures and races. S-N-K post-hoc analysis of the colors showing significant differences classified North America and Europe as heterogeneous groups. In some cases Korea was grouped with North America, but it was mainly classified as homogeneous with Europe. Conclusion: The representative affective dimensions for colors on an automotive visual display were identified as three: passionate, neat, and masculine. Based on these, the affect of Korean and European subjects toward the constructed colors differed significantly from that of North American subjects when cultural and racial factors were reflected in the evaluation. Among the representative color groups, larger cultural and racial differences in affect appeared for red and green than for blue, and the variation in affect was largest for red. Application: This study analyzed the correlations between affect and culture and race for colors constructed by varying tone and chroma within the representative color groups of a visual display. The results are expected to be useful in coordinating and selecting colors for automotive visual displays with culture and race taken into account.

Extraction of Representative Color of Digital Images Using Histogram of Hue Area and Non-Hue Area (색상영역과 비색상영역의 히스토그램을 이용한디지털 영상의 대표색상 추출)

  • Kwak, Nae-Joung;Hwang, Jae-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.2
    • /
    • pp.1-10
    • /
    • 2010
  • With the expansion of digital content applications, color standards have been actively studied. Studies related to such standards are needed to represent image features with color, along with methods for extracting color features suited to various applications. In this paper, we set 50 colors from the Munsell color system as base colors, obtain a color histogram that shows the distribution of colors in an image, and propose a method to extract representative colors from that histogram. First, we convert an input image from the RGB color space to the HSI color space and split it into a hue area and a non-hue area. To split the two areas, we use a fixed threshold together with a perception function of the color area that reflects the subjective vision of human beings. We compute a histogram for each area, combine the histogram of the hue area and the histogram of the non-hue area into a total histogram, and extract the representative colors from it. To evaluate the proposed method, we made 18 test images and applied both conventional methods and the proposed method to them. The methods were also applied to public images and the results analyzed. The proposed method represents the characteristics of an image's color distribution well and concentrates color frequencies onto the representative colors, so the representative colors can be used in various applications.
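The hue/non-hue split can be illustrated with a fixed saturation threshold (the paper additionally uses a perception function, which is omitted here; the bin counts and threshold are assumptions):

```python
import colorsys
import numpy as np

def split_hue_histograms(pixels, sat_threshold=0.2, hue_bins=12, gray_bins=4):
    """Split pixels into a hue (chromatic) area and a non-hue (achromatic) area
    with a fixed saturation threshold, then histogram each area separately."""
    hue_hist = np.zeros(hue_bins)
    gray_hist = np.zeros(gray_bins)
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if s >= sat_threshold:                            # chromatic: bin by hue
            hue_hist[min(int(h * hue_bins), hue_bins - 1)] += 1
        else:                                             # achromatic: bin by intensity
            gray_hist[min(int(v * gray_bins), gray_bins - 1)] += 1
    return hue_hist, gray_hist
```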

Implementation of Urinalysis Service Application based on MobileNetV3 (MobileNetV3 기반 요검사 서비스 어플리케이션 구현)

  • Gi-Jo Park;Seung-Hwan Choi;Kyung-Seok Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.4
    • /
    • pp.41-46
    • /
    • 2023
  • Urine is produced as the body excretes waste products from the blood; it is easy to collect and contains various substances. Urinalysis is used to check for diseases, health conditions, and urinary tract infections. There are three methods of urinalysis: physical property tests, chemical tests, and microscopic tests, and chemical test results can easily be confirmed using urine test strips. A variety of items can be tested on a urine test strip, through which various diseases can be identified. Recently, with the spread of smartphones, research on reading urine test strips with a smartphone has been conducted. One method detects and reads the color change of a urine test strip using the RGB values and a color-difference formula, but its accuracy is lowered by various environmental factors. This paper applies a deep learning model to solve this problem. In particular, color discrimination of a urine test strip on a smartphone is improved using a lightweight CNN (Convolutional Neural Network) model. CNNs are useful for image recognition and pattern finding, and lightweight versions are available, which makes it possible to run a deep learning model on a smartphone and extract accurate urine test results. Urine test strips were photographed in various environments to prepare training images, and a urinalysis service application was designed using MobileNetV3.
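The RGB color-difference baseline that the abstract says is error-prone can be sketched as nearest-reference matching by Euclidean distance; the reference colors below are purely illustrative, not real reagent values:

```python
import math

def classify_pad(rgb, reference_colors):
    """Match a measured pad color to the nearest reference color by Euclidean
    distance in RGB space (the baseline the deep learning model replaces)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(reference_colors, key=lambda label: dist(rgb, reference_colors[label]))
```

Lighting and white-balance changes shift all three channels at once, which is exactly why this simple distance breaks down outside controlled conditions.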

Hue Shift Model and Hue Correction in High Luminance Display (고휘도 디스플레이의 색상이동모델과 색 보정)

  • Lee, Tae-Hyoung;Kwon, Oh-Seol;Park, Tae-Yong;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.4 s.316
    • /
    • pp.60-69
    • /
    • 2007
  • The human eye usually experiences a loss of color sensitivity when subjected to high levels of luminance, and it perceives a discrepancy in color between high- and normal-luminance displays, generally known as a hue shift. Accordingly, this paper models the hue-shift phenomenon and proposes a hue-correction method to provide a perceptual match between high- and normal-luminance displays. The amount of hue shift is determined by perceived hue matching experiments. The phenomenon is first observed at three lightness levels, with the ratio of luminance between the high- and normal-luminance displays held fixed while the perceived hue matching experiments are performed. To quantify the hue-shift phenomenon over the whole hue angle, color patches with the same lightness are created and equally spaced over the hue angle. These patches are then displayed one by one on both displays at the given luminance ratio. Next, the hue value of each patch on the high-luminance display is adjusted by observers until the perceived hue of the patches on both displays appears visually identical. After the hue-shift values are obtained, they are fit piecewise so that the shifted hue amount can be approximated for arbitrary hue values of pixels in a high-luminance display and then used for correction. Essentially, the input RGB values of an image are converted to CIELAB values, from which LCh (lightness, chroma, and hue) values are calculated to obtain the hue of every pixel. These hue values are shifted by the amounts given by the hue-shift model functions. Finally, the corrected CIELAB values are calculated from the corrected hue values, after which the output RGB values for all pixels are estimated. For evaluation, an observer preference test was performed with the hue-shift results, and almost all observers concluded that the images produced by the hue-shift model were visually matched with the images on the normal-luminance display.
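The piecewise fit of measured hue-shift values can be sketched with linear interpolation between anchor hues; the anchor values below are invented for illustration, not the paper's measured data:

```python
import numpy as np

# Hypothetical hue-shift measurements (degrees): the perceived shift at a few
# anchor hues, standing in for the paper's experimentally fitted values.
ANCHOR_HUES  = np.array([0.0, 90.0, 180.0, 270.0, 360.0])
ANCHOR_SHIFT = np.array([5.0, -3.0, 2.0, -4.0, 5.0])

def correct_hue(hue_deg):
    """Subtract the piecewise-linearly interpolated perceived hue shift from a
    pixel's LCh hue, wrapping the result to [0, 360)."""
    shift = np.interp(hue_deg % 360.0, ANCHOR_HUES, ANCHOR_SHIFT)
    return (hue_deg - shift) % 360.0
```

In the full pipeline this correction sits between the RGB-to-CIELAB-to-LCh conversion and the conversion back to output RGB.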