• Title/Summary/Keyword: YIQ

Face region detection algorithm of natural-image (자연 영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.7 no.1
    • /
    • pp.55-60
    • /
    • 2014
  • In this paper, we propose a method for extracting face regions from natural images using skin-color hue and saturation together with facial feature extraction. The proposed algorithm consists of a lighting correction step and a face detection step. The lighting correction step applies a correction function to compensate for changes in illumination. The face detection step extracts skin-color regions by computing Euclidean distances between the input image and feature vectors of hue and chroma obtained from 20 skin-color sample images. Within the extracted candidate regions, the eyes are detected using the C component of the CMY color model and the mouth using the Q component of the YIQ color model. The face region is then determined from these candidates using knowledge of the human face. In an experiment with 10 natural face images as input, the method achieved a face detection rate of 100%.
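Below is a minimal NumPy sketch (not the authors' code) of the colour-space conversions this abstract relies on: the Q plane of YIQ, used for mouth candidates, and the C plane of CMY, used for eye candidates. The thresholds in the usage example are illustrative assumptions.

```python
import numpy as np

def rgb_to_yiq(rgb):
    """rgb: float array in [0, 1], shape (H, W, 3) -> Y, I, Q planes."""
    m = np.array([[0.299,  0.587,  0.114],
                  [0.596, -0.274, -0.322],
                  [0.211, -0.523,  0.312]])
    yiq = rgb @ m.T
    return yiq[..., 0], yiq[..., 1], yiq[..., 2]

def rgb_to_cmy(rgb):
    """Subtractive CMY model: C = 1 - R, M = 1 - G, Y = 1 - B."""
    cmy = 1.0 - rgb
    return cmy[..., 0], cmy[..., 1], cmy[..., 2]

if __name__ == "__main__":
    img = np.random.rand(120, 160, 3)      # stand-in for an input image
    _, _, q = rgb_to_yiq(img)
    c, _, _ = rgb_to_cmy(img)
    mouth_candidates = q > 0.05            # illustrative threshold, not the paper's
    eye_candidates = c > 0.6               # illustrative threshold, not the paper's
    print(mouth_candidates.sum(), eye_candidates.sum())
```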

The Fire Detection Method Using Image Logical Operation and Fire Feature (영상 논리곱 연산과 화재 특징자를 이용한 화재 검출 방법)

  • Piao, Peng-Ji;Moon, Kwang-Seok;Ryu, Ji-Goo;Jung, Shin-Il;Kim, Jong-Nam
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.594-597
    • /
    • 2010
  • This paper proposes a fire detection algorithm that detects the visual features of fire with a low-cost camera. Previous work relied on sensor cameras, whereas this method uses a very simple camera. Candidate fire regions are detected with the YCbCr and YIQ color models, candidate areas are extracted from the boundaries of the fire, and noise removal is then performed. The results of the proposed algorithm are very satisfactory regardless of environmental changes around the fire area.
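The following is a hedged OpenCV sketch of the candidate-masking idea: a YCbCr rule combined with a Q-component test, followed by simple noise removal. The specific decision rules and constants are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def fire_candidates(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.int16)
    y, cr, cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]

    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    q = 0.211 * rgb[..., 0] - 0.523 * rgb[..., 1] + 0.312 * rgb[..., 2]

    mask_ycbcr = (y > cb) & (cr > cb)        # bright, red-dominant pixels (assumed rule)
    mask_q = q > 0.05                        # illustrative Q threshold
    mask = (mask_ycbcr & mask_q).astype(np.uint8) * 255

    # simple noise removal, in the spirit of the abstract
    return cv2.medianBlur(mask, 5)

# usage: mask = fire_candidates(cv2.imread("frame.png"))
```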

Optimal Combination of Component Images for Segmentation of Color Codes (칼라 코드의 영역 분할을 위한 성분 영상들의 최적 조합)

  • Kwon B. H.;Yoo H. J.;Kim T. W.;Kim K. D.
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.1
    • /
    • pp.33-42
    • /
    • 2005
  • Identifying color codes requires precise color information about their constituent regions and is far from trivial, because colors usually suffer severe distortion throughout the entire procedure from printing to image acquisition. To identify colors accurately, we need a reliable segmentation method that separates different color regions from each other, which allows us to process all pixels in a color region statistically instead of only a subset of them. Color image segmentation can be accomplished by performing edge detection on component images. In this paper, we detected edges separately on component images from the RGB, HSI, and YIQ color models, and carried out mathematical analysis and experiments to find the pair of component images that, when combined, provides the best edge image. The best result was obtained by combining the Y- and R-component edge images.
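A short sketch of the best-performing combination reported above: edges detected separately on the Y component (from YIQ) and the R component, then merged. Sobel magnitude with an arbitrary threshold stands in here for whatever edge operator the paper actually used.

```python
import cv2
import numpy as np

def component_edges(plane, thresh=40):
    gx = cv2.Sobel(plane, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(plane, cv2.CV_32F, 0, 1, ksize=3)
    return (cv2.magnitude(gx, gy) > thresh).astype(np.uint8) * 255

def combined_edge_image(bgr):
    r = bgr[..., 2].astype(np.float32)
    # Y of YIQ is the usual luma combination of R, G, B
    y = (0.299 * bgr[..., 2] + 0.587 * bgr[..., 1] + 0.114 * bgr[..., 0]).astype(np.float32)
    return cv2.bitwise_or(component_edges(y), component_edges(r))

# usage: edges = combined_edge_image(cv2.imread("colorcode.png"))
```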

Robot vision system for face tracking using color information from video images (로봇의 시각시스템을 위한 동영상에서 칼라정보를 이용한 얼굴 추적)

  • Jung, Haing-Sup;Lee, Joo-Shin
    • Journal of Advanced Navigation Technology
    • /
    • v.14 no.4
    • /
    • pp.553-561
    • /
    • 2010
  • This paper proposes a face tracking method that can be applied effectively to a robot's vision system. The proposed algorithm tracks facial areas after detecting the regions of motion in the video. Motion is detected by computing the difference image between two consecutive frames and then removing noise with a median filter and erosion and dilation operations. To extract skin color from the moving area, the color information of sample images is used. The skin-color region and the background are separated by generating membership functions from the MIN-MAX values as fuzzy data and evaluating similarity. Within the face candidate region, the eyes are detected from the C channel of the CMY color space and the mouth from the Q channel of the YIQ color space. The face region is then tracked using knowledge-based features of the detected eyes and mouth. The experiment used 1,500 video frames from 10 subjects, 150 frames per subject. Motion areas were detected in 1,435 frames (a detection rate of 95.7%), and faces were tracked correctly in 1,401 frames (97.6%).
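A rough OpenCV sketch of the motion step described above: difference of two consecutive frames, a median filter, and erosion/dilation to clean the mask. The threshold and kernel sizes are illustrative assumptions.

```python
import cv2
import numpy as np

def motion_mask(prev_bgr, curr_bgr, thresh=25):
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr, prev)                       # difference image
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)                       # impulse-noise removal
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)         # drop small specks
    mask = cv2.dilate(mask, kernel, iterations=2)        # restore the moving region
    return mask

# usage: mask = motion_mask(frame_t_minus_1, frame_t)
```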

Design and Implementation of Efficient Plate Number Region Detecting System in Vehicle Number Plate Image (자동차 번호판 영상에서 효율적인 번호판 영역 검출 시스템의 설계 및 개발)

  • Lee Hyun-Chang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.5 s.37
    • /
    • pp.87-94
    • /
    • 2005
  • This paper describes a method for detecting the number plate region in a color image of a car. The number plate region generally shows standardized colors according to the type of vehicle. Accordingly, we combine the H component of the HSI color model and the Q component of the YIQ color model. Used naively, however, this approach has the drawback of a long total processing time. In this paper, therefore, the H- and Q-component candidate extraction steps are performed concurrently, and the candidate regions extracted in each step are then compared and combined, so that the comparison operation does not have to be applied to every pixel of the image; this makes the plate region extraction fast. We also present the results produced at each step and compare extraction times for different image resolutions.
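An illustrative sketch of the combination step: an H-channel candidate mask and a Q-channel candidate mask are computed and then intersected. OpenCV's HSV hue is used as a stand-in for HSI hue, and the numeric ranges below are assumptions rather than values from the paper.

```python
import cv2
import numpy as np

def plate_candidates(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h = hsv[..., 0]                                          # hue in [0, 179]
    mask_h = ((h >= 35) & (h <= 85)).astype(np.uint8) * 255  # assumed greenish band

    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    q = 0.211 * rgb[..., 0] - 0.523 * rgb[..., 1] + 0.312 * rgb[..., 2]
    mask_q = ((q > -0.2) & (q < -0.02)).astype(np.uint8) * 255  # assumed Q range

    return cv2.bitwise_and(mask_h, mask_q)                   # pixels passing both tests

# usage: candidates = plate_candidates(cv2.imread("car.png"))
```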

Algorithm of Face Region Detection in the TV Color Background Image (TV컬러 배경영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-Shin
    • Journal of Advanced Navigation Technology
    • /
    • v.15 no.4
    • /
    • pp.672-679
    • /
    • 2011
  • In this paper, a skin-color-based algorithm for detecting face regions in TV images is proposed. First, a reference image is built from sampled skin color, and face region candidates are extracted using the Euclidean distance between the reference and the pixels of the TV image. The eye image is detected using the mean and standard deviation of the color-difference component between Y and C after converting the RGB image to the CMY color model. The lip image is detected using the Q component after converting the RGB image to the YIQ color space. The face region is then extracted by combining the eye and lip images with a logical operation and applying knowledge of the face. To verify the proposed method, experiments were performed on frontal color images captured from TV. The experimental results show that face regions can be detected irrespective of the location and size of the face.
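A small NumPy sketch of the Euclidean-distance step: each pixel is compared with a reference skin colour and kept when the distance falls below a threshold. The reference vector and threshold are placeholders; the paper derives them from sampled skin colour.

```python
import numpy as np

def skin_candidates(rgb, reference=(0.80, 0.55, 0.45), max_dist=0.25):
    """rgb: float image in [0, 1]; returns a boolean candidate mask."""
    ref = np.asarray(reference, dtype=np.float32)
    dist = np.linalg.norm(rgb - ref, axis=-1)   # per-pixel Euclidean distance
    return dist < max_dist

# usage (assuming a float RGB frame):
# mask = skin_candidates(frame_rgb)
```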

Face Region Detection Algorithm using Euclidean Distance of Color-Image (칼라 영상에서 유클리디안 거리를 이용한 얼굴영역 검출 알고리즘)

  • Jung, Haing-sup;Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.3
    • /
    • pp.79-86
    • /
    • 2009
  • This study proposes a method of detecting the facial area by calculating Euclidean distances among skin-color elements and extracting facial features. The proposed algorithm consists of lighting calibration and face detection. The lighting calibration step compensates for changes in illumination. The face detection step extracts skin-color regions by computing Euclidean distances between the input image and feature vectors of hue and chroma obtained from 20 skin-color sample images. From the extracted face-region candidates, the eyes were detected in the C space of the CMY color model and the mouth in the Q space of the YIQ color model, and the face region was then determined based on knowledge of the ordinary face. In an experiment with 40 color face images as input, the method achieved a face detection rate of 100%.
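A hedged sketch of the final combination step: eye candidates from the C component of CMY, mouth candidates from the Q component of YIQ, and a very simple "eyes above the mouth" rule standing in for the face knowledge the paper applies. All thresholds and the rule itself are illustrative.

```python
import numpy as np

def face_from_eye_mouth(rgb, c_thresh=0.6, q_thresh=0.05):
    """rgb: float image in [0, 1]; returns True if the simple knowledge rule holds."""
    c = 1.0 - rgb[..., 0]                                   # C of CMY
    q = 0.211 * rgb[..., 0] - 0.523 * rgb[..., 1] + 0.312 * rgb[..., 2]
    eye_rows, _ = np.nonzero(c > c_thresh)                  # eye candidate pixels
    mouth_rows, _ = np.nonzero(q > q_thresh)                # mouth candidate pixels
    if eye_rows.size == 0 or mouth_rows.size == 0:
        return False
    # illustrative knowledge rule: eye candidates sit above mouth candidates
    return np.median(eye_rows) < np.median(mouth_rows)
```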

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.12
    • /
    • pp.431-438
    • /
    • 2020
  • In this paper, a method for extracting the lip contour with multiple thresholds is proposed. Spyridonos et al. proposed a lip contour extraction method: first, a Q image is obtained by transforming RGB into YIQ; second, the lip corner points are found by change-point detection and the Q image is split into upper and lower parts at the corner points. Candidate lip contours are obtained by applying a threshold to the Q image; for each candidate, a feature variance is calculated, and the contour with the maximum variance is adopted as the final contour. The feature variance 'D' is based on the absolute differences near the contour points. The conventional method has three problems. The first concerns the lip corner points: the variance calculation depends on many skin pixels, which reduces accuracy and affects the splitting of the Q image. Second, no color systems other than YIQ are analyzed; YIQ works well, but other systems such as HSV, CIELUV, and YCbCr should also be considered. The final problem concerns the selection of the optimal contour: the conventional method uses the maximum of the average feature variance over the pixels near the contour points, and this criterion shrinks the extracted contour compared with the ground-truth contour. To solve the first problem, the proposed method excludes some of the skin pixels, giving a 30% performance increase. For the second problem, the HSV, CIELUV, and YCbCr coordinate systems were tested, and no dependence of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted instead of the maximum of the average, giving a 46% performance increase. Combining all of these changes, the proposed method is twice as accurate and stable as the conventional method.
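A simplified sketch of the multi-threshold idea: sweep several thresholds over the Q image, extract a candidate boundary for each, score it, and keep the best one. The score here (total contrast along the boundary rather than an average) loosely mirrors the abstract's final modification, but it is not the authors' feature variance 'D', and the threshold range is an assumption.

```python
import numpy as np

def best_lip_contour(q, thresholds=np.linspace(0.02, 0.12, 11)):
    """q: Q plane (float) of a mouth-region image; returns a boolean boundary mask."""
    best_score, best_mask = -np.inf, None
    for t in thresholds:
        mask = q > t                                     # candidate lip region
        # boundary pixels: region pixels with at least one 4-neighbour outside
        inner = mask[1:-1, 1:-1] & ~(mask[:-2, 1:-1] & mask[2:, 1:-1] &
                                     mask[1:-1, :-2] & mask[1:-1, 2:])
        boundary = np.zeros_like(mask)
        boundary[1:-1, 1:-1] = inner
        if boundary.sum() == 0:
            continue
        # total (not average) contrast along the candidate contour
        grad = (np.abs(q[1:-1, 1:-1] - q[:-2, 1:-1]) +
                np.abs(q[1:-1, 1:-1] - q[1:-1, :-2]))
        score = grad[inner].sum()
        if score > best_score:
            best_score, best_mask = score, boundary
    return best_mask
```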

The Extraction of Face Regions based on Optimal Facial Color and Motion Information in Image Sequences (동영상에서 최적의 얼굴색 정보와 움직임 정보에 기반한 얼굴 영역 추출)

  • Park, Hyung-Chul;Jun, Byung-Hwan
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.2
    • /
    • pp.193-200
    • /
    • 2000
  • Extraction of face regions is required for a head-gesture interface, which is a natural user interface. Recently, many researchers have become interested in using color information to detect face regions in image sequences. Two widely used color models, HSI and YIQ, were selected for this study; specifically, the H component of HSI and the I component of YIQ are used. Given the difference between these color components, this study compares the face-region detection performance of the two models. First, we search for the optimal facial-color range of each component by examining the detection accuracy of facial-color regions over varying threshold ranges. We then compare the accuracy of the face box for both color models when using the optimal facial-color range and motion information. As a result, a range of 0°~14° for the H component and a range of -22°~-2° for the I component turned out to be the optimal ranges for extracting face regions. When only the optimal facial-color range is used, the I component is better than the H component by about 10% in face-region extraction accuracy; when the optimal facial color and motion information are used together, the I component is still better by about 3%.
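A cautious sketch of applying such per-component ranges. Hue is computed in degrees, and the I component with the standard YIQ transform, so how the reported -22°~-2° range maps onto a particular normalisation is an assumption here, not something stated in the abstract; the default ranges are only placeholders.

```python
import numpy as np

def face_color_masks(rgb, h_range=(0.0, 14.0), i_range=(-22.0, -2.0)):
    """rgb: float image in [0, 1]; returns (hue-based mask, I-based mask)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    d = np.where(mx == mn, 1.0, mx - mn)
    # hue in degrees (HSV-style hue as a stand-in for HSI hue)
    h = np.zeros_like(mx)
    h = np.where(mx == r, ((g - b) / d) % 6.0, h)
    h = np.where(mx == g, (b - r) / d + 2.0, h)
    h = np.where(mx == b, (r - g) / d + 4.0, h)
    h = np.where(mx == mn, 0.0, h) * 60.0
    # I of YIQ, scaled to an assumed 0..255 input convention
    i = (0.596 * r - 0.274 * g - 0.322 * b) * 255.0
    mask_h = (h >= h_range[0]) & (h <= h_range[1])
    mask_i = (i >= i_range[0]) & (i <= i_range[1])
    return mask_h, mask_i
```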

Detecting Boundaries between Different Color Regions in Color Codes

  • Kwon B. H.;Yoo H. J.;Kim T. W.
    • Proceedings of the IEEK Conference
    • /
    • 2004.08c
    • /
    • pp.846-849
    • /
    • 2004
  • Compared with the bar code, which is widely used for commercial product management, the color code is advantageous in both appearance and the number of possible combinations, and its application areas are complementary to those of RFID. However, because the distortion of color component values is severe, easily exceeding 50% of the scale, color codes have had difficulty finding industrial applications. To improve the accuracy of color code recognition, it is better to process an entire color region statistically and then determine its color than to process only a few samples selected from the region. For this purpose, we propose a technique for detecting edges between color regions, which is indispensable for accurate segmentation of the color regions. We first transform the RGB color image into the HSI and YIQ color models and extract the I and Y components, respectively. We then perform Canny edge detection on each component image. Each edge image usually has some edges missing; however, since the resulting edge images are complementary, an optimal edge image can be obtained by combining them.
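A minimal sketch of the complementary-edge idea: Canny applied to the I component of HSI (intensity, taken here as the mean of R, G and B) and to the Y component of YIQ (luma), with the two edge maps OR-combined. The Canny thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def combined_edges(bgr, lo=50, hi=150):
    b = bgr[..., 0].astype(np.float32)
    g = bgr[..., 1].astype(np.float32)
    r = bgr[..., 2].astype(np.float32)
    i_plane = ((r + g + b) / 3.0).astype(np.uint8)                   # HSI intensity
    y_plane = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)   # YIQ luma
    edges_i = cv2.Canny(i_plane, lo, hi)
    edges_y = cv2.Canny(y_plane, lo, hi)
    return cv2.bitwise_or(edges_i, edges_y)    # the two maps tend to miss different edges

# usage: edge_img = combined_edges(cv2.imread("colorcode.png"))
```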
