• Title/Summary/Keyword: Color Saturation


Novel Defog Algorithm via Evaluation of Local Color Saturation (국부영역 색포화 평가 방법을 통한 안개제거 알고리즘)

  • Park, Hyungjo;Park, Dubok;Ko, Hanseok
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.3 / pp.119-128 / 2014
  • This paper presents a new method for improving the quality of images corrupted by external sources that attenuate and scatter light, such as dust, water droplets, and fog. Conventional defog methods typically produce restored images with low contrast and color over-saturation in some regions because the airlight and the medium transmission are mis-estimated. To mitigate these problems, we propose a robust airlight selection method and a local saturation evaluation method for estimating the medium transmission. The proposed method addresses the wrong transmission and over-saturation problems caused by mis-estimated airlight and thereby improves the quality of the restored image. Experiments comparing the proposed method against conventional ones confirm the improved accuracy of atmospheric light estimation and the quality of the restored images with regard to both objective and subjective performance measures.
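
A minimal sketch (not the authors' algorithm) of the restoration step that defog methods of this kind share: inverting the atmospheric scattering model I = J·t + A·(1 − t), with the transmission clipped so that restored colors are not amplified into over-saturation. The function and parameter names are illustrative assumptions.

```python
import numpy as np

def defog(image, airlight, transmission, t_min=0.1):
    """image: HxWx3 float array in [0, 1]; airlight: length-3 array A;
    transmission: HxW array t estimated by any method (e.g. a dark channel prior)."""
    t = np.clip(transmission, t_min, 1.0)[..., None]  # floor t to limit amplification
    restored = (image - airlight) / t + airlight      # invert I = J*t + A*(1 - t)
    return np.clip(restored, 0.0, 1.0)                # keep the result in gamut
```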

Improved Mean-Shift Tracking using Adoptive Mixture of Hue and Saturation (색상과 채도의 적응적 조합을 이용한 개선된 Mean-Shift 추적)

  • Park, Han-dong;Oh, Jeong-su
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.10 / pp.2417-2422 / 2015
  • Mean-Shift tracking using hue alone fails to track the object when the background has a hue similar to that of the object. This paper proposes an improved Mean-Shift tracking algorithm that uses new data instead of hue. The new data are generated by an adaptive mixture of hue and saturation, two attributes with low correlation. That is, the proposed algorithm selects as the main attribute the color component that distinguishes the object from the background well and as the secondary attribute the one that does not, and places their upper 4 bits in the upper and lower 4 bits of the mixture data, respectively. In a tracking environment where the background has a hue similar to the object, the proposed algorithm selects saturation as the main attribute and tracks the object properly, keeping the tracking error within a maximum of 2.0~4.2 pixels and an average of 0.49~1.82 pixels.
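
A hedged sketch of the bit-level mixture the abstract describes: the upper 4 bits of the main color attribute occupy the high nibble of the new data and the upper 4 bits of the secondary attribute occupy the low nibble. The array names and the 0-255 scaling are assumptions (OpenCV, for example, stores 8-bit hue as 0-179 and would need rescaling).

```python
import numpy as np

def mix_attributes(primary, secondary):
    """primary, secondary: uint8 arrays of the same shape (e.g. hue and
    saturation channels scaled to 0-255). Returns the combined 8-bit data."""
    high = primary & 0xF0          # main attribute's upper 4 bits stay in the high nibble
    low = (secondary >> 4) & 0x0F  # secondary attribute's upper 4 bits fill the low nibble
    return (high | low).astype(np.uint8)
```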

An RGB to RGBY Color Conversion Algorithm for Liquid Crystal Display Using RGW Pixel with Two-Field Sequential Driving Method

  • Hong, Sung-Jin;Kwon, Oh-Kyong
    • Journal of the Optical Society of Korea / v.18 no.6 / pp.777-782 / 2014
  • This paper proposes an RGB to RGBY color conversion algorithm for liquid crystal displays (LCDs) using an RGW pixel structure with a two-field (yellow and blue) sequential driving method. The proposed algorithm preserves the hue and saturation of the original color by maintaining the RGB ratio, and it increases the luminance. The performance of the proposed RGBY conversion algorithm is verified by MATLAB simulation with 24 images from the Kodak lossless true color image suite. The simulated average CIEDE2000 color difference (ΔE*00) and scaling factor are 0.99 and 1.89, respectively. These results indicate that the average brightness is increased 1.89 times compared to an LCD using the conventional RGB pixel structure, without increasing power consumption or degrading image quality.
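
An illustrative sketch only (not the paper's two-field RGW driving scheme): a common way to derive a yellow primary while keeping the original RGB ratio is to take Y as min(R, G) and leave the residual in R and G, so that Y plus the residual RGB reproduces the input color and the hue and saturation are preserved.

```python
import numpy as np

def rgb_to_rgby(rgb):
    """rgb: HxWx3 float array in [0, 1]. Returns an HxWx4 array (R, G, B, Y)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = np.minimum(r, g)                            # shared red+green energy becomes yellow
    return np.stack([r - y, g - y, b, y], axis=-1)  # Y plus the residual RGB equals the input
```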

High Efficient FSC LCD using Color Break-up Reduction and Compensation (FSC LCD 에서의 컬러 분리 저감 및 화질 보상 기술)

  • Kim, Dae-Sik;Cho, Seong-Phil;Lee, Ho-Sup;Kim, Choon-Woo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2015.07a / pp.486-488 / 2015
  • FSC (Field Sequential Color) LCDs offer high efficiency, high brightness, and high color saturation because their aperture is three times larger than that of a conventional LCD. However, color break-up (CBU) and color interference are well-known problems that need to be solved. We propose a novel sequential driving method with an edge-lit light guide composed of 16×15 blocks to reduce CBU and color interference. The experimental results show that CBU is suppressed and side effects are minimized.
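
For context, a minimal sketch (an assumption-level illustration, not the proposed driving method) of what field-sequential color means: one RGB frame is shown as three monochrome fields in rapid succession, and eye movement during that sequence is what produces the color break-up the paper tries to suppress.

```python
import numpy as np

def split_into_fields(frame):
    """frame: HxWx3 uint8 RGB image. Returns the three monochrome fields that
    an FSC panel displays one after another within a single frame time."""
    fields = []
    for c in range(3):                 # red field, then green, then blue
        field = np.zeros_like(frame)
        field[..., c] = frame[..., c]  # keep only one color channel per field
        fields.append(field)
    return fields
```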


Video Haze Removal Method in HLS Color Space (HLS 색상 공간에서 동영상의 안개제거 기법)

  • An, Jae Won;Ko, Yun-Ho
    • Journal of Korea Multimedia Society / v.20 no.1 / pp.32-42 / 2017
  • This paper proposes a new haze removal method for moving image sequences. Since the conventional dark channel prior haze removal method adjusts each color component separately in the RGB color space, the haze-removed output image can show severe color distortion. To resolve this problem, this paper proposes a new haze removal scheme that adjusts the luminance and saturation components in the HLS color space while retaining the hue component. In addition, the conventional dark channel prior method was developed to obtain the best haze removal performance for a single image; when it is applied to a moving image sequence, the estimated parameter values change rapidly and the output sequence shows unnatural flickering artifacts. To overcome this problem, a new parameter estimation method using a Kalman filter is proposed for moving image sequences. Experimental results demonstrate that the haze removal performance of the proposed method is better than that of the conventional dark channel prior method.
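
A sketch of the kind of temporal smoothing the abstract motivates: a scalar Kalman filter with a random-walk state model applied to a per-frame haze parameter (for example, the estimated airlight) so that the estimate cannot jump between frames. The noise variances are illustrative assumptions, not values from the paper.

```python
class ScalarKalman:
    """Random-walk Kalman filter for one haze parameter tracked over frames."""

    def __init__(self, x0, p0=1.0, q=1e-4, r=1e-2):
        self.x, self.p, self.q, self.r = x0, p0, q, r  # state, variance, process/measurement noise

    def update(self, measurement):
        self.p += self.q                      # predict: the parameter drifts slowly
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (measurement - self.x)  # correct toward the new per-frame estimate
        self.p *= 1.0 - k
        return self.x
```

Per frame, the raw estimate would be passed through `update()` and the smoothed value used for restoration.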

Machine Vision Based Detection of Disease Damaged Leave of Tomato Plants in a Greenhouse (기계시각장치에 의한 토마토 작물의 병해엽 검출)

  • Lee, Jong-Whan
    • Journal of Biosystems Engineering / v.33 no.6 / pp.446-452 / 2008
  • A machine vision system was used for analyzing leaf color disorders of tomato plants in a greenhouse. Starting on the day when a few leaves of the tomato plants began to wither, a series of images was captured four times over 14 days. Among several color spaces, the Saturation frame of the HSI color space was adequate for eliminating the background, and the Hue frame was good for detecting infected disease areas and tomato fruits. The image obtained by an OR operation between the G frame of the RGB color space and the b* frame of the La*b* color space (the G⊔b* image) was useful for segmenting the plant canopy area. This study calculated the ratio of the infected area to the plant canopy and manually analyzed leaf color disorders through image segmentation of the Hue frame of a tomato plant image. To analyze plant leaf disease automatically, twenty-seven color patches on the calibration bars were selected as corresponding to leaf color disorders. These selected color patches could represent 97% of the infected area analyzed by the manual method, and using only ten of the twenty-seven patches could still represent over 85% of the infected area. These results show that the proposed machine vision system may be effective for evaluating various leaf color disorders of plants growing in a greenhouse.
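
An illustrative sketch of the segmentation steps the abstract describes, using OpenCV's HSV as a stand-in for HSI; the threshold values are placeholders, not the calibrated color patches used in the study.

```python
import cv2
import numpy as np

def infected_ratio(bgr_image, sat_thresh=40, hue_lo=20, hue_hi=35):
    """Rough ratio of disease-colored pixels to canopy pixels in one image."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    hue, sat = hsv[..., 0], hsv[..., 1]
    canopy = sat > sat_thresh                              # drop the low-saturation background
    infected = canopy & (hue >= hue_lo) & (hue <= hue_hi)  # yellowish, disease-like hue band
    return infected.sum() / max(int(canopy.sum()), 1)
```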

The Confusing Color line of the Color deficiency in Panel D-15 using CIELab Color Space (CIELab 표색계를 이용한 Panel D-15의 색각이상 혼돈색 line 연구)

  • Park, Sang-An;Kim, YongGeun
    • Journal of Korean Ophthalmic Optics Society / v.6 no.1 / pp.139-144 / 2001
  • In order to analyze the color perception of the Farnsworth Panel D-15 test in CIELab color space coordinates, the reflectance spectra were measured over the 380~780 nm wavelength region. The Panel D-15 caps were situated near the origin of the CIELab (a, b) coordinates. A normal observer perceives colors with a small color difference as similar, whereas a color-deficient observer depends on the confusion color lines and the neutral point, regardless of the color difference. For protanopia, deuteranopia, r-g defect, and y-b defect color deficiencies, the neutral point positions (a, b) were (2.12, 1.02), (4.25, 2.05), (2.51, 0.25), and (1.20, -1.10), respectively.
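
For reference, the CIE76 color difference used when comparing caps in CIELab; the cap values in the example are hypothetical, not measured Panel D-15 data.

```python
import math

def delta_e_cie76(lab1, lab2):
    """Euclidean CIE76 color difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Two hypothetical caps with a small color difference a normal observer can still order
print(delta_e_cie76((50.0, 10.0, 5.0), (52.0, 12.0, 4.0)))  # 3.0
```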


Research of Quantitative Modeling that Classify Personal Color Skin Tone (퍼스널 컬러 스킨 톤 유형 분류의 정량적 평가 모델 구축에 대한 연구)

  • Kim, Yong Hyeon;Oh, Yu Seok;Lee, Jung Hoon
    • Journal of the Korean Society of Clothing and Textiles / v.42 no.1 / pp.121-132 / 2018
  • Recent beauty trends focus on suitability to individual features. The personal color system is a recent aesthetic concept that influences color makeup and coordination. However, the personal color concept has several weaknesses; for example, type classification is qualitative rather than quantitative, because the measuring method is a sensory test and there is no industry standard for the personal color system. The purpose of this study is a quantitative personal color type classification model that can be a solution to the above problems. The model is a mapping system in a 3D Cartesian coordinate system whose axes are Value, Saturation, and Yellowness; the cheek color of an individual sample is the independent variable and the personal color type is the dependent variable. To construct the model, this study conducted a colorimetric survey of 993 Korean women in their 20s and 30s. The significance of this study is as follows: first, through this study the personal color system is established in a quantitative color space; in addition, the model has flexibility and scalability because it consists of independent axes, which allows any other critical variable to be included as an additional axis.
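
A hedged sketch of the mapping idea: each skin tone type occupies a region of the 3D (Value, Saturation, Yellowness) space, and a measured cheek color is assigned to the nearest type centroid. The type names and centroid coordinates below are placeholders, not the model fitted in the study.

```python
import numpy as np

TYPE_CENTROIDS = {  # hypothetical (value, saturation, yellowness) centroids
    "spring": np.array([0.80, 0.35, 0.60]),
    "summer": np.array([0.75, 0.25, 0.40]),
    "autumn": np.array([0.65, 0.40, 0.65]),
    "winter": np.array([0.70, 0.30, 0.35]),
}

def classify(cheek_color):
    """cheek_color: length-3 sequence of (value, saturation, yellowness)."""
    cheek = np.asarray(cheek_color, dtype=float)
    return min(TYPE_CENTROIDS, key=lambda t: np.linalg.norm(cheek - TYPE_CENTROIDS[t]))
```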

A Study on Production of Optimum Profile Considered Color Rendering in Input Device (입력 장치에서 컬러 랜더링을 고려한 최적의 프로파일 제작에 관한 연구)

  • Koo, Chul-Whoi;Cho, Ga-Ram;Lee, Sung-Hyung
    • Journal of the Korean Graphic Arts Communication Society / v.28 no.2 / pp.117-128 / 2010
  • Advancements in digital imaging have put high-quality digital cameras into the hands of many imaging professionals and consumers alike. High-quality digital camera images originally consist of raw data, to which a set of color rendering operations is applied to produce good images. With color rendering, the raw file was converted to the Adobe RGB and sRGB color spaces; color rendering can also incorporate factors such as white balance, contrast, and saturation. Therefore, in this paper we study the production of an optimum profile that takes color rendering into account for a digital camera. The test images were of a Digital ColorChecker SG target and a ColorChecker DC target, and the profiling tool was ProfileMaker 5.03. The results were analyzed by comparing color gamuts in the CIE L*a*b* color space and calculating ΔE*ab, and they were also analyzed in terms of different CIE L*a*b* color space quadrants based on lightness and chroma.
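
A small sketch of the evaluation step the abstract mentions: the average ΔE*ab between the measured target patches and the same patches rendered through a candidate profile, with a lower average indicating a better profile. The array contents are assumed inputs.

```python
import numpy as np

def mean_delta_e_ab(measured_lab, rendered_lab):
    """Both arguments: Nx3 arrays of CIE L*a*b* values for the target patches."""
    diff = np.asarray(measured_lab, dtype=float) - np.asarray(rendered_lab, dtype=float)
    return float(np.mean(np.linalg.norm(diff, axis=1)))  # mean Euclidean distance in Lab
```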

Recognition of Colors of Image Code Using Hue and Saturation Values (색상 및 채도 값에 의한 이미지 코드의 칼라 인식)

  • Kim Tae-Woo;Park Hung-Kook;Yoo Hyeon-Joong
    • The Journal of the Korea Contents Association / v.5 no.4 / pp.150-159 / 2005
  • With the increase of interest in ubiquitous computing, image codes are attracting attention in various areas. Image codes are important in ubiquitous computing in that they can complement or replace RFID (radio frequency identification) in quite a few areas and are more economical. However, because severe color distortion makes it difficult to read colors precisely, their application has so far been quite restricted. In this paper, we present an efficient method of image code recognition, including automatically locating the image code, using hue and saturation values. In our experiments, we use an image code whose design seems most practical among currently commercialized ones; this image code uses six safe colors, i.e., R, G, B, C, M, and Y. We tested 72 true-color field images of 2464×1632 pixels. With histogram-based color calibration, the localization accuracy was about 96%, and the accuracy of color classification for localized codes was about 91.28%. Locating and recognizing an image code took approximately 5 seconds on a PC with a 2 GHz Pentium 4 CPU.
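
An illustrative sketch of classifying one cell of such an image code into one of the six safe colors by its hue; the 60-degree hue bins and the saturation cutoff are nominal assumptions and would still need the histogram-based calibration the abstract describes for real field images.

```python
import colorsys

SAFE_COLORS = ["R", "Y", "G", "C", "B", "M"]  # in hue order, 60-degree steps from red

def classify_color(r, g, b):
    """r, g, b in [0, 1]. Returns the nearest safe color, or None if too gray."""
    h, _, s = colorsys.rgb_to_hls(r, g, b)    # hue and saturation in [0, 1]
    if s < 0.2:                               # too desaturated to classify reliably
        return None
    return SAFE_COLORS[int(((h * 360 + 30) % 360) // 60)]
```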
