• Title/Summary/Keyword: color matching

Study on full color RGB LED source lighting for general lighting and Improvement of CRI (Color Rendering Index)

  • Park, Yung-Kyung
    • Science of Emotion and Sensibility
    • /
    • v.15 no.3
    • /
    • pp.381-388
    • /
    • 2012
  • The purpose of this study is to determine whether LED lighting can be used as general lighting and to examine the color rendering properties of full color RGB LED lighting. The CRI is one of the important measures for evaluating lighting; however, the present CRI does not fully characterize LED sources. First, instead of comparing CRI values across light sources, performance on a simple color task was compared. Three types of lighting were prepared for the experiment: a standard D65 fluorescent tube, a general household fluorescent tube, and RGB LED lighting. All three lightings showed high error for Purple-Red and similar error across all hues, indicating that color discrimination is not affected by the light source and that LED lighting could be used as general lighting. Second, problems of the conventional CIE CRI method are considered and new models are suggested for the new light source. Each model was evaluated against visual results obtained from a white light matching experiment. The suggested model is based on the CIE CRI method but replaces the color space with CIELAB, the color difference formula with CIEDE2000, and the chromatic adaptation model with CAT02.

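The last sentence of the abstract above swaps the CRI's chromatic adaptation step for CAT02. As a rough illustration (not the authors' implementation), the sketch below applies a von Kries-style adaptation with the standard CAT02 matrix under the simplifying assumption of complete adaptation (D = 1); the sample value and the illuminant A to D65 white points are arbitrary examples.

```python
import numpy as np

# Standard CAT02 matrix: maps CIE XYZ to sharpened cone-like responses.
M_CAT02 = np.array([
    [ 0.7328,  0.4286, -0.1624],
    [-0.7036,  1.6975,  0.0061],
    [ 0.0030,  0.0136,  0.9834],
])

def cat02_adapt(xyz, white_src, white_dst):
    """Von Kries-style chromatic adaptation with CAT02 (full adaptation, D = 1)."""
    rgb = M_CAT02 @ xyz                 # sample in cone-like space
    rgb_ws = M_CAT02 @ white_src        # source white in cone-like space
    rgb_wd = M_CAT02 @ white_dst        # destination white in cone-like space
    rgb_adapted = rgb * (rgb_wd / rgb_ws)
    return np.linalg.inv(M_CAT02) @ rgb_adapted

# Example: adapt a sample from illuminant A to D65 (XYZ white points, Y = 100).
white_A   = np.array([109.85, 100.0,  35.58])
white_D65 = np.array([ 95.047, 100.0, 108.883])
sample = np.array([50.0, 45.0, 20.0])     # illustrative XYZ value
print(cat02_adapt(sample, white_A, white_D65))
```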

2D Planar Object Tracking using Improved Chamfer Matching Likelihood (개선된 챔퍼매칭 우도기반 2차원 평면 객체 추적)

  • Oh, Chi-Min;Jeong, Mun-Ho;You, Bum-Jae;Lee, Chil-Woo
    • The KIPS Transactions:PartB
    • /
    • v.17B no.1
    • /
    • pp.37-46
    • /
    • 2010
  • In this paper we present a two-dimensional model-based tracking system using improved chamfer matching. Conventional chamfer matching cannot measure the similarity between the object model and the image well when the background is highly cluttered. We therefore improve chamfer matching with edge and corner feature points so that similarity is measured reliably even against very cluttered backgrounds. The improved chamfer matching is used as the likelihood function of a particle filter that tracks the geometric object. The geometric model, built from edge and corner feature points, is a discriminative descriptor that is robust to color changes. A particle filter also handles non-linear motion better than a Kalman filter. The presented method therefore combines the geometric model, the particle filter, and improved chamfer matching to track objects in complex environments. Experimental results demonstrate the robustness of the system in comparison with other methods.
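
For readers unfamiliar with chamfer matching as a particle-filter likelihood, the sketch below shows only the basic (unimproved) form: a distance transform of the image edge map gives each projected model point its distance to the nearest image edge, and the mean distance is mapped to a likelihood. The paper's corner-feature improvement is not reproduced; the exponential likelihood form and the sigma value are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_likelihood(edge_map, model_points, sigma=2.0):
    """Basic chamfer-matching likelihood for a particle filter.

    edge_map     : 2D bool array, True where the image has an edge pixel.
    model_points : (N, 2) array of (row, col) model edge points already
                   transformed by the particle's pose hypothesis.
    """
    # Distance from every pixel to the nearest edge pixel.
    dist = distance_transform_edt(~edge_map)

    rows = np.clip(model_points[:, 0].astype(int), 0, edge_map.shape[0] - 1)
    cols = np.clip(model_points[:, 1].astype(int), 0, edge_map.shape[1] - 1)

    chamfer_dist = dist[rows, cols].mean()      # average nearest-edge distance
    return np.exp(-chamfer_dist / sigma)        # assumed likelihood form

# Toy example: a vertical image edge and a model lying exactly on it.
edges = np.zeros((100, 100), dtype=bool)
edges[:, 50] = True
model = np.stack([np.arange(20, 80), np.full(60, 50)], axis=1)
print(chamfer_likelihood(edges, model))         # close to 1.0
```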

An Efficient Video Clip Matching Algorithm Using the Cauchy Function (커쉬함수를 이용한 효율적인 비디오 클립 정합 알고리즘)

  • Kim Sang-Hyul
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.4
    • /
    • pp.294-300
    • /
    • 2004
  • With the development of digital media technologies, various algorithms have been proposed to match video sequences efficiently. A large number of video search methods have focused on frame-wise queries, whereas relatively few algorithms have been presented for video clip or video shot matching. In this paper, we propose an efficient algorithm to index video sequences and retrieve them for video clip queries. To improve the accuracy and performance of video sequence matching, we employ the Cauchy function as a similarity measure between histograms of consecutive frames, which yields high performance compared with conventional measures. Key frames extracted from segmented video shots can be used not only for video shot clustering but also for video sequence matching or browsing, where a key frame is defined as a frame that differs significantly from the previous frames. Experimental results with color video sequences show that the proposed method yields high matching performance and accuracy with a low computational load compared with conventional algorithms.

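The abstract names the Cauchy function as the similarity measure between frame histograms, but the listing does not give its exact form. The sketch below therefore uses a generic Cauchy kernel, 1 / (1 + (d / sigma)^2), applied bin-wise to normalized per-channel color histograms; the per-channel histogram layout, the averaging, and the sigma value are assumptions for illustration, not the paper's formula.

```python
import numpy as np

def color_histogram(frame, bins=16):
    """Concatenated per-channel histograms of an (H, W, 3) uint8 RGB frame, normalized."""
    hists = [np.histogram(frame[..., ch], bins=bins, range=(0, 256))[0] for ch in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def cauchy_similarity(h1, h2, sigma=0.001):
    """Cauchy-kernel similarity between normalized histograms (assumed form, arbitrary sigma)."""
    d = h1 - h2
    return float(np.mean(1.0 / (1.0 + (d / sigma) ** 2)))

# Toy example: a frame, a mildly perturbed copy, and an unrelated random frame.
rng = np.random.default_rng(0)
frame_a = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
frame_b = np.clip(frame_a.astype(int) + rng.integers(-5, 6, frame_a.shape), 0, 255).astype(np.uint8)
frame_c = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
h_a = color_histogram(frame_a)
print(cauchy_similarity(h_a, color_histogram(frame_b)),
      cauchy_similarity(h_a, color_histogram(frame_c)))
```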

Real-time Motion Detection and Tracking using Line-matching Algorithm (라인 매칭 기법을 이용한 실시간 움직임 검출과 추적기법)

  • 이재호;장석환;김회율
    • Proceedings of the IEEK Conference
    • /
    • 2000.09a
    • /
    • pp.425-428
    • /
    • 2000
  • This paper proposes a line-matching algorithm for detecting and tracking moving objects in real time from a camera with pan/tilt motion. In addition, a motion-color matching method is proposed that matches feature values by simultaneously using the distribution of color components and the motion of the object. The proposed line-matching algorithm efficiently compensates for camera motion when tracking a moving object in video from a moving camera, and it significantly reduces the associated computation time. In experiments, the method achieved a processing rate of 10-12 frames per second for detecting and tracking motion in the camera input, and it showed robust tracking results that were not affected by background motion or the surrounding environment.

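The abstract above (translated from Korean) describes line matching only at a high level, so the following sketch is a loose illustration of the general idea rather than the authors' algorithm: it estimates a global pan/tilt shift between frames by aligning row and column intensity profiles (the "lines"), compensates that shift, and flags residual motion by frame differencing. The SAD alignment, the wrap-around roll, and the threshold are all assumptions.

```python
import numpy as np

def estimate_shift(profile_prev, profile_curr, max_shift=15):
    """Best 1D shift d (in pixels) such that rolling profile_prev by d matches profile_curr."""
    n = len(profile_prev)
    best_shift, best_cost = 0, np.inf
    for d in range(-max_shift, max_shift + 1):
        a = profile_prev[max(0, -d): n - max(0, d)]
        b = profile_curr[max(0, d): n - max(0, -d)]
        cost = np.mean(np.abs(a - b))               # sum-of-absolute-differences criterion
        if cost < best_cost:
            best_shift, best_cost = d, cost
    return best_shift

def detect_motion(prev, curr, diff_thresh=25):
    """Shift-compensated frame differencing on grayscale frames (illustrative only)."""
    dx = estimate_shift(prev.mean(axis=0), curr.mean(axis=0))   # column profile -> horizontal shift
    dy = estimate_shift(prev.mean(axis=1), curr.mean(axis=1))   # row profile    -> vertical shift
    compensated = np.roll(np.roll(prev, dx, axis=1), dy, axis=0)
    return np.abs(curr.astype(int) - compensated.astype(int)) > diff_thresh

# Toy example: a bright square scene while the "camera" pans 3 pixels to the right,
# plus a newly appearing object that should be flagged as motion.
prev = np.zeros((120, 160), dtype=np.uint8)
prev[40:60, 40:60] = 200
curr = np.roll(prev, 3, axis=1)          # global pan
curr[80:100, 100:120] = 200              # moving object
print(detect_motion(prev, curr).sum())   # nonzero: only the new object is flagged
```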

Extraction of Corresponding Points of Stereo Images Based on Dynamic Programming (동적계획법 기반의 스테레오영상의 대응점 탐색)

  • Lee, Ki-Yong;Lee, Joon-Woong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.5
    • /
    • pp.397-404
    • /
    • 2011
  • This paper proposes an algorithm capable of extracting corresponding points between a pair of stereo images based on dynamic programming. The purpose of extracting the corresponding points is to provide stereo disparity data to a road-slope estimation algorithm with high accuracy and in real time. Since the road-slope estimation algorithm does not require dense disparity data, the proposed stereo matching algorithm aims at extracting corresponding points both accurately and quickly. To realize these competing goals, this paper exploits dynamic programming and minimizes the number of matching candidates using the vertical components of color edges. Furthermore, the typical occlusion problem in stereo vision is addressed. The proposed algorithm is shown to be effective through experiments with various images captured on the road.
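
Scanline dynamic programming for stereo correspondence is a standard technique; the sketch below is the generic textbook version (pixel-wise absolute-difference cost along one scanline with a fixed occlusion penalty), not the paper's formulation, which further restricts candidates to the vertical components of color edges. The cost function and the occlusion penalty value are assumptions.

```python
import numpy as np

def dp_scanline_disparity(left_row, right_row, occlusion_cost=20.0):
    """Dynamic-programming correspondence along one scanline (generic sketch).

    left_row, right_row : 1D arrays of pixel intensities on the same scanline.
    Returns a disparity value for each left pixel (-1 where occluded).
    """
    n, m = len(left_row), len(right_row)
    cost = np.full((n + 1, m + 1), np.inf)
    move = np.zeros((n + 1, m + 1), dtype=np.uint8)   # 1 = match, 2 = skip left, 3 = skip right
    cost[0, :] = occlusion_cost * np.arange(m + 1)
    cost[:, 0] = occlusion_cost * np.arange(n + 1)
    move[0, 1:] = 3
    move[1:, 0] = 2

    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match  = cost[i - 1, j - 1] + abs(float(left_row[i - 1]) - float(right_row[j - 1]))
            skip_l = cost[i - 1, j] + occlusion_cost   # left pixel occluded
            skip_r = cost[i, j - 1] + occlusion_cost   # right pixel occluded
            cost[i, j], move[i, j] = min((match, 1), (skip_l, 2), (skip_r, 3))

    # Backtrack to recover per-pixel disparities.
    disparity = np.full(n, -1, dtype=int)
    i, j = n, m
    while i > 0 or j > 0:
        if move[i, j] == 1:
            disparity[i - 1] = (i - 1) - (j - 1)
            i, j = i - 1, j - 1
        elif move[i, j] == 2:
            i -= 1
        else:
            j -= 1
    return disparity

# Toy example: the right scanline is the left one shifted by 4 pixels.
left = np.concatenate([np.zeros(20), np.full(10, 100.0), np.zeros(34)])
right = np.roll(left, -4)
print(dp_scanline_disparity(left, right)[20:30])   # disparity of 4 across the bright patch
```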

Color Correction in Portable-type Urine Analyzer

  • Kim, Jae-Hyung;Park, Chang-Hee;Lee, Seung-Jin;Jeon, Gye-Rok;Kim, Gi-Ryon
    • Transactions on Electrical and Electronic Materials
    • /
    • v.3 no.4
    • /
    • pp.21-26
    • /
    • 2002
  • Color correction methods for chromaticity coordinates using the Color Matching Functions (CMF) were studied to develop a device-independent portable urine analyzer. Reflection spectra were measured for the reaction grades of the 10 test items on the urine reagent strip (urine strip). A computer simulation was performed to quantitatively distinguish the color reactions of the urine system, using the spectral power distribution of the Light Emitting Diode (LED), the reflectance of the urine strip, and the spectral sensitivity of a photodiode. To make the system device-independent, the chromaticity coordinates were corrected to reduce color deviations on the urine strip, using temperature compensation of the LED and a color transformation based on the CMF. The developed urine analysis system exhibited an accuracy above 95% for all color samples.
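
As background for the CMF-based transformation mentioned above: tristimulus values come from integrating the illuminant spectral power distribution, the strip reflectance, and the color matching functions over wavelength, and chromaticity coordinates follow by normalization. The sketch below shows that computation with made-up spectra and crude Gaussian stand-ins for the CIE 1931 functions; real use would load the tabulated CMFs and the measured LED and strip spectra.

```python
import numpy as np

def chromaticity_from_spectra(wavelengths, illuminant, reflectance, xbar, ybar, zbar):
    """CIE tristimulus values and (x, y) chromaticity from sampled spectra."""
    stimulus = illuminant * reflectance                        # light reflected by the strip pad
    k = 100.0 / np.trapz(illuminant * ybar, wavelengths)       # normalize so the white has Y = 100
    X = k * np.trapz(stimulus * xbar, wavelengths)
    Y = k * np.trapz(stimulus * ybar, wavelengths)
    Z = k * np.trapz(stimulus * zbar, wavelengths)
    s = X + Y + Z
    return (X, Y, Z), (X / s, Y / s)

# Illustrative spectra only: a Gaussian "LED" SPD, a flat 50% reflectance, and
# Gaussian placeholders for the CIE 1931 color matching functions.
wl = np.arange(380, 781, 5, dtype=float)
gauss = lambda mu, sd: np.exp(-0.5 * ((wl - mu) / sd) ** 2)
led_spd = gauss(550, 30)
reflectance = np.full_like(wl, 0.5)
xbar = gauss(595, 45) + 0.35 * gauss(445, 25)
ybar = gauss(555, 45)
zbar = 1.7 * gauss(450, 25)
print(chromaticity_from_spectra(wl, led_spd, reflectance, xbar, ybar, zbar))
```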

A Spectrophotometric Study on Color Differences between Various Light-Cured Composite Resins and Shade Guides (광중합형 복합레진과 shade guide의 색차에 관한 연구)

  • Lim, Kyung-Min;Lee, Min-Ho;Song, Kwang-Yeob
    • Journal of Dental Rehabilitation and Applied Science
    • /
    • v.25 no.1
    • /
    • pp.13-22
    • /
    • 2009
  • The composite resin, due to its esthetic quality, is considered the material of choice for restoration of anterior teeth. To get a satisfactory result in composite resin restorations, it is necessary to choose the right shade. At present, most commercial composite resins are based on the Vita Lumin shade guide or on shade guides provided by their manufacturers, but color differences among them can be expected even when the same shade designation is used across materials. This study measures the color differences between various light-cured composite resins and shade guides and provides clinicians with information that may aid in improved color matching of esthetic restorations. Four kinds of light-cured composite resins (Gradia Direct (GD), Z250 (Z250), Clearfil AP-X (AP-X), Esthet X (E X)) and shade guides with A2 and A3 shades were used. Three specimens of each material and one specimen of each shade guide were made. Each composite resin was filled into a Teflon mold (1.35 mm depth, 8 mm diameter), then compressed, polymerized, and polished with wet sandpaper. Shade guides were ground with polishing stones and rubber points to a thickness of approximately 1.35 mm. Color measurements were performed with a spectrophotometer (color i5, GretagMacbeth, USA). The computer-controlled spectrophotometer was used to determine the CIELAB coordinates ($L^*$, $a^*$, $b^*$) of each specimen and shade guide, from which the color difference values (${\Delta}E^*_{ab}$) between composite resins and shade guides were evaluated. CIE standard illuminant D65 was used as the light source. The results were as follows: 1. Among the $L^*$, $a^*$, $b^*$ values of the four kinds of composite resin specimens produced in the same shade, there were significant differences (p<0.05). 2. Among the four kinds of composite resin specimens produced in the same shade, there were color differences perceptible to the human eye (${\Delta}E^* > 3.3$). 3. Between most of the composite resin specimens investigated and their corresponding shade guides, there were color differences perceptible to the human eye (${\Delta}E^* > 3.3$). 4. Shade guides supplied by manufacturers or the Vita Lumin shade guide may not provide clinicians with an accurate standard for matching the color of composite resins, and perceptible color differences exist for most products; therefore, in the clinical environment it is recommended that custom shade guides be made from the resin material itself and used for better color matching.
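
The perceptibility criterion in the abstract above is based on the CIE76 color difference: for two CIELAB triples it is the Euclidean distance, and the study treats ${\Delta}E^*_{ab} > 3.3$ as perceptible to the eye. A minimal sketch with made-up shade values (not the paper's measurements):

```python
import math

def delta_E_ab(lab1, lab2):
    """CIE76 color difference between two CIELAB triples (L*, a*, b*)."""
    return math.dist(lab1, lab2)

# Hypothetical A2-shade measurements: a resin specimen vs. its shade guide tab.
resin_A2 = (72.4, 1.8, 18.6)   # illustrative values only
guide_A2 = (69.0, 0.9, 15.2)
dE = delta_E_ab(resin_A2, guide_A2)
print(f"dE*ab = {dE:.2f}, perceptible: {dE > 3.3}")
```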

The effects of emotional matching between video color-temperature and scent on reality improvement (영상의 색온도와 향의 감성적 일치가 영상실감 향상에 미치는 효과)

  • Lee, Guk-Hee;Li, Hyung-Chul O.;Ahn, ChungHyun;Ki, MyungSeok;Kim, ShinWoo
    • Journal of the HCI Society of Korea
    • /
    • v.10 no.1
    • /
    • pp.29-41
    • /
    • 2015
  • Technologies for video reality (e.g., 3D displays, vibration, surround sound) utilize various sensory inputs, and many of them are now commercialized. However, when it comes to the use of olfaction for video reality, there has not been much progress in either practical or academic respects. Because the olfactory sense is tightly associated with human emotion, proper use of this sense is expected to help achieve a high degree of video reality. This research tested the effect of a scent matched to a video's color temperature on reality improvement when the video contains no apparent object (e.g., coffee, flowers) that suggests a specific smell. To this end, we had participants rate 48 scents on a color-temperature scale of 1,500 K (warm) to 15,000 K (cold) and chose 8 scents (4 warm, 4 cold) that showed clear correspondence with warm or cold color temperatures (Expt. 1). Then, after applying warm (3,000 K), neutral (6,500 K), or cold (14,000 K) color temperatures to images or videos, we presented warm or cold scents to participants while they rated reality improvement on a 7-point scale, depending on the relatedness of scent and color temperature (related, unrelated, neutral) (Expts. 2-3). The results showed that participants experienced greater reality when scent and color temperature were related than when they were unrelated or neutral. This research has important practical implications, demonstrating that providing a scent related to color temperature can improve video reality even when no concrete objects suggest specific olfactory information.

Color Reproduction in DLP Projector using Hue Shift Model according to Additional White Channel (화이트 채널 추가에 따른 색상이동모델를 이용한 DLP 프로젝터의 색 재현)

  • Park, Il-Su;Ha, Ho-Gun;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.4
    • /
    • pp.40-48
    • /
    • 2012
  • This paper models the hue shift phenomenon and proposes a hue correction method that gives a perceptual match between a projector with and without an additional white channel. To quantify the hue shift over the whole hue angle, 24 color patches with the same lightness are first created along equally spaced hue angles, and these are displayed one by one on both displays at different luminance levels. Next, the hue value of each patch shown on the projector with the additional white channel is adjusted by observers until the patches on both displays appear visually identical. After obtaining the hue shift values from this color matching experiment, the values are piecewise fit with six polynomial functions, which approximately determine the hue shift for an arbitrary hue value of each pixel on the projector with the additional white channel and are used to correct it. In practice, an input RGB image is converted to the CIELAB LCH color space to obtain the hue value of each pixel, and this hue value is shifted by the amount calculated from the hue shift model. Finally, the corrected image is converted back to an output RGB image. For evaluation, a matching experiment with several test images and z-score comparisons were performed.
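
The correction pipeline described above (RGB to CIELAB LCH, shift the hue, convert back) can be illustrated for its middle step. The sketch below shifts the hue of CIELAB values by an amount looked up from a measured hue-shift curve and rebuilds a* and b*; the linear interpolation stands in for the paper's six piecewise polynomial functions, whose coefficients are not given in the listing, and the example shift values are invented.

```python
import numpy as np

def correct_hue(lab, measured_hues_deg, measured_shifts_deg):
    """Shift the hue of CIELAB values according to a measured hue-shift curve.

    lab                 : (..., 3) array of (L*, a*, b*) values.
    measured_hues_deg   : hue angles (degrees) at which the shift was measured.
    measured_shifts_deg : hue shift (degrees) observed at those angles.
    """
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    C = np.hypot(a, b)                                  # chroma
    h = np.degrees(np.arctan2(b, a)) % 360.0            # hue angle in [0, 360)

    # Interpolated stand-in for the piecewise polynomial hue-shift model.
    shift = np.interp(h, measured_hues_deg, measured_shifts_deg, period=360.0)
    h_corr = np.radians(h - shift)                      # subtract the predicted shift

    return np.stack([L, C * np.cos(h_corr), C * np.sin(h_corr)], axis=-1)

# Toy example: pretend hues near 90 degrees drift by +6 degrees on the white-channel projector.
hues   = np.array([0.0, 90.0, 180.0, 270.0])
shifts = np.array([0.0,  6.0,   0.0,   0.0])
lab_pixels = np.array([[60.0,  0.0, 40.0],     # hue = 90 degrees
                       [60.0, 40.0,  0.0]])    # hue = 0 degrees
print(correct_hue(lab_pixels, hues, shifts))
```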

Content Based Image Retrieval Based on A Novel Image Block Technique Combining Color and Edge Features

  • Kwon, Goo-Rak;Haoming, Zou;Park, Sei-Seung
    • Journal of information and communication convergence engineering
    • /
    • v.8 no.2
    • /
    • pp.185-190
    • /
    • 2010
  • In this paper we propose a CBIR algorithm based on a novel image block method that combines both color and edge features. Because the main drawback of a global histogram representation is that it depends only on color, without spatial or shape information, a new image block method that divides the image into 8 related blocks containing more information about the image is used to extract image features. Based on these 8 blocks, histogram equalization and edge detection techniques are also applied for image retrieval. The experimental results show that the proposed image block method characterizes image contents better than the traditional block method and performs retrieval efficiently.
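
The listing does not specify how the 8 related blocks are laid out, so the sketch below simply tiles the image into a 2 x 4 grid as a stand-in, and combines a per-block color histogram with a per-block edge density compared by histogram intersection. The grid layout, the Sobel-based edge measure, and the similarity score are illustrative assumptions rather than the paper's design.

```python
import numpy as np
from scipy.ndimage import sobel

def block_features(image, grid=(2, 4), bins=8):
    """Per-block color histogram + edge density for an (H, W, 3) uint8 image."""
    gray = image.mean(axis=2)
    edges = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))
    h_step, w_step = image.shape[0] // grid[0], image.shape[1] // grid[1]

    features = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            rows, cols = slice(r * h_step, (r + 1) * h_step), slice(c * w_step, (c + 1) * w_step)
            hist, _ = np.histogramdd(image[rows, cols].reshape(-1, 3),
                                     bins=(bins,) * 3, range=((0, 256),) * 3)
            hist = hist.ravel() / hist.sum()
            edge_density = edges[rows, cols].mean() / 255.0
            features.append(np.concatenate([hist, [edge_density]]))
    return np.concatenate(features)

def similarity(f1, f2):
    """Histogram-intersection-style similarity between two feature vectors."""
    return np.minimum(f1, f2).sum() / min(f1.sum(), f2.sum())

# Toy example: a two-color image vs. a noisy copy of it and vs. random noise.
img = np.zeros((128, 256, 3), dtype=np.uint8)
img[:, :128] = (200, 30, 30)      # left half reddish
img[:, 128:] = (30, 30, 200)      # right half bluish
rng = np.random.default_rng(1)
noisy = np.clip(img.astype(int) + rng.integers(-10, 11, img.shape), 0, 255).astype(np.uint8)
random_img = rng.integers(0, 256, img.shape, dtype=np.uint8)
q = block_features(img)
print(similarity(q, block_features(noisy)), similarity(q, block_features(random_img)))
```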