• Title/Summary/Keyword: Two-color-ratio method

A color compensation method for a projector considering non-flatness of color screen and mean lightness of the projected image (유색 스크린의 굴곡과 영상의 평균밝기를 고려한 프로젝터용 색 보정 기법)

  • Sung, Soo-Jin;Lee, Cheol-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.1 / pp.213-224 / 2010
  • In this paper, we propose an algorithm that combines geometric correction using a grid-point image with adaptive radiometric projection that depends on the luminance of the input image and that of the background. The method projects and captures the grid-point image, then calculates the geometrically corrected positions from the difference between the two images. Next, to compensate color, a corrected image is obtained from the ratio of the input-image luminance to the luminance of the arbitrary (screen) surface. In addition, we find a scaling factor that controls the contrast so as to avoid clipping error; for a fixed background, this scaling factor depends on the mean lightness of the image. Experimental results show that the proposed method achieves good performance and reduces perceived color clipping and artifacts, better approximating projection on a white screen.
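
To make the ratio-based radiometric step concrete, here is a minimal NumPy sketch. The function name and the heuristic tying the scaling factor to mean lightness are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def compensate(input_img, screen_capture, eps=1e-6):
    """Ratio-based compensation for projection onto a colored, non-flat screen.

    input_img      : float array in [0, 1], H x W x 3, image to be projected
    screen_capture : float array in (0, 1], H x W x 3, camera capture of a
                     full-white projection on the colored screen
    """
    # Per-pixel ratio of the desired output to the screen's response.
    ratio = input_img / np.clip(screen_capture, eps, None)

    # Scaling factor trading contrast against clipping: for a fixed
    # background it is driven by the mean lightness of the input image
    # (assumed heuristic -- brighter inputs are pulled closer to the
    # largest clip-free scale).
    mean_lightness = float(input_img.mean())
    clip_free = 1.0 / max(float(ratio.max()), 1.0)
    scale = (1.0 - mean_lightness) + mean_lightness * clip_free

    return np.clip(scale * ratio, 0.0, 1.0)
```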

Region-based Spectral Correlation Estimator for Color Image Coding (컬러 영상 부호화를 위한 영역 기반 스펙트럴 상관 추정기)

  • Kwak, Noyoon
    • Journal of Digital Contents Society / v.17 no.6 / pp.593-601 / 2016
  • This paper concerns the Region-based Spectral Correlation Estimation (RSCE) coding method, which achieves a high compression ratio by estimating the color component images from the luminance image. The proposed method consists of three steps. First, a Y/C bit-plane summation image is defined using the normalized chrominance summation image and the luminance image, and this Y/C bit-plane summation image is segmented to extract the shape information of the regions. Second, the scale factor and the offset factor minimizing the approximation squared error between the luminance image and the R and B images are calculated for each region. Finally, the scale factor and the offset factor for each region are encoded into the bit stream. According to the computer simulation results, the proposed method provides a compression ratio two to three times higher than the JPEG/Baseline or JPEG2000/EBCOT algorithms in terms of the bpp needed to encode the two color component images at the same PSNR.
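
The second step, fitting a scale and an offset per region by least squares, maps directly onto a one-variable linear fit. A small NumPy sketch under that reading (function and variable names are assumptions):

```python
import numpy as np

def fit_region_factors(luma, chroma, labels):
    """Per-region scale/offset predicting one color plane from luminance.

    luma, chroma : H x W float arrays (Y and, e.g., the R plane)
    labels       : H x W int array of region ids from the segmentation
    Returns {region_id: (scale, offset)} minimizing the squared error of
    chroma ~ scale * luma + offset inside each region.
    """
    factors = {}
    for rid in np.unique(labels):
        mask = labels == rid
        # Ordinary least squares for a single-variable linear model.
        scale, offset = np.polyfit(luma[mask], chroma[mask], deg=1)
        factors[rid] = (float(scale), float(offset))
    return factors
```

A decoder would then reconstruct each color plane region by region as scale * luma + offset.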

Edge Adaptive Color Interpolation for Ultra-Small HD-Grade CMOS Video Sensor in Camera Phones

  • Jang, Won-Woo;Kim, Joo-Hyun;Yang, Hoon-Gee;Lee, Gi-Dong;Kang, Bong-Soon
    • Journal of information and communication convergence engineering / v.8 no.1 / pp.51-58 / 2010
  • This paper proposes an edge-adaptive color interpolation for an ultra-small HD-grade complementary metal-oxide semiconductor (CMOS) video sensor in camera phones that can process 720p/30-fps video. Recently proposed methods with good perceptual image quality first reconstruct the green component and then estimate the red/blue components using the reconstructed green and the neighboring red and blue pixels. However, these methods require bulky line-buffer memory to temporarily store the reconstructed green components. The proposed edge-adaptive color interpolation uses seven or nine patterns to calculate six edge directions. At the same time, the threshold values are adaptively adjusted by the sum of the color values of the selected pixels. The method selects the suitable pattern using two flowcharts proposed in this paper and then interpolates the missing color values. For verification, we calculated the peak signal-to-noise ratio (PSNR) of test images processed by the proposed algorithm and compared it with the PSNR of existing methods. The proposed color interpolation is also fabricated with a 0.18-µm CMOS flash memory process.
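
The seven/nine-pattern scheme itself cannot be reconstructed from the abstract, but the core idea of edge-adaptive demosaicing, interpolating the missing green sample along the direction with the smaller gradient, can be sketched as follows (illustrative only):

```python
def interpolate_green(bayer, r, c):
    """Edge-adaptive green estimate at a red or blue Bayer site.

    bayer : 2-D float array holding the raw mosaic; (r, c) is an interior
    pixel whose green value is missing.
    """
    dh = abs(bayer[r, c - 1] - bayer[r, c + 1])   # horizontal gradient
    dv = abs(bayer[r - 1, c] - bayer[r + 1, c])   # vertical gradient
    if dh < dv:        # edge runs horizontally -> average along the row
        return (bayer[r, c - 1] + bayer[r, c + 1]) / 2.0
    if dv < dh:        # edge runs vertically -> average along the column
        return (bayer[r - 1, c] + bayer[r + 1, c]) / 2.0
    return (bayer[r, c - 1] + bayer[r, c + 1] +
            bayer[r - 1, c] + bayer[r + 1, c]) / 4.0
```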

Velocity Distribution Measurements in Mach 2.0 Supersonic Nozzle using Two-Color PIV Method (Two Color PIV 기법을 이용한 마하 2.0 초음속 노즐의 속도분포 측정)

  • 안규복;임성규;윤영빈
    • Journal of the Korean Society of Propulsion Engineers / v.4 no.4 / pp.18-25 / 2000
  • A two-color particle image velocimetry (PIV) system has been developed for measuring two-dimensional velocity fields and applied to a Mach 2.0 supersonic nozzle. The technique is similar to single-color PIV except that two laser beams of different colors are used to resolve the directional ambiguity problem. A green laser sheet (532 nm: second harmonic of a YAG laser) and a red laser sheet (619 nm: output of a YAG-pumped dye laser using Rhodamine 640) illuminate the seeded particles. A high-resolution (3060×2036) digital color CCD camera records the particle positions. The system eliminates photographic-film processing and subsequent digitization time, as well as the complexity of conventional image-shifting techniques for resolving directional ambiguity. Two-color PIV also has the advantage that velocity distributions in high-speed flowfields can be measured simply and accurately by varying the time interval between the two laser pulses, owing to its high signal-to-noise ratio and the correspondingly smaller number of particle pairs required per velocity vector in one interrogation spot. The velocity distribution in the Mach 2.0 supersonic nozzle has been measured, and the over-expanded shock-cell structure can be predicted from the strain-rate field. These results are compared and analyzed against schlieren photographs with respect to the velocity distribution and shock location.
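
The core of any two-frame PIV evaluation is a cross-correlation of interrogation windows; with two colors, the first and second exposures are already separated, so the sign of the displacement is unambiguous. A sketch of one window's evaluation (an assumed helper, not the authors' code):

```python
import numpy as np
from scipy.signal import fftconvolve

def window_velocity(green_win, red_win, dt, m_per_pixel):
    """Mean velocity of one interrogation window from a two-color pair.

    green_win : particle image from the first (green) laser pulse
    red_win   : particle image from the second (red) pulse, same window
    dt        : time interval between the two pulses [s]
    """
    g = green_win - green_win.mean()
    r = red_win - red_win.mean()
    # Cross-correlation via FFT; the peak location gives the mean
    # particle displacement between the two exposures.
    corr = fftconvolve(r, g[::-1, ::-1], mode='full')
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy = peak[0] - (g.shape[0] - 1)
    dx = peak[1] - (g.shape[1] - 1)
    return dx * m_per_pixel / dt, dy * m_per_pixel / dt
```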

Title Extraction from Book Cover Images Using Histogram of Oriented Gradients and Color Information

  • Do, Yen;Kim, Soo Hyung;Na, In Seop
    • International Journal of Contents / v.8 no.4 / pp.95-102 / 2012
  • In this paper, we present a technique to extract title areas from book cover images. A typical book cover image may contain text, pictures, and diagrams as well as a complex and irregular background. In addition, the high variability of character features such as thickness, font, position, background, and tilt of the text makes text extraction more complicated. Therefore, we propose an efficient two-step method that uses the Histogram of Oriented Gradients and color information to find the title areas. First, text localization is carried out to find title candidates. Then, a refinement process is performed to find the sufficient components of the title areas. To obtain the best result, we also use constraints on the size and on the ratio between the length and width of the title. We achieve encouraging results in extracting title regions from book cover images, demonstrating the advantages and efficiency of the proposed method.
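
Assuming the HOG-based localization of the first step has already produced a binary text mask, the size and aspect-ratio constraints mentioned above amount to a simple filter over connected components; the thresholds below are placeholders, not values from the paper.

```python
from skimage.measure import label, regionprops

def filter_title_candidates(text_mask, min_area=200,
                            min_aspect=1.5, max_aspect=20.0):
    """Keep connected components whose area and width/height ratio look
    like a book-title line; returns their bounding boxes."""
    boxes = []
    for region in regionprops(label(text_mask)):
        top, left, bottom, right = region.bbox
        aspect = (right - left) / max(bottom - top, 1)
        if region.area >= min_area and min_aspect <= aspect <= max_aspect:
            boxes.append(region.bbox)
    return boxes
```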

Retinex Algorithm Improvement for Efficient Color Compensation in Back-Light Images (역광 이미지의 효율적인 컬러 색상 보정을 위한 Retinex 알고리즘의 성능 개선)

  • Kim, Young-Tak;Yu, Jae-Hyoung;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information / v.16 no.1 / pp.61-69 / 2011
  • This paper proposes a new algorithm that improves the color components of images compensated with the Retinex method for back-light images. A back-light image has two regions: one that is too bright and one that is too dark. If the contrast of a back-light image is enhanced with the Retinex method alone, color information is lost in the bright part of the image. To make up for this loss, the proposed algorithm adds color components from the original image. The histogram is divided into three parts, bright, dark, and intermediate, using the K-means (k=3) algorithm. For the bright region, the color information of the original image is used. For the dark region, the Retinex result is used. The intermediate region is a mixture of the original image and the Retinex result in a ratio derived from the histogram; the ratio is determined by the distance from the dark area. The proposed algorithm was tested on natural back-light images to evaluate its performance, and the experimental results show that it is more robust than the original Retinex algorithm.
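
A minimal sketch of the blending stage, assuming the luminance values are clustered with K-means (k=3) and that the intermediate weight falls off linearly with distance from the dark cluster (that weighting is an assumption, not the paper's exact rule):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def blend_backlight(original, retinex, luminance):
    """Blend an original back-light image with its Retinex-enhanced version.

    original, retinex : H x W x 3 float arrays
    luminance         : H x W float array of the original image
    Dark pixels take the Retinex result, bright pixels keep the original
    colors, and the intermediate cluster is mixed in between.
    """
    centroids, labels = kmeans2(luminance.reshape(-1, 1).astype(float), 3,
                                minit='points')
    order = np.argsort(centroids.ravel())      # dark, mid, bright cluster ids
    dark_c, _, bright_c = centroids.ravel()[order]

    labels = labels.reshape(luminance.shape)
    w = np.zeros_like(luminance, dtype=float)  # weight of the Retinex result
    w[labels == order[0]] = 1.0                # dark region: pure Retinex
    mid = labels == order[1]
    # Intermediate region: weight grows as the pixel gets closer to the
    # dark centroid (assumed linear fall-off).
    w[mid] = np.clip((bright_c - luminance[mid]) / (bright_c - dark_c), 0.0, 1.0)
    return w[..., None] * retinex + (1.0 - w[..., None]) * original
```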

Region Classification and Image Compression Based on Region-Based Prediction (RBP) Model

  • Cassio M. Yorozuya;Yu Liu;Masayuki Nakajima
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.165-170 / 1998
  • This paper presents RBP, a new region-based prediction model in which the context used for prediction contains regions instead of individual pixels. A useful property of RBP is that it can partition a cartoon image into two distinct types of regions, one containing full-color backgrounds and the other containing boundaries, edges, and homo-chromatic areas. With the development of computer techniques, synthetic images created with CG (computer graphics) have become attractive. As with the general demand for data compression, it is imperative to efficiently compress synthetic images such as CG-generated cartoon animation for storage of finite capacity and transmission over narrow bandwidth. This paper applies a lossy compression method to full-color regions and a lossless compression method to homo-chromatic and boundary regions. Two partitioning criteria are described: a constant criterion and a variable criterion. The latter, in the form of a linear function, gives a different classification threshold depending on the content of the image of interest. We carry out experiments by applying our method to a sequence of cartoon animation. Compared with the available image compression standard MPEG-1, our method gives superior results in both compression ratio and complexity.
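
The two partitioning criteria can be read as a per-block test with either a constant threshold or a threshold that is a linear function of the block content; the sketch below is such a paraphrase, with made-up coefficients and a generic activity measure rather than the paper's actual prediction context.

```python
import numpy as np

def classify_blocks(image, block=8, a=0.05, b=2.0, variable=True):
    """Label each block by a simple activity test.

    variable=True uses the linear threshold a * mean + b instead of the
    constant threshold b, echoing the two criteria in the abstract.
    Returns True for high-activity blocks, False for flat ones.
    """
    h, w = image.shape[:2]
    labels = np.zeros((h // block, w // block), dtype=bool)
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            blk = image[i:i + block, j:j + block].astype(float)
            threshold = a * blk.mean() + b if variable else b
            labels[i // block, j // block] = blk.std() > threshold
    return labels
```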

Method of Measuring Color Difference Between Images using Corresponding Points and Histograms (대응점 및 히스토그램을 이용한 영상 간의 컬러 차이 측정 기법)

  • Hwang, Young-Bae;Kim, Je-Woo;Choi, Byeong-Ho
    • Journal of Broadcast Engineering / v.17 no.2 / pp.305-315 / 2012
  • Color correction between two or more images is crucial for the development of subsequent algorithms and for stereoscopic 3D camera systems. Although various color correction methods have been proposed recently, there are few methods for measuring their performance. In addition, when two images differ in viewpoint because of camera position, previous performance measures may not be appropriate. In this paper, we propose a method of measuring the color difference between corresponding images for color correction. The method finds matching points that should have the same colors in the two scenes, so that view variation is handled by correspondence search. We then calculate statistics from the neighboring regions of these matching points to measure the color difference. This approach accommodates misalignment of corresponding points, in contrast to a conventional geometric transformation by a single homography. To handle the case in which matching points cannot cover the whole image, we also calculate color-difference statistics over the whole image region. Finally, the color difference is computed as a weighted sum of the correspondence-based and whole-region-based measures, where the weight is determined by the ratio of the area covered by the correspondence-based comparison.
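
A simplified sketch of such a measurement, combining patch statistics around matched points with whole-image statistics and weighting them by the area the patches cover (the exact statistics and weighting in the paper may differ):

```python
import numpy as np

def color_difference(img_a, img_b, matches, radius=3):
    """Color difference between two corresponding views (same size assumed).

    matches : list of ((xa, ya), (xb, yb)) corresponding points
    Returns a per-channel difference combining a local, correspondence-based
    term and a global, whole-image term.
    """
    h, w = img_a.shape[:2]
    local_diffs = []
    for (xa, ya), (xb, yb) in matches:
        if not (radius <= xa < w - radius and radius <= ya < h - radius and
                radius <= xb < w - radius and radius <= yb < h - radius):
            continue  # skip matches too close to the border
        pa = img_a[ya - radius:ya + radius + 1, xa - radius:xa + radius + 1]
        pb = img_b[yb - radius:yb + radius + 1, xb - radius:xb + radius + 1]
        local_diffs.append(np.abs(pa.reshape(-1, 3).mean(0) -
                                  pb.reshape(-1, 3).mean(0)))

    global_term = np.abs(img_a.reshape(-1, 3).mean(0) -
                         img_b.reshape(-1, 3).mean(0))
    if not local_diffs:
        return global_term
    local_term = np.mean(local_diffs, axis=0)
    # Weight by the fraction of the image the match patches cover.
    coverage = min(1.0, len(local_diffs) * (2 * radius + 1) ** 2 / float(h * w))
    return coverage * local_term + (1.0 - coverage) * global_term
```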

Effect of Curing Method on Physical Properties of a New Flue-cured Tobacco Variety KF114 (황색종 신품종 KFl14의 건조방법 조절이 잎담배 물리성에 미치는 영향)

  • 이철환;조수헌;이병철;진정의
    • Journal of the Korean Society of Tobacco Science / v.22 no.1 / pp.13-18 / 2000
  • In flue-cured tobacco, all steps of the curing process are automatically controlled by a preset program according to stalk position. A bulk-curing experiment was carried out to evaluate the effect of the basic and modified curing programs, which differ in curing-time schedule, on the physical properties of cured leaves of the new flue-cured tobacco variety KF 114 (Nicotiana tabacum L.) in two bulk curing models. The curing process of KF 114 was longer in the yellowing stage and quicker in the browning stage than that of NC 82. There was no significant difference in the physical properties and chromatic characteristics of the cured leaves between the basic and modified programs in either bulk model. The ratio of normal leaf color tended to increase and that of greenish leaf to decrease under the modified curing program in both models, but no difference in the brownish-leaf ratio was observed between the two programs.

A NEW METHOD OF MASKING CLOUD-AFFECTED PIXELS IN OCEAN COLOR IMAGERY BASED ON SPECTRAL SHAPE OF WATER REFLECTANCE

  • Fukushima, Hajime;Tamura, Jin;Toratani, Mitsuhiro;Murakami, Hiroshi
    • Proceedings of the KSRS Conference / v.1 / pp.25-28 / 2006
  • We propose a new method of masking cloud-affected pixels in satellite ocean color imagery such as that of GLI. Those pixels, mostly found around cloud pixels or in scattered-cloud areas, show anomalous features either in the chlorophyll-a estimate or in the water reflectance. This artifact is most likely caused by residual error in the inter-band registration correction. Our method checks the pixel-wise 'soundness' of the spectral water reflectance RW retrieved after atmospheric correction. First, we define two spectral ratios of water reflectance, IRR1 = RW(B1)/RW(B3) and IRR2 = RW(B2)/RW(B4), where B1~B4 stand for four consecutive visible bands. We show that an almost linear relation holds between log-scaled IRR1 and IRR2 for ship-measured RW data from the SeaBAM in situ data set and for cloud-free GLI Level 2 sub-scenes. The proposed method exploits this property, identifying pixels that show significant discrepancy from that relationship. We apply the method to ADEOS-II/GLI ocean color data to evaluate its performance on Level-2 data, which include different water types such as case 1, turbid case 2, and coccolithophore bloom waters.
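
A sketch of the masking test: fit (offline) a log-log linear relation between the two band ratios from clear-water data, then flag pixels that deviate from it by more than a margin. The slope, intercept, and tolerance parameters here are assumptions.

```python
import numpy as np

def cloud_affected_mask(rw_b1, rw_b2, rw_b3, rw_b4,
                        slope, intercept, tol=0.15, eps=1e-6):
    """Flag pixels whose water reflectance breaks the expected log-linear
    relation between IRR1 = RW(B1)/RW(B3) and IRR2 = RW(B2)/RW(B4).

    slope, intercept : coefficients fitted beforehand from clear-water data
                       (e.g. the SeaBAM in situ set); tol is an assumed margin.
    """
    irr1 = np.log10(np.clip(rw_b1, eps, None) / np.clip(rw_b3, eps, None))
    irr2 = np.log10(np.clip(rw_b2, eps, None) / np.clip(rw_b4, eps, None))
    predicted = slope * irr1 + intercept
    return np.abs(irr2 - predicted) > tol      # True = likely cloud-affected
```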
