• Title/Summary/Keyword: Color pixels

Edge-preserving demosaicing method for digital cameras with Bayer-like W-RGB color filter array

  • Park, Jongjoo; Chong, Jongwha
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.3, pp.1011-1025, 2014
  • A demosaicing method for a Bayer-like W-RGB color filter array (CFA) is proposed. When reproducing images from a W-RGB CFA, conventional color separation methods tend to blur edges because they rely on rough averaging of the color ratios of neighboring pixels. Moreover, these methods cannot be applied to real-life digital cameras with W-RGB CFAs because they were derived under the ideal assumption W = R+G+B rather than the real-life situation W ≠ R+G+B. To improve edge performance, we propose a constant-color-difference assumption with inverse weights, which uses information from all edge directions to interpolate every missing color channel. The proposed method also calculates the correlation between W, R, G, and B so that it can be applied to real-life digital cameras with W-RGB CFAs. Simulations were performed on images captured with a real-life W-RGB CFA digital camera. The results show that the proposed algorithm improves on the conventional one by about +34.79% in SNR, +11.43% in PSNR, and +1.54% in SSIM, and lowers the S-CIELAB error by about 14.02%. Thus, the proposed method demosaics better than the conventional methods.
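
To make the interpolation idea above concrete, here is a minimal, generic sketch of inverse-gradient-weighted interpolation of one missing channel on a Bayer-like mosaic. It is not the authors' W-RGB pipeline: the checkerboard assumption about where the known samples sit, the gradient formulas, and `eps` are all illustrative assumptions.

```python
import numpy as np

def interpolate_green(mosaic, eps=1e-6):
    """Fill in missing green samples of a Bayer-like mosaic by averaging the
    four axial neighbors with weights inversely proportional to the local
    gradient in each direction (illustrative only)."""
    m = mosaic.astype(np.float64)
    green = m.copy()
    H, W = m.shape
    for r in range(2, H - 2):
        for c in range(2, W - 2):
            if (r + c) % 2 == 0:   # assume known green samples sit on the other checkerboard sites
                continue
            # Directional gradients: a strong edge in a direction gives a large
            # gradient and therefore a small weight for that direction.
            grads = {
                "up":    abs(m[r - 1, c] - m[r + 1, c]) + abs(m[r, c] - m[r - 2, c]),
                "down":  abs(m[r + 1, c] - m[r - 1, c]) + abs(m[r, c] - m[r + 2, c]),
                "left":  abs(m[r, c - 1] - m[r, c + 1]) + abs(m[r, c] - m[r, c - 2]),
                "right": abs(m[r, c + 1] - m[r, c - 1]) + abs(m[r, c] - m[r, c + 2]),
            }
            neighbors = {
                "up": m[r - 1, c], "down": m[r + 1, c],
                "left": m[r, c - 1], "right": m[r, c + 1],
            }
            weights = {d: 1.0 / (g + eps) for d, g in grads.items()}   # inverse weights
            total = sum(weights.values())
            green[r, c] = sum(weights[d] * neighbors[d] for d in grads) / total
    return green
```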

An Effective Moving Cast Shadow Removal in Gray Level Video for Intelligent Visual Surveillance (지능 영상 감시를 위한 흑백 영상 데이터에서의 효과적인 이동 투영 음영 제거)

  • Nguyen, Thanh Binh; Chung, Sun-Tae; Cho, Seongwon
    • Journal of Korea Multimedia Society, v.17 no.4, pp.420-432, 2014
  • In detecting moving objects from video sequences, an essential process in intelligent visual surveillance, the cast shadows accompanying moving objects differ from the background and may therefore be extracted as foreground object blobs, causing errors in the localization, segmentation, tracking, and classification of objects. Most previous work on moving cast shadow detection and removal relies on color information about objects and scenes. In this paper, we propose a novel cast shadow removal method for moving objects in gray level video data for visual surveillance applications. The proposed method exploits observations about edge patterns in the shadow region of the current frame and the corresponding region of the background scene: a Laplacian edge detector is applied to the blob regions in the current frame and to the corresponding regions in the background scene, and the product of the two responses determines which blob pixels in the foreground mask belong to the moving object. The minimal rectangular regions containing all pixels classified as moving-object pixels are then extracted. The proposed method is simple but proves very effective in practice for Adaptive Gaussian Mixture Model-based object detection in intelligent visual surveillance applications, as verified through experiments.
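
A minimal sketch of the edge-correlation rule described above, using SciPy's Laplacian filter: the Laplacian responses of the blob region in the current frame and in the background scene are multiplied, and foreground pixels whose edge response agrees with the background (shadow-like) are discarded. The agreement threshold and this particular reading of the product rule are assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.ndimage import laplace   # Laplacian edge response

def moving_object_pixels(frame_blob, background_blob, foreground_mask, thresh=0.0):
    """Split foreground-mask pixels inside a blob into object vs. shadow by
    comparing Laplacian edge responses of the current frame and the background.
    `foreground_mask` is a boolean array of the same shape as the blob."""
    e_frame = laplace(frame_blob.astype(np.float64))
    e_bg = laplace(background_blob.astype(np.float64))
    agreement = e_frame * e_bg   # large positive where both share the same edge pattern
    # Shadow pixels tend to preserve the background's edges (high agreement);
    # keep the remaining foreground pixels as moving-object pixels.
    return foreground_mask & (agreement <= thresh)

def bounding_box(mask):
    """Minimal rectangle (r0, r1, c0, c1) containing all True pixels."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()
```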

Robust Lane Detection Algorithm for Realtime Control of an Autonomous Car (실시간 무인 자동차 제어를 위한 강인한 차선 검출 알고리즘)

  • Han, Myoung-Hee; Lee, Keon-Hong; Jo, Sung-Ho
    • The Journal of Korea Robotics Society, v.6 no.2, pp.165-172, 2011
  • This paper presents a robust lane detection algorithm based on RGB color and shape information for real-time control of an autonomous car. For real-time operation, the algorithm increases its processing speed by employing a minimal set of elements. It extracts yellow and white pixels by computing the average and standard deviation values over specific regions, and constructs elements from the extracted pixels. By clustering these elements, the algorithm finds the yellow center lane and the white stop lane on the road. The algorithm is insensitive to environmental changes, and its processing speed allows real-time execution. Experimental results demonstrate the feasibility of the algorithm.
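
A minimal sketch of the color-statistics step described above, assuming a simple "mean plus k standard deviations" brightness rule per channel; the value of `k` and the channel rules for yellow versus white are illustrative assumptions, not the paper's thresholds.

```python
import numpy as np

def extract_lane_pixels(rgb_roi, k=1.5):
    """Mark pixels that stand out from the road statistics of a region of interest."""
    roi = rgb_roi.astype(np.float64)
    flat = roi.reshape(-1, 3)
    mean = flat.mean(axis=0)          # per-channel average of the region
    std = flat.std(axis=0)            # per-channel standard deviation
    bright = roi > (mean + k * std)   # channel-wise "brighter than the road surface"
    white = bright.all(axis=2)        # bright in R, G and B  -> white lane candidate
    yellow = bright[..., 0] & bright[..., 1] & ~bright[..., 2]   # bright R, G but not B -> yellow lane candidate
    return yellow, white
```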

Full color reflective cholesteric liquid crystal using photosensitive chiral dopant (감광성 도판트를 이용한 풀컬러 구현 가능 반사형 콜레스테릭 액정)

  • Park, Seo-Kyu; Cho, Hee-Seok; Kwon, Soon-Bum; Kim, Jeong-Soo; Reznikov, Yu.
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference, 2007.11a, pp.394-395, 2007
  • To realize full-color cholesteric displays, color-filter-less R, G, B sub-pixel structured cholesteric LC cells were studied. To produce the R, G, B colors, a UV-induced pitch-variant chiral dopant was added to the cholesteric LC mixtures. The concentration of the photosensitive chiral dopant was adjusted so that the initial state showed blue, and the color changed from blue to green and then to red with increasing UV irradiation of the cholesteric cells. To prevent mixing of the R, G, B reflective sub-pixel liquid crystals, separation walls were formed with a negative photoresist in the boundary areas between sub-pixels. Through optimization of the material concentrations and the UV irradiation conditions, vivid R, G, B colors were achieved.

Direct Depth and Color-based Environment Modeling and Mobile Robot Navigation (스테레오 비전 센서의 깊이 및 색상 정보를 이용한 환경 모델링 기반의 이동로봇 주행기술)

  • Park, Soon-Yong; Park, Mignon; Park, Sung-Kee
    • The Journal of Korea Robotics Society, v.3 no.3, pp.194-202, 2008
  • This paper describes a new method for indoor environment mapping and localization with a stereo camera. For environment modeling, the depth and color information of the image pixels are used directly as visual features. Furthermore, only the depth and color information along the horizontal centerline of the image, through which the optical axis passes, is used. The advantage of this approach is that a measure between the model and the sensing data can easily be built on the horizontal centerline alone, since the vertical working volume between model and sensing data changes with robot motion. The indoor environment can therefore be mapped in a compact and efficient representation. Based on these nodes and the sensing data, a method for estimating the mobile robot's position with a random-sampling stochastic algorithm is also suggested. Basic real-world experiments show that the proposed method can serve as an effective visual navigation algorithm.
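
A minimal sketch of the centerline representation described above: only the depth and color values on the image row through the optical axis are kept, and a simple weighted distance compares model and sensed data. The validity test, the weights, and the element-wise matching are illustrative assumptions rather than the paper's exact measure.

```python
import numpy as np

def centerline_features(depth_image, color_image):
    """Collect (depth, color) pairs along the horizontal centerline of a
    stereo frame, the only row this compact model stores."""
    center_row = depth_image.shape[0] // 2          # row the optical axis passes through
    depths = depth_image[center_row, :].astype(np.float64)
    colors = color_image[center_row, :, :].astype(np.float64)
    valid = depths > 0                              # drop pixels with no stereo match
    return depths[valid], colors[valid]

def feature_distance(model_depths, model_colors, obs_depths, obs_colors,
                     w_depth=1.0, w_color=0.01):
    """A simple weighted distance between model and observed centerline data;
    the weights are illustrative placeholders."""
    n = min(len(model_depths), len(obs_depths))
    d_depth = np.mean(np.abs(model_depths[:n] - obs_depths[:n]))
    d_color = np.mean(np.abs(model_colors[:n] - obs_colors[:n]))
    return w_depth * d_depth + w_color * d_color
```

In a random-sampling localization scheme, each pose hypothesis would be scored by such a distance and resampled accordingly.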

A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map (객체기반의 시공간 단서와 이들의 동적결합 된돌출맵에 의한 상향식 인공시각주의 시스템)

  • Cheoi, Kyungjoo
    • Journal of Korea Multimedia Society, v.18 no.4, pp.460-472, 2015
  • Most previous visual attention systems find attention regions based on a saliency map combined from multiple extracted features; these systems differ mainly in how the features are extracted and combined. This paper presents a new system with improved feature extraction for color and motion and an improved weight decision method for spatial and temporal features. The system dynamically extracts the one color with the strongest response among two opponent colors, and detects moving objects rather than moving pixels. To combine the spatial and temporal features, the proposed system sets the weights dynamically according to each feature's relative activity. Comparative results show that the suggested feature extraction and integration methods improve the detection rate of attention regions.
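
The dynamic-weighting step can be pictured as below: each cue's weight is proportional to its relative activity. Using the mean saliency as the activity measure is an assumption on my part; the abstract only states that the weights follow each feature's relative activity.

```python
import numpy as np

def fuse_saliency(spatial_map, temporal_map, eps=1e-9):
    """Fuse spatial and temporal saliency maps with weights proportional to
    each map's activity (here approximated by its mean saliency)."""
    s = spatial_map.astype(np.float64)
    t = temporal_map.astype(np.float64)
    a_s, a_t = s.mean(), t.mean()          # activity of each cue
    w_s = a_s / (a_s + a_t + eps)          # relative weights sum to 1
    w_t = a_t / (a_s + a_t + eps)
    return w_s * s + w_t * t
```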

Wide-QQVGA Flexible Full-Color Active-Matrix OLED Display with an Organic TFT Backplane

  • Nakajima, Yoshiki; Takei, Tatsuya; Tsuzuki, Toshimitsu; Suzuki, Mitsunori; Fukagawa, Hirohiko; Fujisaki, Yoshihide; Yamamoto, Toshihiro; Kikuchi, Hiroshi; Tokito, Shizuo
    • Proceedings of the Korean Information Display Society Conference, 2008.10a, pp.189-192, 2008
  • A 5.8-inch wide-QQVGA flexible full-color active-matrix OLED display was fabricated on a plastic substrate. Low-voltage-operation organic TFTs and high-efficiency phosphorescent OLEDs were used as the backplane and the emissive pixels, respectively. The fabricated display clearly showed moving color images at driving voltages below 15 V.

Improved LCD color performance using RGB gamma curve control

  • Lee, Seung-Woo; Lee, Jun-Pyo; Kim, Tae-Sung; Berkeley, Brian H.; Kim, Sang-Soo
    • Proceedings of the Korean Information Display Society Conference, 2006.08a, pp.1291-1294, 2006
  • The technique presented in this paper maximizes LCD color performance by means of advanced gamma control technology. First, the two gamma curves corresponding to the two sub-pixels are mixed to minimize off-axis gamma distortion; then RGB gamma curve control is used to establish accurate on-axis color. Independent RGB curve control for each sub-pixel improves the LCD's performance both on- and off-axis.
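
As an illustration of the two steps named in the abstract, the sketch below first blends two sub-pixel transfer curves and then applies independent per-channel gamma curves. The specific gamma values and the 50/50 mix are placeholder assumptions, not the paper's measured curves.

```python
import numpy as np

def mixed_gamma(x, gamma_a=2.2, gamma_b=2.6, mix=0.5):
    """Blend the transfer curves of two sub-pixels driven with different gammas."""
    x = np.clip(np.asarray(x, dtype=np.float64), 0.0, 1.0)
    return mix * x ** gamma_a + (1.0 - mix) * x ** gamma_b

def apply_rgb_gamma(rgb, gammas=(2.2, 2.2, 2.2)):
    """Apply an independent gamma curve to each of the R, G, B channels."""
    rgb = np.clip(np.asarray(rgb, dtype=np.float64), 0.0, 1.0)
    out = np.empty_like(rgb)
    for ch, g in enumerate(gammas):
        out[..., ch] = rgb[..., ch] ** g
    return out
```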

Color Nanotube Field Emission Displays for HDTV

  • Dean, K.A.; Coll, B.F.; Dinsmore, A.; Howard, E.; Hupp, M.; Johnson, S.V.; Johnson, M.R.; Jordan, D.C.; Li, H.; Marshbanks, L.; McMurtry, T.; Tisinger, L. Hilt; Wieck, S.; Baker, J.; Dauksher, W. J.; Smith, S.M.; Wei, Y.; Weston, D.; Young, S.R.; Jaskie, J.E.
    • Proceedings of the Korean Information Display Society Conference, 2005.07b, pp.1003-1007, 2005
  • We demonstrate color video displays driven by carbon nanotube electron field emitters. The nanotubes are incorporated into the device by selective growth using low-temperature chemical vapor deposition. The device structure is simple and inexpensive to fabricate, and a 45 V switching voltage enables the use of low-cost driver electronics. The prototype units are sealed 4.6" diagonal displays with 726 µm pixels; they represent a piece of a 42" diagonal 1280x720 high-definition television. The carbon nanotube growth process is performed as the last processing step and creates nanotubes ready for field emission. No post-growth activation steps are required, so no chemical or particulate contamination is introduced. Control of the nanotube dimension, orientation, and spatial distribution during growth enables uniform, high-quality color video performance.

Rotation Angle Estimation of Multichannel Images (다채널 이미지의 회전각 추정)

  • Lee Bong-Kyu; Yang Yo-Han
    • The Transactions of the Korean Institute of Electrical Engineers D, v.51 no.6, pp.267-271, 2002
  • The Hotelling transform is based on the statistical properties of an image, and its principal use is in data compression. Its basic concept is the choice of basis vectors that point in the directions of maximum variance of the data. This property can be used for rotation normalization: many objects of interest in pattern recognition applications can be standardized by a rotation normalization that aligns the coordinate axes with the axes of maximum variance of the object's pixels. However, the transform cannot be applied directly to the rotation normalization of color images. In this paper, we propose a new method for rotation normalization of color images based on the Hotelling transform. The Hotelling transform is performed to calculate the basis vectors of each channel, the vectors of all channels are then summed, and rotation normalization is performed using the summed vectors. Experimental results show that the proposed method can be used effectively for rotation normalization of color images.
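
As a rough sketch of the channel-wise Hotelling (PCA) idea described in the abstract, the code below computes a principal axis for each color channel from intensity-weighted pixel coordinates, sums the channel axes, and reads off a rotation angle. The intensity weighting and the sign convention used to resolve the eigenvector ambiguity are assumptions; the paper does not specify them.

```python
import numpy as np

def rotation_angle(color_image):
    """Estimate orientation by summing, over the color channels, each channel's
    principal axis of intensity-weighted pixel coordinates."""
    img = color_image.astype(np.float64)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    coords = np.stack([xs.ravel(), ys.ravel()])       # 2 x N pixel coordinates
    summed = np.zeros(2)
    for ch in range(img.shape[2]):
        w = img[..., ch].ravel()                      # use intensity as the pixel weight
        if w.sum() == 0:
            continue                                  # skip empty channels
        mean = (coords * w).sum(axis=1) / w.sum()     # weighted centroid
        centered = coords - mean[:, None]
        cov = (centered * w) @ centered.T / w.sum()   # weighted 2x2 covariance
        evals, evecs = np.linalg.eigh(cov)
        principal = evecs[:, np.argmax(evals)]        # basis vector of maximum variance
        if principal[0] < 0:
            principal = -principal                    # resolve per-channel sign ambiguity
        summed += principal
    return np.degrees(np.arctan2(summed[1], summed[0]))
```

The image would then be rotated by the negative of this angle to align the summed principal axis with the horizontal.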