• Title/Summary/Keyword: YUV format


Enhanced Intra Prediction for the Characteristics of Color Filter Array (컬러 필터 배열 구조를 고려한 화면 내 예측 개선 기법)

  • Lee, Jae-Hoon;Lee, Chul-Hee
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2012.05a
    • /
    • pp.656-659
    • /
    • 2012
  • In general, images captured through a color filter array are compressed after a demosaicking process is applied. Since this process introduces data redundancy that can reduce coding efficiency, several methods have been proposed to address the problem. While some conventional approaches convert the color format to GBR or YUV 4:2:2, we propose to use the YCoCg 4:2:2 format for compression. The proposed method shows an average bit reduction of 3.91% and a PSNR increase of 0.04 dB compared with the H.264 YUV 4:2:0 intra-prediction method.
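As background for the abstract above: the YCoCg family includes a lossless integer variant, YCoCg-R, often used when coding efficiency must not sacrifice exact reconstruction. A minimal sketch (function names are illustrative, not the paper's code):

```python
def rgb_to_ycocg_r(r, g, b):
    """Forward lossless YCoCg-R transform (integer lifting, reversible)."""
    co = r - b
    t = b + (co >> 1)   # arithmetic shift; Python's >> floors, as required
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Inverse transform: undoes the lifting steps in reverse order,
    recovering the exact RGB triple."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b
```

Because every step is an integer lifting step, the inverse recovers the input bit-exactly, which is what makes the transform attractive before subsampling to 4:2:2.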

  • PDF

A Study on a Single-Chip Design that Converts Improved YUV Signals to RGB Signals (개선된 YUV신호를 RGB신호로 변환하는 단일칩 설계에 관한 연구)

  • Lee, Chi-Woo;Park, Sang-Bong;Jin, Hyun-Jun;Park, Nho-Kyung
    • Journal of IKEEE
    • /
    • v.7 no.2 s.13
    • /
    • pp.197-209
    • /
    • 2003
  • A current TV output format differs considerably from that of an HDTV or PC monitor in its encoding technique: a conventional analog TV uses interlaced display, while an HDTV or PC monitor uses non-interlaced (progressive-scan) display. To present image signals coming from interlaced-display devices on a progressive-scan display, hardware logic implementing scanning and interpolation algorithms is necessary. The ELA (Edge-Based Line Average) algorithm has been widely used because it provides good characteristics. In this study, the ADI (Adaptive De-interlacing Interpolation) algorithm is used to improve the ELA algorithm, which shows low quality in vertical edge detection and low efficiency on horizontal edge lines. A de-interlacing ASIC chip that converts interlaced digital YUV to de-interlaced digital RGB is designed; VHDL is used for the chip design.
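The ELA algorithm mentioned above reconstructs each missing interlaced line by averaging along the direction of least luminance change between the lines above and below. A minimal one-line sketch of the classic three-direction ELA (illustrative only; the paper's ADI refinement adds adaptivity not shown here):

```python
def ela_interpolate(above, below):
    """Edge-Based Line Average: reconstruct one missing scan line from
    the field lines directly above and below it."""
    n = len(above)
    out = []
    for j in range(n):
        jl, jr = max(j - 1, 0), min(j + 1, n - 1)
        # three candidate directions: 45 degrees, vertical, 135 degrees;
        # choose the one with the smallest luminance difference
        cands = [
            (abs(above[jl] - below[jr]), (above[jl] + below[jr]) // 2),
            (abs(above[j] - below[j]),  (above[j] + below[j]) // 2),
            (abs(above[jr] - below[jl]), (above[jr] + below[jl]) // 2),
        ]
        out.append(min(cands)[1])
    return out
```

The directional check is what lets ELA preserve diagonal edges that plain vertical averaging would blur, which is also where its weakness on vertical edges (addressed by ADI) comes from.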

  • PDF

Simulation of YUV-Aware Instructions for High-Performance, Low-Power Embedded Video Processors (고성능, 저전력 임베디드 비디오 프로세서를 위한 YUV 인식 명령어의 시뮬레이션)

  • Kim, Cheol-Hong;Kim, Jong-Myon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.252-259
    • /
    • 2007
  • With the rapid development of multimedia applications and wireless communication networks, consumer demand for video-over-wireless capability on mobile computing systems is growing rapidly. In this regard, this paper introduces YUV-aware instructions that enhance performance and efficiency in the processing of color image and video. Traditional multimedia extensions (e.g., MMX, SSE, VIS, and AltiVec) depend solely on generic subword parallelism, whereas the proposed YUV-aware instructions support parallel operations on two packed 16-bit YUV (6-bit Y, 5-bit U and V) values in a 32-bit datapath architecture, providing greater concurrency and efficiency for color image and video processing. Moreover, the reduced data format size lowers system cost. Experimental results on a representative dynamically scheduled embedded superscalar processor show that YUV-aware instructions achieve an average speedup of 3.9x over the baseline superscalar performance. This is in contrast to MMX (a representative Intel multimedia extension), which achieves a speedup of only 2.1x over the same baseline superscalar processor. In addition, YUV-aware instructions outperform MMX instructions in energy reduction (75.8% with YUV-aware instructions, but only 54.8% with MMX instructions, over the baseline).
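The packing scheme described above (two 16-bit YUV samples, split 6/5/5, per 32-bit word) and the essence of subword parallelism can be sketched in software. The field layout and function names below are illustrative assumptions, not the paper's ISA definition:

```python
def pack_yuv655(y, u, v):
    """One 16-bit sample: Y in bits 15..10, U in bits 9..5, V in bits 4..0."""
    return ((y & 0x3F) << 10) | ((u & 0x1F) << 5) | (v & 0x1F)

def pack2(hi, lo):
    """Two 16-bit samples side by side in one 32-bit word."""
    return ((hi << 16) | lo) & 0xFFFFFFFF

def add16x2(a, b):
    """Subword-parallel add: both 16-bit lanes added in one operation,
    with the carry between lanes suppressed - the core trick that lets
    one 32-bit adder act as two independent 16-bit adders."""
    m = 0x80008000                      # lane-boundary (MSB-of-lane) mask
    s = (a & ~m) + (b & ~m)             # add low 15 bits of each lane
    return (s ^ ((a ^ b) & m)) & 0xFFFFFFFF  # restore each lane's MSB
```

Halving the per-sample width (16 bits instead of 24+) is what doubles the number of samples processed per datapath operation, which is where the reported speedup over generic 8-bit subword extensions comes from.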

A Study of a Color De-interlacing ASIC Chip Design Adopting an Improved Interpolation Algorithm for Better Picture Quality Using a Color Space Converter (ADI 보간 알고리듬을 적용한 Color Space Converter 칩 설계에 관한 연구)

Lee, Chi-Woo;Park, Nho-Kyung;Jin, Hyun-Jun;Park, Sang-Bong
    • Proceedings of the IEEK Conference
    • /
    • 2001.06d
    • /
    • pp.199-202
    • /
    • 2001
  • A current TV-OUT format differs considerably from that of an HDTV or PC monitor in its encoding technique: a conventional analog TV uses interlaced display, while an HDTV or PC monitor uses non-interlaced (progressive-scan) display. To present image signals coming from interlaced-display devices on a progressive-scan display, hardware logic implementing scanning and interpolation algorithms is necessary. The ELA (Edge-Based Line Average) algorithm has been widely used because it provides good characteristics. In this study, the ADI (Adaptive De-interlacing Interpolation) algorithm is used to improve the ELA algorithm, which shows low quality in vertical edge detection and low efficiency on horizontal edge lines. A de-interlacing ASIC chip that converts interlaced digital YUV to de-interlaced digital RGB is designed; VHDL is used for the chip design.

  • PDF

Facial Point Classifier using a Convolutional Neural Network and Cascade Facial Point Detector (컨볼루셔널 신경망과 케스케이드 안면 특징점 검출기를 이용한 얼굴의 특징점 분류)

  • Yu, Je-Hun;Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.3
    • /
    • pp.241-246
    • /
    • 2016
  • Nowadays many people are interested in facial expressions and human behavior, which human-robot interaction (HRI) researchers study using digital image processing, pattern recognition, and machine learning. Facial feature point detection algorithms are very important for face recognition, gaze tracking, and expression and emotion recognition. In this paper, a cascade facial feature point detector is used to find facial feature points such as the eyes, nose, and mouth. However, the detector has difficulty extracting the feature points from some images, because images differ in conditions such as size, color, and brightness. Therefore, we propose an algorithm that modifies the cascade facial feature point detector using a convolutional neural network. The structure of the convolutional neural network is based on Yann LeCun's LeNet-5. As input data for the convolutional neural network, color and gray images output by the cascade facial feature point detector were used; the images were resized to 32×32, and the gray images were additionally converted to the YUV format. These gray and color images form the input to the convolutional neural network. We then classified about 1,200 test images of subjects. This research found that the proposed method is more accurate than the cascade facial feature point detector alone, because the algorithm refines the detector's results.

A Study on System and Fire Judgement Technology for u-CCTV Fire Surveillance System Development (u-CCTV 화재 감시 시스템 개발을 위한 시스템 및 화재 판별 기술 연구)

  • Kim, Young-Hyuk;Lim, Il-Kwon;Li, Qigui;Park, So-A;Kim, Myung-Jin;Lee, Jae-Kwang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.463-466
    • /
    • 2010
  • In this paper, we aim to develop a CCTV-based fire surveillance system. The advantages and disadvantages of existing sensor-based and video-based fire surveillance systems are analyzed, and a fire surveillance system model and fire judgement technology appropriate for ubiquitous environments, in support of national U-City, U-Home, and U-Campus initiatives, are proposed. For this study, images were captured with a Microsoft LifeCam VX-1000 and analyzed using apple and tomato test objects, and H.264 was used for encoding. The client, built on an ARM9 S3C2440 board running Linux, passes the captured images to the server for processing. Client and server basically use 1:1 video communication, so multicast support will be specified to deliver video to multiple receivers; the fire surveillance system is designed for multi-party video communication. Video data are converted from the RGB format to the YUV format, and fire detection uses the Y value, which reveals movement. The red color of fire is detected, and the Y value in the fire region is computed continuously to track the movement of the flame.
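The detection idea sketched in the abstract — motion from the Y (luma) channel combined with a red-color test — can be illustrated in a few lines. This is a hedged reconstruction of the general approach, not the authors' code; thresholds and function names are illustrative:

```python
def luma(r, g, b):
    """BT.601 luma from RGB - the Y value used for motion detection."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def moving_fire_pixels(prev, curr, y_thresh=20, red_thresh=150):
    """Flag pixels that both moved (large |delta Y| between frames) and
    look flame-red (red dominant over green over blue).
    prev, curr: lists of (r, g, b) tuples; thresholds are illustrative."""
    flags = []
    for (r0, g0, b0), (r1, g1, b1) in zip(prev, curr):
        moved = abs(luma(r1, g1, b1) - luma(r0, g0, b0)) > y_thresh
        reddish = r1 > red_thresh and r1 > g1 > b1
        flags.append(moved and reddish)
    return flags
```

Requiring both cues is what separates a flickering flame from a static red object (no motion) or from moving non-red objects (wrong color).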

  • PDF

Facial Feature Tracking from a General USB PC Camera (범용 USB PC 카메라를 이용한 얼굴 특징점의 추적)

  • 양정석;이칠우
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2001.10b
    • /
    • pp.412-414
    • /
    • 2001
  • In this paper, we describe a real-time facial feature tracker. We used only a general USB PC camera, without a frame grabber. The system achieves a rate of 8+ frames/second without any low-level library support. It tracks the pupils, nostrils, and corners of the lips. The signal from the USB camera is in YUV 4:2:0 vertical format. We converted the signal into the RGB color model to display the image, interpolated the V channel of the signal to extract the facial region, and analyzed 2D blob features in the Y channel, the luminance of the image, with geometric restrictions to locate each facial feature within the detected facial region. Our method is simple and intuitive enough for the system to work in real time.
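The YUV-to-RGB conversion step mentioned above is the standard BT.601 YCbCr-to-RGB mapping. A minimal per-pixel sketch (8-bit full-range coefficients; not the paper's implementation):

```python
def _clamp(x):
    """Clamp to the valid 8-bit range after conversion."""
    return max(0, min(255, int(round(x))))

def yuv_to_rgb(y, u, v):
    """BT.601 YCbCr (8-bit, full range) to RGB.
    U and V are chroma offsets around 128."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return _clamp(r), _clamp(g), _clamp(b)
```

With 4:2:0 input, each U/V pair is shared by a 2×2 block of Y samples, which is why the abstract mentions interpolating the chroma channel before using it for skin-region extraction.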

  • PDF

Approximate-SAD Circuit for Power-efficient H.264 Video Encoding while Maintaining Output Quality and Compression Efficiency

  • Le, Dinh Trang Dang;Nguyen, Thi My Kieu;Chang, Ik Joon;Kim, Jinsang
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.16 no.5
    • /
    • pp.605-614
    • /
    • 2016
  • We develop a novel SAD circuit for power-efficient H.264 encoding, namely a-SAD, in which several of the highest-order MSBs are approximated by a single MSB. Our theoretical estimations show that the proposed design simultaneously improves the performance and power of the SAD circuit, achieving good power efficiency. We find that the optimal number of approximated MSBs is four under the 8-bit YUV 4:2:0 format, the largest number that does not affect video quality or compression rate in our video experiments. In logic simulations, our a-SAD circuit shows at least 9.3% smaller critical-path delay than existing SAD circuits. We compare power dissipation under an iso-throughput scenario, where our a-SAD circuit obtains at least 11.6% power saving compared to other designs. We perform the same simulations under two- and three-stage pipelined architectures, where our a-SAD circuit delivers significant performance (13%) and power (17% and 15.8% for two and three stages, respectively) improvements.
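One plausible reading of the MSB approximation above is that the top k bits of each operand are OR-reduced to a single bit, shortening the adder's critical path. The sketch below models that behavioral idea only; it is an assumption for illustration, not the circuit from the paper:

```python
def approx_msb(x, k=4, width=8):
    """Collapse the top k bits of an unsigned `width`-bit value into a
    single bit (their OR). With k=4, width=8, an 8-bit pixel becomes an
    effective 5-bit value, so the SAD adder tree narrows accordingly."""
    low = width - k
    top = x >> low
    return ((1 if top else 0) << low) | (x & ((1 << low) - 1))

def approx_sad(block_a, block_b, k=4):
    """Sum of absolute differences over approximated pixel values."""
    return sum(abs(approx_msb(a, k) - approx_msb(b, k))
               for a, b in zip(block_a, block_b))
```

SAD tolerates this kind of approximation because motion estimation only needs the *ranking* of candidate blocks, not exact distortion values, which is consistent with the paper's finding that quality and compression rate are unaffected up to four approximated bits.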

Stereoscopic Video Display System Based on H.264/AVC (H.264/AVC 기반의 스테레오 영상 디스플레이 시스템)

  • Kim, Tae-June;Kim, Jee-Hong;Yun, Jung-Hwan;Bae, Byung-Kyu;Kim, Dong-Wook;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.6C
    • /
    • pp.450-458
    • /
    • 2008
  • In this paper, we propose a real-time stereoscopic display system based on H.264/AVC. We first acquire stereo-view images from a stereo webcam using the OpenCV library. The captured images are converted to the YUV 4:2:0 format as a preprocessing step. The input files are encoded by a stereo encoder, which has the proposed estimation structure, at more than 30 fps. The encoded bitstream is decoded by a stereo decoder, reconstructing the left and right images. The reconstructed stereo images are postprocessed by a stereoscopic image synthesis technique to offer users more realistic images with a 3D effect. Experimental results show that the proposed system has better encoding efficiency than a conventional stereo CODEC (coder and decoder) and operates in real time with low complexity, making it suitable for applications in a mobile environment.
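The 4:2:0 preprocessing step above keeps the Y plane at full resolution and downsamples each chroma plane by averaging every 2×2 block. A minimal sketch of that chroma subsampling (illustrative names; dimensions assumed even, as in standard encoder input):

```python
def subsample_420(plane):
    """Average each 2x2 block of a chroma plane (4:4:4 -> 4:2:0).
    `plane` is a list of equal-length rows of sample values."""
    out = []
    for i in range(0, len(plane), 2):
        row = []
        for j in range(0, len(plane[0]), 2):
            s = plane[i][j] + plane[i][j + 1] + plane[i + 1][j] + plane[i + 1][j + 1]
            row.append(s // 4)  # mean of the 2x2 block
        out.append(row)
    return out
```

This quarters the chroma data (each of U and V shrinks to 1/4 size), so a 4:2:0 frame carries half the samples of the original RGB frame before the encoder ever runs.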

A Study on an Improved H.264 Inter mode decision method (H.264 인터모드 결정 방법 개선에 관한 연구)

  • Gong, Jae-Woong;Jung, Jae-Jin;Hwang, Eui-Sung;Kim, Tae-Hyoung;Kim, Doo-Young
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.9 no.4
    • /
    • pp.245-252
    • /
    • 2008
  • In this paper, we propose a new method for improving the H.264 encoding process and its motion estimation part. Our approach reduces encoding time by omitting reference frames in the H.264 mode selection process and by improving the SAD computing process. To evaluate the proposed method, we used H.264 standard test images of QCIF size in YUV 4:2:0 format. Experimental results show that the proposed SAD algorithm 1 improves encoding speed by an average of 4.7% with negligible PSNR degradation, while SAD algorithm 2 improves encoding speed by an average of 9.6% with 0.98 dB of PSNR degradation.

  • PDF