• Title/Summary/Keyword: YUV

Search Results: 60

Sign Language Recognition Using ART2 Algorithm (ART2 알고리즘을 이용한 수화 인식)

  • Kim, Kwang-Baek;Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.5
    • /
    • pp.937-941
    • /
    • 2008
  • People with hearing difficulties rely on sign language as their most important means of communication; through it they can broaden their personal relationships and manage everyday life without inconvenience. However, with the recent growth of Internet communication and the spread of video chatting and video communication services, they suffer from the absence of interpretation between hearing people and people with hearing difficulties. In this paper, we propose a sign language recognition method to address this problem. In the proposed method, the regions of the two hands are extracted by tracking them using RGB, YUV and HSI color information in sign language images acquired from a video camera and by removing noise from the segmented images. The extracted hand regions are then learned and recognized by the ART2 algorithm, which is robust to noise and damage. In experiments with the proposed method on images of finger numbers from 1 to 10, we verified that it recognizes the numbers efficiently.
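The RGB-to-YUV step of the hand-tracking stage above can be sketched as follows; this is a minimal illustration assuming BT.601 analog-YUV coefficients, which the paper does not specify:

```python
def rgb_to_yuv(r, g, b):
    """Convert one 8-bit RGB sample to analog-style YUV
    (BT.601 luma weights; U and V are zero-centered)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return y, u, v
```

The U and V channels separate chroma from brightness, which is why combining them with RGB and HSI cues helps make the hand segmentation less sensitive to lighting.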

An Ambient Light Control System using The Image Difference between Video Frames (인접한 동영상 프레임의 차영상을 이용한 디스플레이 주변 조명효과의 제어)

  • Shin, Su-Chul;Han, Soon-Hun
    • Journal of the Korea Society for Simulation
    • /
    • v.19 no.3
    • /
    • pp.7-16
    • /
    • 2010
  • In this paper, we propose an ambient light control method based on the difference between image frames in a video. The proposed method consists of three steps: 1) extract the dominant color of the current frame; 2) compute the amount of change and the representative color of the changed region using the difference image; 3) create a new representative color. The difference image is created from two images transformed into the YUV color space, and the summed color difference of each pixel is used as the amount of change. The new representative color is created by blending the current color and the changed color in proportion to the amount of change. We compared the variation of the light effect over time with and without the proposed method on the same video; the results show that the new method generates more dynamic light effects.
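The change measurement and the proportional blend in the steps above can be sketched as below; `frame_change` and `blend_ambient` are hypothetical names, and normalising the summed difference to the range 0..1 is an assumption:

```python
def frame_change(prev, curr):
    """Summed per-pixel YUV difference between two frames,
    normalised to 0..1. prev/curr: equal-length lists of (y, u, v)."""
    total = sum(abs(a - b)
                for p, q in zip(prev, curr)
                for a, b in zip(p, q))
    return min(1.0, total / (len(curr) * 3 * 255))

def blend_ambient(current_rgb, changed_rgb, change_ratio):
    """Mix the current dominant color with the changed-region color
    in proportion to the measured amount of change (0..1)."""
    return tuple(round((1 - change_ratio) * c + change_ratio * d)
                 for c, d in zip(current_rgb, changed_rgb))
```

A small change ratio keeps the light close to the frame's dominant color, while a large one pulls it toward the changed region, which is what makes the effect more dynamic.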

A Study of a Color De-interlacing ASIC Chip Design Adopting an Improved Interpolation Algorithm for Better Picture Quality Using a Color Space Converter (ADI 보간 알고리듬을 적용한 Color Space Converter 칩 설계에 관한 연구)

  • 이치우;박노경;진현준;박상봉
    • Proceedings of the IEEK Conference
    • /
    • 2001.06d
    • /
    • pp.199-202
    • /
    • 2001
  • The current TV-OUT format differs considerably from that of HDTV or PC monitors in its encoding technique: a conventional analog TV uses interlaced display, while HDTV and PC monitors use non-interlaced (progressive-scan) display. To encode image signals coming from interlaced-display devices for progressive-scan display, hardware logic implementing scanning and interpolation algorithms is necessary. The ELA (Edge-Based Line Average) algorithm has been widely used because of its good characteristics. In this study, the ADI (Adaptive De-interlacing Interpolation) algorithm is used to improve on ELA, which shows low quality on vertical edges and low efficiency on horizontal edge lines. A de-interlacing ASIC chip that converts interlaced digital YUV to de-interlaced digital RGB is designed, with VHDL used for the chip design.


CNN-based In-loop Filtering Using Block Information (블록정보를 이용한 CNN기반 인 루프 필터)

  • Kim, Yangwoo;Lee, Yung-lyul
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.11a
    • /
    • pp.27-29
    • /
    • 2019
  • VVC (Versatile Video Coding) splits the input YUV picture into CTUs (Coding Tree Units), further partitions each CTU into optimal blocks using a QTBTTT (Quad Tree, Binary Tree, Ternary Tree) structure, predicts each block from spatial and temporal information, and transmits the residual between the predicted block and the original block after transform and quantization. Various pieces of encoding information are sent to the decoder for this purpose, and the decoder uses them to reconstruct the picture in the same order as the encoder. In this paper, we propose a method that additionally exploits this information, which a VVC encoder must transmit anyway, in a deep-learning-based Convolutional Neural Network to improve the compression rate and picture quality.


Robust Color Classifier for Robot Soccer System under Illumination Variations (조명 변화에 강인한 로봇 축구 시스템의 색상 분류기)

  • 이성훈;박진현;전향식;최영규
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.53 no.1
    • /
    • pp.32-39
    • /
    • 2004
  • Color-based vision systems are used to recognize our team's robots, the opponent's robots and the ball in a robot soccer system, but they are very sensitive to color variations brought about by brightness changes. In this paper, a neural network trained with data obtained under various illumination conditions is used to classify colors in a modified YUV color space for the robot soccer vision system. For this, a new method of measuring brightness using a color card is proposed. After the neural network is constructed, a look-up table is generated to replace it and reduce the computation time. Experimental results show that the proposed color classification method is robust under illumination variations.
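The look-up-table replacement described above can be sketched as follows; `classify` stands in for the trained network, and quantising the (u, v) chroma plane with a fixed step is an assumption:

```python
def build_lut(classify, step=8):
    """Precompute class labels over a quantised (u, v) grid so the
    trained classifier need not run for every pixel at match time."""
    lut = {}
    for u in range(0, 256, step):
        for v in range(0, 256, step):
            lut[(u // step, v // step)] = classify(u, v)
    return lut

def lookup(lut, u, v, step=8):
    """Constant-time color classification via the precomputed table."""
    return lut[(u // step, v // step)]
```

The table trades a one-off precomputation and some memory for per-pixel cost that is just an index operation, which is what makes the approach viable at frame rate.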

Facial Feature Tracking from a General USB PC Camera (범용 USB PC 카메라를 이용한 얼굴 특징점의 추적)

  • 양정석;이칠우
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2001.10b
    • /
    • pp.412-414
    • /
    • 2001
  • In this paper, we describe a real-time facial feature tracker that uses only a general USB PC camera, without a frame grabber. The system achieves a rate of 8+ frames/second without any low-level library support and tracks the pupils, nostrils and corners of the lips. The signal from the USB camera is in YUV 4:2:0 vertical format. We convert the signal into the RGB color model to display the image, interpolate the V channel of the signal to extract the facial region, and analyze 2D blob features in the Y channel (the luminance of the image) under geometric restrictions to locate each facial feature within the detected facial region. Our method is simple and intuitive enough to work in real time.
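The 4:2:0-to-RGB conversion mentioned above can be sketched as below, assuming planar storage and BT.601 full-range coefficients (the paper states neither):

```python
def _clamp(t):
    """Round and clip a channel value to the 8-bit range."""
    return max(0, min(255, round(t)))

def yuv420_to_rgb(y, u, v, width, height):
    """Convert planar YUV 4:2:0 (one U/V sample per 2x2 luma block)
    to a list of packed (r, g, b) pixels."""
    rgb = []
    for row in range(height):
        for col in range(width):
            Y = y[row * width + col]
            # chroma planes are subsampled 2x2, hence the halved indices
            c = u[(row // 2) * (width // 2) + col // 2] - 128
            d = v[(row // 2) * (width // 2) + col // 2] - 128
            rgb.append((_clamp(Y + 1.402 * d),
                        _clamp(Y - 0.344136 * c - 0.714136 * d),
                        _clamp(Y + 1.772 * c)))
    return rgb
```

Because four luma samples share each chroma sample, the V plane is a quarter the size of Y, which is why the paper interpolates it before using it for face-region extraction.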


Speed Sign Recognition Using Sequential Cascade AdaBoost Classifier with Color Features

  • Kwon, Oh-Seol
    • Journal of Multimedia Information System
    • /
    • v.6 no.4
    • /
    • pp.185-190
    • /
    • 2019
  • For future autonomous cars, it is necessary to recognize various surrounding environments such as lanes, traffic lights, and vehicles. This paper presents a method of speed sign recognition from a single image in automatic driving assistance systems. The detection step of the proposed method emphasizes color attributes in a modified YUV color space, because the speed sign area is distinguished by color. The proposed method is further improved by extracting the digits from the highlighted circle region. A sequential cascade AdaBoost classifier is then used in the recognition step for real-time processing. Experimental results show that the performance of the proposed algorithm is superior to that of conventional algorithms for various speed signs and real-world conditions.
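A sequential cascade of the kind described can be sketched generically as follows; the stage structure shown is an illustration, not the paper's trained classifier:

```python
def cascade_classify(stages, features):
    """Sequential cascade: each stage is (weak_scorers, threshold).
    A candidate window is rejected as soon as one stage's combined
    score falls below its threshold; only survivors reach later,
    more expensive stages."""
    for scorers, threshold in stages:
        if sum(f(features) for f in scorers) < threshold:
            return False
    return True
```

Early rejection is what gives cascades their real-time behaviour: most non-sign windows are discarded by the cheap first stages, so the full classifier runs only on a few promising regions.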

Development of an Image Capture System Using a CMOS Image Sensor and a RISC Type CPU (RISC 구조 프로세서 및 CMOS이미지 센서를 이용한 영상신호처리 시스템 개발)

  • Yoon, Su-Jeong;Kim, Woo-Sik;Kim, Eung-Seok
    • Proceedings of the KIEE Conference
    • /
    • 2005.07d
    • /
    • pp.2664-2666
    • /
    • 2005
  • In this paper, we develop an on-board image processing system using a CMOS sensor and a RISC-type main processor. The main processor transmits YUV 4:2:2 raw data captured by the CMOS image sensor to another processor (such as a motion controller or a PC) via serial communication (RS-232, SPI, I2C, etc.). The receiving processor performs line and obstacle detection on the image data received from the image processing board developed in this paper.
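Handling the transmitted 4:2:2 stream can be sketched as below, assuming packed YUYV byte order (the paper does not state the ordering):

```python
def unpack_yuyv(data):
    """Unpack packed YUV 4:2:2 (YUYV byte order) into per-pixel
    (Y, U, V) tuples; each U/V sample is shared by two pixels."""
    pixels = []
    for i in range(0, len(data), 4):
        y0, u, y1, v = data[i:i + 4]
        pixels.append((y0, u, v))
        pixels.append((y1, u, v))
    return pixels
```

At 4 bytes per 2 pixels, 4:2:2 halves the chroma bandwidth relative to 4:4:4, which matters when raw frames travel over a slow serial link as described.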


Traffic Sign Recognition Using Color Information and Neural Network with Multi-layer Perceptron (컬러정보와 다층퍼셉트론 신경망을 이용한 교통표지판 인식)

  • Bang, Gul-Won;Kang, Dea-Yook;Kim, Byung-Ki;Cho, Wan-Hyun
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2007.05a
    • /
    • pp.305-308
    • /
    • 2007
  • This paper studies a method for automatically recognizing traffic signs. Existing traffic sign recognition systems take a long time to recognize signs, their recognition rate degrades in noisy environments, and they cannot recognize signs that have been altered. To solve these problems, this paper proposes a traffic sign recognition system that extracts the traffic sign region using color information and applies a multi-layer perceptron neural network algorithm to recognize the extracted image. The proposed method analyzes the colors of traffic signs to extract the sign region from the image, exploiting the characteristics of the YUV, YIQ, and CMYK color spaces derived from the RGB color space. Shape processing then clusters the regions using the geometric characteristics of traffic signs. Recognition is performed by applying the error backpropagation algorithm of a trainable multi-layer perceptron, which has demonstrated excellent performance in the field of pattern recognition.
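The layered structure of a multi-layer perceptron can be illustrated with hand-set weights computing XOR, a function no single-layer perceptron can represent; the paper instead learns its weights with error backpropagation:

```python
def step(t):
    """Hard threshold activation (backprop uses a smooth one instead)."""
    return 1 if t > 0 else 0

def mlp_xor(x1, x2):
    """Two-layer perceptron with fixed weights computing XOR:
    hidden layer forms OR and AND, output combines them."""
    h1 = step(x1 + x2 - 0.5)    # OR unit
    h2 = step(x1 + x2 - 1.5)    # AND unit
    return step(h1 - h2 - 0.5)  # fires for OR but not AND
```

The hidden layer is what makes the class regions non-linearly separable representable, which is also why an MLP can carve out traffic-sign classes that a linear classifier cannot.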


Approximate-SAD Circuit for Power-efficient H.264 Video Encoding under Maintaining Output Quality and Compression Efficiency

  • Le, Dinh Trang Dang;Nguyen, Thi My Kieu;Chang, Ik Joon;Kim, Jinsang
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.16 no.5
    • /
    • pp.605-614
    • /
    • 2016
  • We develop a novel SAD (Sum of Absolute Differences) circuit for power-efficient H.264 encoding, namely a-SAD, in which several of the highest-order MSBs are approximated by a single MSB. Our theoretical estimations show that the proposed design simultaneously improves the performance and power of the SAD circuit, achieving good power efficiency. We find that the optimal number of approximated MSBs is four under the 8-bit YUV 4:2:0 format, the largest number that does not affect video quality or compression rate in our video experiments. In logic simulations, our a-SAD circuit shows at least 9.3% smaller critical-path delay than existing SAD circuits. Comparing power dissipation under an iso-throughput scenario, our a-SAD circuit obtains at least 11.6% power savings over other designs. The same simulations under two- and three-stage pipelined architectures show significant performance (13%) and power (17% and 15.8% for two and three stages, respectively) improvements.
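The MSB approximation can be sketched in software as below; this is an illustrative reading of the idea (collapsing the top bits of each absolute difference into one flag bit), not the paper's exact circuit:

```python
def approx_abs_diff(a, b, approx_bits=4):
    """Approximate |a - b| for 8-bit samples: the top `approx_bits`
    bits of the difference are OR-reduced into the single highest
    bit, shortening the carry chain the hardware must resolve."""
    d = abs(a - b)
    low_mask = (1 << (8 - approx_bits)) - 1
    high = d >> (8 - approx_bits)          # the bits being approximated
    top = (1 << 7) if high else 0          # collapse them to one flag bit
    return top | (d & low_mask)

def approx_sad(block_a, block_b):
    """Block SAD built from the approximate absolute differences."""
    return sum(approx_abs_diff(a, b) for a, b in zip(block_a, block_b))
```

Small differences, which dominate in well-predicted blocks, pass through exactly; only large differences are coarsened, which is consistent with the paper's finding that four approximated MSBs leave quality and compression rate unaffected.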