• Title/Summary/Keyword: Video Enhancement


Performance of H.264 SVC with Base Layer Repetition and HARQ over Wireless Link (무선링크에서 기본 계층의 반복과 HARQ를 적용한 H.264 SVC의 성능)

  • Ahn, Sung-Kyun;Han, Dong-Ha;Hwang, Seung-Hoon
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.8A / pp.689-697 / 2012
  • In this paper, we propose base-layer repetition and HARQ schemes for improving the reliability and performance of H.264 SVC video transmission over a wireless channel, and we investigate their performance. Because the proposed scheme is applied only to the base layer, not to the enhancement layer, it mitigates both transmission delay and the scarcity of wireless resources. Numerical results show that the proposed scheme improves the BER to 1.5×10⁻⁵ and the FER to 1.2×10⁻³ at an SNR of 3.4 dB. The resulting decoded images also confirm that the proposed method improves SVC performance over the wireless link.
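
A minimal sketch of the base-layer repetition idea described above: base-layer bits are sent several times and the receiver soft-combines the copies (Chase-style) before deciding, which is what lowers the base-layer BER at a given SNR. This is an illustration only, not the authors' HARQ simulation; the BPSK/AWGN channel model, the repeat factor and the SNR value are assumptions.

```python
# Illustrative only: repetition of base-layer bits with soft combining over
# an AWGN channel, assuming BPSK and a hypothetical repeat factor and SNR.
import numpy as np

def transmit_bpsk(bits, snr_db, rng):
    """Map bits to +/-1 BPSK symbols and add white Gaussian noise."""
    symbols = 1.0 - 2.0 * bits              # 0 -> +1, 1 -> -1
    snr = 10 ** (snr_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * snr)), bits.shape)
    return symbols + noise

def base_layer_ber(n_bits=100_000, repeat=3, snr_db=3.4, seed=0):
    """BER of the base layer when each bit is repeated and soft-combined."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    # Repetition: send each base-layer bit `repeat` times, average the soft values.
    combined = np.mean(
        [transmit_bpsk(bits, snr_db, rng) for _ in range(repeat)], axis=0)
    decisions = (combined < 0).astype(int)  # negative soft value -> bit 1
    return np.mean(decisions != bits)

if __name__ == "__main__":
    print("BER without repetition:", base_layer_ber(repeat=1))
    print("BER with 3x repetition:", base_layer_ber(repeat=3))
```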

Temporally adaptive and region-selective signaling of applying multiple neural network models

  • Ki, Sehwan;Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.237-240 / 2020
  • A neural network (NN) model fine-tuned for an entire temporal portion of a video does not always yield the best quality (e.g., PSNR) over all regions of each frame in that period. For certain regions (usually homogeneous ones), even simple bicubic interpolation may yield higher PSNR for super-resolution (SR) than the fine-tuned NN model. When multiple NN models are available at the receiver, each trained for a group of images with a specific category of image characteristics, quality enhancement performance can be improved by selectively applying the appropriate NN model to each image region according to the characteristic category for which that model was trained. In this case, it is necessary to signal which NN model is applied to each region. This is particularly advantageous for image restoration and quality enhancement (IRQE) applications on user terminals with limited computing capability.

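A minimal sketch of the signaling idea in the abstract above: the encoder compares, block by block, the candidate restorations (bicubic and the available NN models) against the original frame and signals the index of the winner, so a lightweight receiver only runs the selected model per region. The block size of 64 and the brute-force PSNR comparison are illustrative assumptions.

```python
# Illustrative encoder-side selection of the best restoration per block.
import numpy as np

def block_psnr(ref, test):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def build_model_map(ground_truth, candidates, block=64):
    """candidates: list of full-frame outputs (bicubic, NN model A, NN model B, ...).
    Returns a 2-D map of model indices, one per block, to be signaled."""
    h, w = ground_truth.shape[:2]
    model_map = np.zeros((h // block, w // block), dtype=np.uint8)
    for by in range(h // block):
        for bx in range(w // block):
            ys = slice(by * block, (by + 1) * block)
            xs = slice(bx * block, (bx + 1) * block)
            scores = [block_psnr(ground_truth[ys, xs], c[ys, xs]) for c in candidates]
            model_map[by, bx] = int(np.argmax(scores))   # index to signal
    return model_map
```

At the decoder side the signaled model_map is simply looked up block by block to decide which restoration output to keep for each region.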

Hardware Design of Super Resolution on Human Faces for Improving Face Recognition Performance of Intelligent Video Surveillance Systems (지능형 영상 보안 시스템의 얼굴 인식 성능 향상을 위한 얼굴 영역 초해상도 하드웨어 설계)

  • Kim, Cho-Rong;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD / v.48 no.9 / pp.22-30 / 2011
  • Recently, the rising demand for intelligent video surveillance has led to high-performance face recognition systems. A solution for the low-resolution images acquired by long-distance cameras is required to overcome the distance limits of existing face recognition systems. For that reason, this paper proposes a hardware design of an image-resolution enhancement algorithm for real-time intelligent video surveillance systems. The algorithm synthesizes a high-resolution face image from an input low-resolution image with the help of a large collection of other high-resolution face images, called the training set. When we ran the algorithm on a 32-bit RISC microprocessor, the entire operation took about 25 seconds, which is inappropriate for real-time applications. Based on this result, we implemented a hardware module and verified it using a Xilinx Virtex-4 FPGA and an ARM9-based embedded processor (S3C2440A). The designed hardware completes the whole operation within 33 ms, so it can process 30 frames per second. We expect the proposed hardware to be a solution not only for real-time processing in embedded environments but also for easy integration with existing face recognition systems.
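
A software-only sketch of the example-based super-resolution principle behind the hardware design above: low-resolution patches of the input face are matched against a dictionary built from the high-resolution training set, and the corresponding high-resolution patches are pasted back. The patch size, scale factor and brute-force nearest-neighbour search are assumptions; the paper's contribution is the hardware realization of such an algorithm, not this exact code.

```python
# Illustrative example-based face super-resolution with a training set.
import numpy as np

def downscale(img, s):
    """Simple s x s block-average downscaling."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def super_resolve(lr_face, hr_training_faces, scale=4, lp=8):
    """Reconstruct an HR face patch-by-patch from an HR training set."""
    # Build the dictionary of (LR patch, HR patch) pairs from the training faces.
    lr_dict, hr_dict = [], []
    hp = lp * scale
    for hr in hr_training_faces:
        lr = downscale(hr, scale)
        for y in range(0, lr.shape[0] - lp + 1, lp):
            for x in range(0, lr.shape[1] - lp + 1, lp):
                lr_dict.append(lr[y:y + lp, x:x + lp].ravel())
                hr_dict.append(hr[y * scale:y * scale + hp, x * scale:x * scale + hp])
    lr_dict = np.stack(lr_dict)

    out = np.zeros((lr_face.shape[0] * scale, lr_face.shape[1] * scale))
    for y in range(0, lr_face.shape[0] - lp + 1, lp):
        for x in range(0, lr_face.shape[1] - lp + 1, lp):
            query = lr_face[y:y + lp, x:x + lp].ravel()
            best = np.argmin(np.sum((lr_dict - query) ** 2, axis=1))  # nearest LR patch
            out[y * scale:y * scale + hp, x * scale:x * scale + hp] = hr_dict[best]
    return out
```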

An improvement in FGS coding scheme for high quality scalability (고화질 확장성을 위한 FGS 코딩 구조의 개선)

  • Boo, Hee-Hyung;Kim, Sung-Ho
    • The KIPS Transactions:PartB / v.18B no.5 / pp.249-254 / 2011
  • FGS (fine granularity scalability), the scalability tool in MPEG-4 Part 2, is a scalable video coding scheme that adapts the bit rate to varying network bandwidth and thereby achieves optimal video quality. In this paper, we propose an FGS coding scheme that performs one additional bit-plane coding pass for the residual signal occurring in the enhancement layer of the basic FGS scheme. The experiment evaluates the video-quality scalability of the proposed scheme against the FGS coding scheme of the MPEG-4 verification model (VM-FGS) by comparing the PSNR values of three test video sequences. The results show that with the VM5+ rate-control algorithm, the proposed scheme obtains average Y, U and V PSNR gains of 0.4 dB, 9.4 dB and 9 dB, and with a fixed QP of 17 it obtains average Y, U and V PSNR gains of 4.61 dB, 20.21 dB and 16.56 dB over the existing VM-FGS. These results indicate that the proposed FGS coding scheme offers a wider range of video-quality scalability, from minimum to maximum quality, than VM-FGS.
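
A minimal sketch of the bit-plane coding that the proposed extra enhancement-layer pass relies on: the residual magnitudes are split into bit planes from the most significant downwards, so the stream can be truncated at any plane and still yield a progressively refined residual. The residual values and the number of planes are illustrative.

```python
# Illustrative bit-plane coding of an enhancement-layer residual.
import numpy as np

def to_bit_planes(residual, n_planes=8):
    """Split |residual| into bit planes, most significant plane first."""
    sign = np.sign(residual).astype(np.int8)
    mag = np.abs(residual).astype(np.int32)
    planes = [((mag >> p) & 1).astype(np.uint8) for p in range(n_planes - 1, -1, -1)]
    return sign, planes

def from_bit_planes(sign, planes, n_received):
    """Rebuild the residual from only the first n_received (most significant) planes."""
    n_planes = len(planes)
    mag = np.zeros(planes[0].shape, dtype=np.int32)
    for i in range(n_received):
        mag |= planes[i].astype(np.int32) << (n_planes - 1 - i)
    return sign * mag

if __name__ == "__main__":
    residual = np.array([[37, -5], [0, 120]])
    sign, planes = to_bit_planes(residual)
    # Truncating the stream after 4 planes still yields a coarse approximation.
    print(from_bit_planes(sign, planes, 4))   # coarser
    print(from_bit_planes(sign, planes, 8))   # exact
```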

Bitrate Scalable Video Coder (비트율 계위 비디오 부호기)

  • 임범렬;임성호;민병의;황승구;황재정
    • Journal of Broadcast Engineering / v.2 no.2 / pp.206-215 / 1997
  • We propose an H.263-based video coder with two-layer scalability. The base layer is coded using the default H.263 coding algorithms to achieve highly compressed video data, and the enhancement layer is coded with enhanced coding tools such as HVS-based quantization updating. The enhancement layer contains only coded refinement data for the DCT coefficients of the base layer. Bitstream syntax and semantics for the enhancement layer are designed, and a quantizer design using the HVS is proposed. Data from the two layers are combined after the inverse quantization and inverse DCT processes in the decoder. Experimental results show that the proposed layered coder achieves picture quality comparable to a non-layered coder at bit rates of 30 kbit/s or less. Overhead information for the enhancement-layer bitstream can be limited to less than 0.5 kbits/frame.

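A minimal sketch of the two-layer refinement idea in the abstract above: the base layer carries coarsely quantized DCT coefficients, while the enhancement layer carries only the quantization refinement that the decoder adds back after inverse quantization. The block is assumed to be already in the DCT domain and the step sizes are illustrative; the paper's HVS-based quantizer updating is not modelled here.

```python
# Illustrative two-layer quantization with enhancement-layer refinement.
import numpy as np

def two_layer_encode(dct_block, q_base=32, q_enh=8):
    base_levels = np.round(dct_block / q_base)                # coarse base-layer levels
    base_recon = base_levels * q_base
    refinement = np.round((dct_block - base_recon) / q_enh)   # enhancement-layer data
    return base_levels, refinement

def two_layer_decode(base_levels, refinement=None, q_base=32, q_enh=8):
    recon = base_levels * q_base
    if refinement is not None:                                # enhancement layer received
        recon = recon + refinement * q_enh
    return recon

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    block = rng.normal(0, 50, (8, 8))
    levels, refine = two_layer_encode(block)
    print("base-only MSE:", np.mean((two_layer_decode(levels) - block) ** 2))
    print("two-layer MSE:", np.mean((two_layer_decode(levels, refine) - block) ** 2))
```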

Rain Detection and Removal Algorithm using Motion-Compensated Non-local Means Filter for Video Sequences (동영상을 위한 움직임 보상 기반 Non-Local Means 필터를 이용한 우적 검출 및 제거 알고리즘)

  • Seo, Seung Ji;Song, Byung Cheol
    • Journal of Broadcast Engineering / v.20 no.1 / pp.153-163 / 2015
  • This paper proposes a rain detection and removal algorithm that is robust against camera motion in video sequences. In the detection stage, the proposed algorithm first detects possible rain streaks using intensity and spatial properties, and then selects rain-streak candidates based on a Gaussian distribution model. In the removal stage, non-rain block matching is performed between adjacent frames to find blocks similar to each block containing rain pixels. Once similar blocks are found, the rain region of the block is reconstructed by a non-local means (NLM) filter using these similar neighbors. Experimental results show that the proposed algorithm outperforms previous works in terms of the subjective visual quality of the de-rained video sequences.
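
A minimal sketch of the removal stage described above: for a block flagged as containing rain pixels, similar blocks are searched in the previous frame (matching only on non-rain pixels) and the rain pixels are replaced by a non-local-means weighted average of the candidates. The block size, search range and filter strength are assumptions.

```python
# Illustrative motion-compensated NLM restoration of a rain-affected block.
import numpy as np

def restore_rain_block(cur, prev, y, x, rain_mask, bs=16, search=8, h=10.0):
    """Restore rain pixels of the bs x bs block at (y, x) in `cur` using `prev`."""
    block = cur[y:y + bs, x:x + bs].astype(np.float64)
    clean = ~rain_mask[y:y + bs, x:x + bs]                 # pixels trusted for matching
    acc, wsum = np.zeros_like(block), 0.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bs > prev.shape[0] or xx + bs > prev.shape[1]:
                continue
            cand = prev[yy:yy + bs, xx:xx + bs].astype(np.float64)
            # Match only on non-rain pixels, weight by patch similarity (NLM).
            d2 = np.mean((block[clean] - cand[clean]) ** 2) if clean.any() else 0.0
            w = np.exp(-d2 / (h * h))
            acc += w * cand
            wsum += w
    restored = block.copy()
    restored[~clean] = (acc / wsum)[~clean]                # replace rain pixels only
    return restored
```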

Feature based Pre-processing Method to compensate color mismatching for Multi-view Video (다시점 비디오의 색상 성분 보정을 위한 특징점 기반의 전처리 방법)

  • Park, Sung-Hee;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.12 / pp.2527-2533 / 2011
  • In this paper we propose a new pre-processing algorithm for multi-view video coding that uses feature-based color compensation. Multi-view images show differences between neighboring frames due to illumination and differing camera characteristics. To compensate for this color difference, we first model the characteristics of the cameras from the features of each camera's frames and then correct the color difference. Corresponding features are extracted from each frame with the Harris corner detector, and the model coefficients are estimated with the Gauss-Newton algorithm. The RGB components of the target images are compensated separately with respect to the reference image. Experimental results on many test images show that the proposed algorithm outperforms the histogram-based algorithm by up to 14 % in bit-rate reduction and 0.5 dB to 0.8 dB in PSNR.
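
A minimal sketch of the correction stage: intensities of feature points matched between the reference and target views (e.g., Harris corners) are used to fit a per-channel transfer function by Gauss-Newton. The model y = a·x^g + b used here is a hypothetical stand-in for the paper's camera characteristic model.

```python
# Illustrative Gauss-Newton fit of a per-channel color transfer model.
import numpy as np

def fit_channel_gauss_newton(x, y, iters=20):
    """x, y: matched feature intensities in one channel, scaled to (0, 1]."""
    x = np.clip(np.asarray(x, dtype=np.float64), 1e-6, 1.0)
    y = np.asarray(y, dtype=np.float64)
    a, g, b = 1.0, 1.0, 0.0                      # initial guess
    for _ in range(iters):
        r = y - (a * x ** g + b)                 # residuals
        # Jacobian of a * x**g + b with respect to (a, g, b).
        J = np.column_stack([x ** g, a * x ** g * np.log(x), np.ones_like(x)])
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)   # Gauss-Newton step
        a, g, b = a + delta[0], g + delta[1], b + delta[2]
    return a, g, b

def compensate(target, params):
    """Apply the fitted per-channel model to an HxWx3 image in [0, 1]."""
    out = np.empty_like(target)
    for c, (a, g, b) in enumerate(params):
        out[..., c] = np.clip(a * target[..., c] ** g + b, 0.0, 1.0)
    return out
```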

Studies on Applying Scalable Video Coding Signals to Ka band Satellite HDTV Service (SVC 신호의 Ka대역 HDTV 위성방송서비스 적용에 관한 연구)

  • Yoon, Ki-Chang;Chang, Dae-Ig;Sohn, Won
    • Journal of Broadcast Engineering / v.13 no.6 / pp.905-914 / 2008
  • This paper studies a scheme for applying the MPEG-4 SVC signal to a Ka-band satellite broadcasting system through a joint source-channel coding (JSCC) system, in order to mitigate the rain-fading problem that arises when providing Ka-band HDTV satellite broadcasting service. The Ka-band satellite broadcasting system is based on the VCM mode of DVB-S2, and the SVC signal is considered in three forms: spatial scalability, SNR scalability and temporal scalability. The JSCC system jointly considers all layers of the source and channel coding systems, and allocates bit rate to source coding and channel coding for each layer to obtain the best receiving quality. The layers consist of a base layer and an enhancement layer, and the bit rate of each layer depends on the SVC signal. The applicability of the three SVC signals to the Ka-band satellite broadcasting service is analyzed with respect to rain fading, and a scheme for applying the most suitable SVC configuration to the service is discussed.
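
A heavily simplified sketch of the joint source-channel bit allocation mentioned above: for a total rate budget, every combination of per-layer source rates and channel-code rates is evaluated and the one maximizing the expected received quality is kept. The quality() and loss_prob() models, the candidate code rates and the rate granularity are hypothetical placeholders, not values from the paper.

```python
# Illustrative exhaustive JSCC bit allocation for a base + enhancement layer.
import itertools

def allocate(total_kbps, quality, loss_prob):
    """quality(rb, re): received quality when layers with source rates rb, re decode.
    loss_prob(code_rate): probability a layer protected at `code_rate` is lost."""
    best = None
    code_rates = [1 / 3, 1 / 2, 2 / 3, 3 / 4]
    src_rates = range(200, total_kbps, 200)
    for src_b, src_e, cr_b, cr_e in itertools.product(src_rates, src_rates,
                                                      code_rates, code_rates):
        if src_b / cr_b + src_e / cr_e > total_kbps:   # transmitted = source / code rate
            continue
        p_b, p_e = 1.0 - loss_prob(cr_b), 1.0 - loss_prob(cr_e)
        # The enhancement layer is useful only when the base layer is received.
        expected = (p_b * quality(src_b, 0)
                    + p_b * p_e * (quality(src_b, src_e) - quality(src_b, 0)))
        if best is None or expected > best[0]:
            best = (expected, src_b, src_e, cr_b, cr_e)
    return best
```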

Enhancement on 3 DoF Image Stitching Using Inertia Sensor Data (관성 센서 데이터를 활용한 3 DoF 이미지 스티칭 향상)

  • Kim, Minwoo;Kim, Sang-Kyun
    • Journal of Broadcast Engineering / v.22 no.1 / pp.51-61 / 2017
  • This paper proposes a method to generate panoramic images by combining conventional feature extraction algorithms (e.g., SIFT, SURF, MPEG-7 CDVS) with data sensed by an inertia sensor to enhance the stitching results. Image stitching becomes more challenging when the images are taken by two different mobile phones with no posture calibration. Using the inertia sensor data obtained by each phone, images with different yaw, pitch and roll angles are preprocessed and adjusted before the stitching process. The stitching performance (e.g., feature extraction time, number of inlier points, stitching accuracy) of the conventional feature extraction algorithms is reported, along with the performance with and without the inertia sensor data.
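
A minimal sketch of the sensor-aided preprocessing: the relative 3-DoF rotation between the two phones, derived from their inertia-sensor yaw/pitch/roll readings, is converted into a homography H = K·R·K⁻¹ and used to pre-align one image before SIFT/SURF/CDVS matching. The camera intrinsic matrix K and the Euler-angle convention are assumptions.

```python
# Illustrative pre-alignment of image B to image A using inertia-sensor angles.
import numpy as np
import cv2

def rotation_matrix(yaw, pitch, roll):
    """Z-Y-X Euler angles (radians) to a rotation matrix."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def pre_align(img_b, angles_a, angles_b, K):
    """Warp image B into the orientation of image A before feature matching."""
    R_rel = rotation_matrix(*angles_a) @ rotation_matrix(*angles_b).T
    H = K @ R_rel @ np.linalg.inv(K)
    return cv2.warpPerspective(img_b, H, (img_b.shape[1], img_b.shape[0]))
```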

Image Resolution Enhancement by Improved S&A Method using POCS (POCS 이론을 이용한 개선된 S&A 방법에 의한 영상의 화질 향상)

  • Yoon, Soo-Ah;Lee, Tae-Gyoun;Lee, Sang-Heon;Son, Myoung-Kyu;Kim, Duk-Gyoo;Won, Chul-Ho
    • Journal of Korea Multimedia Society / v.14 no.11 / pp.1392-1400 / 2011
  • In most digital imaging applications, high-resolution images or videos are desired for later image processing and analysis. The image signal obtained from a general imaging system suffers degradation during acquisition, caused by the optics, physical constraints and atmospheric effects. Super-resolution reconstruction, one solution to this problem, is an image reconstruction technique that produces a high-resolution image from several low-resolution frames of a video sequence. In this paper, we propose an improved super-resolution method that combines the Projection onto Convex Sets (POCS) method with Shift & Add (S&A). Images produced by the conventional algorithms are sensitive to noise; to solve this problem, we propose a fusion of S&A and POCS, and we model the optical blur with a Butterworth low-pass filter (BLPF) in the frequency domain. Our method is robust to noise and enhances sharpness. Experimental results show that the proposed method achieves better resolution enhancement than other super-resolution methods.
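
A minimal sketch combining Shift & Add initialization with POCS-style data-consistency refinement, the two ingredients named in the abstract: registered low-resolution frames are first accumulated on the high-resolution grid, then the estimate is repeatedly corrected so that its simulated observations match the measured frames. Integer sub-pixel shifts (in HR-grid pixels) and a box (block-average) blur are assumed; the paper's Butterworth low-pass blur model is not reproduced here.

```python
# Illustrative Shift & Add initialization followed by data-consistency refinement.
import numpy as np

def observe(hr, shift, s):
    """Simulate one LR frame: shift the HR image, then block-average by s."""
    shifted = np.roll(hr, shift, axis=(0, 1))
    h, w = shifted.shape
    return shifted[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def shift_and_add(lr_frames, shifts, s):
    """Initial HR estimate: upsample each LR frame, undo its shift, average."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * s, w * s))
    for lr, shift in zip(lr_frames, shifts):
        up = np.kron(lr, np.ones((s, s)))                 # nearest-neighbour upsample
        acc += np.roll(up, (-shift[0], -shift[1]), axis=(0, 1))
    return acc / len(lr_frames)

def pocs_refine(hr, lr_frames, shifts, s, iters=10, step=1.0):
    """Project the estimate toward consistency with every observed LR frame."""
    for _ in range(iters):
        for lr, shift in zip(lr_frames, shifts):
            residual = lr - observe(hr, shift, s)         # violation of the data constraint
            correction = np.kron(residual, np.ones((s, s))) / (s * s)
            hr = hr + step * np.roll(correction, (-shift[0], -shift[1]), axis=(0, 1))
    return hr
```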