• Title/Summary/Keyword: Video Enhancement

CHROMA FORMAT SCALABLE VIDEO CODING

  • Jia, Jie; Kim, Hae-Kwang; Choi, Hae-Chul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.23-27 / 2009
  • A scalable video coding (SVC) extension to the H.264/AVC standard has been developed by the Joint Video Team (JVT). SVC provides spatial, temporal, and quality scalability with high coding efficiency and low complexity. The JVT is now developing an extension to the first version of SVC that includes color format scalability. This paper proposes removing certain luminance-related header fields and luminance coefficients when an enhancement layer adds only additional color information to its lower layer. Experimental results show an average coding-efficiency gain of 0.6 dB PSNR compared with an approach based on the existing SVC standard. (A brief illustrative sketch of the idea follows.)
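
The core of the proposal can be pictured with a small, purely hypothetical sketch: when an enhancement layer only refines chroma, the luma syntax is simply not re-coded and is inherited from the lower layer. The `Bitstream` class and `code_enhancement_mb` function below are illustrative stand-ins, not structures from the JVT reference software.

```python
from dataclasses import dataclass, field

@dataclass
class Bitstream:
    """Toy container standing in for an entropy-coded bitstream."""
    fields: list = field(default_factory=list)

    def write(self, name, payload):
        self.fields.append((name, payload))

def code_enhancement_mb(bs, luma, chroma, chroma_only_layer):
    # When the enhancement layer only adds color information, luma headers
    # and coefficients are not re-sent; the decoder reuses the lower layer's.
    if not chroma_only_layer:
        bs.write("luma_header", luma["header"])
        bs.write("luma_coefficients", luma["coefficients"])
    bs.write("chroma_coefficients", chroma["coefficients"])
    return bs

# Example: a chroma-only enhancement layer carries no luma syntax at all.
bs = code_enhancement_mb(Bitstream(),
                         {"header": 0x1F, "coefficients": [3, -1]},
                         {"coefficients": [2, 0, -1]},
                         chroma_only_layer=True)
print(bs.fields)   # [('chroma_coefficients', [2, 0, -1])]
```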

Adaptive Keyframe and ROI selection for Real-time Video Stabilization (실시간 영상 안정화를 위한 키프레임과 관심영역 선정)

  • Bae, Ju-Han; Hwang, Young-Bae; Choi, Byung-Ho; Chon, Je-Youl
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2011.11a / pp.288-291 / 2011
  • Video stabilization is an important image enhancement technique widely used in surveillance systems to improve recognition performance. Most previous methods calculate inter-frame homographies to estimate global motion; they are relatively slow and suffer under significant depth variation or multiple moving objects. In this paper, we propose a fast and practical approach to video stabilization that selects the most reliable keyframe as the reference frame for the current frame. We use optical flow to estimate global motion within an adaptively selected region of interest in a static-camera environment. The optimal global motion is found by probabilistic voting in the space of optical flow vectors. Experiments show that our method performs real-time video stabilization, validated by the stabilized images and a marked reduction in the mean color difference between stabilized frames. (A sketch of the per-frame step is given below.)
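
A minimal per-frame sketch of this idea, assuming OpenCV and NumPy and a static camera. Keyframe and ROI selection, which the paper handles adaptively, are taken here as given inputs, and a 2-D histogram stands in for the probabilistic voting; `stabilize_step` is an illustrative helper, not the authors' code.

```python
import cv2
import numpy as np

def stabilize_step(key_gray, cur_gray, roi):
    """Estimate a global translation between the keyframe and the current
    frame from optical flow inside the ROI, then shift the frame back."""
    x, y, w, h = roi
    mask = np.zeros_like(key_gray)
    mask[y:y + h, x:x + w] = 255

    # sparse optical flow from the keyframe to the current frame
    p0 = cv2.goodFeaturesToTrack(key_gray, maxCorners=300,
                                 qualityLevel=0.01, minDistance=7, mask=mask)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(key_gray, cur_gray, p0, None)
    flow = (p1 - p0).reshape(-1, 2)[st.ravel() == 1]

    # voting stand-in: pick the most populated (dx, dy) bin of the flow field
    hist, xedges, yedges = np.histogram2d(flow[:, 0], flow[:, 1], bins=41,
                                          range=[[-20, 20], [-20, 20]])
    ix, iy = np.unravel_index(np.argmax(hist), hist.shape)
    dx = 0.5 * (xedges[ix] + xedges[ix + 1])
    dy = 0.5 * (yedges[iy] + yedges[iy + 1])

    # compensate the estimated global motion
    m = np.float32([[1, 0, -dx], [0, 1, -dy]])
    return cv2.warpAffine(cur_gray, m, (cur_gray.shape[1], cur_gray.shape[0]))
```

Restricting the motion model to a translation inside the ROI is what keeps the per-frame cost low compared with the homography-based methods mentioned in the abstract.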

Accelerating the Retinex Algorithm with CUDA

  • Seo, Hyo-Seok; Kwon, Oh-Young
    • Journal of Information and Communication Convergence Engineering / v.8 no.3 / pp.323-327 / 2010
  • The television market has recently shifted to HD, and the need for research on HD image enhancement has grown rapidly. The retinex algorithm is commonly used to enhance image quality, so we studied how to accelerate it with CUDA on a GPGPU (general-purpose graphics processing unit). The averaging step of the retinex algorithm resembles a pyramidal computation. We parallelize this recursive pyramidal averaging across all layers, map the average data onto a 2D plane, and reduce the computation time dramatically. Sequential C code takes 8948 ms to compute the averages for all layers of a 1024×1024 image, whereas the proposed method takes only about 0.9 ms for the same image. We plan to extend this work to real-time HD video rendering and image enhancement. (A CPU-side sketch of the retinex formulation follows.)
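
As a point of reference for what is being accelerated, a common multi-scale retinex formulation can be written on the CPU as below. This is a generic sketch, not the authors' CUDA kernel; the Gaussian surround here plays the role of their pyramidal average.

```python
import cv2
import numpy as np

def multiscale_retinex(img, sigmas=(15, 80, 250)):
    """Multi-scale retinex: R = mean_s [ log(I) - log(G_sigma_s * I) ]."""
    img = img.astype(np.float32) + 1.0          # avoid log(0)
    log_i = np.log(img)
    out = np.zeros_like(img)
    for sigma in sigmas:
        # surround (local average) at this scale; the paper computes this
        # step as a parallel pyramidal average on the GPU
        surround = cv2.GaussianBlur(img, (0, 0), sigma)
        out += log_i - np.log(surround)
    out /= len(sigmas)
    # stretch back to a displayable 8-bit range
    out = cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX)
    return out.astype(np.uint8)
```

The surround computation is independent per pixel and per scale, which is why it maps well onto a massively parallel GPU.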

A Study on Super Resolution Image Reconstruction for Effective Spatial Identification

  • Park, Jae-Min; Jung, Jae-Seung; Kim, Byung-Guk
    • Spatial Information Research / v.13 no.4 s.35 / pp.345-354 / 2005
  • Super-resolution image reconstruction refers to image processing algorithms that produce a high-resolution (HR) image from several observed low-resolution (LR) images of the same scene. The method has proven useful in many practical cases where multiple frames of the same scene can be obtained, such as satellite imaging, video surveillance, video enhancement and restoration, digital mosaicking, and medical imaging. In this paper, we apply spatial-domain super-resolution reconstruction to video sequences. The test images are adjacently sampled frames from continuous video sequences that overlap at a high rate. We construct the observation model between the HR image and the LR images and apply the Maximum A Posteriori (MAP) reconstruction method, one of the major approaches to super-resolution reconstruction. Based on the MAP method, we reconstruct high-resolution images from low-resolution frames and compare the results with those of other well-known interpolation methods. (A small numerical sketch of MAP reconstruction follows.)
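
A small numerical sketch of MAP super-resolution under simplifying assumptions: integer-pixel shifts (assumed known, e.g. from registration), average-pooling decimation as the blur/downsampling operator, and a quadratic smoothness prior. The paper's actual observation model and prior may differ; `map_sr` is illustrative only.

```python
import numpy as np

def downsample(x, s):
    # average-pooling decimation by factor s (acts as blur + decimation)
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(y, s):
    # adjoint of average pooling: spread each LR value evenly over its block
    return np.kron(y, np.ones((s, s))) / (s * s)

def map_sr(lr_frames, shifts, s, n_iter=200, step=0.5, lam=0.01):
    """Gradient descent on the MAP objective
       sum_k ||D S_k x - y_k||^2 + lam * ||grad x||^2,
    where S_k is an integer-pixel circular shift and D is average pooling.
    lr_frames: list of float arrays of equal shape; shifts: list of (dy, dx)."""
    x = np.kron(lr_frames[0], np.ones((s, s)))            # initial HR guess
    for _ in range(n_iter):
        grad = np.zeros_like(x)
        for y_k, (dy, dx) in zip(lr_frames, shifts):
            xs = np.roll(x, (dy, dx), axis=(0, 1))        # S_k x
            r = downsample(xs, s) - y_k                    # residual on LR grid
            grad += np.roll(upsample(r, s), (-dy, -dx), axis=(0, 1))
        # gradient of the smoothness prior (negative discrete Laplacian)
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        grad -= lam * lap
        x -= step * grad
    return x
```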

Raindrop Removal and Background Information Recovery in Coastal Wave Video Imagery using Generative Adversarial Networks (적대적생성신경망을 이용한 연안 파랑 비디오 영상에서의 빗방울 제거 및 배경 정보 복원)

  • Huh, Dong; Kim, Jaeil; Kim, Jinah
    • Journal of the Korea Computer Graphics Society / v.25 no.5 / pp.1-9 / 2019
  • In this paper, we propose a video enhancement method that uses generative adversarial networks to remove raindrops from coastal wave video imagery distorted during rainfall and to restore the background information in the removed regions. Two models are implemented: the Pix2Pix network, widely used for image-to-image translation, and Attentive GAN, which currently performs well for raindrop removal on single images. The models are trained on a public dataset of paired natural images with and without raindrops, and the trained models are evaluated on raindrop removal and background recovery for coastal wave video imagery distorted by rainwater. To improve performance, we acquired a paired video dataset with and without raindrops at a real coast and applied transfer learning to the pre-trained models with this new dataset. The fine-tuned models outperform the pre-trained models. Performance is evaluated with the peak signal-to-noise ratio and the structural similarity index, and the Pix2Pix network fine-tuned by transfer learning performs best at reconstructing coastal wave video imagery distorted by raindrops. (An evaluation sketch with these two metrics follows.)
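
The frame-wise evaluation described above (PSNR and SSIM against rain-free references) can be reproduced with a few lines of scikit-image. Frame loading, pairing, and the restoration model itself are assumed to be handled elsewhere; `evaluate_restoration` is an illustrative helper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_restoration(restored_frames, clean_frames):
    """Average PSNR and SSIM over paired frames.
    Frames are assumed to be uint8 RGB arrays of identical shape
    (requires scikit-image >= 0.19 for the channel_axis argument)."""
    psnrs, ssims = [], []
    for restored, clean in zip(restored_frames, clean_frames):
        psnrs.append(peak_signal_noise_ratio(clean, restored, data_range=255))
        ssims.append(structural_similarity(clean, restored,
                                           channel_axis=-1, data_range=255))
    return float(np.mean(psnrs)), float(np.mean(ssims))
```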

SVC-based Adaptive Video Streaming over Content-Centric Networking

  • Lee, Junghwan; Hwang, Jaehyun; Choi, Nakjung; Yoo, Chuck
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.10 / pp.2430-2447 / 2013
  • In recent years, HTTP adaptive streaming (HAS) has attracted considerable attention as the state-of-the-art technology for video transport. HAS dynamically adjusts the quality of a video stream according to the network bandwidth and device capability of users. Content-Centric Networking (CCN) has also emerged as a future Internet architecture, a novel communication paradigm that integrates content delivery as a native network primitive. These trends have led to the new research issue of harmonizing HAS with the in-network caching provided by CCN routers. Previous research has shown that the performance of HAS can be improved by using H.264/SVC (scalable video coding) in in-network caching environments. However, the previous study did not address the misbehavior that causes video freezes when the available network bandwidth is overestimated, which is attributable to the high cache hit rate. We therefore propose a new SVC-based adaptation algorithm that uses a drop timer. Our approach stops the downloading of additional enhancement layers that are not cached in the local CCN routers in a timely manner, thereby preventing excessive consumption of the video buffer. We implemented the algorithm in an SVC-HAS client and deployed a testbed running Smooth Streaming, one of the most popular HAS solutions, over CCNx, the reference implementation of CCN. Our experimental results show that the proposed scheme (SLA) avoids video freezes effectively without reducing the high hit rate on the CCN routers or degrading the video quality at the SVC-HAS client. (The drop-timer idea is sketched below.)
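
One reading of the drop-timer idea can be sketched as follows. This is a simplified, synchronous stand-in for the real SVC-HAS client logic, not the paper's implementation; `fetch_layer` is a hypothetical callable that downloads one layer of the current segment and returns True on success.

```python
import time

def fetch_segment(fetch_layer, max_layer, drop_timeout):
    """Download the base layer, then add enhancement layers only while each
    one arrives before the drop timer expires. Layers that are not cached
    nearby tend to miss the deadline and are dropped, so the playout buffer
    is not drained waiting for them. Returns the highest layer received."""
    received = 0
    fetch_layer(0)                       # base layer is always fetched
    for layer in range(1, max_layer + 1):
        start = time.monotonic()
        ok = fetch_layer(layer)          # a real client would do this asynchronously
        if not ok or time.monotonic() - start > drop_timeout:
            break                        # stop adding enhancement layers for this segment
        received = layer
    return received
```

The timer effectively separates layers served from a nearby CCN cache (fast) from those that must cross the bottleneck link (slow), which is what lets the client benefit from the high cache hit rate without overestimating the available bandwidth.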

VLSI Architecture for Video Object Boundary Enhancement (비디오객체의 경계향상을 위한 VLSI 구조)

  • Kim, Jinsang
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.11A / pp.1098-1103 / 2005
  • Edge and contour information is highly valued by the human visual system and is central to our perception and recognition. Therefore, if edge information is integrated while extracting video objects, we can generate object boundaries that are closer to human visual perception for multimedia applications such as interaction between video objects, object-based coding, and representation. Most object extraction methods are difficult to implement in real-time systems because of their iterative and complex arithmetic operations. In this paper, we propose a VLSI architecture that integrates edge information to extract video objects with precisely located boundaries. The proposed architecture can be implemented in hardware easily because it uses only simple arithmetic operations, and it can be applied to real-time object extraction for object-oriented multimedia applications. (A software sketch of the edge-integration step follows.)
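
A software analogue of the edge-integration step, assuming OpenCV and NumPy. The paper describes a hardware architecture; this sketch only illustrates the kind of simple, non-iterative arithmetic such a design relies on, and `object_boundary` is an illustrative helper.

```python
import cv2
import numpy as np

def object_boundary(frame_gray, coarse_mask, band=3, thresh=2500.0):
    """Return boundary pixels of a coarsely segmented object, snapped to
    image edges: Sobel gradient magnitude restricted to a narrow band
    around the coarse mask boundary. coarse_mask is a uint8 0/255 mask."""
    gx = cv2.Sobel(frame_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(frame_gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = (gx * gx + gy * gy) > thresh          # squared gradient magnitude

    # narrow band around the coarse object boundary (dilation XOR erosion)
    k = np.ones((2 * band + 1, 2 * band + 1), np.uint8)
    band_mask = cv2.dilate(coarse_mask, k) ^ cv2.erode(coarse_mask, k)

    # boundary = edge pixels that fall inside the band
    return np.logical_and(band_mask > 0, edges)
```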

Complexity Analysis of HM and JEM Encoder Software

  • Li, Xiang; Wu, Xiangjian; Marzuki, Ismail; Ahn, Yong-Jo; Sim, Donggyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.06a / pp.264-266 / 2016
  • During the 2nd JVET (Joint Video Exploration Team) meeting, up to 22 coding tools targeting Future Video Coding (FVC) were proposed. Although the proposed coding tools bring considerable performance enhancement, the encoding time of the Joint Exploration Model (JEM) software is more than 20 times that of the HEVC reference model (HM) in the All Intra coding mode and 6 times in the Random Access coding mode, while its decoding time is 1.6 times that of HM in All Intra and 7.9 times in Random Access. This paper analyzes the complexity of the JEM software compared with HM.

Soft-α Filter Technology for Image Enhancement of MPEG-2 Video (MPEG-2 비디오의 화질 향상을 위한 소프트-α 필터 기법)

  • 심비연; 박영배
    • Proceedings of the Korean Information Science Society Conference / 2002.04b / pp.109-111 / 2002
  • Visual organs play an important role in human information processing, and when expressed as digital data, visual content accounts for far more information than any other type. For this reason, MPEG-2 is widely used as the compression technology for multimedia. However, noise is inevitably introduced when original video images are encoded into MPEG-2. Accordingly, we propose a soft-α filter to improve the quality of the decoded digital images and to reduce their noise. We also propose a method that combines a vertical/horizontal filter with the soft-α filter for MPEG-2 video. This combination has two benefits. First, it reduces the processing time of the horizontal and vertical filtering stage, which offsets the time required by the soft-α filter. Second, the horizontal and vertical filter simplifies the colors, so the soft-α filter yields clearer, noise-free images. (A generic stand-in for the filter is sketched below.)
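
The paper's soft-α filter is not defined in this abstract, so as a stand-in the sketch below combines cheap separable horizontal/vertical smoothing with a standard α-trimmed mean filter, mirroring the two-stage combination described above. This is an assumption-laden illustration (NumPy and SciPy assumed), not the authors' filter.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def alpha_trimmed_mean(img, size=3, alpha=0.25):
    """Standard alpha-trimmed mean filter, used here only as a stand-in for
    the paper's soft-alpha filter: in each size x size window the lowest and
    highest alpha fraction of samples are discarded and the rest averaged."""
    pad = size // 2
    padded = np.pad(img.astype(np.float32), pad, mode='edge')
    h, w = img.shape
    # gather all window samples: shape (size*size, h, w)
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(size) for j in range(size)])
    windows.sort(axis=0)
    trim = int(alpha * windows.shape[0])
    return windows[trim:windows.shape[0] - trim].mean(axis=0)

def combined_filter(img):
    # cheap separable horizontal/vertical smoothing first, then the more
    # expensive trimmed-mean stage on the pre-smoothed grayscale image
    pre = uniform_filter1d(img.astype(np.float32), 3, axis=0)
    pre = uniform_filter1d(pre, 3, axis=1)
    return alpha_trimmed_mean(pre)
```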

Analysis for coding modes and data in the enhancement layer of Scalable HEVC (Scaleable HEVC에서 향상계층의 제한적 부호화에 따른 통계적 특성 분석)

  • Jeong, Yeon-Kyeong; Kang, Jung-Won; Lee, Ha-Hyun; Lee, Jin-ho; Han, Jong-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.11a / pp.97-100 / 2014
  • Scalable Video Coding (SVC) [1] is used as a video compression method that can adapt in real time to the terminal capability, network conditions, and display resolution of users consuming a wide variety of video content. Recently, JCT-VC (Joint Collaborative Team on Video Coding) has been standardizing Scalable HEVC (SHVC) [3], which is based on HEVC (High Efficiency Video Coding), a video compression technology targeting ultra-high-resolution content. SHVC provides spatial, temporal, and quality (SNR) scalability and has higher complexity than HEVC version 1. In this paper, prior to developing an algorithm that improves the encoding speed of SHVC spatial scalability, we perform a statistical analysis based on constrained coding experiments.
