• Title/Summary/Keyword: Perceptual quality-based video coding


Perceptual Quality-based Video Coding with Foveated Contrast Sensitivity (Foveated Contrast Sensitivity를 이용한 인지품질 기반 비디오 코딩)

  • Ryu, Jiwoo;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.19 no.4
    • /
    • pp.468-477
    • /
    • 2014
  • This paper proposes a novel perceptual quality-based (PQ-based) video coding method with foveated contrast sensitivity (FCS). Conventional PQ-based coding methods using contrast sensitivity minimize the perceptual quality loss of compressed video by exploiting a property of the human visual system (HVS): its sensitivity varies with the spatial frequency of visual stimuli. PQ-based coding with foveated masking (FM), in contrast, exploits the difference in HVS sensitivity between central and peripheral vision. In this study, a novel FCS model is proposed that combines the conventional DCT-based JND model with the FM model. A psychophysical study was conducted to construct the proposed FCS model, which was then applied to a PQ-based video coding algorithm implemented on the HM10.0 reference software. Experimental results show that the proposed method reduces bitrate by 10% on average without perceptual quality loss.
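The interplay of spatial-frequency sensitivity and eccentricity that the abstract describes can be illustrated with the widely used Geisler-Perry contrast-threshold formula. This is a minimal sketch with commonly cited textbook constants, not the FCS model fitted in the paper:

```python
import math

def foveation_weight(freq_cpd, ecc_deg, alpha=0.106, e2=2.3, ct0=1.0 / 64.0):
    """Geisler-Perry-style contrast threshold: the threshold grows with
    both spatial frequency (cycles/degree) and retinal eccentricity
    (degrees from the gaze point), so sensitivity (its reciprocal) peaks
    at the fovea and at low frequencies."""
    ct = ct0 * math.exp(alpha * freq_cpd * (ecc_deg + e2) / e2)
    ct = min(max(ct, ct0), 1.0)  # thresholds are contrasts in [ct0, 1]
    return 1.0 / ct

# A JND-scaling coder would divide per-block JND thresholds by this
# weight: blocks far from the gaze point tolerate more distortion.
w_fovea = foveation_weight(8.0, 0.0)
w_periphery = foveation_weight(8.0, 10.0)
```

For the same spatial frequency, the weight at the fovea is always at least as large as in the periphery, which is exactly the asymmetry an FCS-based coder exploits.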

Analysis of the JND-Suppression Effect in Quantization Perspective for HEVC-based Perceptual Video Coding

  • Kim, Jaeil;Kim, Munchurl
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.4 no.1
    • /
    • pp.22-27
    • /
    • 2015
  • Transform-domain JND (Just Noticeable Difference)-based suppression for PVC (Perceptual Video Coding) is often performed in the quantization process to effectively remove perceptual redundancy. This study examined the JND-suppression effect on quantized transform coefficients in HEVC (High Efficiency Video Coding). To reveal the JND-suppression effect in quantization, the properties of the floor function were used to model the quantized coefficients, and a JND-adjustment process in an HEVC-compliant PVC scheme was used to tune the JND values according to the analyzed suppression effect. In the experimental results, the bitrate reduction decreases slightly, but PSNR and perceptual quality improve significantly when the proposed JND-adjustment process is applied.
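The general transform-domain JND-suppression idea the abstract analyzes can be sketched as follows: perceptually invisible energy, up to the JND threshold, is subtracted from the coefficient magnitude before the floor-based quantizer. This is a simplified scalar quantizer, omitting HEVC's actual scaling and rounding offsets:

```python
import math

def quantize(coeff, qstep, offset=0.5):
    """Plain scalar quantizer: level = floor(|c| / Qstep + offset), signed."""
    sign = -1 if coeff < 0 else 1
    return sign * math.floor(abs(coeff) / qstep + offset)

def quantize_jnd(coeff, qstep, jnd, offset=0.5):
    """JND-suppressed quantizer: the perceptually invisible portion of the
    coefficient (up to the JND threshold) is removed before quantization,
    which can lower the level and hence the bitrate."""
    sign = -1 if coeff < 0 else 1
    mag = max(abs(coeff) - jnd, 0.0)
    return sign * math.floor(mag / qstep + offset)
```

Because the floor function is applied after the subtraction, suppression sometimes changes the quantized level and sometimes does not, which is precisely the quantization-perspective interaction the paper studies.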

Coding Unit-level Multi-loop Encoding Method based on JND for Perceptual Coding (JND 모델을 사용한 코딩 유닛 레벨 멀티-루프 인코딩 기반의 비디오 압축 방법)

  • Lim, Woong;Sim, Donggyu
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.5
    • /
    • pp.147-154
    • /
    • 2015
  • In this paper, we employ a model that defines visual sensitivity according to background luminance, the so-called JND (Just Noticeable Difference), and apply it to video coding. The proposed method finds the maximum feasible quantization parameter for the current unit based on the JND threshold and reduces the bitrate while preserving similar perceptual quality. It selects a higher quantization parameter, and thus reduces the bitrate, when the signal reconstructed with the higher parameter stays within the allowance defined by the JND threshold, i.e., when it has perceptual quality similar to the signal coded with the initial quantization parameter. The proposed algorithm was implemented on HM16.0, the reference software of the latest video coding standard HEVC (High Efficiency Video Coding), and its coding performance was evaluated. Compared with HM16.0, the proposed algorithm achieved a maximum of 20.21% and an average of 6.18% bitrate reduction with similar perceptual quality.
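The multi-loop search described above can be sketched as a simple QP sweep. Here `distortion_fn(qp)` is a hypothetical stand-in for actually encoding and reconstructing the coding unit at that QP, which is what the multi-loop encoder does:

```python
def select_qp(qp_init, jnd_threshold, distortion_fn, qp_max=51):
    """CU-level multi-loop idea, simplified: starting from the initial QP,
    keep raising the QP while the unit's reconstruction distortion stays
    within the JND allowance, and return the highest such QP."""
    best_qp = qp_init
    for qp in range(qp_init + 1, qp_max + 1):
        if distortion_fn(qp) <= jnd_threshold:
            best_qp = qp  # still perceptually indistinguishable
        else:
            break  # distortion became perceptible; stop searching
    return best_qp
```

In a real encoder each `distortion_fn` evaluation is a full encode of the unit, which is why the method is called multi-loop and why it trades encoder complexity for bitrate.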

No-Referenced Video-Quality Assessment for H.264 SVC with Packet Loss (패킷 손실시 H.264 SVC의 무기준법 영상 화질 평가 방법)

  • Kim, Hyun-Tae;Kim, Yo-Han;Shin, Ji-Tae;Won, Seok-Ho
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.11C
    • /
    • pp.655-661
    • /
    • 2011
  • Transmission issues for the scalable video coding extension of H.264/AVC (H.264 SVC) have been widely studied. In this paper, we propose an objective no-reference video-quality assessment metric for H.264 SVC that uses scalability information. The proposed metric estimates perceptual video quality under error conditions by considering motion vectors, error-propagation patterns in the hierarchical prediction structure, quantization parameters, and the number of frames damaged by packet loss. The metric reflects human perceptual quality, and we evaluate its performance through its correlation with the differential mean opinion score (DMOS) as a subjective quality measure.
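A no-reference metric of this kind typically maps decoder-observable features to a predicted subjective score. The sketch below uses a hypothetical linear combination with placeholder weights, not the paper's fitted model, just to show the shape of such a predictor:

```python
def nr_quality(mv_mag, err_frames, qp, n_damaged,
               weights=(-0.02, -0.5, -0.03, -0.4), bias=5.0):
    """No-reference score sketch on a MOS-like 1-5 scale (weights and bias
    are illustrative placeholders): each decoder-observable factor --
    average motion-vector magnitude, error-propagation extent, QP, and
    damaged-frame count -- lowers the predicted quality."""
    feats = (mv_mag, err_frames, qp, n_damaged)
    score = bias + sum(w * f for w, f in zip(weights, feats))
    return max(1.0, min(5.0, score))  # clamp to the MOS range
```

In practice the weights would be fitted so that the predicted scores correlate with DMOS over a subjective test set, which is exactly the evaluation the abstract describes.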

Reliability-Based Deblocking Filter for Wyner-Ziv Video Coding

  • Dinh, Khanh Quoc;Shim, Hiuk Jae;Jeon, Byeungwoo
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.5 no.2
    • /
    • pp.129-142
    • /
    • 2016
  • In Wyner-Ziv coding, video signals are reconstructed by correcting side information generated by block-based motion estimation/compensation at the decoder. The correction is not always accurate due to the limited number of parity bits and early stopping of low-density parity check accumulate (LDPCA) decoding in distributed video coding, or due to the limited number of measurements in distributed compressive video sensing. The blocking artifacts caused by block-based processing are usually conspicuous in smooth areas and degrade the perceptual quality of the reconstructed video. Conventional deblocking filters try to remove the artifacts by treating both sides of the block boundary equally; however, coding errors generated by block-based processing are not necessarily the same on both sides of the block boundaries. Such a block-wise difference is exploited in this paper to improve deblocking for Wyner-Ziv frameworks by designing a filter where the deblocking strength at each block can be non-identical, depending on the reliability of the reconstructed pixels. Test results show that the proposed filter not only improves subjective quality by reducing the coding artifacts considerably, but also gains rate distortion performance.
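The key idea above, filtering the two sides of a block boundary unequally, can be sketched with per-side reliability weights. The function names and the clipping rule are illustrative, not the paper's actual filter design:

```python
def deblock_boundary(left, right, rel_left, rel_right, strength=4.0):
    """Reliability-weighted deblocking sketch: each side of the block
    boundary is pulled toward the boundary average in proportion to how
    UNreliable its reconstruction is (rel in [0, 1]), instead of filtering
    both sides equally as a conventional deblocking filter does."""
    avg = (left + right) / 2.0
    d_left = (avg - left) * (1.0 - rel_left)
    d_right = (avg - right) * (1.0 - rel_right)
    # Clip the correction so well-reconstructed detail is not destroyed.
    clip = lambda d: max(-strength, min(strength, d))
    return left + clip(d_left), right + clip(d_right)
```

A fully reliable side (rel = 1) is left untouched, while an unreliable side is smoothed toward the boundary, matching the non-identical filter strength the abstract describes.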

Visual-Attention-Aware Progressive RoI Trick Mode Streaming in Interactive Panoramic Video Service

  • Seok, Joo Myoung;Lee, Yonghun
    • ETRI Journal
    • /
    • v.36 no.2
    • /
    • pp.253-263
    • /
    • 2014
  • In the near future, traditional narrow, fixed-viewpoint video services will be replaced by high-quality panoramic video services. This paper proposes a visual-attention-aware progressive region-of-interest (RoI) trick mode streaming service (VA-PRTS) that prioritizes video data for transmission according to visual attention and transmits the prioritized data progressively. VA-PRTS enables the receiver to shorten the time to display without degrading perceptual quality. For the proposed VA-PRTS, this paper defines a cutoff visual-attention metric algorithm that determines the quality of each encoded video slice based on its visual attention, and a progressive streaming method based on the priority of RoI video data. Compared with conventional methods, VA-PRTS increases bitrate savings by over 57% and decreases interactive delay by over 66% while maintaining perceptual video quality. The experimental results show that VA-PRTS improves the quality of the viewer experience for interactive panoramic video streaming services, and the development results show that it is highly feasible in real-field deployments.
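The cutoff-plus-priority scheme above can be sketched in a few lines. The slice fields and the single cutoff value are hypothetical simplifications of the paper's metric:

```python
def prioritize_slices(slices, cutoff=0.2):
    """VA-PRTS-style prioritization sketch: slices whose visual-attention
    score falls below the cutoff are assigned a coarser quality, and
    transmission proceeds in descending attention order so the RoI is
    displayable before the rest of the panorama arrives."""
    for s in slices:
        s["quality"] = "high" if s["va"] >= cutoff else "low"
    return sorted(slices, key=lambda s: s["va"], reverse=True)

send_order = prioritize_slices([{"id": 0, "va": 0.1}, {"id": 1, "va": 0.9}])
```

Because the high-attention slices lead the transmission order, the receiver can start display early, which is the source of the interactive-delay reduction the abstract reports.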

Joint Spatial-Temporal Quality Improvement Scheme for H.264 Low Bit Rate Video Coding via Adaptive Frameskip

  • Cui, Ziguan;Gan, Zongliang;Zhu, Xiuchang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.1
    • /
    • pp.426-445
    • /
    • 2012
  • Conventional rate control (RC) schemes for H.264 video coding usually regulate the output bit rate to match the channel bandwidth by adjusting the quantization parameter (QP) at a fixed full frame rate; passive frame skipping to avoid buffer overflow then occurs when scene changes or high motion appear in a sequence, especially at low bit rates, which degrades spatial-temporal quality and causes a jerky effect. In this paper, an active content-adaptive frame-skipping scheme is proposed instead, which skips subjectively trivial frames according to the structural similarity (SSIM) between the original frame and the frame interpolated via a motion vector (MV) copy scheme. The bits saved from skipped frames are allocated to the coded key frames to enhance their spatial quality, and the skipped frames are recovered well from adjacent key frames at the decoder side using the same MV copy scheme, maintaining a constant frame rate. Experimental results show that the proposed active SSIM-based frame-skip scheme achieves better and more consistent spatial-temporal quality in both the objective (PSNR) and subjective (SSIM) sense, with low complexity, compared with the classic fixed-frame-rate control method JVT-G012 and a prior objective-metric-based frame-skip method.
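The skip decision above can be sketched directly: compute SSIM between the original frame and its MV-copy interpolation, and skip only when the interpolation is already good enough. For brevity this uses a single-window SSIM over whole flattened frames; real implementations use local windows, but the decision logic is the same:

```python
import statistics

def global_ssim(x, y, c1=6.5025, c2=58.5225):
    """Single-window SSIM between two frames given as flat pixel lists.
    c1 and c2 are the standard stabilizing constants for 8-bit data."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.pvariance(x), statistics.pvariance(y)
    cov = statistics.fmean((a - mx) * (b - my) for a, b in zip(x, y))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def should_skip(frame, mv_interpolated, thresh=0.95):
    """Active frame skip: drop the frame only when MV-copy interpolation
    already reproduces it well, i.e. the frame is subjectively trivial;
    the saved bits then go to the coded key frames."""
    return global_ssim(frame, mv_interpolated) >= thresh
```

This is what makes the scheme active rather than passive: frames are dropped because they are cheap to recover, not because the buffer is about to overflow.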

Video Coding Method Using Visual Perception Model based on Motion Analysis (움직임 분석 기반의 시각인지 모델을 이용한 비디오 코딩 방법)

  • Oh, Hyung-Suk;Kim, Won-Ha
    • Journal of Broadcast Engineering
    • /
    • v.17 no.2
    • /
    • pp.223-236
    • /
    • 2012
  • We develop a video processing method that enables more advanced human-perception-oriented video coding. The proposed method jointly reflects rate-distortion-based optimization and human visual perception, which is affected by visual saliency, the limited spatio-temporal resolution of vision, and regional motion history. To capture these perceptual effects, we devise an online moving-pattern classifier based on the Hedge algorithm, and embed existing visual saliency into the proposed moving patterns to establish a human visual perception model. To realize this model, we extend the conventional foveation filtering method. Whereas a conventional foveation filter only smooths less-stimulating video signals, the developed filter can locally smooth and enhance signals according to human visual perception without causing artifacts. Thanks to signal enhancement, the developed foveation filter transfers the bandwidth saved on smoothed signals to the enhanced signals more efficiently. Performance evaluation verifies that the proposed method preserves overall video quality while improving perceptual quality by 12% to 44%.
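The smooth-and-enhance behavior can be sketched as a signed unsharp-mask: a per-pixel perceptual weight below zero smooths toward the local mean (as a classic foveation filter does), while a weight above zero sharpens. The 1-D 3-tap local mean here is an illustrative simplification, not the paper's filter:

```python
def perceptual_filter(pixels, weights):
    """Foveation-style filter sketch over a 1-D signal: each output pixel
    is p + w * (p - local_mean), so w < 0 smooths and w > 0 enhances,
    mirroring the smooth-and-enhance behavior described in the abstract."""
    out = []
    n = len(pixels)
    for i, (p, w) in enumerate(zip(pixels, weights)):
        # 3-tap local mean with edge replication.
        local = (pixels[max(i - 1, 0)] + p + pixels[min(i + 1, n - 1)]) / 3.0
        out.append(p + w * (p - local))  # (p - local) is the local detail
    return out
```

Smoothing lowers the signal's high-frequency energy (saving bits), and those savings can then be spent on the enhanced regions, which is the bandwidth-transfer effect the abstract describes.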

A Perception-based Color Correction Method for Multi-view Images

  • Shao, Feng;Jiang, Gangyi;Yu, Mei;Peng, Zongju
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.2
    • /
    • pp.390-407
    • /
    • 2011
  • Three-dimensional (3D) video technologies are becoming increasingly popular, as they can provide users with high-quality, immersive experiences. However, color inconsistency between camera views is an urgent problem in multi-view imaging. In this paper, a perception-based color correction method for multi-view images is proposed that incorporates human visual sensitivity (VS) and visual attention (VA) models into the correction process. First, the VS property is used to reduce computational complexity by excluding visually insensitive regions. Second, the VA property is used to improve the perceptual quality of local VA regions by performing VA-dependent color correction. Experimental results show that, compared with other color correction methods, the proposed method greatly improves the perceptual quality of local VA regions, reduces computational complexity, and achieves higher coding performance.
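The perception-guided selectivity can be sketched with a standard global mean/standard-deviation color transfer toward the reference view, applied only where a visual-sensitivity mask is set. The transfer formula is the common Reinhard-style one, not the paper's actual correction or its VS/VA models:

```python
import statistics

def color_transfer(view, ref, mask):
    """Perception-guided correction sketch over one channel: compute a
    global mean/std transfer toward the reference view, but apply it only
    at pixels flagged as visually sensitive, mimicking the complexity
    reduction from skipping insensitive regions."""
    mv, mr = statistics.fmean(view), statistics.fmean(ref)
    sv = statistics.pstdev(view) or 1.0  # guard against a flat channel
    sr = statistics.pstdev(ref)
    return [
        (p - mv) * sr / sv + mr if m else p  # correct sensitive pixels only
        for p, m in zip(view, mask)
    ]
```

Pixels outside the mask pass through unchanged, so the cost of the correction scales with the sensitive area rather than the full image.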

Load Balancing Based on Transform Unit Partition Information for High Efficiency Video Coding Deblocking Filter

  • Ryu, Hochan;Park, Seanae;Ryu, Eun-Kyung;Sim, Donggyu
    • ETRI Journal
    • /
    • v.39 no.3
    • /
    • pp.301-309
    • /
    • 2017
  • In this paper, we propose a parallelization method for the High Efficiency Video Coding (HEVC) deblocking filter that uses transform unit (TU) split information. HEVC employs a deblocking filter to improve perceptual quality and coding efficiency, and the filter was designed for data-level parallelism. We demonstrate a method for distributing equal workloads to all cores or threads by predicting the deblocking-filter complexity from the coding unit depth and TU split information. On average, the proposed parallelization achieves a speed-up 2% better than that of a uniformly distributed parallel deblocking filter and 6% better than that of coding-tree-unit row-level parallelism. In addition, the proposed method achieves a speed-up factor of up to 3.1 relative to the run-time of the sequential HEVC test model (HM) 12.0 deblocking filter.
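The load-balancing step can be sketched as greedy assignment of per-CTU cost estimates to the least-loaded thread. Here the cost is an abstract number standing in for the CU-depth/TU-split-derived complexity estimate the paper uses:

```python
def assign_ctus(ctu_costs, num_threads):
    """Greedy load balancing sketch: sort CTUs by estimated deblocking
    cost (largest first, which tightens the greedy bound) and give each
    one to the currently least-loaded thread."""
    loads = [0.0] * num_threads
    buckets = [[] for _ in range(num_threads)]
    for idx, cost in sorted(enumerate(ctu_costs), key=lambda t: -t[1]):
        tid = loads.index(min(loads))  # least-loaded thread so far
        loads[tid] += cost
        buckets[tid].append(idx)
    return buckets, loads
```

Compared with splitting the frame into equal-area regions or CTU rows, this cost-aware assignment keeps all threads busy for roughly the same time, which is where the reported speed-up advantage comes from.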