• Title/Summary/Keyword: coding efficiency

Entropy Coding of VVC (VVC의 엔트로피 코딩)

  • Kim, Dae-Yeon
    • Broadcasting and Media Magazine / v.24 no.4 / pp.102-108 / 2019
  • VVC (Versatile Video Coding) builds on CABAC (Context-based Adaptive Binary Arithmetic Coding), the entropy coding technology used in H.264/AVC (Advanced Video Coding) and H.265/HEVC (High Efficiency Video Coding), and adopts a variety of techniques that improve compression ratio and throughput; the Committee Draft (CD) has now been completed and the reference model VTM 6.0 has been officially released. This paper describes the entropy coding techniques adopted in VVC Draft 6, explains how they differ from the entropy coding of H.265/HEVC, and analyzes the compression performance and complexity of the entropy coding.
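
The common core of the CABAC-family entropy coders discussed above is a binary arithmetic coder driven by adaptive context models. The sketch below is a deliberately simplified, floating-point illustration of that principle, not the normative HEVC/VVC CABAC (which uses table-driven range subdivision, renormalization to a bitstream and, in VVC, a two-rate probability estimator); the class and function names are illustrative only.

    # Simplified illustration of context-adaptive binary arithmetic coding.
    class Context:
        def __init__(self, p1=0.5, rate=0.05):
            self.p1 = p1        # estimated probability that the next bin is 1
            self.rate = rate    # adaptation speed

        def update(self, bin_val):
            target = 1.0 if bin_val else 0.0
            self.p1 += self.rate * (target - self.p1)

    def encode(bins, ctx):
        """Return the arithmetic-coding interval [low, high) for a bin string."""
        low, high = 0.0, 1.0
        for b in bins:
            split = low + (high - low) * (1.0 - ctx.p1)  # boundary between 0- and 1-subintervals
            if b:
                low = split      # bin == 1 takes the upper sub-interval
            else:
                high = split     # bin == 0 takes the lower sub-interval
            ctx.update(b)        # adapt the context to the observed statistics
        return low, high

    ctx = Context()
    low, high = encode([1, 1, 0, 1, 1, 1, 0, 1], ctx)
    # Once the context adapts to a skewed bin distribution, the interval shrinks
    # more slowly than with a fixed p = 0.5, i.e. fewer bits are needed.
    print(low, high)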

Performance Analysis of 3D-HEVC Video Coding (3D-HEVC 비디오 부호화 성능 분석)

  • Park, Daemin; Choi, Haechul
    • Journal of Broadcast Engineering / v.19 no.5 / pp.713-725 / 2014
  • Multi-view and 3D video technologies for next-generation video services are being widely studied. These technologies can give users a realistic experience by supporting various views. Because the acquisition and transmission of a large number of views are costly, the main challenges for multi-view and 3D video include view synthesis, video coding, and depth coding. Recently, JCT-3V (Joint Collaborative Team on 3D Video Coding Extension Development) has been developing a new standard for multi-view and 3D video. In this paper, the major tools adopted in this standard are introduced and evaluated in terms of coding efficiency and complexity. This performance analysis should be helpful for the development of fast 3D video encoders as well as new 3D video coding algorithms.
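
Coding-efficiency comparisons of this kind are usually reported as Bjøntegaard delta rate (BD-rate), the average bitrate difference between two codecs at equal quality. The sketch below implements the standard cubic-fit BD-rate computation; the rate/PSNR points in the example are made up for illustration and are not results from the paper.

    # Hedged sketch of the BD-rate metric (illustrative data only).
    import numpy as np

    def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
        """Average bitrate difference (%) of 'test' vs 'anchor' at equal PSNR."""
        lr_a, lr_t = np.log10(rate_anchor), np.log10(rate_test)
        # Fit cubic polynomials: log-rate as a function of PSNR.
        pa = np.polyfit(psnr_anchor, lr_a, 3)
        pt = np.polyfit(psnr_test, lr_t, 3)
        lo = max(min(psnr_anchor), min(psnr_test))
        hi = min(max(psnr_anchor), max(psnr_test))
        # Integrate both fits over the common PSNR range.
        ia = np.polyval(np.polyint(pa), hi) - np.polyval(np.polyint(pa), lo)
        it = np.polyval(np.polyint(pt), hi) - np.polyval(np.polyint(pt), lo)
        avg_diff = (it - ia) / (hi - lo)
        return (10 ** avg_diff - 1) * 100   # negative = bitrate saving

    # Illustrative RD points (kbps, dB); not taken from the paper.
    print(bd_rate([1000, 1600, 2500, 4000], [34.0, 36.0, 38.0, 40.0],
                  [900, 1450, 2300, 3700], [34.1, 36.1, 38.0, 40.1]))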

Efficient Motion Information Representation in Splitting Region of HEVC (HEVC의 분할 영역에서 효율적인 움직임 정보 표현)

  • Lee, Dong-Shik; Kim, Young-Mo
    • Journal of Korea Multimedia Society / v.15 no.4 / pp.485-491 / 2012
  • This paper proposes a quadtree-based 'Coding Unit Tree' that efficiently represents the splitting information of a Coding Unit (CU) in HEVC together with its motion vectors. The new international video coding standard, High Efficiency Video Coding (HEVC), adopts various techniques and new unit concepts: the CU, the Prediction Unit (PU), and the Transform Unit (TU). The basic coding unit, the CU, is larger than the macroblock of H.264/AVC and is split recursively in a hierarchical, image-adaptive quadtree. However, when a CU contains complex motion, more signaling bits for the motion information must be transmitted. This structure provides flexibility and a basis for optimization, but it incurs overhead for the splitting information. This paper analyzes those signals and proposes a new algorithm that removes this redundancy. The proposed algorithm uses a type code, a dominant value, and residue values at each node of the quadtree to remove the additional bits: the type code represents the structure of the tree, and the two kinds of values represent the node values. The results show that the proposed algorithm achieves a 13.6% bit-rate reduction over HM-1.0.
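
One way to picture the type-code/dominant-value representation described above is the small sketch below: a node signals one dominant value, a bit pattern marking which children deviate from it, and residues only for those children. The packing rule and names here are illustrative assumptions, not the paper's exact syntax or binarization.

    # Hedged sketch of a type-code / dominant-value node packing.
    def pack_node(child_values):
        """child_values: 4 motion-related values of the sub-blocks (hypothetical)."""
        dominant = max(set(child_values), key=child_values.count)
        type_code = [1 if v != dominant else 0 for v in child_values]  # deviation pattern
        residues = [v - dominant for v in child_values if v != dominant]
        return type_code, dominant, residues

    def unpack_node(type_code, dominant, residues):
        out, it = [], iter(residues)
        for flag in type_code:
            out.append(dominant + next(it) if flag else dominant)
        return out

    vals = [5, 5, 7, 5]                     # e.g. one motion-vector component per sub-CU
    packed = pack_node(vals)
    assert unpack_node(*packed) == vals     # lossless reconstruction
    # Only one dominant value plus the deviating children are signaled instead of
    # four full values, which is where the bit saving would come from.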

A Fast TU Size Decision Method for HEVC RQT Coding

  • Wu, Jinfu; Guo, Baolong; Yan, Yunyi; Hou, Jie; Zhao, Dan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.6 / pp.2271-2288 / 2015
  • The emerging High Efficiency Video Coding (HEVC) standard adopts a quadtree-structured transform unit (TU) in residual quadtree (RQT) coding. Each TU can be split recursively into four equal sub-TUs. RQT coding is performed for all possible transform depth levels to achieve the highest coding efficiency, but this imposes a very high computational complexity on HEVC encoders. To reduce the computational complexity required by RQT coding, this paper proposes a fast TU size decision method that combines an adaptive maximum transform depth determination (AMTD) algorithm and a full check skipping - early termination (FCS-ET) algorithm. Because the optimal transform depth level is highly content-dependent, it is not necessary to perform RQT coding at all transform depth levels. The AMTD algorithm determines the maximum transform depth level for the current treeblock so as to skip those transform depth levels rarely used by its spatially adjacent treeblocks. Additionally, the FCS-ET algorithm exploits the correlation of transform depth levels among the four sub-CUs generated by one coding unit (CU) quadtree partitioning. Experimental results demonstrate that the proposed overall algorithm reduces computational complexity by 21% on average while maintaining almost the same rate-distortion (RD) performance as the HEVC test model reference software, HM 13.0.
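
The AMTD side of the idea can be pictured as bounding the recursive RQT search by the depths used by the spatial neighbours. The sketch below uses a toy RD cost and a hypothetical neighbour rule (maximum neighbour depth plus one); the paper's actual conditions, and the FCS-ET part, are not reproduced.

    # Hedged sketch: neighbour-bounded transform-depth search (simplified).
    import numpy as np

    def split4(b):
        h, w = b.shape[0] // 2, b.shape[1] // 2
        return [b[:h, :w], b[:h, w:], b[h:, :w], b[h:, w:]]

    def rd_cost(b, depth):
        # Toy cost: residual energy plus a small penalty per signaled split level.
        return float(np.sum(b.astype(np.int64) ** 2)) + 16 * depth

    def adaptive_max_depth(left_depth, above_depth, codec_max_depth=3):
        """AMTD-style bound: skip depths rarely used by the spatial neighbours."""
        neighbours = [d for d in (left_depth, above_depth) if d is not None]
        if not neighbours:
            return codec_max_depth              # no context: full search
        return min(codec_max_depth, max(neighbours) + 1)

    def rqt_search(block, depth, max_depth):
        """Search TU depths up to max_depth; return best (cost, chosen depth)."""
        best_cost, best_depth = rd_cost(block, depth), depth
        if depth < max_depth and block.shape[0] >= 8:
            split_cost = sum(rqt_search(sb, depth + 1, max_depth)[0]
                             for sb in split4(block))
            if split_cost < best_cost:
                best_cost, best_depth = split_cost, depth + 1
        return best_cost, best_depth

    residual = np.random.randint(-4, 5, (32, 32))
    max_d = adaptive_max_depth(left_depth=1, above_depth=0)   # bound from neighbours
    print(rqt_search(residual, 0, max_d))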

Dynamic Universal Variable Length Coding with Fixed Re-Association Table (고정 재배정 테이블 기반 동적 UVLC 부호화 방법)

  • Choe, Ung-Il; Jeon, Byeong-U; Yu, Guk-Yeol; Cheon, Gang-Uk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.2 / pp.56-68 / 2002
  • The Universal Variable Length Coding (UVLC) scheme in H.26L has attractive features such as error resiliency and two-way decodability. However, it has lower coding efficiency than conventional Huffman coding. To improve the coding efficiency of UVLC, we propose a dynamic codeword mapping that changes the association between symbols and codewords so as to exploit the statistical characteristics of the symbols as much as possible without losing any of the features of UVLC. The encoder and decoder use the same re-association table, so the encoder need not send any additional overhead describing the re-mapping to the decoder. Simulation results show that, without significant change to the current H.26L coding scheme, the proposed method additionally attains bit reductions of up to about 8% in intra frames and about 5% in inter frames over the current H.26L encoding method.
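
The sketch below illustrates the general idea of keeping the UVLC codeword set fixed while re-associating symbols with codewords according to the statistics observed so far, identically at encoder and decoder so that no side information is needed. The one-step promotion rule shown is a simplified stand-in for the paper's fixed re-association table.

    # Hedged sketch of dynamic symbol-to-codeword re-association over UVLC codes.
    def uvlc(code_number):
        """Exp-Golomb-style universal codeword for a non-negative code number."""
        info = code_number + 1
        prefix = '0' * (info.bit_length() - 1)
        return prefix + bin(info)[2:]

    class DynamicMap:
        def __init__(self, symbols):
            self.order = list(symbols)          # position in this list = code number
            self.count = {s: 0 for s in symbols}

        def encode(self, symbol):
            bits = uvlc(self.order.index(symbol))
            self._update(symbol)                # decoder applies the same update
            return bits

        def _update(self, symbol):
            # Promote frequent symbols toward shorter codewords (one-step swap).
            self.count[symbol] += 1
            i = self.order.index(symbol)
            if i > 0 and self.count[symbol] > self.count[self.order[i - 1]]:
                self.order[i - 1], self.order[i] = self.order[i], self.order[i - 1]

    dm = DynamicMap(['A', 'B', 'C', 'D'])
    print([dm.encode(s) for s in 'DDDABD'])     # 'D' migrates toward shorter codes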

Overview and Performance Analysis of the Emerging Scalable Video Coding (스케일러블 비디오 부호화의 개요 및 성능 분석)

  • Choi, Hae-Chul; Lee, Kyung-Il; Kang, Jung-Woo; Bae, Seong-Jun; Yoo, Jeong-Ju
    • Journal of Broadcast Engineering / v.12 no.6 / pp.542-554 / 2007
  • Seamless streaming of multimedia content over heterogeneous networks to viewers using a variety of devices has long been desired for many multimedia services, and it requires the content to be adapted to usage environments such as network characteristics, terminal capabilities, and user preferences. Scalability in video coding is one of the most attractive features for meeting the dynamically changing requirements of heterogeneous networks. Currently, a new scalable video coding (SVC) standard is being developed in the Joint Video Team (JVT) of the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG), and it will be released as Extension 3 of H.264/MPEG-4 AVC. In this paper, we introduce the new technologies of SVC and evaluate its performance, especially with regard to the overhead bit-rate and coding efficiency of supporting spatial, temporal, and quality scalability.
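
Temporal scalability, the simplest of the three scalability dimensions mentioned above, can be pictured as assigning each frame of a dyadic hierarchical GOP a temporal layer id and dropping the highest layers to lower the frame rate. The GOP structure below is a generic dyadic hierarchy used for illustration, not the exact JSVM configuration evaluated in the paper.

    # Hedged sketch of temporal-layer assignment in a dyadic hierarchical GOP.
    def temporal_layer(poc, gop_size=8):
        """Layer 0 = key pictures; deeper layers fill in the frames between them."""
        step, layer = gop_size, 0
        while poc % step != 0:
            step //= 2
            layer += 1
        return layer

    frames = list(range(17))
    print([temporal_layer(p) for p in frames])
    # [0, 3, 2, 3, 1, 3, 2, 3, 0, ...]: discarding layer 3 halves the frame rate,
    # discarding layers 2 and 3 quarters it, without re-encoding the stream.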

Context-based Predictive Coding Scheme for Lossless Image Compression (무손실 영상 압축을 위한 컨텍스트 기반 적응적 예측 부호화 방법)

  • Kim, Jongho; Yoo, Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.1 / pp.183-189 / 2013
  • This paper proposes a novel lossless image compression scheme composed of direction-adaptive prediction and context-based entropy coding. In the prediction stage, we analyze the directional property around the current coding pixel and select an appropriate prediction pixel. To further reduce the prediction error, we propose a prediction-error compensation technique based on a context model defined by the activities and directional properties of neighboring pixels. The proposed scheme applies context-based Golomb-Rice coding as the entropy coder, since coding efficiency can be improved by exploiting the conditional entropy from the viewpoint of information theory. Experimental results indicate that the proposed lossless image compression scheme outperforms the low-complexity, high-efficiency JPEG-LS by 1.3% on average in coding efficiency over various test images; for images with pronounced directional structure, the gains are larger.
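
The sketch below shows context-based Golomb-Rice coding of prediction errors in the same spirit: the Rice parameter k is derived per context from accumulated error magnitudes (similar to JPEG-LS), and signed errors are mapped to non-negative indices before coding. The coarse context bucketing implied by the example call is only an assumption, not the context definition used in the paper.

    # Hedged sketch of context-based Golomb-Rice coding of prediction errors.
    def rice_encode(value, k):
        """Golomb-Rice code of a non-negative value with parameter k."""
        q, r = value >> k, value & ((1 << k) - 1)
        unary = '1' * q + '0'
        return unary + (format(r, f'0{k}b') if k else '')

    def zigzag(e):
        """Map a signed prediction error to a non-negative index."""
        return 2 * e if e >= 0 else -2 * e - 1

    class ContextCoder:
        def __init__(self, n_ctx=4):
            self.acc = [4] * n_ctx     # accumulated error magnitudes per context
            self.cnt = [1] * n_ctx     # number of samples seen per context

        def k_for(self, ctx):
            k = 0
            while (self.cnt[ctx] << k) < self.acc[ctx]:
                k += 1
            return k

        def encode(self, error, ctx):
            bits = rice_encode(zigzag(error), self.k_for(ctx))
            self.acc[ctx] += abs(error)
            self.cnt[ctx] += 1
            return bits

    coder = ContextCoder()
    # ctx 0 = flat neighbourhood, ctx 3 = highly textured (assumed bucketing).
    print(coder.encode(-2, 0), coder.encode(15, 3))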

Selective Interpolation Filter for Video Coding (비디오 압축을 위한 선택적인 보간 필터)

  • Nam, Jung-Hak; Jo, Hyun-Ho; Sim, Dong-Gyu; Lee, Soo-Youn
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.1 / pp.58-66 / 2012
  • Even after the establishment of the H.264/AVC standard, the Video Coding Experts Group (VCEG) of ITU-T has continued to research promising coding techniques for increasing coding efficiency, based on the Key Technology Area (KTA) software. Recently, the Joint Collaborative Team on Video Coding (JCT-VC), composed of VCEG and the Moving Picture Experts Group (MPEG) of ISO/IEC, has been developing a next-generation video standard, HEVC, intended to double the coding efficiency of H.264/AVC. Adaptive interpolation filtering, one of the candidate next-generation techniques, has been reported to give higher coding efficiency, but it has high computational complexity and does not cope with the diverse error characteristics of video. In this paper, we investigate the characteristics of interpolation filters and propose an effective fixed interpolation filter bank that covers diverse error properties. Experimental results show that the proposed method achieves bitrate reductions of 0.7% and 1.3% compared to the fixed directional interpolation filter (FDIF) of KTA and the directional interpolation filter (DIF) of the HEVC test model, respectively.
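
Selecting among a small fixed bank of interpolation filters can be sketched as computing each filter's half-pel prediction and keeping the one with the lowest distortion. In the sketch below the first tap set is the well-known H.264/AVC 6-tap half-pel filter; the second, smoother set and the SSE-based selection are illustrative assumptions, not the filter bank or decision rule of the paper.

    # Hedged sketch of per-region selection from a fixed interpolation filter bank.
    import numpy as np

    FILTER_BANK = {
        'h264_6tap': np.array([1, -5, 20, 20, -5, 1]) / 32.0,
        'smooth':    np.array([1,  3, 12, 12,  3, 1]) / 32.0,   # assumed example entry
    }

    def half_pel_row(row, taps):
        """Interpolate the half-sample positions of a 1-D row with the given taps."""
        return np.convolve(np.asarray(row, dtype=float), taps, mode='valid')

    def select_filter(row, target_half_pels):
        """Pick the bank entry whose half-pel prediction has the lowest SSE."""
        costs = {name: float(np.sum((half_pel_row(row, taps) - target_half_pels) ** 2))
                 for name, taps in FILTER_BANK.items()}
        return min(costs, key=costs.get), costs

    row = np.array([10, 12, 15, 40, 80, 120, 130, 128, 125, 60], dtype=float)
    target = half_pel_row(row, FILTER_BANK['h264_6tap']) + np.random.normal(0, 0.5, 5)
    print(select_filter(row, target))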

A Method of Merge Candidate List Construction using an Alternative Merge Candidate (대체 병합 후보를 이용한 병합 후보 리스트 구성 기법)

  • Park, Do-Hyeon; Yoon, Yong-Uk; Do, Ji-Hoon; Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.24 no.1 / pp.41-47 / 2019
  • Recently, enhanced inter merge methods have been investigated in the standardization of Versatile Video Coding (VVC), the next-generation video coding standard with capability beyond High Efficiency Video Coding (HEVC). In merge mode, if not enough motion information is available in the neighboring blocks, a zero-motion candidate is inserted into the merge candidate list, which can decrease coding efficiency. In this paper, we propose an efficient method of constructing the merge candidate list that generates an alternative merge candidate to reduce the cases in which zero motion is used as a candidate. Experimental results show that the proposed method gives an average BD-rate gain of 0.2% with a decoding-time increase of 3% compared with VTM 1.0.
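
The sketch below shows the general shape of such a list construction: spatial and temporal candidates are gathered with duplicate pruning, an alternative candidate is derived when the list is still short, and zero motion is used only as the final fallback. Averaging the first two candidates is a hypothetical stand-in; the paper's actual derivation of the alternative candidate is not reproduced here.

    # Hedged sketch of merge-list construction with an alternative candidate.
    def build_merge_list(spatial, temporal, list_size=6):
        cand = []
        for mv in spatial + temporal:
            if mv is not None and mv not in cand:   # prune unavailable and duplicate MVs
                cand.append(mv)
            if len(cand) == list_size:
                return cand
        if 2 <= len(cand) < list_size:
            alt = ((cand[0][0] + cand[1][0]) // 2,  # pairwise-average style
                   (cand[0][1] + cand[1][1]) // 2)  # alternative candidate (assumed rule)
            if alt not in cand:
                cand.append(alt)
        while len(cand) < list_size:
            cand.append((0, 0))                     # zero-motion fallback
        return cand

    spatial = [(4, -2), (6, 0), None, None, None]   # illustrative neighbouring MVs
    temporal = [None]
    print(build_merge_list(spatial, temporal))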

Uni-directional 4X4 Intra Prediction Mode for H.264/AVC Coding Efficiency (H.264/AVC에서 성능 향상을 위한 단방향의 4X4 인트라 예측 모드)

  • Jung, Kwang-Su; Park, Sea-Nae; Sim, Dong-Gyu; Lee, Yoon-Jin; Park, Gwang-Hoon; Oh, Seoung-Jun; Jeong, Sey-Yoon; Choi, Jin-Soo
    • Journal of Broadcast Engineering / v.15 no.6 / pp.815-829 / 2010
  • In this paper, we propose a new 4×4 intra coding method based on unidirectional prediction to improve the intra-frame coding efficiency of H.264/AVC. The current H.264/AVC provides 4×4, 8×8, and 16×16 intra prediction modes. The 4×4 intra prediction gains coding efficiency from accurate small-block prediction in relatively complicated regions, while the 16×16 intra prediction predicts relatively homogeneous regions well with only one piece of directional information. We propose a unidirectional 4×4 intra prediction method that combines small-block prediction with a single prediction direction: prediction is performed on 4×4 blocks whose prediction directions are all the same, so only one piece of directional information needs to be sent per macroblock. In the intra-frame coding setting, the proposed method achieves BD-bitrate gains of 10.47% and 1.57% over the 16×16-only intra mode and the combined 4×4/16×16 intra modes, respectively.
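
The sketch below illustrates the unidirectional idea: each 4×4 block inside a 16×16 macroblock is predicted at 4×4 granularity, but all sixteen blocks share one direction, so a single direction suffices per macroblock. Only the vertical and horizontal modes are shown, and neighbours are taken from an already-reconstructed picture for simplicity; the full H.264/AVC mode set and the paper's signalling are not reproduced.

    # Hedged sketch of unidirectional 4x4 intra prediction for one macroblock.
    import numpy as np

    def predict_4x4(top, left, mode):
        if mode == 'vertical':                # copy the row above into every row
            return np.tile(top, (4, 1))
        if mode == 'horizontal':              # copy the left column into every column
            return np.tile(left.reshape(4, 1), (1, 4))
        raise ValueError(mode)

    def predict_macroblock(rec, mb_y, mb_x, mode):
        """Predict a 16x16 macroblock as sixteen 4x4 blocks sharing one mode."""
        pred = np.zeros((16, 16))
        for by in range(0, 16, 4):
            for bx in range(0, 16, 4):
                # Simplification: neighbours come from an already-reconstructed picture.
                top = rec[mb_y + by - 1, mb_x + bx: mb_x + bx + 4]
                left = rec[mb_y + by: mb_y + by + 4, mb_x + bx - 1]
                pred[by:by + 4, bx:bx + 4] = predict_4x4(top, left, mode)
        return pred

    rec = np.random.randint(0, 256, (33, 33)).astype(float)   # toy reconstructed picture
    print(predict_macroblock(rec, 1, 1, 'vertical').shape)    # only one mode signalled per MB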