• Title/Summary/Keyword: Versatile Video Coding (VVC)

A Review on Motion Estimation and Compensation for Versatile Video Coding Technology (VVC)

  • Choi, Young-Ju; Kim, Byung-Gyu
    • Journal of Korea Multimedia Society / v.22 no.7 / pp.770-779 / 2019
  • Video coding technologies are progressively becoming more efficient and more complex. Versatile Video Coding (VVC) is a new state-of-the-art video compression standard, developed as the successor to the High Efficiency Video Coding (HEVC) standard. To explore future video coding technologies beyond HEVC, numerous efficient methods were adopted by the Joint Video Exploration Team (JVET). From this work, the next-generation video coding standard named VVC and its software model, the VVC Test Model (VTM), have emerged. In this paper, several important coding features for motion estimation (ME) and motion compensation (MC) in the VVC standard are introduced and analyzed in terms of performance. The improved coding tools introduced for ME and MC in VVC achieve a much better balance between coding efficiency and coding complexity compared with HEVC.
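
As background for the ME/MC tools surveyed above, the sketch below illustrates the plain translational block-matching baseline that these tools refine: an integer-pel full search minimizing the sum of absolute differences (SAD). The block size, search range, and SAD-only cost (no rate term) are illustrative assumptions, not the VTM search.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum()

def full_search_me(cur, ref, bx, by, bsize=16, search_range=8):
    """Return the motion vector (dx, dy) minimizing SAD for the block at (bx, by).

    cur, ref: 2-D numpy arrays (luma planes of the current and reference frames).
    """
    h, w = cur.shape
    cur_block = cur[by:by + bsize, bx:bx + bsize]
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            rx, ry = bx + dx, by + dy
            if rx < 0 or ry < 0 or rx + bsize > w or ry + bsize > h:
                continue  # candidate block falls outside the reference frame
            cost = sad(cur_block, ref[ry:ry + bsize, rx:rx + bsize])
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost

# Example: random frames, one 16x16 block located at (32, 32)
cur = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(full_search_me(cur, ref, 32, 32))
```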

Fast Affine Motion Estimation Method for Versatile Video Coding (다목적 비디오 부호화를 위한 고속 어파인 움직임 예측 방법)

  • Jung, Seong-Won; Jun, Dong-San
    • Journal of the Korean Society of Industry Convergence / v.25 no.4_2 / pp.707-714 / 2022
  • Versatile Video Coding (VVC) is the most recent video coding standard, developed by the Joint Video Experts Team (JVET). It significantly improves coding performance compared to the previous standard, High Efficiency Video Coding (HEVC). Although VVC achieves powerful coding performance, the VVC encoder requires tremendous computational complexity. In particular, affine motion compensation (AMC) adopts block-based 4-parameter or 6-parameter affine prediction to overcome the limits of the translational motion model, at the cost of higher encoding complexity. In this paper, we propose an early termination of AMC that determines whether affine motion estimation for AMC is performed or not. Experimental results show that the proposed method reduces the encoding complexity of affine motion estimation (AME) by up to 16% compared to VVC Test Model 17 (VTM17).
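
The abstract above describes an early-termination test that decides whether affine motion estimation is run for a CU. The sketch below shows the general shape of such a test; the feature (ratio of the best translational cost to the merge cost) and the threshold are hypothetical placeholders, not the criterion from the paper.

```python
def decide_affine_me(translational_rd_cost, merge_rd_cost, threshold=1.0):
    """Hypothetical early-termination check: run affine ME only when the best
    translational inter cost is not already clearly the winner.

    The feature (cost ratio) and the threshold are illustrative assumptions.
    """
    if merge_rd_cost <= 0:
        return True  # degenerate case: fall back to running affine ME
    return translational_rd_cost / merge_rd_cost > threshold

# Usage: the costs would come from the encoder's earlier RD checks for this CU.
if decide_affine_me(translational_rd_cost=1250.0, merge_rd_cost=1100.0):
    print("run 4/6-parameter affine motion estimation")
else:
    print("skip affine ME for this CU")
```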

Multi-Layer Perceptron Based Ternary Tree Partitioning Decision Method for Versatile Video Coding (다목적 비디오 부/복호화를 위한 다층 퍼셉트론 기반 삼항 트리 분할 결정 방법)

  • Lee, Taesik; Jun, Dongsan
    • Journal of Korea Multimedia Society / v.25 no.6 / pp.783-792 / 2022
  • Versatile Video Coding (VVC) is the latest video coding standard, developed in 2020 by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and ISO/IEC Moving Picture Experts Group (MPEG). Although VVC provides powerful coding performance, it requires tremendous computational complexity to determine the optimal block structures during the encoding process. In this paper, we propose a fast ternary tree decision method using two neural networks based on the multi-layer perceptron structure, named STH-NN and STV-NN, each with a 7-node input vector. As a result of training, the STH-NN and STV-NN achieved accuracies of 85% and 91%, respectively. Experimental results show that the proposed method reduces the encoding complexity by up to 25% with unnoticeable coding loss compared to the VVC test model (VTM).
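
A minimal sketch of the kind of MLP classifier described above, assuming PyTorch: a 7-element feature vector in, one probability out, with a separate instance for horizontal (STH-NN) and vertical (STV-NN) ternary splits. The hidden width, the choice of features, and the 0.5 threshold are assumptions, not the trained networks from the paper.

```python
import torch
import torch.nn as nn

class TTDecisionMLP(nn.Module):
    """Minimal MLP sketch for a binary ternary-tree split decision."""
    def __init__(self, in_features=7, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # probability that the TT split should be evaluated
        )

    def forward(self, x):
        return self.net(x)

# Usage: one decision per CU; skip the TT split RD check when the probability is low.
sth_nn = TTDecisionMLP()                 # horizontal-TT network (STV-NN would be a second instance)
features = torch.rand(1, 7)              # e.g. block size, QP, texture statistics (assumed features)
p_split = sth_nn(features).item()
if p_split < 0.5:                        # decision threshold is an assumption
    print("skip horizontal ternary-tree RD search for this CU")
```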

CNN Based In-loop Filter in Versatile Video Coding (VVC) (CNN 기반의 VVC 인-루프 필터 설계)

  • Moon, Hyeonchul; Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.270-271 / 2018
  • This paper proposes a CNN architecture for in-loop filtering in VVC (Versatile Video Coding), the newly launched video compression standard. The proposed CNN takes the decoded picture as input and is trained using the error between the original and decoded pictures as the loss function. It is based on a feature-extraction structure that uses convolution filters of various sizes to account for the various CU (Coding Unit) sizes used in video coding. Experiments confirm that the proposed CNN-based filtering can improve the in-loop filtering performance of the VTM (VVC Test Model), the reference software of VVC.
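
A minimal sketch of the idea described above, assuming PyTorch: parallel convolution branches with different kernel sizes (to reflect varying CU sizes) applied to the decoded picture, trained against the original picture. The channel counts, depths, and residual formulation are assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class MultiKernelInLoopFilter(nn.Module):
    """Sketch of a CNN in-loop filter with parallel branches of different kernel sizes."""
    def __init__(self, channels=32):
        super().__init__()
        self.branch3 = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(1, channels, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(1, channels, kernel_size=7, padding=3)
        self.fuse = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(3 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, decoded):
        feats = torch.cat([self.branch3(decoded),
                           self.branch5(decoded),
                           self.branch7(decoded)], dim=1)
        return decoded + self.fuse(feats)   # predict a restoration residual

# Training step sketch: loss is the distance between the filtered and original frames.
model = MultiKernelInLoopFilter()
decoded = torch.rand(1, 1, 64, 64)          # reconstructed luma patch
original = torch.rand(1, 1, 64, 64)         # pristine luma patch
loss = nn.functional.mse_loss(model(decoded), original)
loss.backward()
```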

CNN-based Fast Split Mode Decision Algorithm for Versatile Video Coding (VVC) Inter Prediction

  • Yeo, Woon-Ha; Kim, Byung-Gyu
    • Journal of Multimedia Information System / v.8 no.3 / pp.147-158 / 2021
  • Versatile Video Coding (VVC) is the latest video coding standard, developed by the Joint Video Experts Team (JVET). VVC adopts the quadtree plus multi-type tree (QT+MTT) coding unit (CU) partition structure, whose computational complexity is considerably high due to the brute-force search for recursive rate-distortion (RD) optimization. In this paper, we aim to reduce the time complexity of inter-picture prediction, since inter prediction accounts for a large portion of the total encoding time. The problem can be defined as classifying the split mode of each CU. To classify the split mode effectively, a novel convolutional neural network (CNN) called the multi-level tree CNN (MLT-CNN) architecture is introduced. To boost classification performance, we utilize additional information, including inter-picture information, while training the CNN. The overall algorithm, including the MLT-CNN inference process, is implemented on VVC Test Model (VTM) 11.0. CUs of size 128×128 are used as inputs to the CNN. The sequences are encoded in the random access (RA) configuration with five QP values {22, 27, 32, 37, 42}. The experimental results show that the proposed algorithm reduces the computational complexity by 11.53% on average, and by up to 26.14%, with an average increase of 1.01% in Bjøntegaard delta bit rate (BDBR). In particular, the proposed method performs better on class A and B sequences, reducing encoding time by 9.81%~26.14% with a BDBR increase of 0.95%~3.28%.
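
A minimal sketch of a CNN split-mode classifier for 128×128 CUs, assuming PyTorch. The real MLT-CNN uses a dedicated multi-level architecture with additional inter-picture inputs; the layer sizes, the two-channel input, and the six-way split-mode label set below are assumptions.

```python
import torch
import torch.nn as nn

SPLIT_MODES = ["NO_SPLIT", "QT", "BT_HOR", "BT_VER", "TT_HOR", "TT_VER"]

class SplitModeCNN(nn.Module):
    """Sketch of a split-mode classifier taking a 128x128 CU as input."""
    def __init__(self, in_ch=2, num_classes=len(SPLIT_MODES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),      # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),      # 32 -> 16
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, cu):
        return self.classifier(self.features(cu).flatten(1))

# Usage: keep only the top-scoring split modes in the encoder's RD search.
model = SplitModeCNN()
cu = torch.rand(1, 2, 128, 128)             # assumed channels: luma + prediction residual
probs = model(cu).softmax(dim=1)
print(SPLIT_MODES[probs.argmax().item()])
```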

Neural Network-Based Intra Prediction Considering Multiple Transform Selection in Versatile Video Coding (VVC 의 다중 변환 선택을 고려한 신경망 기반 화면내 예측)

  • Dohyeon Park; Gihwa Moon; Sung-Chang Lim; Jae-Gon Kim
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.11a / pp.8-9 / 2022
  • Since the completion of the VVC (Versatile Video Coding) standard, JVET (Joint Video Experts Team) has been exploring and verifying neural network-based coding technologies, including intra prediction, through the NNVC (Neural Network-based Video Coding) EE (Exploration Experiment). This paper proposes a TDIP (Transform-Dependent Intra Prediction) model that can select an appropriate prediction block according to the Multiple Transform Selection (MTS) adopted in VVC. Experimental results show that, in the All Intra (AI) coding configuration of VVC, the proposed method improves video coding performance with BD-rate savings of 0.87%, 0.87%, and 0.99% for Y, U, and V, respectively, compared to the VTM (VVC Test Model).
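
A minimal sketch of transform-dependent intra prediction, assuming PyTorch: a small network predicts a block from reference samples conditioned on a one-hot MTS index, so the encoder can evaluate one candidate prediction per transform option. The block size, feature dimensions, and conditioning scheme are assumptions, not the TDIP model from the paper.

```python
import torch
import torch.nn as nn

MTS_CANDIDATES = ["DCT2_DCT2", "DST7_DST7", "DCT8_DST7", "DST7_DCT8", "DCT8_DCT8"]

class TransformConditionedIntraNet(nn.Module):
    """Sketch: flattened reference samples + one-hot MTS index -> 4x4 prediction block."""
    def __init__(self, num_refs=17, block=4, num_mts=len(MTS_CANDIDATES)):
        super().__init__()
        self.block = block
        self.net = nn.Sequential(
            nn.Linear(num_refs + num_mts, 64), nn.ReLU(),
            nn.Linear(64, block * block),
        )

    def forward(self, refs, mts_onehot):
        x = torch.cat([refs, mts_onehot], dim=1)
        return self.net(x).view(-1, self.block, self.block)

# Usage: generate one candidate prediction per MTS option and let the encoder
# pick the (prediction, transform) pair with the lowest RD cost.
model = TransformConditionedIntraNet()
refs = torch.rand(1, 17)                        # left + above reference samples (assumed layout)
for i, name in enumerate(MTS_CANDIDATES):
    onehot = torch.eye(len(MTS_CANDIDATES))[i].unsqueeze(0)
    pred = model(refs, onehot)
    print(name, tuple(pred.shape))
```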

Fast Decision Method of Adaptive Motion Vector Resolution (적응적 움직임 벡터 해상도 고속 결정 기법)

  • Park, Sang-hyo
    • Journal of Broadcast Engineering / v.25 no.3 / pp.305-312 / 2020
  • As demand grows for a new video coding standard with higher coding efficiency than existing standards, MPEG and VCEG have recently been developing and standardizing the next-generation video coding project, named Versatile Video Coding (VVC). Many inter prediction techniques have been introduced to increase the coding efficiency, and among them, the adaptive motion vector resolution (AMVR) technique has contributed to increasing the efficiency of VVC. However, the best motion vector can only be determined by computing many rate-distortion costs, which increases encoding complexity. Reducing this complexity is necessary for real-time video broadcasting and streaming services, but reducing the complexity of AMVR is still an open research topic. Therefore, in this paper, an efficient technique is proposed that reduces the encoding complexity of AMVR. The proposed method exploits the VVC tree structure (i.e., the multi-type tree structure) to accelerate the AMVR decision process. Experimental results show that the proposed decision method reduces the encoding complexity of the VVC test model by 10% with a negligible loss of coding efficiency.
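
The sketch below shows the general shape of a fast AMVR decision that consults the multi-type tree context of a CU to prune motion-vector resolutions from the RD search. The specific rule and thresholds are hypothetical, not the decision criterion proposed in the paper.

```python
def amvr_precisions_to_test(mtt_depth, is_ternary_split,
                            full_set=("1/4", "1/2", "1", "4")):
    """Hypothetical fast-AMVR rule: deep or ternary-split CUs tend to be small
    and detailed, so restrict the motion-vector resolutions checked by the RD
    search. The rule and thresholds are illustrative assumptions.
    """
    if mtt_depth >= 2 or is_ternary_split:
        return ("1/4",)          # keep only the finest resolution
    if mtt_depth == 1:
        return ("1/4", "1")      # drop the intermediate resolutions
    return full_set              # shallow CUs: test every AMVR resolution

# Usage inside the encoder loop for one CU:
for precision in amvr_precisions_to_test(mtt_depth=2, is_ternary_split=False):
    print("run motion estimation at", precision, "luma-sample resolution")
```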

Generation of Alternative Merge Candidates for Versatile Video Coding (VVC) (VVC를 위한 대체 움직임 정보 병합 후보 생성 기법)

  • Park, Dohyeon; Lee, Jinho; Kang, Jung Won; Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.147-148 / 2018
  • JVET (Joint Video Experts Team) has recently begun standardization of VVC (Versatile Video Coding), a new video compression standard. HM and VTM, the reference software codecs of the existing HEVC and of VVC, use the motion information merge (Merge) mode for efficient inter-picture predictive coding. This paper presents a method for generating alternative Merge candidates that can substitute for spatial neighboring blocks whose motion information is unavailable during VTM Merge candidate list construction. The proposed method was tested under the JVET CTC (Common Test Conditions), and the experimental results show BD-rate reductions of 0.2%, 0.17%, and 0.12% for the Y, U, and V components, respectively.
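
A minimal sketch of merge-list construction with substitute candidates: when a spatial neighbor carries no motion information, an alternative candidate is inserted in its place. The substitution rule used here (reuse the previous available MV with a small offset, else a zero MV) is an illustrative assumption, not the generation scheme from the paper.

```python
def build_merge_list(spatial_neighbors, max_candidates=6):
    """Sketch of Merge-list construction with substitute candidates.

    spatial_neighbors: motion vectors (tuples) or None when a neighbor has no
    motion information (e.g. intra-coded or outside the picture).
    """
    merge_list, last_available = [], None
    for mv in spatial_neighbors:
        if mv is not None:
            candidate = mv
            last_available = mv
        elif last_available is not None:
            candidate = (last_available[0] + 1, last_available[1])  # substitute MV (assumed rule)
        else:
            candidate = (0, 0)                                       # zero-MV fallback
        if candidate not in merge_list:                              # redundancy check
            merge_list.append(candidate)
        if len(merge_list) == max_candidates:
            break
    return merge_list

# Neighbors A1, B1, B0, A0, B2; two of them carry no motion information.
print(build_merge_list([(3, -1), None, (3, -1), None, (0, 2)]))
```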

Entropy Coding in VVC (VVC의 엔트로피 코딩)

  • Kim, Dae-Yeon
    • Broadcasting and Media Magazine / v.24 no.4 / pp.102-108 / 2019
  • VVC (Versatile Video Coding) builds on CABAC (Context-based Adaptive Binary Arithmetic Coding), the entropy coding technology used in H.264/AVC (Advanced Video Coding) and H.265/HEVC (High Efficiency Video Coding), and adopts various techniques that improve compression ratio and throughput; the CD (Committee Draft) has now been completed and the reference model VTM6.0 has been officially released. This paper explains the differences between the entropy coding techniques adopted in VVC Draft 6 and the entropy coding of H.265/HEVC, and analyzes the compression performance and complexity of the entropy coding.
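
One well-known CABAC refinement in VVC is the two-rate ("multi-hypothesis") probability estimator kept per context model: a fast-adapting and a slow-adapting estimate are updated on every bin and averaged for coding. The sketch below illustrates that update; the bit depth, shift values, and initialization are assumptions, not the exact integer arithmetic of the standard.

```python
class CabacContextSketch:
    """Sketch of a two-rate probability estimator for one CABAC context model."""
    def __init__(self, shift_fast=4, shift_slow=7, bits=15):
        self.one = 1 << bits              # fixed-point representation of probability 1.0
        self.p_fast = self.one >> 1       # fast-adapting estimate of P(bin = 1)
        self.p_slow = self.one >> 1       # slow-adapting estimate of P(bin = 1)
        self.shift_fast, self.shift_slow = shift_fast, shift_slow

    def prob_one(self):
        """Probability of a '1' bin used for coding, as a float in [0, 1]."""
        return (self.p_fast + self.p_slow) / (2 * self.one)

    def update(self, bin_value):
        """Move both estimates toward the observed bin, at different rates."""
        target = self.one if bin_value else 0
        self.p_fast += (target - self.p_fast) >> self.shift_fast
        self.p_slow += (target - self.p_slow) >> self.shift_slow

# Feed a biased bin sequence and watch the estimate adapt.
ctx = CabacContextSketch()
for b in [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]:
    ctx.update(b)
print(round(ctx.prob_one(), 3))
```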

Block Shape Adaptive Candidate List Derivation for Inter Prediction in Versatile Video Coding (VVC) (VVC 의 블록모양 적응적 화면간 예측 후보 리스트 유도 기법)

  • Do, JiHoon; Park, Dohyeon; Kim, Jae-Gon; Jeong, Dae-Gwon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.257-259 / 2018
  • JVET (Joint Video Experts Team) has recently named the new video compression standard VVC (Versatile Video Coding) and begun its standardization with the goal of completion in 2020. For efficient inter-picture prediction, HEVC and VVC construct Merge/AMVP (Advanced Motion Vector Prediction) candidate lists from the motion information of spatially and temporally neighboring blocks and select the best motion information. This paper proposes a method that, when deriving the Merge/AMVP candidate list, considers the shape of the current block and gives priority to the motion information of neighboring blocks with high correlation. The performance of the proposed method is verified experimentally against the VTM (VVC Test Model).
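
A purely illustrative sketch of block-shape adaptive candidate ordering: wide blocks check the above neighbors first, tall blocks check the left neighbors first. The concrete orderings are hypothetical and do not reproduce the derivation proposed in the paper.

```python
def spatial_candidate_order(cu_width, cu_height):
    """Hypothetical block-shape adaptive ordering of spatial Merge/AMVP neighbors.

    For wide blocks the above neighbors are assumed to correlate better with the
    current block and are checked first; for tall blocks the left neighbors come
    first. The orderings themselves are illustrative assumptions.
    """
    left_first = ["A1", "B1", "B0", "A0", "B2"]       # HEVC-like default order
    above_first = ["B1", "A1", "B0", "A0", "B2"]
    if cu_width > cu_height:
        return above_first
    if cu_height > cu_width:
        return left_first
    return left_first                                  # square CU: keep the default

print(spatial_candidate_order(32, 8))   # wide block -> above neighbors first
print(spatial_candidate_order(8, 32))   # tall block -> left neighbors first
```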
