• Title/Summary/Keyword: VTM

56 search results

Multiple Signature Comparison of LogTM-SE for Fast Conflict Detection (다중 시그니처 비교를 통한 트랜잭셔널 메모리의 충돌해소 정책의 성능향상)

  • Kim, Deok-Ho;Oh, Doo-Hwan;Ro, Won-W.
    • The KIPS Transactions:PartA / v.18A no.1 / pp.19-24 / 2011
  • As the era of multi-core processors has arrived, transactional memory has been considered an effective method to achieve easy and fast multi-threaded programming. Various hardware transactional memory systems, such as UTM, VTM, FastTM, LogTM, and LogTM-SE, have been introduced in order to implement high-performance multi-core processors. In particular, LogTM-SE has provided solid performance with an efficient memory management policy and a practical thread scheduling method through signature-based conflict detection. However, an increasing number of cores per processor raises the hardware complexity of signature processing, causing overall performance degradation due to the heavy signature-comparison workload. In this paper, we propose a new multiple-signature-comparison architecture to improve conflict detection in signature-based transactional memory systems.
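Signature-based conflict detection of the kind this abstract describes can be sketched with Bloom-filter signatures: each transaction summarizes its read/write sets in a fixed-width bit vector, and a conflict is flagged whenever two signatures intersect. The signature width and hash functions below are illustrative choices, not the paper's configuration.

```python
SIG_BITS = 64  # signature width (illustrative)

def hashes(addr):
    # Two simple hash functions over the block address (illustrative).
    return [(addr * 2654435761) % SIG_BITS, (addr ^ (addr >> 7)) % SIG_BITS]

def insert(sig, addr):
    # Add an address to a signature by setting its hash bits.
    for h in hashes(addr):
        sig |= 1 << h
    return sig

def may_conflict(sig_a, sig_b):
    # Non-empty intersection => potential conflict (false positives are
    # possible, false negatives are not).
    return (sig_a & sig_b) != 0

# Transaction A writes block 100; transaction B reads block 100 -> conflict.
write_sig = insert(0, 100)
read_sig = insert(0, 100)
```

Because signatures are compared pairwise, the comparison workload grows with the core count, which is the bottleneck the paper's multiple-comparison architecture targets.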

Pyramid Feature Compression with Inter-Level Feature Restoration-Prediction Network (계층 간 특징 복원-예측 네트워크를 통한 피라미드 특징 압축)

  • Kim, Minsub;Sim, Donggyu
    • Journal of Broadcast Engineering / v.27 no.3 / pp.283-294 / 2022
  • Feature maps used in deep-learning networks generally contain more data than the input image, so transmitting a feature map requires a higher compression rate than image compression. This paper proposes a method for transmitting, at a high compression rate, the pyramid feature map used in networks with an FPN structure, which is robust to object size in deep-learning-based image processing. To compress the pyramid feature map efficiently, this paper proposes a structure that predicts the untransmitted pyramid levels from the transmitted levels through the proposed prediction network, and restores compression damage through the proposed reconstruction network. On the COCO 2017 Train images, the object-detection mAP of the proposed method showed a BD-rate improvement of 31.25% in the rate-precision graph over compressing the feature map with VTM 12.0, and a BD-rate improvement of 57.79% over compression with PCA and DeepCABAC.
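The inter-level prediction idea can be sketched in miniature: transmit some pyramid levels, predict an untransmitted level from them, and send only the residual. The 2x2 average-pooling predictor below is a hand-crafted stand-in for the paper's learned prediction and restoration networks, and the data are toy values.

```python
def downsample2x(feat):
    # Predict the next (coarser) pyramid level by 2x2 average pooling --
    # a hand-crafted stand-in for the learned prediction network.
    h, w = len(feat), len(feat[0])
    return [[(feat[2 * i][2 * j] + feat[2 * i][2 * j + 1] +
              feat[2 * i + 1][2 * j] + feat[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

# Encoder side: transmit the finer level P3 and only the residual of P4.
p3 = [[float(4 * i + j) for j in range(4)] for i in range(4)]
p4 = [[v + 0.1 for v in row] for row in downsample2x(p3)]  # "true" P4 (toy)
pred = downsample2x(p3)
residual = [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(p4, pred)]

# Decoder side: rebuild P4 from the transmitted P3 and residual.
p4_rec = [[a + b for a, b in zip(r1, r2)]
          for r1, r2 in zip(downsample2x(p3), residual)]
```

A good predictor makes the residual small and therefore cheap to code, which is where the compression-rate gain comes from.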

Accurate Prediction of VVC Intra-coded Block using Convolutional Neural Network (VVC 화면 내 예측에서의 딥러닝 기반 예측 블록 개선을 통한 부호화 효율 향상 기법)

  • Jeong, Hye-Sun;Kang, Je-Won
    • Journal of Broadcast Engineering / v.27 no.4 / pp.477-486 / 2022
  • In this paper, we propose a novel intra-prediction method using a convolutional neural network (CNN) to improve the quality of a predicted block in VVC. The proposed algorithm goes through a two-step procedure. First, an input prediction block is generated using one of the VVC intra-prediction modes. Second, the prediction block is further refined through a CNN model that takes as input the prediction block itself and the reconstructed reference samples on the boundary. The proposed algorithm outputs a refined block to reduce residual signals and enhance coding efficiency, and is enabled by a CU-level flag. Experimental results demonstrate that the proposed method achieves improved rate-distortion performance compared to the VVC reference software, i.e., VTM version 10.0.
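The CU-level flag decision implied above can be sketched as a simple distortion comparison: keep the refined prediction only when it is closer to the original block. SSE stands in here for a full rate-distortion cost, and the sample values are toy data.

```python
def sse(a, b):
    # Sum of squared errors between two sample lists.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def choose_prediction(original, vvc_pred, cnn_pred):
    # Returns (flag, prediction): flag=1 signals the decoder to apply
    # the CNN refinement for this CU.
    if sse(original, cnn_pred) < sse(original, vvc_pred):
        return 1, cnn_pred
    return 0, vvc_pred

flag, pred = choose_prediction([10, 12, 14], [8, 11, 13], [10, 12, 15])
```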

Fast Inverse Transform Considering Multiplications (곱셈 연산을 고려한 고속 역변환 방법)

  • Hyeonju Song;Yung-Lyul Lee
    • Journal of Broadcast Engineering / v.28 no.1 / pp.100-108 / 2023
  • In hybrid block-based video coding, transform coding converts spatial-domain residual signals into frequency-domain data and concentrates energy in a low-frequency band to achieve high compression efficiency in entropy coding. The state-of-the-art video coding standard, VVC (Versatile Video Coding), uses DCT-2 (Discrete Cosine Transform type 2), DST-7 (Discrete Sine Transform type 7), and DCT-8 (Discrete Cosine Transform type 8) for the primary transform. In this paper, considering that DCT-2, DST-7, and DCT-8 are all linear transforms, we propose an inverse transform that exploits this linearity to reduce the number of multiplications. Compared to VTM-8.2, the proposed method reduced encoding and decoding time by an average of 26% and 15% in the AI configuration and 4% and 10% in the RA configuration, respectively, without any bitrate increase.
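The linearity argument can be made concrete: for a linear inverse transform, the output is the sum of basis vectors scaled by the coefficients, so when most quantized coefficients are zero, multiplications scale with the nonzero count rather than the block size. A 4-point floating-point DCT-2 is used below for clarity (VVC itself uses scaled integer matrices).

```python
import math

N = 4
# Rows of the forward DCT-2 (orthonormal), so the inverse transform is
# a weighted sum of these rows as basis vectors.
basis = [[math.sqrt((1 if k == 0 else 2) / N) *
          math.cos(math.pi * k * (2 * n + 1) / (2 * N))
          for n in range(N)] for k in range(N)]

def inverse_full(coeffs):
    # Dense inverse transform: N multiplications per output sample.
    return [sum(coeffs[k] * basis[k][n] for k in range(N)) for n in range(N)]

def inverse_sparse(coeffs):
    # Skip zero coefficients: multiplications only for nonzero entries.
    out = [0.0] * N
    for k, c in enumerate(coeffs):
        if c != 0:
            for n in range(N):
                out[n] += c * basis[k][n]
    return out

coeffs = [8.0, 0.0, -2.0, 0.0]  # typical sparse block after quantization
```

Both paths give identical results; the sparse path simply performs 2N instead of N&sup2; multiplications for this block.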

Separate Scale for Position Dependent Intra Prediction Combination of VVC

  • Yoon, Yong-Uk;Park, Dohyeon;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.11a / pp.20-21 / 2019
  • The Joint Video Experts Team (JVET) has been working on the development of the next-generation video coding standard called Versatile Video Coding (VVC). Position Dependent Intra Prediction Combination (PDPC), one of the major tools for intra prediction, refines the prediction through a linear combination of the reconstructed reference samples and the predicted samples according to the sample position. In VVC WD6, nScale, a shift value that adjusts the weights, is determined by the width and height of the current block. This may cause PDPC to be applied to regions that do not fit the characteristics of the current intra-prediction mode. In this paper, we define nScale separately for the width and the height so that the weights can be applied independently to the left and top reference samples. Experimental results show that, compared to VTM 6.0, the proposed method gives -0.01%, -0.04%, and 0.01% Bjøntegaard-Delta (BD)-rate performance for the Y, Cb, and Cr components, respectively, in the All-Intra (AI) configuration.
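The PDPC weighting under discussion can be sketched in its planar/DC form as found in the VVC drafts, together with the separate-scale idea; the per-direction scale rule shown for the `separate` case is illustrative, not the paper's exact derivation.

```python
def log2i(v):
    # Integer log2 for power-of-two block dimensions.
    return v.bit_length() - 1

def pdpc_sample(pred, left_ref, top_ref, x, y, width, height, separate=False):
    if separate:
        # Proposed direction: one scale per dimension (illustrative rule).
        scale_x = max(0, (log2i(width) - 2) >> 1)   # governs the left weight
        scale_y = max(0, (log2i(height) - 2) >> 1)  # governs the top weight
    else:
        # VVC WD6: a single nScale from both block dimensions.
        scale_x = scale_y = (log2i(width) + log2i(height) - 2) >> 2
    w_l = 32 >> min(31, (x << 1) >> scale_x)  # weight of the left reference
    w_t = 32 >> min(31, (y << 1) >> scale_y)  # weight of the top reference
    return (w_l * left_ref + w_t * top_ref +
            (64 - w_l - w_t) * pred + 32) >> 6
```

The weights decay with distance from the block boundary, so samples far from the references keep essentially the original prediction.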

The Effect on the Gas Selectivity of Gas Sensors by Binder in SWNTs Solution (SWCNTs 용액 속의 바인더로 인한 가스센서의 가스 선택성의 효과)

  • Kim, Seong-Jeen;Gam, Byung-Min;Lee, Ho-Jung;Han, Jung-Tak;Lee, Gun-Woong
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2009.04b / pp.103-105 / 2009
  • In this study, we investigated the increased sensitivity and gas selectivity that TEOS and VTMS binders in a single-walled carbon nanotube (SWNT) solution give to nanotube sensors. In general, the binder, an organic compound mixed into the SWNT solution, adheres well to the substrate. When the binder cures, functionalized groups form on the surface of the hydrolyzed binder, namely groups such as the -OH and $-CH=CH_2$ of hydrolyzed TEOS and VTMS on the SWNT surface. Charge is therefore transferred between the carbon nanotubes and adsorbed molecules, changing the electrical conductivity. In this experiment, as the alcohol concentration increased, the resistance of the sensor using the TEOS binder decreased, while the resistance of the sensor using the VTMS binder increased.

Analysis of VVC Coding Tools and Constraints Considering Hardware Decoder Implementation (하드웨어 복호화기 구현을 고려한 VVC 부호화 도구 및 제약 분석)

  • Lee, Sang-Heon
    • Broadcasting and Media Magazine / v.24 no.4 / pp.120-131 / 2019
  • With the growth of high-resolution video services and the shift toward OTT-centered viewing environments such as YouTube and Netflix, new video codec standards targeting higher compression rates than before are being actively developed. The most representative is VVC (Versatile Video Coding), under development in MPEG, which targets twice the compression rate of the existing HEVC (High Efficiency Video Coding). However, as the complexity of the new coding tools for improving compression keeps increasing, hardware decoder implementation is expected to face many difficulties. To overcome them, the VVC standardization effort optimizes the coding tools from a hardware-implementation perspective and defines several constraints on their use. This paper analyzes the coding tools and constraints adopted in consideration of hardware decoder implementation, based on the current VVC CD and the VTM 6.0 software.

A Review on Motion Estimation and Compensation for Versatile Video Coding Technology (VVC)

  • Choi, Young-Ju;Kim, Byung-Gyu
    • Journal of Korea Multimedia Society / v.22 no.7 / pp.770-779 / 2019
  • Video coding technologies are progressively becoming more efficient and more complex. Versatile Video Coding (VVC) is a new state-of-the-art video compression standard, the successor to the High Efficiency Video Coding (HEVC) standard. To explore future video coding technologies beyond HEVC, numerous efficient methods have been adopted by the Joint Video Exploration Team (JVET). Since then, the next-generation video coding standard named VVC and its software model, the VVC Test Model (VTM), have emerged. In this paper, several important coding features for motion estimation (ME) and motion compensation (MC) in the VVC standard are introduced and analyzed in terms of performance. The improved coding tools introduced for ME and MC in VVC achieve a much better balance between coding efficiency and coding complexity than HEVC.
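At the core of the ME tools reviewed here is block matching: choosing the motion vector that minimizes a distortion measure such as SAD over a search window. A minimal full-search sketch on 1-D "frames" for brevity:

```python
def sad(block_a, block_b):
    # Sum of absolute differences, the usual ME distortion measure.
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def full_search(cur, ref, pos, size, search_range):
    # Returns (best_mv, best_sad) for the block cur[pos:pos+size].
    block = cur[pos:pos + size]
    best = (0, float("inf"))
    for mv in range(-search_range, search_range + 1):
        start = pos + mv
        if 0 <= start and start + size <= len(ref):
            cost = sad(block, ref[start:start + size])
            if cost < best[1]:
                best = (mv, cost)
    return best

ref = [0, 0, 10, 20, 30, 0, 0, 0]
cur = [0, 0, 0, 10, 20, 30, 0, 0]   # scene shifted right by one sample
mv, cost = full_search(cur, ref, 3, 3, 2)
```

The VVC tools covered by the review (affine motion, merge refinements, and so on) generalize exactly this search-and-compensate loop.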

Block Shape Adaptive Candidate List Derivation for Inter Prediction in Versatile Video Coding (VVC) (VVC 의 블록모양 적응적 화면간 예측 후보 리스트 유도 기법)

  • Do, JiHoon;Park, Dohyeon;Kim, Jae-Gon;Jeong, Dae-Gwon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.257-259 / 2018
  • Recently, the JVET (Joint Video Experts Team) named the new video compression standard VVC (Versatile Video Coding) and began its standardization, targeting completion in 2020. For the coding efficiency of inter prediction, HEVC and VVC construct Merge/AMVP (Advanced Motion Vector Prediction) candidate lists from the motion information of spatially/temporally neighbouring blocks and use the optimal motion information. This paper proposes a method that, when deriving the Merge/AMVP candidate list, gives priority to the motion information of highly correlated neighbouring blocks according to the shape of the current block. The performance of the proposed method is verified through experiments against the VTM (VVC Test Model).
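The shape-adaptive priority idea can be sketched as a reordering of the spatial candidate positions: check the neighbours along the block's longer side first, since they are more likely to share its motion. The particular orderings below are illustrative, not the paper's exact rule.

```python
def candidate_order(width, height):
    # Spatial Merge/AMVP candidate positions, reordered by block shape.
    wide_first = ["above", "above_right", "left", "below_left", "above_left"]
    tall_first = ["left", "below_left", "above", "above_right", "above_left"]
    return wide_first if width >= height else tall_first
```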

Multi-Layer Perceptron Based Ternary Tree Partitioning Decision Method for Versatile Video Coding (다목적 비디오 부/복호화를 위한 다층 퍼셉트론 기반 삼항 트리 분할 결정 방법)

  • Lee, Taesik;Jun, Dongsan
    • Journal of Korea Multimedia Society / v.25 no.6 / pp.783-792 / 2022
  • Versatile Video Coding (VVC) is the latest video coding standard, developed by the Joint Video Experts Team (JVET) of the ITU-T Video Coding Experts Group (VCEG) and ISO/IEC Moving Picture Experts Group (MPEG) in 2020. Although VVC provides powerful coding performance, it requires tremendous computational complexity to determine the optimal block structures during the encoding process. In this paper, we propose a fast ternary-tree decision method using two neural networks based on the multi-layer perceptron structure with a 7-node input vector, named STH-NN and STV-NN. After training, STH-NN and STV-NN achieved accuracies of 85% and 91%, respectively. Experimental results show that the proposed method reduces encoding complexity by up to 25% with unnoticeable coding loss compared to the VVC Test Model (VTM).
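A minimal sketch of an MLP-based split decision in the spirit of STH-NN/STV-NN: a 7-node input vector passes through one hidden layer, and a sigmoid output decides whether to evaluate the ternary-tree split at all. The features and weights here are toy placeholders, not trained values.

```python
import math

def mlp_decide(features, w1, b1, w2, b2):
    # One hidden layer with ReLU, then a sigmoid output in [0, 1].
    hidden = [max(0.0, sum(w * f for w, f in zip(row, features)) + b)
              for row, b in zip(w1, b1)]
    z = sum(w * h for w, h in zip(w2, hidden)) + b2
    p = 1.0 / (1.0 + math.exp(-z))
    return p > 0.5  # True -> evaluate the TT split; False -> skip it

# Toy network: 7 inputs, 2 hidden nodes, 1 output (placeholder weights).
w1 = [[0.1] * 7, [-0.1] * 7]
b1 = [0.0, 0.0]
w2 = [1.0, -1.0]
b2 = 0.0
decision = mlp_decide([0.5] * 7, w1, b1, w2, b2)
```

Skipping the split evaluation when the network says so is where the encoding-time saving comes from; a misclassification only costs a slightly suboptimal partition, which matches the "unnoticeable coding loss" reported above.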