• Title/Summary/Keyword: 3D video coding


3D Visual Attention Model and its Application to No-reference Stereoscopic Video Quality Assessment (3차원 시각 주의 모델과 이를 이용한 무참조 스테레오스코픽 비디오 화질 측정 방법)

  • Kim, Donghyun;Sohn, Kwanghoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.4
    • /
    • pp.110-122
    • /
    • 2014
  • As multimedia technologies develop, three-dimensional (3D) technologies are attracting increasing attention from researchers. In particular, video quality assessment (VQA) has become a critical issue in stereoscopic image/video processing applications. Furthermore, the human visual system (HVS) could play an important role in measuring stereoscopic video quality, yet existing VQA methods have done little to model the HVS for stereoscopic video. We seek to amend this by proposing a 3D visual attention (3DVA) model that simulates the HVS for stereoscopic video by combining multiple perceptual stimuli such as depth, motion, color, intensity, and orientation contrast. We utilize this 3DVA model for pooling over significant regions of very poor video quality, and we propose a no-reference (NR) stereoscopic VQA (SVQA) method. We validated the proposed SVQA method using subjective test scores from our own experiments and those reported by others. Our approach yields high correlation with the measured mean opinion score (MOS) as well as consistent performance under asymmetric coding conditions. Additionally, the 3DVA model is used to extract region-of-interest (ROI) information. Subjective evaluations of the extracted ROIs indicate that 3DVA-based ROI extraction outperforms the compared extraction methods that use spatial and/or temporal terms.
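The abstract describes fusing several perceptual feature maps (depth, motion, color, intensity, orientation contrast) into one attention map. A minimal sketch of such a fusion, assuming a simple normalized weighted sum (the paper's exact 3DVA combination rule is not given here, and the weights below are illustrative):

```python
# Sketch: fusing per-pixel feature maps into a single saliency map by a
# normalized weighted sum. The linear fusion rule and the weights are
# illustrative assumptions, not the paper's exact 3DVA formulation.

def normalize(feature_map):
    """Scale a 2D map to the range [0, 1]."""
    flat = [v for row in feature_map for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in feature_map]

def fuse_saliency(maps, weights):
    """Weighted sum of equally sized 2D feature maps."""
    norm = [normalize(m) for m in maps]
    h, w = len(maps[0]), len(maps[0][0])
    return [[sum(wt * m[y][x] for wt, m in zip(weights, norm))
             for x in range(w)] for y in range(h)]

depth  = [[0.1, 0.9], [0.2, 0.8]]
motion = [[0.0, 1.0], [0.5, 0.5]]
color  = [[0.3, 0.3], [0.3, 0.9]]
saliency = fuse_saliency([depth, motion, color], [0.5, 0.3, 0.2])
```

Pooling for NR quality assessment would then weight local distortion scores by this saliency map.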

Complexity Analysis of Internet Video Coding (IVC) Decoding

  • Park, Sang-hyo;Dong, Tianyu;Jang, Euee S.
    • Journal of Multimedia Information System
    • /
    • v.4 no.4
    • /
    • pp.179-188
    • /
    • 2017
  • The Internet Video Coding (IVC) standard is due to be published by the Moving Picture Experts Group (MPEG) for various Internet applications such as Internet broadcast streaming. IVC fundamentally aims at three things: 1) keeping IVC patents under a free-of-charge license, 2) reaching compression performance comparable to the AVC/H.264 constrained Baseline Profile (cBP), and 3) keeping computational complexity feasible for real-time encoding and decoding. MPEG experts have worked diligently on the intellectual property rights issues for IVC, and they reported that IVC has already achieved the second goal (compression performance), even showing performance comparable to the AVC/H.264 High Profile (HP). For the complexity issue, however, there has not been a thorough analysis of the IVC decoder. In this paper, we analyze the time complexity of the IVC decoder by evaluating its running time. The experimental results show that IVC is 3.6 times and 3.1 times more complex than AVC/H.264 cBP under constrained set (CS) 1 and CS2, respectively. Compared to AVC/H.264 HP, IVC is 2.8 times and 2.9 times slower in decoding time under CS1 and CS2, respectively. The most critical tool to improve for a lightweight IVC decoder is the motion compensation process, which contains a resolution-adaptive interpolation filtering process.
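The complexity figures above are ratios of measured decoding times. A minimal sketch of how such a ratio can be obtained, assuming stand-in workloads in place of real decoders (the functions below are dummies for illustration only):

```python
# Sketch: comparing two decoders by averaged wall-clock running time.
# The decode functions are synthetic stand-ins, not real IVC/AVC decoders.
import time

def measure(decode_fn, runs=3):
    """Average wall-clock time of a decode call over several runs."""
    t0 = time.perf_counter()
    for _ in range(runs):
        decode_fn()
    return (time.perf_counter() - t0) / runs

def ivc_decode():  # dummy heavy workload
    sum(range(300000))

def avc_decode():  # dummy lighter workload
    sum(range(100000))

# Relative complexity under this synthetic workload.
ratio = measure(ivc_decode) / measure(avc_decode)
```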

H.264 Encoding Technique of Multi-view Video expressed by Layered Depth Image (계층적 깊이 영상으로 표현된 다시점 비디오에 대한 H.264 부호화 기술)

  • Shin, Jong-Hong;Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.2
    • /
    • pp.43-51
    • /
    • 2014
  • Multi-view video including depth images requires a new compression encoding technique for storage and transmission because of its huge amount of data. The layered depth image is an efficient representation method for multi-view video data: it builds a single data structure by synthesizing multi-view color and depth images. We suggest compressing such content efficiently by using the layered depth image representation and applying video compression encoding with 3D warping. This paper proposes an enhanced compression method that combines the layered depth image representation with H.264/AVC video coding technology. Experimental results confirm high compression performance and good quality of the reconstructed images.

A Cross-Layer Unequal Error Protection Scheme for Prioritized H.264 Video using RCPC Codes and Hierarchical QAM

  • Chung, Wei-Ho;Kumar, Sunil;Paluri, Seethal;Nagaraj, Santosh;Annamalai, Annamalai Jr.;Matyjas, John D.
    • Journal of Information Processing Systems
    • /
    • v.9 no.1
    • /
    • pp.53-68
    • /
    • 2013
  • We investigate rate-compatible punctured convolutional (RCPC) codes concatenated with hierarchical QAM for designing a cross-layer unequal error protection scheme for H.264 coded sequences. We first divide the H.264 encoded video slices into three priority classes based on their relative importance. We investigate the system constraints and propose an optimization formulation to compute the optimal parameters of the proposed system for the given source significance information. An upper bound on the significance-weighted bit error rate of the proposed system is derived as a function of system parameters, including the code rate and the geometry of the constellation. An example is given with design rules for H.264 video communications, and simulations show a 3.5-4 dB PSNR improvement over existing RCPC-based techniques for AWGN wireless channels.
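The scheme above maps slice priority classes to different channel-protection strengths. A minimal sketch of that mapping step, assuming three classes as in the paper; the specific RCPC rates and protection labels below are illustrative assumptions, not the paper's optimized parameters:

```python
# Sketch: attaching unequal error protection parameters to prioritized
# H.264 slices. Rates and labels are illustrative assumptions.

UEP_PROFILE = {
    0: {"rcpc_rate": (1, 2), "qam_level": "high-protection"},    # most important
    1: {"rcpc_rate": (2, 3), "qam_level": "medium-protection"},
    2: {"rcpc_rate": (4, 5), "qam_level": "low-protection"},     # least important
}

def protect(slices):
    """Attach channel-coding parameters to each (priority, payload) slice."""
    return [(payload, UEP_PROFILE[prio]) for prio, payload in slices]

stream = protect([(0, b"header"), (2, b"inter-residual"), (1, b"intra-residual")])
```

In the actual system these parameters would be chosen by the paper's optimization over the significance-weighted bit error rate bound.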

The Design of Adaptive Quantizer to Improve Image Quality of the H.263 (H.263의 화질 개선을 위한 적응 양자화기 설계)

  • 신경철;이광형
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.6
    • /
    • pp.77-83
    • /
    • 1999
  • H.263 is an ITU-T international standard that enables services such as video telephony and video conferencing over transmission lines of less than 64 kbps. The recommendation uses motion estimation/compensation, transform coding, and quantization. TMN5, used for evaluating the performance of H.263, fundamentally employs the DCT as its transform coding method and specifies a quantizer for the DCT transform coefficients. This paper presents an adaptive quantizer that effectively quantizes DCT coefficients by considering human visual sensitivity while maintaining the structure of TMN5. Because the proposed DCT-based H.263 quantizer can transmit more frames than TMN5 at the same transfer rate, it reduces the frame-drop effect. In the objective image quality evaluation, the luminance signal showed a difference of -0.3 to +0.7 dB in average PSNR, and the chrominance signal showed an improvement of about 1.5 dB compared with TMN5. As a result, better image quality than TMN5 is attained in the subjective image quality evaluation.
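The idea of an HVS-aware quantizer is to use a coarser step for high-frequency DCT coefficients, where the eye is less sensitive. A minimal sketch, assuming a simple linear step-growth rule (the paper's actual step design is not reproduced here):

```python
# Sketch: quantizing a DCT block with a frequency-dependent step size that
# grows toward high frequencies, mimicking lower HVS sensitivity there.
# The step formula is an illustrative assumption, not the paper's design.

def adaptive_quantize(block, base_step=8):
    """Quantize a 2D block of DCT coefficients with position-dependent steps."""
    out = []
    for u, row in enumerate(block):
        out_row = []
        for v, coeff in enumerate(row):
            step = base_step * (1 + (u + v) / 4)  # coarser at high frequency
            out_row.append(round(coeff / step))
        out.append(out_row)
    return out

quantized = adaptive_quantize([[100, 40], [40, 20]])
```

The DC coefficient (top-left) keeps the finest step, so low-frequency content survives quantization with the least distortion.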


Multi-view Video Codec for 3DTV (3DTV를 위한 다시점 동영상 부호화 기법)

  • Bae Jin-Woo;Song Hyok;Yoo Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.3A
    • /
    • pp.337-344
    • /
    • 2006
  • In this paper, we propose a multi-view video codec for a 3DTV system. The proposed algorithm reduces not only the temporal and spatial redundancy but also the redundancy among views, thereby improving the coding efficiency for multi-view video sequences. To reduce the inter-view redundancy more efficiently, we define the assembled image (AI), which is generated by global disparity compensation of each view. In addition, the proposed algorithm is based on the MPEG-2 structure, so a 3DTV system can be implemented easily without changing the conventional 2D digital TV system. Experimental results show that the proposed algorithm performs very well and outperforms the MPEG-2 simulcast coding method. The proposed codec also supports view scalability, accurate temporal synchronization among multiple views, and random access capability in the view dimension.
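Global disparity compensation applies one horizontal shift to an entire view so that it aligns with a reference view before inter-view prediction. A minimal sketch of that shift, assuming a single integer disparity and zero-padding at the exposed border (the paper's full AI construction around this step is not reproduced):

```python
# Sketch: compensating a view with a single global disparity (a uniform
# horizontal shift). Zero-padding at the border is an illustrative choice.

def global_disparity_compensate(view, disparity):
    """Shift each row of a 2D luma array right by `disparity` pixels."""
    w = len(view[0])
    return [[row[x - disparity] if 0 <= x - disparity < w else 0
             for x in range(w)] for row in view]

shifted = global_disparity_compensate([[1, 2, 3], [4, 5, 6]], 1)
```

After compensation, the residual between the shifted view and the reference view is small, which is what makes inter-view prediction effective.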

Efficient Representation of Patch Packing Information for Immersive Video Coding (몰입형 비디오 부호화를 위한 패치 패킹 정보의 효율적인 표현)

  • Lim, Sung-Gyun;Yoon, Yong-Uk;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.126-128
    • /
    • 2021
  • The MPEG (Moving Picture Experts Group) video group is standardizing MIV (MPEG Immersive Video), a 6DoF (Degrees of Freedom) immersive video coding standard that provides users with motion parallax and enables rendering views at arbitrary positions and orientations within 3D space. Alongside the MIV standardization, the reference software TMIV (Test Model for Immersive Video) is being developed, with its coding performance progressively improved. To compress the massive 6DoF video composed of multiple views, TMIV removes the redundancy among the input view videos, turns the remaining regions into individual patches, and packs them into atlases to reduce the number of coded pixels. The position information of the patches packed into the atlas video is transmitted as metadata along with the compressed bitstream, and this paper proposes a method to represent this packing information more efficiently. Compared with TMIV 10.0, the proposed method reduces the metadata by about 10% and improves the end-to-end BD-rate performance by 0.1%.
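One generic way to shrink patch-position metadata is to store deltas between consecutive patch coordinates instead of absolute atlas offsets. A minimal sketch of that idea, purely illustrative; the actual MIV metadata syntax is defined by the standard and is not reproduced here:

```python
# Sketch: delta-coding patch positions in packing order so that small
# offsets dominate and entropy-code cheaply. Illustrative only; not the
# MIV bitstream syntax.

def delta_encode(positions):
    """positions: list of (x, y) atlas offsets, in packing order."""
    out, px, py = [], 0, 0
    for x, y in positions:
        out.append((x - px, y - py))
        px, py = x, y
    return out

def delta_decode(deltas):
    out, x, y = [], 0, 0
    for dx, dy in deltas:
        x, y = x + dx, y + dy
        out.append((x, y))
    return out

positions = [(0, 0), (16, 0), (16, 32), (48, 32)]
roundtrip = delta_decode(delta_encode(positions))
```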


A Method of Hole Filling for Atlas Generation in Immersive Video Coding (몰입형 비디오 부호화의 아틀라스 생성을 위한 홀 채움 기법)

  • Lim, Sung-Gyun;Lee, Gwangsoon;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2021.06a
    • /
    • pp.75-77
    • /
    • 2021
  • The MPEG video group is developing the MIV (MPEG Immersive Video) standard for efficient immersive video coding, together with its test model TMIV (Test Model for Immersive Video), to render desired views while providing motion parallax within a limited 3D space. Because a large number of view videos is required to deliver an immersive visual experience, highly efficient compression of the massive video data is unavoidable. TMIV converts the multiple input view videos into a small number of atlas videos to reduce the number of coded pixels. An atlas is a video in which a few selected basic views and patches, made from the regions of the remaining additional views that cannot be synthesized from the basic views, are packed together. This paper proposes a technique that fills the small holes occurring inside patches for more efficient coding of the atlas video. The proposed technique improves the BD-rate by 1.2% compared with TMIV 8.0.
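Small holes inside a patch can be filled from their valid neighbours so the encoder does not waste bits on sharp, meaningless discontinuities. A minimal sketch, assuming holes are marked `None` and filled with the average of valid 4-neighbours (a simple stand-in for TMIV's actual hole-filling rule):

```python
# Sketch: filling holes (None) in a patch by averaging valid 4-neighbours,
# repeating until no fillable hole remains. The averaging rule is an
# illustrative assumption, not TMIV's exact method.

def fill_holes(patch):
    h, w = len(patch), len(patch[0])
    changed = True
    while changed:
        changed = False
        for y in range(h):
            for x in range(w):
                if patch[y][x] is None:
                    nbrs = [patch[ny][nx]
                            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                            if 0 <= ny < h and 0 <= nx < w
                            and patch[ny][nx] is not None]
                    if nbrs:
                        patch[y][x] = sum(nbrs) / len(nbrs)
                        changed = True
    return patch

filled = fill_holes([[10, None], [10, 10]])
```

Smoother patch interiors cost fewer bits under block-transform coding, which is where the reported BD-rate gain comes from.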


Neural Network-Based Post Filtering of Atlas for Immersive Video Coding (몰입형 비디오 부호화를 위한 신경망 기반 아틀라스 후처리 필터링)

  • Lim, Sung-Gyun;Lee, Kun-Woo;Kim, Jeong-Woo;Yoon, Yong-Uk;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.239-241
    • /
    • 2022
  • The MIV (MPEG Immersive Video) standard efficiently compresses views at various positions within a limited 3D space and provides users with six degrees of freedom (6DoF) immersion for arbitrary positions and orientations. TMIV (Test Model for Immersive Video), the reference software of MIV, removes the redundant regions among the input views captured at multiple viewpoints, turns the remaining regions into patches, packs them into an atlas, and compresses and transmits it. Unlike ordinary video, an atlas video contains many discontinuities, which significantly degrade coding efficiency. This paper presents a neural network-based post-filtering technique to reduce the coding loss of atlas videos. Compared with the existing TMIV, the proposed technique improves the reconstruction quality of the atlas.


HEVC Standardization Trends and the Structure and Performance of Test-Model Version 1 (HEVC 표준화 동향과 Test-Model Version 1의 구성 및 성능)

  • Han, U-Jin
    • Broadcasting and Media Magazine
    • /
    • v.15 no.4
    • /
    • pp.9-22
    • /
    • 2010
  • Research is under way to dramatically improve the quality of existing video services, including full-HD 3D broadcasting, UD (ultra-definition) video services, and bidirectional HD-grade video communication for mobile devices. This article introduces the standardization trends of HEVC (high-efficiency video coding; MPEG-H/H.265), a new next-generation video compression standard whose goal is to more than double the performance of the existing H.264/AVC video compression standard. It also briefly reviews the performance evaluation process that was conducted to determine the component technologies of HEVC test model (HM) version 1, and finally presents the overall structure of HM and evaluation results on its current performance level.