• Title/Summary/Keyword: Video encoder

Lightweight video coding using spatial correlation and symbol-level error-correction channel code (공간적 유사성과 심볼단위 오류정정 채널 코드를 이용한 경량화 비디오 부호화 방법)

  • Ko, Bong-Hyuck; Shim, Hiuk-Jae; Jeon, Byeung-Woo
    • Journal of Broadcast Engineering / v.13 no.2 / pp.188-199 / 2008
  • In conventional video coding, encoder complexity is much higher than that of the decoder. However, investigation of lightweight encoders that eliminate motion prediction/compensation, which accounts for most of the encoder complexity, has recently become an important issue. Wyner-Ziv coding is one of the representative schemes for this problem: since the encoder generates only parity bits of the current frame without performing any process that extracts correlation information between frames, it has an extremely simple structure compared to conventional coding techniques. However, in Wyner-Ziv coding, channel decoding errors occur when noisy side information is used in the channel decoding process. These errors appear more frequently when there is not enough correlation between frames to generate accurate side information, and as a result they look like salt-and-pepper noise in the reconstructed frame. Since this noise severely degrades subjective video quality even though it occurs rarely, we previously proposed a computationally very light encoding method based on a selective median filter that corrects such noise using the spatial correlation within a frame. In that method, however, the loss of texture caused by filtering may exceed the gain from error correction for video sequences with complex texture. Therefore, in this paper, we propose an improved lightweight encoding method that minimizes the loss of texture detail from filtering by letting the selective median filter utilize the texture and noise information in the side information. Our experiments verify an average PSNR gain of up to 0.84 dB over the previous method.
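The selective median filter described above corrects isolated salt-and-pepper-like decoding errors while trying not to smear genuine texture. Below is a minimal illustrative sketch of that idea only; the thresholds and the noise/texture tests are assumptions of this sketch, not the authors' actual criteria.

```python
import numpy as np

def selective_median_filter(recon, side_info, noise_thr=40, texture_thr=20):
    """Illustrative selective median filter (not the paper's exact criteria).

    A pixel is replaced by its 3x3 median only when it deviates strongly from
    that median (a likely salt-and-pepper decoding error) while the co-located
    side-information neighborhood is smooth (low variation, so probably not
    genuine texture that filtering would destroy).
    """
    h, w = recon.shape
    out = recon.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            med = np.median(recon[y - 1:y + 2, x - 1:x + 2])
            looks_noisy = abs(int(recon[y, x]) - med) > noise_thr
            looks_flat = side_info[y - 1:y + 2, x - 1:x + 2].std() < texture_thr
            if looks_noisy and looks_flat:
                out[y, x] = int(med)      # correct the suspected error
    return out
```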

Fast Distributed Video Coding using Parallel LDPCA Encoding (병렬 LDPCA 채널코드 부호화 방법을 사용한 고속 분산비디오부호화)

  • Park, Jong-Bin; Jeon, Byeung-Woo
    • Journal of Broadcast Engineering / v.16 no.1 / pp.144-154 / 2011
  • In this paper, we propose a parallel LDPCA encoding method for fast transform-domain Wyner-Ziv video encoding, which is suitable for ultra-fast and low-power video encoding. Conventional transform-domain Wyner-Ziv video encoding performs LDPCA channel coding of the quantized transform coefficients in a bitplane-serial fashion, which takes about 60% of the total encoding time, and this computational burden becomes more severe as the bitrate increases. The proposed method binds several bitplanes into one packed message and carries out the LDPCA encoding in parallel. The proposed LDPCA encoding method improves the encoding speed by 8 to 55 times. In the experiments, the proposed Wyner-Ziv encoder can encode 700 to 2,300 QCIF-size frames per second with GOP = 64. The method can also be applied to pixel-domain Wyner-Ziv encoders using LDPCA and therefore has a wide scope of application.
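One way to read the "packed message" idea is word-level parallelism: if the bits of several bitplanes are packed into one integer word per coefficient, a single XOR in the syndrome accumulation serves all bitplanes at once instead of looping over them serially. The sketch below illustrates that reading only; the `edges` structure is a stand-in for the real LDPCA accumulator graph, not the paper's implementation.

```python
import numpy as np

def packed_syndromes(quantized, edges, num_planes=4):
    """Compute syndrome bits for several bitplanes at once (illustrative sketch).

    quantized : 1-D integer array of quantized coefficients.
    edges     : for each syndrome node, the message positions it accumulates
                (a placeholder for the real LDPCA graph).
    """
    packed = quantized & ((1 << num_planes) - 1)      # pack low bitplanes into one word
    words = []
    for positions in edges:
        acc = 0
        for j in positions:
            acc ^= int(packed[j])                     # one XOR covers all planes
        words.append(acc)
    words = np.asarray(words)
    # unpack: bit b of words[i] is syndrome bit i of bitplane b
    return [((words >> b) & 1).astype(np.uint8) for b in range(num_planes)]
```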

Low-Power Video Decoding on a Variable Voltage Processor for Mobile Multimedia Applications

  • Lee, Seong-Soo
    • ETRI Journal / v.27 no.5 / pp.504-510 / 2005
  • This paper proposes a novel low-power video decoding scheme. In an encoded video bitstream, there are quite a large number of non-coded blocks. When the number of non-coded blocks in a frame is known at the start of frame decoding, the workload of video decoding can be estimated. Consequently, the supply voltage of very large-scale integration (VLSI) circuits can be lowered, and the power consumption can be reduced. In the proposed scheme, the encoder counts the number of non-coded blocks and stores this information in the frame header of the bitstream. Simulation results show that the proposed scheme reduces the power consumption to about 1/10 to 1/20.
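The decision this scheme enables is simple: once the frame header reveals how many blocks are non-coded, the decoder can estimate the frame's workload and pick the lowest supply voltage (and clock) that still meets the frame deadline. A toy sketch of that selection follows; all cycle counts and voltage/speed levels are invented for illustration, not taken from the paper.

```python
def choose_supply_voltage(total_blocks, non_coded_blocks,
                          cycles_coded=1200, cycles_non_coded=50,
                          frame_budget_at_full_speed=2_000_000,
                          levels=((0.6, 0.45), (0.8, 0.70), (1.0, 1.00))):
    """Pick the lowest (voltage, relative clock speed) pair whose slowed-down
    cycle budget still covers the estimated decoding workload.
    All constants here are illustrative placeholders."""
    coded = total_blocks - non_coded_blocks
    workload = coded * cycles_coded + non_coded_blocks * cycles_non_coded
    for voltage, speed in levels:                  # ordered lowest voltage first
        if workload <= frame_budget_at_full_speed * speed:
            return voltage
    return levels[-1][0]                           # fall back to full voltage
```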

Design of a motion estimator for MPEG-2 video encoder using array architecture (어레이 구조를 이용한 MPEG-2 비디오 인코더용 움직임 예측기 설계)

  • 심재술; 박재현; 주락현; 김영민
    • Journal of the Korean Institute of Telematics and Electronics C / v.34C no.7 / pp.28-37 / 1997
  • In this paper, we design a motion estimator for an MPEG-2 video coder using VHDL. Motion estimation is indispensable for encoding MPEG-2 video and accounts for over 50% of the computational power of video encoding; the designed estimator processes 37 frames per second and is therefore suitable for real-time processing. The number of data accesses required for the computation is less than half that of the previous design, which makes slower memory modules usable. We minimize the number of input pins by migrating input data through the PEs. The processor can compute the various motion estimation modes supported by the MPEG-2 video standard in a single calculation. Its independent control architecture also allows it to be used either as a stand-alone processor or as a sub-module in a multimedia chip.
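The core computation such an estimator maps onto its PE array is block matching: for each macroblock, find the displacement in the reference frame that minimizes the sum of absolute differences (SAD). A plain software reference of that search is sketched below; it illustrates the computation only and has no connection to the VHDL array design itself.

```python
import numpy as np

def full_search(cur_block, ref_frame, bx, by, search_range=16):
    """Exhaustive SAD block matching around (bx, by); returns the best motion vector."""
    n = cur_block.shape[0]
    h, w = ref_frame.shape
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + n > h or x + n > w:
                continue                              # candidate falls outside the frame
            sad = np.abs(ref_frame[y:y + n, x:x + n].astype(int)
                         - cur_block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad
```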

Zerotree Entropy Based Coding of Stereo Video Sequences

  • Thanapirom, S.; Fernando, W.A.C.; Edirisinghe, E.A.
    • Proceedings of the IEEK Conference / 2002.07b / pp.908-911 / 2002
  • Over the past 30 years, many efficient 2D video coding techniques have been presented and developed by many research centers for commercialization. However, direct application of these monocular compression schemes is not optimal for stereo video coding. In this paper, we present a new technique for coding stereo video sequences based on the Discrete Wavelet Transform (DWT). The proposed technique exploits Zerotree Entropy Coding (ZTE), which makes use of the wavelet block concept, to achieve low bit rate stereo video coding. One of the two image streams, called the main stream, is independently coded by a modified MPEG-4 encoder, while the other, called the auxiliary stream, is coded by prediction from its corresponding image, its previous image, or its following image.
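ZTE rearranges the DWT coefficients so that all coefficients describing the same spatial region, across every subband and decomposition level, form one "wavelet block" that can be scanned much like a DCT block. The sketch below gathers such a block from a standard Mallat-layout decomposition; the layout assumptions and indexing are mine and serve only to illustrate the concept, not the paper's codec.

```python
import numpy as np

def wavelet_block(coeffs, i, j, levels=2):
    """Gather the coefficients of one wavelet block: the LL coefficient at (i, j)
    plus the co-located coefficients in every detail subband of every level.

    `coeffs` is an H x W array holding a Mallat-layout DWT (assumed layout)."""
    h, w = coeffs.shape
    block = [coeffs[i, j]]                        # the LL coefficient
    for level in range(levels, 0, -1):            # coarsest -> finest detail level
        sub_h, sub_w = h >> level, w >> level     # subband size at this level
        scale = levels - level                    # resolution factor relative to LL
        y0, x0 = i << scale, j << scale           # co-located position in the subband
        size = 1 << scale                         # coefficients per side at this level
        for oy, ox in ((0, 1), (1, 0), (1, 1)):   # the three detail subbands
            ys, xs = oy * sub_h + y0, ox * sub_w + x0
            block.append(coeffs[ys:ys + size, xs:xs + size])
    return block
```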

Integrated Multimedia Application Format for Active Video Browsing and Retrieval (효율적인 비디오 브라우징 및 검색을 위한 통합 멀티미디어 응용 형식)

  • Cho, Jun-Ho; Jin, Sung-Ho; Yang, Seung-Ji; Ro, Yong-Man
    • Proceedings of the Korea Information Processing Society Conference / 2005.11a / pp.155-158 / 2005
  • In this paper, we propose an integrated media structure for efficient retrieval and use of video content, namely a video MAF, based on MPEG's Multimedia Application Format (MAF) standard. The proposed video MAF is based on the ISO media file format and simultaneously contains a single visual stream, multiple audio streams for multi-audio support, metadata carrying content-based information, and a representative image of the video content. To verify the usefulness of the proposed file format, we designed and implemented an encoder and a decoder that can generate and parse the video MAF, and confirmed that efficient retrieval using the metadata embedded in the integrated media and multi-audio support using the multi-track audio streams are both possible. We also confirmed that the embedded representative image is effectively used for browsing the video content.
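The container described above bundles one visual track, several audio tracks, content-based metadata, and a representative image into a single ISO-media-based file. The sketch below models that layout as a plain data structure; the field names and codec identifiers are illustrative assumptions, not the actual MAF box structure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Track:
    kind: str            # "video" or "audio"
    codec: str           # illustrative four-character codes
    language: str = "und"

@dataclass
class VideoMAF:
    """Schematic view of the proposed video MAF container (illustration only)."""
    visual_track: Track                                       # single visual stream
    audio_tracks: List[Track] = field(default_factory=list)   # multi-audio support
    metadata_xml: str = ""                                    # content-based metadata
    thumbnail_jpeg: bytes = b""                               # representative image for browsing

maf = VideoMAF(
    visual_track=Track("video", "avc1"),
    audio_tracks=[Track("audio", "mp4a", "kor"), Track("audio", "mp4a", "eng")],
    metadata_xml="<Mpeg7>...</Mpeg7>",
)
```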

Similarity-Based Patch Packing Method for Efficient Plenoptic Video Coding in TMIV

  • Kim, HyunHo; Kim, Yong-Hwan
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.250-252 / 2022
  • As immersive video content has started to emerge in the commercial market, research on it is required. To this end, efficient coding methods for immersive video are being studied in the MPEG-I Visual workgroup, which has released the Test Model for Immersive Video (TMIV). In the current TMIV, patches are packed into an atlas in order of patch size. However, this simple patch packing method can reduce coding efficiency from the 2D encoder's point of view. In this paper, we propose a patch packing method that packs patches into atlases using the similarity between patches, to improve the coding efficiency of 3DoF+ video. Experimental results show a 0.3% BD-rate saving on average over the TMIV anchor.
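Instead of packing patches strictly by size, the idea is to place similar patches next to each other so the 2D encoder sees smoother atlas content. A toy ordering step is sketched below; the greedy strategy and the similarity measure (mean luma difference) are my simplifications, not the measure used in the paper.

```python
import numpy as np

def order_patches_by_similarity(patches):
    """Greedy ordering: start from the largest patch, then repeatedly append the
    remaining patch whose mean luma is closest to the last placed one.
    `patches` is a list of 2-D numpy arrays; feed the result to the atlas packer."""
    remaining = sorted(patches, key=lambda p: p.size, reverse=True)
    ordered = [remaining.pop(0)]
    while remaining:
        last_mean = ordered[-1].mean()
        idx = min(range(len(remaining)),
                  key=lambda k: abs(remaining[k].mean() - last_mean))
        ordered.append(remaining.pop(idx))
    return ordered
```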

Latent Shifting and Compensation for Learned Video Compression (신경망 기반 비디오 압축을 위한 레이턴트 정보의 방향 이동 및 보상)

  • Kim, Yeongwoong; Kim, Donghyun; Jeong, Se Yoon; Choi, Jin Soo; Kim, Hui Yong
    • Journal of Broadcast Engineering / v.27 no.1 / pp.31-43 / 2022
  • Traditional video compression has so far developed around hybrid compression methods based on motion prediction, residual coding, and quantization. With the rapid development of artificial neural network technology in recent years, research on neural-network-based image and video compression is also progressing rapidly and is showing competitiveness against traditional video compression codecs. In this paper, a new method capable of improving the performance of such a neural-network-based video compression model is presented. Basically, we keep the rate-distortion optimization with the auto-encoder and entropy model adopted by existing learned video compression models, shift those components of the latent information that are difficult for the entropy model to estimate before the compressed latent representation is transmitted from the encoder to the decoder, and finally compensate for the distortion of the lost information. In this way, the existing neural-network-based video compression framework MFVC (Motion Free Video Compression) is improved: the BDBR (Bjøntegaard Delta-Rate) computed against H.264 shows nearly twice the bit saving (-27%) of MFVC (-14%). The proposed method has the advantage of being widely applicable not only to MFVC but also to other neural-network-based image or video compression models that use latent information and an entropy model.
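The core operation is an encoder-side shift of latent components that the entropy model would code expensively, paired with decoder-side compensation of the resulting distortion. The schematic pair of functions below illustrates that flow only; the selection rule, the shift value, and the compensation module are placeholders, not the paper's trained networks.

```python
import torch

def shift_hard_latents(latent, likelihood, shift_value=0.0, threshold=1e-3):
    """Encoder side: replace latent elements the entropy model rates as very
    unlikely (and therefore expensive to code) with a cheap fixed value, and
    remember where that was done (illustrative selection rule only)."""
    mask = likelihood < threshold
    shifted = torch.where(mask, torch.full_like(latent, shift_value), latent)
    return shifted, mask

def compensate(decoded, mask, compensation_net):
    """Decoder side: a small network (placeholder callable) predicts a correction
    for the positions whose information was lost by the shift.  The sketch
    assumes the mask, or an estimate of it, is available at the decoder."""
    correction = compensation_net(torch.cat([decoded, mask.float()], dim=1))
    return decoded + mask.float() * correction
```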

Fast Mode Decision For Depth Video Coding Based On Depth Segmentation

  • Wang, Yequn; Peng, Zongju; Jiang, Gangyi; Yu, Mei; Shao, Feng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.4 / pp.1128-1139 / 2012
  • With the development of three-dimensional displays and related technologies, depth video coding has become a new topic and has attracted great attention from industry and research institutes. Because (1) the depth video is not a sequence of images for final viewing by end users but an aid to rendering, and (2) depth video is simpler than the corresponding color video, a fast algorithm for depth video coding is both necessary and feasible to reduce the computational burden of the encoder. This paper proposes a fast mode decision algorithm for depth video coding based on depth segmentation. Firstly, based on depth perception, the depth video is segmented into three regions: edge, foreground, and background. Then, a different set of mode candidates is searched in each region to decide the encoding macroblock mode. Finally, the encoding time, bit rate, and virtual-view video quality of the proposed algorithm are tested. Experimental results show that the proposed algorithm saves between 82.49% and 93.21% of the encoding time with negligible degradation of the rendered virtual-view quality and negligible bit rate increase.
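The speed-up comes from classifying each depth macroblock as edge, foreground, or background and checking only a reduced mode set per class. A compact sketch of that flow follows; the thresholds and the per-region mode lists are invented for illustration and are not the paper's actual candidate sets.

```python
import numpy as np

# Candidate modes per region; only edge blocks get a full set (illustrative lists).
MODE_CANDIDATES = {
    "edge":       ["INTRA", "16x16", "16x8", "8x16", "8x8"],
    "foreground": ["SKIP", "16x16", "16x8", "8x16"],
    "background": ["SKIP", "16x16"],
}

def classify_macroblock(depth_mb, edge_thr=20, fg_thr=128):
    """Label a depth macroblock (2-D array) as edge / foreground / background."""
    if depth_mb.max() - depth_mb.min() > edge_thr:    # strong depth discontinuity
        return "edge"
    return "foreground" if depth_mb.mean() > fg_thr else "background"

def modes_to_search(depth_mb):
    """Return the reduced mode candidate list for this macroblock."""
    return MODE_CANDIDATES[classify_macroblock(depth_mb)]
```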

A Study for The Parallel Processing in The Polyphase Encoder (Polyphase 인코더의 병렬 처리에 대한 연구)

  • Cho, Dong-Sik; Ra, Sung-Woong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.5 / pp.199-205 / 2010
  • In this paper, we propose a polyphase encoder that consists of multiple internal encoders configured in parallel. Successive image frames are distributed to the separate encoders by an image divider and processed in parallel. In this way, the sampling rate required of each internal encoder is reduced by a factor equal to the number of parallel encoders. In our design, however, the PSNR is exactly the same as that achieved with a conventional single-phase encoder, which would require a much higher sampling rate.
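The distribution step amounts to sending frame k to internal encoder k mod N, so each encoder runs at 1/N of the input rate, and then re-interleaving the outputs. A minimal sketch of that image divider and merge is given below; `encode_fn` is a placeholder for one internal encoder, not the paper's hardware design.

```python
from concurrent.futures import ThreadPoolExecutor

def polyphase_encode(frames, encode_fn, n_encoders=4):
    """Round-robin 'image divider': frame k goes to internal encoder k % N.
    Each phase is encoded independently; outputs are re-interleaved in order."""
    phases = [frames[i::n_encoders] for i in range(n_encoders)]
    with ThreadPoolExecutor(max_workers=n_encoders) as pool:
        encoded_phases = list(pool.map(lambda ph: [encode_fn(f) for f in ph], phases))
    # interleave the per-phase outputs back into the original frame order
    return [encoded_phases[k % n_encoders][k // n_encoders]
            for k in range(len(frames))]
```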