• Title/Summary/Keyword: Entropy Coding

175 search results

A Study on the Hardware Design of High-Throughput HEVC CABAC Binary Arithmetic Encoder (높은 처리량을 갖는 HEVC CABAC 이진 산술 부호화기의 하드웨어 설계에 관한 연구)

  • Jo, Hyun-gu;Ryoo, Kwang-ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.10a
    • /
    • pp.401-404
    • /
    • 2016
  • This paper proposes an entropy coding method for an efficient hardware architecture of the HEVC CABAC encoder. The binary arithmetic encoder has a data dependency at each step, which makes fast operation difficult. The proposed binary arithmetic encoder is designed as a four-stage pipeline to process input bins quickly. Depending on the bin, either the MPS or the LPS path is selected and binary arithmetic encoding is performed. The critical path caused by repeated operations is reduced by using a LUT, and the design relies on shift operations rather than memory, which decreases hardware size. The proposed CABAC binary arithmetic encoder is designed in Verilog-HDL and implemented in 65 nm technology; its gate count is 3.17k and its operating speed is 1.53 GHz.

  • PDF
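The MPS/LPS interval subdivision that the abstract describes can be sketched as a toy fixed-probability binary arithmetic coder. This is only an illustration of the principle: the actual HEVC CABAC engine uses adaptive context models, LUT-driven range updates, and renormalization, none of which appear here, and all function names are ours.

```python
from fractions import Fraction
import math

def bac_encode(bits, p_lps=Fraction(1, 4)):
    """Toy binary arithmetic encoder with a fixed LPS probability.
    Shrinks an exact rational interval: MPS (0) takes the lower part,
    LPS (1) the upper.  Returns (value, n) such that value / 2**n
    lies inside the final interval."""
    low, width = Fraction(0), Fraction(1)
    for b in bits:
        mps_width = width * (1 - p_lps)
        if b == 1:              # LPS: upper sub-interval
            low += mps_width
            width -= mps_width
        else:                   # MPS: lower sub-interval
            width = mps_width
    n = 0
    while True:                 # shortest dyadic fraction in the interval
        n += 1
        v = math.ceil(low * 2 ** n)
        if Fraction(v, 2 ** n) < low + width:
            return v, n

def bac_decode(v, n, count, p_lps=Fraction(1, 4)):
    """Recover `count` bits by re-tracing the interval subdivision."""
    x = Fraction(v, 2 ** n)
    low, width = Fraction(0), Fraction(1)
    bits = []
    for _ in range(count):
        mps_width = width * (1 - p_lps)
        if x >= low + mps_width:    # value fell in the LPS sub-interval
            bits.append(1)
            low += mps_width
            width -= mps_width
        else:
            bits.append(0)
            width = mps_width
    return bits
```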

Distributed Video Coding Based on Selective Block Encoding Using Feedback of Motion Information (움직임 정보의 피드백을 갖는 선택적 블록 부호화에 기초한 분산 비디오 부호화 기법)

  • Kim, Jin-Soo;Kim, Jae-Gon;Seo, Kwang-Deok;Lee, Myeong-Jin
    • Journal of Broadcast Engineering
    • /
    • v.15 no.5
    • /
    • pp.642-652
    • /
    • 2010
  • Recently, DVC (Distributed Video Coding) techniques have been attracting considerable interest as a way to achieve low-complexity encoding in various applications. However, because of their limited computational complexity, DVC algorithms perform worse than conventional international standard video coders, which use zig-zag scanning, run-length coding, entropy coding, and skipped macroblocks. In this paper, to overcome this performance limit of the DVC system, the distortion of every block is estimated when side information is generated at the decoder, and we propose a new selective block encoding scheme that provides the encoder with motion information for the highly distorted blocks and lets the sender encode the motion-compensated frame difference signal for them. Computer simulations show that the coding efficiency of the proposed scheme almost reaches that of conventional inter-frame coding.
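The block-selection step of the abstract, where the decoder flags highly distorted side-information blocks for feedback, can be sketched as follows. The SSD measure, the threshold, and all names are illustrative assumptions, not the paper's actual distortion estimate.

```python
def block_ssd(a, b, x0, y0, size):
    """Sum of squared differences over one size x size block."""
    return sum((a[y][x] - b[y][x]) ** 2
               for y in range(y0, y0 + size)
               for x in range(x0, x0 + size))

def select_feedback_blocks(side_info, reference, size=4, thresh=50):
    """Return coordinates of blocks whose estimated side-information
    distortion exceeds `thresh` -- the blocks whose motion information
    would be fed back to the encoder in a scheme like the paper's."""
    h, w = len(side_info), len(side_info[0])
    return [(x, y)
            for y in range(0, h, size)
            for x in range(0, w, size)
            if block_ssd(side_info, reference, x, y, size) > thresh]
```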

VLSI Architecture of High Performance Huffman Codec (고성능 허프만 코덱의 VLSI 구조)

  • Choi, Hyun-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.2
    • /
    • pp.439-446
    • /
    • 2011
  • In this paper, we propose and implement dedicated hardware for Huffman coding, an entropy coding method used to compress multimedia data in video coding. The proposed Huffman codec consists of a Huffman encoder and a decoder. The Huffman encoder converts symbols to Huffman codes using a look-up table. The variable-length Huffman codes are packed into a 32-bit data format in the data packing block and then output sequentially in units of a frame. The Huffman decoder converts the serial bitstream back to the original symbols without buffering, using an FSM (finite state machine) with a tree structure. The proposed hardware is programmable for both encoding and decoding, so it can operate with various Huffman codes. It was implemented on an Altera Cyclone III FPGA, using 3,725 LUTs at an operating frequency of 365 MHz.
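The two halves described in the abstract, a LUT-based encoder and a tree-walking decoder that consumes one bit at a time, can be sketched in software. The code table below is an arbitrary example, not the codes used in the paper, and the helper names are ours.

```python
# Encoder LUT: symbol -> (code, length), mirroring the table-driven
# encoder in the abstract.  An illustrative prefix-free code.
ENC_LUT = {'a': (0b0, 1), 'b': (0b10, 2), 'c': (0b110, 3), 'd': (0b111, 3)}

def huff_encode(symbols):
    bits = []
    for s in symbols:
        code, length = ENC_LUT[s]
        bits.extend((code >> (length - 1 - i)) & 1 for i in range(length))
    return bits

def build_tree(lut):
    """Turn the LUT into a binary tree; leaves hold symbols."""
    root = {}
    for sym, (code, length) in lut.items():
        node = root
        for i in range(length):
            bit = (code >> (length - 1 - i)) & 1
            node = node.setdefault(bit, {} if i < length - 1 else sym)
    return root

def huff_decode(bits):
    """Walk the tree one bit at a time, like the tree-structured FSM
    in the abstract: no input buffering, restart at the root per symbol."""
    tree = build_tree(ENC_LUT)
    node, out = tree, []
    for b in bits:
        node = node[b]
        if isinstance(node, str):   # reached a leaf: emit and restart
            out.append(node)
            node = tree
    return out
```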

On Improving Compression Ratio of JPEG Using AC-Coefficient Separation (교류 계수 분할 압축에 의한 JPEG 정지영상 압축 효율 향상 기법 연구)

  • Ahn, Young-Hoon;Shin, Hyun-Joon;Wee, Young-Cheul
    • Journal of the Korea Computer Graphics Society
    • /
    • v.16 no.1
    • /
    • pp.29-35
    • /
    • 2010
  • In this paper, we introduce a novel entropy coding method that improves on the JPEG image compression standard. JPEG is one of the most widely used image compression methods because of its high visual quality at a given compression ratio and, especially, its high efficiency. Based on the observation that the blocks of data fed to the entropy coder usually contain consecutive sequences of small-magnitude values such as 0, 1, and -1, we separate those sequences from the data and encode them with a method dedicated to those values. We further improve the compression ratio by exploiting the fact that this separation makes the blocks much shorter. Our experiments show that the proposed method can outperform the JPEG standard while preserving its visual characteristics.
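The separation step can be sketched as a lossless split of a coefficient sequence into a small-magnitude stream and a remainder stream, plus per-position flags so the streams can be interleaved again. In the paper each stream would then go to a coder tuned to it; this sketch shows only the split/merge, with hypothetical names.

```python
SMALL = (-1, 0, 1)

def separate(coeffs):
    """Split coefficients into the {-1, 0, 1} stream and the rest,
    with one flag per position recording which stream each value
    came from."""
    flags = [c in SMALL for c in coeffs]
    small = [c for c, f in zip(coeffs, flags) if f]
    rest = [c for c, f in zip(coeffs, flags) if not f]
    return flags, small, rest

def merge(flags, small, rest):
    """Inverse of separate(): re-interleave the two streams."""
    it_s, it_r = iter(small), iter(rest)
    return [next(it_s) if f else next(it_r) for f in flags]
```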

Geometry Coding of Three-dimensional Mesh Models Using a Joint Prediction (통합예측을 이용한 삼차원 메쉬의 기하정보 부호화 알고리듬)

  • 안정환;호요성
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.3
    • /
    • pp.185-193
    • /
    • 2003
  • The conventional parallelogram prediction uses only three previously traversed vertices in a single adjacent triangle; thus, the predicted vertex can be located at a biased position. Moreover, vertices on curved surfaces may not be predicted effectively, since each parallelogram is assumed to lie in the same plane. To improve the prediction performance, we use all the neighboring vertices that precede the current vertex. After ordering the vertices with a vertex layer traversal algorithm, we estimate the current vertex position from the previously coded vertex positions in the layer traversal order. The difference between the original and predicted vertex coordinates is encoded with a uniform quantizer and an entropy coder. The proposed scheme demonstrates improved coding efficiency for various VRML test data.
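The conventional baseline the abstract improves on, single-triangle parallelogram prediction, is a one-liner: given the triangle (v1, v2, v3) adjacent to the vertex being decoded, predict it as v1 + v2 - v3, the point completing the parallelogram opposite v3. The paper's joint prediction over all coded neighbors is not reproduced here.

```python
def parallelogram_predict(v1, v2, v3):
    """Predict the fourth vertex of the parallelogram spanned by the
    adjacent triangle (v1, v2, v3): v1 + v2 - v3, componentwise."""
    return tuple(a + b - c for a, b, c in zip(v1, v2, v3))
```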

Edge Adaptive Hierarchical Interpolation for Lossless and Progressive Image Transmission

  • Biadgie, Yenewondim;Wee, Young-Chul;Choi, Jung-Ju
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.11
    • /
    • pp.2068-2086
    • /
    • 2011
  • Based on the quincunx sub-sampling grid, the New Interleaved Hierarchical INTerpolation (NIHINT) method is recognized as a superior pyramid data structure for the lossless and progressive coding of natural images. In this paper, we propose a new image interpolation algorithm, Edge Adaptive Hierarchical INTerpolation (EAHINT), for a further reduction in the entropy of interpolation errors. We compute the local variance of the causal context to model the strength of a local edge around a target pixel and then apply three statistical decision rules to classify the local edge into a strong edge, a weak edge, or a medium edge. According to these local edge types, we apply an interpolation method to the target pixel using a one-directional interpolator for a strong edge, a multi-directional adaptive weighting interpolator for a medium edge, or a non-directional static weighting linear interpolator for a weak edge. Experimental results show that the proposed algorithm achieves a better compression bit rate than the NIHINT method for lossless image coding. It is shown that the compression bit rate is much better for images that are rich in directional edges and textures. Our algorithm also shows better rate-distortion performance and visual quality for progressive image transmission.
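The three-way edge classification described in the abstract, a variance test on the causal context followed by a choice of interpolator, can be sketched as below. The thresholds are illustrative stand-ins for the paper's statistical decision rules, and the function name is ours.

```python
def classify_edge(context, t_weak=4.0, t_strong=64.0):
    """Classify the local edge around a target pixel by the variance
    of its causal context; each class selects a different interpolator
    in an EAHINT-style scheme.  Returns 'weak', 'medium', or 'strong'."""
    mean = sum(context) / len(context)
    var = sum((x - mean) ** 2 for x in context) / len(context)
    if var < t_weak:
        return 'weak'      # non-directional static weighting interpolator
    if var < t_strong:
        return 'medium'    # multi-directional adaptive weighting
    return 'strong'        # one-directional interpolator
```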

Two-stage variable block-size multiresolution motion estimation in the wavelet transform domain (웨이브렛 변환영역에서의 2단계 가변 블록 다해상도 움직임 추정)

  • 김성만;이규원;정학진;박규태
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.7
    • /
    • pp.1487-1504
    • /
    • 1997
  • In this paper, a two-stage variable block-size multiresolution motion estimation algorithm is proposed for an interframe coding scheme in the wavelet decomposition. The proposed algorithm achieves an optimal bit allocation between the motion vectors and the prediction error, in the sense of minimizing the total bit rate. It consists of two stages of motion estimation, and the first stage can be separated and run on its own. The first stage introduces a new method that yields a lower bit rate for the displaced frame difference as well as a smooth motion field. The second stage introduces a technique that obtains more accurate motion vectors in detailed areas and reduces the number of motion vectors in uniform areas. The algorithm aims at minimizing the total bit rate, which is the sum of the rates of the motion vectors and the displaced frame difference. The optimal bit allocation is accomplished by reducing the number of motion vectors in uniform areas, based on a bottom-up construction of a quadtree, with an entropy criterion controlling the merge operation. Simulation results show that the algorithm lends itself well to wavelet-based image sequence coding and outperforms the conventional scheme by up to 0.28 bpp.

  • PDF
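The bottom-up quadtree merge in the second stage can be sketched as follows: whenever the four children of a 2x2 group carry (nearly) the same motion vector, they are replaced by one parent vector, trading motion-vector rate for residual rate. The simple equality criterion here stands in for the paper's entropy criterion, and all names are ours.

```python
def merge_quadtree(mv_blocks, max_diff=0):
    """Bottom-up quadtree merge of a 2^n x 2^n grid of (dx, dy) motion
    vectors.  Returns a dict {(level, row, col): mv} of surviving
    leaves, where level 0 is the finest block size."""
    n = len(mv_blocks)
    leaves = {(0, r, c): mv_blocks[r][c]
              for r in range(n) for c in range(n)}
    level, size = 0, n
    while size > 1:
        for r in range(0, size, 2):
            for c in range(0, size, 2):
                kids = [(level, r, c), (level, r, c + 1),
                        (level, r + 1, c), (level, r + 1, c + 1)]
                if all(k in leaves for k in kids):
                    mvs = [leaves[k] for k in kids]
                    # merge when all four child vectors (nearly) agree
                    if all(abs(m[0] - mvs[0][0]) <= max_diff and
                           abs(m[1] - mvs[0][1]) <= max_diff for m in mvs):
                        for k in kids:
                            del leaves[k]
                        leaves[(level + 1, r // 2, c // 2)] = mvs[0]
        level, size = level + 1, size // 2
    return leaves
```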

Dimmable Spatial Intensity Modulation for Visible-light Communication: Capacity Analysis and Practical Design

  • Kim, Byung Wook;Jung, Sung-Yoon
    • Current Optics and Photonics
    • /
    • v.2 no.6
    • /
    • pp.532-539
    • /
    • 2018
  • Multiple LED arrays can be utilized in visible-light communication (VLC) to improve communication efficiency, while maintaining smart illumination functionality through dimming control. This paper proposes a modulation scheme called "Spatial Intensity Modulation" (SIM), where the effective number of turned-on LEDs is employed for data modulation and dimming control in VLC systems. Unlike conventional pulse-amplitude modulation (PAM), symbol intensity levels are not determined by the amplitude levels of a VLC signal from each LED, but by counting the number of turned-on LEDs, each illuminating at a single amplitude level. Because the intensity of a SIM symbol and the target dimming level are determined solely in the spatial domain, the problems of conventional PAM-based VLC and related MIMO VLC schemes, such as unstable dimming control, non-uniform illumination, and the burden of channel prediction, can be solved. By varying the number and formation of turned-on LEDs around the target dimming level over time, the proposed SIM scheme guarantees homogeneous illumination over a target area. An analysis of the dimming capacity, which is the achievable communication rate under the target dimming level in VLC, is provided by deriving the turn-on probability that maximizes the entropy of the SIM-based VLC system. In addition, a practical design of the dimmable SIM scheme applying the multilevel inverse source coding (MISC) method is proposed. The simulation results under a range of parameters provide baseline data to verify the performance of the proposed dimmable SIM scheme and its applications in real systems.
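The core idea, encoding the symbol as the *count* of turned-on LEDs while varying *which* LEDs are on to homogenize illumination, can be sketched as below. The random pattern choice and all parameter names are our illustrative assumptions.

```python
import random

def sim_pattern(symbol, n_leds, rng=random.Random(0)):
    """Spatial intensity modulation: the symbol is the number of
    turned-on LEDs, each driven at a single amplitude.  A (here:
    random) choice of which LEDs are on spreads light over time."""
    assert 0 <= symbol <= n_leds
    on = set(rng.sample(range(n_leds), symbol))
    return [1 if i in on else 0 for i in range(n_leds)]

def sim_demodulate(pattern):
    """The receiver recovers the symbol by counting turned-on LEDs."""
    return sum(pattern)
```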

LOFAR/DEMON grams compression method for passive sonars (수동소나를 위한 LOFAR/DEMON 그램 압축 기법)

  • Ahn, Jae-Kyun;Cho, Hyeon-Deok;Shin, Donghoon;Kwon, Taekik;Kim, Gwang-Tae
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.1
    • /
    • pp.38-46
    • /
    • 2020
  • LOw Frequency Analysis Recording (LOFAR) and Demodulation of Envelope Modulation On Noise (DEMON) grams are bearing-time-frequency plots of underwater acoustic signals that visualize features for passive sonar. These grams are characterized by tonal components, for which conventional data coding methods are not suitable. In this work, a novel LOFAR/DEMON gram compression algorithm based on a binary map and prediction methods is proposed. We first generate a binary map, from which the prediction for each frequency bin is determined, and then divide a frame into several macro blocks. For each macro block, we apply intra and inter prediction modes and compute residuals. Then, we perform the prediction of available bins in the binary map and quantize the residuals for entropy coding. By transmitting the binary map and prediction modes, the decoder can reconstruct the grams using the same process. Simulation results show that the proposed algorithm provides significantly better compression performance on LOFAR and DEMON grams than conventional data coding methods.
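The map-guided inter prediction can be sketched for one gram frame: each frequency bin flagged active in the binary map is predicted from the previous frame, and only the residual is kept. Quantization, intra modes, and macro-block mode selection from the paper are omitted; names and the all-zero treatment of inactive bins are our assumptions.

```python
def encode_frame(frame, prev, active):
    """Inter-mode sketch: residual = frame - prev for active bins;
    inactive bins carry no tonal content and are dropped."""
    return [f - p if a else 0
            for f, p, a in zip(frame, prev, active)]

def decode_frame(residual, prev, active):
    """Rebuild the frame from the residual, the previous frame, and
    the transmitted binary map."""
    return [p + r if a else 0
            for r, p, a in zip(residual, prev, active)]
```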

Wavelet-Based Image Compression Using the Properties of Subbands (대역의 특성을 이용한 웨이블렛 기반 영상 압축 부호화)

  • 박성완;강의성;문동영;고성제
    • Journal of Broadcast Engineering
    • /
    • v.1 no.2
    • /
    • pp.118-132
    • /
    • 1996
  • This paper proposes a wavelet transform-based image compression method that uses the energy distribution of subbands. The proposed method involves two steps. First, we use a wavelet transform for the subband decomposition: the original image is decomposed into one low-resolution subimage and three high-frequency subimages containing horizontal, vertical, and diagonal directional edges. The wavelet transform is further applied to these high-frequency subimages, and the resulting subimages have different energy distributions corresponding to the different orientations of the high-pass filter. Second, for a higher compression ratio and computational efficiency, we discard some subimages with small energy. The remaining subimages are encoded using either DPCM or quantization followed by entropy coding. Experimental results show that the proposed coding scheme achieves a better peak signal-to-noise ratio (PSNR) and a higher compression ratio than a conventional wavelet-based image coding method using straightforward vector quantization.

  • PDF
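The energy-based selection in the second step can be sketched as follows: compute each subband's energy and discard those that fall below a fraction of the strongest one. The ratio threshold is our illustrative choice; in the paper, the surviving subbands would then be DPCM- or quantizer-coded and entropy coded.

```python
def compress_subbands(subbands, keep_ratio=0.1):
    """Keep only subbands whose energy is at least `keep_ratio` times
    the maximum subband energy.  `subbands` maps a label (e.g. 'LL',
    'HH') to a flat list of coefficients."""
    energy = {k: sum(c * c for c in v) for k, v in subbands.items()}
    cutoff = keep_ratio * max(energy.values())
    return {k: v for k, v in subbands.items() if energy[k] >= cutoff}
```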