• Title/Summary/Keyword: Entropy Coding


Hardware Implementation of EBCOT TIER-1 for JPEG2000 Encoder (JPEG2000 Encoder를 위한 EBCOT Tier-1의 하드웨어 구현)

  • Lee, Sung-Mok;Jang, Won-Woo;Cho, Sung-Dae;Kang, Bong-Soon
    • Journal of the Institute of Convergence Signal Processing, v.11 no.2, pp.125-131, 2010
  • This paper presents a hardware implementation of EBCOT Tier-1 for a JPEG2000 encoder. JPEG2000 is a new still-image compression standard designed to overcome the artifacts of JPEG; it is based on the DWT (Discrete Wavelet Transform) and EBCOT entropy coding. EBCOT (Embedded Block Coding with Optimized Truncation) is the core compression stage of JPEG2000. However, EBCOT is a bottleneck because its operations are bit-level and occupy about half of the total JPEG2000 compression time. This paper therefore presents a modified context-extraction method that improves EBCOT's computational efficiency, together with an implementation of the MQ-coder as the arithmetic coder. The proposed system was implemented in Verilog-HDL with the TSMC 0.25um ASIC library; it requires 30,511 gates and meets a 50MHz operating condition.
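
The context extraction the abstract refers to classifies each coefficient by the significance of its eight neighbors. A minimal software sketch of that neighborhood scan follows; the function name and the reduction to plain neighbor counts are my simplifications, not the authors' design or the full JPEG2000 context tables:

```python
def neighbor_significance(sig, r, c):
    """Count significant horizontal, vertical, and diagonal neighbors
    of coefficient (r, c) in significance map `sig` (list of 0/1 rows)."""
    rows, cols = len(sig), len(sig[0])

    def at(i, j):
        # out-of-bounds neighbors count as insignificant
        return sig[i][j] if 0 <= i < rows and 0 <= j < cols else 0

    h = at(r, c - 1) + at(r, c + 1)
    v = at(r - 1, c) + at(r + 1, c)
    d = (at(r - 1, c - 1) + at(r - 1, c + 1) +
         at(r + 1, c - 1) + at(r + 1, c + 1))
    return h, v, d

# Example: centre pixel with one horizontal and one diagonal significant neighbor
sig = [[0, 0, 1],
       [1, 0, 0],
       [0, 0, 0]]
print(neighbor_significance(sig, 1, 1))  # (1, 0, 1)
```

In the real coder the (h, v, d) triple indexes one of a small set of contexts that drive the MQ-coder's probability estimate.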

Efficient Pipeline Architecture of CABAC in H.264/AVC (H.264/AVC의 효율적인 파이프라인 구조를 적용한 CABAC 하드웨어 설계)

  • Choi, Jin-Ha;Oh, Myung-Seok;Kim, Jae-Seok
    • Journal of the Institute of Electronics Engineers of Korea SD, v.45 no.7, pp.61-68, 2008
  • In this paper, we propose an efficient hardware architecture and algorithm to increase the encoding rate of CABAC (Context-Adaptive Binary Arithmetic Coding), one of the entropy coding methods of the latest video compression standard, H.264/AVC (Advanced Video Coding), and implement it in hardware. CABAC typically provides up to 15% better compression than CAVLC, but its computational complexity is significantly higher because of complicated data dependencies in the encoding process. Various architectures have been proposed to reduce the amount of computation, but they still suffer latency on account of these dependencies. The proposed architecture uses two techniques to achieve an efficient pipeline. The first is a quick calculation of the 7th and 8th bits used in the probability calculation, the first step of binary arithmetic coding. The second is a pipeline shortened by one stage when the encoded symbol is an MPS (most probable symbol). With these two techniques, the processing time is reduced by about 27-29% compared with previous architectures. The design, written in a hardware description language, totals 19K logic gates in a 0.18um standard cell library.
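
The MPS/LPS interval subdivision at the heart of CABAC can be illustrated with a deliberately simplified binary arithmetic coder. This sketch is mine: it uses a fixed MPS probability and floating-point intervals, whereas real CABAC uses adaptive per-context probabilities, integer range registers, and renormalization:

```python
def encode_bits(bits, p_mps=0.8, mps=1):
    """Arithmetic-encode a bit sequence with a fixed MPS probability.
    Returns one number inside the final interval."""
    low, high = 0.0, 1.0
    for b in bits:
        split = low + (high - low) * p_mps   # MPS takes the wider sub-interval
        if b == mps:
            high = split
        else:
            low = split
    return (low + high) / 2

def decode_bits(code, n, p_mps=0.8, mps=1):
    """Recover n bits by re-tracing the interval subdivision."""
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        split = low + (high - low) * p_mps
        if code < split:
            out.append(mps)
            high = split
        else:
            out.append(1 - mps)
            low = split
    return out

data = [1, 1, 0, 1, 1, 1, 0, 1]
assert decode_bits(encode_bits(data), len(data)) == data
```

Because the MPS branch only moves one interval bound, it needs less work per symbol than the LPS branch; that asymmetry is what the paper's shortened MPS pipeline stage exploits.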

CODING THEOREMS ON A GENERALIZED INFORMATION MEASURES.

  • Baig, M.A.K.;Dar, Rayees Ahmad
    • Journal of the Korean Society for Industrial and Applied Mathematics, v.11 no.2, pp.3-8, 2007
  • In this paper, a generalized parametric mean length $L(P^{\nu},\;R)$ is defined, and bounds for $L(P^{\nu},\;R)$ are obtained in terms of the generalized R-norm information measure.
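
The abstract does not reproduce the measure itself. For orientation, the R-norm information measure commonly used in this line of work is the Boekee-Van der Lubbe form; whether the paper uses exactly this parameterization is an assumption:

```latex
H_R(P) \;=\; \frac{R}{R-1}\left[\,1 - \Bigl(\sum_{i=1}^{n} p_i^{\,R}\Bigr)^{1/R}\right],
\qquad R > 0,\; R \neq 1 .
```

Coding theorems of this type bound the mean codeword length below and above in terms of $H_R(P)$, in analogy with Shannon's noiseless coding theorem, with $H_R(P)$ recovering the Shannon entropy as $R \to 1$.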


A Preprocessing Algorithm for Efficient Lossless Compression of Gray Scale Images

  • Kim, Sun-Ja;Hwang, Doh-Yeun;Yoo, Gi-Hyoung;You, Kang-Soo;Kwak, Hoon-Sung
    • Institute of Control, Robotics and Systems: Conference Proceedings, 2005.06a, pp.2485-2489, 2005
  • This paper introduces a new preprocessing scheme that replaces the original data of gray-scale images with particular ordered data so that lossless compression performance can be improved. As a preprocessing technique for maximizing entropy-encoder performance, the proposed method converts the input image into a more compressible form. Before encoding the input image stream, the preprocessor counts co-occurrence frequencies of neighboring pixel pairs. It then replaces each pair of adjacent gray values with ordered numbers based on the observed co-occurrence frequencies. When the reordered image is compressed with an entropy encoder, a higher compression rate can be expected because the statistical features of the input image are enhanced. We show that the lossless compression rate increases by up to 37.85% when comparing preprocessed against non-preprocessed image data compressed with entropy encoders such as Huffman and arithmetic coders.
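
The pair-reordering idea can be sketched in one dimension: count how often each gray value follows each other value, then replace every pixel by the frequency rank of its value given its predecessor, so frequent transitions map to small numbers and the value distribution becomes sharply skewed for the entropy coder. This is my illustrative simplification; the paper works on 2-D pixel pairs and the function names here are invented:

```python
from collections import Counter, defaultdict

def rank_transform(seq):
    """Replace each value by its co-occurrence rank given the previous value."""
    pairs = Counter(zip(seq, seq[1:]))
    successors = defaultdict(list)
    for (a, b), n in pairs.items():
        successors[a].append((n, b))
    rank = {}
    for a, lst in successors.items():
        # most frequent successor gets rank 0; ties broken by value
        for r, (n, b) in enumerate(sorted(lst, key=lambda t: (-t[0], t[1]))):
            rank[(a, b)] = r
    return [seq[0]] + [rank[(a, b)] for a, b in zip(seq, seq[1:])]

print(rank_transform([5, 7, 5, 7, 5, 7, 9]))  # [5, 0, 0, 0, 0, 0, 1]
```

Note that a decoder needs the rank table as side information; reducing that side information is exactly the concern of the follow-up preprocessing paper listed below in these results.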


Lossless Coding of Audio Spectral Coefficients Using Selective Bit-Plane Coding (선택적 비트 플레인 부호화를 이용한 오디오 주파수 계수의 무손실 부호화 기술)

  • Yoo, Seung-Kwan;Park, Ho-Chong;Oh, Seoung-Jun;Ahn, Chang-Beom;Sim, Dong-Gyu;Beak, Seung-Kwon;Kang, Kyoung-Ok
    • The Journal of the Acoustical Society of Korea, v.27 no.1, pp.18-25, 2008
  • In this paper, a new lossless coding method for the spectral coefficients of an audio codec is proposed. The conventional lossless coder uses Huffman coding that exploits the statistical characteristics of the spectral coefficients, but its simple structure limits coding efficiency. To overcome this limitation, a new lossless coding scheme with better performance is proposed, consisting of a bit-plane transform and run-length coding. In the proposed scheme, the spectral coefficients are first transformed bit-plane by bit-plane into a 1-D bit-stream with stronger correlation, which is then run-length coded and finally Huffman coded. In addition, coding performance is further improved by dividing the full frequency range into three groups and applying the proposed bit-plane coding selectively to each group. The performance of the proposed scheme, measured as the theoretical number of bits based on entropy, shows up to 6% improvement over the conventional lossless coder used in the AAC audio codec.
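
The bit-plane-then-run-length pipeline can be sketched as follows. This is a minimal illustration of the general technique, not the paper's exact transform (which also involves selective per-group application and sign handling):

```python
def bit_planes(vals, nbits):
    """Split non-negative ints into MSB-first bit-planes (flat 0/1 lists)."""
    return [[(v >> b) & 1 for v in vals] for b in range(nbits - 1, -1, -1)]

def run_length(bits):
    """Encode a 0/1 sequence as (bit, run length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [tuple(r) for r in runs]

coeffs = [0, 1, 0, 3, 2, 2]
planes = bit_planes(coeffs, 2)   # [[0, 0, 0, 1, 1, 1], [0, 1, 0, 1, 0, 0]]
runs = run_length(planes[0])     # [(0, 3), (1, 3)]
```

Upper bit-planes of correlated coefficients contain long uniform runs, so run-length coding followed by Huffman coding of the run symbols spends very few bits on them.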

A Robust Sequential Preprocessing Scheme for Efficient Lossless Image Compression (영상의 효율적인 무손실 압축을 위한 강인한 순차적 전처리 기법)

  • Kim, Nam-Yee;You, Kang-Soo;Kwak, Hoon-Sung
    • Journal of Internet Computing and Services, v.10 no.1, pp.75-82, 2009
  • In this paper, we propose a robust preprocessing scheme for entropy coding of gray-level images. The goal is to reduce the additional side information needed when the bit stream is transmitted. The proposed scheme uses a preprocessing method based on co-occurrence counts of gray levels in neighboring pixels: gray levels are substituted by their rank numbers without any additional side information. Computer simulations verify that the proposed scheme reduces the compressed bit rate by up to 44.1% and 37.5% compared with plain entropy coding and the conventional preprocessing scheme, respectively. The scheme can therefore be applied in areas that require lossless compression and data compaction.


A Preprocessing Technique of Gray Scale Image for Efficient Entropy Coding (효율적인 엔트로피부호화를 위한 명암도 등급 이미지의 전처리 기법)

  • Kim, Sun-Ja;Han, Deuk-Su;Park, Jung-Man;You, Kang-Soo;Lee, Jong-Ha;Kwak, Hoon-Sung
    • Proceedings of the Korea Information Processing Society Conference, 2005.05a, pp.805-808, 2005
  • Entropy coding compresses general data such as text efficiently, but its performance degrades somewhat on image data. This paper introduces an efficient preprocessing technique to remedy this weakness. Before losslessly compressing an input gray-scale image, the proposed technique examines the occurrence frequencies of adjacent gray values within the image. Each pair of adjacent gray values is then replaced with ordered numbers based on those frequencies, and the result is finally compressed by entropy coding. These steps strengthen the statistical features of the image data, so the lossless compression performance of entropy coding improves. Experiments compressing 256-level gray-scale images with arithmetic coding and Huffman coding confirmed that the proposed preprocessing reduces the post-compression bit rate by up to 37.49%.


An Efficient H.264/AVC Entropy Decoder Design (효율적인 H.264/AVC 엔트로피 복호기 설계)

  • Moon, Jeon-Hak;Lee, Seong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SD, v.44 no.12, pp.102-107, 2007
  • This paper proposes an H.264/AVC entropy decoder that requires neither an embedded processor nor a memory fabrication process. Many H.264/AVC entropy decoders require a ROM or RAM fabrication process, which is difficult to implement in a general digital-logic fabrication process. Furthermore, many designs require embedded processors for bitstream manipulation, which increases area and power consumption. This paper proposes a hardwired H.264/AVC entropy decoder without an embedded processor, which improves data processing speed and reduces power consumption. Its CAVLC decoder optimizes the lookup table and internal buffer without embedded memory, which reduces hardware size and allows implementation in a general digital-logic process without ROM or RAM fabrication. The designed entropy decoder was embedded in an H.264/AVC video decoder and verified to operate correctly in the system. Synthesized in a TSMC 90nm process, its maximum operating frequency is 125MHz. It supports the QCIF, CIF, and QVGA image formats; with slight modification of the nC register and other blocks, it also supports VGA.

Design of video encoder using Multi-dimensional DCT (다차원 DCT를 이용한 비디오 부호화기 설계)

  • Jeon, S.Y.;Choi, W.J.;Oh, S.J.;Jeong, S.Y.;Choi, J.S.;Moon, K.A.;Hong, J.W.;Ahn, C.B.
    • Journal of Broadcast Engineering, v.13 no.5, pp.732-743, 2008
  • In H.264/AVC, a 4$\times$4 block transform is used for intra and inter prediction instead of an 8$\times$8 block transform. With small-block coding, H.264/AVC obtains high temporal prediction efficiency but is limited in exploiting spatial redundancy. Motivated by this, we propose a multi-dimensional transform that achieves both accurate temporal prediction and effective use of spatial redundancy. Preliminary experiments show that the proposed multi-dimensional transform achieves higher energy compaction than the 2-D DCT used in H.264. We designed an integer-based transform and quantization coder for the multi-dimensional coder, along with several supporting methods: cube forming, scan order, mode decision, and parameter updating. The Context-based Adaptive Variable-Length Coding (CAVLC) of H.264 was employed as the entropy coder. Simulation results show that the multi-dimensional codec performs similarly to H.264 at lower bit rates, although the rate-distortion curves of the multi-dimensional DCT, measured by entropy and by the number of non-zero coefficients, are markedly better than those of H.264/AVC. This implies that an entropy coder optimized to the statistics of multi-dimensional DCT coefficients, together with rate-distortion optimization, is needed to take full advantage of the multi-dimensional DCT. Many issues remain as future work to improve coding efficiency over H.264/AVC.
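
A multi-dimensional DCT is separable: a 3-D transform of a pixel cube is three passes of the 1-D DCT, one along each axis. The sketch below is a generic unnormalized DCT-II on a nested-list cube, purely to illustrate separability and energy compaction; the paper's integer transform and cube-forming details are not reproduced:

```python
import math

def dct1d(x):
    """Unnormalized 1-D DCT-II."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def dct3d(cube):
    """Separable 3-D DCT: apply dct1d along each axis of a nested-list cube."""
    # axis 2: innermost rows
    c = [[dct1d(row) for row in plane] for plane in cube]
    # axis 1: columns within each plane (transpose, transform, transpose back)
    c = [[list(r) for r in zip(*[dct1d(list(col)) for col in zip(*plane)])]
         for plane in c]
    # axis 0: pencils across planes
    K, I, J = len(c), len(c[0]), len(c[0][0])
    out = [[[0.0] * J for _ in range(I)] for _ in range(K)]
    for i in range(I):
        for j in range(J):
            pencil = dct1d([c[k][i][j] for k in range(K)])
            for k in range(K):
                out[k][i][j] = pencil[k]
    return out

# A temporally and spatially constant 4x4x4 cube compacts all
# its energy into the single DC coefficient.
cube = [[[1.0] * 4 for _ in range(4)] for _ in range(4)]
out = dct3d(cube)  # out[0][0][0] == 64, every other coefficient ~0
```

The more slowly the cube varies along an axis, the more that axis's pass concentrates energy into low-index coefficients, which is the energy-compaction advantage the abstract reports over the 2-D case.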

Optimal sensor placement under uncertainties using a nondirective movement glowworm swarm optimization algorithm

  • Zhou, Guang-Dong;Yi, Ting-Hua;Zhang, Huan;Li, Hong-Nan
    • Smart Structures and Systems, v.16 no.2, pp.243-262, 2015
  • Optimal sensor placement (OSP) is a critical issue in construction and implementation of a sophisticated structural health monitoring (SHM) system. The uncertainties in the identified structural parameters based on the measured data may dramatically reduce the reliability of the condition evaluation results. In this paper, the information entropy, which provides an uncertainty metric for the identified structural parameters, is adopted as the performance measure for a sensor configuration, and the OSP problem is formulated as the multi-objective optimization problem of extracting the Pareto optimal sensor configurations that simultaneously minimize the appropriately defined information entropy indices. The nondirective movement glowworm swarm optimization (NMGSO) algorithm (based on the basic glowworm swarm optimization (GSO) algorithm) is proposed for identifying the effective Pareto optimal sensor configurations. The one-dimensional binary coding system is introduced to code the glowworms instead of the real vector coding method. The Hamming distance is employed to describe the divergence of different glowworms. The luciferin level of the glowworm is defined as a function of the rank value (RV) and the crowding distance (CD), which are deduced by non-dominated sorting. In addition, nondirective movement is developed to relocate the glowworms. A numerical simulation of a long-span suspension bridge is performed to demonstrate the effectiveness of the NMGSO algorithm. The results indicate that the NMGSO algorithm is capable of capturing the Pareto optimal sensor configurations with high accuracy and efficiency.
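
The rank value (RV) mentioned in the abstract comes from non-dominated sorting of candidate sensor configurations over the competing entropy indices. A minimal sketch of that step for a minimization problem follows; the luciferin mapping and the crowding-distance term are omitted, and the function names are mine:

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def rank_values(points):
    """Assign each point its non-dominated front index (0 = Pareto front)."""
    ranks = {}
    remaining = set(range(len(points)))
    front_idx = 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = front_idx
        remaining -= front
        front_idx += 1
    return [ranks[i] for i in range(len(points))]

# Two entropy-index objectives; the third configuration is dominated by the first.
print(rank_values([(1.0, 2.0), (2.0, 1.0), (3.0, 3.0)]))  # [0, 0, 1]
```

Configurations on earlier fronts receive higher luciferin, steering the glowworm swarm toward the Pareto-optimal sensor layouts.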