• Title/Summary/Keyword: entropy coding

Search results: 175

Soft-Decision Based Quantization of the Multimedia Signal Considering the Outliers in Rate-Allocation and Distortion (이상 비트율 할당과 신호왜곡 문제점을 고려한 멀티미디어 신호의 연판정 양자화 방법)

  • Lim, Jong-Wook;Noh, Myung-Hoon;Kim, Moo-Young
    • The Journal of the Acoustical Society of Korea / v.29 no.4 / pp.286-293 / 2010
  • There are two major conventional quantization algorithms: resolution-constrained quantization (RCQ) and entropy-constrained quantization (ECQ). Although RCQ works well for a fixed transmission rate, it produces distortion outliers because the cell sizes differ. Compared with RCQ, ECQ constrains the cell size but produces rate outliers. We propose cell-size constrained vector quantization (CCVQ), which improves the generalized Lloyd algorithm (GLA). The CCVQ algorithm makes a soft decision between RCQ and ECQ by using a flexible penalty measure that depends on the cell size. Although the proposed method slightly increases the overall mean distortion, it reduces the distortion outliers.
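
To make the soft-decision idea concrete, here is a minimal Python sketch of a GLA-style training loop whose assignment cost adds a penalty that grows with the (scalar) cell size; a zero weight reduces to plain RCQ/GLA behaviour, while a larger weight pushes toward the ECQ-like end. The penalty form, the spacing-based cell-size estimate, and the weight `lam` are illustrative assumptions, not the paper's exact measure.

```python
import numpy as np

def train_ccvq(data, num_cells=8, lam=0.0, iters=25, seed=0):
    """GLA training with a cell-size penalty; lam=0 reduces to plain RCQ/GLA."""
    rng = np.random.default_rng(seed)
    codebook = np.sort(rng.choice(data, size=num_cells, replace=False).astype(float))
    for _ in range(iters):
        # Approximate each cell's size by the spacing between neighbouring code
        # values (scalar case) and penalize assignment to large cells, which are
        # the source of distortion outliers in plain RCQ.
        edges = np.concatenate(([codebook[0]], codebook, [codebook[-1]]))
        cell_size = (edges[2:] - edges[:-2]) / 2.0
        cost = (data[:, None] - codebook[None, :]) ** 2 + lam * cell_size[None, :]
        labels = np.argmin(cost, axis=1)
        # Centroid update, exactly as in the generalized Lloyd algorithm.
        for k in range(num_cells):
            members = data[labels == k]
            if members.size:
                codebook[k] = members.mean()
        codebook = np.sort(codebook)
    return codebook, labels

if __name__ == "__main__":
    samples = np.random.default_rng(1).laplace(size=5000)
    for lam in (0.0, 0.5, 5.0):   # soft decision: RCQ-like -> ECQ-like
        cb, _ = train_ccvq(samples, lam=lam)
        print(f"lam={lam}: std of cell spacings = {np.diff(cb).std():.3f}")
```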

Image Compression Using DCT Map FSVQ and Single - side Distribution Huffman Tree (DCT 맵 FSVQ와 단방향 분포 허프만 트리를 이용한 영상 압축)

  • Cho, Seong-Hwan
    • The Transactions of the Korea Information Processing Society / v.4 no.10 / pp.2615-2628 / 1997
  • In this paper, a new codebook design algorithm is proposed. It uses a DCT map based on the two-dimensional discrete cosine transform (2D DCT) and a finite state vector quantizer (FSVQ) designed for image transmission. The map is made by dividing the input image according to edge quantity, and then, guided by the map, the significant features of the training image are extracted using the 2D DCT. A master codebook of the FSVQ is generated by partitioning the training set with a tree-structured binary partition. The state codebook is constructed from the master codebook, and the index of the input image is then searched in the state codebook rather than the master codebook. Because index coding is an important part of high-speed digital transmission, fixed-length codes are converted to variable-length codes according to an entropy coding rule: Huffman coding assigns transmission codes to the codebook codes. This paper proposes a single-side growing Huffman tree to speed up the Huffman code generation process. Compared with the pairwise nearest neighbor (PNN) and classified VQ (CVQ) algorithms on the Einstein and Bridge images, the new algorithm shows better picture quality, with gains of 2.04 dB and 2.48 dB over PNN and 1.75 dB and 0.99 dB over CVQ, respectively.
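
As a concrete illustration of the entropy-coding stage described above, the following minimal Python sketch assigns variable-length transmission codes to codebook indices with ordinary Huffman coding; the paper's single-side growing tree is a faster construction of the same kind of prefix code and is not reproduced here. The example indices are illustrative.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code table {symbol: bit string} from an iterable of indices."""
    freq = Counter(symbols)
    # Heap entries: (weight, tiebreak, {symbol: partial code built leaf-upward}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2] if heap else {}

if __name__ == "__main__":
    indices = [3, 3, 3, 1, 1, 0, 2, 3, 1, 3]    # e.g. FSVQ state-codebook indices
    table = huffman_code(indices)
    print(table)                                 # variable-length code per index
    print("".join(table[i] for i in indices))    # entropy-coded bitstream
```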


Image Analysis Using Digital Radiographic Lumbar Spine of Patients with Osteoporosis (골다공증 환자의 Digital 방사선 요추 Image를 이용한 영상분석)

  • Park, Hyong-Hu;Lee, Jin-Soo
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.362-369 / 2014
  • This study aimed to propose an accurate diagnostic method for osteoporosis by realizing a computer-aided diagnosis system based on the statistical analysis of texture features in digital images of the lateral lumbar spine of patients with osteoporosis, thereby providing reliable supplementary information for early diagnosis. Digital images of the lateral lumbar spine of normal individuals and of patients with osteoporosis were used in the experiments, and the statistical texture features of the set ROI were expressed by six parameters. Among the six parameters, the highest and lowest recognition rates for osteoporosis, 95% and 80%, were obtained with average gray level and uniformity, respectively. All six parameters showed recognition rates of over 80% for osteoporosis: 82.5% for average contrast, 90% for smoothness, 87.5% for skewness, and 87.5% for entropy. Therefore, if a program based on these results is developed into a computer-aided diagnosis system for medical images, it could provide preliminary diagnostic data for automatic lesion detection and disease diagnosis, supply information for definite diagnosis, support diagnosis with limited equipment, and shorten the time needed to analyze medical images.
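
The six parameters listed above correspond to the standard first-order statistical texture descriptors; a minimal Python sketch of computing them from the gray-level histogram of an ROI follows. The normalization conventions are assumptions and may differ from the study's exact definitions.

```python
import numpy as np

def texture_features(roi, levels=256):
    """First-order statistical texture features from the gray-level histogram of an ROI."""
    hist, _ = np.histogram(roi, bins=levels, range=(0, levels))
    p = hist / hist.sum()                       # normalized histogram p(z_i)
    z = np.arange(levels, dtype=float)
    mean = np.sum(z * p)                        # average gray level
    var = np.sum((z - mean) ** 2 * p)
    contrast = np.sqrt(var)                     # average contrast (standard deviation)
    smoothness = 1 - 1 / (1 + var / (levels - 1) ** 2)
    skewness = np.sum((z - mean) ** 3 * p) / (var ** 1.5 + 1e-12)
    uniformity = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return dict(average_gray_level=mean, average_contrast=contrast,
                smoothness=smoothness, skewness=skewness,
                uniformity=uniformity, entropy=entropy)

if __name__ == "__main__":
    roi = np.random.default_rng(0).integers(0, 256, size=(64, 64))
    for name, value in texture_features(roi).items():
        print(f"{name:20s} {value:.4f}")
```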

An Effective Method to Treat The Boundary Pixels for Image Compression with DWT (DWT를 이용한 영상압축을 위한 경계화소의 효과적인 처리방법)

  • 서영호;김종현;김대경;유지상;김동욱
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.6A / pp.618-627 / 2002
  • In processing images with the two-dimensional discrete wavelet transform (2D-DWT), the method used to treat the pixels around the image boundary can affect both the image quality and the cost of hardware and software implementation. This paper proposes an effective method to treat the boundary pixels that is easy to implement in hardware and software without costly loss of image quality. The method processes the 2-D image as a 1-D array so that the 2-D DWT is performed on the image as a serial-sequential data structure (serial-sequential processing). To show the performance and ease of implementation of the proposed method, an image compression codec that compresses and reconstructs the image was implemented and tested. It included a log-scale quantizer, but the entropy coder was not implemented. At a compression ratio of 2:1 (excluding entropy coding), the experimental results showed that the proposed method achieved almost the same SNR (signal-to-noise ratio) as the periodic expansion (PE) method, 15.3% higher SNR than the symmetric expansion (SE) method, and 9.3% higher SNR than the zero-pixel padding expansion (ZPE) method. The PE method also needed 12.99% more memory space than the proposed method. Considering only the compression process, the SE and ZPE methods required additional operations compared with the proposed one. In the hardware implementation, the control circuit occupied 5.92% of the overall circuit for the proposed method, compared with 22%, 21.2%, and 11.9% for the SE, PE, and ZPE methods, respectively. Consequently, the proposed method is more effective for software and hardware implementation without any loss of image quality in typical image processing applications.
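
For reference, the three boundary-extension schemes the proposed method is compared against can be illustrated with a short Python sketch that extends one image row before wavelet filtering; the padding width and example values are illustrative only.

```python
import numpy as np

def extend(row, pad, mode):
    """Extend one image row before wavelet filtering."""
    if mode == "PE":    # periodic expansion: wrap around
        return np.pad(row, pad, mode="wrap")
    if mode == "SE":    # symmetric expansion: mirror at the boundary
        return np.pad(row, pad, mode="symmetric")
    if mode == "ZPE":   # zero-pixel padding expansion
        return np.pad(row, pad, mode="constant", constant_values=0)
    raise ValueError(mode)

if __name__ == "__main__":
    row = np.array([10, 20, 30, 40], dtype=float)
    for mode in ("PE", "SE", "ZPE"):
        print(mode, extend(row, 2, mode))
    # PE  [30. 40. 10. 20. 30. 40. 10. 20.]
    # SE  [20. 10. 10. 20. 30. 40. 40. 30.]
    # ZPE [ 0.  0. 10. 20. 30. 40.  0.  0.]
```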

An Optimization on the Psychoacoustic Model for MPEG-2 AAC Encoder (MPEG-2 AAC Encoder의 심리음향 모델 최적화)

  • Park, Jong-Tae;Moon, Kyu-Sung;Rhee, Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.38 no.2 / pp.33-41 / 2001
  • Compression is currently one of the most important technologies in multimedia. Audio files are rapidly propagated throughout the Internet; the most famous format is MP3 (MPEG-1 Layer 3), which achieves CD quality at 128 kbps, but its quality drops sharply below 64 kbps. MPEG-2 AAC (Advanced Audio Coding) is not compatible with MPEG-1, but it offers about 1.4 times higher compression than MP3 and supports up to 7.1 channels and a 96 kHz sampling rate. In this paper, we propose an algorithm that reduces the amount of AAC encoding computation and increases the processing speed by optimizing the psychoacoustic model, which accounts for an enormous amount of computation in the MPEG-2 AAC encoder. The optimized psychoacoustic model algorithm was implemented in C++. The experiments show that the psychoacoustic model performs a 2048-point FFT (fast Fourier transform) at a 44.1 kHz sampling rate to compute the SMR (signal-to-masking ratio), and each entropy value is fed to the subband filters to control the encoder blocks. The proposed psychoacoustic model operates at high speed because the unpredictability measure is optimized. In addition, when the unpredictability measure is transformed into a tonality index, processing speed is further increased by optimizing the tonality index in the high-frequency range.
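
As a rough illustration of the role the tonality index plays in masking-threshold computation, the sketch below estimates tonality from the spectral flatness of one windowed FFT frame (a Johnston-style estimate); the MPEG-2 AAC psychoacoustic model instead derives tonality from an unpredictability measure, so this is only an assumed stand-in, not the standard's method.

```python
import numpy as np

def tonality_index(frame, eps=1e-12):
    """Tonality from the spectral flatness of one windowed FFT frame (0=noise, 1=tone)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2 + eps
    sfm_db = 10 * np.log10(np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum))
    return min(sfm_db / -60.0, 1.0)

if __name__ == "__main__":
    fs, n = 44100, 2048
    t = np.arange(n) / fs
    tone = np.sin(2 * np.pi * 1000 * t)
    noise = np.random.default_rng(0).normal(size=n)
    print("tone :", round(tonality_index(tone), 3))    # close to 1 (tone-like)
    print("noise:", round(tonality_index(noise), 3))   # close to 0 (noise-like)
```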


Lossless Frame Memory Compression with Low Complexity based on Block-Buffer Structure for Efficient High Resolution Video Processing (고해상도 영상의 효과적인 처리를 위한 블록 버퍼 기반의 저 복잡도 무손실 프레임 메모리 압축 방법)

  • Kim, Jongho
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.11 / pp.20-25 / 2016
  • This study addresses a low-complexity lossless frame memory compression algorithm based on a block-buffer structure for efficient high-resolution video processing. It utilizes the block-based MHT (modified Hadamard transform) for spatial decorrelation and AGR (adaptive Golomb-Rice) coding as the entropy encoding stage to achieve lossless image compression with low complexity and efficient hardware implementation. The MHT contains only adders and 1-bit shift operators. Because AGR requires no additional memory space or memory access operations, it is effective for low-complexity implementation. Comprehensive experiments and computational complexity analysis demonstrate that the proposed algorithm achieves superior compression performance relative to existing methods and can be applied to hardware devices without image quality degradation and with negligible modification of the existing codec structure. Moreover, the proposed method requires no memory access operations, so it reduces hardware implementation costs and is useful for processing high-resolution video beyond Full HD.
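
The following minimal Python sketch shows Golomb-Rice coding of residuals, the kind of entropy stage named above; the parameter adaptation (choosing k from a running mean of magnitudes) is a common heuristic and an assumption, not necessarily the paper's AGR rule.

```python
def rice_encode(value, k):
    """Encode a non-negative integer as a unary quotient plus a k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def zigzag(x):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,2,... -> 0,1,2,3,4,..."""
    return (x << 1) if x >= 0 else -(x << 1) - 1

def agr_encode(residuals):
    bits, mean = [], 1.0
    for x in residuals:
        u = zigzag(x)
        k = max(int(mean).bit_length() - 1, 0)   # adapt k to recent magnitudes
        bits.append(rice_encode(u, k))
        mean = 0.9 * mean + 0.1 * u              # running average of coded values
    return "".join(bits)

if __name__ == "__main__":
    print(agr_encode([0, -1, 2, 0, 5, -3, 1, 0]))
```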

ASIP Design for Real-Time Processing of H.264 (실시간 H.264/AVC 처리를 위한 ASIP설계)

  • Kim, Jin-Soo;SunWoo, Myung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.5 / pp.12-19 / 2007
  • This paper presents an ASIP (application-specific instruction-set processor) for the implementation of H.264/AVC, called VSIP (Video Specific Instruction-set Processor). The proposed VSIP has novel instructions and optimized hardware architectures for specific functions such as intra prediction, the in-loop deblocking filter, and the integer transform. Moreover, VSIP has hardware accelerators for computation-intensive parts of video signal processing, such as inter prediction and entropy coding. The VSIP has a much smaller area and dramatically reduces the number of memory accesses compared with commercial DSP chips, which results in low power consumption. The proposed VSIP can efficiently perform real-time video processing and supports various profiles and standards.

A Balanced Binary Search Tree for Huffman Decoding (허프만 복호화를 위한 균형이진 검색 트리)

  • Kim Hyeran;Jung Yeojin;Yim Changhun;Lim Hyesook
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.5C / pp.382-390 / 2005
  • Huffman codes are widely used for image and video data transmission. With the increase of real-time data, many studies on effective decoding algorithms and architectures have been carried out. In this paper, we propose a balanced binary search tree for Huffman decoding and compare the performance of the proposed architecture with that of previous works. Based on a definition for comparing codewords of different lengths, the proposed architecture constructs a balanced binary tree that does not include empty internal nodes and is therefore very memory-efficient. Performance evaluation results using actual image data show that the proposed architecture requires a small number of table entries, and the decoding time is 1, 5, and 2.41 memory accesses at minimum, at maximum, and on average, respectively.
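
The decoding idea can be illustrated by left-aligning all codewords to the maximum code length and binary-searching the incoming bit window, which is the comparison principle a balanced-search-tree decoder relies on; the code table in the Python sketch below is illustrative, not taken from the paper.

```python
from bisect import bisect_right

def build_table(code):
    """code: {symbol: '0/1' string} -> sorted (left-aligned value, length, symbol) entries."""
    max_len = max(len(c) for c in code.values())
    table = sorted((int(c, 2) << (max_len - len(c)), len(c), s)
                   for s, c in code.items())
    return table, max_len

def decode(bits, code):
    table, max_len = build_table(code)
    starts = [entry[0] for entry in table]
    out, pos = [], 0
    while pos < len(bits):
        # The next codeword is the one whose left-aligned value is the largest
        # value not exceeding the next max_len bits of the stream.
        window = int(bits[pos:pos + max_len].ljust(max_len, "0"), 2)
        _, length, symbol = table[bisect_right(starts, window) - 1]
        out.append(symbol)
        pos += length
    return out

if __name__ == "__main__":
    code = {"a": "0", "b": "10", "c": "110", "d": "111"}
    bits = "".join(code[s] for s in "abacdb")
    print(decode(bits, code))   # ['a', 'b', 'a', 'c', 'd', 'b']
```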

ECG Data Compression Using Adaptive Fractal Interpolation (적응 프랙탈 보간을 이용한 심전도 데이터 압축)

  • 전영일;윤영로
    • Journal of Biomedical Engineering Research / v.17 no.1 / pp.121-128 / 1996
  • This paper presents an ECG data compression method referred to as the adaptive fractal interpolation (AFI) algorithm. In the previous piecewise fractal interpolation (PFI) algorithm, the range size is fixed, so the reconstruction error of the PFI algorithm is distributed nonuniformly over the original ECG signal. To address this problem, the AFI algorithm uses variable-size ranges. If the predetermined tolerance is not satisfied, the range is subdivided into two equal-size blocks. Large ranges are used to encode smooth waveforms for high compression efficiency, and smaller ranges are used to encode rapidly varying parts of the signal to preserve signal quality. The suggested algorithm was evaluated using the MIT/BIH arrhythmia database. The AFI algorithm was found to yield a lower reconstruction error for a given compression ratio than the PFI algorithm. In applications where a PRD of about 7.13% is acceptable, the AFI algorithm yields a compression ratio as high as 10.51 without any entropy coding of the fractal code parameters.
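
A minimal Python sketch of the adaptive range-splitting idea follows: each range block is approximated by an affine map of a decimated domain block, and the range is halved whenever the fit error exceeds the tolerance. The domain selection, error measure, and test signal are simplified assumptions; the paper's AFI details and its PRD criterion are not reproduced here.

```python
import numpy as np

def fit_range(signal, start, length):
    """Least-squares scale/offset mapping a decimated 2x-long domain block onto the range."""
    rng_block = signal[start:start + length]
    dom_start = min(start, len(signal) - 2 * length)
    dom = signal[dom_start:dom_start + 2 * length].reshape(-1, 2).mean(axis=1)
    A = np.column_stack([dom, np.ones_like(dom)])
    (s, o), *_ = np.linalg.lstsq(A, rng_block, rcond=None)
    err = np.sqrt(np.mean((A @ np.array([s, o]) - rng_block) ** 2))
    return (start, length, dom_start, s, o), err

def encode_adaptive(signal, start, length, tol, min_len=4, params=None):
    """Keep one affine map per range; halve the range whenever the fit error exceeds tol."""
    if params is None:
        params = []
    code, err = fit_range(signal, start, length)
    if err <= tol or length <= min_len:
        params.append(code)                 # smooth part: one large range, few parameters
    else:                                   # rapidly varying part: subdivide and retry
        half = length // 2
        encode_adaptive(signal, start, half, tol, min_len, params)
        encode_adaptive(signal, start + half, length - half, tol, min_len, params)
    return params

if __name__ == "__main__":
    t = np.linspace(0, 1, 512)
    ecg_like = np.sin(2 * np.pi * 3 * t) + 0.6 * np.exp(-((t - 0.5) / 0.01) ** 2)
    blocks = []
    for start in range(0, len(ecg_like), 64):
        encode_adaptive(ecg_like, start, 64, tol=0.02, params=blocks)
    print(len(blocks), "range blocks")
```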


Deblocking Filter for Low-complexity Video Decoder (저 복잡도 비디오 복호화기를 위한 디블록킹 필터)

  • Jo, Hyun-Ho;Nam, Jung-Hak;Jung, Kwang-Su;Sim, Dong-Gyu;Cho, Dae-Sung;Choi, Woong-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.3 / pp.32-43 / 2010
  • This paper presents a deblocking filter for a low-complexity video decoder. The H.264/AVC baseline profile, used for mobile devices such as mobile phones, has twice the compression performance of MPEG-4 Visual, but it suffers from serious complexity because it uses a 1/4-pel interpolation filter, an adaptive entropy model, and a deblocking filter. This paper presents a low-complexity deblocking filter that decreases decoder complexity while preserving the coding efficiency of H.264/AVC. The proposed low-complexity deblocking filter reduces branch instructions by 49% compared with the conventional approach by calculating the boundary strength (BS) value from the CBP (coded block pattern). In addition, the filtering range of the strong filter applied at intra macroblock boundaries is limited to two pixels. According to the experimental results, the proposed low-complexity deblocking filter shows a BD-bitrate change of only -0.02% compared with the H.264/AVC baseline profile, while reducing the complexity of the deblocking filter by 42% and the complexity of the decoder by 8.96%.
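
As an illustration of deriving BS from the CBP rather than scanning coefficient levels, here is a simplified H.264-style boundary-strength decision in Python; the exact rule set, thresholds, and bit layout are assumptions rather than the paper's or the standard's precise logic.

```python
def boundary_strength(p_intra, q_intra, mb_edge, cbp_p, cbp_q,
                      blk8x8_p, blk8x8_q, mv_differ):
    """Simplified H.264-style BS decision using one CBP bit per 8x8 block."""
    if p_intra or q_intra:
        return 4 if mb_edge else 3             # intra blocks: strongest filtering
    coded_p = (cbp_p >> blk8x8_p) & 1          # CBP bit of the 8x8 block on each side,
    coded_q = (cbp_q >> blk8x8_q) & 1          # instead of scanning coefficient levels
    if coded_p or coded_q:
        return 2                               # residual present: medium strength
    return 1 if mv_differ else 0               # only a motion discontinuity, or none

if __name__ == "__main__":
    # Inter/inter edge, left block has coded coefficients (CBP bit 0 set) -> BS = 2
    print(boundary_strength(False, False, True, 0b0001, 0b0000, 0, 0, True))
```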