• Title/Summary/Keyword: Information compression

Search results: 2,191

A Method of Quadtree-Based Compression for the Image by Wavelet Transform (웨이브렛 변환 영상에 대한 쿼드트리 기반 압축 방법)

  • Kwak, Chil-Seong;Kim, Ki-Moon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.10
    • /
    • pp.1773-1779
    • /
    • 2008
  • Images play an important role in human perception. To transmit image information in digital form, compression is essential, and many recent studies have addressed encoding algorithms for wavelet-transformed images. In this paper, a quadtree-based compression method is proposed for images decomposed by the wavelet transform; it exploits the correlations between pixels and groups runs of '0' data. Experiments use a $256\times256$, 8-bit grayscale test image, and the performance of the proposed method is evaluated against a DCT-based compression method.
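
A minimal sketch of the zero-grouping idea behind such quadtree coding: an all-zero block of wavelet coefficients is replaced by a single symbol, and only non-zero leaves keep their values. The symbol set, block sizes, and the absence of quantization are illustrative assumptions, not the paper's exact coder.

```python
import numpy as np

def quadtree_encode(block, min_size=2):
    """Recursively encode a square coefficient block into a flat symbol list:
    ('Z',)        -> block is entirely zero (one symbol for the whole group)
    ('L', values) -> smallest leaf block, raw coefficient values
    ('S',)        -> block was split into four quadrants (children follow)."""
    if not block.any():                       # grouped '0' data: a single flag
        return [('Z',)]
    if block.shape[0] <= min_size:            # smallest leaf: store the values
        return [('L', block.copy())]
    h = block.shape[0] // 2
    out = [('S',)]
    for quad in (block[:h, :h], block[:h, h:], block[h:, :h], block[h:, h:]):
        out.extend(quadtree_encode(quad, min_size))
    return out

# Toy example: a sparse 8x8 "subband" with one non-zero cluster.
subband = np.zeros((8, 8), dtype=np.int16)
subband[0:2, 0:2] = [[5, -3], [2, 1]]
symbols = quadtree_encode(subband)
print(len(symbols), "symbols instead of", subband.size, "coefficients")  # 9 vs 64
```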

2D ECG Compression Method Using Sorting and Mean Normalization (정렬과 평균 정규화를 이용한 2D ECG 신호 압축 방법)

  • Lee, Gyu-Bong;Joo, Young-Bok;Han, Chan-Ho;Huh, Kyung-Moo;Park, Kil-Houm
    • Proceedings of the IEEK Conference
    • /
    • 2009.05a
    • /
    • pp.193-195
    • /
    • 2009
  • In this paper, we propose an effective compression method for electrocardiogram (ECG) signals. 1-D ECG signals are rearranged into 2-D ECG data using period and complexity sorting schemes so that image compression techniques can exploit the increased inter- and intra-beat correlation. The proposed method adds block division and mean-period normalization on top of conventional 2-D ECG compression methods, and JPEG 2000 is chosen to compress the resulting 2-D data. The standard MIT-BIH arrhythmia database is used for evaluation. The results show that the proposed method outperforms the most recent methods in the literature, especially at high compression rates.
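
A simplified sketch of the 1-D-to-2-D rearrangement, assuming R-peak positions are already known: beats are cut at the peaks, sorted by period, resampled to a common width (a stand-in for period normalization), and the mean beat is subtracted before the matrix would be handed to an image codec such as JPEG 2000 (the codec call is omitted). The function names and the fixed width of 256 samples are assumptions.

```python
import numpy as np

def ecg_to_2d(ecg, r_peaks, width=256):
    beats = [ecg[a:b] for a, b in zip(r_peaks[:-1], r_peaks[1:])]
    beats.sort(key=len)                               # period sorting
    rows = [np.interp(np.linspace(0, len(b) - 1, width),
                      np.arange(len(b)), b)           # resample each beat to a fixed width
            for b in beats]
    img = np.vstack(rows)
    mean_beat = img.mean(axis=0)                      # subtract the mean beat;
    return img - mean_beat, mean_beat                 # store residual image + mean beat

# Toy signal: noisy sine "beats" with slightly different periods.
rng = np.random.default_rng(0)
ecg = np.concatenate([np.sin(np.linspace(0, 2 * np.pi, n)) + 0.01 * rng.standard_normal(n)
                      for n in (180, 200, 190, 210)])
r_peaks = np.cumsum([0, 180, 200, 190, 210])
img, mean_beat = ecg_to_2d(ecg, r_peaks)
print(img.shape)   # (4, 256): one row per beat
```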


A New ROM Compression Method for Continuous Data (연속된 데이터를 위한 새로운 롬 압축 방식)

  • 양병도;김이섭
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.40 no.5
    • /
    • pp.354-360
    • /
    • 2003
  • A new ROM compression method for continuous data is proposed, based on two ROM compression algorithms. The first is a region-select ROM compression algorithm that divides the data into many small regions by magnitude and address and stores only the regions that actually contain data. The second is a quantization-ROM and error-ROM compression algorithm that splits the data into quantized values and their errors. Using these algorithms, ROM size reductions of 40~60% are achieved for various kinds of continuous data.
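
A loose sketch of the quantization-ROM/error-ROM idea (not the authors' exact algorithm): because the data are continuous, one coarse base value per group of addresses plus a narrow per-address residual reconstructs every word exactly while using fewer total bits. The group size and word widths are assumptions.

```python
import numpy as np

def split_rom(table, group=8):
    base = np.array([table[i:i + group].min()             # coarse "quantization ROM"
                     for i in range(0, len(table), group)])
    error = table - np.repeat(base, group)[:len(table)]   # narrow "error ROM"
    return base, error

def read_rom(base, error, group, addr):
    return base[addr // group] + error[addr]               # exact reconstruction

# Toy continuous data: a 10-bit sine table with 256 entries.
addr = np.arange(256)
table = np.round(511 * (np.sin(2 * np.pi * addr / 256) + 1)).astype(np.int64)
base, error = split_rom(table)
assert all(read_rom(base, error, 8, a) == table[a] for a in addr)
err_bits = int(np.ceil(np.log2(error.max() + 1)))          # residual word width
print(f"original: {256 * 10} bits, split: {len(base) * 10 + 256 * err_bits} bits")
```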

Reversible Data Hiding Scheme for VQ Indices Based on Absolute Difference Trees

  • Chang, Chin-Chen;Nguyen, Thai-Son;Lin, Chia-Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.7
    • /
    • pp.2572-2589
    • /
    • 2014
  • Reversible data hiding is a technique for recovering the original image without any distortion after the secret data have been extracted, and it continues to attract attention from many researchers. In this paper, we introduce a new reversible data hiding scheme based on the adjacent index differences of vector quantization (VQ) indices; the differences between two adjacent indices are exploited to embed secret data. Experimental results show that our scheme achieves a lower compression rate than the earlier scheme by Yang and Lin: an average of 0.44 bpp versus 0.53 bpp. Moreover, the embedding capacity of our scheme reaches 1.45 bpi, which is also superior to Chang et al.'s scheme [35] (1.00 bpi), Yang and Lin's scheme [27] (0.91 bpi), and Chang et al.'s scheme [26] (0.74 bpi).
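
The following is only a generic illustration of the underlying principle, not the scheme proposed in the paper: when two adjacent VQ indices are close, the short difference code leaves room to carry a secret bit, and otherwise the index is kept verbatim behind a flag, so the decoder recovers both the original index sequence and the secret bits. The threshold and symbol layout are assumptions.

```python
T = 4   # assumed threshold on the absolute index difference

def embed(indices, secret_bits):
    out, bits = [], list(secret_bits)
    prev = indices[0]
    out.append(('raw', prev))                        # first index stored directly
    for idx in indices[1:]:
        diff = idx - prev
        if abs(diff) < T and bits:
            out.append(('emb', diff, bits.pop(0)))   # small difference: carry one secret bit
        else:
            out.append(('raw', idx))                 # large difference: no embedding
        prev = idx
    return out

def extract(stream):
    indices, bits, prev = [], [], None
    for sym in stream:
        if sym[0] == 'raw':
            prev = sym[1]
        else:
            prev = prev + sym[1]                     # rebuild index from the stored difference
            bits.append(sym[2])
        indices.append(prev)
    return indices, bits

vq_indices = [12, 13, 11, 40, 41, 41, 90]
stream = embed(vq_indices, [1, 0, 1, 1])
rec, secret = extract(stream)
assert rec == vq_indices and secret == [1, 0, 1, 1]   # fully reversible
```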

Sparse Matrix Compression Technique and Hardware Design for Lightweight Deep Learning Accelerators (경량 딥러닝 가속기를 위한 희소 행렬 압축 기법 및 하드웨어 설계)

  • Kim, Sunhee;Shin, Dongyeob;Lim, Yong-Seok
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.17 no.4
    • /
    • pp.53-62
    • /
    • 2021
  • Deep learning models such as convolutional neural networks and recurrent neural networks process huge amounts of data, so they require a lot of storage and consume considerable time and power due to memory access. Recently, research has focused on reducing memory usage and accesses by compressing the data, exploiting the fact that much deep learning data is highly sparse and localized. In this paper, we propose a compression-decompression method that stores only the non-zero data and their location information, excluding the zero data. To build the location information, the matrix is divided uniformly into sections, and a flag indicates whether each section contains non-zero data. This section division is executed not just once but repeatedly, and location information is stored at each step, so the data can be compressed appropriately according to the ratio and distribution of zero values. In addition, we propose a hardware structure that enables compression and decompression without complex operations. It was designed and verified in Verilog, confirming that it can be used in hardware deep learning accelerators.
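
A minimal sketch of the repeated section division described above, assuming a vector view of the matrix: at each level a bitmap records which sections contain any non-zero value, only those sections are divided further, and the smallest sections keep their raw values. The fan-out of 4 and the leaf size are assumptions; the paper's hardware-oriented format may differ.

```python
import numpy as np

def compress(vec, fanout=4, leaf=4):
    if len(vec) <= leaf:
        return list(vec)                                # leaf: raw values
    sections = np.array_split(vec, fanout)              # uniform section division
    bitmap = [int(s.any()) for s in sections]           # per-section non-zero flag
    children = [compress(s, fanout, leaf) for s in sections if s.any()]
    return {'bitmap': bitmap, 'children': children}

def decompress(node, length, fanout=4, leaf=4):
    if isinstance(node, list):
        return np.array(node)
    bounds = np.array_split(np.zeros(length), fanout)   # recover the section lengths
    out, it = [], iter(node['children'])
    for flag, part in zip(node['bitmap'], bounds):
        out.append(decompress(next(it), len(part), fanout, leaf)
                   if flag else np.zeros(len(part)))    # zero sections are regenerated
    return np.concatenate(out)

# Toy sparse activation vector.
x = np.zeros(64); x[3] = 1.5; x[40] = -2.0
packed = compress(x)
assert np.array_equal(decompress(packed, len(x)), x)
```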

Blind Classification of Speech Compression Methods using Structural Analysis of Bitstreams (비트스트림의 구조 분석을 이용한 음성 부호화 방식 추정 기법)

  • Yoo, Hoon;Park, Cheol-Sun;Park, Young-Mi;Kim, Jong-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.1
    • /
    • pp.59-64
    • /
    • 2012
  • This paper presents a blind estimation and classification algorithm for speech compression methods based on structural analysis of the compressed bitstreams. Various speech compression methods, including vocoders, have been developed to transmit or store speech signals at very low bitrates, and as a key feature these vocoders inevitably impose a block structure on the bitstream. To classify each compression method, we use the Measure of Inter-Block Correlation (MIBC) to check whether the bitstream contains a block structure and to estimate the block length. Moreover, for compression methods with the same block length, the proposed algorithm identifies the correct method by exploiting the fact that each method exhibits different correlation characteristics at each bit location. Experimental results indicate that the proposed algorithm classifies speech compression methods robustly for various types and lengths of speech signals in noisy environments.
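
A hedged stand-in for an inter-block correlation test, not the paper's exact MIBC: if a bitstream is framed with period L, the bit statistics at each position inside the frame differ (for example, fixed or strongly biased fields), so a score computed over candidate lengths peaks near the true block length.

```python
import numpy as np

def block_score(bits, length):
    n = (len(bits) // length) * length
    frames = np.reshape(bits[:n], (-1, length))
    return np.var(frames.mean(axis=0))        # spread of the per-position bit bias

def estimate_block_length(bits, candidates):
    return max(candidates, key=lambda L: block_score(bits, L))

# Toy "vocoder" stream: 48-bit frames with a fixed 4-bit sync/header pattern.
rng = np.random.default_rng(1)
frames = rng.integers(0, 2, size=(200, 48))
frames[:, :4] = [1, 0, 1, 1]                  # deterministic bits in every frame
bits = frames.ravel()
print(estimate_block_length(bits, candidates=range(20, 80)))   # typically 48
```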

FDR Test Compression Algorithm based on Frequency-ordered (Frequency-ordered 기반 FDR 테스트패턴 압축 알고리즘)

  • Mun, Changmin;Kim, Dooyoung;Park, Sungju
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.5
    • /
    • pp.106-113
    • /
    • 2014
  • Recently, to reduce test cost by efficiently compressing test patterns for SoCs (systems-on-a-chip), various compression techniques have been proposed, including the FDR (frequency-directed run-length) algorithm. FDR has been extended to EFDR (Extended FDR), SAFDR (Shifted-Alternate FDR), and VPDFDR (Variable Prefix Dual FDR) to improve the compression ratio. In this paper, a frequency-ordered modification is proposed to further increase the compression ratios of FDR, EFDR, SAFDR, and VPDFDR. By maximizing the compression ratio with the frequency-ordered method, the overall manufacturing test cost and time can be reduced significantly.
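
A sketch of FDR run-length coding combined with one plausible reading of "frequency-ordered": run lengths are ranked by how often they occur, and the most frequent run length is mapped to the shortest FDR codeword. The paper's exact remapping and the handling of trailing runs may differ.

```python
from collections import Counter

def fdr_codeword(rank):
    """FDR codeword for a 0-based symbol rank: group k covers ranks
    2^k - 2 .. 2^(k+1) - 3 and uses a k-bit prefix plus a k-bit tail."""
    k = 1
    while rank >= (1 << (k + 1)) - 2:
        k += 1
    prefix = '1' * (k - 1) + '0'
    tail = format(rank - ((1 << k) - 2), f'0{k}b')
    return prefix + tail

def encode(bits):
    # Split the test pattern into runs of 0s terminated by a 1 (classic FDR view).
    runs, run = [], 0
    for b in bits:
        if b == 0:
            run += 1
        else:
            runs.append(run); run = 0
    # Frequency ordering: the most common run length gets rank 0.
    order = [length for length, _ in Counter(runs).most_common()]
    rank_of = {length: r for r, length in enumerate(order)}
    return ''.join(fdr_codeword(rank_of[r]) for r in runs), order

pattern = [0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1]
code, table = encode(pattern)
print(code, table)   # the dominant run length (3) is coded with only 2 bits
```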

A GIS Vector Data Compression Method Considering Dynamic Updates

  • Chun Woo-Je;Joo Yong-Jin;Moon Kyung-Ky;Lee Yong-Ik;Park Soo-Hong
    • Spatial Information Research
    • /
    • v.13 no.4 s.35
    • /
    • pp.355-364
    • /
    • 2005
  • Vector data sets (e.g. maps) are currently a major source for displaying, querying, and identifying the locations of spatial features in a variety of applications. Especially in mobile environments, the need for spatial data is increasing, and the relatively large size of vector maps must be reduced. Several studies on vector map compression have been reported recently, including a clustering-based compression method with a novel encoding/decoding scheme. However, previous studies did not consider that spatial data must be updated periodically. This paper explores this shortcoming of the existing clustering-based compression method and proposes an adaptive approximation method that can handle data updates while also reducing error levels. Experimental evaluation showed that, when an update event occurred, the proposed adaptive approximation method achieved better positional accuracy than the simple clustering-based compression method.
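
A generic illustration of clustering-based vector-map coding (not the paper's scheme and without its update handling): vertices are grouped into clusters, and each vertex is stored as a cluster id plus a short quantized offset from the cluster centroid, trading a bounded positional error for fewer bits per coordinate. The cell size and offset step are assumptions.

```python
import numpy as np

def compress_vertices(points, cell=10.0, step=0.25):
    cells = [tuple(c) for c in np.floor(points / cell).astype(int)]       # coarse grid clustering
    labels = {}
    inv = np.array([labels.setdefault(c, len(labels)) for c in cells])    # cluster id per vertex
    centroids = np.array([points[inv == k].mean(axis=0) for k in range(len(labels))])
    offsets = np.round((points - centroids[inv]) / step).astype(np.int8)  # short per-vertex offsets
    return centroids, inv, offsets

def decompress_vertices(centroids, inv, offsets, step=0.25):
    return centroids[inv] + offsets * step        # positional error bounded by step/2 per axis

pts = np.array([[12.3, 4.1], [12.8, 4.4], [13.1, 3.9], [55.2, 40.7]])
cents, idx, off = compress_vertices(pts)
print(np.abs(decompress_vertices(cents, idx, off) - pts).max())
```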


JPEG-based Variable Block-Size Image Compression using CIE La*b* Color Space

  • Kahu, Samruddhi Y.;Bhurchandi, Kishor M.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.5056-5078
    • /
    • 2018
  • In this work we propose a compression technique that uses the linear and perceptually uniform CIE $La^*b^*$ color space within the JPEG image compression framework to improve its performance at lower bitrates. To generate quantization matrices suitable for the linear and perceptually uniform CIE $La^*b^*$ color space, a novel linear Contrast Sensitivity Function (CSF) is used. The compression performance, in terms of Compression Ratio (CR) and Peak Signal-to-Noise Ratio (PSNR), is further improved by utilizing image-dependent, variable, non-uniform image sub-blocks generated by a proposed histogram-based merging technique. Experimental results indicate that the proposed linear-CSF-based quantization yields, on average, an 8% increase in CR at the same reconstructed image quality in terms of PSNR compared to the conventional YCbCr color space. The proposed scheme also outperforms JPEG in terms of CR by an average of 45.01% at the same reconstructed image quality.
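
A sketch of the color-space step such a pipeline assumes: converting sRGB to CIE L*a*b* under a D65 white point before block-wise transform and quantization. The CSF-derived quantization matrices and the histogram-based block merging from the paper are not reproduced; the constants below are the standard sRGB/D65 values.

```python
import numpy as np

SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.95047, 1.0, 1.08883])

def srgb_to_lab(rgb):                      # rgb in [0, 1], shape (..., 3)
    lin = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    xyz = lin @ SRGB_TO_XYZ.T / WHITE_D65  # linearize, then map to normalized XYZ
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

img = np.random.default_rng(2).random((8, 8, 3))   # toy 8x8 RGB block
lab = srgb_to_lab(img)
print(lab[..., 0].min(), lab[..., 0].max())        # L* roughly within [0, 100]
```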

Very Low Bitrate Image Compression Coding Based on Fractal (프랙탈 기반 저전송율 영상 압축 부호화)

  • 곽성근
    • Journal of the Korea Computer Industry Society
    • /
    • v.3 no.8
    • /
    • pp.1085-1092
    • /
    • 2002
  • Image information processing has been studied for a long time because, in daily life, most information is acquired through the sense of sight. Since a large amount of data is needed to describe an image in digital form, data compression is required to store or transmit digital images. Among the methods adopted in recent image compression standards, transform coding has been used primarily; it maps the correlations between image pixels into the frequency domain before compression. It is known that the standard methods, especially those using the DCT, suffer from blocking effects, which are the major cause of image quality degradation at high compression rates. In this paper, fractal encoding using quadtree partitioning is applied after reducing the original image, and an optimal encoding is sought with respect to the number of scaling bits and offset bits.
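
A minimal sketch of the range-domain fitting at the heart of fractal coding, showing where the scaling and offset bit counts mentioned above enter: a contracted domain block is fitted to a range block by least squares, and the resulting scaling and offset are quantized to a chosen number of bits. Quadtree splitting, the image reduction step, and the domain search are omitted; the bit widths and parameter ranges are assumptions.

```python
import numpy as np

def quantize(x, lo, hi, bits):
    levels = (1 << bits) - 1
    q = int(np.clip(np.rint((x - lo) / (hi - lo) * levels), 0, levels))
    return lo + q * (hi - lo) / levels

def fit_block(range_blk, domain_blk, s_bits=5, o_bits=7):
    # Contract the domain block 2:1 by averaging 2x2 neighborhoods.
    d = domain_blk.reshape(domain_blk.shape[0] // 2, 2, -1, 2).mean(axis=(1, 3))
    r, dm = range_blk.ravel(), d.ravel()
    var = np.var(dm)
    s = 0.0 if var == 0 else np.cov(dm, r, bias=True)[0, 1] / var   # least-squares scaling
    o = r.mean() - s * dm.mean()                                    # least-squares offset
    s_q = quantize(s, -1.0, 1.0, s_bits)                            # quantized scaling
    o_q = quantize(o, 0.0, 255.0, o_bits)                           # quantized offset
    return s_q, o_q, float(np.mean((r - (s_q * dm + o_q)) ** 2))    # MSE after quantization

rng = np.random.default_rng(3)
domain = rng.integers(0, 256, size=(8, 8)).astype(float)
range_blk = 0.6 * domain.reshape(4, 2, 4, 2).mean(axis=(1, 3)) + 20   # self-similar 4x4 block
print(fit_block(range_blk, domain))   # scaling near 0.6, offset near 20, small MSE
```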
