• Title/Summary/Keyword: Data compression

A Study on an Extension Compression Algorithm for Hangeul-Alphabet Mixed Text

  • Ji, Kang-yoo;Cho, Mi-nam;Hong, Sung-soo;Park, Soo-bong
    • Proceedings of the IEEK Conference / 2002.07a / pp.446-449 / 2002
  • This paper presents an improved data compression algorithm for mixed text files composed of 2-byte completion-type Hangeul and 1-byte alphabet characters. The original LZW algorithm compresses alphabet text files efficiently but handles 2-byte completion-type Hangeul text files poorly. To solve this problem, a data compression algorithm using a 2-byte prefix field and a 2-byte suffix field in the compression table was developed, but it introduced another problem: the compression ratio for alphabet text files decreased. In this paper, we propose an improved LZW algorithm, the Extended LZW (ELZW) algorithm, whose compression table uses a 2-byte prefix field as a pointer into the table and a 1-byte suffix field as a repeat counter; the prefix field holds a pointer (index) into the compression table, and the suffix field counts overlapping or recurring text data. To increase the compression ratio, after the compression table is constructed, the table data are packed into different bit strings according to whether an entry is an alphabet character, a Hangeul character, or a pointer. The proposed ELZW algorithm outperforms 1-byte LZW by 7.0125 percent and 2-byte LZW by 11.725 percent.
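
The prefix/suffix table structure described above is easiest to see in plain LZW, which ELZW extends. Below is a minimal Python sketch (our own, not the paper's code) of an LZW compressor whose dictionary maps (prefix code, suffix byte) pairs to new codes; the ELZW repeat-counter suffix and the per-type bit packing of alphabet, Hangeul, and pointer entries are not reproduced.

```python
# Minimal LZW sketch: the table maps (prefix code, suffix byte) -> new code,
# the same prefix/suffix structure that ELZW modifies. Codes 0..255 stand
# for single bytes; phrases get codes from 256 upward.
def lzw_compress(data: bytes) -> list[int]:
    if not data:
        return []
    table = {}
    next_code = 256
    output = []
    prefix = data[0]                     # current phrase, as a table code
    for byte in data[1:]:
        key = (prefix, byte)
        if key in table:
            prefix = table[key]          # extend the longest known phrase
        else:
            output.append(prefix)        # emit code for the longest match
            table[key] = next_code       # register phrase = prefix + byte
            next_code += 1
            prefix = byte                # restart from the new byte
    output.append(prefix)
    return output

# Mixed Hangeul/alphabet text: multi-byte Hangeul encodings are what the
# 2-byte table variants target.
print(lzw_compress("가나다 abc 가나다 abc".encode("utf-8")))
```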

Volumetric Data Encoding Using Daubechies Wavelet Filter

  • Hur, Young-Ju;Park, Sang-Hun
    • The KIPS Transactions: Part A / v.13A no.7 s.104 / pp.639-646 / 2006
  • Data compression technologies enable us to store and transfer large amounts of data efficiently and have become increasingly important as data sizes and network traffic grow. Moreover, with the increase in computing power, volumetric data produced in various applied science and engineering fields has been getting much larger. In this paper, we present a volume compression scheme that exploits the Daubechies wavelet transform. The proposed scheme supports lossy compression of 3D volume data and provides unit-wise random accessibility. Since our scheme shows far lower error rates than previous compression methods based on the Haar filter, it is well suited to interactive visualization applications as well as to the compression of large volume data requiring image fidelity.
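
As a rough illustration of the pipeline the abstract describes, the sketch below decomposes a 3D block with a Daubechies filter, zeroes the smallest coefficients for lossy compression, and reconstructs. It relies on the third-party PyWavelets package; the block size, wavelet order (db4), decomposition level, and keep ratio are our assumptions, not the paper's parameters, and the unit-wise random-access machinery is omitted.

```python
# Lossy 3D block coding sketch with a Daubechies wavelet (PyWavelets).
import numpy as np
import pywt

def compress_block(block: np.ndarray, keep_ratio: float = 0.05):
    # Multi-level 3D discrete wavelet transform.
    coeffs = pywt.wavedecn(block, wavelet="db4", level=2)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # Keep only the largest `keep_ratio` fraction of coefficients.
    thresh = np.quantile(np.abs(arr), 1.0 - keep_ratio)
    arr[np.abs(arr) < thresh] = 0.0
    return arr, slices                   # the sparse array is what gets stored

def decompress_block(arr, slices):
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
    return pywt.waverecn(coeffs, wavelet="db4")

block = np.random.rand(32, 32, 32)       # stand-in for one volume unit
arr, slices = compress_block(block)
recon = decompress_block(arr, slices)
print("RMSE:", np.sqrt(np.mean((block - recon[:32, :32, :32]) ** 2)))
```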

An Adaptive Data Compression Algorithm for Video Data

  • 김재균
    • Journal of the Korean Institute of Telematics and Electronics / v.12 no.2 / pp.1-10 / 1975
  • This paper presents an adaptive data compression algorithm for video data. The coding complexity due to the high correlation in the given data sequence is alleviated by coding the difference sequence rather than the data sequence itself. Adaptation to the nonstationary statistics of the data is confined within a code set consisting of two constant-length codes and six modified Shannon-Fano codes. It is assumed that the probability distributions of the difference data sequence and of the data entropy are Laplacian and Gaussian, respectively. The adaptive coding performance is compared for two code selection criteria: entropy and $P_0 = P_r[\text{difference value} = 0]$. It is shown that a compression ratio of 2:1 is achievable with adaptive coding. The gain of adaptive coding over fixed coding is about 10% in compression ratio and 15% in code efficiency. In addition, $P_0$ is found to be not only a convenient criterion for code selection but also almost as effective a parameter as entropy.
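
The $P_0$ selection rule lends itself to a very small sketch: per block, compute the difference sequence, estimate $P_0$ and the entropy, and pick a code by thresholding $P_0$. The thresholds and the three code names below are invented for illustration; the paper's code set has two constant-length codes and six modified Shannon-Fano codes.

```python
# Per-block code selection by P0 = Pr[difference = 0] versus entropy.
import math
from collections import Counter

def block_stats(samples):
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    counts = Counter(diffs)
    n = len(diffs)
    p0 = counts.get(0, 0) / n
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return diffs, p0, entropy

def pick_code(p0):
    # Hypothetical 3-way choice; real thresholds would be tuned to the codes.
    if p0 > 0.6:
        return "short-zero variable-length code"
    if p0 > 0.3:
        return "modified Shannon-Fano code"
    return "constant-length code"

samples = [10, 10, 11, 11, 11, 13, 13, 12, 12, 12]
diffs, p0, h = block_stats(samples)
print(diffs, f"P0={p0:.2f}", f"H={h:.2f} bits", "->", pick_code(p0))
```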

ECG data compression using wavelet transform and adaptive fractal interpolation

  • 윤영노;이우희
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.12 / pp.45-61 / 1996
  • This paper presents ECG data compression using the wavelet transform (WT) and adaptive fractal interpolation (AFI). The WT provides a subband coding scheme, while the fractal compression method represents a range of the ECG signal by fractal interpolation parameters. In particular, the AFI uses adaptive range sizes and achieves good performance for ECG data compression. In this algorithm, the AFI is applied to the low-frequency part of the WT. The MIT/BIH arrhythmia database was used for evaluation. The compression ratio of the combined WT and AFI algorithm is better than that of AFI alone; the WT and AFI algorithm yields a compression ratio as high as 21.0 without any entropy coding.
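
To make the adaptive-range idea concrete, here is a loose sketch (not the paper's AFI) of fractal coding on a 1-D band: each range block is approximated affinely from a decimated domain block, and the range doubles in size while the fit error stays under a tolerance. Domain placement, tolerances, and the raw fallback are all our assumptions.

```python
import numpy as np

def fit_affine(domain, rng_block):
    # Least-squares scale s and offset o with rng_block ~ s*domain + o.
    d = np.vstack([domain, np.ones_like(domain)]).T
    (s, o), *_ = np.linalg.lstsq(d, rng_block, rcond=None)
    err = np.sqrt(np.mean((s * domain + o - rng_block) ** 2))
    return s, o, err

def encode(signal, min_len=4, max_len=32, tol=0.05):
    params, i = [], 0
    while i < len(signal):
        length, best = min_len, None
        while i + length <= len(signal) and length <= max_len:
            rng_block = signal[i:i + length]
            # Domain: twice-as-long block before the range, decimated 2:1.
            start = max(0, i - 2 * length)
            domain = signal[start:start + 2 * length:2]
            if len(domain) != length:
                break
            s, o, err = fit_affine(domain, rng_block)
            if err > tol:
                break
            best, length = (i, length, s, o), length * 2   # try a bigger range
        if best is None:
            params.append(("raw", signal[i:i + min_len]))  # fallback: store raw
            i += min_len
        else:
            params.append(("ifs", best))
            i = best[0] + best[1]
    return params

sig = np.sin(np.linspace(0, 8 * np.pi, 256))         # stand-in low-frequency band
print(len(encode(sig)), "coded ranges for 256 samples")
```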

EEG Data Compression Using the Feature of Wavelet Packet Coefficients

  • Cho, Hyun-Sook;Lee, Hyoung;Hwang, Sun-Tae
    • Journal of Information Technology Applications and Management / v.10 no.4 / pp.159-168 / 2003
  • This paper is concerned with the compression of EEG signals using wavelet-packet-based techniques. EEG data compression is desirable for a number of reasons: primarily, it decreases transmission time and archival storage space, and in portable systems it decreases memory requirements or increases the number of channels and the bandwidth. Upon wavelet decomposition, inherent redundancies in the signal can be removed through thresholding to achieve data compression. We propose an energy cumulative function for deciding the threshold value and show that it works effectively on EEG data.
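
One plausible reading of the energy cumulative function is sketched below, assuming PyWavelets: decompose into wavelet packets, sort coefficient magnitudes, and set the threshold where the cumulative energy of the kept coefficients reaches a target fraction. The 99% target, the wavelet, and the level are our assumptions, not the paper's values.

```python
# Threshold from cumulative coefficient energy over a wavelet-packet basis.
import numpy as np
import pywt

def energy_threshold(x, wavelet="db4", level=4, energy_target=0.99):
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    coeffs = np.concatenate([node.data for node in wp.get_level(level, "freq")])
    mags = np.sort(np.abs(coeffs))[::-1]            # largest first
    cum = np.cumsum(mags ** 2) / np.sum(mags ** 2)  # cumulative energy share
    k = int(np.searchsorted(cum, energy_target)) + 1
    return mags[k - 1], k, coeffs.size              # threshold, kept, total

x = np.random.randn(1024)                           # stand-in for an EEG trace
thr, kept, total = energy_threshold(x)
print(f"threshold={thr:.3f}, keep {kept}/{total} coefficients")
```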

ECG Data Compression Technique Using Wavelet Transform and Vector Quantization on PMS-B Algorithm

  • Eun, J.S.;Shin, J.
    • Proceedings of the KOSOMBE Conference / v.1996 no.11 / pp.225-228 / 1996
  • ECG data are used for diagnostic purposes in many clinical situations, especially heart disease. In this paper, an efficient ECG data compression technique based on the wavelet transform and high-speed vector quantization with the PMS-B algorithm is proposed. In general, ECG data compression techniques are divided into two categories: direct and transform methods. The direct data compression techniques include the AZTEC, TP, CORTES, FAN, and SAPA algorithms, while the transform methods include the K-L, Fourier, Walsh, and wavelet transforms. In this paper, we applied wavelet analysis to the ECG data. In particular, vector quantization based on the PMS-B algorithm is applied to the wavelet coefficients in the higher-frequency regions, while the lower-frequency regions are scalar quantized by PCM. Finally, the quantized indices are compressed by an LZW lossless entropy encoder. Simulation results show that the method achieves a sufficient compression ratio while keeping a clinically acceptable PRD.
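
The split between scalar and vector quantization can be sketched as below (PyWavelets for the transform). The exhaustive nearest-neighbour search stands in for PMS-B, which prunes candidates using partial means; the codebook, vector length, and quantizer scale are illustrative, and the LZW entropy stage is omitted.

```python
import numpy as np
import pywt

def vq_encode(vectors, codebook):
    # Nearest codeword per vector; PMS-B would reject most candidates early.
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

ecg = np.sin(np.linspace(0, 40 * np.pi, 2048))      # stand-in for MIT-BIH data
low, *high = pywt.wavedec(ecg, "db4", level=3)

low_q = np.round(low * 64).astype(np.int16)         # scalar (PCM-like) quantization
codebook = np.random.randn(256, 4) * 0.1            # illustrative 256-entry codebook
indices = []
for band in high:                                   # VQ for high-frequency bands
    n = band.size - band.size % 4                   # truncate to whole 4-vectors
    indices.append(vq_encode(band[:n].reshape(-1, 4), codebook))
print(low_q.size, [idx.size for idx in indices])
```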

The Compression of Normal Vectors to Prevent Visual Distortion in Shading 3D Mesh Models

  • Mun, Hyun-Sik;Jeong, Chae-Bong;Kim, Jay-Jung
    • Korean Journal of Computational Design and Engineering / v.13 no.1 / pp.1-7 / 2008
  • Data compression is becoming increasingly important for reducing data storage space as well as transmission time in network environments. In 3D geometric models, the normal vectors of faces or meshes take up a major portion of the data, so compression of these vectors, which involves a trade-off between image distortion and compression ratio, plays a key role in reducing the size of the models. It is therefore important both to raise the compression ratio of the normal vectors and to minimize the visual distortion of the shaded model after compression. According to recent papers, normal vector compression is useful for raising the compression ratio and improving memory efficiency, but studies of shading distortion under normal vector compression are relatively rare. In this paper, a new normal vector compression method is proposed that clusters the normal vectors, assigns a Representative Normal Vector (RNV) to each cluster, and uses the angular deviation from the actual normal vector. Using this method, a Visually Undistinguishable Lossy Compression (VULC) algorithm has been developed, under which the shading distortion caused by the angular deviation of normal vectors cannot be identified visually. Applied to complicated shape models, the algorithm proved effective.
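
The RNV step reads like a k-means on the unit sphere; the sketch below clusters normals by cosine similarity, renormalizes each centroid as the cluster's RNV, and reports the worst angular deviation. The cluster count, iteration count, and the 3-degree budget are our assumptions, and the paper's VULC threshold selection is not reproduced.

```python
import numpy as np

def cluster_normals(normals, k=64, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    centers = normals[rng.choice(len(normals), k, replace=False)].copy()
    for _ in range(iters):
        labels = (normals @ centers.T).argmax(axis=1)   # max cosine similarity
        for j in range(k):
            members = normals[labels == j]
            if len(members):
                c = members.mean(axis=0)
                centers[j] = c / np.linalg.norm(c)      # renormalized RNV
    return centers, labels

normals = np.random.randn(5000, 3)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
rnv, labels = cluster_normals(normals)
dev = np.degrees(np.arccos(np.clip((normals * rnv[labels]).sum(1), -1.0, 1.0)))
print(f"max angular deviation: {dev.max():.2f} deg (assumed budget: 3 deg)")
```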

An Optimized Iterative Semantic Compression Algorithm And Parallel Processing for Large Scale Data

  • Jin, Ran;Chen, Gang;Tung, Anthony K.H.;Shou, Lidan;Ooi, Beng Chin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.6 / pp.2761-2781 / 2018
  • With the continuous growth of data sizes and the use of compression technology, data reduction has great research value and practical significance. Addressing the shortcomings of existing semantic compression algorithms, this paper builds on an analysis of the ItCompress algorithm and designs a bidirectional order-selection method based on interval partitioning, named the Optimized Iterative Semantic Compression Algorithm (Optimized ItCompress). To further improve speed, we propose a parallel optimized iterative semantic compression algorithm using the GPU (POICAG) and an optimized iterative semantic compression algorithm using Spark (DOICAS). Extensive experiments on four kinds of datasets verify the efficiency of the proposed algorithms.
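
For readers unfamiliar with ItCompress, the core representation it iterates on looks roughly like the sketch below: each row is stored as the index of its best-matching representative row plus only the attributes that disagree. Representative selection here is naive and fixed; the paper's interval partitioning, bidirectional order selection, and GPU/Spark parallelization are not reproduced.

```python
def itcompress(rows, representatives):
    # Encode each row as (representative index, {column: outlier value}).
    encoded = []
    for row in rows:
        best = max(range(len(representatives)),
                   key=lambda i: sum(a == b for a, b in zip(representatives[i], row)))
        outliers = {j: v for j, v in enumerate(row)
                    if v != representatives[best][j]}
        encoded.append((best, outliers))
    return encoded

rows = [("red", "S", 1), ("red", "M", 1), ("blue", "S", 2), ("red", "S", 2)]
reps = [("red", "S", 1), ("blue", "S", 2)]          # illustrative representatives
for rec in itcompress(rows, reps):
    print(rec)                                      # e.g. (0, {}) for an exact match
```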

2D ECG Compression Method Using Sorting and Mean Normalization

  • Lee, Gyu-Bong;Joo, Young-Bok;Han, Chan-Ho;Huh, Kyung-Moo;Park, Kil-Houm
    • Proceedings of the IEEK Conference / 2009.05a / pp.193-195 / 2009
  • In this paper, we propose an effective compression method for electrocardiogram (ECG) signals. 1-D ECG signals are rearranged into 2-D ECG data by period and complexity sorting schemes to increase inter- and intra-beat correlation, so that image compression techniques can be applied. The proposed method adds block division and mean-period normalization techniques on top of conventional 2-D ECG data compression methods, and JPEG 2000 is chosen to compress the 2-D ECG data. The standard MIT-BIH arrhythmia database is used for evaluation. The results show that the proposed method outperforms the most recent methods in the literature, especially at high compression rates.
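
The rearrangement step can be sketched independently of the JPEG 2000 stage: cut the signal into beats, sort them by period, pad rows to equal width, and subtract each beat's mean (kept as side information). The synthetic beat boundaries below stand in for a real QRS detector, and the block-division step is omitted.

```python
import numpy as np

def beats_to_image(signal, beat_starts):
    beats = [signal[a:b] for a, b in zip(beat_starts, beat_starts[1:])]
    beats.sort(key=len)                              # period sorting
    width = max(len(b) for b in beats)
    img = np.zeros((len(beats), width))
    means = np.array([b.mean() for b in beats])
    for i, b in enumerate(beats):
        img[i, :len(b)] = b - means[i]               # mean normalization
    return img, means                                # means are side information

sig = np.tile(np.hanning(180), 12) + 0.01 * np.random.randn(2160)
starts = list(range(0, 2161, 180))                   # synthetic beat boundaries
img, means = beats_to_image(sig, starts)
print(img.shape)                                     # 2-D array fed to JPEG 2000
```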

Palette-based Color Attribute Compression for Point Cloud Data

  • Cui, Li;Jang, Euee S.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.6 / pp.3108-3120 / 2019
  • Point clouds are widely used in 3D applications due to recent advances in 3D data acquisition technology. Polygonal-mesh-based compression has been dominant, since it can replace many points sharing a surface with a set of vertices in a mesh structure. Recent point-cloud-based applications, however, demand more point-based interactivity, which makes point cloud compression (PCC) more attractive than 3D mesh compression; interestingly, an exploration activity has been started in MPEG to study the feasibility of a PCC standard. In this paper, a new color attribute compression method is presented for point cloud data. The proposed method exploits the spatial redundancy among color attribute data to construct a color palette. The palette is built with the K-means clustering method, and each color in the point cloud is represented by the index of its most similar palette color. To further improve compression efficiency, the spatial redundancy between the indices of neighboring colors is also removed by marking them with a flag bit. Experimental results show that the proposed method achieves better rate-distortion performance than the MPEG PCC reference software.
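
A bare-bones version of the palette idea is sketched below: cluster point colors with K-means to form a palette, store one palette index per point, and spend a single flag bit to skip an index identical to the previous point's. The palette size, iteration count, and traversal order are our assumptions; the MPEG PCC integration and the entropy coding are not reproduced.

```python
import numpy as np

def kmeans_palette(colors, k=16, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    palette = colors[rng.choice(len(colors), k, replace=False)].astype(float)
    for _ in range(iters):
        d = ((colors[:, None, :] - palette[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)                    # nearest palette color
        for j in range(k):
            if (labels == j).any():
                palette[j] = colors[labels == j].mean(axis=0)
    return palette, labels

colors = np.random.randint(0, 256, (1000, 3))        # per-point RGB attributes
palette, idx = kmeans_palette(colors)
# Flag bit: 1 = same index as the previous point, so no index is coded.
flags = np.concatenate([[0], (idx[1:] == idx[:-1]).astype(int)])
coded = idx[flags == 0]
print(f"palette {len(palette)} colors, coded {coded.size}/{idx.size} indices")
```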