• Title/Abstract/Keywords: Data compression


A Study on Extension Compression Algorithm of Mixed Text by Hangeul-Alphabet

  • Ji, Kang-yoo;Cho, Mi-nam;Hong, Sung-soo;Park, Soo-bong
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2002년도 ITC-CSCC -1
    • /
    • pp.446-449
    • /
    • 2002
  • This paper presents an improved data compression algorithm for mixed text files composed of 2-byte completion-type Hangeul and 1-byte alphabet characters. The original LZW algorithm compresses alphabet text files efficiently but compresses 2-byte completion-type Hangeul text files inefficiently. To solve this problem, a data compression algorithm using a 2-byte prefix field and a 2-byte suffix field in the compression table was developed, but it introduced another problem: the compression ratio for alphabet text files decreased. In this paper, we propose an improved LZW algorithm, the Extended LZW (ELZW) algorithm, whose compression table uses a 2-byte prefix field as a pointer (index) into the table and a 1-byte suffix field as a counter for overlapping or recursive text data. To increase the compression ratio, after the compression table is constructed, table entries are packed into bit strings of different lengths according to whether they represent an alphabet character, a Hangeul character, or a pointer. The proposed ELZW algorithm outperforms the 1-byte LZW algorithm by 7.0125 percent and the 2-byte LZW algorithm by 11.725 percent.
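As a rough illustration of the dictionary step the abstract builds on, here is a minimal plain-LZW compressor in Python; the paper's ELZW changes (a repeat-counter suffix byte and bit-packed table entries) are not reproduced, and the sample input is a hypothetical mixed Hangeul-alphabet string.

```python
# Minimal LZW compressor sketch (plain LZW, not the paper's ELZW variant).
# The ELZW extension replaces the suffix byte with a repeat counter and
# bit-packs table entries; those steps are omitted here.

def lzw_compress(data: bytes) -> list[int]:
    """Return a list of dictionary indices for the input byte string."""
    # Initialize the table with all single-byte strings (indices 0..255).
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    result = []
    w = b""
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                      # extend the current match
        else:
            result.append(table[w])     # emit code for the longest match
            table[wc] = next_code       # add the new phrase to the table
            next_code += 1
            w = bytes([byte])
    if w:
        result.append(table[w])
    return result

if __name__ == "__main__":
    sample = "한글과 alphabet이 섞인 텍스트 텍스트".encode("utf-8")
    codes = lzw_compress(sample)
    print(len(sample), "bytes ->", len(codes), "codes")
```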


Volumetric Data Encoding Using Daubechies Wavelet Filter

  • 허영주;박상훈
    • 정보처리학회논문지A
    • /
    • Vol. 13A No. 7
    • /
    • pp.639-646
    • /
    • 2006
  • Data compression enables large volumes of data to be stored and transmitted efficiently, and its importance keeps growing as required data sizes and network traffic increase. In particular, the volume data produced in various applied-science and engineering fields continues to grow in size with advances in computing technology. This paper proposes a technique for compressing volume data using the Daubechies wavelet transform. The implemented D4 wavelet filter-based compression technique supports lossy compression of 3-D volume data and random, block-wise reconstruction. Because it incurs lower reconstruction loss than the existing Haar filter-based compression scheme, the technique is useful for compression and interactive visualization of large data sets in which accurate reconstructed images matter.
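A minimal sketch of the approach, assuming the PyWavelets library ('db2' in PyWavelets is the 4-tap Daubechies D4 filter): the volume is decomposed, only the largest-magnitude coefficients are kept (lossy compression), and the volume is reconstructed. The paper's block-wise random-access reconstruction is omitted; the keep ratio and test volume are illustrative.

```python
# Sketch: lossy 3-D volume compression by D4 wavelet coefficient thresholding.
import numpy as np
import pywt

def compress_volume(volume: np.ndarray, keep_ratio: float = 0.05):
    """Decompose with the D4 filter and keep only the largest coefficients."""
    coeffs = pywt.wavedecn(volume, wavelet='db2', level=2)
    arr, slices = pywt.coeffs_to_array(coeffs)
    # Global threshold: keep the top keep_ratio fraction by magnitude.
    thresh = np.quantile(np.abs(arr), 1.0 - keep_ratio)
    arr_sparse = np.where(np.abs(arr) >= thresh, arr, 0.0)
    return arr_sparse, slices

def decompress_volume(arr_sparse, slices):
    coeffs = pywt.array_to_coeffs(arr_sparse, slices, output_format='wavedecn')
    return pywt.waverecn(coeffs, wavelet='db2')

if __name__ == "__main__":
    vol = np.random.rand(32, 32, 32).astype(np.float32)   # toy volume
    arr, sl = compress_volume(vol)
    rec = decompress_volume(arr, sl)[:32, :32, :32]        # trim filter padding
    print("max abs reconstruction error:", np.max(np.abs(rec - vol)))
```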

An Adaptive Data Compression Algorithm for Video Data

  • 김재균
    • 대한전자공학회논문지
    • /
    • Vol. 12 No. 2
    • /
    • pp.1-10
    • /
    • 1975
  • This paper presents an adaptive data compression algorithm for video data. The coding complexity due to the high correlation in the given data sequence is alleviated by coding the difference data sequence rather than the data sequence itself. Adaptation to the nonstationary statistics of the data is confined to a code set, which consists of two constant-length codes and six modified Shannon-Fano codes. It is assumed that the probability distributions of the difference data sequence and of the data entropy are Laplacian and Gaussian, respectively. The adaptive coding performance is compared for two code-selection criteria: entropy and Pr[difference value = 0] = p0. It is shown that a data compression ratio of 2:1 is achievable with adaptive coding, and that the gain of adaptive coding over fixed coding is about 10% in compression ratio and 15% in code efficiency. In addition, p0 is found to be not only a far more convenient code-selection parameter than entropy but also efficient enough to perform within about 1% of the entropy criterion.
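A small sketch of the code-selection idea on toy difference statistics: the sequence is decorrelated by differencing, and p0 = Pr[difference = 0] picks between a fixed-length code and a variable-length (Shannon-Fano-style) code. The threshold value and the data model are assumptions for illustration, not the paper's.

```python
# Sketch: code selection for difference data using p0 = Pr[diff == 0],
# the parameter the paper found to perform within ~1% of entropy.
import numpy as np

def entropy(symbols):
    """Empirical entropy in bits per sample."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def select_code(diff, p0_threshold=0.5):
    """Pick a code for a block of difference samples (threshold assumed)."""
    p0 = np.mean(diff == 0)
    # A highly peaked distribution rewards short codewords for zero.
    return "variable-length (Shannon-Fano)" if p0 > p0_threshold else "fixed-length"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    steps = rng.choice([-1, 0, 1], size=1000, p=[0.1, 0.8, 0.1])
    data = np.cumsum(steps)          # correlated toy scan line
    diff = np.diff(data)             # decorrelate by differencing
    print("entropy of data: %.2f bits" % entropy(data))
    print("entropy of diff: %.2f bits" % entropy(diff))
    print("selected code:", select_code(diff))
```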


ECG Data Compression Using Wavelet Transform and Adaptive Fractal Interpolation

  • 윤영노;이우희
    • 전자공학회논문지B
    • /
    • Vol. 33B No. 12
    • /
    • pp.45-61
    • /
    • 1996
  • This paper presents ECG data compression using the wavelet transform (WT) and adaptive fractal interpolation (AFI). The WT provides a subband coding scheme, while the fractal compression method represents each range of the ECG signal by fractal interpolation parameters. In particular, the AFI uses adaptive range sizes and achieves good performance for ECG data compression. In this algorithm, the AFI is applied to the low-frequency part of the WT. The MIT/BIH arrhythmia database was used for evaluation. The compression ratio of the combined WT and AFI algorithm is better than that of AFI alone, yielding compression ratios as high as 21.0 without any entropy coding.
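The sketch below shows only the wavelet subband split and the PRD fidelity metric that ECG compression results are customarily reported with; the adaptive fractal interpolation applied to the low-frequency band is not reproduced, and the toy signal stands in for real MIT/BIH records. It assumes the PyWavelets library.

```python
# Sketch: split an ECG signal into wavelet subbands; the paper applies
# adaptive fractal interpolation (AFI) only to the low-frequency band.
import numpy as np
import pywt

def prd(original, reconstructed):
    """Percent RMS difference, the usual ECG compression fidelity metric."""
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def subband_split(ecg, wavelet='db4', level=3):
    coeffs = pywt.wavedec(ecg, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    return approx, details   # AFI would be applied to `approx` here

if __name__ == "__main__":
    t = np.linspace(0, 1, 1024)
    ecg = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(1024)  # toy signal
    approx, details = subband_split(ecg)
    # Crude "compression": discard the finest detail band entirely.
    coeffs = [approx] + list(details[:-1]) + [np.zeros_like(details[-1])]
    rec = pywt.waverec(coeffs, 'db4')[:1024]
    print("PRD = %.2f%%" % prd(ecg, rec))
```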


EEG Data Compression Using the Feature of Wavelet Packet Coefficients

  • 조현숙;이형;황선태
    • Journal of Information Technology Applications and Management
    • /
    • Vol. 10 No. 4
    • /
    • pp.159-168
    • /
    • 2003
  • This paper is concerned with the compression of EEG signals using wavelet-packet-based techniques. EEG data compression is desirable for a number of reasons: primarily, it decreases transmission time and archival storage space, and in portable systems it decreases memory requirements or increases the number of channels and the bandwidth. Upon wavelet decomposition, inherent redundancies in the signal can be removed through thresholding to achieve data compression. We propose an energy cumulative function for deciding the threshold value and show that it works effectively on EEG data.
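A minimal sketch, assuming PyWavelets and an assumed form of the cumulative-energy criterion: decompose an epoch into wavelet-packet coefficients, then pick the smallest magnitude threshold that retains a chosen fraction of the total coefficient energy.

```python
# Sketch: wavelet-packet decomposition of an EEG epoch with thresholding
# driven by a cumulative-energy function (the exact function used in the
# paper is an assumption here).
import numpy as np
import pywt

def energy_threshold(coeffs, energy_fraction=0.99):
    """Smallest magnitude t such that coefficients with |c| >= t retain
    the requested fraction of total energy."""
    mags = np.sort(np.abs(coeffs))[::-1]
    cum_energy = np.cumsum(mags ** 2) / np.sum(mags ** 2)
    k = int(np.searchsorted(cum_energy, energy_fraction)) + 1
    return mags[min(k, len(mags)) - 1]

if __name__ == "__main__":
    eeg = np.random.randn(1024)                    # placeholder EEG epoch
    wp = pywt.WaveletPacket(eeg, wavelet='db4', maxlevel=4)
    flat = np.concatenate([node.data for node in wp.get_level(4)])
    t = energy_threshold(flat, 0.99)
    kept = np.sum(np.abs(flat) >= t)
    print("kept %d of %d coefficients" % (kept, flat.size))
```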


ECG Data Compression Technique Using Wavelet Transform and Vector Quantization on PMS-B Algorithm

  • 은종숙;신재호
    • 대한의용생체공학회:학술대회논문집
    • /
    • 대한의용생체공학회 1996년도 추계학술대회
    • /
    • pp.225-228
    • /
    • 1996
  • ECG data are used for diagnostic purposes in many clinical situations, especially for heart disease. In this paper, an efficient ECG data compression technique based on the wavelet transform and high-speed vector quantization with the PMS-B algorithm is proposed. In general, ECG data compression techniques are divided into two categories: direct and transform methods. The direct data compression techniques include the AZTEC, TP, CORTES, FAN, and SAPA algorithms, while the transform methods include the K-L, Fourier, Walsh, and wavelet transforms. In this paper, we applied wavelet analysis to the ECG data; in particular, vector quantization with the PMS-B algorithm was applied to the wavelet coefficients in the higher-frequency regions, while the lower-frequency regions were scalar quantized by PCM. Finally, the quantized indices were compressed with an LZW lossless entropy encoder. Simulation results show that the method achieves a sufficient compression ratio while keeping a clinically acceptable PRD.
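The following sketch vector-quantizes a detail band with an ordinary k-means codebook as a stand-in for the paper's PMS-B fast-search VQ; the vector dimension, codebook size, and Laplacian toy data are assumptions. It assumes scikit-learn is available.

```python
# Sketch: vector quantization of high-frequency wavelet coefficients with a
# k-means codebook (stand-in for the paper's PMS-B fast-search VQ).
import numpy as np
from sklearn.cluster import KMeans

def vq_encode(coeffs, dim=4, codebook_size=64):
    """Group coefficients into `dim`-vectors and map each to a codebook index."""
    n = len(coeffs) - len(coeffs) % dim
    vectors = coeffs[:n].reshape(-1, dim)
    km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(vectors)
    return km.cluster_centers_, km.labels_          # codebook + indices

def vq_decode(codebook, indices):
    return codebook[indices].ravel()

if __name__ == "__main__":
    detail = np.random.laplace(scale=0.5, size=1024)  # toy detail band
    cb, idx = vq_encode(detail)
    rec = vq_decode(cb, idx)
    mse = np.mean((detail[:rec.size] - rec) ** 2)
    print("codebook size 64, MSE = %.4f" % mse)
```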


The Compression of Normal Vectors to Prevent Visual Distortion in Shading 3D Mesh Models

  • 문현식;정채봉;김재정
    • 한국CDE학회논문집
    • /
    • Vol. 13 No. 1
    • /
    • pp.1-7
    • /
    • 2008
  • Data compression is becoming an increasingly important issue for reducing data storage space as well as transmission time in network environments. In 3D geometric models, the normal vectors of faces or meshes take up a major portion of the data, so compressing these vectors, which involves a trade-off between image distortion and compression ratio, plays a key role in reducing the size of the models. It is therefore important both to raise the compression ratio of the normal vectors and to minimize the visual distortion of the shaded model after compression. According to recent papers, normal vector compression is useful for raising the compression ratio and improving memory efficiency, but studies on the shading distortion caused by normal vector compression are relatively rare. In this paper, a new normal vector compression method is proposed that clusters the normal vectors, assigns a Representative Normal Vector (RNV) to each cluster, and uses the angular deviation from the actual normal vector. Using this method, a Visually Undistinguishable Lossy Compression (VULC) algorithm has been developed, in which the distortion of the shaded model caused by the angular deviation of the normal vectors cannot be identified visually. Applied to complicated shape models, the algorithm proved effective.
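A minimal sketch of the clustering step, assuming scikit-learn: unit normals are clustered, each cluster's re-normalized centroid serves as the RNV, and the worst-case angular deviation (the quantity a VULC-style criterion would bound) is reported. The cluster count and test normals are illustrative.

```python
# Sketch: cluster unit normals, assign a representative normal (RNV) per
# cluster, and report the worst angular deviation from the true normals.
import numpy as np
from sklearn.cluster import KMeans

def compress_normals(normals, n_clusters=128):
    km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(normals)
    rnv = km.cluster_centers_
    rnv /= np.linalg.norm(rnv, axis=1, keepdims=True)   # re-normalize centroids
    return rnv, km.labels_

def max_angular_deviation(normals, rnv, labels):
    cosines = np.sum(normals * rnv[labels], axis=1).clip(-1.0, 1.0)
    return np.degrees(np.max(np.arccos(cosines)))

if __name__ == "__main__":
    n = np.random.randn(5000, 3)
    n /= np.linalg.norm(n, axis=1, keepdims=True)       # random unit normals
    rnv, labels = compress_normals(n)
    print("max deviation: %.2f degrees" % max_angular_deviation(n, rnv, labels))
```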

An Optimized Iterative Semantic Compression Algorithm And Parallel Processing for Large Scale Data

  • Jin, Ran;Chen, Gang;Tung, Anthony K.H.;Shou, Lidan;Ooi, Beng Chin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 12 No. 6
    • /
    • pp.2761-2781
    • /
    • 2018
  • With the continuous growth of data size and the use of compression technology, data reduction has great research value and practical significance. To address the shortcomings of existing semantic compression algorithms, this paper builds on an analysis of the ItCompress algorithm and designs a bidirectional order-selection method based on interval partitioning, named the Optimized Iterative Semantic Compression (Optimized ItCompress) Algorithm. To further improve the speed of the algorithm, we propose a parallel optimized iterative semantic compression algorithm using GPUs (POICAG) and an optimized iterative semantic compression algorithm using Spark (DOICAS). Extensive experiments carried out on four kinds of datasets fully verify the efficiency of the proposed algorithms.
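A toy sketch of one ItCompress-style encoding pass: each row is stored as a representative id plus the outlier columns that differ from that representative. Representative selection here is plain random sampling, not the paper's bidirectional interval-partitioning order, and exact matching stands in for per-column tolerances.

```python
# Sketch: semantic compression by representative rows plus outliers.
import numpy as np

def itcompress_pass(table, reps):
    """Encode each row as (representative id, outlier mask, outlier values)."""
    encoded = []
    for row in table:
        # Choose the representative that matches the most columns.
        matches = (reps == row).sum(axis=1)
        r = int(np.argmax(matches))
        mask = reps[r] != row                 # columns stored as outliers
        encoded.append((r, mask, row[mask]))
    return encoded

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    table = rng.integers(0, 3, size=(1000, 8))   # toy categorical table
    reps = table[rng.choice(len(table), size=4, replace=False)]
    enc = itcompress_pass(table, reps)
    outliers = sum(m.sum() for _, m, _ in enc)
    print("outlier cells: %d of %d" % (outliers, table.size))
```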

2D ECG Compression Method Using Sorting and Mean Normalization

  • 이규봉;주영복;한찬호;허경무;박길흠
    • 대한전자공학회:학술대회논문집
    • /
    • 대한전자공학회 2009년도 정보 및 제어 심포지움 논문집
    • /
    • pp.193-195
    • /
    • 2009
  • In this paper, we propose an effective compression method for electrocardiogram (ECG) signals. 1-D ECG signals are reconstructed into 2-D ECG data by period and complexity sorting schemes, so that image compression techniques can exploit the increased inter- and intra-beat correlation. The proposed method adds block division and mean-period normalization techniques on top of conventional 2-D ECG data compression methods. JPEG 2000 is chosen for the compression of the 2-D ECG data. The standard MIT-BIH arrhythmia database is used for evaluation and experiments. The results show that the proposed method outperforms the most recent methods in the literature, especially at high compression rates.
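A sketch of the 2-D reconstruction step under simplifying assumptions (known R-peak positions, zero-padding instead of period normalization by resampling): beats become image rows, rows are sorted by beat period, and the mean beat is subtracted before the array is handed to a 2-D codec such as JPEG 2000.

```python
# Sketch: build a 2-D ECG array from beats for subsequent image compression.
import numpy as np

def ecg_to_2d(signal, r_peaks):
    beats = [signal[a:b] for a, b in zip(r_peaks[:-1], r_peaks[1:])]
    width = max(len(b) for b in beats)
    img = np.zeros((len(beats), width))
    for i, b in enumerate(beats):
        img[i, :len(b)] = b                          # zero-pad to equal width
    order = np.argsort([len(b) for b in beats])      # period sorting
    img = img[order]
    mean_beat = img.mean(axis=0)                     # mean normalization
    return img - mean_beat, mean_beat, order         # codec input + side info

if __name__ == "__main__":
    t = np.arange(5000)
    sig = np.sin(2 * np.pi * t / 250.0)              # toy periodic "ECG"
    peaks = list(range(0, 5000, 250))                # assumed R-peak locations
    img, mean_beat, order = ecg_to_2d(sig, peaks)
    print("2-D array shape:", img.shape)
```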


Palette-based Color Attribute Compression for Point Cloud Data

  • Cui, Li;Jang, Euee S.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13 No. 6
    • /
    • pp.3108-3120
    • /
    • 2019
  • Point clouds are widely used in 3D applications due to recent advances in 3D data acquisition technology. Polygonal mesh-based compression has been dominant because it can replace many points sharing a surface with a set of vertices in a mesh structure. Recent point cloud-based applications demand more point-based interactivity, which makes point cloud compression (PCC) more attractive than 3D mesh compression; notably, an exploration activity has started in MPEG on the feasibility of a PCC standard. In this paper, a new color attribute compression method is presented for point cloud data. The proposed method utilizes the spatial redundancy among color attribute data to construct a color palette. The palette is constructed with the K-means clustering method, and each color in the point cloud is represented by the index of its most similar palette color. To further improve compression efficiency, the spatial redundancy between the indices of neighboring colors is also removed by marking them with a flag bit. Experimental results show that the proposed method achieves better RD performance than the MPEG PCC reference software.
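A minimal sketch of the palette idea, assuming scikit-learn: K-means builds the color palette, each point stores a palette index, and a flag bit suppresses the index when it repeats the previous point's index (the traversal order here is simply input order, an assumption; the paper uses spatial neighbors).

```python
# Sketch: K-means color palette plus a flag bit for repeated indices.
import numpy as np
from sklearn.cluster import KMeans

def palette_encode(colors, palette_size=16):
    km = KMeans(n_clusters=palette_size, n_init=4, random_state=0).fit(colors)
    idx = km.labels_
    # flag = 1: same index as previous point (index not emitted);
    # flag = 0: a new index follows.
    flags = np.concatenate(([0], (idx[1:] == idx[:-1]).astype(np.uint8)))
    emitted = idx[flags == 0]
    return km.cluster_centers_, flags, emitted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(0, 256, size=(8, 3))             # 8 dominant colors
    colors = base[rng.integers(0, 8, size=2000)] + rng.normal(0, 2, (2000, 3))
    palette, flags, emitted = palette_encode(colors)
    print("indices emitted: %d of %d points" % (emitted.size, flags.size))
```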