• Title/Summary/Keyword: lossless compress

Search results: 26

A Study on Optimal Image Data Multiresolution Representation and Compression Through Wavelet Transform (Wavelet 변환을 이용한 최적 영상 데이터 다해상도 표현 및 압축에 관한 연구)

  • Kang, Gyung-Mo; Jeoung, Ki-Sam; Lee, Myoung-Ho
    • Proceedings of the KOSOMBE Conference / v.1994 no.12 / pp.31-38 / 1994
  • This paper proposes signal decomposition and multiresolution representation through the wavelet transform using a wavelet orthonormal basis. It identifies the filter best suited to the scaling function in multiresolution representation and compares two compression methods, arithmetic coding and Huffman coding. The results are as follows. 1. The Daub18 coefficients are the most appropriate in terms of computing time, energy compaction, and image quality. 2. For image browsing, where images should be small in size yet easy to recognize, it is reasonable to decompose to 3 scales using the pyramidal algorithm. 3. For progressive transmission, which requires the most faithful image reconstruction from the fewest samples, or reconstruction at any target rate, the data were embedded in order of significance after decomposition to 5 scales. 4. Medical images, for which information loss is fatal, have to be compressed losslessly. Compressing the 5-scale data with arithmetic coding and Huffman coding showed that arithmetic coding outperforms Huffman coding in both processing time and compression ratio; with arithmetic coding, the data were compressed to 38% of the original image size.
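
As a concrete illustration of the decomposition step, here is a minimal sketch assuming the PyWavelets package: its 'db9' wavelet has 18 filter taps, matching the Daub18 coefficients the authors recommend, and wavedec2 performs the pyramidal multiscale decomposition. The image is a random stand-in, not the paper's data.

    import numpy as np
    import pywt

    img = np.random.rand(256, 256)        # stand-in for a medical image

    # 3-scale pyramidal decomposition (the depth suggested for browsing)
    coeffs = pywt.wavedec2(img, 'db9', level=3)

    # Energy compaction: how much of the total energy the coarse (LL3)
    # subband retains after decomposition
    approx = coeffs[0]
    detail_energy = sum(np.sum(c**2) for bands in coeffs[1:] for c in bands)
    total_energy = np.sum(approx**2) + detail_energy
    print("energy kept in LL3:", np.sum(approx**2) / total_energy)

    # The float transform reconstructs to machine precision; the paper's
    # lossless path adds entropy coding of the coefficients on top
    rec = pywt.waverec2(coeffs, 'db9')
    print("max reconstruction error:", np.abs(rec - img).max())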

A Lossless Image Compression using Wavelet Transform with 9/7 Integer Coefficient Filter Bank (9/7텝을 갖는 정수 웨이브릿 변환을 이용한 무손실 정지영상 압축)

  • Chu Hyung Suk; Seo Young Cheon; Jun Hee Sung; Lee Tae Ho; An Chong Koo
    • Journal of the Institute of Convergence Signal Processing / v.1 no.1 / pp.82-88 / 2000
  • In this paper, we compare the Haar wavelet of the S+P transform with various integer coefficient filter banks and apply the 9/7 ICFB to the wavelet transform. In addition, we propose an entropy-coding method that exploits the multiresolution structure and the feedback of the prediction error, and can efficiently compress the transformed image for progressive transmission. Simulation results compare the compression ratio against the S+P transform on different types of images.
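
To see why an integer wavelet can be exactly invertible, here is a minimal sketch of the S (integer Haar) transform that the S+P method builds on; the paper's 9/7 integer coefficient filter bank applies the same lifting idea with longer filters. This is an illustration under assumptions, not the authors' implementation.

    def s_transform(x):
        """One level of the integer Haar (S) transform; x has even length."""
        low  = [(a + b) // 2 for a, b in zip(x[0::2], x[1::2])]   # rounded averages
        high = [a - b        for a, b in zip(x[0::2], x[1::2])]   # differences
        return low, high

    def inverse_s_transform(low, high):
        out = []
        for l, h in zip(low, high):
            a = l + (h + 1) // 2     # recover the first sample exactly
            out += [a, a - h]        # the second follows from the difference
        return out

    x = [12, 10, 9, 9, 200, 3, 7, 8]
    low, high = s_transform(x)
    assert inverse_s_transform(low, high) == x   # lossless round trip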

Region Classification and Image Compression Based on Region-Based Prediction (RBP) Model

  • Cassio M. Yorozuya; Yu Liu; Masayuki Nakajima
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.165-170 / 1998
  • This paper presents RBP, a new region-based prediction model in which the context used for prediction contains regions instead of individual pixels. A useful property of RBP is that it can partition a cartoon image into two distinct types of regions, one containing full-color backgrounds and the other containing boundaries, edges, and homo-chromatic areas. With the development of computer techniques, synthetic images created with CG (computer graphics) have become attractive. As with the general demand for data compression, it is imperative to efficiently compress synthetic images such as CG-generated cartoon animation for storage of finite capacity and transmission over narrow bandwidth. This paper applies a lossy compression method to full-color regions and a lossless compression method to homo-chromatic and boundary regions. Two partitioning criteria are described: a constant criterion and a variable criterion. The latter, in the form of a linear function, gives a different classification threshold depending on the content of the image of interest. We carry out experiments by applying our method to a sequence of cartoon animation. Compared with the available image compression standard MPEG-1, our method gives superior results in both compression ratio and complexity.
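
A hedged sketch of the two partitioning criteria described above: blocks whose intensity spread stays under a threshold are routed to the lossless path (homo-chromatic and boundary regions), the rest to the lossy path. The block size and the linear-function coefficients are illustrative assumptions, not values from the paper.

    import numpy as np

    def classify_blocks(img, block=8, a=0.1, b=4.0, variable=True):
        """Return a boolean map: True = lossless path, False = lossy path."""
        h, w = img.shape
        labels = np.zeros((h // block, w // block), dtype=bool)
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                blk = img[i:i+block, j:j+block]
                spread = blk.max() - blk.min()
                # variable criterion: threshold is a linear function of content
                thresh = a * blk.mean() + b if variable else b
                labels[i // block, j // block] = spread <= thresh
        return labels

    img = np.random.randint(0, 256, (64, 64)).astype(float)
    lossless_mask = classify_blocks(img)          # constant criterion: variable=False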

Effective Compression Technique for Secure Transmission and Storage of GIS Digital Map (GIS 디지털 맵의 안전한 전송 및 저장을 위한 효율적인 압축 기법)

  • Jang, Bong-Joo; Moon, Kwang-Seok; Lee, Suk-Hwan; Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.14 no.2 / pp.210-218 / 2011
  • Generally, the GIS digital map has been represented and transmitted in ASCII and binary data forms. Of these, the binary form has been widely used in many GIS application fields for the transmission of mass map data. In this paper, we present a hierarchical compression technique for polyline and polygon components, the core geometric components that represent the main layers in a vector map, for effective storage and transmission of vector maps at various degrees of precision. The proposed technique first performs energy compaction of all polyline and polygon components in the spatial domain for lossless compression of the detailed vector map, and then independently compresses the integer and fraction parts of the 64-bit floating-point coordinates. Experimental results confirmed that the proposed technique compresses better than the conventional data compressors 7z, zip, rar, and gz.
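
The two ideas in this abstract can be sketched as follows, with zlib standing in for the real entropy coder and made-up coordinates: delta-encoding gives the spatial-domain energy compaction, and the integer and fraction parts of the 64-bit coordinates are coded as separate streams.

    import math, struct, zlib

    polyline = [(127.001234, 37.501111), (127.001240, 37.501115),
                (127.001251, 37.501120)]          # made-up coordinates

    # Energy compaction: after delta-encoding, the information concentrates
    # in the first vertex and tiny residuals
    deltas = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2)
              in zip(polyline, polyline[1:])]
    print("residual magnitudes:", [(abs(dx), abs(dy)) for dx, dy in deltas])

    # Split each 64-bit float coordinate into integer and fraction parts
    # and code the two streams independently (zlib as a stand-in)
    ints, fracs = [], []
    for x, y in polyline:
        for v in (x, y):
            frac, whole = math.modf(v)
            ints.append(int(whole))
            fracs.append(frac)

    int_stream  = zlib.compress(struct.pack(f'{len(ints)}q', *ints))
    frac_stream = zlib.compress(struct.pack(f'{len(fracs)}d', *fracs))
    print("coded bytes:", len(int_stream), len(frac_stream))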

The Cooperative Parallel X-Match Data Compression Algorithm (협동 병렬 X-Match 데이타 압축 알고리즘)

  • 윤상균
    • Journal of KIISE: Computer Systems and Theory / v.30 no.10 / pp.586-594 / 2003
  • The X-Match algorithm is a lossless compression algorithm that is suitable for hardware implementation owing to its simplicity. It can compress 32 bits per clock cycle and is suitable for real-time compression. However, as the bus width increases to 64 bits, the compression unit also needs to grow. This paper proposes the cooperative parallel X-Match (X-MatchCP) algorithm, which improves compression speed by running two X-Match engines in parallel. Whereas the previous parallel X-Match algorithm searches individual dictionaries, X-MatchCP searches the whole dictionary for two words at once, combines the compression codes generated for the two words by the parallel X-Match engines, and outputs the combined code. The compression ratio of X-MatchCP is almost the same as that of X-Match. The X-MatchCP algorithm was described and simulated in the Verilog hardware description language.
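
A simplified software model of one X-Match engine, under assumptions: a small move-to-front dictionary of 4-byte words, with full or partial matches (at least 2 matching bytes) coded as a dictionary index, a match mask, and the mismatched literal bytes. X-MatchCP, as described above, runs two such engines on consecutive words against a shared dictionary; real implementations compare all entries in parallel in hardware.

    DICT_SIZE = 16

    def xmatch_encode(words):
        """words: iterable of 4-byte tuples; returns symbolic codewords."""
        dictionary, out = [], []
        for w in words:
            # Find the dictionary entry with the most matching byte positions
            best, best_hits = None, 0
            for idx, entry in enumerate(dictionary):
                hits = sum(a == b for a, b in zip(w, entry))
                if hits > best_hits:
                    best, best_hits = idx, hits
            if best is not None and best_hits >= 2:   # full or partial match
                mask = tuple(a == b for a, b in zip(w, dictionary[best]))
                literals = tuple(b for b, m in zip(w, mask) if not m)
                out.append(('match', best, mask, literals))
                dictionary.pop(best)
            else:                                     # miss: emit the literal word
                out.append(('miss', w))
            dictionary.insert(0, w)                   # move-to-front update
            if len(dictionary) > DICT_SIZE:
                dictionary.pop()
        return out

    codes = xmatch_encode([(1, 2, 3, 4), (1, 2, 3, 4), (1, 2, 9, 9)])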

Side-Channel Archive Framework Using Deep Learning-Based Leakage Compression (딥러닝을 이용한 부채널 데이터 압축 프레임 워크)

  • Sangyun Jung; Sunghyun Jin; Heeseok Kim
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.3 / pp.379-392 / 2024
  • With the rapid increase in data volumes, saving storage space and improving the efficiency of data transmission have become critical issues, making research on data compression technologies increasingly important. Lossless algorithms can restore the original data exactly but have limited compression ratios, whereas lossy algorithms provide higher compression rates at the expense of some data loss. Data compression with deep learning, especially autoencoder models, has been an active research area. This study proposes a new compressor for side-channel analysis data based on autoencoders. The compressor achieves higher compression rates than Deflate while maintaining the characteristics of side-channel data: the encoder, built from locally connected layers, effectively preserves the temporal structure of side-channel traces, and the decoder, a multi-layer perceptron, keeps decompression fast. Correlation power analysis confirms that the proposed compressor compresses the data without losing its side-channel characteristics.
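
A hedged PyTorch sketch of the architecture the abstract describes: an encoder built from local filters and a cheap MLP decoder. The paper uses locally connected (unshared-weight) layers; plain Conv1d is a stand-in here, and every size below is an illustrative assumption.

    import torch
    import torch.nn as nn

    TRACE_LEN, CODE_LEN = 5000, 500       # assumed trace/code lengths

    class TraceAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential( # local filters keep temporal locality
                nn.Conv1d(1, 8, kernel_size=11, stride=5, padding=5), nn.ReLU(),
                nn.Conv1d(8, 1, kernel_size=5, stride=2, padding=2),
            )
            self.decoder = nn.Sequential( # MLP decoder: cheap to evaluate
                nn.Linear(CODE_LEN, 2000), nn.ReLU(),
                nn.Linear(2000, TRACE_LEN),
            )

        def forward(self, x):             # x: (batch, 1, TRACE_LEN)
            code = self.encoder(x)        # (batch, 1, CODE_LEN)
            return self.decoder(code.flatten(1))

    model = TraceAutoencoder()
    trace = torch.randn(4, 1, TRACE_LEN)
    recon = model(trace)
    loss = nn.functional.mse_loss(recon, trace.squeeze(1))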