• Title/Summary/Keyword: Data Compression

On-the-fly Data Compression for Efficient TCP Transmission

  • Wang, Min;Wang, Junfeng;Mou, Xuan;Han, Sunyoung
    • KSII Transactions on Internet and Information Systems (TIIS), v.7 no.3, pp.471-489, 2013
  • Data compression at the transport layer can both reduce the bytes transmitted over network links and increase the amount of application data (TCP PDU) carried in one RTT under the same network conditions. It can therefore improve transmission efficiency on the Internet, especially over links with limited bandwidth or long delays. In this paper, we propose an on-the-fly TCP data compression scheme, TCPComp, to enhance TCP performance. The scheme consists primarily of a compression decision mechanism and a compression ratio estimation algorithm. When application data arrives at the transport layer, the compression decision mechanism determines which data blocks should be compressed. The compression ratio estimation algorithm predicts the compression ratio of upcoming application data in order to choose the proper size of the next data block and thus maximize compression efficiency. Furthermore, assessment criteria for TCP data compression schemes are systematically developed. Experimental results show that the scheme effectively reduces the number of transmitted TCP segments and bytes, yielding greater transmission efficiency than standard TCP and other TCP compression schemes.
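
The decision-plus-estimation loop described in this abstract can be pictured with a short sketch. The following is a hypothetical reconstruction, not the authors' code: the class name, the zlib probe, and all thresholds are assumptions.

```python
# Hypothetical sketch of a TCPComp-style sender-side gate: trial-compress
# a small prefix to predict the ratio, and adapt the next block size
# from an EWMA of observed ratios.
import zlib

RATIO_THRESHOLD = 0.9   # assumption: compress only if the probe shrinks by >10%
ALPHA = 0.25            # assumed EWMA weight for the ratio estimate

class CompressionDecider:
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.est_ratio = 1.0        # predicted compressed/original ratio

    def should_compress(self, block: bytes) -> bool:
        # Cheap probe: trial-compress a small prefix instead of the whole block.
        probe = block[:512]
        if not probe:
            return False
        probe_ratio = len(zlib.compress(probe)) / len(probe)
        return probe_ratio < RATIO_THRESHOLD

    def record(self, original_len: int, compressed_len: int) -> None:
        # Feed observed ratios back so compressible streams get larger
        # blocks, which amortizes per-segment overhead.
        ratio = compressed_len / original_len
        self.est_ratio = ALPHA * ratio + (1 - ALPHA) * self.est_ratio
        self.block_size = 8192 if self.est_ratio < 0.5 else 4096
```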

A Twin Symbol Encoding Technique Based on Run-Length for Efficient Test Data Compression

  • Park, Jae-Seok;Kang, Sung-Ho
    • ETRI Journal, v.33 no.1, pp.140-143, 2011
  • Recent test data compression techniques raise concerns about power dissipation and compression efficiency. This letter proposes a new test data compression scheme, twin symbol encoding, which supports a block division technique that reduces hardware overhead. Our experimental results show that the proposed technique achieves both a high compression ratio and low power dissipation, making it an attractive solution for efficient test data compression.
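
The letter builds on run-length coding. As background, a plain run-length encoder over a scan-test bit vector looks like the sketch below; this is a generic illustration, not the twin-symbol codec itself.

```python
# Generic run-length encoding of a test vector: each run of identical
# bits becomes a (bit, length) pair. Twin symbol encoding and the block
# division step are refinements on top of this idea, not shown here.
def run_length_encode(bits: str):
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                      # extend the current run
        runs.append((bits[i], j - i))
        i = j
    return runs

print(run_length_encode("0000111000000011"))
# -> [('0', 4), ('1', 3), ('0', 7), ('1', 2)]
```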

BTC Algorithm Utilizing Compression Method of Bitmap and Quantization data for Image Compression (비트맵과 양자화 데이터 압축 기법을 사용한 BTC 영상 압축 알고리즘)

  • Cho, Moonki;Yoon, Yungsup
    • Journal of the Institute of Electronics and Information Engineers, v.49 no.10, pp.135-141, 2012
  • To reduce the frame memory size used in LCD overdrive, block truncation coding (BTC) image compression is commonly used. To maximize the compression ratio, BTC must also compress the bitmap and the quantization data. In this paper, we propose the CMBQ-BTC algorithm (CMBQ: compression method for bitmap and quantization data) to achieve a high compression ratio. Experimental results show that the proposed algorithm compares favorably with the conventional BTC method in both PSNR and compression ratio.
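
For readers unfamiliar with the baseline, here is classic moment-preserving BTC on a single block. The CMBQ step that further compresses the bitmap and the two quantization levels is the paper's contribution and is not reproduced here.

```python
# Classic BTC block coding (Delp-Mitchell): keep the block mean and
# standard deviation via two reconstruction levels plus a 1-bit-per-pixel
# bitmap of which pixels lie at or above the mean.
import numpy as np

def btc_encode(block: np.ndarray):
    n = block.size
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q = int(bitmap.sum())
    if q in (0, n):                    # flat block: a single level suffices
        return bitmap, m, m
    a = m - s * np.sqrt(q / (n - q))   # level for pixels below the mean
    b = m + s * np.sqrt((n - q) / q)   # level for pixels at/above it
    return bitmap, a, b

def btc_decode(bitmap, a, b):
    return np.where(bitmap, b, a)

block = np.array([[2, 3, 7, 9], [2, 4, 8, 9], [1, 3, 7, 8], [2, 4, 8, 9]], float)
bm, a, b = btc_encode(block)
print(btc_decode(bm, a, b).round(1))
```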

A Study for Efficiency Improvement of Compression Algorithm with Selective Data Distinction (선별적 데이터 판별에 의한 압축 알고리즘 효율 개선에 관한 연구)

  • Jang, Seung Ju
    • Journal of the Korea Institute of Information and Communication Engineering, v.17 no.4, pp.902-908, 2013
  • This paper proposes compressing data selectively to improve compression efficiency, rather than compressing all data unconditionally. Whether to compress is decided by selective data distinction, which avoids unnecessary compression when the expected efficiency is low. Cutting out this unnecessary work improves the performance of the existing compression path. In particular, data that has already been compressed usually cannot be compressed further even when a compression algorithm is applied again, so in such cases the data need not be compressed at all. We implemented the proposed function, and the experimental results showed that it operates correctly.
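
One plausible form of such a gate is sketched below under my own assumptions; the magic-number list and the entropy threshold are illustrative choices, not taken from the paper.

```python
# Skip compression when the input already looks compressed, judged by
# well-known container magic numbers and by byte entropy (already-
# compressed data is close to 8 bits/byte and will not shrink further).
import math, zlib
from collections import Counter

COMPRESSED_MAGIC = (b"\x1f\x8b", b"PK\x03\x04", b"\xff\xd8", b"\x89PNG")

def byte_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts, n = Counter(data), len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def store_or_compress(data: bytes) -> bytes:
    if data.startswith(COMPRESSED_MAGIC) or byte_entropy(data) > 7.5:
        return b"S" + data              # store raw, flagged with one byte
    return b"C" + zlib.compress(data)   # likely compressible: compress
```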

A New ROM Compression Method for Continuous Data (연속된 데이터를 위한 새로운 롬 압축 방식)

  • 양병도;김이섭
    • Journal of the Institute of Electronics Engineers of Korea SD, v.40 no.5, pp.354-360, 2003
  • A new ROM compression method for continuous data is proposed. It is based on two proposed ROM compression algorithms. The first is a region-select ROM compression algorithm that divides the data into many small regions by magnitude and address and stores only the regions that contain data. The second is a quantization-ROM and error-ROM compression algorithm that splits the data into quantized values and their errors. Using these algorithms, ROM size reductions of 40-60% are achieved for various kinds of continuous data.
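
The quantized-plus-error split can be illustrated numerically. The sketch below is my reconstruction of the idea, with the group size and bit widths as assumed parameters: continuity keeps the residuals small, so the error ROM can be narrow.

```python
# Split slowly varying ROM contents into a small base-value ROM (one
# coarse entry per group) and a narrow error ROM of residuals.
import numpy as np

t = np.arange(64)
samples = (2000 + 30 * t).astype(int)             # 12-bit continuous data

GROUP = 8                                         # assumed group size
base_rom = samples[::GROUP]                       # one base value per group
error_rom = samples - np.repeat(base_rom, GROUP)  # small, non-negative

# Lossless reconstruction from the two narrower ROMs.
assert np.array_equal(np.repeat(base_rom, GROUP) + error_rom, samples)

raw_bits = samples.size * 12
split_bits = base_rom.size * 12 + error_rom.size * int(error_rom.max()).bit_length()
print(raw_bits, "->", split_bits)                 # 768 -> 608 bits here
```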

A Simulation Framework for Wireless Compressed Data Broadcast

  • Seokjin Im
    • International Journal of Advanced Culture Technology, v.11 no.2, pp.315-322, 2023
  • Intelligent IoT environments that accommodate very large numbers of clients require technologies that provide reliable information services regardless of the number of clients. Wireless data broadcast is an information service technique that ensures such scalability by delivering data to all clients simultaneously. Because clients scan the wireless channel sequentially to find the data they need, their access time is strongly affected by the length of the broadcast cycle; compression-based broadcasting shortens the cycle and thus reduces access time. A simulation framework that can evaluate broadcast performance under different data compression algorithms is therefore essential. In this paper, we propose such a framework. It is designed so that different compression algorithms can be applied according to the data characteristics, and it can also evaluate performance according to the data scheduling technique and the kinds of queries the clients process. We implement the proposed framework and use it, with several compression algorithms applied, to demonstrate the performance of compressed data broadcasting.
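
A toy experiment in the spirit of the framework is sketched below. The access-time model is my assumption, not the paper's simulator: on a flat broadcast cycle, the expected access time is roughly half the cycle plus the item's own broadcast time, so shrinking items by compression shrinks access time proportionally.

```python
# Compare mean access time on a flat broadcast cycle, raw vs compressed.
import random, zlib

random.seed(7)
items = [bytes(random.choices(b"abcdefgh", k=2000)) for _ in range(100)]

def mean_access_time(sizes):
    # Assumed model: a client tunes in at a uniformly random instant and
    # waits for a uniformly chosen item on a flat (round-robin) cycle.
    return sum(sizes) / 2 + sum(sizes) / len(sizes)

raw = [len(it) for it in items]
packed = [len(zlib.compress(it)) for it in items]
print(mean_access_time(raw), "->", mean_access_time(packed))
```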

A Study of Big Time Series Data Compression based on CNN Algorithm (CNN 기반 대용량 시계열 데이터 압축 기법연구)

  • Sang-Ho Hwang;Sungho Kim;Sung Jae Kim;Tae Geun Kim
    • IEMEK Journal of Embedded Systems and Applications, v.18 no.1, pp.1-7, 2023
  • In this paper, we implement a lossless compression technique for time-series data generated by IoT (Internet of Things) devices in order to reduce disk usage. The proposed technique reduces the size of the encoded data by selectively applying CNN (convolutional neural network) prediction or delta encoding, depending on the situation, in a forecasting stage that predicts the time series. It then sequentially performs zigzag encoding, splitting, and bit packing on the residuals to increase the compression ratio. We show that the proposed method achieves a compression ratio of up to 1.60 on the original data.
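
The residual pipeline can be sketched as follows. Two simplifications are mine: a plain delta predictor stands in for the CNN forecaster, and the splitting step is reduced to a single fixed bit width.

```python
# Delta -> zigzag -> fixed-width bit packing of prediction residuals.
def zigzag(n: int) -> int:
    # Interleave signs so small +/- residuals become small unsigned ints:
    # 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4 (valid for |n| < 2**63).
    return (n << 1) ^ (n >> 63)

def compress(samples):
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    residuals = [zigzag(d) for d in deltas]
    width = max(r.bit_length() for r in residuals) or 1
    packed = 0
    for r in residuals:                  # pack at one fixed width
        packed = (packed << width) | r
    nbits = 8 + width * len(residuals)   # 8 bits to record the width
    return width, packed, nbits

samples = [1000, 1002, 1001, 1005, 1004, 1006]
width, packed, nbits = compress(samples)
print(width, nbits, "vs", 32 * len(samples), "raw bits")
```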

Compression of time-varying volume data using Daubechies D4 filter (Daubechies D4 필터를 사용한 시간가변(time-varying) 볼륨 데이터의 압축)

  • Hur, Young-Ju;Lee, Joong-Youn;Koo, Gee-Bum
    • Proceedings of HCI Korea (한국HCI학회 학술대회논문집), 2007.02a, pp.982-987, 2007
  • The need to compress volume data has grown with increasing data sizes and network use. Many compression schemes exist, chosen according to data type, application field, and preference. However, the data produced by computational scientists has grown enormously, and most scientific data takes the form of 3D volumes. For 2D images and moving pictures, many standards are established and widely used, but for 3D volume data, especially time-varying volume data, applicable compression schemes are hard to find. In this paper, we present a compression scheme for encoding time-varying volume data, aimed at visualization. The scheme adopts MPEG's I-frame and P-frame concept to raise the compression ratio, and it transforms the volume data with the Daubechies D4 filter before encoding, so the image quality is better than that of other wavelet-based compression schemes. It encodes time-varying volumes composed of single-precision floating-point data, provides random reconstruction access per unit, and exploits correlation between frames to compress large time-varying volumes while preserving image quality.
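
As background, one decomposition level of the D4 filter pair on a periodic 1-D signal is sketched below. The paper applies the transform to volume data and adds I/P-frame coding, which is not reproduced here.

```python
# One level of the Daubechies D4 wavelet transform (periodic boundary).
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # scaling
g = np.array([h[3], -h[2], h[1], -h[0]])                           # wavelet

def d4_forward(x):
    n = len(x)                       # n must be even
    idx = (np.arange(0, n, 2)[:, None] + np.arange(4)) % n
    windows = x[idx]                 # sliding windows of 4 samples
    return windows @ h, windows @ g  # approximation, detail

x = np.sin(np.linspace(0, 2 * np.pi, 16, endpoint=False))
approx, detail = d4_forward(x)
print(np.abs(detail).max())          # detail coefficients stay small
```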

A Pattern Matching Extended Compression Algorithm for DNA Sequences

  • Murugan, A.;Punitha, K.
    • International Journal of Computer Science & Network Security, v.21 no.8, pp.196-202, 2021
  • DNA sequencing provides fundamental data for genomics, bioinformatics, biology, and many other research areas. With the rapid evolution of DNA sequencing technology, a massive amount of genomic data, mainly DNA sequences, is produced every day, demanding ever more storage and bandwidth. Managing, analyzing, and especially storing these large amounts of data has become a major scientific challenge for bioinformatics. Such volumes of data also require fast transmission, effective storage, and quick access to any record. Storage costs make up a considerable proportion of the total cost of producing and analyzing DNA sequences, yet standard compression techniques handle these sequences poorly, so several specialized techniques have been introduced; lossless compression techniques tailored to DNA have thus become necessary. This paper describes a new DNA compression mechanism, a pattern-matching extended compression algorithm, that reads the input sequence as segments, finds matching patterns, and stores them in a permanent or temporary table according to the number of bases. The remaining unmatched sequence is converted into binary form, grouped into seven-bit units, and converted into ASCII characters. Finally, the algorithm computes the compression ratio dynamically. The results show that the pattern-matching extended compression algorithm outperforms state-of-the-art compressors in compression ratio regardless of file size.
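
The seven-bit regrouping of unmatched segments can be sketched as follows. The 2-bit base mapping and the padding rule are my assumptions about details the abstract leaves open.

```python
# Pack unmatched DNA bases at 2 bits each, then emit the bitstream as
# 7-bit ASCII codes (14 bases -> 28 bits -> 4 characters).
BASE_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def pack_bases(seq: str) -> str:
    bits = "".join(BASE_BITS[b] for b in seq)
    bits += "0" * (-len(bits) % 7)           # pad to a 7-bit boundary
    return "".join(chr(int(bits[i:i + 7], 2)) for i in range(0, len(bits), 7))

packed = pack_bases("ACGTACGTACGTAC")
print(len(packed), "chars for 14 bases")     # -> 4 chars for 14 bases
```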

Wavelet Compression Method with Minimum Delay for Mobile Tele-cardiology Applications (이동형 Tele-cardiology 시스템 적용을 위한 최저 지연을 가진 웨이브릿 압축 기법)

  • Kim Byoung-Soo;Yoo Sun-Kook;Lee Moon-Hyoung
    • The Transactions of the Korean Institute of Electrical Engineers D, v.53 no.11, pp.786-792, 2004
  • Wavelet-based ECG data compression has become an attractive and efficient method in many mobile tele-cardiology applications, but the large block size required for high compression performance introduces a serious delay. In this paper, a new wavelet compression method with minimum delay is proposed. It decides the type and compression ratio (CR) of each block adaptively, according to the standard deviation of the input ECG data, while using the minimum block size. The compression performance of the proposed algorithm on different MIT ECG records was analyzed in comparison with another ECG compression algorithm. In addition to measuring the processing delay, compression efficiency and reconstruction sensitivity to error were evaluated via random-noise simulation models. The results show that the proposed algorithm achieves a lower PRD than the other algorithm at the same CR, together with minimal time for data acquisition, processing, and transmission.
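
The standard-deviation-driven block decision might look like the following sketch. The thresholds and CR values are invented for illustration and are not taken from the paper.

```python
# Pick a per-block compression ratio from the block's standard deviation:
# quiet baseline tolerates aggressive compression, while high-variance
# (QRS-like) blocks are compressed gently to keep PRD low.
import numpy as np

def choose_block_cr(block: np.ndarray) -> float:
    sd = float(block.std())
    if sd < 0.05:
        return 16.0   # near-flat baseline segment
    if sd < 0.20:
        return 8.0    # moderate activity (P/T waves)
    return 4.0        # high activity (QRS region)

block = np.array([0.0, 0.01, 0.8, 1.2, 0.3, -0.4])   # toy QRS-like block
print(choose_block_cr(block))                        # -> 4.0
```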