• Title/Abstract/Keyword: Data compression

On-the-fly Data Compression for Efficient TCP Transmission

  • Wang, Min; Wang, Junfeng; Mou, Xuan; Han, Sunyoung
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 7, No. 3 / pp. 471-489 / 2013
  • Data compression at the transport layer can both reduce the bytes transmitted over network links and increase the application data (TCP PDU) transmitted in one RTT under the same network conditions. It can therefore improve transmission efficiency on the Internet, especially on networks with limited bandwidth or long-delay links. In this paper, we propose an on-the-fly TCP data compression scheme, TCPComp, to enhance TCP performance. The scheme consists primarily of a compression decision mechanism and a compression ratio estimation algorithm. When application data arrives at the transport layer, the compression decision mechanism determines which data blocks should be compressed. The compression ratio estimation algorithm predicts the compression ratios of upcoming application data to determine the proper size of the next data block, so as to maximize compression efficiency. Furthermore, assessment criteria for TCP data compression schemes are systematically developed. Experimental results show that the scheme effectively reduces transmitted TCP segments and bytes, yielding greater transmission efficiency than standard TCP and other TCP compression schemes.
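
A minimal sketch of the decision-plus-estimation idea the abstract describes, not the paper's actual TCPComp algorithm: track the ratios of recently compressed blocks with an exponentially weighted estimate, and skip compression when the estimate predicts little gain. The threshold and smoothing factor are assumed values.

```python
import zlib

class CompressionDecider:
    """Hypothetical TCPComp-style decision sketch: compress a block
    only when the running ratio estimate predicts a worthwhile gain."""

    def __init__(self, threshold=0.9, alpha=0.5):
        self.threshold = threshold  # compress only if estimated ratio < threshold
        self.alpha = alpha          # smoothing factor for the running estimate
        self.estimate = 0.5         # optimistic initial ratio estimate

    def process(self, block: bytes) -> bytes:
        if self.estimate >= self.threshold:
            return block            # predicted gain too small: send as-is
        compressed = zlib.compress(block)
        ratio = len(compressed) / len(block)
        # exponentially weighted estimate of the next blocks' ratio
        self.estimate = self.alpha * ratio + (1 - self.alpha) * self.estimate
        return compressed if ratio < 1.0 else block
```

A real implementation would also use the estimate to size the next block, as the abstract explains; this sketch only shows the compress/skip decision.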

A Twin Symbol Encoding Technique Based on Run-Length for Efficient Test Data Compression

  • Park, Jae-Seok; Kang, Sung-Ho
    • ETRI Journal / Vol. 33, No. 1 / pp. 140-143 / 2011
  • Recent test data compression techniques raise concerns regarding power dissipation and compression efficiency. This letter proposes a new test data compression scheme, twin symbol encoding, which supports a block-division technique that reduces hardware overhead. Our experimental results show that the proposed technique achieves both a high compression ratio and low power dissipation, making it an attractive solution for efficient test data compression.
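
Twin symbol encoding builds on run-length coding of test patterns. A generic run-length encoder, shown here as a baseline sketch rather than the letter's exact codec, emits (symbol, run-length) pairs:

```python
def run_length_encode(bits: str):
    """Generic run-length encoder for a test-pattern bit string:
    collapse each maximal run of identical symbols into a
    (symbol, run_length) pair."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs
```

The twin-symbol scheme additionally assigns codewords to these runs and divides the pattern into blocks to cut decoder hardware; those steps are omitted here.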

BTC Algorithm Utilizing Compression Method of Bitmap and Quantization Data for Image Compression

  • 조문기; 윤영섭
    • 전자공학회논문지 / Vol. 49, No. 10 / pp. 135-141 / 2012
  • BTC image compression is widely used to reduce frame memory size in LCD overdrive. To raise the compression ratio of BTC image compression, the bitmap data or the quantization data must themselves be compressed. In this paper, we propose the CMBQ-BTC (CMBQ: compression method of bitmap and quantization data) algorithm to improve the compression ratio. Simulations comparing PSNR and compression ratio against conventional BTC algorithms confirm the efficiency of the proposed algorithm.
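
For context, classic BTC encodes each block as a 1-bit bitmap plus two quantization levels; CMBQ-BTC then compresses that bitmap and quantization data further. The sketch below shows only the classic BTC step on a flat list of pixel values, with the levels taken as the means of the pixels below and above the block mean (the further CMBQ compression step is omitted):

```python
def btc_encode(block):
    """Classic BTC sketch: bitmap of pixels at/above the block mean,
    plus the mean of each of the two pixel groups as the two levels."""
    mean = sum(block) / len(block)
    bitmap = [p >= mean for p in block]
    highs = [p for p, b in zip(block, bitmap) if b]
    lows = [p for p, b in zip(block, bitmap) if not b]
    high = sum(highs) / len(highs) if highs else mean
    low = sum(lows) / len(lows) if lows else mean
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    """Reconstruct the block from the bitmap and the two levels."""
    return [high if b else low for b in bitmap]
```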

A Study for Efficiency Improvement of Compression Algorithm with Selective Data Distinction

  • 장승주
    • 한국정보통신학회논문지 / Vol. 17, No. 4 / pp. 902-908 / 2013
  • To improve compression efficiency, this paper does not compress data unconditionally; instead, it selectively examines the data to decide whether to compress it. By avoiding cases where compression yields little gain, unnecessary compression can be skipped, and reducing these unnecessary operations improves the performance of the compression algorithm. In particular, data to which a compression algorithm has already been applied is not compressed again. We implemented the proposed function and ran experiments on the implementation; the results confirm that it operates correctly.
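
One common way to detect already-compressed data is a byte-entropy test: compressed streams look nearly random, so their entropy approaches 8 bits per byte. This is a sketch of that general idea, not the paper's specific distinction criterion, and the 7.5 bits/byte threshold is an assumed value:

```python
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def selective_compress(data: bytes, threshold: float = 7.5) -> bytes:
    """Skip compression when the data already looks compressed
    (near-maximal entropy); otherwise compress it."""
    if byte_entropy(data) >= threshold:
        return data
    return zlib.compress(data)
```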

A New ROM Compression Method for Continuous Data

  • 양병도; 김이섭
    • 대한전자공학회논문지SD / Vol. 40, No. 5 / pp. 354-360 / 2003
  • A new ROM compression method for continuous data is proposed. It is based on two newly proposed ROM compression algorithms. One is a region-selection ROM compression algorithm, which divides the data space into regions by magnitude and address and selectively stores only the regions that contain data. The other is a quantization-ROM and error-ROM compression algorithm, which stores the quantized data and the quantization error separately. Using these two algorithms, ROM size reductions of 40-60% are obtained for various kinds of continuous data.
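
The quantization-ROM/error-ROM split can be illustrated with a small sketch: coarse quantized values go to one table and the small residuals to another, so each table needs fewer bits per entry than the original samples. The step size 16 is an assumed parameter, not taken from the paper:

```python
def split_quantized(samples, step=16):
    """Split samples into coarse quantized values (quantization ROM)
    and small residuals in the range 0..step-1 (error ROM)."""
    quantized = [s // step for s in samples]  # quantization ROM
    error = [s % step for s in samples]       # error ROM
    return quantized, error

def reconstruct(quantized, error, step=16):
    """Recombine the two tables losslessly."""
    return [q * step + e for q, e in zip(quantized, error)]
```

For smooth, continuous data the quantized table changes slowly, which is what makes it compress well.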

A Simulation Framework for Wireless Compressed Data Broadcast

  • Seokjin Im
    • International Journal of Advanced Culture Technology / Vol. 11, No. 2 / pp. 315-322 / 2023
  • Intelligent IoT environments that accommodate very large numbers of clients require technologies that provide a secure information service regardless of the number of clients. Wireless data broadcast is an information service technique that ensures scalability by delivering data to all clients simultaneously, regardless of how many there are. In wireless data broadcasting, clients scan the wireless channel linearly to find their data, so client access time is strongly affected by the length of the broadcast cycle. Broadcasting compressed data can shorten the broadcast cycle and thus reduce client access time. A simulation framework that can evaluate the performance of data broadcasting under different compression algorithms is therefore essential. In this paper, we propose a simulation framework for evaluating the performance of data broadcasting with data compression. The framework is designed so that different compression algorithms can be applied according to the data characteristics. Besides evaluating performance with respect to the data, it can also evaluate performance with respect to the data scheduling technique and the kinds of queries clients process. We implement the proposed framework and use it, with several compression algorithms applied, to demonstrate the performance of compressed data broadcasting.
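
A toy model, not the paper's framework, of why the broadcast cycle dominates access time: a client tunes in at a uniformly random point in the cycle, so its expected wait is roughly half the cycle length, and shrinking the cycle via compression shrinks the wait proportionally.

```python
import random

def mean_access_time(cycle_length: float, trials: int = 10000, seed: int = 1):
    """Estimate the mean wait of clients that tune in at uniformly
    random offsets within one broadcast cycle."""
    rng = random.Random(seed)
    waits = [rng.uniform(0, cycle_length) for _ in range(trials)]
    return sum(waits) / trials
```

A real framework would also model scheduling order, indexing, and query types, as the abstract notes.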

A Study of Big Time Series Data Compression Based on CNN Algorithm

  • 황상호; 김성호; 김성재; 김태근
    • 대한임베디드공학회논문지 / Vol. 18, No. 1 / pp. 1-7 / 2023
  • In this paper, we implement a lossless compression technique for time-series data generated by IoT (Internet of Things) devices in order to reduce disk space. The proposed technique reduces the size of the encoded data by selectively applying CNN (Convolutional Neural Network)-based prediction or Delta encoding, depending on the situation, within a forecasting algorithm that predicts the time-series data. It then sequentially performs zigzag encoding, splitting, and bit packing to increase the compression ratio. The proposed method achieves a compression ratio of up to 1.60 on the original data.
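
The delta-plus-zigzag stage of such pipelines can be sketched directly: deltas between successive samples are small but signed, and the standard zigzag mapping (0 → 0, -1 → 1, 1 → 2, -2 → 3, ...) turns them into small unsigned integers that a bit packer can store compactly. This is a generic sketch of those two steps, not the paper's implementation:

```python
def zigzag_encode(n: int) -> int:
    """Map a signed integer to a small unsigned integer."""
    return (n << 1) if n >= 0 else ((-n) << 1) - 1

def zigzag_decode(z: int) -> int:
    """Inverse of zigzag_encode."""
    return (z >> 1) if z % 2 == 0 else -((z + 1) >> 1)

def delta_then_zigzag(samples):
    """Delta-encode a time series, then zigzag-map each delta so the
    downstream bit packer sees small unsigned values."""
    prev = 0
    out = []
    for s in samples:
        out.append(zigzag_encode(s - prev))
        prev = s
    return out
```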

Compression of Time-varying Volume Data Using Daubechies D4 Filter

  • 허영주; 이중연; 구기범
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2007 Conference, Part 1 / pp. 982-987 / 2007
  • The need for compression schemes for volume data has grown with increasing data sizes and network use. Various compression schemes exist, and one can be chosen depending on the data type, application field, preferences, and so on. However, the volume of data produced by application scientists has grown enormously, and most scientific data takes the form of 3D volumes. For 2D images and 3D moving pictures, many standards are established and widely used, but for 3D volume data, especially time-varying volume data, it is very difficult to find applicable compression schemes. In this paper, we present a compression scheme for encoding time-varying volume data, aimed at visualization. The scheme uses MPEG's I- and P-frame concept to raise the compression ratio, and it transforms the volume data using the Daubechies D4 filter before encoding, so that image quality is better than with other wavelet-based compression schemes. The scheme encodes time-varying volume data composed of single-precision floating-point values. In addition, it provides random reconstruction access per unit and can compress large time-varying volume data by exploiting inter-frame correlation while preserving image quality.
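
For reference, one level of the textbook Daubechies D4 analysis transform with periodic boundary handling looks as follows. This is a generic 1D version for illustration, not the paper's volume encoder:

```python
import math

def d4_transform(x):
    """One level of the Daubechies D4 wavelet transform (periodic
    boundaries). Returns (approximation, detail) coefficient lists.
    Input length must be even."""
    s = math.sqrt(3)
    d = 4 * math.sqrt(2)
    h = [(1 + s) / d, (3 + s) / d, (3 - s) / d, (1 - s) / d]  # scaling filter
    g = [h[3], -h[2], h[1], -h[0]]                            # wavelet filter
    n = len(x)
    approx, detail = [], []
    for i in range(0, n, 2):
        approx.append(sum(h[k] * x[(i + k) % n] for k in range(4)))
        detail.append(sum(g[k] * x[(i + k) % n] for k in range(4)))
    return approx, detail
```

For smooth data the detail coefficients are near zero, which is what makes subsequent encoding effective.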


A Pattern Matching Extended Compression Algorithm for DNA Sequences

  • Murugan, A.; Punitha, K.
    • International Journal of Computer Science & Network Security / Vol. 21, No. 8 / pp. 196-202 / 2021
  • DNA sequencing provides fundamental data for genomics, bioinformatics, biology, and many other research areas. With the rapid evolution of DNA sequencing technology, a massive amount of genomic data, mainly DNA sequences, is produced every day, demanding ever more storage and bandwidth. Managing, analyzing, and especially storing these large amounts of data has become a major scientific challenge for bioinformatics. Such large volumes of data also require fast transmission, effective storage, and quick access to any record. Data storage costs account for a considerable proportion of the total cost of producing and analyzing DNA sequences. There is a particular need to control the disk storage required for DNA sequences, but standard compression techniques fail to compress these sequences well, and several specialized techniques have been introduced for this purpose; lossless compression techniques have therefore become necessary. In this paper, we describe a new DNA compression mechanism, a pattern-matching extended compression algorithm, that reads the input sequence in segments, finds matching patterns, and stores them in a permanent or temporary table based on the number of bases. The remaining unmatched sequence is converted into binary form, grouped into seven-bit units, and these units are converted into ASCII characters. Finally, the algorithm dynamically calculates the compression ratio. The results show that the pattern-matching extended compression algorithm outperforms cutting-edge compressors and proves its efficiency in terms of compression ratio regardless of the file size.
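
The seven-bit packing step for the unmatched residue can be sketched as follows. The 2-bit base table is an assumed encoding (the abstract does not specify one), so this illustrates the mechanism rather than reproducing the paper's exact mapping:

```python
BASE_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def pack_bases_to_ascii(seq: str) -> str:
    """Map each base to 2 bits, concatenate, zero-pad to a multiple
    of 7 bits, and emit one ASCII character per 7-bit group."""
    bits = "".join(BASE_BITS[b] for b in seq)
    bits += "0" * (-len(bits) % 7)  # pad to a 7-bit boundary
    return "".join(chr(int(bits[i:i + 7], 2)) for i in range(0, len(bits), 7))
```

Packing 2-bit bases into 7-bit ASCII keeps every output character printable-range (< 128) while storing 3.5 bases per character.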

Wavelet Compression Method with Minimum Delay for Mobile Tele-cardiology Applications

  • 김병수; 유선국; 이문형
    • 대한전기학회논문지:시스템및제어부문D / Vol. 53, No. 11 / pp. 786-792 / 2004
  • Wavelet-based ECG data compression has become an attractive and efficient method in many mobile tele-cardiology applications, but the large data blocks required for high compression performance introduce serious delay. In this paper, a new wavelet compression method with minimum delay is proposed. It decides the type and compression ratio (CR) of each block adaptively, according to the standard deviation of the input ECG data, while keeping the block size minimal. The compression performance of the proposed algorithm on different MIT ECG records was analyzed in comparison with other ECG compression algorithms. In addition to measuring processing delay, compression efficiency and the sensitivity of reconstruction to errors were evaluated via random-noise simulation models. The results show that the proposed algorithm achieves both a lower PRD than the other algorithms at the same CR and minimum time for data acquisition, processing, and transmission.
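
The per-block decision driven by standard deviation can be sketched simply: high-activity blocks (large standard deviation, e.g. around the QRS complex) get a conservative compression ratio, while flat segments tolerate a higher one. The thresholds and CR values below are assumptions for illustration, not taken from the paper:

```python
import statistics

def choose_cr(block, low_cr=4.0, high_cr=16.0, sd_threshold=0.1):
    """Pick a per-block compression ratio from the block's standard
    deviation: active blocks get the conservative (low) CR, flat
    blocks the aggressive (high) CR."""
    sd = statistics.pstdev(block)
    return low_cr if sd > sd_threshold else high_cr
```

Keeping the block small, as the paper emphasizes, is what bounds the acquisition-to-transmission delay.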