• Title/Summary/Keyword: Information compression

Search Results: 2,191

A Study for Efficiency Improvement of Compression Algorithm with Selective Data Distinction (선별적 데이터 판별에 의한 압축 알고리즘 효율 개선에 관한 연구)

  • Jang, Seung Ju
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.4 / pp.902-908 / 2013
  • This paper proposes compressing data selectively, rather than applying compression unconditionally, in order to improve data compression efficiency. Whether to compress is determined by selective data distinction: when low compression efficiency is expected, unnecessary compression is avoided, which improves performance over the plain compression algorithm. In particular, data that has already been compressed usually cannot be compressed efficiently a second time, even when a compression algorithm is applied again, so such data need not be recompressed. We implemented the proposed function and performed experiments with the implementation; the results confirmed correct operation.
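The selective-distinction idea above can be sketched as follows. The abstract does not give the actual distinction criterion, so testing a small prefix with zlib against a fixed ratio threshold is purely an illustrative assumption:

```python
import zlib

def maybe_compress(data: bytes, sample_size: int = 4096, threshold: float = 0.9):
    """Compress only when a quick test on a sample suggests it is worthwhile.

    The prefix-sampling test and the 0.9 ratio threshold are illustrative
    assumptions, not the paper's actual distinction criterion.
    """
    sample = data[:sample_size]
    ratio = len(zlib.compress(sample)) / max(len(sample), 1)
    if ratio >= threshold:       # barely compressible, e.g. already-compressed data
        return data, False       # store as-is, skipping the unnecessary work
    return zlib.compress(data), True
```

With this sketch, repetitive data is compressed while high-entropy (e.g. already-compressed) data is stored unchanged, which is the behaviour the paper argues for.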

Lossless Compression Algorithm using Spatial and Temporal Information (시간과 공간정보를 이용한 무손실 압축 알고리즘)

  • Kim, Young Ro;Chung, Ji Yung
    • Journal of Korea Society of Digital Industry and Information Management / v.5 no.3 / pp.141-145 / 2009
  • In this paper, we propose an efficient lossless compression algorithm using spatial and temporal information. The proposed method achieves higher lossless compression of images than other lossless compression techniques. It is divided into two parts: a motion-adaptive predictor and a residual error coder. The proposed nonlinear predictor reduces prediction error by learning from its past prediction errors, selecting between the spatial and temporal prediction values according to each past error. The reduced error is coded with an existing context coding method. Experimental results show that the proposed algorithm outperforms existing context modeling methods.
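The per-pixel choice between spatial and temporal prediction can be sketched as below. The decaying error average and its weight are illustrative assumptions; the paper only states that selection is driven by past prediction errors:

```python
class AdaptivePredictor:
    """Choose between a spatial and a temporal prediction value based on
    which predictor has been more accurate recently (a sketch of the
    paper's idea; the decaying-average error model is an assumption)."""

    def __init__(self, decay: float = 0.9):
        self.err = {"spatial": 0.0, "temporal": 0.0}
        self.decay = decay

    def predict(self, spatial_pred: int, temporal_pred: int) -> int:
        # pick the predictor whose recent errors were smaller
        if self.err["spatial"] <= self.err["temporal"]:
            return spatial_pred
        return temporal_pred

    def update(self, spatial_pred: int, temporal_pred: int, actual: int) -> None:
        # learn from the error each predictor *would* have made
        for name, p in (("spatial", spatial_pred), ("temporal", temporal_pred)):
            self.err[name] = self.decay * self.err[name] + abs(actual - p)
```

The residual (actual minus the chosen prediction) would then go to the context coder mentioned in the abstract.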

Image compression using K-mean clustering algorithm

  • Munshi, Amani;Alshehri, Asma;Alharbi, Bayan;AlGhamdi, Eman;Banajjar, Esraa;Albogami, Meznah;Alshanbari, Hanan S.
    • International Journal of Computer Science & Network Security / v.21 no.9 / pp.275-280 / 2021
  • With the development of communication networks, the exchange and transmission of information have grown rapidly. Millions of images are sent via social media every day, and wireless sensor networks now capture images in applications such as traffic lights, roads, and malls. There is therefore a need to reduce the size of these images while maintaining an acceptable degree of quality. In this paper, we use Python to apply the K-means clustering algorithm to compress RGB images; PSNR, MSE, and SSIM are used to measure image quality after compression. Compression reduced the images to nearly half their original size with k = 64. For the SSIM measure, the higher the k, the greater the similarity between the two images, a good indicator that image size can be reduced significantly. The proposed compression technique, powered by the K-means clustering algorithm, is useful for compressing images and reducing their size.
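The K-means quantization step can be sketched in pure Python (the paper uses k = 64 on full RGB images; the tiny k, the fixed iteration count, and the unique-color initialization below are simplifying assumptions):

```python
import random

def kmeans_palette(pixels, k=4, iters=10, seed=0):
    """Quantize RGB pixels to k palette colors — a minimal sketch of
    K-means image compression: store a k-color palette plus one small
    index per pixel instead of full 24-bit RGB."""
    rng = random.Random(seed)
    # initialize centers with distinct colors where possible
    centers = list(dict.fromkeys(pixels))[:k]
    if len(centers) < k:
        centers += rng.sample(pixels, k - len(centers))

    def nearest(p):
        return min(range(len(centers)),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))

    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in pixels:
            clusters[nearest(p)].append(p)
        for c, members in enumerate(clusters):
            if members:  # move each center to its cluster mean
                centers[c] = tuple(sum(ch) / len(members) for ch in zip(*members))

    labels = [nearest(p) for p in pixels]
    return centers, labels
```

Decoding simply maps each label back to its palette color, which is where the lossy size reduction (and the PSNR/SSIM degradation the paper measures) comes from.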

An Optimized Iterative Semantic Compression Algorithm And Parallel Processing for Large Scale Data

  • Jin, Ran;Chen, Gang;Tung, Anthony K.H.;Shou, Lidan;Ooi, Beng Chin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.6 / pp.2761-2781 / 2018
  • With the continuous growth of data size and the use of compression technology, data reduction has great research value and practical significance. Addressing the shortcomings of existing semantic compression algorithms, this paper analyzes the ItCompress algorithm and designs a bidirectional order-selection method based on interval partitioning, named the Optimized Iterative Semantic Compression Algorithm (Optimized ItCompress). To further improve speed, we propose a parallel optimized iterative semantic compression algorithm using GPUs (POICAG) and an optimized iterative semantic compression algorithm using Spark (DOICAS). Extensive experiments on four kinds of datasets fully verify the efficiency of the proposed algorithms.
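The representative-row idea behind ItCompress can be sketched as below. This is a heavily simplified illustration of semantic table compression, not the paper's optimized algorithm: representatives are given rather than iteratively selected, and there is no interval partitioning:

```python
def semantic_encode(rows, representatives):
    """Encode each row as (representative index, outlier columns) —
    the core idea of ItCompress-style semantic compression, simplified."""
    encoded = []
    for row in rows:
        # pick the representative matching the most columns
        best = max(range(len(representatives)),
                   key=lambda i: sum(r == v for r, v in zip(representatives[i], row)))
        # store only the columns where the row deviates from it
        outliers = {j: v for j, v in enumerate(row) if representatives[best][j] != v}
        encoded.append((best, outliers))
    return encoded

def semantic_decode(encoded, representatives):
    out = []
    for idx, outliers in encoded:
        row = list(representatives[idx])
        for j, v in outliers.items():
            row[j] = v
        out.append(tuple(row))
    return out
```

Rows close to a representative cost almost nothing to store, which is why representative selection (the part the paper optimizes) dominates compression quality.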

PoW-BC: A PoW Consensus Protocol Based on Block Compression

  • Yu, Bin;Li, Xiaofeng;Zhao, He
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1389-1408 / 2021
  • Proof-of-Work (PoW) is the first and still the most common consensus protocol in blockchains, but it is costly and energy-intensive. To address these problems, we propose a consensus algorithm named Proof-of-Work-and-Block-Compression (PoW-BC), an improvement of PoW that compresses blocks and adjusts consensus parameters. The algorithm is designed to encourage the reduction of block size, which improves transmission efficiency and reduces the disk space needed to store blocks. A transaction optimization model and a block compression model are proposed to compress block data with a smaller compression ratio and shorter compression/decompression time. The block compression ratio is used to adjust the mining difficulty and per-block transaction count of the PoW-BC protocol according to the consensus-parameter adjustment model. Experiments and analysis show that PoW-BC improves transaction throughput and reduces block interval and energy consumption.
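The coupling between compression ratio and consensus parameters might look something like the following. The linear form and the `alpha` weight are invented for illustration; the paper's actual adjustment model is not given in the abstract:

```python
def adjust_parameters(base_difficulty, base_tx_count, compression_ratio, alpha=0.5):
    """Reward smaller blocks: a better (lower) compression ratio eases
    mining difficulty and allows more transactions per block.

    compression_ratio = compressed_size / original_size, in (0, 1].
    The linear model and alpha are illustrative assumptions only.
    """
    saving = 1 - compression_ratio              # fraction of block size saved
    difficulty = base_difficulty * (1 - alpha * saving)
    tx_count = int(base_tx_count * (1 + alpha * saving))
    return difficulty, tx_count
```

Under any such model, miners who compress blocks well face an easier puzzle and may include more transactions, which is the incentive structure the abstract describes.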

Rebuilding of Image Compression Algorithm Based on the DCT (discrete cosine transform) (이산코사인변환 기반 이미지 압축 알고리즘에 관한 재구성)

  • Nam, Soo-Tai;Jin, Chan-Yong
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.1 / pp.84-89 / 2019
  • JPEG is the most widely used standard image compression technology. This research introduces the JPEG image compression algorithm and describes each step in compression and decompression. Image compression is the application of data compression to digital images. The DCT (discrete cosine transform) is a technique for converting the spatial domain to the frequency domain. First, the image is divided into 8 by 8 pixel blocks. Second, working from left to right, top to bottom, the DCT is applied to each block. Third, each block is compressed through quantization. Fourth, the matrix of compressed blocks that makes up the image is stored in a greatly reduced amount of space. Finally, if desired, the image is reconstructed through decompression, a process that uses the IDCT (inverse discrete cosine transform). The purpose of this research is to review the whole process of image compression and decompression using the discrete cosine transform method.
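The transform and quantization steps above can be sketched directly. This is a textbook 2-D DCT-II on one 8×8 block with uniform quantization; real JPEG additionally level-shifts samples and uses a per-frequency quantization table:

```python
import math

def dct_8x8(block):
    """2-D DCT-II of an 8x8 block — the transform step of the JPEG
    pipeline (pure-Python sketch, no 128 level shift)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, q=16):
    # uniform step size for illustration; JPEG uses a quantization table
    return [[round(c / q) for c in row] for row in coeffs]
```

A flat block concentrates all its energy in the DC coefficient, and quantization zeroes the near-zero AC terms — which is exactly where JPEG's compression comes from.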

A Simulation Framework for Wireless Compressed Data Broadcast

  • Seokjin Im
    • International Journal of Advanced Culture Technology / v.11 no.2 / pp.315-322 / 2023
  • Intelligent IoT environments that accommodate very large numbers of clients require technologies that provide reliable information service regardless of the number of clients. Wireless data broadcast is an information service technique that ensures such scalability by delivering data to all clients simultaneously. In wireless data broadcasting, clients access the wireless channel linearly to locate data, so client access time is strongly affected by the broadcast cycle. Compression-based data broadcasting can shorten the broadcast cycle and thus reduce client access time. A simulation framework that can evaluate broadcast performance under different data compression algorithms is therefore essential. In this paper, we propose such a framework, designed so that different compression algorithms can be applied according to the data characteristics. Beyond evaluating performance by data type, the framework can also evaluate performance according to the data scheduling technique and the kind of queries the client processes. We implement the proposed framework and use it to evaluate the performance of data broadcasting with several compression algorithms, demonstrating the behavior of compressed broadcasting.
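The core relationship the framework measures — shorter broadcast cycle, shorter access time — can be sketched with a toy model in which each broadcast item takes one tick and a client tunes in at an arbitrary point in the repeating cycle (the one-tick item size and the averaging over tune-in points are simplifying assumptions):

```python
def access_time(cycle, tune_in, item):
    """Ticks a client waits, tuning in at index `tune_in`, until `item`
    appears in the repeating broadcast schedule `cycle`."""
    n = len(cycle)
    for wait in range(n):
        if cycle[(tune_in + wait) % n] == item:
            return wait + 1  # include the tick that delivers the item
    raise ValueError("item not in cycle")

def mean_access_time(cycle, item):
    """Average access time over all possible tune-in points."""
    n = len(cycle)
    return sum(access_time(cycle, t, item) for t in range(n)) / n
```

Compressing items shortens the cycle, and the mean access time shrinks with it — the effect the proposed framework is built to quantify for real compression algorithms and scheduling techniques.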

Design of FRACTAL Image Compression Decoder (FRACTAL 영상 압축 Decoder 설계)

  • 김용배;박형근;임순자;김용환
    • Proceedings of the IEEK Conference / 1999.06a / pp.623-626 / 1999
  • As the information society advances, the need for mass information storage and retrieval grows. Digital image information is stored in retrieval systems, broadcast in television transmission, and exchanged over several kinds of telecommunication media. A major problem is that digital images are represented by large amounts of data. A useful feature of image compression is that it allows large amounts of data to be transmitted in less time. We therefore propose a parallel fractal transformation unit for a fractal image compression system.


An Improvement of Lossless Image Compression for Mobile Game (모바일 게임을 위한 개선된 무손실 이미지 압축)

  • Kim Se-Woong;Jo Byung-Ho
    • The KIPS Transactions:PartB / v.13B no.3 s.106 / pp.231-238 / 2006
  • In this paper, we propose a lossless compression method for images, which account for a considerable part of the total volume of a mobile game. To increase the compression rate, we reorganize the image in a preprocessing stage and then compress it with the Deflate algorithm defined in RFC 1951. In preprocessing, the dictionary size is chosen based on information about the image, exploiting the dictionary-based nature of the coder, and the image is restructured using pixel packing and DPCM prediction, which yields a better compression rate than compressing in the ordinary manner. Tests applying the proposed method to various mobile games show a 9.7% improvement in compression rate over the existing mobile image format.
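The DPCM-then-Deflate pipeline can be sketched with Python's `zlib`, which implements the RFC 1951 format the paper uses. The left-neighbour predictor on a flat byte stream is a simplifying assumption (the paper also applies pixel packing and tunes the dictionary size):

```python
import zlib

def dpcm_deflate(pixels: bytes) -> bytes:
    """DPCM-predict each byte from its left neighbour, then Deflate.
    Smooth images yield small, repetitive residuals that Deflate
    compresses far better than the raw pixels."""
    residuals = bytes((p - q) & 0xFF
                      for p, q in zip(pixels, b"\x00" + pixels[:-1]))
    return zlib.compress(residuals, 9)

def inflate_dpcm(data: bytes) -> bytes:
    """Invert the pipeline: inflate, then integrate the residuals."""
    residuals = zlib.decompress(data)
    out, prev = bytearray(), 0
    for r in residuals:
        prev = (prev + r) & 0xFF
        out.append(prev)
    return bytes(out)
```

On gradient-like data the residual stream is nearly constant, so the DPCM stage alone can cut the Deflate output by an order of magnitude while remaining perfectly lossless.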

Time Stamp Compression in RTP Protocols using Enhanced Negotiation Bits Decision Algorithm (RTP 프로토콜에서 Time Stamp필드의 압축을 위한 향상된 협상비트 결정 알고리즘)

  • Kim, Kyung-Shin
    • Journal of the Korea Society of Computer and Information / v.18 no.10 / pp.55-61 / 2013
  • An important issue in header compression is how to compress the dynamic fields that increase constantly between consecutive packets in the headers of IP wireless networks. Existing header compression schemes that eliminate repeated header fields include RFC 2507, RFC 3095, and the E-ROHC scheme. In this paper, I propose a new method of compressing the TS (timestamp) field, one of the dynamic fields of the RTP packet, into BCB (Basic Compression Bits) basic bits or NCB (Negotiation Compression Bits; BCB plus additional bits) bits. To verify the proposed header compression method, I simulate video packets over IP wireless networks using Visual SLAM.
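The BCB/NCB idea — encode the TS delta in a small basic field when it fits, fall back to a negotiated wider field, and send the full value only when necessary — can be sketched as follows. The 4-bit widths are illustrative assumptions; the paper's negotiated bit counts are not given in the abstract:

```python
def compress_ts(prev_ts: int, ts: int, bcb_bits: int = 4, ncb_extra: int = 4):
    """Encode an RTP timestamp as a delta against the previous packet.

    BCB if the delta fits in bcb_bits, NCB if it fits in the negotiated
    bcb_bits + ncb_extra, otherwise the full uncompressed field.
    Field widths here are illustrative, not the paper's values.
    """
    delta = ts - prev_ts
    if 0 <= delta < (1 << bcb_bits):
        return ("BCB", delta)
    if 0 <= delta < (1 << (bcb_bits + ncb_extra)):
        return ("NCB", delta)
    return ("FULL", ts)

def decompress_ts(prev_ts: int, token) -> int:
    kind, value = token
    return value if kind == "FULL" else prev_ts + value
```

Since video RTP timestamps usually advance by a fixed, small increment per frame, most packets take the BCB path, which is where the header savings come from.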