• Title/Summary/Keyword: Compression Algorithm

Search results: 1,100 items

A Study on the Wavelet Based Algorithm for Lossless and Lossy Image Compression

  • 안종구;추형석
    • The Transactions of the Korean Institute of Electrical Engineers: Systems and Control Part D
    • /
    • Vol. 55, No. 3
    • /
    • pp.124-130
    • /
    • 2006
  • A wavelet-based image compression system allowing both lossless and lossy image compression is proposed in this paper. The proposed algorithm consists of two stages. The first stage uses the wavelet packet transform and a quad-tree coding scheme for lossy compression. In the second stage, the residue image, obtained as the difference between the original image and the lossy reconstruction, is coded for lossless image compression using the integer wavelet transform and a context-based predictive technique with error feedback. The proposed wavelet-based algorithm allows an optional lossless reconstruction of a given image, transmits image data progressively, and chooses an appropriate wavelet filter in each stage. For lossy compression, the proposed algorithm improves PSNR by up to 1 dB on high-frequency images compared with the JPEG-2000 and S+P algorithms. For lossless compression, it improves the compression rate by up to 0.39 on high-frequency images compared with existing algorithms.
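
A minimal sketch of the two-stage idea described in the abstract above, assuming NumPy and PyWavelets (pywt) are available. The uniform quantizer and the zlib residue coder are simple stand-ins for the paper's wavelet-packet/quad-tree coder and context-based predictive coder; the wavelet name and quantization step are illustrative choices.

```python
# Two-stage sketch: lossy wavelet compression, then a losslessly coded residue.
# The quantizer and residue coder below are stand-ins, not the paper's coders.
import numpy as np
import pywt
import zlib

def lossy_stage(image, wavelet="bior4.4", step=8.0):
    """Stage 1: wavelet transform + uniform quantization (lossy)."""
    coeffs = pywt.wavedec2(image.astype(np.float64), wavelet, level=3)
    q = [np.round(coeffs[0] / step)]
    q += [tuple(np.round(band / step) for band in detail) for detail in coeffs[1:]]
    # Reconstruct the lossy image from the de-quantized coefficients.
    deq = [q[0] * step] + [tuple(band * step for band in detail) for detail in q[1:]]
    recon = pywt.waverec2(deq, wavelet)
    recon = np.clip(np.round(recon), 0, 255).astype(np.uint8)
    return q, recon[: image.shape[0], : image.shape[1]]

def lossless_residue(image, lossy_recon):
    """Stage 2: code the residue so the original can be recovered exactly."""
    residue = image.astype(np.int16) - lossy_recon.astype(np.int16)
    return zlib.compress(residue.tobytes())   # stand-in for the context coder

def decode_lossless(lossy_recon, residue_bytes):
    residue = np.frombuffer(zlib.decompress(residue_bytes), dtype=np.int16)
    residue = residue.reshape(lossy_recon.shape)
    return (lossy_recon.astype(np.int16) + residue).astype(np.uint8)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # toy image
    _, lossy = lossy_stage(img)
    residue = lossless_residue(img, lossy)
    assert np.array_equal(decode_lossless(lossy, residue), img)  # exact recovery
```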

An Optimized Iterative Semantic Compression Algorithm And Parallel Processing for Large Scale Data

  • Jin, Ran;Chen, Gang;Tung, Anthony K.H.;Shou, Lidan;Ooi, Beng Chin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 12, No. 6
    • /
    • pp.2761-2781
    • /
    • 2018
  • With the continuous growth of data size and the use of compression technology, data reduction has great research value and practical significance. To address the shortcomings of existing semantic compression algorithms, this paper builds on an analysis of the ItCompress algorithm and designs a method of bidirectional order selection based on interval partitioning, named the Optimized Iterative Semantic Compression Algorithm (Optimized ItCompress Algorithm). To further improve the speed of the algorithm, we propose a parallel optimized iterative semantic compression algorithm using GPUs (POICAG) and an optimized iterative semantic compression algorithm using Spark (DOICAS). Extensive experiments were carried out on four kinds of datasets, which verify the efficiency of the proposed algorithms.
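
A rough sketch of the representative-row encoding that ItCompress-style semantic compression relies on, assuming NumPy. The representative rows are taken as given here; choosing them well (the paper's bidirectional order selection with interval partitioning) and the GPU/Spark parallelization are not reproduced.

```python
# Semantic compression sketch: each table row is stored as a pointer to its
# closest representative row plus the columns that fall outside a tolerance.
import numpy as np

def encode(table, reps, tol):
    """Encode rows as (rep_index, outlier_columns, outlier_values) triples."""
    encoded = []
    for row in table:
        mismatches = np.abs(reps - row) > tol            # shape (k, n_cols)
        best = int(np.argmin(mismatches.sum(axis=1)))     # fewest outliers
        cols = np.where(mismatches[best])[0]
        encoded.append((best, cols, row[cols].copy()))
    return encoded

def decode(encoded, reps):
    rows = []
    for rep_idx, cols, vals in encoded:
        row = reps[rep_idx].copy()
        row[cols] = vals                                  # restore outliers exactly
        rows.append(row)
    return np.vstack(rows)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reps = rng.integers(0, 100, size=(4, 6)).astype(float)    # representative rows
    table = reps[rng.integers(0, 4, size=200)] + rng.normal(0, 0.5, size=(200, 6))
    tol = np.full(6, 2.0)                                     # per-column tolerance
    out = decode(encode(table, reps, tol), reps)
    assert np.allclose(out, table, atol=2.0)   # lossy, but within the tolerance
```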

Adaptive Prediction for Lossless Image Compression

  • Park, Sang-Ho
    • Korea Information Technology Applications Society: Conference Proceedings
    • /
    • 6th 2005 International Conference on Computers, Communications and Systems (Korea Information Technology Applications Society)
    • /
    • pp.169-172
    • /
    • 2005
  • A genetic-algorithm-based predictor for lossless image compression is proposed. We describe a genetic algorithm that learns a predictive model for lossless image compression. The resulting error image can be further compressed using entropy coding such as Huffman coding or arithmetic coding. We show that the proposed algorithm is feasible for lossless image compression.

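A toy sketch of the idea in the abstract above: search the weights of a causal linear predictor with a genetic algorithm so that the prediction-error image becomes cheap to entropy-code, assuming NumPy. The neighbour set (West, North, North-West), the GA operators, and the fitness function are generic illustrative choices, not the paper's.

```python
# Evolve causal linear-predictor weights with a toy GA; the residual image
# would then go to an entropy coder (Huffman/arithmetic).
import numpy as np

def residual(img, w):
    """Prediction error using West, North and North-West neighbours."""
    p = img.astype(np.float64)
    pred = w[0] * p[1:, :-1] + w[1] * p[:-1, 1:] + w[2] * p[:-1, :-1]
    return p[1:, 1:] - pred

def fitness(img, w):
    """Lower is better: total absolute prediction error (entropy proxy)."""
    return np.abs(np.rint(residual(img, w))).sum()

def ga_predict(img, pop_size=20, generations=30, seed=1):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 1.0, size=(pop_size, 3))
    for _ in range(generations):
        scores = np.array([fitness(img, w) for w in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]     # keep the best half
        a = parents[rng.integers(0, len(parents), pop_size)]
        b = parents[rng.integers(0, len(parents), pop_size)]
        pop = (a + b) / 2.0 + rng.normal(0, 0.05, (pop_size, 3))  # crossover + mutation
    return pop[np.argmin([fitness(img, w) for w in pop])]

if __name__ == "__main__":
    img = np.tile(np.arange(64, dtype=np.float64), (64, 1))    # toy ramp image
    w = ga_predict(img)
    err = np.rint(residual(img, w)).astype(np.int16)
    print("best weights:", np.round(w, 3), "| total abs error:", int(np.abs(err).sum()))
```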

Multi-Description Image Compression Coding Algorithm Based on Depth Learning

  • Yong Zhang;Guoteng Hui;Lei Zhang
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 2
    • /
    • pp.232-239
    • /
    • 2023
  • To address the poor compression quality of traditional image compression coding (ICC) algorithms, a multi-description ICC algorithm based on deep learning is put forward in this study. First, an image compression algorithm was designed based on multi-description coding theory. Image compression samples were collected, and the measurement matrix was calculated. The multi-description ICC sample set was then processed with a deep convolutional autoencoder (self-coding) network. Compressing the coded wavelet coefficients and synthesizing the multi-description image band sparse matrix yields the multi-description ICC sequence. Averaging the multi-description image coding data according to the positions of the effective single points finally realizes the compression coding of multi-description images. According to the experimental results, the designed algorithm consumes less time for image compression and exhibits better image compression quality and a better image reconstruction effect.
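
A minimal convolutional autoencoder sketch in the spirit of the network mentioned in the abstract above, assuming PyTorch. The layer sizes, the even/odd-column split into two descriptions, and the tiny training loop are illustrative choices rather than the paper's architecture.

```python
# Tiny convolutional autoencoder; two "descriptions" are formed from the
# even and odd pixel columns of each image (an illustrative simplification).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                    # 1x32x32 -> 8x8x8 latent
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def split_descriptions(img):
    """Even / odd columns form two descriptions of the same image."""
    return img[..., ::2], img[..., 1::2]

if __name__ == "__main__":
    model = ConvAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    imgs = torch.rand(16, 1, 32, 64)                     # toy batch of images
    d0, d1 = split_descriptions(imgs)                    # each 16 x 1 x 32 x 32
    for _ in range(5):                                   # tiny training loop
        for desc in (d0, d1):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(desc), desc)
            loss.backward()
            opt.step()
    print("reconstruction MSE:", float(nn.functional.mse_loss(model(d0), d0)))
```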

Compression-friendly Image Encryption Algorithm Based on Order Relation

  • Ganzorig Gankhuyag;Yoonsik Choe
    • Journal of Internet Technology
    • /
    • Vol. 21, No. 4
    • /
    • pp.1013-1024
    • /
    • 2020
  • In this paper, we introduce an image encryption algorithm that can be used in combination with compression algorithms. Existing encryption algorithms focus on either encryption strength or speed without regard to compression, whereas the proposed algorithm improves compression efficiency while ensuring security. Our encryption algorithm decomposes images into pixel values and pixel intensity subsets and computes the order of permutations. An encrypted image becomes unpredictable after permutation, and the order permutation reduces the discontinuity between signals in an image, increasing compression efficiency. The experimental results show that the security strength of the proposed algorithm is similar to that of existing algorithms. Additionally, we tested the algorithm with JPEG and JPEG 2000 at variable compression ratios. Compared to existing methods applied without encryption, the proposed algorithm significantly increases PSNR and SSIM values.
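
A simplified illustration of the order-permutation idea, assuming NumPy and the standard library: sorting makes the pixel stream smooth and therefore easy to compress, and the permutation needed to undo the sort is XOR-masked with a key-derived pseudo-random stream. The keystream construction and the stream layout here are hypothetical; the paper's exact decomposition into pixel-value and intensity subsets is not reproduced.

```python
# Compression-friendly, permutation-based encryption sketch (not the paper's
# exact construction): compress the sorted pixels, encrypt the inverse order.
import hashlib
import zlib
import numpy as np

def keystream(key: bytes, n: int) -> np.ndarray:
    """Deterministic pseudo-random uint32 stream derived from the key."""
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return np.random.default_rng(seed).integers(0, 2**32, size=n, dtype=np.uint32)

def encrypt(pixels: np.ndarray, key: bytes):
    order = np.argsort(pixels, kind="stable")             # permutation that sorts
    sorted_pixels = pixels[order]                          # smooth, compressible
    enc_order = order.astype(np.uint32) ^ keystream(key, order.size)
    return zlib.compress(sorted_pixels.tobytes()), enc_order

def decrypt(compressed_pixels: bytes, enc_order: np.ndarray, key: bytes):
    order = (enc_order ^ keystream(key, enc_order.size)).astype(np.int64)
    sorted_pixels = np.frombuffer(zlib.decompress(compressed_pixels), dtype=np.uint8)
    pixels = np.empty_like(sorted_pixels)
    pixels[order] = sorted_pixels                          # undo the sort
    return pixels

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=4096, dtype=np.uint8)    # flattened image
    blob, enc_order = encrypt(img, key=b"secret key")
    assert np.array_equal(decrypt(blob, enc_order, b"secret key"), img)
    print("compressed pixel stream:", len(blob), "bytes for", img.size, "pixels")
```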

A Study for Efficiency Improvement of Compression Algorithm with Selective Data Distinction

  • 장승주
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • Vol. 17, No. 4
    • /
    • pp.902-908
    • /
    • 2013
  • To improve data compression efficiency, this paper does not apply compression to all data unconditionally; instead, it selectively examines the data and decides whether to compress it. Whether to compress is determined through selective data discrimination. By avoiding cases in which the compression efficiency would be poor, unnecessary compression can be skipped, and reducing these unnecessary operations improves the performance of the compression algorithm. In particular, data to which a compression algorithm has already been applied is not compressed again. The function proposed in this paper was implemented, and experiments were performed on the implementation. The experimental results confirmed that the proposed method operates correctly.
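
A small sketch of the selective-compression idea from the abstract above, using Python's standard zlib. The byte-entropy threshold and the trial-compression fallback are illustrative heuristics standing in for the paper's discrimination rule.

```python
# Selective compression: skip buffers that already look compressed
# (high byte entropy); otherwise compress and keep the smaller form.
import math
import os
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def selective_compress(data: bytes, threshold: float = 7.5):
    """Return (compressed_flag, payload). Skip compression for high-entropy input."""
    if len(data) == 0 or byte_entropy(data) >= threshold:
        return False, data                     # likely already compressed: pass through
    packed = zlib.compress(data, level=6)
    if len(packed) >= len(data):               # compression did not help after all
        return False, data
    return True, packed

def selective_decompress(flag: bool, payload: bytes) -> bytes:
    return zlib.decompress(payload) if flag else payload

if __name__ == "__main__":
    text = b"abcabcabc" * 1000                 # highly redundant data
    already = os.urandom(4096)                 # high-entropy data (as if compressed)
    for buf in (text, already):
        flag, payload = selective_compress(buf)
        assert selective_decompress(flag, payload) == buf
        print(f"compressed={flag}, in={len(buf)}B, out={len(payload)}B")
```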

Design and Implementation of Run-Length/Byte-Packing Compression Algorithm to Improve Compressibility of Geographic Information Data

  • 윤석환;양승수;박석천
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • Vol. 21, No. 10
    • /
    • pp.1935-1942
    • /
    • 2017
  • Compression algorithms are widely used to compress geographic information data. However, when existing compression algorithms are applied directly to geographic information data, the continuity of the map data is insufficient and the data cannot be compressed as a single unit, so the compression ratio deteriorates. To address this problem, this paper designs and implements a Run-Length/Byte-Packing compression algorithm that combines the advantages of existing compression algorithms, makes geographic information data compressible, and improves compression and decompression speed. Evaluation of the implemented algorithm showed an average improvement of about 5% over existing compression algorithms, and confirmed that both the compression ratio and the decompression speed were improved.
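
A compact sketch of combining run-length encoding with byte packing for low-cardinality raster data, in plain Python. The stream layout (a 4-byte run count, 4-bit packed values, one count byte per run) is an illustrative format, not the one specified in the paper.

```python
# Run-Length + Byte-Packing sketch for raster values in 0..15 (e.g., map
# layer codes): runs are split into a packed value stream and a count stream.
import struct

def rle_pack(data: bytes) -> bytes:
    values, counts = [], []
    i = 0
    while i < len(data):
        v, run = data[i], 1
        while i + run < len(data) and data[i + run] == v and run < 255:
            run += 1
        values.append(v)                  # each value is assumed to fit in 4 bits
        counts.append(run)
        i += run
    packed_values = bytearray()
    for j in range(0, len(values), 2):    # byte-packing: two 4-bit values per byte
        hi = values[j] << 4
        lo = values[j + 1] if j + 1 < len(values) else 0
        packed_values.append(hi | lo)
    return struct.pack(">I", len(values)) + bytes(packed_values) + bytes(counts)

def rle_unpack(blob: bytes) -> bytes:
    n_runs = struct.unpack(">I", blob[:4])[0]
    n_value_bytes = (n_runs + 1) // 2
    packed_values = blob[4:4 + n_value_bytes]
    counts = blob[4 + n_value_bytes:4 + n_value_bytes + n_runs]
    out = bytearray()
    for j in range(n_runs):
        byte = packed_values[j // 2]
        v = (byte >> 4) if j % 2 == 0 else (byte & 0x0F)
        out.extend([v] * counts[j])
    return bytes(out)

if __name__ == "__main__":
    tile = bytes([0] * 120 + [3] * 40 + [7] * 300 + [3] * 52)   # toy raster row
    blob = rle_pack(tile)
    assert rle_unpack(blob) == tile
    print(f"{len(tile)} bytes -> {len(blob)} bytes")
```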

A Study on Extension Compression Algorithm of Mixed Text by Hangeul-Alphabet

  • Ji, Kang-yoo;Cho, Mi-nam;Hong, Sung-soo;Park, Soo-bong
    • The Institute of Electronics Engineers of Korea: Conference Proceedings
    • /
    • Proceedings of ITC-CSCC 2002, Vol. 1
    • /
    • pp.446-449
    • /
    • 2002
  • This paper presents an improved data compression algorithm for mixed text files containing 2-byte completion-type Hangeul and 1-byte alphabet characters. The original LZW algorithm compresses alphabet text files efficiently but compresses 2-byte completion-type Hangeul text files inefficiently. To solve this problem, a data compression algorithm using a compression table with a 2-byte prefix field and a 2-byte suffix field was developed, but it has another drawback: the compression ratio for alphabet text files decreases. In this paper, we propose an improved LZW algorithm in which the compression table of the Extended LZW (ELZW) algorithm uses a 2-byte prefix field as a pointer (index) into the table and a 1-byte suffix field as a repeat counter for overlapping or recursive text data. To increase the compression ratio, after the compression table is constructed, the table data are packed into bit strings of different lengths for alphabet characters, Hangeul characters, and pointers, respectively. The proposed ELZW algorithm outperforms the 1-byte LZW algorithm by 7.0125 percent and the 2-byte LZW algorithm by 11.725 percent.

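For reference, a compact version of the baseline byte-oriented LZW coder that the ELZW scheme above extends, in Python. Two-byte completion-type Hangeul simply appears as byte pairs here; the paper's prefix/suffix table layout and variable bit packing are not reproduced.

```python
# Compact baseline LZW coder operating on raw bytes.
def lzw_encode(data: bytes) -> list[int]:
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)            # new phrase gets the next code
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decode(codes: list[int]) -> bytes:
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        entry = table[code] if code in table else w + w[:1]   # KwKwK special case
        out += entry
        table[len(table)] = w + entry[:1]
        w = entry
    return bytes(out)

if __name__ == "__main__":
    sample = "한글과 alphabet이 섞인 텍스트 텍스트 텍스트".encode("utf-8")
    codes = lzw_encode(sample)
    assert lzw_decode(codes) == sample
    print(f"{len(sample)} bytes -> {len(codes)} codes")
```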

Medical Image Compression using Adaptive Subband Threshold

  • Vidhya, K
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 11, No. 2
    • /
    • pp.499-507
    • /
    • 2016
  • Medical imaging techniques such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and Ultrasound (US) produce a large amount of digital medical images. Hence, compression of digital images becomes essential and is highly desirable in medical applications to solve both storage and transmission problems. At the same time, an efficient image compression scheme that reduces the size of medical images without sacrificing diagnostic information is required. This paper proposes a novel threshold-based medical image compression algorithm that reduces the size of the medical image without degrading the diagnostic information. The algorithm uses a novel type of thresholding to maximize the Compression Ratio (CR) without sacrificing diagnostic information, and is designed to achieve high compression efficiency together with high fidelity, especially a Peak Signal to Noise Ratio (PSNR) greater than or equal to 36 dB. This value of PSNR is chosen because previous researchers have suggested that medical images with a PSNR of 30 dB to 50 dB retain diagnostic information. The compression algorithm uses one-level wavelet decomposition with threshold-based coefficient selection.
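
A small sketch of one-level wavelet decomposition with a per-subband threshold, assuming NumPy and PyWavelets. The rule threshold = k * std(subband) is an illustrative stand-in for the paper's adaptive threshold selection, and PSNR is computed to check reconstruction quality against the 36 dB target mentioned above.

```python
# One-level wavelet decomposition with per-subband thresholding of the
# detail coefficients; the approximation subband is kept untouched.
import numpy as np
import pywt

def compress_one_level(image, wavelet="db4", k=1.0):
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float64), wavelet)
    detail = []
    for band in (cH, cV, cD):
        thr = k * band.std()                       # per-subband threshold
        detail.append(np.where(np.abs(band) >= thr, band, 0.0))
    recon = pywt.idwt2((cA, tuple(detail)), wavelet)
    recon = recon[: image.shape[0], : image.shape[1]]
    return np.clip(np.round(recon), 0, 255).astype(np.uint8), detail

def psnr(original, recon):
    mse = np.mean((original.astype(np.float64) - recon.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(60, 200, size=(128, 128)).astype(np.uint8)   # toy "scan"
    recon, detail = compress_one_level(img, k=1.0)
    kept = sum(int(np.count_nonzero(b)) for b in detail)
    total = sum(b.size for b in detail)
    print(f"PSNR = {psnr(img, recon):.1f} dB, detail coefficients kept: {kept}/{total}")
```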

Wavelet Compression Method with Minimum Delay for Mobile Tele-cardiology Applications

  • 김병수;유선국;이문형
    • The Transactions of the Korean Institute of Electrical Engineers: Systems and Control Part D
    • /
    • Vol. 53, No. 11
    • /
    • pp.786-792
    • /
    • 2004
  • Wavelet-based ECG data compression has become an attractive and efficient method in many mobile tele-cardiology applications. However, the large data size required for high compression performance leads to a serious delay. In this paper, a new wavelet compression method with minimum delay is proposed. It decides the type and compression ratio (CR) of each block adaptively according to the standard deviation of the input ECG data while using a minimum block size. The compression performance of the proposed algorithm on different MIT ECG records was analyzed and compared with another ECG compression algorithm. In addition to measuring the processing delay, compression efficiency and reconstruction sensitivity to error were also evaluated via random-noise simulation models. The results show that the proposed algorithm achieves both a lower PRD than the other algorithm at the same CR and the minimum time for data acquisition, processing, and transmission.
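
A sketch of the block-wise decision described above, assuming NumPy and PyWavelets: the standard deviation of each short ECG block selects how aggressively its wavelet coefficients are pruned, and PRD measures reconstruction quality. The block size, standard-deviation thresholds, and retention ratios are illustrative values, not the paper's.

```python
# Block-wise ECG compression sketch: per-block std picks a coefficient
# retention ratio; each block is coded with a short 1-D wavelet transform.
import numpy as np
import pywt

def choose_retention(block, low=0.02, high=0.2):
    """Flat blocks keep few coefficients; active blocks (high std) keep more."""
    return 0.30 if block.std() > high else (0.15 if block.std() > low else 0.05)

def compress_block(block, wavelet="db4", level=3):
    coeffs, slices = pywt.coeffs_to_array(pywt.wavedec(block, wavelet, level=level))
    keep = max(1, int(choose_retention(block) * coeffs.size))
    discard = np.argsort(np.abs(coeffs))[:-keep]        # all but the largest coeffs
    coeffs[discard] = 0.0
    recon = pywt.waverec(pywt.array_to_coeffs(coeffs, slices, output_format="wavedec"),
                         wavelet)
    return recon[: block.size]

def prd(x, y):
    """Percent RMS difference between original and reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

if __name__ == "__main__":
    t = np.arange(1280) / 360.0                          # ~3.6 s at 360 Hz (MIT-BIH rate)
    ecg = 0.1 * np.sin(2 * np.pi * 1.2 * t) + 0.5 * (np.mod(t, 0.83) < 0.02)  # toy ECG
    block_size = 128                                     # small block => small delay
    recon = np.concatenate([compress_block(ecg[i:i + block_size])
                            for i in range(0, ecg.size, block_size)])
    print(f"PRD = {prd(ecg, recon[:ecg.size]):.2f} %")
```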