• Title/Summary/Keyword: Lempel-Ziv compression method (Lempel-Ziv 압축방법)


An efficient Hardware Architecture of Lempel-Ziv Compressor for Real Time Data Compression (실시간 데이터 압축을 위한 Lempel-Ziv 압축기의 효과적인 구조의 제안)

  • 진용선;정정화
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.37 no.3
    • /
    • pp.37-44
    • /
    • 2000
  • In this paper, an efficient hardware architecture of a Lempel-Ziv compressor for real-time data compression is proposed. Accumulated shift operations are the major bottleneck in the Lempel-Ziv algorithm, because many shifts are needed to prepare the dictionary buffer and the matching symbols. A new architecture for fast processing of the Lempel-Ziv algorithm is presented. It combines an optimization technique for the dictionary size, a new multi-symbol comparison method, and a rotational FIFO structure that makes the shift operations easy to control. For functional verification, the architecture was modeled in the C programming language and its operation was verified on a commercial DSP processor. The overall architecture was also designed in VHDL and synthesized on a commercial FPGA chip. Critical path analysis shows that the architecture runs at an input bit rate of 256 kbps with a 33 MHz clock frequency.

  • PDF
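The shift-and-match step the abstract refers to is the core of LZ77-style compression. As a point of reference only (the paper's contribution is a hardware reorganization of this step, not the algorithm itself), a minimal software sketch of sliding-window longest-match coding:

```python
def lz77_compress(data: bytes, window: int = 4096, max_len: int = 255):
    """Emit (offset, length, next_byte) triples by searching the sliding
    window for the longest match -- the step whose shift operations the
    proposed hardware architecture reorganizes."""
    out, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            while (length < max_len and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        best_len = min(best_len, len(data) - i - 1)  # keep one literal byte
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(tokens):
    out = bytearray()
    for off, length, nxt in tokens:
        for _ in range(length):
            out.append(out[-off])  # copies may overlap the output tail
        out.append(nxt)
    return bytes(out)
```

Note that the inner match loop is allowed to run past the current position, which is the self-overlapping match case mentioned in the suffix-tree entry below.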

DEM_Comp Software for Effective Compression of Large DEM Data Sets (대용량 DEM 데이터의 효율적 압축을 위한 DEM_Comp 소프트웨어 개발)

  • Kang, In-Gu;Yun, Hong-Sik;Wei, Gwang-Jae;Lee, Dong-Ha
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.2
    • /
    • pp.265-271
    • /
    • 2010
  • This paper discusses DEM_Comp, a new software package developed for effectively compressing large digital elevation model (DEM) data sets based on Lempel-Ziv-Welch (LZW) compression and Huffman coding. DEM_Comp was developed in C++ for Windows operating systems, tested on sites with different territorial attributes, and the results were evaluated. Recently, high-resolution DEMs have been obtained using new equipment and related technologies such as LiDAR (Light Detection and Ranging) and SAR (Synthetic Aperture Radar). DEM compression is useful because it reduces disk space and transmission bandwidth. Generally, data compression is divided into two processes: i) analyzing the relationships in the data and ii) deciding on the compression and storage methods. DEM_Comp uses a three-step algorithm: pre-processing the regular-grid DEM, Lempel-Ziv compression, and Huffman coding. With pre-processing alone on high- and low-relief terrain, the efficiency was approximately 83%; after all three steps it increased to 97%. Compared with general commercial compression software, this is approximately 14% better performance. DEM_Comp thus offers a more efficient way of distributing, storing, and managing large high-resolution DEMs.
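The LZW stage that DEM_Comp builds on can be sketched in a few lines; this is the textbook algorithm, not DEM_Comp's implementation, and the Huffman stage that follows it in the paper is omitted:

```python
def lzw_encode(data: bytes):
    """Textbook LZW: grow a phrase dictionary while scanning, emitting
    one integer code per longest known phrase."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, codes = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])
            dictionary[wc] = len(dictionary)
            w = bytes([b])
    if w:
        codes.append(dictionary[w])
    return codes

def lzw_decode(codes):
    """Rebuild the dictionary on the fly from the code stream."""
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = bytearray(w)
    for code in codes[1:]:
        # A code one step ahead of the dictionary is the cScSc corner case.
        entry = dictionary[code] if code in dictionary else w + w[:1]
        out.extend(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return bytes(out)
```

Regular-grid DEMs compress well under this scheme because neighbouring elevations produce long repeated phrases.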

A Lossless Vector Data Compression Using the Hybrid Approach of BytePacking and Lempel-Ziv in Embedded DBMS (임베디드 DBMS에서 바이트패킹과 Lempel-Ziv 방법을 혼합한 무손실 벡터 데이터 압축 기법)

  • Moon, Gyeong-Gi;Joo, Yong-Jin;Park, Soo-Hong
    • Spatial Information Research
    • /
    • v.19 no.1
    • /
    • pp.107-116
    • /
    • 2011
  • With the growth of the wireless Internet, location-based services built on spatial data have multiplied, from real-time traffic information to CNS (Car Navigation Systems) that guide mobile users to their destinations. However, current applications based on file systems are limited in how they can manage and store huge amounts of spatial data. To overcome this limitation, research on managing large amounts of spatial data in an embedded database system is needed. This study therefore proposes a lossless compression technique, applicable inside a DBMS, that combines BytePacking with the Lempel-Ziv method to store mass spatial data efficiently. We applied the proposed technique to actual data for the Seoul and Incheon metropolitan areas and compared it with existing methods on the same data by analyzing query processing time up to reconstruction. The comparison shows that the proposed technique outperforms previous techniques on spatial data demanding high positional accuracy.
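The abstract does not give the packing layout, so the following is only a plausible sketch of the hybrid idea: delta-encode vertex coordinates, byte-pack the small deltas, and run a Lempel-Ziv coder (zlib here) over the packed stream. The function names and the 2-byte delta assumption are illustrative:

```python
import struct
import zlib

def pack_polyline(coords):
    """Sketch of a BytePacking + Lempel-Ziv pipeline. The first integer
    coordinate is stored whole; later vertices are stored as 2-byte
    deltas (assuming |delta| < 32768, typical for dense vertices), and
    zlib, an LZ77-family coder, compresses the packed stream losslessly."""
    buf = bytearray(struct.pack("<i", coords[0]))
    for prev, cur in zip(coords, coords[1:]):
        buf += struct.pack("<h", cur - prev)
    return zlib.compress(bytes(buf))

def unpack_polyline(blob):
    """Exact inverse: decompress, then re-accumulate the deltas."""
    raw = zlib.decompress(blob)
    (cur,) = struct.unpack_from("<i", raw, 0)
    coords = [cur]
    for (d,) in struct.iter_unpack("<h", raw[4:]):
        cur += d
        coords.append(cur)
    return coords
```

A production scheme would need an escape for deltas that overflow 2 bytes; the sketch omits that for brevity.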

Finding the longest match in data compression using suffix trees (접미사 트리를 이용한 압축 기법에서 가장 긴 매치 찾기)

  • 나중채;박근수
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 1999.10a
    • /
    • pp.658-660
    • /
    • 1999
  • Ziv-Lempel coding compresses a string by replacing each repeated substring with a pointer to its earlier occurrence, so the method requires a dictionary of previously seen strings and fast string matching against it. The suffix tree is an efficient data structure for both tasks and has accordingly been applied to Ziv-Lempel coding: Fiala and Greene, and later Larsson, used McCreight's and Ukkonen's suffix tree construction algorithms, respectively, for LZ77 coding. Ziv-Lempel coding with a suffix tree includes a step that finds the longest match between the dictionary built so far, i.e. the suffix tree, and the string that remains to be compressed. This can be done by simply searching downward from the root of the suffix tree, but the running time is then dominated by the branching decision at each node: if a branch decision takes more than constant time, finding the longest match takes more than linear time overall. Moreover, this approach cannot exploit self-overlapping matches. Rodeh, Pratt, and Even observed that when McCreight's construction algorithm is used, the longest match can be found directly during construction; no such method was known for Ukkonen's algorithm. In this paper, we add a few steps to Ukkonen's algorithm so that the longest match is found in overall linear time.

  • PDF

Data compression for high speed data transmission (고속전송을 위한 V.42bis 데이터 압축 기법의 개선)

  • Cho, Sung-Ryul;Choi, Hyuk;Kim, Tae-Young;Kim, Tae-Jeong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.23 no.7
    • /
    • pp.1817-1823
    • /
    • 1998
  • V.42bis, a type of LZW (Lempel-Ziv-Welch) code, is well known as the international standard for asynchronous data compression. In this paper, we analyze several undesirable phenomena arising from the application of V.42bis to high-speed data transmission, and we propose a modified technique to overcome them. The proposed technique determines the proper size of the dictionary, one of the important factors affecting the compression ratio, and improves the dictionary generation method for a higher compression ratio. Furthermore, we analyze the problem of excessive mode changes and mitigate it by adjusting the threshold for mode change. As a result, the compression ratio varies less over time, which contributes to easier and better design and control of the buffer in high-speed data transmission.

  • PDF
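The mode-change problem can be pictured with a toy model: V.42bis switches between compressed and transparent mode based on the observed compression ratio, and a single shared threshold makes the mode oscillate when the ratio hovers near 1. The sketch below is an illustration with assumed names, not the paper's algorithm; separating the two thresholds (hysteresis) is one simple way to damp the oscillation:

```python
def mode_sequence(ratios, up=1.0, down=1.0):
    """Return 'C' (compressed) / 'T' (transparent) decisions per block,
    where each ratio is compressed_size / raw_size for that block.
    With up == down this is a single threshold; setting down > up adds
    hysteresis, so small fluctuations around 1.0 no longer flip the mode."""
    mode, out = "C", []
    for r in ratios:
        if mode == "C" and r > down:      # compression stopped paying off
            mode = "T"
        elif mode == "T" and r < up:      # compression pays off again
            mode = "C"
        out.append(mode)
    return out
```

On a ratio stream oscillating around 1.0, the single-threshold policy changes mode on nearly every block, while the hysteresis version stays put, which is the stability-of-ratio effect the abstract aims for.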

Data compression algorithm with two-byte codeword representation (2바이트 코드워드 표현방법에 의한 자료압축 알고리듬)

  • 양영일;김도현
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.3
    • /
    • pp.23-36
    • /
    • 1997
  • In this paper, a new data model for the hardware implementation of the Lempel-Ziv compression algorithm is proposed. The traditional model generates a codeword consisting of 3 bytes: the last symbol, the position, and the matched length. The MSB (most significant bit) of the last symbol is the compression flag and the remaining seven bits represent the character. We confine the value of the matched length to 128 instead of 256, so that it can be coded with seven bits only. In the proposed model, the codeword consists of 2 bytes: the merged symbol and the position. The MSB of the merged symbol is the compression flag; the remaining seven bits represent either the character or the matched length, according to the value of the compression flag. The proposed model reduces the compression ratio by 5% compared with the traditional model and can be adopted in existing hardware architectures. The incremental factors of the compression ratio are also analyzed in this paper.

  • PDF
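The 2-byte codeword layout described above is concrete enough to sketch directly; `pack_codeword` and `unpack_codeword` are illustrative names, not from the paper:

```python
def pack_codeword(is_match: bool, value: int, position: int) -> bytes:
    """Pack the 2-byte codeword from the abstract: the merged-symbol
    byte carries the compression flag in its MSB and a 7-bit payload
    (a literal character, or a matched length confined to < 128);
    the second byte is the match position."""
    assert 0 <= value < 128 and 0 <= position < 256
    merged = (0x80 if is_match else 0x00) | value
    return bytes([merged, position])

def unpack_codeword(cw: bytes):
    """Split the codeword back into (flag, 7-bit payload, position)."""
    merged, position = cw[0], cw[1]
    return bool(merged & 0x80), merged & 0x7F, position
```

This is where the 1-byte saving over the traditional 3-byte codeword comes from: the flag, and either the character or the length, share a single byte.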

A Lossless Medical Image Compression Using Variable Block (가변 블록을 이용한 의료영상 무손실 압축)

  • Lee, Jong-Sil;Gwon, O-Sang;Gu, Ja-Il;Han, Yeong-Hwan;Hong, Seung-Hong
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.4
    • /
    • pp.361-367
    • /
    • 1998
  • We study two image characteristics, smoothness and similarity, which give rise to local and global redundancy in image representation. Smoothness means that the gray-level values within a given block vary gradually rather than abruptly. Similarity means that any pattern in an image repeats itself elsewhere in the image. In this sense, we propose a lossless medical image compression scheme that exploits both types of redundancy. The proposed method segments the image into variable-size blocks and encodes them depending on the characteristics of each block. The proposed scheme compresses 10~40% better than other schemes such as Huffman, arithmetic, Lempel-Ziv, HINT (Hierarchical Interpolation), and the lossless JPEG scheme with one predictor.

  • PDF
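The variable-size block segmentation can be pictured as a recursive split driven by the smoothness test. The sketch below uses the max-min gray-level range as the smoothness measure, which is an assumption; the abstract does not specify the criterion:

```python
def split_blocks(img, x, y, size, threshold):
    """Recursively split a square block until the gray-level range inside
    it falls below `threshold` (the smoothness test). Smooth blocks are
    cheap to encode; busy blocks are split into four quadrants."""
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if max(vals) - min(vals) <= threshold or size == 1:
        return [(x, y, size)]
    half = size // 2
    blocks = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        blocks += split_blocks(img, x + dx, y + dy, half, threshold)
    return blocks
```

A smooth image stays a single block, while a single outlier pixel forces splits only around itself, which is exactly the adaptivity the variable-block scheme relies on.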

A Novel Error Detection Algorithm Based on the Structural Pattern of LZ78-Compression Data (LZ78 압축 데이터의 구조적 패턴에 기반한 새로운 오류 검출 알고리즘)

  • Gong, Myongsik;Kwon, Beom;Kim, Jinwoo;Lee, Sanghoon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.11
    • /
    • pp.1356-1363
    • /
    • 2016
  • In this paper, we propose a novel error detection algorithm for LZ78-compressed data. Conventional error detection adds a certain number of parity bits at transmission, and the receiver counts the bits set to '1' to detect errors. These methods add extra bits, increasing redundancy in the compressed data and reducing the effectiveness of the final compression. We instead propose an error detection algorithm that uses the structural properties of LZ78 compression, without adding bits to the compressed data. Simulation results show that the error detection ratio of the proposed algorithm is about 1.3 times higher than that of conventional algorithms.
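The structural property that makes LZ78 output checkable can be seen in a toy encoder: the i-th (index, symbol) pair can only reference a dictionary entry created earlier, so an index inflated by a bit error is detectable without parity. This sketch illustrates the kind of pattern the paper exploits, not its actual algorithm:

```python
def lz78_encode(data: str):
    """Minimal LZ78 encoder: emits (index, symbol) pairs, where index
    refers to a previously created dictionary entry (0 = empty phrase)."""
    dictionary = {"": 0}
    out, w = [], ""
    for ch in data:
        if w + ch in dictionary:
            w += ch
        else:
            out.append((dictionary[w], ch))
            dictionary[w + ch] = len(dictionary)
            w = ""
    if w:  # flush a trailing phrase that is already in the dictionary
        out.append((dictionary[w[:-1]], w[-1]))
    return out

def structurally_valid(pairs):
    """The i-th pair (0-based) may only reference indices 0..i, since
    only i dictionary entries exist when it is emitted; a violated
    bound signals a transmission error with no added redundancy."""
    return all(idx <= i for i, (idx, _) in enumerate(pairs))
```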

Framework Implementation of Image-Based Indoor Localization System Using Parallel Distributed Computing (병렬 분산 처리를 이용한 영상 기반 실내 위치인식 시스템의 프레임워크 구현)

  • Kwon, Beom;Jeon, Donghyun;Kim, Jongyoo;Kim, Junghwan;Kim, Doyoung;Song, Hyewon;Lee, Sanghoon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.11
    • /
    • pp.1490-1501
    • /
    • 2016
  • In this paper, we propose an image-based indoor localization system using parallel distributed computing. In order to reduce the computation time for indoor localization, a scale-invariant feature transform (SIFT) algorithm is executed in parallel using Apache Spark. Toward this goal, we propose a novel image processing interface for Apache Spark. The experimental results show that the proposed system is about 3.6 times faster than the conventional system.

Compression of Multispectral Images (멀티 스펙트럴 영상들의 압축)

  • Enrico Piazza
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.1
    • /
    • pp.28-39
    • /
    • 2003
  • This paper is an overview of research contributions by the authors on the use of compression techniques to handle high-resolution, multi-spectral images. Originally developed in the remote sensing context, the same techniques are here applied to food and medical images. The objective is to point out the potential of this kind of processing in different contexts such as remote sensing, food monitoring, and medical imaging, and to stimulate new research. Compression is based on the simple assumption that a relationship can be found between pixels that are close to one another in a multi-spectral image: there is a certain degree of correlation among pixels belonging to the same band in a close neighbourhood. Moreover, once a correlation with certain coefficients has been found in one band, those coefficients are quite probably similar to the ones calculated in the other bands. Based on this second observation, an algorithm was developed that reduces the number of bits per pixel from 16 to 4 in satellite remote-sensed multi-spectral images. Different methods are compared for speed and compression ratio, taking as reference three common algorithms, LZW (Lempel-Ziv-Welch), Huffman, and RLE (Run-Length Encoding), as used in common graphic formats such as GIF, JPEG, and PCX. The presented methods achieve speed and compression ratios similar to those of the commonly used programs and are to be preferred when decompression must be carried out on line, inside a main program, or when a custom compression algorithm is needed.

  • PDF
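The two observations above (correlation within a band's neighbourhood, and similar correlation coefficients across bands) can be illustrated with a first-order predictor; the least-squares fit below is an illustration of the idea, not the paper's algorithm:

```python
def band_predictor(band):
    """Fit x[i] ~ a * x[i-1] over one band (1-D least squares) and
    return the coefficient and the residuals. If neighbouring pixels
    are correlated, `a` is near 1 and the residuals are small, hence
    cheap to entropy-code; and `a` comes out the same for a band that
    differs only by a gain factor, the cross-band similarity the
    abstract exploits."""
    xs, ys = band[:-1], band[1:]
    a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    residuals = [y - a * x for x, y in zip(xs, ys)]
    return a, residuals
```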