• Title/Summary/Keyword: Vector compression

Search Results: 262

Medical Image Data Compression Using a Variable Block Size Vector Quantization (가변 블록 벡터양자화를 이용한 의용영상 데이터 압축)

  • 박종규;정회룡
    • Journal of Biomedical Engineering Research
    • /
    • v.10 no.2
    • /
    • pp.173-178
    • /
    • 1989
  • A vector quantization technique using a variable block size was applied to the compression of digitized X-ray films. Whether a VQ block should be subdivided is determined by an experimentally chosen threshold value. Simulation results show that the proposed vector quantizer performs well enough for medical image coding, making it applicable to PACS (Picture Archiving and Communication Systems).
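The threshold-driven subdivision described in this abstract can be sketched as a quadtree split. This is a minimal illustration assuming the subdivision test is a variance threshold; the paper only says the threshold is determined experimentally, so the exact criterion may differ.

```python
# Variable-block-size splitting sketch for VQ: a block is subdivided
# into four quadrants whenever its variance exceeds a threshold
# (variance as the split criterion is an assumption).

def variance(block):
    n = len(block) * len(block[0])
    mean = sum(sum(row) for row in block) / n
    return sum((p - mean) ** 2 for row in block for p in row) / n

def split_blocks(block, top, left, threshold, min_size=2):
    """Recursively split a square block; return (top, left, size)
    tuples for the leaf blocks that would be vector-quantized."""
    size = len(block)
    if size <= min_size or variance(block) <= threshold:
        return [(top, left, size)]
    h = size // 2
    leaves = []
    for dr, dc in [(0, 0), (0, h), (h, 0), (h, h)]:
        sub = [row[dc:dc + h] for row in block[dr:dr + h]]
        leaves += split_blocks(sub, top + dr, left + dc, threshold, min_size)
    return leaves

flat = [[10] * 4 for _ in range(4)]       # uniform block: kept whole
edgy = [[0, 0, 255, 255]] * 4             # strong edge: subdivided
print(len(split_blocks(flat, 0, 0, threshold=50.0)))  # 1
print(len(split_blocks(edgy, 0, 0, threshold=50.0)))  # 4
```

A flat block stays as one leaf while a high-contrast block splits into four 2×2 leaves, which is the behaviour the abstract attributes to the threshold test.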


A Study on the Advanced Vector Quantization Algorithm for Edge Preserving (윤곽보존을 위한 개선된 벡터 양자화 알고리즘에 관한 연구)

  • 김백기;이대영
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.12
    • /
    • pp.72-80
    • /
    • 1994
  • In this paper, we present a digital image data compression method using edge-preserving vector quantization. A new vector quantization algorithm is proposed, based on a new sampling method and edge-region extraction. Codebook generation is faster than with existing algorithms, and the quality of the decompressed images is much improved. Experimental results suggest that the resulting compression ratio and PSNR are better than those of the BPVQ and HMVQ methods.


Image Data Compression Using Laplacian Pyramid Processing and Vector Quantization (라플라시안 피라미드 프로세싱과 벡터 양자화 방법을 이용한 영상 데이타 압축)

  • Park, G.H.;Cha, I.H.;Youn, D.H.
    • Proceedings of the KIEE Conference
    • /
    • 1987.07b
    • /
    • pp.1347-1351
    • /
    • 1987
  • This paper studies Laplacian pyramid vector quantization, which keeps the compression algorithm simple while remaining stable across various kinds of image data. To this end, images are divided into two groups according to their statistical characteristics. At 0.860 bits/pixel and 0.360 bits/pixel respectively, Laplacian pyramid vector quantization is compared with conventional spatial-domain vector quantization and transform coding under the same conditions, in both objective and subjective terms. Laplacian pyramid vector quantization is much more stable with respect to the statistical characteristics of images than the existing vector quantization and transform coding methods.
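The pyramid decomposition this abstract builds on can be sketched in one dimension. This is a minimal sketch using pair-averaging and nearest-neighbour upsampling rather than the Gaussian filters usually used; the residual (Laplacian) levels are what would be vector-quantized.

```python
# 1-D Laplacian pyramid sketch: each level stores the residual between
# the signal and a prediction from the next coarser level. Without
# quantization the decomposition is perfectly invertible.

def downsample(x):            # average adjacent pairs
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def upsample(x):              # nearest-neighbour expansion
    out = []
    for v in x:
        out += [v, v]
    return out

def build_pyramid(x, levels):
    """Return ([residual_0, ..., residual_{levels-1}], coarse_top)."""
    residuals = []
    for _ in range(levels):
        coarse = downsample(x)
        pred = upsample(coarse)
        residuals.append([a - b for a, b in zip(x, pred)])
        x = coarse
    return residuals, x

def reconstruct(residuals, top):
    x = top
    for res in reversed(residuals):
        x = [a + b for a, b in zip(upsample(x), res)]
    return x

signal = [2.0, 4.0, 6.0, 8.0, 8.0, 6.0, 4.0, 2.0]
res, top = build_pyramid(signal, levels=2)
print(reconstruct(res, top) == signal)  # True
```

The residual levels are mostly near zero, which is why they quantize well; the coder in the paper applies VQ to these residuals.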


Vector Quantization for Medical Image Compression Based on DCT and Fuzzy C-Means

  • Supot, Sookpotharom;Nopparat, Rantsaena;Surapan, Airphaiboon;Manas, Sangworasil
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.285-288
    • /
    • 2002
  • Compression of magnetic resonance images (MRI) has proved to be more difficult than that of other medical imaging modalities. In an average-sized hospital, many terabytes of digital imaging data (MRI) are generated every year, almost all of which must be kept. Medical image compression is currently performed using various algorithms. In this paper, the Fuzzy C-Means (FCM) algorithm is used for vector quantization (VQ). First, a digital image is divided into fixed-size subblocks of 4×4 pixels. A 2-D Discrete Cosine Transform (DCT) is performed, and six DCT coefficients are selected to form the feature vector. The FCM algorithm is then used to construct the VQ codebook. In this way, the algorithm achieves good image quality and reduces the processing time for constructing the VQ codebook.
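The feature-vector step described above can be sketched directly. This is a minimal illustration: the 2-D DCT of a 4×4 block followed by selection of six low-frequency coefficients. The zigzag selection order is an assumption; the abstract does not say which six coefficients the paper keeps.

```python
import math

# 2-D DCT-II of a 4x4 block, then six low-frequency coefficients
# (assumed zigzag order) as the VQ feature vector.

def dct2_4x4(block):
    N = 4
    def c(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = c(u) * c(v) * s
    return out

ZIGZAG6 = [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]  # assumed order

def feature_vector(block):
    d = dct2_4x4(block)
    return [d[u][v] for u, v in ZIGZAG6]

block = [[52, 55, 61, 66], [70, 61, 64, 73],
         [63, 59, 55, 90], [67, 61, 68, 104]]
fv = feature_vector(block)
print(len(fv))  # 6 (fv[0] is the DC term, 4 * block mean)
```

These six-dimensional vectors would then be clustered (by FCM in the paper) to form the codebook, instead of clustering the full 16-dimensional pixel blocks.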


An Image Compression Technique with Lifting Scheme and PVQ (Lifting Scheme과 PVQ를 이용한 영상압축 기법)

  • 정전대;김학렬;신재호
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1996.06a
    • /
    • pp.159-163
    • /
    • 1996
  • In this paper, a new image compression technique using the lifting scheme and pyramid vector quantization is proposed. The lifting scheme is a new technique for generating wavelets and performing the wavelet transform, and pyramid vector quantization is a kind of vector quantization that requires neither a codebook nor a codebook generation algorithm. To achieve a higher compression rate, an arithmetic entropy coder is used. The proposed algorithm is compared with other wavelet-based image coders and with JPEG, which uses the DCT and an adaptive Huffman entropy coder. Simulation results show that the proposed algorithm performs much better than the others in terms of PSNR and bpp.
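The codebook-free property of pyramid vector quantization comes from quantizing onto a fixed integer pyramid. The sketch below projects a vector onto the set of integer vectors with L1 norm K using greedy sign-aware rounding; this is a minimal illustration of the lattice, not the exact enumeration/search used in the paper.

```python
# PVQ sketch: quantize x onto {y integer : sum(|y_i|) = K}.
# Scale to L1 norm K, round, then greedily fix the L1-norm deficit
# using the rounding errors (a common heuristic projection).

def pvq_quantize(x, K):
    s = sum(abs(v) for v in x) or 1.0
    scaled = [K * v / s for v in x]
    y = [int(round(v)) for v in scaled]
    sgn = [1 if v >= 0 else -1 for v in scaled]
    # signed error: positive means |y_i| should grow
    err = [(sv - yv) * g for sv, yv, g in zip(scaled, y, sgn)]
    delta = K - sum(abs(v) for v in y)
    while delta > 0:                      # too few pulses: add one
        i = max(range(len(y)), key=lambda j: err[j])
        y[i] += sgn[i]; err[i] -= 1; delta -= 1
    while delta < 0:                      # too many pulses: remove one
        i = min(range(len(y)), key=lambda j: err[j])
        y[i] -= sgn[i]; err[i] += 1; delta += 1
    return y

y = pvq_quantize([0.9, -0.3, 0.1], K=5)
print(y)                                  # [4, -1, 0]
print(sum(abs(v) for v in y))             # 5, always exactly K
```

Because every codepoint lies on this enumerable pyramid, the decoder needs only the index of the lattice point, not a trained codebook, which is the property the abstract highlights.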


Effective Compression Technique for Secure Transmission and Storage of GIS Digital Map (GIS 디지털 맵의 안전한 전송 및 저장을 위한 효율적인 압축 기법)

  • Jang, Bong-Joo;Moon, Kwang-Seok;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.2
    • /
    • pp.210-218
    • /
    • 2011
  • GIS digital maps have generally been represented and transmitted in ASCII and binary data forms. Of these, the binary form has been widely used in many GIS application fields for transmitting large volumes of map data. In this paper, we present a hierarchical compression technique for polyline and polygon components for effective storage and transmission of vector maps at various degrees of precision. These are the core geometric components that represent the main layers in a vector map. The proposed technique first performs energy compaction of all polyline and polygon components in the spatial domain for lossless compression of detailed vector maps, and then independently compresses the integer and fraction parts of the 64-bit floating-point values. Experimental results confirm that the proposed technique outperforms conventional data compressors such as 7z, zip, rar, and gz.
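The integer/fraction split can be illustrated with a small round-trip. This is a minimal sketch, assuming a plain zlib back-end and a trivial container (the paper's actual entropy coder and file format are not specified in the abstract); the point is that the two streams compress independently and the coordinates are recovered losslessly.

```python
import struct
import zlib

# Split each 64-bit coordinate into integer and fraction parts and
# compress the two streams separately (back-end and layout are
# illustrative, not the paper's).

def compress_coords(coords):
    ints = [int(c) for c in coords]           # truncate toward zero
    fracs = [c - int(c) for c in coords]      # exact for |c| < 2**52
    int_blob = struct.pack(f'<{len(ints)}q', *ints)
    frac_blob = struct.pack(f'<{len(fracs)}d', *fracs)
    return zlib.compress(int_blob), zlib.compress(frac_blob)

def decompress_coords(int_z, frac_z):
    ib, fb = zlib.decompress(int_z), zlib.decompress(frac_z)
    n = len(ib) // 8
    ints = struct.unpack(f'<{n}q', ib)
    fracs = struct.unpack(f'<{n}d', fb)
    return [i + f for i, f in zip(ints, fracs)]

coords = [127.123456, 127.123789, 128.000321]
print(decompress_coords(*compress_coords(coords)) == coords)  # True
```

In a map tile, nearby vertices share almost identical integer parts, so that stream becomes highly repetitive and compresses far better than the interleaved 64-bit doubles would.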

Segmented Douglas-Peucker Algorithm Based on the Node Importance

  • Wang, Xiaofei;Yang, Wei;Liu, Yan;Sun, Rui;Hu, Jun;Yang, Longcheng;Hou, Boyang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.4
    • /
    • pp.1562-1578
    • /
    • 2020
  • Vector data compression algorithms can meet requirements at different levels and scales by reducing the data volume of vector graphics, thereby reducing transmission time, processing time, and storage overhead. Because a large threshold in the Douglas-Peucker vector data compression algorithm leads to comparatively large error, and the algorithm has difficulty maintaining shape features under uncertain threshold selection, a segmented Douglas-Peucker algorithm based on node importance is proposed. First, the algorithm uses the vertical chord ratio as the main feature to detect and extract the critical points that contribute most to the shape of the curve, ensuring its basic shape. Then, combined with a radial distance constraint, it selects the maximum point as a critical point and introduces a scale-related threshold to merge and adjust the critical points, realizing local feature extraction between two critical points to meet the accuracy requirements. Finally, the improved algorithm is analyzed and evaluated qualitatively and quantitatively on a large number of different vector data sets. Experimental results indicate that the improved vector data compression algorithm outperforms the Douglas-Peucker algorithm in shape retention, compression error, result simplification, and time efficiency.
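For reference, the baseline the paper improves on is the classic Douglas-Peucker simplification, which can be stated compactly (the segmented, node-importance variant proposed in the paper is more involved and is not reproduced here):

```python
# Classic Douglas-Peucker: keep the point farthest from the chord if
# it exceeds epsilon, recurse on both halves, otherwise keep only the
# chord endpoints.

def perpendicular_distance(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, epsilon):
    if len(points) < 3:
        return list(points)
    i_max, d_max = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > d_max:
            i_max, d_max = i, d
    if d_max <= epsilon:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i_max + 1], epsilon)
    right = douglas_peucker(points[i_max:], epsilon)
    return left[:-1] + right          # drop duplicated split point

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(line, epsilon=1.0))
# [(0, 0), (2, -0.1), (3, 5), (7, 9)]
```

The single global epsilon here is exactly the weakness the abstract describes: a large value flattens genuine shape features, which motivates the paper's per-segment, importance-weighted thresholds.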

A Study on the Enhancement of Image Distortion for the Hybrid Fractal System with SOFM Vector Quantizer (SOFM 벡터 양자화기와 프랙탈 혼합 시스템의 영상 왜곡특성 향상에 관한 연구)

  • 김영정;김상희;박원우
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.3 no.1
    • /
    • pp.41-47
    • /
    • 2002
  • Fractal image compression reduces the size of image data through a contractive mapping, an affine transformation that finds the block (called the range block) most similar to the original image block. Although fractal image compression is regarded as an efficient way to reduce data size, it has a high distortion rate and requires a long encoding time. In this paper, we present a hybrid fractal image compression system with a modified SOFM vector quantizer that uses an improved competitive learning method. Simulation results show that the VQ-hybrid fractal coder using the improved competitive-learning SOFM achieves a better distortion rate than the one using a normal SOFM.
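The SOFM codebook training underlying this hybrid can be sketched as follows. This is a minimal sketch of plain competitive learning with a decaying learning rate (neighbourhood radius of zero); the paper's "improved" update rule is not described in the abstract, so only the baseline is shown.

```python
import random

# Competitive-learning codebook training: for each training vector,
# move the best-matching codeword toward it by a decaying step.

def train_sofm(vectors, codebook, epochs=20, lr0=0.5):
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)          # decaying learning rate
        for v in vectors:
            bmu = min(range(len(codebook)),      # best-matching unit
                      key=lambda i: sum((c - a) ** 2
                                        for c, a in zip(codebook[i], v)))
            codebook[bmu] = [c + lr * (a - c)
                             for c, a in zip(codebook[bmu], v)]
    return codebook

random.seed(0)
data = ([[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(50)]
        + [[random.gauss(5, 0.1), random.gauss(5, 0.1)] for _ in range(50)])
book = train_sofm(data, [[1.0, 1.0], [4.0, 4.0]])
# the two codewords migrate toward the two data clusters near (0,0) and (5,5)
```

In the hybrid system, range blocks that the fractal search codes poorly would instead be matched against a codebook trained this way.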


A Study on Inter Prediction Mode Determination using the Variance in the Motion Vectors (움직임 벡터의 변화량을 이용한 인터 예측 모드 결정에 관한 연구)

  • Kim, June;Kim, Youngseop
    • Journal of the Semiconductor & Display Technology
    • /
    • v.13 no.1
    • /
    • pp.109-112
    • /
    • 2014
  • H.264/AVC is an international video coding standard established jointly by ITU-T VCEG and ISO/IEC MPEG, which shows improved coding efficiency over previous video standards. Among the compression techniques of H.264/AVC, motion estimation using various macroblock sizes from 4×4 to 16×16 contributes greatly to its high compression efficiency. Generally, a P slice with a small motion vector or low complexity is encoded in P16×16 mode, but in some circumstances a macroblock is assigned P16×16 mode despite a large motion vector. If the motion vector variance exceeds a threshold and the finally selected mode is P16×16, it is switched to P8×8 mode; this paper shows that the storage requirement is thereby reduced. Experimental results show that the proposed algorithm increases the compression efficiency of the H.264/AVC algorithm by 0.4%, while reducing encoding time and without increasing complexity.
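The switch rule described in this abstract can be sketched in a few lines. This is a minimal sketch assuming the variance is taken over the four 8×8 sub-block motion vectors and the threshold value is illustrative; the abstract gives neither.

```python
# Mode-decision override: if a macroblock chosen as P16x16 has high
# variance among its sub-block motion vectors, switch it to P8x8.

def mv_variance(mvs):
    n = len(mvs)
    mx = sum(x for x, _ in mvs) / n
    my = sum(y for _, y in mvs) / n
    return sum((x - mx) ** 2 + (y - my) ** 2 for x, y in mvs) / n

def select_mode(initial_mode, sub_mvs, threshold=4.0):  # threshold assumed
    if initial_mode == "P16x16" and mv_variance(sub_mvs) > threshold:
        return "P8x8"
    return initial_mode

uniform = [(1, 0), (1, 0), (1, 1), (1, 0)]      # coherent motion
diverse = [(0, 0), (6, 0), (0, 6), (-5, 3)]     # divergent motion
print(select_mode("P16x16", uniform))  # P16x16
print(select_mode("P16x16", diverse))  # P8x8
```

Coherent motion keeps the cheap single-vector P16×16 mode, while divergent motion, which a single vector represents badly, is re-partitioned.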

Multidimensional uniform cubic lattice vector quantization for wavelet transform coding (웨이브렛변환 영상 부호화를 위한 다차원 큐빅 격자 구조 벡터 양자화)

  • 황재식;이용진;박현욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.7
    • /
    • pp.1515-1522
    • /
    • 1997
  • Several image coding algorithms have been developed for telecommunication and multimedia systems requiring high image quality and high compression ratios. To achieve low entropy and low distortion, such systems usually pay a great cost in computation time and memory. In this paper, the uniform cubic lattice is chosen for Lattice Vector Quantization (LVQ) because of its generic simplicity. As the transform coding stage, the Discrete Wavelet Transform (DWT) is applied to the images because of its multiresolution property. The proposed algorithm is basically composed of a biorthogonal DWT and a uniform cubic LVQ. The multiresolution property of the DWT is actively used to optimize entropy and distortion on the basis of the distortion-rate function. The vector codebooks are also designed to be optimal for each subimage produced by the biorthogonal DWT. For compression efficiency, the vector codebook dimension varies with the variance of the subimage. Simulation results show that the performance of the proposed coding method is superior to the others in terms of computational complexity and PSNR in the entropy range below 0.25 bpp.
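The "generic simplicity" of the uniform cubic lattice comes from the fact that the nearest lattice point is found by componentwise rounding, with no codebook search at all. A minimal sketch (the per-subband step size would in practice be derived from the subband variance, as the abstract suggests):

```python
# Uniform cubic (Z^n) lattice quantizer: nearest lattice point at a
# given step size is just componentwise rounding.

def lattice_quantize(vec, step):
    return [round(v / step) for v in vec]      # integer lattice indices

def lattice_dequantize(indices, step):
    return [i * step for i in indices]

coeffs = [0.37, -1.62, 0.04, 2.5]              # e.g. wavelet coefficients
idx = lattice_quantize(coeffs, step=0.5)
print(idx)                           # [1, -3, 0, 5]
print(lattice_dequantize(idx, 0.5))  # [0.5, -1.5, 0.0, 2.5]
```

This O(n) quantization step, versus the O(n·codebook) search of trained VQ, is the source of the low computational complexity the abstract reports.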
