• Title/Summary/Keyword: Information compression


Real-Time Applications of Video Compression in the Field of Medical Environments

  • K. Siva Kumar;P. Bindhu Madhavi;K. Janaki
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.73-76 / 2023
  • We introduce DCNN and DRAE approaches for the compression of medical videos. There is an increasing need for medical video compression to decrease file size and storage requirements. A lossy compression technique can attain a higher compression ratio, but information is lost and diagnostic mistakes may follow; this creates the requirement to store medical video in a lossless format. Because traditional lossless compression techniques yield a poor compression ratio, the aim of a lossless compression tool is to maximize compression. The proposed DCNN and DRAE encoding successfully exploits the temporal and spatial redundancy present in video sequences. This paper describes the lossless encoding mode and shows how a compression ratio greater than 2:1 can be achieved.
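The 2:1 lossless target above can be checked with a simple ratio helper. This sketch uses Python's general-purpose zlib codec on synthetic, highly redundant data, not the paper's DCNN/DRAE models (whose details are not given here):

```python
import zlib

def compression_ratio(data: bytes) -> float:
    """Ratio of original size to losslessly compressed size (2.0 means 2:1)."""
    return len(data) / len(zlib.compress(data, level=9))

# Highly redundant data (like neighbouring video frames) compresses well past 2:1.
frames = bytes(i % 16 for i in range(100_000))
print(compression_ratio(frames))
```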

High Efficient Entropy Coding For Edge Image Compression

  • Han, Jong-Woo;Kim, Do-Hyun;Kim, Yoon
    • Journal of the Korea Society of Computer and Information / v.21 no.5 / pp.31-40 / 2016
  • In this paper, we analyse the characteristics of edge images and propose a new entropy coding optimized for edge image compression. The pixel values of an edge image follow a Gaussian distribution around '0', and most pixel values are '0'. Based on this analysis, a Zero Block technique is utilized in the spatial domain. Moreover, the Intra Prediction Mode of an edge image tends to be similar to the modes of the surrounding blocks, or to be the Planar Mode or the Horizontal Mode; we therefore make use of the MPM technique, which derives the Intra Prediction Mode from high-probability modes. Using these properties, we design a new entropy coding method suitable for edge images and perform the compression. When existing compression techniques, which are designed for natural images, are applied to edge images, the compression ratio is low, the algorithm is more complicated than necessary, and the running time is very long. Because the proposed algorithm is optimized for edge image compression, it achieves a high compression ratio with a very short running time. Experimental results indicate that the proposed algorithm provides visual and PSNR performance up to 11 times better than JPEG.
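The Zero Block idea above exploits the fact that most edge-image pixels are '0'. A minimal sketch of that principle, assuming a simple run-length token stream rather than the paper's exact entropy coder:

```python
def zero_block_encode(pixels):
    """Collapse runs of zeros into (0, run_length); pass other values as (1, value)."""
    tokens, i = [], 0
    while i < len(pixels):
        if pixels[i] == 0:
            j = i
            while j < len(pixels) and pixels[j] == 0:
                j += 1
            tokens.append((0, j - i))   # a whole zero run becomes one token
            i = j
        else:
            tokens.append((1, pixels[i]))
            i += 1
    return tokens

def zero_block_decode(tokens):
    """Invert zero_block_encode exactly (lossless)."""
    pixels = []
    for flag, value in tokens:
        pixels.extend([0] * value if flag == 0 else [value])
    return pixels
```

On mostly-zero data the token stream is much shorter than the pixel list, which is the property the proposed coder builds on.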

Lossless image compression using subband decomposition and BW transform (대역분할과 BW 변환을 이용한 무손실 영상압축)

  • 윤정오;박영호;황찬식
    • Journal of Korea Society of Industrial Information Systems / v.5 no.1 / pp.102-107 / 2000
  • In general, text compression techniques cannot be used directly for image compression because the models of text and images differ. Recently, a new class of text compression, the block-sorting algorithm based on the Burrows-Wheeler transformation (BWT), has given excellent results in text compression. However, applying it directly to image compression gives poor results, so we propose a simple method to improve lossless image compression performance. The proposed method consists of three steps: the image is decomposed into ten subbands using a symmetric short kernel filter; the resulting subbands are block-sorted with the BWT; and the remaining redundancy is removed with an adaptive arithmetic coder. Experimental results show that the proposed method outperforms lossless JPEG and an LZ-based compression method (PKZIP).
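The BWT step in the pipeline above is lossless: it only permutes the input so that similar symbols cluster, which helps the arithmetic coder that follows. A minimal round-trip sketch of the transform itself (the naive sorted-rotations form, not the paper's subband pipeline):

```python
def bwt(s: bytes):
    """Burrows-Wheeler transform via sorted rotations; returns (last column, row index)."""
    n = len(s)
    rotations = sorted(range(n), key=lambda i: s[i:] + s[:i])
    last = bytes(s[(i - 1) % n] for i in rotations)
    return last, rotations.index(0)

def ibwt(last: bytes, idx: int) -> bytes:
    """Invert the transform with the LF mapping (stable sort of the last column)."""
    order = sorted(range(len(last)), key=lambda i: (last[i], i))
    out, i = bytearray(), order[idx]
    for _ in range(len(last)):
        out.append(last[i])
        i = order[i]
    return bytes(out)
```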


Performance Evaluation of ECG Compression Algorithms using Classification of Signals based on PQRST Wave Features (PQRST파 특징 기반 신호의 분류를 이용한 심전도 압축 알고리즘 성능 평가)

  • Koo, Jung-Joo;Choi, Goang-Seog
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4C / pp.313-320 / 2012
  • ECG (electrocardiogram) compression can increase the processing speed of a system as well as reduce the amount of signal transmission and data storage for long-term records. Whereas conventional performance evaluations of lossy or lossless compression algorithms measure PRD (Percent RMS Difference) and CR (Compression Ratio) from the engineer's viewpoint, this paper focuses on evaluating compression algorithms from the viewpoint of the diagnostician who reads the ECG. In general, for compression not to affect diagnosis, the position, length, amplitude, and waveform of the restored PQRST waves should not be damaged. AZTEC, a typical ECG compression algorithm, has been validated as effective under conventional performance evaluation. In this paper, we propose a novel performance evaluation of AZTEC from the diagnostician's viewpoint.
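The two conventional metrics named above have standard definitions, sketched here for reference (PRD as the RMS error normalized by the original signal's energy, CR as a size ratio):

```python
import math

def prd(original, restored):
    """Percent RMS Difference between the original and restored ECG samples."""
    num = sum((o - r) ** 2 for o, r in zip(original, restored))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

def cr(original_bytes, compressed_bytes):
    """Compression Ratio: original size over compressed size."""
    return original_bytes / compressed_bytes
```

A perfect reconstruction gives PRD = 0; the paper's point is that a low PRD alone does not guarantee that PQRST features remain diagnostically intact.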

Vector Map Simplification Using Polyline Curvature

  • Pham, Ngoc-Giao;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Multimedia Information System / v.4 no.4 / pp.249-254 / 2017
  • Digital vector maps must be compressed effectively for transmission or storage in Web GIS (geographic information system) and mobile GIS applications. This paper presents a polyline compression method that consists of polyline feature-based hybrid simplification and second derivative-based data compression. Experimental results verify that our method has higher simplification and compression efficiency than conventional methods and produces good quality compressed maps.
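Curvature-driven polyline simplification generally keeps vertices where the line turns sharply and drops near-collinear ones. A minimal sketch of that idea, using a turning-angle threshold as a stand-in for the paper's feature-based hybrid criterion (the threshold value is an illustrative assumption):

```python
import math

def simplify_by_curvature(points, angle_thresh_deg=5.0):
    """Keep endpoints and vertices whose turning angle exceeds the threshold."""
    if len(points) <= 2:
        return list(points)
    keep = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        a = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        b = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
        turn = abs(math.degrees(b - a))
        turn = min(turn, 360.0 - turn)          # wrap to [0, 180]
        if turn > angle_thresh_deg:
            keep.append(cur)                    # a real corner: keep it
    keep.append(points[-1])
    return keep
```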

Deep Learning based HEVC Double Compression Detection (딥러닝 기술 기반 HEVC로 압축된 영상의 이중 압축 검출 기술)

  • Uddin, Kutub;Yang, Yoonmo;Oh, Byung Tae
    • Journal of Broadcast Engineering / v.24 no.6 / pp.1134-1142 / 2019
  • Detection of double compression is one of the most efficient ways of verifying the validity of videos. Many methods have been introduced to detect HEVC double compression with different coding parameters. However, HEVC double compression detection under the same coding environment is still a challenging task in video forensics. In this paper, we introduce a novel method based on frame partitioning information in intra prediction mode for detecting double compression under the same coding environment. We propose to extract a statistical feature and a Deep Convolutional Neural Network (DCNN) feature from the difference of partitioning pictures, including Coding Unit (CU) and Transform Unit (TU) information. Finally, a softmax layer is integrated to classify videos as singly or doubly compressed by combining the statistical and DCNN features. Experimental results show the effectiveness of the statistical and DCNN features, with an average accuracy of 87.5% on the WVGA dataset and 84.1% on the HD dataset.
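The final softmax stage described above maps the combined feature scores to class probabilities. A minimal sketch of that step, with hypothetical logits standing in for the network's output:

```python
import math

def softmax(logits):
    """Numerically stable softmax over class logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the [single, double] compression classes.
probs = softmax([0.2, 1.4])
label = "double" if probs[1] > probs[0] else "single"
```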

Maximizing WSQ Compression Rate by Considering Fingerprint Image Quality (지문 영상 품질을 고려한 WSQ 최대 압축)

  • Hong, Seung-Woo;Lee, Sung-Ju;Chung, Yong-Wha;Choi, Woo-Yong;Moon, Dae-Sung;Moon, Ki-Young;Jin, Chang-Long;Kim, Hak-Il
    • Journal of the Korea Institute of Information Security & Cryptology / v.20 no.3 / pp.23-30 / 2010
  • Compression techniques can be applied to large-scale fingerprint systems to store or transmit fingerprint data efficiently. In this paper, we investigate the effects of FBI WSQ fingerprint image compression on the performance of a fingerprint verification system using multiple linear regression. We propose maximizing compression using the fingerprint image quality score. Based on the experiments, we confirm that the proposed approach can compress fingerprint images up to 3 times more than a fixed compression ratio without significant degradation of verification accuracy.
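The regression step above fits how verification performance degrades as compression increases, so a per-image maximum ratio can be chosen from its quality score. A minimal single-predictor least-squares sketch of that fitting step (the paper uses multiple linear regression; the one-variable form is shown only to make the mechanics concrete):

```python
def fit_linear(xs, ys):
    """Ordinary least squares fit y = a*x + b for a single predictor."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b
```

Given a fitted model of accuracy versus compression ratio, one would pick the largest ratio whose predicted accuracy stays above an acceptable floor.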

Lossless VQ Indices Compression Based on the High Correlation of Adjacent Image Blocks

  • Wang, Zhi-Hui;Yang, Hai-Rui;Chang, Chin-Chen;Horng, Gwoboa;Huang, Ying-Hsuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.8 / pp.2913-2929 / 2014
  • Traditional vector quantization (VQ) schemes encode image blocks as VQ indices, where each image block is highly similar to the codeword of its VQ index; the method can thus compress an image while maintaining good image quality. This paper proposes a novel lossless VQ index compression algorithm to further compress the VQ index table. Our scheme exploits the high correlation of adjacent image blocks by searching the neighboring indices for an index identical to the current encoding index. To increase compression efficiency, codewords in the codebook are sorted according to the degree of similarity of adjacent VQ indices, generating a state codebook in which a match for the current encoding index can be found. Note that repeated indices, both on the search path and in the state codebooks, are excluded to increase the probability of matching the current encoding index. Experimental results illustrate the superiority of our scheme over other compression schemes in the index domain.
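The neighbour-matching idea above can be illustrated with a much-simplified flag coder that reuses the left or upper index when it matches; the real scheme's sorted state codebooks and search paths are omitted:

```python
def encode_indices(table):
    """Flag-code each VQ index: reuse the left/upper neighbour when it matches."""
    out = []
    for r, row in enumerate(table):
        for c, cur in enumerate(row):
            if c > 0 and row[c - 1] == cur:
                out.append(("L",))            # same as left neighbour
            elif r > 0 and table[r - 1][c] == cur:
                out.append(("U",))            # same as upper neighbour
            else:
                out.append(("I", cur))        # literal index
    return out

def decode_indices(stream, h, w):
    """Rebuild the index table losslessly from the flag stream."""
    table = [[0] * w for _ in range(h)]
    it = iter(stream)
    for r in range(h):
        for c in range(w):
            tok = next(it)
            if tok[0] == "L":
                table[r][c] = table[r][c - 1]
            elif tok[0] == "U":
                table[r][c] = table[r - 1][c]
            else:
                table[r][c] = tok[1]
    return table
```

On index tables with strong block-to-block correlation, most tokens are one-symbol flags rather than literal indices.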

Improvement of OPW-TR Algorithm for Compressing GPS Trajectory Data

  • Meng, Qingbin;Yu, Xiaoqiang;Yao, Chunlong;Li, Xu;Li, Peng;Zhao, Xin
    • Journal of Information Processing Systems / v.13 no.3 / pp.533-545 / 2017
  • Massive volumes of GPS trajectory data bring challenges to storage and processing. These issues can be addressed by compression algorithms that reduce the size of the trajectory data. A key requirement for a GPS trajectory compression algorithm is to reduce the size of the trajectory data while minimizing the loss of information. Synchronized Euclidean distance (SED) is an important error measure adopted by most existing algorithms. To further reduce the SED error, an improved version of the open window time ratio algorithm (OPW-TR), called local optimum open window time ratio (LO-OPW-TR), is proposed. To make the SED error smaller, anchor points are selected by calculating each point's accumulated synchronized Euclidean distance (ASED). A variety of error metrics are used to evaluate the algorithm. The experimental results show that, at the same compression ratio, our algorithm produces smaller SED and speed errors than the existing algorithms.
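SED, the error measure named above, compares a dropped point not with the nearest point on the simplified segment but with the position interpolated on that segment at the same timestamp. A minimal sketch, assuming points are (x, y, t) tuples:

```python
def sed(point, seg_start, seg_end):
    """Synchronized Euclidean distance: distance from a dropped point to the
    position linearly interpolated on the simplified segment at the same time."""
    (xs, ys, ts), (xe, ye, te), (x, y, t) = seg_start, seg_end, point
    ratio = (t - ts) / (te - ts) if te != ts else 0.0
    xi = xs + ratio * (xe - xs)     # synchronized position on the segment
    yi = ys + ratio * (ye - ys)
    return ((x - xi) ** 2 + (y - yi) ** 2) ** 0.5
```

A point recorded halfway in time between the segment endpoints is compared against the segment's midpoint, so timing deviations are penalized as well as spatial ones.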

Adaptive Image Enhancement in the DCT Compression Domain Using Retinex Theory (Retinex 이론을 이용한 DCT 압축 영역에서의 적응 영상 향상)

  • Jeon, Seon-Dong;Kim, Sang-Hee
    • Proceedings of the IEEK Conference / 2008.06a / pp.913-914 / 2008
  • This paper presents a method of adaptive image enhancement combining dynamic range compression and contrast enhancement. The dynamic range compression adaptively enhances dark areas using the illumination component of DCT compressed blocks. The contrast enhancement modifies image contrast using Retinex theory, which exploits HVS properties. Block artifacts and other noise caused by processing in the compression domain are removed by post-processing.
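Dynamic range compression of the illumination component, as described above, is commonly realized with a log or power-law mapping that lifts dark values more than bright ones. A generic power-law sketch over DC (illumination) coefficients, assuming an illustrative exponent rather than the paper's exact adaptive rule:

```python
def compress_dynamic_range(dc_values, alpha=0.6):
    """Power-law mapping that lifts dark DC (illumination) values more than bright ones."""
    peak = max(dc_values)
    # v/peak is in (0, 1]; raising it to alpha < 1 boosts small (dark) values.
    return [peak * (v / peak) ** alpha for v in dc_values]
```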
