• Title/Summary/Keyword: Compressed data


Sampling Techniques for Wireless Data Broadcast in Communication (통신에서의 무선 데이터 방송을 위한 샘플링 기법)

  • Lee, Sun Yui; Park, Gooman; Kim, Jin Young
    • Journal of Satellite, Information and Communications / v.10 no.3 / pp.57-61 / 2015
  • This paper describes the basic principles of a 3D broadcast system and proposes a new 3D broadcast technology that reduces the amount of data by applying CS (Compressed Sensing). The differences between classical sampling theory and the CS concept are described, as are the CS algorithms SS-CoSaMP (Single-Space Compressive Sampling Matched Pursuit) and AMP (Approximate Message Passing). Image data compressed and restored by these algorithms are compared, and the lower-complexity algorithm is identified by its computation time.
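
The greedy recovery family this abstract names (matched-pursuit-style algorithms) can be illustrated with a minimal sketch. This is plain matching pursuit on a toy orthonormal dictionary, not the paper's SS-CoSaMP or AMP; the dictionary and signal below are illustrative assumptions.

```python
# Minimal greedy matching pursuit: repeatedly pick the dictionary column
# best correlated with the residual, record its coefficient, and subtract.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def column(A, j):
    return [row[j] for row in A]

def matching_pursuit(A, y, n_iter=3):
    """Recover sparse coefficients from measurements y = A @ x."""
    n_atoms = len(A[0])
    coeffs = [0.0] * n_atoms
    residual = list(y)
    for _ in range(n_iter):
        # Correlation of the residual with each (unit-norm) column of A.
        corrs = [dot(residual, column(A, j)) for j in range(n_atoms)]
        j = max(range(n_atoms), key=lambda k: abs(corrs[k]))
        coeffs[j] += corrs[j]
        col = column(A, j)
        residual = [r - corrs[j] * c for r, c in zip(residual, col)]
    return coeffs

# Toy orthonormal dictionary (4x4 identity) and a 1-sparse signal.
A = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
x_true = [0.0, 5.0, 0.0, 0.0]
y = [dot(row, x_true) for row in A]
x_hat = matching_pursuit(A, y, n_iter=2)
print(x_hat)  # → [0.0, 5.0, 0.0, 0.0]
```

With an orthonormal dictionary one pass per nonzero suffices; the paper's algorithms extend this greedy idea to the underdetermined measurement matrices used in compressed sensing.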

Hiding Secret Data in an Image Using Codeword Imitation

  • Wang, Zhi-Hui; Chang, Chin-Chen; Tsai, Pei-Yu
    • Journal of Information Processing Systems / v.6 no.4 / pp.435-452 / 2010
  • This paper proposes a novel reversible data hiding scheme based on a Vector Quantization (VQ) codebook. The proposed scheme uses the principal component analysis (PCA) algorithm to sort the codebook and to find two similar codewords for an image block. According to the secret to be embedded and the difference between those two similar codewords, the original image block is transformed into a difference number table. Finally, this table is compressed by entropy coding and sent to the receiver. The experimental results demonstrate that the proposed scheme achieves greater hiding capacity, about five bits per index, with an acceptable bit rate. At the receiver end, after the compressed code has been decoded, the image can be recovered to a VQ compressed image.
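
The core idea, letting the choice between similar codewords carry secret bits, can be sketched very simply. This is a much-simplified stand-in, not the paper's PCA-sorted difference-table scheme: it pairs adjacent indices of an already-sorted codebook and hides one bit per index in the pair parity.

```python
# Simplified codeword-based hiding: in a similarity-sorted codebook,
# indices (2k, 2k+1) form a pair of similar codewords; which member of
# the pair is used for a block encodes one secret bit.

def embed(indices, bits, codebook_size):
    """Snap each VQ index to the even/odd member of its pair per bit."""
    stego = []
    for idx, bit in zip(indices, bits):
        pair_base = (idx // 2) * 2            # start of pair (idx, idx+1)
        stego.append(min(pair_base + bit, codebook_size - 1))
    return stego

def extract(stego_indices):
    """The secret bit is simply the parity of each stego index."""
    return [idx % 2 for idx in stego_indices]

indices = [4, 7, 2, 9]        # VQ indices of image blocks (assumed values)
secret = [1, 0, 0, 1]
stego = embed(indices, secret, codebook_size=16)
print(stego)                  # → [5, 6, 2, 9]
print(extract(stego))         # → [1, 0, 0, 1]
```

Because paired codewords are similar, the distortion from swapping within a pair stays small; the paper's difference-table construction pushes capacity to about five bits per index rather than one.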

A Study on the Compressed Code for Biological Signal (생체신호 데이터의 압축코드 알고리즘에 관한 연구)

  • Hong, Seung-Hong; Son, Chang-Il; Min, Hong-Gi
    • Journal of Biomedical Engineering Research / v.5 no.1 / pp.93-102 / 1984
  • In this paper, a real-time compressed-code generation method for biological signal data, especially the electrocardiogram, is studied. For this purpose a variable-length code is introduced, from which a reconstructed signal exactly identical to the original is obtained. Experimental results show that the program reduces the data rate by a factor of about 8 and codes the result in a form convenient for analysis.

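
Lossless variable-length coding of a slowly varying biosignal can be demonstrated in a few lines. The paper does not specify its exact code; this sketch assumes first-difference preprocessing followed by an Elias-gamma-style variable-length code, and verifies exact reconstruction.

```python
# Delta + variable-length coding sketch: ECG samples change slowly, so
# first differences are small and a variable-length code spends few bits
# on them, while reconstruction remains bit-exact (lossless).

def zigzag(n):                 # map signed deltas to non-negative ints
    return n * 2 if n >= 0 else -n * 2 - 1

def unzigzag(z):
    return z // 2 if z % 2 == 0 else -(z + 1) // 2

def gamma_encode(z):           # Elias gamma code for z >= 0 (codes z + 1)
    b = bin(z + 1)[2:]
    return "0" * (len(b) - 1) + b

def compress(samples):
    prev, bits = 0, []
    for s in samples:
        bits.append(gamma_encode(zigzag(s - prev)))
        prev = s
    return "".join(bits)

def decompress(bits):
    samples, prev, i = [], 0, 0
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":   # count the length prefix
            zeros += 1
            i += 1
        value = int(bits[i:i + zeros + 1], 2) - 1
        i += zeros + 1
        prev += unzigzag(value)
        samples.append(prev)
    return samples

ecg = [100, 102, 101, 101, 105, 98]   # made-up sample values
code = compress(ecg)
assert decompress(code) == ecg        # exact reconstruction
print(len(code), "bits vs", len(ecg) * 16, "bits raw")
```

The small deltas after the first sample cost only a few bits each, which is where the roughly 8:1 reduction reported for real ECG data would come from.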

A Queriable XML Compression using Inferred Data Types (추론한 데이타 타입을 이용한 질의 가능 XML 압축)

  • Chung Chin-Wan
    • Journal of KIISE:Databases / v.32 no.4 / pp.441-451 / 2005
  • HTML is mostly stored in native file systems instead of specialized repositories such as a database. Like HTML, XML, the standard for the exchange and representation of data on the Internet, mostly resides in native file systems. However, since XML data is irregular and verbose, disk space and network bandwidth are wasted compared to regularly structured data. To overcome this inefficiency, research on the compression of XML data has been conducted. Among recently proposed XML compression techniques, some do not support querying compressed data, while others that do support it blindly encode data values with predefined encoding methods, without considering the types of the values; this necessitates partial decompression for processing range queries. As a result, query performance on compressed XML data is degraded. This research therefore proposes an XML compression technique that supports direct and efficient evaluation of queries on compressed XML data. The technique adopts an encoding method called dictionary encoding for each tag of the XML data and applies proper encoding methods for data values according to their inferred types. Through the implementation and performance evaluation of the proposed technique, it is shown that the implemented XML compressor efficiently compresses real-life XML data sets and achieves significant improvements in query performance on compressed XML data.
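
The two ideas in this abstract, dictionary-encoded tags and type-aware value encoding that keeps range queries decompression-free, can be sketched briefly. The tiny records and names below are illustrative assumptions, not the paper's implementation.

```python
# Sketch: (1) dictionary-encode repeating tag names; (2) encode values by
# inferred type so a range query runs on encoded values directly.

import struct

tag_dict, tag_rev = {}, []

def encode_tag(tag):
    if tag not in tag_dict:                  # dictionary encoding
        tag_dict[tag] = len(tag_rev)
        tag_rev.append(tag)
    return tag_dict[tag]

def encode_value(text):
    """Infer a type: digit strings become fixed-width big-endian ints
    (whose raw bytes compare in numeric order); others stay strings."""
    if text.isdigit():
        return struct.pack(">I", int(text))
    return text.encode()

records = [("price", "120"), ("price", "80"), ("name", "widget")]
encoded = [(encode_tag(t), encode_value(v)) for t, v in records]

# Range query "price < 100" directly on the encoded bytes: big-endian
# unsigned ints sort correctly as raw bytes, so no decompression needed.
price_id = tag_dict["price"]
limit = struct.pack(">I", 100)
hits = [struct.unpack(">I", v)[0]
        for t, v in encoded if t == price_id and v < limit]
print(hits)   # → [80]
```

A blind string encoding would order "80" after "120" lexicographically, forcing partial decompression for the same query, which is exactly the degradation the paper targets.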

Consideration of the Usefulness of Continued-Image Sending in DICOM for Angiography (혈관조영술에서 동영상 전송의 유용성 고찰)

  • Park, Young-Sung; Lee, Jong-Woong; Jung, Hee-Dong; Kim, Jae-Yeul; Hwang, Sun-Gwang
    • Korean Journal of Digital Imaging in Medicine / v.9 no.2 / pp.39-43 / 2007
  • In angiography, the global DICOM standard is lossless compression, but this overloads the DICOM server and takes up too much storage space. For that reason, images classified in a subjective way are transmitted instead, which causes data loss and could lead doctors to a wrong reading. To reduce such mistakes, we tried transmitting the continued images (raw data). We acquired angiography images from the equipment (Allura FD20, Philips), compressed them with two different methods (lossless and lossy fair), and transmitted them to the PACS system. We compared the quality of QC phantom images compressed by each method, compared the spatial resolution of each image after CD copy, and compared each image's data volume (lossless vs. lossy fair). The measured spatial resolution of every image was 4.0 lp/mm, and the same value was obtained after CD copy. The volume of the continued images (raw data) was 127.8 MB (360.5 sheets on average) with lossless compression and 29.5 MB (360.5 sheets) with lossy fair. For the classified images, it was 47.35 MB (133.7 sheets) lossless and 4.5 MB (133.7 sheets) lossy fair. In angiography, diagnosis is based on the continued images (raw data), yet classified images are transmitted, because transmitting continued images causes problems in the PACS system, especially in transmission and storage. The classified images are compressed losslessly, but the classification is subjective and differs between radiologists, so it can lead to a wrong reading when a patient transfers to another hospital. We therefore suggest transmitting the continued images (raw data) compressed with lossy fair: this reduces data volume by about 60% compared with the classified images, and image quality is the same after CD copy.


Novel Secure Hybrid Image Steganography Technique Based on Pattern Matching

  • Hamza, Ali; Shehzad, Danish; Sarfraz, Muhammad Shahzad; Habib, Usman; Shafi, Numan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.3 / pp.1051-1077 / 2021
  • The secure communication of information is a major concern over the internet, and information must be protected before transmission over a communication channel to avoid security violations. In this paper, a new hybrid method called compressed encrypted data embedding (CEDE) is proposed. In CEDE, the secret information is first compressed with the Lempel-Ziv-Welch (LZW) compression algorithm. The compressed secret information is then encrypted using the Advanced Encryption Standard (AES) symmetric block cipher. In the last step, the encrypted information is embedded into an image of size 512 × 512 pixels using image steganography. In the steganographic technique, the compressed and encrypted secret data bits are divided into pairs of two bits, and pixels of the cover image are also arranged in four pairs. The four pairs of secret data are compared with the respective four pairs of each cover pixel, which leads to sixteen possible matches between secret data pairs and pairs of cover pixels. The least significant bits (LSBs) of the current and adjacent pixels are modified according to the matching case number. The proposed technique provides two layers of security, and the results show that the stego image carries a high capacity of secret data with an adequate peak signal-to-noise ratio (PSNR) and a lower mean square error (MSE) than existing methods in the literature.
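
The first CEDE stage, LZW compression of the secret message, is standard enough to sketch in full; the AES encryption and pair-matching LSB embedding stages are omitted here.

```python
# Textbook LZW: grow a dictionary of byte strings while emitting codes for
# the longest previously seen prefix. Decompression rebuilds the same
# dictionary, so the roundtrip is exact.

def lzw_compress(data):
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)        # grow the dictionary
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for code in codes[1:]:
        # The only unseen code possible is the "cScSc" special case.
        entry = table[code] if code in table else w + w[:1]
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

secret = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(secret)
assert lzw_decompress(codes) == secret
print(len(codes), "codes for", len(secret), "bytes")
```

Compressing before encrypting matters for the pipeline: AES output is incompressible, so LZW must come first to shrink the payload the steganographic stage has to hide.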

Evaluation of Image Quality for Compressed SENSE(CS) Method in Cerebrovascular MRI: Comparison with SENSE Method (뇌혈관자기공영영상에서 Compressed SENSE(CS) 기법에 대한 영상의 질 평가: SENSE 기법과 비교)

  • Goo, Eun-Hoe
    • Journal of the Korean Society of Radiology / v.15 no.7 / pp.999-1005 / 2021
  • The objective of this research is to apply CS, which shortens scan time while increasing resolution, to MRA; to compare image quality between the SENSE and CS techniques; and to evaluate SNR and CNR in order to find the optimal technique and provide clinical baseline data. Data were analyzed for 32 patients who underwent TOF MRA examinations at a university hospital in Chungcheong-do (15 males, 17 females; ICA stenosis: 10, M1 aneurysm: 10; average age 53 ± 4.15). The equipment was an Ingenia CX 3.0T and an Achieva 3.0T with a 32-channel head coil, using a 3D gradient echo sequence. The SNR and CNR of each image were measured for quantitative analysis, and for qualitative evaluation the observers rated image quality on a 5-grade scale. Results were considered significant at p ≤ 0.05 under the paired t-test and Wilcoxon test. In the quantitative analysis of SNR and CNR in the TOF MRA images, the CS method measured higher than the SENSE method (p < 0.05). In the observers' evaluation, sharpness of blood vessels (CS: 4.45 ± 0.41), overall image quality (CS: 4.77 ± 0.18), and background suppression (CS: 4.57 ± 0.18) were all higher for the CS technique (p = 0.000). In conclusion, the Compressed SENSE TOF MRA technique shows superior results when the SENSE and Compressed SENSE techniques are compared in flow-enhanced magnetic resonance angiography. These results are thought to serve as clinical baseline data for 3D TOF MRA examination of brain disease.

Comparisons of Practical Performance for Constructing Compressed Suffix Arrays (압축된 써픽스 배열 구축의 실제적인 성능 비교)

  • Park, Chi-Seong; Kim, Min-Hwan; Lee, Suk-Hwan; Kwon, Ki-Ryong; Kim, Dong-Kyue
    • Journal of KIISE:Computer Systems and Theory / v.34 no.5_6 / pp.169-175 / 2007
  • Suffix arrays, fundamental full-text index data structures, can be used efficiently where patterns are queried many times. Although many useful full-text index data structures have been proposed, their O(n log n)-bit space consumption motivates researchers to develop more space-efficient ones. Space-efficient versions such as the compressed suffix array and the FM-index have been developed; however, these cannot reduce the practical working space, because their construction is based on an existing suffix array. Recently, two algorithms have been proposed that construct compressed suffix arrays directly from the text, without first constructing the suffix array. In this paper, we compare the practical performance of these direct compressed-suffix-array algorithms with that of various suffix-array algorithms by measuring construction times, peak memory usage during construction, and the sizes of the final outputs.
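
As a baseline for what the compressed variants index, here is a naive suffix array with binary-search pattern counting. This O(n² log n) construction is the kind of memory-hungry step the direct-construction algorithms in the paper avoid; it is only a sketch for small texts.

```python
# Naive suffix array: sort all suffix start positions lexicographically.
# All occurrences of a pattern then form one contiguous band, found by
# two binary searches over truncated suffixes.

def build_suffix_array(text):
    return sorted(range(len(text)), key=lambda i: text[i:])

def lower_bound(sa, text, pattern):
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo

def upper_bound(sa, text, pattern):
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo

def count_occurrences(text, sa, pattern):
    return upper_bound(sa, text, pattern) - lower_bound(sa, text, pattern)

text = "banana"
sa = build_suffix_array(text)
print(sa)                                  # → [5, 3, 1, 0, 4, 2]
print(count_occurrences(text, sa, "ana"))  # → 2
```

The plain array above stores one integer per character (the O(n log n) bits the abstract mentions); compressed suffix arrays and the FM-index encode the same band-search capability in space close to the compressed text.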

A Breakthrough in Sensing and Measurement Technologies: Compressed Sensing and Super-Resolution for Geophysical Exploration (센싱 및 계측 기술에서의 혁신: 지구물리 탐사를 위한 압축센싱 및 초고해상도 기술)

  • Kong, Seung-Hyun; Han, Seung-Jun
    • Geophysics and Geophysical Exploration / v.14 no.4 / pp.335-341 / 2011
  • Most sensing and instrumentation systems must sample at a much higher rate than the required data rate so as not to miss important information, which makes them inefficient in some cases. This paper introduces two new research areas concerned with acquiring accurate information from a smaller number of samples. One is Compressed Sensing, which recovers the original information from as few samples as possible; the other is Super-Resolution, which extracts very high-resolution information from a restricted set of samples. The paper explains the fundamental theory and reconstruction algorithms of compressed sensing and describes several applications to geophysical exploration. It also explains the fundamentals of super-resolution technology and introduces recent research results and applications, e.g. FRI (Finite Rate of Innovation) and LIMS (Least-squares based Iterative Multipath Super-resolution). In conclusion, the paper discusses how these technologies can be used in geophysical exploration systems.

Video Indexing and Retrieval of MPEG Video using Motion and DCT Coefficients in Compressed Domain (움직임과 DCT 계수를 이용한 압축영역에서 MPEG 비디오의 인덱싱과 검색)

  • 박한엽; 최연성; 김무영; 강진석; 장경훈; 송왕철; 김장형
    • Journal of Korea Multimedia Society / v.3 no.2 / pp.121-132 / 2000
  • Most video indexing applications depend on fast and efficient archiving, browsing, and retrieval techniques. Until now, a number of approaches have relied only on pixel-domain analysis; these incur the costly overhead of decompression, because most multimedia data is stored in compressed format. If the compressed data can be analyzed directly, that overhead is avoided. In this paper, we analyze the compressed video stream directly and extract features available for video indexing. We derive a cut-detection technique using these features and divide the stream into shots. We also propose a new, brief key-frame selection technique and an efficient video indexing method using spatial information (DCT coefficients) together with temporal information (motion vectors).

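
Compressed-domain cut detection of the kind described here can be sketched using only per-block DC coefficients (each is the average intensity of an 8×8 block, readable without full decoding). The DC grids below are made-up stand-ins for values parsed from an MPEG stream; real bitstream parsing and motion-vector analysis are omitted.

```python
# Shot-boundary (cut) detection on DCT DC coefficients: a cut shows up as
# a spike in the frame-to-frame difference of the per-block DC values.

def frame_difference(dc_a, dc_b):
    """Mean absolute difference of per-block DC (average luma) values."""
    return sum(abs(a - b) for a, b in zip(dc_a, dc_b)) / len(dc_a)

def detect_cuts(frames, threshold=30.0):
    """Return indices i where frame i starts a new shot."""
    return [i for i in range(1, len(frames))
            if frame_difference(frames[i - 1], frames[i]) > threshold]

# Three near-identical frames (small jitter), then an abrupt scene change.
shot1 = [100, 102, 98, 101]
shot2 = [10, 200, 15, 190]
frames = [shot1, [v + 1 for v in shot1], [v - 2 for v in shot1], shot2]
print(detect_cuts(frames))   # → [3]
```

Because only one DC value per 8×8 block is touched, this runs on a fraction of the data a pixel-domain comparison would need, which is the efficiency argument the abstract makes.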