Title/Summary/Keyword: DEFLATE


Proposal for Decoding-Compatible Parallel Deflate Algorithm by Inserting Control Header Composed of Non-Compressed Blocks

  • Kim Jung Hoon
    • KIPS Transactions on Software and Data Engineering / v.12 no.5 / pp.207-216 / 2023
  • To make a parallel Deflate algorithm decoding-compatible, this study proposes a control header in which the information essential for parallel compression and decompression is stored in the Disposed Bit Area (DBA) of a non-compressed block, and this header is inserted between the compressed blocks. Parallel compression and decompression thereby become possible while perfect compatibility with existing decoders is maintained. With this method, compression time was reduced by up to 71.2% compared with sequential processing, and parallel decompression time was reduced by up to 65.7%. Parallel decompression, in particular, is widely held to be impossible because of the structural limitations of the Deflate algorithm; a decoder equipped with the proposed method, however, performs high-speed parallel decompression at the algorithm level while preserving compatibility, so data compressed in parallel can still be decoded normally by existing decoder programs.
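
The compatibility trick can be illustrated with the standard zlib library: if each chunk is compressed independently and terminated with an empty stored (non-compressed) block, the concatenation is itself one valid Deflate stream that any stock inflater decodes sequentially. The Python sketch below shows only this framing idea, not the paper's actual control-header layout; the byte-alignment padding emitted by the full flush is the Disposed Bit Area the paper writes into.

```python
import zlib

def compress_chunk(chunk: bytes, last: bool) -> bytes:
    """Compress one chunk as a raw Deflate fragment (no zlib wrapper)."""
    c = zlib.compressobj(9, zlib.DEFLATED, -15)
    out = c.compress(chunk)
    # Z_FULL_FLUSH ends the fragment with an empty stored block and pads
    # to a byte boundary; those padding bits are the paper's DBA.
    out += c.flush(zlib.Z_FINISH if last else zlib.Z_FULL_FLUSH)
    return out

chunks = [b"alpha" * 1000, b"bravo" * 1000, b"charlie" * 1000]
# In a parallel compressor each chunk would go to its own worker thread.
stream = b"".join(compress_chunk(ch, i == len(chunks) - 1)
                  for i, ch in enumerate(chunks))
plain = zlib.decompressobj(-15).decompress(stream)
assert plain == b"".join(chunks)  # a stock decoder accepts the whole stream
```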

Malicious Code Injection Vulnerability Analysis in the Deflate Algorithm

  • Kim, Jung-hoon
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.5 / pp.869-879 / 2022
  • This study shows that, among the three types of data blocks generated by the Deflate algorithm, a No-Payload Non-Compressed Block (NPNCB), a non-compressed block carrying no literal data, can be arbitrarily generated and inserted between normal compressed blocks. The header of a non-compressed block contains a data area that exists only for byte alignment; we call this area the Disposed Bit Area (DBA), and an attacker can hide various malicious code and data in it. We then identify a vulnerability in which malicious code or arbitrary data is concealed by inserting NPNCBs with infected DBAs between normal compressed blocks according to a pre-designed attack scenario. Experiments show that even when contaminated NPNCBs were inserted between normal compressed blocks, commercial programs decoded the contaminated ZIP file normally without any warning, and the hidden malicious code could be executed by a malicious decoder.
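
A minimal proof-of-concept of the block layout described here, assuming Python's zlib: a stored block with LEN=0 is five bytes, and the five alignment bits in its first byte (the DBA) are ignored by every conforming inflater, so they can carry hidden data. The helper name and the 4096-byte payload are illustrative.

```python
import zlib

def npncb(hidden_5_bits: int) -> bytes:
    """Empty stored block: BFINAL=0, BTYPE=00, 5 DBA bits, LEN=0, NLEN=~LEN."""
    header = (hidden_5_bits & 0x1F) << 3   # DBA occupies bits 3..7
    return bytes([header]) + b"\x00\x00\xff\xff"

c = zlib.compressobj(9, zlib.DEFLATED, -15)
part1 = c.compress(b"A" * 4096) + c.flush(zlib.Z_FULL_FLUSH)  # byte-aligned
part2 = c.flush(zlib.Z_FINISH)             # final (empty) block, BFINAL=1
tampered = part1 + npncb(0b10110) + part2  # 5 hidden bits per injected block
plain = zlib.decompressobj(-15).decompress(tampered)
assert plain == b"A" * 4096                # decodes cleanly, no warning
```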

A Method of Recovery for Damaged ZIP Files

  • Jung, Byungjoon;Han, Jaehyeok;Lee, Sang-jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.5 / pp.1107-1115 / 2017
  • The PKZIP format is most commonly encountered as ZIP files, but it also underlies MS Office documents and Android application packages. Because PKZIP-format files are used in so many areas, they require structural analysis from a digital-forensics viewpoint, and it should be possible to recover them when they are damaged. Previous studies, however, focused only on recovering data or extracting meaningful data from the Deflate-compressed streams inside ZIP files. Although most of the data in a ZIP file resides in those compressed streams, the remaining structures also contain forensically meaningful data, so a damaged file needs to be restored to the normal ZIP format. This paper therefore presents a technique for recovering a damaged ZIP file to a normal ZIP file.
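
A carving pass of the kind this line of work builds on can be sketched in a few lines of Python. The layout constants follow PKWARE's APPNOTE (a 4-byte signature followed by a fixed 26-byte local file header); the function name and recovery policy are our own illustration, not the paper's technique.

```python
import struct

LOCAL_SIG = b"PK\x03\x04"                  # local file header magic

def carve_local_headers(blob: bytes):
    """Yield metadata for every local file header found in a damaged image."""
    pos = 0
    while (pos := blob.find(LOCAL_SIG, pos)) != -1:
        if pos + 30 > len(blob):
            break
        (_ver, _flags, method, _mtime, _mdate, crc32, csize, usize,
         nlen, xlen) = struct.unpack_from("<5H3I2H", blob, pos + 4)
        name = blob[pos + 30 : pos + 30 + nlen].decode("cp437", "replace")
        yield {"name": name, "method": method, "crc32": crc32,
               "csize": csize, "usize": usize,
               "data_offset": pos + 30 + nlen + xlen}
        pos += 4
```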

An Improvement of Lossless Image Compression for Mobile Game

  • Kim Se-Woong;Jo Byung-Ho
    • The KIPS Transactions:PartB / v.13B no.3 s.106 / pp.231-238 / 2006
  • This paper proposes a lossless compression method for the images that account for a considerable part of a mobile game's total volume. To increase the compression ratio, the image is reorganized in a preprocessing stage and then compressed with the Deflate algorithm defined in RFC 1951. In preprocessing, the dictionary size is derived from the image information, exploiting the dictionary-based nature of the coder, and the image is restructured by pixel packing and DPCM prediction, yielding a better compression ratio than compressing the raw image directly. Compression tests on various mobile games show that the proposed method improves the compression ratio by 9.7% compared with the existing mobile image format.
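
The effect of a DPCM preprocessing stage is easy to reproduce with Python's zlib, whose compressor implements RFC 1951 Deflate. The sketch below applies only a left-neighbour DPCM predictor; the pixel-packing step and the image-driven dictionary sizing from the paper are omitted, and the gradient image is a synthetic example.

```python
import zlib

def dpcm_rows(pixels: list[list[int]]) -> bytes:
    """Replace each pixel with the modulo-256 difference from its left neighbour."""
    out = bytearray()
    for row in pixels:
        prev = 0
        for p in row:
            out.append((p - prev) & 0xFF)
            prev = p
    return bytes(out)

img = [[(x + y) % 256 for x in range(256)] for y in range(256)]  # smooth gradient
raw = bytes(p for row in img for p in row)
# The residual image is almost all 1s, so Deflate compresses it far better.
print(len(zlib.compress(raw, 9)), len(zlib.compress(dpcm_rows(img), 9)))
```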

Estimation of Target and Completion Pressure during the Cuff Inflation Phase in Blood Pressure Measurement

  • Oh, Hong-Sic;Lee, Jong-Shill;Kim, Young-Soo;Shen, Dong-Fan;Kim, In-Young;Chee, Young-Joan
    • Journal of Biomedical Engineering Research / v.29 no.5 / pp.371-375 / 2008
  • In blood pressure measurement, the oscillometric method detects and analyzes pulse pressure oscillations while the cuff around the arm deflates. By its principle, the cuff must be inflated above the subject's systolic pressure and deflated below the diastolic pressure. Because the systolic and diastolic pressures are unknown before measurement, most commercial devices inflate to a fixed target pressure and deflate to a fixed completion pressure. Too high a target pressure stresses the subject, while too low a target pressure causes large errors or long measurement times due to re-inflation; inadequate completion pressures cause similar problems. In this study, we propose a new algorithm that sets a proper target and completion pressure for each subject by analyzing the pressure waveform during the inflation period. We compared the proposed method with auscultation to assess the estimation errors. The differences between the two measurements were -4.02±4.80 mmHg, -10.50±10.57 mmHg, and -0.78±5.17 mmHg for mean arterial, systolic, and diastolic pressure, respectively. Consequently, the target pressure could be set 30 mmHg above our systolic estimate, and deflation could stop 20 mmHg below our estimated diastolic pressure. Using this method, we could reduce the measurement time.
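
A simplified Python sketch of such a rule, for the idea only: the 0.5/0.8 amplitude-ratio thresholds are generic oscillometric heuristics standing in for the paper's waveform analysis, and only the +30 mmHg and -20 mmHg offsets come from the abstract.

```python
def plan_pressures(samples: list[tuple[float, float]]) -> tuple[float, float]:
    """samples: (cuff_pressure_mmHg, oscillation_amplitude), ascending pressure."""
    peak = max(a for _, a in samples)
    dia_est = next(p for p, a in samples if a >= 0.8 * peak)            # low side
    sys_est = next(p for p, a in reversed(samples) if a >= 0.5 * peak)  # high side
    return sys_est + 30, dia_est - 20   # (target, completion) in mmHg
```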

Design of Dynamic Buffer Assignment and Message model for Large-scale Process Monitoring of Personalized Health Data

  • Jeon, Young-Jun;Hwang, Hee-Joung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.15 no.6 / pp.187-193 / 2015
  • The ICT healing platform aims to prevent chronic diseases and issue early disease warnings based on personal information such as bio-signals and lifestyle habits. The two-step open system (TOS) places a relay between the healing platform and the personal health data store, and uses a publish/subscribe (pub/sub) service over large numbers of connections to transmit and monitor the data-processing pipeline in real time. In the early TOS pub/sub design, however, the same buffer was allocated to every connection, regardless of connection idling or message type, when encoding connection messages with the Deflate algorithm. The dynamic buffer allocation proposed in this study works as follows: the message transmission pattern of each connection is queued; features are extracted from each queue, computed, and converted into a vector via tf-idf; the vectors are clustered with k-means; connections assigned to a cluster re-allocate resources according to that cluster's resource table; and the centroid of each cluster selects in advance a queuing pattern that represents the cluster and presents it as a resource reference table (encoding efficiency by buffer size). The design trades computing resources against network bandwidth in the cluster and feature calculations, allocating TOS's encoding buffer resources to network connections efficiently and thereby increasing TOS's tps (the number of real-time data-processing and monitoring connections per unit time).
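
The clustering step maps naturally onto scikit-learn, as in the hedged sketch below: each connection's recent message-type history plays the role of a document, tf-idf vectorizes it, and k-means assigns the connection to a cluster whose entry in a hypothetical resource table fixes its Deflate encoding buffer size. All names and sizes are illustrative, not the paper's.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

queues = {                              # per-connection message-type history
    "conn-1": "vitals vitals ack vitals",
    "conn-2": "bulk bulk bulk vitals",
    "conn-3": "ack ack vitals ack",
}
X = TfidfVectorizer().fit_transform(queues.values())
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

BUFFER_TABLE = {0: 4096, 1: 65536}      # hypothetical bytes per cluster
for conn, label in zip(queues, labels):
    print(conn, "-> Deflate buffer:", BUFFER_TABLE[label], "bytes")
```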

Detecting Surface Changes Triggered by Recent Volcanic Activities at Kīlauea, Hawai'i, by using the SAR Interferometric Technique: Preliminary Report

  • Jo, MinJeong;Osmanoglu, Batuhan;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.34 no.6_4 / pp.1545-1553 / 2018
  • The recent eruptive activity at Kīlauea Volcano, which started at the end of April 2018, produced rapid ground deflation between May and June 2018. At the summit, the Halema'uma'u lava lake continued to drop rapidly and Kīlauea's summit continued to deflate; GPS receivers and electronic tiltmeters detected surface deformation greater than 2 meters. We explored the time-series surface deformation at Kīlauea Volcano, focusing on the early stage of the eruptive activity, using multi-temporal COSMO-SkyMed SAR imagery. The observed maximum deformation in the line-of-sight (LOS) direction was about -1.5 m, corresponding to approximately -1.9 m of subsidence after applying the incidence angle. The results show that the summit began to deflate just after the event started and that most of the deformation occurred between early May and the end of June. Moreover, we confirmed that the summit rarely deflated after July 2018, indicating that the volcanic activity had entered a stable stage. The best-fit magma source model based on the time-series surface deformation places the magma chambers at depths of 2-3 km, deepening over time; along with the change in source depth, the center of each magma model moved toward the southwest over time. These results carry a potential risk of bias because they come from a single-track observation, so in further research a precise magma source model based on three-dimensional measurements is needed to complement the initial results.
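
The LOS-to-vertical conversion the abstract refers to is the standard projection d_vertical = d_LOS / cos(theta_inc) for motion assumed purely vertical. In the Python check below, the ~38° incidence angle is back-calculated from the reported -1.5 m / -1.9 m pair; it is an assumption, not a value stated in the paper.

```python
import math

d_los = -1.5                               # reported LOS deformation (m)
theta = math.radians(38.0)                 # assumed incidence angle
print(round(d_los / math.cos(theta), 2))   # -1.9 m, the reported subsidence
```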

Ultrasound-guided Femorosciatic Nerve Block by Orthopaedist for Ankle Fracture Operation

  • Kang, Chan;Hwang, Deuk-Soo;Kim, Young-Mo;Kim, Pil-Sung;Jun, You-Sun;Hwang, Jung-Mo;Han, Sun-Cheol
    • Journal of Korean Foot and Ankle Society / v.14 no.1 / pp.90-96 / 2010
  • Purpose: To investigate the usefulness of ultrasound-guided femorosciatic nerve block performed by an orthopaedist for operations on fractures around the ankle. Materials and Methods: Twenty-two patients who underwent an operation for a fracture around the ankle under ultrasound-guided femorosciatic nerve block between January and April 2010 were included in this study. We measured the time spent on the ultrasound-guided femorosciatic nerve block, the time from nerve block to the start of the operation, the time until the tourniquet had to be deflated because of tourniquet pain, and the time until postoperative pain was first felt; we also studied complications and satisfaction with the anesthesia. Results: The nerve block took 6.2 (3 to 12) minutes, the operation started 46.1 (28 to 75) minutes after the block, tourniquet pain appeared after 52.5 (22 to 78) minutes, and postoperative pain began 11.5 (7.5 to 19) hours after the operation. There were no anesthetic complications, and 21 patients (95.5%) were satisfied with the anesthesia. Conclusion: Ultrasound-guided femorosciatic nerve block by an orthopaedist for fractures around the ankle reduces anesthetic and nerve-injury complications, achieves a high anesthetic success rate, and is an effective method for alleviating postoperative pain.

Side-Channel Archive Framework Using Deep Learning-Based Leakage Compression

  • Sangyun Jung;Sunghyun Jin;Heeseok Kim
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.3 / pp.379-392 / 2024
  • With the rapid increase in data, saving storage space and improving the efficiency of data transmission have become critical issues, making research on efficient data compression technologies increasingly important. Lossless algorithms can restore original data precisely but have limited compression ratios, whereas lossy algorithms achieve higher compression ratios at the expense of some data loss. Data compression using deep learning, especially autoencoder models, has been actively researched. This study proposes a new compressor for side-channel analysis data based on autoencoders, which achieves higher compression ratios than Deflate while preserving the characteristics of the side-channel data. The encoder, built from locally connected layers, effectively preserves the temporal characteristics of side-channel traces, and the decoder, a multi-layer perceptron, keeps decompression fast. Correlation power analysis confirms that the proposed compressor compresses data without losing the characteristics of the side-channel data.
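
A hedged PyTorch sketch of the architecture described: PyTorch has no built-in locally connected layer, so a strided Conv1d stands in for the encoder's locally connected stage (weights that only see a local window of the trace), while the decoder is the multi-layer perceptron the abstract mentions. Trace length and layer widths are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

TRACE_LEN, CODE_LEN = 1024, 128

class SideChannelAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=16, stride=8),  # windowed weights keep
            nn.ReLU(),                                   # temporal locality
            nn.Flatten(),
            nn.Linear(4 * 127, CODE_LEN),  # 127 = (1024 - 16) // 8 + 1 steps
        )
        self.decoder = nn.Sequential(      # fast MLP decompression
            nn.Linear(CODE_LEN, 512), nn.ReLU(),
            nn.Linear(512, TRACE_LEN),
        )

    def forward(self, x):                  # x: (batch, 1, TRACE_LEN)
        return self.decoder(self.encoder(x))

model = SideChannelAE()
trace = torch.randn(8, 1, TRACE_LEN)       # stand-in for power traces
loss = nn.functional.mse_loss(model(trace), trace.squeeze(1))
```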

Threshold-dependent Occupancy Control Schemes for 3GPP's ARQ

  • Shin, Woo-Cheol;Park, Jin-Kyung;Ha, Jun;Choi, Cheon-Won
    • Journal of IKEEE / v.9 no.2 s.17 / pp.123-135 / 2005
  • The 3GPP RLC protocol specification adopts a window-controlled selective-repeat ARQ scheme to provide reliable data transmission. Since it belongs to the selective-repeat family, 3GPP's ARQ inevitably faces the re-ordering issue: a long re-ordering time degrades throughput and delay performance and may cause the re-ordering buffer to overflow, and the re-ordering time must be regulated to meet the requirements of services that are both loss-sensitive and delay-sensitive. In 3GPP's ARQ, the occupancy of the re-ordering buffer can be deflated by reducing the window size and/or the length of the status report period; such a reduction, however, deteriorates throughput and delay performance and encroaches on the resources of the reverse channel. Aiming to reduce the occupancy of the re-ordering buffer while suppressing the degradation of throughput and delay, we propose threshold-dependent occupancy control schemes, named the post-threshold and pre-threshold schemes, as supplements to 3GPP's ARQ. To judge their effectiveness, we investigate peak occupancy, maximum throughput, and average delay in a practical environment involving fading channels. The simulation results show that the proposed schemes generally trade occupancy against throughput, and that the post-threshold scheme improves the throughput and delay performance of ordinary 3GPP ARQ without inflating the occupancy of the re-ordering buffer.
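
A toy Python model of our reading of the threshold idea: the receiver releases PDUs in sequence as usual, but once re-ordering buffer occupancy crosses a threshold it immediately requests retransmission of the missing PDUs instead of waiting for the next periodic status report, deflating occupancy without shrinking the ARQ window. Class and method names are our own, not the 3GPP specification's.

```python
class ReorderBuffer:
    """Selective-repeat receiver with a threshold-triggered status report."""

    def __init__(self, threshold: int):
        self.threshold = threshold
        self.expected = 0                      # next in-sequence number
        self.held = {}                         # out-of-order PDUs by seq

    def receive(self, seq: int, pdu: bytes) -> list[bytes]:
        if seq < self.expected:                # duplicate of a delivered PDU
            return []
        self.held[seq] = pdu
        delivered = []
        while self.expected in self.held:      # release the in-sequence run
            delivered.append(self.held.pop(self.expected))
            self.expected += 1
        if len(self.held) > self.threshold:    # occupancy crossed: report now,
            self.request_retransmission()      # don't wait for the period
        return delivered

    def request_retransmission(self):
        missing = [s for s in range(self.expected, max(self.held) + 1)
                   if s not in self.held]
        print("status report, NACK:", missing)  # placeholder for RLC signalling
```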
