• Title/Summary/Keyword: time compression method

713 search results

Grouping the Range Blocks Depending on the Variance Coherence

  • Lee, Yun-Jung;Kim, Young-Bong
    • Journal of Korea Multimedia Society / v.7 no.12 / pp.1665-1670 / 2004
  • General fractal image compression achieves a high compression rate but requires a long encoding time. To overcome this disadvantage, many researchers have introduced methods that reduce the total number of domain blocks by exploiting block similarities, or that limit the number of domain blocks searched depending on their distribution. In this paper, we propose a method that reduces the number of domain blocks to be searched by exploiting the variance coherence of intensity values, and that also reduces, through a classification of range blocks, the number of range blocks requiring a domain-block search. The proposed method effectively shortens the encoding time with only a negligible drop in quality compared with previous methods that search domains for all range blocks.

  • PDF
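The variance-based pruning described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact algorithm: the thresholds `flat_thresh` and `var_tol`, and the mean-only coding of flat blocks, are illustrative assumptions.

```python
import numpy as np

def block_variance(img, size):
    """Split a grayscale image into non-overlapping size x size blocks
    and return each block's top-left corner and intensity variance."""
    blocks = []
    for y in range(0, img.shape[0], size):
        for x in range(0, img.shape[1], size):
            blocks.append(((y, x), img[y:y + size, x:x + size].var()))
    return blocks

def candidate_domains(range_var, domains, flat_thresh=4.0, var_tol=25.0):
    """Prune the domain pool for one range block.

    - A nearly flat range block (variance below flat_thresh) needs no
      domain search at all: it can be coded by its mean intensity alone.
    - Otherwise, search only the domain blocks whose variance lies
      within var_tol of the range block's variance.
    """
    if range_var < flat_thresh:
        return []   # encode by mean intensity, skip the search entirely
    return [d for d in domains if abs(d[1] - range_var) <= var_tol]
```

Flat range blocks skip the search entirely, and the remaining blocks compare against a variance-matched subset of domains, which is where the encoding-time savings come from.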

A study of Fractal Image Compression with HV partition (HV 분할 방식을 이용한 fractal 영상 압축에 관한 연구)

  • Lee, Moon-Jik;Chung, Chin-Hyun
    • Proceedings of the KIEE Conference / 1998.07g / pp.2240-2242 / 1998
  • Image coding based on fractal theory yields highly compressed images. In this paper, we discuss image compression using the HV partition method, which divides the image adaptively along the horizontal and vertical axes. To reduce the encoding time spent on domain-range comparisons, we use a classification scheme based on the brightness ordering of rectangular portions of the image. This paper focuses on reducing the coding time, a known problem in traditional fractal compression, through adaptive partitioning of the image and the classification method.

  • PDF
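An HV split chooses a horizontal or vertical cut line adaptively rather than always quartering the block as a quadtree does. The sketch below picks the cut where adjacent row (or column) means differ most, weighted toward central cuts; the weighting heuristic is a common choice in HV partitioning literature, not taken from this paper.

```python
import numpy as np

def hv_split(block):
    """Split a 2-D block once, horizontally or vertically, at the cut
    line where adjacent row/column means differ most (biased toward
    cuts near the middle)."""
    rows, cols = block.shape
    h = np.arange(1, rows)                       # candidate horizontal cuts
    v = np.arange(1, cols)                       # candidate vertical cuts
    h_score = np.abs(np.diff(block.mean(axis=1))) * np.minimum(h, rows - h)
    v_score = np.abs(np.diff(block.mean(axis=0))) * np.minimum(v, cols - v)
    if h_score.max() >= v_score.max():
        i = int(h_score.argmax()) + 1
        return ('H', i, block[:i, :], block[i:, :])
    j = int(v_score.argmax()) + 1
    return ('V', j, block[:, :j], block[:, j:])
```

Applied recursively until blocks are uniform enough, this yields the adaptive rectangular partition the abstract refers to.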

Study on an Efficiency Medical Images Export, Import According to the Type of Compression (압축타입에 따른 효율적인 의료영상 Import, Export에 관한 고찰)

  • Park, Bum-Jin;Jeong, Jae-Ho
    • Korean Journal of Digital Imaging in Medicine / v.16 no.1 / pp.1-5 / 2014
  • This study examines the efficient export and import of medical images according to the type of compression. PACS is used in many hospitals, and the export of medical images keeps growing because it is cheaper and more usable than a film-based system. As a result, the export department spends a great deal of time on this work, which can cause patient discomfort. Compressed images take less time to export and import than uncompressed images. Therefore, as long as clinicians see no significant problems when viewing the images, exporting compressed images is one way to reduce the time required, providing lower cost and shorter waits for patients.

  • PDF

Effects of Chest Compression Quality between Rescuer's Simplified Verbal-Order Method and Continued Verbal-Order Method during Cardiopulmonary Resuscitation (심폐소생술 시 구조자의 간소화된 구령방법과 연속된 구령방법 간의 가슴압박 질 효과)

  • Baek, Hong-Seok;Park, Sang-Sub
    • The Journal of the Korea Contents Association / v.13 no.4 / pp.320-330 / 2013
  • The purpose of this study is to increase the efficiency of CPR by comparing the chest-compression quality achieved with a rescuer's simplified verbal-order method against the continued verbal-order method, using a voice meter during CPR. Subjects were 89 undergraduates (45 in the experimental group, 44 in the control group) from the Department of Emergency Medical Technology in C Province who had completed the 15-week CPR curriculum; group assignment was random. The experimental group used the simplified verbal order and the control group the continued verbal order. Measurements were taken in two rounds (November 10 and November 28, 2011; September 3-4, 2012), and the analysis used the SPSS WIN 12.0 program. The experimental group performed appropriate chest compressions (102.86 times, 67.79%) with higher quality (p<.05) than the control group (85.31 times, 55.84%). On the other hand, weak chest compressions were more frequent in the control group (61.13 times) than in the experimental group (35.54 times). Proper chest compression was more frequent (p<.05) among men in the experimental group, and among subjects over 60 kg in the experimental group.

Application of Biosignal Data Compression for u-Health Sensor Network System (u-헬스 센서 네트워크 시스템의 생체신호 압축 처리)

  • Lee, Yong-Gyu;Park, Ji-Ho;Yoon, Gil-Won
    • Journal of Sensor Science and Technology / v.21 no.5 / pp.352-358 / 2012
  • A sensor network system can be an efficient tool for healthcare telemetry for multiple users owing to its power efficiency; one drawback is its limited data size. This paper proposes a real-time data compression/decompression method for a u-Health monitoring system in order to improve network efficiency. High priority was given to maintaining high-quality signal reconstruction, since receiving an undistorted waveform is important. The method consists of down-sampling coding and differential Huffman coding. Down sampling was applied based on the Nyquist-Shannon sampling theorem, and signal amplitude was taken into account to increase the compression rate in the differential Huffman coding. The method was successfully tested on a ZigBee and WLAN dual network. Electrocardiogram (ECG) data had an average compression ratio (CR) of 3.99:1 with a 0.24% percentage root-mean-square difference (PRD); photoplethysmogram (PPG) data showed an average CR of 37.99:1 with a 0.16% PRD. The method produced an outstanding PRD compared with previous reports.
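Differential Huffman coding works because the first differences of a slowly varying biosignal cluster tightly around zero, so an entropy code over the differences is short. A minimal sketch, without the paper's down-sampling stage or amplitude-dependent refinement:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix-free Huffman codebook {symbol: bitstring}."""
    freq = Counter(symbols)
    if len(freq) == 1:                         # degenerate one-symbol case
        return {next(iter(freq)): '0'}
    heap = [(n, i, {s: ''}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)                           # tiebreaker for heap ordering
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + code for s, code in c1.items()}
        merged.update({s: '1' + code for s, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, tick, merged))
        tick += 1
    return heap[0][2]

def diff_huffman_encode(samples):
    """Code the first sample plus first-order differences with Huffman."""
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    book = huffman_code(deltas)
    return ''.join(book[d] for d in deltas), book

def diff_huffman_decode(bits, book):
    """Invert the prefix-free code, then integrate the differences."""
    rev = {code: s for s, code in book.items()}
    deltas, cur = [], ''
    for b in bits:
        cur += b
        if cur in rev:
            deltas.append(rev[cur])
            cur = ''
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out
```

Encoding and then decoding reproduces the sample stream exactly, which matches the paper's emphasis on undistorted waveform reconstruction.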

Extracting Graphics Information for Better Video Compression

  • Hong, Kang Woon;Ryu, Won;Choi, Jun Kyun;Lim, Choong-Gyoo
    • ETRI Journal / v.37 no.4 / pp.743-751 / 2015
  • Cloud gaming services are heavily dependent on the efficiency of real-time video streaming technology owing to the limited bandwidth of the wired or wireless networks through which consecutive frame images are delivered to gamers. Video compression algorithms typically take advantage of similarities among video frame images or within a single video frame image. This paper presents a method for computing and extracting both graphics information and an object's boundary from consecutive frame images of a game application. The method allows video compression algorithms to determine the positions and sizes of similar image blocks, which, in turn, helps achieve better video compression ratios. The proposed method can be easily implemented using function call interception, a programmable graphics pipeline, and off-screen rendering. It is implemented using the widely used Direct3D API and applied to a well-known sample application to verify its feasibility and analyze its performance. The proposed method computes various kinds of graphics information with minimal overhead.

Program Execution Speed Improvement using Executable Compression Method on Embedded Systems (임베디드 시스템에서 실행 가능 압축 기법을 이용한 프로그램 초기 실행 속도 향상)

  • Jeon, Chang-Kyu;Lew, Kyeung-Seek;Kim, Yong-Deak
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.1 / pp.23-28 / 2012
  • The performance of secondary storage has improved very slowly compared to main memory and processors. Data must be loaded from secondary storage into memory to execute an application, and this transfer is a bottleneck. In this paper, we propose an executable compression method to speed up the initial loading time of applications and examine its performance. We implemented two programs: a compressor for executable binary files, and a decoder that runs compressed executables on an embedded system. In speed tests on six binary files, one case showed decreased performance but the others improved, with an average gain of about 29% in initial loading time. The achievable compression level differed by file, and the performance depended on the compressed file size and the decompression time, so a compression algorithm optimized for executable binaries is needed.
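The tradeoff the abstract measures can be modeled directly: loading a compressed executable reads fewer bytes from slow storage but pays CPU time to decompress. The sketch below uses `zlib` as a stand-in codec and an assumed storage bandwidth (`storage_mb_s`); both are illustrative, not what the paper's embedded system used.

```python
import time
import zlib

def load_time_estimate(raw, storage_mb_s=5.0, level=6):
    """Estimate plain vs. compressed initial-load time for a binary.

    plain  = read raw bytes at the assumed storage bandwidth
    packed = read compressed bytes at that bandwidth + decompress on CPU
    Returns (plain_s, packed_s, compression_ratio).
    """
    comp = zlib.compress(raw, level)
    t0 = time.perf_counter()
    zlib.decompress(comp)                      # measure decompression cost
    decomp_s = time.perf_counter() - t0
    bw = storage_mb_s * 1024 * 1024            # bytes per second
    plain_s = len(raw) / bw
    packed_s = len(comp) / bw + decomp_s
    return plain_s, packed_s, len(comp) / len(raw)
```

When the binary compresses well and decompression is faster than the saved I/O, `packed_s < plain_s`, which is the regime where five of the paper's six test files landed; the remaining file illustrates the opposite case.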

ECG Signal Compression based on Adaptive Multi-level Code (적응적 멀티 레벨 코드 기반의 심전도 신호 압축)

  • Kim, Jungjoon
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.6 / pp.519-526 / 2013
  • An ECG signal has a repeating cycle of P, Q, R, S, and T waves and is generally sampled at a high sampling frequency. Exploiting this periodicity, compression should maximize efficiency while minimizing the loss of information important for diagnosis. However, the amplitude and period are not constant across measurement times and patients; even signals measured at the same time show different periodic intervals from patient to patient. In this paper, an adaptive multi-level coding scheme is presented that codes the dominant and non-dominant intervals of the ECG signal differently. The proposed method maximizes compression efficiency by using a multi-level code that applies different compression ratios to dominant and non-dominant intervals according to the associated information loss. For long-term measurements, the method achieves a higher compression ratio than existing methods that do not exploit the periodicity of the ECG signal, and the lossless coding of non-dominant intervals allows them to be stored without information loss. The effectiveness of the compression is demonstrated in experiments on ECG signals from the MIT-BIH arrhythmia database.
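One simple way to realize interval-dependent coding (a sketch under assumptions, not the paper's exact scheme) is to store high-slope samples, such as the QRS complex, verbatim, and to store low-slope baseline intervals as a first value plus run-length-encoded differences, which stays lossless but compresses the near-constant baseline heavily. The `slope_thresh` parameter is illustrative.

```python
from itertools import groupby
import numpy as np

def multilevel_encode(sig, slope_thresh=0.5):
    """Split the signal into dominant / non-dominant runs and code them
    at two levels: 'D' chunks are verbatim, 'N' chunks are a first value
    plus run-length-encoded first differences (also lossless)."""
    mask = (np.abs(np.gradient(sig)) > slope_thresh).astype(int)
    # widen the dominant mask by one sample so local extrema stay inside
    dominant = np.convolve(mask, np.ones(3), mode='same') > 0
    chunks, start = [], 0
    for i in range(1, len(sig) + 1):
        if i == len(sig) or dominant[i] != dominant[start]:
            seg = sig[start:i]
            if dominant[start]:
                chunks.append(('D', seg.tolist()))
            else:
                diffs = np.diff(seg).tolist()
                rle = [(k, len(list(g))) for k, g in groupby(diffs)]
                chunks.append(('N', float(seg[0]), rle))
            start = i
    return chunks

def multilevel_decode(chunks):
    """Exact reconstruction from the two-level chunk stream."""
    out = []
    for ch in chunks:
        if ch[0] == 'D':
            out.extend(ch[1])
        else:
            _, first, rle = ch
            out.append(first)
            for step, count in rle:
                for _ in range(count):
                    out.append(out[-1] + step)
    return out
```

A flat baseline of any length collapses to a single run-length pair, while the diagnostically important high-slope intervals pass through untouched.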

CNN based Image Restoration Method for the Reduction of Compression Artifacts (압축 왜곡 감소를 위한 CNN 기반 이미지 화질개선 알고리즘)

  • Lee, Yooho;Jun, Dongsan
    • Journal of Korea Multimedia Society / v.25 no.5 / pp.676-684 / 2022
  • As realistic media spread across various image processing areas, image and video compression is a key technology for enabling real-time applications over limited network bandwidth. However, image and video compression generally introduces artifacts such as blocking and ringing effects. In this study, we propose a Deep Residual Channel-attention Network (DRCAN), which consists of an input layer, a feature extractor, and an output layer. Experimental results showed that the proposed DRCAN reduced the total memory size and the inference time to as low as 47% and 59%, respectively. In addition, DRCAN achieved a better peak signal-to-noise ratio and structural similarity index measure on compressed images than previous methods.
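The channel-attention mechanism such a network builds on can be shown in a few lines of NumPy: global average pooling squeezes each channel to a scalar, a small bottleneck MLP and a sigmoid produce a per-channel gate, and the features are rescaled and added back residually. The weights `w1`, `w2` here are illustrative placeholders, not DRCAN's trained parameters or exact layer sizes.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention on a (C, H, W)
    feature map: pool -> bottleneck MLP -> sigmoid gate -> rescale."""
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)       # (C,) global pooling
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # per-channel gate in (0, 1)
    return feat * gate[:, None, None]                # rescale each channel

def residual_ca_block(feat, w1, w2):
    """Residual connection around the attended features, the basic
    building block of residual channel-attention networks (sketch)."""
    return feat + channel_attention(feat, w1, w2)
```

Because the gate is strictly between 0 and 1, attention can only attenuate channels; the residual path preserves the original signal, which is what makes such blocks easy to train when stacked.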

Development of Remeshing Algorithm using Mesh Compression Method (격자 압축법을 이용한 격자 재구성 알고리즘 개발)

  • Hong J. T.;Yang D. Y.
    • Proceedings of the Korean Society for Technology of Plasticity Conference / 2000.10a / pp.62-65 / 2000
  • The finite element method (FEM) has been developed over several decades to save experimental time and cost. A drawback of FEM is that during an analysis the material may deform so severely that the analysis cannot proceed; in that case the material must be remeshed. In hot forging simulation, the remeshing process usually does not consider the flash of the material, because the computation time grows as the mesh size becomes smaller, yet a coarse mesh yields less accurate results. A new remeshing algorithm is therefore needed that saves time while producing more accurate results.

  • PDF