• Title/Summary/Keyword: 시간압축 (time compression)

Search Result 1,990, Processing Time 0.034 seconds

A Simplified Pre-processing Method for Efficient Video Noise Reduction (효과적인 영상 잡음 제거를 위한 간략한 전처리 방법)

  • 박운기;이상희;전병우
    • Journal of Broadcast Engineering
    • /
    • v.6 no.2
    • /
    • pp.139-147
    • /
    • 2001
  • Since various noises degrade not only image quality but also compression efficiency in MPEG and H.263, pre-processing is necessary to reduce spatial and temporal noise and to increase coding efficiency as well. In this paper, we propose a simplified method for noise detection and for spatial and temporal noise reduction. Noise detection is based on the correlation of the current pixel with its four neighboring pixels. Spatial noise reduction utilizes a non-rectangular median filter that is less complex than the conventional rectangular median filter. The proposed temporal filter is an IIR average filter using an LUT (look-up table) to enhance subjective video quality. The proposed pre-processing method is very simple and efficient.

  • PDF
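The non-rectangular median filter mentioned in the abstract can be illustrated with a minimal sketch (assumed details, not the authors' implementation): each pixel is replaced by the median of itself and its four 4-connected neighbors, so only five samples are sorted instead of the nine of a 3x3 rectangular window.

```python
def cross_median_filter(img):
    """img: 2-D list of pixel values; border pixels are left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # cross-shaped (non-rectangular) window: pixel + 4 neighbors
            window = [img[y][x], img[y - 1][x], img[y + 1][x],
                      img[y][x - 1], img[y][x + 1]]
            window.sort()
            out[y][x] = window[2]  # median of 5 samples
    return out
```

A single impulse-noise pixel surrounded by clean neighbors is removed, since the noisy value can never be the median of the five samples.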

An Efficient Technique to Improve Compression for Low-Power Scan Test Data (저전력 테스트 데이터 압축 개선을 위한 효과적인 기법)

  • Song, Jae-Hoon;Kim, Doo-Young;Kim, Ki-Tae;Park, Sung-Ju
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.10 s.352
    • /
    • pp.104-110
    • /
    • 2006
  • The huge test data volume, test time, and power consumption are major problems in system-on-a-chip testing. To tackle these problems, we propose a new test data compression technique. Initially, don't-cares in a pre-computed test cube set are assigned to reduce test power consumption; then, the fully specified low-power test data is transformed by a neighboring bit-wise exclusive-or (NB-XOR) scheme to improve compression efficiency. Finally, the transformed test set is compressed to reduce both the test equipment storage requirements and the test application time.
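The NB-XOR transform named above can be sketched as follows (the framing, keeping the first bit as-is, is an assumption rather than a detail taken from the paper): each bit is XORed with its predecessor, so runs of identical bits become runs of zeros, which run-length-style coders compress well.

```python
def nb_xor(bits):
    """Neighboring bit-wise XOR: bit i becomes bits[i] ^ bits[i-1]."""
    out = [bits[0]]
    for i in range(1, len(bits)):
        out.append(bits[i] ^ bits[i - 1])
    return out

def nb_xor_inverse(bits):
    """Undo the transform by accumulating XORs left to right."""
    out = [bits[0]]
    for i in range(1, len(bits)):
        out.append(bits[i] ^ out[-1])
    return out
```

Note how a low-transition test vector such as `1 1 1 0 0 0 1 1` maps to `1 0 0 1 0 0 1 0`, which is mostly zeros.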

FDR Test Compression Algorithm based on Frequency-ordered (Frequency-ordered 기반 FDR 테스트패턴 압축 알고리즘)

  • Mun, Changmin;Kim, Dooyoung;Park, Sungju
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.5
    • /
    • pp.106-113
    • /
    • 2014
  • Recently, to reduce test cost by efficiently compressing test patterns for SOCs (systems-on-a-chip), different compression techniques have been proposed, including the FDR (frequency-directed run-length) algorithm. FDR has been extended to EFDR (Extended-FDR), SAFDR (Shifted-Alternate-FDR), and VPDFDR (Variable Prefix Dual-FDR) to improve the compression ratio. In this paper, a frequency-ordered modification is proposed to further augment the compression ratios of FDR, EFDR, SAFDR, and VPDFDR. The compression ratio can be maximized by using the frequency-ordered method, and consequently the overall manufacturing test cost and time can be reduced significantly.
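The baseline FDR code that the paper builds on can be sketched in a few lines (a textbook rendering of the standard FDR code, omitting the paper's frequency ordering and don't-care handling): a run of r zeros terminated by a one is coded by a group-k codeword with a k-bit prefix and a k-bit tail, where group k covers runs 2^k - 2 through 2^(k+1) - 3.

```python
def fdr_encode_run(r):
    """Return the FDR codeword (as a bit string) for a run of r zeros."""
    k = 1
    while r > 2 ** (k + 1) - 3:   # find the group containing run length r
        k += 1
    prefix = "1" * (k - 1) + "0"  # group-k prefix: k-1 ones then a zero
    tail = format(r - (2 ** k - 2), "0{}b".format(k))
    return prefix + tail

def fdr_encode(bits):
    """Encode a bit string as concatenated FDR codewords of its 0-runs."""
    out, run = [], 0
    for b in bits:
        if b == "0":
            run += 1
        else:              # a '1' terminates the current run of zeros
            out.append(fdr_encode_run(run))
            run = 0
    return "".join(out)
```

For instance, runs of length 0 and 1 fall in group 1 (codewords `00`, `01`), and a run of 2 begins group 2 (`1000`), so long zero runs get proportionally short codewords.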

Fractal Image Coding by Linear Transformation of Computed Tomography (전산화단층촬영의 선형변환에 의한 프랙탈 영상 부호화)

  • Park, Jae-Hong;Park, Cheol-Woo
    • Journal of the Korean Society of Radiology
    • /
    • v.11 no.4
    • /
    • pp.241-246
    • /
    • 2017
  • The existing fractal compression method re-divides the whole image into domain regions, partitions it into several domain blocks, and approximates partial regions of the image by domain blocks; this is effective for generating natural shapes but difficult to implement on a computer. It is also difficult to approximate complex blocks, such as large blocks under an affine transformation, because searching for combinations of similar blocks through transformations requires a large amount of computation, and therefore a large amount of coding time is required.

Hybrid Algorithm for Scene Change Detection of MPEG Sequence (MPEG 시퀸스의 장면 변화 검출을 위한 하이브리드 알고리즘)

  • Choe, Yoon-Sik;Lee, Joon-Hyoung
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.10
    • /
    • pp.156-165
    • /
    • 1998
  • In this paper, a hybrid algorithm for scene change detection in MPEG-compressed video data is proposed. There have been two approaches to detecting scene changes in video data compressed with algorithms such as MPEG or motion-JPEG: analyzing the compressed data directly, and analyzing the decoded data. The former takes less time, while the latter obtains detailed results at the expense of time and memory. By combining the two, we detect cuts from the compressed sequence, decode the data for selected regions only, and detect gradual scene changes there. Simulation results verify the superiority of the proposed algorithm in both analysis time and accuracy.

  • PDF
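The two-stage idea above can be illustrated with a hedged sketch (not the authors' exact algorithm): a cheap frame-difference metric flags candidate cuts, and only the flagged neighborhoods would then be decoded and examined in detail for gradual transitions. Here the metric is a simple histogram distance over toy 1-D frames.

```python
def hist(frame, bins=4, top=256):
    """Coarse intensity histogram of a frame (list of pixel values)."""
    h = [0] * bins
    for v in frame:
        h[v * bins // top] += 1
    return h

def candidate_cuts(frames, threshold):
    """Return indices i where frame i differs sharply from frame i-1."""
    cuts = []
    for i in range(1, len(frames)):
        d = sum(abs(a - b)
                for a, b in zip(hist(frames[i - 1]), hist(frames[i])))
        if d > threshold:
            cuts.append(i)
    return cuts
```

In a full system this first pass would run on features extracted from the compressed stream (e.g. DC coefficients), which is what makes the coarse stage fast.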

Lossless Image Compression Based on Deep Learning (딥 러닝 기반의 무손실 영상압축 방법)

  • Rhee, Hochang;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.67-70
    • /
    • 2022
  • With the recent advances in deep learning, deep-learning-based algorithms have shown large performance gains over earlier methods in many areas of image processing and computer vision. In lossy image compression, encoder-decoder networks have recently replaced the transforms used in image compression, and with an additional encoder-decoder network for entropy coding of the transform outputs they achieve performance comparable to HEVC. In lossless compression as well, performing per-pixel prediction with a CNN greatly improves prediction over conventional predictors and has been reported to outperform non-deep-learning methods such as JPEG-2000 Lossless, FLIF, and JPEG-XL. However, computing a prediction for every pixel with a CNN requires as many CNN passes as there are pixels, which is known to be impractical: even the fastest method reported so far takes more than an hour on an HD-sized image. Methods that trade some performance for realistic speed have therefore been proposed recently. Early methods of this kind performed worse than FLIF or JPEG-XL, so they remained impractical in that they used a GPU yet still fell short of conventional methods. More recently, methods that better exploit signal characteristics have been proposed; although they still underperform per-pixel CNN prediction, they are practical methods that outperform FLIF and JPEG-XL within a short time. This study reviews several of these recent methods, examines auxiliary techniques that can further improve their performance, and evaluates their performance on raw images.

  • PDF

An On-chip Cache and Main Memory Compression System Optimized by Considering the Compression rate Distribution of Compressed Blocks (압축블록의 압축률 분포를 고려해 설계한 내장캐시 및 주 메모리 압축시스템)

  • Yim, Keun-Soo;Lee, Jang-Soo;Hong, In-Pyo;Kim, Ji-Hong;Kim, Shin-Dug;Lee, Yong-Surk;Koh, Kern
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.1_2
    • /
    • pp.125-134
    • /
    • 2004
  • Recently, an on-chip compressed cache system was presented to alleviate the processor-memory performance gap by reducing the on-chip cache miss rate and expanding memory bandwidth. This research presents an extended on-chip compressed cache system which also significantly expands main memory capacity. Several techniques are attempted to expand main memory capacity, on-chip cache capacity, and memory bandwidth, as well as to reduce decompression time and metadata size. To evaluate the performance of the proposed system against existing systems, we use an execution-driven simulation method built by modifying a superscalar microprocessor simulator; this methodology has higher accuracy than the previous trace-driven simulation method. The simulation results show that the proposed system reduces execution time by 4-23% compared with a conventional memory system, without counting the benefits obtained from main memory expansion. The expansion rates of the data and code areas of main memory are 57-120% and 27-36%, respectively.

A Partitioned Compressed-Trie for Speeding up IP Address Lookups (IP 주소 검색의 속도 향상을 위한 분할된 압축 트라이 구조)

  • Park, Jae-Hyung;Jang, Ik-Hyeon;Chung, Min-Young;Won, Yong-Gwan
    • The KIPS Transactions:PartC
    • /
    • v.10C no.5
    • /
    • pp.641-646
    • /
    • 2003
  • The packet processing speed of routers, as well as the transmission speed of physical links, greatly affects the IP packet transfer rate in the Internet. A router forwards a packet after determining the next hop toward the packet's destination, so IP address lookup is a main design issue for high-performance routers. In this paper, we propose a partitioned compressed-trie for speeding up IP address lookup algorithms based on the trie data structure by exploiting path compression. In the proposed scheme, IP prefixes are divided into several compressed tries, and a lookup is performed on only one partitioned compressed-trie. Memory access time for IP address lookup is reduced by the compression technique, and the memory required for maintaining the partitions does not increase.
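The partitioning idea above can be sketched in miniature (assumed details, not the paper's data structure): prefixes are split into per-partition tables by their first few bits, and a lookup searches only one partition for the longest matching prefix. For simplicity the sketch assumes every prefix is at least as long as the partition key.

```python
def build_partitions(prefixes, part_bits=2):
    """prefixes: list of (bit-string prefix, next hop); group by first bits."""
    parts = {}
    for p, nexthop in prefixes:
        parts.setdefault(p[:part_bits], []).append((p, nexthop))
    return parts

def lookup(parts, addr_bits, part_bits=2):
    """Longest-prefix match, searching only the address's own partition."""
    best = None
    for p, nexthop in parts.get(addr_bits[:part_bits], []):
        if addr_bits.startswith(p) and (best is None or len(p) > len(best[0])):
            best = (p, nexthop)
    return best[1] if best else None
```

A real implementation would store each partition as a path-compressed trie rather than a list; the point here is only that each lookup touches a single, smaller structure.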

Understanding on the Principle of Image Compression Algorithm Using on the DCT (discrete cosine transform) (이산여현변환을 이용한 이미지 압축 알고리즘 원리에 관한 연구)

  • Nam, Soo-tai;Kim, Do-goan;Jin, Chan-yong;Shin, Seong-yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.05a
    • /
    • pp.107-110
    • /
    • 2018
  • Image compression is the application of data compression to digital images. The discrete cosine transform (DCT) is a technique for converting a time-domain signal to the frequency domain, and it is widely used in image compression. First, the image is divided into 8x8 pixel blocks. The DCT is applied to each block, processing blocks from left to right and top to bottom. Each block is compressed through quantization, so the space taken by the compressed block array constituting the image is greatly reduced. The image is then reconstructed through the IDCT. The purpose of this research is to understand the compression and decompression of images using the DCT method.

  • PDF
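The per-block transform step described above can be written out directly (a textbook orthonormal DCT-II and its inverse, not the paper's code): an 8x8 block is transformed, its coefficients would then be quantized, and the IDCT reconstructs the block.

```python
import math

N = 8  # JPEG-style block size

def c(k):
    """Orthonormal scale factor for the DCT-II basis."""
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def dct2(block):
    """2-D DCT-II of an NxN block (list of lists)."""
    return [[c(u) * c(v) * sum(
        block[x][y]
        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for x in range(N) for y in range(N))
        for v in range(N)] for u in range(N)]

def idct2(coef):
    """2-D inverse DCT, reconstructing the block from its coefficients."""
    return [[sum(
        c(u) * c(v) * coef[u][v]
        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for u in range(N) for v in range(N))
        for y in range(N)] for x in range(N)]
```

For a flat block the energy concentrates entirely in the DC coefficient `coef[0][0]`, which is why quantizing away small high-frequency coefficients loses little visible detail.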

Improving the Read Performance of Compressed File Systems Considering Kernel Read-ahead Mechanism (커널의 미리읽기를 고려한 압축파일시스템의 읽기성능향상)

  • Ahn, Sung-Yong;Hyun, Seung-Hwan;Koh, Kern
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.6
    • /
    • pp.678-682
    • /
    • 2010
  • Compressed filesystems are frequently used in embedded systems to increase cost efficiency. One drawback of a compressed filesystem is low read performance. Moreover, the read-ahead mechanism that improves the read throughput of a storage device has a negative effect on the read performance of a compressed filesystem, increasing read latency. The main reason is that a compressed filesystem has too large a read-ahead miss penalty due to decompression overhead. To solve this problem, this paper proposes a new read technique for compressed filesystems that takes the kernel read-ahead mechanism into account. The proposed technique improves device read throughput through bulk reads from the device and reduces the decompression overhead of the compressed filesystem through selective decompression. We implement the proposed technique by modifying CramFS and evaluate our implementation in the Linux kernel 2.6.21. Performance evaluation results show that the proposed technique reduces the average major page fault handling latency by 28%.
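The "bulk read plus selective decompression" idea above can be sketched as follows (an assumed on-disk layout, not the CramFS patch): blocks are stored compressed back to back with an offset table, a bulk read would fetch the packed bytes in one device access, and only the block a page fault actually needs is decompressed.

```python
import zlib

def make_image(blocks):
    """Compress each block; return (offset table, packed bytes)."""
    packed, offsets, pos = b"", [], 0
    for b in blocks:
        cb = zlib.compress(b)
        offsets.append((pos, len(cb)))  # (start, compressed length)
        packed += cb
        pos += len(cb)
    return offsets, packed

def read_block(offsets, packed, i):
    """Selective decompression: inflate only the requested block i."""
    off, ln = offsets[i]
    return zlib.decompress(packed[off:off + ln])
```

Fetching `packed` once amortizes the device access the way read-ahead does, while skipping decompression of blocks that were read ahead but never demanded avoids the miss penalty the abstract describes.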