• Title/Summary/Keyword: Compressing Codec

Search Results: 8

Underwater Image Preprocessing and Compression for Efficient Underwater Searches and Ultrasonic Communications

  • Kim, Dong-Hoon; Song, Jun-Yeob
    • International Journal of Precision Engineering and Manufacturing / v.8 no.1 / pp.38-45 / 2007
  • We propose a preprocessing method for removing floating particles from underwater images based on an analysis of the image features. We compared baseline JPEG and wavelet codec methods to determine which is better suited to underwater images. The proposed preprocessing method enhanced the compression ratio and resolution and provided an efficient means of compressing the images. The wavelet codec yielded better compression ratios and image resolutions. The results suggest that the wavelet codec, combined with the proposed preprocessing method, provides an efficient codec processor and transmission system for underwater images used in searches and transmitted via ultrasonic communications.
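As a rough, hedged illustration of the kind of codec comparison this abstract describes (not the authors' implementation), the sketch below compresses one grayscale image with baseline JPEG via Pillow and with a naive wavelet scheme via PyWavelets, then compares sizes and PSNR. The file name, JPEG quality, wavelet choice, and threshold are all assumptions.

```python
# Illustrative sketch only: compare baseline JPEG against a naive
# wavelet (threshold + zlib) compression of the same grayscale image.
# "underwater.png", the JPEG quality and the threshold are assumptions.
import io, zlib
import numpy as np
import pywt
from PIL import Image

img = np.asarray(Image.open("underwater.png").convert("L"), dtype=np.float32)

def psnr(a, b):
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

# Baseline JPEG via Pillow.
buf = io.BytesIO()
Image.fromarray(img.astype(np.uint8)).save(buf, format="JPEG", quality=30)
jpeg_size = buf.tell()
buf.seek(0)
jpeg_rec = np.asarray(Image.open(buf), dtype=np.float32)

# Naive wavelet codec: 3-level DWT, hard-threshold small coefficients,
# then entropy-code the rounded coefficients with zlib.
coeffs = pywt.wavedec2(img, "bior4.4", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)
arr = np.where(np.abs(arr) < 20, 0, arr)            # discard small detail
payload = zlib.compress(np.round(arr).astype(np.int16).tobytes(), 9)
rec = pywt.waverec2(
    pywt.array_to_coeffs(np.round(arr), slices, output_format="wavedec2"),
    "bior4.4")[: img.shape[0], : img.shape[1]]

print(f"JPEG   : {jpeg_size} bytes, PSNR {psnr(img, jpeg_rec):.2f} dB")
print(f"Wavelet: {len(payload)} bytes, PSNR {psnr(img, rec):.2f} dB")
```

In a setting like the one described, the preprocessing step would run before either encoder, so the comparison measures how well each codec handles the cleaned image.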

VLSI Architecture of High Performance Huffman Codec (고성능 허프만 코덱의 VLSI 구조)

  • Choi, Hyun-Jun; Seo, Young-Ho; Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.2 / pp.439-446 / 2011
  • In this paper, we propose and implement dedicated hardware for Huffman coding, an entropy coding method used to compress multimedia data in video coding. The proposed Huffman codec consists of a Huffman encoder and a decoder. The encoder converts symbols to Huffman codes using a look-up table. The variable-length Huffman codes are packetized into a 32-bit data format in the data packing block and then output sequentially in units of a frame. The decoder converts the serial bitstream back to the original symbols without buffering, using a finite state machine (FSM) with a tree structure. The proposed hardware is programmable for both encoding and decoding, so it can support various Huffman codes. The design was implemented on an Altera Cyclone III FPGA, where it uses 3,725 LUTs and operates at a frequency of 365 MHz.
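To make the encoder/decoder split concrete, here is a minimal software analogue of the architecture described above, not the paper's VLSI design: a look-up-table encoder that emits variable-length codes, and a decoder that walks a code tree bit by bit, much as a tree-structured FSM would. The example code table is hypothetical.

```python
# Minimal software analogue of the LUT encoder and the tree-walking
# (FSM-like) decoder described above. The code table is a hypothetical
# example, not the one used in the paper's hardware.

# Encoder LUT: symbol -> variable-length Huffman code (as a bit string).
CODE_LUT = {"a": "0", "b": "10", "c": "110", "d": "111"}

def encode(symbols):
    """Concatenate LUT entries into one serial bitstream."""
    return "".join(CODE_LUT[s] for s in symbols)

# Build a binary code tree for decoding.
ROOT = {}
for sym, code in CODE_LUT.items():
    node = ROOT
    for bit in code[:-1]:
        node = node.setdefault(bit, {})
    node[code[-1]] = sym

def decode(bits):
    """Walk the tree one bit at a time; reaching a leaf emits a symbol
    and returns to the root state, so no input buffering is needed."""
    out, node = [], ROOT
    for bit in bits:
        node = node[bit]
        if isinstance(node, str):       # leaf reached -> emit symbol
            out.append(node)
            node = ROOT                 # FSM returns to its root state
    return out

stream = encode("abacad")
assert decode(stream) == list("abacad")
print(stream)                           # -> 01001100111
```

The hardware version additionally packs the variable-length stream into fixed 32-bit words; in software that step would amount to buffering the bit string and slicing it into 32-bit chunks.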

SPIHT-based Subband Division Compression Method for High-resolution Image Compression (고해상도 영상 압축을 위한 SPIHT 기반의 부대역 분할 압축 방법)

  • Kim, Woosuk; Park, Byung-Seo; Oh, Kwan-Jung; Seo, Young-Ho
    • Journal of Broadcast Engineering / v.27 no.2 / pp.198-206 / 2022
  • This paper proposes a method to solve problems that can occur when SPIHT (set partitioning in hierarchical trees) is used in a dedicated codec for compressing ultra-high-resolution complex holograms. The development of codecs for complex holograms can be largely divided into building a dedicated compression method and using an anchor codec such as HEVC or JPEG2000 with additional post-processing techniques. Building a dedicated compression method requires a separate transform tool to analyze the spatial characteristics of complex holograms. Zero-tree-based algorithms that operate on subbands, such as EZW and SPIHT, have the problem that, when coding high-resolution images, intact subband information is not properly transmitted during bitstream control. This paper proposes dividing the wavelet subbands to solve this problem. By compressing each divided subband separately, information is kept uniform across the subbands. The proposed method showed better restoration results in terms of PSNR than the existing method.
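The subband-division idea can be sketched in software as follows; this illustrates only the splitting of wavelet subbands and coding them under separate budgets, not SPIHT itself, and the wavelet, decomposition level, and quantization steps are assumptions.

```python
# Illustration of the subband-division idea: decompose with a 2-D DWT
# and compress each subband independently, so truncating the overall
# bitstream never drops a whole subband. This is NOT an SPIHT
# implementation; the per-subband quantization steps are assumptions.
import zlib
import numpy as np
import pywt

img = np.random.rand(2048, 2048).astype(np.float32)   # stand-in for a hologram plane

coeffs = pywt.wavedec2(img, "haar", level=4)
subbands = [("LL4", coeffs[0])] + [
    (f"{name}{len(coeffs) - 1 - i}", band)
    for i, detail in enumerate(coeffs[1:])
    for name, band in zip(("LH", "HL", "HH"), detail)
]

packets = []
for name, band in subbands:
    step = 0.01 if name.startswith("LL") else 0.05     # coarser step for detail bands
    q = np.round(band / step).astype(np.int16)         # simple uniform quantization
    packets.append((name, zlib.compress(q.tobytes(), 6)))

for name, payload in packets:
    print(f"{name:>4}: {len(payload):8d} bytes")
```

Because each subband yields its own packet, rate control can trim every packet proportionally instead of dropping the tail subbands outright.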

Multi-view Synthesis Algorithm for the Better Efficiency of Codec (부복호화기 효율을 고려한 다시점 영상 합성 기법)

  • Choi, In-kyu; Cheong, Won-sik; Lee, Gwangsoon; Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.2 / pp.375-384 / 2016
  • In this paper, given stereo images, satellite views, and the corresponding depth maps as input data, we propose a new method that converts these data into a format suitable for compression and then synthesizes intermediate views from that format. At the transmitter, the depth maps are merged into a global depth map, and the satellite views are converted into residual images covering the hole regions, i.e., out-of-frame areas and occlusion regions. These images are subsampled to reduce the amount of data and, together with the stereo images of the main view, are encoded with an HEVC codec and transmitted. At the receiver, intermediate views between the stereo images, and between the stereo images and the satellite views, are synthesized using the decoded global depth map, residual images, and stereo images. Through experiments, we confirm, both subjectively and objectively, that the intermediate views synthesized from the proposed format offer good quality relative to the total bit-rate compared with intermediate views synthesized from the MVD format.
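As a heavily simplified, assumption-laden sketch of the transmitter-side idea (not the paper's pipeline), the code below warps a main view toward a satellite view using a depth-derived horizontal disparity and keeps, as the residual image, only the pixels the warp cannot predict.

```python
# Simplified sketch: warp the main view toward a satellite view with a
# depth-derived horizontal disparity and keep only the unpredicted
# pixels (dis-occlusions and out-of-frame areas) as the residual image.
# The 1-D disparity model and the depth-to-disparity scale factor are
# assumptions, not the paper's synthesis pipeline.
import numpy as np

def residual_from_satellite(main, satellite, depth, scale=0.1):
    h, w = main.shape
    warped = np.zeros_like(main)
    covered = np.zeros((h, w), dtype=bool)
    disparity = np.round(scale * depth).astype(int)     # assumed linear mapping
    for y in range(h):
        for x in range(w):
            xs = x + disparity[y, x]                    # target column in satellite view
            if 0 <= xs < w:
                warped[y, xs] = main[y, x]
                covered[y, xs] = True
    residual = np.where(covered, 0, satellite)          # keep only unpredicted pixels
    return residual, covered

# Toy data: 8x8 grayscale views and a two-layer depth map.
main = np.arange(64, dtype=np.float32).reshape(8, 8)
satellite = main + 5
depth = np.full((8, 8), 20.0)
depth[:, 4:] = 60.0
res, mask = residual_from_satellite(main, satellite, depth)
print("unpredicted pixels:", int((~mask).sum()))
```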

The embodiment of the advanced EPS with the synthesis system of moving picture (동영상합성시스템을 이용한 개선된 외국인고용관리시스템(EPS) 구현)

  • Kim, Rog-Hwan; Jung, Byeong-Soo
    • Journal of the Korea Society of Computer and Information / v.14 no.9 / pp.105-113 / 2009
  • This paper aims to implement an optimal system for the national supply of foreign workforce in order to introduce qualified foreign workers in an era of eleven thousand foreigners. Because detailed information about job seekers is insufficient, it is difficult to employ qualified foreign workers, and confidence in the job rosters that serve as supplementary resources during selection falls. Therefore, a moving picture (video) function should be added to the current system to deal with these problems. To this end, this paper proposes applying a moving-picture-embedded system to the current EPS, utilizing multimedia, network, and database technologies to add a moving picture synthesis function to the existing system. It also suggests an advanced foreign employment management system that enables employers to hire foreign workers who satisfy their requirements and demands.

SHVC-based V-PCC Content ISOBMFF Encapsulation and DASH Configuration Method (SHVC 기반 V-PCC 콘텐츠 ISOBMFF 캡슐화 및 DASH 구성 방안)

  • Nam, Kwijung; Kim, Junsik; Kim, Kyuheon
    • Journal of Broadcast Engineering / v.27 no.4 / pp.548-560 / 2022
  • Video-based Point Cloud Compression (V-PCC) is one of the methods for compressing point clouds, and it shows high efficiency for dynamic point clouds with motion because it compresses point cloud data using an existing video codec. Accordingly, V-PCC is drawing attention as a core technology for immersive content services such as AR/VR. To serve V-PCC content effectively through a media streaming platform, it must be encapsulated in the existing media file format, the ISO Base Media File Format (ISOBMFF). However, to serve it through an adaptive streaming platform such as Dynamic Adaptive Streaming over HTTP (DASH), V-PCC content must be encoded at various qualities and stored on the server, which, given its size, places a far greater burden on the encoder and the server than existing 2D media does. One way to address this problem is to build the streaming platform around V-PCC content encoded with SHVC. Therefore, this paper proposes encapsulating the SHVC-based V-PCC bitstream into ISOBMFF suitable for the DASH service, together with a DASH configuration method for serving it effectively, and confirms both through verification experiments.
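As a rough illustration of the DASH-configuration side only (not the paper's ISOBMFF box design), the sketch below generates a minimal MPD in which an SHVC-style base layer and an enhancement layer appear as separate Representations linked by dependencyId, so a client can fetch the base layer alone or both. All identifiers, bandwidths, and file names are assumptions.

```python
# Rough illustration of the DASH configuration idea only: expose an
# SHVC-style base layer and an enhancement layer as two Representations
# linked by @dependencyId. All ids, bandwidths and file names are
# assumptions, not values from the paper.
import xml.etree.ElementTree as ET

MPD_NS = "urn:mpeg:dash:schema:mpd:2011"
ET.register_namespace("", MPD_NS)

mpd = ET.Element(f"{{{MPD_NS}}}MPD", {
    "type": "static",
    "minBufferTime": "PT2S",
    "profiles": "urn:mpeg:dash:profile:isoff-on-demand:2011",
})
period = ET.SubElement(mpd, f"{{{MPD_NS}}}Period", {"id": "0"})
aset = ET.SubElement(period, f"{{{MPD_NS}}}AdaptationSet",
                     {"mimeType": "video/mp4"})

for rep_id, bw, dep, url in [
    ("vpcc-base", "8000000", None, "vpcc_base.mp4"),   # decodable on its own
    ("vpcc-enh", "16000000", "vpcc-base", "vpcc_enh.mp4"),
]:
    attrs = {"id": rep_id, "bandwidth": bw}
    if dep:
        attrs["dependencyId"] = dep     # enhancement layer depends on the base layer
    rep = ET.SubElement(aset, f"{{{MPD_NS}}}Representation", attrs)
    ET.SubElement(rep, f"{{{MPD_NS}}}BaseURL").text = url

ET.ElementTree(mpd).write("vpcc_shvc.mpd", xml_declaration=True, encoding="utf-8")
```

Exposing the layers as dependent Representations is what lets the server store a single SHVC-encoded asset instead of independently encoded copies at every quality.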

Same music file recognition method by using similarity measurement among music feature data (음악 특징점간의 유사도 측정을 이용한 동일음원 인식 방법)

  • Sung, Bo-Kyung; Chung, Myoung-Beom; Ko, Il-Ju
    • Journal of the Korea Society of Computer and Information / v.13 no.3 / pp.99-106 / 2008
  • Recently, digital music retrieval has been used in many fields (web portals, audio service sites, etc.). In existing systems, music metadata are used for digital music retrieval. If the metadata are incorrect or missing, it is hard to obtain highly accurate retrieval results. Content-based information retrieval, which uses the music itself, has been studied to solve this problem. In this paper, we propose a same-music recognition method using similarity measurement. Feature data of digital music are extracted from the music waveform using simplified MFCCs (Mel-frequency cepstral coefficients). Similarity between digital music files is measured using DTW (dynamic time warping), which is used in the vision and speech recognition fields. To prove the proposed same-music recognition method, we succeeded in all 500 trials on 1,000 songs randomly collected from the same genre; 500 of the music files were generated from 60 digital audio sources by mixing different compression codecs and bit-rates. We proved that similarity measurement using DTW can recognize the same music.
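A compact way to reproduce the core of this pipeline with current tooling, sketched below, uses librosa rather than the authors' simplified MFCC code: extract MFCCs from two files and take the accumulated DTW cost as the dissimilarity score. The file names and the decision threshold are assumptions.

```python
# Sketch of the core pipeline: MFCC features + DTW cost as a
# dissimilarity score between two audio files. Uses librosa instead of
# the authors' simplified MFCC; file names and the decision threshold
# are assumptions.
import librosa
import numpy as np

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=22050, mono=True)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape (n_mfcc, frames)

def dtw_distance(feat_a, feat_b):
    # D is the accumulated-cost matrix, wp the optimal warping path.
    D, wp = librosa.sequence.dtw(X=feat_a, Y=feat_b, metric="euclidean")
    return D[-1, -1] / len(wp)          # path-length-normalized cost

a = mfcc_features("song_128kbps.mp3")
b = mfcc_features("song_320kbps.mp3")
score = dtw_distance(a, b)
print(f"normalized DTW cost: {score:.3f}")
print("same source" if score < 50.0 else "different source")  # threshold is a guess
```

Normalizing by the warping-path length keeps the score comparable across songs of different durations, which matters when the same source has been re-encoded at different bit-rates.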


Compression of DNN Integer Weight using Video Encoder (비디오 인코더를 통한 딥러닝 모델의 정수 가중치 압축)

  • Kim, Seunghwan; Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.26 no.6 / pp.778-789 / 2021
  • Recently, various lightweight methods for using Convolutional Neural Network (CNN) models on mobile devices have emerged. Weight quantization, which lowers the bit precision of the weights, is a lightweight method that enables a model to run with integer computation in mobile environments where GPU acceleration is unavailable. Weight quantization has already been applied to various models as a lightweight method that reduces computational complexity and model size with a small loss of accuracy. Considering memory size and computing speed as well as the storage capacity of the device and the limited network environment, this paper proposes compressing the quantized integer weights with a video codec. To verify the performance of the proposed method, experiments were conducted on VGG16, ResNet50, and ResNet18 models trained on the ImageNet and Places365 datasets. As a result, an accuracy loss of less than 2% and high compression efficiency were achieved across the models. In addition, comparison with similar compression methods verified that the compression efficiency was more than doubled.
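The basic mechanics can be illustrated with the assumption-laden sketch below (not the paper's pipeline): pack 8-bit quantized weights into grayscale frames and pipe them to ffmpeg for lossless HEVC encoding, so the integers survive a round trip. The frame size, the random stand-in weights, and the availability of ffmpeg with libx265 are assumptions.

```python
# Sketch of the basic mechanics only: pack 8-bit quantized weights into
# grayscale frames and pipe them to ffmpeg for lossless HEVC encoding.
# The 224x224 frame size and the random "weights" are assumptions; a
# real pipeline would take the weights from a quantized model.
import subprocess
import numpy as np

H = W = 224
rng = np.random.default_rng(0)
weights = rng.integers(0, 256, size=3_000_000, dtype=np.uint8)  # stand-in for int8 weights

# Pad to a whole number of HxW frames and reshape into a frame stack.
n_frames = -(-weights.size // (H * W))
frames = np.zeros(n_frames * H * W, dtype=np.uint8)
frames[: weights.size] = weights
frames = frames.reshape(n_frames, H, W)

# Lossless HEVC so the decoded frames reproduce the integers exactly.
cmd = [
    "ffmpeg", "-y", "-f", "rawvideo", "-pix_fmt", "gray",
    "-s", f"{W}x{H}", "-r", "30", "-i", "-",
    "-c:v", "libx265", "-x265-params", "lossless=1",
    "weights.mp4",
]
subprocess.run(cmd, input=frames.tobytes(), check=True)
print("encoded", n_frames, "frames of quantized weights")
```

Whether lossless or mildly lossy encoding is used decides the trade-off between compression ratio and the accuracy loss reported in the abstract.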