• Title/Summary/Keyword: 압축 Codec (compression codec)

Search Results: 204

Novel harmonic coding method for parametric audio codec (하모닉 보상방법에 기반한 파라메트릭 코덱 구현에 관한 연구)

  • Jeong, Jong-Hoon; Lee, Nam-Suk; Lee, Geon-Hyoung
    • Proceedings of the KIEE Conference / 2008.10b / pp.143-144 / 2008
  • This paper describes how the compression rate of an audio signal can be improved by exploiting its harmonic structure. Harmonic coding uses the complex-tone character of audio signals: integer multiples of a fundamental frequency appear in the frequency domain, and, because such components are close to sinusoidal, temporally adjacent frames are highly similar, which can be exploited to raise compression efficiency. For real audio signals, however, the harmonic stretch of instruments, distortion introduced along the transmission path, and external noise make it likely that harmonics are misjudged while the captured signal is analyzed, and such errors cause severe quality degradation during compression. This paper therefore describes a signal-processing method that predicts harmonic changes from frame-to-frame trends and transmits compensation values for the prediction error, enabling stable compression and reconstruction of the audio signal. (A toy sketch of this predict-and-compensate loop appears below.)
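
A minimal sketch of the predict-and-compensate loop described above, in Python. The paper does not specify its predictor, so the sketch assumes a simple linear extrapolation of per-harmonic amplitudes from the two previous frames; all function names and the quantization step are illustrative.

```python
import numpy as np

def predict_harmonics(prev, prev2):
    """Linearly extrapolate per-harmonic amplitudes from two past frames.

    This trend model is an assumption standing in for the paper's
    (unspecified) frame-to-frame trend analysis."""
    return 2.0 * prev - prev2

def encode_frame(actual, prev, prev2, step=0.05):
    """Encoder: transmit only the quantized prediction error
    (the 'compensation values' of the abstract)."""
    residual = actual - predict_harmonics(prev, prev2)
    return np.round(residual / step).astype(int)

def decode_frame(code, prev, prev2, step=0.05):
    """Decoder: rebuild the frame as prediction plus compensation."""
    return predict_harmonics(prev, prev2) + code * step

# Toy usage: three frames of four harmonic amplitudes each.
f0 = np.array([1.00, 0.50, 0.25, 0.12])
f1 = np.array([0.95, 0.52, 0.24, 0.13])
f2 = np.array([0.91, 0.55, 0.22, 0.14])   # frame to be coded
code = encode_frame(f2, f1, f0)
print(code, decode_frame(code, f1, f0))   # small integers; output close to f2
```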

Efficient Codebook Search Method for AMR Speech Codec (AMR 음성 압축기를 위한 효율적인 코드북 검색 방법)

  • Lee Doyoon; Park Hochong
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.93-96 / 2001
  • A speech coder with the ACELP structure provides excellent quality, but it has the drawback that finding the optimal code vector demands a very large amount of computation. To resolve this, this paper proposes a new, highly efficient method for searching the codebook of the AMR speech coder. The proposed method first obtains a rough code vector through a fully sequential search, then computes the importance of each pulse in that vector and performs a pulse-exchange step that replaces low-importance pulses with new ones, improving the code vector. In addition, to match the structure of the AMR coder, the codebook is searched sequentially while moving through the tracks to find several rough code vectors, and the pulse-exchange step is applied to each of them to obtain the optimal code vector. The proposed search was applied to every mode of the AMR coder and its complexity and performance were measured; in all modes it achieved equivalent performance with a far smaller amount of computation. (A toy sketch of the two stages appears below.)
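
The real AMR algebraic codebook places signed pulses on interleaved tracks and scores candidates with a correlation-to-energy criterion; the hedged toy below abstracts that to a plain least-squares match against a target vector so the two stages of the paper, sequential placement followed by pulse exchange, stay visible. All names and the error measure are illustrative assumptions.

```python
import numpy as np

def greedy_search(target, n_pulses):
    """Stage 1: fully sequential search -- place unit pulses one at a
    time where they best reduce the error against the target."""
    code, positions = np.zeros_like(target), []
    for _ in range(n_pulses):
        residual = target - code
        pos = int(np.argmax(np.abs(residual)))
        code[pos] += np.sign(residual[pos])
        positions.append(pos)
    return code, positions

def pulse_exchange(target, code, positions):
    """Stage 2: drop the least important pulse and re-place it."""
    err = lambda c: float(np.sum((target - c) ** 2))
    base = err(code)

    def removed(pos):
        # Code vector with the pulse at `pos` taken out.
        trial = code.copy()
        trial[pos] -= np.sign(trial[pos])
        return trial

    # Importance of a pulse = how much the error grows if it is removed.
    weakest = min(positions, key=lambda p: err(removed(p)) - base)
    code = removed(weakest)
    residual = target - code
    best = int(np.argmax(np.abs(residual)))
    code[best] += np.sign(residual[best])   # re-place the freed pulse
    return code

target = np.array([0.9, -0.1, 0.4, -0.8, 0.05, 0.3])
code, pos = greedy_search(target, n_pulses=3)
print(pulse_exchange(target, code, pos))
```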

Fast Edge Map Method And Edge Map Compression Using Edge Features (고속 Edge Map 생성 방법과 Edge 특성을 이용한 Edge Map 압축)

  • Kim, Do-Hyun; Kim, Yoon
    • Proceedings of the Korean Society of Computer Information Conference / 2015.07a / pp.45-48 / 2015
  • Thanks to advances in hardware, image resolutions beyond FHD, at 4K UHD and above, are now in commercial use. However, the operators commonly used to build an edge map are convolution-based, so the higher the resolution, the more computation they require. Furthermore, today's main image compression techniques, such as JPEG, H.264/AVC, and High Efficiency Video Coding (HEVC), were developed around natural images and are not as efficient at compressing edge maps. This paper introduces a new way of generating an edge map: the original image is down-scaled, up-scaled back to the original size, and the difference of the two images is taken. Since the histogram of such an edge map follows a Gaussian distribution centered at zero, a zero-based codec exploiting this property is proposed. With the proposed algorithm, edge maps are generated quickly even for high-resolution images, and compressing them with the proposed codec outperformed the other compression techniques tested. (A minimal sketch of the down/up-scale difference follows.)
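
A minimal sketch of the edge-map construction, assuming a box-filter down-scale and nearest-neighbour up-scale (the abstract does not name the scaling filters). The printed histogram shows the mass clustering around zero that motivates the zero-based codec.

```python
import numpy as np

def edge_map(img, factor=2):
    """Edge map = image minus its down-then-up-scaled copy."""
    h, w = (d - d % factor for d in img.shape)
    img = img[:h, :w].astype(np.int16)
    # Box-filter down-scaling: average each factor x factor block.
    small = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    # Nearest-neighbour up-scaling back to the original size.
    big = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return img - big.astype(np.int16)

img = (np.arange(64).reshape(8, 8) % 16 * 16).astype(np.uint8)
e = edge_map(img)
vals, counts = np.unique(e, return_counts=True)
print(dict(zip(vals.tolist(), counts.tolist())))   # mass concentrated near 0
```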

A study of scene change detection in HEVC bit stream (HEVC 비트 스트림 상에서의 장면전환 검출 기법 연구)

  • Eom, Yumie; Yoo, Sung-Geun; Yoon, So-Jeong
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.06a / pp.258-261 / 2014
  • The era of realistic broadcasting with high fidelity has arrived with the wide distribution of UHD displays and experimental UHD transmissions over CATV. UHD broadcasting is still constrained, however, because it requires a large amount of bandwidth and data in the transmission and production systems. High Efficiency Video Coding (HEVC), with more than twice the compression efficiency of its predecessor, together with cloud-based editing systems, is a key to solving these problems. Fast scene-change detection is also needed to index and search UHD videos smoothly. This paper therefore proposes a method for indexing and searching the scene-change information of large UHD videos compressed with a high-efficiency codec, and considers its applications in various UHD video environments. (The abstract does not detail the detector; one common bitstream-level cue is sketched below.)
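
As a hedged illustration of the general class of bitstream-domain methods, the sketch below flags a scene change when the share of intra-coded blocks in an inter-predicted frame jumps, since a new scene defeats motion compensation. This is not necessarily the paper's algorithm; `intra_ratios` is a hypothetical per-frame statistic that a bitstream parser would supply.

```python
def detect_scene_changes(intra_ratios, threshold=0.6):
    """Flag frames whose intra-coded block ratio exceeds the threshold.

    intra_ratios: fraction of intra-coded CUs per inter frame, as could
    be gathered while parsing the HEVC bit stream (no full decode)."""
    return [i for i, r in enumerate(intra_ratios) if r > threshold]

print(detect_scene_changes([0.05, 0.08, 0.92, 0.10, 0.07, 0.75]))  # -> [2, 5]
```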

A Design of Efficient Scan Converter for Image Compression CODEC (영상압축코덱을 위한 효율적인 스캔변환기 설계)

  • Lee, Gunjoong; Ryoo, Kwangki
    • Journal of the Korea Institute of Information and Communication Engineering / v.19 no.2 / pp.386-392 / 2015
  • Data in an image compression codec are processed in blocks of a specific, regular size. Some function blocks change the processing order of this block data: the data are packed into memory in one sequence and read back in another. To maintain a regular throughput rate, double buffering is normally used, interleaving two block-sized memories so that read and write operations can proceed concurrently. Single buffering, which uses only one block-sized memory, can be adopted for simple reorderings, but for complicated reorderings the irregular address changes make it hard to build an adequate address generator. This paper shows that the changing address access orders follow a predictable pattern that recurs within a finite number of updates, and suggests an effective way to implement it. The data-reordering function based on this idea was designed in HDL and implemented with a TSMC 0.18 um CMOS process library; across various scan blocks it shows a size reduction of more than 40% compared with the conventional method. (Why the pattern must recur is sketched below.)
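
The recurrence the paper exploits can be seen as a property of permutation powers: with a single block memory, each new block is written into the slots just freed by reading the previous block out in scan order, so the access order for block k is the scan permutation composed k times, and powers of a permutation repeat with a finite period (the lcm of its cycle lengths). A sketch, with a zig-zag scan standing in for the paper's unspecified scan patterns:

```python
from math import lcm

def zigzag_order(n):
    """Raster -> zig-zag read order of an n x n block (anti-diagonals)."""
    order = []
    for d in range(2 * n - 1):
        rows = range(max(0, d - n + 1), min(d, n - 1) + 1)
        for r in (rows if d % 2 else reversed(rows)):
            order.append(r * n + (d - r))
    return order

def address_period(perm):
    """Updates before the single-buffer address pattern repeats:
    the order of the permutation = lcm of its cycle lengths."""
    seen, period = set(), 1
    for start in range(len(perm)):
        length, cur = 0, start
        while cur not in seen:
            seen.add(cur)
            cur, length = perm[cur], length + 1
        if length:
            period = lcm(period, length)
    return period

perm = zigzag_order(8)
print(address_period(perm))   # finite, so one block RAM plus a small
                              # address-sequence table replaces double buffering
```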

Network design for correction of deterioration due to hologram compression (홀로그램 압축으로 인한 열화 보정을 위한 네트워크 설계)

  • Song, Joon Boum; Jang, Junhyuck; Hwang, Yunseok; Cho, Inje
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.377-379 / 2020
  • Hologram data depend on the pixel pitch of the SLM (spatial light modulator) and the wavelength of light, and the quality of a digital hologram is proportional to the unit pixel pitch and the total resolution. Moreover, since every pixel holds a complex value, the amount of data in a digital hologram grows very quickly and its size is inevitably large. To handle digital hologram files efficiently, it is therefore essential to reduce their size with a codec before storage. Research on restoring image quality damaged by a codec is currently very active. In this paper, hologram images from JPEG Pleno, the standard hologram data set, are encoded and decoded with the JPEG2000, AVC, and HEVC codecs, and a deep-learning network is applied to the damaged output to find out whether the quality can be improved; the degree of improvement is also compared quantitatively. (A generic restoration-network sketch follows.)
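
The abstract does not give the network design, so the sketch below is only a generic residual restoration CNN of the kind commonly used against codec artifacts (PyTorch, with illustrative layer sizes); the two input channels are assumed to carry the real and imaginary planes of the complex hologram.

```python
import torch
import torch.nn as nn

class CodecArtifactNet(nn.Module):
    """Predict a correction residual for a decoded (degraded) hologram."""
    def __init__(self, channels=2, features=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)   # learn only the codec-induced degradation

net = CodecArtifactNet()
decoded = torch.randn(1, 2, 64, 64)   # stand-in for a JPEG2000/AVC/HEVC output
print(net(decoded).shape)             # torch.Size([1, 2, 64, 64])
```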

Transform Domain Resizing for DCT-Based Codec (DCT 코덱에 기반한 변환 영역에서의 리사이징 알고리즘)

  • 신건식; 장준영; 강문기
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.3 / pp.25-34 / 2004
  • The ability to perform in the transform domain the same operations as in the spatial domain is important for efficient image transmission through a channel. We perform image resizing, both magnification and reduction, in the discrete cosine transform (DCT) domain, and the effects of the transform-domain approach are analyzed in the corresponding spatial domain. Based on this analysis, two resizing algorithms are proposed. The first further compresses images encoded by the compression standard by reducing their size before compression; the other reduces the loss of information while maintaining the conventional compression rate. Because of their compatibility with standard codecs, these algorithms can easily be embedded in the JPEG and MPEG codecs widely used for image storage and transmission. Experimental results show a reduction of about half the bit size at similar image quality, and about a 2- to 3-dB quality improvement at a similar compression rate. (A sketch of DCT-domain downscaling follows.)
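
A hedged sketch of one transform-domain resizing idea consistent with the abstract: halve an image inside the DCT domain by keeping only the low-frequency 4x4 corner of each 8x8 DCT block and inverse-transforming at the smaller size. The paper's actual algorithms may differ in filtering and scaling details.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_downscale_2x(img):
    """Halve both dimensions by truncating block DCT coefficients."""
    h, w = img.shape
    out = np.zeros((h // 2, w // 2))
    for r in range(0, h, 8):
        for c in range(0, w, 8):
            coeffs = dctn(img[r:r+8, c:c+8], norm='ortho')
            low = coeffs[:4, :4] / 2.0    # keep low frequencies; /2 preserves
            out[r//2:r//2+4, c//2:c//2+4] = idctn(low, norm='ortho')  # the mean
    return out

img = np.tile(np.linspace(0.0, 255.0, 16), (16, 1))  # 16x16 gradient test image
print(dct_downscale_2x(img).shape)                   # (8, 8)
```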

A Design and Implementation of Threshold-adjusted EZW Codec (Threshold-adjusted EZW Codec의 설계와 구현)

  • Chae, Hui-Jung; Lee, Ho-Seok
    • The KIPS Transactions: Part B / v.9B no.1 / pp.57-66 / 2002
  • In this paper, we propose a method for improving the EZW encoding algorithm. The EZW algorithm encodes wavelet coefficients using four symbols, POS (positive), NEG (negative), IZ (isolated zero), and ZTR (zerotree root), determined by the significance of the coefficients. To improve the algorithm, we apply a threshold to the wavelet coefficients: coefficients below the threshold are adjusted to zero so that more ZTR symbols are generated during encoding. The overall EZW image compression system is built with run-length coding and arithmetic coding, and it shows strong results on various images; experimental results are presented. (A toy illustration of the pre-zeroing effect follows.)
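
A toy illustration of why pre-zeroing helps, under simplifying assumptions: a single dominant pass, random coefficients standing in for a wavelet pyramid, and the plain quadtree parent-child map. Coefficients zeroed by the adjustment threshold (here 12) can no longer become significant at later, smaller pass thresholds (here 8), so more subtrees collapse into ZTR symbols, at the cost of distortion in the zeroed coefficients.

```python
import numpy as np

def descendants(i, j, n):
    """All descendants of (i, j) under the EZW quadtree map
    (children of (i, j) are (2i, 2j) .. (2i+1, 2j+1))."""
    out = []
    for r, c in ((2*i, 2*j), (2*i, 2*j+1), (2*i+1, 2*j), (2*i+1, 2*j+1)):
        if r < n and c < n and (r, c) != (i, j):
            out.append((r, c))
            out.extend(descendants(r, c, n))
    return out

def symbol(w, i, j, T):
    """Dominant-pass symbol of one coefficient at significance threshold T."""
    if abs(w[i, j]) >= T:
        return 'POS' if w[i, j] > 0 else 'NEG'
    if all(abs(w[r, c]) < T for r, c in descendants(i, j, w.shape[0])):
        return 'ZTR'              # the whole subtree is insignificant
    return 'IZ'

rng = np.random.default_rng(0)
w = rng.normal(0.0, 8.0, (8, 8))               # stand-in wavelet coefficients
adjusted = np.where(np.abs(w) < 12.0, 0.0, w)  # the paper's pre-zeroing step

for name, coeffs in (('plain', w), ('threshold-adjusted', adjusted)):
    syms = [symbol(coeffs, i, j, 8.0) for i in range(8) for j in range(8)]
    print(name, {s: syms.count(s) for s in ('POS', 'NEG', 'IZ', 'ZTR')})
```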

Overview of H.264/AVC Scalable Extension (H.264/AVC-Scalable Extension의 표준화 연구동향과 알고리즘 분석)

  • Park Seong-ho; Kim Wonha; Han Woo-jin
    • Journal of Broadcast Engineering / v.10 no.4 s.29 / pp.515-527 / 2005
  • A next-generation codec should be a scalable video codec (SVC) that not only maximizes coding efficiency but also adapts to the variety of communication devices and to variations in the network environment. To meet these requirements, the Joint Video Team (JVT) of ISO/IEC and ITU-T is standardizing an H.264/AVC-based SVC. In this paper, we introduce the research directions and status of SVC standardization and analyze the techniques and algorithms adopted in the current SVC.

A VLSI Design of Discrete Wavelet Transform and Scalar Quantization for JPEG2000 CODEC (JPEG2000 CODEC을 위한 DWT및 양자화기 VLSI 설계)

  • 이경민; 김영민
    • Journal of the Institute of Electronics Engineers of Korea SD / v.40 no.1 / pp.45-51 / 2003
  • JPEG2000 is a new international standard for still-image compression based on wavelet and bit-plane coding techniques. In this paper, we design the DWT (discrete wavelet transform) and quantizer for a JPEG2000 CODEC. The DWT handles both lossy and lossless compression within the same transform-based framework, using the Daubechies 9/7 and 5/3 transforms, and the quantizer is implemented as scalar quantization (SQ). The architectures of the proposed DWT and SQ were synthesized and verified using Xilinx FPGA technology; the design operates at up to 30 MHz and performs the wavelet transform and quantization on VGA video at 10 frames per second. (A 1-D sketch of the 5/3 lifting steps with a scalar quantizer follows.)
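
A 1-D sketch of the reversible 5/3 lifting steps used by JPEG2000's lossless path, followed by a dead-zone scalar quantizer of the kind used on the lossy path; the paper's hardware design (2-D, plus the 9/7 transform) is of course richer, and the step size here is illustrative.

```python
import numpy as np

def dwt53_forward(x):
    """One level of the JPEG2000 reversible 5/3 lifting transform (1-D).

    x: integer array of even length; symmetric extension at the borders.
    Returns (approximation s, detail d), integers in, integers out."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict: d[n] = x[2n+1] - floor((x[2n] + x[2n+2]) / 2)
    right = np.concatenate([even[1:], even[-1:]])   # symmetric edge
    d = odd - ((even + right) >> 1)
    # Update: s[n] = x[2n] + floor((d[n-1] + d[n] + 2) / 4)
    left = np.concatenate([d[:1], d[:-1]])          # symmetric edge
    s = even + ((left + d + 2) >> 2)
    return s, d

def deadzone_quantize(c, step):
    """Dead-zone scalar quantizer: q = sign(c) * floor(|c| / step)."""
    c = np.asarray(c)
    return (np.sign(c) * (np.abs(c) // step)).astype(np.int64)

x = np.array([10, 12, 14, 200, 202, 204, 20, 22])
s, d = dwt53_forward(x)
print(s, d, deadzone_quantize(d, step=2))
```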