• Title/Summary/Keyword: Image Encoding

A fast fractal decoding algorithm using averaged-image estimation (평균 영상 추정을 이용한 고속 플랙탈 영상 복원 알고리즘)

  • 문용호;박태희;김재호
    • The Journal of Korean Institute of Communications and Information Sciences / v.23 no.9A / pp.2355-2364 / 1998
  • In the conventional fractal decoding procedure, the reconstructed image is obtained by a predefined number of iterations starting from an arbitrary initial image. The convergence speed depends on the choice of that initial image, so a good initialization is the key to fast convergence. In this paper, we show theoretically that the conventional method approximately decomposes into separate decoding of the DC and AC components. Based on this fact, we propose a novel fast fractal decoding algorithm made up of two steps. In the first step, the averaged image, regarded as an optimal initial image, is estimated. In the second step, the reconstructed image is generated from the output of the first step. Simulations show that the output of the first step converges approximately to the averaged image with only 15% of the computation of one iteration of the conventional method. The proposed method is faster than various decoding methods and comparable to conventional decoding initialized with the averaged image. In addition, it can be applied to compressed data produced by various encoding methods, because it imposes no constraints on the encoding procedure to achieve high decoding speed.
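As a sketch of the idea, fractal decoding iterates a contractive transform starting from some initial image; starting at (an estimate of) the transform's fixed point removes almost all of the iterations. The toy contraction below stands in for a real PIFS code and is purely illustrative:

```python
import numpy as np

def decode(transform, x0, iterations):
    # Fractal decoding: iterate x_{k+1} = W(x_k). The fixed point is the
    # reconstructed image; the convergence speed depends on x0.
    x = x0
    for _ in range(iterations):
        x = transform(x)
    return x

# Toy contractive map whose fixed point plays the role of the averaged image.
target = np.full((8, 8), 128.0)        # stand-in for the estimated averaged image
W = lambda x: 0.5 * x + 0.5 * target   # contraction factor 0.5

far  = decode(W, np.zeros((8, 8)), 5)  # arbitrary initial image: still converging
near = decode(W, target.copy(), 5)     # averaged-image start: already at the fixed point
```

Starting from the averaged image, the iteration is stationary from the first step, which is why a cheap estimate of it pays off.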

Error analysis of 3-D surface parameters from space encoding range imaging (공간 부호화 레인지 센서를 이용한 3차원 표면 파라미터의 에러분석에 관한 연구)

  • 정흥상;권인소;조태훈
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 1997.10a / pp.375-378 / 1997
  • This research deals with the problem of reconstructing 3D surface structures from their 2D projections, an important topic in computer vision. In order to provide a reconstruction algorithm that is robust even in the presence of uncertainty in the range images, we first present a detailed model and analysis of several error sources and their effects on measuring three-dimensional surface properties with the space-encoded range imaging technique. Our approach has two key elements. The first is the error model for the space-encoding range sensor and its propagation into the 3D surface reconstruction problem. The second is an algorithm for removing outliers from the range image. Such analyses have, to our knowledge, never been attempted before. Experimental results show that our approach is highly reliable.

Wavelet-based Feature Extraction Algorithm for an Iris Recognition System

  • Panganiban, Ayra;Linsangan, Noel;Caluyo, Felicito
    • Journal of Information Processing Systems / v.7 no.3 / pp.425-434 / 2011
  • The success of iris recognition depends mainly on two factors: image acquisition and the iris recognition algorithm. In this study, we present a system that considers both factors and focuses on the latter. The proposed algorithm aims to find the most efficient wavelet family and coefficients for encoding the iris template of the experimental samples. The algorithm, implemented in software, performs segmentation, normalization, feature encoding, data storage, and matching. Feature encoding is performed by decomposing the normalized iris image with the Haar and biorthogonal wavelet families at various levels. The vertical coefficients are encoded into the iris template and stored in the database. The performance of the system is evaluated using the number of degrees of freedom, the False Reject Rate (FRR), False Accept Rate (FAR), and Equal Error Rate (EER); these metrics show that the proposed algorithm can be employed in an iris recognition system.
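A minimal sketch of the wavelet feature-encoding step described above, with a hand-rolled Haar analysis step and a sign-threshold quantizer standing in for the paper's exact pipeline (the image size and the quantizer are illustrative assumptions):

```python
import numpy as np

def haar_step(x):
    # One 2D Haar analysis step (illustrative averaging variant):
    # low/high-pass along rows, then low-pass along columns.
    lo = (x[0::2, :] + x[1::2, :]) / 2.0
    hi = (x[0::2, :] - x[1::2, :]) / 2.0
    cA = (lo[:, 0::2] + lo[:, 1::2]) / 2.0   # approximation band
    cV = (hi[:, 0::2] + hi[:, 1::2]) / 2.0   # vertical detail band
    return cA, cV

# Stand-in for a normalized (polar-unwrapped) iris image.
iris = np.random.default_rng(0).random((64, 512))
cA, cV = haar_step(iris)

# Encode only the vertical detail coefficients into a binary template by
# sign thresholding -- an illustrative quantizer, not the paper's exact one.
template = (cV > 0).astype(np.uint8)
```

Deeper levels are obtained by repeating `haar_step` on `cA`; a biorthogonal family would replace the Haar filters with longer analysis filters.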

Adaptive coding algorithm using quantizer vector codebook in HDTV (양자화기 벡터 코드북을 이용한 HDTV 영상 적응 부호화)

  • 김익환;최진수;박광춘;박길흠;하영호
    • Journal of the Korean Institute of Telematics and Electronics B / v.31B no.10 / pp.130-139 / 1994
  • Video compression algorithms are based on removing spatial and/or temporal redundancy inherent in image sequences by predictive (DPCM) encoding, transform encoding, or a combination of the two. In this paper, each coefficient of the 8$\times$8 DCT of the DFD (displaced frame difference) is adaptively quantized by one of four quantizers, depending on a total distortion level determined by characteristics of the HVS (human visual system) and the buffer status. The number of possible quantizer selection vectors (patterns) is therefore 4$^{64}$. If these vectors were coded directly, too many bits would be required, so the quantizer selection vectors are limited to 2048 for Y and 512 for each of U and V by the proposed method, which uses the SWAD (sum of weighted absolute difference) to discriminate between vectors. Computer simulation results using the codebook vectors built by the proposed method show that both subjective and objective image quality (PSNR) are good under the limited bit allocation (17 Mbps).
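The SWAD used to discriminate between quantizer selection vectors can be sketched as below; the uniform weights are a placeholder, since the paper derives its weights from HVS characteristics:

```python
import numpy as np

def swad(v1, v2, weights):
    # SWAD (sum of weighted absolute difference) between two quantizer
    # selection vectors over the 64 DCT coefficients of an 8x8 block.
    return np.sum(weights * np.abs(v1.astype(int) - v2.astype(int)))

rng = np.random.default_rng(1)
a = rng.integers(0, 4, 64)   # quantizer index in {0,1,2,3} per coefficient
b = rng.integers(0, 4, 64)
w = np.ones(64)              # uniform weights for illustration only
d = swad(a, b, w)
```

Vectors whose SWAD to an existing codebook entry falls below a threshold would be merged, shrinking the 4$^{64}$ possibilities to the 2048/512 entries quoted above.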

Fast fractal coding based on correlation coefficients of subblocks in input image (입력 영상의 서브블록들 사이의 상관관계에 기반한 고속 프랙탈 부호화)

  • 배수정;임재권
    • Proceedings of the IEEK Conference / 1998.06a / pp.669-672 / 1998
  • In this paper, we propose a fast fractal coding method based on correlation coefficients of subblocks in the input image. In the proposed method, the domain pool is selected by correlation analysis of the input image, and the isometry transform for each block is chosen based on the IFS method. To investigate the performance of the proposed method, we compared image quality and encoding time with the full-search PIFS method and Jacquin's PIFS method. Experimental results show that the proposed method yields nearly the same PSNR while reducing the encoding time for 512${\times}$512 images compared with the full-search PIFS method and Jacquin's PIFS method.
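The correlation measure used to prune the domain pool can be sketched as a Pearson correlation coefficient between blocks; the block size and any pruning threshold here are illustrative assumptions:

```python
import numpy as np

def corr(a, b):
    # Pearson correlation coefficient between a range block and a
    # candidate domain block; only highly correlated domains would be
    # kept in the pool.
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

block = np.arange(16.0).reshape(4, 4)
```

Because fractal matching is invariant to affine intensity scaling, a domain with correlation near ±1 to the range block is a near-perfect match candidate.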

Wavelet-Based Fast Fractal Image Compression with Multiscale Factors (레벨과 대역별 스케일 인자를 갖는 웨이브릿 기반 프랙탈 영상압축)

  • 설문규
    • Journal of the Korea Computer Industry Society / v.4 no.4 / pp.589-598 / 2003
  • In conventional fractal image compression in the DWT (discrete wavelet transform) domain, the domain and range blocks are first classified into B${\times}$B blocks, and then all domain blocks are searched for each range block. This method has the disadvantage that encoding takes too long, since domain blocks over the entire image are searched. To address this inefficiency and improve image quality, this paper proposes wavelet-based fractal image compression with multiscale factors, i.e., a separate scale factor for each level and band. In the encoding process, the range blocks are not matched against all domain blocks; instead, using the self-affine system, the domain for each range block is selected from blocks in the upper level. The image quality of the conventional method is 32.30[dB], while that of the proposed method is 35.97[dB], an improvement of 3.67[dB].

Multi-resolution Lossless Image Compression for Progressive Transmission and Multiple Decoding Using an Enhanced Edge Adaptive Hierarchical Interpolation

  • Biadgie, Yenewondim;Kim, Min-sung;Sohn, Kyung-Ah
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.12 / pp.6017-6037 / 2017
  • In a multi-resolution image encoding system, the image is encoded into a single file as layered bit streams and then transmitted layer by layer progressively to reduce transmission time across a low-bandwidth connection. This encoding scheme is also suitable for multiple decoders with different capabilities, ranging from handheld devices to PCs. In our previous work, we proposed an edge-adaptive hierarchical interpolation algorithm for a multi-resolution image coding system. In this paper, we enhance its compression efficiency by adding three major components. First, prediction accuracy is improved using context-adaptive error modeling as feedback. Second, the conditional probability of prediction errors is sharpened by removing the sign redundancy among local prediction errors through sign flipping. Third, the conditional probability is sharpened further by reducing the number of distinct error symbols with an error remapping function. Experimental results on benchmark data sets reveal that the enhanced algorithm achieves a better compression bit rate than our previous algorithm and other algorithms, especially for images rich in directional edges and textures. The enhanced algorithm also shows better rate-distortion performance and visual quality at intermediate stages of progressive image transmission.
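An error remapping function of the kind mentioned above folds signed prediction errors onto non-negative symbols so the entropy coder sees a more compact alphabet. The zigzag map below (0, -1, 1, -2, 2, ... → 0, 1, 2, 3, 4, ...) is one common choice; the paper's exact remapping function may differ:

```python
def remap(e):
    # Fold a signed prediction error onto a non-negative symbol.
    return 2 * e if e >= 0 else -2 * e - 1

def unmap(s):
    # Invert the fold at the decoder.
    return s // 2 if s % 2 == 0 else -(s + 1) // 2
```

Small-magnitude errors, which dominate after good prediction, map to small symbols, sharpening the symbol distribution for the entropy coder.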

Removing the Blocking Artifacts for Highly Compressed JPEG Images (고압축 JPEG 영상을 위한 블록킹 현상 제거)

  • Jin Soon-Jong;Kim Won-Ki;Jeong Je-Chang
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.9C / pp.869-875 / 2006
  • The JPEG encoder uses block-based DCT and quantization to compress still images, and it achieves better compression efficiency than other still-image encoding methods. However, when encoding a still image at a low bit rate, high-frequency coefficients can be lost in the coarse quantization, producing blocking artifacts. In this paper, we propose a method for eliminating the blocking artifacts that occur when a still image is encoded by JPEG at a high compression rate. The proposed algorithm removes the blocking artifacts, which appear at block boundaries, in the DCT domain using a $4{\times}4$ block-based method: first, the blocking artifacts are observed in $4{\times}4$ blocks in the DCT domain; then they are eliminated with an effective $4{\times}4$ block-based filtering method. Experimental results clearly show that our algorithm yields substantially higher subjective and objective quality than other algorithms.
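The core observation can be sketched as follows: a $4{\times}4$ block straddling an 8$\times$8 JPEG block boundary exposes the discontinuity as high-frequency DCT energy, which a DCT-domain filter can attenuate. The crude coefficient zeroing below is a stand-in for the paper's filtering, not its actual method:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

C4 = dct_matrix(4)

def dct2(block):  return C4 @ block @ C4.T
def idct2(coefs): return C4.T @ coefs @ C4

# A 4x4 block straddling a block boundary: a horizontal step edge.
blocky = np.hstack([np.full((4, 2), 10.0), np.full((4, 2), 20.0)])
coefs = dct2(blocky)
coefs[:, 2:] = 0.0          # crude filter: drop the highest horizontal freqs
smoothed = idct2(coefs)
```

After the filter, the jump across the old boundary is halved while the low-frequency content of the block is preserved.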

The Fractal Image Compression Based on the Wavelet Transform Using the SAS Techniques (SAS 기법을 이용한 웨이브릿 변환 기반 프랙탈 영상 압축)

  • 정태일;강경원;문광석;권기룡;류권열
    • Journal of the Institute of Convergence Signal Processing / v.2 no.1 / pp.19-27 / 2001
  • Conventional fractal image compression based on the wavelet transform has the disadvantage that encoding takes a long time, since the optimum domain is searched for every range block. In this paper, we propose fractal image compression based on the wavelet transform using the SAS (Self Affine System) technique. The range and domain blocks are formed in the wavelet transform domain, and each range block selects the domain located at the corresponding position in the upper level. In the encoding process, the proposed method introduces the SAS technique so that no search over domain blocks is required; it therefore achieves fast encoding by reducing the computational complexity. Image quality is improved by using different scale factors for each level and sub-tree in the decoding, so both image quality and compression ratio can be adjusted through the scale factors.

Block Based Efficient JPEG Encoding Algorithm for HDR Images (블록별 양자화를 이용한 HDR 영상의 효율적인 JPEG 압축 기법)

  • Lee, Chul;Kim, Chang-Su
    • Journal of IKEEE / v.11 no.4 / pp.219-226 / 2007
  • An efficient block-based two-layer JPEG encoding algorithm is proposed in this work to compress high dynamic range (HDR) images. The proposed algorithm separates an input HDR image into a tone-mapped low dynamic range (LDR) image and a ratio image, which stores the quotients of the original HDR pixels divided by the tone-mapped LDR pixels. The tone-mapped LDR image is then compressed with the standard JPEG scheme to preserve backward compatibility, while the ratio image is encoded to minimize a cost function that models the human visual system (HVS) perception of each block under different quantization parameters. Simulation results show that the proposed algorithm outperforms the conventional method, which encodes the ratio image without any per-block prior information.
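The two-layer decomposition itself is straightforward and can be sketched as below; the global tone-mapping operator is a simple illustrative choice, whereas the paper's contribution lies in the per-block quantization of the ratio image:

```python
import numpy as np

eps = 1e-6  # guard against division by zero in dark pixels

def split_hdr(hdr, tone_map):
    # Two-layer representation: a tone-mapped LDR base layer plus a
    # ratio image that restores the original dynamic range.
    ldr = tone_map(hdr)
    ratio = hdr / (ldr + eps)
    return ldr, ratio

def reconstruct(ldr, ratio):
    return ratio * (ldr + eps)

# Simple global tone-mapping operator for illustration.
tone_map = lambda x: x / (1.0 + x)

hdr = np.random.default_rng(2).random((4, 4)) * 1e4
ldr, ratio = split_hdr(hdr, tone_map)
```

In the actual scheme, `ldr` would go through standard JPEG (keeping backward compatibility) and `ratio` through the HVS-driven per-block quantizer, so reconstruction is only approximate after coding.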
