• Title/Summary/Keyword: SP 기법 (SP technique)

Search Results: 525

Fast Mode Decision Algorithm for H.264 using Mode Classification (H.264 표준에서 모드 분류를 이용한 고속 모드결정 방법)

  • Kim, Hee-Soon;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.3 / pp.88-96 / 2007
  • H.264 is a new international video coding standard that achieves considerably higher coding efficiency than conventional standards. Its coding gain comes from advanced video coding methods. In particular, the increased number of macroblock modes and the complex mode decision procedure using Lagrangian optimization are the main factors behind the improved coding efficiency. Although H.264 obtains improved coding efficiency, real-time encoding is difficult because all coding parameters are considered in the mode decision procedure. In this paper, we propose a fast mode decision algorithm that classifies the macroblock modes in order to quickly determine the optimal mode with low complexity. Simulation results show that the proposed algorithm reduces the encoding time by 34.95% on average without significant PSNR degradation or bit-rate increment. In addition, to show the validity of the simulation results, we set up a lower boundary condition for coding efficiency and complexity and show that the proposed algorithm satisfies it.
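
The abstract does not spell out the mode-classification rules, so the sketch below only illustrates the Lagrangian rate-distortion decision that the algorithm accelerates; all mode names, cost values, and the threshold are illustrative assumptions.

```python
# Lagrangian rate-distortion mode decision, the step the paper speeds up.
# Candidate modes, cost numbers, and the two-group split are illustrative.

def rd_cost(distortion, rate, lam):
    """Lagrangian cost J = D + lambda * R used to rank macroblock modes."""
    return distortion + lam * rate

def decide_mode(candidates, lam):
    """Pick the macroblock mode with the smallest Lagrangian cost.

    candidates: dict mode_name -> (distortion, rate) for one macroblock.
    """
    return min(candidates, key=lambda m: rd_cost(*candidates[m], lam))

# Exhaustive decision over all modes (what the reference encoder does).
candidates = {
    "SKIP":       (900.0,  2),
    "INTER16x16": (640.0, 35),
    "INTER8x8":   (610.0, 60),
    "INTRA4x4":   (580.0, 95),
}
lam = 4.0
print("full search :", decide_mode(candidates, lam))

# A fast decision first evaluates a small "likely" class of modes and only
# falls back to the remaining modes when the best cost is not convincing.
likely = {m: candidates[m] for m in ("SKIP", "INTER16x16")}
best = decide_mode(likely, lam)
if rd_cost(*likely[best], lam) > 750.0:          # illustrative threshold
    best = decide_mode(candidates, lam)          # examine the full set
print("classified  :", best)
```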

Near-lossless Coding of Multiview Texture and Depth Information for Graphics Applications (그래픽스 응용을 위한 다시점 텍스처 및 깊이 정보의 근접 무손실 부호화)

  • Yoon, Seung-Uk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.1 / pp.41-48 / 2009
  • This paper introduces representation and coding schemes for the multiview texture and depth data of complex three-dimensional scenes. We represent the input color and depth images using compressed texture and depth map pairs. The proposed X-codec encodes them further to increase the compression ratio in a near-lossless way. Our system resolves two problems. First, rendering time and output visual quality depend on the input image resolutions rather than on scene complexity, since a depth image-based rendering technique is used. Second, the random access problem of conventional image-based rendering can be effectively solved using our image block-based compression schemes. Experimental results show that the proposed approach is useful for graphics applications because it provides multiview rendering, selective decoding, and scene manipulation functionalities.

Efficient SAD Processor for Motion Estimation of H.264 (H.264 움직임 추정을 위한 효율적인 SAD 프로세서)

  • Jang, Young-Beom;Oh, Se-Man;Kim, Bee-Chul;Yoo, Hyeon-Joong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.2 s.314 / pp.74-81 / 2007
  • In this paper, an efficient SAD (Sum of Absolute Differences) processor structure for motion estimation in H.264 is proposed. SAD processors are commonly used in both full-search and fast-search methods for motion estimation. The proposed structure consists of a SAD calculator block, a combinator block, and a minimum-value calculator block. In particular, the proposed structure is simplified by using distributed arithmetic for the addition operations. Verilog-HDL (Hardware Description Language) coding and FPGA (Field Programmable Gate Array) implementation results for the proposed structure show 39% and 32% gate-count reductions, respectively, in comparison with the conventional structure. Due to its efficient processing scheme, the proposed SAD processor structure can be widely used in size-dominant H.264 chips.
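
As a software reference for the operation the hardware accelerates, the sketch below computes block SADs inside a full motion search; the block size, search range, and test frame are illustrative assumptions, and the distributed-arithmetic hardware structure itself is not modelled.

```python
# SAD-based full motion search: the software analogue of the SAD processor.
import numpy as np

def sad(block_a, block_b):
    """SAD between two equally sized pixel blocks."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def full_search(cur_block, ref_frame, top, left, search=8):
    """Exhaustive motion search: return the (dy, dx) with the minimum SAD."""
    h, w = cur_block.shape
    best = (None, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref_frame.shape[0] - h and 0 <= x <= ref_frame.shape[1] - w:
                cost = sad(cur_block, ref_frame[y:y + h, x:x + w])
                if cost < best[1]:
                    best = ((dy, dx), cost)
    return best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = ref[20:36, 24:40].copy()                  # a 16x16 block taken from the frame
print(full_search(cur, ref, top=18, left=26))   # recovers the (2, -2) offset with SAD 0
```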

High-Resolution Image Reconstruction Considering the Inaccurate Sub-Pixel Motion Information (부정확한 부화소 단위의 움직임 정보를 고려한 고해상도 영상 재구성 연구)

  • Park, Jin-Yeol;Lee, Eun-Sil;Gang, Mun-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.2 / pp.169-178 / 2001
  • The demand for high-resolution images is gradually increasing, whereas many imaging systems are designed to allow a certain level of aliasing during image acquisition. Thus, digital image processing approaches have recently been investigated to reconstruct a high-resolution image from aliased low-resolution images. However, since the sub-pixel motion information is assumed to be accurate in most conventional approaches, a satisfactory high-resolution image cannot be obtained when the sub-pixel motion information is inaccurate. Therefore, in this paper we propose a new algorithm to reduce the distortion in the reconstructed high-resolution image caused by inaccurate sub-pixel motion information. For this purpose, we analyze the effect of inaccurate sub-pixel motion information on high-resolution image reconstruction and model it as a zero-mean additive Gaussian error added to each low-resolution image. To reduce the distortion, we apply a modified multi-channel image deconvolution approach to the problem. The validity of the proposed algorithm is demonstrated both theoretically and experimentally.
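
A minimal sketch of the observation model stated in the abstract: each low-resolution frame is a shifted, downsampled view of the scene, with registration inaccuracy absorbed into a zero-mean additive Gaussian error per frame. The shift values, downsampling factor, and noise level are assumptions, and the paper's modified multi-channel deconvolution itself is not reproduced.

```python
# Observation model: low-res frame = downsample(shifted high-res) + Gaussian error.
import numpy as np

def observe(hr, shift, factor=2, error_sigma=2.0, rng=None):
    """Generate one low-resolution frame from a high-resolution image `hr`."""
    rng = rng or np.random.default_rng()
    shifted = np.roll(hr, shift, axis=(0, 1))     # ideal inter-frame motion (integer here)
    lr = shifted[::factor, ::factor]              # downsampling introduces aliasing
    # Inaccurate sub-pixel motion modelled as a zero-mean additive Gaussian error:
    return lr + rng.normal(0.0, error_sigma, size=lr.shape)

rng = np.random.default_rng(1)
hr = rng.uniform(0, 255, size=(64, 64))
frames = [observe(hr, shift=(dy, dx), rng=rng) for dy in (0, 1) for dx in (0, 1)]
print(len(frames), frames[0].shape)               # 4 aliased 32x32 observations
```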


Noise Removal Using Complex Wavelet and Bernoulli-Gaussian Model (복소수 웨이블릿과 베르누이-가우스 모델을 이용한 잡음 제거)

  • Eom Il-Kyu;Kim Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.5 s.311 / pp.52-61 / 2006
  • The orthogonal wavelet transform, which is widely used in image and signal processing applications, has limited performance because of its lack of shift invariance and low directional selectivity. To overcome these drawbacks, the complex wavelet transform has been proposed. In this paper, we present an efficient image denoising method using the dual-tree complex wavelet transform and a Bernoulli-Gaussian prior model. To estimate the hyper-parameters of the Bernoulli-Gaussian model, we present two simple, non-iterative methods. We use a hypothesis-testing technique to estimate the mixing parameter, a Bernoulli random variable. Based on the estimated mixing parameter, the variance of the clean signal is obtained using the maximum generalized marginal likelihood (MGML) estimator. We simulate our denoising method using the dual-tree complex wavelet transform and compare our algorithm to well-known denoising schemes. Experimental results show that the proposed method produces good denoising results for high-frequency images at low computational cost.
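
To make the prior concrete, the sketch below applies a generic Bernoulli-Gaussian posterior-mean shrinkage to noisy coefficients; the mixing parameter and variances are assumed known here, whereas the paper estimates them with a hypothesis test and the MGML estimator, and the dual-tree complex wavelet transform is omitted.

```python
# Bernoulli-Gaussian shrinkage: a clean coefficient is exactly zero with
# probability 1 - pi or Gaussian with variance sig_x**2 with probability pi,
# and is observed in Gaussian noise of variance sig_n**2.
import numpy as np

def gauss_pdf(y, var):
    return np.exp(-y**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def bg_shrink(y, pi, sig_x, sig_n):
    """Posterior-mean estimate of the clean coefficient given the noisy y."""
    var_s = sig_x**2 + sig_n**2                    # variance when a signal is present
    num = pi * gauss_pdf(y, var_s)
    den = num + (1.0 - pi) * gauss_pdf(y, sig_n**2)
    p_signal = num / den                           # posterior prob. of "signal present"
    return p_signal * (sig_x**2 / var_s) * y       # responsibility-weighted Wiener gain

rng = np.random.default_rng(0)
clean = np.where(rng.random(10000) < 0.1, rng.normal(0, 5.0, 10000), 0.0)
noisy = clean + rng.normal(0, 1.0, 10000)
denoised = bg_shrink(noisy, pi=0.1, sig_x=5.0, sig_n=1.0)
print(np.mean((noisy - clean)**2), np.mean((denoised - clean)**2))  # MSE drops
```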

A block-based real-time people counting system (블록 기반 실시간 계수 시스템)

  • Park Hyun-Hee;Lee Hyung-Gu;Kim Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.5 s.311 / pp.22-29 / 2006
  • In this paper, we propose a block-based real-time people counting system that can be used in various environments, including shopping mall entrances, elevators, and escalators. The main contributions of this paper are robust background subtraction, a block-based decision method, and real-time processing. For robust background subtraction over a number of image sequences, we use a mixture of K Gaussians. The block-based decision method is used to determine the size of the given objects (moving people) in each block. We divide the images into 6×12 blocks and train the mean and variance values of the specific objects in each block. This is done in order to provide real-time processing for up to 4 channels. Finally, we analyze various actions that can occur with moving people in real-world environments.
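
A minimal sketch of the two ingredients named above: mixture-of-Gaussians background subtraction followed by per-block foreground statistics on a 6×12 grid. OpenCV's MOG2 subtractor stands in for the mixture of K Gaussians, and a fixed threshold replaces the paper's trained per-block mean/variance.

```python
# Mixture-of-Gaussians background subtraction plus a 6x12 block decision grid.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

def block_foreground_ratio(frame, rows=6, cols=12):
    """Return a rows x cols array with the foreground ratio of each block."""
    mask = subtractor.apply(frame)                      # 0 = background, >0 = foreground
    h, w = mask.shape
    bh, bw = h // rows, w // cols
    ratios = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = mask[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            ratios[r, c] = np.count_nonzero(block) / block.size
    return ratios

# A block counts as "occupied" when its foreground ratio exceeds a threshold;
# here a fixed value stands in for the trained per-block statistics.
frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # placeholder frame
occupied = block_foreground_ratio(frame) > 0.3
print(int(occupied.sum()), "of 72 blocks flagged")
```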

Content-Based Image Retrieval Algorithm Using HAQ Algorithm and Moment-Based Feature (HAQ 알고리즘과 Moment 기반 특징을 이용한 내용 기반 영상 검색 알고리즘)

  • 김대일;강대성
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.4 / pp.113-120 / 2004
  • In this paper, we propose an efficient feature extraction and image retrieval algorithm for content-based retrieval. First, we extract the object using a Gaussian edge detector from the input image, which is a key frame of an MPEG video, and extract the object features: a location feature, a distributed-dimension feature, and invariant moment features. Next, we extract the characteristic color feature using the proposed HAQ (Histogram Analysis and Quantization) algorithm. Finally, we perform retrieval over the four features in sequence with the proposed matching method for a query image, which is a shot frame other than the key frames of the MPEG video. The purpose of this paper is to propose a novel content-based image retrieval algorithm that retrieves the key frame, within the shot boundary of an MPEG video, belonging to the scene requested by the user. The experimental results show efficient retrieval for 836 sample images from 10 music videos using the proposed algorithm.
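
The invariant-moment feature can be illustrated with the classical seven Hu moments of a segmented object mask, as sketched below; the toy mask and the log-scaling convention are assumptions, and the HAQ color feature and the sequential matching stage are not reproduced.

```python
# Rotation-invariant Hu moments of a binary object mask.
import cv2
import numpy as np

def invariant_moment_feature(mask):
    """Return the 7 Hu moments (log-scaled) of a binary object mask."""
    hu = cv2.HuMoments(cv2.moments(mask, binaryImage=True)).flatten()
    # Log scaling keeps the widely differing magnitudes comparable.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

mask = np.zeros((128, 128), dtype=np.uint8)
cv2.ellipse(mask, (64, 64), (40, 20), 30, 0, 360, 255, thickness=-1)
rotated = cv2.rotate(mask, cv2.ROTATE_90_CLOCKWISE)

f1 = invariant_moment_feature(mask)
f2 = invariant_moment_feature(rotated)
print(np.max(np.abs(f1 - f2)))   # small: the feature is rotation invariant
```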

Steganalysis Using Joint Moment of Wavelet Subbands (웨이블렛 부밴드의 조인트 모멘트를 이용한 스테그분석)

  • Park, Tae-Hee;Hyun, Seung-Hwa;Kim, Jae-Ho;Eom, Il-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.3 / pp.71-78 / 2011
  • This paper proposes an image steganalysis scheme based on the independence between parent and child subbands in the multi-layer wavelet domain. The proposed method decomposes cover and stego images into 12 subbands by applying a 3-level Haar UWT (undecimated wavelet transform) and analyzes the statistical independence between parent and child subbands. Because this independence differs more in stego images than in cover images, it can be used as a feature to discriminate between cover and stego images. We therefore extract 72-dimensional features by calculating the first three statistical moments of the joint characteristic function between parent and child subbands. A multi-layer perceptron (MLP) is applied as the classifier to discriminate between cover and stego images. We test the performance of the proposed scheme over various embedding rates of the LSB, SS, and BSS embedding methods. The proposed scheme outperforms previous schemes both in the detection rate for the existence of a hidden message and in the accuracy of discrimination.
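
A simplified sketch of the joint-moment feature: form the joint histogram of a parent/child subband pair, treat its 2-D FFT magnitude as an empirical joint characteristic function, and keep its first three moments. For brevity an ordinary decimated Haar DWT from PyWavelets stands in for the undecimated transform, so the parent is upsampled to align with its children; the bin count and moment definition follow common steganalysis practice rather than the paper's exact formulation.

```python
# Joint characteristic-function moments of parent/child wavelet subband pairs.
import numpy as np
import pywt

def joint_cf_moments(child, parent, bins=32):
    """First three moments of the joint characteristic function magnitude."""
    parent_up = np.kron(parent, np.ones((2, 2)))          # align parent with its children
    parent_up = parent_up[:child.shape[0], :child.shape[1]]
    hist, _, _ = np.histogram2d(child.ravel(), parent_up.ravel(),
                                bins=bins, density=True)
    cf = np.abs(np.fft.fft2(hist))                        # empirical joint char. function
    freq = np.hypot(*np.meshgrid(np.fft.fftfreq(bins), np.fft.fftfreq(bins)))
    return [float((freq**k * cf).sum() / cf.sum()) for k in (1, 2, 3)]

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256)).astype(float)
coeffs = pywt.wavedec2(image, "haar", level=2)
# coeffs = [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]; level-2 details are the
# parents of the corresponding level-1 detail subbands.
(_, details2, details1) = coeffs
features = []
for child, parent in zip(details1, details2):
    features.extend(joint_cf_moments(child, parent))
print(len(features), features[:3])
```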

Improving the Performance of the Capon Algorithm by Nulling Elements of an Inverse Covariance Matrix (공분산 역행렬 원소 제거 기법을 이용한 Capon 알고리듬의 성능 개선)

  • Kim, Seong-Min;Kang, Dong-Hoon;Lee, Yong-Wook;Nah, Sun-Phil;Oh, Wang-Rok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.5 / pp.96-101 / 2011
  • It is well known that the Capon algorithm offers better resolution than the FM (Fourier method) algorithm by minimizing the total output power while maintaining a constant gain in the look direction. Unfortunately, the DoA (Direction of Arrival) estimation performance of the Capon algorithm is drastically degraded when the SNR of the received signal is low, and thus it cannot distinguish among signal sources with similar incidence angles. In this paper, we propose a novel scheme that enhances the resolution of the Capon algorithm by nulling all rows except the first row of the inverse covariance matrix.
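
A minimal NumPy sketch of the standard Capon spatial spectrum for a uniform linear array, together with the modification described above (keeping only the first row of the inverse covariance matrix and nulling the rest); the array size, source angles, SNR, and snapshot count are illustrative assumptions.

```python
# Capon spectrum for a uniform linear array, with and without row nulling.
import numpy as np

M, d = 8, 0.5                                   # elements, spacing in wavelengths
angles = np.deg2rad([20.0, 25.0])               # two closely spaced sources
snr_db, snapshots = 0.0, 200

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

rng = np.random.default_rng(0)
A = np.stack([steering(t) for t in angles], axis=1)                       # M x K
S = (rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))) / np.sqrt(2)
N = (rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))) / np.sqrt(2)
X = A @ S * 10 ** (snr_db / 20) + N                                       # array snapshots
R_inv = np.linalg.inv(X @ X.conj().T / snapshots)                         # inverse sample covariance

R_inv_nulled = np.zeros_like(R_inv)
R_inv_nulled[0, :] = R_inv[0, :]                # keep only the first row, null the rest

grid = np.deg2rad(np.arange(0, 90, 0.5))
def spectrum(Q):
    return np.array([1.0 / np.abs(steering(t).conj() @ Q @ steering(t)) for t in grid])

capon, modified = spectrum(R_inv), spectrum(R_inv_nulled)
print("Capon peak at   ", np.rad2deg(grid[np.argmax(capon)]), "deg")
print("modified peak at", np.rad2deg(grid[np.argmax(modified)]), "deg")
```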

Acoustic Signal Processing for ADCP using Zoom FFT Method to increase Frequency Resolution (주파수 해상도 증가를 위해 Zoom FFT 기법을 사용한 ADCP 음향신호처리)

  • Han, Jin-Hyun;Shim, Tae-Bo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.5 / pp.229-234 / 2010
  • This paper proposes acoustic signal processing techniques that are applicable even in shallow rivers and that enhance the frequency resolution of the ADCP (Acoustic Doppler Current Profiler). An ADCP is a device that measures the velocity of a moving fluid. In general, an ADCP is operated at a center frequency of about 300 kHz, since there is no depth limit in the sea. However, it can hardly be used in rivers where the water depth is 30 cm or shallower during the dry season, so existing signal processing methods are not suitable in shallow rivers. We propose an alternative acoustic signal processing method using the Zoom FFT. Simulation results show that errors are reduced (±62 cm/s in theory and ±93 cm/s in the experiment). The existing algorithm could not estimate the current speed in shallow rivers below 30 cm, but the proposed algorithm estimated current speeds faster than 20 cm/s in such rivers.
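
A minimal sketch of the Zoom FFT idea: mix the band of interest down to baseband, low-pass filter, decimate, and FFT the decimated record, so that for the same FFT length the bin width shrinks by the decimation factor within the zoomed band. The sample rate, carrier, Doppler shift, and filter parameters below are illustrative assumptions, not the ADCP's actual settings.

```python
# Zoom FFT: downmix, low-pass filter, decimate, then FFT the slower record.
import numpy as np
from scipy.signal import firwin

fs, fc, doppler = 1.0e6, 300.0e3, 500.0    # Hz: sample rate, carrier, Doppler shift
D, N = 64, 1024                            # decimation factor and FFT length
n = D * N                                  # raw record length
t = np.arange(n) / fs
rng = np.random.default_rng(0)
echo = np.cos(2 * np.pi * (fc + doppler) * t) + 0.1 * rng.standard_normal(n)

# 1) Complex downmix: shift the band around the carrier to baseband.
baseband = echo * np.exp(-2j * np.pi * fc * t)

# 2) Low-pass filter and decimate by D; the retained band is only fs / D wide.
taps = firwin(numtaps=255, cutoff=fs / (2 * D), fs=fs)
zoomed = np.convolve(baseband, taps, mode="same")[::D]

# 3) An N-point FFT of the decimated record spans the full n / fs seconds,
#    so its bin width is fs / (D * N) instead of fs / N.
spec = np.abs(np.fft.fft(zoomed))
freqs = np.fft.fftfreq(N, d=D / fs)
print("plain N-point FFT bin width:", fs / N, "Hz")
print("zoom  FFT bin width        :", fs / (D * N), "Hz")
print("estimated Doppler shift    :", round(freqs[np.argmax(spec)], 1), "Hz")
```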