• Title/Summary/Keyword: Block variance


Motion-Compensated Noise Estimation for Effective Video Processing (효과적인 동영상 처리를 위한 움직임 보상 기반 잡음 예측)

  • Song, Byung-Cheol
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.5
    • /
    • pp.120-125
    • /
    • 2009
  • For effective noise removal prior to video processing, the noise power or noise variance of an input video sequence must be estimated accurately, but this is a difficult task in practice. This paper presents an accurate noise variance estimation algorithm based on motion compensation between two adjacent noisy pictures. First, motion estimation is performed for each block in a picture, and the residue variance of the best motion-compensated block is calculated. Then, a noise variance estimate for the picture is obtained by adaptively averaging and properly scaling the variances close to the best variance. Simulation results show that the proposed noise estimation algorithm is accurate and stable irrespective of the noise level.
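
A minimal Python sketch of this estimation idea, offered as an illustration rather than the paper's implementation: the block size, search range, the fraction of residue variances averaged, and the 0.5 scaling (the residue of two independently noisy frames carries roughly twice the noise variance) are all assumptions.

```python
import numpy as np

def estimate_noise_variance(prev, curr, block=16, search=4, keep=0.1):
    """Estimate frame noise variance from two adjacent noisy frames
    via block motion compensation (illustrative parameter choices)."""
    h, w = curr.shape
    residue_vars = []
    for y in range(search, h - block - search, block):
        for x in range(search, w - block - search, block):
            target = curr[y:y + block, x:x + block].astype(np.float64)
            best = np.inf
            # Full search over a small window for the best match.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ref = prev[y + dy:y + dy + block,
                               x + dx:x + dx + block].astype(np.float64)
                    best = min(best, np.var(target - ref))
            residue_vars.append(best)
    residue_vars = np.sort(residue_vars)
    n_keep = max(1, int(len(residue_vars) * keep))
    # Average the variances closest to the best (smallest) variance;
    # the residue of two noisy frames carries ~2x the noise variance.
    return 0.5 * residue_vars[:n_keep].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.integers(0, 256, size=(144, 176)).astype(np.float64)
    noisy1 = clean + rng.normal(0, 5.0, clean.shape)
    noisy2 = clean + rng.normal(0, 5.0, clean.shape)  # static scene
    print(estimate_noise_variance(noisy1, noisy2), "vs true", 25.0)
```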

A Graphical Method for Evaluating the Effect of Blocking in Response Surface Designs Using Cuboidal Regions

  • Park, Sang-Hyun;Jang, Dae-Heung
    • Communications for Statistical Applications and Methods
    • /
    • v.5 no.3
    • /
    • pp.607-621
    • /
    • 1998
  • When fitting a response surface model, the least squares estimates of the model's parameters and the prediction variance will generally depend on how the response surface design is blocked. That is, the choice of a blocking arrangement for a response surface design can have a considerable effect on the estimate of the mean response and on the size of the prediction variance, even if the experimental runs are the same. Therefore, care should be exercised in the selection of blocks. In this paper, we propose a graphical method for evaluating the effect of blocking in response surface designs using cuboidal regions in the presence of a fixed block effect. This graphical method can be used to investigate how blocking influences the prediction variance throughout the entire experimental region of interest when this region is cuboidal, and to compare the block effect in the cases of orthogonal and non-orthogonal block designs, respectively.
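
To make the quantity under study concrete, here is a small Python sketch that evaluates the scaled prediction variance N x'(X'X)^-1 x over a cuboidal region for a toy blocked first-order design; the design points, the block assignment, and setting the fixed block column to its average at prediction time are illustrative assumptions, not the paper's examples.

```python
import numpy as np

def scaled_prediction_variance(X, x_rows):
    """N * x'(X'X)^-1 x evaluated at each candidate prediction point."""
    XtX_inv = np.linalg.inv(X.T @ X)
    N = X.shape[0]
    return np.array([N * r @ XtX_inv @ r for r in x_rows])

# Toy design: 2^2 factorial plus two center runs, split into two blocks.
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [0, 0]], float)
blocks = np.array([0, 0, 1, 1, 0, 1])          # a non-orthogonal blocking

# Model matrix: intercept, fixed block effect, linear terms x1 and x2.
X = np.column_stack([np.ones(len(pts)), blocks, pts])

# Prediction points on a grid over the cuboidal region [-1, 1]^2;
# the block column is fixed at its average (0.5) as one plotting choice.
g = np.linspace(-1, 1, 21)
grid = np.array([[a, b] for a in g for b in g])
rows = np.column_stack([np.ones(len(grid)),
                        np.full(len(grid), 0.5), grid])
spv = scaled_prediction_variance(X, rows)
print(f"scaled prediction variance over the cube: "
      f"min={spv.min():.3f}, max={spv.max():.3f}")
```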

Directional Block Loss Recovery Using Hypothesis Testing Problem (가설 검증 기법을 이용한 방향성을 가지는 손실 블록의 복구)

  • Hyun, Seung-Hwa;Kim, Yoo-Shin;Eom, Il-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.5
    • /
    • pp.87-94
    • /
    • 2008
  • In this paper, we present a directional error concealment technique to compensate for a lost block. Generally, a strong edge in an image produces a large variance in the wavelet domain because of its large wavelet coefficients. To estimate the edge direction of a lost block, a $\chi^2$ hypothesis-testing problem is applied using the variance of the wavelet coefficients. The lost block is then interpolated according to the estimated edge direction, with the pixels used for interpolation selected along that direction. The proposed method outperforms previous methods in both objective and subjective quality.
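
A minimal Python sketch of the direction-estimation step, under stated assumptions: a one-level Haar decomposition stands in for the paper's wavelet transform, the noise variance sigma2 is taken as known, and the decision rule (the detail band with the smallest p-value below alpha wins) is a simplification.

```python
import numpy as np
from scipy.stats import chi2

def haar_subbands(img):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH) subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 2,    # approximation
            (a - b + c - d) / 2,    # horizontal detail (vertical edges)
            (a + b - c - d) / 2,    # vertical detail (horizontal edges)
            (a - b - c + d) / 2)    # diagonal detail

def edge_direction(neighborhood, sigma2, alpha=0.01):
    """Chi-square test of H0: band variance equals the noise variance.
    Returns the direction whose detail band most clearly exceeds it."""
    _, LH, HL, HH = haar_subbands(neighborhood.astype(np.float64))
    best, best_p = "none", alpha
    for name, band in (("vertical", LH), ("horizontal", HL),
                       ("diagonal", HH)):
        n = band.size
        stat = (n - 1) * band.var(ddof=1) / sigma2  # ~ chi2(n-1) under H0
        p = chi2.sf(stat, df=n - 1)
        if p < best_p:
            best, best_p = name, p
    # The lost block would then be interpolated along this direction.
    return best
```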

Fast Scene Change Detection Algorithm

  • Khvan, Dmitriy;Ng, Teck Sheng;Jeong, Jechang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2012.11a
    • /
    • pp.259-262
    • /
    • 2012
  • In this paper, we propose a new fast algorithm for effective scene change detection. The proposed algorithm exploits an Otsu threshold matching technique. The current and reference frames are divided into square blocks of a particular size, and a pixel histogram is generated for each block. Following Otsu's method, every histogram is assumed to be bimodal, i.e., the pixel distribution can be divided into two groups, and the pixel value that minimizes the within-group variance is taken as the Otsu threshold. After the Otsu threshold of a block in the current frame is found, the same procedure is performed on the co-located block in the reference frame. If the difference between the two Otsu thresholds is larger than a predefined threshold, a scene change between those two blocks is detected.
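
A compact Python sketch of this block-wise test; the block size and the decision threshold T are assumed parameters rather than values from the paper.

```python
import numpy as np

def otsu_threshold(block):
    """Gray level whose split minimizes the within-group variance."""
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_wgv = 0, np.inf
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        # Weighted sum of the two group variances.
        wgv = (((levels[:t] - mu0) ** 2 * p[:t]).sum() +
               ((levels[t:] - mu1) ** 2 * p[t:]).sum())
        if wgv < best_wgv:
            best_t, best_wgv = t, wgv
    return best_t

def scene_changed(curr, ref, block=32, T=30):
    """Compare Otsu thresholds of co-located blocks in the two frames."""
    h, w = curr.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tc = otsu_threshold(curr[y:y + block, x:x + block])
            tr = otsu_threshold(ref[y:y + block, x:x + block])
            if abs(tc - tr) > T:
                return True   # scene change detected for this block pair
    return False
```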

An Adaptive Algorithm for the Quantization Step Size Control of MPEG-2

  • Cho, Nam-Ik
    • Journal of Electrical Engineering and Information Science
    • /
    • v.2 no.6
    • /
    • pp.138-145
    • /
    • 1997
  • This paper proposes an adaptive algorithm for the quantization step size control of MPEG-2, using information obtained from the previously encoded picture. Before the DCT coefficients are quantized, the reconstruction error of each macroblock (MB) is predicted from the previous frame. For the prediction of the error of the current MB, a block of MB size in the previous frame is chosen by use of the motion vector. Since the original and reconstructed images of the previous frame are both available in the encoder, we can calculate the reconstruction error of this block. This error is taken as the expected error of the current MB if it were quantized with the same step size and bit rate. The error of each MB is compared with the average over all MBs: if it is larger than the average, a smaller step size is assigned to the MB, and vice versa. As a result, the error distribution of the MBs is more concentrated around the average, giving lower variance and improved image quality. Especially for low bit-rate applications, the proposed algorithm gives a much smaller error variance and higher PSNR than TM5 (test model 5).
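
A rough Python sketch of the step-size adaptation rule; the power-law mapping from predicted error to step size, the clip range, and the base step are illustrative assumptions, not the paper's exact control law.

```python
import numpy as np

def adapt_step_sizes(pred_errors, base=16.0, strength=0.5, lo=2, hi=31):
    """pred_errors[i]: reconstruction-error variance predicted for
    macroblock i from the motion-compensated previous-frame block."""
    avg = pred_errors.mean()
    # Larger-than-average predicted error -> smaller step size (and
    # vice versa), concentrating the error distribution at the average.
    q = base * (avg / np.maximum(pred_errors, 1e-9)) ** strength
    return np.clip(np.rint(q), lo, hi).astype(int)

errors = np.array([10.0, 40.0, 25.0, 5.0])  # per-MB predicted errors
print(adapt_step_sizes(errors))             # -> [23 11 14 31]
```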

Optimal Designs of Complete Diallel Crosses

  • Park, Kuey-Chung
    • International Journal of Reliability and Applications
    • /
    • v.2 no.2
    • /
    • pp.131-135
    • /
    • 2001
  • Two general methods of construction leading to several series of universally optimal block designs for complete diallel crosses are provided in this paper. A method of constructing variance-balanced designs is also given.

The Block Decorrelation Method for Integer Ambiguity Resolution of GPS Carrier Phase Measurements (GPS 반송파 위상관측의 미지정수해를 위한 블록 비상관화 방법)

  • Tran, Binh Quoc;Lim, Sam-Sung
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.30 no.8
    • /
    • pp.78-86
    • /
    • 2002
  • GPS carrier phase measurements include integer ambiguities, and a decorrelation process on the variance-covariance matrix is necessary to resolve these ambiguities efficiently. In this paper, we introduce a new method for ambiguity decorrelation. This method divides the variance-covariance matrix into four smaller blocks and decorrelates them separately. The decorrelation of each block is processed recursively, so that the result of a previous step is not affected by the next step. A few numerical examples chosen at random show that this method is better than or comparable to other decorrelation methods, while it is relatively faster because the computations are performed on small blocks of the variance-covariance matrix.
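
A speculative Python sketch (assumptions throughout, not the authors' algorithm) of the recursive block-wise idea: split the variance-covariance matrix into four blocks, decorrelate the two diagonal blocks recursively, then reduce the cross-block correlation with a rounded, integer-preserving Gauss transformation. Practical ambiguity-decorrelation methods iterate until convergence; this is a single pass.

```python
import numpy as np

def block_decorrelate(Q):
    """Return an integer matrix Z (|det Z| = 1) such that Z Q Z^T is
    less correlated than Q (single pass, illustrative only)."""
    n = Q.shape[0]
    if n == 1:
        return np.eye(1)
    if n == 2:
        mu = np.rint(Q[1, 0] / Q[0, 0])     # rounded Gauss coefficient
        return np.array([[1.0, 0.0], [-mu, 1.0]])
    h = n // 2
    Za = block_decorrelate(Q[:h, :h])       # upper-left block
    Zb = block_decorrelate(Q[h:, h:])       # lower-right block
    Z = np.zeros((n, n))
    Z[:h, :h], Z[h:, h:] = Za, Zb
    Q2 = Z @ Q @ Z.T
    # Rounded Gauss transform to shrink the off-diagonal block.
    C = np.rint(Q2[h:, :h] @ np.linalg.inv(Q2[:h, :h]))
    G = np.eye(n)
    G[h:, :h] = -C
    return G @ Z

def max_offdiag_corr(M):
    d = np.sqrt(np.diag(M))
    C = M / np.outer(d, d)
    return np.abs(C - np.diag(np.diag(C))).max()

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
Q = A @ A.T                                  # a random vc-matrix
Z = block_decorrelate(Q)
# Largest off-diagonal correlation before and after (usually reduced).
print(max_offdiag_corr(Q), "->", max_offdiag_corr(Z @ Q @ Z.T))
```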

Statistical Design of Experiments and Analysis: Hierarchical Variance Components and Wafer-Level Uniformity on Gate Poly-Silicon Critical Dimension (통계적 실험계획 및 분석: Gate Poly-Silicon의 Critical Dimension에 대한 계층적 분산 구성요소 및 웨이퍼 수준 균일성)

  • Park, Sung-min;Kim, Byeong-yun;Lee, Jeong-in
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.29 no.2
    • /
    • pp.179-189
    • /
    • 2003
  • Gate poly-silicon critical dimension is a prime characteristic of a metal-oxide-semiconductor field effect transistor. Achieving uniformity of the gate poly-silicon critical dimension is important so that a semiconductor device has acceptable electrical test characteristics and a semiconductor wafer fabrication process attains a competitive net-die-per-wafer yield. However, the complexity associated with a semiconductor wafer fabrication process entails hierarchical variance components in gate poly-silicon critical dimension according to run-to-run, wafer-to-wafer, and even die-to-die production unit changes. Specifically, estimates of the hierarchical variance components are required not only for disclosing dominant sources of the variation but also for testing the wafer-level uniformity. In this paper, two experimental designs, a two-stage nested design and a randomized complete block design, are considered in order to estimate the hierarchical variance components. Since gate poly-silicon critical dimensions are collected from fixed die positions within wafers, a factor representing die positions can be regarded as fixed in linear statistical models for the designs. In this context, the two-stage nested design also checks the wafer-level uniformity taking all sampled runs into account. In more detail, using variance estimates derived from the randomized complete block designs, Duncan's multiple range test examines the wafer-level uniformity for each run. Consequently, the framework presented in this study could provide guidelines to practitioners on estimating the hierarchical variance components and testing the wafer-level uniformity in parallel for any characteristics of concern in semiconductor wafer fabrication processes. The statistical analysis is illustrated on an experimental dataset from a real pilot semiconductor wafer fabrication process.
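
As a concrete illustration of the first design, here is a self-contained Python sketch that estimates run-to-run and wafer-within-run variance components from a balanced two-stage nested design via the ANOVA (method-of-moments) estimators, on simulated data; the fixed die-position factor and Duncan's multiple range test are omitted for brevity, and all layout sizes are assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
runs, wafers, dies = 6, 4, 5                    # balanced nested layout
sd_run, sd_wafer, sd_err = 2.0, 1.0, 0.5        # simulated components

# Simulated critical dimensions: run + wafer(run) + die-level error.
y = (rng.normal(0, sd_run,   (runs, 1, 1)) +
     rng.normal(0, sd_wafer, (runs, wafers, 1)) +
     rng.normal(0, sd_err,   (runs, wafers, dies)) + 180.0)

grand = y.mean()
run_means = y.mean(axis=(1, 2))
wafer_means = y.mean(axis=2)

# Mean squares for the two-stage nested ANOVA.
ms_run = wafers * dies * ((run_means - grand) ** 2).sum() / (runs - 1)
ms_wafer = (dies * ((wafer_means - run_means[:, None]) ** 2).sum()
            / (runs * (wafers - 1)))
ms_err = (((y - wafer_means[..., None]) ** 2).sum()
          / (runs * wafers * (dies - 1)))

# Method-of-moments estimates from the expected mean squares.
var_err = ms_err
var_wafer = (ms_wafer - ms_err) / dies
var_run = (ms_run - ms_wafer) / (wafers * dies)
print(var_run, var_wafer, var_err)   # true values: 4.0, 1.0, 0.25
```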

An Adaptive Garbage Collection Policy for NAND Flash Memory (낸드 플래시 메모리를 위한 적응형 가비지 컬렉션 정책)

  • Han, Gyu-Tae;Kim, Sung-Jo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.5
    • /
    • pp.322-330
    • /
    • 2009
  • In order to utilize NAND flash memory as a storage medium, which does not allow update-in-place and limits the number of block erase operations, various garbage collection policies supporting wear-leveling have been investigated. Conventional garbage collection policies require a cleaning-index calculation over all blocks to choose the block to be garbage-collected whenever garbage collection is required, which degrades system performance. This paper proposes a garbage collection policy that supports wear-leveling using a threshold on the variance of the erase counts of all blocks, together with their maximum erase count, without calculating a cleaning-index. During garbage collection, if the variance is less than the threshold, the erase cost is minimized by using the greedy policy; if the variance is larger than the threshold, wear-leveling is achieved by excluding the block with the largest erase count from the erase-target blocks. The proposed scheme shows that the standard deviation approaches zero as the erase counts of the blocks approach their upper limit, and the measured speed of garbage collection is two times faster than that of conventional schemes.
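
A minimal Python sketch of the victim-selection rule as described; treating the valid-page count as the greedy cost and the particular threshold value are assumptions.

```python
def select_victim(blocks, threshold):
    """blocks: dicts with 'erase_count' and 'valid_pages' fields."""
    counts = [b["erase_count"] for b in blocks]
    mean = sum(counts) / len(counts)
    variance = sum((c - mean) ** 2 for c in counts) / len(counts)
    candidates = blocks
    if variance > threshold:
        # Wear-leveling mode: exclude the most-worn block.
        worst = max(counts)
        candidates = [b for b in blocks if b["erase_count"] < worst]
    # Greedy policy: reclaim the block with the fewest valid pages.
    return min(candidates, key=lambda b: b["valid_pages"])

blocks = [{"erase_count": 10, "valid_pages": 3},
          {"erase_count": 90, "valid_pages": 1},
          {"erase_count": 12, "valid_pages": 2}]
# Variance is below this threshold, so the plain greedy choice
# (fewest valid pages) wins despite the high erase count.
print(select_victim(blocks, threshold=2000.0))
```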

A complementary study on analysis of simulation results using statistical models (통계모형을 이용하여 모의실험 결과 분석하기에 대한 보완연구)

  • Kim, Ji-Hyun;Kim, Bongseong
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.4
    • /
    • pp.569-577
    • /
    • 2022
  • Simulation studies are often conducted when it is difficult to compare the performance of nonparametric estimators theoretically. Kim and Kim (2021) showed that more systematic and accurate comparisons can be made by analyzing the simulation results with a regression model. This study is a complement to Kim and Kim (2021). In the variance-covariance matrix for the error term of the regression model, the previous study considered only heteroscedasticity and ignored covariance. When covariance is considered together with heteroscedasticity, the variance-covariance matrix becomes block diagonal. In this study, a method of estimating the block diagonal variance-covariance matrix and using it in the analysis is presented. This allows more pairs of estimators with significant performance differences to be identified while maintaining the nominal confidence level.
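
A schematic Python sketch of generalized least squares with a block diagonal error covariance, assuming observations are ordered replicate by replicate with m estimators per replicate; for brevity it pools a single within-replicate covariance matrix across all blocks, whereas heteroscedastic blocks across settings can be handled analogously.

```python
import numpy as np

def gls_block_diagonal(X, y, n_rep, m):
    """GLS with V = block_diag(Sigma, ..., Sigma), Sigma estimated
    from OLS residuals; rows ordered replicate by replicate."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    R = (y - X @ beta_ols).reshape(n_rep, m)   # residuals per replicate
    Sigma = R.T @ R / n_rep                    # pooled within-block cov
    S_inv = np.linalg.inv(Sigma)
    p = X.shape[1]
    XtVX, XtVy = np.zeros((p, p)), np.zeros(p)
    for i in range(n_rep):                     # accumulate block by block
        Xi = X[i * m:(i + 1) * m]
        yi = y[i * m:(i + 1) * m]
        XtVX += Xi.T @ S_inv @ Xi
        XtVy += Xi.T @ S_inv @ yi
    return np.linalg.solve(XtVX, XtVy)         # GLS estimate of beta
```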