• Title/Summary/Keyword: Statistical Error

Search Results: 1,756

An Overview of Bootstrapping Method Applicable to Survey Researches in Rehabilitation Science

  • Choi, Bong-sam
    • Physical Therapy Korea
    • /
    • v.23 no.2
    • /
    • pp.93-99
    • /
    • 2016
  • Background: Parametric statistical procedures are typically conducted under the assumption that the sample distribution is statistically identical to its population. In reality, investigators use inferential statistics to estimate parameters from the sample drawn, because population distributions are unknown. The uncertainty arising from limited data, such as a small sample size, is a challenge in most rehabilitation studies. Objects: The purpose of this study is to review the bootstrapping method as a way to overcome the shortcomings of limited sample size in rehabilitation studies. Methods: Articles were reviewed. Results: The bootstrapping method is a statistical procedure that permits iterative re-sampling with replacement from a sample when the population distribution is unknown. The procedure enhances the representativeness of the population being studied and determines estimates of the parameters when the sample size is too limited to generalize the study outcome to the target population. The bootstrapping method can overcome limitations such as type II error resulting from small sample sizes. An application to typical study data illustrated how to estimate a parameter from a small sample and reduce the uncertainty with appropriate confidence intervals and confidence levels. Conclusion: The bootstrapping method may be an effective statistical procedure for reducing the standard error of population parameter estimates when both acceptable confidence intervals and an acceptable confidence level (i.e., p=.05) are required.
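The re-sampling-with-replacement procedure described above can be sketched in a few lines. This is a minimal illustration, not the article's code: the sample values, the choice of the mean as the statistic, and the percentile method for the confidence interval are all assumptions.

```python
import random
import statistics

random.seed(42)

# Hypothetical small rehabilitation-study sample (illustrative values only)
sample = [52, 58, 61, 63, 66, 70, 71, 74, 78, 83]

def bootstrap_ci(data, stat=statistics.mean, n_boot=5000, alpha=0.05):
    """Percentile bootstrap CI: resample with replacement n_boot times,
    compute the statistic each time, and take the empirical quantiles."""
    boots = sorted(
        stat(random.choices(data, k=len(data))) for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

low, high = bootstrap_ci(sample)
print(f"mean = {statistics.mean(sample):.1f}, 95% CI = ({low:.1f}, {high:.1f})")
```

The resulting interval quantifies the uncertainty of the small-sample mean without any distributional assumption about the population.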

Discriminative Weight Training for a Statistical Model-Based Voice Activity Detection (통계적 모델 기반의 음성 검출기를 위한 변별적 가중치 학습)

  • Kang, Sang-Ick;Jo, Q-Haing;Park, Seung-Seop;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.26 no.5
    • /
    • pp.194-198
    • /
    • 2007
  • In this paper, we apply discriminative weight training to a statistical model-based voice activity detector (VAD). In our approach, the VAD decision rule is expressed as the geometric mean of optimally weighted likelihood ratios (LRs) based on a minimum classification error (MCE) method. This differs from previous work in that a different weight is assigned to each frequency bin, which is more realistic. According to the experimental results, the proposed approach is effective for statistical model-based VAD using the LR test.
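The decision rule described above, a geometric mean of per-bin weighted likelihood ratios, can be sketched as follows. The likelihood-ratio values and weights below are illustrative placeholders; in the paper the weights come from MCE training, which is not reproduced here.

```python
import math

def vad_decision(likelihood_ratios, weights, threshold=0.0):
    """Decide speech-present if the weighted geometric mean of the
    per-frequency-bin likelihood ratios exceeds the threshold.
    In the log domain this is a weighted average of log-LRs."""
    k = len(likelihood_ratios)
    score = sum(w * math.log(lr)
                for w, lr in zip(weights, likelihood_ratios)) / k
    return score > threshold, score

# Illustrative values: 4 frequency bins, weights assumed already trained
lrs = [2.0, 1.5, 0.8, 3.0]
weights = [1.2, 0.9, 0.5, 1.4]
speech, score = vad_decision(lrs, weights)
```

With uniform weights this reduces to the conventional geometric-mean LR test; the trained weights let reliable frequency bins dominate the decision.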

History of the Error and the Normal Distribution in the Mid Nineteenth Century (19세기 중반 오차와 정규분포의 역사)

  • Jo, Jae-Keun
    • Communications for Statistical Applications and Methods
    • /
    • v.15 no.5
    • /
    • pp.737-752
    • /
    • 2008
  • Around 1800, mathematicians combined the analysis of error and probability theory into error theory. After being developed by Gauss and Laplace, error theory was widely used in branches of natural science. Motivated by its successful applications in the natural sciences, scientists such as Adolphe Quetelet tried to incorporate error theory into social statistics. But there were considerable differences between social science and natural science. In this paper we discuss the issues raised at the time: the interpretation of the individual in society; the arguments against statistical methods; and the history of measures of diversity. From the successes and failures of the 19th century social statisticians, we can see how statistics became a science essential to both the natural and social sciences, and that the problems that were hard for 19th century social statisticians still matter today.

Methods and Sample Size Effect Evaluation for Wafer Level Statistical Bin Limits Determination with Poisson Distributions (포아송 분포를 가정한 Wafer 수준 Statistical Bin Limits 결정방법과 표본크기 효과에 대한 평가)

  • Park, Sung-Min;Kim, Young-Sig
    • IE interfaces
    • /
    • v.17 no.1
    • /
    • pp.1-12
    • /
    • 2004
  • In the modern semiconductor device manufacturing industry, statistical bin limits on wafer-level test bin data are used to minimize the value added to defective product as well as to protect end customers from potential quality and reliability excursions. Most wafer-level test bin data show skewed distributions. Using Monte Carlo simulation, this paper evaluates methods for determining statistical bin limits and the effect of sample size. In the simulation, wafer-level test bin data are assumed to follow the Poisson distribution, so typical shapes of the data distribution can be specified in terms of the distribution's parameter. This study examines three methods: 1) a percentile-based method; 2) data transformation; and 3) Poisson model fitting. The mean square error is adopted as the performance measure for each simulation scenario, and a case study is presented. Results show that the percentile- and transformation-based methods give more stable statistical bin limits for the real dataset. However, with highly skewed distributions, the transformation-based method should be used with caution. When the data are well fitted by a particular probability distribution, the model-fitting approach can be used. As for the sample size effect, the mean square error appears to decrease exponentially with sample size.
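The percentile-based method from the study above can be sketched on simulated Poisson bin counts. This is a schematic illustration, not the paper's simulation: the Poisson parameter (lam=4), the number of wafers, and the 99th percentile are all assumed values.

```python
import math
import random
import statistics

random.seed(0)

def poisson_sample(lam):
    """Draw one Poisson(lam) variate (Knuth's multiplication method)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def percentile_bin_limit(counts, pct=0.99):
    """Percentile-based statistical bin limit: the empirical
    pct-quantile of per-wafer fail-bin counts."""
    s = sorted(counts)
    return s[min(int(pct * len(s)), len(s) - 1)]

# 500 simulated wafers with Poisson-distributed fail-bin counts
counts = [poisson_sample(4.0) for _ in range(500)]
limit = percentile_bin_limit(counts)
```

Wafers whose bin count exceeds `limit` would be flagged as statistical excursions; repeating this over many simulated datasets is how the paper's mean-square-error comparison across methods would proceed.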

An Assessment of Statistical Validity of Articles Published in "Korean Journal of Oriental Medicine"-from 1995 to 2007 (한국한의학연구원 논문의 통계적 오류에 관한 연구)

  • Kang, Kyung-Won;Kim, No-Soo;Yoo, Jong-Hyang;Kang, Byung-Gab;Ko, Mi-Mi;Choi, Sun-Mi
    • Korean Journal of Oriental Medicine
    • /
    • v.14 no.2
    • /
    • pp.87-91
    • /
    • 2008
  • Background and Purpose: The purpose of this study was to investigate the statistical validity of previously reported articles that used statistical techniques such as the t-test and analysis of variance. Methods: To analyze the statistical procedures, 66 original articles using those statistical methods were selected from the "Korean Journal of Oriental Medicine (KJOM)" published from 1995 to 2007. Results: Twenty-one articles (32%) did not report correct p-values, 33 articles (50%) used the mean ± standard error (mean ± SE), and 11 articles (17%) used the mean ± standard deviation (mean ± SD). Fifty-two (95%) of the 55 articles that tested for normality made an error in describing the normal distribution. Seventeen articles misused the t-test and 12 articles did not carry out multiple comparisons. Conclusions: Training researchers in clinical statistics or involving statisticians in research design would reduce significant errors in the statistical interpretation of results.


The Economic Design of $\bar{x}$-S Chart Considering Measurement Error (측정오차를 고려한 $\bar{x}$-S 관리도의 경제적 설계)

  • 유영창;강창욱
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.23 no.61
    • /
    • pp.89-98
    • /
    • 2000
  • For statistical process control, process data are collected by a measurement system. The measurement system, however, may have instrument error and/or operator error. In the measured values of products, the total observed variance consists of the process variance and the variance due to measurement-system error. In this paper, we design a more practical $\bar{x}$-s control chart that accounts for estimated measurement error. The effects of measurement error on the expected total cost and the design parameters are investigated.
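The variance decomposition stated in the abstract (observed variance = process variance + measurement variance) can be made concrete with a small numeric sketch. The numbers, subgroup size, and target mean are illustrative assumptions; the paper's economic cost model is not reproduced.

```python
import math

# Observed variance = process variance + measurement-system variance
sigma_process = 2.0   # true process standard deviation (illustrative)
sigma_measure = 0.5   # measurement-error standard deviation (illustrative)
n = 5                 # subgroup size for the x-bar chart

sigma_total = math.sqrt(sigma_process**2 + sigma_measure**2)

# 3-sigma x-bar control limits around an assumed target mean of 100:
mean = 100.0
ucl = mean + 3 * sigma_total / math.sqrt(n)
lcl = mean - 3 * sigma_total / math.sqrt(n)
```

Because `sigma_total` exceeds `sigma_process`, limits computed from observed data are wider than the process alone warrants, which is why the design must account for the estimated measurement error.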


A Comparison Study on the Error Criteria in Nonparametric Regression Estimators

  • Chung, Sung-S.
    • Journal of the Korean Data and Information Science Society
    • /
    • v.11 no.2
    • /
    • pp.335-345
    • /
    • 2000
  • Most contexts use the classical norms on function spaces as error criteria. Since these norms are all based on vertical distances between curves, they can be quite inappropriate from a visual notion of distance. The visual errors of Marron and Tsybakov (1995) correspond more closely to "what the eye sees". A simulation is performed to compare the performance of regression smoothers in terms of MISE and visual error. It shows that visual error can be used as a candidate error criterion in kernel regression estimation.
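The classical vertical-distance criterion that the abstract contrasts with visual error can be made concrete by computing the integrated squared error of a kernel smoother against a known curve. Everything here is an assumption for illustration: the Nadaraya-Watson estimator with a Gaussian kernel, the sine target, the bandwidth, and the noise level.

```python
import math
import random

random.seed(7)

def nw_smoother(xs, ys, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    def fit(x):
        ws = [math.exp(-((x - xi) / h) ** 2 / 2) for xi in xs]
        return sum(w * y for w, y in zip(ws, ys)) / sum(ws)
    return fit

# Illustrative data: y = sin(x) + noise on [0, pi]
xs = [i * math.pi / 50 for i in range(51)]
ys = [math.sin(x) + random.gauss(0, 0.1) for x in xs]
fit = nw_smoother(xs, ys, h=0.3)

# Integrated squared error (a vertical-distance criterion) on a grid
grid = [i * math.pi / 200 for i in range(201)]
ise = sum((fit(x) - math.sin(x)) ** 2 for x in grid) * (math.pi / 200)
```

Visual error criteria would instead measure how far apart the two curves *look*, e.g. penalizing horizontal displacement of features, which a pure vertical norm like this ISE ignores.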


A New Nonparametric Method for Prediction Based on Mean Squared Relative Errors (평균제곱상대오차에 기반한 비모수적 예측)

  • Jeong, Seok-Oh;Shin, Key-Il
    • Communications for Statistical Applications and Methods
    • /
    • v.15 no.2
    • /
    • pp.255-264
    • /
    • 2008
  • It is common in practice to use the mean squared error (MSE) for prediction. Recently, Park and Shin (2005) and Jones et al. (2007) studied prediction based on the mean squared relative error (MSRE). We propose a new nonparametric method of prediction based on MSRE, replacing that of Jones et al. (2007), and provide a small simulation study that strongly supports the proposed method.
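The difference between the two criteria can be illustrated numerically: for a constant predictor, MSE is minimized by the mean, while MSRE, E[((Y-c)/Y)^2], is minimized by c = E[1/Y] / E[1/Y^2]. This sketch illustrates the criterion itself, not the paper's nonparametric estimator; the data values are illustrative.

```python
import statistics

y = [1.0, 2.0, 4.0, 8.0]  # illustrative positive observations

mse_best = statistics.mean(y)  # the MSE-optimal constant predictor

# MSRE-optimal constant predictor: E[1/Y] / E[1/Y^2]
inv = [1 / v for v in y]
inv2 = [1 / v**2 for v in y]
msre_best = statistics.mean(inv) / statistics.mean(inv2)

def msre(c, data):
    """Mean squared relative error of constant predictor c."""
    return statistics.mean(((v - c) / v) ** 2 for v in data)
```

On skewed positive data the MSRE-optimal predictor sits well below the mean, because relative error punishes over-prediction of small values much more than under-prediction of large ones.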

Sequential Shape Modification for Monotone Convex Function: L2 Monotonization and Uniform Convexification

  • Lim, Jo-Han;Lee, Sung-Im
    • Communications for Statistical Applications and Methods
    • /
    • v.15 no.5
    • /
    • pp.675-685
    • /
    • 2008
  • This paper studies two sequential procedures for estimating a monotone convex function using $L_2$ monotonization and uniform convexification: one, denoted FMSC, monotonizes the data first and then convexifies the monotone estimate; the other, denoted FCSM, first convexifies the data and then monotonizes the convex estimate. We show that the two shape modifiers do not commute, and hence neither do FMSC and FCSM. We compare them numerically in uniform error (UE) and integrated mean squared error (IMSE). The results show that FMSC has smaller uniform error (UE) and integrated mean squared error (IMSE) than FCSM.
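The FMSC sequence (monotonize first, then convexify) can be sketched with the pool-adjacent-violators algorithm for the $L_2$ monotonization and a greatest-convex-minorant step for the convexification. This is a schematic reconstruction, not the authors' code; the input data are illustrative and the design points are assumed equally spaced.

```python
def monotonize(y):
    """L2 monotonization via pool-adjacent-violators (PAVA)."""
    merged = []  # list of [block mean, block size]
    for v in y:
        merged.append([float(v), 1])
        while len(merged) > 1 and merged[-2][0] > merged[-1][0]:
            m2, n2 = merged.pop()
            m1, n1 = merged.pop()
            merged.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    out = []
    for m, n in merged:
        out.extend([m] * n)
    return out

def convexify(y):
    """Convexification: greatest convex minorant of (i, y_i) via a
    lower convex hull, linearly interpolated back onto the grid."""
    hull = []
    for x, v in enumerate(y):
        while len(hull) >= 2:
            (x1, v1), (x2, v2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or above the chord
            if (v2 - v1) * (x - x2) >= (v - v2) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((x, v))
    out, j = [], 0
    for x in range(len(y)):
        while hull[j + 1][0] < x:
            j += 1
        (x1, v1), (x2, v2) = hull[j], hull[j + 1]
        out.append(v1 + (v2 - v1) * (x - x1) / (x2 - x1))
    return out

data = [3, 1, 2, 5, 4, 8]       # illustrative noisy observations
mono = monotonize(data)          # first shape modifier
fmsc = convexify(mono)           # FMSC: convexify the monotone fit
```

Reversing the order (`monotonize(convexify(data))`, i.e. FCSM) generally yields a different curve, which is the non-commutativity the paper examines.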

PERFORMANCE EVALUATION VIA MONTE CARLO IMPORTANCE SAMPLING IN SINGLE USER DIGITAL COMMUNICATION SYSTEMS

  • Oh Man-Suk
    • Journal of the Korean Statistical Society
    • /
    • v.35 no.2
    • /
    • pp.157-166
    • /
    • 2006
  • This research proposes an efficient Monte Carlo algorithm for computing the error probability in high-performance digital communication systems. It characterizes the special features of the problem and suggests an importance sampling algorithm specially designed to handle it. The algorithm uses a shifted exponential density as the importance sampling density, and an adaptive way of choosing the rate and the origin of that density is shown. Instead of equal allocation, an intelligent allocation of the samples is proposed so that more samples are allocated to the more important parts of the error probability. The algorithm exploits the nested structure of the error space to avoid redundancy in estimating the probability. Applied to an example data set, the algorithm shows a great improvement in the accuracy of the error probability estimation.
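The shifted-exponential importance sampling idea can be illustrated on a simple Gaussian tail probability; the paper's communication-system error space, adaptive parameter selection, and sample allocation are not reproduced. The target P(X > 3) for standard normal X, the rate lam = 3, and the sample size are assumptions for illustration.

```python
import math
import random

random.seed(1)

def normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def tail_prob_is(t, lam, n=20000):
    """Estimate P(X > t), X ~ N(0,1), by importance sampling with a
    shifted exponential density g(x) = lam * exp(-lam * (x - t)), x > t.
    Every draw lands in the rare-error region, so each sample counts."""
    total = 0.0
    for _ in range(n):
        x = t + random.expovariate(lam)                  # sample from g
        w = normal_pdf(x) / (lam * math.exp(-lam * (x - t)))  # f/g weight
        total += w
    return total / n

est = tail_prob_is(3.0, 3.0)   # true value is about 1.35e-3
```

Plain Monte Carlo would need millions of draws to see enough exceedances of 3; shifting the sampling density onto the tail makes a few thousand weighted samples sufficient.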