• Title/Summary/Keyword: jackknife resampling

8 search results

Comparison of EM with Jackknife Standard Errors and Multiple Imputation Standard Errors

  • Kang, Shin-Soo
    • Journal of the Korean Data and Information Science Society, v.16 no.4, pp.1079-1086, 2005
  • Most discussions of single imputation methods and the EM algorithm concern point estimation of population quantities in the presence of missing values. A second concern is how to obtain standard errors for the point estimates computed from data filled in by single imputation or the EM algorithm. Here we focus on estimating standard errors that incorporate the additional uncertainty due to nonresponse. Two general approaches to accounting for this uncertainty are considered: the jackknife, a resampling method, and multiple imputation (MI). The two approaches are reviewed and compared through simulation studies.
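As a concrete illustration of the jackknife approach to standard errors described above, here is a minimal delete-one jackknife sketch in Python (not the authors' code; the statistic and data are hypothetical stand-ins):

```python
import numpy as np

def jackknife_se(x, stat=np.mean):
    """Delete-one jackknife standard error of a statistic."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Recompute the statistic with each observation left out in turn
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

rng = np.random.default_rng(0)
filled_in = rng.normal(10.0, 2.0, size=200)  # stand-in for filled-in data
print(jackknife_se(filled_in))
```

For the sample mean the jackknife SE coincides with the usual s/√n; for statistics computed from singly imputed data it understates the true uncertainty unless the imputation is repeated inside each leave-one-out replicate, which is exactly the issue the comparison with MI addresses.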


Resampling-based Test of Hypothesis in L1-Regression

  • Kim, Bu-Yong
    • Communications for Statistical Applications and Methods, v.11 no.3, pp.643-655, 2004
  • The L$_1$-estimator in the linear regression model is widely recognized for its superior robustness in the presence of vertical outliers. While L$_1$-estimation procedures and algorithms have been developed quite well, less progress has been made on hypothesis testing in multiple L$_1$-regression. This article suggests computer-intensive resampling approaches, the jackknife and bootstrap methods, to estimate the variance of the L$_1$-estimator and the scale parameter required to compute the test statistics. Monte Carlo simulation studies are performed to measure the power of the tests in small samples. The simulation results indicate that the bootstrap estimation method is the most powerful when employed with the likelihood ratio test.
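A rough sketch of the pairs-bootstrap variance estimate for an L$_1$ (least absolute deviations) fit; the IRLS solver below is a generic illustration, not the algorithm used in the paper:

```python
import numpy as np

def lad_fit(X, y, iters=50, eps=1e-6):
    """L1 (least absolute deviations) fit via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = np.abs(y - X @ beta)
        w = 1.0 / np.maximum(r, eps)       # downweight large residuals
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta

def bootstrap_var(X, y, B=500, seed=0):
    """Pairs-bootstrap variance of the L1 coefficient estimates."""
    rng = np.random.default_rng(seed)
    n = len(y)
    reps = np.empty((B, X.shape[1]))
    for b in range(B):
        idx = rng.integers(0, n, n)        # resample (x, y) pairs with replacement
        reps[b] = lad_fit(X[idx], y[idx])
    return reps.var(axis=0, ddof=1)
```

The resulting variance estimate can then be plugged into a Wald-type or likelihood-ratio test statistic, as in the article's comparison.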

A comparative study of the Gini coefficient estimators based on the regression approach

  • Mirzaei, Shahryar;Borzadaran, Gholam Reza Mohtashami;Amini, Mohammad;Jabbari, Hadi
    • Communications for Statistical Applications and Methods, v.24 no.4, pp.339-351, 2017
  • Resampling approaches were the first techniques employed to compute a variance for the Gini coefficient; however, many authors have shown that the Gini coefficient and its corresponding variance can also be obtained from a regression model. Despite the simplicity of the regression approach for computing a standard error of the Gini coefficient, the use of the proposed regression model has been challenging in economics. In this paper, we therefore present a comparative study of the regression approach and resampling techniques. The regression method is shown to overestimate the standard error of the Gini index. The simulations show that the Gini estimator based on the modified regression model is also consistent and asymptotically normal, with less divergence from the normal distribution than the resampling techniques.
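For reference, a resampling baseline of the kind the paper compares against: a delete-one jackknife standard error for the Gini coefficient (an illustrative numpy sketch, not the authors' implementation):

```python
import numpy as np

def gini(x):
    """Gini coefficient via the sorted-index formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

def gini_jackknife_se(x):
    """Delete-one jackknife standard error of the Gini coefficient."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    loo = np.array([gini(np.delete(x, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
```

A regression-based alternative replaces the leave-one-out loop with the standard error of a slope in an auxiliary regression, which is the approach whose over-estimation the paper documents.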

Jensen's Alpha Estimation Models in Capital Asset Pricing Model

  • Phuoc, Le Tan
    • The Journal of Asian Finance, Economics and Business, v.5 no.3, pp.19-29, 2018
  • This research examined alternatives for estimating Jensen's alpha (α) in the Capital Asset Pricing Model, discussed by Treynor (1961), Sharpe (1964), and Lintner (1965), using the robust maximum likelihood type M-estimator (MM estimator) and a Bayes estimator with conjugate prior. In the finance literature and in practice, alpha has often been estimated with the ordinary least squares (OLS) regression method on monthly return data. A sample of 50 securities was randomly selected from the S&P 500 index, and their daily and monthly returns were collected over the last five years. This research showed that the robust MM estimator performed considerably better than the OLS and Bayes estimators in terms of efficiency, while the Bayes estimator did not outperform the OLS estimator as expected. Interestingly, we also found that daily return data gave more accurate alpha estimates than monthly return data for all three estimators (MM, OLS, and Bayes). We also proposed an alternative market-efficiency test with the hypothesis H0: α = 0 and showed that the S&P 500 index is efficient, but not perfectly so. More importantly, the findings above were checked against and validated by jackknife resampling results.
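A hedged sketch of how a jackknife check of an OLS alpha might look (illustrative only; the return series are made up, and this is not the paper's procedure verbatim):

```python
import numpy as np

def ols_alpha(rx, rm):
    """Jensen's alpha: intercept of the excess-return regression rx = a + b*rm + e."""
    X = np.column_stack([np.ones_like(rm), rm])
    coef, *_ = np.linalg.lstsq(X, rx, rcond=None)
    return coef[0]

def jackknife_alpha(rx, rm):
    """Delete-one jackknife estimate and standard error of alpha."""
    n = len(rx)
    loo = np.array([ols_alpha(np.delete(rx, i), np.delete(rm, i))
                    for i in range(n)])
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    return loo.mean(), se
```

A two-sided test of H0: α = 0 then compares the full-sample alpha to the jackknife standard error.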

Bootstrapping Regression Residuals

  • Imon, A.H.M. Rahmatullah;Ali, M. Masoom
    • Journal of the Korean Data and Information Science Society, v.16 no.3, pp.665-682, 2005
  • The sample-reuse bootstrap technique has attracted both applied and theoretical statisticians since its inception. In recent years a good deal of attention has been focused on applications of bootstrap methods in regression analysis. These methods are easy to apply, but accurate computation depends heavily on high-speed computers and demands rigorous mathematical justification of validity. It is now evident that the presence of multiple unusual observations can do a great deal of damage to the inferential procedure, and we suspect that bootstrap methods may not be free from this problem. We first present a few examples in support of our suspicion and then propose a new diagnostic-before-bootstrap method for regression. The usefulness of the newly proposed method is investigated through several well-known examples and a Monte Carlo simulation under a variety of error and leverage structures.
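The basic residual bootstrap for regression, which a diagnostic step would be applied before — a generic sketch, not the authors' diagnostic-before-bootstrap procedure:

```python
import numpy as np

def residual_bootstrap(X, y, B=1000, seed=0):
    """Residual bootstrap: refit OLS on responses rebuilt from resampled residuals."""
    rng = np.random.default_rng(seed)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    resid = resid - resid.mean()           # centre the residuals
    fitted = X @ beta_hat
    reps = np.empty((B, X.shape[1]))
    for b in range(B):
        y_star = fitted + rng.choice(resid, size=len(y), replace=True)
        reps[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return reps                            # B bootstrap coefficient vectors
```

Because the same outlying residual can be drawn many times, a single gross outlier contaminates many replicates; screening the residuals first is the motivation for a diagnostic-before-bootstrap approach.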


Application of Jackknife Method for Determination of Representative Probability Distribution of Annual Maximum Rainfall (연최대강우량의 대표확률분포형 결정을 위한 Jackknife기법의 적용)

  • Lee, Jae-Joon;Lee, Sang-Won;Kwak, Chang-Jae
    • Journal of Korea Water Resources Association, v.42 no.10, pp.857-866, 2009
  • In this study, the basic data consist of annual maximum rainfall at 56 stations in Korea with rainfall records of more than 30 years. The 14 probability distributions that have been widely used in hydrologic frequency analysis are applied to these data. The method of moments, the method of maximum likelihood, and the probability weighted moments method are used to estimate the parameters, and four tests (the chi-square test, Kolmogorov-Smirnov test, Cramer-von Mises test, and probability plot correlation coefficient (PPCC) test) are used to determine the goodness of fit of the probability distributions. This study emphasizes the need to consider the variability of the T-year event estimate in hydrologic frequency analysis and proposes a framework for evaluating probability distribution models. The variability (or estimation error) of the T-year event is used as a model evaluation criterion alongside three goodness-of-fit criteria (SLSC, MLL, and AIC), and the jackknife method plays an important role in estimating this variability. For the annual rainfall maxima at the 56 stations, the Gumbel distribution is regarded as the best among the probability distribution models with two or three parameters.
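A simplified illustration of the variability criterion: fit a Gumbel distribution by the method of moments and jackknife the T-year quantile (hypothetical data; the paper also uses maximum likelihood and probability weighted moments):

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def gumbel_t_year(x, T):
    """T-year event from a Gumbel fit by the method of moments."""
    scale = np.std(x, ddof=1) * np.sqrt(6.0) / np.pi
    loc = np.mean(x) - EULER_GAMMA * scale
    # Gumbel quantile at non-exceedance probability 1 - 1/T
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T))

def jackknife_t_year_var(x, T):
    """Delete-one jackknife variance of the T-year event estimate."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    loo = np.array([gumbel_t_year(np.delete(x, i), T) for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)
```

Repeating this for each candidate distribution gives the estimation-error criterion that complements the goodness-of-fit tests in the proposed framework.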

Practice of causal inference with the propensity of being zero or one: assessing the effect of arbitrary cutoffs of propensity scores

  • Kang, Joseph;Chan, Wendy;Kim, Mi-Ok;Steiner, Peter M.
    • Communications for Statistical Applications and Methods, v.23 no.1, pp.1-20, 2016
  • Causal inference methodologies have been developed over the past decade to estimate the unconfounded effect of an exposure under several key assumptions. These include, but are not limited to, the stable unit treatment value assumption, the strong ignorability of treatment assignment assumption, and the assumption that propensity scores be bounded away from zero and one (the positivity assumption). Of these, the first two have received much attention in the literature, yet the positivity assumption has been discussed in only a few recent papers. Propensity scores of zero or one are indicative of deterministic exposure, so causal effects cannot be defined for such subjects; they must be removed because no comparable comparison group can be found for them. In this paper, using currently available causal inference methods, we evaluate the effect of arbitrary cutoffs in the distribution of propensity scores and the impact of those decisions on bias and efficiency. We propose a tree-based method that performs well in terms of bias reduction when the definition of positivity is based on a single confounder. This method can be easily implemented using the statistical software program R, and R code for the studies is available online.
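A minimal sketch of the cutoff decision the paper evaluates: trim units with extreme estimated propensity scores and compute an inverse-probability-weighted effect on what remains (illustrative, in Python rather than the paper's R code; the cutoffs are arbitrary by construction):

```python
import numpy as np

def trim(ps, treat, y, lo=0.05, hi=0.95):
    """Keep only units whose estimated propensity lies in [lo, hi]."""
    keep = (ps >= lo) & (ps <= hi)
    return ps[keep], treat[keep], y[keep]

def ipw_ate(ps, treat, y):
    """Inverse-probability-weighted average treatment effect."""
    return np.mean(treat * y / ps) - np.mean((1 - treat) * y / (1 - ps))
```

Shifting lo and hi changes which subjects are declared incomparable and dropped, trading bias against efficiency; quantifying the consequences of that arbitrary choice is the point of the paper's evaluation.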

Reexamination of Estimating Beta Coefficient as a Risk Measure in CAPM

  • Phuoc, Le Tan;Kim, Kee S.;Su, Yingcai
    • The Journal of Asian Finance, Economics and Business, v.5 no.1, pp.11-16, 2018
  • This research examines alternative ways of estimating the coefficient of non-diversifiable risk, namely the beta coefficient, in the Capital Asset Pricing Model (CAPM) introduced by Sharpe (1964), an essential element of assessing the value of diverse assets. The non-parametric methods used in this research are the robust Least Trimmed Squares (LTS) estimator and the maximum likelihood type M-estimator (MM-estimator); the jackknife resampling technique is also employed to validate the results. In the finance literature and in common practice, these coefficients have often been estimated with the ordinary least squares (LS) regression method on monthly return data. The empirical results point out that the robust LTS and MM estimators performed much better than ordinary least squares in terms of efficiency for large-cap stocks trading actively in the United States markets. Interestingly, the empirical results also showed that daily return data give more accurate estimates than monthly return data in the LS, LTS, and MM regressions alike.
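A sketch of the robust Least Trimmed Squares idea via random elemental starts and concentration steps (a generic FAST-LTS-style illustration, not the paper's exact estimator or data):

```python
import numpy as np

def lts_fit(X, y, h=None, n_starts=50, seed=0):
    """Least Trimmed Squares: minimise the sum of the h smallest squared residuals."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    h = h if h is not None else (n + p + 1) // 2
    best_beta, best_obj = None, np.inf
    for _ in range(n_starts):
        # Fit an exact p-point start, then concentrate on the best-fitting h points
        idx = rng.choice(n, size=p, replace=False)
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        for _ in range(10):  # C-steps
            keep = np.argsort((y - X @ beta) ** 2)[:h]
            beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        obj = np.sort((y - X @ beta) ** 2)[:h].sum()
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta
```

Because only the h best-fitting observations enter the objective, a minority of gross outliers cannot pull the slope the way they do under ordinary least squares, which is the efficiency and robustness contrast the paper reports.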